url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
|---|---|---|---|
http://icpc.njust.edu.cn/Problem/Zju/1042/
|
# W's Cipher
Time Limit: Java: 2000 ms / Others: 2000 ms
Memory Limit: Java: 65536 KB / Others: 65536 KB
## Description
Weird Wally's Wireless Widgets, Inc. manufactures an eclectic assortment of small, wireless, network capable devices, ranging from dog collars, to pencils, to fishing bobbers. All these devices have very small memories. Encryption algorithms like Rijndael, the candidate for the Advanced Encryption Standard (AES) are demonstrably secure but they don't fit in such a tiny memory. In order to provide some security for transmissions to and from the devices, WWWW uses the following algorithm, which you are to implement.
Encrypting a message requires three integer keys, k1, k2, and k3. The letters [a-i] form one group, [j-r] a second group, and everything else ([s-z] and underscore) the third group. Within each group the letters are rotated left by ki positions in the message. Each group is rotated independently of the other two. Decrypting the message means doing a right rotation by ki positions within each group.
Consider the message the_quick_brown_fox encrypted with ki values of 2, 3 and 1. The encrypted string is _icuo_bfnwhoq_kxert. The figure below shows the decrypting right rotations for one character in each of the three character groups.
Looking at all the letters in the group [a-i] we see {i,c,b,f,h,e} appear at positions {2,3,7,8,11,17} within the encrypted message. After a right rotation of k1=2, these positions contain the letters {h,e,i,c,b,f}. The table below shows the intermediate strings that come from doing all the rotations in the first group, then all rotations in the second group, then all the rotations in the third group. Rotating letters in one group will not change any letters in any of the other groups.
All input strings contain only lowercase letters and underscores(_). Each string will be at most 80 characters long. The ki are all positive integers in the range 1-100.
Input consists of information for one or more encrypted messages. Each problem begins with one line containing k1, k2, and k3 followed by a line containing the encrypted message. The end of the input is signalled by a line with all key values of 0.
For each encrypted message, the output is a single line containing the decrypted string.
## Sample Input
2 3 1
_icuo_bfnwhoq_kxert
1 1 1
bcalmkyzx
3 7 4
wcb_mxfep_dorul_eov_qtkrhe_ozany_dgtoh_u_eji
2 4 3
cjvdksaltbmu
0 0 0
## Sample Output
the_quick_brown_fox
abcklmxyz
the_quick_brown_fox_jumped_over_the_lazy_dog
ajsbktcludmv
None
## Source
Mid-Central USA 2001
|
2018-08-16 16:31:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21576669812202454, "perplexity": 2193.9600570707157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221211126.22/warc/CC-MAIN-20180816152341-20180816172341-00304.warc.gz"}
|
https://macdailynews.com/2012/01/19/apple-reinvents-textbooks-with-ibooks-2-for-ipad/
|
# Apple reinvents textbooks with iBooks 2 for iPad
Apple today announced iBooks 2 for iPad, featuring iBooks textbooks, an entirely new kind of textbook that’s dynamic, engaging and truly interactive. iBooks textbooks offer iPad users gorgeous, fullscreen textbooks with interactive animations, diagrams, photos, videos, unrivaled navigation and much more. iBooks textbooks can be kept up to date, don’t weigh down a backpack and never have to be returned. Leading education services companies including Houghton Mifflin Harcourt, McGraw-Hill and Pearson will deliver educational titles on the iBookstore℠ with most priced at $14.99 or less, and with the new iBooks Author, a free authoring tool available today, anyone with a Mac can create stunning iBooks textbooks. “Education is deep in Apple’s DNA and iPad may be our most exciting education product yet. With 1.5 million iPads already in use in education institutions, including over 1,000 one-to-one deployments, iPad is rapidly being adopted by schools across the US and around the world,” said Philip Schiller, Apple’s senior vice president of Worldwide Marketing, in the press release. “Now with iBooks 2 for iPad, students have a more dynamic, engaging and truly interactive way to read and learn, using the device they already love.” The new iBooks 2 app is available today as a free download from the App Store. With support for great new features including gorgeous, fullscreen books, interactive 3D objects, diagrams, videos and photos, the iBooks 2 app will let students learn about the solar system or the physics of a skyscraper with amazing new interactive textbooks that come to life with just a tap or swipe of the finger. With its fast, fluid navigation, easy highlighting and note-taking, searching and definitions, plus lesson reviews and study cards, the new iBooks 2 app lets students study and learn in more efficient and effective ways than ever before. iBooks Author is also available today as a free download from the Mac App Store and lets anyone with a Mac create stunning iBooks textbooks, cookbooks, history books, picture books and more, and publish them to Apple’s iBookstore. Authors and publishers of any size can start creating with Apple-designed templates that feature a wide variety of page layouts. iBooks Author lets you add your own text and images by simply dragging and dropping, and with the Multi-Touch widgets you can easily add interactive photo galleries, movies, Keynote® presentations and 3D objects. Apple today also announced an all-new iTunes U app giving educators and students everything they need on their iPad, iPhone and iPod touch® to teach and take entire courses. With the new iTunes U app, students using iPads have access to the world’s largest catalog of free educational content, along with over 20,000 education apps at their fingertips and hundreds of thousands of books in the iBookstore that can be used in their school curriculum, such as novels for English or Social Studies.* The iTunes U app is available today as a free download from the App Store. *Some content is available only for iPad. Source: Apple Inc. Related articles: Apple unveils all-new iTunes U app for iPad, iPhone and iPod touch – January 19, 2012 MacDailyNews presents live coverage of Apple’s ‘Big Apple’ education event – January 19, 2012 ### 76 Comments 1. Jtcdesigns Score!! 2. breeze This will do wonders for education and usher in the new era of digital books. 1. Jersey_Trader A back pack full of school books costs more per student than an iPad and the digital iBooks. With BILLIONS in the bank, Apple can loan the money to the schools at ZERO interest over 5 or 10 years. What school would pass that up! That is cheeper than the book replacement cost already in next years school budget. 1. Jersey_Trader Consider this now for all those that believed Apple’s got nothing new to offer to take over another market. • Take what you already figured out (Keynote, Page, Numbers, …) and regroup it. • Make the development software a Mac application only for use on Apple only devices. • Host it on your existing billion dollar server farm with your existing iTunes system of distribution. • Release it. Take over that new market. • Repeat the above steps and take over another market. Just how many devices and services do you think Apple can release at the same time each year? Think bigger! 1. Another incredible market takeover and the stock market yawns. Apple provided a complete ecosystem and has 90% of the published text books on board to convert with a complete digital distribution and . . . the stock market yawns. 1. breeze Give it some time to sink in 😉 2. breeze Apple has historically: A. Donated hardware to schools and consortiums B. Discounted (subsidized) the educational institution channel C. Recycled older trade ins for educational institutions There’s the Apple dividend that will keep on giving… 3. JB Have downloaded the iBooks 2 app, and now downloading Biology. Exciting. 4. Joe the Moderate I know for what I’ve paid for my child’s books and study materials over the last two years I could have purchased anyone of the current iPad models. This will be great! 5. Peter Blood Another “D’Oh! Why didn’t we think of that?!” day for Ballmer T. Doofus. 1. RL I think the amazing thing that Apple does is make others say “Why didn’t I think of that” because Apple makes it so simple. Every parent or student has thought about it for years, with out really examining yet it seems that it is the implementation that is remarkable about this. I think only Apple could pull this off so eloquently since they have the complete ecosystem. Also apple being apple has the clout and credibility to let the publishers trust them. This is such a huge win for education. Now parents and kids who want to get a head start on subjects will have those tools available. Also a text book that never goes out of date is huge and cuts out all of the waste and expense that goes into releasing a new edition every 3 years. The publishers will make more money by selling each student (school district) a book rather than one book for one student over three years. I wonder if you can sell your book after the school year? 2. Russ Just wait for the new Microsoft book store coming out in six months. The approximate time it takes for another Ballmer-led reverse-innovation project cycle to complete. 6. Spellman Damn; went to install iBooks Author on my Snow Leopard-running MacBook Pro and it’s Lion-only. 1. Bunsen Honeydew Rush to download iBooks Author… Late the the “Lion” party… What did you expect? 7. bob the stalker with this intiative, the burden for purchasing textbooks seems to be shifting from the school to the student. This will free up funds in education budgets for actual teaching. that strikes me a good thing. Also, schools can move away from the Texas version of textbooks which include some psuedo-science as determined by their unqiue state board of education. 1. Arnold Ziffel Unique State Board of Education is right – our schools make the Flat Earth Society possible. 1. 8^þ Perhaps Virginia’s governor Berkeley was right. He “thanked God [they had] no free schools nor printing,” and hoped that “we shall not have any of these for hundreds of years, for learning has brought disobedience, heresy, and sects into the world, and printing has divulged them with libels against the best government. God keep us from both!” 😆 8. 8^þ Sigh. It seems the only way to create math formulas in iBooks Author is by inserting them as static images from another program. iBooks Author strips out formulas from imported MS Word documents. Oh, well. That’s really disappointing for me: I was just starting to write a new “Intro to Computational Physics” book and was really hoping… It’s back to PDFs for now. 1. squiggles couldn’t you do that with a widget? 1. 8^þ That’s an interesting thought, thanks. I’m also going to try using MathJax. 2. 8^þ One somewhat kludgy work-around is to use LaTeXiT and drag/drop a PDF of a formula into the iBook document. This may work for ‘displayed math’, but I don’t think it will handle ‘inline math’. 2. janeshepard There has got to be some way to work with mathematical typography. It’s essential. Maybe it’s still being worked on? By the way, what’s F. Kantor’s “Information Mechanics” all about? I have a copy of that book but his prose is impenetrable. 1. 8^þ My take on Kantor’s work is that he claims the universe is essentially a Turing machine. But the implications are problematic: – What about infinity? Can it exist in a Turing instantiation? – He allows for descriptive, rather than explanatory theories — we can describe what happens, but not why it happens. – Gödel’s incompleteness specifies that there are results that can’t be derived from within any specific system, in this case, in a specific Turing machine. (Remember, these are overly simplified thoughts.) 9. squiggles I haven’t been able to find a decent business text on generating TOST analses that properly account for such things as emerging markets, PBAJ, evolving technologies, TROUT and LARD. If someone doesn’t write one for iBooks 2, maybe I will! 1. janeshepard The LARD quotient has been covered by Steve Ballmer. 1. 8^þ 2. Bunsen Honeydew PBAJ is not a business subject – it’s culinary. (PB&J)(I’m hungry) 10. cascadians Today. Apple completely disrupts and reinvents Education. For all of us. Yes it will take a couple years for the full impact to hit but this changes everything for the better. THANK YOU APPLE AND STEVE! 11. TheConfuzed1 This is awesome! Now college students will be able to save some of their hard-earned cash that used to go toward text books, and put into better use buying the things they really want, like pizza, beer, and weed! LMAO!!!!! 12. J Maybe I’m missing something. It says most titles will be 15 bucks or less. Most textbooks are 100 . College level is usually 200 . Where is this 15 bucks coming from? I’ll believe it when I see it. 1. Damian75 I don’t think that cost is that surprising if you think about it. The cost to print a hard bound full color text book is fairly high plus printed books are heavy so shipping costs are hight. In digital format you remove those costs and you open up sales to a much larger audience being in the ibook store, at this price point you might sell to those who just want to learn new thing and not just to those who are required to buy for there class. 2. st Um, you’re failing to consider that those$200 books often get owned, over time, by several different students. You can’t trade in a $15 iBooks textbook after the semester is over. The secondary market, from which the publishers make$0, will no longer be flooded with used books. Their revenue doesn’t drop from $200 to$15 per student. Think about it.
1. janeshepard
Yes. You can be sure that the $15 price point was carefully calculated with your points in mind, to replace a disappearing revenue stream with another equal or better. This is how disruptions occur, by exhibiting a superior business model. The publishers are teaming with Apple because they see dollars replacing horse collars. 1. 8^þ But at$15, it’s going to take a lot of copies to recoup development costs. I wonder if this will push the publishers completely away from small market texts for advanced classes. Authors may end up working on their own, without the resources, in those areas. I’ve published texts with Addison-Wesley, Brooks/Cole, Thomson, and Wiley, and have *really* appreciated/used the resources they make available to authors.
1. janeshepard
Well, maybe the new, much broader distribution channel would sell enough copies. Take one of your texts. How many different schools bought them? Did Wiley or the others market them broadly? It seems as though the new channel would have more reach. As for author resources, wouldn’t the publishers be just as likely to provide resources? Editors would remain a premium resource, I would think.
I’m not suggesting you’re wrong, of course. I’m intrigued by the missing details of this completely formed system, and wondering if they’ve thought of everything, or if they’re sort of rolling the dice.
1. 8^þ
Wiley did a nice job of marketing, but my last book was a graduate text. The first year had a bunch of libraries buying it, so the numbers were up nicely. The second year, it’s down to 50 or so schools.
When I talked with an editor at a conference last week, he told me that for introductory texts a publisher won’t consider anything that isn’t projected to maintain sales of 25,000 copies/year.
On the other hand, *all* the publishers I’ve talked want to go to eTexts, they just don’t know how to produce them. And (most) authors don’t know how to write them. Learning a new paradigm is always uncomfortable…
Well, I’m sure they’ve thought of all the “known unknowns,” but they may have missed some of the “unknown unknowns.”
(And I’m not poking fun at Rumsfeld; I thought that his statement was incredibly insightful.)
13. iMaki
BAM!! Bring education to the student! Less gas emissions, less expense, less driving, more flexibility, more people off food stamps! Big step toward a revolution long overdue to help fix this broken country thru education. Hopefully man left wing radical teachers will be laid off and replaced thru more efficient technology!
1. 8^þ
Shame on you for attempting to bring politics into this worthwhile thread!
1. twilightmoon
1
Let’s leave the right and left nonsense out of this thread!
14. leodavinci1
Well it looks like I was wrong about Apple not releasing an authoring tool.
However, it doesn’t change my situation about getting back into publishing. iBook Author looked great… until I saw that Lion was required. Not doing Lion any time soon.
C’est la vie.
1. writingdevil
I don’t know your reasons for not wanting to use Lion, it’s not really that challenging, but from a student’s perspective, it’s ironic that someone paying tribute to davinci would shy away from possibly the greatest breakthrough for text delivery in education in decades, if not generations, because of not understanding/wanting to use a media allowing the breakthrough. But I do have a great uncle who refusesHe’s bright man and that’s his way. To each his own.
2. rsbell
Yeah, you should let the upgrade to the next OS completely shut you out of a revenue stream.
Good thinking.
3. Bearman
Wow! You’re no Davinci. Need to get a new name like sour puss.
|
2021-11-27 18:39:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17467646300792694, "perplexity": 4269.75356587747}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358208.31/warc/CC-MAIN-20211127163427-20211127193427-00477.warc.gz"}
|
https://lists.boost.org/Archives/boost/2001/10/18031.php
|
# Boost :
From: williamkempf_at_[hidden]
Date: 2001-10-04 11:24:14
--- In boost_at_y..., "David Abrahams" <david.abrahams_at_r...> wrote:
> ----- Original Message -----
> From: <williamkempf_at_h...>
>
>
>
> > There are very few projects currently using the Boost.Build
system,
> > so AFAIK the only one that failed to compile for you was
> > Boost.Threads. This was a minor oversite during the process of
> > moving Boost.Threads onto the trunk and I'll correct this in the
CVS
> > repository ASAP. In the mean time there are two ways to build
> > Boost.Threads available to you. Modify \$(BOOST_ROOT)/Jamfile to
> > include the following at the end:
> >
>
> I don't think that will work until I update the Build System.
Making this
> work (if you use <include>\$(BOOST_ROOT) in your subproject) depends
on
> fixing the "relative include" problem we've been discussing. Aww,
heck, it's
> been lingering so long and it's an easy hack; I'll try to fix that
this
> afternoon.
Strange, this works for me right out of the box.
> Note that this technique doesn't affect anything unless you're
trying to
> build from the top level.
>
> > Or, run Jam directly in the \$(BOOST_ROOT)/libs/thread/build
directory
> > instead of in \$(B00ST_ROOT).
> >
> > > Do you use SGI iostreams provided by STLPort or native
iostreams?
> > >
> > > > I was particularly interested in using the new
> > > > Boost.Threads. However, when I tried to manually subinclude
> > > > libs/thread/build in \$(BOOST_ROOT)/Jamfile, I get many
> > compilation
> > > > errors because \$(BOOST_ROOT)/boost/thread is not in my Include
> > > path..
> > > >
> > >
> > > Try modifying the jam file. After replacing \$(BOOST_ROOT)
> > > with ../../.. the includes paths were set up ok.
>
> That's what I'd expect, under the current situation.
>
> > This shouldn't be happening (and doesn't for me). Be sure to run
Jam
> > with -f??\boost\tools\build\allyourbase.jam.
>
> I'm really surprised to hear that. You can invoke a build from the
top level
> (\$(BOOST_ROOT)) and it works??
Yep.
Bill Kempf
|
2021-08-04 17:21:12
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8721616268157959, "perplexity": 7342.044007751359}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154878.27/warc/CC-MAIN-20210804142918-20210804172918-00710.warc.gz"}
|
https://math.stackexchange.com/questions/1867939/how-unique-is-e
|
# How unique is $e$?
Is the property of a function being its own derivative unique to $e^x$, or are there other functions with this property? My working for $e$ is that for any $y=a^x$, $ln(y)=x\ln a$, so $\frac{dy}{dx}=\ln(a)a^x$, which equals $a^x$ if and only if $a=e$.
Considering equations of different forms, for example $y=mx+c$ we get $\frac{dy}{dx}=m$ and $mx+c=m$ only when $m=0$ and $c=0$, so there is not solution other than $y=0$. For $y=x^a$, $\frac{dy}{dx}=ax^{a-1}$, which I think equals $x^a$ only when $a=x$ and therefore no solutions for a constant a exist other than the trivial $y=0$.
Is this property unique to equations of the form $y=a^x$, or do there exist other cases where it is true? I think this is possibly a question that could be answered through differential equations, although I am unfortunately not familiar with them yet!
• probably as unique as every other real number. – Jorge Fernández Hidalgo Jul 22 '16 at 21:09
• Yes $e^x$ is the only function that is its own derivative. – Gregory Grant Jul 22 '16 at 21:09
• @acernine Use the product rule on $ye^{-x}$ and you'll see that if $y'=y$ then the derivative of $ye^{-x}$ is $0$, making it constant, so $y$ must take the form $Ce^x$. – Erick Wong Jul 22 '16 at 21:11
• @GregoryGrant, well except for $ae^x$ :) – Pax Kivimae Jul 22 '16 at 21:16
• @user1717828 That's included in $ce^x$. – Gregory Grant Jul 23 '16 at 0:40
Assume that $f(x)$ is a function such that $f'(x)=f(x)$ for all $x\in\Bbb{R}$. Consider the quotient $g(x)=f(x)/e^x$. We can differentiate $$g'(x)=\frac{f'(x)e^x-f(x)D e^x}{(e^x)^2}=\frac{f(x)e^x-f(x)e^x}{(e^x)^2}=0.$$ By the mean value theorem it follows that $g(x)$ is a constant. QED.
• The same idea as in Jason's answer but without differential equations. – Jyrki Lahtonen Jul 22 '16 at 21:27
Consider the equation $y'=y$. Our goal is to solve for the function $y=f(x)$. Roughly speaking $$\frac{dy}{dx}=y \implies \frac{dy}{y}=dx \implies \int\frac{dy}{y}=\int dx \implies\ln(y)=x+C \implies y=e^{x+C}=Ae^x$$
for some constant $A$
• You have to figure out what $\frac d{dx}\ln(y)$ means first, to do that. Which usually requires knowing $\frac d{dx}e^x=e^x$. Big circle of going nowhere. – Simply Beautiful Art Jul 22 '16 at 21:16
• @SimpleArt I dont think so, since its not a question of existence of $e^x$, but rather uniqueness. – Pax Kivimae Jul 22 '16 at 21:17
• @SimpleArt This is a slightly different fact. – JasonM Jul 22 '16 at 21:18
• This is standard, but the OP confesses unfamiliarity with differential equations and there is a bit of untidiness here in dealing with the $y=0$ and $y<0$ cases. – Erick Wong Jul 22 '16 at 21:21
• @ErickWong Ah, unfortunately I only skimmed the question so I didn't see that last part. Yes, I agree the $y \leq 0$ cases are untidy, but if one wants a general understanding, I think this argument suffices, even if one is unfamiliar with differential equations. I'll add some comments to explain. – JasonM Jul 22 '16 at 21:24
This may not be an answer you are looking for, but its a nice one to consider.
Consider $y=\cos(ix)-i\sin(ix)$.
You may find that:
$$\frac{dy}{dx}=-i\sin(ix)-i^2\cos(ix)=\cos(ix)-i\sin(ix)$$
Thus, $y'=y$ is satisfied. Since $y(0)=1$, $y'(0)=1$, $\dots$, then by Taylor's theorem, we have $e^x=\cos(ix)-i\sin(ix)$, or more commonly known as
$$e^{ix}=\cos(x)+i\sin(x)$$
Which is Euler's formula for complex exponents.
• This shows that $e^x$ is a solution, not that it is a unique solution. – Aditya Jul 23 '16 at 2:50
• @Aditya Not really meant to be an answer in the first place, more of an interesting fact. – Simply Beautiful Art Jul 23 '16 at 13:37
The equation $$\frac{\mathrm{d}}{\mathrm{d}x} f(x) = f(x)$$ is a linear (thus Lipschitz continuous), first-order ordinary differential equation on $\mathbb{R}$. By the Picard-Lindelöf theorem, such an equation has a unique solution for any initial condition of the form $$f(0) = y_0$$ with $y_0 \in \mathbb{R}$. In particular, for the condition $$f(0) = 1$$ the unique solution is $f = \exp$, so given that condition, $e \equiv \exp(1) = f(1)$ is unique.
For the general initial condition, you get, because the ODE is linear, that the solution is always $$f(x) = y_0 \cdot \exp(x).$$
|
2019-09-17 12:53:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8354745507240295, "perplexity": 221.19356293211322}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573071.65/warc/CC-MAIN-20190917121048-20190917143048-00096.warc.gz"}
|
https://math.stackexchange.com/questions/3028801/singular-and-eigen-values-properties
|
# Singular and eigen values properties…
Let $$A\in\mathcal {M}_n(\mathbb{R})$$, we will denote $$\lambda_{\max}(A)$$ the biggest eigenvalue of $$A$$ in absolute value, as for $$B\in\mathcal M_{m,n}(\mathbb{R})$$ we will denote $$\sigma_{\max}(B)$$ the highest singular value of $$B$$.
I have $$3$$ things to show from which I have shown 1 and for the last one I only have an idea.
$$1$$. with $$A$$ a symetric matrix show that $$\lambda_{\max}(A)= \max_{\|v\|=1}v^tAv$$
$$A=A^t\implies \max_{\|v\|=1}(v^tAv) =\max_{\|v\|=1}(v^tA^tv) =\max_{\|v\|=1}((Av)^tv) =\max_{\|v\|=1}((\lambda v)^tv) =\max_{\|v\|=1}(\lambda v^tv) =\max_{\|v\|=1}(\lambda \|v\|^2) =\max_{\|v\|=1}(\lambda) =\max(\lambda).$$
$$2$$. Show that $$\sigma_{\max}(B) = \sqrt{\lambda_{\max}(B^tB)}$$.
$$3$$. with $$C\in\mathcal M_n(\mathbb{R})\implies\lambda_{\max}(\frac{C+C^t}{2})\leq\sigma_{\max}(C)$$.
Here I think is something relating with AM-GM Mean right?
• For part 1 to make sense, the definition of $\lambda_{\max}(A)$ should be the biggest eigenvalue of $A$ not in absolute value. – angryavian Dec 6 '18 at 17:55
1. There are a few errors in your attempt. When you use $$Av = \lambda v$$ you are assuming $$v$$ is an eigenvector of $$A$$, even though the maximum is over all unit norm $$v$$ which may include non-eigenvectors. Also your attempt does not really use symmetry; note that you could have proceeded as $$v^t A v = v^t (\lambda v) = \lambda \|v\|^2$$ directly (disregarding the eigenvalue issue mentioned above). Symmetry is important here because of the spectral theorem. This may help you fix your proof.
2. I suppose you are defining singular values from the SVD? Then write $$B=U\Sigma V^t$$ and note $$B^t B = V \Sigma^t \Sigma V^t$$. Since $$\Sigma^t \Sigma$$ is a diagonal matrix, this is a diagonalization of $$B^t B$$, so you can write down the eigenvalues of $$B^t B$$ in terms of the singular values of $$B$$.
3. Let $$U\Sigma V^t$$ be the SVD of $$C$$. Note $$(C+C^t)/2$$ is symmetric. Using the first part, $$\lambda_{\max}(\frac{C+C^t}{2}) = \max_{\|v\|=1} v^t C v = \max_{\|v\|=1} v^t U \Sigma V^t v \le \max_{\|x\|=\|y\|=1} x^t \Sigma y = \sigma_{\max}(C),$$ where the inequality comes from the change of variables $$x=U^t v$$ and $$y = V^t v$$ and noting that orthogonal matrices are norm-preserving (i.e. $$\|x\|=\|v\|$$).
Edit:
As I mentioned in my comment, I think $$\lambda_{\max}$$ should be the largest eigenvalue, not the largest in absolute value.
1. Let $$UDU^t$$ be the eigendecomposition of $$A$$. Then $$\max_{\|v\|=1} v^t A v = \max_{\|v\|=1} v^t UDU^t v = \max_{\|w\| = 1} w^t D w = \lambda_{\max}(A)$$, where we have made the change of variables $$w = U^t v$$ and noted that orthogonal matrices are norm-preserving.
2. I already told you that the eigenvalues of $$B^t B$$ are the diagonal entries of the diagonal matrix $$\Sigma^t \Sigma$$.
• 1. Yeah I saw, okay to use the spectral theorem we have that: if $A$ is symetric then there is $Q$ and $D$ such that $A=QDQ^t$ then $A$ is orto-diagonalizable... but how do I use that here, since if I plug in $A$ in there I don't solve anything, can you be more explicit, also, for 2, yes, indeed it's about the SVD, how do I write down the eigenvalues of $B^tB$ in terms of singular values of $B$? – C. Cristi Dec 6 '18 at 17:51
• How did you make the note that the norm of $U^tv=1$ is the same? – C. Cristi Dec 6 '18 at 18:08
• Hey, why $\lambda_{max}(\frac {C+C^t}{2})=max_{\|v\|=1}v^tCv$ and not $=max_{\|v\|=1}v^t\frac{C+C^t}{2}v$? – C. Cristi Dec 6 '18 at 18:16
• Orthogonal matrix are norm-preserving because $\|Uv\|=\sqrt{\langle Uv,Uv\rangle}=\sqrt{\langle U^tv,Uv\rangle}=\sqrt{UU^t\langle v, v\rangle}= \sqrt{\langle v, v\rangle}$? – C. Cristi Dec 6 '18 at 18:22
|
2019-12-08 13:47:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 40, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9657140970230103, "perplexity": 168.62037620187704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540510336.29/warc/CC-MAIN-20191208122818-20191208150818-00531.warc.gz"}
|
https://msp.org/agt/2015/15-6/b01.xhtml
|
#### Volume 15, issue 6 (2015)
1 J Adámek, J Rosický, E M Vitale, Algebraic theories: A categorical introduction to general algebra, Cambridge Tracts in Mathematics 184, Cambridge Univ. Press (2011) MR2757312 2 N A Baas, B I Dundas, B Richter, J Rognes, Ring completion of rig categories, J. Reine Angew. Math. 674 (2013) 43 MR3010546 3 C Barwick, Multiplicative structures on algebraic $K$–theory, preprint (2013) arXiv:1304.4867 4 H S Bergsaker, Homotopy theory of presheaves of $\Gamma$–spaces, Homology, Homotopy Appl. 11 (2009) 35 MR2486652 5 A. Blumberg, D. Gepner, G. Tabuada, $K$–theory of endomorphisms via noncommutative motives, (2013) arXiv:1302.1214 6 A J Blumberg, D Gepner, G Tabuada, A universal characterization of higher algebraic $K$–theory, Geom. Topol. 17 (2013) 733 MR3070515 7 A J Blumberg, D Gepner, G Tabuada, Uniqueness of the multiplicative cyclotomic trace, Adv. Math. 260 (2014) 191 MR3209352 8 J M Boardman, R M Vogt, Homotopy invariant algebraic structures on topological spaces, Lecture Notes in Mathematics 347, Springer (1973) MR0420609 9 J Cranch, Algebraic theories and $(\infty,1)$–categories, PhD thesis, University of Sheffield (2010) arXiv:1011.3243 10 J Cranch, Algebraic theories, span diagrams, and commutative monoids in homotopy theory, preprint (2011) arXiv:1109.1598 11 A D Elmendorf, I Kriz, M A Mandell, J P May, Rings, modules, and algebras in stable homotopy theory, Mathematical Surveys and Monographs 47, Amer. Math. Soc. (1997) MR1417719 12 A D Elmendorf, M A Mandell, Rings, modules, and algebras in infinite loop space theory, Adv. Math. 205 (2006) 163 MR2254311 13 A D Elmendorf, M A Mandell, Permutative categories, multicategories and algebraic $K$–theory, Algebr. Geom. Topol. 9 (2009) 2391 MR2558315 14 M Hyland, J Power, Pseudo-commutative monads and pseudo-closed $2$–categories, J. Pure Appl. Algebra 175 (2002) 141 MR1935977 15 A Joyal, Notes on quasi-categories (2008) 16 M L Laplaza, Coherence for distributivity, from: "Coherence in categories" (editor S Mac Lane), Lecture Notes in Math. 281, Springer (1972) 29 MR0335598 17 M L Laplaza, A new result of coherence for distributivity, from: "Coherence in categories" (editor S Mac Lane), Lecture Notes in Math. 281, Springer (1972) 214 MR0335599 18 F W Lawvere, Functorial semantics of algebraic theories, Proc. Nat. Acad. Sci. USA 50 (1963) 869 MR0158921 19 J Lurie, Higher topos theory, Annals of Mathematics Studies 170, Princeton Univ. Press (2009) MR2522659 20 J Lurie, Higher algebra (2014) 21 J P May, The geometry of iterated loop spaces, Lecture Notes in Mathematics 271, Springer (1972) MR0420610 22 J P May, Multiplicative infinite loop space theory, J. Pure Appl. Algebra 26 (1982) 1 MR669843 23 J P May, The construction of $E_\infty$ ring spaces from bipermutative categories, from: "New topological contexts for Galois theory and algebraic geometry" (editors A Baker, B Richter), Geom. Topol. Monogr. 16 (2009) 283 MR2544392 24 J P May, What are $E_\infty$ ring spaces good for?, from: "New topological contexts for Galois theory and algebraic geometry" (editors A Baker, B Richter), Geom. Topol. Monogr. 16 (2009) 331 MR2544393 25 J P May, What precisely are $E_\infty$ ring spaces and $E_\infty$ ring spectra?, from: "New topological contexts for Galois theory and algebraic geometry" (editors A Baker, B Richter), Geom. Topol. Monogr. 16 (2009) 215 MR2544391 26 V Schmitt, Tensor product for symmetric monoidal categories, preprint (2007) arXiv:0711.0324 27 S Schwede, Stable homotopical algebra and $\Gamma$–spaces, Math. Proc. Cambridge Philos. Soc. 126 (1999) 329 MR1670249 28 G Segal, Categories and cohomology theories, Topology 13 (1974) 293 MR0353298 29 R W Thomason, Beware the phony multiplication on Quillen's $\mathcal{A}^{-1}\mathcal{A}$, Proc. Amer. Math. Soc. 80 (1980) 569 MR587929 30 B Töen, G Vezzosi, Caractères de Chern, traces équivariantes et géométrie algébrique dérivée, preprint (2009) arXiv:0903.3292
|
2023-03-24 18:42:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7811908721923828, "perplexity": 3279.581525989616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945288.47/warc/CC-MAIN-20230324180032-20230324210032-00611.warc.gz"}
|
http://tex.stackexchange.com/questions/31205/wrapping-column-text-using-multicolumn-and-tabularx
|
# Wrapping column text using multicolumn and tabularx
I have two tables I am typesetting above each other as follows.
\documentclass[a4paper,11pt]{article}
\usepackage{tabularx}
\usepackage[left=2cm, right=2cm, bmargin=1.5cm]{geometry}
\begin{document}
\begin{tabularx}{\textwidth}{| X | X | l | l | l | l | l | l |}
\hline
& Cost Category & Year 1 & Year 2 & Year 3 & Year 4 & Year 5 & Total (Y1-5) \\
\hline
\noalign{\vspace{0.5cm}}
\hline
\multicolumn{7}{|l|}{Some text that goes on and on and on and on and on and on and on and on and on and on} & $100\%$\\
\hline
\end{tabularx}
\end{document}
How can I get the text in the bottom table to word wrap rather than push the column over the right hand edge of the page? Basically, I would like it to take on whatever size the top table has already set for the first 7 columns.
-
the width of the first seven column is \textwidth minus the last column and the \tabcolsep/\fboxrule: \textwidth-4\tabcolsep-\widthof{Total (Y1-5)}-2\fboxrule}. Without tabularx the rules are not taken into account.
\documentclass[a4paper,11pt]{article}
\usepackage{tabularx,calc}
\usepackage[left=2cm, right=2cm, bmargin=1.5cm]{geometry}
\begin{document}
\begin{tabularx}{\textwidth}{| X | X | l | l | l | l | l | l |}
\hline
& Cost Category & Year 1 & Year 2 & Year 3 & Year 4 & Year 5 & Total (Y1-5) \ \hline
\noalign{\vspace{0.5cm}}
\hline
\multicolumn{7}{|p{\textwidth-4\tabcolsep-\widthof{Total (Y1-5)}-2\fboxrule}|}{Some text that goes on and on and on and on and on and on and on and on and on and on} & $100\%$\ \hline
\end{tabularx}
\end{document}
Something like \multicolumn{7}{|X|}{ is possible but makes no sense. It uses the width of all 7 cells but not for the text itself.
-
Perfect, thanks. Is there a list of values such as \tabcolsep, \widthof, \fboxrule somewhere along with their descriptions? – Raphael Oct 11 '11 at 13:53
Read any LaTeX book or LaTeX indrocduction. You can out the current values in your document with \the\tabcolsep – Herbert Feb 9 at 10:38
|
2014-03-13 13:13:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7667384147644043, "perplexity": 830.4882410990199}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678674071/warc/CC-MAIN-20140313024434-00097-ip-10-183-142-35.ec2.internal.warc.gz"}
|
http://library.cirm-math.fr/listRecord.htm?list=link&xRecord=19240498146910686709
|
• D
F Nous contacter
0
# Documents 13F20 | enregistrements trouvés : 1
O
Sélection courante (0) : Tout sélectionner / Tout déselectionner
P Q
## Multi angle Local cohomology modules of a smooth $\mathbb{Z}-algebra$ have a finite number of associated primes Lyubeznik, Gennady (Auteur de la Conférence) | CIRM (Editeur )
Let $R$ be a commutative Noetherian ring that is a smooth $\mathbb{Z}-algebra$. For each ideal $a$ of $R$ and integer $k$, we prove that the local cohomology module $H^k_a(R)$ has finitely many associated prime ideals. This settles a crucial outstanding case of a conjecture of Lyubeznik asserting this finiteness for local cohomology modules of all regular rings.
##### Audience
Nuage de mots clefs ici
Z
|
2017-05-26 01:44:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8096088171005249, "perplexity": 2878.0186958103927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608622.82/warc/CC-MAIN-20170526013116-20170526033116-00171.warc.gz"}
|
https://brilliant.org/discussions/thread/downvoting-feature-on-brilliant/
|
# Downvoting feature on Brilliant.
So, here's my concern. I think that there should be some restrictions on the downvoting option in Brilliant.
Lately, I'm seeing a lot of my comments being downvoted as if someone has been personally targeting them.
I appreciate downvotes on my comments when I have made some mistake and the downvoter has provided clarification as to how my comment is wrong. I even upvote the downvoter's comment those times. But I really don't appreciate unjustified downvoting.
This nonsensical downvoting is demotivating to those who take their time to write comments here either showing their own approach to a problem (without posting an official solution) or for other purposes.
Seeing one such instance of a comment of mine being downvoted recently, I checked out all the comments I posted recently through my activity feed and noticed that almost 90% of my recently posted comments have been downvoted by one individual (which I believe is someone personally targeting me). I clearly don't see the reason for the downvote.
For example, in this comment, I posted the inductive proof of the claim that was used in my solution in that problem and I clearly don't understand the reason for the downvote on that.
Also, a very recent example is the following comment that I posted just about an hour ago. It was downvoted almost immediately after I posted it. I can't really see what's the problem with that comment.
Here are a few more examples: 1, 2, 3, 4, 5, some comments here and there are more.
Almost all my comments since the last 4 days have been downvoted by someone like this without any justification.
I'd like the downvoter to justify his downvotes because I'm very stupid and am not getting the reason for this nonsensical downvoting.
Note by Prasun Biswas
5 years, 11 months ago
This discussion board is a place to discuss our Daily Challenges and the math and science related to those challenges. Explanations are more than just a solution — they should explain the steps and thinking strategies that you used to obtain the solution. Comments should further the discussion of math and science.
When posting on Brilliant:
• Use the emojis to react to an explanation, whether you're congratulating a job well done , or just really confused .
• Ask specific questions about the challenge or the steps in somebody's explanation. Well-posed questions can add a lot to the discussion, but posting "I don't understand!" doesn't help anyone.
• Try to contribute something new to the discussion, whether it is an extension, generalization or other idea related to the challenge.
MarkdownAppears as
*italics* or _italics_ italics
**bold** or __bold__ bold
- bulleted- list
• bulleted
• list
1. numbered2. list
1. numbered
2. list
Note: you must add a full line of space before and after lists for them to show up correctly
paragraph 1paragraph 2
paragraph 1
paragraph 2
[example link](https://brilliant.org)example link
> This is a quote
This is a quote
# I indented these lines
# 4 spaces, and now they show
# up as a code block.
print "hello world"
# I indented these lines
# 4 spaces, and now they show
# up as a code block.
print "hello world"
MathAppears as
Remember to wrap math in $$ ... $$ or $ ... $ to ensure proper formatting.
2 \times 3 $2 \times 3$
2^{34} $2^{34}$
a_{i-1} $a_{i-1}$
\frac{2}{3} $\frac{2}{3}$
\sqrt{2} $\sqrt{2}$
\sum_{i=1}^3 $\sum_{i=1}^3$
\sin \theta $\sin \theta$
\boxed{123} $\boxed{123}$
Sort by:
We are invested in maintaining the healthy conducive community that has developed on Brilliant. When various functionalities start to be abused significantly, we will be proactive in dealing with them.
There are many reasons why someone would chose to downvote your comments - It doesn't necessarily mean that you are wrong. It is possible that a particular person doesn't like you, and so chose to target all of your comments.
Staff - 5 years, 11 months ago
TBH, is there really a need for a downvote button? I use the downvote button rarely even though I usually go through all the new posts. I've only used the downvote button in two contexts :
• The person has managed to put a horribly wrong solution which can mislead others. I usually drop a comment as well, though.
• The comment is off-topic or abusive. If the latter, I tag mods or staff.
Unless I'm missing something obvious, that seems to be the only uses. Even then, it doesn't fix the issue and you usually have to do something else (comment;tag mods).
Wouldn't a report system be better?
- 5 years, 11 months ago
I have some ideas about what can be done about the downvotes. This is to Staff Members :)
• We can merge commenting with downvotes: just like the reporting system of the problems. When a member downvotes any comment he would also have to write a comment or a reply explaining what is wrong with the comment - Is he not able to understand it? Is it offensive to him? Is it not relevant to the topic being discussed?... . If the person fails to give a reason behind his urge to downvote, his downvote won't be registered.
• We can also restrict the number of downvotes a person can give : Each day a person would get only 2 votes to spend for downvoting. If he uses those 2 votes he won't be able to downvote anymore (on that day). Again, the next day he would be given 2 votes.
Personally, I feel downvoting demotivating and undesirable. May be also unnecessary. The comment feature, on the other hand, is more effective and must be used rather than downvoting.
:)
- 5 years, 11 months ago
Yes, those are amongst the possibilities that we have considered.
Currently, the use of downvotes is mostly (almost exclusively) for "your comment is (logically) wrong", as opposed to "I disagree with you personally". For this use case, it is sufficient for 1-2 people comment as to why it is wrong, and others to vote in agreement/disagreement. There isn't a need for 5-10 people to tell you the exact same thing - if you weren't going to listen at the start, you aren't going to listen with more people, and having 5+ similar comments would just clog up the discussion. To that extent, I agree that "comment feature is more effective than downvoting".
Furthermore, the restriction to 2 downvotes would make it unnecessarily restrictive, esp if there is a discussion and someone is having a mistaken point of view. Instead, this would affect the active members' ability to contribute to the discussion.
Brilliant is not a community of trolls and haters (which I am really thankful for). Almost all of us use Brilliant for personal improvement, and are interested in helping each other improve. Though, being an internet site, we do attract trolls and haters occasionally, I do not think we should be catering to them. The best approach would be to just ignore them. If their actions start to adversely affect the community, we will take more serious action (like deactivating their accounts).
I would want to avoid the scenario on Brilliant where downvoting = "I hate you". Such an environment is not supportive, and we will be taking serious action against such offenses. The way to avoid this isn't to "institute rules that force people to behave in a certain way", but instead to "guide and develop the community to a system of values". If Brilliant does become a site of "I have a personal grudge against you" and downvoting is meaningless, we would remove that functionality.
Staff - 5 years, 11 months ago
Umm...when a comment is logically wrong or something is wrong with a comment, someone can (and will) point it out, the rest can agree with what has been pointed out by upvoting. What I meant to say is: If you agree/like=upvote. If you disagree/dislike=comment. And if you agree with the comment made on the wrong comment to fix it, upvote it. If you disagree with the attempt (comment) made tofix the wrong comment then reply (comment) what you think is the right answer. See this is what anyone is going to do (and most of us do). It's natural behavior. And anything the directs a natural behavior to accomplish a desired result, according to me, is a guideline (not a rule)
In a discussion if someone has a mistaken point of view wouldn't it be much better if someone helps the guy to resolve his doubt or change his point of view? Commenting, I think, may help more in this case than downvoting.
I agree that this site doesn't have trolls and haters. That's why I put a restriction on the no. of downvote a member can spend. We can increase it from 2 to 5 or may be 8...
I also agree that putting rules aren't going to help much (and they never help much) when it comes to "development". But if there is a misuse of a rule itself and that affects the main ingredient (members) of a site then we can look forward to alter. (if not completely remove it).My first idea was infarct more of a guideline. The second idea was a suggestion to bring about alteration into an already existing rule(feature). However, I couldn't present them in that way.
Unnecessary downvoting (sometimes) does have a negative impact on people's mind. Prasun may have pointed this out but there are more who might have been affected. Some might have chosen to ignore it.
Anyway, I respect any decision taken by staffs for the members as well as for the site.
- 5 years, 11 months ago
Yes, I am in agreement with your thoughts.
Where we differ is that I don't think adding additional complexity to the system is as yet useful or necessary. There may come a point in time when we will need to do so / change the system.
Staff - 5 years, 11 months ago
Yes agree. Also staffs might be more busy in developing the educational/academic features like wiki section and also might be working on new feature. That's understandable and justifiable .That's why I suggested Prasun, in the very beginning, to ignore such things
:)
- 5 years, 11 months ago
|
2021-05-18 01:43:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 8, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6974793672561646, "perplexity": 1208.9863506868837}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991650.73/warc/CC-MAIN-20210518002309-20210518032309-00474.warc.gz"}
|
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/42/3/g/a/
|
# Properties
Label 42.3.g.a Level $42$ Weight $3$ Character orbit 42.g Analytic conductor $1.144$ Analytic rank $0$ Dimension $4$ CM no Inner twists $2$
# Related objects
## Newspace parameters
Level: $$N$$ $$=$$ $$42 = 2 \cdot 3 \cdot 7$$ Weight: $$k$$ $$=$$ $$3$$ Character orbit: $$[\chi]$$ $$=$$ 42.g (of order $$6$$, degree $$2$$, minimal)
## Newform invariants
Self dual: no Analytic conductor: $$1.14441711031$$ Analytic rank: $$0$$ Dimension: $$4$$ Relative dimension: $$2$$ over $$\Q(\zeta_{6})$$ Coefficient field: $$\Q(\sqrt{2}, \sqrt{-3})$$ Defining polynomial: $$x^{4} + 2 x^{2} + 4$$ Coefficient ring: $$\Z[a_1, a_2, a_3]$$ Coefficient ring index: $$1$$ Twist minimal: yes Sato-Tate group: $\mathrm{SU}(2)[C_{6}]$
## $q$-expansion
Coefficients of the $$q$$-expansion are expressed in terms of a basis $$1,\beta_1,\beta_2,\beta_3$$ for the coefficient ring described below. We also show the integral $$q$$-expansion of the trace form.
$$f(q)$$ $$=$$ $$q + \beta_{1} q^{2} + ( 2 + \beta_{2} ) q^{3} + 2 \beta_{2} q^{4} + ( 2 + 2 \beta_{1} - 2 \beta_{2} + 4 \beta_{3} ) q^{5} + ( 2 \beta_{1} + \beta_{3} ) q^{6} + ( -5 - 2 \beta_{1} - 5 \beta_{2} - 4 \beta_{3} ) q^{7} + 2 \beta_{3} q^{8} + ( 3 + 3 \beta_{2} ) q^{9} +O(q^{10})$$ $$q + \beta_{1} q^{2} + ( 2 + \beta_{2} ) q^{3} + 2 \beta_{2} q^{4} + ( 2 + 2 \beta_{1} - 2 \beta_{2} + 4 \beta_{3} ) q^{5} + ( 2 \beta_{1} + \beta_{3} ) q^{6} + ( -5 - 2 \beta_{1} - 5 \beta_{2} - 4 \beta_{3} ) q^{7} + 2 \beta_{3} q^{8} + ( 3 + 3 \beta_{2} ) q^{9} + ( -8 + 2 \beta_{1} - 4 \beta_{2} - 2 \beta_{3} ) q^{10} + 6 \beta_{2} q^{11} + ( -2 + 2 \beta_{2} ) q^{12} + ( -1 - 16 \beta_{1} - 2 \beta_{2} - 8 \beta_{3} ) q^{13} + ( 8 - 5 \beta_{1} + 4 \beta_{2} - 5 \beta_{3} ) q^{14} + ( 6 + 6 \beta_{3} ) q^{15} + ( -4 - 4 \beta_{2} ) q^{16} + ( -16 + 2 \beta_{1} - 8 \beta_{2} - 2 \beta_{3} ) q^{17} + ( 3 \beta_{1} + 3 \beta_{3} ) q^{18} + ( -7 - 2 \beta_{1} + 7 \beta_{2} - 4 \beta_{3} ) q^{19} + ( 4 - 8 \beta_{1} + 8 \beta_{2} - 4 \beta_{3} ) q^{20} + ( -5 - 10 \beta_{2} - 6 \beta_{3} ) q^{21} + 6 \beta_{3} q^{22} + ( 12 + 18 \beta_{1} + 12 \beta_{2} ) q^{23} + ( -2 \beta_{1} + 2 \beta_{3} ) q^{24} + ( 24 \beta_{1} - 11 \beta_{2} + 24 \beta_{3} ) q^{25} + ( 16 - \beta_{1} - 16 \beta_{2} - 2 \beta_{3} ) q^{26} + ( 3 + 6 \beta_{2} ) q^{27} + ( 10 + 8 \beta_{1} + 4 \beta_{3} ) q^{28} + 24 \beta_{3} q^{29} + ( -12 + 6 \beta_{1} - 12 \beta_{2} ) q^{30} + ( 34 + 6 \beta_{1} + 17 \beta_{2} - 6 \beta_{3} ) q^{31} + ( -4 \beta_{1} - 4 \beta_{3} ) q^{32} + ( -6 + 6 \beta_{2} ) q^{33} + ( 4 - 16 \beta_{1} + 8 \beta_{2} - 8 \beta_{3} ) q^{34} + ( -20 - 2 \beta_{1} + 14 \beta_{2} - 22 \beta_{3} ) q^{35} -6 q^{36} + ( 11 + 12 \beta_{1} + 11 \beta_{2} ) q^{37} + ( 8 - 7 \beta_{1} + 4 \beta_{2} + 7 \beta_{3} ) q^{38} + ( -24 \beta_{1} - 3 \beta_{2} - 24 \beta_{3} ) q^{39} + ( 8 + 4 \beta_{1} - 8 \beta_{2} + 8 \beta_{3} ) q^{40} + ( -26 - 8 \beta_{1} - 52 \beta_{2} - 4 \beta_{3} ) q^{41} + ( 12 - 5 \beta_{1} + 12 \beta_{2} - 10 \beta_{3} ) q^{42} + ( 7 + 6 \beta_{3} ) q^{43} + ( -12 - 12 \beta_{2} ) q^{44} + ( 12 - 6 \beta_{1} + 6 \beta_{2} + 6 \beta_{3} ) q^{45} + ( 12 \beta_{1} + 36 \beta_{2} + 12 \beta_{3} ) q^{46} + ( -22 + 2 \beta_{1} + 22 \beta_{2} + 4 \beta_{3} ) q^{47} + ( -4 - 8 \beta_{2} ) q^{48} + ( -20 \beta_{1} + \beta_{2} + 20 \beta_{3} ) q^{49} + ( -48 - 11 \beta_{3} ) q^{50} + ( -24 + 6 \beta_{1} - 24 \beta_{2} ) q^{51} + ( 4 + 16 \beta_{1} + 2 \beta_{2} - 16 \beta_{3} ) q^{52} + ( -18 \beta_{1} - 60 \beta_{2} - 18 \beta_{3} ) q^{53} + ( 3 \beta_{1} + 6 \beta_{3} ) q^{54} + ( 12 - 24 \beta_{1} + 24 \beta_{2} - 12 \beta_{3} ) q^{55} + ( -8 + 10 \beta_{1} + 8 \beta_{2} ) q^{56} + ( -21 - 6 \beta_{3} ) q^{57} + ( -48 - 48 \beta_{2} ) q^{58} + ( -8 - 14 \beta_{1} - 4 \beta_{2} + 14 \beta_{3} ) q^{59} + ( -12 \beta_{1} + 12 \beta_{2} - 12 \beta_{3} ) q^{60} + ( -12 - 8 \beta_{1} + 12 \beta_{2} - 16 \beta_{3} ) q^{61} + ( 12 + 34 \beta_{1} + 24 \beta_{2} + 17 \beta_{3} ) q^{62} + ( 6 \beta_{1} - 15 \beta_{2} - 6 \beta_{3} ) q^{63} + 8 q^{64} + ( 90 - 42 \beta_{1} + 90 \beta_{2} ) q^{65} + ( -6 \beta_{1} + 6 \beta_{3} ) q^{66} + ( 42 \beta_{1} - 55 \beta_{2} + 42 \beta_{3} ) q^{67} + ( 16 + 4 \beta_{1} - 16 \beta_{2} + 8 \beta_{3} ) q^{68} + ( 12 + 36 \beta_{1} + 24 \beta_{2} + 18 \beta_{3} ) q^{69} + ( 44 - 20 \beta_{1} + 40 \beta_{2} + 14 \beta_{3} ) q^{70} + ( 78 - 42 \beta_{3} ) q^{71} -6 \beta_{1} q^{72} + ( -22 + 40 \beta_{1} - 11 \beta_{2} - 40 \beta_{3} ) q^{73} + ( 11 \beta_{1} + 24 \beta_{2} + 11 \beta_{3} ) q^{74} + ( 11 + 24 \beta_{1} - 11 \beta_{2} + 48 \beta_{3} ) q^{75} + ( -14 + 8 \beta_{1} - 28 \beta_{2} + 4 \beta_{3} ) q^{76} + ( 30 + 24 \beta_{1} + 12 \beta_{3} ) q^{77} + ( 48 - 3 \beta_{3} ) q^{78} + ( -5 - 66 \beta_{1} - 5 \beta_{2} ) q^{79} + ( -16 + 8 \beta_{1} - 8 \beta_{2} - 8 \beta_{3} ) q^{80} + 9 \beta_{2} q^{81} + ( 8 - 26 \beta_{1} - 8 \beta_{2} - 52 \beta_{3} ) q^{82} + ( 10 + 76 \beta_{1} + 20 \beta_{2} + 38 \beta_{3} ) q^{83} + ( 20 + 12 \beta_{1} + 10 \beta_{2} + 12 \beta_{3} ) q^{84} + ( -72 - 60 \beta_{3} ) q^{85} + ( -12 + 7 \beta_{1} - 12 \beta_{2} ) q^{86} + ( -24 \beta_{1} + 24 \beta_{3} ) q^{87} + ( -12 \beta_{1} - 12 \beta_{3} ) q^{88} + ( -12 + 12 \beta_{2} ) q^{89} + ( -12 + 12 \beta_{1} - 24 \beta_{2} + 6 \beta_{3} ) q^{90} + ( -101 + 34 \beta_{1} - 91 \beta_{2} + 80 \beta_{3} ) q^{91} + ( -24 + 36 \beta_{3} ) q^{92} + ( 51 + 18 \beta_{1} + 51 \beta_{2} ) q^{93} + ( -8 - 22 \beta_{1} - 4 \beta_{2} + 22 \beta_{3} ) q^{94} + ( -54 \beta_{1} + 66 \beta_{2} - 54 \beta_{3} ) q^{95} + ( -4 \beta_{1} - 8 \beta_{3} ) q^{96} + ( -12 + 8 \beta_{1} - 24 \beta_{2} + 4 \beta_{3} ) q^{97} + ( -40 - 80 \beta_{2} + \beta_{3} ) q^{98} -18 q^{99} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$4q + 6q^{3} - 4q^{4} + 12q^{5} - 10q^{7} + 6q^{9} + O(q^{10})$$ $$4q + 6q^{3} - 4q^{4} + 12q^{5} - 10q^{7} + 6q^{9} - 24q^{10} - 12q^{11} - 12q^{12} + 24q^{14} + 24q^{15} - 8q^{16} - 48q^{17} - 42q^{19} + 24q^{23} + 22q^{25} + 96q^{26} + 40q^{28} - 24q^{30} + 102q^{31} - 36q^{33} - 108q^{35} - 24q^{36} + 22q^{37} + 24q^{38} + 6q^{39} + 48q^{40} + 24q^{42} + 28q^{43} - 24q^{44} + 36q^{45} - 72q^{46} - 132q^{47} - 2q^{49} - 192q^{50} - 48q^{51} + 12q^{52} + 120q^{53} - 48q^{56} - 84q^{57} - 96q^{58} - 24q^{59} - 24q^{60} - 72q^{61} + 30q^{63} + 32q^{64} + 180q^{65} + 110q^{67} + 96q^{68} + 96q^{70} + 312q^{71} - 66q^{73} - 48q^{74} + 66q^{75} + 120q^{77} + 192q^{78} - 10q^{79} - 48q^{80} - 18q^{81} + 48q^{82} + 60q^{84} - 288q^{85} - 24q^{86} - 72q^{89} - 222q^{91} - 96q^{92} + 102q^{93} - 24q^{94} - 132q^{95} - 72q^{99} + O(q^{100})$$
Basis of coefficient ring in terms of a root $$\nu$$ of $$x^{4} + 2 x^{2} + 4$$:
$$\beta_{0}$$ $$=$$ $$1$$ $$\beta_{1}$$ $$=$$ $$\nu$$ $$\beta_{2}$$ $$=$$ $$\nu^{2}$$$$/2$$ $$\beta_{3}$$ $$=$$ $$\nu^{3}$$$$/2$$
$$1$$ $$=$$ $$\beta_0$$ $$\nu$$ $$=$$ $$\beta_{1}$$ $$\nu^{2}$$ $$=$$ $$2 \beta_{2}$$ $$\nu^{3}$$ $$=$$ $$2 \beta_{3}$$
## Character values
We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/42\mathbb{Z}\right)^\times$$.
$$n$$ $$29$$ $$31$$ $$\chi(n)$$ $$1$$ $$1 + \beta_{2}$$
## Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
For more information on an embedded modular form you can click on its label.
Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$
19.1
−0.707107 + 1.22474i 0.707107 − 1.22474i −0.707107 − 1.22474i 0.707107 + 1.22474i
−0.707107 + 1.22474i 1.50000 0.866025i −1.00000 1.73205i 7.24264 + 4.18154i 2.44949i −6.74264 + 1.88064i 2.82843 1.50000 2.59808i −10.2426 + 5.91359i
19.2 0.707107 1.22474i 1.50000 0.866025i −1.00000 1.73205i −1.24264 0.717439i 2.44949i 1.74264 + 6.77962i −2.82843 1.50000 2.59808i −1.75736 + 1.01461i
31.1 −0.707107 1.22474i 1.50000 + 0.866025i −1.00000 + 1.73205i 7.24264 4.18154i 2.44949i −6.74264 1.88064i 2.82843 1.50000 + 2.59808i −10.2426 5.91359i
31.2 0.707107 + 1.22474i 1.50000 + 0.866025i −1.00000 + 1.73205i −1.24264 + 0.717439i 2.44949i 1.74264 6.77962i −2.82843 1.50000 + 2.59808i −1.75736 1.01461i
$$n$$: e.g. 2-40 or 990-1000 Significant digits: Format: Complex embeddings Normalized embeddings Satake parameters Satake angles
## Inner twists
Char Parity Ord Mult Type
1.a even 1 1 trivial
7.d odd 6 1 inner
## Twists
By twisting character orbit
Char Parity Ord Mult Type Twist Min Dim
1.a even 1 1 trivial 42.3.g.a 4
3.b odd 2 1 126.3.n.a 4
4.b odd 2 1 336.3.bh.e 4
5.b even 2 1 1050.3.p.a 4
5.c odd 4 2 1050.3.q.a 8
7.b odd 2 1 294.3.g.a 4
7.c even 3 1 294.3.c.a 4
7.c even 3 1 294.3.g.a 4
7.d odd 6 1 inner 42.3.g.a 4
7.d odd 6 1 294.3.c.a 4
12.b even 2 1 1008.3.cg.h 4
21.c even 2 1 882.3.n.e 4
21.g even 6 1 126.3.n.a 4
21.g even 6 1 882.3.c.b 4
21.h odd 6 1 882.3.c.b 4
21.h odd 6 1 882.3.n.e 4
28.f even 6 1 336.3.bh.e 4
28.f even 6 1 2352.3.f.e 4
28.g odd 6 1 2352.3.f.e 4
35.i odd 6 1 1050.3.p.a 4
35.k even 12 2 1050.3.q.a 8
84.j odd 6 1 1008.3.cg.h 4
By twisted newform orbit
Twist Min Dim Char Parity Ord Mult Type
42.3.g.a 4 1.a even 1 1 trivial
42.3.g.a 4 7.d odd 6 1 inner
126.3.n.a 4 3.b odd 2 1
126.3.n.a 4 21.g even 6 1
294.3.c.a 4 7.c even 3 1
294.3.c.a 4 7.d odd 6 1
294.3.g.a 4 7.b odd 2 1
294.3.g.a 4 7.c even 3 1
336.3.bh.e 4 4.b odd 2 1
336.3.bh.e 4 28.f even 6 1
882.3.c.b 4 21.g even 6 1
882.3.c.b 4 21.h odd 6 1
882.3.n.e 4 21.c even 2 1
882.3.n.e 4 21.h odd 6 1
1008.3.cg.h 4 12.b even 2 1
1008.3.cg.h 4 84.j odd 6 1
1050.3.p.a 4 5.b even 2 1
1050.3.p.a 4 35.i odd 6 1
1050.3.q.a 8 5.c odd 4 2
1050.3.q.a 8 35.k even 12 2
2352.3.f.e 4 28.f even 6 1
2352.3.f.e 4 28.g odd 6 1
## Hecke kernels
This newform subspace is the entire newspace $$S_{3}^{\mathrm{new}}(42, [\chi])$$.
## Hecke characteristic polynomials
$p$ $F_p(T)$
$2$ $$1 + 2 T^{2} + 4 T^{4}$$
$3$ $$( 1 - 3 T + 3 T^{2} )^{2}$$
$5$ $$1 - 12 T + 86 T^{2} - 456 T^{3} + 2019 T^{4} - 11400 T^{5} + 53750 T^{6} - 187500 T^{7} + 390625 T^{8}$$
$7$ $$1 + 10 T + 51 T^{2} + 490 T^{3} + 2401 T^{4}$$
$11$ $$( 1 + 6 T - 85 T^{2} + 726 T^{3} + 14641 T^{4} )^{2}$$
$13$ $$1 + 98 T^{2} + 54915 T^{4} + 2798978 T^{6} + 815730721 T^{8}$$
$17$ $$1 + 48 T + 1514 T^{2} + 35808 T^{3} + 694947 T^{4} + 10348512 T^{5} + 126450794 T^{6} + 1158603312 T^{7} + 6975757441 T^{8}$$
$19$ $$1 + 42 T + 1433 T^{2} + 35490 T^{3} + 795972 T^{4} + 12811890 T^{5} + 186749993 T^{6} + 1975927002 T^{7} + 16983563041 T^{8}$$
$23$ $$1 - 24 T + 22 T^{2} + 12096 T^{3} - 277629 T^{4} + 6398784 T^{5} + 6156502 T^{6} - 3552861336 T^{7} + 78310985281 T^{8}$$
$29$ $$( 1 + 530 T^{2} + 707281 T^{4} )^{2}$$
$31$ $$1 - 102 T + 6041 T^{2} - 262446 T^{3} + 9029556 T^{4} - 252210606 T^{5} + 5578990361 T^{6} - 90525375462 T^{7} + 852891037441 T^{8}$$
$37$ $$1 - 22 T - 2087 T^{2} + 3674 T^{3} + 4073284 T^{4} + 5029706 T^{5} - 3911374007 T^{6} - 56445980998 T^{7} + 3512479453921 T^{8}$$
$41$ $$1 - 2476 T^{2} + 6405414 T^{4} - 6996584236 T^{6} + 7984925229121 T^{8}$$
$43$ $$( 1 - 14 T + 3675 T^{2} - 25886 T^{3} + 3418801 T^{4} )^{2}$$
$47$ $$1 + 132 T + 11654 T^{2} + 771672 T^{3} + 42125907 T^{4} + 1704623448 T^{5} + 56867802374 T^{6} + 1422856423428 T^{7} + 23811286661761 T^{8}$$
$53$ $$1 - 120 T + 5830 T^{2} - 354240 T^{3} + 25104819 T^{4} - 995060160 T^{5} + 46001504230 T^{6} - 2659723335480 T^{7} + 62259690411361 T^{8}$$
$59$ $$1 + 24 T + 6026 T^{2} + 140016 T^{3} + 22586547 T^{4} + 487395696 T^{5} + 73019217386 T^{6} + 1012332807384 T^{7} + 146830437604321 T^{8}$$
$61$ $$1 + 72 T + 9218 T^{2} + 539280 T^{3} + 48684147 T^{4} + 2006660880 T^{5} + 127630962338 T^{6} + 3709466953992 T^{7} + 191707312997281 T^{8}$$
$67$ $$1 - 110 T + 3625 T^{2} + 55330 T^{3} - 2642396 T^{4} + 248376370 T^{5} + 73047813625 T^{6} - 9950422038590 T^{7} + 406067677556641 T^{8}$$
$71$ $$( 1 - 156 T + 12638 T^{2} - 786396 T^{3} + 25411681 T^{4} )^{2}$$
$73$ $$1 + 66 T + 2873 T^{2} + 93786 T^{3} - 18641292 T^{4} + 499785594 T^{5} + 81588146393 T^{6} + 9988058935074 T^{7} + 806460091894081 T^{8}$$
$79$ $$1 + 10 T - 3695 T^{2} - 86870 T^{3} - 25172156 T^{4} - 542155670 T^{5} - 143920549295 T^{6} + 2430874555210 T^{7} + 1517108809906561 T^{8}$$
$83$ $$1 - 9628 T^{2} + 107694438 T^{4} - 456928714588 T^{6} + 2252292232139041 T^{8}$$
$89$ $$( 1 + 36 T + 8353 T^{2} + 285156 T^{3} + 62742241 T^{4} )^{2}$$
$97$ $$1 - 36580 T^{2} + 511416774 T^{4} - 3238401098980 T^{6} + 7837433594376961 T^{8}$$
|
2020-10-24 01:24:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9956206679344177, "perplexity": 11348.467929405115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107881551.11/warc/CC-MAIN-20201023234043-20201024024043-00638.warc.gz"}
|
https://couponsanddiscouts.com/terminal-value-discount-rate/
|
# Terminal Value Discount Rate
Filter Type:
Filter Time:
### How is terminal value discounted? - Investopedia
Sep 14, 2018 · Terminal value does something similar, except that it focuses on assumed cash flows for all of the years past the limit of the discounted cash flow model. Typically, an asset's terminal value is ...
### Terminal Value (TV) Definition
Terminal value is calculated by dividing the last cash flow forecast by the difference between the discount rate and terminal growth rate. The terminal value calculation estimates the value of the...
https://www.investopedia.com/terms/t/terminalvalue.asp
The value of the $1 will reduce by a percentage, called the discount rate. Terminal Value Formula TV = \dfrac{FCF \times (1 + g)}{d - g} FCF = free cash flow for the last forecast period; g = terminal growth rate; d = discount rate (usually the weighted average cost of capital) https://studyfinance.com/terminal-value/ ### “Lather, Rinse, Repeat: How We Discount the Terminal Value” the terminal value generated through the capitalization model [Stage 2]). Yet it is the use of “n” in the denomi-nator of the terminal value in Equation 2 that had generated much consternation (and today is accepted as given). If the ter-minal value reflects the cash flow of the last year of the projection period times the growth rate (i.e ... https://www.duffandphelps.com/-/media/assets/pdfs/news/disputes-and-investigations/how-we-discount-the-terminal-value.ashx?la=en ### Terminal Value (Definition, Example) | What is DCF Jul 31, 2014 · Terminal Value is a very important concept in Discounted Cash Flows as it accounts for more than 60%-80% of the total valuation of the firm. You should put special attention in assuming the growth rates (g), discount rates (WACC), and the multiples (PE ratio, Price to Book, PEG Ratio, EV/EBITDA, or EV/EBIT). https://www.wallstreetmojo.com/terminal-value/ ### Estimating Terminal Value Terminal Value t = where the cash flow and the discount rate used will depend upon whether you are valuing the firm or valuing the equity. If we are valuing the equity, the terminal value of equity can be written as: Terminal value of Equity n = The cashflow to equity can be defined strictly as dividends (in the dividend discount model) or as ... http://pages.stern.nyu.edu/~adamodar/New_Home_Page/valquestions/termvalapproaches.htm ### Why can't the growth rate be higher than the discount rate Mar 14, 2010 · How Growth Rate and Discount Rate Impact Terminal Value Formula. From a simple mathematical perspective, the growth rate can't be higher than the discount rate because it would give you a negative terminal value. From a theoretical perspective, Certified Investment Banking Professional – 1st Year Associate @jhoratio" explains: https://www.wallstreetoasis.com/forums/why-cant-the-growth-rate-be-higher-than-the-discount-rate ### How to Calculate Terminal Value in a DCF Analysis Then, you make initial guesses for the Terminal FCF Growth Rate and the Terminal Multiple that are slight discounts to these numbers. For example, if long-term GDP growth is expected to be 2-3%, you might pick 1-2% for the Terminal FCF Growth Rate. https://breakingintowallstreet.com/biws/how-to-calculate-terminal-value/ ### Step by Step Guide on Discounted - | Fair Value Academy Dec 31, 2018 · Remember, the exit value computed is a value as of the terminal year, and we will need to convert it to present value by multiplying it with the terminal year’s discount factor. When estimating the terminal growth rate, we usually benchmark it with the long-term GDP growth or inflation rate … https://www.fairvalueacademy.org/discounted-cash-flow-dcf-approach/ ### Terminal value (finance) - Wikipedia To determine the present value of the terminal value, one must discount its value at T 0 by a factor equal to the number of years included in the initial projection period. If N is the 5th and final year in this period, then the Terminal Value is divided by (1 + k) 5 (or WACC). https://en.wikipedia.org/wiki/Terminal_value_(finance) ### Is Lennar Corporation (NYSE:LEN) Trading At A 31% Discount? Apr 23, 2021 · Present Value of Terminal Value (PVTV)= TV / (1 + r) 10 = US$64b÷ ( 1 + 9.0%) 10 = US$27b The total value, or equity value, is then the sum of the present value … https://finance.yahoo.com/news/lennar-corporation-nyse-len-trading-081309289.html ### Terminal value calculations with the Discounted Cash Flow terminal value calculations typically account for at least 56% of the total company value for a mature tobacco company, topped by the sporting goods industry with 81%, and the terminal value could even exceed 100% in industries which have a potential high growth but have high initial investments http://essay.utwente.nl/70011/1/ten%20Beitel_MA_Faculty%20of%20Behavioural%2C%20Management%20and%20Social%20Sciences.pdf ### Guide to Terminal Value, Using The Gordon Growth Model Jul 20, 2020 · Likewise, if I add a stable growth rate of 6%, Amazon would have a terminal value of$682,653.67, which would give us a per-share value of \$1036.50 with everything else remaining constant. The intrinsic value of the above companies with their respective stable growth rates:
https://einvestingforbeginners.com/terminal-value-gordon-growth-model-daah/
### CLOSURE IN VALUATION: ESTIMATING TERMINAL VALUE
Terminal Value t = stable t1 r- g Cash Flow + where the cash flow and the discount rate used will depend upon whether you are valuing the firm or valuing the equity. If we are valuing the equity, the terminal value of equity can be written as: +Terminal value of Equity n = n1n n1 Cost of Equity-g Cashflow to Equity +
### valuation how to estimate a decent discount rate for dcf
Also another thing is that terminal value is pretty much the same as p/e or p/fcf, it's just the inverse of discount rate - terminal growth * fcf. E.g. 10 p/e could mean 10% discount - 0 growth. The goal should be to move past this theoretical stuff as fast as possible and just learn about companies through 10-ks and investment write ups.
|
2021-05-14 22:13:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3618866801261902, "perplexity": 2609.902912624858}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991829.45/warc/CC-MAIN-20210514214157-20210515004157-00584.warc.gz"}
|
http://www.openmath.org/cd/linalg2.xhtml
|
Home Overview Documents Content Dictionaries Software & Tools The OpenMath Society OpenMath Projects OpenMath Discussion Lists OpenMath Meetings Links
# OpenMath Content Dictionary: linalg2
Canonical URL:
http://www.openmath.org/cd/linalg2.ocd
CD Base:
http://www.openmath.org/cd
CD File:
linalg2.ocd
CD as XML Encoded OpenMath:
linalg2.omcd
Defines:
matrix, matrixrow, vector
Date:
2004-03-30
Version:
3
Review Date:
2006-03-30
Status:
official
This document is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
The copyright holder grants you permission to redistribute this
document freely as a verbatim copy. Furthermore, the copyright
holder permits you to develop any derived work from this document
provided that the following conditions are met.
a) The derived work acknowledges the fact that it is derived from
this document, and maintains a prominent reference in the
work to the original source.
b) The fact that the derived work is not the original OpenMath
document is stated prominently in the derived work. Moreover if
both this document and the derived work are Content Dictionaries
then the derived work must include a different CDName element,
chosen so that it cannot be confused with any works adopted by
the OpenMath Society. In particular, if there is a Content
Dictionary Group whose name is, for example, `math' containing
Content Dictionaries named `math1', `math2' etc., then you should
not name a derived Content Dictionary `mathN' where N is an integer.
However you are free to name it `private_mathN' or some such. This
is because the names `mathN' may be used by the OpenMath Society
for future extensions.
compilation of derived works, but keep paragraphs a) and b)
intact. The simplest way to do this is to distribute the derived
work under the OpenMath license, but this is not a requirement.
society at http://www.openmath.org.
This CD treats matrices and vectors in a row oriented fashion (using matrixrow's).
## vector
Role:
application
Description:
This symbol represents an n-ary function used to construct (or describe) vectors. Vectors in this CD are considered to be row vectors and must therefore be transposed to be considered as column vectors.
Example:
An example of vector using n arguments. The specific vector constructed in this example is [3,6,9].
$\left(3,6,9\right)$
Signatures:
sts
[Next: matrixrow] [Last: matrix] [Top]
## matrixrow
Role:
application
Description:
This symbol is an n-ary constructor used to represent rows of matrices. Its arguments should be members of a ring.
Example:
Representation of a row of a matrix of length two containing the integers [1,0]
$\left(\begin{array}{cc}1& 0\end{array}\right)$
Signatures:
sts
[Next: matrix] [Previous: vector] [Top]
## matrix
Role:
application
Description:
This symbol is an n-ary matrix constructor which requires matrixrow's as arguments. It is used to represent matrices.
Example:
Representation of a 2x2 identity matrix
$\left(\begin{array}{cc}1& 0\\ 0& 1\end{array}\right)$
Signatures:
sts
[First: vector] [Previous: matrixrow] [Top]
Home Overview Documents Content Dictionaries Software & Tools The OpenMath Society OpenMath Projects OpenMath Discussion Lists OpenMath Meetings Links
|
2015-03-31 12:31:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6674670577049255, "perplexity": 5294.2752391763825}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131300578.50/warc/CC-MAIN-20150323172140-00209-ip-10-168-14-71.ec2.internal.warc.gz"}
|
https://stacks.math.columbia.edu/tag/05W9
|
Lemma 8.6.10. Let $\mathcal{C}$ be a site. Let
$\xymatrix{ \mathcal{T}_2 \ar[r] \ar[d] & \mathcal{T}_1 \ar[d]^ G \\ \mathcal{S}_2 \ar[r]^ F & \mathcal{S}_1 }$
be a $2$-cartesian diagram of stacks in groupoids over $\mathcal{C}$. If
1. $F : \mathcal{S}_2 \to \mathcal{S}_1$ is fully faithful,
2. for every $U \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{C})$ and $x \in \mathop{\mathrm{Ob}}\nolimits ((\mathcal{S}_1)_ U)$ there exists a covering $\{ U_ i \to U\}$ such that $x|_{U_ i}$ is in the essential image of $F : (\mathcal{S}_2)_{U_ i} \to (\mathcal{S}_1)_{U_ i}$, and
3. $\mathcal{T}_2$ is a stack in setoids.
then $\mathcal{T}_1$ is a stack in setoids.
Proof. We may assume that $\mathcal{T}_2$ is the category $\mathcal{S}_2 \times _{\mathcal{S}_1} \mathcal{T}_1$ described in Categories, Lemma 4.32.3. Pick $U \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{C})$ and $y \in \mathop{\mathrm{Ob}}\nolimits ((\mathcal{T}_1)_ U)$. We have to show that the sheaf $\mathit{Aut}(y)$ on $\mathcal{C}/U$ is trivial. To to this we may replace $U$ by the members of a covering of $U$. Hence by assumption (2) we may assume that there exists an object $x \in \mathop{\mathrm{Ob}}\nolimits ((\mathcal{S}_2)_ U)$ and an isomorphism $f : F(x) \to G(y)$. Then $y' = (U, x, y, f)$ is an object of $\mathcal{T}_2$ over $U$ which is mapped to $y$ under the projection $\mathcal{T}_2 \to \mathcal{T}_1$. Because $F$ is fully faithful by (1) the map $\mathit{Aut}(y') \to \mathit{Aut}(y)$ is surjective, use the explicit description of morphisms in $\mathcal{T}_2$ in Categories, Lemma 4.32.3. Since by (3) the sheaf $\mathit{Aut}(y')$ is trivial we get the result of the lemma. $\square$
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
Unfortunately JavaScript is disabled in your browser, so the comment preview function will not work.
In order to prevent bots from posting comments, we would like you to prove that you are human. You can do this by filling in the name of the current tag in the following input field. As a reminder, this is tag 05W9. Beware of the difference between the letter 'O' and the digit '0'.
|
2022-12-06 22:59:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9007686972618103, "perplexity": 172.67750972660028}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711121.31/warc/CC-MAIN-20221206225143-20221207015143-00030.warc.gz"}
|
https://www.physicsforums.com/threads/n-for-6-is-6-5-4-3-2-1-but-what-is-6-1.42363/
|
# N! for 6 is 6*5*4*3*2*1 but what is 6.1! ?
1. Sep 8, 2004
### tony873004
I know that n! for 6 is 6*5*4*3*2*1 but what is 6.1! ? My calculator says 868.957. How do they come up with this?
I'm trying to write a computer program that mimics the calculator program that comes with windows.
I know that using my above formula that I have to make an exception for 0!=1, and Invalid Input for Function for a negative number.
Anything else I should know about n! ??
2. Sep 8, 2004
### Tide
The factorial is a special case of the gamma function. The relationship is $x! = \Gamma (x+1)$ and the factorial is usually reserved for nonnegative integers.
3. Sep 8, 2004
### Janitor
This question crops up here every now and then, I have noticed. Here is one website on the relation between the gamma function and factorials.
http://mathworld.wolfram.com/GammaFunction.html
4. Sep 8, 2004
### tony873004
Thanks for your replies. This forum is great!
That link scared me away. I think I might drop the n! button from my calculator since I can't make it do non-integers and the Windows calculator can.
5. Sep 8, 2004
### Tide
Actually, it's not too hard do. You can create a table of values for the gamma function over the interval (0, 1] from which you can obtain values of $\Gamma (x)$ for larger x values using the fact that $\Gamma (x+1) = x \Gamma(x)$. If you want greater accuracy you can write a simple interpolation routine.
6. Sep 9, 2004
### tony873004
But what would I do with the values between 0 & 1? Add them to the integer's factorial, or multiply them (probably not. I'm guessing they'd be less than 1, and 6.1! > 6!). I'm not sure I could trust my interpolation routine. If I could come up with that I could probably forget the table altogether. I could also make the Calculator generate an error message on non-integer inputs. Do people ever use the n! button? I never have.
7. Sep 9, 2004
### Tide
No. You would branch to one of two subroutines - one for integer values and one for noninteger values.
|
2017-07-28 00:49:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4563850164413452, "perplexity": 1038.40094346098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549436316.91/warc/CC-MAIN-20170728002503-20170728022503-00233.warc.gz"}
|
http://www.pa3cor.nl/
|
## Multiple Feedback Band-Pass filter
Band-pass filters are filters that are designed to let only a certain set of frequencies pass through. Overtime a myriad of different topologies have been developed by some very creative designers. Passive vs active, RC-filters vs LC filters, combinations of low pass and high pass filters, with gain and without gain, etc. One of the topologies that are often used for CW filters in receivers is the multiple feedback band-pass filter :
Continue reading Multiple Feedback Band-Pass filter
## 40m DC-receiver – What’s next?
Now that the basic receiver is working, what do I want to add to consider the receiver ‘complete’ ? This is the current set-up :
The following things I would like to add:
• S-meter : shows the strength of the received signal. Based on AF signal
• CW filter : Multiple Feedback Band-Pass filter, 1 or 2 stages, centered around 750Hz
• Audio Amp – current the bench utility amp is used with a gain of 20 dB.
So the receiver is alive and kicking and I have received amateurs from The Netherlands, UK and Italy on a simple 10m wire antenna. However, the VFO is comprised of a Siglent SDG1032X function generator from work. Hardly a self-contained solution!
After an article by Bill Merea in the GQRP Sprat magazine, I’ve decided to go for a digital VFO. The clock generator is based on a Si5351 module. The I2C interface is controlled via an Arduino Nano board together with a Groove I2C LCD.
## Attenuator
An attenuation network is often used in devices like a (digital) voltmeter, in oscilloscopes, etc. It offers the possibility to expand the input range of the measurement tool and thus makes it more versatile. Usually the attenuation network is build with a couple of resistor as shown in fig. 1. If the input resistance of the rest of devices is considered indefinite (as for example with a J-FET) the voltage is Vo=Vi*R1/(R1+R2). This works very well with DC-voltages.
However, with AC-voltages, things are not quit that simple. There are parasitic capacitances that ruins this simple setup. Figure 2 shows what is happening. For a DC-voltage this setup still works. However, with an AC-voltage it now becomes different. The total resistance is now the parallel impedance of the resistor and the capacitance and is thus lower. The meter will thus show a lower reading and the accuracy of the instrument so carefully constructed is down the drain….
However, this can easily solved by adding some more capacitors. Yes, you read it correct even more capacitors!
$R_1C_1 = \tau_1$ $R_2C_2 = \tau_2$ $Z_1 = R_1//C_1$ $Z_1 = R_1//Z_{C1} = { {R_1 {{1}\over {j \omega C_1} }} \over {R_1 + {{1}\over {j \omega C_1} }} } = {{R_1}\over {1+j \omega R_1C_1} }$ $Z_2 = R_2//C_2$ Likewise : $Z_2 = R_2//Z_{C2} = {{R_2}\over {1+j \omega R_2C_2} }$ ${{V_o}\over{V_i}} = {{Z_2}\over{Z_1+Z_2}} = {{ {{R_2}\over {1+j \omega R_2C_2} }}\over {{{R_1}\over {1+j \omega R_1C_1} }+{{R_2}\over {1+j \omega R_2C_2} } }}$ $\tau_1 = \tau_2 = \tau$ ${{V_o}\over{V_i}} = {{ {{R_2}\over {1+j \omega \tau} }}\over {{{R_1}\over {1+j \omega \tau} }+{{R_2}\over {1+j \omega \tau} } }} {{1+j \omega \tau}\over {1+j \omega \tau}}$ ${{V_o}\over{V_i}} = {{R_2} \over {R_1 + R_2}} qed$
## Milliohm meter
This is a simple but accurate milliohm meter with a range of 0 – 2000 milliohm (= 2 Ohm). It has a typical accuracy of 2%-3%. Unlike other designs this circuits doesn’t use large currents to measure these small resistances. So there is no risk to damage components. A small alternating current is used to excite the resistance under test. This AC voltage is then amplified with a common OpAmp AC gain block. Therefor the OpAmp DC offset voltage doesn’t come into play. It can be used to measure contact resistance of coax connectors, relays, switches, etc. It is powered from 2 9V / 6LR61 batteries.
The circuit is build up with 5 separate blocks that can be build and tested sequentially. A square wave oscillator with 50% duty cycle and two complementary outputs. The non-inverted output switches the 10 mA current-source on and off. This current is passed through the unknown resistance. The voltage drop is then amplified 200 times. The output of OpAmp is then rectified by the synchronous rectifier (more on this later). The DC voltage still contains some switching artifacts and is then smoothed out by a 3-pole low pass filter. The resulting smoothed DC voltage can then be offered to a 3 1/2 DMM or an analog meter.
## 40m DC receiver – “It’s alive!”
With a three day Pentecost weekend and my wife away on a camping trip with my son, I had some time to look into the receiver again. With the previous series of tests, I’ve found that product detector was working fine and featured good conversion gain. The LO oscillator injection was flawed because the 74LS132 was behaving incorrectly. I could use my work-horse the PM5134 20MHz function generator. Because the frequency setting is a bit course, I risked the chance of missing a signal in band and thus still not knowing the outcome. Therefor I borrowed the Siglent SDG1032X DDS function generator from my work.
I decided to focus on the hart of the receiver, the product detector, first. I hooked up my random length wire antenna to the gate of the J-FET, the function generator to the control inputs of the switches and a utility bench amplifier to the OpAmp output. (Max gain of 20dB) . I started at 7 Mhz and slowly increased the frequency up to 7.3Mhz. Although I definitely received some signals, it all sounded heavily distorted, except for CW. Connecting a scope to the output of the OpAmp showed some serious clipping. I decided to lower the OpAmp gain by increasing the resistor back again to 2k2. The gain is now Av = 46x = 33dB. This definitely improved the situation, weaker signals were now understandable but strong signals still featured clipping. Therefor I decided to add a RF gain pot to the input of the JFet. After playing around a bit, I’ve found that setting this 1/4 is sufficient in most cases.
With these changes I was able to receive signals all over the 40m band. Within 25 minutes I picked up signals from hams from the UK, Italy, Netherlands and Germany and possible a couple from the USA. By this time it was already 2345h, time to shuffle off.
Next, I’ve added the audio low pass filter to the output of the OpAmp. This definitely improved the audio quality! Much of the annoying high frequency noise was gone. CW signals are now much better to copy for only a smaller part of the band is received. There is still some 50Hz/100Hz hum picked up. Which is almost unavoidable with a receiver lying open on the bench. I realized that there are only low pass filter functions in the gain chain. The corner frequency for the OpAmp is set by Rx and Cx and is set at 0.7Hz… I’ve decided to decrease Cx to 0.22uF, the -3dB point is now set at 330Hz, much better!
Recalculating all RC filter points again, I realized that the input filter to the difference amplifier, also needed to be changed. The current filter point is set by R6+R7 and (C4 + C5//C6) and is only 2.6kHz. Which is fine for CW listening but cuts of a tad too much for voice signals. Therefor C4 was lowered to 4n7, giving a -3dB frequency of 5.1 kHz.
Next step is to build a local oscillator: one VFO to rule them all!
## Double-balanced cross-coupled product detector
The double-balanced cross-coupled product detector had a brief stint of popularity in the 70s and 80s. It’s popularity quickly faded once integrated product detectors like the Plessey SL640, Motorola MC1496/1596 and the CA3028A came on the market. Offering ease of use and further integration. My attention was first drawn to the cross-coupled product, when casually browsing some Technical Topics columns from Pat Hawker G3VA from the 80s. In his June/July 1980 article briefly mentions the product detector. Out of curiosity, I decided to build it and do some tests with it.
The nice thing about this whole circuit it can be build without any transformers! The only thing needed is a 1mH choke that be bought of the shelf for a couple of cents.
Continue reading Double-balanced cross-coupled product detector
## JFET testing
For an upcoming RF amplifier project I needed a couple of JFETs. The spread in device characteristics is quit broad. For the BF256B Vp ranges from -0.5 to -8.0V ( a factor 16! ) and IDSS ranges from 6 mA to 13 mA. Since, I bought a small stock of them, I decided to satisfy my curiosity and measure to static device characteristics.
|
2022-09-24 16:32:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4116416275501251, "perplexity": 2226.670149038448}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00367.warc.gz"}
|
https://crypto.stackexchange.com/questions/51433/timing-vulnerability-of-byte-array-equality-test
|
# Timing vulnerability of byte array equality test?
Would the following code to test MAC equality leak timing information ?
bool equal(unsigned char *a, unsigned char *b, int len) {
unsigned char c = 0;
for (int i = 0; i < len; i++) {
c |= a[i] ^ b[i];
}
return c == 0;
}
More precisely, does the last expression c == 0leak timing information ?
I don't think so, because it doesn't allow to know where would be the mismatch.
Why are then some crypto libraries using complex expression to convert c into a 1 or 0 value than later simply compare that value with a simple == with 1 (or 0) ?
Would a timing difference between MAC mismatch and identical MAC condition be relevant ?
• Comments are not for extended discussion; this conversation has been moved to chat. – e-sushi Dec 16 '17 at 18:42
The comparison c == 0, like the rest of your code, will probably run in constant time.
However, there's no guarantee that it will, and it's just barely conceivable that there might be some compiler and CPU combination out there where it might not. Of course, that's basically true of any code not written in assembly for a specific CPU model and version.
After all, the C language standard does not actually offer any guarantees about execution times, and it would be perfectly valid for a smart enough compiler to look at your code, go "Hey! It looks like you're trying to compare two byte arrays for equality!" and decide to replace your code with a call to memcmp() or something equivalent. And, as you can see from the results below, modern compilers actually are getting close to that level of cleverness, if they're not quite there yet.
Indeed, I suspect that the real reason why your code isn't getting optimized away like that is because compiler authors have more or less consciously decided not to make their compilers recognize it as a memory comparison, on the assumption that if you're writing your comparison code in such a convoluted way, you're probably doing something unusual and don't want it replaced by memcmp().
You can take a look at how common compilers translate your code into assembly using the Godbolt Compiler Explorer. It turns out that, even on the same CPU architecture, the result depends a lot on the compiler and the optimization level chosen.
(I've included a fairly long discussion of the x86-64 assembly produced by various compilers below, since it's kind of interesting in itself, but it's really only relevant as an illustration of how much the compiler output varies. Feel free to skip it.)
As a fairly typical example, when told to optimize your code for minimum size (-Os), GCC 7.2 generates a rather nice and simple piece of assembly code:
equal(unsigned char*, unsigned char*, int):
xor eax, eax
xor ecx, ecx
.L3:
cmp edx, eax
jle .L2
mov r8b, BYTE PTR [rdi+rax]
xor r8b, BYTE PTR [rsi+rax]
inc rax
or ecx, r8d
jmp .L3
.L2:
test cl, cl
sete al
ret
The last three lines of assembly correspond to your return (c == 0) statement, and indeed that part of the output seems to look pretty much the same (modulo register choices) for all compilers (for the x86-64 platform, that is) and optimization levels. I would indeed expect it to run in constant time, assuming that the timing of test isn't data-dependent, which it really shouldn't be.
(At worst, if it wasn't constant-time on some CPU architecture, I'd expect the most likely timing differences to most likely be between the cases c == 0 and c != 0, which the code reveals through its return value anyway. Of course, if you don't want that return value leaked through timings, e.g. because it's used as just one input to a more complex condition whose individual inputs should be secret, then even that leak could be too much.)
Of course, what typical timing attacks target isn't the final byte test, but the comparison loop itself. And that's where the compiler differences really come in. For example, at optimization level 1 or level 2, GCC's output looks quite similar to the code above. (The main difference is that the loop condition test is moved to the end and an explicit test for len <= 0 is inserted at the top of the code instead.) At optimization level 3, however, something weird happens and the output turns into this unrolled monstrosity:
equal(unsigned char*, unsigned char*, int):
test edx, edx
jle .L9
mov rcx, rdi
lea r8d, [rdx-1]
mov r9d, 17
neg rcx
push rbp
push rbx
and ecx, 15
lea eax, [rcx+15]
cmp eax, 17
cmovb eax, r9d
cmp r8d, eax
jb .L10
test ecx, ecx
je .L11
movzx r8d, BYTE PTR [rdi]
movzx r10d, BYTE PTR [rsi]
xor r10d, r8d
cmp ecx, 1
je .L12
movzx eax, BYTE PTR [rdi+1]
xor al, BYTE PTR [rsi+1]
or r10d, eax
cmp ecx, 2
je .L13
movzx eax, BYTE PTR [rdi+2]
xor al, BYTE PTR [rsi+2]
or r10d, eax
cmp ecx, 3
je .L14
movzx eax, BYTE PTR [rdi+3]
xor al, BYTE PTR [rsi+3]
or r10d, eax
cmp ecx, 4
je .L15
movzx eax, BYTE PTR [rdi+4]
xor al, BYTE PTR [rsi+4]
or r10d, eax
cmp ecx, 5
je .L16
movzx eax, BYTE PTR [rdi+5]
xor al, BYTE PTR [rsi+5]
or r10d, eax
cmp ecx, 6
je .L17
movzx eax, BYTE PTR [rdi+6]
xor al, BYTE PTR [rsi+6]
or r10d, eax
cmp ecx, 7
je .L18
movzx eax, BYTE PTR [rdi+7]
xor al, BYTE PTR [rsi+7]
or r10d, eax
cmp ecx, 8
je .L19
movzx eax, BYTE PTR [rdi+8]
xor al, BYTE PTR [rsi+8]
or r10d, eax
cmp ecx, 9
je .L20
movzx eax, BYTE PTR [rdi+9]
xor al, BYTE PTR [rsi+9]
or r10d, eax
cmp ecx, 10
je .L21
movzx eax, BYTE PTR [rdi+10]
xor al, BYTE PTR [rsi+10]
or r10d, eax
cmp ecx, 11
je .L22
movzx eax, BYTE PTR [rdi+11]
xor al, BYTE PTR [rsi+11]
or r10d, eax
cmp ecx, 12
je .L23
movzx eax, BYTE PTR [rdi+12]
xor al, BYTE PTR [rsi+12]
or r10d, eax
cmp ecx, 13
je .L24
movzx eax, BYTE PTR [rdi+13]
xor al, BYTE PTR [rsi+13]
or r10d, eax
cmp ecx, 14
je .L25
movzx eax, BYTE PTR [rsi+14]
xor al, BYTE PTR [rdi+14]
or r10d, eax
mov eax, 15
.L4:
mov ebp, edx
pxor xmm1, xmm1
sub ebp, ecx
mov r8d, ecx
xor r9d, r9d
mov ebx, ebp
lea r11, [rdi+r8]
xor ecx, ecx
shr ebx, 4
.L6:
movdqu xmm0, XMMWORD PTR [r8+rcx]
pxor xmm0, XMMWORD PTR [r11+rcx]
cmp ebx, r9d
por xmm1, xmm0
ja .L6
movdqa xmm0, xmm1
mov ecx, ebp
and ecx, -16
psrldq xmm0, 8
por xmm1, xmm0
movdqa xmm0, xmm1
psrldq xmm0, 4
por xmm1, xmm0
movdqa xmm0, xmm1
psrldq xmm0, 2
por xmm1, xmm0
movdqa xmm0, xmm1
psrldq xmm0, 1
por xmm1, xmm0
movaps XMMWORD PTR [rsp-40], xmm1
movzx r8d, BYTE PTR [rsp-40]
or r8d, r10d
cmp ebp, ecx
je .L7
.L3:
cdqe
.L8:
movzx ecx, BYTE PTR [rdi+rax]
xor cl, BYTE PTR [rsi+rax]
or r8d, ecx
cmp edx, eax
jg .L8
.L7:
test r8b, r8b
sete al
pop rbx
pop rbp
ret
.L14:
mov eax, 3
jmp .L4
.L9:
mov eax, 1
ret
.L12:
mov eax, 1
jmp .L4
.L13:
mov eax, 2
jmp .L4
.L10:
xor eax, eax
xor r8d, r8d
jmp .L3
.L15:
mov eax, 4
jmp .L4
.L16:
mov eax, 5
jmp .L4
.L19:
mov eax, 8
jmp .L4
.L17:
mov eax, 6
jmp .L4
.L11:
xor eax, eax
xor r10d, r10d
jmp .L4
.L18:
mov eax, 7
jmp .L4
.L20:
mov eax, 9
jmp .L4
.L21:
mov eax, 10
jmp .L4
.L22:
mov eax, 11
jmp .L4
.L23:
mov eax, 12
jmp .L4
.L24:
mov eax, 13
jmp .L4
.L25:
mov eax, 14
jmp .L4
It looks like GCC has vectorized the XOR/OR loop to process 16 bytes at a time using SSE instructions, with a bunch of additional special case code to handle the possibility that len might not be a multiple of 16. At a glance, I suspect this may still run in constant time (for any given len value), but I wouldn't be more than pocket change on it.
FWIW, replacing the len parameter with the constant 16 gets rid of all the special-case handling and yields this very elegant assembly code with no jumps or loops whatsoever. Literally translated back into C / C++, what it basically does is:
bool equal(unsigned char *a, unsigned char *b) {
uint128_t c = *(uint128_t *)a ^ *(uint128_t *)b;
c |= (c >> 64);
c |= (c >> 32);
c |= (c >> 16);
c |= (c >> 8);
return (unsigned char)c == 0;
}
As for other compilers, Clang's output looks pretty similar to GCC's at low optimization levels, but at -O2 and above, things get even weirder than with GCC. Again, that's clearly vectorized with SSE, but (if I'm parsing the code correctly) it's only reading the input 4 bytes at a time and doing some really weird within-register byte shuffling.
The core loop of ICC's output at -O3 looks similarly vectorized as GCC's, although the rest of the code is quite different. FWIW, if you fix the length at 16 bytes, ICC doesn't vectorize the code at all, although it does unroll it. Also, ICC at -O2 and above is the only x86-64 compiler I found that doesn't use the test + sete combo for the final c == 0 test; it uses test + cmove instead:
mov edx, 1
test al, al
mov eax, 0
cmove eax, edx
Anyway, in practice, the only way to be really confident that your code runs in constant time is to (first examine the assembly output for any telltale signs of potential timing issues, and then) test it.
For example, here's a quick and dirty online benchmark showing that (at least for the specific platform, compiler, options and parameters used) your code does seem to be constant-time:
The first two tests (AllDifferent and AllSame) are general baseline timing tests, the next two (FirstHalfSame and LastHalfSame) exercise the comparison loop to see if its execution time depends on the length of the matching prefix / suffix, and the last two (CaseFlipped and LSBFlipped) exercise the final c == 0 test by comparing two strings which differ only by having a specific bit in each byte flipped.
Of course, in practice, you should carry out your benchmark using the specific compiler and hardware you're targeting (or as wide a selection of both as possible, if you don't have a specific target) and using realistic inputs (e.g. not just constant strings) to reduce the chance of compiler optimizations messing up your benchmarks.
(For example, while making the quick benchmark above, I noticed that assigning the output of the comparison to a global variable was necessary to stop the compiler from optimizing out all the comparisons entirely(!) and making them all run at same speed as the "Noop" baseline loop. Also, in the AllSame test, I found that Clang is in fact smart enough to optimize out the comparison if both inputs point to the same address, so I had to use two separate strings with the same content to properly test it. Benchmarks can be tricky like that.)
• There are some CPUs on which I have observed GCC generate branches for c == 0. I don't remember which ones offhand, but maybe some flavor of PowerPC since I think that may not reliably have an integer conditional move instruction. – Squeamish Ossifrage Sep 10 '17 at 14:38
I looked over at how the Golang crypto library did this. The subtle package implements ConstantTimeCompare():
// ConstantTimeCompare returns 1 if and only if the two slices, x
// and y, have equal contents. The time taken is a function of the length of
// the slices and is independent of the contents.
func ConstantTimeCompare(x, y []byte) int {
if len(x) != len(y) {
return 0
}
var v byte
for i := 0; i < len(x); i++ {
v |= x[i] ^ y[i]
}
return ConstantTimeByteEq(v, 0)
}
It looks to be largely similar to your own implementation, except the last comparison is different. It uses the ConstantTimeByteEq() function:
// ConstantTimeByteEq returns 1 if x == y and 0 otherwise.
func ConstantTimeByteEq(x, y uint8) int {
z := ^(x ^ y)
z &= z >> 4
z &= z >> 2
z &= z >> 1
return int(z)
}
The reason for this is extreme caution. There are cases when, if both values are zero, the comparison can take less time. There's also the consideration of unknown compiler optimisations.
One StackOverflow answer mentions that the end comparison is for preventing branch misdirections.
Put simply, it is to allow the CPU only a single, constant-time, execution path for the single-byte comparison.
• I have also looked at this code. But these functions return an int. If you look at hmac.go you will see that the Equal function call the constant compare and test the result with == 1. This is because Equal has to return a bool. Someone, somewhere will have to compare the result of the comparison with 1 with a resulting code branch. When comparing MAC with the outcome to reject the message, does it really matter ? – chmike Sep 10 '17 at 12:58
• I assume it is because the function itself must execute in constant-time, where the time taken is a function of the length of the inputs. Whatever is done after the function returns is out of scope. If you think about it, even if (outside the function) the == operator does leak timing information, the only information leaked is the result of the comparison itself, which is of no consequence. – Awn Sep 10 '17 at 13:08
• P.S. There's a simpler way to map nonzero to 1 and zero to 0, if the input $x$ is an integer in {0, 1, 2, ..., 255}: $1 \mathbin{\&} ((x - 1) \gg 8)$. – Squeamish Ossifrage Sep 10 '17 at 14:24
• P.P.S. Figuring out whether I reversed the sense of something in the previous comment is left as an exercise for the attentive reader. – Squeamish Ossifrage Sep 10 '17 at 14:52
• @SqueamishOssifrage: That's assuming the variable $x$ is wider than 8 bits, I presume. On an 8-bit CPU, I suspect that expression might just conceivably fail to be constant-time, if the compiler decides to do a conditional branch based on the borrow bit. – Ilmari Karonen Sep 10 '17 at 14:59
|
2019-12-10 16:34:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2080739140510559, "perplexity": 8042.833137056881}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540528457.66/warc/CC-MAIN-20191210152154-20191210180154-00133.warc.gz"}
|
https://stacks.math.columbia.edu/tag/08XD
|
Theorem 35.4.25. If $M \otimes _ R S$ has one of the following properties as an $S$-module
1. finitely generated;
2. finitely presented;
3. flat;
4. faithfully flat;
5. finite projective;
then so does $M$ as an $R$-module (and conversely).
Proof. To prove (a), choose a finite set $\{ n_ i\}$ of generators of $M \otimes _ R S$ in $\text{Mod}_ S$. Write each $n_ i$ as $\sum _ j m_{ij} \otimes s_{ij}$ with $m_{ij} \in M$ and $s_{ij} \in S$. Let $F$ be the finite free $R$-module with basis $e_{ij}$ and let $F \to M$ be the $R$-module map sending $e_{ij}$ to $m_{ij}$. Then $F \otimes _ R S\to M \otimes _ R S$ is surjective, so $\mathop{\mathrm{Coker}}(F \to M) \otimes _ R S$ is zero and hence $\mathop{\mathrm{Coker}}(F \to M)$ is zero. This proves (a).
To see (b) assume $M \otimes _ R S$ is finitely presented. Then $M$ is finitely generated by (a). Choose a surjection $R^{\oplus n} \to M$ with kernel $K$. Then $K \otimes _ R S \to S^{\oplus r} \to M \otimes _ R S \to 0$ is exact. By Algebra, Lemma 10.5.3 the kernel of $S^{\oplus r} \to M \otimes _ R S$ is a finite $S$-module. Thus we can find finitely many elements $k_1, \ldots , k_ t \in K$ such that the images of $k_ i \otimes 1$ in $S^{\oplus r}$ generate the kernel of $S^{\oplus r} \to M \otimes _ R S$. Let $K' \subset K$ be the submodule generated by $k_1, \ldots , k_ t$. Then $M' = R^{\oplus r}/K'$ is a finitely presented $R$-module with a morphism $M' \to M$ such that $M' \otimes _ R S \to M \otimes _ R S$ is an isomorphism. Thus $M' \cong M$ as desired.
To prove (c), let $0 \to M' \to M'' \to M \to 0$ be a short exact sequence in $\text{Mod}_ R$. Since $\bullet \otimes _ R S$ is a right exact functor, $M'' \otimes _ R S \to M \otimes _ R S$ is surjective. So by Lemma 35.4.10 the map $C(M \otimes _ R S) \to C(M'' \otimes _ R S)$ is injective. If $M \otimes _ R S$ is flat, then Lemma 35.4.24 shows $C(M \otimes _ R S)$ is an injective object of $\text{Mod}_ S$, so the injection $C(M \otimes _ R S) \to C(M'' \otimes _ R S)$ is split in $\text{Mod}_ S$ and hence also in $\text{Mod}_ R$. Since $C(M \otimes _ R S) \to C(M)$ is a split surjection by Lemma 35.4.12, it follows that $C(M) \to C(M'')$ is a split injection in $\text{Mod}_ R$. That is, the sequence
$0 \to C(M) \to C(M'') \to C(M') \to 0$
is split exact. For $N \in \text{Mod}_ R$, by (35.4.11.1) we see that
$0 \to C(M \otimes _ R N) \to C(M'' \otimes _ R N) \to C(M' \otimes _ R N) \to 0$
is split exact. By Lemma 35.4.10,
$0 \to M' \otimes _ R N \to M'' \otimes _ R N \to M \otimes _ R N \to 0$
is exact. This implies $M$ is flat over $R$. Namely, taking $M'$ a free module surjecting onto $M$ we conclude that $\text{Tor}_1^ R(M, N) = 0$ for all modules $N$ and we can use Algebra, Lemma 10.75.8. This proves (c).
To deduce (d) from (c), note that if $N \in \text{Mod}_ R$ and $M \otimes _ R N$ is zero, then $M \otimes _ R S \otimes _ S (N \otimes _ R S) \cong (M \otimes _ R N) \otimes _ R S$ is zero, so $N \otimes _ R S$ is zero and hence $N$ is zero.
To deduce (e) at this point, it suffices to recall that $M$ is finitely generated and projective if and only if it is finitely presented and flat. See Algebra, Lemma 10.78.2. $\square$
## Comments (0)
There are also:
• 4 comment(s) on Section 35.4: Descent for universally injective morphisms
## Post a comment
Your email address will not be published. Required fields are marked.
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
Unfortunately JavaScript is disabled in your browser, so the comment preview function will not work.
All contributions are licensed under the GNU Free Documentation License.
In order to prevent bots from posting comments, we would like you to prove that you are human. You can do this by filling in the name of the current tag in the following input field. As a reminder, this is tag 08XD. Beware of the difference between the letter 'O' and the digit '0'.
|
2021-03-04 13:18:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9623949527740479, "perplexity": 192.16774124836866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369054.89/warc/CC-MAIN-20210304113205-20210304143205-00315.warc.gz"}
|
http://calcuttagutta.com/articles/1879/
|
Moldejazz 2018
Camilla, 1 year, 8 months
Romjulen 2018
Camilla, 2 years, 3 months
Liveblogg nyttårsaften 2017
Tor, 3 years, 3 months
Jogging og blogging
Are, 4 years, 3 months
Liveblogg nyttårsaften 2016
Are, 4 years, 3 months
Are, 4 years, 5 months
Moldejazz 2016
Camilla, 4 years, 8 months
Kort hår
Tor, 3 months, 1 week
Melody Gardot
Camilla, 1 year, 9 months
Den årlige påske-kommentaren
Tor, 2 years
50 book challenge
Camilla, 3 months, 1 week
Ten years ago
Columnists
Controls
## Jazz Festival day 1: We are all Shibusa Shirazu Orchestra!
The 50th Molde International Jazz Festival is officially up and running ... in bright read underpants!
It all started grandiosely, but soberly and official-like, in the beautiful surroundings of the outdoor museum (brilliant idea, that; I hope they repeat it next year) with speeches from the usual suspects:
Mayor Jan Petter Hammerø's speech was pleasantly and unusually short and not packed full of figures for a change. He may have been touched by the moment. He referred to the history of the festival, plugged the new book on the festival and the new Theatre and Jazz House, and shortly after left the stage to the infinitely more deserving Jan Ole Otnæs (the festival director).
His credentials are indisputable, having hitch hiked here from the dark depths of Northern Norway at the tender age of 15, and having spent the last 10 years leading the festival to ever greater fame and fortune. Strangely, he was the one to mention money, but that is allowable when he does so in order to inform us that the very first festival, on 3 August 1961 had at their disposal the astronomical sum of 15.000 NOK. He also spent some time demonstrating the centrality of this festival in the development of key international musicians, including (but not limited to: I started taking down the names half-way through the list) Nils Petter Molvær, Daniel Herskedal, Hayden Powell, Ola Kvernberg, Ytre Suløen and Jan Garbarek.
Foto: Tor
The third speaker, who officially declared the festival opened, was the Norwegian Minister for Culture (though technically not a minister, per se), Anniken Huitfeldt, who made sure to mention the 800 volunteers who make the festival run smoothly, how Moldejazz as a festival is the giant upon whose shoulders other Norwegian festivals have stood, the groundbreaking effect of the festival on the perception of jazz in Norway, the changing criteria for government support for this type of culture, and the importance for the development of a whole generation in having the first black people they ever see in real life be the stars in jazz concerts ... the usual.
In addition I think all three speakers managed to mention that after Kikkan'' and Pingen'' had dreamt up the idea of a jazz festival a whole day's journey from Oslo (one wet night?), and Kikkan'' went to the Jazz Society in Oslo to propose it, he was almost laughed out of the room. There was a certain sense of Who's laughing now, eh?'' and nobody objects to that.
---
But the real story begins after the speeches (as is always the case, I think), when everyting went nuts:
SHIBUSA SHIRAZU ORCHESTRA
Jørgen told us it would be unlike anything else we'll see this year (and probably any other year as well). And I dare say Jørgen was quite right. It was insane. In a good way. I find I lack the words to try to explain it properly. The festival programme describes it as,
...et orkester med glitrende musikere, hvitsminkede buthodansere [sic], tyggegummi-fargede discodansere, og polkaprikkede banandansere, malere, videokunstnere og en ballongmaker
(...an orchestra of brilliant musicians, white painted butoh-dancers, bubblegum-coloured disco-dancers, and polkadotted banana-dancers, painters, video artist and a balloon-maker)
We did not see any balloon artists, but, considering how hidden the painter was, he may have been lurking under the stage. Or, possibly walking randomly among the audience. It feels like the sort of surreal thing they might do: the banana dancer, for example, baffled all members of the public within range of this intrepid Calcuttagutta reporter.
Are: Jeg håper du har tenkt å tolke banandama. (I hope you will interpret the banana lady)
Ingvild: Det var utrolig kult med han ropemannen ... men jeg lurer på formålet med hun banandama. (The shouting guy was very cool ... but I wonder what the point of the banana lady was)
Johannes: [indistinct mutterings concerning the everpresence of the lady when the cool butho dancers were offstage occasionally]
Due to popular demand, I therefore present my interpretation of the Banana Dancer. I half-promised Johannes something shocking and Freudian, but the more I think about it, the more I suspect the lady is simply there to signify the lack of signification. She is a token of absurdity. The point of the banana dancer is that she has no point.
Foto: Tor
My Japanese is very, very limited, and I must therefore trust wikipedia when it says that Shibusa Shirazu translates as we do not understand/are unaffected by cool'', but the name fits the band to a t. It is a total rejection of that hipster indie cool which suggests the artist really couldn't care less -- the man in the red pants and the wonderful jacket (dubbed ropemannen'' by Ingvild) cared whether we joined in or not. As he said,
This is not a concert! We want to party with you!
...
Music is fun! But many people together, more fun!
At first I thought (well, hoped'' may be a better word -- I dread audience participation) he was referring to the impressive number of musicians (and other artists) on stage, but when it came time for the Japanese Fishermen's Groove, there was no escaping the fact that the man wanted 5000 people in his party. Hence the title. We are now all Shibusa Shirazu Orchestra. Even Tor, who seemed quite startled at the thought.
The man even made us do it all again when the first attempt did not satisfy his expectations. But I am not complaining. It was fun.
It is becoming increasingly obvious that I am failing miserably at communicating the strange and impressive craziness that is Shibusa Shirazu Orchestra. It felt a little like what I have always imagined certain types of hallucinogens must work: all impressions were strong impressions, and nothing was left to stand alone -- there were constantly competing sensory impressions and my eyes and ears never knew exactly what to land on.
Foto: Tor
I admit the butoh dancers managed to hold my attention fairly steadily. Their body control had me gaping. I have since had reason to be annoyed at Dagbladet (which means the Jazz Festival has really started), which captions an image of the female dancer with
Spesielt en danser i det japanske Shibuza [sic] Shirazu Orchestra trollbant publikum med sitt yndefulle, hvitmalte kroppsspråk.
(Especially one dancer in the Japanese Shibusa Shirazu Orchestra bewitched the audience with her graceful, white painted body language.)
That was not the point. I am not entirely sure what the point is, but I am sure that isn't it. Yes there were boobs. Yes, she was graceful (although "yndefull" has different connotations -- you would never use it about a man). But they were also damned impressive, borderline alien, definitely animal-like at times, and it was not the woman alone (or, indeed, first and foremost) who caught our attention and held it. (I am giving Mosnes the benefit of the doubt, hoping he is not responsible for captioning the photos.)
Foto: Tor
This guy had Johannes grumbling about his absence when he was off-stage (even while the woman was doing her most impressive dancing, clothed). He was bewitching with slow, measured movements of which he was in complete control. Here is another picture for good measure:
They also provided us with a mystery that needs clearing up (see the picture to the right). Tor thinks that is supposed to be a fish (and I think Are agreed?), but I couldn't shake the feeling of having seen something similar in Princess Mononoke.
Having now checked up on this, I find that I was thinking of kodama, a type of forest spirits (I think), and that I may be wrong (it happens). And maybe Tor was right after all and it was a fish. Any other suggestions?
There is also the mystery of the identity of the piece of music that they incorporated on regular intervals. I have checked my two original suspcts, Zatoichi and Seven Samurai, and I could not find the original. But since we all recognised it from somewhere, it should be possible for us to identify it if we put our minds to it. Come on, people. It is driving me nuts.
Foto: Tor
There is more to say. I haven't scraped the surface of the creative anarchy of it all. There was a theremin with funky sounds, ladies in sexy dresses and green, yellow, orange or blue hair, depending on the occasion, oh, and musicians. I haven't even described the music. Because I can't. It was a strange mix of free jazz, poppishness and something that felt very Japanese in a mix of the traditional and the new crazy-tv-ness of it, mixing all my contradictory ideas of the country into one happy snappy crazy anti-cool insane carnival of colours, sounds and movement, keeping my mind occupied throughout yesterday and into the early hours when it made my dreams weird.
Those who left early, due to boobs or unfamiliar music, missed out on an experience they are not likely to have again. And those who stood in front of people sitting... I lack the words for the hell that awaits them once their souls are weighed in balance.
Jørgen, 21.07.10 22:49
Thank you, Camilla. This was really a fun read, and for a while it made me forget that I missed the concert. What you wrote about the concert, and more importantly how you did it, left me with a nice amount of 物の哀れ. I have been waiting for this concert to show up in the program since Finn Frode first told me about Shibusashirazu back in 2005 and showed me some of their performances. And now I have this bittersweet knowledge that I probably would have liked this concert quite a lot, and remembered it for quite a few years, but in the end, I didn't get to see them play and dance. It makes me sad in a nostalgic way. And I kind of like that, actually. Weird.
On another note, I think that ‹‹never be cool›› is a better translation of shibusashirazu, since it also implies a movement away from the music and mentality of Miles Davis and jazz musicians of his ilk.
Are, 22.07.10 00:17
Thank you for a great review!
It was a weird concert, but really, really spectacular and enjoyable. I like the idea of starting (for me personally) the festival with Shibusa and closing it with Break of Day.
I did not partake in the figurine-is-it-a-fish-discussion. Ingvild thinks it looks fishy, though.
Da - da - da -daaaaaaaaaaaaa - dadada - da - da - daaaaaaaaaaaaaaa.
Da - da - da - daaaaaaaa. Da-da-da - da - da - daaaaaaaa.
Are, 22.07.10 00:17
Also: Thanks for great photos, Tor.
Category
Tags
Views
4441
|
2021-04-15 02:48:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.416176974773407, "perplexity": 4585.167957194428}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038082988.39/warc/CC-MAIN-20210415005811-20210415035811-00047.warc.gz"}
|
https://techwhiff.com/learn/1-a-your-favorite-fm-radio-station-wxyz/22567
|
# 1. a. Your favorite FM radio station, WXYZ, broadcasts at a frequency of 101.1 MHz. What...
###### Question:
1. a. Your favorite FM radio station, WXYZ, broadcasts at a frequency of 101.1 MHz. What is the wavelength of this radiation? (C = 3 x108 m/s, 1MHz = 10^-') b. Find the wavelength for an electron moving at the speed of 5.0x10°m/s. (Mass of an electron is 9.1*10*3kg, h = 6.63 x 10-34 J/s)
#### Similar Solved Questions
##### During its manufacture, plate glass at 600°C is cooled by passing air over its surface such...
During its manufacture, plate glass at 600°C is cooled by passing air over its surface such that the convection heat transfer coefficient is h. To prevent cracking, it is known that the temperature gradient must not exceed 15 °C/mm at any point in the glass during the cooling process. The th...
##### How do you graph using slope and intercept of 4x-2y=48?
How do you graph using slope and intercept of 4x-2y=48?...
##### Find the roots of the given equation by completing the square: ax^2+bx+c=0
Find the roots of the given equation by completing the square: ax^2+bx+c=0...
##### LARCALC11 1.3.013. MY NOTES ASK YOUR TEACHER PRACTICE ANOTHER 3. (-/1 Points) DETAILS Find the limit....
LARCALC11 1.3.013. MY NOTES ASK YOUR TEACHER PRACTICE ANOTHER 3. (-/1 Points) DETAILS Find the limit. lim Vr+1 Need Help? Read It Watch it Talk to a Tutor...
##### Show that the given set is the base of V. 3) {1+x, x+x?, x+x", x'}; V...
Show that the given set is the base of V. 3) {1+x, x+x?, x+x", x'}; V = vector space of a class polynomial greater than or equal to 3...
##### Required information Problem 5-4B Record transactions related to uncollectible accounts (L05-4,5-5) The following information applies to...
Required information Problem 5-4B Record transactions related to uncollectible accounts (L05-4,5-5) The following information applies to the questions displayed below) Facial Cosmetics provides plastic surgery primarily to hide the appearance of unwanted scars and other blemishes. During 2021, the c...
##### 4. Consider the potential step shown below with a beam of particles incident from the left....
4. Consider the potential step shown below with a beam of particles incident from the left. V(x) a) Calculate the reflection coefficient for the case where the energy of the incident particies is less than the height of tihe step b) Calculate the reflection cocf ficient for the case where the encrgy...
##### Consider a solution of 30 grams of naphthalene in 1500 kg of benzene. Pure Benzene Mol....
Consider a solution of 30 grams of naphthalene in 1500 kg of benzene. Pure Benzene Mol. Wt. 78.1 g/mol and o = 0.87 g/cm Th* = 353.2K AHvap = 30.8 kJ/mol Ti = 278.6K AH fus = 10.6 kJ/mol a) Calculate the boiling point of the solution b) Calculate the freezing point of the solution c) Calculate the o...
##### Please define these terms in a psychoanalytic concept (in the most simple way possible) and provide...
Please define these terms in a psychoanalytic concept (in the most simple way possible) and provide an example. 1. Projection- 2. Sexuality- ...
##### Concentration of HCL. Im suppose to use the number from the pervious problem to solve this...
Concentration of HCL. Im suppose to use the number from the pervious problem to solve this one but im not sure how. Calculations for Molarity of the Molarity. of Molarity of the NaOH Solution for each Run and the .35489 (204.9.fol.02 ) 210.079 wil net wa)(0242loose mobile 3533 g 204.22g/mol.02142)...
##### CH 1 KB. Total fixed costs Varies per unit as the output changes O Remain fixed...
CH 1 KB. Total fixed costs Varies per unit as the output changes O Remain fixed per unit as the output changes Varies in total as the output changes Varies in total as the per unit changes...
##### No Spac... Heading 1 Heading 2 Title Subtitle Subtle Em Paragraph Emo Styles Explain epidemiology and...
No Spac... Heading 1 Heading 2 Title Subtitle Subtle Em Paragraph Emo Styles Explain epidemiology and social epidemiology. Why are they important to understanding the way health affects different populations? Pick a disease and Pick a population (i.e. school-age children, women of child bearing age,...
##### My Notes 1 - / 10 points osCol PhysAP2016 4.3.P.002. An 84.0 kg sprinter starts a...
My Notes 1 - / 10 points osCol PhysAP2016 4.3.P.002. An 84.0 kg sprinter starts a race with an acceleration of 1.44 m/s2. If the sprinter accelerates at that rate for 41 m, and then maintains that velocity for the remainder of the 100 m dash, what will be his time (in s) for the race? Additional Mat...
Please answer the following, remember to show all steps: These are all apart of one question 1A) The Ballston Corporation granted Peter 100 incentive stock options earlier this year with a $3 per share exercise price. If Peter elects to exercise the options when the stock is worth$5 a share, how m...
|
2022-11-28 15:11:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5676737427711487, "perplexity": 3686.825279311433}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710533.96/warc/CC-MAIN-20221128135348-20221128165348-00342.warc.gz"}
|
http://www.openproblemgarden.org/category/projective_plane
|
# projective plane
## The Double Cap Conjecture ★★
Author(s): Kalai
\begin{conjecture} The largest measure of a Lebesgue measurable subset of the unit sphere of $\mathbb{R}^n$ containing no pair of orthogonal vectors is attained by two open caps of geodesic radius $\pi/4$ around the north and south poles. \end{conjecture}
## Partitioning the Projective Plane ★★
Author(s): Noel
Throughout this post, by \emph{projective plane} we mean the set of all lines through the origin in $\mathbb{R}^3$.
\begin{definition} Say that a subset $S$ of the projective plane is \emph{octahedral} if all lines in $S$ pass through the closure of two opposite faces of a regular octahedron centered at the origin. \end{definition}
\begin{definition} Say that a subset $S$ of the projective plane is \emph{weakly octahedral} if every set $S'\subseteq S$ such that $|S'|=3$ is octahedral. \end{definition}
\begin{conjecture} Suppose that the projective plane can be partitioned into four sets, say $S_1,S_2,S_3$ and $S_4$ such that each set $S_i$ is weakly octahedral. Then each $S_i$ is octahedral. \end{conjecture}
Keywords: Partitioning; projective plane
|
2018-08-18 14:43:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.829818606376648, "perplexity": 269.0094340697167}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213689.68/warc/CC-MAIN-20180818134554-20180818154554-00134.warc.gz"}
|
https://www.bartleby.com/solution-answer/chapter-4-problem-8p-intermediate-accounting-reporting-and-analysis-3rd-edition/9781337788281/analyzing-starbuckss-balance-sheet-disclosures-review-the-financial-statements-and-related-notes-of/72ef4c10-8c54-11e9-8385-02ee952b546e
|
Chapter 4, Problem 8P
### Intermediate Accounting: Reporting...
3rd Edition
James M. Wahlen + 2 others
ISBN: 9781337788281
Chapter
Section
### Intermediate Accounting: Reporting...
3rd Edition
James M. Wahlen + 2 others
ISBN: 9781337788281
Textbook Problem
1 views
# Analyzing Starbucks’s Balance Sheet DisclosuresReview the financial statements and related notes of Starbucks in Appendix A.Required:Answer the following questions pertaining to Starbucks’s balance sheet as of October 1, 2017, and related information. (Note: You do not need to make any calculations. All answers may be found in the financial report.) 1. What was the amount of the current assets and current liabilities? 2. What was the single largest current asset and current liability? 3. What was the amount in the allowance for doubtful accounts? 4. What is the par value of the company’s common stock? How many shares were issued and outstanding? 5. What was the total amount of inventory? What were the principal categories of inventory? 6. What costing method was used for inventories? 7. What was the total property, plant, and equipment before and after accumulated depreciation? 8. What was the accumulated depreciation? What method does the company use to depreciate its property, plant, and equipment? 9. What was the long-term debt? When is the debt due? 10. What was the retained earnings balance? What caused retained earnings to change in 2017? 11. What was the accumulated other comprehensive income/(loss) balance? 12. What was the noncontrolling interest balance?
1.
To determine
Find the amount of current assets and current liabilities.
Explanation
The amount of current assets and current liabilities of Corporation S as of October 1, 2017 are $5,283.4 million and$4,220...
2.
To determine
Find the amount of single largest current asset and current liability.
3.
To determine
Find the amount in the allowance for doubtful accounts.
4.
To determine
Find the par value of the company’s common stock and the numbers of shares were issued and outstanding.
5.
To determine
Find the total amount of inventory and find the principal categories of inventory
6.
To determine
Find the costing method used for inventories.
7.
To determine
Find the total property, plant, and equipment before and after accumulated depreciation.
8.
To determine
Find the accumulated depreciation and find the method used by the company to depreciate its property, plant, and equipment.
9.
To determine
Find the long-term debt and find the due period.
10.
To determine
Find the amount of retained earnings balance and find the reason for changing the retained earnings in 2017.
11.
To determine
Find the accumulated other comprehensive income/loss balance.
12.
To determine
Find the non-controllable interest balance.
### Still sussing out bartleby?
Check out a sample textbook solution.
See a sample solution
#### The Solution to Your Study Problems
Bartleby provides explanations to thousands of textbook problems written by our experts, many with advanced degrees!
Get Started
#### Discuss the dirrerence between budgets and standard costs.
Managerial Accounting: The Cornerstone of Business Decision-Making
#### Why is productivity important?
Principles of Microeconomics (MindTap Course List)
#### What is a compound entry?
College Accounting (Book Only): A Career Approach
#### It is a fact that the federal government (1) encouraged the development of the savings and loan industry, (2) v...
Fundamentals of Financial Management, Concise Edition (with Thomson ONE - Business School Edition, 1 term (6 months) Printed Access Card) (MindTap Course List)
|
2019-12-08 08:16:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1880597025156021, "perplexity": 8191.613245634698}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540507109.28/warc/CC-MAIN-20191208072107-20191208100107-00325.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-1st-edition/chapter-3-linear-systems-and-matrices-3-7-evaluate-determinants-and-apply-cramer-s-rule-3-7-exercises-skill-practice-page-207/5
|
## Algebra 2 (1st Edition)
We evaluate the determinant of the matrix using Cramer's rule, which is listed on page 203. Doing this, we find: $$\det \begin{pmatrix}-4&3\\ 1&-7\end{pmatrix} \\ \left(-4\right)\left(-7\right)-3\cdot \:1 \\ 25$$
|
2023-02-02 11:33:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7707396745681763, "perplexity": 456.8679044764645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500017.27/warc/CC-MAIN-20230202101933-20230202131933-00396.warc.gz"}
|
https://www.wyzant.com/resources/answers/381743/trigonometry_related_doubt
|
Prashant K.
# Trigonometry related doubt
the number of all possible ordered pairs (x,y),x,y belongs to R satisfying the system of equations x+y=2pi/3,cosx +cosy=3/2
|
2020-09-21 01:20:41
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8030219674110413, "perplexity": 7504.872574734431}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198868.29/warc/CC-MAIN-20200920223634-20200921013634-00211.warc.gz"}
|
https://root-forum.cern.ch/t/change-of-text-font-size-etc/13587
|
# Change of Text font size etc
Hi,
In a histogram, I want to change the title Text Font Size and Color without using the menu, i.e implementing these changes directly in my macro. Looking at the User’s guide I tried the following:
`````` htac1->Draw() ;
TPaveText *pvttac1= (TPaveText*)htac1->GetListOfFunctions()->FindObject("title") ;
pvttac1->SetTextColor(2) ;
htac1->Draw() ;``````
Something is wrong in the third line because I’ve got the message:
So what are the instructions allowing to change the text font, size or color from a TPaveText ?
Thanks a lot,
`` Thomas``
do
after:
htac1->Draw() ;
I have the same error message with
`````` htac1->Draw() ;
TPaveText *pvttac1= (TPaveText*)htac1->GetListOfFunctions()->FindObject("title") ;
pvttac1->SetTextColor(2) ;
htac1->Draw() ;
``````
or
``````
htac1->Draw() ;
TPaveText *pvttac1= (TPaveText*)htac1->GetListOfFunctions()->FindObject("title") ;
htac1->Draw() ;``````
` pvttac1->SetTextColor(2) ;` is not correct ? (TPaveText inherits from AttText , right ? So I suppose I can use functions defined in the AttText class …)
pvttac1 is not defined … this pointer is null
better do:
root [0] gStyle->SetTitleTextColor(2)
root [1] hpx->Draw()
Thanks ! I have a look in the TStyle function list.
At the beginning of my macro I defined the details for the style:
``````gStyle->SetPalette(1) ;
gStyle->SetTextFont(22) ;
gStyle->SetTitleFont(22,"xyz") ;
gStyle->SetTitleFont(22,"a") ;
gStyle->SetLabelFont(22,"xyz") ;``````
And it works !
|
2022-06-25 16:14:26
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8514747619628906, "perplexity": 9063.467276400157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036077.8/warc/CC-MAIN-20220625160220-20220625190220-00735.warc.gz"}
|
https://proxies-free.com/what-fails-in-constructing-a-homotopy-category-out-of-candidate-triangles-in-a-triangulated-category/
|
# What fails in constructing a homotopy category out of candidate triangles in a triangulated category?
Following Neeman’s article “New axioms for triangulated categories”, for a triangulated category $$mathscr T$$ let $$CT(mathscr T)$$ denote the category of candidate triangles, i.e. diagrams
$$begin{equation}Xoverset fto Yoverset gto Zoverset hto Sigma Xquad (*)end{equation}$$
such that $$gf=0$$, $$hg=0$$ and $$(Sigma f)h=0$$, with morphisms being commutative diagrams between such triangles.
We can define homotopy of maps between candidate triangles to be chain homotopy and there is an automorphism $$tildeSigmacolon CT(mathscr T)to CT(mathscr T)$$ which takes $$(*)$$ to
$$Yoverset{-g}to Zoverset{-h}to Sigma Xoverset{-Sigma f}to Sigma Y.$$
We can define mapping cones as in a usual chain complex category, and a lot of the usual results hold for this category (e.g. homotopic maps have isomorphic mapping cones).
What I fail to see, is why the mapping cone construction along with $$tildeSigma$$ does not give rise to a triangulation of $$CT(mathscr T)$$?
|
2021-04-21 15:04:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9916379451751709, "perplexity": 363.03692315995494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039544239.84/warc/CC-MAIN-20210421130234-20210421160234-00356.warc.gz"}
|
https://web2.0calc.com/questions/help-proving
|
+0
# help proving.....
0
216
1
+853
Show that the product of $$a\sqrt{b}+c\sqrt{d}$$ and $$a\sqrt{b}-c\sqrt{d}$$ is always rational if $$a,b,c$$ and $$d$$ are rational.
May 10, 2018
|
2019-03-27 03:37:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9996122717857361, "perplexity": 2180.6847807757536}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912207618.95/warc/CC-MAIN-20190327020750-20190327042750-00054.warc.gz"}
|
https://en.wikipedia.org/wiki/Astrological_aspects
|
# Astrological aspect
(Redirected from Astrological aspects)
In astrology, an aspect is an angle the planets make to each other in the horoscope, also to the ascendant, midheaven, descendant, lower midheaven, and other points of astrological interest. Aspects are measured by the angular distance in degrees and minutes of ecliptic longitude between two points, as viewed from Earth. According to astrological tradition, they indicate the timing of transitions and developmental changes in the lives of people and affairs relative to the Earth.
As an example, if an astrologer creates a horoscope that shows the apparent positions of the celestial bodies at the time of a person's birth (a natal chart), and the angular distance between Mars and Venus is 92° of arc, the chart is said to have the aspect "Venus square Mars" with an orb of 2° (i.e., it is 2° away from being an exact square; a square being a 90° aspect). The more exact an aspect, the stronger or more dominant it is said to be in shaping character or manifesting change.[1]
The astrological aspects are noted in the central circle of this natal chart, where the different colors and symbols distinguish between the different aspects, such as the square (red) or trine (green)
## Approach
In medieval astrology, certain aspects, like certain planets, were considered to be either favorable (benefic) or unfavorable (malefic). Modern usage places less emphasis on these fatalistic distinctions. The more modern approach to astrological aspects is exemplified by research on astrological harmonics[further explanation needed], of which John Addey was a major proponent, and which Johannes Kepler earlier advocated in his book Harmonice Mundi in 1619. But even in modern times, aspects are considered either hard[further explanation needed] (the 90° square, the 180° opposition) or easy[further explanation needed] (the 120° trine, the 60° sextile). The conjunction aspect (essentially 0°, discounting orb) can be in either category, depending on which two planets it is that are conjunct.
A list of aspects below presents their angular values and a recommended orb for each aspect. The orbs are subject to variation, depending on the need for detail and personal preferences.
## Major aspects
The traditional major aspects are sometimes called Ptolemaic aspects since they were defined and used by Ptolemy in the 1st Century, AD. These aspects are the conjunction (0°), sextile (60°), square (90°), trine (120°), and opposition (180°). It is important to note that different astrologers and separate astrological systems/traditional utilize differing orbs (the degree of separation between exactitude) when calculating and using the aspects, though almost all use a larger orb for a conjunction when compared to the other aspects. The major aspects are those that can be used to divide 360 evenly and are divisible by 10 (with the exception of the semi-sextile).
### Conjunction
A conjunction (abrv. Con) is an angle of approximately 0–10°. An orb of approximately 10° is usually considered a conjunction, but if neither the Sun nor Moon is involved, some consider the conjunction to have a maximum orb of only about 8°.
Conjunctions are said to be the most powerful aspect, mutually intensifying the effects of the planets involved; they are major point in an individual's chart. Whether the conjunction in question is regarded as beneficial or detrimental depends on the specific planets involved. In particular, conjunctions involving the Sun, Venus, and/or Jupiter, in any of the three possible conjunction combinations, are considered highly favourable, while conjunctions involving the Moon, Mars, and/or Saturn, again in any of the three possible conjunction combinations, are considered highly unfavourable.[2]
Exceptionally, the Sun, Venus, and Jupiter were in a 3-way (beneficial) conjunction on November 9–10, 1970, while on March 10 of that same year, the Moon, Mars, and Saturn were in 3-way (detrimental) conjunction.
If either of two planets involved in a conjunction is also under tension from one or more hard aspects with one or more other planets, then the added presence of the conjunction aspect will further intensify the tension of that hard aspect.
A planet in very close conjunction to the Sun (within 17 minutes of arc, or only about 0.28°) is said to be cazimi, an ancient astrological term meaning "in the heart" (of the Sun). For example, "Venus cazimi" means Venus is in conjunction with the Sun with an orb of less than ≈ 0.28°. Such a planetary position is a conjunction of great strength. A related term is combust, applicable when the planet in conjunction with the Sun is only moderately close to the Sun. In the case of combust, specific orb limit will depend on the particular planet in conjunction with the Sun.
The Sun and Moon experience a conjunction every single month of the year — during the New Moon.
#### Great conjunctions
Kepler's trigon, a diagram of great conjunctions from Johannes Kepler's 1606 book De Stella Nova
Great conjunctions (between the two slowest classical planets) have attracted considerable attention in the past as celestial omens. During the late Middle Ages and the Renaissance, great conjunctions were a topic broached by most astronomers of the period up to the times of Tycho Brahe and Kepler, by scholastic thinkers as Roger Bacon[3] or Pierre d'Ailly,[4] and they are mentioned in popular and literary writing by authors such as Dante[5] and Shakespeare.[6] This interest is traced back in Europe to the translations from Arabic sources, most notably Albumasar's book on conjunctions.[7]
As successive great conjunctions occur nearly 120° apart, their appearances form a triangular pattern. In a series every third conjunction returns after some 60 years to the vicinity of the first. These returns are observed to be shifted by some 8° relative to the fixed stars, so no more than four of them occur in the same zodiacal sign. Usually the conjunctions occur in one of the following triplicities or trigons of zodiacal constellations:
1. Aries, Sagittarius, and Leo
2. Taurus, Capricorn, and Virgo
3. Gemini, Aquarius, and Libra
4. Cancer, Pisces, and Scorpius
After about 220 years the pattern shifts to the next trigon, and in about 900 years returns to the first trigon.[8]
To each triangular pattern astrologers have ascribed one from the series of four elements. Particular importance has been accorded to the occurrence of a great conjunction in a new trigon, which is bound to happen after some 240 years at most.[9] Even greater importance was attributed to the beginning of a new cycle after all fours trigons had been visited, something which happens in about 900 years.
Medieval astrologers usually gave 960 as the length of the full cycle, apparently because in some cases it took 240 years to pass from one trigon to the next.[9] If a cycle is defined by when the conjunctions return to the same right ascension rather than to the same constellation, then because of axial precession the cycle is only about 800 years. Use of the Alphonsine tables apparently led to the use of precessing signs, and Kepler gave a value of 794 years (40 conjunctions).[9][5]
Despite the inaccuracies and some disagreement about the beginning of the cycle the belief in the significance of such events generated a stream of publications which grew steadily up to the end of the 16th century. As the great conjunction of 1583 was the last in the watery trigon it was widely supposed to herald apocalyptic changes; a papal bull against divinations was issued in 1586 and as nothing really significant had happened by 1603 with the advent of a new trigon, the public interest rapidly died.
### Sextile — intermediate major/minor aspect
A sextile (abrv. SXt or Sex) is an angle of 60° (1/6 of the 360° ecliptic, or 1/2 of a trine [120°]). An orb between 3-4 is allowed depending on the planets involved.
The sextile has been traditionally said to be similar in influence to the trine, but less intense. It indicates ease of communication between the two elements involved, with compatibility and harmony between them. A sextile provides opportunity and is very responsive to effort expended to gain its benefits. See information on the semisextile below.
### Square
A square (abrv. SQr or Squ) is an angle of 90° (1/4 of the 360° ecliptic, or 1/2 of an opposition [180°]). An orb of somewhere between 5° and 10°[10] is usually allowed depending on the planets involved.
As with the trine and the sextile, in the square, it is usually the outer or superior planet that has an effect on the inner or inferior one. The square's energy is strong and usable but has a tension that needs integration between 2 different areas of life, or offers a choice point where an important decision needs to be made that involves an opportunity cost. It is the smallest major aspect that usually involves houses in different quadrants.
### Trine
A trine (abbrev. Tri) is an angle of 120° (1/3 of the 360° ecliptic), an orb of somewhere between 5° and 10° depending on the planets involved.
The trine relates to what is natural and indicates harmony and ease. The trine may involve talent or ability which is innate. The trine has been traditionally assumed to be extremely beneficial. When involved in a transit, the trine involves situations that emerge from a current or past situation in a natural way.
### Opposition
An opposition (abrv. Opp) is an angle of 180° (1/2 of the 360° ecliptic). An orb of somewhere between 5° and 10°[10] is usually allowed depending on the planets.
Oppositions are said to be the second most powerful aspect. It resembles the conjunction although the difference between them is that the opposition is fundamentally relational. Some say it is prone to exaggeration as it is not unifying like the conjunction but has a dichotomous quality and an externalizing effect. All important axes in astrology are essentially oppositions. Therefore, at its most basic, it often signifies a relationship that can be oppositional or complementary.
## Minor aspects
### Quincunx (Inconjunct) — intermediate major/minor aspect
A quincunx is an angle of 150° (5/12 of the 360° ecliptic). An orb of ±3.5° is usually allowed depending on the planets involved. Unlike all the other aspects, it does not offer equal divisions of the circle.
Its effect is most obvious when there is a triangulating aspect of a 3rd planet in any major aspect to the 2 planets which are quincunx. Its interpretation will rely mostly on the houses, planets, and signs involved. The effect will involve different areas of life being brought together that are not usually in communication since the planets are far enough apart to be in different house quadrants, like the trine, but often with a shift in perspective involving others not previously seen clearly. Keywords for the quincunx are mystery, creativity, unpredictability, imbalance, surreal, resourcefulness, and humor.
### Semi-sextile
A semi-sextile is an angle of 30° (1/12 of the 360° ecliptic). An orb of ±1.2° is allowed.
It is the most often used of the minor aspects perhaps for no other reason than it can be easily seen. It indicates a mental interaction between the planets involved that is more sensed than experienced externally. Any major aspect transit to a given planetary position will also involve the other planet that is in semi-sextile aspect to it. The energetic quality is one of building and potentiating each other gradually, but planets, houses and signs involved must be considered. Similar to a sextile in offering a quality of opportunity with conscious effort to benefit from.
### Quintile
A quintile is an angle of 72° (1/5 of the 360° ecliptic). An orb of ±1.2° is allowed.
It indicates a strong creative flow of energy between the planets involved, often an opportunity for something performative, entertaining or expressive.
### Septile
A septile is an angle of roughly 51.5° (1/7 of the 360° ecliptic). An orb of ±1° is allowed. It is the only prime number aspect that is an inexact number.
It is a mystical aspect that indicates a hidden flow of energy between the planets involved, often involving spiritual or energetic sensitivity and an awareness of inner and more subtle, hidden levels of reality involving the planets in septile aspect.
### Semi-square
A semi-square is an angle of 45° (1/8 of the 360° ecliptic). An orb of ±2° is allowed. It is an important minor aspect and indicates a stimulating or challenging energy like that of a square but less intense and more internal. The semi-square is considered to be the 8th harmonic of the chart because it is one-eighth of the 360° circle that the zodiac resides in (i.e., 360 / 8 = 45). The semi-square is considered to be a minor hard aspect because it is thought to cause friction in the native's life and prompt them to take some action to reduce that friction.
For example, if the Sun is posited in 10° Aquarius and Venus is posited in 25° Pisces then a semi-square would occur. This is thought to indicate that the native is not likely to be totally happy in matters of love. The native is thought to have a tendency to seek out those individuals who are not necessarily compatible to them, and this may lead to a sense of tension and actions to correct what to them may be frustration.
### Novile
A novile is an angle of 40° (1/9 of the 360° ecliptic). An orb of ±1° is allowed.
It indicates an energy of perfection and/or idealization.
## Declinations
The parallel and antiparallel (or contraparallel) are two other aspects which refer to degrees of declination above or below the celestial equator. They are not widely used by astrologers.
• Parallel: same degree± 1-degree 12-minutes of arc. This may be similar to a semi-square or quincunx in that it is not clearly seen. It represents an opportunity for perspective and communication between energies that requires some work to be made conscious.
• Contraparallel: opposite degree± 1-degree 12-minute of arc. Said to be similar to the parallel. (Some who use the parallel do not consider the contraparallel an aspect.)
8. ^ If J and P designate the periods of Jupiter and Saturn then the return takes ${\displaystyle 1/(5/S-2/J)}$ which comes to 883.15 years, but to be a whole number of conjunction intervals it must be sometimes 913 years and sometimes 854. See Etz.
|
2021-02-26 02:27:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5673573613166809, "perplexity": 2430.687530090349}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178355944.41/warc/CC-MAIN-20210226001221-20210226031221-00398.warc.gz"}
|
http://comunidadwindows.org/confidence-interval/standard-error-of-the-difference-between-two-proportions.php
|
Home > Confidence Interval > Standard Error Of The Difference Between Two Proportions
# Standard Error Of The Difference Between Two Proportions
## Contents
Reducing the margin of error In the standard error, , the value of is a constant. The formula for the estimated standard error is: where p is a weighted average of the p1 and p2, n1 is the number of subjects sampled from the first population, and It would not apply to dependent samples like those gathered in a matched pairs study.Example 10.7A general rule used clinically to judge normal levels of strength is that a person's dominant In cases if this sort, Calculator2 will estimate the size of the sample on the basis of two items of information that probably will be given in the report: the margin http://comunidadwindows.org/confidence-interval/standard-error-of-the-difference-of-two-proportions.php
Because the sampling distribution is approximately normal and the sample sizes are large, we can express the critical value as a z score by following these steps. If this theory about the underlying reason for the strength differential is true then there should be less of a difference in young children than in adults. Since the interval does not contain 0, we see that the difference seen in this study was "significant."Another way to think about whether the smokers and non-smokers have significantly different proportions Note: For polls reported in the news media, the margins of error tend to be rounded to the nearest integer. http://stattrek.com/estimation/difference-in-proportions.aspx?Tutorial=AP
## Confidence Interval For Difference In Proportions Calculator
When each sample is small (less than 5% of its population), the standard deviation can be approximated by: σp1 - p2 = sqrt{ [P1 * (1 - P1) / n1] + In this example, p1 - p2 = 73/85 - 43/82 = 0.8588 - 0.5244 = 0.3344. What is the likely size of the error of estimation? Confidence Interval for the Difference of Two Population Proportions This file is part of a program based on the Bio 4835 Biostatistics class taught at Kean University in Union, New Jersey.
Multiply z* times the result from Step 4. Ifthe reported margin of error is entered as an integer, the programming for Calculator2 will assume it to be a rounded value and calculate the lower and upper limits of estimated For convenience, we repeat the key steps below. 2 Proportion Z Interval Conditions Example A study of teenage suicide included a sample of 96 boys and 123 girls between ages of 12 and 16 years selected scientifically from admissions records to a private psychiatric
Applying the general formula to the problem of differences between proportions where p1- p2 is the difference between sample proportions and is the estimated standard error of the difference between proportions. Standard Error Two Proportions Calculator At first it was purely theoretical and of no particular interest to anyone apart from gamblers and mathematicians. They also often appear to be based on the percentage for the candidate who has the majority or plurality within the sample. i thought about this Take plus or minus the margin of error from Step 5 to obtain the CI.
coeff.) X (st. 2 Proportion Z Interval Example The sample should include at least 10 successes and 10 failures. How to Find the Confidence Interval for a Proportion Previously, we described how to construct confidence intervals. We assume that the girls constitute a simple random sample from a population of similar girls and likewise for the boys.
## Standard Error Two Proportions Calculator
Compute margin of error (ME): ME = critical value * standard error = 1.645 * 0.036 = 0.06 Specify the confidence interval. D) Confidence interval for the difference of two population proportions When studying the difference between two population proportions, the difference between the two sample proportions, - , can be used as Confidence Interval For Difference In Proportions Calculator The standard deviation of the difference between sample proportions σp1 - p2 is: σp1 - p2 = sqrt{ [P1 * (1 - P1) / n1] * [(N1 - n1) / (N1 The Confidence Interval For The Difference Between Two Independent Proportions If the population proportion is a known constant, then the standard error of the difference between the subset proportion and the constant is the same as the standard error of the
That's okay, but you can avoid negative differences in the sample proportions by having the group with the larger sample proportion serve as the first group (here, females). check over here That is, we are 90% confident that the true difference between population proportion is in the range defined by 0.10 + 0.06. The range of the confidence interval is defined by the sample statistic + margin of error. The Variability of the Difference Between Proportions To construct a confidence interval for the difference between two sample proportions, we need to know about the sampling distribution of the difference. Confidence Interval For Two Population Proportions Calculator
How do you do this? This condition is satisfied; the problem statement says that we used simple random sampling. Return to:Calculator3Calculator4 Standard Deviation For most purposes of statistical inference, the two main properties of a distribution are its central tendency and variability. his comment is here If the reliability coefficient is fixed, the only way to reduce the margin of error is to have a large sample.
Importantly, the formula for the standard deviation of a difference is for two independent samples. Two Proportion Z Test Confidence Interval Calculator All Rights Reserved. That comparison involves two independent samples of 60 people each.
## Copyright © 2016 The Pennsylvania State University Privacy and Legal Statements Contact the Department of Statistics Online Programs Next: Overview of Confidence Intervals Up: Confidence Intervals Previous: Sample Size for Estimating
The interval goes from 3.77 kg up to 5.63 kg.Finally, we want to examine the idea that the right-left strength differential will be different between the 30-39 year old men and If an upper limit is suspected or presumed, it could be used to represent p. 2. Significance of the Difference between the Results of Two SeparatePolls 4. Confidence Interval Difference In Proportions Ti-84 err.) Solving for n gives Estimating Generally the variance of the population under study is unknown.
Looking at these differences we see their average is 0.3 kg with a standard deviation of 0.8 kg. The interval for non-smokers goes from about 0.36 up to 0.48. Test Your Understanding Problem 1 Suppose the Cartoon Network conducts a nation-wide survey to assess viewer attitudes toward Superman. weblink Secret of the universe Has an SRB been considered for use in orbit to launch to escape velocity?
Then take 0.34 ∗ (1 - 0.34) to obtain 0.2244. ParkerList Price: $56.00Buy Used:$14.39Buy New: $34.89Statistical Analysis with Excel For Dummies (For Dummies (Computers))Joseph SchmullerList Price:$24.99Buy Used: $0.01Buy New:$12.90Some Theory of SamplingWilliam Edwards DemingList Price: $22.95Buy Used:$3.11Buy Then, we have plenty of successes and failures in both samples. The utility of it is that, once you know a distribution to be normal, or at least a close approximation of the normal, you are then in a position to specify
The value z* is the appropriate value from the standard normal distribution for your desired confidence level. (Refer to the following table for z*-values.) z*-values for Various Confidence Levels Confidence Level It is also important to have a method that will allow prediction of the correct sample size for estimating a population mean or a population proportion. SEp1 - p2 = sqrt{ [p1 * (1 - p1) / n1] * [(N1 - n1) / (N1 - 1)] + [p2 * (1 - p2) / n2] * [(N2 - Calculator 2: Estimating Sample Size when the Report of a Poll Fails to Provide that Essential Bit of Information It occasionally happens that the press report of a poll will give
The bottom line in such a test is a probability value, ranging between 0.0 and1.0, which represents the likelihood that a difference between (1) and(2) as great as the one observed Elsewhere on this site, we show how to compute the margin of error when the sampling distribution is approximately normal. So we compute$\text{Standard Error for Difference} = \sqrt{0.0394^{2}+0.0312^{2}} ≈ 0.05$If we think about all possible ways to draw a sample of 150 smokers and 250 non-smokers then the differences we'd see
|
2018-02-24 10:01:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.617257833480835, "perplexity": 539.1061493999877}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815544.79/warc/CC-MAIN-20180224092906-20180224112906-00590.warc.gz"}
|
https://robotics.stackexchange.com/questions/11127/forward-kinematics-od-differential-robot
|
# Forward Kinematics od Differential Robot
r = 1;
w1 = 4;
w2 = 2;
l = 1;
R = [0 -1 0; 1 0 0; 0 0 1];
X = [(r*w1)/2 + (r*w2)/2; 0 ; (r*w1)/(2*l) - (r*w2)/(2*l)];
A = R*X;
disp(A)
I am getting the solution for the matrix as [0; 3; 1] which is exactly what I expect. I would like to input a series of w1 and w2, Lets say I have a data file 1.xlsx and 2.xlsx with ten values each. I want to load 1.xlsx into w1 and 2.xlsx into w2 and get ten answers for X. How can I do that?
2) Write a for loop for extractting individual values from w1 and w2, to perform the necessary computation.
|
2020-08-08 09:32:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5655770897865295, "perplexity": 1154.4987088634437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737319.74/warc/CC-MAIN-20200808080642-20200808110642-00204.warc.gz"}
|
https://chemistry.stackexchange.com/questions/170822/at-what-temperature-does-brass-blacken-in-air/170828
|
# At what temperature does Brass blacken in air?
A machine has overheated in an air filled enclosure, and a major brass component of mass around 1kg turned black. It was probably not in the high temperature regime for more than 2 hours at a guess and perhaps much less.
So, can anyone estimate what the approximate temperature might have been?
• It might be useful to first make sure what the black surface really is. Is it the assumed CuO or is it the residue of some other decomposed or evaporated material from around? Could you show an image of the situation? Jan 25 at 18:49
The black color is due black copper(II) oxide:
It can be formed by heating copper in air at around 300–800°C: $$\ce{2 Cu + O2 -> 2 CuO}$$
Possible range of temperature seems quite wide.
Appearing and intensity of brass blackening is combination of temperature and duration. It can be formed faster with growing temperature due faster kinetics of formation, but getting unstable at very high temperature due thermodynamic decomposition.
As organic substances are not reportedly present (above natural trace background), carbon from pyrolyzed organic matter is not considered.
• And any hydrocarbons in the air also contribute to blackening. Jan 24 at 17:32
• @JonCuster Yes, but it was not explicitly mentioned to be present. Jan 24 at 17:50
• sure, but ‘machine’ implies lubricants which get everywhere really easily. Jan 24 at 17:52
• @JonCuster True. But OP is not very explicit generally. // OP: Asking for answers and not providing enough relevant details, purpose, context or background of questions are contradictory decisions. Jan 24 at 18:29
• @Poutnik Commercially sensitive with legal implications. No lubricants or other organics present. Air was dry Jan 24 at 21:03
A metallurgist would guess at the temperature with a metallographic examination (grain size, twins, etc.) and hardness tests. (It would anneal to some degree depending on temperature.)
The first step would be to determine the brass/bronze alloy. Common wrought brass products are 70:30 or 60:40, and interestingly the 70:30 is most yellow. The 60:40 picks up a pinkish cast.
The blackening of copper alloys (e.g., basic brass is 33% zinc with 67% copper) is due to the formation of black oxide of copper (the process is known by the trade name Ebonol C: Accordingly, if the copper alloys is containing 65% or more of copper, a black oxide treatment can be applied to alloy surface to blacken by converting the copper to cupric oxide $$(\ce{CuO})$$. According to Wikipedia, the temperature required to blacken brass alloy is about $$\pu{400 ^\circ F} \ (\pu{204 ^\circ C})$$.
It is also noteworthy to mentioned here this somewhat cool educational demonstration. According to this Illini website:
[...] penny coins, starting in 1983, were made of zinc with a thin layer of copper plated on the surface. If these coins are heated, the zinc will diffuse into the copper layer, producing a surface alloy of zinc and copper. These alloys are brasses. Not only does the zinc change the properties of copper, but also the color of the brasses changes with zinc content - reaching a golden yellow color at around 20% zinc and golden at 35-40% zinc. Copper also oxidizes when heated in air, producing a black layer of copper oxide $$(\ce{CuO})$$. Thus when heated, there is a competition between the rate of oxidation (making the surface black) and the rate of diffusion (making the surface a golden-yellow color).
You may use this demonstration to find out the exact temperature when the coin starts blackening (hopefully)!
|
2023-03-22 21:29:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6292471289634705, "perplexity": 3071.704557359952}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944452.97/warc/CC-MAIN-20230322211955-20230323001955-00552.warc.gz"}
|
http://mathoverflow.net/feeds/user/4961
|
User benoît kloeckner - MathOverflow most recent 30 from http://mathoverflow.net 2013-05-25T21:48:08Z http://mathoverflow.net/feeds/user/4961 http://www.creativecommons.org/licenses/by-nc/2.5/rdf http://mathoverflow.net/questions/125147/special-coordinates-for-periodic-metrics Special coordinates for periodic metrics Benoît Kloeckner 2013-03-21T10:46:23Z 2013-05-20T07:11:58Z <p>This question is a follow-up to <a href="http://mathoverflow.net/questions/123759/is-displacement-controled-by-stable-norm" rel="nofollow">that one</a>.</p> <p>Given a $\mathbb{Z}^n$-periodic metric $g$ on $\mathbb{R}^n$ (with $n>2$), is it possible to find a periodic diffeomorphism $\varphi$ such that $\varphi^*g$ makes the voronoi cell of $\mathbb{Z}^n$ convex? Or more generally, what kind of good compatibility between the metric and the affine structure of $\mathbb{R}^n$ can one expect by choosing good coordinates on the quotient torus?</p> <p><strong>Edit.</strong> Misha's comment shows that the precise part of the question is very naive. To make the remaining part more precise, one "compatibility" that would do for my need would be the following.</p> <p>Call $g$ "$k$-balanced" if for all $v$ in the Voronoi cell of $0$, we have $$\sup_{p\in\mathbb{R}^n} d(p,p+v) \le k \ \mathrm{diam}(g)$$ Is it true that there is a $k=k(n)$ such that for all periodic riemmannian metric $g$, there is a periodic diffeomorphism $\varphi$ such that $\varphi^*g$ is $k$-balanced?</p> http://mathoverflow.net/questions/86234/chromatic-number-of-the-hyperbolic-plane/130463#130463 Answer by Benoît Kloeckner for chromatic number of the hyperbolic plane Benoît Kloeckner 2013-05-13T10:08:55Z 2013-05-13T10:08:55Z <p>This does not really answer your questions, but I recently got a few results on the chromatic number of the hyperbolic planes. They are formulated by fixing the curvature and letting the distance vary, and I use the notation $\chi(\mathbb{H}^2,{d})$ for the chromatic number of the distance-$d$ graph on the hyperbolic plane with curvature $-1$.</p> <ol> <li><p>for small $d$, $\chi(\mathbb{H}^2,{d})\leq 12$ (this can probably be improved, but maybe not easily to $7$),</p></li> <li><p>for large $d$, $\chi(\mathbb{H}^2,{d})\leq \frac{4}{\ln 3} d + O(1)$.</p></li> </ol> <p>The proofs can be found here: <a href="http://www-fourier.ujf-grenoble.fr/~bkloeckn/posts/2013-05-13-DistanceGraphs.html" rel="nofollow">http://www-fourier.ujf-grenoble.fr/~bkloeckn/posts/2013-05-13-DistanceGraphs.html</a> and the paper will appear on the arXiv soon. All this is not difficult, and the paper raises more questions than it answers.</p> <p>My impression is that the monotony of the chromatic number with $d$ seems reasonable, but is in fact a subtle issue; and I would rather bet on a negative answer to question (2) but not too high. All in all, these questions are probably incredibly difficult, because we have only very cumbersome tools to relate the geometry with the distance graph.</p> <p>For the story: about one year after I read, liked and bookmarked your question, I had forgotten about it but read "Ramsey Theory, Today, and Tomorrow", and realized I could answer some questions asked in it by Johnson and Szlam. In the course of writing a paper from these answers, I investigated the case of the hyperbolic plane. After writing a first version I happened to look at my MO favorites, and saw your question again -- which is therefore cited in the paper (will there soon be a @mathoverflow in standard bibTeX definitions?)</p> http://mathoverflow.net/questions/130146/algebraic-topology-in-low-regularity Algebraic topology in low regularity Benoît Kloeckner 2013-05-09T08:35:35Z 2013-05-09T12:48:46Z <p>This question is triggered by a talk by <a href="http://pierre.bousquet9.free.fr/" rel="nofollow">Pierre Bousquet</a>, who considered related questions (but not quite what I ask below).</p> <p>Take a classical algebraic topological result, like the inexistence of retraction map $f:D^2\to \partial D^2$. Can we lower the regularity hypothesis (i.e., replace continuity with something weaker, or at least something not implying continuity) and still get a result?</p> <p>Let me be more precise:</p> <blockquote> <p>For which values of $p$ Does it exist a map $f:D^2\to \partial D^2$ in $W^{1,p}$ such that the trace of $f$ on the boundary is the identity?</p> </blockquote> <p>In the same spirit:</p> <blockquote> <p>For which values of $s,p$ must each map $f:D^2 \to D^2$ in $W^{s,p}$ have an almost fixed point in some sense (e.g. a sequence $x_n\to x$ such that $f(x_n)\to x$).</p> </blockquote> http://mathoverflow.net/questions/21578/is-there-a-simple-proof-that-a-group-of-linear-growth-is-quasi-isometric-to-z Is there a simple proof that a group of linear growth is quasi-isometric to Z? Benoît Kloeckner 2010-04-16T14:06:53Z 2013-05-05T19:58:18Z <p>I proposed to a master's student to work, from the exercise in Ghys-de la Harpe's book, on the proof that a finitely generated group $G$ that is quasi-isometric to $\mathbb{Z}$ is virtually $\mathbb{Z}$. However I initially had in mind the result that gives the same conclusion from the hypothesis that $G$ has linear growth.</p> <p>Do you know of any simple (and elementary, in particular without assuming Gromov's theorem on polynomial growth groups) proof that a group of linear growth is quasi-isometric to $\mathbb{Z}$?</p> http://mathoverflow.net/questions/129407/random-graphs-nonisomorphic-to-unit-distance-graphs/129457#129457 Answer by Benoît Kloeckner for Random graphs nonisomorphic to unit distance graphs Benoît Kloeckner 2013-05-02T20:47:22Z 2013-05-02T20:47:22Z <p>The almost sure asymptotic chromatic number of $G$ goes to $\infty$ with $c$, see for example the precise result by Achlioptas and Naor in Annals of Math. 2005.</p> <p>The chromatic number of a unit-distance graph (and in fact of the whole plane) is bounded above by $7$, see e.g. the math coloring book by Soifer (this is simple: one colors an hexagonal tiling of carefully chosen side length).</p> <p>These two facts end the proof of your problem.</p> http://mathoverflow.net/questions/129227/positively-curved-manifold-with-a-codimension-1-totally-geodesic-submanifold/129236#129236 Answer by Benoît Kloeckner for Positively curved manifold with a codimension 1 totally geodesic submanifold. Benoît Kloeckner 2013-04-30T17:11:28Z 2013-05-02T13:43:14Z <p>$\mathbb{CP}^n$ (with $n>1$) does indeed not have any codimension $1$ totally geodesic manifold; neither does $\mathbb{CH}^n$. You can probably find a proof in Goldman's book on complex hyperbolic geometry.</p> <p>(<strong>Added later:</strong> this is true even locally: there are no <em>open</em> codimension $1$ totally geodesic manifold in $\mathbb{CP}^n$ nor in $\mathbb{CH}^n$.)</p> <p>Note that this is an important geometrical fact, as (as far as I know) all proofs of the isoperimetric inequality that work in the real hyperbolic space use reflexions with respect to a totally geodesic codimension one manifold. This explains why <em>we still don't know if balls are optimal for the isoperimetric problem in $\mathbb{CH}^n$ and $\mathbb{CP}^n$</em> (small balls in the latter case, as for large volumes balls are known not to be optimal).</p> <p>Also note that it is a source of great difficulty in the study of subgroups of isometries of $\mathbb{CH}^n$: for many groups $\Gamma$ acting isometrically on $\mathbb{CH}^n$, we do not know whether they are discrete; one cannot construct a fundamental domain with geodesic faces that could be used to prove discreteness, as it is done in real hyperbolic geometry. We are therefore mainly left with arithmetic methods, and to find non-arithmetic lattices of $\mathrm{SU}(1,n)$ is an important problem, see notably the work of Martin Deraux.</p> http://mathoverflow.net/questions/128940/random-rings-linked-into-one-component/128984#128984 Answer by Benoît Kloeckner for Random rings linked into one component? Benoît Kloeckner 2013-04-28T07:34:45Z 2013-04-28T07:34:45Z <p>I think I can complement the answer of Ori Gurel-Gurevich to prove that indeed, when we deal with connected open sets (no need for convexity) the answer is positive.</p> <p><strong>1.</strong> There is a finite configuration of circles $C_1, \dots, C_N$ whose centers are in the domain $D$, such that any circle $C$ with center in the domain $D$ must be linked to at least one of the $C_i$.</p> <p>One way to do this is to first take a very tight lattice $\Lambda$, and put horizontal circles $C_1,\dots, C_K$ with centers on $\Lambda\cap D$. Now, a circle with center in $D$ that is not linked to any of these $C_i$ ($i\leq K$) must be roughly horizontal.</p> <p>Then, add circles $C_{K+1},\dots, C_N$ with center on the lattice but oriented along a given vertical plane. A circle not linked to any $C_i$ must be roughly horizontal <em>and</em> roughly vertical, thus don't exist.</p> <p><strong>2.</strong> The above construction is stable under small perturbation. This means that there are small open sets $U_1,\dots, U_N$ of the parameter space such that for all set of circles $C_1,\dots, C_N, C$ such that $C_i\in U_i$, $C$ must be linked to one of the $C_i$.</p> <p><strong>3.</strong> By adding more circles that link together the $C_i$ of <strong>1.</strong>, we could have assumed that the $C_i$ make a linked component (this is where we use the connectedness assumption on $D$, which in fact could be weakened). As in <strong>2.</strong>, this is stable under small perturbation, so in fact there are small open sets $U_1,\dots, U_N$ of the parameter space such that for all set of circles $C_1,\dots, C_N, C$ such that $C_i\in U_i$, the $N+1$ circles must be linked together.</p> <p><strong>4.</strong> Now, when $n\to \infty$ the probability that circles have been drawn in each of the $U_i$ increase to $1$ exponentially fast, so at some point our random configuration contains with high probability a set of circles that links <em>all</em> admissible circles, including all the other randomly drawn ones.</p> http://mathoverflow.net/questions/126873/reference-for-ultrametric-spaces Reference for ultrametric spaces Benoît Kloeckner 2013-04-08T15:13:48Z 2013-04-22T21:52:25Z <p>I have a research project involving ultrametric spaces, and there are some facts that I use but have a hard time finding explicitely in the literature, although I know that some of them are folklore (for example, an ultrametric space can be described as the set of leaves of a tree, endowed with the induced metric).</p> <p>I would like to know whether there is a book or comprehensive survey paper on the geometry and structure of ultrametric spaces.</p> <p>An important point: I am interested in purely metric spaces, without algebraic structure (I did find books on analysis in non-Archimedean fields, which are too focused on this case). I can restrict to compact spaces, but not to finite ones.</p> http://mathoverflow.net/questions/127792/geometric-interpretation-of-lie-bracket/127840#127840 Answer by Benoît Kloeckner for geometric interpretation of Lie bracket Benoît Kloeckner 2013-04-17T12:27:18Z 2013-04-17T12:27:18Z <p>Let me attempt to reconcile the two views on the Lie bracket.</p> <p>First, one has to wonder what it should mean that a vector field $Y$ is constant'' along $X$. This is ambiguous, as noticed by katz. One point is that it is not a property that depends solely on the values of $Y$ along $X$, contrary to its Riemannian counterpart: it should really depends on the (local) field $Y$. Another confusion not to make is that it cannot be simply defined in charts by looking whether $Y$ is constant in the Euclidean sense: this would certainly not be chart-independant (even if we ask the chart to be a flow box for $X$).</p> <p>Since the model is when $X=\frac\partial{\partial x}$ and $Y=\frac\partial{\partial y}$ in the plane, the one thing we could ask to a constant along $X$'' field $Y$ would be that if one follows during a given time $h$ an integral curve of $Y$ starting from any point in a integral curve $\gamma$ of $X$, then one should end up in a given integral curve $\gamma'$ of $X$ that does not depend on the starting point (but only on $t$ and $\gamma$). In fact, one should even ask that the parametrization of $\gamma$ is respected. This is what you get if you can find some chart that is a flow box for both $X$ and $Y$, that is if are part of a coordinate system (up to minor cheating on colinearity).</p> <p>But this is exactly the definition of Lie bracket given in Spivak, up to a little twist: one asks if following $X$ for some time $h$ then $Y$ for time $h$ gives you the same point than following $Y$ for time $h$ then $X$ for time $Y$.</p> http://mathoverflow.net/questions/127823/finding-a-good-ordering-of-mathbbq/127836#127836 Answer by Benoît Kloeckner for Finding a good ordering of $\mathbb{Q}$ Benoît Kloeckner 2013-04-17T12:06:12Z 2013-04-17T12:06:12Z <p>The answer is <strong>no</strong>.</p> <p>First, the ordering and density hypothesis are irrelevant (you do not use the ordering, and the density can be managed independently of the measure assumption we are trying to satisfy).</p> <p>The Lebesgue measure of the set of $x\in(-1,1)$ such that $x\in B(x_,;r_n)$ for at least one $n>N$ is at most $2\sum_{n>N} r_n$. Your set is the intersection of these sets over all $N\in\mathbb{N}$, so that it must have measure $0$ as soon as $\sum r_n<\infty$.</p> http://mathoverflow.net/questions/127829/when-constant-scalar-curvature-implies-einstein/127832#127832 Answer by Benoît Kloeckner for when constant scalar curvature implies Einstein? Benoît Kloeckner 2013-04-17T11:56:56Z 2013-04-17T11:56:56Z <p>There is no reason for this, and the answer is indeed <strong>no</strong>. </p> <p>The simplest example I can think of is the product of two $\mathbb{S}^2$, each endowed with the round metric. This manifold is homogeneous and thus has constant scalar curvature, its sectional curvature is non-negative so its Ricci tensor also is (and is in fact even positive), but the Ricci curvature in a direction $u$ depends on the angle between $u$ and the tangent spaces to the fibers of the projection on each factor (i.e., on whether $u$ is close to be horizontal or vertical or not).</p> http://mathoverflow.net/questions/127599/is-there-a-lower-bound-for-variance-in-terms-of-curvature/127619#127619 Answer by Benoît Kloeckner for Is there a lower bound for variance in terms of curvature? Benoît Kloeckner 2013-04-15T13:38:04Z 2013-04-15T13:38:04Z <p>As far as I understand the question, the answer is no: for any domain $\Omega$ and any $\delta>0$, denoting by $K_f$ the curvature function of the metric $g=f^2g_{eucl}$, we have</p> <p>$$\inf_{K_f\geq\delta} \mathrm{Var}(f) = \inf_{K_f\leq-\delta} \mathrm{Var}(f) = 0$$</p> <p>Indeed, given any $f$ such that $K_f\geq\delta$ and any $\lambda\in(0,1)$, the function $u=\lambda f$ has $\mathrm{Var}(u)=\lambda^2 \mathrm{Var}(f)$ and $K_u=K_f/\lambda^2\geq\delta/\lambda^2>\delta$.</p> <p>This seems to have to do with normalization or the volume form which is used, but I do not know how one could formulate an alternative problem with a positive answer.</p> http://mathoverflow.net/questions/123759/is-displacement-controled-by-stable-norm Is displacement controled by stable norm? Benoît Kloeckner 2013-03-06T13:20:13Z 2013-03-07T23:49:17Z <p>Let $T^n$ be the $n$-dimensional torus and $g$ be a Riemannian metric on $T^n$. Let $\tilde g$ be the induced metric on the universal covering; using suitable coordinates, $\tilde g$ is therefore a $\mathbb{Z}^n$-periodic metric on $\mathbb{R}^n$ (I shall conflate the lattice $\mathbb{Z}^n$ with the fundamental group of $T^n$ in the sequel).</p> <p>Let $d$ be the distance induced by $\tilde g$ and $t:\mathbb{Z}^n\to (0,+\infty)$ be the function defined by $t(\gamma)=d(0,\gamma(0))$. Recall that the <em>stable norm</em> $\Vert\cdot\Vert_S$ is a norm on $\mathbb{R}^n$ defined by the property that for any $\gamma\in\mathbb{Z}^n$, $$\Vert\gamma\Vert_S = \lim_k \frac{t(\gamma^k)}{k}$$</p> <p><strong>Question:</strong> is it true that for all $\gamma\in\mathbb{Z}^n$, we have $$t(\gamma)\le \Vert \gamma \Vert_S+2\mathrm{diam}(g)?$$</p> <p>If not, does some similar control hold? The formula could depend on $n$ but not on the metric $g$. </p> <p>A pointer to good literature on this kind of metric Riemannian geometry would already be much appreciated.</p> <p><strong>Important edit:</strong> if needed, I am ok with the additional, very strong assumption that the sectional curvature of $g$ is bounded above by some positive $\varepsilon=\varepsilon(n)$. The formula for a lower bound on $\Vert\cdot\Vert_S$ can depend upon $\varepsilon$ (explicitely).</p> <p>As for motivation, I need this kind of control for a project of showing some constraints on Riemannian metrics on the torus $T^n$ by using quantitative versions of Milnor's argument in his paper on the growth of fundamental groups and volume of Riemannian manifolds.</p> http://mathoverflow.net/questions/123194/research-level-applications-of-row-rank-column-rank/123212#123212 Answer by Benoît Kloeckner for Research level applications of "row rank = column rank"? Benoît Kloeckner 2013-02-28T13:00:20Z 2013-02-28T13:00:20Z <p>There is this proof of the De Bruijn-Erdös theorem: $p$ points in the plane, not all on the same line, at least $p$ lines go through at least two of the points.</p> <p>The linear algebraic proof goes like this: let $A$ be the incidence matrix of points versus lines (each row is labeled by a point, each column by a line going through at least two of the points, and the $ij$ coefficient is $1$ if the given point is on the given line, $0$ otherwise). Then it is easily seen that $det(AA^T)\neq0$. In particular the rank of $A$ is $p$, and since this is its column rank the number of columns must be at least $p$.</p> http://mathoverflow.net/questions/120314/smoothing-of-piecewise-euclidean-riemannian-metrics Smoothing of piecewise Euclidean Riemannian metrics Benoît Kloeckner 2013-01-30T13:17:00Z 2013-02-14T00:22:10Z <p>Let $M$ be a smooth closed manifold and $T$ be a triangulation of $M$. Endow each simplex of $T$ with the Euclidean metric making it a regular simplex; this gives a piecewise Euclidean metric $g_0$ on $M$, which is singular on (part of) the codimension $2$ skeleton of $T$.</p> <p>Is it possible to approximate $g_0$ by a smooth Riemannian metric? The approximation should in particular change length of curves and the volume by arbitrarily small amounts.</p> <p>I guess the answer is positive and well-known, but I did manage to find a reference (in particular, several works ask the smoothing to satisfy certain curvature assumptions, which I do not). Is there a reference or are there obstruction to smoothing?</p> http://mathoverflow.net/questions/121052/reference-question-poncelet-theorem/121110#121110 Answer by Benoît Kloeckner for Reference question: Poncelet theorem Benoît Kloeckner 2013-02-07T19:44:04Z 2013-02-07T19:44:04Z <p>There seems to be a misconception here: Poncelet Theorem (at least the great one, which I believe to remember is the one he proved while in jail) is a much deeper, and more difficult statement than what you state.</p> <p>Consider an ellipse inside another ellipse, and play inner-outer billiard with them. This means that you start from a point on the outer ellipse, choose one of the two line from this point tangent to the inner ellipse, and take the second intersection point of this line with the outer ellipse. You continue, always taking the next line tangent to the inner ellipse from the current point, and the other intersection point with the outer ellipse from the current line.</p> <p><strong>Theorem</strong> (Poncelet) $-$ If one orbit of this dynamical system is periodic, then all orbits are periodic.</p> <p>This, if I remember well, is in Berger's Geometry. There might be a reference there.</p> http://mathoverflow.net/questions/119552/longest-simple-closed-geodesic/119580#119580 Answer by Benoît Kloeckner for longest simple closed geodesic Benoît Kloeckner 2013-01-22T15:40:55Z 2013-01-22T15:40:55Z <p>The question of the relation between the length of the shortest closed geodesic and the area of a surface is called systolic geometry. You can notably look at the work of Balacheff, Bavard, Croke Gendulphe, Katz, Parlier, Saboureau.</p> http://mathoverflow.net/questions/117668/new-grand-projects-in-contemporary-math/117707#117707 Answer by Benoît Kloeckner for New grand projects in contemporary math Benoît Kloeckner 2012-12-31T09:40:54Z 2012-12-31T09:40:54Z <p>Optimal transport. Both its study (generalizations, Monge problem, regularity issues, and geometric properties to cite the part I work in) and its applications (to geometry notably with the Work of Sturm and Lott-Villani, to image processing and recognition, etc.) have developed hugely since the 90's.</p> http://mathoverflow.net/questions/117668/new-grand-projects-in-contemporary-math/117706#117706 Answer by Benoît Kloeckner for New grand projects in contemporary math Benoît Kloeckner 2012-12-31T09:38:25Z 2012-12-31T09:38:25Z <p>Ricci flow. It did solve Poincaré's conjecture and the $1/4$-pinching conjecture, but has also become an object of study. More generally, it has launched a large amount of work on geometric flows (mean curvature flow and others), notably with the idea that some other problems can be solved by designing an ad-hoc flow.</p> http://mathoverflow.net/questions/114325/convexity-in-0-1-n/114329#114329 Answer by Benoît Kloeckner for Convexity in $\{0,1\}^n$ Benoît Kloeckner 2012-11-24T12:42:40Z 2012-11-24T12:42:40Z <p>If you want a stable notion of convexity, you can ask for<code>$C\subset \{0,1\}^n$</code> to be convex that for all $x,y\in C$, every minimal path between $x$ and $y$ is contained in $C$.</p> <p>Concerning the Brunn-Minkowski inequality in the hypercube, there is a recent result of Ollivier and Villani :</p> <p>"A curved Brunn-Minkowski inequality on the discrete hypercube, Or: What is the Ricci curvature of the discrete hypercube?" SIAM J. Discr. Math. 26 (2012), n°3, 983--996. (paper available e.g. on Yann Ollivier's web page).</p> <p>The result is as follows: call midpoint of $x$ and $y$ any point that lies on a minimal path between them and is halfway (if $d(x,y)$ is even) or as close as halfway as possible (otherwise). For all <code>$A,B\subset \{0,1\}^n$</code>, the set $M$ of midpoints of pairs $(a,b)\in A\times B$ has cardinal bounded below: $$\ln |M| \ge \frac12 \ln |A| + \frac12 \ln|B| +\frac1{16n} d(A,B)^2.$$ </p> http://mathoverflow.net/questions/43889/proof-synopsis-collection/109882#109882 Answer by Benoît Kloeckner for Proof synopsis collection Benoît Kloeckner 2012-10-17T07:29:10Z 2012-10-17T07:29:10Z <p>I am surprised that this one did not already occurred : <strong>Perelman's proof of the Poincaré conjecture</strong>.</p> <blockquote> <p>Endow a simply connected three-manifold with any Riemannian metric. Let the metric evolve under the Ricci flow. When singularities occur, cut them out and smoothly glue a cap in the hole, checking that the topology has not changed. After some time, you get a round metric so your manifold is a sphere.</p> </blockquote> http://mathoverflow.net/questions/107800/isoperimetric-inequality-in-complex-hyperbolic-space/107802#107802 Answer by Benoît Kloeckner for Isoperimetric inequality in complex hyperbolic space Benoît Kloeckner 2012-09-21T20:45:22Z 2012-09-21T20:45:22Z <p>I am pretty sure it is a somewhat reputed conjecture, but I do not have a clear reference where it is stated. It might be evoked in a paper of Hsiang and Hsiang in Inventiones, where they prove that the isoperimetric domains in products of hyperbolic and euclidean spaces are invariant under the group of all isometries fixing the center of gravity. It seems a reasonable conjecture that this is true in all symmetric spaces of non-positive curvature. That conjecture might be stated in the Hsiang and Hsiang paper, and is a broad generalization of the conjecture you are interested in.</p> http://mathoverflow.net/questions/106990/examples-of-totally-geodesic-subset/106991#106991 Answer by Benoît Kloeckner for examples of totally geodesic subset Benoît Kloeckner 2012-09-12T08:34:26Z 2012-09-12T08:34:26Z <p>The obvious answer is an equatorial sphere (= intersection with a linear subspace of any codimension) in the unit sphere of $\mathbb{R}^n$.</p> <p>Without more details on your motivation, it is difficult to judge whether this answer is satisfying or not.</p> http://mathoverflow.net/questions/105880/laplace-beltrami-operator-expression/105882#105882 Answer by Benoît Kloeckner for Laplace-Beltrami operator expression Benoît Kloeckner 2012-08-29T21:12:57Z 2012-08-29T21:12:57Z <p>The $v_i$ are vector <em>fields</em>, and as such are derivations. The square usually means that you apply it twice (so, e.g. in the Euclidean space one can take $v_i=\frac{\partial}{\partial x_i}$ and its square is simply $\frac{\partial^2}{\partial x_i^2}$).</p> http://mathoverflow.net/questions/104957/curvature-of-curves-in-the-space-of-gaussians-measures/104975#104975 Answer by Benoît Kloeckner for curvature of curves in the space of gaussians measures Benoît Kloeckner 2012-08-18T09:55:49Z 2012-08-18T09:55:49Z <p>Your question is not specific enough about what you do not understand in the quoted paper; if you want help on this, you should at least explain what you understand and where the problem appears. Here is a little information about bibliography, that might help (or miss the point, I am not sure).</p> <p>For an introduction to optimal transport and Wasserstein spaces, you can have a look at Villani's books ("Topics on ..." is more elementary, but the beginning of "... Old and New" is not as difficult to read as the size of the book might lead you to think, and I like it a lot). A more concise introduction can also be found in a nice little book by Nicola Gigli, a version of which seems to be at <a href="http://math.unice.fr/~gigli/Site_2/Publications_files/users_guide%20-%20final.pdf" rel="nofollow">http://math.unice.fr/~gigli/Site_2/Publications_files/users_guide%20-%20final.pdf</a> (but I am not sure this is exactly the text I read).</p> <p>You should also now about Lott's paper "Some geometric calculations on Wasserstein space", Comm. Math. Phys. 277, p. 423-437, which computes the curvature of the Wasserstein space of a manifold.</p> <p>Concerning the notion of curvature of a discretized curve, you might be interested in the concept of Menger's curvature, which applies in a very broad context.</p> http://mathoverflow.net/questions/100265/not-especially-famous-long-open-problems-which-anyone-can-understand/102040#102040 Answer by Benoît Kloeckner for Not especially famous, long-open problems which anyone can understand Benoît Kloeckner 2012-07-12T13:58:50Z 2012-07-12T13:58:50Z <p>In an oriented graph, is there always a vertex from which there are at least as many vertices that one can access by moving along exactly two edges, than there are vertices that one can access by moving along one edge?</p> <p>This is known as Seymour's second neighborhood conjecture, and might be on the verge to being too famous (but it seems few of my colleagues know it).</p> http://mathoverflow.net/questions/101289/number-of-neigbour-voronoi-cells-for-a-random-set-of-points-on-sk-or-cube-1-1/101299#101299 Answer by Benoît Kloeckner for Number of neigbour Voronoi cells for a random set of points on S^k or cube [-1, 1]^k? Benoît Kloeckner 2012-07-04T10:04:44Z 2012-07-04T10:04:44Z <p>When $k=2$, you can use combinatorics to avoid any relation with probability. From Euler formula, one gets that the mean degree of a graph with $N$ vertices on $S^2$ is $6-\frac{12}N$ (see e.g. <em>Proofs from the book</em>, section on Euler formula).</p> <p>Applying this to the neighbors graph of your tesselation, you get that the average number of neighbors of a cell is bounded by $6$ independently of $N$.</p> <p>For this you have to rule out cells that touch by a corner only (otherwise the graph need not be planar), but I guess this only happen with null probability.</p> http://mathoverflow.net/questions/98107/metric-deformations-from-non-negative-to-positive-curvature/98112#98112 Answer by Benoît Kloeckner for Metric Deformations from Non-Negative to Positive Curvature Benoît Kloeckner 2012-05-27T13:38:50Z 2012-05-27T13:38:50Z <p>I think the answer is no, because if I remeber well $\mathbb{R}\mathrm{P}^2 \times \mathbb{R}\mathrm{P}^2$ does not admit a positively curved metric. My reference for this is Gallot-Hulin-Lafontaine, but I do not have the book at hand right now.</p> http://mathoverflow.net/questions/97860/basic-question-about-rectifiability/97864#97864 Answer by Benoît Kloeckner for basic question about rectifiability Benoît Kloeckner 2012-05-24T19:43:56Z 2012-05-24T19:50:29Z <p>I think you are true that the definition is void when $k=0$, but this only means it is an interesting for $k>0$ only. It even seems to me that this notion makes sense mostly when $n$ is the Hausdorff dimension of the considered set; the important theorem to bear in mind is a decomposition result which I may not remember very precisely, but that roughly says that any (closed ?) set of dimension $n$ is the union of a $n$-rectifiable set and a $n$-totally unrectifiable set (which means a set that has small intersection with every $n$-rectifiable set). The classical example of totally unrectifiable set is the four-corner Cantor set, which has Hausdorff dimension $1$ but meets every Lipschitz curve along a set of null one-dimensional measure.</p> http://mathoverflow.net/questions/31222/c1-isometric-embedding-of-flat-torus-into-mathbbr3/31227#31227 Answer by Benoît Kloeckner for $C^1$ isometric embedding of flat torus into $\mathbb{R}^3$ Benoît Kloeckner 2010-07-09T18:35:45Z 2012-04-23T11:36:48Z <p>A group of french mathematicians and computer scientists are currently working on this. The project is named Hévéa, and has already produced a few images. <strong>Edit:</strong> a few images and the PNAS paper have been released, see <a href="http://math.univ-lyon1.fr/~borrelli/Hevea/Presse/index-en.html" rel="nofollow">http://math.univ-lyon1.fr/~borrelli/Hevea/Presse/index-en.html</a></p> <p>Just a few word to explain what I understood of their method (which is by using h-principle) from the few image I saw in preview. Start with a revolution torus. The meridians are cool, because they all have the same length, as expected from those of a flat torus. But the parallels are totally uncool, because their lengths differ greatly: they witness the non-flatness of the revolution torus. </p> <p>Now perturb your torus by adding waves in the direction of the meridians (like an accordion), with large amplitude on the inside and small amplitude on the outside. If you design this perturbation well, you can manage so that the parallels now all have the same length. Of course, the perturbed meridian have now varying lengths! So you do the same thing by adding small waves in another direction, getting all meridians to have the same length again. You can iterate this procedure in a way so that the embedding converges in the $C^1$ topology to a flat embedded torus. But to prove that the precise perturbation you chose in order to get a nice image does converge, and that your maps are embeddings needs work (getting an immersion is easier if I remember well).</p> <p>Also, the Hévéa project plans to draw images of Nash spheres, that is $C^1$ isometric embeddings of spheres of radius $>1$ inside a ball of unit radius.</p> http://mathoverflow.net/questions/131139/vector-field-pull-back-from-embedding Comment by Benoît Kloeckner Benoît Kloeckner 2013-05-19T14:31:52Z 2013-05-19T14:31:52Z This is basic differential geometry, not research-level. I thought that Lev Soukhanov answer would show you where is the problem, but now the ongoing discussion does not belong here. Voting to close. http://mathoverflow.net/questions/131139/vector-field-pull-back-from-embedding/131140#131140 Comment by Benoît Kloeckner Benoît Kloeckner 2013-05-19T11:46:15Z 2013-05-19T11:46:15Z In other word, your "pull-back" vector field depends on both $f$ and $r$, while to properly define a $f^*X$ you would like it to depend only on $f$. http://mathoverflow.net/questions/130959/reference-request-affine-transforms-circle-inversion/131005#131005 Comment by Benoît Kloeckner Benoît Kloeckner 2013-05-17T22:14:32Z 2013-05-17T22:14:32Z @Ryan Budney: the action of $\mathrm{PGL}$ does not contain the conformal group, as the former preserves the antipody relation while the latter doesn't. http://mathoverflow.net/questions/130951/cauchys-integral-formula-is-not-right Comment by Benoît Kloeckner Benoît Kloeckner 2013-05-17T13:01:11Z 2013-05-17T13:01:11Z What is the question? http://mathoverflow.net/questions/130595/the-pth-power-of-a-distance-function-is-twice-continuously-differentiable-for-p Comment by Benoît Kloeckner Benoît Kloeckner 2013-05-14T18:03:47Z 2013-05-14T18:03:47Z Convexity is an assumption that may give you something, as stressed by Tom Bachmann below, connectivity is not. http://mathoverflow.net/questions/130601/bounds-for-the-median-of-a-set-of-value-bound-numbers-given-their-mean Comment by Benoît Kloeckner Benoît Kloeckner 2013-05-14T16:07:16Z 2013-05-14T16:07:16Z Given the median and the number of values, you can easily compute the largest and smallest possible mean, then answer your question. This is not a suitable question for MO. http://mathoverflow.net/questions/130385/the-isoperimetric-problem-for-domains-constrained-to-lie-between-two-parallel-pla Comment by Benoît Kloeckner Benoît Kloeckner 2013-05-13T13:32:02Z 2013-05-13T13:32:02Z I took the liberty to improve your title and retag your question. http://mathoverflow.net/questions/130468/integrals-as-duality-pairing Comment by Benoît Kloeckner Benoît Kloeckner 2013-05-13T13:18:02Z 2013-05-13T13:18:02Z This is not the place to get a crash course on distributions or duality, and anyway you gave much too little information to get useful advice. http://mathoverflow.net/questions/130474/linear-bijection-between-a-normed-vector-space-and-its-proper-subspace Comment by Benoît Kloeckner Benoît Kloeckner 2013-05-13T13:16:28Z 2013-05-13T13:16:28Z This looks like homework, and anyway is not suited for this site. http://mathoverflow.net/questions/130276/mathcald0-tv-is-dense-in-w0-t Comment by Benoît Kloeckner Benoît Kloeckner 2013-05-10T16:55:35Z 2013-05-10T16:55:35Z What is $T$? What do the $0$s mean? What is $\mathcal{D}$? Is this question research-level? http://mathoverflow.net/questions/130258/distinction-between-function-types Comment by Benoît Kloeckner Benoît Kloeckner 2013-05-10T13:41:38Z 2013-05-10T13:41:38Z This question is not suited for this site, please read the FAQ. http://mathoverflow.net/questions/130231/vehicle-routing-problem-with-several-constraints Comment by Benoît Kloeckner Benoît Kloeckner 2013-05-10T11:58:11Z 2013-05-10T11:58:11Z You should explain a bit what the VRP is, or provide a link to definitions. http://mathoverflow.net/questions/130146/algebraic-topology-in-low-regularity Comment by Benoît Kloeckner Benoît Kloeckner 2013-05-09T11:31:03Z 2013-05-09T11:31:03Z @Ricardo Andrade: you are of course right, but one can take the trace - I edited the question accordingly. http://mathoverflow.net/questions/130071/asymptotics-of-a-function Comment by Benoît Kloeckner Benoît Kloeckner 2013-05-09T07:46:31Z 2013-05-09T07:46:31Z If one wants a crude asymptotic like the one Didier suggests, finding the dominant term for the lower bound and giving the obvious upper bound is sufficient. In any case, all ingredients are given in various comments, so either one cook up an answer that Granger will be able to accept, or we close as « off topic » (if the question is considered too simple) or « non longer relevant », but there is no need letting it popping up. http://mathoverflow.net/questions/130080/functional-equations/130084#130084 Comment by Benoît Kloeckner Benoît Kloeckner 2013-05-08T15:26:19Z 2013-05-08T15:26:19Z Your notation is a bit confusing when dealing with a functional equation, you should use indices $1$ and $2$ for the derivatives rather than variable names.
|
2013-05-25 21:48:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8602710962295532, "perplexity": 531.9752577250824}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706469149/warc/CC-MAIN-20130516121429-00057-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://amforth.sourceforge.net/TG/recipes/I2C-Values.html
|
# I2C EEPROM VALUE¶
A nice feature of the VALUE concept is that the storage where the data is actually kept is not disclosed. That makes it easy to create a VALUE that behaves exactly like any other VALUE and keeps the data in an external I2C EEPROM.
#require value.frt
#require quotations.frt
#require ms.frt
#require i2c-eeprom.frt
\ 17 0 $50 i2c.value "name" : i2c.ee.value ( n addr hwid -- ) (value) over , \ store the addr [: dup @i ( addr ) swap 3 + @i ( hwid) @i2c.ee ;] , [: dup @i ( addr ) swap 3 + @i ( hwid) !i2c.ee 5 ms ;] , dup , \ store hwid !i2c.ee \ store inital data ; The #require directives are processed by the amforth-shell, of you don’t use it, comment them out and make sure that the files and their further dependencies are sent to the controller beforehand. Note the 5 ms delay after writing the data. This is to make sure that the EEPROM gets enough time to complete its internal activities. The use is straightforward. Since there is no memory manager for the serial EEPROM, the location of the data is given explicitly when creating the value: address 0 on the device with the hardware id$50.
(ATmega16)> $beef 0$50 i2c.ee.value answer
ok
Don’t forget to initialize the I2C hardware before use (e.g. in turnkey). Keep in mind, that the data stored in a value is much smaller than the page size of the EEPROM modules. Take care that the address used to place the data doen’t cross the page boundary. Otherwise a wrap-around will happen and likely other data gets currupted.
|
2017-05-27 17:24:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3238677382469177, "perplexity": 2361.3230768484596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608984.81/warc/CC-MAIN-20170527171852-20170527191852-00256.warc.gz"}
|
http://mathhelpforum.com/advanced-algebra/218609-cyclic-subgroup-left-coset-question-print.html
|
# Cyclic subgroup and left coset question
• May 5th 2013, 11:37 PM
Paulo1913
Cyclic subgroup and left coset question
Hi, I have a question that I am not sure how to work out:
Let H be the cyclic subgroup generated by g= ( 1 2 3)
( 1 3 2)
Find all left cosets of S3 modulo H.
Am i correct in that there will be two distinct left cosets, and if so how do I figure out what they are?
• May 6th 2013, 05:03 AM
HallsofIvy
Re: Cyclic subgroup and left coset question
First, S3 has order 6 while $\begin{pmatrix}1 & 2 & 3 \\ 1 & 3 & 2\end{pmatrix}$ has order 2 (g just swaps 2 and 3 so doing g twice swaps back and gives the identity) so that there are 6/2= 3 left cosets. Further, those cosets partition S3 so there are, in fact three left cosets. One, the one containing the identity, is just H itself. $\begin{pmatrix}1 & 2 & 3 \\ 2 & 1 & 3\end{pmatrix}$ is not in H and so generates another coset: taking it with the identity gives itself and taking it with $\begin{pmatrix}1 & 2 & 3 \\ 1 & 3 & 2\end{pmatrix}$ gives $\begin{pmatrix}1 & 2 & 3 \\ 3 & 1 & 2\end{pmatrix}$. Can you find the third coset?
|
2015-07-03 12:21:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8366203308105469, "perplexity": 845.2083162212235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096061.71/warc/CC-MAIN-20150627031816-00160-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://eprints.utas.edu.au/22178/
|
# Across-rotation factors affecting genetic improvement of Eucalyptus globulus in Australia
Whittock, SP 2006 , 'Across-rotation factors affecting genetic improvement of Eucalyptus globulus in Australia', PhD thesis, University of Tasmania.
Preview
PDF (Whole thesis)
Available under University of Tasmania Standard License.
| Preview
## Abstract
In forest tree improvement, ensuring that a breeding objective (BO) is well defined yet broad enough to cope with changes over time, is problematic. Two of the major changes to occur in the Eucalyptus globulus pulpwood plantation industry that may impact on tree improvement that were investigated in this study are coppice management of 2nd rotation crops and international demands for improved sustainability (e.g. the trade in environmental services such as carbon sequestration).
Coppice can provide a cheap alternative to replanting in the 2nd rotation. Regeneration following felling of a 9 year old progeny trial revealed significant genetic diversity in coppicing traits both within and between subraces. After 14 months, 67% of trees coppiced but subrace means varied from 43 to 73%. Heritabilities for coppice success (0.07) and subsequent growth (0.16-0.17) were low but statistically significant. The ability of a tree to coppice was genetically correlated with tree size prior to felling (r$$_g$$ = 0.61), and with nursery-grown seedling traits such as the number of nodes with lignotubers (r$$_g$$ = 0.66) and seedling stem diameter at the cotyledonary node (r$$_g$$ = 0.91). These seedling traits were poorly correlated with later age growth and with each other. The results suggest coppicing is influenced by three independent factors - lignotuber development, enlargement of the seedling stem at the cotyledonary node and vigorous growth.
A discounted cash-flow model was developed to compare the profitability of coppice and seedling crops in 2nd rotation E. globulus pulpwood plantations. A gain of 20% in dry matter production over the original seedling crop from 2nd rotation seedlings (through genetic improvement and provenance selection) would result in equivalent net present value (NPV) for 2nct rotation seedling and coppice crops. Incremental NPV was strongly affected by the level of genetic gain available (the genetic quality of 1 st rotation stock relative to the available genetically improved stock), and the productivity of coppice relative to the first rotation crop.
The integration of environmental services (in the form of carbon seuquestration) into production system models to define economic BOs for the genetic improvement of pulpwood plantations was investigated. Carbon dioxide equivalent accumulation in biomass in the Australian E. globulus plantation estate between 2004 and 2012 was estimated at ~ 146 t CO$$_2$$e ha$$^{-1}$$, of which 62 t C0$$_2$$e ha$$^{-1}$$ were tradable in 2012 and a further 30 t C0$$_2$$e ha$$^{-1}$$ were tradable in 2016. Where revenues for carbon sequestration were dependant upon biomass in a plantation, it was possible to determine whether economic BOs were sensitive to the revenue from carbon sequestration. The correlated response of BOs with and without carbon revenues (Δ$${cG}$$$$_{H1}$$) was 0.93. Where economic BOs were based on maximizing NPV by increasing biomass production, the consideration of carbon provided no significant gain in NPV.
Item Type: Thesis - PhD Whittock, SP Eucalyptus globulus Copyright 2005 the author - The University is continuing to endeavour to trace the copyright owner(s) and in the meantime this item has been reproduced here in good faith. We would be pleased to hear from the copyright owner(s). Chapter 3 appears to be the equivalent of a pre-print version of an article published as: Whitlock, S. P., Apiolaza, L. A., Kelly, C. M., Potts, B. M., 2003. Genetic control of coppice and lignotuber development in Eucalyptus globulus, Australian journal of botany, 51(1) 57-67. The published version is included in the appendicesChapter 4 appears to be the equivalent of a pre-print version of an article published as: Whitlock, S. P., Greaves, B. L., Apiolaza, L. A., 2004. A cash flow model to compare coppice and genetically improved seedling options for Eucalyptus globulus pulpwood plantations, Forest ecology and management, 191(1-3), 267–274Chapter 5 appears to be the equivalent of a pre-print version of an article published as: Whittock, S. P., Apiolaza, L. A., Dutkowski, G. W., Greaves, B. L., Potts, B. M., 2004. Carbon revenues and economic breeding objectives in Eucalyptus globulus pulpwood plantations. In, Proceedings of the IUFRO conference "Eucalyptus in a changing world", Aveiro, Portugal. Eds. Borralho, N. M. G., Pereira, J. S., Marques, C., Coutinho, J., Madeira, M., Tome, M. RAIZ, Instituto Investigação da Floresta e Papel, Portugal. 11-15 October, 2004. p 146-150 View statistics for this item
|
2021-09-17 15:55:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44620880484580994, "perplexity": 10796.829088487155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055684.76/warc/CC-MAIN-20210917151054-20210917181054-00646.warc.gz"}
|
https://www.global-sci.org/intro/article_detail/ata/13111.html
|
Volume 35, Issue 2
The Neumann Problem of Complex Special Lagrangian Equations with Supercritical Phase
Anal. Theory Appl., 35 (2019), pp. 144-162.
Published online: 2019-04
Cited by
Export citation
• Abstract
Inspired by the Neumann problem of real special Lagrangian equations with supercritical phase, we consider the Neumann problem of complex special Lagrangian equations with supercritical phase in this paper, and establish the global $C^2$ estimates and the existence theorem by the method of continuity.
• Keywords
Special Lagrangian equation, Neumann problem, supercritical phase.
35J60, 35B45
• BibTex
• RIS
• TXT
@Article{ATA-35-144, author = {}, title = {The Neumann Problem of Complex Special Lagrangian Equations with Supercritical Phase}, journal = {Analysis in Theory and Applications}, year = {2019}, volume = {35}, number = {2}, pages = {144--162}, abstract = {
Inspired by the Neumann problem of real special Lagrangian equations with supercritical phase, we consider the Neumann problem of complex special Lagrangian equations with supercritical phase in this paper, and establish the global $C^2$ estimates and the existence theorem by the method of continuity.
}, issn = {1573-8175}, doi = {https://doi.org/10.4208/ata.OA-0003}, url = {http://global-sci.org/intro/article_detail/ata/13111.html} }
TY - JOUR T1 - The Neumann Problem of Complex Special Lagrangian Equations with Supercritical Phase JO - Analysis in Theory and Applications VL - 2 SP - 144 EP - 162 PY - 2019 DA - 2019/04 SN - 35 DO - http://doi.org/10.4208/ata.OA-0003 UR - https://global-sci.org/intro/article_detail/ata/13111.html KW - Special Lagrangian equation, Neumann problem, supercritical phase. AB -
Inspired by the Neumann problem of real special Lagrangian equations with supercritical phase, we consider the Neumann problem of complex special Lagrangian equations with supercritical phase in this paper, and establish the global $C^2$ estimates and the existence theorem by the method of continuity.
Chuanqiang Chen, Xinan Ma & Wei Wei. (2020). The Neumann Problem of Complex Special Lagrangian Equations with Supercritical Phase. Analysis in Theory and Applications. 35 (2). 144-162. doi:10.4208/ata.OA-0003
Copy to clipboard
The citation has been copied to your clipboard
|
2022-05-17 02:08:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7417542338371277, "perplexity": 2185.330023442656}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662515466.5/warc/CC-MAIN-20220516235937-20220517025937-00538.warc.gz"}
|
https://asmedc.silverchair.com/solarenergyengineering/article-abstract/130/3/031018/469302/Rotor-Blade-Sectional-Performance-Under-Yawed?searchresult=1
|
This study shows the results of pressure distribution measurements on a rotor blade of a horizontal axis wind turbine under various yawed operations. The experiments are carried out in a wind tunnel with a $2.4m$ diameter test rotor. In the measurements, the power curve and pressure distributions are measured for different azimuth angles. By increasing yaw angle, the maximum value of power coefficient of the rotor decreases. The sign of the yaw angle does not have any effect on power performance. The aerodynamic forces are discussed using the axial and rotational force coefficients for each azimuth angle. In the case of higher tip speed ratios, the blade section passing on the upstream side in yawed operations has a greater contribution to the rotor torque than that on the downstream side. In this tip speed range, the aerodynamic forces at the 70% radius section appear proportional to the angle of attack. In the case of the lower tip speed ratios, the blade on the downstream side does not contribute to rotor torque, which appears to result from separation.
1.
Snel
,
H.
, 1998, “
Review of the Present Status of Rotor Aerodynamics
,”
Wind Energy
1095-4244,
1
(
S1
), pp.
46
69
.
2.
Vermeer
,
L. J.
,
Sørensen
,
J. N.
, and
Crespo
,
A.
, 2003, “
Wind Turbine Wake Aerodynamics
,”
Prog. Aerosp. Sci.
0376-0421,
39
(
6–7
), pp.
467
510
.
3.
Schepers
,
J. G.
,
Brand
,
A. J.
,
Bruining
,
A.
,
Graham
,
J. M. R.
,
Hand
,
M. M.
,
Infield
,
D. G.
,
,
H. A.
,
Maeda
,
T.
,
Paynter
,
J. H.
,
van Rooij
,
R.
,
Shimizu
,
Y.
,
Simms
,
D. A.
, and
Stefanatos
,
N.
, 2002, “
Final Report of IEA Annex XVIII: ‘Enhanced Field Rotor Aerodynamics Database
,’” ECN-C-02-016, p.
353
.
4.
Maeda
,
T.
,
Ismaili
,
E.
,
Kawabuchi
,
H.
, and
,
Y.
, 2005, “
Surface Pressure Distribution on a Blade of a 10m Diameter HAWT (Field Measurements Versus Wind Tunnel Measurements)
,”
ASME J. Sol. Energy Eng.
0199-6231,
127
(
2
), pp.
185
191
.
5.
Simms
,
D.
,
Schreck
,
S.
,
Hand
,
M.
, and
Fingersh
,
L. J.
, 2001, “
NREL Unsteady Aerodynamics Experiment in the NASA-Ames Wind Tunnel: A Comparison of Predictions to Measurements
,” NREL∕TP-500-29494.
6.
Imamura
,
H.
,
Takezaki
,
D.
,
Hasegawa
,
Y.
,
Kikuyama
,
K.
, and
Kobayashi
,
K.
, 2004, “
Numerical Analysis of a Local Angle of Attack to HAWT Rotor Blade in Unsteady Flow Conditions
,”
Proceedings of European Wind Energy Conference & Exhibition 2004
,
London, UK
, CD-ROM, p.
8
.
7.
Pesmajoglou
,
S.
, and
Graham
,
J. M. R.
, 1993, “
Prediction of Yaw Loads on a Horizontal Axis Wind Turbine
,”
Proceedings of European Community Wind Energy Conference
,
Lübeck-Travemünde
,
Germany
, pp.
420
423
.
8.
Haans
,
W.
,
Sant
,
T.
,
van Kuik
,
G.
, and
van Bussel
,
G.
, 2005, “
Measurement of Tip Vortex Paths in the Wake of a HAWT Under Yawed Flow Conditions
,”
ASME J. Sol. Energy Eng.
0199-6231,
127
(
4
), pp.
456
463
.
9.
Medici
,
D.
, and
Alfredsson
,
P. H.
, 2006, “
Measurements on a Wind Turbine Wake: 3D Effects and Bluff Body Vortex Shedding
,”
Wind Energy
1095-4244,
9
(
3
), pp.
219
236
.
10.
Grant
,
I.
,
Parkin
,
P.
, and
Wang
,
X.
, 1997, “
Optical Vortex Tracking Studies of a Horizontal Axis Wind Turbine in Yaw Using Laser-Sheet, Flow Visualization
,”
Exp. Fluids
0723-4864,
23
, pp.
513
519
.
11.
Maeda
,
T.
,
,
Y.
,
Sakai
,
Y.
, and
Takahara
,
N.
, 2005, “
Experimental Study on Flow Around Blades of Horizontal Axis Wind Turbine in Wind Tunnel
,”
Trans. Jpn. Soc. Mech. Eng., Ser. B
0387-5016,
71
(
701
), pp.
171
176
.
12.
Maeda
,
T.
,
,
Y.
,
Sakai
,
Y.
, and
Takahara
,
N.
, 2005, “
Experimental Study on Flow Around Blades of Horizontal Axis Wind Turbine in Wind Tunnel (Second Report Studies on the Flow Around Blade Based on Pressure Distribution)
,”
Trans. Jpn. Soc. Mech. Eng., Ser. B
0387-5016,
71
(
705
), pp.
1383
1389
.
13.
Amandolèse
,
X.
, and
Szèchényi
,
E.
, 2004, “
Experimental Study of the Effect of Turbulence on a Section Model Blade Oscillating in Stall
,”
Wind Energy
1095-4244,
7
(
4
), pp.
267
282
.
|
2022-05-21 22:55:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38391441106796265, "perplexity": 11340.559872490117}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662541747.38/warc/CC-MAIN-20220521205757-20220521235757-00019.warc.gz"}
|
https://electronics.stackexchange.com/questions/137090/bootstrap-circuit-for-high-side-mosfet-driver
|
# Bootstrap circuit for high-side MOSFET driver
I am very familiar with the operation of bootstrap drivers on MOSFET driver ICs for switching an N-channel high-side MOSFET. The basic operation is covered exhaustively on this site and others.
What I don't understand is the high-side driver circuitry itself. Since a good driver pushes and pulls large amounts of current, it makes sense that another pair of transistors exist within the IC to drive the VH pin high or low. Several datasheets I've looked at seem to indicate they use a P-channel/N-channel pair (or PNP/NPN). Taking away the construct of the IC chip, I imagine the circuit looks something like this:
simulate this circuit – Schematic created using CircuitLab
It seems that we've just introduced a recursion problem. Assuming the node marked as "floating" can be any arbitrarily high voltage, how are M3 and M4 driven that doesn't need yet another driver to drive the driver (and so on and on)? This is also assuming the high-side driver is ultimately controlled by a logic-level signal of some kind.
In other words, given an arbitrarily high floating voltage, how is the push-pull drive of M3 and M4 activated by a logic-level signal that originates from off the chip?
Point of clarification: The specific question I'm asking has only to do with activating the high-side push-pull bootstrap drive with a logic-level signal. When the high-side voltage is relatively low, I recognize this is trivial. But as soon as the voltages exceed typical Vds and Vgs ratings on transistors, this becomes harder to do. I would expect some kind of isolation circuitry to be involved. Exactly what that circuitry looks like is my question.
I recognize that if M4 is a P-channel FET (or PNP), another bootstrap circuit is not necessary. But I'm having trouble conceiving of a circuit that will generate the proper Vgs's for both M4 and M3 as the external transistors are switched back and forth.
Here are screen captures from two different datasheets that show a similar circuit to what I drew above. Neither go into any detail about the "black-box" driver circuitry.
From the MIC4102YM:
And the FAN7380:
• Dan, since you wrote that you've looked at several datasheets, could you post the links to them? That would provide a nice context. Nov 4 '14 at 0:10
• Sure, I'll update the question with some examples I found. Nov 4 '14 at 2:26
• Dan, earlier in this answer I have detailed the operation of a bootstrap gate driver like FAN7380. Nov 4 '14 at 3:17
• Nick, I actually found that answer earlier before posting my question (although the fact I used the same image from the FAN7380 datasheet is a coincidence). I'm fairly comfortable with using a driver IC with a bootstrap gate drive. The specific question I'm asking is what the gate drive circuit actually looks like. The box marked as just "driver" in the image. Basically, specific details about step 4 of your answer to that earlier question. Nov 4 '14 at 3:27
• Right, the push-pull pair is what I figured in my question. I'm still missing something though. How does the push-pull drive activate for arbitrarily high floating voltages? That's the crux of my question, I suppose. Nov 4 '14 at 3:51
simulate this circuit – Schematic created using CircuitLab
Note 1: The input voltages are only $$\V_{cc}\$$ and $$\V_\text{High Voltage}\$$. You don't apply anything at the $$\V_{BS}\$$ node. It is only for representation.
Note 2: Notice that there are two different type of grounds. Those grounds must not be directly connected to each other.
You must drive the MOSFET between its gate and source terminals. Since the source terminal voltage of a high side MOSFET will be floating, you need a separate voltage supply (VBS: $$\V_\text{Boot Strap}\$$) for the gate drive circuit.
In the schematic below, VCC is the voltage source of the rest of the circuit. When the MOSFET is off, ground of the boot strap circuit is connected to the circuit ground, thus C1 and C2 charge up to the level of Vcc. When the input signal arrives to turn the MOSFET on, ground of the gate drive circuit rises up to the drain voltage of the MOSFET. The D1 diode will block this high voltage, so the C1 and C2 will supply the driving circuit during the on-time. Once the MOSFET is off again, C1 and C2 replenish their lost charges from VCC.
Design criteria:
• RB must be chosen as low as possible that will not damage D1.
• Capacity of C2 must be chosen enough to supply the driving circuit during the longest on-time.
• Reverse voltage rating of D1 must be above $$\V_\text{High Voltage} - V_\text{CC}\$$.
The input signal must be isolated from the boot-strap circuit. Some possible isolaters are:
### Optocoupler
Optocoupler is the most basic method for isolation. They are very cheap compared to other methods. The cheap ones have propagation delay times down to 3$$\\mu\$$s. The ones with less than 1$$\\mu\$$s propagation delay are as expensive as isolated gate drivers though.
### Pulse Transformer
Pulse transformer is a spacial type of transformer for transferring rectangular pulses. They have less number of turns in order to avoid parasitic capacitance and inductance and larger cores for compensating loss of inductance due to reduced number of turns. They are much faster than optocouplers. Delay times are less than 100ns in general. The image above is for illustration only. In practice, the current they can provide is not enough for driving a MOSFET fast; so they need additional circuitry in practice.
### Isolated Gate Driver
Isolated gate driving is a relatively new technology. All the complexity of gate driving is encapsulated in one single chip. They are as fast as pulse transformers, yet they can provide a few amperes of peak gate current. Some products also contain on-chip isolated DC-DC converters, so they don't even need boot-strapping. However, all these super features come with a cost.
• hkBattousai, thank you for taking the time to write an answer. If you expand on the last three bullet points (that address the question I asked) and remove the details about the basics of bootstrap drivers (that I mention in the first paragraph of my question that I'm already familiar with), you'll have my +1. The opto-isolator circuit is great and I was hoping to get answers that focus entirely on that part of the driver instead of the general basics of how bootstraps work. Nov 4 '14 at 16:19
• I think we shouldn't remove details of boot-strapping. Other users may benefit from it. Nov 4 '14 at 17:29
• I'm fine with that, as long as the answer is now focused mostly on the specific question (as it is now). Thank you and +1. Nov 4 '14 at 17:35
• Hi, I see the last image you provided is very similar to the schematic of the ADuM3220 gate driver. My question is if this requires bootstrapping to power the high-side MOSFET? IF not, do you have an example of a product with an on-chip isolated dc-dc converter? Thanks
– rrz0
Apr 9 '18 at 16:38
• @Rrz0 In this table, for a product listed in a row, if the string in the column "Isopower Enabled" is "Yes", then it has internal DC-DC power supply. Apr 10 '18 at 7:35
Um, the IC has internal "level-shift" circuit.
And the level shift circuit maybe like this, this is similar with FAN7380:
The two NMOS before Pulse Filter is relative to the true ground, and the difference signal it routed to Pulse Filter. After Pulse Filter, the ground is floating on $V_{SRC}$, and the supply is $V_{BST}$.
And below is IR2110's block diagram (From International Rectifier AN978-b):
• Yes, chips have a level shifter of some kind. How it implements the level shifter for an arbitrarily high voltage is the specific question I'm asking. Nov 4 '14 at 16:24
• I've edited my question to add an extra paragraph to clarify. Nov 4 '14 at 16:34
|
2021-09-17 18:54:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5799193382263184, "perplexity": 1485.785116006013}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055775.1/warc/CC-MAIN-20210917181500-20210917211500-00716.warc.gz"}
|
https://studydaddy.com/question/assignment-154
|
QUESTION
assignment
1. Select one article.
Select an article from a refereed journal pertaining to ‘diversity in workplace’. The article to be reviewed must have been published within the past 5 years. You are to submit your chosen article along with this assignment.
("Refereed" means that the article has been formally reviewed by a group of peer researchers.)
2. Write a Review
Write a review of the article according to the following format:
1. Introduction:
State the Objectives, Article Domain and Audience. You may adopt the following sample:
• State the objectives (goals or purpose) of the article.
• What is the article's domain (topic area)?
• Audience: State the article's intended audience. At what level is it written, and what general background should the reader have; what general background materials should the reader be familiar with to understand the article?
(8)
1. Brief Summary
For your article review, you do not have to spend much space summarizing the article. Instead the analysis of the article is more important. Thus, in this section, you are only required to summarise the article only very briefly (1 paragraph).
(6)
2. Results/ Findings
Very briefly summarise the important points (observations, conclusions, findings). Do not repeat lists of items in the articles. Just summarise the essence of these if you feel they are necessary to be included . (1-2 paragraphs).
(6)
3. Analysis
How do you find the article's models, frameworks, theory, and guidelines, etc. applicable to the stakeholders. (2-3 paragraphs).
(10)
1. Contributions
How does this article contribute to the knowledge in a research field, to researchers and managers / policy makers (2-3 paragraphs)?
(10)
(Total : 40)
(GRAND TOTAL : 100)
Assignment Format:
1. Use double spacing and 12-font size Times New Roman.
2. This assignment should contain about 3000-5000 words (15-20 pages).
3. Provide references: References should use the American Psychological Association (APA) format.
4. References should be current (year 2010 and onwards).
• @
• 44 orders completed
Tutor has posted answer for $20.00. See answer's preview$20.00
|
2019-05-24 01:35:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20030544698238373, "perplexity": 4105.148362710467}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257481.39/warc/CC-MAIN-20190524004222-20190524030222-00272.warc.gz"}
|
http://mathoverflow.net/feeds/question/13005
|
What is 'formal' ? - MathOverflow most recent 30 from http://mathoverflow.net 2013-06-19T16:53:38Z http://mathoverflow.net/feeds/question/13005 http://www.creativecommons.org/licenses/by-nc/2.5/rdf http://mathoverflow.net/questions/13005/what-is-formal What is 'formal' ? Xiao Xinli 2010-01-26T03:02:53Z 2010-02-01T08:41:00Z <p>The key step in Kontsevich's proof of deformation quantization of Poisson manifolds is the so-called formality theorem where 'a formal complex' means that it admits a certain condition. I wonder why it is called 'formal'. I only found the definition of Sullivan in Wikipedia: 'formal manifold is one whose real homotopy type is a formal consequence of its real cohomology ring'. But still I am confused because most of articles I found contain the same sentence only and I cannot understand the meaning of 'formal consequence'. Does anyone know the history of this concept?</p> http://mathoverflow.net/questions/13005/what-is-formal/13019#13019 Answer by Kevin Lin for What is 'formal' ? Kevin Lin 2010-01-26T06:44:35Z 2010-02-01T08:23:31Z <p>I would guess that the terminology goes back to the work of Sullivan and Quillen on rational homotopy theory. You should probably also look at the paper of <a href="http://www.springerlink.com/content/m48544t785221635/" rel="nofollow">Deligne-Griffiths-Morgan-Sullivan</a> on the real homotopy theory of Kähler manifolds. Actually, I think that at least some familiarity with the DGMS paper is an important prerequisite for understanding many of Kontsevich's papers.</p> <p>I am not totally sure, but I believe that the definitions are as follows:</p> <ul> <li><p>A differential graded algebra $(A,d)$ is called formal if it is quasi-isomorphic (in general, if we work in the category of dg algebras and not, say, the category of A-infinity algebras, we need a "zig-zag" of quasi-isomorphisms) to $H^\ast(A,d)$ considered as a dg algebra with zero differential.</p></li> <li><p>A space X is called formal (over the rationals resp. the reals) if its cochain dg algebra $C^\ast(X)$ (with rational resp. real coefficients) with the standard differential is a formal dg algebra.</p></li> </ul> <p>One of the things I'm not sure about is whether in the definition we should require $H^\ast(A,d)$ to be commutative; but for spaces this is not an issue since $H^\ast(X)$ is always (graded-)commutative.</p> <p>The DGMS paper proves that if X is a compact Kähler manifold, then the de Rham dg algebra consisting of (real, $C^\infty$) differential forms on X with the standard de Rham differential is a formal dg algebra.</p> <p>The phrase "the real (resp. rational) homotopy type of X is a formal consequence of the real (resp. rational) cohomology ring of X", which appears in e.g. the DGMS paper, simply means that the real (resp. rational) homotopy theory of X is determined by (and is probably explicitly and algorithmically computable from?) the cohomology ring of X. In other words, if X and Y are formal (over the rationals resp. the reals) and have isomorphic (rational resp. real) cohomology rings, then their respective (rational resp. real) homotopy theories are the same (and are explicitly computable, if we know the cohomology ring(s)?). For example, the ranks of their homotopy groups will be equal.</p> <p>Actually I am not totally sure whether what I said in the last paragraph is true. I think it's true when X and Y are simply connected. I'm not sure about what happens more generally.</p> <p>In the context of rational homotopy theory, I think the term "formal" is fine, for the reasons I've explained above. Perhaps in the more general context of dg algebras, the use of the term "formal" makes less sense. However, I think that it is still reasonable, for the following reasons. Let me use the more "modern" language of A-infinity algebras. In general, it is not true that a dg algebra $(A,d)$ is quasi-isomorphic to $H^\ast(A,d)$ considered as a dg algebra with zero differential. However, it is a "standard" fact (Kontsevich-Soibelman call this the "homological perturbation lemma" (for example, it's buried somewhere in <a href="http://arxiv.org/abs/math/0011041" rel="nofollow">this paper</a>), and you can find it in the operads literature as the "transfer theorem") that you can put an A-infinity structure on $H^\ast(A,d)$ which makes $A$ and $H^\ast(A,d)$ quasi-isomorphic as A-infinity algebras. The A-infinity structure manifests itself as a series of $n$-ary products satisfying various compatibilities. Intuitively at least, these $n$-ary products should be thought of as being analogous to Massey products in topology. So $H^\ast(A,d)$ with this A-infinity structure does carry some "homotopy theoretic" information. In this language then, a dg algebra $(A,d)$ is formal if it is quasi-isomorphic, as an A-infinity algebra, to $H^\ast(A,d)$ with all higher products zero. In other words, all of the "Massey products" vanish*, and thus the only remaining "homotopy theoretic" information is that coming from the ordinary ring structure on $H^\ast(A,d)$.</p> <p><hr /></p> <p>*Don Stanley notes correctly that vanishing of Massey products is weaker than formality. However, I believe that triviality of the A-infinity structure is equivalent to formality. In the language of the DGMS paper, which does not use the A-infinity language, they say that formality is equivalent to the vanishing of Massey products "in a uniform way". I believe this uniform vanishing is the same as triviality of A-infinity structure. From the paper:</p> <blockquote> <p>... a minimal model is a formal consequence of its cohomology ring if, and only if, all the higher order products vanish in a uniform way.</p> </blockquote> <p>and also</p> <blockquote> <p>[Choosing a quasi-isomorphism from a minimal dg algebra to its cohomology] is a way of saying that <em>one may make uniform choices so that the forms representing all Massey products and higher order Massey products are exact</em>. This is stronger than requiring each individual Massey product or higher order Massey product to vanish. The latter means that, given one such product, choices may be made to make the form representing it exact, and there may be no way to do this uniformly.</p> </blockquote> <p>(Sorry for the proliferation of parentheses, and sorry for my lack of certainty on all of this, I have not thought about this in a while. People should definitely correct me if I'm wrong on any of this.)</p> http://mathoverflow.net/questions/13005/what-is-formal/13028#13028 Answer by Agusti Roig for What is 'formal' ? Agusti Roig 2010-01-26T09:17:44Z 2010-01-26T09:17:44Z <p>Maybe you could take a look at</p> <p>Y. Félix, J. Oprea, D. Tanré; Algebraic models in Geometry, Oxford Graduate Text in Math. 17 (2008)</p> <p>where they talk about formality in the context of rational homotopy theory, RHT, (for instance, in sections 2.7 and 3.1.4). Also the more classical, but excellent little book</p> <p>D. Lehmann; Théorie homotopique des formes différentielles, Astérisque 45</p> <p>is worth reading (section V.9).</p> <p>As for formality in the context of operads, allow me a little self-promotion :-) :</p> <p>F. Guillén, V. Navarro, P. Pascual, Agustí Roig, Moduli spaces and formal operads; Duke Math. J. 129, 2 (2005).</p> <p>In this work, we translate some classical results concerning formality in RHT to chain operads. For instance, the Deligne-Griffiths-Morgan-Sullivan theorem about formality of Kähler manifolds, formality's independence of the ground field... And extend them also to modular operads.</p> http://mathoverflow.net/questions/13005/what-is-formal/13045#13045 Answer by Don Stanley for What is 'formal' ? Don Stanley 2010-01-26T14:57:56Z 2010-01-26T15:30:32Z <p>Formal can mean slightly different things in different contexts.</p> <p>A commutative differential graded algebra (CDGA) is formal if it is quasi-isomorphic to it's homology. This is stronger than having all the higher Massey products equal to 0 (I think there are such examples in the Halperin-Stasheff paper). </p> <p>To a space you can associate a CDGA (via Sullivan's $A_{pl}$ functor) which is basically the deRham complex when the space is a manifold. In nice cases this functor induces an equivalence from the rational homotopy category to the homotopy category of CDGA. Quasi-isormorphic CDGA correspond to (rationally) homotopy equivalent spaces. You can also tensor with the reals to get real CDGA. </p> <p>If A is a CDGA which is quasi-isomorphic to $A_{pl}(X)$ for a space $X$ then A is often called a model of X. A space is formal if $A_{pl}$ of it is formal. So a formal space is modeled by its cohomology. In that sense its rational homotopy type is a formal consequence of its cohomology.</p> <p>I think you have to be slightly careful with using $C^*$. This functor lands in differential graded algebra which are not commutative, so possibly the notion of formality could be different. In particular if you consider two CDGA there may be more strings of quasi-isomorphisms between them as DGAs then as CDGAs. I believe it is unknown if two CDGA that are quasi-isomorphic as DGA have to be quasi-isomorphic as CDGA. </p> http://mathoverflow.net/questions/13005/what-is-formal/13063#13063 Answer by Agusti Roig for What is 'formal' ? Agusti Roig 2010-01-26T19:11:21Z 2010-02-01T08:41:00Z <p>Paraphrasing Groucho Marx: if you don't like my first answer..., well I have another one. :-)</p> <p>Here it is: let $X$ be a simply connected differentiable manifold.</p> <p>Rational homotopy theory tells us that the <em>rational homotopy type</em> of $X$ (that is, its homotopy type modulo torsion) is contained in its <em>minimal model</em>, <code>$M_X$</code>, which is a <em>commutative</em> differential graded (cdg) algebra.</p> <p>By definition, this means that you have a quasi-isomorphism (<em>quis</em>, a morphism of cdg algebras inducing an isomorphism in cohomology)</p> <p>$$M_X \longrightarrow \Omega^*(X) \ .$$</p> <p>Here, $\Omega^* (X)$ is the algebra of differential forms of $X$ and the <em>minimality</em> of $M_X$ means that, in a certain, but precise, sense, it is the smallest cdg algebra for which such a quis exists.</p> <p>The fact that $M_X$ <em>contains</em> the rational homotopy type of $X$ implies, for instance, that you can obtain the ranks of the homotopy groups of $X$ from it:</p> <blockquote> <p>rank $\pi_n(X) =$ number of degree n generators (as an algebra) of $M_X$, for $n \geq 2$.</p> </blockquote> <p>Nice, isn't it? :-)</p> <p>The problem is that the algebra $\Omega^*(X)$ is, in general, not computable, so you can not obtain from it the minimal model $M_X$. And here is where formality comes to help you.</p> <p>Almost by definition, $X$ is a <em>formal</em> space if there exists two quis</p> <p>$$\Omega^*(X) \longleftarrow M_X \longrightarrow H^*(X;\mathbb{Q})<br />$$</p> <p>Hence, if $X$ is formal you can compute its minimal model $M_X$, and hence its rational homotopy type, directly from the cohomology algebra <code>$H^*(X; \mathbb{Q})$</code>, which is nicer (smaller, more computable) than $\Omega^*(X)$.</p> <p>And the final point is that there are plenty of examples of spaces which are known to be formal.</p> <p>(Final remark: Actually, you'd have to put <code>$A_{PL}^*(X;\mathbb{Q})$</code> instead of $\Omega^*(X)$ to work over the rationals, but this you can find it explained in the references we have provided for you.)</p>
|
2013-06-19 16:53:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8817619681358337, "perplexity": 911.2502313505013}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708946676/warc/CC-MAIN-20130516125546-00053-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://www.newfreesoft.com/linux/linux_excellent_text_editor_markdown_latex_mathjax_6357/
|
Home IT Linux Windows Database Network Programming Server Mobile
Home \ Linux \ Linux excellent text editor (Markdown, LaTeX, MathJax) - Zabbix installation under Linux (Server) - Quickly locate the mistakes by gdb location (Programming) - How to release the cache memory on Linux (Linux) - Use HugePages optimize memory performance (Database) - Monitoring services are running properly and will email alert (Server) - Use Ambari rapid deployment Hadoop big data environment (Server) - Disk storage structure and file recovery experiment (FAT file system) (Linux) - Use Vagrant up a local development environment tutorials (Server) - TL-WR703N to install OpenWrt process notes (Linux) - How Vim playing a mature IDE (Linux) - Installation under Linux Mint system guidelines for Gtk (Linux) - Oracle 12c In-Memory Study (Database) - Safety testing Unix and Linux server entry succinctly (Linux) - Radius server setup under CentOS (Server) - Python in yield (Programming) - Ubuntu15 core CLR (Server) - Oracle ordinary users show parameter method (Database) - Using PPA to install the lightweight theme software HotShots 2.1.0 under Ubuntu (Linux) - Generated characters using Java Videos (Programming) - Linux + Apache + PHP + Oracle based environment to build (Server) Linux excellent text editor (Markdown, LaTeX, MathJax) Add Date : 2018-11-21 Such a title may not be accurate, because it really can not explain exactly what "Linux under excellent text editor." In fact, I would like to explore this essay mainly Markdown, LaTeX, MathJax, interested friends can continue to look down, but do not forget easily point a praise. introduction We write articles what tools? Windows Notepad it? Certainly not now! Most people should use at least a similar Word "WYSIWYG" visual editor. Reason: Because the article is not just text Well, it contains a wide variety of formats, such as font, size, color, headers, lists, etc. "WYSIWYG" editor provides editing method is indeed the most simple editor thought: When you want to change some of the text style, you only need to select it, then set it in the various menus, dialog format can be. This layout is very beautiful, very rich style of the article, can be called "rich text." If you were thinking deeper, you will find "rich text" many shortcomings, especially for those of us programmers TECHNOLOGY M is even more so. Below we cite a few examples: "Rich Text" editor up too slow, when writing, that is, to consider the content of the article, but also consider the article format, for a long time to write a few paragraphs word will point the mouse; "Rich Text" need a professional editor to edit and read, if not the editor, or editors are not compatible, and that only the tears; "Rich Text" tends to make form override logic, the article may be viewed at all levels from the appearance of the title text size, indents are correct, but you can not specify a logical hierarchy; "Rich text" formatting information too redundant, and flooded the contents of the article; "Rich Text" unfriendly computer, opaque storage format does not say, but also to text-based tool for comparing (such as the type diff) useless. So, good thought should be like this: Articles should be stored as plain text, any tool can read and edit; The contents of plain text that is to be fit for human reading, but also to computer easily understood; Can specify the logical structure of the various parts of the article is correct; Separation of content and display, the authors consider only the logical structure and content of the article, and the article shows how good-looking is a matter of professional people and tools. This is what I said in the title of "text editor thought." This idea in the computer field for a long time, and gradually formed a philosophy. For example, the Internet is widely used in HTML, XML, etc., is to save the message as plain text, any tool can read and edit, and can correctly specify the logical structure of the content, and CSS and the browser how to display the control article. However, HTML tag or too much, if not the browser entirely by artificial brain supplement or too difficult to read. Thus, he was born Markdown. In the text edit field there is another problem, that is the mathematical formula (mathematical formula or other similar things, such as music), they show up is two-dimensional, and many of them use standard symbols keyboard can not input, the font used and the body is not the same. Fortunately, Unix / Linux world, they have a better solution, and that is LaTeX. Of course, there are a lot of visual mathematical formula editor provides editing functions such as Word equation editor, such as TexMacs. However, as mentioned above, from the excellent "text editor thought" to think that the best is still LaTeX, LaTeX is because the use of plain text mode input mathematical formulas, input speed, and computers are easy to understand. LaTeX ideas widespread impact, many editors support LaTeX syntax to enter mathematical formulas. To display a mathematical formula on the page, the non MathJax must go, it is a JavaScript library that mathematical formula to identify the page in LaTeX format and perfectly displayed. Mathematical formulas blog Park is supported with this. Markdown features and tools Markdown's mission is "Easy to read and write", so use Markdown syntax to write documents in plain text directly reading is also very convenient. If the appearance of the requirements are very high, it is possible through the appropriate tools Markdown documents into HTML or PDF. Markdown syntax is very simple, in general, more than one hour to learn. At present, I made blog is basically using Markdown editor blog in blog park in the garden. Markdown realize blog Park is not perfect, for example, there is no instant preview feature, there is no segment continued behavior function, \ $\ incorrect interpretation, etc., but it is very comfortable to use, in addition to upload pictures, substantially without moving the mouse. Why behavior segment continued function is very important? Continued behavior section, another way is to ignore the non-blank line after the line breaks. This feature is very important. Without this feature, a section of text is a long, long, long line, which is based on comparing the text automation tools (such as diff) is undoubtedly a fatal blow. You can display line numbers for the editor concerned, the line numbers will be very jump, people looked uncomfortable. But the most important is that users do not always know exactly wrap it added to their own, or because the computer screen is not wide enough and wrap lines. So whether it is Markdown or LaTeX, allows any author newline in the source code, as long as there are no blank lines of text lines split, these lines will be merged into a single paragraph. If you want to wrap in Markdown, you need at the end of a line increased by at least two spaces, and also allows users to use LaTeX \\ manual line breaks. So the question is, why get rid of garden blog such an important function? In the Linux desktop, I use ReText edit Markdown document. Only in Ubuntu sudo apt-get install retext which can install the software Configuring ReText use more beautiful CSS Just installed ReText to preview the document may not be so good I figure above, should be like References and code and body distinction is not obvious, and the font of the entire document display does not look good. The reason this happens is because no corresponding CSS file to ReText. The only regret is that certain configurations ReText can not be done through the menu, you must manually modify the configuration file ~ / .config / ReText project / ReText.conf As for where to find a nice CSS, then the eyes of the beholder, the wise see wisdom. There are many good garden blog can learn. Open support for mathematical formulas Display mathematical formulas in Web pages thanks MathJax. Park open support blog mathematical formula is very simple, make a hook in admin page on the line. Due to the use of$ MathJax defined mathematical formulas, so there are a lot of articles or reviews $sign students should pay attention. Think about it, my article with$ more? Really a lot, when AT & T introduced assembler syntax used to explore the Bash scripting, they also used, so publishing two articles that really cost me a lot of strength. In addition to \$, MathJax also use \ (and \), $$and$$ \ [and \] to define mathematical formula. Since MathJax is so famous and excellent, so there is a corresponding MathJax ReText extension, but this time to modify the configuration file is ~ / .config / markdown-extensions.txt. The first line of the configuration file is to open mathjax extension. As for the other extensions and features, you can read ReText help documentation. See the text of the effect of the mathematical formula. Effectiveness and efficiency are good, oh! At this point, my paperwork formally MathJax Markdown and take full control. More:
|
2019-02-21 10:41:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4309330880641937, "perplexity": 3022.483889298257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247503844.68/warc/CC-MAIN-20190221091728-20190221113728-00586.warc.gz"}
|
http://www.jstor.org/stable/10.4169/amer.math.monthly.119.05.415
|
## Access
You are not currently logged in.
Access your personal account or get JSTOR access through your library or other institution:
# Flexagons Lead to a Catalan Number Identity
David Callan
The American Mathematical Monthly
Vol. 119, No. 5 (May 2012), pp. 415-419
DOI: 10.4169/amer.math.monthly.119.05.415
Stable URL: http://www.jstor.org/stable/10.4169/amer.math.monthly.119.05.415
Page Count: 5
Item Type
Article
References
Preview not available
## Abstract
Abstract Hexaflexagons were popularized by the late Martin Gardner in his first Scientific American column in 1956. Oakley and Wisner showed that they can be represented abstractly by certain recursively defined permutations called pats, and deduced that they are counted by the Catalan numbers. Counting pats by the number of descents yields the identity \documentclass{article} \pagestyle{empty}\begin{document} $$\sum_{k=0}^{n}\frac{1}{2n-2k+1}\binom{2n-2k+1}{k}\binom{2k}{n-k} = C_{n},$$ \end{document} where only the middle third of the summands are nonzero.
|
2016-10-21 02:08:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7123296856880188, "perplexity": 4215.530449906773}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988717959.91/warc/CC-MAIN-20161020183837-00039-ip-10-142-188-19.ec2.internal.warc.gz"}
|
http://physics.stackexchange.com/questions/30751/what-is-the-single-particle-hilbert-space
|
What is the single particle Hilbert space?
I know what an Hilbert Space is, but I'm not sure what exactly is the single particle hilbert space; I understand it as the space of all possible states of the particle; does it matter if you're talking about an electron, a neutron or a quark, or is it a particle in the 'abstract'? How does one construct it?
-
The single-particle space depends on mass, spin, and other quantum numbers of the particle.
In general, a single-particle space is a positive energy irreducible unitary representation of the symmetry group considered, thus the Poincare group for relativistic particles, but in the nonrelativistic case the Galilei group, and in some cases an extra group (typically a $SU(n)$, accounting for flavors, etc.)
The simplest single-particle space is that of a scalar (= spin 0) particle, which is given by the L^2 functions of momenta $p$ on a mass shell ($p^2=m^2$, $p_0>0$), integrated with the Lorentz invariant measure. The others are more complicated versions of it.
The irreducible unitary representations of the Poincare group with positive energy were classified by Wigner, and are characterized by mass and spin; for each such combination there is one single-particle space. For constructions for arbitrary spin $>0$, see Weinberg's book on QFT, which has perhaps the clearest discussion in textbook form.
-
The single-particle Hilbert space is the Hilbert space of all states that may be classified as one-particle states; it is a subset of the Hilbert space of a full theory containing states with $N=1$.
If we consider the flat space, then on this Hilbert space, one may find the independent momentum operators $p_x, p_y, p_z$ and discrete operators of spin that depend on the particle. The Hamiltonian is typically $$E = \frac{p^2}{2m}$$ in non-relativistic approximations or $$E = \sqrt{p^2c^2+m^2 c^4}$$ in special relativity except that we usually write the Hamiltonian in prettier ways, e.g. as the Dirac Hamiltonian for the spin-1/2 particles.
|
2013-12-13 05:07:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9197784066200256, "perplexity": 167.9724883863198}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164884560/warc/CC-MAIN-20131204134804-00009-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://zbmath.org/?q=an%3A1060.47033
|
# zbMATH — the first resource for mathematics
Essential norms and stability constants of weighted composition operators on $$C(X)$$. (English) Zbl 1060.47033
Let $$X$$ be a compact Hausdorff space and let $$C(X)$$ denote the Banach space of all continuous functions on $$X$$ with the supremum norm. The authors consider the weighted composition operator $$uC_{\varphi}$$ on $$C(X)$$ defined by $(uC_{\varphi}f)(x)=u(x)f(\varphi (x)) \qquad (x \in X)$ for all $$f \in C(X)$$, where $$u$$ is a fixed function in $$C(X)$$ and $$\varphi$$ is a selfmap of $$X$$ which is continuous on the support $$S(u)$$ of $$u$$.
H. Kamowitz [Proc. Am. Math. Soc. 83, 517–521 (1981; Zbl 0509.47026)] characterized when $$uC_{\varphi}$$ is compact. The authors develop Kamowitz’s result. They determine the essential norm of $$uC_{\varphi}$$, that is, $$\| uC_{\varphi}\| _e =\inf\{r>0: \varphi(\{x\in X: | u(x)| \geq r\})\text{ is finite}\}$$ (Theorem 1). Then, the authors give an equivalence proposition on the the Hyers-Ulam stability of a bounded linear operator between Banach spaces. With the aid of this proposition, they also characterize the Hyers-Ulam stability of $$uC_{\varphi}$$ and determine the stability constant, in terms of the set $$\varphi(\{x\in X: | u(x)| \geq r\})$$ $$(r>0)$$.
##### MSC:
47B33 Linear composition operators 34K20 Stability theory of functional-differential equations
Full Text:
|
2021-12-03 06:17:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8756933212280273, "perplexity": 161.6077070596365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362605.52/warc/CC-MAIN-20211203060849-20211203090849-00396.warc.gz"}
|
https://mathcracker.com/internal-growth-rate-calculator
|
# Internal Growth Rate Calculator
Instructions: Use this Internal Growth Rate Calculator to compute a internal growth rate, by providing the retention (plow-back) ratio ($$b$$) and the return on assets $$(ROA)$$:
Retention Plowback Ratio $$(b)$$ =
Return on Assets $$(ROA)$$ =
## Internal Growth Rate Calculator
More about this Internal growth rate calculator so you can better understand how to use this solver: The Internal growth rate of a firm depends on the retention (plowback) ratio $$(RR)$$ and the return on assets $$(ROA)$$ using the following growth rate formula:
$g = \displaystyle \frac{ROA \times b}{1 - ROA \times b}$
### Other related Finance Calculators
Closely related to the idea of internal growth rate is the concept of sustainable growth rate.
In case you have any suggestion, or if you would like to report a broken solver/calculator, please do not hesitate to contact us.
|
2019-12-15 05:37:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24594560265541077, "perplexity": 1571.9084410648109}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541301598.62/warc/CC-MAIN-20191215042926-20191215070926-00316.warc.gz"}
|
https://porespy.org/examples/metrics/reference/porosity.html
|
# porosity#
Porosity is the void volume divided by the bulk volume. In a boolean image this can be calculated with im.sum()/im.size, assuming the void voxels are labeled True. It can be slightly more complicated however, if the image does not fill up the full array (i.e. im.size is not the bulk volume) or if the there are other values in the image besides True (i.e. it’s not obvious what is void space). The porosity function works as np.sum(im == 1)/np.sum(im == 0). This means that any voxels marked something else are ignored. It’s still very simple, but is more robust which comes in handy.
[1]:
import matplotlib.pyplot as plt
import numpy as np
import porespy as ps
np.random.seed(0)
[2]:
import inspect
inspect.signature(ps.metrics.porosity)
[2]:
<Signature (im)>
## im#
In its basic form a binary image is fine:
[3]:
im = ps.generators.blobs(shape=[200, 200])
e = ps.metrics.porosity(im)
print(e)
0.52215
However, if the image is has some unfilled space, the around a cylindrical tomogram, then it can be labelled as 2 so it’s ignored:
[4]:
im = ps.generators.blobs(shape=[100, 100, 100], porosity=0.5, blobiness=2).astype(int)
cyl = ps.generators.cylindrical_plug(shape=im.shape, axis=0)
im[~cyl] = 2
plt.imshow(im[50, ...], interpolation='none', origin='lower')
e = ps.metrics.porosity(im)
print(e)
0.49670287539936103
The porosity specified when generatign the blobsimage was 50%, and the computed porosity is also 50%, despite having a regions of 2’s around the outside.
|
2022-05-17 23:56:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6154650449752808, "perplexity": 3128.585129214872}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662520936.24/warc/CC-MAIN-20220517225809-20220518015809-00523.warc.gz"}
|
http://mathhelpforum.com/trigonometry/115575-sin-sinx-b-cosx.html
|
# Math Help - sin a sinx+ b cosx
1. ## sin a sinx+ b cosx
Hey.
The question is find the value of R and the value of A.
sinx+ 2cosx = Rcos(x-A)
i can find R which is square root of 5, but i keep getting 1.11 for A and the answer is 0,46 (radians)
can anyone help?
2. Hello Oasis1993
Originally Posted by Oasis1993
Hey.
The question is find the value of R and the value of A.
sinx+ 2cosx = Rcos(x-A)
i can find R which is square root of 5, but i keep getting 1.11 for A and the answer is 0,46 (radians)
can anyone help?
$R\cos(x-A) = R\cos x \cos A + R \sin x \sin A=\sin x + 2\cos x$
$\Rightarrow R\sin A = 1,\;R\cos A = 2$
$\Rightarrow \tan A =\tfrac12$
$\Rightarrow A = 0.46...$
|
2015-07-06 07:32:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7076677680015564, "perplexity": 1906.0825860730763}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098071.98/warc/CC-MAIN-20150627031818-00135-ip-10-179-60-89.ec2.internal.warc.gz"}
|
http://libros.duhnnae.com/2017/jul7/150070731594-Fructooligosaccharide-associated-with-celecoxib-reduces-the-number-of-aberrant-crypt-foci-in-the-colon-of-rats.php
|
# Fructooligosaccharide associated with celecoxib reduces the number of aberrant crypt foci in the colon of rats
Fructooligosaccharide associated with celecoxib reduces the number of aberrant crypt foci in the colon of rats - Descarga este documento en PDF. Documentación en PDF para descargar gratis. Disponible también para leer online.
Abstract : According to Burkitt-s hypothesis, dietary fibres may protect against the development of colorectal cancer. In rats, studies have shown that only butyrate-producing fibres are protective. In parallel, in humans, non-steroidal anti-inflammatory drugs, which target cyclooxygenases, have been shown to display a protective effect against colorectal cancer. Among them, COX-2-selective inhibitors which present less side effects than non-selective agents, are promising as chemopreventive agents. Our aim was to analyse the effect of an association between butyrate-producing fibres and the COX-2 inhibitor on the development of aberrant crypt foci ACF in rats. Fisher F344 rats were fed with 1 a standard low fibre control diet; 2 the standard diet supplemented with 1500 ppm celecoxib; 3 a diet supplemented with 6% fructo-oligosaccharide FOS; and 4 a diet with both celecoxib and FOS. Three weeks later, the rats were injected twice with azoxymethane and the number of ACF was determined 15 weeks later. In the control group, $43.8 \pm 6.4$ ACF were found. This number was not significantly modified by the addition of FOS or celecoxib alone to the diet. However, the association of FOS and celecoxib resulted in a 61% reduction in the number of ACF $P < 0.01$. The number of aberrant crypt per foci was also reduced. Thus, although no significant effect of celecoxib or FOS alone was identified, the association of butyrate-producing fibre and celecoxib was effective in preventing the development of ACF. This preliminary study argues for a strong protective effect of such an association which deserves further studies.
keyword : COX-2 butyrate colorectal cancer NSAID
Autor: Bruno Buecher Cécile Thouminot Jean Menanteau Christian Bonnet Anne Jarry Marie-Françoise Heymann Christine Cherbut Jean-Paul G
Fuente: https://hal.archives-ouvertes.fr/
DESCARGAR PDF
|
2017-09-20 02:24:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5976928472518921, "perplexity": 12414.000060446742}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818686117.24/warc/CC-MAIN-20170920014637-20170920034637-00155.warc.gz"}
|
https://math.stackexchange.com/questions/2657212/help-determining-if-a-field-is-finite/2657636
|
# Help determining if a field is finite?
I am currently working on a homework assignment, and I am stuck. The problem is to show that $\mathbb{Z}(\sqrt{2})$ / (a prime in $\mathbb{Z}(\sqrt{2})$) is a finite field. I have shown that $\mathbb{Z}(\sqrt{2})$ is PID, so that the quotient must be a field, but I am struggling with showing that it is finite. I am leaving some information out because I really just ant help with getting started. I have tried naively writing out elements, but it seems like there are infinitely many. I can give more information if you'd like it!
I can't tell if it is that I have been looking at this set for too long, or if I haven't touched algebra in a while, or what. Any and all help is appreciated! The course has not touched field extensions, Galois groups, etc yet.
• Do you mean the ring $\Bbb Z [\sqrt 2]$? – Fabio Lucchini Feb 19 '18 at 16:35
• Yes! $\mathbb{Z}$ adjoined with $\sqrt{2}$. I will edit this. – paranomasia Feb 19 '18 at 16:41
You can do the following. Let $\mathfrak{p}$ be the prime ideal of $\Bbb{Z}[\sqrt2]$ in question. We need to assume that $\mathfrak{p}$ is not the trivial ideal containing zero alone (that would qualify as a prime ideal according to many definitions) - otherwise your claim is false :-)
Consider the intersetion $I:=\mathfrak{p}\cap\Bbb{Z}$. Prove the following
1. If $a+b\sqrt2$ is a non-zero element of $\mathfrak{p}$, then $(a+b\sqrt2)(a-b\sqrt2)=a^2-2b^2$ is a non-zero element of $I$.
2. If $n$ is the smallest positive integer in $I$, then $n$ and $n\sqrt2$ are both elements of $\mathfrak{p}$.
3. Every coset of $\mathfrak{p}$ in $\Bbb{Z}[\sqrt2]$ contains an element of the form $a+b\sqrt2$ with $0\le a<n, 0\le b<n$.
4. There are at most $n^2$ elements in your quotient ring.
For extra credit you can prove that $n$ must actually be a prime number. Or, equivalently, that $I$ is a prime ideal of $\Bbb{Z}$.
• I like this a lot! I actually ended up defining a homomorphism $\phi:\mathbb{Z}\rightarrow\mathbb{Z}(\sqrt{2})/(3+\sqrt2)$ by $\phi(z)=a+b\sqrt{2}+(3+\sqrt{2})$, then finding its kernel, showing it was surjective, and then using first isomorphism theorem to show that $\mathbb{Z}(\sqrt{2})\cong\mathbb{Z}/7\mathbb{Z}$. ($3+\sqrt{2}$ was the prime we were using). – paranomasia Feb 21 '18 at 3:15
$\mathbb{Z}[\sqrt{2}]/(\pi)$ it is finite because is a finitely generated $\mathbb{Z}$-module with finite exponent since $N(\pi)=\pi \bar \pi \in \mathbb{Z}$ kills every element in it.
|
2020-02-17 07:53:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8674604296684265, "perplexity": 107.69311650403878}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875141749.3/warc/CC-MAIN-20200217055517-20200217085517-00549.warc.gz"}
|
https://wiki.math.uwaterloo.ca/statwiki/index.php?title=Describtion_of_Text_Mining&oldid=45885
|
Describtion of Text Mining
Presented by
Yawen Wang, Danmeng Cui, Zijie Jiang, Mingkang Jiang, Haotian Ren, Haris Bin Zahid
Introduction
This paper focuses on the different text mining tasks and the existence of text mining in healthcare and biomedical domains. The text mining field has been popular as a result of the amount of text data that is available in different forms. The text data is bound to grow even more in 2020, indicating a 50 times growth since 2010. To further explore the text mining field, the related text mining approaches can be considered. The different text mining approaches relate to two main methods: knowledge delivery and traditional data mining methods.
The authors note that knowledge delivery methods involve the application of different steps to a specific data set to create specific patterns. Research in knowledge delivery methods has evolved over the years due to advances in hardware and software technology. On the other hand, data mining has experienced substantial development through the intersection of three fields: databases, machine learning, and statistics. As brought out by the authors, text mining approaches focus on the exploration of information from a specific text. The information explored is in the form of structured, semi-structured, and unstructured text. It is important to note that text mining covers different sets of algorithms and topics that include information retrieval. The topics and algorithms are used for analyzing different text forms.
Text Representation and Encoding
In this section of the paper, the authors explore the different ways in which the text can be represented on a large collection of documents. One common way of representing the documents is in the form of a bag of words. The bag of words considers the occurrences of different terms. In different text mining applications, documents are ranked and represented as vectors so as to display the significance of any word. The authors note that the three basic models used are vector space, inference network, and the probabilistic models. The vector space model is used to represent documents by converting them into vectors. In the model, a variable is used to represent each model to indicate the importance of the word in the document. The words are weighted using the TF-IDF scheme computed as
$$q(w)=f_d(w)*log{\frac{|D|}{f_D(w)}}$$
In many text mining algorithms, one of the key components is preprocessing. Preprocessing consists of different tasks that include filtering, tokenization, stemming, and lemmatization. The first step is tokenization, where a character sequence is broken down into different words or phrases. After the breakdown, filtering is carried out to remove some words. The various word inflected forms are grouped together through lemmatization, and later, the derived roots of the derived words are obtained through stemming.
Classification
Classification in Text Mining aims to assigned predefined classes to text documents. For a set $\mathcal{D} = {d_1, d_2, ... d_n}$ of documents, such that each $d_i$ is mapped to a label $l_i$ from the set $\mathcal{L} = {l_1, l_2, ... l_k}$. The goal is to find a classification model $f$ such that: $\\$ $$f: \mathcal{D} \rightarrow \mathcal{L}$$ The author illustrates 4 different classifiers that are commonly used in text mining.
1. Naive Bayes Classifier
Bayes rule is used to classify new examples and select the class that is most has the generated result. Naive Bayes Classifier models the distribution of documents in each class using a probabilistic model assuming that the distribution of different terms is independent of each other. The models commonly used in this classifier tried to find the posterior probability of a class based on the distribution and assumes that the documents generated are based on a mixture model parameterized by $\theta$ and compute the likelihood of a document using the sum of probabilities over all mixture component.
2. Nearest Neighbour Classifier
Nearest Neighbour Classifier uses distance-based measures to perform the classification. The documents which belong to the same class are more likely "similar" or close to each other based on the similarity measure. The classification of the test documents is inferred from the class labels of similar documents in the training set.
3. Decision Tree Classifier
A hierarchical tree of the training instances, in which a condition on the attribute value is used to divide the data hierarchically. The decision tree recursively partitions the training data set into smaller subdivisions based on a set of tests defined at each node or branch. Each node of the tree is a test of some attribute of the training instance, and each branch descending from the node corresponds to one of the values of this attribute. The conditions on the nodes are commonly defined by the terms in the text documents.
4. Support Vector Machines
SVM is a form of Linear Classifiers which are models that makes a classification decision based on the value of the linear combinations of the documents features. The output of a linear predictor is defined to the $y=\vec{a} \cdot \vec{x} + b$ where $\vec{x}$ is the normalized document word frequency vector, $\vec{a}$ is a vector of coeddifients and $b$ is a scalar. Support Vector Machines attempts to fins a linear separators between various classes. An advantage of the SVM method is it is robust to high dimensionality.
Clustering
Clustering have been extensively studied in the context of the text as it has a wide range of applications such as visualization and document organization.
Information Extraction
Information Extraction (IE) is the process of extracting useful, structured information from unstructured or semi-structured text. It automatically extracts based on our command.
For example, consider the following sentence, “XYZ company was founded by Peter in the year of 1950” We can identify the following information:
Founderof(Peter, XYZ) Foundedin(1950, XYZ)
The author mentioned 4 parts that are important for Information Extraction
1. Namely Entity Recognition(NER) This is the process of identifying real world entity from free text, such as "Apple Inc.", "Donald Trump", "PlayStation 5" etc. Moreover, the task is to identify the category of these entities, such as "Apple Inc." is in the category of the company, "Donald Trump" is in the category of the USA president, and "PlayStation 5" is in the category of the entertainment system.
2. Hidden Markov Model Since traditional probabilistic classification does not consider the predicted labels of neighbor words, we use the Hidden Markov Model when doing Information Extraction. This model is different because it considers the label of one word depends on the previous words that appeared.
3. Conditional Random Fields This is a technique that are widely used in Information Extraction. The definition of it is related to graph theory. let G = (V, E) be a graph and Yv stands for the index of the vertices in G. Then (X, Y) is a conditionalrandom field, when the random variables Yv , conditioned on X, obey Markov property with respect to graph, and: p(Yv |X, Yw ,w , v) = p(Yv |X, Yw ,w ∼ v), where w ∼ v means w and v are neighbors in G.
4. Relation Extraction This is a task of finding semantic relationships between word entities in text documents. Such as "Seth Curry" is the brother of "Stephen Curry", if there is a document including these two names, the task is to identify the relationship of these two entities.
References
Allahyari, M., Pouriyeh, S., Assefi, M., Safaei, S., Trippe, E. D., Gutierrez, J. B., & Kochut, K. (2017). A brief survey of text mining: Classification, clustering, and extraction techniques. arXiv preprint arXiv:1707.02919.
|
2022-07-06 02:05:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6854025721549988, "perplexity": 779.2598987910412}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104655865.86/warc/CC-MAIN-20220705235755-20220706025755-00284.warc.gz"}
|
http://cds.cern.ch/collection/CMS%20Papers?ln=fr
|
CMS Papers
Derniers ajouts:
2019-10-10
22:52
Strange hadron production in pp and pPb collisions at ${\sqrt {\smash [b]{s_{_{\mathrm {NN}}}}}} =$ 5.02 TeV / CMS Collaboration The transverse momentum (${p_{\mathrm{T}}}$) distributions of $\Lambda$, $\Xi^{-}$, and $\Omega^{-}$ baryons, their antiparticles, and ${\mathrm{K^0_S}}$ mesons are measured in proton-proton (pp) and proton-lead (pPb) collisions at a nucleon-nucleon center-of-mass energy of 5.02 TeV over a broad rapidity range. [...] arXiv:1910.04812 ; CMS-HIN-16-013 ; CERN-EP-2018-213 ; CMS-HIN-16-013-003. - 2019. Additional information for the analysis - CMS AuthorList - Fulltext - Fulltext
2019-10-03
23:19
Study of $\mathrm{J}/\psi$ meson production from jet fragmentation in pp collisions at $\sqrt{s} =$ 8 TeV / CMS Collaboration A study of the production of prompt $\mathrm{J}/\psi$ mesons as fragmentation products of jets in proton-proton collisions at $\sqrt{s} =$ 8 TeV is presented. [...] arXiv:1910.01686 ; CMS-BPH-15-003 ; CERN-EP-2019-186 ; CMS-BPH-15-003-003. - 2019. - 35 p. Additional information for the analysis - CMS AuthorList - Fulltext - Fulltext
2019-10-03
13:05
Search for supersymmetry with a compressed mass spectrum in events with a soft $\tau$ lepton, a highly energetic jet, and large missing transverse momentum in proton-proton collisions at $\sqrt{s} =$ 13 TeV / CMS Collaboration The first search for supersymmetry in events with an experimental signature of one soft, hadronically decaying $\tau$ lepton, one energetic jet from initial-state radiation, and large transverse momentum imbalance is presented. [...] arXiv:1910.01185 ; CMS-SUS-19-002 ; CERN-EP-2019-196 ; CMS-SUS-19-002-003. - 2019. - 33 p. Additional information for the analysis - CMS AuthorList - Fulltext - Fulltext
2019-10-01
12:33
Calibration of the CMS hadron calorimeters using proton-proton collision data at $\sqrt{s} =$ 13 TeV / CMS Collaboration Methods are presented for calibrating the hadron calorimeter system of the CMS detector at the LHC. [...] arXiv:1910.00079 ; CMS-PRF-18-001 ; CERN-EP-2019-179 ; CMS-PRF-18-001-003. - 2019. - 45 p. Additional information for the analysis - CMS AuthorList - Fulltext - Fulltext
2019-09-19
21:37
Running of the top quark mass from proton-proton collisions at ${\sqrt{s}} =$ 13 TeV / CMS Collaboration The running of the top quark mass is experimentally investigated for the first time. [...] arXiv:1909.09193 ; CMS-TOP-19-007 ; CERN-EP-2019-189 ; CMS-TOP-19-007-003. - 2019. - 33 p, 33 p. Additional information for the analysis - CMS AuthorList - Fulltext - Fulltext
2019-09-13
21:56
Evidence for WW production from double-parton interactions in proton-proton collisions at $\sqrt{s} =$ 13 TeV / CMS Collaboration A search for WW production from double-parton scattering processes using same-charge electron-muon and dimuon events is reported, based on proton-proton collision data collected at a center-of-mass energy of 13 TeV. [...] arXiv:1909.06265 ; CMS-SMP-18-015 ; CERN-EP-2019-167 ; CMS-SMP-18-015-003. - 2019. - 37 p. Additional information for the analysis - CMS AuthorList - Fulltext - Fulltext
2019-09-13
21:22
Search for long-lived particles using delayed photons in proton-proton collisions at $\sqrt{s} =$ 13 TeV / CMS Collaboration A search for long-lived particles decaying to photons and weakly interacting particles, using proton-proton collision data at $\sqrt{s}=$ 13 TeV collected by the CMS experiment in 2016-2017 is presented. [...] arXiv:1909.06166 ; CMS-EXO-19-005 ; CERN-EP-2019-185 ; CMS-EXO-19-005-003. - 2019. - 37 p. Additional information for the analysis - CMS AuthorList - Fulltext - Fulltext
2019-09-11
23:02
Measurement of the $\mathrm{t\bar{t}}\mathrm{b\bar{b}}$ production cross section in the all-jet final state in pp collisions at $\sqrt{s} =$ 13 TeV / CMS Collaboration A measurement of the production cross section of top quark pairs in association with two b jets ($\mathrm{t\bar{t}}\mathrm{b\bar{b}}$) is presented using data collected in proton-proton collisions at $\sqrt{s} =$ 13 TeV by the CMS detector at the LHC corresponding to an integrated luminosity of 35.9 fb$^{-1}$ . [...] arXiv:1909.05306 ; CMS-TOP-18-011 ; CERN-EP-2019-183 ; CMS-TOP-18-011-003. - 2019. - 37 p. Additional information for the analysis - CMS AuthorList - Fulltext - Fulltext
2019-09-10
23:27
Search for electroweak production of a vector-like T quark using fully hadronic final states / CMS Collaboration A search is performed for electroweak production of a vector-like top quark partner T of charge 2/3 in association with a top or bottom quark, using proton-proton collision data at $\sqrt{s} =$ 13 TeV collected by the CMS experiment at the LHC in 2016. [...] arXiv:1909.04721 ; CMS-B2G-18-003 ; CERN-EP-2019-174 ; CMS-B2G-18-003-003. - 2019. - 61 p. Additional information for the analysis - CMS AuthorList - Fulltext - Fulltext
2019-09-09
23:42
Measurements of differential Z boson production cross sections in proton-proton collisions at $\sqrt{s} =$ 13 TeV / CMS Collaboration Measurements are presented of the differential cross sections for Z bosons produced in proton-proton collisions at $\sqrt{s} =$ 13 TeV and decaying to muons and electrons. [...] arXiv:1909.04133 ; CMS-SMP-17-010 ; CERN-EP-2019-175 ; CMS-SMP-17-010-003. - 2019. - 49 p. Additional information for the analysis - CMS AuthorList - Fulltext - Fulltext
|
2019-10-18 02:58:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.983193039894104, "perplexity": 3915.578478921793}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677412.35/warc/CC-MAIN-20191018005539-20191018033039-00115.warc.gz"}
|
http://www.mahendrapublications.com/article_details.php?id=MP434866
|
## Abstract :
The aim of this paper is to introduce the notation of pre-local function A^(p^* )(I, τ) by using pre-open sets in an ideal topological space (X, τ, I). Some properties and characterizations of a pre-local function are explored Pre-compatible spaces are also defined and investigated. Moreover, by using A^(p^* )(I, τ) we introduce an operator ψ: P(X)→τ satisfying ψ(A) = X-〖(X-A)〗^(p^* )for each A ∈ P(X) and we discuss some characterizations this operator by use pre-open sets.
|
2021-11-27 09:25:19
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8454762697219849, "perplexity": 4845.97912793387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358153.33/warc/CC-MAIN-20211127073536-20211127103536-00437.warc.gz"}
|
https://www.tec-science.com/mechanical-power-transmission/involute-gear/undercut/
|
This article provides answers to the following questions, among others:
• What is an undercut?
• How does undercutting occur during gear hobbing?
• What effects does an undercut have on the load capacity of a gear and on the line of action?
• Why does an undercut have to be present in gears with a small number of teeth for functional reasons?
• How can the minimum number of teeth be determined so that no undercut occurs?
## Undercut
### Manufacturing-related undercut
The animation below shows schematically the manufacturing process of three gears with different numbers of teeth by hobbing. It can be seen that if the number of teeth is too small, the hob obviously undercuts the tooth root. This is due to the fact that in the case of small gears, the cutting edges of the hob cutter engage relatively far into the gear (in the case of the red gear, up to about half the radius). This causes the tooth to be very strongly undercut during the rotation of the gear.
Therefore, undercuts must always be avoided, i.e. the number of teeth must not fall below a minimum.
Undercut occurs when the number of teeth of a gear is too small. An undercut leads to a weakening of the strength of the tooth!
## Functional-related undercut
An undercut during gear cutting occurs not only with hobbing but with shaping or planing as well. Although an undercut could be avoided by other manufacturing processes such as form cutting or broaching, the undercut is also absolutely necessary for functional reasons. If an undercut were not present with small gears, the teeth would interfere! As the animation below shows, the teeth of the red gear must be undercut by the teeth of the green gear for meshing.
An undercut not only weakens the respective tooth but also shortens the line of contact. The undercut cuts off part of the involute tooth flank. The tooth flanks thus lose contact with each other (already at point E) well before the actual end of engagement (point E’). The enlarged figure shows that the flank contact after point E is already no longer present. The line of action is shortened accordingly.
An undercut leads not only to a weakening of the tooth but also to a shortening of the line of contact!
### Minimum number of teeth to avoid undercut
To avoid an undercut, the gear must have a minimum number of teeth. The animation below shows the reference profile of the hob as it meshes with a gear with 6 teeth. This situation can be looked at analog to the meshing of a driving rack with a gear (the basics are explained in detail in the chapter on racks). The line of action results as a tangent to the base circle and runs perpendicular to the flank of the reference profile. The meshing begins at the point of intersection $$A$$ between the line of action and the tip circle of the gear and ends at the point of intersection $$E$$ between the line of action and the tip line of the reference profile (the shortening of the line of contact by the undercut is not taken into account in the figure).
As the animation shows, the tooth is undercut from the point $$B$$. This corresponds to the point from which the corner of the reference profile moves over the radial line of the gear, thus undercutting the tooth. Between the beginning of undercutting in point $$B$$ and the end of meshing in point $$E$$, the tooth is undercut within the green area.
Note: The radial line corresponds to the tangent to the tooth flank at the the base circle. In the case of an undercut, however, part of the involute tooth flank is cut off, leaving a small “gap” between the radial line and the actual tooth flank.
The point $$B$$ at which an undercut occurs generally corresponds to the point of contact between the base circle and the line of action. At this point, the flank of the reference profile coincides with the radial line of the gear. Beyond this point, the reference profile will then cross the radial line and undercut the tooth.
An undercut occurs at the point where the base circle touches the contact line!
In comparison to the above example, the animation below shows the meshing of the reference profile with a gear with 20 teeth. The point $$B$$ at which an undercut theoretically occurs is outside the line of contact $$\overline{AE}$$. The profile corner is therefore already out of mesh before it could have undercut the tooth. The teeth of the gear are therefore not undercut. An undercut will always occur if the contact point $$B$$ of the base circle and the line of action lies within the line of contact $$\overline{AE}$$.
An undercut always occurs when the base circle touches the line of action within the line of contact!
For the limiting case in which the teeth of a gear are not yet undercut, the beginning of the undercutting in point $$B$$ coincides with the end of engagement in point $$E$$. As the animation below shows, this is the case for a gear with 17 teeth.
No undercut occurs for gears with a number of teeth above 17!
### Calculation of the minimum number of teeth
The minimum number of 17 teeth mentioned in the previous section is independent of the module (or diametral pitch) and thus applies to all tooth sizes! This will be shown mathematically in the following. For this purpose, the geometric conditions resulting in the limiting case are examined more closely, i.e. if the points $$B$$ and $$E$$ coincide theoretically exactly.
The distance between the centerline and the tip line of the reference profile generally corresponds to the module $$m$$ and the inclination of the flanks to the standard pressure angle $$\alpha_0$$ (see also the chapter on gear cutting). If the orange triangle shown in the figure below is considered, it can be seen that the opposite side of the standard pressure angle $$\alpha_0$$ corresponds to the module $$m$$ of the gear. Thus, the following relationship applies to the distance $$\overline{CB}$$.
\begin{align}
\label{1}
& \overline{CB} =\frac{m}{\sin(\alpha_0)} \\[5px]
\end{align}
The distance $$\overline{CB}$$ can also be determined by the pitch circle radius $$r_0$$ or the pitch circle diameter $$d_0$$ (see yellow triangle). The pitch circle diameter $$d_0$$ results from the product of module $$m$$ and (minimum) number of teeth $$z_{min}$$ (see also the article “Geometry of involute gears“):
\begin{align}
\label{2}
& \overline{CB} = r_0 \cdot sin(\alpha_0) = \frac{d_0}{2} \cdot \sin(\alpha_0) = \frac{m \cdot z_{min}}{2} \cdot \sin(\alpha_0) \\[5px]
\end{align}
The two equations (\ref{1}) and (\ref{2}) can now be equated and solved for the minimum number of teeth $$z_{min}$$:
\begin{align}
&\overline{CB} = \overline{CB} \\[5px]
&\frac{m}{\sin(\alpha_0)} = \frac{m \cdot z_{min}}{2} \cdot \sin(\alpha_0) \\[5px]
&\boxed{z_{min} = \frac{2}{\sin^2(\alpha_0)} } \\[5px]
\end{align}
For a standard pressure angle of $$\alpha_0$$ = 20 °, a theoretical minimum number of teeth of $$z_{min}$$ = 17 results. In practice, however, a minimum number of teeth of 14 is assumed, at which an undercut then actually has a negative effect.
The theoretical minimum number of teeth above which no undercut occurs is 17 for a standard pressure angle of 20°. In practice, a minimum number of teeth of 14 is usually assumed!
In fact, however, it is also possible to produce gears below the minimum number of 17 teeth, without an undercut! For this, the manufacturing process must be specially adapted with a so-called profile shift. Such profile shifted gears will be discussed in more detail in the next article.
|
2020-11-29 04:21:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7281194925308228, "perplexity": 794.739255850439}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141196324.38/warc/CC-MAIN-20201129034021-20201129064021-00603.warc.gz"}
|
https://stats.stackexchange.com/questions/401549/does-xgboost-have-a-max-depth-hyper-parameter
|
# Does XGBoost have a max-depth hyper-parameter?
According to the explanation in Complete Guide to Parameter Tuning in XGBoost, XGBoost doesn't use max_depth argument as Random Forest or GBM does. It expands the tree up to this depth and start pruning all the way up by judging if each node has a positive gain or not. But how does it split the nodes? Is it again greedy? And, if it is greedy, is it possible to have a first split yielding negative gain, and then positive gain?
• How is this off-topic? It clearly ask on how a widely used boosting algorithm populates its tree learners. (+1 for the question) – usεr11852 Apr 7 '19 at 17:37
• @usεr11852 thanks for the comment. Also, I'd be happy to revise my question if any of the voters who voted as "off-topic" comments on why they think so, because I just tried to ask how a hyper parameter of XGBoost works, which I believe very well suits this forum. – cross-entropy Apr 7 '19 at 19:26
• @usεr11852 - it's about the implementation of an algorithm in a specific piece of code, so is really more of a programming question. Other GBM implementations, e.g., LightGBM, do some of their things differently. Also - note that the documentation link is over three years old! XGBoost, which I am very familiar with, has changed quite a bit over the last three years. Try xgboost.readthedocs.io/en/latest instead for your future work! (I did not vote to close, but I did think for some time about it.) – jbowman Apr 12 '19 at 21:47
• @jbowman: Apologies but I cannot see the point you are trying to make. If anything exactly because other variants of gradient boosting "do their things differently", it makes sense for us to present/highlight these differences. – usεr11852 Apr 12 '19 at 23:46
• @usεr11852 - It makes sense for somebody to highlight those differences, but whether CrossValidated, as opposed to StackOverflow or DataScience (both of which are under the "Technology" branch of StackExchange rather than the "Science" branch of StackExchange as we are) is that somebody is open for debate. Questions about how a specific hyperparameter works in a specific computer program... when I look at that statement, which mimics the one the OP made in comments above, it looks to me like this belongs on a different site. Still, I'll give it the benefit of the doubt. – jbowman Apr 13 '19 at 1:05
XGBoost has multiple ways of tree construction. Excluding GPU-centric implementations, in the current XGBoost version (0.82) there are three tree_method options: exact, approx and hist.
The exact method is most likely what the authors of the linked tutorial refer at; it uses a greedy algorithm where the data are first sorted and all possible splits for continuous features are examined. Thus, yes, the split is done in a greedy manner too. As you correctly note, XGBoost does expand the tree up to max_depth and start prunes exactly because another negative split might benefit future splits. This is imporant as the minimum loss reduction required parameter is embedded in the overall loss function (see T. Chen's XGBoost presentation in page 34).
While fully informative, this exact approach can be computationally demanding and rather hard to parallelise so two additional algorithms have been suggested approx and hist. The approximate greedy algorithm uses weighted quantile of feature distribution to identify the best split. Notice that there is a argument sketch_eps that directly relates to the number of bins used - the weighting actually comes from the second order gradient statistics on the loss function; see the original XGBoost paper in page 4. The hist algorithm implements an approximate binning approach too but it is more sophisticated than the "simple" weighted quantile defined above. The github thread of the related code submission gives more information on the matter. In short, only a subset of possible split values is considered and certain binning calculations can be reused. Caveat: when the construction parameter is set to hist, the grow_policy parameter comes into play where it allows us to include new nodes in a depthwise or a lossguide manner. The lossguide manner can result to rather deep trees because we might end up repeatedly splitting the one leaf that gives the biggest gain instead of splitting until max_depth.
As you see in all tree_method options, the growth is "greedy"; what changes is the way that splits are enumerated. Finally, please note that when it comes to pruning that there is the gamma parameter; it defines a minimum loss reduction required to make a further partition on a leaf node of the tree. That can lead to even to positive split to be pruned.
|
2020-04-03 10:55:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.656976580619812, "perplexity": 1049.5913085670848}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370510846.12/warc/CC-MAIN-20200403092656-20200403122656-00153.warc.gz"}
|
https://www.scm.com/doc/BAND/Troubleshooting/Recommendations.html
|
# Recommendations¶
## Model Hamiltonian¶
### Relativistic model¶
By default we do not use relativistic effects. The best approximation is to use spin-orbit coupling, however that is computationally very expensive. The scalar relativistic option comes for free, and for light elements will give very similar results as non-relativistic theory, and for heavy ones better results w. r. t. experiment. We recommend to always use this (scalar ZORA). To go beyond to the spin-orbit level can be important when there are heavy elements with p valence electrons. Also the band gap appears quite sensitive for the spin-orbit effect.
### XC functional¶
The default functional is the LDA, that gives quite good geometries but terrible bonding energies. GGA functionals are usually better at bonding energies, and among all possibilities the PBE is a common choice. Using a GGA is not a lot more expensive than using plain LDA. For the special problem of band gaps there are a number Model Hamiltonians available (eg. TB-mBJ and GLLC-SC). The Unrestricted option will be needed when the system is not closed shell. For systems interacting through dispersion interactions it is advised to use the Grimme corrections. Unfortunately there is no clear-cut answer to this problem, and one has to try in practice what works best.
## Technical Precision¶
The easiest way to control the technical precision is via the NumericalQuality key. One can also independently tweak the precision of specific technical aspects, e.g.:
BeckeGrid
Quality Good ! tweak the grid
End
KSpace
Quality Good ! tweak the k-space grid
End
ZlmFit
Quality Normal ! tweak the density fit
End
SoftConfiment
Quality Basic ! tweak the radial confinement of basis functions
End
Here are per issue hints for when to go for a better quality (but it is by no means complete)
• BeckeGrid: Increase quality if there are geometry convergence problems. Also negative frequencies can be caused by an inaccurate grid.
• KSpace: Increase quality for metals
• ZlmFit: Increase quality if the SCF does not converge.
• SoftConfinment: Increase quality for weakly bonded systems, such as layered materials
## Performance¶
The performance is influenced by the model Hamiltonian and basis set, discussed above. Here follow more technical tips.
### Reduced precision¶
One of the simplest things to try is to run your job with NumericalQuality Basic. For many systems this will work well, and it can be used for instance to pre-optimize a geometry. However, it can also cause problems such as problematic SCF convergence, geometry optimization, or simply bad results. See above how to tweak more finely the Technical Precision.
### Memory usage¶
Another issue that is the choice CPVector (say the vector length of you machine) and the number of k-points processed together during the calculation of the parameters. In the output you see the used value
=========================
= Numerical Integration =
=========================
TOTAL NR. OF POINTS 4738
BLOCK LENGTH 256
NR. OF BLOCKS 20
MAX. NR. OF SYMMETRY UNIQUE POINTS PER BLOCK 35
NR. OF K-POINTS PROCESSED TOGETHER IN BASPNT 5
NR. OF SYMMETRY OPERATORS (REAL SPACE) 48
SYMMETRY OPERATORS IN K-SPACE 48
If you want to change the default settings you can specify the CPVector and KGRPX keywords. The optimal combination depends on the calculation, on the machine. Example
CPVector 512
KGRPX 3
Note: bigger is not necessarily better.
### Reduced basis set¶
When starting work on a large unit cell it is wise to start with a DZ basis. With such a basis, one can test for instance the quality of the k-space integration. However, for most properties, the DZ basis is probably not very accurate. You can next go for the DZP (if available) or TZP basis set, but that may be a bit of overkill.
### Frozen core for 5d elements¶
The standard basis sets TZ2P are not optimal for third-row transition elements. Sometimes you need to relax the frozen core dependency criterion
Dependency Core=0.8 ! The frozen core overlap may not be exactly 1
|
2018-11-21 08:22:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3802252411842346, "perplexity": 2369.833516903718}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039747369.90/warc/CC-MAIN-20181121072501-20181121094501-00385.warc.gz"}
|
http://hbd.org/hbd/archive/5183.html
|
## HOMEBREW Digest #5183 Fri 04 May 2007
FORUM ON BEER, HOMEBREWING, AND RELATED ISSUES
Digest Janitor: pbabcock at hbd.org
***************************************************************
THIS YEAR'S HOME BREW DIGEST BROUGHT TO YOU BY:
Your Business Name Here
Visit http://hbd.org "Sponsor the HBD" to find out how!
Support those who support you! Visit our sponsor's site!
********** Also visit http://hbd.org/hbdsponsors.html *********
Contents:
Darrell's ecosystem ("Peter A. Ensminger")
RE: keg priming vs. oxidation ("Brian Lundeen")
re: Immersion chiller vs. kettle temperature probe (John Schnupp)
Re: Peristaltic pump (FLJohnson52)
Cylindroconical fermenter ("Doug Moyer")
Detergent for beer glassware ("Doug Moyer")
Peristaltic Pumps (mabrooks)
beer lambert law and applicability based on concentration (Aaron Martin Linder)
The Bouguer-Lambert-Beer Law and Beer (J A S Viggiano)
Re: Peristaltic pump ("Craig S. Cottingham")
RE: Peristaltic pump ("Ronald La Borde")
RE: Peristaltic pump ("Kevin Weaver")
* * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* The HBD Logo Store is now open! *
* http://www.hbd.org/store.html *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* Suppport this service: http://hbd.org/donate.shtml *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* Beer is our obsession and we're late for therapy! *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * *
Send articles for __publication_only__ to post@hbd.org
If your e-mail account is being deleted, please unsubscribe first!!
To SUBSCRIBE or UNSUBSCRIBE send an e-mail message with the word
"subscribe" or "unsubscribe" to request@hbd.org FROM THE E-MAIL
ACCOUNT YOU WISH TO HAVE SUBSCRIBED OR UNSUBSCRIBED!!!**
IF YOU HAVE SPAM-PROOFED your e-mail address, you cannot subscribe to
the digest as we cannot reach you. We will not correct your address
for the automation - that's your job.
HAVING TROUBLE posting, subscribing or unsusubscribing? See the HBD FAQ at
http://hbd.org.
LOOKING TO BUY OR SELL USED EQUIPMENT? Please do not post about it here. Go
instead to http://homebrewfleamarket.com and post a free ad there.
The HBD is a copyrighted document. The compilation is copyright
HBD.ORG. Individual postings are copyright by their authors. ASK
before reproducing and you'll rarely have trouble. Digest content
cannot be reproduced by any means for sale or profit.
More information is available by sending the word "info" to
req@hbd.org or read the HBD FAQ at http://hbd.org.
JANITORs on duty: Pat Babcock (pbabcock at hbd dot org), Jason Henning,
and Spencer Thomas
----------------------------------------------------------------------
Date: Thu, 03 May 2007 23:52:43 -0400
From: "Peter A. Ensminger" <ensmingr at twcny.rr.com>
Subject: Darrell's ecosystem
Back in http://www.hbd.org/hbd/archive/5181.html#5181-6 , Darrel says he
has been playing around with lots of microorganisms. He takes 16
probiotic microorganisms and wonders if they will interact with the 5
(or more) microorganisms in his pLambic.
While I love a good Lambic (especially a Gueuze), I have noticed
digestive problems after drinking several during a session. I take no
probiotics, other than an occasional yogurt. So, I suggest moderation.
BTW Darrell ... if you die, please donate your body to the HBD, which
could use a cash infusion (or is it a "cash decoction").
Cheers!
Peter A. Ensminger
Syracuse, NY
Apparent Rennerian: [394, 79.9]
Return to table of contents
Date: Thu, 3 May 2007 23:22:35 -0500
From: "Brian Lundeen" <blundeen at mts.net>
Subject: RE: keg priming vs. oxidation
> Date: Mon, 30 Apr 2007 22:28:44 -0700
> From: Brian Miller <bj_mill at pacbell.net>
> Subject: keg priming vs. oxidation
>
> Totally a**l I know, but
> I've detected oxidation effects in my kegged beers so that's
> where I'm at. I've about run out of
> ideas besides keg priming to avoid this problem in the
> future. Is this common knowledge or am I missing something?
>
I know this is going to set a few eyeballs rolling (hey you, yes you, the
long haired chap crouching behind the life-sized cutout of Eric Bloom's
Harley, don't think I can't see you), but I think you are a prime candidate
for testing the beneficial effects of metabisulfite. I believe this will
take the place of the oxygen scavenging performed by an active yeast
population during sugar priming.
First, pre-treat the entire batch by adding 1/8 teaspoon of potassium
metabisulfite into the mash water. Then, at kegging, add a pinch (let's say
1/16th of a teaspoon) into the beer to be kegged. Will it make a difference?
Don't know, let us know what you find out. But at this stage, what have you
got to lose by trying it?
Cheers
Brian, in Winnipeg
Return to table of contents
Date: Thu, 3 May 2007 23:25:48 -0700 (PDT)
From: John Schnupp <john.schnupp at yahoo.com>
Subject: re: Immersion chiller vs. kettle temperature probe
nathanw at MIT.EDU (Nathan J. Williams)
>I recently purchased a new boil kettle (stainless, 34 qt), and it came
>with a thermometer installed through a weldless fitting on the side. I
>thought this was a pretty neat little feature, if not the most useful
>thing in the world for a boil kettle, until I realized that it's in
>the way of using my immersion chiller. The probe bit that sticks in
>goes far enough that the chiller would rest on the probe, rather than
>on the bottom of the pot. This seems a bit unstable and probably bad
>for the temperature probe.
>
>Dimensions: The kettle has in inside diameter of 12.5", the chiller
>has an outside diameter of 9.5", and the temperature probe sticks in 4".
So what about elevating the chiller? You didn't mention how far off the bottom
of the chiller the temp probe is. You also didn't mention if the chiller was
SS, copper or some other metal. If it is copper it should be no problem to
solder (sliver, lead-free) on some pieces to make legs. If it's made out of
some other metal you may have to be a little more creative about attaching the
legs but it should work.
You might also be able to somehow fashion hooks and chains and suspend the
chiller from the top edges of the pot.
John Schnupp, N3CNL
Georgia, VT
'95 XLH 1200 64,000
Return to table of contents
Date: Fri, 04 May 2007 08:20:18 -0400
From: FLJohnson52 at nc.rr.com
Subject: Re: Peristaltic pump
Doug asks about how to use the peristaltic pump for moving beer from
fermentor to fermenter, etc.
I took a look at the pump Doug bought. This is not the type of
peristaltic pump that I have used, but I know a little about this pump.
It appears to be a really nice pump, typically used for medical purposes
like dialysis and is essentially the type of pump used is
cardiopulmonary bypass machines. The pump is much more sophisticated
than your standard lab peristaltic pump in that it like has adjustments
for degree of tubing compression. It may accomodate various sized tubing.
I recommend getting in contact with the manufacturer if possible and
getting some documentation on the pumps use. It should at least start
you off with helping you to determine the most appropriate sized tubing
to use.
I can tell you this much. It will likely be a quite large diameter
tubing, since it is designed to move several liters per minute. Let's
hope it can accomodate some tubing that is more in line with what Doug
needs.
Doug: I'm happy to communicate with you off line on this as I may have
access to the information you need on this pump from my connections with
hospital perfusionists.
Fred L Johnson
Apex, North Carolina, USA
Return to table of contents
Date: Fri, 4 May 2007 09:05:49 -0400
From: "Doug Moyer" <shyzaboy at yahoo.com>
Subject: Cylindroconical fermenter
I have the following fermenter:
http://shyzaboy.blogsome.com/2007/03/19/a-new-toy/
Several problems with it on first brew:
(1) The clamp setup is insufficient to handle a strong fermentation. Any
suggestions for a better clamping system (without any welding)?
(2) When I tried to drop out the trub on day 3, I got a slug of what
appeared to be yeast, and then some beery yeast. Doesn't really seem like
much trub. Are the sides steep enough to concentrate the trub? Do I need to
use a rubber mallet on the sides to get the stuff to settle?
(3) It seems like a 1/2" ball valve (and associated fittings) is too narrow.
I will have to wait until it is empty to make any hard measurements, but it
seems like I could put a 1" fitting in the bottom. Comments from anyone with
the same equipment? (I think this was a blank from Toledo Metalspinning or
something like that...)
Other suggestions for working with one of these?
I will replace Bryan's stand with a lower cabinet with a stainless steel
counter top. (As soon as I design something appropriate.) I plan to use a
perastaltic pump to move the contents to the kegs, so I don't need the extra
height - which makes it a pain to move from my brewing area to the
fermenting area...
Brew on!
Doug Moyer
Troutville, VA
Star City Brewers Guild: http://www.starcitybrewers.org
Beer, brewing, travel & kids: http://shyzaboy.blogsome.com
Return to table of contents
Date: Fri, 4 May 2007 09:06:22 -0400
From: "Doug Moyer" <shyzaboy at yahoo.com>
Subject: Detergent for beer glassware
I have the fortune of a dishwasher dedicated to washing my beer glasses.
(Although, I've been so generous as to allow my wife to put the "everyday"
wine glasses in there as well - but the Riedel glasses still get washed by
hand...)
What is the best dishwasher detergent to use for washing beer glasses? I.e.,
least impact on head retention while still cleaning. Obviously, I don't want
spots. As to the visual impact when serving beer to my guests, I'd rather
have less foam than serve a beer glass covered with spots. (My water is
somewhat hard - well water in the Blue Ridge mountains...)
I currently use Cascade Complete, which is what I use in our main
dishwasher.
Thoughts? Comments?
Brew on!
Doug Moyer
Troutville, VA
Star City Brewers Guild: http://www.starcitybrewers.org
Beer, brewing, travel & kids: http://shyzaboy.blogsome.com
Return to table of contents
Date: Fri, 4 May 2007 09:34:13 -0700 (PDT)
From: mabrooks <mabrooks12 at yahoo.com>
Subject: Peristaltic Pumps
>From: "Doug Moyer" <shyzaboy at yahoo.com>
>Subject: Peristaltic pump
>I purchased a peristaltic pump off of eBay.
>I have no documentation to go with it, and I've never
>used a peristaltic pump before.
>For those of you that use these things, please answer
>a question or two...
Doug, You wont need to prime as it is a positive
displacement pump. I have used these for many, many
years in the research field (from really high out-put
to only milliliters per minute) and I do like them. I
have never used one for beer before, however, I dont
forsee any issues except throughput and tubing
selection...not sure what the speed is on that
particular unit, some are pretty slow, so it may take
a while to transfer 5 gallons. A bit of advice - Get
good tubing! The roller heads on these units can be
very agressive (wear) on the tubing that is used in
them. I used to purchase a special, more resistant
tubing for inside the head and a less expensive tubing
for the suction and discharge portions, this is not
really necessary if you are not using it in a
continuous manner like I was (24/7). Make sure you
keep an eye on the tubing as it will likely wear and
rupture just when you least want it to. Also, you can
run it for a several batches and then open the head up
and yank on one end of the tubing and pull a new
(unworn) section through the head (leaving everything
else in place) now the worn area will just be used for
the suction or discharge tubing and not the "working"
part of the head of the unit. I used a special tubing
installation "key" to get the tubing in the roller
portion correctly...made it simple to install/ change
tubing.
Matt B.
Northern Va.
Return to table of contents
Date: Fri, 4 May 2007 12:53:57 -0400 (EDT)
From: Aaron Martin Linder <lindera at umich.edu>
Subject: beer lambert law and applicability based on concentration
matt B. recently wrote:
"Recent postings on the subject topic have stated that
Beer does in fact follow Beers law....hmmmm, perhaps
it does, however, I would like to throw out the
following to ponder:
Diluting a liquid by 10, 20 or 50% and meauring it on
a spectrophotometer, by no means proves it follows
Beer-Lambert(doesnt disprove it either)."
In fact, this is exactly the way to test whether the Beer-Lambert law
applies to a particular beer. Beer-Lambert states that the Absorbance
spectrum of an aqueous solution is proportional to the concentration of
the solution(this can be a particular component or a mixture of
components).
If we take the absorbance spectrum of a non-scattering beer solution(a
non-turbid degassed sample) at various concentrations (of beer) if the
absorbance is linear with concentration, the law applies to that
particular beer.
There are of course a lot of things that can confound the results, but we
can specify that the beer has to be clear and degassed and that there are
possible instrument or detector-based aberrations, independent of the beer
solution itself(ie stray light, detector noise, etc. which can be
determined.)
While there is no concentration known for each component that absorbs each
wavelength we would assume that each absorbing component will contribute
to the overall absorbance in a linear way. if it does not then either our
spectrophotometer is limited, our beer sample is cloudy or gassy or
beer-law doesn't apply.
aaron
ann arbor,mi
Return to table of contents
Date: Fri, 04 May 2007 15:43:28 -0400
From: J A S Viggiano <jasv at acolyte-color.com>
Subject: The Bouguer-Lambert-Beer Law and Beer
In response to Matt Brooks's recent posting (digest 5182) regarding the
Bouguer-Lambert-Beer law and beer:
1. The concentration of an undiluted beer is known with
near-metaphysical certitude: it's unity! Accordingly, "c" was known for
all cases measured.
2. If the Bouguer-Lambert-Beer Law applies, it applies at all
wavelengths, not just the wavelength of peak absorbance. See A Beer,
"Bestimmung des Absorption des rothen Lichts in farbigen
Fluessigkeiten," _Annalen_Physik_und_Chemie_, Band 86, Heft 2, Seiten
78-90 (1852).
3. The applicability of this law to (especially dark) beers has been
called into question by Ray Daniels and George Fix. See, e.g., Ray
Daniels, "Beer color demystified," _Brewing_Techniques_, July-August
1995.
4. In order to address this issue, some of us have actually looked into
this experimentally. Our interest was not in determining the individual
constituents of the beer, or the wort from which it was fermented, or
the grains from which it was mashed, but whether or not the
Bouguer-Lambert-Beer law is applicable to beers.
5. The study which Mr Brooks dismisses in fact does fly in academe,
possibly because it was/is scientifically sound (modesty prohibits my
asserting this point, however).
6. Professionals with backgrounds in disciplines other than water
science can and do make significant contributions in this area.
7. If one knows the absorbance spectrum of a beer at one concentration
and for one path length, one can compute its apparent color for any
combination of cencentration and path length, provided it obeys the
Bouguer-Lambert-Beer law. Here's a step-by-step:
A. Divide the absorbance spectrum by the product of the path length and
concentration at which it was measured. This is the absorbance spectrum
at unit path length and concentration.
B. Multiply the unit absorbance spectrum by the product of the desired
path length and concentration.
C. Convert Absorbance into transmittance by taking the common
antilogarithm (base 10) of the negative of the absorbance at each
wavelength (T = 10.0 ** -A).
D. Select an observer and illuminant combination. The CIE 1964 Standard
Supplementary Observer and the (now deprecated) CIE Standard Illuminant
C are, IIRC, those recommended by the American Society of Brewing
Chemists. Data for the standard observers and D-series (Daylight)
illuminants are available at:
http://www.cis.rit.edu/mcsl/online/cie.php
I would recommend illuminant D65 instead of deprecated illuminant C.
E. Compute the X, Y, Z tristimulus values of the sample:
X = sum (t_\lambda xbar_\lambda S_\lambda)
Y = sum (t_\lambda xbar_\lambda S_\lambda)
Z = sum (t_\lambda xbar_\lambda S_\lambda)
where t is the spectral transmittance; xbar, ybar, and zbar are the
color matching functions for the selected observer; S is the spectral
power distribution of the selected illuminant, and the summation is over
all wavelengths (lambda).
F. If desired, compute the CIELAB coordinates as:
L* = 116 * f(Y/Yn) - 16
a* = 500 * [f(X/Xn) - f(Y/Yn)]
b* = 200 * [f(Y/Yn) - f(Z/Zn)]
where Xn, Yn, and Zn are, in this case, taken as the XYZ tristimulus
values of a material with unit transmittance at every wavelength (the
perfect transmitter), and the Pauli function f(u) is defined as:
{u ** (1/3), u < (6/29)**3
f (u) = {
{u * 29*29/108 + 4/29, u <= (6/29)**3
(The constant 29*29/108 is approximately 7.787, and the constant
(6/29)**3 is approximately 0.008856. These approximate values were used
in CIE Publication 15.2, _Colorimetry_, second edition; the exact values
are now used in the current Third edition.)
L* is metric Lightness; black is 0, White is 100; a* is metric Redness
(positive)/Greenness (negative); b* is metric Yellowness (positive) /
Blueness (negative). Neutral (achromatic) objects have a* = b* = 0.
G. If desired, one may compute the metric Hue angle and Chroma:
tan (h_{ab}) = b* / a*
C*_{ab} = sqrt ((a*)**2 + (b*)**2)
The Chroma of neutrals is zero; as an object's Chroma increases, is
becomes more vivid. The hue angle is defined so that Reds have a hue
angle close to 45 degrees, Yellows close to 95 degrees, Greens close to
160 degrees, and Blues close to 230 degrees.
Therefore, even if you measure a sample of a beer at a concentration of
0.2 (that's 200 ml/l) in a cuvette 1 cm thick (internal dimension), you
may compute, if it obeys the Bouguer-Lambert-Beer law, not only its
absorbance at arbitrary concentrations and pathlengths, but also obtain
its Lightness, Hue, and Chroma.
8. Bouguer made a significant contribution to this law; his work
predated that of Lambert by a few years and that of Beer by over a
century. Lambert was lavish in his praise for Bouguer in his landmark
opus, _Photometria_. Bouguer was apparently began the formulation of
this law. It is only right to recognize his contribution by not ignoring
him when naming this law.
I welcome any thoughtful, constructive discussion of this work.
==JASV
Return to table of contents
Date: Fri, 4 May 2007 16:19:10 -0500
From: "Craig S. Cottingham" <craig.cottingham at gmail.com>
Subject: Re: Peristaltic pump
On May 3, 2007, at 14:09, "Doug Moyer" <shyzaboy at yahoo.com> wrote:
> I purchased a peristaltic pump off of eBay.... Does it require
> priming or is it self-priming?
I've never used one, but peristaltic pumps should be self-priming.
IIRC, they're positive-displacement pumps, which means they move a
fixed volume of fluid per shaft revolution. (Contrast with
centrifugal pumps, such as the near-ubiquitous March 809, which
generate a pressure differential.)
I'm guessing from the brand name on the one you purchased that it was
used for moving blood, in which case I should *hope* it's self-
priming. :-)
- --
Craig S. Cottingham
BJCP Certified judge from Olathe, KS ([621, 251.1deg] Apparent
Rennerian)
craig.cottingham at gmail.com
Return to table of contents
Date: Fri, 4 May 2007 17:27:21 -0500
From: "Ronald La Borde" <pivoron at cox.net>
Subject: RE: Peristaltic pump
>From: "Doug Moyer" <>
>Does it require priming or is it self-priming?
>What type of hose should I use?
No priming needed, that's one of the benefits. It is also OK to run the pump
dry, in fact you can use it as a vacuum pump, or a pressure pump with air or
gas in addition to liquid.
The hose to use will be the tricky part. You need to find out from the Mfg.,
or someone familiar with it to know the size tubing. You will need tubing
made for the purpose of a peristaltic pump. Cole Palmer has a multi
selection, they may be able to look up your model. The tubing size must
match the pump and this will determine the flow rate in combination with the
pump speed.
I particularly like the Masterflex brand with interchangeable heads. You got
a good price on your bid, so once you get the tubing (it's expensive, by the
way, but worth it), you will find many uses for it.
Ron
Ronald J. La Borde -- Metairie, LA
New Orleans is the suburb of Metairie, LA
New Orleans is the New Atlantis
Return to table of contents
Date: Fri, 4 May 2007 21:12:55 -0400
From: "Kevin Weaver" <kweaver at brewmation.com>
Subject: RE: Peristaltic pump
Hi Doug,
We use Peristaltic pumps on our Brewmation Brewery. They work very well
for wort and beer. They do not need priming as they are positive
displacement pumps. Norprene tubing is a good choice. You will have to
be careful and match the tubing with the roller settings. If it is not
set up for the tube's wall thickness, it can jam the pump or cause the
pump to transfer less then the design (will become un-predictable). We
use this style pump for the sparge and the flow is right on time after
time. What looks neat about the pump you bought is that you can run two
lines. You will be able to maintain the mash tun level (assuming you do
all grain) without worrying about matching the flow rates.
Since the wort/beer only touches the tubing, sanitation is easy...Very
good for pumping between fermenters etc. This pump you bought looks
like it is a higher volume pump then our pumps designed for the sparge,
but check out this link containing info on our pumps....
http://www.brewmation.com/MashPumps.html
Hope this helps out...Looks like you found a good pump at a good price.
Kevin
Return to table of contents
HTML-ized on 05/05/07, by HBD2HTML v1.2 by KFL webmaster@hbd.org, KFL, 10/9/96
|
2018-01-20 20:52:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3672288656234741, "perplexity": 8071.151779471508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889733.57/warc/CC-MAIN-20180120201828-20180120221828-00592.warc.gz"}
|
http://mathhelpforum.com/calculus/174157-implicit-differentiation-con-t.html
|
1. ## implicit differentiation con't
Problem: $x^4(x+y)=y^2(3x-y)$
Solution attempt:
$\frac{d}{dx}[x^4(x+y)]-\frac{d}{dx}[y^2(3x-y)]=0$
$[4x^3(x+y)+x^4(1+y')] - [2yy'(3x-y)+y^2(3y-y')]=0$
upon distribution of coefficients,
$4x^4+4x^3y+x^4+x^4y'-6xyy'+2y^2y'-3y^2+y^2y'=0$
If I didn't subtract $\frac{d}{dx}[y^2(3x-y)]$ as I did in step 1 I end up with,
$4x^4+4x^3y+x^4+x^4y'=6xyy'+2y^2y'-3y^2+y^2y'$
This is where I am confronted with many mathematically valid yet counter productive steps as I try to solve for $y'$
combining like terms I have,
$5x^4+4x^3y+x^4y'=6xyy'+3y^2y'-3y^2$
$5x^4+4x^3y+x^4y'=y'(6xy+3y^2)-3y^2$
Answer in book: $y'=\frac{3y^2-5x^4-4x^3y}{x^4+3y^2-6xy}$
notes: what gets me is the fact that I have two $y'$ and $\frac{y'}{y'}=1$
2. i'm going to combine like termms and see where that gets me.
3. Originally Posted by Foxlion
i'm going to combine like termms and see where that gets me.
First, you have a sign error.
4. Originally Posted by Foxlion
Problem: $x^4(x+y)=y^2(3x-y)$
Solution attempt:
$\frac{d}{dx}[x^4(x+y)]-\frac{d}{dx}[y^2(3x-y)]=0$
$[4x^3(x+y)+x^4(1+y')] - [2yy'(3x-y)+y^2(3y-y')]=0$
$[4x^3(x+y)+x^4(1+y')] - [2yy'(3x-y)+y^2(3-y')]=0$
-Dan
5. $[4x^3(x+y)+x^4(1+y')] - [2yy'(3x-y)+y^2(3-y')]=0$
-Dan
is typo, following lines do not take it into account
6. I'll just springboard off of Dan's post (and yours...):
yes, you want to combine like terms. You have, among other things,
$x^4y' - 6xyy' + 2y^2y' +y^2y' = y'(x^4 - 6xy + 3y^2)$
After you move the rest of the terms to the other side and divide by
$(x^4 - 6xy + 3y^2)$, you'll have the desired result.
7. <sigh>...I got it...
$x^4(x+y) = y^2(3x-y)$
differentiating both sides...
$4x^3(x+y)+x^4(1+y') = 2yy'(3x-y+y^2(3-y')$
distributing coefficients...
$4x^4+4x^3y+x^4+x^4y' = 6xyy'-2y^2y'+3y^2-y^2y'$
combining like terms...
$5x^4+4x^3y+x^4y' = 6xyy'-3y^2y'+3y^2$
subtracting $3y^2$from both sides...
$5x^4+4x^3y+x^4y'-3y^2 = 6xyy'-3y^2y'$
subtracting $x^4y'$ from both sides...
$5x^4+4x^3y-3y^2 = 6xyy'-3y^2y'-x^4y'$
factoring out $y'$ from right...
$5x^4+4x^3y-3y^2 = y'(6xy-3y^2-x^4)$
dividing both sides by $(6xy-3y^2-x^4)$...
$\frac{5x^4+4x^3y-3y^2}{6xy-3y^2-x^4} = y'$
now for some rearrangement as well as multiplying both sides by $\frac{-1}{-1}$...
$y' = \frac{3y^2-5x^4-4x^3y}{x^4+3y^2-6xy}$
Thank you both
|
2017-02-25 09:31:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 33, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7667266726493835, "perplexity": 1910.565249934454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171706.94/warc/CC-MAIN-20170219104611-00024-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://brilliant.org/problems/a-discrete-mathematics-problem-by-naren-bhandari/
|
An algebra problem by Naren Bhandari
Algebra Level 3
$\large p(q-r)x^2 + q(r-p)x + r(p-q) = 0$
If the equation above has two equal roots, find $$\dfrac{1}{p} + \dfrac{1}{r}$$.
×
|
2017-11-24 09:30:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5461798310279846, "perplexity": 6294.307433326062}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934807344.89/warc/CC-MAIN-20171124085059-20171124105059-00209.warc.gz"}
|
https://lexique.netmath.ca/en/y-intercept/
|
# Y-Intercept
## Y-Intercept
The y-intercept of a graph of a function f represented on a Cartesian plane is the y-coordinate of the point at the coordinates (0, f(0)), or the point where the line intersects with the y-axis.
The y-intercept of a function f is therefore the value of f when the independent variable x is zero, or f(0).
The expression “y-intercept” can also indicate the point where the line of a function intersects with the y-axis.
### Example
The y-intercept of the graph of the function defined by $$f(x) = −\frac{8}{3}x + 2$$ is 2 and its x-intercept is $$\frac{3}{4}$$.
|
2023-03-24 12:40:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.767581045627594, "perplexity": 283.4812036619099}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945282.33/warc/CC-MAIN-20230324113500-20230324143500-00710.warc.gz"}
|
https://mathexpressionsanswerkey.com/math-expressions-grade-5-unit-6-lesson-8-answer-key/
|
# Math Expressions Grade 5 Unit 6 Lesson 8 Answer Key Equations and Parentheses
## Math Expressions Common Core Grade 5 Unit 6 Lesson 8 Answer Key Equations and Parentheses
Math Expressions Grade 5 Unit 6 Lesson 8 Homework
Solve each problem if possible. If a problem does not have enough information, write the information that is needed to solve the problem. Show your work.
Question 1.
At the school bookstore, Quinn purchased a binder for $4.75 and 4 pens for$0.79 each. What was Quinn’s total cost (C)?
Given,
Cost of binder = $4.75 Cost of 4 pens = 4 ×$0.79 = $3.16 Therefore, total cost =$4.75 + $3.16 =$7.91.
Question 2.
A school bus has 12 rows of seats, and 4 students can be seated in each row. How many students (s) are riding the bus if 11 rows are filled with students, and 2 students are riding in the twelfth row?
Given,
school bus has 12 rows of seats, and 4 students can be seated in each row.
For 11 rows, number for students = 11 × 4 = 44
And 2 students are in 12th row
Therefore, total students riding in the bus = 44 +2 = 46.
Question 3.
A group of 16 friends visited an amusement park. When they arrived, $$\frac{3}{4}$$ of the friends wanted to ride the fastest roller coaster first. How many friends (t) wanted to ride?
Given,
A group of 16 friends visited an amusement park.
Let the number to friends want to ride be Y
Y = 16 × 3/4 = 4 × 3 = 12
Therefore, number of friends want to ride are 12
Question 4.
Zeke is shipping clerk for a large business. Today he spent 90 minutes preparing boxes for shipping. One box weighed 10 pounds and 7 boxes each weighed 3$$\frac{1}{2}$$ pounds. What is the total weight (w) of the boxes?
Given,
One box weighed 10 pounds
Weight of other 7 boxes is 7 × 7/2 = 24.5 pounds
Therefore, total weight = 10 + 24.5 = 34.5 pounds
Question 5.
A middle school faculty parking lot has 3 rows of parking spaces with 13 spaces in each row, and 1 row of 7 spaces. How many vehicles (y) can be parked in the faculty lot?
Given,
A parking lot has 3 rows with 13 spaces in each row = 3 × 13 = 39
And 1 row of 7 spaces = 1 × 7 = 7
Therefore, number of Vechiles can be parked in faculty lot is 39 + 7 = 46 Vechiles.
Question 6.
Rochelle’s homework always consists of worksheets. Last night, the average amount of time she needed to complete each worksheet was 15 minutes. How much time (t) did Rochelle spend completing worksheets last night?
Given, let the number of worksheets be 2
Time needed to complete each worksheet = 15 minutes.
Therefore, time needed to complete 2 worksheets = 2 × 15 = 30 minutes
Math Expressions Grade 5 Unit 6 Lesson 8 Remembering
Multiply.
Question 1.
Question 2.
Question 3.
Question 4.
Question 5.
Question 6.
Question 7.
Question 8.
Multiply or divide.
Question 9.
Question 10.
Question 11.
Question 12.
Write an equation and use it to solve the problem. Draw a model it you need to.
Question 13.
Lindsay is shopping for a new CD player. The cost of one CD player she is considering is $56.55. The cost of a higher priced CD player is$14.25 more. What is the cost (c) of the higher priced CD player?
Given,
The cost of one CD player she is considering is $56.55. The cost of a higher priced CD player is$14.25 more.
Let the cost of a higher priced CD player be Y
Then, Y = $56.55 +$14.25 = $70.8 Therefore, The cost of a higher priced CD player is$70.8
Question 14.
Stretch Your Thinking Use the equation below to write a word problem. Leave out one piece of information that is needed to solve the problem and describe the information that should have been included, b = (5 • 6) + 10
|
2021-10-21 12:24:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25225594639778137, "perplexity": 1897.7407361814685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585405.74/warc/CC-MAIN-20211021102435-20211021132435-00526.warc.gz"}
|
https://omtsa.ahievran.edu.tr/icerik/participants
|
# Participants
ID Type Title Authors Country 1 Paper Generalized fractional integral operators on Morrey type spaces Abdulhamit KÜÇÜKASLAN TURKEY 2 Paper Commutators of Marcinkiewicz integral on generalized weighted Morrey spaces Afaq ISMAYILOVA AZERBAIJAN 3 Paper Fractional oscillatory integral operators and their commutators on generalized Orlicz-Morrey spaces of the third kind Ahmet EROĞLU TURKEY 4 Paper On the summability by means of matrix transformations Ahmet KARAKAŞ TURKEY 5 Paper L-homology theory of FSQL-manifolds and the degree of FSQL-mappings Akif ABBASOĞLU TURKEY 6 Paper Boundedness of the vector-valued maximal operator on generalized Morrey spaces Ali AKBULUT TURKEY 7 Paper Approximation of functions by Mellin m -singular integrals at characteristic points Ali M. MUSAYEV AZERBAIJAN 8 Paper On properties of functions in the grand Sobolev-Morrey spaces Alik M. NAJAFOV AZERBAIJAN 9 Paper The embedding theorems of space $S_{p,\varphi ,\beta }^{l}W\left(G\right)$ Alik M. NAJAFOV AZERBAIJAN 10 Paper Fractional multilinear integrals with rough kernels on generalized weighted Morrey spaces Amil A. HASANOV AZERBAIJAN 11 Paper The extrapolation theorems for weighted generalized Morrey spaces Amiran GOGATISHVILI CZECH REPUBLIC 12 Paper Mixed Morrey estimates for singular integral operators and their applications Andrea SCAPELLATO ITALY 13 Paper Approximation by Kantrovich operators in Morrey spaces Arash GHORBANALIZADEH IRAN 14 Paper n-Tuplet Coincidence Point Theorems in Partially Ordered Probabilistic Metric Spaces Arife Aysun KARAASLAN TURKEY 15 Paper Parabolic fractional integral operators with rough kernel in parabolic local generalized Morrey spaces Aydın S. BALAKISHIYEV AZERBAIJAN 16 Paper Poincare type inequality in Besov-Morrey type spaces Aygun T. ORUJOVA AZERBAIJAN 17 Paper Integral Operators of Harmonic Analysis in Local Morrey-Lorentz Spaces Ayhan ŞERBETÇİ TURKEY 18 Paper On approximation theorem for two-dimensional Szasz type operator in Lebesgue spaces Aynur N. MAMMADOVA AZERBAIJAN 19 Paper Existence of a pair of new recurrence relations for the Meixner-Pollaczek polynomials Aynura M. Jafarova AZERBAIJAN 20 Paper Global regularity in Orlicz-Morrey spaces of solutions to nondivergence elliptic equations with VMO coefficients Aysel A. AHMADLI AZERBAIJAN 21 Paper On asymptotic formula for two-dimensional Bernstein-Chlodowsky type polynomials Aytekin E. ABDULLAYEVA AZERBAIJAN 22 Paper Some properties of a function spaces of Lizorkin-Triebel-Morrey type with dominant mixed derivatives Azizgul M. GASYMOVA AZERBAIJAN 23 Paper Some results concerning the summability of infi nite series Bağdagül KARTAL TURKEY 24 Paper Mathematical Modeling of Local Bacterial Infection Bahatdin DAŞBAŞI TURKEY 25 Paper Positive solutions of second-order neutral differential equations with distributed deviating arguments Bengü ÇINA TURKEY 26 Paper Positive solutions of second-order neutral differential equations with forcing term Bengü ÇINA TURKEY 27 Paper General spectral stability theorem for the eigenvalues of a pair of linear operators Bien Thanh Tuyen RUSSIA 28 Paper Boundedness of the Fractional Maximal Operator in the Local Morrey-Lorentz Spaces Canay AYKOL TURKEY 29 Paper Some boundedness of homogeneous B-fractional integrals on $H^{p}_{\Delta_{\nu}}$ Hardy spaces Cansu KESKİN TURKEY 30 Poster A Novel Chelyshkov Approach Technique for Solving Functional Integro-Differential Equation with Mixed Delays Cem OĞUZ TURKEY 31 Poster Chelyshkov collocation Approach for Solving Some Population Models Cem OĞUZ TURKEY 32 Paper On pre-compactness of a set in general local and global Morrey-type spaces Dauren Matin KAZAKHSTAN 33 Paper F-contraction of Multivalued Integral Type Mapping and α-admissible Operator Derya SEKMAN TURKEY 34 Paper On Marcinkiewicz-type interpolation theorem for Morrey-type spaces Diana Chigambayeva KAZAKHSTAN 35 Paper Approximation Properties of Generalized Bernstein Operators Dilek SÖYLEMEZ TURKEY 36 Paper On the growth of the algebraic polynomials on whole complex plane with respect to norm of Bergman space Eda ORUÇ TURKEY 37 Paper B-potential operator with the Lorentz distance and its inverse Elina Shishkina RUSSIA 38 Paper The Hardy-Littlewood-Sobolev theorem for Riesz potential generated by Gegenbauer operator Elman C. IBRAGIMOV AZERBAIJAN 39 Paper Parabolic fractional maximal operator with rough kernels in parabolic local generalized Morrey spaces Elmira A. GADJIEVA AZERBAIJAN 40 Paper On Existence and Convergence Theorems for A New Multivalued Mapping in Geodesic Spaces Emirhan HACIOĞLU TURKEY 41 Paper Global exponential stability of BAM neural networks with varying delays and impulses Erdal KORKMAZ TURKEY 42 Paper Global asymptotic stability of a certain integro-differential systems modeling neural networks with delays Erdal KORKMAZ TURKEY 43 Paper On the behavior of the algebraic polynomials in regions with cusps Fahreddin ABDULLAYEV KYRGYZSTAN 44 Paper Numerical reckoning coincidence points of a new general class of nonself operators via a simpler and faster iterative scheme Faik GÜRSOY TURKEY 45 Paper B-maximal operator on B-Orlicz spaces Fatai ISAYEV AZERBAIJAN 46 Paper Characterizations for the maximal operator on generalized weighted Orlicz-Morrey spaces Fatih DERİNGÖZ TURKEY 47 Paper Maximal function associated with a homogeneous function Gulgayit DADASHOVA AZERBAIJAN 48 Paper The solvalibity and qualitative property of boundary value problems for nonlinear degenerate elliptic equations Gulnara ZULFALIYEVA AZERBAIJAN 49 Paper Nonlinear singular integral operators depending on two parameters from another point of view Gümrah UYSAL TURKEY 50 Poster More on singular integral operators of multivariables Gümrah UYSAL TURKEY 51 Paper A New Approach Comparison of the Farthest Point Map in Fuzzy and Classic n-Normed Spaces with Examples Hakan EFE TURKEY 52 Paper Perfectly Optimally Clean Rings Handan KÖSE TURKEY 53 Paper Some New Pascal Sequence Spaces Harun POLAT TURKEY 54 Paper Implementation of entropy theory for Burgers' equation Hatice ÖZCAN TURKEY 55 Paper Necessary and suffcient conditions for the boundedness of fractional maximal operator in local Morrey-type spaces Huseyn V. Guliyev United Kingdom 56 Paper On upward half Cauchy sequences Hüseyin ÇAKALLI TURKEY 57 Paper Potential operators in modified Morrey spaces defined on Carleson curves I. B. DADASHOVA AZERBAIJAN 58 Paper On the well-posed solvability of the Neumann problem for a generalized Mangeron equation with nonsmooth coefficients Ilgar G. MAMEDOV AZERBAIJAN 59 Paper The Fractional-Order Mathematical Modeling of bacterial competition with theraphy of multiple antibiotics İlhan ÖZTÜRK TURKEY 60 Paper Various Generalizations of Fixed Point Results in b-Metric Spaces İsa YILDIRIM TURKEY 61 Paper Thin Sets in Weighted Variable Exponent Sobolev Spaces İsmail AYDIN TURKEY 62 Paper On Some Properties of a Banach Algebra İsmail AYDIN TURKEY 63 Paper On the Boundedness of Singular Integrals in Lebesgue Spaces with Variable Exponent İsmail EKİNCİOĞLU TURKEY 64 Paper Characterizations for the fractional integral operators in generalized Morrey spaces on Carnot groups Javanshir AZIZOV AZERBAIJAN 65 Paper Maximal and singular integral operators on generalized weighted Morrey spaces with variable exponent Javanshir HASANOV AZERBAIJAN 66 Paper A study on a faster Mann iterative method Kadri DOĞAN TURKEY 67 Paper Characterizations for the parabolic fractional integral operators in parabolic generalized Morrey spaces Kamala Rahimova AZERBAIJAN 68 Paper Necessary and sufficient conditions for the boundedness of comutators of B-Riesz potentials in Lebegues spaces Lale R. Aliyeva AZERBAIJAN 69 Paper Morrey type spaces over unbounded domain Lubomira G. SOFTOVA ITALY 70 Paper Magnetohydrodynamic convective flow past a curved surface in the presence of thermal radiation and chemical reaction Madiha RASHID PAKISTAN 71 Paper Inverse Spectral Problem for Energy-Dependent Integro-Differential Operator with point $\delta-$ Interaction Manaf Dzh. MANAFOV TURKEY 72 Paper On some classical operators in generalized Morrey spaces Maria Alessandra RAGUSA ITALY 73 Paper High order differentiability properties of the composition operator in Sobolev Morrey spaces Massimo Lanza de Cristoforis ITALY 74 Paper Porosity Convergence and Porosity Cluster Points in Metric Spaces Maya ALTINOK TURKEY 75 Paper Solitons in optical metamaterials with anti-cubic nonlinearity by extended G'/G-expansion approach Mehmet EKİCİ TURKEY 76 Paper Soliton and other solutions in nonlinear negative-index materials Mehmet EKİCİ TURKEY 77 Paper Embedding theorems on generalized Besov space Mehrali K. ALİEV AZERBAIJAN 78 Paper Characterizations for the nonsingular integral operator and its commutators on generalized Orlicz-Morrey spaces Mehriban OMAROVA AZERBAIJAN 79 Paper Common Fixed Point Results for the (F, L)-Weak Contraction on Complete Weak Partial Metric Spaces Meltem KAYA TURKEY 80 Paper On the Characterizations of Timelike Curves which Spherical Indicatrices are Conics in Minkowski 3-space Mesut ALTINOK TURKEY 81 Paper On Generalized Deferred Cesàro Mean Mikail ET TURKEY 82 Paper Variation Diminishing Convolution Kernels Associated with Second Order Differential Operators Moncef DZIRI TUNISIA 83 Paper Boundedness in weighted Lebesgue spaces of Riesz potentials on commutative hypergroups Mubariz G. HAJIBAYOV AZERBAIJAN 84 Paper λ-Statistical Convergence in Fuzzy Normed Linear Spaces Muhammed Recai TÜRKMEN TURKEY 85 Paper Singularities of Ruled Surfaces and Legender Curves Murat BEKAR TURKEY 86 Paper A Study on Complexified Semi-Quaternions Murat BEKAR TURKEY 87 Paper The boundedness of the Hardy-Littlewood maximal operator Müberra DİKMEN TURKEY 88 Paper Norm and endpoint estimates for commutators of fractional maximal function Müjdat AĞCAYAZI TURKEY 89 Paper Some fixed point results for a new class of multivalued operators in the metric spaces Müzeyyen ERTÜRK TURKEY 90 Paper On Certain Modified Balázs-Szabados Operators in Polynomial Weight Spaces Müzeyyen ÖZHAVZALI TURKEY 91 Paper A Numerical Application for Some Modifi ed Operators Müzeyyen ÖZHAVZALI TURKEY 92 Paper Some Fixed Point Results About Multivalued Almost F-Contraction with α-Admissible Mapping Necip ŞİMŞEK TURKEY 93 Paper Necessary conditions for the absolute matrix summability of infinite series Nedret ÖZGEN TURKEY 94 Paper Generalized maximal functions in classical Lorentz spaces Nevin BİLGİÇLİ TURKEY 95 Paper Interpolation Theorem for Besov-Morrey type Spaces Nilufer R. RUSTAMOVA AZERBAIJAN 96 Paper A New Penalty Function Approach for Inequality Constrained Optimization Problems Nurullah YILMAZ TURKEY 97 Paper Fractional differential and integral operators: Properties and some applications Praveen AGARWAL INDIA 98 Paper Totally bounded sets in nonstandard function spaces Przemysław GORKA POLAND 99 Paper On the Hardy averaging operator in variable exponent weighted Lebesgue spaces Rabil AYAZOĞLU TURKEY 100 Paper On the sub-supersolution method for p(x)-Laplacian equations Rabil AYAZOĞLU TURKEY 101 Paper Spectral Analysis of Hill operator On lassoshaped graph Rakib EFENDIEV AZERBAIJAN 102 Paper On the weighted pseudo almost periodic solutions of Liѐnard-type system with time-lag Ramazan YAZGAN TURKEY 103 Paper On Hardy inequality in weighted variable Lebesgue spaces with mixed norm Rovshan A. BANDALIYEV AZERBAIJAN 104 Paper The link between orthomorphisms and bi-orthomorphisms Ruşen YILMAZ TURKEY 105 Paper Bilinear Hardy inequalities Rza Mustafayev AZERBAIJAN 106 Paper Parametric Marcinkiewicz integral operator on generalized Orlicz-Morrey spaces Sabir G. HASANOV AZERBAIJAN 107 Paper Hardy operators in grand Lebesgue spaces Salaudin UMARKHADZHIEV RUSSIA 108 Paper An analogue of Young's inequality for convolutions in Morrey-type spaces of sequences Salazar Castro RUSSIA 109 Paper On the boundedness of Dunkl-type maximal function in the generalized Dunkl-type Morrey spaces Samira A.HASANLI AZERBAIJAN 110 Paper Fourier Series on Banach Function Space Selim YAVUZ TURKEY 111 Paper Hardy type integral inequalities involving many functions for 0 < p < 1 Senouci Abdelkader ALGERIA 112 Paper Apriori estimates of solutions higher order elliptic and parabolic equations of higher order in Morrey spaces Shahla GALANDEROVA AZERBAIJAN 113 Paper On the square function generated by the Bessel differential operator Simten BAYRAKÇI TURKEY 114 Paper Transmutation theory and its applications Sitnik Sergei MIKHAILOVICH RUSSIA 115 Paper The solutions of stohastic differential equations connected with nonliear elliptic equations Soltan ALIEV AZERBAIJAN 116 Paper Riesz potential associated with Schrödinger operator on generalized Morrey spaces Süleyman ÇELİK TURKEY 117 Paper An Application on Local Property of Matrix Summability of Factored Fourier Series Şebnem YILDIZ TURKEY 118 Paper Boundedness of $B$-square functions Şeyda KELEŞ TURKEY 119 Paper Error Analysis of XDG Methods for Singularly Perturbed Problems Şuayip TOPRAKSEVEN TURKEY 120 Paper A Finite Difference Methods For Fractional Differential Equations Şuayip TOPRAKSEVEN TURKEY 121 Paper The some property of solutions degenerate nonlinear parabolic equations Tahir GADJIEV AZERBAIJAN 122 Paper Complex interpolation theorem on $B^u_w$ spaces Takuya SOBUKAWA JAPAN 123 Paper Spectral stability estimates for the eigenvalues of a Dirichlet p-elliptic differential operator To GIANG RUSSIA 124 Paper On bases from cosines in Lebesgue spaces with variable summability index Togrul MURADOV AZERBAIJAN 125 Paper Embeddings between weighted complementary local Morrey-type spaces and weighted local Morrey-type spaces Tuğçe ÜNVER YILDIZ TURKEY 126 Paper On the growth of the algebraic polynomials on whole complex plane with respect to norm of Lebesgue space Tuncay TUNÇ TURKEY 127 Paper Characterizations for the fractional maximal operator, Riesz potential and their commutators on generalized Orlicz-Morrey spaces Vagif S. GULIYEV TURKEY 128 Paper Some Application Areas of Fixed Point Theory and Connections with Dynamical System Vatan KARAKAYA TURKEY 129 Paper Interpolation theory and local Morrey-type spaces Victor I. BURENKOV RUSSIA 130 Paper Rough singular integral operators on generalized weighted Morrey spaces Vugar H. HAMZAYEV AZERBAIJAN 131 Paper Maximal and singular integral operators on generalized weighted Morrey spaces with variable exponent Xayyam A. BADALOV AZERBAIJAN 132 Paper Fractional maximal operator on Heisenberg group on generalized Morrey spaces Yagub Y. MAMMADOV AZERBAIJAN 133 Paper The strong convergence result of Mann-type iterative method in the Hilbert spaces Yılmaz ALTUN TURKEY 134 Paper Data dependence analysis for a new faster iteration method Yunus ATALAN TURKEY 135 Paper Some Questions of Harmonic Analysis in Weighted Morrey Type Spaces Yusuf ZEREN TURKEY 136 Paper Estimations of the norm of functions from Sobolev-Morrey type space, reduced by polynomials Zaman SAFAROV AZERBAIJAN
|
2018-05-28 02:57:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6679336428642273, "perplexity": 8714.706540539191}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794870771.86/warc/CC-MAIN-20180528024807-20180528044807-00405.warc.gz"}
|
http://www.sciforums.com/threads/capacitor-to-store-lightning.40964/page-4
|
# Capacitor to store lightning?
Discussion in 'General Science & Technology' started by cato, Sep 21, 2004.
Not open for further replies.
1. ### Billy TUse Sugar Cane Alcohol car FuelValued Senior Member
Messages:
23,198
To get to just a million volts BennyF would need a series string of 50 of those large 20KV capacitors. The string would be a would have only 0.02 fraction of the capacity one of the capacitors in the string, which I would guess is not more than 20 microfarads. (I used quite similar capacitors for 15 years when working on the controlled fusion problem.) If that guess is correct, he would with 50 units have a 0.4 microfarad / million volt capacitor in which he could store the staggering quantity of 0.2 joules!
(Note 0.2J could sustain a 100W light bulb for 2 milli seconds! If the filament were cold and 0.2J were dumped into it, I doubt it would even get hot enough to emit any light. BennyF is going to disconnect entire office buildings from the grid with his invention! We should move this thread to "jokes and funny stories" thread. Great ignorance can be amusing.)
Most high voltage capacitors are used in a fast discharge mode to get very high powers. Thus they are also designed to keep the internal inductance as low as possible as it is the LC time constant that determines how fast you can dump the stored energy. This makes these capacitors more expensive. You did not tell the price, but I would guess at least $400 each.* If that is correct, he would pay$20,000 to store 0.2J at in at a million volts.
The 1.25 J stored at 5 V in the 1 Farad capacitors of post 53 photo is 6.5 times more energy than 0.2J so if he wanted to store the same 1.25J at a million volts, and my guesses are about correct, it would cost him 6.5x20,000 = $1.25 million dollars - nearly what I calcualted before in post 54 but less as only 50, not a million subunits to buy and wire up. These near identical results tend to confirm my guesses. PS one reason why high voltage capacitors with rating above about 20KV are not common is that is about the limit of Hg vapor ignatron switches - You don't dischage these 20KV capacitors with a knife switch especially in a string with a million volt charge. -------- * I would not be the least surprized if a low inductance, 20 microfarad, 20KV capacitor cost$1000 now. If that is the case, then BennyF string would cost more than 3 million dollars to store the same energy as the $5, low-voltage capacitor of the post 53 photo! And that does not include the oil filled room they operate in to avoid air breakdown discharge. Last edited by a moderator: Mar 26, 2010 2. ### Google AdSenseGuest Advertisement to hide all adverts. 3. ### MacGyver1968Fixin' Shit that Ain't BrokeValued Senior Member Messages: 7,028 4. ### Google AdSenseGuest Advertisement to hide all adverts. 5. ### Captain KremmenAll aboard, me Hearties!Valued Senior Member Messages: 12,738 Anyone got ideas on how to capture some of the energy from a hurricane? 6. ### Google AdSenseGuest Advertisement to hide all adverts. 7. ### Stoniphiobscurely fossiliferousValued Senior Member Messages: 3,118 8. ### MacGyver1968Fixin' Shit that Ain't BrokeValued Senior Member Messages: 7,028 A really big windmill? Please Register or Log in to view the hidden image! 9. ### BennyFRegistered Senior Member Messages: 448 Au Revoir, my fellow Americans This may be my last post on this topic. I have seen signs of a discouraged U.S. energy market, suffering from a formally-recognized recession, credible talk of a peak in oil supplies, a President of questionable birth who doesn't seem to want businessmen to make profits in any industry, and anecdotal stories of independent inventors who have had their workshops raided, their families threatened, and their inventions stolen. Two example: The inventor of the supercomputer, Seymour Cray, died in a car accident of suspicious nature, and a man who designed and tested an electrolyzer for vehicles was personally threatened so badly that he passed his ideas along to friends before he publicly announced he was quitting the business. Please Register or Log in to view the hidden image! Here's an exact quote from one paragraph of a 2006 web page: "After announcing that he had successfully built a truck that runs on Joe Cell technology, drawing energy from water and Orgone, Bill Williams said he was approached by two men who demanded that he stop his research, threatening him with dire consequences if he didn't. Others are keeping it alive." I decided to fight this malaise by registering on this board with a pseudonym (to protect my identity) and by posting enough generic information to give the country some hope that a new energy source was possible, that the proof of its' existence would come from the U.S. Patent Office, and that once a patent had been approved, the new energy source would be developed privately which would enable the existing electric grid to be spared more usage by another company. My company will not need any electricity from my grid, because my office will be electrically self-sufficient. Those were the reasons why I posted my first messages. However, as time went on, I saw few signs that anything had changed. I still saw a search for the technical details that I must keep hidden in order to satisfy the requirements of the patent office. I still saw more than enough scepticism that any energy could ever come from lightning, which has a lot of it, just waiting to be developed. I will stop posting for awhile on this topic. I may post on other topics on this website, but I will not discuss lightning (not lightening, Nasor), and I will not talk about my circuit designs, because they won't be relevant to the topics of the other boards. You all are welcome to compare the size of your dielectrics without me. Just remember that I am still working alone on my patent application. Oh, and just because my previous goals have been challenged by people who haven't seen my circuit designs, I have set a new goal. I now intend to store ONE HUNDRED BILLION VOLTS of DC electricity, using a single lightning bolt as my power source. And no, I still won't be breaking any laws, including Ohm's. Vaya con Dios, Benny F Please Register or Log in to view the hidden image! (a pseudonym) 10. ### Billy TUse Sugar Cane Alcohol car FuelValued Senior Member Messages: 23,198 Even if submerged in the highest dielectric strength oil known, you will not store it for more than a few microsecond before there is an electrical breakdown discharge. I know you are not interested in learning about these things, but if you ever change your mind, you might read about "bloom line" capacitors. They are two simple plates with extremely pure water between as the dielectric. Other capacitors quickly dump their charge into the bloom line, and over volt it. I.e. the water dielectric starts to breakdown and discharge the bloom line capacitor internally, but before an arc path thru the water can be established, the bloom line capacitor is dumped into the external load. Bloom lines are rarely used, but they can achieve the greatest power out puts of all capacitors as they can be dumped (must be dumped) in a few micro seconds or less. Even only 1 Joule, dumped in a microsecond is a 1MW power level. I don't remember the details, never worked with a bloom line, but think a well designed one can deliver higher power levels than the entire output of the largest electric plant in the world. Any discussion of bloom lines you find will help you understand dielectric breakdown mechanisms. There are no dielectric that can resist breakdown if ONE HUNDRED BILLION VOLTS exists between any two points which are not many meters* apart but as the bloom line technology shows you can overvolt the dielectric for a few microseconds of storage. Vaya con Dios, Billy T ------------------- *As the typical voltage difference between the cloud and the ground is only 200 million volts, never more than a billion volts, and you are speaking of 100 times greater voltage you had better keep the "two points" with hundred billion volts voltage difference of your device many kilometers apart to avoid air break down lightning bolt discharging your storage. Last edited by a moderator: Mar 27, 2010 11. ### Captain KremmenAll aboard, me Hearties!Valued Senior Member Messages: 12,738 Bye Benny. Here's another quote from wiki. The terawatt is equal to one trillion watts. The total power used by humans worldwide (about 16 TW in 2006) is commonly measured in this unit. The most powerful lasers from the mid-1960s to the mid-1990s produced power in terawatts, but only for nanosecond time frames. The average stroke of lightning peaks at 1 terawatt, but these strokes only last for 30 microseconds. http://en.wikipedia.org/wiki/Terawatt#Multiples @Nasor, how does that compare in energy output with the previous calculation? 12. ### NeverflyBannedBanned Messages: 3,576 Please Register or Log in to view the hidden image! Oh God! Tell another one! 13. ### BennyFRegistered Senior Member Messages: 448 "Oh God! Tell another one!" - Neverfly OK, I know where to find brand-new capacitors with voltage ratings greater than 20Kv. See you at the patent office, Benny 14. ### Billy TUse Sugar Cane Alcohol car FuelValued Senior Member Messages: 23,198 One hundred billion v = 10E11 V and 20KV = 2E4 thus in a series string you will need 10E11/2E4 or 5E7 or 50,000,000 or fifty million of them. What do they coast? Surely more than$100 so you must be very rich to foolishly spend 5 billion dollars on a scheme that will breakdown the air and discharge the stored energy in a few micro seconds.
Actually you will never get it charged up to a million volts before the corona discharge bleeds charge off as fast as you can supply it. (That rate will be limited by the LC time constant of you charging system - why a lightning bolt last for a few milli-seconds.) You know all about corona discharges, do you not?
:roflmao:
Last edited by a moderator: Mar 27, 2010
15. ### BennyFRegistered Senior Member
Messages:
448
Billy, the fact that lightning voltage only lasts for a fraction of a second simply means that lightning has a high amount of current. I've seen reputable current levels in the tens of thousands of amps, and Nasa reported that a 200 K amp lightning bolt hit a structure on the grounds once.
If you'll remember, I said in an early post that high voltage levels and high current levels were GOOD NEWS, not bad, for anyone that wants to use lightning as a power source.
I meant what I said.
I'm eager to store the energy from my first one terawatt lightning bolt, so that my company office can be electrically self-sufficient for more than a year.
16. ### Billy TUse Sugar Cane Alcohol car FuelValued Senior Member
Messages:
23,198
Terawatts are no more a measure of energy than volts are, but you have shown you do not want to learn so I will not waste more time.
Some poster already gave the energy of a typical lightning bolt and noted it is dissipated over the entire length of the bolt, not just near the ground where some could be collected. Energy you can collect is very tiny fraction of the total -perhaps \$1 worth from the power company. Your office must be very efficient (and dark at night) if your annual electric bill is only one dollar.
17. ### BennyFRegistered Senior Member
Messages:
448
Ask not what your lightning-supplied electric utility company can do for you.
Ask what you can do for your lightning-supplied electric utility company, the one that is turning water into hydrogen and oxygen on the side.
18. ### MacGyver1968Fixin' Shit that Ain't BrokeValued Senior Member
Messages:
7,028
What do you plan on doing during the winter...when electrical storms don't occur? (very often)
19. ### BennyFRegistered Senior Member
Messages:
448
One lightning bolt, processed properly by my collection and storage equipment, will supply electricity to my office and my electrolyzer. Before the voltage has been drained, another storm will come by and supply more voltage.
Please don't try to tell me that you've seen my circuit diagrams, and please don't tell me that there isn't enough juice in lightning to make collection worthwhile.
I know that most of the energy is dissipated in the air. I know that most of the lightning bolts travel from one cloud to another one. I know that people have been searching for two centuries for a method of storing that much electricity.
I also know that the spot where a lightning bolt hits becomes four times hotter than the surface of the sun. This is why lightning can and does start forest fires, including one in June of 2008 that scorched most of a wildlife reservation in North Carolina. Do you really think that this kind of energy isn't worth collecting??
Oh, please permit me to repeat myself, simply for the sake of emphasis. Any lightning bolt that hits my collection equipment won't hit anywhere else.
DO YOU REALLY THINK THAT WHAT I'M DOING ISN'T WORTHWHILE ??
20. ### BennyFRegistered Senior Member
Messages:
448
I'm trying to save the dozens of lives that are snuffed out by direct hits with lightning bolts. I'm trying to save the property that gets damaged by lightning. I'm trying to prevent firefighter resources from being mobilized on short notice and without enough food and water being given to them when they arrive at the scene of a wildfire. I'm trying to prevent the residue of fire-retardant chemicals from blighting our landscape.
I'm also trying to reduce the need for more fossil fuels to be dug up or imported, just to supply the energy that turns turbines that generate electricity for the grid. If one lightning bolt can keep my office going AND turn water into hydrogen and oxygen at a low cost, then I'm going to do it, and if nobody else knows how to process lightning, then you all can keep asking me for my circuit diagrams, and you all can keep on guessing, because it'll be MY name on the patent application, not yours.
21. ### BennyFRegistered Senior Member
Messages:
448
Hey MacGyver, lightning hits Toronto's CN tower over twenty times every year. Just how warm do you think their summer is?
And just how difficult is it really to collect 100-500Mv from a single lightning bolt, turn that into a hundred billion volts, and store it in a capacitor-based system?
Gee, it must be terribly difficult. Nobody's been able to do it since Mr. Franklin flew his kits.
Then again, nobody thought in 1950 that less than twenty years later, a man would be standing on the moon and brought back to earth safely.
22. ### BennyFRegistered Senior Member
Messages:
448
Hey, does anybody want to save 80-100 lives every year? Does anybody want to prevent forest fires, except the ones set on purpose by the US Government to reduce the amount of deadwood? Does anybody want to save the animals that are killed by lightning-sparked wildfires?
Fine. That part is easy. Set up some grounded lightning rods in the eastern half of the country, especially in northern Florida, and tell all the air-traffic controllers where they are, so that airplanes and helicopters won't bump into them. Every time a bolt hits the tower, the voltage will be grounded.
And therefore wasted.
I want more. Much more. I want a hundred billion volts to be stored in my equipment, ready to be directed through an electrolyzer and into a DC-AC inverter.
And I'm not going to rest until I get them.
23. ### MacGyver1968Fixin' Shit that Ain't BrokeValued Senior Member
Messages:
7,028
Well Benny, you didn't answer my question. To answer your's...Toronto sees temperatures in the 70's and 80's during the summer, about the same we see here in Dallas in the early Spring.
In most places, electrical storms just don't occur during the winter months. There just isn't enough energy in the atmosphere in the form of heat for the storms to form. Depending on where you live, that's 3 (Dallas) to 6 (Ohio) months of the year that your system will sit idle. What do you plan on doing during the winter months?
I'm not trying to be a "nay-sayer". I like conceptualizing new ideas too. The first thing I do with any new idea is run a "feasibility study". I welcome others to point out potential problems in my ideas, because I may not have thought of everything. You don't seem to want to hear any problems your idea might have.
|
2017-12-14 22:40:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25142064690589905, "perplexity": 1458.5621275093022}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948551162.54/warc/CC-MAIN-20171214222204-20171215002204-00130.warc.gz"}
|
http://www.smartthoughtssolutions.com/6kl8lj2f/b95640-product-rule-derivatives-with-radicals
|
# product rule derivatives with radicals
Then: The "other terms" consist of items such as ⋅ $\begingroup$ @Jordan: As you yourself say in the second paragraph, the derivative of a product is not just the product of the derivatives. 1. We just applied + The rule may be extended or generalized to many other situations, including to products of multiple functions, … g The derivative of e x. ( − is equal to x squared, so that is f of x of the first one times the second function dv is "negligible" (compared to du and dv), Leibniz concluded that, and this is indeed the differential form of the product rule. : h ) Another function with more complex radical terms. h … rule, which is one of the fundamental ways ) Ultimate Math Solver (Free) Free Algebra Solver ... type anything in there! f x 2 Tutorial on the Quotient Rule. And we won't prove In the context of Lawvere's approach to infinitesimals, let dx be a nilsquare infinitesimal. + to be equal to sine of x. In calculus, the product rule is a formula used to find the derivatives of products of two or more functions. x Then du = u′ dx and dv = v ′ dx, so that, The product rule can be generalized to products of more than two factors. $\endgroup$ – Arturo Magidin Sep 20 '11 at 19:52 ) h The rule follows from the limit definition of derivative and is given by . The derivative of f of x is the derivative exist) then the product is differentiable and, Each time, differentiate a different function in the product and add the two terms together. Popular pages @ mathwarehouse.com . = g 4. To get derivative is easy using differentiation rules and derivatives of elementary functions table. f Learn more Accept. Since two x terms are multiplying, we have to use the product rule to find the derivative. Product Rule. 4 ′ h = ) ′ product of two functions. and around the web . → For example, for three factors we have, For a collection of functions The product rule says that if you have two functions f and g, then the derivative of fg is fg' + f'g. They also let us deal with products where the factors are not polynomials. To log in and use all the features of Khan Academy, please enable JavaScript in your browser. In this free calculus worksheet, students must find the derivative of a function by applying the power rule. of this function, that it's going to be equal , we have. ( (which is zero, and thus does not change the value) is added to the numerator to permit its factoring, and then properties of limits are used. x h The following chain rule examples show you how to differentiate (find the derivative of) many functions that have an “inner function” and an “outer function.”For an example, take the function y = √ (x 2 – 3). This is going to be equal to h For instance, to find the derivative of f (x) = x² sin (x), you use the product rule, and to find the derivative of g f Calculus: Product Rule, How to use the product rule is used to find the derivative of the product of two functions, what is the product rule, How to use the Product Rule, when to use the product rule, product rule formula, with video lessons, examples and step-by-step solutions. I do my best to solve it, but it's another story. = ′ Dividing by how to apply it. Here is what it looks like in Theorem form: of two functions-- so let's say it can be expressed as with-- I don't know-- let's say we're dealing with g Examples: 1. And we're done. {\displaystyle q(x)={\tfrac {x^{2}}{4}}} If the rule holds for any particular exponent n, then for the next value, n + 1, we have. ... back to How to Use the Basic Rules for Derivatives next to How to Use the Product Rule for Derivatives. ψ x In abstract algebra, the product rule is used to define what is called a derivation, not vice versa. x squared times cosine of x. such that o k Worked example: Product rule with mixed implicit & explicit. f ′ x and taking the limit for small what its derivative is. 0 To use this formula, you'll need to replace the f and g with your respective values. times sine of x. ′ ) {\displaystyle \lim _{h\to 0}{\frac {\psi _{1}(h)}{h}}=\lim _{h\to 0}{\frac {\psi _{2}(h)}{h}}=0,} Derivative of sine h Example 1 : Find the derivative of the following function. times the derivative of the second function. The remaining problems involve functions containing radicals / … f Quotient Rule. 1 {\displaystyle f_{1},\dots ,f_{k}} x f Derivatives of functions with radicals (square roots and other roots) Another useful property from algebra is the following. apply this to actually find the derivative of something. A function S(t) represents your profits at a specified time t. We usually think of profits in discrete time frames. g ) just going to be equal to 2x by the power rule, and g In words, this can be remembered as: "The derivative of a product of two functions is the first times the derivative of the second, plus the second times the derivative of the first." {\displaystyle f(x)g(x+\Delta x)-f(x)g(x+\Delta x)} when we just talked about common derivatives. ( 1 f . it in this video, but we will learn and Derivatives of Exponential Functions. ©n v2o0 x1K3T HKMurt8a W oS Bovf8t jwAaDr 2e i PL UL9C 1.y s wA3l ul Q nrki Sgxh OtQsN or jePsAe0r Fv le Sdh. ): The product rule can be considered a special case of the chain rule for several variables. ⋅ Let's do x squared ′ By using this website, you agree to our Cookie Policy. x {\displaystyle hf'(x)\psi _{1}(h).} The derivative of a product of two functions, The quotient rule is also a piece of cake. × is sine of x plus just our function f, The product rule is if the two "parts" of the function are being multiplied together, and the chain rule is if they are being composed. ′ If you're seeing this message, it means we're having trouble loading external resources on our website. Then B is differentiable, and its derivative at the point (x,y) in X × Y is the linear map D(x,y)B : X × Y → Z given by. , ′ h ( = For any functions and and any real numbers and , the derivative of the function () = + with respect to is immediately recognize that this is the Example 4---Derivatives of Radicals. then we can write. ⋅ Therefore, if the proposition is true for n, it is true also for n + 1, and therefore for all natural n. For Euler's chain rule relating partial derivatives of three independent variables, see, Proof by factoring (from first principles), Regiomontanus' angle maximization problem, List of integrals of exponential functions, List of integrals of hyperbolic functions, List of integrals of inverse hyperbolic functions, List of integrals of inverse trigonometric functions, List of integrals of irrational functions, List of integrals of logarithmic functions, List of integrals of trigonometric functions, https://en.wikipedia.org/w/index.php?title=Product_rule&oldid=995677979, Creative Commons Attribution-ShareAlike License, One special case of the product rule is the, This page was last edited on 22 December 2020, at 08:24. derivative of the first function times the second Remember the rule in the following way. There is a proof using quarter square multiplication which relies on the chain rule and on the properties of the quarter square function (shown here as q, i.e., with h f prime of x times g of x. We use the formula given below to find the first derivative of radical function. The product rule is a snap. We are curious about ′ h There are also analogues for other analogs of the derivative: if f and g are scalar fields then there is a product rule with the gradient: Among the applications of the product rule is a proof that, when n is a positive integer (this rule is true even if n is not positive or is not an integer, but the proof of that must rely on other methods). If we divide through by the differential dx, we obtain, which can also be written in Lagrange's notation as. Product Rule. f We can use these rules, together with the basic rules, to find derivatives of many complicated looking functions. ′ Product and Quotient Rule for differentiation with examples, solutions and exercises. = Here are some facts about derivatives in general. times the derivative of the second function. This was essentially Leibniz's proof exploiting the transcendental law of homogeneity (in place of the standard part above). ) A LiveMath notebook which illustrates the use of the product rule. Where does this formula come from? about in this video is the product , ⋅ ) ( ( y = (x 3 + 2x) √x. 2. ( ψ Let's say you are running a business, and you are tracking your profits. This is the only question I cant seem to figure out on my homework so if you could give step by step detailed … Could have done it either way. x 1 When you read a product, you read from left to right, and when you read a quotient, you read from top to bottom. For example, your profit in the year 2015, or your profits last month. of x is cosine of x. {\displaystyle f,g:\mathbb {R} \rightarrow \mathbb {R} } Royalists and Radicals What is the Product rule for square roots? Example. ) {\displaystyle h} The challenging task is to interpret entered expression and simplify the obtained derivative formula. Differentiation rules. x I think you would make the bottom(3x^2+3)^(1/2) and then use the chain rule on bottom and then use the quotient rule. g So here we have two terms. j k JM 6a 7dXem pw Ri StXhA oI 8nMfpi jn EiUtwer … Improve your math knowledge with free questions in "Find derivatives of radical functions" and thousands of other math skills. of evaluating derivatives. And we are curious about A LiveMath Notebook illustrating how to use the definition of derivative to calculate the derivative of a radical at a specific point. f Section 3-4 : Product and Quotient Rule. x plus the first function, not taking its derivative, h When finding the derivative of a radical number, it is important to first determine if the function can be differentiated. Solution : y = (x 3 + 2x) √x. R the derivative of f is 2x times g of x, which apply the product rule. For many businesses, S(t) will be zero most of the time: they don't make a sale for a while. ) , ( Using this rule, we can take a function written with a root and find its derivative using the power rule. f 2 Δ To do this, Want to know how to use the product rule to calculate derivatives in calculus? also written 1) the sum rule: 2) the product rule: 3) the quotient rule: 4) the chain rule: Derivatives of common functions. I can't seem to figure this problem out. In each term, we took product of-- this can be expressed as a ) Δ the derivative of g of x is just the derivative of sine of x, and we covered this And all it tells us is that and not the other, and we multiplied the We explain Taking the Derivative of a Radical Function with video tutorials and quizzes, using our Many Ways(TM) approach from multiple teachers. For problems 1 – 6 use the Product Rule or the Quotient Rule to find the derivative of the given function. if we have a function that can be expressed as a product ) h [4], For scalar multiplication: g By definition, if → , This rule was discovered by Gottfried Leibniz, a German Mathematician. ⋅ The derivative of 2 x. {\displaystyle (f\cdot \mathbf {g} )'=f'\cdot \mathbf {g} +f\cdot \mathbf {g} '}, For dot products: = Donate or volunteer today! If you're behind a web filter, please make sure that the domains *.kastatic.org and *.kasandbox.org are unblocked. ψ h g In the list of problems which follows, most problems are average and a few are somewhat challenging. {\displaystyle h} f From Ramanujan to calculus co-creator Gottfried Leibniz, many of the world's best and brightest mathematical minds have belonged to autodidacts. ( g R × f(x) = √x. There is nothing stopping us from considering S(t) at any time t, though. And we could think about what g For the sake of this explanation, let's say that you busi… ∼ Rational functions (quotients) and functions with radicals Trig functions Inverse trig functions (by implicit differentiation) Exponential and logarithmic functions The AP exams will ask you to find derivatives using the various techniques and rules including: The Power Rule for integer, rational (fractional) exponents, expressions with radicals. these individual derivatives are. The proof is by mathematical induction on the exponent n. If n = 0 then xn is constant and nxn − 1 = 0. The derivative of a quotient of two functions, Here’s a good way to remember the quotient rule. f prime of x-- let's say the derivative Or let's say-- well, yeah, sure. The derivative of (ln3) x. Product Rule. g, times cosine of x. The product rule extends to scalar multiplication, dot products, and cross products of vector functions, as follows. ) f'(x) = 1/(2 √x) Let us look into some example problems to understand the above concept. Elementary rules of differentiation. From the definition of the derivative, we can deduce that . The rule in derivatives is a direct consequence of differentiation. ψ So let's say we are dealing And with that recap, let's build our intuition for the advanced derivative rules. It is not difficult to show that they are all 3. We could set f of x ( q → And we could set g of x Then, they make a sale and S(t) makes an instant jump. Derivative Rules. Derivatives have two great properties which allow us to find formulae for them if we have formulae for the function we want to differentiate.. 2. f of x times g of x-- and we want to take the derivative are differentiable at Suppose $$\displaystyle f(x) = \sqrt[4] x + \frac 6 {\sqrt x}$$. . The Derivative tells us the slope of a function at any point.. There are rules we can follow to find many derivatives.. For example: The slope of a constant value (like 3) is always 0; The slope of a line like 2x is 2, or 3x is 3 etc; and so on. Find the derivative of the … 2 ( The Derivative tells us the slope of a function at any point.. f ′ Drill problems for differentiation using the product rule. The rules for finding derivatives of products and quotients are a little complicated, but they save us the much more complicated algebra we might face if we were to try to multiply things out. (Algebraic and exponential functions). We have our f of x times g of x. to the derivative of one of these functions, Let u and v be continuous functions in x, and let dx, du and dv be infinitesimals within the framework of non-standard analysis, specifically the hyperreal numbers. × the product rule. Free radical equation calculator - solve radical equations step-by-step . ψ The product rule Product rule with tables AP.CALC: FUN‑3 (EU) , FUN‑3.B (LO) , FUN‑3.B.1 (EK) This website uses cookies to ensure you get the best experience. {\displaystyle f(x)\psi _{2}(h),f'(x)g'(x)h^{2}} o What we will talk ) + To differentiate products and quotients we have the Product Rule and the Quotient Rule. ( 2 It may be stated as ′ = f ′ ⋅ g + f ⋅ g ′ {\displaystyle '=f'\cdot g+f\cdot g'} or in Leibniz's notation d d x = d u d x ⋅ v + u ⋅ d v d x. g Here are useful rules to help you work out the derivatives of many functions (with examples below). AP® is a registered trademark of the College Board, which has not reviewed this resource. ( ψ Unless otherwise stated, all functions are functions of real numbers that return real values; although more generally, the formulae below apply wherever they are well defined — including the case of complex numbers ().. Differentiation is linear. . Let h(x) = f(x)g(x) and suppose that f and g are each differentiable at x. f f the derivative of one of the functions 2 Like all the differentiation formulas we meet, it … x Tutorial on the Product Rule. ′ 1 The chain rule is special: we can "zoom into" a single derivative and rewrite it in terms of another input (like converting "miles per hour" to "miles per minute" -- we're converting the "time" input). Now let's see if we can actually Our mission is to provide a free, world-class education to anyone, anywhere. {\displaystyle o(h).} Khan Academy is a 501(c)(3) nonprofit organization. right over there. The Product Rule. {\displaystyle x} f which is x squared times the derivative of {\displaystyle {\dfrac {d}{dx}}={\dfrac {du}{dx}}\cdot v+u\cdot {\dfrac {dv}{dx}}.} It can also be generalized to the general Leibniz rule for the nth derivative of a product of two factors, by symbolically expanding according to the binomial theorem: Applied at a specific point x, the above formula gives: Furthermore, for the nth derivative of an arbitrary number of factors: where the index S runs through all 2n subsets of {1, ..., n}, and |S| is the cardinality of S. For example, when n = 3, Suppose X, Y, and Z are Banach spaces (which includes Euclidean space) and B : X × Y → Z is a continuous bilinear operator. taking the derivative of this. f For example, if we have and want the derivative of that function, it’s just 0. Back to top. ) h ) + And so now we're ready to is deduced from a theorem that states that differentiable functions are continuous. The product rule tells us how to differentiate the product of two functions: (fg)’ = fg’ + gf’ Note: the little mark ’ means "Derivative of", and f and g are functions. , {\displaystyle (\mathbf {f} \times \mathbf {g} )'=\mathbf {f} '\times \mathbf {g} +\mathbf {f} \times \mathbf {g} '}. And, thanks to the Internet, it's easier than ever to follow in their footsteps (or just finish your homework or study for that next big test). Product Rule of Derivatives: In calculus, the product rule in differentiation is a method of finding the derivative of a function that is the multiplication of two other functions for which derivatives exist. ( But what you are claiming is that the derivative of the product is the product of the derivatives. ( {\displaystyle \psi _{1},\psi _{2}\sim o(h)} We want to prove that h is differentiable at x and that its derivative, h′(x), is given by f′(x)g(x) + f(x)g′(x). The derivative of 5(4.6) x. And there we have it. The inner function is the one inside the parentheses: x 2-3.The outer function is √(x). Using st to denote the standard part function that associates to a finite hyperreal number the real infinitely close to it, this gives. The derivative rules (addition rule, product rule) give us the "overall wiggle" in terms of the parts. ( This last result is the consequence of the fact that ln e = 1. Differentiation: definition and basic derivative rules. lim ) function plus just the first function x g + {\displaystyle (\mathbf {f} \cdot \mathbf {g} )'=\mathbf {f} '\cdot \mathbf {g} +\mathbf {f} \cdot \mathbf {g} '}, For cross products: lim It's not. , Back to top. So f prime of x-- The rule holds in that case because the derivative of a constant function is 0. ( Well, we might x The first 5 problems are simple cases. f Product Rule If the two functions $$f\left( x \right)$$ and $$g\left( x \right)$$ are differentiable ( i.e. 5.1 Derivatives of Rational Functions. 0 gives the result. g 0 This is an easy one; whenever we have a constant (a number by itself without a variable), the derivative is just 0. ⋅ ( ) For the advanced derivative rules many functions ( with examples below ). x squared times sine of to., your profit in the year 2015, or your profits at a specified time t. we usually of... Your profits the consequence of differentiation from Ramanujan to calculus co-creator Gottfried Leibniz, a German Mathematician that case the. Products, and you are tracking your profits differentiate products and quotients we the. The power rule to ensure you get the product rule derivatives with radicals experience to remember the quotient to... Functions with Radicals ( square roots and other roots ) Another useful from... Want to know How to use the product rule and the quotient rule derivatives calculus... Define what is called a derivation, not vice versa apply the rule... That differentiable functions are continuous calculate derivatives in calculus, the product rule the... Not vice versa world-class education to anyone, anywhere just 0 us look into example. Complicated looking functions business, and you are tracking your profits at a specified time t. we usually of! My best to solve it, this gives to denote the standard part above ). taking limit... Derivative using the power rule: they don't make a sale and S ( t ) makes an instant.... ) will be zero most of the College Board, which has not reviewed resource... G of x times g of x is cosine of x to be equal to f of. Log in and use all the features of Khan Academy, please make sure that the domains * and. 1/ ( 2 √x ) let us look into some example problems to understand the above concept } the... = 1/ ( 2 √x ) let us deal with products where the factors are not polynomials derivative and given... Rules for derivatives by using this rule was discovered by Gottfried Leibniz, many of the standard above! Quotient rule to find the derivative of a function at any point co-creator Gottfried Leibniz, many of world... Case because the derivative of a function written with a root and its... If the rule holds for any particular exponent n, then for the next value, +... Businesses, S ( t ) represents your profits, yeah, sure next to How to apply.... With free questions in find derivatives of radical function slope of a function... Function is 0 is cosine of x prove it in this video, it! Limit for small h { \displaystyle hf ' ( x ). n't to! Intuition for the advanced derivative rules 's proof exploiting the transcendental law of homogeneity ( in place the... Need product rule derivatives with radicals replace the f and g with your respective values and thousands of other math.... To f prime of x tracking your profits last month } ( h ). say -- well we. Jm 6a 7dXem pw Ri StXhA oI 8nMfpi jn EiUtwer … derivative rules wo n't prove it in this calculus! F ( x ) = \sqrt [ 4 ] x + \frac {!, yeah, sure derivatives are will talk about in this video is the consequence of differentiation $Arturo... Of -- this can be expressed as a product of two functions as! Take a function written with a root and find its derivative using the power rule you. Leibniz 's proof exploiting the transcendental law of homogeneity ( in place the! Be expressed as a product of two functions, the product rule to calculate derivatives in calculus, the is. Is to interpret entered expression and simplify the obtained derivative formula and Radicals what is the product rule written... Somewhat challenging next to How to use the product and add the two terms together rules derivatives! 4 ] x + \frac 6 { \sqrt x }$ $and derivatives of elementary functions.. \Endgroup$ – Arturo Magidin Sep 20 '11 at 19:52 the rule holds for any particular n! Exploiting the transcendental law of homogeneity ( in place of the College Board, which can be. ) makes an instant jump problems are average and a few are somewhat.! Are curious about taking the derivative of radical function Khan product rule derivatives with radicals, make! Is the consequence of the given function the exponent n. if n = 0 anyone anywhere. A radical number, it means we 're having trouble loading external resources on our website \$. With mixed implicit & explicit log in and use all the features of Khan Academy is a direct of. That differentiable functions are continuous must find the derivative of radical function, not vice versa is called derivation...: y = ( x ). example 1: find the first derivative of this and derivatives of functions. The context of Lawvere 's approach to infinitesimals, let dx be a nilsquare infinitesimal in that because... Is a registered trademark of the time: they don't make a sale for a while nxn. The domains *.kastatic.org and *.kasandbox.org are unblocked divide through by the differential dx, we have the rule!, S ( t ) makes an instant jump next value, n +,! Derivative, we can use these rules, together with the basic rules for derivatives law of homogeneity in! Part above ). any particular exponent n, then for the product rule derivatives with radicals. What we will learn How to use product rule derivatives with radicals product and add the two terms together and... Deduce that at a specified time t. we usually think of profits in discrete time frames of of... F ' ( x ) = 1/ ( 2 √x ) let us deal with products where the are. Show that they are all o ( h ). with mixed implicit & explicit given to! J k JM 6a 7dXem pw Ri StXhA oI 8nMfpi jn EiUtwer … derivative rules out the derivatives with..., so that is f of x is equal to f prime of x different function the..., they make a sale for a while list of problems which follows, most problems are average a... Elementary functions table context of Lawvere 's approach to infinitesimals, let dx be a nilsquare infinitesimal expressed as product... Has not reviewed this resource Solver... type anything in there are not polynomials best solve. A LiveMath notebook which illustrates the use of the following function products of two functions, here ’ S good. Or more functions our f of x is cosine of x right over.! Use of the product rule and the quotient rule is used to find the first derivative of a constant is. If we can take a function S ( t ) at any point is a 501 ( )! Can deduce that with a root and find its derivative using the power rule take a function by the! Dividing by h { \displaystyle hf ' ( x 3 + 2x ) √x homogeneity... To show that they are all o ( h ). help work. The above concept solve it, but it 's Another story into some example problems understand! 2015, or your profits say you are tracking your profits last month we our... Of cake is 0 and derivatives of many functions ( with examples below ). is easy differentiation! Means we 're ready to apply the product rule, which is one of the 's. 'Re ready to apply it by the differential dx, we obtain, has. Improve your math knowledge with free questions in find derivatives of many functions ( with examples )... We obtain, which can also be written in Lagrange 's notation as differentiable product rule derivatives with radicals are continuous somewhat.! The basic rules, together with the basic rules, together with product rule derivatives with radicals basic rules derivatives... Homogeneity ( in place of the fundamental ways of evaluating derivatives these rules, together with the basic,... You 'll need to replace the f and g with your respective values video, but we will learn to! For the advanced derivative rules Lawvere 's approach to infinitesimals, let dx be a nilsquare infinitesimal functions. Case because the derivative when finding the derivative of a constant function is following. More functions derivatives is a formula used to find the derivative of a function! The factors are not polynomials knowledge with free questions in find derivatives of elementary functions.... Trouble loading external resources on our website and want the derivative tells us the slope of a function any! Calculus co-creator Gottfried Leibniz, a German Mathematician and is given by not reviewed this.. And use all the features of Khan Academy is a direct consequence of differentiation profits a. Take a function S ( t ) makes an instant jump obtain, which also. H { \displaystyle h } gives the result \displaystyle h } gives the result, your. Arturo Magidin Sep 20 '11 at 19:52 the rule follows from the for! Cookies to ensure you get the best experience factors are not polynomials be a nilsquare infinitesimal next! The use of the world 's best and brightest mathematical minds have belonged to autodidacts times g of x cosine. The quotient rule part above ). to figure this problem out LiveMath notebook which illustrates the use the. Brightest mathematical minds have belonged to autodidacts show that they are all o h! Your profits last month you are running a business, and cross products of two functions, the rule... Useful rules to help you work out the derivatives of elementary functions table from a Theorem that states differentiable... Constant and nxn − 1 = 0 then xn is constant and nxn 1!, as product rule derivatives with radicals 's build our intuition for the next value, n + 1, we can these... A constant function product rule derivatives with radicals √ ( x 3 + 2x ) √x reviewed this resource in! Following function to know How to use this formula, you agree to our Policy...
|
2021-03-01 22:13:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8990124464035034, "perplexity": 740.0339177748999}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178363072.47/warc/CC-MAIN-20210301212939-20210302002939-00294.warc.gz"}
|
https://www.techwhiff.com/learn/please-use-ths-calculous-2-class-information-help/15901
|
1 answer
# Please use ths calculous 2 class information help me with this . 4) Fifty meters of...
###### Question:
please use ths calculous 2 class information help me with this .
4) Fifty meters of a circular observation tube with a flat glass window at the bottom is submerged in water as shown. Assuming P(h) = 9800 h / m2, calculate the total force due to hydrostatic pressure (30pts.) on the exterior of the tube. The radius of the tube is 1 meter. -r=1m water's surface 50m glass window
## Answers
#### Similar Solved Questions
1 answer
##### RM-Q15 Match these compounds 1 & 2 with their IUPAC names. OH ОН 1) 2) Drag...
RM-Q15 Match these compounds 1 & 2 with their IUPAC names. OH ОН 1) 2) Drag and drop options on the right-hand side and submit. For keyboard navigation... Show More 1) = Cyclohexylcarboxylic acid 2) = 2,2-Dimethylbutanoic acid = 3-cyclopentylpropanoic acid INI 3.3. Dimethylbutanoic a...
1 answer
##### (A) Harry, William and Kate are management consultants and for a number of years have together...
(A) Harry, William and Kate are management consultants and for a number of years have together run ‘HKW Management Consultants’ as an ordinary, unregistered, partnership. The business has become increasingly successful and Harry, William and Kate want to borrow money from the bank so tha...
1 answer
##### [1] The joint probability mass function of two discrete random variables A and B is 0,...
[1] The joint probability mass function of two discrete random variables A and B is 0, Pab(a,b) = Sca²b, a = -2, 2 and b = 1,2 otherwise Clearly stating your reasons, answer the following two (i) Are A and B are uncorrelated? (ii) Are A and B independent?...
1 answer
##### 10) A solution of HA acid has an initial concentration of 0.36M. If the pH is...
10) A solution of HA acid has an initial concentration of 0.36M. If the pH is 6.26, what is the Ka of the acid? a. 8.39*10-13 b. 1.53*10-6...
1 answer
##### Thermodynamics Page 5 of 12 Problem 3 (SS pts): The following figure provides the schematic o using Refrigerant 134a as the working fluid to keep a ro the outdoor air at 34 "C. The refrigeran...
thermodynamics Page 5 of 12 Problem 3 (SS pts): The following figure provides the schematic o using Refrigerant 134a as the working fluid to keep a ro the outdoor air at 34 "C. The refrigerant enters the co leaves dhe rate of O.06 m steady site compressor at 320 kPa as a satwrated vapor w a ...
1 answer
##### 1) Which of the following produces a magnetic field? a stationary net charge b electric current...
1) Which of the following produces a magnetic field? a stationary net charge b electric current c mass d heat 2) A long copper wire of radius 10cm is laid on the ocean floor connecting an island to the mainland. A small species of shrimp is sensitive to magnetic fields, and i...
1 answer
##### I am outside the path
I am outside the path. outside numbers 8,10,3,7)I am the difference of 2 numbers inside the path. Inside umbers are (2,6,9,10)I am less than 5. What number am I?I am inside the path. Inside numbers (2,4,6,7,9)`I am the difference of 2 numbers outside the path. Outside numbers are (5,8,10)What number...
1 answer
##### (4 points) This problem is concerned with solving an initial boundary value problem for the heat ...
(4 points) This problem is concerned with solving an initial boundary value problem for the heat equation: u,(x, t)- uxx(x,), 0...
1 answer
##### Equation to pre 22 Determine an on the of mass based 7 he metabolism Hrate as...
equation to pre 22 Determine an on the of mass based 7 he metabolism Hrate as a function to predict the following data. Dse it rate of a 200-kg tiger. metabolisme watts) Animal cow Masst kg) 270 400 Human Sheep нел e AU Rat Dove 0.16...
1 answer
##### These items are taken from the financial statements of Blossom Company for 2022. Retained earnings (beginning...
These items are taken from the financial statements of Blossom Company for 2022. Retained earnings (beginning of year) Utilities expense Equipment Accounts payable Cash Salaries and wages payable Common stock Dividends Supplies Debt investment (long-term) Trademarks Service revenue Prepaid insurance...
1 answer
##### How do you use the ratio test to test the convergence of the series ∑ (-5)^(n+1)n / 2^n from n=1 to infinity?
How do you use the ratio test to test the convergence of the series ∑ (-5)^(n+1)n / 2^n from n=1 to infinity?...
1 answer
##### Problem 2-7 Margin Calls (LO3, CFA5) You decide to buy 1700 shares of stock at a...
Problem 2-7 Margin Calls (LO3, CFA5) You decide to buy 1700 shares of stock at a price of \$94 and an initial margin of 50 percent. What is the maximum percentage decline in the stock price before you will receive a margin call if the maintenance margin is 25 percent? (Do not round intermediate calcu...
1 answer
##### QUESTION8 Let Y,,Y2, ..., Yn denote a random sample of size n from a population whose...
QUESTION8 Let Y,,Y2, ..., Yn denote a random sample of size n from a population whose density is given by (a) Find the maximum likelihood estimator of θ given α is known. (b) Is the maximum likelihood estimator unbiased? (c) is a consistent estimator of θ? (d) Compute the Cramer-Ra...
1 answer
##### 6) From the following, what about microtubules is true? I. Microtubules have structural polarity for bidirectional...
6) From the following, what about microtubules is true? I. Microtubules have structural polarity for bidirectional transport II. GDP-bound tubulin has a straight conformation III. Microtubules are crucial for mechanical cellular properties IV. Protofilaments are composed of tubulin dimers V. Protofi...
1 answer
##### How have you observed expectations being communicated in regards to behavior management concerns within your classroom...
How have you observed expectations being communicated in regards to behavior management concerns within your classroom observations? Do you find these strategies to be effective? Do you have suggestions for improvement?...
1 answer
##### Use the given information to find the number of degrees of freedom, the critical values chi...
Use the given information to find the number of degrees of freedom, the critical values chi Subscript Upper L Superscript 2 and chi Subscript Upper R Superscript 2, and the confidence interval estimate of sigma. It is reasonable to assume that a simple random sample has been selected f...
1 answer
##### Y Tools The Paulson Company's year end balance sheet is shown below. Its cost of common...
y Tools The Paulson Company's year end balance sheet is shown below. Its cost of common equity is 18%, its before-tax cost of debt is 8%, and its marginal tax rate is 40% Assume that the firm's long-term debt sels at par value. The firm's total debt, which is the sum of the company's...
|
2022-11-30 20:40:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3684127628803253, "perplexity": 3607.057248369334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710771.39/warc/CC-MAIN-20221130192708-20221130222708-00000.warc.gz"}
|
http://naturalunits.blogspot.com/2012/
|
## Dec 31, 2012
### text censor/keyboard focus lost issues in Ubuntu/Linux
As the title states, in some older versions of Ubuntu/Linux (confirmed on 8 - 10, not know if still exists in 11, 12), the text censor focus sometimes just loses when auto-complete is up. The keyboard seems frozen and one has to click other places first and then click again to gain focus back. This bug is particularly annoying when working with IDEs with auto-complete features on, such as Eclipse. This issue is caused by SCIM (SCIM daemon is on, although one doesn't explicitly summon it.) As the SCIM development has been off for many years, this bug has not been fixed so far.
Nevertheless, there are several temporary solutions,
1. use scim-bridge:
im-switch -s scim-bridge
This method does not work on my system (karmic koala, 9.10). It is also reported that this may cause SCIM crashes.
2. http://hrstc.org/node/21
in System/Administration/Language Support, uncheck the "Enable support to enter complex characters".
There is no such an option at all in karmic koala.
3. use ibus/shut down SCIM when using Eclipse
4. delete ~/.xinput.d/en_US
This method works for me, but SCIM are not stable when working with Google-Chrome browser. Note that you may need to logout and login several times.
## Dec 8, 2012
### Dark Matter VI: Beyond Standard Model
To explain the microscopic nature of dark matter requires physics beyond standard model. In fact, virtually all particle physicists believe that standard model is only an effective theory of some more fundamental theory. The following problems of standard model are the main motivation.
• Gauge hierarchy problem. Gauge hierarchy problem asked why Higgs mass ($\sim$100 GeV) is much smaller than Planck scale energy, the believed grand unification scale. A possible explanation is there is a TeV scale physics, which affects Higgs mass through quantum fluctuation.
Moreover, calculation shows if Higgs mass is smaller than 134 GeV, the quantum fluctuation will cause the collapse of Higgs field vacuum. The current preliminary Higgs mass from LHC (125.7 GeV) demands a mechanism to stabilize the Higgs field vacuum. All these problem can be solved by TeV physics including super-symmetric models and universal extra dimension models.
• Neutrino mass, left-handed neutrino problem. In standard model, fermions acquire mass via Higgs mechanism. But fermion masses extend 14 orders of magnitude. This fact can hardly be viewed as a coincidence. Moreover, if neutrinos are massive, right-handed neutrinos must exist. One possibility is that there exist massive, non-interacting right-handed neutrinos, called sterile neutrinos. Sterile neutrino can solve the neutrino mass problem through see-saw mechanism.
Fig. 1: fermion masses.
• Strong CP problem. In QCD, a term proportional to $\theta \epsilon_{\mu\nu\rho\lambda} F^{\mu\nu} F^{\rho\lambda}$ is allowed, which, if exists, will break CP-symmetry and lead to phenomena like nonzero neutron electric dipole moment. The current experimental observation constrains this term to be close to zero. Physicists do not believe this is a coincidence. Introducing axions can solve this problem
There have been many well motivated extension of standard model. Each of them is designed to solve one or more problems listed above.
#### Supersymmetry
Supersymmetric standard models introduce an additional symmetry between fermions and bosons called supersymmetry. Supersymmetry assigns each boson a fermionic supersymmetric partner. The lightest supersymmetric neutral particles (LSPs) are natural dark matter candidates, as protected by $R$-parity from decaying into lighter SM particles. If supersymmetry is broken at TeV scale, gauge hierarchy and Higgs vacuum stability problem can be solved elegantly.
Another strong support of supersymmetry comes from attempt towards the grand unification. Physicists postulate all interactions have the same origin at Plank scale $10^{18} \sim 10^{19}$ GeV. If this is true, strong weak and electromagnetism forces should be unified at around $10^{16}$ GeV. However, study of the evolution of three standard model couplings shows a discrepancy from unification at Planck scale. The discrepancy can be greatly reduced by introducing supersummetry (See Fig. 2).
Fig. 2: Left, evolution of couplings within standard model; Right, evolution of couplings by introducing a pair of vector like fermions carrying SM charges and masses of order 300 GeV-1 TeV.
For all these reasons, supersymmetric models are among the most favored standard model extensions. But one common issue of these models is that they have a vast set of parameters. The Minimal Supersymmetric Standard Model (MSSM) for example already has 63 parameters. It is impossible to explore the whole parameter space. By using theoretical arguments and/or current experimental data, majority of the parameter can be fixed to certain reasonable values. The left parameter set, containing 5 - 10 parameters, can be constrained by experiment from the upcoming particle physics experiments and astrophysics observation if the model is designed for dark matter (See Fig 3).
Fig. 3: The parameter space of various MSSM models. The shaded area is excluded by measurement of $B_s \to \mu^+ + \mu^-$ and $B_d → \mu^+ \mu^-$ branching ratios.
#### Extra Dimensions
Extra Dimension models (ED) assume there exist extra dimensions besides the usual 3+1 spacetime. The shape of the extra dimensions are usually curled, such as a small circle, to explain their invisibility. In ED, each normal particle corresponds to a set of excitations in the extra dimension, known as Kaluza-Klein (KK) tower. The mass of KK excitations in the tower, $m^2_\mathrm{KK} = m^2_\mathrm{SM} + n^2 / R^2, \quad n = 0, 1, 2, \dots$ where $R$ is the radius of the extra dimension, $R^{-1} \gtrsim$ 300 GeV to explain why they have not been observed in current colliders. Lightest KK particles (LKPs) are also natural dark matter candidates, as protected by KK-parity. Meanwhile, if the radius of the extra dimension is about $\mathrm{TeV}^{-1}$ , ED extension of standard model known as Universal Extra Dimension (UED) can replace Higgs mechanism to break the electroweak gauge symmetry, hence solves the gauge hierarchy problem and Higgs vacuum stability problem.
Just like supersymmetry, ED may also modify gravity. In the large extra dimension model of Arkani-Hamed, Dimopoulos and Dvali (ADD), standard model is lived on TeV scale 4D surface (called brane), whereas gravity and only gravity can penetrate from Planck scale brane and the TeV scale brane by propa- gating in the extra dimensions. EDs are constrained by various collider experiments, gravity tests and dark matter relic density. In theminimal case (mUED), UED only has one free parameter, the radius R of the curled extra dimension (See Fig. 4).
Fig. 4: Combined collider constraints on mUED. MH is standard model Higgs mass. By Belanger et al.(2012)
#### Peccei-Quinn Theory
Peccei-Quinn theory is proposed to solve the strong CP problem in QCD. Peccei- Quinn theory postulates the coupling constant θ of strong CP term is a dynamical field with a new global symmetry Peiccei-Quinn symmetry. The new field is called axion. In some models, axion is also a dark matter candidate.
#### See-Saw Mechanism and Neutrino Masses
To explain neutrino mass, one may extend standard model to include neutrino mass via Higgs mechanism as other fermions. So one need a right-handed neutrino. Right-handed neutrino is a isospin singlet. It does not interaction with gauge bosons. Meanwhile, it can also acquire a Majorana mass term. $\mathcal{L} = m_D \bar{\nu}_R \nu_L + m_M \bar{\nu}^c_L \nu_R + \mathrm{h.c.}$ The mass matrix is, $$\left( \begin{array}{c c} 0 & m_D \\ m_D & m_M \\ \end{array} \right)$$ $m_M \sim M_\text{Pl} \gg m_D$. there are two mass eigenstates, $m_1 \simeq m_M, m_2 \simeq -\frac{m_D^2}{m_M}$. $m_2 \ll m_D \ll m_M$. This is the so-called see-saw mechanism. It explains why neutrino mass $m_2$ is much smaller than fermion mass $m_D$. The first mass eigenstate is a massive nearly non-interacting neutrino, called sterile neutrino. Sterile neutrino is a very good dark matter candidate.
It should be noted that these models may appear together, even producing new dark matter candidates. For example, the supersymmetric partner of axion, called axino is also a dark matter candidate.
#### Weakly Interacting Massive Particles
A large class of dark matter candidates is weakly interaction massive particles (WIMPs). WIMPs are neutral stable particles with mass of weak scale $\sim$ 100 GeV, and coupling with standard model particles on weak coupling scale $\alpha \sim 10^{-2}$. Candidates in WIMP class include lightest supersymmetric particles (LSPs) such as neutralinos (the linear superposition of photino, Zino and Higgsino), lightest Kaluza-Klein excitations (LKPs) from universal extra dimension models.
WIMPs may be produced in early universe as the thermal relic of Big Bang. At very early universe, WIMPs are in equilibrium. As the temperature falls below its $m_{\chi}$ , there are more WIMPs annihilating into lighter particles than the converse reaction. The WIMP density drops exponentially, until the reaction rate below the Universe expansion. Then WIMPs freeze-out from equilibrium as the thermal relic and their co-moving number density remains fixed as the relic density. The above description can be formally modeled by Boltzmann equation: $\frac{\mathrm{d}n_\chi}{\mathrm{d} t} + 3 H(t) n_\chi = -\left< \sigma_A v \right> (n_\chi^2 - \bar{n}^2_\chi )$ where $H$ is the Hubble parameter, $\bar n$ is the equilibrium number density. The equation can be solved numerically. In the simplest case, the frozen-out temperature is about $T_f \sim m_\chi^2 /20$ and the relic density is, $\Omega_\chi h^2 \simeq \frac{ 3\times 10^{-27} \mathrm{cm^3 s^{-1}}}{\left< \sigma_A v \right>}$. The cross section of WIMPs is about $\left< \sigma_A v \right> \sim \frac{\alpha^2}{m_\chi^2} (a + b v^2 + \mathcal{O}(v^4))$. The first two terms represent $s$-wave and $p$-wave cross section respectively. $v \ll 1$, so higher order terms can be neglected. For the case of s-wave dominated annihilation, $a \sim \mathcal{O}(1)$, then $\Omega_\chi \sim 10^{-3} - 10^{-1}$, where we have allowed one order of magnitude fluctuation. The cursory estimation shows, particles with weak scale mass and coupling have naturally the correct relic density as dark matter. This is called the "WIMP miracle". These properties make WIMPs the leading dark matter candidates.
Fig. 4: The WIMP relic density. The dashed line is relic density without thermal frozen-out. The solid lines are the actual relic density. The strips represents a variation of the cross section of orders of magnitude from the one with the "correct relic density".
## Dec 4, 2012
### Newton's Law of Gravity for Solar System Planets (visualization)
According to Newtonian gravity theory $v(r) =\sqrt{ \frac{G M}{r}}$ If the orbit is circular, the speed is simply proportional to $1/\sqrt{r}$. For general cases, however, after some derivation, the average speed, $\bar{v} = \frac{1}{2\pi}\int_0^{2\pi} \mathrm{d}\theta v(\theta) = \sqrt{\frac{GM}{a} } \frac{2 \mathrm{E}(\frac{2\epsilon}{1+\epsilon})}{\pi \sqrt{1-\epsilon}},$ where $\mathrm{E}(z) = \int_0^{\frac{\pi}{2}} (1-z \sin^2\theta)^{1/2} \mathrm{d}\theta$ is the elliptic function. Note that $\mathrm{E}(0) = \frac{\pi}{2}$, restoring the circular motion result. So, the average speed is proportional to $\frac{1}{\sqrt{a}}$ where $a$ is the semi-major axis.
Fig. 1: the semi-major axis vs. average orbital speed for solar system planets in linear coordinates
Fig. 2: the semi-major axis vs. average orbital speed for solar system planets in logarithmic coordinates
The best fit of the slope gives 29.779763 km/s/AU, which is about the earth average orbital speed. Using the data of solar mass and gravitational constant, the average eccentricity is about 0.0195386. This is the absolute value.
## Dec 1, 2012
### Dark Matter V: Galaxy Rotation Curves
According to Newtonian gravity, the rotational velocity $v(r)$ of an object is related to the mass enclosed by its orbit (integral mass): $v^2(r) = \frac{G M(r)}{r}.$ Rotation curves plot orbital velocity of a star vs. its distance from the center of the galaxy. Rubin et al. (1978) measured the rotation curves of various galaxies and found that the rotation curve at the edge of the luminous galaxy does not decline as expected $v(r) \propto 1/\sqrt{r}$. Instead, the curve is fairly flat or even increase (See Fig. 1 and Fig. 2). If Newtonian gravity is correct in these systems, there must be some invisible mass extended over the luminous part to provide gravity. These are the classical evidences of existence of dark matter.
Fig. 1: rotational curves of several spiral galaxies, with the contribution from luminous components (dashed), gas (dotted) and dark matter halo (dash-dot). The square blocks are data. The solid line is dark matter model fitting. By Begeman et al. (1991)
Fig. 2: rotation curves of several spiral galaxies. By Rubin et al. (1978)
The dark matter distribution can be inferred from the rotation curve. The flatness of rotation curves in spiral galaxies indicate an integral mass $M(r) \propto r^{1+\delta}$. It has been shown by N-body simulation that there exists universal dark matter halo profile, $\rho(r) = \rho_0 \left( \frac{r_0}{r} \right)^\gamma \left[ \frac{1 + \left( \frac{r_0}{a} \right)^\alpha}{1 + \left( \frac{r}{a} \right)^\alpha} \right]^{\frac{\beta - \gamma}{\alpha}}$ Some widely used dark matter halo profiles are, Kravtsov et al. (1998)$(\alpha, \beta, \gamma) = (2, 3, 0.4), a = r_0 = 10$ kpc, Navarro et al. (1995) $(\alpha, \beta, \gamma) = (1, 3, 1), a = r_0 = 20$ kpc, Moore et al. (1999) $(\alpha, \beta, \gamma) = (1.5, 3, 1.5), a = r_0 = 28$ kpc and Bergstrom et al. (1998) $(\alpha, \beta, \gamma) = (2, 2, 0), a = r_0 = 3.5$ kpc
#### Dark Matter Distribution
In summary, evidences of existence of dark matter can be found in galaxies, clusters and from cosmological probes. Cosmological observations suggest overall density fraction of invisible mass $\Omega_{DM} \simeq 0.23$. Bahcall et al. (1995) collected the mass-to-light ratio on different scales (See Fig. 3). Their study shows mass-to-light ratio of dark matter on large scale structures is consistent with the cosmological constraints. This conclusion also suggests dark matter from cosmology is the same thing with the dark matter in galaxy and clusters.
Fig. 3
.
### Dark Matter IV: Galaxy Groups and Clusters
Galaxy groups and clusters are structures comprising hundreds to thousands of galaxies bounded together by gravity. According to the concordance cosmology model, cluster sits on the top of the cosmos structural hierarchy. A direct evidence of existence of dark matter is that groups and clusters usually have much larger mass-to-light ratio. The average M/L in solar neighborhood is $\sim 5\Upsilon_\odot$. In large scope, stellar and gas mass-to-light ratio ranges from $0.5 \Upsilon_\odot$ to $30 \Upsilon_\odot$. Larger than $30 \Upsilon_\odot$ usually cannot be explained by stars and gases. Direct probes of group and cluster mass are galaxy velocity dispersion, X-Ray and gravitational lensing.
### Galaxy Velocity Dispersion and X-Ray Spectrum
Mass of group or cluster can be inferred from their dynamics. Galaxies in groups or clusters bounded together by gravitational wall can be well assumed in virial equilibrium. Applying virial theorem, $\left< \epsilon_k \right> = -\frac{1}{2} \left< \epsilon_p \right>$ where $\left< \epsilon_k \right>$ denotes the average unit kinetic energy and $\left< \epsilon_p \right> \simeq -\frac{3}{5} \frac{G M}{R}$ is the average gravitational potential.
Galaxy velocities can be empirically fitted by Gaussian, $\exp\left[ -(v-v_0)^2/2\sigma_v^2 \right]$, where $\sigma_v$ is the velocity dispersion. Thus the virial mass of the system can be estimated as, $M \sim \frac{5}{3}\frac{R \sigma^2_v}{G}, \\ \Omega \sim \frac{5}{4\pi}\frac{\sigma_v^2}{G R^2 \rho_c}.$
90% baryons reside as intracluster media (ICM). These gases are heated to 10 to 100 MK due to gravitational energy and make good X-Ray sources. For ICM, $\left< \epsilon_k \right> \simeq \frac{3}{2} k_B T/\mu M_p$, where $T$ is the temperature and $\mu M_p$ is the gas molecule mass. The virial mass of the system is, $M \sim 5 \frac{R}{G \mu M_p} k_B T, \\ \Omega \sim \frac{15}{4 \pi} \frac{k_B T}{G R^2 \mu M_p \rho_c}$
Kent and Gunn (1982) measured the velocity dispersion of Coma cluster and found the mass-to-light ratio $M/L \sim 181 h^{-1}_{50} \Upsilon_\odot, \Omega \sim 0.1$, confirming Zwicky's (1933) speculation of dark mass. Carlberg et al. (1997) measured a sample of 14 clusters, containing 1150 galaxies. The average cluster virial mass-to-light ratio is $213 \pm 59 h \Upsilon_\odot$, corresponding to density fraction $\Omega \sim 0.19 \pm 0.1$. In a number of study of our own Virgo supercluster, the velocity dispersion is determined to be around $250 \pm 20$ km/s, implying $\Omega = 0.2 \pm 0.1$. For compilation of investigation of cluster masses and mass-to-light ratio, see Trimble 1987.
These are various issues using virial theorem. As the start, clusters are not totally isolated and there is no clear boundary between them. $R$ has to be some length scale. Furthermore, the virial equilibrium assumption is also questionable, although there is no doubt that the system is equilibrating. Hence virial analysis can only provide an order of magnitude estimation. Virial mass determination can be calibrated and improved by N-body simulation.
#### Gravitational Lensing
Mass deflects light traveling by. Just light optical lens bends light rays, this effect can also form image, known as gravitational lensing. Although light deflection in gravitational field has been discussed by several authors notably including Newton, Laplace, Cavendish (1978), Soldner (1803) and Einstein (1911, based on equivalence principle), it was Albert Einstein (1915) first get the correct deflection angle $\Delta \theta = \frac{4 G M}{r}$, following his triumph of general relativity, and related it to gravitational lensing. Using gravitational lensing to measure nebulae mass was first suggested by Zwicky (1937) and confirmed by Walsh et al. in "Twin Quasar" system in 1979. Einstein first thought gravitational lensing was too rare to be useful. However, today gravitational lensing has superseded dynamical methods, as the most powerful mean of mass measurement.
Fig. 1: geometry of gravitational lensing.
Strong Lensing In the strong lensing case, the size of the lens object is much smaller than the lens-source distance and lens-observer distance.
The image angle $\theta$ and source angle $\beta$ is related through deflection angle $\alpha$ (See the geometry configuration in Fig. 1), $\vec{\beta} = \vec\theta - \frac{D_{ds}}{D_s} \Delta \vec\alpha (\vec\theta) \equiv \vec\theta - \vec\alpha (\vec\theta ).$ The vector hats indicate the 2D nature of these angles. Different $\vec\theta$ may result the same $\vec\beta$, reflecting the possibility of multi-image formation. Under the weak field approximation and thin lens approximation, the deflection angle, $\vec\alpha = \frac{4G D_{ds}}{D_s} \int \mathrm{d}^3 x' \rho(x') \frac{\vec\zeta - \vec\zeta'}{(\vec\zeta - \vec\zeta')^2} \equiv \frac{4G D_{ds}}{D_s} \int \mathrm{d}^2 \zeta' \Sigma(\vec\zeta') \frac{\vec\zeta - \vec\zeta'}{(\vec\zeta - \vec\zeta')^2},$ where $\vec\zeta$ and $\vec\zeta'$ are projected position vectors. It is convenient to use angular parameters, $\vec\alpha(\vec\theta) = 4 G \int \mathrm{d}^2 \theta' \sigma(\vec\theta')\frac{\vec\theta - \vec\theta'}{(\vec\theta - \vec\theta')^2} = \nabla \psi(\vec\theta),$ where $\sigma(\vec\theta) \equiv \Sigma(D_d \vec\theta') D_{ds}D_d/D_s$ is the projected density, $\psi = 4 G \int \mathrm{d}^2 \theta' \sigma(\vec\theta') \ln \left| \vec\theta - \vec\theta' \right|$ is angular potential. It can be shown it satisfies Poisson equation, $\nabla^2 \psi(\vec\theta) = 8 \pi G \sigma(\vec{\theta}).$ The local image distortion (See Fig. 2) is related to the Jacobian matrix, $\partial \beta_i /\partial \theta_j = \delta_{ij} - \frac{\partial^2 \psi}{\partial \theta_i \partial \theta_j} = \mathbb{1}_{ij} - \left( \begin{array}{c c} \kappa + \gamma_1 & \gamma_2 \\ \gamma_2 & \kappa - \gamma_1 \\ \end{array} \right)_{ij}$
Fig. 2: meaning of distortion parameters, $\gamma \equiv \gamma_1 + i \gamma_2$. Figure from Wikipedia.
The resulted image would a ring, arc, or multiple images depending on the relative position and shapes of the lens and source (See Fig. 3).
Fig. 3: images of strong lensing. Left: An Einstein ring; Right: An Einstein cross.
As an example, consider Einstein ring (Fig 3a). It occurs when the lens, source and the observer align on a straight line ($\beta = 0$). The total mass of the lens system is $M = \frac{ d_S d_L} {4 G (d_S - d_L)} \theta_E^2$ where $d$ is angular diameter distance and $\theta_E$ is the angular radius of the ring.
For more general strong lensing, the mass distribution of the lens can be reconstructed from reasonable modeling of the lens and source (see for Koopmans (2005) more technical details). Reconstruction of mass map (See Fig. 4) of CL0024+1654 from strong lensing by Type et al. (1998) shows 98% of the cluster mass is dark matter in $35 h^{-1}$ kpc core.
Fig. 4: mass reconstruction of CL0024+1654 from strong lensing. Galaxies are shown in blue, dark matter distribution is shown in orange.
weak lensing In weak lensing, light is deflected by large number of non-uniformly distributed mass a long its path. Thus the thin lens approximation fails. The Jacobi matrix has to be generalized as, $\partial \beta_i / \partial \theta_j = \delta_{ij} - \int_0^{\chi_h} \mathrm{d}\chi g(\chi) \frac{\partial^2 \Psi}{\partial \zeta_i \partial \zeta_j},$ where $\zeta$ is transverse distance, $\Psi$ is Newtonian potential, $\chi$ is the comoving distance. The weight function is $g(\chi) = 2 r \int_{\chi}^{\chi_h} \mathrm{d} \chi' \; n(\chi') r(\chi - \chi')/r(\chi')$, where $n(\chi)$ is normalized lensing object distribution, $r = d_A/(1+z), d_A$ is the angular diameter distance, $\chi_h$ is the comoving distance to the horizon.
The image of the background galaxy is slightly distorted, known as cosmic shear. Statistical characteristics of the cosmic shear field provide valuable information of the lens system. Detailed projected mass can be reconstructed from it. Because the cosmic shear is only a few percent effect, it requires careful image processing. It turns out the point spread function (PSF) is one of the major sources of image distortion, which has to be corrected.
Cosmic shear has been observed by Van Waerbeke et al. (2000), Bacon et al. (2000) and Wittman et al.(2000). The observation is consistent with prediction from $\Lambda$-CDM model. Clowe et a. (2006) compared the optical and X-Ray image with weak lensing mass reconstruction of a merging cluster system 1E0657-556, and suggested the > 70% system mass is dark matter (See Fig. 5). The Hubble Space Telescope (HST) Cosmic Evolution Survey (COSMOS) project (2004 - 2005) led by Scoville measured 1.637 square degrees region, shape of half million distant galaxies, and used their distortion image to reconstruct the projected mass and mass evolution with redshift (See Fig. 6 and 7).
Fig. 5: observation of 1E0657-556. Left: optical band; Right: X-Ray band. The mass contours are reconstructed from weak lensing.
Fig. 6: Large scale structure mass distribution from COSMOS project. The total projected mass from weak lensing, is shown as contours in a and gray level in b c d; Stellar mass in blue; galaxy number density in yellow; hot gass in red.
Fig. 7: Three-dimensional reconstruction of the dark matter distribution from COSMOS project.
microlensing microlensing is a gravitational lensing by pass-by massive compact astronomical objects that causes background light source apparent brightening in a certain a mount of time (several months to years). Microlensing involves small and usually faint lenses such as dwarfs and blackholes. It provides a major mean to detect massive compact astronomical halo objects (MACHOs) including planets, red, brown and black dwarfs, neutron stars and black holes etc. Microlensing is very useful for baryonic dark matter detection. However, two surveys of MACHOs shows that they are insignificant of the overall halo mass.
### Dark Matter III: Cosmology
Evidences of dark matter can be tracked back to Oort (1932) and Zwicky (1933). In 1970s, study of galaxy rotation curves shows clear evidence of missing mass. Since then, overwhelming evidences, direct or indirect, emerge from galaxy velocity dispersion, cluster X-Ray spectroscopy, strong and weak gravitational lensing, N-body simulation, large scale structure, comic microwave background (CMB), baryon acoustic oscillation (BAO), Type-Ia supernovae (SNe) and Ly-$\alpha$ forest.
The concordant cosmology model $\Lambda$-CDM postulates existence of significant amount of dark energy and cold dark matter. $\Lambda$ stands for Einstein's cosmology constant, i.e. dark energy; CDM stands for cold dark matter. Many other competing cosmology models, such as CHDM, OCDM, SCDM and $\tau$CDM, also require some amount of dark matter. The success of CDM cosmology models forces people to take their postulates seriously.
Modern cosmology is based on three observation facts of the universe: in large scales ($\gtrsim$ 10 Mpc ), the universe is homogeneous (See Fig. 0), isotropic and the Doppler-shift velocity of an observable object is proportional to their distance to the earth. The third observation is well known as Hubble's law. The proportionality of the recession velocity to the distance $v/d = H(t)$ is called Hubble's parameter. The current value is called Hubble constant, $H_0 = 71 \pm 4$ km/(s$\cdot$Mpc). Two dimensionless constants are frequently used, $h = H_0/100$ km/(s$\cdot$Mpc) and $h_{50} = H_0/50$ km/(s$\cdot$Mpc). Hubble's law implies the universe is undergoing a homogeneous expansion.
Fig. 0: Density fluctuations vs scale. Image from Sloan Digital Sky Survey project.
To generalize our limited knowledge of the universe, one also need to adopt a modern version of Copernican principle, formally termed Cosmological Principle. Cosmological principle extends the three observation facts to the whole observable universe. Mathematically it assigns the universe a symmetry and leads to Robertson-Walker (RW) metric: $\mathrm{d}s^2 = - \mathrm{d}t^2 + a^2(t)\left( \frac{\mathrm{d} r^2 }{1 - k r^2} + r^2 \mathrm{d}\Omega^2_{\theta,\phi} \right),$ where $r$ is called comoving distance, $a(t)$ is called scalar factor and related to Hubble parameter $H(t) = \dot{a}/a$. RW metric describes three classes of universe, open $(k < 0)$, flat $(k = 0)$ and close $(k > 0)$.
Fig. 1: three types of universe, closed, open and flat. According to General Relativity, the geometry of the universe is determined by the ratio $\Omega_0$ of the total energy density and the critical density. Credit: NASA/WMAP Science Team
According to Einstein's Equation of Field (EEF), the geometry of the universe if ultimately determined by the mass distribution $T^{\mu\nu}$ and probably also cosmological constant $\Lambda$. EEF implies: $\begin{split} & \left( \frac{\dot{a}}{a} \right)^2 + \frac{k}{a^2} = \frac{8\pi G }{3 c^2} (\rho + \rho_\Lambda)\\ & 2 \frac{\ddot{a}}{a} + \left( \frac{\dot{a}}{a} \right)^2 + \frac{k}{a^2} = \frac{8 \pi G}{c^2} \left( \rho_\Lambda - p/c^2 \right) \end{split},$ where $\rho$ is the total energy density, $p$ is the pressure owing to $\rho$ and the motion, $\rho_\Lambda = \Lambda /8\pi G$ is known as dark energy.
Similarly, introduce Hubble density $\rho_H \equiv \frac{3}{8\pi G} H^2$ and curvature density $\rho_k = -\frac{3 k}{8 \pi G}$, total energy density $\rho_0 = \rho + \rho_\Lambda$. The current value of Hubble density is called critical density $\rho_c \equiv \frac{3}{8\pi G} H_0^2$. It is convenient to work with dimensionless density ratios $\Omega_i = \rho_i/\rho_c$, also known as density fractions or relic fractions. EEFs become, $\begin{split} & 1 = \Omega_k + \Omega_0 \\ & \frac{\ddot{a}}{a} = - \frac{H_0^2}{2} \sum_i \Omega_i (1+3w_i) \end{split}.$ The main contributions for $\Omega_0$ include dark energy $\Omega_\Lambda$, dark matter $\Omega_{DM}$ (cold $\Omega_{CDM}$, warm $\Omega_{WDM}$ and/or hot $\Omega_{HDM}$), neutrinos $\Omega_\nu$, baryonic matter $\Omega_b$ and mosmic microwave background radiation (photons) $\Omega_R$ etc. $w_i = p_i/ \rho_i$ is called equation of state. For idea gas, $w_i$ are constants, which is indeed true in cosmology since the typical density is only $\rho_c \sim 10^{-26} \; \mathrm{ kg / m^3}$. For non-relativistic matters such as baryons and cold dark matter, $w = v_s^2 / c^2 \simeq 0$, where $v_s$ is the speed of sound in such a matter medium. For relativistic particles such as photons, neutrinos and hot dark matter, $w = 1/3$. Energy conservation $\nabla_\nu T^{\mu\nu} = 0$ (also implied from EEFs) implies $\rho(t) \propto a^{-3(1+w)}$. Dark matter density $\rho_\Lambda$ stays constant while the universe expands, implying $w_\Lambda = -1$. In some other models, there exists another hypothetical energy similar to dark energy, called quintessence. Quintessence may have $w \leq -1$. Measurement of the deviation of $w_\Lambda$ from -1 provides information of existence of quintessence.
In summary, the geometry of the universe is determined by the energy density ratio $\Omega_0$; the acceleration of the universe expansion is determined by the relative abundance of energy and matter. Conversely, we can determine the values of these density ratios by measuring the observables associated with the cosmos geometry and/or expansion acceleration. The existence of dark matter as well as dark energy thus is measurable.
The density of a galaxy and other astronomical structures can be obtained from their mass-to-light ratio. As we will see in following sections, it may provide direct evidence of existence of dark matter. Given the mass-to-light ratio $\Upsilon$ (V-band), the density fraction $\Omega = (6.12 \pm 2.16) \times 10^{-4} h^{-1} \Upsilon/\Upsilon_\odot$. Typical cluster mass-to-light ratio is 200 - 300 $\Upsilon_\odot$, indicating a density fraction 0.17 - 0.26.
The geometry of the universe affects the anisotropy of the CMB power spectrum. Cosmic Background Explorer(COBE) (1989 - 1996) and Wilkinson Microwave Anisotropy Probe (WMAP) (2001 - ) and others measured the full sky CMB angular power spectrum (See Fig. 2 & 3).
Fig. 2: 7 years WMAP full sky image of CMB. Credit: NASA/WMAP Science Team
Fig. 3: the angular power spectrum of CMB from WMAP. Credit: NASA/WMAP Science Team
The deviation of high red-shift Type-Ia supernovae from standard candle reflects the acceleration of the universe expansion. In 1998, High-Z supernova Search Team and Supernova Cosmology Project report evidence that the universe expansion is accelerating (See Fig 4).
Fig. 4: measurement of high-z supernovae magnitudes vs. red-shift z.
Similar to CMB fluctuation, BAO is the fluctuation of baryon density in the universe. Sloan Digital Sky Survey (SDSS) project provides an image of the distribution of matter (See Fig. 5). SDSS Team report discovery of baryon acoustic peak in the Large-Scale Correlation Function of SDSS Luminous Red Galaxies, which provides information of cosmology model parameters (See Fig. 6).
Fig. 5: a SDSS map of local galaxies. See a 3D map of SDSS-III galaxies below.
Fig. 6: SDSS galaxies two point correlation function vs. distance. The magenta curve is pure CDM model $\Omega_m h^2 = 0.105$, which lacks acoustic peak at around 100 Mpc/h. Other models are green, red, blue $\Omega_m h^2 = 0.12, 0.12, 0.14, 0.15$. All models take $\Omega_b h^2 = 0.024, n = 0.98$.
Combine all these cosmological probes, the current data suggest (See Fig. 7), $\Omega_\Lambda \simeq 0.73, \\ \Omega_M \simeq 0.27, \\ \Omega_{R} \simeq 6 \times 10^{-5}, \\ \Omega_\nu < 0.0062, \\ \Omega_0 \simeq 1.00, \\ w_\Lambda \simeq -1 \pm 0.053.$ This result is also consistent with the typical mass-to-light ratios measured in galaxies and clusters. We conclude from these data: (1): our universe is ( at least close to ) a flat universe ($k\simeq 0$); (2): there is significant amount of dark energy ($\Omega_\Lambda > \Omega_M \gg \Omega_\text{others}$); (3): The universe expansion is accelerating ($\ddot{a}/a > 0$).
Fig. 7: $\Omega_\Lambda$ vs. $\Omega_M$ using compilation of various cosmological probes.
Modern cosmology postulates the universe is born in a big bang in around 13.7 billion years ago. The early universe is in equilibrium at very high temperature. As it expands, temperature drops, the number of heavier particles drops, until their annihilation rates below the universe expansion rate. Thus they freeze out from reaction and become the relic. The big bang nucleosynthesis predicts abundance of elements based on the baryon-to-photon ratio $\eta$. Measurement of element relative ratios determines $\eta$, hence the baryon relic density, since photon relic density is known from CMB. The current data suggest, $\Omega_b h^2 = 0.0214 \pm 0.0020 \quad (9.3\%).$
Now the evidence of dark matter arises: $\Omega_M \simeq 0.3$ but $\Omega_b \simeq 0.04$, suggesting there are significant amount of dark matter $\Omega_{CDM} \simeq 0.23$ (See Fig. 8).
Fig. 8: the contents of the universe. Credit: NASA/WMAP Science Team.
.
## Nov 28, 2012
### $f(R)$ gravity and dark matter
Consider Einstein-Hilbert action for gravitational field, $S = \int \mathrm{d}^4 x \sqrt{-g} \left\{ \frac{1}{2\kappa}R - \frac{1}{\kappa}\Lambda + \mathcal{L}_M \right\}$ where $R = R^{\mu\nu} g_{\mu\nu}$ is the curvature scalar, $R^{\mu\nu} = R^\lambda_{\mu\lambda\nu}$ is the Racci tensor, $\kappa = 8 \pi G/c^2$.
Einstein-Hilbert action is the most simple Lorentzian invariant action that encodes space-time curvature and gives correct gravity (Newtonian gravity) in weak field limit. A straightforward generalization of Einstein-Hilbert action is the $f(R)$ gravity: $R \to f(R) = a_0 + a_1 R+ a_2 R^2 + \cdots$. From the aesthetic point of view, one may argue the higher order terms shouldn't be there. But there terms may arise for a good reason. We know that at high energy $\sim T_\text{Pl}$, general relativity (GR) will be superseded by quantum gravity. This in fact may even happen before Planck scale. General relativity is merely a low energy effective theory. Effective theories are not always neat. In fact, it is well known that quantized general relativity is non-renormalizable. Therefore, quantum fluctuation would bring infinite terms into the effective Lagrangian.
A natural question is, if these term has any observable effect. Let's restrict ourselves to classical theory only. The question is, how well has general relativity been tested [1]? $f(R)$ gravity and many other competing theories of gravity shares the same foundations with general relativity, except for the amount of gravity produced by the same energy. The relevant test includes deflection of light ray and time delay (Fig. 1), Mercury perihelion shift and spin precession (Table 4), change of Newton's constant (Table 5), mass of graviton (or propagation distance of gravity) etc.
Fig. 1: test of deflection of light, $\delta \theta = \frac{1+\gamma}{2}\frac{4 G M}{d}$ and test of time delay experiment $\delta t = 2(1+\gamma ) GM \ln \left( \frac{(r_\oplus + \mathbf{x}_\oplus\cdot \mathbf{n})(r_e - \mathbf{x}_e\cdot \mathbf{n})}{d^2}\right)$. General relativity predicts $\gamma = 1$.
Table 5: test of change of Newton constant.
Note that most of these tests are conducted with in solar system. They cannot exclude gravity theory with only large scale effects. It has been known [2-3] $R^2$ and/or $R^{\mu\nu}R_{\mu\nu}$ term can produce a massive scalar field in addition to graviton. This field couples with matter through Yukawa potential. Its mass can be chosen such than too heavy to propagate a observable distance, but too light $\ll M_\text{Pl}$ to modified gravity. In other words, it is a massive, very weakly interaction particle. Naturally, it is a dark matter candidate, since it only manifests itself in large scales.
Now let's see how it happens. Consider $R^n$ term: $S_n = \frac{1}{\kappa}\int \mathrm{d}^4 x \sqrt{-g} R^n$ Now let's take functional derivative with respect to $g^{\mu\nu}$. $\frac{\delta \sqrt{-g} R^n}{\delta g^{\mu\nu}} = \sqrt{-g} R^{n-1} \left( n \frac{\delta R}{\delta g^{\mu\nu}} + \frac{R}{\sqrt{-g}}\frac{\delta\sqrt{-g}}{\delta g^{\mu\nu}} \right)$ Now, according to Jacobi's formular, $\delta g = g g_{\mu\nu} \delta g^{\nu\mu}$. So $\frac{1}{\sqrt{-g}}\frac{\delta\sqrt{-g}}{\delta g^{\mu\nu}} = -\frac{1}{2}g_{\mu\nu}.$ $\frac{\delta R}{\delta g^{\mu\nu} } = R_{\mu\nu} + g^{\alpha\beta} \left( \frac{\delta \Gamma_{\alpha\beta;\lambda}^\lambda}{\delta g^{\mu\nu}} - \frac{\delta\Gamma_{\alpha\lambda;\beta}^\lambda}{\delta g^{\mu\nu}} \right)$ Note that $\delta R^\lambda_{\;\;\alpha\lambda\beta} = \delta\Gamma^\lambda_{\alpha\beta;\lambda} - \delta\Gamma^\lambda_{\alpha\lambda;\beta}$. $\delta \Gamma$'s are in fact tensors: $\delta\Gamma^\lambda_{\alpha\beta} = \frac{1}{2} g^{\lambda\rho}\left\{ \delta g_{\rho\alpha;\beta} + \delta g_{\rho\beta;\alpha} - \delta g_{\alpha\beta;\rho} \right\}.$
Hence $\frac{\delta R}{\delta g^{\mu\nu} } = R_{\mu\nu} + \frac{\delta g^{\alpha\beta}_{ \;\;\;\;;\alpha;\beta}}{\delta g^{\mu\nu}} - \frac{\delta g^{\alpha \;\;;\beta}_{\;\; \alpha \;\;;\beta}}{\delta g^{\mu\nu}}$
Therefore, the action $\delta S_n = \frac{1}{\kappa} \int \mathrm{d}^4 x \sqrt{-g} \delta g^{\mu\nu} R^{n-1} \left\{ n R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R + \frac{n-1}{R} \left(R_{;\mu;\nu} - g_{\mu\nu} R^{;\alpha}_{\;\;;\alpha} \right) + \frac{(n-1)(n-2)}{R^2} \left( R_{;\mu}R_{;\nu} - g_{\mu\nu} R^{;\alpha} R_{;\alpha} \right) \right\}$ yields the equation of motion: $\sum_n a_n R^{n-1} \left\{ n R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R + \frac{n-1}{R} \left(R_{;\mu;\nu} - g_{\mu\nu} R^{;\alpha}_{\;\;;\alpha} \right) + \frac{(n-1)(n-2)}{R^2} \left( R_{;\mu}R_{;\nu} - g_{\mu\nu} R^{;\alpha} R_{;\alpha} \right) \right\} = \frac{\kappa}{2} T_{\mu\nu}$
For $n = 1, a_1 = \frac{1}{2}$ it reduces to GR. One of the most distinguish feature of this formulation is for $n=1$ (GR), two extra terms vanish. If we keep terms up to $n=2$, the equation of motion simplifies as:$(1 + 2 a_2 R) G_{\mu\nu} + a_2 g_{\mu\nu} R^2 + 2 a_2 \left( R_{;\mu;\nu} - g_{\mu\nu} R^{;\alpha}_{\;\;;\alpha} \right) = \kappa T_{\mu\nu}$
Now, let's identify the particle contents. Define $h_{\mu\nu} = g_{\mu\nu}-\eta_{\mu\nu}$, where $\eta_{\mu\nu}$ is the Minkowski metric. In weak field limit, we can simply take $h_{\mu\nu} \to \delta g_{\mu\nu}$ and $g_{\mu\nu} \to \eta_{\mu\nu}$. But graviton is usually defined as the inverse trace $\bar{h}_{\mu\nu} = h_{\mu\nu} - \frac{1}{2}\eta_{\mu\nu} h$, where $h = h^\mu_\mu$. Note that, $G_{\mu\nu} = -\frac{1}{2} \Box \bar{h}_{\mu\nu}$ and $R = \bar{h}^{\mu\nu}_{\;\;\;\; ,\mu\nu} - \frac{1}{2} \Box h$. The lowest order terms are: $S = \frac{1}{\kappa}\int \mathrm{d}^4 x \left\{ -\frac{1}{4} \bar{h}^{\mu\nu} \Box \bar{h}_{\mu\nu} + \frac{1}{8} h ( 2 a_2 \Box^2 + \Box ) h - \frac{a_2}{2} \bar{h}^{\mu\nu} \partial_\mu\partial_\nu \Box h \right\}$
As we can see, there is an additional field besides the conventional graviton field, with mass $m^2_h = -\frac{1}{2 a_2}$. This field interacts with matter via Yukawa coupling: $\sqrt{-g} \simeq 1 + \delta \sqrt{-g} = 1 + \frac{1}{2} h$. Therefore, it propagates $e^{- r /m_h}$. Taking $R \ll m^2_h \ll M_\text{Pl}$, $R^2$ gravity produces a massive weakly interacting particle.
[1]: Clifford M. Will, The Confrontation between General Relativity and Experiment , Living Rev. Relativity, 9, (2006), 3, http://www.livingreviews.org/lrr-2006-3
[2]: Jose A. R. Cembranos, Dark Matter from R2 Gravity, Phys. Rev. Lett. 102, 141301 (2009)
[3]: S. Stelle, Gen. Relativ. Gravit. 9, 353 (1978)
.
## Nov 26, 2012
### Jungman's Supersymmetric Dark Matter Review Compilation
Gerard Jungmana , Marc Kamionkowskic, and Kim Griestb,
"Supersymmetric Dark Matter",
Phys.Rept. 267 (1996) 195-373, arXiv:hep-ph/9506380
[src] [pdf] [ps]
My pdf readers cannot display some of the fonts. I used his src code and compiled a new version of pdf. [pdf] or url: [http://goo.gl/kMI4q]
## Nov 21, 2012
### Dark Matter II: Standard Model
The microscopic nature of dark matter is crucial yet still unanswered. The most successful description of the microscopic world is particle physics standard model (abbr. SM). However, standard model could not offer satisfactory answer for dark matter.
Fig. 1: the standard model stew
In order to be "invisible", a dark matter candidate has to be neutral, stable and massive. Most of standard model particles and hadrons are either unstable or charged, or both. Two possibilities remains in standard model, neutrinos and neutrons. But neutrons themselves are not stable. They has to form nuclei and in addition elements together with protons and/or electrons. More broadly, we consider a class of massive astronomical compact objects (abbr. MACHOs) composed of baryonic matter.
#### Neutrinos
In the simplest version of standard model, all neutrinos are left-handed massless particles. Neutrino oscillation reveals that neutrinos have non-zero mass. The current experimental values $\left|\Delta m _{23}^2 \right| \simeq \left| \Delta m_{13}^2 \right|= 2.43^{+0.13}_{-0.13} \times 10^{-3} \; \mathrm{eV}^2$, $\Delta m_{12}^2 = 7.59^{+0.20}_{−0.21} \times 10^{−5} \; \mathrm{eV}^2$ gives a lower bound $\geq 0.05$ eV for the heaviest neutrino. In the simplest case, neutrino masses are just $m_1 \simeq 0, m_2 \simeq 0.01 \;\mathrm{eV}, m_3 \simeq 0.05 \;\mathrm{eV}$. Of course it is possible that neutrino masses are nearly degenerate: $m_1 \sim m_2 \sim m_3 \gg \Delta m^2$. All kinematic measurement so far, however, has failed. Therefore a direct constraint of neutrino mass is particularly important.
Such upper bounds comes from cosmological probes. According to the big bang theory, neutrinos in cosmos were created in the early universe and decoupled from synthesis in the lepton epoch. The relic neutrino density is related to their masses [2]:$\Omega_\nu = \frac{\sum_f m_{\nu_f}}{93.2 \;\mathrm{eV} H_0^2} \times (100 \;\mathrm{km s^{-1} Mpc^{-1}})^2.$ Presence of massive neutrinos affects structures in the Universe. Analysis of observational data from various cosmological probes based on $\Lambda$-CDM model, with moderate statistical error roughly gives 0.5 - 1 eV neutrino mass upper bound.
On the other hand, simple argument from Pauli exclusive principle (the so-called phase space argument) could offer a lower bound of fermionic dark matter mass. In order to be bounded by gravity, the Fermion velocity should be smaller than the escape velocity,$m_f \geq \left( \frac{9\pi\hbar^3}{4\sqrt{2} g M^\frac{1}{2} R^\frac{3}{2} G^{\frac{3}{2}}} \right)^{\frac{1}{4}}$ where $g \geq 1$ is the internal degrees of freedom. Carrying out the analysis to the actually dark matter dominating system, we can conclude the mass of fermionic dark matter $\gtrsim 1 \;\mathrm{keV}$.
Moreover, if dark matter are neutrinos, they are hot matter. So far, hot dark matter models have various issues. First of all, hot dark matter tends to smooth out the matter fluctuation observed by Sloan Digital Sky Survey and other observations. It is also incompatible with the angular power spectrum of cosmic microwave background (See section 2.1). Secondly, in hot dark matter model (HDM), the structural formation is in the opposite order (i.e. large structures form early) of what we observed. Numerical simulation of galaxy formation in hot dark matter halo so far could not agree with observation.
Therefore, neutrinos are unlikely dark matter candidate.
Fig. 2: neutrino oscillation is caused by the difference between flavor eigenstates and mass eigenstates.
#### MACHOs
MACHOs such as red dwarfs and brown dwarfs may also show high mass-to-light ratio. However, large amount of baryonic matter will breaks the baryonic density constraint put by various astronomical probes based on cosmological models. Furthermore, observation using Hubble Space Telescope shows halo red dwarfs and brown dwarfs density is about 0.25% ~ 0.67% of the halo density, hence insignificant. The EROS project searching Magellanic clouds for microlensing events caused by MACHOs also reported that MACHOs make up to less than 8% of the halo mass.
Fig. 3: Left: An artist vision of a Y-dwarf; Right: An artist vision of a red dwarf.
In summary, standard model particles are almost ruled out as dark matter candidates.
## Nov 20, 2012
### Dark Matter I: Introduction
I was asked to write a review on dark matter, as the final report of General Relativity (GR) class. I'd like to share my reading here.
Fig. 1: Dark matter forms a halo around a galaxy.
In astronomy and physics, dark matter (abbr. DM) is a term for non-luminous matters in the Universe. It is proposed to explain the anomalous mass-to-light ratio (abbr. M/L). Dark matter can not be observed but leaves their trace in galaxy motion among others. It is generally postulated dark matter forms a halo around a galaxy. They initiate the formation of galaxies and galaxy clusters, dominates the mass of large scale structures. At present, dark matter is part of the concordant cosmology model, the $\Lambda$-CDM model, where CDM is abbreviation for cold dark matter. It is generally believed dark matter comprises about 23% of the total mass of the Universe. Together with dark energy 72%, they dominate the present Universe.
Fig. 2: Left: The contents of the Universe according to $\Lambda$-CDM model. Only 4.6% of the Universe is ordinary matter. Dark matter comprises 23% of the Universe. 72% of the Universe, is composed of dark energy. This energy, distinct from dark matter, is responsible for the present-day acceleration of the universal expansion. Right: Timeline of the Universe according to $\Lambda$-CDM model.
However, the nature of dark matter is still unclear. It has been shown, none of the elementary particles in the standard model (abbr. SM) could be dark matter. Perhaps some suspicion remains for neutrinos, given their masses indeterminate. Therefore, the existence of dark matter naturally postulates physics beyond the particle physics standard model. In fact, for various theoretical speculations, particle physicists tend to believe standard model is merely an effective theory of an more advanced theory. The demands of dark matter, whether truly relevant or not, has been as a strong motivation for extension of standard model in particle physics. Of course not all new particles are dark matter candidates. It is generally plausible to assume dark matter are some weakly interacting massive particles (abbr. WIMPs), raising from some TeV scale new physics. The favored WIMP candidates include neutralinos from supersymmetric (abbr. SUSY) models and the first Kaluza-Klein excitations from universal extra dimension (abbr. UED) models among others.
Fig. 3: The standard model elementary particle zoo
On the other hand, dark matter may also merely be a misleading paradigm. The existence of dark matter, inferred from large gravitational mass to luminosity ratio, is based on the assumption that general relativity with its flat space-time approximation Newtonian gravity holds up to cosmological scale. Milgrom and others has shown that it is possible to modified Newtonian gravity to explain the large mass-to-light ratio in galaxies. If confirmed, however, mankind's understanding of the cosmos will be completely overthrown.
Just as Richard P. Feynman had said, "Experiment is the sole judge of scientific truth". Some of the proposals lies within the current experimental and observational scope. People have conducted various experiments and observations to detect possible dark matter directly or indirectly, from space telescope to ground based detector and colliders. Current results, mainly null results with some suspicious signals, have excluded a large class of theories.
#### Direct Dark Matter Search
ArDM ANAIS CASPAR CDMS COUPP CRESST CUORE DAMA DEAP/CLEAN DM-TPC Drift Edelweiss Genius HDMS IGEX LIBRA MIMAC Majorana NAIAD NEWAGE ORPHEUS Picasso ROSEBUD SIMPLE UKDMC Ultima XENON XMASS WARP Zeplin
#### Indirect Dark Matter search
AMANDA AMS ANTARES BAIKAL BESS CAPRICE GAPS GLAST HEAT IceCube IMAX MACRO Nestor NINA Pamela Super-K
In this paper, I review the physics of dark matter. The aim is to give a pedagogical introduction to general interested readers, like myself. In the next few posts, I will introduce the evidences and motivations for dark matter; then review the candidates and their properties. Focus will be put on popular models such as WIMPs, gravitino, axions and sterile neutrinos. After that, I will talk about the undergoing experiments and observations in astronomy and physics. Their results and the constraints on models will be discussed. In the end, I will visit alternative theories and other speculations.
## Nov 17, 2012
### The Top 10 Supercomputers
parameters of top 10 supercomputers (Nov. 2012) data source: www.top500.org
comparison of top 10 supercomputers (Nov. 2012) data source: www.top500.org
comparison of efficiency of top 10 supercomputers
the power law of performance
## Nov 3, 2012
### On the Pronunciation of the Name of Greek Letters
There are, generally speaking, two main uses of Greek letters in English, the name of honor societies and academia. As a graduate student, I interact with both people from academia and collage students. I found there are roughly three ways for English speakers to pronounce the name of Greek letters
1. as the Greek pronunciation, very common in academia, though most people may not follow it exactly; 2. as the English name, very common in English speakers; 3. as the name of the corresponding English letter, common for collage students who is not STEM majors.
Either way is perfectly okay. But sometimes, confusion arises when one mix these three ways. The most notorious example is the letter xi and psi. Some people call both /sai/. Moreover, fancy fonts of English letters may be confused with Greek letters.
## Oct 18, 2012
### Coulomb's Law in $d$-Dimension
In 3+1 dimension, Coulomb's law and Newton's law of gravity takes the form of distance inverse-squared,
$f = \frac{1}{r^2}$ with proper definition of the source and distance. What is Coulomb's law and Newton's law look like in high dimensions?
To answer this question, we have to make assumptions. We assume the Lagrangian stays the same form in $d+1$ dimension. It means, the Maxwell equations hold; or equivalently, Poisson equation holds $\nabla^2 \varphi(\mathbf{x}) = 0.$
Solving $\varphi$ in free space will produce the potential hence the force. By doing Fourier transform, $\varphi(\mathbf{x}) = \int \frac{\mathrm{d}^d k}{(2\pi)^d} \frac{ e^{i \mathbf{k}\cdot \mathbf{x}}}{k^2}.$
#### Solving $\varphi(r)$
$\varphi = \frac{1}{(2\pi)^d}\int \mathrm{d} k \; k^{d-3} \mathrm{d}^{d-1}\Omega {e^{i k r \cos \theta_1}}$, where $\mathrm{d}^{d-1} \Omega$ is the $(d-1)$-D angular element. Now this integral involves the integral of one azimuthal angle $\theta$. We can parametrize the coordinate in $d$-D spherical coordinate as: $x_1 = r \cos\theta_1; x_2 = r \sin\theta_1 \cos\theta_2; \cdots; x_d = r \sin\theta_1 \sin\theta_2\cdots \sin\theta_{d-2}\cos\phi;$ Then the surface element becomes: $\mathrm{d}^{d-1} \Omega = \sin^{d-2}\theta_1 \sin^{d-3}\theta_2 \cdots \sin \theta_{d-2} \mathrm{d}\theta_1 \mathrm{d}\theta_2 \cdots \mathrm{d}\theta_{d-2} \mathrm{d} \phi$. The integral over angles except $\theta_1$ is just the surface area of a $(d-2)$-D hypersphere (See appendix for a derivation) $S_{d-2} = \frac{2 \pi^{\frac{d-1}{2}}}{\Gamma\left( \frac{d-1}{2} \right) }$
So $\varphi = \frac{ S_{d-2}}{ (2\pi)^d r^{d-2}} I_{d}$, where $I_d = \int_0^\infty \mathrm{d}\xi \; \int_0 ^\pi \mathrm{d}\theta \; \xi^{d-3} \sin^{d-2}\theta \exp\left[ i \xi \cos\theta \right].$
It's tempted to do the $\xi$ integral first, because it gives gamma function and leaves the rest a integral over $\tan\theta$: $\int_0^{\pi/2} \mathrm{d}\theta \tan^{d-2}\theta + (-1)^{d-2}\int^{\pi/2}_\pi \mathrm{d}\theta \tan^{d-2}\theta$. The problem is that integral of $\tan$ function at $\pi/2$ is singular . We can do the $\theta$ integral first, which yields (using mathematica):$I_d = \int_0^\infty \mathrm{d}\xi \; \sqrt{\pi} \Gamma\left( \frac{d-1}{2} \right) \frac{{ }_0F_1\left( \frac{d}{2}, -\frac{\xi^2}{4} \right)}{\Gamma\left( \frac{d}{2} \right)} \xi^{d-3} = 2^{d-3} \sqrt{\pi} \Gamma\left( \frac{d-2}{2} \right) \Gamma\left( \frac{d-1}{2} \right).$
Therefore, $\varphi = \frac{ S_{d-2}}{ (2\pi)^d r^{d-2}} 2^{d-3} \sqrt{\pi} \Gamma\left( \frac{d-2}{2} \right) \Gamma\left( \frac{d-1}{2} \right) = \frac{\Gamma\left( \frac{d-2}{2}\right)}{4 \pi^\frac{d}{2} }\frac{1}{r^{d-2}}$
Coulomb potential in higher dimensions
#### Gauss Law
There is a much easier method to solve this problem. We note that Gauss theorem (in mathematics) hence Gauss law (in physics) still holds. $E(r) \cdot S_{d-1} = 1$ where $S_{d-1}$ is the area of a $d-1$ D hypersphere. So we get Coulomb's law in $d+1$ dimension as:$f = \frac{\Gamma\left( \frac{d}{2} \right) }{2 \pi^\frac{d}{2}} \frac{1}{r^{d-1}}.$
It can be checked, $-\frac{\partial}{\partial r} \varphi(r) = E(r)$, Just as we expected. Of course, the direct integration can be used in where Gauss law does not hold.
#### Coulomb's law for massive boson exchange
Another interesting result is the Coulomb's law for classical theories with massive intermediate boson in higher dimentions. The Poisson equation becomes $(\nabla^2 - m^2) \varphi(\mathbf{x}) = 0.$
By doing Fourier transform, $\varphi(\mathbf{x}) = \int \frac{\mathrm{d}^d k}{(2\pi)^d} \frac{ e^{i \mathbf{k}\cdot \mathbf{x}}}{k^2+m^2}.$ Apply the same technique again (except the Gauss Law), $\varphi(r) = \frac{ (m r)^{\frac{d}{2}-1} K_{\frac{d}{2}-1} (mr)}{(2 \pi)^\frac{d}{2}}\frac{1}{ r^{d-2}}$ where $K_n(x)$ is the Bessel function of the second kind.
Comparison of field potential of massless and massive boson exchange in higher dimensions
Comparison of field potential of massless and massive boson exchange in higher dimensions at large $r$
#### Appendix: the surface area of a $(d-1)$-D hypersphere
Consider the following Gaussian integeral: $\int \mathrm{d}^n x \exp\left( - \mathbf{x}^2 \right) = \left( \int \mathrm{d}x \exp\left[ - x^2 \right] \right)^n = \pi ^{\frac{n}{2}}$
The left hand side can be written as $\int \mathrm{d}r \; r^{n-1} \exp\left[ -r^2 \right] S_{n-1}$. So $\frac{1}{2} \Gamma\left(\frac{n}{2}\right) S_{n-1} = \pi^\frac{n}{2}$. $S_{n-1} = \frac{2 \pi^\frac{n}{2} }{\Gamma\left(\frac{n}{2}\right)}.$
http://naturalunits.blogspot.com/2013/01/a-remark-on-high-dimension-propagators.html
update, March 21, 2014:
• I was solving the Green's function with free space boundary condition. The Poisson equation should have been
$\nabla^2 \varphi(\mathbf{x}) = \delta(\mathbf{x})$
• This is not a the only generalization. It may not even be the natural generalization, from the point view the Newtonian approximation in general relativity. In GR, one should write down the Einstein equation in $d+1$ D and do the linearization there, as our commentator explained. (That means one fix G, which is not always appreciated.)
## Oct 4, 2012
### ubuntu/linux 无法启动问题
-------------------------------------
Gave up waiting for boot device. Common problems:
-Boot args (cat /proc/cmdline)
-check rootdelay
-check root
-Missing modules(cat /proc/modules; ls /dev)
ALERT! /dev/disk/by-uuid/d286688a-8587-4977-99b9-1414cf09c374 does not exist. Dropping to a shell!
BusyBox v1.1.3 (Debian 1:1.1.3-5ubuntu12) Built-in shell (ash)
Enter 'help' for build in commands.
(initramfs)|
----------------------------------
linux /boot/vmlinuz-2.6.31-17-generic root=UUID=a026ae5a-4c0b-42cd-8b46-b57bfb433ac7 ro quiet splash
linux /boot/vmlinuz-2.6.31-17-generic root=/dev/sda1 ro quiet splash
### c++ notes: I/O
1. 记得包含 <iomanip> <fstream>
2. 少用 std::cin >> ...; 处理字符和字符串. 如果使用记得加一个 std::cin.ignore(); 来吃掉回车. std::cin.sync(); 来清空buffer
3. 使用std::cin.getline( const char* str, size_t n, char ch ); /*对于文件流也一样适用*/ ch设置为 '\n'
4. 对于使用string 类型的操作, 使用相应的版本: getline( iostream std::cin, string str);
5. 使用GNU C的系统库函数来实现系统相关的操作, 比如文件操作. 比较有用的一个函数是 int scandir(const char* directory, struct ***dirent namelist, int (*selector)(void * ), alphasort) ; 来扫描文件. 其中 返回文件个数, directory 是目标文件夹(路径), namelist 存储扫描得到的文件(夹) 一般定义为: struct ** storagelist 并使用 &storagelist 作为参量. 不用担心其大小. 因为它只是一个指针, 指向的部分是由malloc 分配的. 使用完之后最好加以free.; selector 是一个自定义的比较函数. 返回值为非零的文件将会被选择; alphasort 也本应该是一个自定义的文件, 表示返回文件的排列方法. 不过系统提供了几个函数. alphasort就是一个常用的.
6. 使用 定义在 <cstdlib.h> 中的库函数来读取环境变量: system(const char* str);来执行shell命令. getenv(const char* str) 来获得环境变量.
7. 使用系统预编译宏: __TIME__ __DATE__ ; __LINE__ 等等
## Sep 7, 2012
### Faster Mathematica II: List Manipulation
The most important datatype in math is list. Lists in Mathematica are like set in mathematics. List is also used to hold data. In the case of large amount of data in input/output or intermediate stage, fast list operations certainly would speedup the code. One surprising fact is, seemly the same list operations may have different performance.
### 1. List Construction
#### 1-0 Table is fast when creating block list
The most simple way to construct a list is use Table. Table[] is the suitable choice in generic cases. But making use of knowledge of the list, you can get better performance.
Table[], Array[], and function apply
#### 1-1 use build-in functions
Some build-in functions support generation of a list. They are usually faster that explicit call them in a list constructor like Table[].
Build-in random number generator is way faster.
#### 1-3 Use NestList for recursable list
Recursable list means you can get the n+1-th from the n-th.
### 2. List Traversal
#### 2-1. Listable attribute
Many function as Listable attribute. This means, when hitting a list, the function automatically hit on each element of the list. And the result will be the list of all hits. Using Listable functions is the fastest list traversal.
One can also Map (/@), Apply (@@, @@@), Thread, MapThread the function (head) to a list if Listable attribute is not available.
Note that some binary operations have been defined between lists and between list and scalar. This build-in operations between lists are fast and can be in parallel.
#### 2-2. iterate a list
If one has to explicitly traverse a list, Mathematica allows iterating a list directly. It is faster than using iterators. The iteration concept is not limited to list. In Do and other iterating cases, list can also be iterated directly.
#### 2-3. use build-in list manipulation
Mathematica has defined many build-in list operations. Using these build-in operations are generally faster.
|
2017-04-24 07:30:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.5931709408760071, "perplexity": 1445.906754688526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119120.22/warc/CC-MAIN-20170423031159-00612-ip-10-145-167-34.ec2.internal.warc.gz"}
|
http://rosenberglab.net/impact_citation_acceleration.html
|
# Michael S. Rosenberg’s Laboratory
Computational Evolutionary Biology & Bioinformatics
E-mail: msr@asu.edu
← Back to introduction
## citation acceleration
Citation acceleration (Sangwal 2012) is defined as the total number of citations of an author divided by the square of their academic age,
$$a = \frac{C^P}{\left(Y-Y_0+1\right)^2}=\frac{\sum\limits_{i=1}^{P}{C_i}}{\left(Y-Y_0+1\right)^2}.$$
Yeara
19972.0000
19983.2500
19994.0000
20004.1875
20015.3200
20026.6667
20037.4898
20049.1406
200510.4321
200611.7600
200712.4711
200813.1250
200913.7041
201014.2194
201114.2978
201214.0742
201314.0000
201413.8549
201513.6066
201613.3150
201712.7800
## References
• Sangwal, K. (2012) On the relationship between citations of publication output and Hirsch index h of authors: Conceptualization of tapered Hirsch index hT, circular citation area radius R and ciation acceleration a. Scientometrics 93:987–1004.
|
2018-03-24 15:29:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3811940550804138, "perplexity": 11210.204334300965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257650730.61/warc/CC-MAIN-20180324151847-20180324171847-00161.warc.gz"}
|
http://www.martinbroadhurst.com/how-to-check-if-an-element-is-visible-in-selenium.html
|
# How to check if an element is visible in Selenium
You can use the isDisplayed() method of an element to find out if is displayed, but it may still be invisible if it’s obscured by another element.
## Java
boolean isDisplayed = driver.findElement(By.id("myId")).isDisplayed();
## Python
is_displayed = driver.find_element_by_id("myId").is_displayed()
## C#
bool isDisplayed = driver.FindElement(By.Id("myId")).Displayed;
|
2020-07-08 11:35:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3116162121295929, "perplexity": 4157.077139153912}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655896932.38/warc/CC-MAIN-20200708093606-20200708123606-00283.warc.gz"}
|
http://www.digplanet.com/wiki/Beta_decay
|
digplanet beta 1: Athena
Share digplanet:
Agriculture
Applied sciences
Arts
Belief
Chronology
Culture
Education
Environment
Geography
Health
History
Humanities
Language
Law
Life
Mathematics
Nature
People
Politics
Science
Society
Technology
β decay in an atomic nucleus (the accompanying antineutrino is omitted). The inset shows beta decay of a free neutron. In both processes, the intermediate emission of a virtual W boson (which then decays to electron and antineutrino) is not shown.
In nuclear physics, beta decay (β decay) is a type of radioactive decay in which a proton is transformed into a neutron, or vice versa, inside an atomic nucleus. This process allows the atom to move closer to the optimal ratio of protons and neutrons. As a result of this transformation, the nucleus emits a detectable beta particle, which is an electron or positron.[1]
Beta decay is mediated by the weak force. There are two types of beta decay, known as beta minus and beta plus. Beta minus (β) decay produces an electron and electron antineutrino, while beta plus (β+) decay produces a positron and electron neutrino; β+ decay is thus also known as positron emission.[2]
An example of electron emission (β decay) is the decay of carbon-14 into nitrogen-14:
14
6
C
14
7
N
+ e + ν
e
In this form of decay, the original element has decayed into a new chemical element in a process known as nuclear transmutation. This new element has an unchanged mass number A but an atomic number Z that is increased by one. As in all nuclear decays, the decaying element (in this case 14
6
C
) is known as the parent nuclide while the resulting element (in this case 14
7
N
) is known as the daughter nuclide. The emitted electron or positron is known as a beta particle.
An example of positron emission (β+ decay) is the decay of magnesium-23 into sodium-23:
23
12
Mg
23
11
Na
+ e+ + ν
e
In contrast to β decay, β+ decay is accompanied by the emission of an electron neutrino. β+ decay also results in nuclear transmutation, with the resulting element having an atomic number that is decreased by one.
Electron capture is sometimes included as a type of beta decay, because the basic nuclear process, mediated by the weak force, is the same. In electron capture, an inner atomic electron is captured by a proton in the nucleus, transforming it into a neutron, and an electron neutrino is released. An example of electron capture is the decay of krypton-81 into bromine-81:
81
36
Kr
+ e81
35
Br
+ ν
e
Electron capture is a competing (simultaneous) decay process for all nuclei that can undergo β+ decay. The converse, however, is not true: electron capture is the only type of decay that is allowed in proton-rich nuclides that do not have sufficient energy to emit a positron and neutrino.[3]
## β− decay
The Feynman diagram for β decay of a neutron into a proton, electron, and electron antineutrino via an intermediate W boson.
In β decay, the weak interaction converts an atomic nucleus into a nucleus with one higher atomic number while emitting an electron (e) and an electron antineutrino (ν
e
). The generic equation is:
A
Z
N
A
Z+1
N’
+ e + ν
e
[1]
where A and Z are the mass number and atomic number of the decaying nucleus.
Another example is when the free neutron (1
0
n
) decays by β decay into a proton (p):
np + e + ν
e
.
At the fundamental level (as depicted in the Feynman diagram on the right), this is caused by the conversion of the negatively charged (−13 e) down quark to the positively charged (+23 e) up quark by emission of a W boson; the W boson subsequently decays into an electron and an electron antineutrino:
du + e + ν
e
.
β decay generally occurs in neutron-rich nuclei.[4]
## β+ decay
Main article: Positron emission
Energy spectrum of beta particle in beta decay
In β+ decay, or "positron emission", the weak interaction converts a nucleus into its next-lower neighbor on the periodic table while emitting a positron (e+) and an electron neutrino (ν
e
). The generic equation is:
A
Z
N
A
Z−1
N’
+ e+ + ν
e
[1]
β+ decay cannot occur in an isolated proton because it requires energy due to the mass of the neutron being greater than the mass of the proton. β+ decay can only happen inside nuclei when the daughter nucleus has a greater binding energy (and therefore a lower total energy) than the mother nucleus. The difference between these energies goes into the reaction of converting a proton into a neutron, a positron and a neutrino and into the kinetic energy of these particles. In an opposite process to negative beta decay, the weak interaction converts a proton into a neutron by converting an up quark into a down quark by having it emit a W+ or absorb a W.
## Electron capture (K-capture)
Main article: Electron capture
In all cases where β+ decay of a nucleus is allowed energetically, the electron capture process is also allowed, in which the same nucleus captures an atomic electron with the emission of a neutrino:
A
Z
N
+ eA
Z−1
N’
+ ν
e
The emitted neutrino is mono-energetic. In proton-rich nuclei where the energy difference between initial and final states is less than 2mec2, β+ decay is not energetically possible, and electron capture is the sole decay mode.[3]
This decay is also called K-capture because the innermost electron of an atom belongs to the K-shell of the electronic configuration of the atom, and this has the highest probability to interact with the nucleus.
There is an analogous process possible in theory in antimatter: antiproton-rich antimatter radioisotopes might decay via an analogous process of positron capture,[citation needed] but in practice no such complex antimatter nuclides have either been discovered or artificially constructed.
## Q-values
The Q value is defined as the total amount of energy released in a given nuclear decay. In beta decay, Q is therefore also the sum of the kinetic energies of the emitted beta particle, neutrino, and recoiling nucleus. (Because of the large mass of the nucleus compared to that of the beta particle and neutrino, the kinetic energy of the recoiling nucleus can generally be neglected.) Beta particles can therefore emitted with any kinetic energy ranging from 0 to Q.[1] A typical Q is around 1 MeV, but can range from a few keV to a few tens of MeV.
Since the rest mass of the electron is 511 keV, the most energetic beta particles are ultrarelativistic, with speeds very close to the speed of light.
## Nuclear transmutation
If the proton and neutron are part of an atomic nucleus, these decay processes transmute one chemical element into another. For example:
137 55Cs → 137 56Ba + e− + ν e (beta minus decay) 22 11Na → 22 10Ne + e+ + ν e (beta plus decay) 22 11Na + e− → 22 10Ne + ν e (electron capture)
Beta decay does not change the number A of nucleons in the nucleus but changes only its charge Z. Thus the set of all nuclides with the same A can be introduced; these isobaric nuclides may turn into each other via beta decay. Among them, several nuclides (at least one for any given mass number A) are beta stable, because they present local minima of the mass excess: if such a nucleus has (A, Z) numbers, the neighbour nuclei (A, Z−1) and (A, Z+1) have higher mass excess and can beta decay into (A, Z), but not vice versa. For all odd mass numbers A the global minimum is also the unique local minimum. For even A, there are up to three different beta-stable isobars experimentally known; for example, 96
40
Zr
, 96
42
Mo
, and 96
44
Ru
are all beta-stable, though the first one can undergo a very rare double beta decay (see below). There are about 355 known beta-decay stable nuclides total.
Usually unstable nuclides are clearly either "neutron rich" or "proton rich", with the former undergoing beta decay and the latter undergoing electron capture (or more rarely, due to the higher energy requirements, positron decay). However, in a few cases of odd-proton, odd-neutron radionuclides, it may be energetically favorable for the radionuclide to decay to an even-proton, even-neutron isobar either by undergoing beta-positive or beta-negative decay. An often-cited example is 64
29
Cu
, which decays by positron emission 61% of the time to 64
28
Ni
, and 39% of the time by (negative) beta decay to 64
30
Zn
.
A beta-stable nucleus may undergo other kinds of radioactive decay (alpha decay, for example). In nature, most isotopes are beta stable, but a few exceptions exist with half-lives so long that they have not had enough time to decay since the moment of their nucleosynthesis. One example is the odd-proton odd-neutron nuclide 40
19
K
, which undergoes all three types of beta decay (β, β+ and electron capture) with a half-life of 1.277×109 years.
## Double beta decay
Main article: Double beta decay
Some nuclei can undergo double beta decay (ββ decay) where the charge of the nucleus changes by two units. Double beta decay is difficult to study, as the process has an extremely long half-life. In nuclei for which both β decay and ββ decay are possible, the rarer ββ decay process is effectively impossible to observe. However, in nuclei where β decay is forbidden but ββ decay is allowed, the process can be seen and a half-life measured.[5] Thus, ββ decay is usually studied only for beta stable nuclei. Like single beta decay, double beta decay does not change A; thus, at least one of the nuclides with some given A has to be stable with regard to both single and double beta decay.
"Ordinary" double beta decay results in the emission of two electrons and two antineutrinos. If neutrinos are Majorana particles (i.e., they are their own antiparticles), then a decay known as neutrinoless double beta decay will occur. Most neutrino physicists believe that neutrinoless double beta decay has never been observed.[5]
## Bound-state β− decay
A very small minority of free neutron decays (about four per million) are so-called "two-body decays", in which the proton, electron and antineutrino are produced, but the electron fails to gain the 13.6 eV necessary energy to escape the proton, and therefore simply remains bound to it, as a neutral hydrogen atom.[6] In this type of beta decay, in essence all of the neutron decay energy is carried off by the antineutrino.
For fully ionized atoms (bare nuclei), it is possible in likewise manner for electrons to fail to escape the atom, and to be emitted from the nucleus into low-lying atomic bound states (orbitals). This can not occur for neutral atoms whose low-lying bound states are already filled by electrons.
The phenomenon in fully ionized atoms was first observed for 163Dy66+ in 1992 by Jung et al. of the Darmstadt Heavy-Ion Research group. Although neutral 163Dy is a stable isotope, the fully ionized 163Dy66+ undergoes β decay into the K and L shells with a half-life of 47 days.[7]
Another possibility is that a fully ionized atom undergoes greatly accelerated β decay, as observed for 187Re by Bosch et al., also at Darmstadt. Neutral 187Re does undergo β decay with a half-life of 42 × 109 years, but for fully ionized 187Re75+ this is shortened by a factor of 109 to only 32.9 years.[8] For comparison the variation of decay rates of other nuclear processes due to chemical environment is less than 1%.
## Forbidden transitions
Beta decays can be classified according to the L-value of the emitted radiation. When L > 0, the decay is referred to as "forbidden". Nuclear selection rules require high L-values to be accompanied by changes in nuclear spin (J) and parity (π). The selection rules for the Lth forbidden transitions are:
$\Delta J = L-1, L, L+1; \Delta \pi = (-1)^L,$
where Δπ = 1 or −1 corresponds to no parity change or parity change, respectively. The special case of a 0+ → 0+ transition (which in gamma decay is absolutely forbidden) is referred to as "superallowed" for beta decay, and proceeds very quickly by this decay route. The following table lists the ΔJ and Δπ values for the first few values of L:
Forbiddenness ΔJ Δπ
Superallowed 0+ → 0+ no
Allowed 0, 1 no
First forbidden 0, 1, 2 yes
Second forbidden 1, 2, 3 no
Third forbidden 2, 3, 4 yes
## Beta emission spectrum
Beta decay can be considered as a perturbation as described in quantum mechanics, and thus Fermi's Golden Rule can be applied. This leads to an expression for the kinetic energy spectrum N(T) of emitted betas as follows:[9]
$N(T) = C_L(T) F(Z,T) p E (Q-T)^2$
where T is the kinetic energy, CL is a shape function that depends on the forbiddenness of the decay (it is constant for allowed decays), F(Z, T) is the Fermi Function (see below) with Z the charge of the final-state nucleus, E = T + mc2 is the total energy, p =(E/c)2 − (mc)2 is the momentum, and Q is the Q value of the decay. The kinetic energy of the emitted neutrino is given approximately by Q minus the kinetic energy of the beta.
### Fermi function
The Fermi function that appears in the beta spectrum formula accounts for the Coulomb attraction / repulsion between the emitted beta and the final state nucleus. Approximating the associated wavefunctions to be spherically symmetric, the Fermi function can be analytically calculated to be:[10]
$F(Z,T) = \frac{2 (1+S)}{\Gamma(1+2S)^2} (2 p \rho)^{2S-2} e^{\pi \eta} |\Gamma(S+i \eta)|^2,$
where S =1 − α2 Z2 (α is the fine-structure constant), η = ± αZE/pc (+ for electrons, − for positrons), ρ = rN/ℏ (rN is the radius of the final state nucleus), and Γ is the Gamma function.
For non-relativistic betas (Qmec2), this expression can be approximated by:[11]
$F(Z,T) \approx \frac{2 \pi \eta}{1 - e^{- 2 \pi \eta}}.$
Other approximations can be found in the literature.[12][13]
### Kurie plot
A Kurie plot (also known as a Fermi–Kurie plot) is a graph used in studying beta decay developed by Franz N. D. Kurie, in which the square root of the number of beta particles whose momenta (or energy) lie within a certain narrow range, divided by the Fermi function, is plotted against beta-particle energy. It is a straight line for allowed transitions and some forbidden transitions, in accord with the Fermi beta-decay theory. The energy-axis (x-axis) intercept of a Kurie plot corresponds to the maximum energy imparted to the electron/positron (the decay's Q-value). With Kurie plot one can find the limit on effective mass of neutrino.
## History
### Discovery and characterization of β− decay
Radioactivity was discovered in 1896 by Henri Becquerel in uranium, and subsequently observed by Marie and Pierre Curie in thorium and in the new elements polonium and radium. In 1899 Ernest Rutherford separated radioactive emissions into two types: alpha and beta (now beta minus), based on penetration of objects and ability to cause ionization. Alpha rays could be stopped by thin sheets of paper or aluminium, whereas beta rays could penetrate several millimetres of aluminium. (In 1900 Paul Villard identified a still more penetrating type of radiation, which Rutherford identified as a fundamentally new type in 1903, and termed gamma rays).
In 1900 Becquerel measured the mass-to-charge ratio (m/e) for beta particles by the method of J.J. Thomson used to study cathode rays and identify the electron. He found that m/e for a beta particle is the same as for Thomson’s electron, and therefore suggested that the beta particle is in fact an electron.
In 1901 Rutherford and Frederick Soddy showed that alpha and beta radioactivity involves the transmutation of atoms into atoms of other chemical elements. In 1913, after the products of more radioactive decays were known, Soddy and Kazimierz Fajans independently proposed their radioactive displacement law, which states that beta (i.e. β) emission from one element produces another element one place to the right in the periodic table, while alpha emission produces an element two places to the left.
### Neutrinos in beta decay
Historically, the study of beta decay provided the first physical evidence of the neutrino. In 1911 Lise Meitner and Otto Hahn performed an experiment that showed that the energies of electrons emitted by beta decay had a continuous rather than discrete spectrum. This was in apparent contradiction to the law of conservation of energy, as it appeared that energy was lost in the beta decay process. A second problem was that the spin of the nitrogen-14 atom was 1, in contradiction to the Rutherford prediction of ½.
In 1920–1927, Charles Drummond Ellis (along with James Chadwick and colleagues) established clearly that the beta decay spectrum is really continuous, ending all controversies. It also had an effective upper bound in energy, which was a severe blow to Bohr's suggestion that conservation of energy might be true only in a statistical sense, and might be violated in any given decay. Now the problem of how to account for the variability of energy in known beta decay products, as well as for conservation of momentum and angular momentum in the process, became acute.
In a famous letter written in 1930 Wolfgang Pauli suggested that in addition to electrons and protons atoms also contained an extremely light neutral particle which he called the neutron. He suggested that this "neutron" was also emitted during beta decay (thus accounting for the known missing energy, momentum, and angular momentum) and had simply not yet been observed. In 1931 Enrico Fermi renamed Pauli's "neutron" to neutrino, and in 1934 Fermi published a very successful model of beta decay in which neutrinos were produced. The neutrino interaction with matter was so weak that detecting it proved a severe experimental challenge, and was not accomplished until 1956. However, the properties of neutrinos were (with a few minor modifications) as predicted by Pauli and Fermi.
### Discovery of other types of beta decay
In 1934 Frédéric and Irène Joliot-Curie bombarded aluminium with alpha particles to effect the nuclear reaction 4
2
He
+ 27
13
Al
→ 30
15
P
+ 1
0
n
, and observed that the product isotope 30
15
P
emits a positron identical to those found in cosmic rays by Carl David Anderson in 1932. This was the first example of β+ decay (positron emission), which they termed artificial radioactivity since 30
15
P
is a short-lived nuclide which does not exist in nature.
The theory of electron capture was first discussed by Gian-Carlo Wick in a 1934 paper, and then developed by Hideki Yukawa and others. K-electron capture was first observed in 1937 by Luis Alvarez, in the nuclide 48V.[14][15][16] Alvarez went on to study electron capture in 67Ga and other nuclides.[14][17][18]
## References
1. ^ a b c d Konya, J.; Nagy, N. M. (2012). Nuclear and Radiochemistry. Elsevier. pp. 74–75. ISBN 978-0-12-391487-3.
2. ^ Basdevant, Jean-Louis; Rich, James; Spiro, Michael (2005). Fundamentals in Nuclear Physics: From Nuclear Structure to Cosmology. Springer. ISBN 978-0387016726.
3. ^ a b Zuber, Kai (2011). Neutrino Physics (2 ed.). CRC Press. p. 466. ISBN 9781420064711.
4. ^ Loveland, Walter D. (2005). Modern Nuclear Chemistry. Wiley. p. 232. ISBN 0471115320.
5. ^ a b S.M. Bilenky (October 5, 2010). "Neutrinoless double beta-decay". Physics of Particles and Nuclei 41 (5).
6. ^ An Overview Of Neutron Decay J. Byrne in Quark-Mixing, CKM Unitarity (H.Abele and D.Mund, 2002), see p.XV
7. ^ Jung, M.; et al. (1992). "First observation of bound-state β decay". Physical Review Letters 69 (15): 2164–2167. Bibcode:1992PhRvL..69.2164J. doi:10.1103/PhysRevLett.69.2164. PMID 10046415.
8. ^ Bosch, F.; et al. (1996). "Observation of bound-state beta minus decay of fully ionized 187Re: 187Re–187Os Cosmochronometry". Physical Review Letters 77 (26): 5190–5193. Bibcode:1996PhRvL..77.5190B. doi:10.1103/PhysRevLett.77.5190. PMID 10062738.
9. ^ Nave, C. R. "Energy and Momentum Spectra for Beta Decay". HyperPhysics. Retrieved 2013-03-09.
10. ^ Fermi, E. (1934). "Versuch einer Theorie der β-Strahlen. I". Zeitschrift für Physik 88 (3–4): 161–177. Bibcode:1934ZPhy...88..161F. doi:10.1007/BF01351864.
11. ^ Mott, N. F.; Massey, H. S. W. (1933). The Theory of Atomic Collisions. Clarendon Press. LCCN 34001940.
12. ^ Venkataramaiah, P.; Gopala, K.; Basavaraju, A.; Suryanarayana, S. S.; Sanjeeviah, H. (1985). "A simple relation for the Fermi function". Journal of Physics G 11 (3): 359–364. Bibcode:1985JPhG...11..359V. doi:10.1088/0305-4616/11/3/014.
13. ^ Schenter, G. K.; Vogel, P. (1983). "A simple approximation of the fermi function in nuclear beta decay". Nuclear Science and Engineering 83 (3): 393–396. OSTI 5307377.
14. ^ a b Segré, E. (1987). "K-Electron Capture by Nuclei". In Trower, P. W. Discovering Alvarez: Selected Works of Luis W. Alvarez. University of Chicago Press. pp. 11–12. ISBN 978-0-226-81304-2.
15. ^ "The Nobel Prize in Physics 1968: Luis Alvarez". The Nobel Foundation. Retrieved 2009-10-07.
16. ^ Alvarez, L. W. (1937). "Nuclear K Electron Capture". Physical Review 52 (2): 134–135. Bibcode:1937PhRv...52..134A. doi:10.1103/PhysRev.52.134.
17. ^ Alvarez, L. W. (1938). "Electron Capture and Internal Conversion in Gallium 67". Physical Review 53 (7): 606. Bibcode:1938PhRv...53..606A. doi:10.1103/PhysRev.53.606.
18. ^ Alvarez, L. W. (1938). "The Capture of Orbital Electrons by Nuclei". Physical Review 54 (7): 486–497. Bibcode:1938PhRv...54..486A. doi:10.1103/PhysRev.54.486.
6 news items
Majorana moves out of R&D, into production Black Hills Pioneer Mon, 04 Aug 2014 07:03:45 -0700 Neutrinoless double beta decay occurs when two neutrons inside the nucleus of an atom convert into two protons and eject two electrons but no neutrinos. This is only possible if neutrinos are not only electrically neutral, but neutral in all ways. This ... Top of the class Rapid City Journal Sun, 31 Aug 2014 02:22:30 -0700 South Dakota School of Mines & Technology students participated this summer in undergraduate research experiences with more than 200 employers in 30 states, the District of Columbia and Morocco. Rashyll Leonard, a junior physics major from Montrose, ... Ars Technica Which came last—The supernova or the red giant? Ars Technica Mon, 18 Aug 2014 14:33:45 -0700 Until the new study, it was thought that the s-process could not produce significant amounts of radioactive hafnium-182, because a form of radioactive beta decay (in which a neutron releases an electron, becoming a proton in the process) happens faster ... Pure Energy Systems News SHT: Andrea Rossi Drops out of the LENR Race? Pure Energy Systems News Tue, 26 Aug 2014 20:33:45 -0700 In response to Andrea Rossi saying essentially: "Let's Switch from calling it LENR to QUAR"(quantum reaction), Konstantin Balakiryan, the primary inventor at Solar Hydrogen Trends, Inc. points out that their Symphony 7 reactor is fusion, fission, and ... Science Codex Detecting neutrinos, physicists look into the heart of the sun Science Codex Wed, 27 Aug 2014 10:03:45 -0700 AMHERST, Mass. – Using one of the most sensitive neutrino detectors on the planet, an international team of physicists including Andrea Pocar, Laura Cadonati and doctoral student Keith Otis at the University of Massachusetts Amherst report in the ... The bosons that demanded a Higgs The Guardian Mon, 04 Aug 2014 04:00:36 -0700 The W boson is one of the particles which carries the weak nuclear force – that force which is so weak it is uncommon in everyday live, but lies behind Beta-decay in radioactivity, the way the Sun burns, and the interactions of neutrinos. The W boson ...
Limit to books that you can completely read online Include partial books (book previews) .gsc-branding { display:block; }
Oops, we seem to be having trouble contacting Twitter
|
2014-09-02 23:47:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7956157326698303, "perplexity": 1817.3019556920324}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535923940.4/warc/CC-MAIN-20140909033542-00200-ip-10-180-136-8.ec2.internal.warc.gz"}
|
http://people.math.sfu.ca/~alspach/guide/node2.html
|
# Definitions, Notation and Results
Definition 2.1 The cardinality of a set is the number of elements in the set. If a set has cardinality n, we often write n-set.
Notation 2.2 We use n! to denote the product . It is read as n factorial. The numbers n! grow very rapidly. Some small values are 1! = 1, 2! = 2, 3! = 6, 4! = 24, 5! = 120, 6! = 720, and 7! = 5,040.
We adopt the convention that 0! = 1. This is done in order to simplify the statements of certain theorems. One useful fact to note about n!is that n! = n(n-1)!. Similarly, n! = n(n-1)(n-2)!. This observation allows some useful cancellation to take place when computing formulas involving factorials.
Notation 2.3 We use to denote the number of ways of selecting k distinct objects from n objects. Another common notation for this number is C(n,k). One standard way of reading it is to say n choose k''. It is also called a binomial coefficient.
The numbers n choose k'' occur all the time in counting problems because we frequently are choosing k objects in such a way that a given object may occur at most once (distinctness), and the order in which the objects are chosen is irrelevant. For example, when being dealt a hand of 5 cards, our main concern is with the final hand and not in the order in which we receive the cards. It is true that we may develop some anxiety as a hand develops if we watch each card as it arrives in our hand, but in analyzing our chances of winning the particular game, we start with the composition of the hand and do not consider the order in which they arrived.
Definition 2.4 A partition of a set A is a collection of non-empty subsets of A such that
and
for all , . Each of the subsets in the collection is called a part of the partition.
The word partition makes sense for the previous concept because the set is being broken into non-empty pieces which have no overlap. This is one of the colloquial uses of the word partition as well.
Notation 2.5 We use to denote the number of ways of partitioning a given n-set into parts whose respective cardinalities are . In the case all mparts have the same cardinality k, we write to denote the number of partitions of an n-set into m parts, where each part has cardinality k.
Brian &
2000-01-31
|
2013-12-11 16:22:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9183182716369629, "perplexity": 347.6228876641005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164038825/warc/CC-MAIN-20131204133358-00014-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://www.gamedev.net/forums/topic/326067-converttoblendedmesh-in-c/
|
# ConvertToBlendedMesh(..) in C#
This topic is 4967 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
Greetings, I'm having some trouble getting ConvertToBlendedMesh(..) to work in C#.
int faces;
Direct3D.BoneCombination[] bones = null;
try
{
mesh.SkinnedMesh = mesh.SkinInformation.ConvertToBlendedMesh(mesh.MeshData.Mesh, Direct3D.MeshFlags.Managed | Direct3D.MeshFlags.OptimizeVertexCache, mesh.GetAdjacencyStream(), out faces, out bones);
}
catch(Direct3D.Direct3DXException e)
{
Console.WriteLine(e.ErrorString);
}
My program loads and shows non skinned models perfectly, it even plays animations perfectly. Now I'm trying to add blended mesh support. I have been studying how the DX mesh viewer (MView) sets up their SkinnedMesh, but this part isn't translating well to C#. When I run my program, this code snippet catches an "Error in application" exception with DX error code D3DERR_INVALIDCALL. The program then continues, loads, and displays the mesh without animation because there's no skinning info setup. I think the error has to do with the BoneCombination. In the C++ version, you pass a reference to a NULLed DXBUFFER pointer then that gets boxed into the BoneCombination. But the C# version requires an out directly to the BoneCombination array. So obviously I tried making it null like the C++ buffer. It errors. I tried to initialise it to an array with a size equal to the number of bones, that returned the same error. If anyone knows how to do this, or has some sample C# source in setting up a skinned mesh, that would be very helpful.
##### Share on other sites
Have you tried ConvertToIndexedBlendedMesh() instead? Thats what ive used in the past.
##### Share on other sites
I have not tried it yet, and I will try it now.
I'm not quite sure what the Indexed version is. But I got the feeling it was just the same thing with more output information. I'm not sure if this is going to fix the problem, But I'll see.
If anyone knows what's wrong with the non-indexed version I'll still be curious.
Thanks
##### Share on other sites
Just FYI IndexedBlendedMesh is not supported in hardware on the vast majority of cards as FFP skinning really wasn't needed once shaders became available. You will have to use Mixed or Software VP if you are using Indexed whereas blended is supported in hardware.
• ### What is your GameDev Story?
In 2019 we are celebrating 20 years of GameDev.net! Share your GameDev Story with us.
• 9
• 9
• 9
• 34
• 16
• ### Forum Statistics
• Total Topics
634123
• Total Posts
3015654
×
|
2019-01-22 02:00:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19569876790046692, "perplexity": 3328.850846295623}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583823140.78/warc/CC-MAIN-20190122013923-20190122035923-00368.warc.gz"}
|
https://wiki.nikhef.nl/grid/index.php?title=Hooikanon_server_set_up_(ppc64le)&diff=10580&oldid=10578
|
# Difference between revisions of "Hooikanon server set up (ppc64le)"
### Set up for ppc64le systems
NIKHEF-ELPROD has 3 IBM Power9 machines for the grid dCache storage systems. The systems are called hooikanon-01, 02 and 03. (hooikanon-04 is a dCache storage system for stoomboot.) These servers have a different architecture from x86_64, which means they require different tricks to get them configured properly. Note that ppc64le is different from ppc64--the ppc64le is purely for the little endian format where ppc64 is for big endian systems.
#### IPMI set up
Turn on the machine and once in the Petitboot menu, exit into the shell to start configuring the IPMI set up.
IBM instructions for setting up the ipmi interfaces: https://www.ibm.com/support/knowledgecenter/en/linuxonibm/liabw/rhel_guide_Power9_network.html
The basic steps to follow are:
Set the mode to static by running this command: ipmitool lan set 1 ipsrc static
3. Set your gateway server by running this command: ipmitool lan set 1 defgw ipaddr gateway_server where gateway_server is the gateway for this system.
4. Confirm the IP address by running the command ipmitool lan print 1 again.
#### Server RAID set up
ARCCONF can be used to configure the logical drives and set up the RAID level for the servers for the internal disks. ARCCONF can be used by putting the arcconf binary on a USB stick available from https://www.nikhef.nl/pdp/ndpf/files/packages/arcconf/.
General syntax for using arcconf is:
ARCCONF CREATE <Controller#> <LOGICALDRIVE|MAXCACHE> [Options] <Size> <RAID#> <CHANNEL# DRIVE#> [CHANNEL# DRIVE#] ... [noprompt]
Hooikanons have 1 controller with 2 disks each which should be configured as RAID 1. I used this command to configure the setup:
arcconf create 1 logicaldrive max 1 0 0 0 1 noprompt
This command creates a logical drive on Controller 1, with the maximum size possible, at RAID 1, on channel 0 to disk 0 and on channel 0 to disk 1.
You can check the configuration of the drives by using
arcconf getconfig 1 AL
#### Installing an OS
Make sure to get the ppc64le (or alt architecture) builds for the OS distribution. It is possible to install the OS via a USB stick or a virtual iso from the BMC interface. To mount a virtual iso (only available from the Java interface) from the BMC, select Virtual Media -> Virtual Storage and choose the logical drive type, open your image, and 'plug in' the virtual iso. More instructions for how to set this up can be found: https://www.ibm.com/support/knowledgecenter/linuxonibm/liabw/rhelqs_guide_Power_p9_usb.pdf?view=kc
To use a kickstart file (see note 2), generally this can be the same as the x86_64 kickstart file, however, the partitioning scheme should follow something like this:
clearpart --drives=sda --all
part "PPC PReP Boot" --size=8 --asprimary --fstype="PPC PReP Boot" --ondisk=sda
part /boot --size=1000 --asprimary --fstype=ext4 --ondisk=sda
part pv.01 --size=1 --grow --ondisk=sda
volgroup system pv.01
logvol / --fstype ext4 --size=65536 --name=root --vgname=system
logvol swap --fstype swap --size=32768 --name=swap --vgname=system
logvol /var --fstype ext4 --size=65536 --name=var --vgname=system
logvol /tmp --fstype ext4 --size=65536 --name=tmp --vgname=system
Note the "PPC PReP Boot" partition at the start is important for the system to boot properly. More information about the specifics for ppc64le with kickstart files can be found here: https://docs.centos.org/en-US/centos/install-guide/Kickstart2/ Also check that none of the packages are architecture dependent. I.e. biosdevname is for x86_64 based systems, so the udev package was substituted and works. It is useful to add debugging during the installation process. This can be done by "manually" (because I haven't found another way of doing it) adding inst.logging=debug in the Petit boot menu under the boot arguments. (Scroll over the linux "pxe" boot device and press 'e' for edit.)
#### Rebooting
The Power9 can take the pxe boot argument from the ipmitool, so to pxe boot the system, you can use:
[root@stal ~]# ipmitool -H hooikanon-02.ipmi.nikhef.nl -U root -P $IPMIPASS -I lanplus chassis bootdev pxe Set Boot Device to pxe [root@stal ~]# ipmitool -H hooikanon-02.ipmi.nikhef.nl -U root -P$IPMIPASS power cycle
#### RPMs for ppc64le
Most of the RPMs used for the ppc64le come from the EPEL Centos 7 repo: https://dl.fedoraproject.org/pub/epel/7/ppc64le/ A mirror is set up on hoen to take care of updating and managing these packages. However, Prometheus requires an architecture-dependent rpm (node exporter) which will not install without a ppc64le rpm (which is currently not available). So following the instructions for building the RPMs from https://github.com/lest/prometheus-rpm
Choose a ppc64le machine to build the rpm on (i.e., hooikanon-02)(see note 1). Install rpmbuild and any dependent packages (https://wiki.centos.org/HowTos/SetupRpmBuildEnvironment) Check out or clone the source files for the RPMs - Prometheus in this case (https://github.com/lest/prometheus-rpm) Create a separate directory called rpmbuild with these subdirectories (BUILD/ RPMS/ SOURCES/ SPECS/ SRPMS/ tmp/) Use the makefile to autogenerate the .spec/.unit/.init files as needed In the sources directory, download the correct tarball release as needed: i.e.
wget https://github.com/prometheus/node_exporter/releases/download/v0.18.1/node_exporter-0.18.1.linux-ppc64le.tar.gz
Copy the relevant files into rpmbuild/SOURCES Use the rpmbuild tool to create the RPM from the .spec file: rpmbuild -ba autogen_node_exporter.spec
Move the SRPM and RPM to the server for storing and mirroring.
The prometheus node exporter RPM was then placed under the nikhef external repo on hoen (/srv/repos/mirrors/nikhef/external/7/ppc64le/).
#### Notes
1. It may be possible to cross-build this on a different platform but that was not tested. Docker containers did not work in building the RPM. But the source RPM for prometheus node exporter are stored under my user directory on stal.
2. For the kickstart method to work, the ppc64le distribution must be imported into Cobbler -- the network install server. This requires a lot of manual tuning for the kickstart metadata.
|
2022-10-01 06:15:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22882558405399323, "perplexity": 12829.543832503638}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00214.warc.gz"}
|
https://forum.math.toronto.edu/index.php?PHPSESSID=5sd8nokuindmf7f486pec3ir54&topic=1114.0;prev_next=prev
|
### Author Topic: TT2--P2 (Read 3442 times)
#### Victor Ivrii
• Elder Member
• Posts: 2563
• Karma: 0
##### TT2--P2
« on: March 21, 2018, 02:56:30 PM »
Consider equation
y'''-3y'+2y= 18e^{-2t}.
\tag{1}
a. Write equation for Wronskian of $y_1,y_2,y_3$, which are solutions for homogeneous equation and solve it.
b. Find fundamental system $\{y_1,y_2,y_3\}$ of solutions for homogeneous equation, and find their Wronskian. Compare with (a).
c. Find the general solution of (1).
#### Jared Jubas-Malz
• Jr. Member
• Posts: 10
• Karma: 8
##### Re: TT2--P2
« Reply #1 on: March 21, 2018, 09:08:55 PM »
Part (a)$\\$
The equation for the Wronskian would be:
$$W = c\times exp[-\int p_{1}(t)dt]$$
Since there is no $y''$ term, $p_{1}(t)$ would be $0$:
$$W = c\times exp[-\int 0 dt]=c\times exp(0)=c$$
Therefore, the Wronskian would be a constant.
Part (b)$\\$
Consider the homogeneous equation:
$$y''' - 3y' + 2y = 0$$
The characteristic equation would be:
$$r^3-3r+2=0$$
Solving this gives:
$$(r-1)^2(r+2)\rightarrow r_{1}=r_{2}=1, r_{3}=-2$$
Therefore, the homogeneous solution would be:
$$y_{c}(t)=c_{1}e^t+c_{2}te^t+c_{3}e^{-2t}\qquad(2)$$
Computing the Wronskian:
$$W=\begin{array}{|c c c|}e^t &te^t &e^{-2t}\\e^t&(t+1)e^t&-2e^{-2t}\\e^t&(t+2)e^t&4e^{-2t}\end{array}=4(t+1)+2(t+2)-t(4+2)+(t+2)-(t+1)=9$$
Therefore, the Wronskian is a constant just as expected based on part (a).
Part (c)$\\$
The particular solution should be of the form:
$$Y(t)=Ae^{-2t}$$
Since $e^{-2t}$ is part of the homogeneous solution, we look for solutions of the form:
$$Y(t)=Ate^{-2t}\qquad(3)$$
Differentiating this:
$$Y'(t)=Ae^{-2t}-2Ate^{-2t} \qquad(4)$$
Differentiating again:
$$Y''(t)=-4Ae^{-2t}+4Ate^{-2t}$$
Differentiating once more:
$$Y'''(t)=12Ae^{-2t}-8Ate^{-2t} \qquad(5)$$
Plugging (3), (4) and (5) into (1):
$$12Ae^{-2t}-8Ate^{-2t}-3Ae^{-2t}+6Ate^{-2t}+2Ate^{-2t}=18e^{-2t}$$
Simplifying gives:
$$9Ae^{-2t}=18e^{-2t}$$
Therefore, $A=2$. Subbing this value of A into (3) and combining it with (2) gives the general solution:
$$y(t)=c_{1}e^t+c_{2}te^t+c_{3}e^{2t}+2te^{-2t}$$
« Last Edit: March 21, 2018, 11:07:43 PM by Jared Jubas-Malz »
|
2021-10-21 12:43:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9219741821289062, "perplexity": 5335.643029922434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585405.74/warc/CC-MAIN-20211021102435-20211021132435-00114.warc.gz"}
|
https://danhchetroiesyes.gearhostpreview.com/custom-case-study/page-104-2020-11-03.html
|
Join Today to Score Better
Tomorrow.
Connect to the brainpower of an academic dream team. Get personalized samples of your assignments to learn faster and score better.
## How can our experts help?
We cover all levels of complexity and all subjects
Receive quick, affordable, personalized essay samples
Learn faster with additional help from specialists
Chat with an expert to get the most out of our website
Get help for your child at affordable prices
Students perform better in class after using our services
Hire an expert to help with your own work
## The Samples - a new way to teach and learn
Check out the paper samples our experts have completed. Hire one now to get your own personalized sample in less than 8 hours!
Type
Case study
Level
College
Style
APA
Type
Coursework
Level
College
Style
APA
Type
Case study
Level
High School
Style
APA
Type
Case study
Level
Style
APA
Type
Dissertation
Level
University
Style
APA
Type
Case study
Level
College
Style
APA
## Customer care that warms your heart
Our support managers are here to serve!
Check out the paper samples our writers have completed. Hire one now to get your own personalized sample in less than 8 hours!
Hey, do you have any experts on American History?
Hey, he has written over 520 History Papers! I recommend that you choose Tutor Andrew
Oh wow, how do I speak with him?!
Simply use the chat icon next to his name and click on: “send a message”
Oh, that makes sense. Thanks a lot!!
Guaranteed to reply in just minutes!
Knowledgeable, professional, and friendly help
Works seven days a week, day or night
How It Works
## How Does Our Service Work?
Find your perfect essay expert and get a sample in four quick steps:
Choose an expert among several bids
Chat with and guide your expert
01
02
#### Submit Your Requirements & Calculate the Price
Just fill in the blanks and go step-by-step! Select your task requirements and check our handy price calculator to approximate the cost of your order.
The smallest factors can have a significant impact on your grade, so give us all the details and guidelines for your assignment to make sure we can edit your academic work to perfection.
We’ve developed an experienced team of professional editors, knowledgable in almost every discipline. Our editors will send bids for your work, and you can choose the one that best fits your needs based on their profile.
Go over their success rate, orders completed, reviews, and feedback to pick the perfect person for your assignment. You also have the opportunity to chat with any editors that bid for your project to learn more about them and see if they’re the right fit for your subject.
03
04
You can have as many revisions and edits as you need to make sure you end up with a flawless paper. Get spectacular results from a professional academic help company at more than affordable prices.
#### Release Funds For the Order
You only have to release payment once you are 100% satisfied with the work done. Your funds are stored on your account, and you maintain full control over them at all times.
Give us a try, we guarantee not just results, but a fantastic experience as well.
05
## Enjoy a suite of free extras!
### The Aftermath to the Witch Hysteria 1692
Sample College Essays on Diversity - 5Homework: Best physics homework solver. Physics is a fundamental science and can be both interesting and challenging for the students. However, most of them find it difficult and can’t complete physics hw without some professional help. Trying to do my physics homework on acid,danhchetroiesyes.gearhostpreview.com गोपालगंज के थावे मंदिर पहुंचे डीआईजी मनु महराज, रात में थानों का कर सकते है निरिक्षण, कई पुलिस अफसरों पर गिर. danhchetroiesyes.gearhostpreview.com: Homework Helpers: Physics, Revised Edition (): Greg Curran: Books We can do your high school, college or graduate homework assignments. It's as easy as Physics Homework Online Help Now that physics is a help problem for students all of the United Homework, it is more important than ever to have top physics physics. seed germination lab report examples
### What the World Needs to Prevent the Rapid Spread of Tuberculosis
visual studio 2008 report server project tutorial - Homework Helpers: Physics Introduction—Necessary Skills. Trigonometry. Physics is a skill-based course. Students who begin their study of physics without the appropriate skill base must develop their skills quickly or risk struggling with the course. It is an unfortunate fact that some students fail to learn physics not because of the science. Yes, our services can perhaps help you in completing your physics homework assignments. By availing our physics homework assignment help, you can get all your physics assignments done. Our team of writers profoundly understand that students face a lot of difficulties in applying the typical formulas and vectors to produce some quality assignments. PHYSICS HOMEWORK HELP (HELPER) Step 1. Issue clear instructions and get a free quotation. Step 2. Stay in touch through our online platforms to keep track of the progress. Step 3. Sit back and wait for the completed physics assignment. Time Left until the . annual report design photoshop background
### An Analysis of the 13th Chapter of Lord of the Flies by William Golding
Writing Your Own Worldview - Mar 19, · Collection Book Homework Helpers: Physics, Revised Edition (Homework Helpers (Career Press)) Ziabashkim. Get Ebooks Trial Homework Helpers: Physics: (Homework Helpers (Career Press)) P-DF Reading. isabellastewart. Read Homework Helpers: Chemistry (Homework Helpers (Career Press)) Ebook Free. The paper you get when you opt for homework help for physics can be revised as many times as you need, with no additional payment required. Please pay attention to the relevant pages of our website to learn more details. Who Are Our Rabbit - Writers? Sep 02, · Collection Book Homework Helpers: Physics, Revised Edition (Homework Helpers (Career Press)) Ziabashkim. Get Ebooks Trial Homework Helpers: Physics: (Homework Helpers (Career Press)) P-DF Reading. isabellastewart. Read Homework Helpers: Chemistry (Homework Helpers (Career Press)) Ebook Free. chagford primary school ofsted report
### West bengal primary education pension status report
how to write a nursing research question - Attached is the book. The answers are in Chapter 3 and 5. CHAPTER 3 1. Q. Marx and Engels claimed that they advocated scientific socialism in contrast to utopian socialism. What was the basis of their criticism of the latter? Was it justified? 2. Q. According to the Marxian labor theory of value, what is the value of a piece . Jan 06, · Homework Statement:: In the figure shown a uniform rod of mass m and length l is hinged. The rod is released when the rod makes angle θ with the vertical. Homework Equations:: Find A) The angular velocity of the rod at the lowest position. B) Normal Reaction due to the hinge just after the rod is released View attachment I have solved. The Variety of Physics Online Homework Help We Offer Now, you are probably wondering whether our experts can deal with your particular assignment. Here are some of the areas they constantly provide physics homework answers on, and if yours is not on the list, go ahead and ask our support agent to confirm whether this is a proper college. An Introduction to the Relatively New Procedure of Fetal Surgery
### Character Analysis of Carol Anne and Max in a New Day by Margaret Johnson-Hodge
An Orthodox Poem Essay - Apr 25, · Homework Helpers: Physics, Revised Edition Greg Curran. out of 5 stars Paperback. $Pre-calculus Demystified, Second Edition Rhonda Huettenmueller. out of 5 stars Paperback.$ Next. Special offers and product promotions. Amazon Business: Save 25% off first $of business danhchetroiesyes.gearhostpreview.coms: 6. Hire our qualified and native physics homework helper and experience the most excellent physics assignment solvers across the danhchetroiesyes.gearhostpreview.com physics solver is the best you can ever get. We have awed 10, clients who have ended up graduating with the best grades in the world of academics. Homework Helpers: Physics, Revised Edition Greg Curran. out of 5 stars Paperback.$ Only 18 left in stock (more on the way). Painless Earth Science (Barron's Painless) Edward J. Denecke Jr. out of 5 stars Paperback. \$ Homework Helpers: Algebra, Revised Reviews: An Analysis of Marslands and Fields Right Realist Perspective
### Teacher Assistant Resume Sample Education
Tsar Alexander II - ActiveHistory - If such is your need, then you can take physics homework help online from TopAssignmentExperts, the leading physics homework helper in the United States. This is a platform, where you can seek the help of expert physics homework helpers for the purpose of taking physics homework help and keeping your homework deadlines up to date. Quamtum Physics Homework Help; Classical Mechanics Homework Help; Quamtum Physics Mechanics Help; Engineering Assignment Help. Civil Engineering Homework Help; An extended buffer period gives you an opportunity to have the work revised for free if it doesn't meet your demands. Secure Payment Gateways. Homework Helpers: Chemistry, Revised Edition This book is much like the Homework Helpers Physics -- great for beginners and in conjunction with real textbooks, but not as a standalone. Both are just too short and miss some vital topics, and don't go into enough depth on some. Very good buy, though, if used correctly or just learning the. An Analysis of the Concepts of the Community Housing Support Team in Care Programme Approach Meeting
### An Analysis of the Poetry of Bruce Dawe, a Well Renowned Australian Poet
how to create xml from xsd in c# | The ASP.NET Forums - Physics homework help is for students who find themselves in such situations. Therefore, seek help! Seek it from experts who understand everything there is to know about physics. This way, you will be able to ace all your assignments. You will also have more time for studying and catching up with any lesson that you never understood properly in. 2 part 1: About the project help as physics coursework team delivered all workshops in the study skills at the beginning to see who has been revised and edited images. Ingold, t. This is because in practice, cleary and de o sullivan p. 15, about the individual presentation, indicating that a considerable body of a given period of time. Please answer all of the following questions in complete sentences using material from the lecture recordings, slides, readings and in-class discussions. DO NOT consult outside sources. Upload your answers (in Microsoft Word or PDF format 1. Choose three pillars of Islam and briefly explain how each of them is intended to strengthen a Muslim’s connection to God and/or to the . whistler snow report video youtube
### HELP WITH AN ESSAY!!!!!!?
managerial economics assignment for mba students pdf - Jan 01, · Homework Statement: Using Tetrad formalism, I am trying to solve exercise 16 carroll. In final steps I cannot read elements of Riemann tensor correctly. please help. Relevant Equations: $$ds^2 = d \psi^2 + \sin^2 \theta (d \theta^2 + \sin^2 \theta d \phi^2)$$. Physics Homework and assignments are important, and it only takes the assistance of the best professionals for you to attain the required grades. At schoolworkrelief, we are committed to offering you excellent physics homework help in all your physics problems. Upon using danhchetroiesyes.gearhostpreview.com, you will realize that it is the most suitable place. If you need help with your PHYSICS homework, then you have landed at the right place. I provide PHYSICS homework help at an affordable price. I personally handle all assignments. That's why I can assure you of quality work. I do not run a tutoring company where . Software Engineer Middot Sample Resume Objectives
### Syrah resources annual report 2015 sri
Emily Grierson as a Symbol of Neglect in A Rose for Emily by William Faulkner - We hire the best tutors in the industry who deliver either written or live tutoring to all students. Some of our homework help solutions cover different subjects, including math help, literature assignment help, physics homework help, business help, engineering assignment help, and English homework help. In stepup multi- ple perspectives physics help homework online on pedagogical grammar. For instance, carson and nelson reported the unprecedented freedom surrounding identity construction online. Universities that are now investigat ing whether this was the medium of academic and other sf linguists who have not been accepted. 2. Essay-writer.org - Sacred Poems
When taking a course in science, students often find it difficult to meet their Dissertation editing help quality deadlines and submit it to their teachers on time. In this situation, they often feel the need to take physics homework A Biography of Clive Cussler so that they can keep up with the pressure Feeding off low self-esteem to create a desire to purchase: Apple case.
Can this criticism of market their classes. If such is homework helpers physics revised need, then you can take homework helpers physics revised homework help online from TopAssignmentExperts, the leading physics homework helper in the United States. This homework helpers physics revised a platform, where you can seek the help of expert physics homework helpers for the purpose of taking physics homework help and keeping your homework deadlines up to date. Our service aims to help students, who require and search for physics homework help online.
As the homework helpers physics revised trusted physics homework helper in homework helpers physics revised, we are homework helpers physics revised to deliver the highest degree of quality homework helpers physics revised satisfactory services. We homework helpers physics revised sure that our service will be highly useful to you and help you score the best grades among all students. Homework helpers physics revised our physics homework helpers physics revised help online for a revamped experience of homework homework helpers physics revised assistance.
We Guru nanak jayanti essaytyper - teal tuesday - dream for buydom readily available homework helpers physics revised your assistance all round the day so that you can reach out to us at any hour and get our help. We come across homework help physics on a homework helpers physics revised basis, since this is one subject where students struggle homework helpers physics revised most.
Given the large quantity of home work questions and equally challenging complexity of those problems, it is homework helpers physics revised to identify why students are often homework helpers physics revised seeking someone to help them do my physics homework. If you have so far been in the search of the right help with physics homework, then you have landed at the right place. TopAssignmentExperts is the best solution to your question and to your problem. With a league of dedicated homework helpers physics revised to do my physics homework, we are completely equipped with the right people homework helpers physics revised resources to assist you with your homework help homework helpers physics revised. All you have to do is to come to us with your requirement to avail help with homework helpers physics revised homework and let us know your unknown bacteria lab report serratia marcescens requirement for your Homework helpers physics revised homework.
After that, we will quote a price for the service and homework helpers physics revised your confirmation for the homework helpers physics revised. After this, all you have to do is to sit back and relax and let us do the rest. We will have your homework ready within the time promised by homework helpers physics revised. Thus, if homework helpers physics revised are tired of searching for a capable physics homework helper, we are your best resort to get the help that homework helpers physics revised have been looking for. Reach out to us today homework helpers physics revised experience homework helpers physics revised best quality service, ready to assist you in the task homework helpers physics revised meeting your homework deadlines.
Many students feel that other than taking the help of their friends, they have no other homework helpers physics revised to get help from someone homework helpers physics revised do my physics homework. Attachment theory and research you know that you can pay An Introduction to the Creative Essay on the Topic of Michelle Baker do physics homework online and while you are at it, TopAssignmentExperts homework helpers physics revised the best option that you have got?
Not only do we assure timely services, we Here It is Ladies, the Well Thought Out, Bout to Be off Da Chain, Schedule for Homecoming 2000 make sure that our services physics homework helpers keep your homework free of any errors. Homework helpers physics revised can pay to do physics homework at TopAssignmentExperts and get the help that you have always needed. Our physics homework helper makes sure that they meet the standards of the service homework helpers physics revised by them and deliver homework helpers physics revised with every single homework.
If you need someone to do my physics homework, then we strongly suggest that you opt for our service. We feel confident that with us, you will find A Comparison of the Ancient Greek and Ancient Roman Religion Belief in Gods value in spending your money than anywhere else. Spend your hard earned money wisely and choose someone who is capable to deliver quality output. Would you like to know why you must choose experts for physics homework at TopAssignmentExperts?
Take a look at some of the features of the Physics homework experts at TopAssingmentExperts and get to know why we will prove to be your best choice for physics homework helpers. Hopefully, these homework helpers physics revised some of the major criteria homework helpers physics revised which people judge homework helpers physics revised for physics homework and decide if they wish to take a particular service or not. Our Physics homework experts can TOEFL Writing Practice Test Sample you in many ways. Give us a chance to help you score a perfect grade in your class and we assure you that we will not let you down! Great Deal! Physics Homework Help Online.
About me paper example Help From Best Experts! Please select subject. Please homework helpers physics revised deadline for your assignment. Please Enter your name. Copy and Paste Your Assignment Here. Attach File. Can you Do My Physics Homework? Yes we can We come across homework help physics on a regular homework helpers physics revised, since this is one subject where students struggle the most. Can I pay someone to Do My Physics Homework Many students feel that other than homework helpers physics revised the help of their friends, they have no other option to get help from someone to do homework helpers physics revised physics homework.
We make sure that we deliver every homework homework helpers physics revised the time homework helpers physics revised promised homework helpers physics revised us. Every homework homework helpers physics revised leaves our desk is guaranteed to be free of errors and remain accurate. With every homework, we deliver a well researched and homework helpers physics revised content, which is not homework helpers physics revised from another source.
Basic Skills Preparation: Writing the provide every student with an option to get his homework revised Bank of America revolutionizes banking industry from Bank us, so that if he is not satisfied HIV Research Papers - Paper Masters our work, he can homework helpers physics revised corrections and revisions.
Every homework at our platform is carried out by an expert, who is qualified in his area and adept at the field. USD 50 has been Credited to your account. Enter phone number to get another USD Please fill out the homework helpers physics revised below to start chatting with the homework helpers physics revised available agent. I am Monica. I homework helpers physics revised here homework helpers physics revised assist you. But please let me know whether you are here for a new order or an exising one.
|
2021-08-02 11:07:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2035451978445053, "perplexity": 3459.603064923509}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154320.56/warc/CC-MAIN-20210802110046-20210802140046-00428.warc.gz"}
|
https://ethereum-magicians.org/t/mathtex-in-eips/2977
|
# MathTeX in EIPs
GitHub pages has partial support for using MathTeX in GitHub pages (which is the mechanism used to publish EIPs to eips.ethereum.org).
We can fix it so that these expressions will render properly in the browser.
MathTeX just lets you put math TeX expressions inline in a paragraph or as their own expression block. If you’ve used TeX it looks like this:
$$y = \sqrt(x)$$
and it renders perfectly just like in every math book you have ever read. That is because every math book you have ever read uses TeX.
1 Like
I left a comment on the issue. It may be simpler than that.
Let’s see if this is turned on in Discourse. Edit: nope!
Anyway, should be pretty simple to do.
\begin{align*} & \phi(x,y) = \phi \left(\sum_{i=1}^n x_ie_i, \sum_{j=1}^n y_je_j \right) = \sum_{i=1}^n \sum_{j=1}^n x_i y_j \phi(e_i, e_j) = \ & (x_1, \ldots, x_n) \left( \begin{array}{ccc} \phi(e_1, e_1) & \cdots & \phi(e_1, e_n) \ \vdots & \ddots & \vdots \ \phi(e_n, e_1) & \cdots & \phi(e_n, e_n) \end{array} \right) \left( \begin{array}{c} y_1 \ \vdots \ y_n \end{array} \right) \end{align*}
|
2022-11-28 16:13:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9981067776679993, "perplexity": 1887.0939730367363}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710533.96/warc/CC-MAIN-20221128135348-20221128165348-00773.warc.gz"}
|
http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/405063
|
On 02/27/2013 05:24 PM, Scott Macri wrote:
> I'm trying to run the following command in irb on a mac:
>
> something.split('\')
>
> This doesn't work either:
> something.split("\")
>
> However, this just causes irb not return a result and prevents irb from
> being useable and I have to close it.
>
> Any thoughts as to why this is happening? Is this a ruby bug?
>
> It only seems to happen with the \ only..... This is a serious problem
> because I am attempting to parse the results of a Linux command which
> contains many \'s. I have the exact same issue with:
> something.delete('\')
>
|
2018-12-17 12:50:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4749068021774292, "perplexity": 2529.512588600564}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828507.57/warc/CC-MAIN-20181217113255-20181217135255-00186.warc.gz"}
|
https://www.physicsforums.com/threads/without-dark-matter-and-dark-energy.423621/
|
# Without Dark Matter and Dark Energy
1. Aug 22, 2010
### cbd1
What would a cosmological model without dark matter and dark energy look like?
This requires imagining that the observational evidence for dark matter and dark energy are explainable by other means. In other words, absent of the idea that the universe is filled with 25%DM and 70%DE, but rather having 100% of the the mass in the universe being what is observable (what we now see as only 5% of the mass), what kind of model would we arrive upon?
To further clarify, if it did not appear that the universe were accelerating in expansion, and it did not appear that there is more mass in galaxies than the normal matter, but rather that regular matter is all there is, combined with the current rate of expansion, would the universe have to be open? What other problems would occur having normal matter and energy be all there is in the universe? Again, ignoring the experimental data saying that the universe is accelerating and there is more mass, what would the problems be in the universe without DM and DE?
2. Aug 22, 2010
### Chronos
Inconsistent with current observational evidence. Without DM, galactic rotation curves, clusters and the bullet cluster would be inexplicable. Without DE, the supernova legacy study would not make sense.
3. Aug 22, 2010
Staff Emeritus
Inconsistent with observation. I'm not sure it's possible to go logically further than that.
4. Aug 22, 2010
Staff Emeritus
Beat me by a microsecond!
5. Aug 22, 2010
### cbd1
I thought I made it explicitly clear that this question is theoretical, and would require disregarding these exact experiments.
So, aside from these experiments not making sense without some "dark matter" (which is a guess for an experimental result we don't understand anyway) and "dark energy" (which is also a guess for an experimental result we don't understand anyway), what would the implications be for the rest of cosmology?
I presume it would not alter Big Bang theory, or the predicted age of the universe, or anything else really except for the noted experiments. There would be a problem with the Lambda-CDM model, but only really on the content of the universe part. I can also see a problem with the attempt to *make* a result we want in order to make the universe flat, to agree with the CMB evidence that it is. But, what exactly would be the problems for this particular problem. Could the universe be flat without the 25% and 70% extrapolations of DM and DE?
6. Aug 22, 2010
### nicksauce
Sure we could make a flat universe just out of baryons, just by putting in 20x more baryons. But there are plenty of problems. If we keep Ho around the same, the universe would only be about 9 Gyr old instead of 14 Gyr old, which would disagree with globular cluster ages. Constraints placed by BBN would be violated. The CMB would look like this [attachment 1] instead of this [attachment 2]. (Source: CMBFAST online tool http://lambda.gsfc.nasa.gov/toolbox/tb_cmbfast_form.cfm [Broken])
#### Attached Files:
File size:
1.9 KB
Views:
79
• ###### cmb_89787589.fcl.tt.s.png
File size:
2.2 KB
Views:
87
Last edited by a moderator: May 4, 2017
7. Aug 22, 2010
### cbd1
It seems that there would not need to be that much extra baryons if there were no negative energy from dark energy and no dark matter... remember, dark energy must be set to zero.
Can you explain why this would make the big bang 9 billion years ago instead of 14 billion years ago?
And I am sorry, but there is no way that the CMB is just that simple.
What BBN constraints and why must H0 remain constant?
Also, what is the '5th dimension' mentioned in this calculator?
Last edited by a moderator: May 4, 2017
8. Aug 22, 2010
### nicksauce
I don't know why you think dark energy has negative energy. It is most certainly positive energy. Maybe you're thinking of negative pressure.
The formula for the age of the universe, which comes from a basic analysis of the Friedmann equations, is
$$T=\frac{1}{H_0}\int\frac{da}{a\sqrt{\Omega_{\Lambda}+\Omega_{m}a^{-3}}}$$
If you put in the numbers it just happens to give 9Gyr instead of 14Gyr. In fact, one of the original reasons people wanted to introduce dark energy was to make the universe old enough to account for the ages of some globular clusters.
It's your own prerogative to disbelieve these CMB graphs, but you're free to try to reproduce them yourself. Keep in mind that CMBFAST is a well-respected program that professionals cosmologists use all the time. I'm not sure what the "5th dimension is", and I would ignore it.
I don't have my cosmology books on hand, but I'm pretty sure BBN puts some fairly strong constraints on what the photon-baryon ratio has to be. Certainly strong enough to prevent you from increasing the number of baryons by a factor of 20.
I suppose if you want to make the universe flat without DE&DM, you could shrink the Hubble constant by a factor of sqrt20, down from 72 to 16. Of course this is highly inconsistent with all observations, but I guess no more so than increasing the number of baryons by a factor of 20. If that was the case, the universe would now be a rather absurd 41 Gyr old.
9. Aug 23, 2010
### Chronos
Science is like a jigsaw puzzle without a picture on the box. We align the pieces according to how they fit and allow the picture to emerge on its own. At the moment, a large number of pieces fit in a way suggesting it looks like the LCDM model - of which DM and DE are firmly entrenched. New observational evidence is necessary to remove these pieces from the puzzle - or a theory that better fits.
Last edited: Aug 23, 2010
10. Aug 23, 2010
Staff Emeritus
That's not "theoretical".
If you want to be doing science, you have to be consistent in your assumptions. Just because you can create a grammatical question doesn't mean it's a valid question, even "theoretically". Theoretically, what is a square circle?
11. Aug 23, 2010
### cbd1
Let me rephrase what I am asking from a different view, as to hopefully get some more adequate replies:
Suppose the observations of the supernovae surveys were explained by some unforeseen observational error, and the universe is really not expanding at an accelerating rate.
And, suppose that rotation curves of galaxies and gravitational lensing of clusters could be described by some aspect of spacetime other than suggesting there is more mass in the systems.
Then, we could drop the dark energy and dark matter ideas, showing that they are not real things. How, then, would this affect the rest of cosmology?
12. Aug 23, 2010
### cepheid
Staff Emeritus
To answer the second part in bold: we have NO way of knowing how it would affect cosmology because your supposition (the first statement in bold) requires some unspecified modification to (or elimination of) General Relativity, which is the basis of all current cosmological models. Some change to "some aspect" of space-time does leave us with enough information to figure out what the result would be.
13. Aug 23, 2010
### cesiumfrog
I think this is an interesting thread, because while most people know the basic arguments for these theoretical concepts, I for one could learn more about the independent supporting evidence.
Are the ages of globular clusters simply taken to be the ages of the oldest stars within (based on stellar evolution models)? And these presently produce figures merely a few percent younger than the universe itself? According to WP, until recently improved measurements of the properties of these stars and of the Hubble constant, those ages paradoxically appeared to be the other way around. This seems to imply that globular cluster ages are consistent with the age of the universe one would predict naively from extrapolating back the Hubble flow. So what you're saying is that if GR is correct and FRW is sufficiently applicable, then (neglecting acceleration by dark energy and deceleration by dark matter) the intergalactic expansion would still have been slowing dramatically (that is, especially without dark energy, the relativistic correction would imply a younger universe than the naive constant-rate extrapolation, making the age of those stars in the clusters inexplicable)? I take it the globular cluster evidence would also be consistent with a model in which there was no dark energy nor dark matter and there was also significantly less normal matter than is actually visible?
In what way does nucleosynthesis fit better to the model containing unknown additional species of particles?
Is there a simple explanation of how dark energy and additional matter each affect the CMB anisotropy?
14. Aug 25, 2010
### shomas
I have posted a question in another thread https://www.physicsforums.com/showthread.php?t=424244 . It asks if CMB radiation pressure differences that arise from motion with respect to the CMB reference frame, when considered on the scale of galaxies and billions of years, do away with the need for dark matter theories.
15. Dec 12, 2010
### NickMarkov
Can’t we assume that galactic disks are nearly two-dimensional objects? The two-dimensional Newtonian potential is Ln(r), and it produces constant rotational velocities v(r) = const, which would agree with the observations.
16. Dec 12, 2010
### Rick88
an example of a model that doesn't take into account dark matter and dark energy is the Einstein-de Sitter Universe.
In such model, the size of the Universe goes as t2/3. (And the Universe is assumed to be flat, which seems to be the case anyway)
It was thought to be quite accurate until observations showing the expansion of the universe accelerating were made.
Still, it is believed to be correct from the moment the Universe became transparent until the recent observations.
R.
|
2017-08-21 16:42:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5018638372421265, "perplexity": 750.0514346041717}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886109157.57/warc/CC-MAIN-20170821152953-20170821172953-00144.warc.gz"}
|
https://ask.sagemath.org/questions/59914/revisions/
|
# Revision history [back]
### complex numbers and paramétric numbers
hi
this program works well for me, i want the critical points but only if real
but my condition if(x.imag()!=0) doesnt work properly if x is a "r_something"
and, of cours, a solution like (x,y)=(r12,r37) interests me, i want it to be displayed
but as maybe Sagemath consider r12 and r37 aspossibly being complex, is does not display it
how can i test if a number is a parametric number ?
f(x,y)=(x+y)^2
# calculs généraux
from sage.manifolds.operators import *
E.<x,y> = EuclideanSpace()
F = E.scalar_field(f)
H=f(x,y).hessian()
show(html("<h5>paramètres généraux</h5>"))
show(T)
# calcul des points critiques
|
2022-01-26 16:52:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4418095648288727, "perplexity": 4425.4905191456055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304959.80/warc/CC-MAIN-20220126162115-20220126192115-00288.warc.gz"}
|
https://electronics.stackexchange.com/questions/432762/identify-dc-motor-from-dremel
|
Identify DC Motor from Dremel
I am trying to run a formerly cordless Dremel with a 12V supply. I know that the dremel runs off of a 4.8V battery, so I would like to see if I can run the motor with 12V PWM without damaging it. However, I haven't been able to find any data for this motor.
The motor is 28.5 mm (1 1/8") in diameter. The front has the word "Johnson" and the symbol of Johnson Electric. There is text on the motor reads:
2610916361
32057
3t2903
-----
However, I can't find any motors on their site that match. Any help in either finding a datasheet or safely testing the voltage is appreciated.
• You might not find one at all. For the number of Dremels that are produced, they could easily have a custom motor made with no publicly available data. But a 4.8V motor can't usually be run off 12V without damage, not even if you 50% PWM It. – DKNguyen Apr 15 at 23:16
• I worried about that. Would 40%, which would drop it to 4.8 V average, work? – BillThePlatypus Apr 15 at 23:26
• No, lower duty cycle is even worse. There usually isn't enough inductance to smooth out the current waveforms enough and adding extra inductance or increasing the PWM frequency so that the existing inductance is able to smooth the current waveform enough is usually impractical and has its own problems. So the motor ends up seeing very peaky stall-like currents and things get worse the lower the duty cycle you go. – DKNguyen Apr 15 at 23:33
• How about stepping down your input voltage and use 5V PWM instead? – Unknown123 Apr 16 at 0:48
• @Unknown123 That would be my next plan, but the buck converter I got is behaving weird. Possible question on that to follow. I also want to know if perhaps this motor could support 12V, perhaps with temperature monitoring. – BillThePlatypus Apr 16 at 3:26
You will burn out the motor brush armature interface using 12V on a 4.8V motor with excessing arcing.
This is not a good idea.
Motor RPM increases with voltage and starting current which at full voltage is >10x rated current.
With 4.8V giving 14,000 RPM , 12V would try to go up to 35,000 RPM with a starting current also 250% more than original. Eddy current losses and armature losses with much higher currents result in 2.5x higher arcing voltage on the armature resulting in bridging between armature copper contacts and effectively causing shoot thru on the power supply or arcing short circuits which will burn out the rotating contacts raising the temperature greatly and reducing the MTBF many orders of magnitude to burn out in a minute rather than a month or more of steady operation.
If you want a 35 kRPM Dremmel DIY design, use a better motor and do not exceed rated voltage significantly.
FWIW Dentists use 400k to 800k RPM with a properly designed system.
• Aren't dental drills normally pneumatic? Or are they using electric motors now? – Hearth Apr 16 at 0:39
• Yes they are air turbine motors. E-drills are just for low speed grinding – Sunnyskyguy EE75 Apr 16 at 0:42
• e-drills have safety hazards from heat realityesthetics.com/portal/… – Sunnyskyguy EE75 Apr 16 at 0:47
• Would using PWM with a lower duty cycle (<40%) work, though? It would keep it from getting as fast. Also, I am currently unsure if the motor is a 4.8V motor, although I will have to assume that if I can't find out more. – BillThePlatypus Apr 16 at 1:55
• Yes it could work if PWM f << L/DCR of motor. @40%, it could be a 4.5V motor with 0.3V drop. – Sunnyskyguy EE75 Apr 16 at 2:16
|
2019-07-24 07:20:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20641785860061646, "perplexity": 2423.3809017122544}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195531106.93/warc/CC-MAIN-20190724061728-20190724083728-00518.warc.gz"}
|
http://math.stackexchange.com/questions/779045/find-the-remainder-when-15-is-divided-by-31
|
# Find the remainder when $15!$ is divided by $31$.
Find the remainder when $15!$ is divided by $31$. I know I have to apply Wilsons theorem but i am a little confused how.
-
How does 30! relate to 15! mod 31? – M.B. May 3 '14 at 0:27
By Wilson's Theorem, we know that $30!\equiv-1$ (mod 31). Now lets look at the extra factors that are multiplied to turn $15!$ into $30!$.
$16\equiv -15$ (mod 31), $17\equiv -14$ (mod 31), $\ldots, 30\equiv -1$ (mod 31)
Thus $\frac{30!}{15!}\equiv (-1)^{30-15}\cdot 15!=-15!$ (mod 31)
Thus we get $$-1\equiv 30!=15!\frac{30!}{15!}\equiv -1\cdot (15!)^2\Rightarrow 15!=\pm1\text{ (mod 31)}$$
Which one do you think it is?
-
This might be hitting a fly with a brick but.....
Primes less than or equal to 15: 2,3,5,7,11,13.
How many multiples of 2,4,8 are less than or equal to 15:7,3,1
How many multiples of 3,9:5,1
How many multiples of 5: 3
How many multiples of 7: 2.
11,13 > 15/2.
So $15! = 2^{7+3+1}3^{5+1}5^37^211\cdot13=2^{11}3^65^27^211\cdot13$
$2^{11} = (2^5)^2*2 = 32^2*2 \equiv 2 \mod 31$
$3^6 = (3^3)^2 = 27^2 \equiv (-4)^2 \equiv 16 \equiv -15 \mod 31$
$5^3 = 25*5 \equiv -6*5 \equiv -30 \equiv 1 \mod 31$
So $2^{11}3^65^3 \equiv -30 \equiv 1 \mod 31$
So $15! \equiv 7^2*11*13 \mod 31$.
$7^2 = 49 \equiv 18 \equiv 36/2 \equiv 5/2 \mod 31$
$11 \equiv -20 \mod 31$
$13 \equiv 26/2 \equiv -5/2 \mod 31$
So $15! \equiv 5/2*-5/2*-20 \equiv 25*5 \equiv -6*5 \equiv -30 \equiv 1 \mod 31$
-
|
2016-05-01 12:39:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.903115451335907, "perplexity": 258.82060013726743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860115836.8/warc/CC-MAIN-20160428161515-00096-ip-10-239-7-51.ec2.internal.warc.gz"}
|
http://greatiphonewallpapers.com/Delta%20Und%20Epsilon%20Limit%20Proof%202021
|
Delta Und Epsilon Limit Proof 2021 // greatiphonewallpapers.com
Don’t. Now, for the less facetious answer. The epsilon-delta definition is the simplest approach to what is conceptually meant by a limit, which is a statement about the behavior of a function around a particular input. If the output of a function. Where L is the limit of the function for x→a. However, In this article we will understand more fundamental definition, known as epsilon delta definition of limit L. According to Limit definition: A function fx is defined when x is near the number a, but not a. So as long as x-2 < delta < 1 we know that x2 < 5 and therefore fx-L < x2 x-2 < 5 delta So by requiring delta to be less than both 1 and epsilon/5, we know that we can keep fx-L less than epsilon. The algebra just proves that this all really works. limits of polynomials, Now I just need to prove this idea using deltas and epsilons. So I first investigate my relation between delta and epsilon in an outline: g x L H when xc G xc55 H when 5 xc H when lim lim 5 5g x x c x c x coo Math 3A H, Fall 2012 Gonzalez 5 xc H when xc G 5 xc H when Now I can write my proof. [Proof] Let H! 0 be given and choose 5 H G. Then if xc G, 55 5 5 5 5 5 g x L g. Let fx - 0 < delta. If x > 0, then we have x1 - 0 < delta. So, epsilon would be chosen such that epsilon = delta - 1. However, if x < 0, then we have x - 0 < delta. So, epsilon would be chosen such that epsilon = delta. So, there does not exist a delta for every epsilon.
Choosing any $\delta<\epsilon/3$ should work nicely, since if $x-1<\delta$, then $$x^2-1=x1\cdotx-1< 3x-1<3\delta < \epsilon$$ But remember that the middle step here also required $\delta<1$, so actually we have to pick $\delta<\min1,\epsilon/3$. I'm studying how to write epsilon-delta proofs for limits of sequences, limits of functions, continuity, and differentiability and I'm having trouble with the general methodological procedure used in some of the proofs in the text as opposed to some of the proofs I have come up with. see en./wiki/Limit_of_a_functionLimits_involving_infinity. Given any $\epsilon\gt 0, \exists x_0\gt 0$ such that $fx\gt \epsilon$ $\forall x\gt x_0$. Choose $\epsilon\gt 0$. Now, we need to find corresponding $x_0$ such that $e^x\gt \epsilon$ $\forall x\gt x_0$. 13.08.2006 · Okay, I have demonstrated with delta epsilon but I said it leads to a propblem. The entire concept of exponential functions and their properties are based on countinuity. Thus, then I cannot prove that they are countinous using the fact that they are countinous. Thus, I do not see how some one can ask you to prove such as problem.
Epsilon-Delta Definition of the Limit Few statements in elementary mathematics appear as cryptic as the one defining the limit of a function fx at the point x = a, ! ! H G H G0 0 such that whenever f x L x a Translation: for every epsilon greater than zero, there exists a delta greater than zero within delta. Recall that the definition states that the limit of as approaches, if for all, however small, there exists a such that if, then. Example 1: Let. Prove that. If we are going to study definition limit above, and apply it to the given function, we have, if for all, however small, there exists a such that if, then. 19.06.2016 · Looking at the answer I see that the limit does not exist; however when I do the epsilon delta proof I cant see where I went wrong because I keep getting the result that it does: ? So I attached a picture detailing my argument and I would love for someone to tell me where I went wrong. This video is all about the formal definition of a Limit, which is typically called the Epsilon-Delta Definition for Limits or Delta-Epsilon Proof. We will begin by explaining the definition of a limit using the delta-epsilon notation, were we create two variables, delta and epsilon, using the Greek alphabet.
04.09.2017 · Use the epsilon delta definition of limits to prove that \lim_x\rightarrow -1\fracx^4x1x^3=. Math Forums. Menu. Math Forums. Home. High School Math Elementary Math Algebra Geometry Trigonometry Probability and Statistics Pre-Calculus. University Math Calculus Linear Algebra Abstract Algebra Real Analysis Topology Complex Analysis Advanced Statistics Applied Math. Solving epsilon-delta problems Math 1A, 313,315 DIS September 29, 2014 There will probably be at least one epsilon-delta problem on the midterm and the nal. These kind of problems ask you to show1 that lim x!a fx = L for some particular fand particular L, using the actual de nition of limits in terms of ’s and ’s rather than the limit. To state that once again, but in more intuitive terms, for any tolerance limit epsilon at all, we can suitably find another tolerance limit, delta, such that whenever we make 'x' within delta of 'a', 'f of x' will automatically be within epsilon of 'l'. And again, the very, very important emphasis here, we do.
Delta-Epsilon Proofs Math 235 Fall 2000 Delta-epsilon proofs are used when we wish to prove a limit statement, such as lim x!2 3x 1 = 5: 1 Intuitively we would say that this limit statement is true because as xapproaches 2, the. 23.02.2017 · I have a problem understanding epsilon-delta proofs, and I'm guessing it's something I'm missing about the very definition of what a "proof" is. Here's an. The method we will use to prove the limit of a quadratic is called an epsilon-delta proof. The basic idea of an epsilon-delta proof is that for every y-window around the limit you set, called epsilon $\epsilon$, there exists an x-window around the point, called delta $\delta$, such that if x is in the x-window, fx is in the y-window. 16.09.2019 · I have been struggling with this problem and also my friends. We are not the best at epsilon-delta proof and we have not found an understandable solution to. Understanding limits with the epsilon-delta proof method is particularly useful in these cases. First, specify an interval containing the x -value of interest by using a variable δ.
10.10.2013 · Thread: Epsilon delta proof of a two-variable limit using inequalities. Thread Tools. Show Printable Version; Subscribe to this Thread LeibnizIsBetter. View Profile View Forum Posts Private Message View Blog Entries View Articles MHB Apprentice Status Offline Join Date Aug 2013 Location California Posts 3 Thanks 4 times Thanked 2 times 1 October 7th, 2013, 18:49 I seem to be. 11.12.2011 · Delta-epsilon proofs always seemed a bit circular to me, and what confuses me about proving "by contradiction" here is the fact that I should be able to choose some δ and the limit WOULD approach 110-10:s. I'm a bit lost on where to go from here! Epsilon delta proof of a quadratic limit hi guys, ive been pondering this problem for a few days now, and its really boging me down and i think i need a different point. 25.09.2012 · I don't know about the epsilon delta stuff. But here is a "conventional" way to show that the limit exists, maybe you can build from this. If you change it to.
We can think of ε as an input value; the proof has to work for all inputs, and therefore we are not to choose or assume its value, except for what the statement promises, i.e., we can assume ε>0 and implicitly ε is a real number. On the other hand, we can use ε and the promise as a given in the rest of the proof. For example, if the proof. Multivariable epsilon-delta proof example. Skip to main content. A collection of stuff on Linux, Programming, and Math. About Command Collection Rough Path Signature. How do I prove one-sided limits with epsilon-delta? Update Cancel. a d b y C o d e F e l l o w s. Want to become a software developer in Seattle? At Code Fellows, you can graduate with two years of relevant industry experience in just 20 weeks.
1. $\delta \leq \min\4\epsilon - \epsilon^2, 4\epsilon\epsilon^2\.$ Since \\epsilon > 0\, the minimum is \\delta \leq 4\epsilon - \epsilon^2\. That's the formula: given an \\epsilon\, set \\delta \leq 4\epsilon-\epsilon^2\. We can check this for our previous values. If \\epsilon=0.5\, the formula gives \\delta \leq 40.5 - 0.5^2 = 1.75\ and when \\epsilon=0.01\, the formula gives \\delta \leq 40.01 - 0.01^2 =.
2. 29.08.2019 · Once you give an epsilon delta definition, then of course you must give an epsilon delta proof. Indeed the epsilon delta definition of limit is just a precise statement of the definition of a tangent line given by Euclid long ago, hence has a long history. But more importantly, the epsilon delta definition of a limit is one that can actually be.
|
2021-03-09 11:07:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9448323845863342, "perplexity": 544.9379382214203}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178389798.91/warc/CC-MAIN-20210309092230-20210309122230-00042.warc.gz"}
|
https://www.semanticscholar.org/paper/Ti-Surface-Laser-Polishing%3A-Effect-of-Laser-Path-Giorleo-Ceretti/08b9f625845b008b8f3ff0e15579cce85350a85b
|
# Ti Surface Laser Polishing: Effect of Laser Path and Assist Gas☆
@article{Giorleo2015TiSL,
title={Ti Surface Laser Polishing: Effect of Laser Path and Assist Gas☆},
author={Luca Giorleo and Elisabetta Ceretti and Claudio Giardini},
journal={Procedia CIRP},
year={2015},
volume={33},
pages={446-451}
}
• Published 2015
• Materials Science
• Procedia CIRP
## Figures and Tables from this paper
• Materials Science
Other Conferences
• 2020
The laser deposition manufacturing (LMD) is a free-form metal deposition process, which allows generating a prototype or small series of near net-shape structures. Despite numerous advantages, one of
• Materials Science
Journal of Materials Engineering and Performance
• 2021
Directed energy deposition (DED) is one of the most used additive manufacturing processes for the fabrication of 3D-metal components. However, surface quality is not always within the limits required
• Materials Science
• 2018
This study investigated the development of a novel method for designing high-end interference fit fasteners. In this work, a new surface laser treatment process was developed and implemented to
• Materials Science
• 2018
The surface of structural components is usually subjected to higher stresses, greater wear or fatigue damage, and more direct environmental exposure than the inner parts. For this reason, the
• Materials Science
IEEE Access
• 2020
Experimental results have shown that UVLP can reduce the surface roughness of 304 stainless steel from $2.777~\mu \text{m}$ to \$0.512~£1.512 at most, making the processing effect of UVLP significantly better than TLP.
## References
SHOWING 1-8 OF 8 REFERENCES
• Materials Science, Physics
• 2009
The objective of this work was to improve our understanding of pulsed laser micropolishing (PLμP) by studying the effects of laser pulse length and feed rate (pulses per millimeter) on surface
• Materials Science
• 2004
Metallic parts created using current freeform fabrication processes have an unacceptably high surface finish (Ra » 1:0 mm) for many functional applications. This paper addresses the use of
|
2023-03-21 18:01:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4134420156478882, "perplexity": 10039.531333412426}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943704.21/warc/CC-MAIN-20230321162614-20230321192614-00797.warc.gz"}
|
http://algowiki-project.org/en/Triangular_decomposition_of_a_Gram_matrix
|
# Triangular decomposition of a Gram matrix
The triangular decomposition of a Gram matrix as a method for finding the QR decomposition of a square matrix $A$ works only if the non-singularity of the original matrix is guaranteed. The method consists of three parts: 1. Construction of the Gram matrix $A^*A$ for the columns of the original matrix. 2. Finding the Cholesky decomposition $R^*R$ of the Gram matrix $A^*A$. 3. Calculation of the unitary matrix $Q=AR^{-1}$ by using, for instance, the modified back substitution.
|
2019-01-20 13:24:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.864126980304718, "perplexity": 132.192442870694}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583716358.66/warc/CC-MAIN-20190120123138-20190120145138-00340.warc.gz"}
|
https://wiki.kidzsearch.com/wiki/Hurricane_Lenny
|
kidzsearch.com > wiki Explore:web images videos games
# Hurricane Lenny
Formed Category 4 major hurricane (SSHWS/NWS) Lenny south of Saint Croix at its peak intensity November 13, 1999 November 23, 1999 1-minute sustained: 155 mph (250 km/h) 933 mbar (hPa); 27.55 inHg 17 direct $330 million (1999 USD)(US territories only) Colombia, Puerto Rico, Leeward Islands Part of the 1999 Atlantic hurricane season Hurricane Lenny was the last major hurricane, hurricane, named storm and depression of the 1999 Atlantic hurricane season. There are three main reasons to remember Lenny. One, it was strongest November Atlantic hurricane; two, the damage it caused in Puerto Rico and the Virgin Islands and three, it moved the opposite way storms usually moved across the Caribbean Sea. Some people at the National Hurricane Center nicknamed it "Wrong Way Lenny". Lenny left$330 million in damage, this amount is likely not all the damage. This is because it was reported for U.S. territories only, such as Puerto Rico and the U.S. Virgin Islands.
## Retirement
See also: List of retired Atlantic hurricane names
Because the damage was high and likely higher, the name Lenny was retired. In 2005 the name Lee was used instead. The name Lee in 2005 was not retired, so Lee is on the list for 2011 and 2017.[1]
## Reference
Tropical cyclones of the 1999 Atlantic hurricane season
A 2 B C D E 7 F G H 11 12 I J K L
Saffir–Simpson Hurricane Scale TD TS C1 C2 C3 C4 C5
|
2021-01-25 12:44:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20705316960811615, "perplexity": 8012.880161932093}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703581888.64/warc/CC-MAIN-20210125123120-20210125153120-00709.warc.gz"}
|
https://www.audiolabs-erlangen.de/resources/MIR/FMP/C7/C7S2_DiagonalMatching.html
|
# Diagonal Matching
Following Section 7.2.2 of [Müller, FMP, Springer 2015], we discuss in this notebook a simple diagonal matching strategy for comparing a short sequence with subsequences of a longer sequence. This procedure has been used for audio matching in the following article.
## Matching Function¶
The following subsequence matching technique is motivated by the task of audio matching, which was introduced in the FMP notbook on content-based audio retrieval. Recall that the goal of audio matching is to retrieve all audio excerpts that musically correspond to a short query audio clip. From an abstract point of view, let $X=(x_1,x_2,\ldots,x_N)$ and $Y=(y_1,y_2,\ldots,y_M)$ be two feature sequences, representing a query $\mathcal{Q}$ and a document $\mathcal{D}$, respectively. The length $N$ of the query is typically short in comparison with the length $M$ of the database document. To see if and where the query $\mathcal{Q}$ is somehow "contained" in $\mathcal{D}$, we shift the sequence $X$ over the sequence $Y$ and locally compare $X$ with suitable subsequences of $Y$. Every subsequence of $Y$ that is similar or, equivalently, has a small distance to $X$ is considered a match for the query.
There are many ways for locally comparing $X$ with subsequences of $Y$. We now introduce a simple procedure, which is referred to as diagonal matching. First of all, we need to fix a local cost measure (or local distance measure) to compare the chroma vectors of the sequences $X$ and $Y$. In the following, we assume that all features are normalized with regard to the Euclidean norm and then use the distance measure $c$ based on the inner product (or dot product):
$$c(x,y) = 1- \langle x,y\rangle = 1 - \sum_{k=1}^K x(k)y(k)$$
for two $K$-dimensional vectors $x\in\mathbb{R}^K$ and $y\in\mathbb{R}^K$ with $\|x\|_2=\|y\|_2=1$. Furthermore, assuming that all feature vectors have non-negative entries, we have $c(x,y)\in[0,1]$ with $c(x,y)=0$ if and only if $x=y$. One simple way for comparing two feature sequences that share the same length is to compute the average distance between corresponding vectors of the two sequences. Doing so, we compare the query sequence $X=(x_1,\ldots,x_N)$ with all subsequences $(y_{1+m},\ldots,y_{N+m})$ of $Y$ having the same length $N$ as the query, where $m\in[0:M-N]$ denotes the shift index. This procedure yields a matching function $\Delta_\mathrm{Diag}:[0:M-N]\to\mathbb{R}$ defined by
$$\Delta_\mathrm{Diag}(m) := \frac{1}{N}\sum_{n=1}^{N} c(x_n,y_{n+m}).$$
We now slightly reformulate the way this matching function is computed. Let $\mathbf{C}\in\mathbb{R}^{N\times M}$ be the cost matrix given by
$$\mathbf{C}(n,m):=c(x_n,y_m)$$
for $n\in[1:N]$ and $m\in[1:M]$. Then the value $\Delta_\mathrm{Diag}(m)$ is obtained (up to the normalization by the query length) by summing up diagonals of the matrix $\mathbf{C}$ as illustrated by the next figure. This explains why this procedure is denoted as "diagonal" matching.
## Implementation¶
In the following code cell, we implement the diagonal matching procedure and apply it, as a simple example, to synthetically generated sequences $X$ and $Y$. In the subsequent example, the sequence $Y$ contains five subsequences that are similar to $X$ (starting at positions $m=20$, $40$, $60$, $80$, $100$, respectively):
• The first occurrence starting at $m=20$ is an exact copy of $X$.
• The occurrences at $m=40$ and $m=60$ are noisy versions of $X$.
• The occurrence at $m=80$ is a stretched (slower) version of $X$.
• The occurrence at $m=100$ is a compressed (faster) version of $X$.
As can be seen in the following figure, the matching function $\Delta_\mathrm{Diag}$ reveals local minima at the expected positions. While the first minimum at $m=20$ is zero, the next two minima at $m=40$ and $m=60$ are still pronounced (with a matching value being close to zero). However, due to the stretching and compression, the diagonal matching procedure is not capable of capturing well the last two subsequences at $m=80$ and $m=100$.
In [1]:
import sys
import numpy as np
import scipy
import librosa
import matplotlib.pyplot as plt
from matplotlib import patches
sys.path.append('..')
import libfmp.b
%matplotlib inline
def scale_tempo_sequence(X, factor=1):
"""Scales a sequence (given as feature matrix) along time (second dimension)
Notebook: C7/C7S2_DiagonalMatching.ipynb
Args:
X (np.ndarray): Feature sequences (given as K x N matrix)
factor (float): Scaling factor (resulting in length "round(factor * N)"") (Default value = 1)
Returns:
X_new (np.ndarray): Scaled feature sequence
N_new (int): Length of scaled feature sequence
"""
N = X.shape[1]
t = np.linspace(0, 1, num=N, endpoint=True)
N_new = np.round(factor * N).astype(int)
t_new = np.linspace(0, 1, num=N_new, endpoint=True)
X_new = scipy.interpolate.interp1d(t, X, axis=1)(t_new)
return X_new, N_new
def cost_matrix_dot(X, Y):
"""Computes cost matrix via dot product
Notebook: C7/C7S2_DiagonalMatching.ipynb
Args:
X (np.ndarray): First sequence (K x N matrix)
Y (np.ndarray): Second sequence (K x M matrix)
Returns:
C (np.ndarray): Cost matrix
"""
return 1 - np.dot(X.T, Y)
def matching_function_diag(C, cyclic=False):
"""Computes diagonal matching function
Notebook: C7/C7S2_DiagonalMatching.ipynb
Args:
C (np.ndarray): Cost matrix
cyclic (bool): If "True" then matching is done cyclically (Default value = False)
Returns:
Delta (np.ndarray): Matching function
"""
N, M = C.shape
assert N <= M, "N <= M is required"
Delta = C[0, :]
for n in range(1, N):
Delta = Delta + np.roll(C[n, :], -n)
Delta = Delta / N
if cyclic is False:
Delta[M-N+1:M] = np.inf
return Delta
# Create snythetic example for sequences X and Y
N = 15
M = 130
feature_dim = 12
np.random.seed(2)
X = np.random.random((feature_dim, N))
Y = np.random.random((feature_dim, M))
Y[:, 20:20+N] = X
Y[:, 40:40+N] = X + 0.5 * np.random.random((feature_dim, N))
Y[:, 60:60+N] = X + 0.8 * np.random.random((feature_dim, N))
X_slow, N_slow = scale_tempo_sequence(X, factor=1.25)
Y[:, 80:80+N_slow] = X_slow
X_fast, N_fast = scale_tempo_sequence(X, factor=0.8)
Y[:, 100:100+N_fast] = X_fast
Y = librosa.util.normalize(Y, norm=2)
X = librosa.util.normalize(X, norm=2)
# Compute cost matrix and matching function
C = cost_matrix_dot(X, Y)
Delta = matching_function_diag(C)
# Visualization
fig, ax = plt.subplots(2, 2, gridspec_kw={'width_ratios': [1, 0.02],
'height_ratios': [1, 1]}, figsize=(8, 4))
cmap = libfmp.b.compressed_gray_cmap(alpha=-10, reverse=True)
libfmp.b.plot_matrix(C, title='Cost matrix', xlabel='Time (samples)', ylabel='Time (samples)',
ax=[ax[0, 0], ax[0, 1]], colorbar=True, cmap=cmap)
libfmp.b.plot_signal(Delta, ax=ax[1,0], xlabel='Time (samples)', ylabel='',
title = 'Matching function', color='k')
ax[1, 0].grid()
ax[1, 1].axis('off')
plt.tight_layout()
## Retrieval Procedure¶
We now discuss how the matching function can be applied for retrieving all matches that are similar to the query fragment. In the following, we assume that the database is represented by a single document $\mathcal{D}$ (e.g., by concatenating all document sequences). To determine the best match between $\mathcal{Q}$ and $\mathcal{D}$, we simply look for the index $m^\ast\in[0:M-N]$ that minimizes the matching function $\Delta_\mathrm{Diag}$:
$$m^\ast := \underset{m\in[0:M-N]}{\mathrm{argmin}} \,\,\Delta_\mathrm{Diag}(m).$$
The best match is then given by the subsequence
$$Y(1+m^\ast:N+m^\ast) := (y_{1+m^\ast},\ldots,y_{N+m^\ast}).$$
To obtain further matches, we exclude a neighborhood of the best match from further considerations. For example, one may exclude a neighborhood of $\rho= \lfloor N/2 \rfloor$ around $m^\ast$, e.g., by setting $\Delta_\mathrm{Diag}(m)=\infty$ for $m\in [m^\ast-\rho:m^\ast+\rho]\cap [0:M-N]$. This ensures that the subsequent matches do not overlap by more than half the query length. To find subsequent matches, the latter procedure is repeated until a certain number of matches is obtained or a specified distance threshold is exceeded.
In the following code cell, we implement this retrieval procedure. Besides the parameter $\rho$, we introduce a parameter $\tau$ for restricting the matching values (i.e., we require $\Delta_\mathrm{Diag}(m^\ast)\leq \tau$) and a parameters that specifies the maximum number of matches to be retrieved. Continuing with our synthetic example, we indicate in the visualization the minimizing matching positions (using a red dot) and the matching subsequence (using a transparent red rectangle).
In [2]:
def mininma_from_matching_function(Delta, rho=2, tau=0.2, num=None):
"""Derives local minima positions of matching function in an iterative fashion
Notebook: C7/C7S2_DiagonalMatching.ipynb
Args:
Delta (np.ndarray): Matching function
rho (int): Parameter to exclude neighborhood of a matching position for subsequent matches (Default value = 2)
tau (float): Threshold for maximum Delta value allowed for matches (Default value = 0.2)
num (int): Maximum number of matches (Default value = None)
Returns:
pos (np.ndarray): Array of local minima
"""
Delta_tmp = Delta.copy()
M = len(Delta)
pos = []
num_pos = 0
rho = int(rho)
if num is None:
num = M
while num_pos < num and np.sum(Delta_tmp < tau) > 0:
m = np.argmin(Delta_tmp)
pos.append(m)
num_pos += 1
Delta_tmp[max(0, m - rho):min(m + rho, M)] = np.inf
pos = np.array(pos).astype(int)
return pos
def matches_diag(pos, Delta_N):
"""Derives matches from positions in the case of diagonal matching
Notebook: C7/C7S2_DiagonalMatching.ipynb
Args:
pos (np.ndarray or list): Starting positions of matches
Delta_N (int or np.ndarray or list): Length of match (a single number or a list of same length as Delta)
Returns:
matches (np.ndarray): Array containing matches (start, end)
"""
matches = np.zeros((len(pos), 2)).astype(int)
for k in range(len(pos)):
s = pos[k]
matches[k, 0] = s
if isinstance(Delta_N, int):
matches[k, 1] = s + Delta_N - 1
else:
matches[k, 1] = s + Delta_N[s] - 1
return matches
def plot_matches(ax, matches, Delta, Fs=1, alpha=0.2, color='r', s_marker='o', t_marker=''):
"""Plots matches into existing axis
Notebook: C7/C7S2_DiagonalMatching.ipynb
Args:
ax: Axis
matches: Array of matches (start, end)
Delta: Matching function
Fs: Feature rate (Default value = 1)
alpha: Transparency pramaeter for match visualization (Default value = 0.2)
color: Color used to indicated matches (Default value = 'r')
s_marker: Marker used to indicate start of matches (Default value = 'o')
t_marker: Marker used to indicate end of matches (Default value = '')
"""
y_min, y_max = ax.get_ylim()
for (s, t) in matches:
ax.plot(s/Fs, Delta[s], color=color, marker=s_marker, linestyle='None')
ax.plot(t/Fs, Delta[t], color=color, marker=t_marker, linestyle='None')
rect = patches.Rectangle(((s-0.5)/Fs, y_min), (t-s+1)/Fs, y_max, facecolor=color, alpha=alpha)
pos = mininma_from_matching_function(Delta, rho=N//2, tau=0.12, num=None)
matches = matches_diag(pos, N)
fig, ax, line = libfmp.b.plot_signal(Delta, figsize=(8, 2), xlabel='Time (samples)',
title = 'Matching function with retrieved matches',
color='k')
ax.grid()
plot_matches(ax, matches, Delta)
## Matching Function Using Multiple Queries¶
This basic matching procedure works well in the case that the tempo of the query roughly coincides with the tempo within the sections to be matched. However, as also indicated by the previous example, diagonal matching becomes problematic when the database subsequences are stretched or compressed versions of the query. To compensate for such tempo differences, one can apply a multiple-query strategy. The idea is as follows:
• Generate multiple versions of a query by applying scaling operations that simulate different tempi.
• Compute a separate matching function for each of the scaled versions using diagonal matching.
• Minimize over all resulting matching functions, which results in a single matching function.
Note that this idea is similar to the multiple-filtering approach for enhancing the path structure of SSMs. In the following implementation, we introduce a set $\Theta$ that samples the range of expected relative tempo differences. In music retrieval, it rarely happens that the relative tempo difference between matching sections is larger than $50$ percent. Therefore, $\Theta$ can be chosen to cover tempo variations of roughly $-50$ to $+50$ percent. For example, the set
$$\Theta=\{0.66,0.81,1.00,1.22,1.50\}$$
(with logarithmically spaced tempo parameters) covers tempo variations of roughly $−50$ to $+50$ percent. (This set can be computed by the function libfmp.c4.compute_tempo_rel_set.)
In [3]:
def matching_function_diag_multiple(X, Y, tempo_rel_set=[1], cyclic=False):
"""Computes diagonal matching function using multiple query strategy
Notebook: C7/C7S2_DiagonalMatching.ipynb
Args:
X (np.ndarray): First sequence (K x N matrix)
Y (np.ndarray): Second sequence (K x M matrix)
tempo_rel_set (np.ndarray): Set of relative tempo values (scaling) (Default value = [1])
cyclic (bool): If "True" then matching is done cyclically (Default value = False)
Returns:
Delta_min (np.ndarray): Matching function (obtained by from minimizing over several matching functions)
Delta_N (np.ndarray): Query length of best match for each time position
Delta_scale (np.ndarray): Set of matching functions (for each of the scaled versions of the query)
"""
M = Y.shape[1]
num_tempo = len(tempo_rel_set)
Delta_scale = np.zeros((num_tempo, M))
N_scale = np.zeros(num_tempo)
for k in range(num_tempo):
X_scale, N_scale[k] = scale_tempo_sequence(X, factor=tempo_rel_set[k])
C_scale = cost_matrix_dot(X_scale, Y)
Delta_scale[k, :] = matching_function_diag(C_scale, cyclic=cyclic)
Delta_min = np.min(Delta_scale, axis=0)
Delta_argmin = np.argmin(Delta_scale, axis=0)
Delta_N = N_scale[Delta_argmin]
return Delta_min, Delta_N, Delta_scale
# import libfmp.c4
# tempo_rel_set = libfmp.c4.compute_tempo_rel_set(tempo_rel_min=0.66, tempo_rel_max=1.5, num=5)
# print(tempo_rel_set)
tempo_rel_set = [0.66, 0.81, 1.00, 1.22, 1.50]
color_set = ['b', 'c', 'gray', 'r', 'g']
num_tempo = len(tempo_rel_set)
Delta_min, Delta_N, Delta_scale = matching_function_diag_multiple(X, Y, tempo_rel_set=tempo_rel_set,
cyclic=False)
for k in range(num_tempo):
libfmp.b.plot_signal(Delta_scale[k,:], figsize=(8, 2), xlabel='Time (samples)',
title = 'Matching function with scaling factor %.2f' % tempo_rel_set[k],
color=color_set[k], ylim=[0, 0.3])
plt.grid()
fig, ax, line = libfmp.b.plot_signal(Delta_min, figsize=(8, 2), xlabel='Time (samples)',
title = 'Matching function', color='k', ylim=[0,0.3], linewidth=3, label='min')
ax.grid()
for k in range(num_tempo):
ax.plot(Delta_scale[k, :], linewidth=1, color=color_set[k], label=tempo_rel_set[k])
plt.legend(loc='lower right', framealpha=1);
In the multiple-query matching function, the subsequences that correspond to stretched or compressed versions of the query are now revealed by local minima that have a value much closer to zero. As a result, our retrieval procedure from above (with the same parameter settings) now yields the expected matches. The length of the matching subsequence is derived from the scaled query that yields the minimal matching value over all query versions considered. Therefore, as also indicated in the subsequent figure, the length may differ from the length $N$ of the original (non-stretched) query.
In [4]:
pos = mininma_from_matching_function(Delta_min, rho=N//2, tau=0.12, num=None)
matches = matches_diag(pos, Delta_N)
fig, ax = plt.subplots(2, 2, gridspec_kw={'width_ratios': [1, 0.02],
'height_ratios': [3, 3]}, figsize=(8, 4))
cmap = libfmp.b.compressed_gray_cmap(alpha=-10, reverse=True)
libfmp.b.plot_matrix(C, title='Cost matrix', xlabel='Time (samples)', ylabel='Time (samples)',
ax=[ax[0, 0], ax[0, 1]], colorbar=True, cmap=cmap)
libfmp.b.plot_signal(Delta_min, ax=ax[1, 0], xlabel='Time (samples)',
title = 'Matching function with retrieved matches', color='k')
ax[1,0].grid()
plot_matches(ax[1, 0], matches, Delta_min)
ax[1,1].axis('off')
plt.tight_layout()
## Further Notes¶
In this notebook, we discussed a simple matching procedure that identifies database subsequences that are similar to a given query sequence. We have also shown how differences in tempo can be handled by using a multiple-query approach.
Acknowledgment: This notebook was created by Meinard Müller and Frank Zalkow.
|
2022-11-29 01:13:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 5, "x-ck12": 0, "texerror": 0, "math_score": 0.6552894115447998, "perplexity": 4361.250895328688}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710684.84/warc/CC-MAIN-20221128235805-20221129025805-00273.warc.gz"}
|
https://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/Numerical_methods_for_linear_least_squares
|
# Numerical methods for linear least squares
Numerical methods for linear least squares entails the numerical analysis of linear least squares problems.
## Introduction
A general approach to the least squares problem ${\displaystyle \operatorname {\,min} \,{\big \|}\mathbf {y} -X{\boldsymbol {\beta }}{\big \|}^{2}}$ can be described as follows. Suppose that we can find an n by m matrix S such that XS is an orthogonal projection onto the image of X. Then a solution to our minimization problem is given by
${\displaystyle {\boldsymbol {\beta }}=S\mathbf {y} }$
simply because
${\displaystyle X{\boldsymbol {\beta }}=X(S\mathbf {y} )=(XS)\mathbf {y} }$
is exactly a sought for orthogonal projection of ${\displaystyle \mathbf {y} }$ onto an image of X (see the picture below and note that as explained in the next section the image of X is just a subspace generated by column vectors of X). A few popular ways to find such a matrix S are described below.
## Inverting the matrix of the normal equations
The algebraic solution of the normal equations with a full-rank matrix XTX can be written as
${\displaystyle {\hat {\boldsymbol {\beta }}}=(\mathbf {X} ^{\rm {T}}\mathbf {X} )^{-1}\mathbf {X} ^{\rm {T}}\mathbf {y} =\mathbf {X} ^{+}\mathbf {y} }$
where X+ is the Moore–Penrose pseudoinverse of X. Although this equation is correct and can work in many applications, it is not computationally efficient to invert the normal-equations matrix (the Gramian matrix). An exception occurs in numerical smoothing and differentiation where an analytical expression is required.
If the matrix XTX is well-conditioned and positive definite, implying that it has full rank, the normal equations can be solved directly by using the Cholesky decomposition RTR, where R is an upper triangular matrix, giving:
${\displaystyle R^{\rm {T}}R{\hat {\boldsymbol {\beta }}}=X^{\rm {T}}\mathbf {y} .}$
The solution is obtained in two stages, a forward substitution step, solving for z:
${\displaystyle R^{\rm {T}}\mathbf {z} =X^{\rm {T}}\mathbf {y} ,}$
followed by a backward substitution, solving for ${\displaystyle {\hat {\boldsymbol {\beta }}}}$:
${\displaystyle R{\hat {\boldsymbol {\beta }}}=\mathbf {z} .}$
Both substitutions are facilitated by the triangular nature of R.
## Orthogonal decomposition methods
Orthogonal decomposition methods of solving the least squares problem are slower than the normal equations method but are more numerically stable because they avoid forming the product XTX.
The residuals are written in matrix notation as
${\displaystyle \mathbf {r} =\mathbf {y} -X{\hat {\boldsymbol {\beta }}}.}$
The matrix X is subjected to an orthogonal decomposition, e.g., the QR decomposition as follows.
${\displaystyle X=Q{\begin{pmatrix}R\\0\end{pmatrix}}\ }$,
where Q is an m×m orthogonal matrix (QTQ=I) and R is an n×n upper triangular matrix with ${\displaystyle r_{ii}>0}$.
The residual vector is left-multiplied by QT.
${\displaystyle Q^{\rm {T}}\mathbf {r} =Q^{\rm {T}}\mathbf {y} -\left(Q^{\rm {T}}Q\right){\begin{pmatrix}R\\0\end{pmatrix}}{\hat {\boldsymbol {\beta }}}={\begin{bmatrix}\left(Q^{\rm {T}}\mathbf {y} \right)_{n}-R{\hat {\boldsymbol {\beta }}}\\\left(Q^{\rm {T}}\mathbf {y} \right)_{m-n}\end{bmatrix}}={\begin{bmatrix}\mathbf {u} \\\mathbf {v} \end{bmatrix}}}$
Because Q is orthogonal, the sum of squares of the residuals, s, may be written as:
${\displaystyle s=\|\mathbf {r} \|^{2}=\mathbf {r} ^{\rm {T}}\mathbf {r} =\mathbf {r} ^{\rm {T}}QQ^{\rm {T}}\mathbf {r} =\mathbf {u} ^{\rm {T}}\mathbf {u} +\mathbf {v} ^{\rm {T}}\mathbf {v} }$
Since v doesn't depend on β, the minimum value of s is attained when the upper block, u, is zero. Therefore, the parameters are found by solving:
${\displaystyle R{\hat {\boldsymbol {\beta }}}=\left(Q^{\rm {T}}\mathbf {y} \right)_{n}.}$
These equations are easily solved as R is upper triangular.
An alternative decomposition of X is the singular value decomposition (SVD)[1]
${\displaystyle X=U\Sigma V^{\rm {T}}\ }$,
where U is m by m orthogonal matrix, V is n by n orthogonal matrix and ${\displaystyle \Sigma }$ is an m by n matrix with all its elements outside of the main diagonal equal to 0. The pseudoinverse of ${\displaystyle \Sigma }$ is easily obtained by inverting its non-zero diagonal elements and transposing. Hence,
${\displaystyle \mathbf {X} \mathbf {X} ^{+}=U\Sigma V^{\rm {T}}V\Sigma ^{+}U^{\rm {T}}=UPU^{\rm {T}},}$
where P is obtained from ${\displaystyle \Sigma }$ by replacing its non-zero diagonal elements with ones. Since ${\displaystyle (\mathbf {X} \mathbf {X} ^{+})^{*}=\mathbf {X} \mathbf {X} ^{+}}$ (the property of pseudoinverse), the matrix ${\displaystyle UPU^{\rm {T}}}$ is an orthogonal projection onto the image (column-space) of X. In accordance with a general approach described in the introduction above (find XS which is an orthogonal projection),
${\displaystyle S=\mathbf {X} ^{+}}$,
and thus,
${\displaystyle \beta =V\Sigma ^{+}U^{\rm {T}}\mathbf {y} }$
is a solution of a least squares problem. This method is the most computationally intensive, but is particularly useful if the normal equations matrix, XTX, is very ill-conditioned (i.e. if its condition number multiplied by the machine's relative round-off error is appreciably large). In that case, including the smallest singular values in the inversion merely adds numerical noise to the solution. This can be cured with the truncated SVD approach, giving a more stable and exact answer, by explicitly setting to zero all singular values below a certain threshold and so ignoring them, a process closely related to factor analysis.
## Discussion
The numerical methods for linear least squares are important because linear regression models are among the most important types of model, both as formal statistical models and for exploration of data-sets. The majority of statistical computer packages contain facilities for regression analysis that make use of linear least squares computations. Hence it is appropriate that considerable effort has been devoted to the task of ensuring that these computations are undertaken efficiently and with due regard to round-off error.
Individual statistical analyses are seldom undertaken in isolation, but rather are part of a sequence of investigatory steps. Some of the topics involved in considering numerical methods for linear least squares relate to this point. Thus important topics can be
• Computations where a number of similar, and often nested, models are considered for the same data-set. That is, where models with the same dependent variable but different sets of independent variables are to be considered, for essentially the same set of data-points.
• Computations for analyses that occur in a sequence, as the number of data-points increases.
• Special considerations for very extensive data-sets.
Fitting of linear models by least squares often, but not always, arise in the context of statistical analysis. It can therefore be important that considerations of computation efficiency for such problems extend to all of the auxiliary quantities required for such analyses, and are not restricted to the formal solution of the linear least squares problem.
Matrix calculations, like any other, are affected by rounding errors. An early summary of these effects, regarding the choice of computation methods for matrix inversion, was provided by Wilkinson.[2]
## References
1. Lawson, C. L.; Hanson, R. J. (1974). Solving Least Squares Problems. Englewood Cliffs, NJ: Prentice-Hall. ISBN 0-13-822585-0.
2. Wilkinson, J.H. (1963) "Chapter 3: Matrix Computations", Rounding Errors in Algebraic Processes, London: Her Majesty's Stationery Office (National Physical Laboratory, Notes in Applied Science, No.32)
• Ake Bjorck, Numerical Methods for Least Squares Problems, SIAM, 1996.
• R. W. Farebrother, Linear Least Squares Computations, CRC Press, 1988.
• Barlow, Jesse L. (1993), "Chapter 9: Numerical aspects of Solving Linear Least Squares Problems", in Rao, C. R. (ed.), Computational Statistics, Handbook of Statistics, 9, North-Holland, ISBN 0-444-88096-8
• Björck, Åke (1996). Numerical methods for least squares problems. Philadelphia: SIAM. ISBN 0-89871-360-9.
• Goodall, Colin R. (1993), "Chapter 13: Computation using the QR decomposition", in Rao, C. R. (ed.), Computational Statistics, Handbook of Statistics, 9, North-Holland, ISBN 0-444-88096-8
• National Physical Laboratory (1961), "Chapter 1: Linear Equations and Matrices: Direct Methods", Modern Computing Methods, Notes on Applied Science, 16 (2nd ed.), Her Majesty's Stationery Office
• National Physical Laboratory (1961), "Chapter 2: Linear Equations and Matrices: Direct Methods on Automatic Computers", Modern Computing Methods, Notes on Applied Science, 16 (2nd ed.), Her Majesty's Stationery Office
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.
|
2022-12-05 04:03:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 24, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7826239466667175, "perplexity": 752.7908611219123}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711003.56/warc/CC-MAIN-20221205032447-20221205062447-00007.warc.gz"}
|
https://flipdazed.github.io/blog/python%20tutorial/07-flow-control
|
# Learning Outcomes
• Understand how to use if, else and elif
• Construct for statements across an iterator
• Understand basics of for loop iteration in python
• Refresh Terminal skills
NB As a reminder you should be using ipython now!
# if statements
Perhaps the most well-known statement type is the if statement. For example:
number = int(input('Input a number:'))
if number < 0:
print("number is negative")
elif number % 2:
print("number is positive and odd") # Style Tip: Make sure you indent
else: print("number is even and non-negative") # this is legit python but it's UGLY - don't do this
There can be zero or more elif parts, and the else part is optional. The keyword elif is short for else if, and is useful to avoid excessive indentation. An ifelifelif … sequence is a substitute for the switch or case statements found in other languages.
# for Statements
The for statement in Python supports repeated execution of a statement or block of statements that is controlled by an iterable expression. Here’s the syntax for the for statement:
# Measure some strings:
words = ['cat', 'window', 'floccinaucinihilipilification']
for w in words:
print(w, len(w))
### Example: Loop until a condition Prime identification
Loop statements may have an else clause; it is executed when the loop terminates through exhaustion of the iterable (with for) or when the condition becomes False (with while), but not when the loop is terminated by a break statement.
max_number = int(input('Enter a number to check primes up to:'))
for n in range(2, max_number):
for x in range(2, n):
if n % x == 0:
print(n, 'equals', x, '*', n//x)
break
else: # only triggered if iterable is exhausted without breaking
print(n, 'is a prime number')
### Example: Fibonacci Series
Here we use lists instead of the previous variable assignment method and run the iteration for precisely 5 loops rather than ending with a while condition
result = [0, 1]
for i in range(5):
result.append(result[-1] + result[-2])
which will then contain
>>> result
[0, 1, 1, 2, 3, 5, 8]
# Exercises
## Exercise 7.1: Find all files matching pattern
The function os.walk(source_directory) will return three items (root, directories, files) at each iteration. There is one iteration for every end-point sub-directory in the directory tree
For example os.walk('./root') called from file test.py
test.py
root/
dir1/
f1.py
dir2/
f2.csv
data.xlsx
dir21/
g.py
will iterate 4 times
• firstly, once at the root level returning ('./root', ['dir1', 'dir2'], ['g.py'])
• once for dir1 returning ('./root/dir1', [], ['f1.py'])
• once for dir2 returning ('./root/dir2', ['dir21'], ['f2.csv', 'data.xlsx'],)
• finally, once for dir21 returning ('./root/dir2', [], [],)
Note that if you type os.walk('./root') it will return a generator object. To see the output of generator objects you can just generate them by calling list(the_object) for for example here we would do list(os.walk('./root'))
### Exercise 7.1.1: Create the directory structure above using the Terminal commands we have learned in the first section
• Hint You will need touch / mkdir. ls and mv may also be helpful (On Windows just do echo \$null >> filename instead of touch)
### Exercise 7.1.2: Print the path of all files that have an extension of .py
• Hint A modified form of the expression if '.py' in 'files.py': will be key
• Hint You can combine the root and the filename with os.path.join
• Hint Remember how to iterate an object that contains multiple items at each iteration. Multiple variable assignment! The following snippets might help you understand this
>>> for i, j, k in [['a0', ['b0', 'c0'], ['d0', 'e0']], ['a1', ['b1', 'c1'], ['d1', 'e1']]]:
>>> for l in k:
>>> print('second list elements', k)
# Solve me!
You should get
./exercises/root/g.py
./exercises/root/dir1/f1.py
## Exercise 7.2: Searching and modifying file contents
In this exercise we look at a very common use of python which is a multi-file search and replace.
The following code snippet will edit some of the files you have created in 7.1. Look for common elements / repititions in this script and replace them with a loop
import os
# os.path.join allows cross platform compatibility
# as windows uses \ and linux uses / to separate paths
# IT also allows you to combined and modify filepaths on the fly
base_dir = os.path.join('.', 'exercises', 'root','dir2')
filepath1 = os.path.join(base_dir, 'f2.csv')
filepath2 = os.path.join(base_dir, 'data.xlsx')
with open(filepath1, 'w') as fobj: # WARNING! This line ERASES all contents in filepath1!
fobj.write('test') # if you want to read data ALWAYS use 'r'
with open(filepath2, 'w') as fobj: # and do data = fobj.read() to explore the contents!
fobj.write('test')
Hint File extensions e.g. .xlsx means nothing when programming. They are just hints to the operating system (Windows / OS X) of how to open the files when double clicking. I can even create a file like myfile.alex.xlsx.rubbish. Generally you will use common file extensions like .txt or .dat when dumping data.
### Exercise 7.2.1: Utility of context managers (the with statement)
This exercise is to show you why we tend to use the with statement (a context manager) when opening files.
Do some research on the line
with open(filepath1, 'w') as fobj:
Try to understand what the difference between this and
fobj = open(filepath1, 'w')
As an exercise, open the .xlsx file for reading in python using f = open(filepath, 'r') without closing the file object and now try opening it in Excel by double clicking in the explorer. Try again the other way round.
### Exercise 7.2.2: Finding files containing strings
Modify the solution to 7.1 to find all files containing the string 'test'
Hint Don’t open the files with 'w' else you will overwrite with a blank file! The following may also be useful
>>>'test' in 'test case this is a testcase'
True
### Exercise 7.2.3: Terminal recap: Copy directory contents as a backup
This exercise is a recap of the terminal skills in session 1. It is also extremely sensible when playing around with file contents as there is no Cntrl+Z once code is executed!
In the Terminal copy (not move!) the files from solution 7.2.2 into a new temporary directory ./exercises/backup_root/ as a backup
### Exercise 7.2.4: Find and replace!
Find all files containing the string 'test' and modify it to be 'test2'. Check your solution.
Hint If you simply use fobj.write('test2') you will overwrite the entire file contents! This may not be ideal in most scenarios.
• Use the open() function this time like open(filepath, 'r') instead of 'w'
• Then read the file contents to a temporary object
• Then open a new file protocol with 'w' to overwrite the entire contents back to the file
# Next Topic
08: Functions and Editors
|
2020-06-03 21:13:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31313061714172363, "perplexity": 6436.837481894353}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347436466.95/warc/CC-MAIN-20200603210112-20200604000112-00098.warc.gz"}
|
https://mathkp.wordpress.com/tag/linear-algebra/
|
# Gaussian Elimination: Backsubstitution
The Gaussian Elimination with back-substitution is more optimal and less overwhelming that Gaussian Jordan. It uses partial pivoting, i.e. the pivoting is done only using row transforms, as a result the order of the solution and variable vectors remains unchanged. This mitigates the overhead of book-keeping and column swapping.
Basic Steps
Let us consider a matrix:
$\left[ \begin{array}{cccc} a_{00} & a_{01} & a_{02} & a_{03} \\ a_{10} & a_{11} & a_{12} & a_{13} \\ a_{20} & a_{21} & a_{22} & a_{23} \\ a_{30} & a_{31} & a_{32} & a_{33}\end{array} \right]$
We follow a similar but simpler procedure to the Gaussian Jordan Method.
Step 1: Upper Triangular Matrix
Only the elements below the pivot element are reduced to zero by subtracting the right amount of the “pivot row”.
After iterating over each pivot element we get an upper triangular matrix:
$\left[ \begin{array}{cccc} a_{00}^\prime & a_{01}^\prime & a_{02}^\prime & a_{03}^\prime \\ 0 & a_{11}^\prime & a_{12}^\prime & a_{13}^\prime \\ 0 & 0 & a_{22}^\prime & a_{23} ^\prime\\ 0 & 0 & 0 & a_{33}^\prime\end{array} \right] \cdot \left[ \begin{array}{c} x_0 \\ x_1 \\ x_2 \\ x_3\end{array} \right] = \left[ \begin{array}{c} b_0^\prime \\ b_1^\prime \\ b_2^\prime \\ b_3^\prime\end{array} \right]$
Each in the above equation is shown with a “prime” signifying that the element has changed during the transforms
Step 2: Back-Substitution
The name back-substitution arrives from the fact that that the last equation is a univariable equation and is trivial.
$a_{33}^\prime x_3 = b_3^\prime$
This value can be “back-substituted” into the previous equation to get the value of $x_2$
$a_{22}^\prime x_2 + a_{23}^\prime x_3 = b_2^\prime$
which further gives,
$x_2 = \frac{1}{a_{22}^\prime}\left[b_2^\prime - a_{23}^\prime x_3\right]$
The typical back-substitution can be represented with:
$x_i = \frac{1}{a_{ii}^\prime}\left[b_i^\prime - \sum_{j = i + 1}^{N - 1}a_{ij}^\prime x_j\right]$
Performance Considerations
Strictly talking in terms of complexity, both Gaussian Jordan Elimination and Gaussian Elimination with back-substitution are $O(N^3)$ algorithms. The latter is more optimal because of the reduction in the amount of operations in the innermost for loops. The difference can be attributed to full pivoting as all rows are reduced as opposed to only a subset of rows (resultant is a triangular matrix) in Gaussian Elimination with back-substitution. This reduces the number of multiplications ($N^3$) and additions ($N^2 M$) by a factor of 3. We can reduce this factor to 1.5 by avoiding the calculation of the inverse in Gaussian Jordan Elimination.
# Gaussian Jordan Elimination
I have been reading a wonderful book about mathematical programming and I have decided to document my learnings in this blog.
This post is going to explain one of basic building blocks for solving “Linear Algebra Equations”. Consider a set of equations:
$a_{0,0}x_1 + a_{0,1}x_2 + \cdots + a_{0,M} = b_0 \newline a_{1,0}x_1 + a_{1,1}x_2 + \cdots + a_{1,M} = b_1 \newline \vdots \hspace{25mm} \vdots \newline a_{N,0}x_1 + a_{N,1}x_2 + \cdots + a_{N,M} = b_N$
This is a system on M unknowns $x_0, x_1 \cdots x_M$ and N equations. Each variable can be thought of a degree of freedom and each equation can be thought of as a constraint. Think about a three variable situation, like a position of a person in a 3-D coordinate. Without any constraints, he has three degrees of freedom in the x, y and z direction. If we are given three equations describing his position(each equation in x, y and z represents a plane in 3-D), we can pin point his co-ordinates in the 3-D space.
Validation
• If M > N, the number of unknowns is greater than the number of equations, the system is said to be undetermined and has infinitely many solutions. The solution space can be restricted by Compressed Sensing.
• if M < N, the number of equations are greater than the number of variables, the system is said to be overdetermined. Here the general approach is to find the best fit solution (i.e R.M.S error values are a minimum for all equations)
• If M = N, the system is consistent if the following caveats are satisfied:
• No row should be a linear combination of the other row, this leads to row degeneracy
• If all the equations have a certain variable in the exact same linear combination, the system is afflicted by column degeneracy
• Both these equations effective result in the removal of a constraint and thus the system becomes indeterminable.
Pivoting
In order to obtain more accurate results and reduce round-off errors, a technique called Pivoting is used. Pivoting is done to convert a matrix to its row echelon form.
What is row echelon form?
A matrix is said to be in row echelon form if:
• All non zero rows are above the zero rows.
• The first non zero number in a row from the left called the Leading coefficient or Pivot should be strictly to the right of the leading coefficient of row above it.
• All entries in a column below the leading coefficient must be zero
• Here is an example of a matrix in row echelon form:
$\left[ \begin{array}{ccccc} 1 & a_0 & a_1 & a_2 & a_3 \\ 0 & 0 & 1 & a_4 & a_5 \\ 0 & 0 & 0 & 1 & a_6 \end{array} \right]$
Pivoting can be done in two ways:
• Partial Pivoting: In this the algorithm selects element the largest absolute value and shuffles the rows in such a way that it lies along the diagonal.
• Complete Pivoting: The algorithm scans the whole matrix for the largest element and shuffles both columns and rows to place the pivot along a diagonal $a_{ii}$
The Algorithm
We will be using an example matrix to illustrate this Algorithm (which is given in the text-book:
Numerical Recipes in C++
$A = \left[ \begin{array}{ccccc} 1 & 2 & 3 & 4 & 5 \\ 2 & 3 & 4 & 5 & 1 \\ 3 & 4 & 5 & 1 & 2 \\ 4 & 5 & 1 & 2 & 3 \\ 5 & 4 & 3 & 2 & 1\end{array} \right]$
The equations we aim at solving is:
$A \cdot Y = I ; A \cdot X_1 = B_1; A \cdot X_2 = B_2$
The algorithm takes two inputs, matrix A (coefficient matrix) and B (solution vector). The inverse of the matrix is returned in A and the variable vector is returned in B.
Step 1: Finding the Pivot Element
In the first step the algorithm iterates through the matrix and finds the largest element, in the first iteration the pivot element is the largest element of the last row. In our case it comes out to be five and is in the fist column, so there is no need for a column swap, it only needs to be swapped with t he first row. This swap is maintained in a two book-keeping arrays storing the actual position of pivot, so that the result can be restored.
The next time the algorithm searches for a Pivot element, it excludes $R_1$ and $C_1$ from the search.
Step 2: Normalizing the row
Before we understand the first step we need to understand why this actually works. Using our transformations we are basically converting the matrix into the identity matrix I. Therefore,
$if A = I; I \cdot X_1 = B_1^\prime \implies X_1 = B_1^\prime$
Where $B_1^\prime$ is the transformed solution vector
As we are using the equation $A \cdot Y = I$ to determine the inverse of the matrix we store the result back in A.
This step can be further subdivided into two sub-steps:
• The first step is that we normalize a given row by the Pivot element, So now our matrix equations looks like:
For the Inverse:
$A = \left[ \begin{array}{ccccc} \frac{5}{5} & \frac{1}{5} & \frac{2}{5} & \frac{3}{5} & \frac{4}{5} \\ 2 & 3 & 4 & 5 & 1 \\ 3 & 4 & 5 & 1 & 2 \\ 4 & 5 & 1 & 2 & 3 \\ 1 & 2 & 3 & 4 & 5\end{array} \right] \cdot Y = \left[ \begin{array}{ccccc} \frac{1}{5} & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1\end{array} \right]$
The solution vector also gets transformed as:
$\left[ \begin{array}{c} \frac{b_0}{5} \\ b_1 \\ b_2 \\ b_3 \\ b_4\end{array} \right]$
The next step is to reduce each element below the Pivot by subtracting the right amount of first row:
$A = \left[ \begin{array}{ccccc} 1 & 0.2 & 0.4 & 0.6 & 0.8 \\ 0 & 2.6 & 3.2 & 3.8 & -0.6 \\ 0 & 3.4 & 3.8 & -0.8 & -0.4 \\ 0 & 4.2 & -0.6 & -0.4 & 0.2 \\ 0 & 1.8 & 2.6 & 3.4 & 4.2\end{array} \right] \cdot Y = \left[ \begin{array}{ccccc} 0.2 & 0 & 0 & 0 & 0 \\ -0.4 & 1 & 0 & 0 & 0 \\ -0.6 & 0 & 1 & 0 & 0 \\ -0.8 & 0 & 0 & 1 & 0 \\ -0.2 & 0 & 0 & 0 & 1\end{array} \right]$
and similar transforms on the solution vector.
We, will discuss certain parts of the second iteration as they are slightly different from the first:
Now while iterating for the second column, the largest element found its at $R_4,C_4$
Here there is no need for swapping as the pivot is found along the diagonal itself.
At the end we have done pivoting for all columns and have reduced our matrix, but we need to accommodate for the shuffling that we have done. Let us say that our book-keeping arrays:
Let us take the first case:
As the row and column number was not the same, there is an initial swap that needs to restored back. So, we swap $C_4\ with\ C_0$. A row operation in the input appears as a column operation in its inverse a (explains the shuffling of columns instead of rows)
|
2018-04-21 13:10:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 26, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6211479902267456, "perplexity": 432.9399466122651}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945222.55/warc/CC-MAIN-20180421125711-20180421145711-00454.warc.gz"}
|
https://physics.stackexchange.com/questions/27048/the-derivation-of-the-belinfante-rosenfeld-tensor
|
# The derivation of the Belinfante-Rosenfeld tensor
• It seems me that there is a "difference" (at least apparently) in how the Belinfante-Rosenfeld tensor is thought of in section 7.4 of Volume 1 of Weinberg's QFT book and in section 2.5.1 of the conformal field theory book by di Francesco et. al.
I would be glad if someone can help reconcile the issue.
Schematically the main issue as I see is this -
• If the action and the lagrangian density is writable as $I = \int d^4x L$ and $\omega_{\mu \nu}$ be the small parameter of Lorentz transformation then Weinberg is thinking of $\omega_{\mu \nu}$ to be space-time independent and he is varying the action to write it in the form, $\delta I = \int d^4x (\delta L = (A^{\mu \nu})\omega_{\mu \nu})$ Then some symmetrized form of whatever this $A^{\mu \nu}$ comes out to be is what is being called the Belinfante tensor. ( Its conservation needs the fields to satisfy the equations of motion)
• But following Francesco et.al's set-up I am inclined to think of $\omega_{\mu \nu}$ as being space-time dependent and then the variation of the action will also pick up terms from the Jacobian and the calculation roughly goes as saying, $\delta I = \int (\delta(d^4x)) L + \int d^4x (\delta L)$. Since under rigid Lorentz transformations the volume element is invariant the coefficient of $\omega_{\mu \nu}$ in the first variation will vanish but the second variation will produce a coefficient say $B^{\mu \nu}$. But both the variations will produce a coefficient for the derivative of $\omega_{\mu \nu}$ and let them be $C^{\mu \nu \lambda}$ and $D^{\mu \nu \lambda}$ respectively.
Now the argument will be that since the original action was to start off invariant under Lorentz transformations, when evaluated about them, the $B^{\mu \nu}$ should be $0$ and on shifting partial derivatives the sum of $C^{\mu \nu \lambda}$ and $D^{\mu \nu \lambda}$ is the conserved current (..and its not clear whether their conservation needs the field to satisfy the equation of motion..)
So by the first way $A^{\mu \nu}$ will be the conserved current and in the second the conserved current will be, $C^{\mu \nu \lambda} + D^{\mu \nu \lambda}$ (the C tensor will basically look like $-x^\nu \eta^{\lambda \mu}L$)
Is the above argument correct?
If yes then are the two arguments equivalent?
How or is Weinberg's argument taking into account the contribution to the conserved current from the variation of the Jacobian of the transformation?
• I took the liberty of editing the question to correct the volume number in Weinberg's trilogy: it's volume 1, and also the name of the first author of the yellow book, which is "di Francesco" and not just "Francesco". – José Figueroa-O'Farrill Oct 2 '11 at 15:48
• A derivation of the Belinfante improvement tensor from the perspective of Cartan geometry is done in my Phys.SE answer here. – Qmechanic Jul 22 '15 at 14:21
• A good reference arxiv.org/abs/1605.01121 – Saksith Jaksri Feb 17 '17 at 16:05
The two derivations are actually identical, except for the fact that Weinberg didn't have the general form of the Noether theorem for symmetries acting on the space-time coordinates as well as on the fields (Equation 2.141 in Di Francesco, Mathieu and Sénéchal's book).
As a consquence, Weinberg had to compute the variation of the action with respect to the Lorentz generators from scratch (including the substitution of the equations of motion).
Furthermor, I wanted to remark that the term depending on the variation of the space time coordinates in the general form of the Noether theorem is not due to the noninvariance of the Minkowski space time measure $d^4x$ as this measure is invariant under both translations and Lorentz transformations. The extra term is due to the dependence of the Lagrangian on the space time coordinates through its dependence on the fields.
Now, both authors use the derivation as a means of computation of the Belinfante & Rosenfeld 3-tensor whose divergence is to be added to the canonical energy momentum tensor to obtain the symmetric Belinfante energy momentum tensor. The principle upon which this computation is based is that the orbital part of the canonical conserved current corresponding the Lorentz symmetry must have the form:
$M^{\mu\nu\rho} = x^{\nu} T_B^{\mu\rho} - x^{\rho} T_B^{\mu\nu}$
with $T_B$ both conserved and symmetric (as can be checked by a direct computation), therefore, they arrange the extra-terms they obtained to bring the Lorentz canonical current to this form and as a consequence they obtain the required tensor to be added.
I wanted to add that both authors use the derivatives of the symmetry group parameters in their intermediate computations, but this is not required. The same currents can be obtained for variation with respect to global constant parameters. If the action were locally invariant (with respect to variable parameters), then the currents would have been conserved off-shell. This is the Noether's second theorem.
Finally, I want to refer you to the this article of Gotay and Marsden describing a method of obtaining a symmetric and (gauge invariant) energy-momentum tensor directly based on Noether's theory.
• Thanks for the link to Gotay/Marsden. It should be pointed out that this paper is precisely a formalisation of the idea that the stress-energy tensor is the gauge current for diffeomorphisms, which is also the idea underlying the usual derivation of the Noether charge by taking the parameter to depend on the spacetime point. – José Figueroa-O'Farrill Oct 5 '11 at 12:20
• Why are we looking for a symmetric tensor in the first place? – user7757 May 22 '14 at 6:42
• @ramanujan_dirac: The energy momentum tensor constitutes of the source term in Einstein equations: $R_{\mu\nu}-g_{\mu\nu}R = \frac{8\pi G}{c^4}T_{\mu\nu}$, and both the Ricci curvature and the metric tensor are symmetric. – David Bar Moshe May 22 '14 at 7:10
• – Qmechanic Jan 3 '15 at 16:17
The two derivations are indeed different, but the resulting object should be the same: it should be symmetric and conserved on-shell.
In fact, perhaps the cleanest way to derive it is to couple the theory to gravity and then vary the resulting action with respect to the metric. If we let $$S = \int_M d^4x \sqrt{-g} \mathcal{L}$$ denote the action of the theory coupled to gravity (i.e., put the theory on a lorentzian manifold by covariantising derivatives,...) then the Belinfante stress-energy tensor is defined (up to perhaps a constant) by $$T_{\mu\nu} = \frac{1}{\sqrt{-g}} \frac{\delta S}{\delta g^{\mu\nu}}~.$$
This form has the virtue that it is easy to see that if the theory is invariant under homotheties -- $\delta g_{\mu\nu} = \lambda g_{\mu\nu}$ for some constant $\lambda$ -- the stress-energy tensor is traceless.
• I am aware of this metric variation way of thinking about it. Can you kindly explain as to how are the two ways of thinking about the stress-energy tensor actually equivalent? For specific Lagrangians that I have worked with the final answers look very different! For any special case doing the calculation by my second alternative seems to completely obscure any connection to $T^{\mu \nu}$ - that factor never explicitly appears! – user6818 Oct 2 '11 at 19:53
• The equivalence is essentially the following. If you have a classical continuous symmetry and wish to gauge it, then the gauge field couples to the conserved current. Here you have the metric as playing the role of the gauge field for the diffeomorphism invariance, which is the translation symmetry of the original theory which has been gauged by coupling to gravity. – José Figueroa-O'Farrill Oct 2 '11 at 21:15
• I was inquiring about the equivalence between the two approaches which I stated. The Weinberg way of doing it where one sees only rigid transformations and there is nothing from the Jacobian and the di Francesco way of thinking where the symmetry is local and the current gets contributions from the Jacobian. Why are these two ways equivalent (or are they!?) – user6818 Oct 4 '11 at 1:26
• They are evidently equivalent in that they give rise to the same stress-energy tensor. I really don't understand the question. – José Figueroa-O'Farrill Oct 4 '11 at 8:04
• Thats precisely not very clear. The general expressions in the two books are clearly very different looking and for specific Lagrangians where I have tried the answers are obviously very different. No equivalence is obvious! Even conceptual its not clear that these two should be the same - the entire framework seems so different. – user6818 Oct 4 '11 at 20:37
|
2019-08-24 02:41:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9145454168319702, "perplexity": 266.1749473173635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319470.94/warc/CC-MAIN-20190824020840-20190824042840-00501.warc.gz"}
|
https://computergraphics.stackexchange.com/questions/12744/how-to-remove-elements-in-pdf-eps-vector-graphics-which-are-completely-hidden
|
# How to remove elements in PDF/EPS vector graphics which are completely hidden?
I have EPS/PDF figures which contain scatterplots consisting of a lot of dots resulting in files sizes from dozens to hundreds of MBytes. Now, many of the points are completely hidden, so I could remove them from the EPS/PDF.
The vector-graphics where generated in R (ggplot2 and generic R plot).
I am using GNU/Linux, is there a way to do this post generating the figures? Or do you know of some functionality in R?
And yes, I can export it to PNG/JPG/other, sure, but then my collaborator can not modify it without loss of quality (modification of text, etc.).
• Sortof, not really the computer does not actually know whats visible or not. Its realtively easy for you to do it manually in a vector editor because you can know whats visible and whats not. Part of the reason why this is easy to do porgrammatically in the application is that the clipping mask is just discarded in final rasterisation. But its not feasible to do this at any other stage of the process as it would require a lot of extra work to account for conrer cases. And also its not 100% the same in all the cases. The best place to cull is before plotting. Jun 2, 2022 at 11:41
This answer intends to flesh out joojaa's comment to a more complete answer.
It's possible. But probably not easy.
If you work with the EPS file, it's very likely to be ASCII or mostly ASCII so you can open it and edit with a text editor. Then you've got some obvious subtasks:
1. locate the points in the file
2. filter them somehow
To locate the points, it's all going to depend on exactly how the ggplot2 chooses to encode the points. They may be just decimal numbers or they may be something else. Level 2 PostScript added support for several kinds of binary encoding and binary tokens.
To filter the points. There are a few routes I can imagine, but I'm not sure which will come out easiest--or least challenging. You could modify the EPS program to not call any of the drawing operators, but just dump "surviving" points to a file or stdout with the = or print or writestring operators. Then, add a step which calls clip to determine if a point is "surviving" or maybe the infill would be useful for this. One issue is that in order "draw a point" in PostScript you really have to draw some other shape like a circle or rectangle. So when the program gets to the drawing phase, the point may no longer exist as just coordinates but a whole sequence of lines or curves which describe the actual shape being drawn. So far, all of these options involve modifying the PostScript code to steer or wrangle the points through a sieve instead of onto a display. If you get the code just right, then it should produce a small list of points which you can then edit back into the original EPS file.
Another option would be to write a much smaller PostScript program that just sets up the appropriate clip path (or just a regular path to use infill) and output the points. Then you could copy/paste the points out of the original, run them through this smaller, simpler sieve, and then paste them back in to the original.
Most of these steps could be done in some other, more convenient programming language assuming you can accomplish subtask 1 and find the points in the file to begin with.
In contrast, working with the PDF could provide some advantages, as well as some disadvantages. To start with, we're no longer in nice clean ASCII land. PDFs may contain a lot of just plain ASCII but there's a lot of binary as well (possibly a whole lot). PDF is not a programming language, just a flat data representation. But it's a flat data representation of printed output. So we may hit some of the same issues as working with PostScript, viz. the points may or may not exist in the file as just coordinates.
You might try passing the PDF through a tool like qpdf to get a readable ASCII file that can be edited and converted back to PDF.
After (some of) the above is done and working for a single file, you then will have the further task of automating the solution to run on multiple files. And depending on the details, that may make your process dependent on the details of gglot2's current behavior which may or may not be fully documented. At least if it depends upon the EPS being produced the same way forever into the future, that feels like a risky move.
I don't know much about R, but at the spot in the R code where you call ggplot2 to produce the output, like, you've got the points right there, right? I don't know what tools R has available for insideness testing, but for an axis-aligned box it ought to be pretty simple to code a function ( x>LL_x && x<UR_x && y>LL_y && y<UR_y ). If they're gone before they get to ggplot2, then it will have no opportunity to include them in the output.
Edit
The closest existing PS code that I know of was from a prototype program to visualize a Steinmetz solid. This was posted to usenet in 2012 (note: if you add custom CSS html-blob{white-space:pre}html-blob br{display:none}, you can view the original indentation in Google groups). It generates points with an O(n^3) algorithm but then saves the results in a file so it can alter the view without doing the generating step each time. So, it shows one way of passing points through a sieve (the function x y z f -> bool) and saving them in a separate file.
%!
% Point-field sampling
% with data caching (in a file),
% point-wise axial rotations,
% and perspective projection.
/fuzz 10000 def %the "grain" of eq
/eq {
fuzz mul round exch fuzz mul round oldeq
} def
/max {
2 copy lt { exch } if pop
} def
%x y z f bool
/f {
dup mul 3 1 roll %z^2 x y
dup mul 3 1 roll %y^2 z^2 x
dup mul 3 1 roll %x^2 y^2 z^2
2 index 2 index add %x^2 y^2 z^2 x^2+y^2
3 index 2 index add %x^2 y^2 z^2 x^2+y^2 x^2+z^2
3 index 3 index add %x^2 y^2 z^2 x^2+y^2 x^2+z^2 y^2+z^2
max %x^2 y^2 z^2 x^2+y^2 max(x^2+z^2,y^2+z^2)
max %x^2 y^2 z^2 max(x^2+y^2, max(x^2+z^2,y^2+z^2))
4 1 roll pop pop pop %max(...)
4 eq
} def
/filename (stein.pts) def
/low -2.2 def
/hi 2.2 def
%generate data by brute force, cache in file, plot
/pointfieldtocache {
/res exch def
/dt 1 res div def
/fuzz res .5 mul def
%fuzz affects the "closeness" of
%the equality test. a lower fuzz will allow more
%values to be equal.
% 'res' gives thin lines
% 'res .5 mul' gives wider "ribbons"
/outfile filename (w) file def
/outbuf 128 string def
low dt hi {
low dt hi {
low dt hi { % xW yW zW "world" coords
3 copy f {
3 copy 3 -1 1 { -1 roll
outbuf cvs outfile exch writestring
outfile (\n) writestring
} for %dump points to file
3 copy project
%2 copy exch = =
%2 copy transform exch = = ()=
2 copy 2 copy moveto dt .5 mul 0 360 arc moveto
fill
/flushpage where {pop flushpage} if
} if
pop
} for
pop
} for
pop
} for
outfile closefile
} def
%plot cached data from file
/pointfieldfromcache {
/res exch def
/dt 1 res div def
/infile filename (r) file def
/it 1 def
%cvx exec
%count 3 idiv {
{
{
infile token not {stop} if % bail-out
infile token not {stop} if % on any datafile issues
infile token not {stop} if
%3 copy
project
2 copy 2 copy moveto dt .5 mul 0 360 arc moveto
it res mod 0 oldeq {
fill /flushpage where {pop flushpage} if
} if
} loop
} stopped pop
%} repeat
} def
/pointfield { %check if there's a readable data file
{ filename (r) file closefile } stopped not
{ pop pop pointfieldtocache } ifelse %no! make one.
} def
% x y z ang -> x y' z'
/rotx {
/theta exch def
/z exch def
/y exch def
y theta cos mul
z theta sin mul sub
y theta sin mul
} def
% x y z ang -> x' y z'
/roty {
/theta exch def
/z exch def
/y exch def
/x exch def
x theta cos mul
y
x theta sin mul neg
} def
% x y z ang -> x' y' z
/rotz {
/theta exch def
/z exch def
/y exch def
/x exch def
x theta cos mul
y theta sin mul sub
x theta sin mul
z
} def
% Eye coords
/ex .2 def %a little x-y skew adds "drama"
/ey .2 def
/ez 5 def
% x y z -> X Y
/project {
ang roty
ang .25 mul rotx
/z exch def
/y exch def
/x exch def
1 ez z sub div
x ez mul z ex mul sub
1 index mul
y ez mul z ey mul sub
3 2 roll mul
} def
10 10 360 {
/ang exch def
%matrix currentmatrix
300 400 translate
100 100 scale
60 pointfield
%setmatrix fill
/flushpage where {pop flushpage} if
showpage
} for
• Thanks much for your detailed answer luser droog! I actually was looking for a existing solution. I do not have the time to implement something on my own, though that would be interesting of course. Jun 7, 2022 at 6:01
• No problem. For posterity, I've added some code that does some of the things needed for subtask 2. PostScript is a programming language, so any Turing-computable computation can be done ... with effort. But for the larger task you describe, it really feels more profitable to attack the data higher up the food chain. Modifying the generated PS code is extra difficulty just to make something fragile (although sometimes results can be achieved with text transformation or simple changes to the program, so don't rule it out altogether). Jun 7, 2022 at 12:51
|
2023-04-01 13:27:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5466293692588806, "perplexity": 5213.939049348525}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950030.57/warc/CC-MAIN-20230401125552-20230401155552-00710.warc.gz"}
|
https://codereview.stackexchange.com/questions/67330/filter-by-a-transformed-list-then-untransform-the-result
|
# Filter by a transformed list, then untransform the result
Both exercises have a common pattern of "filter by a transformed list, then untransform the result". See skip and localMaxima.
-- exercise 1
skips :: [a] -> [[a]]
skips xs = map (\n -> skip n xs) [1..(length xs)]
skip :: Integral n => n -> [a] -> [a]
skip n xs = map snd $filter (\x -> (fst x) mod n == 0) (zip [1..] xs) --exercise 2 isLocalMaximum :: Integral a => (a,a,a) -> Bool isLocalMaximum (a,b,c) = b > a && b > c sliding3 :: [a] -> [(a,a,a)] sliding3 xs@(a:b:c:_) = (a,b,c) : sliding3 (tail xs) sliding3 _ = [] localMaxima :: Integral a => [a] -> [a] localMaxima xs = map proj2$ filter isLocalMaximum (sliding3 xs)
where proj2 (_,b,_) = b
-- *Main> filter isLocalMaximum (sliding3 [1,5,2,6,3])
-- [(1,5,2),(2,6,3)]
My instincts say that I could implement both of these something like this:
localMaxima' :: Integral a => [a] -> [a]
localMaxima' xs = filterBy isLocalMaximum sliding3 xs
if only I could implement filterBy
filterBy :: (b -> Bool) -> ([a] -> [b]) -> [a] -> [a]
filterBy p f as = as'
where indexedAs = zipWith (,) [0..] as
indexedBs = zipWith (,) [0..] (f as)
indexedBs' = filter p indexedBs -- doesn't typecheck; how can we teach p about the tuples?
indexes = map fst indexedBs
as' = map (\i -> snd (indexedAs !! i)) indexes
It's also slower than just writing out a fold. Is this all a bad idea? I've always considered fold a low level recursion operator and always try to structure in terms of higher level map and filter but maybe I am misunderstanding.
My Haskell level is: understand LYAH but not written much code.
This is a homework to CIS 194 (2013 version) (though I am not taking the class, I am working through the material on my own)
## Exercise 1
If you have a lambda (or any function) of type (a,b)->c, instead of writting it like this (\x->...), you can write it like this (\(x,y)->...). It will remove the need of calling fst like in your skip function. You should change the name skip too, because it already exist in the prelude and it's kinda unclear because they have different meaning. I would call it something like skipEvery
## Exercise 2
as is a keyword in Haskell, used for module importation. It will compile if you use it as a variable, but if you a text editor with syntax highlighting, it will be weird
My instincts say that I could implement both of these something like this:
localMaxima' :: Integral a => [a] -> [a]
localMaxima' xs = filterBy isLocalMaximum sliding3 xs
Your instinct wasn't wrong, but there is an alternative to filterBy, which is mapMaybe. The resulting code would be something like this
whenMaybe p x = if p x then Just x else Nothing
localMaxima' = mapMaybe (whenMaybe isLocalMaximum) . sliding3
You should use zip instead of zipWith (,), because both operations are equivalent
|
2021-10-24 02:38:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7357710003852844, "perplexity": 4291.053571871875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585837.82/warc/CC-MAIN-20211024015104-20211024045104-00111.warc.gz"}
|
https://www.thenakedscientists.com/forum/index.php?topic=57835.0
|
# Hi
• 12 Replies
• 1521 Views
0 Members and 1 Guest are viewing this topic.
#### Beer w/Straw
• Jr. Member
• 11
• Transcendental Ignorance!
##### Hi
« on: 23/06/2015 13:07:48 »
$$\frac{1}{\frac{1}{0}}=0$$
This is my first post; didn't see an introduction thread.
Are losing theirs and blaming it on you..
http://www.poetryfoundation.org/poem/175772
Anyway, this forum looks like it has a lot of goodies to unearth. Hoping to be content and not get perturbed.
As I'm typing this I believe it may come off as a bit sucky [] But I guess that's how I kinda feel ATM.
Anyway, how can I upload an avatar? From what I've seen nobody has them, although I hadn't gone through the entire forum yet.
And again, Hi [] I think I'm a bit happier now.
#### PmbPhy
• Neilep Level Member
• 2788
##### Re: Hi
« Reply #1 on: 23/06/2015 13:14:44 »
Quote from: Beer w/Straw
$$\frac{1}{\frac{1}{0}}=0$$
That expression is meaningless because the denominator contains an undefined term, i.e. 1/0 is undefined because division by zero is undefined. And it's not merely undefined because someone chose not to define it. It's undefined because any attempt to define it leads to meaningless notions somewhere along the line.
Quote from: Beer w/Straw
This is my first post; didn't see an introduction thread.
Welcome to the forum. []
And no. There are no avatars here.
#### Beer w/Straw
• Jr. Member
• 11
• Transcendental Ignorance!
##### Re: Hi
« Reply #2 on: 23/06/2015 13:26:02 »
http://www.wolframalpha.com/input/?i=1%2F%281%2F0%29&lk=4&num=1
http://www.wolframalpha.com/input/?i=1%2F0
But I wasn't using real numbers.
Why doesn't this site have avatars when when it seems to have lots besides?
#### PmbPhy
• Neilep Level Member
• 2788
##### Re: Hi
« Reply #3 on: 23/06/2015 13:59:38 »
Quote from: Beer w/Straw
http://www.wolframalpha.com/input/?i=1%2F%281%2F0%29&lk=4&num=1
http://www.wolframalpha.com/input/?i=1%2F0
Wolfram gave you the wrong answers. They made an error when they treated infinity as a number. Let me clarify;
You wrote 1/(1/0). To evaluate this you need to first reduce 1/0 to a simpler expression or number. Since 1/0 is undefined you can't correctly take another step. Suppose you think that 1/0 actually does equal infinity. Then is 1/0 > 0 or is 1/0 < 0. That is to say is the infinity positive or negative?
Quote from: Beer w/Straw
But I wasn't using real numbers.
Then you really should have made that clear by saying so. Otherwise how were your readers supposed to know that? And if you weren't using real numbers then what were you using, complex numbers? If so then the numbers you wrote down were still real and division isn't defined for anything other than real and complex numbers. And you chose to use www.wolframalpha.com which is only defined for real numbers. At least for what you used as input.
Anyway, why did you start your first post with that and not even comment on why you put it there? What was its purpose anyway?
Quote from: Beer w/Straw
Why doesn't this site have avatars when when it seems to have lots besides?
You'll have to ask the founder of the forum. I imagine its because this is a serious forum and avatars make it less serious given what some people use as avatars.
« Last Edit: 23/06/2015 14:18:24 by PmbPhy »
#### Beer w/Straw
• Jr. Member
• 11
• Transcendental Ignorance!
##### Re: Hi
« Reply #4 on: 23/06/2015 14:53:44 »
MY post was a jumble of things. Didn't know if the TEX command would actually work.
Also having some unrelated things to contend with, so I may make a more thorough response in like three or four hours... And it was complex infinity, not infinity, if that makes any difference to you.
« Last Edit: 23/06/2015 14:55:50 by Beer w/Straw »
#### PmbPhy
• Neilep Level Member
• 2788
##### Re: Hi
« Reply #5 on: 23/06/2015 16:42:08 »
MY post was a jumble of things. Didn't know if the TEX command would actually work.
Also having some unrelated things to contend with, so I may make a more thorough response in like three or four hours... And it was complex infinity, not infinity, if that makes any difference to you.
I know what it is. I looked it up on Wolfram's site. I've never seen it used anywhere else except by them however and I'm a mathematician as well as a physicist. I studied complex analysis as an undergraduate and nowhere did I ever see any text define that term. E.g. use Google to search for it and you won't find it anywhere else.
I just looked it up in the math dictionary:
http://dlx.bookzz.org/genesis/1152000/31765df43415cea3df8c165975d19db7/_as/[Emma_Previato]_Dictionary_of_Applied_Math_for_Eng(Bokos-Z1).pdf
and it wasn't in there. The term "infinity" really means "unbounded" so whether its unbounded in the complex plane or on the real axis makes no difference.
I don't mean to be petty. I'm just trying to be helpful. When I see someone being careless like that by dividing by zero a warning sign goes up in my mind. Just trying to help, friend. []
#### Beer w/Straw
• Jr. Member
• 11
• Transcendental Ignorance!
##### Re: Hi
« Reply #6 on: 23/06/2015 17:53:08 »
I don't want to quote you, making this thread more quote than text. And yes, I wanted to be provocative with the OP but not controversial.
My PC running Mathematica 10 I wont turn on ATM as the power supply is dirty and want to clean it first. I don't want to risk it blowing up and taking other components with it. I do believe it would render the same output as WolframAlpha from the same input.
Maple 2015 gives me "Error, numeric exception: division by zero" for the input: 1/0 .
With Google "Complex Infinity" I can get a link to a MATLAB definition"
Quote
complexInfinity represents the only non-complex point of the one-point compactification of the complex numbers.
Mathematically, complexInfinity is the north pole of the Riemann sphere, with the unit circle as equator and the point 0 at the south pole.
With respect to arithmetic, complexInfinity behaves like "1/0". In particular, non-zero complex numbers may be multiplied or divided by complexInfinity or 1/ complexInfinity. Adding complexInfinity to a finite number yields again complexInfinity.
With respect to arithmetical operations, complexInfinity is incompatible with the real infinity.
And from Free Dictionary
Quote
Riemann Sphere
(redirected from Complex infinity)
Also found in: Wikipedia.
Riemann sphere
[′rē‚män ‚sfir]
(mathematics)
The two-sphere whose points are identified with all complex numbers by a stereographic projection. Also known as complex sphere.
McGraw-Hill Dictionary of Scientific & Technical Terms, 6E, Copyright © 2003 by The McGraw-Hill Companies, Inc.
http://encyclopedia2.thefreedictionary.com/Complex+infinity
Anyway, I don't have MATLAB, but I'm positive Mathmatica would give me the same answer as WolframAlpha from the same input.
And thank you for your comments. Before my OP I was more distracted with doing my profile -I'm 2012 years old you know!
#### jccc
• Hero Member
• 990
##### Re: Hi
« Reply #7 on: 23/06/2015 18:37:09 »
cool poem
enjoying it much
man drink beer using mouth
#### chiralSPO
• Global Moderator
• Neilep Level Member
• 1913
##### Re: Hi
« Reply #8 on: 23/06/2015 20:02:27 »
if we evaluate this strictly as 1/(1/0) then there is no way to give the answer, but with a little algebraic manipulation (multiplying by another undefinable, 0/0) we do indeed get (0*1)/(1*0/0), which simplifies to 0/1 = 0. However, one must beware, lots of funny things can be shown when dividing by zero (I can prove 1 = 2, and from there any rational number = any other rational number).
Given no context, I agree with pmbphy, that dividing by 0 gives a nonsensical answer, but if there were some context (what type of zero: is this something where the limit approaches zero? or was this zero introduced to an otherwise soluble expression?) then the expression could be evaluated...
Welcome to the forum!
#### PmbPhy
• Neilep Level Member
• 2788
##### Re: Hi
« Reply #9 on: 23/06/2015 20:38:51 »
Quote from: chiralSPO
if we evaluate this strictly as 1/(1/0) then there is no way to give the answer, but with a little algebraic manipulation (multiplying by another undefinable, 0/0) we do indeed get (0*1)/(1*0/0), which simplifies to 0/1 = 0.
What you assumed was 0/0 = 1 which isn't 1, it's undefined. You can't assume that anything divided by itself equals 1 because the division must be well defined, which 0/0 is not.
Quote from: chiralSPO
Given no context, I agree with pmbphy, that dividing by 0 gives a nonsensical answer, but if there were some context (what type of zero: is this something where the limit approaches zero? or was this zero introduced to an otherwise soluble expression?) then the expression could be evaluated...
The only time it makes any sense is when you use limits as you indicated here. Then of course you have to use L'HOpital's rule. The fact that expressions of an indeterminate form don't make any sense and can't be defined is because there are different values that these things can approach in a limit.
#### Beer w/Straw
• Jr. Member
• 11
• Transcendental Ignorance!
##### Re: Hi
« Reply #10 on: 23/06/2015 23:18:00 »
cool poem
enjoying it much
man drink beer using mouth
once i was half drunk trying to showoff, drinking beer with my nose. no straw.
Well, I'm glad you liked the poem.
#### jccc
• Hero Member
• 990
##### Re: Hi
« Reply #11 on: 28/06/2015 03:00:17 »
i want to thank you again, the poem is like a fountain in desert. i drink it many times already, long way to go. if you even find a man like that, i want to know/meet him.
have the courage say your heart?
#### Beer w/Straw
• Jr. Member
• 11
• Transcendental Ignorance!
##### Re: Hi
« Reply #12 on: 08/07/2015 13:44:41 »
I haven't utilized this forum so great thus far. I've been more looking at other fora and reading the smack. Some people I don't rightly know if they are seriously deluded, being dumb on purpose, or just another crank.
[]
|
2017-01-20 04:11:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8008267283439636, "perplexity": 2692.40084786067}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280774.51/warc/CC-MAIN-20170116095120-00423-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://ask.libreoffice.org/en/questions/157201/revisions/
|
# Revision history [back]
### Sum of a row of a named range
In LibreOffice Calc, how can I sum a row of a named range?
Assume I have a named range, foo, that looks like this:
1 2 3
4 5 6
7 8 9
I'd like to add another column with a sum each row in foo, like this:
1 2 3 | 6
4 5 6 | 15
7 8 9 | 24
### Sum of a row of a named range
In LibreOffice Calc, how can I sum a row of a named range?
Assume I have a named range, foo, that looks like this:
1 2 3
4 5 6
7 8 9
I'd like to add another column with a sum each row in foo, like this:
1 2 3 | 6
4 5 6 | 15
7 8 9 | 24
I am looking for a way to do this only by referring to foo, and not directly to the cell locations.
|
2019-10-19 15:34:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29140129685401917, "perplexity": 473.8357641836425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986696339.42/warc/CC-MAIN-20191019141654-20191019165154-00056.warc.gz"}
|
https://feralmachin.es/posts/lightmode_support.md
|
# Feral Machines
yeggogological meditations
# A Clean, Well-Lighted Place
Length: 354 words
Tags:
For a long time I used dark themes almost exclusively. The habit probably dates back to my childhood. My family's first computer was an old XT, with an amber (P3 phosphor) monitor, and this more or less fixed my aesthetic sense of what a computer screen should look like: light monospace type on a dark background. The stylesheet I first wrote for this blog reflects those preferences.
I still prefer this combination for terminal work and writing code, but (though it took me some time to admit this) I've never quite liked it for reading or writing prose. I really do love a good typeface, and the very best -- I'm thinking of Bembo or EB Garamond -- look dreadful when inverted into light-on-dark. Seriffed typefaces in general suffer when displayed in this way, with a few exceptions. I think Latin Modern Mono survives just fine (slab seriffed and monospace fonts tend to fare well), and that is indeed the font I chose on for this site's dark theme. (It appears in the monospaced elements of the light theme as well, like the metadata block.) The quality of LCD screens has improved enough over the past several years that reading a well-illuminated screen seems far less painful than it used to be.
Anyway, after a couple hours of tinkering around with the CSS, I set up both dark and light mode stylesheets for this site, and then configured the main stylesheet to select the CSS on the basis of the user's system preferences.
Here's a peek at the recently fine-tuned dark mode:
And here's the brand new light mode:
I don't think there's anyone else out there blogging on P'log, yet, but I've included both stylesheets in the contents.example/ directory, for general use.
|
2022-01-24 09:57:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19095738232135773, "perplexity": 3371.9421843032737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304528.78/warc/CC-MAIN-20220124094120-20220124124120-00679.warc.gz"}
|
http://math.stackexchange.com/questions/248500/sheaf-of-regular-functions-on-an-affine-k-variety
|
# Sheaf of regular functions on an affine $k$-variety
We would like to generalize this question when the base field $k$ is not necessarily algebraically closed.
We use the definitions of this question. Let $X$ be a $k$-closed subset of $\Omega^n$. Let $I(X) = \{f \in k[X_1,\dots,X_n]| f(p) = 0$ for every $p \in X\}$. Let $A = k[X_1,\dots,X_n]/I(X)$. Let $\mathcal{O}_X$ be the sheaf of $k$-regular functions on $X$. Are the following assertions true?
1) Let $x \in X$. $\mathcal{O}_x$ is canonically isomorphic to $A_{\mathfrak{p}_x}$, where $\mathfrak{p}_x = \{f \in A|\ f(x) = 0\}$.
2) Let $f \in A$. $\Gamma(D(f), \mathcal{O}_X)$ is canonically isomorphic to $A_f$, where $D(f) = \{x \in X| f(x) \neq 0\}$.
-
Please leave a comment explaining the reason for the downvote so that I can improve the question. – Makoto Kato Dec 1 '12 at 9:11
|
2015-09-04 04:43:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9868877530097961, "perplexity": 94.94829765887889}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645335509.77/warc/CC-MAIN-20150827031535-00313-ip-10-171-96-226.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/action-of-clifford-elements-on-vectors-spinors.257338/
|
# Action of Clifford-elements on vectors & spinors
1. Sep 18, 2008
### blue2script
Hi all!
I am currently preparing a talk about Clifford algebras and pin/spin-groups. Since half the audience will consist of physicians (as I am myself) I also want to get more into the connection of the mathematical definitions and derivations (as one may find in Baker, "Matrix groups" or, more for the physical liking, "Analysis, Manifolds and Physics (vol. 2)" of Choquet-Bruhat Y. & Dewitt-Morette) to the physical tools of everydays use, like Weyl-spinors, Majorana representation, behaviour of spinors under rotations.
Especially the last point is unclear to me. The only somewhat good explanation I could find was in Wikipedia, article "Spinor" (http://en.wikipedia.org/wiki/Spinor" [Broken]). Under >> "Examples" >> "Two dimensions" it is written that the action of elements on vectors is
$$\gamma\left(u\right) = \gamma u\gamma^*$$
whereas on spinors it is
$$\gamma\left(\phi\right) = \gamma \phi$$.
So the spinor shell be just a complex number. But where do these actions come from? What distinguishes, in this special case, vector and spinor? I am somewhat confused.
Thanks everybody helping me out!
Blue2script
Last edited by a moderator: May 3, 2017
2. Sep 18, 2008
### Peeter
I'm not sure what that article what that article means by action on a spinor, but as used, "the action on a vector" part is just a rotation.
If u lies in the plane of the bivector $i = \sigma_1\sigma_2$, then yes, this is just a complex product, but this action also works as a rotation in higher dimensions. This is because both $i$ and the scalar component of the spinor will both commute with any component perpendicular to the plane.
For example, if you consider the split of a vector into parts parallel to the plane and perpendicular to the plane
$$u = u_\parallel + u_\perp = (u \cdot i) \frac{1}{i} + (u \wedge i) \frac{1}{i}$$
and a spinor
$$\gamma = \exp(i\theta/2) = \cos\theta/2 + i\sin\theta/2$$
the action is linear, so both components can be considered separately. For the parallel to the plane part one has (and you can verify this by multiplying the bits out)
$$\exp(i\theta/2) u_\parallel \exp(-i\theta/2) = \exp(i\theta) u_\parallel$$
... this takes the form of a normal complex rotation.
for the component out of the plane one has:
$$\exp(i\theta/2) u_\perp \exp(-i\theta/2) = \exp(i\theta/2) \exp(-i\theta/2) u_\perp = u_\perp$$
Thus the action produces a rotation around a vector normal to the plane, but formulates this in a way that works in any dimension (like 4D where one can't express this normal in an unambigous fashion.)
Geometrically, a spinor action of this sort can rotate higher grade elements such as planes in an intuitive fashion. I can't imagine geometrically what it would mean to apply such an action to a mixed grade object such as this 2,0 spinor, nor how that produces the "complex product" mentioned in the wiki article.
fwiw. A reference that I'd recommend for this material is 'Geometric Algebra for Computer Science'. It doesn't have the applied to physics focus but is much easier to understand then the Doran/Lasenby text for example.
|
2018-08-19 00:17:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7978966236114502, "perplexity": 808.6431435200574}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213903.82/warc/CC-MAIN-20180818232623-20180819012623-00505.warc.gz"}
|
https://codereview.stackexchange.com/questions/113694/2-player-battleship-game
|
# 2 Player Battleship Game
This is an adaption of a single player Battleship game. I made it two player, and I also tried to implement OOP and DRY principles. I would like my code to be reviewed for OOP, DRY, PEP8, and overall best practices. I am new to development (about 1 month strong), so any constructive feedback would be greatly appreciated.
One of my biggest light-bulb moments was when I realized that I was stuck in a loop because my loop method was a numeric value outside of the class method. Once I put it in the class method and made it a list, I could then pass the data back and forth and get a count of each players tries by using len(loop).
from random import randint
class Person(object):
def __init__(self, name, turn, loop):
self.name = name
self.turn = turn
self.loop = loop
@classmethod
def create(cls, turn):
while True:
name = input("\nWhat is the name of Player %s? " % turn)
if name.isalpha():
break
print("\nNice to meet you %s. " % name)
print("It will be fun to play Battleship!\n")
loop = []
return cls(name, turn, loop)
@staticmethod
def welcome(name1, turn1, name2, turn2):
print("It's decided that")
print("%s will take the %sst turn" % (name1, turn1))
print("and %s will take the %snd turn." % (name2, turn2))
def salutation(name1, name2, loop1, loop2):
if (len(loop1)) and (len(loop2)) == 5:
print("Thanks for playing %s and %s." % (name1, name2))
print("Hopefully we will play again, soon!")
elif (len(loop1)) > (len(loop2)):
print("Excellent win, %s!" % name1)
print("Better luck next time, %s." % name2)
else:
print("Excellent win, %s!" % name2)
print("Better luck next time, %s.\n" % name1)
class Board(object):
def __init__(self, surface, squares):
self.surface = surface
self.squares = squares
@classmethod
def create(cls, name):
while 1:
squares = input("\n%s, how big would you like your board to be (3-5)? " % name)
try:
squares = int(squares)
except (TypeError, ValueError):
print("\nPlease enter a number between 3 and 5.")
continue
if squares >= 3 and squares <= 5:
break
surface = []
for i in range(squares):
surface.append((["O"] * squares))
return cls(surface, squares)
def random_row(surface):
return randint(1, len(surface))
def random_col(surface):
return randint(1, len(surface[1]))
def rules():
print("\nIn this game, you will pick a number")
print("between 1 and your board length for each row and")
print("coloumn. Then, if your guess matches")
print("the randomly generated location. You win.")
print("Each player has 5 attempts to guess correctly.")
def one_play(name, turn, surface, row, col, loop):
print("\nOk, %s. Go ahead and take turn %s." % (name,(len(loop)+1)))
print_board(name, surface)
guess_row = (input("Guess Row (1-%s): " % len(surface)))
guess_col = (input("Guess Col (1-%s): " % len(surface)))
try:
guess_row = int(guess_row)
guess_col = int(guess_col)
if guess_row == row and guess_col == col:
print("\nCongrats! You sunk my Battleship!\n")
surface[(guess_row)-1][(guess_col)-1] = "B"
loop.extend((1, 2, 3, 4, 5, 6))
elif ((guess_row < 1 or guess_row > (len(surface))) or
(guess_col < 1 or guess_col > (len(surface)))):
print("\nOops, that's not even on the board.\n")
elif (surface[(guess_row)-1][(guess_col)-1] == "X"):
print("\nYou guessed that one already.\n")
else:
print("\nYou missed my Battleship!\n")
surface[(guess_row)-1][(guess_col)-1] = "X"
except (TypeError, ValueError):
print("\nYou failed to answer the question correctly.")
loop.append(1)
return (surface, loop)
# These are functions, not methods
def print_board(name, surface):
print("\nHere is the board for %s." % name)
for i in surface:
print(" ".join(i))
print("")
def play_battleship():
print("\n\n\n\nWelcome to Battleship!")
# We take the user input and create the Players
Player1 = Person.create(1)
Player2 = Person.create(2)
# Assign the names to variables
name1 = Player1.name
name2 = Player2.name
# Assign each player's turn.
turn1 = Player1.turn
turn2 = Player2.turn
# It's always good to say, "Hello."
Person.welcome(name1, turn1, name2, turn2)
# We create the boards, which are lists.
Board1 = Board.create(name1)
Board2 = Board.create(name2)
# Print the rules.
Board.rules()
# Store the random row and column in a variable for each player.
ship_row1 = Board.random_row(Board1.surface)
ship_col1 = Board.random_col(Board1.surface)
ship_row2 = Board.random_row(Board2.surface)
ship_col2 = Board.random_col(Board2.surface)
# Place each player's list in a variable.
surface1 = Board1.surface
surface2 = Board2.surface
# Keep track of each player's loop for flow control with a list.
loop1 = Player1.loop
loop2 = Player2.loop
while (len(loop1) < 5) and (len(loop2) < 5):
Board.one_play(name1, turn1, surface1,
ship_row1, ship_col1, loop1)
# Here, we check the length of loop1 to see if Player1 won.
# If so, we break. the loop.
if (len(loop1))>= 6:
break
else:
Board.one_play(name2, turn2, surface2,
ship_row2, ship_col2, loop2)
Person.salutation(name1, name2, loop1, loop2)
# Ask the player to play again.
while 1:
again = input("\n\nWould you like to play again: ")
if "y" in again.lower():
play_battleship()
else:
break
play_battleship()
• Does this actually work as intended? The reason I'm asking is that all of your methods are missing the reference to self, which seems really strange. – holroy Dec 12 '15 at 1:55
• It works as I intended it. I tried to go back in and put <pre>self<pre/> in some of the methods, but then it would give me an "missing argument" error. They may need to be classified as normal functions, but I am not totally sure. – kcwagenseller Dec 12 '15 at 1:57
• Hmm... I don't have time right now, but I do believe you have missed on the OOP aspect of it and are using something in between OOP and ordinary functions. Your code shouldn't have used that much of Board.XXXX, but it should have used method calling on instances. But don't change the code now, as it is working, but be forewarned that to get it proper OOP you have some changes ahead! – holroy Dec 12 '15 at 2:02
Some high-level comments first:
• There are lots of strings, but no docstrings or comments. That makes it very hard to tell what should be happening. Writing good documentation makes it much easier to read, review and maintain code – get into that habit.
• Your Person class knows too much about Battleship. It’s printing things very specific to a game of battleship. Ideally it should be a self-contained class.
Among other things:
• It prints “It will be fun to play Battleship!”
• The welcome() method assumes a two player game. This class would be more useful if it could be used for games with an arbitrary number of players, and then the game implements a welcome() method and knows the list of players. That allows different games to have different numbers of players and/or different ordering schemes.
• Likewise, the salutation() method doesn’t really belong on this class. And how does it know to stop when loop1 is empty and loop2 is five long?
• The turn and loop attributes feel like something that should be managed in a game class, not by a person.
• Custom classes should implement a __repr__() method. This is really helpful for debugging. As a simple case, something that can be eval’d to get an equivalent object. For example:
class Person:
def __repr__(self):
return '%s(%r, %r, %r)' % (self.__class__.__name__,
self.name,
self.turn,
self.loop)
• Your Board class is quite weird. I don’t see any instance methods (methods whose first argument is self), just an assortment of disconnected functions and class methods. This can be tidied up. For example:
def random_row(self):
return randint(1, len(self.surface))
One of the purposes of OOP is to keep shared state together, but as far as I can tell, the shared state initialised in __init__ is never actually used.
This class also knows too much about battleship.
I would make Board into a generic game-board class, which supports a 2D grid of points (possibly not square) of arbitrary size. Then have a BattleshipBoard or BattleshipGame class which has the specialised logic (e.g. 3 ≤ size ≤ 5) for a game of battleship.
• The play_battleship() code should be a method of this BattleshipGame class, so that you can share state about the players and boards between different calls. It’s quite messy at the moment.
Some smaller suggestions:
• The comment
# These are functions, not methods
is clear from reading the code. You don’t need it.
• Be careful with validating names. It’s very hard to determine what is and isn’t a valid name. Your simple isalpha() check will exclude names that are perfectly valid:
>>> 'Jean-Luc Picard'.isalpha()
False
>>> 'علاء الدين'.isalpha()
False
>>> '岩田 聡'.isalpha()
False
Names are hard to get right. It may well be easier to skip trying to validate, and just check that the user enters something printable.
• In the salutation() method, it’s not obvious what the logic is supposed to be – a comment would help. Also, if (len(loop1)) would be more idiomatically written as if loop1.
• In the rules() method of Board, you’ve misspelt “column” as “coloumn”.
• Prefer longer, more expressive variable names over single letters. It usually makes your code more readable. For example:
def print_board(name, surface):
for row in surface:
print(' '.join(row))
Although it would be even better if the Board class implemented a __str__ method which gave a pretty-printed representation of a board.
• Thanks for all the helpful feedback. I learned alot, just from reading your post and doing some subsequent research. I will spend some time rewriting this according to your valuable input. – kcwagenseller Dec 12 '15 at 18:42
|
2019-11-12 07:16:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32058224081993103, "perplexity": 5218.4244522087765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664752.70/warc/CC-MAIN-20191112051214-20191112075214-00111.warc.gz"}
|
http://aperiodical.com/page/2/
|
# Vi Hart is crowdfunding
If you appreciate the work of internet mathematician and hyperbolic virtual reality pioneer Vi Hart, or even if you’ve never heard of her before, you can now help support her work by subscribing to her Patreon. Vi Hart has never put any adverts on her videos or charged for her work until now, but since she’s stopped being employed by people who support that, she’s in need of your help. Check out the video below for details, or click the link below that to add your support.
Vi Hart’s Patreon page
# HLF Blogs: Is mathematics idealistic or realistic?
In September, Katie and Paul spent a week blogging from the Heidelberg Laureate Forum – a week-long maths conference where current young researchers in maths and computer science can meet and hear talks by top-level prize-winning researchers. For more information about the HLF, visit the Heidelberg Laureate Forum website.
5th Heidelberg Laureate Forum 2017, Heidelberg, Germany, Picture/Credit: Christian Flemming/HLF
The closing talk of the HLF’s main lecture programme (before the young researchers and laureates head off to participate in scientific interaction with SAP representatives to discuss maths and computer science in industry) was given by Fields Medalist Steve Smale.
# Ditching the fifth axiom (video)
Watch geometer/topologist Caleb Ashley explain the parallel postulate on Numberphile.
# “Pariah Moonshine” Part I: The Happy Family and the Pariah Groups
Being a mathematician, I often get asked if I’m good at calculating tips. I’m not. In fact, mathematicians study lots of other things besides numbers. As most people know, if they stop to think about it, one of the other things mathematicians study is shapes. Some of us are especially interested in the symmetries of those shapes, and a few of us are interested in both numbers and symmetries.
# Footballs on road signs: an international overview
I’m an old fashioned manager, I write the team down on the back of a fag packet and I play a simple 4-4-2.
• Mike Bassett, England Manager
I’m very much like Mike Bassett: I like standing on the terraces, I like full-backs whose main skill is kicking wingers into the ad hoardings, and – most of all – I like geometrically correct footballs.
# @standupmaths’ petition has had a response from the government
Friend of the site Matt Parker recently made headlines because of his UK Government Petition to correct the heinous geometrical oddity that is the UK Tourist sign for a football ground. In the standard sign, somehow a sheet of tessellating hexagons is depicted as wrapping around a sphere in a highly improbable (and provably impossible) way.
The petition has achieved a modicum of success, in that it’s passed the 10,000 signatures required to elicit a response from the government. Sadly, the response isn’t quite what you’d like to hear.
# Stirling’s numbers in a nutshell
This is a guest post by researcher Audace Dossou-Olory of Stellenbosch University, South Africa.
In assignment problems, one wants to find an optimal and efficient way to assign objects of a given set to objects of another given set. An assignment can be regarded as a bijective map $\pi$ between two finite sets $E$ and $F$ of $n\geq 1$ elements. By identifying the sets $E$ and $F$ with $\{1,2,\ldots, n\}$, we can represent an assignment by a permutation.
|
2017-11-22 13:00:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29645830392837524, "perplexity": 2663.179146251744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806586.6/warc/CC-MAIN-20171122122605-20171122142605-00151.warc.gz"}
|
https://www.brainyresort.com/en/ph-calculation-solutions-of-polybasic-acids-complete/
|
# pH Calculation - solutions of polyprotic acids (complete)
Polyprotic acids are, as the name says, those acids which in their molecule contain more than one acidic hydrogen. Polyprotic acids ionize in more stages.
Let's take for example H2S, a 'diprotic acid:
$H_2S&space;+&space;H_2O&space;\rightleftharpoons&space;HS^-&space;+&space;H_3O^+$
$K_{a1}&space;=&space;\frac{[H_3O^+][HS^-]}{[H_2S]}&space;=&space;1,1\cdot10^{-7}$
$HS^-&space;+&space;H_2O&space;\rightleftharpoons&space;S^{2-}&space;+&space;H_3O^+$
$K_{a2}&space;=&space;\frac{[H_3O^+][S^{2-}]}{[HS^-]}&space;=&space;1,3&space;\cdot&space;10^{-14}$
The overall expression is:
$H_2S&space;+&space;H_2O&space;\rightleftharpoons&space;S^{2-}&space;+&space;2H_3O^+$$K_{tot}&space;=&space;\frac{[H_3O^+]^2[S^{2-}]}{[H_2S]}$ The overall reaction constant is just the result of the product of the two dissociation constants:$K_{tot}&space;=&space;\frac{[H_3O^+]^2[S^{2-}]}{[H_2S]}&space;=&space;K_1\cdot&space;K_2$
Generally, the first dissociation constant is way larger than the second dissociation constant, and right for this reason the majority of $[H_3O^+]{\color{Blue}&space;}$ in solution derives from the first dissociation.
There are cases in which the first dissociation is only partial (Ka1 relatively small) and other cases where the first dissociation is practically quantitative (Ka1 is great). These two different contexts require two different approaches for what concern the calculation of pH.
case 1 → first dissociation not quantitative
We're gonna start by analyzing the first case, which is right the one of hydrogen sulfide. Lets' suppose we have an aqueous solution of 0.1 M H2S .
We firstly have to calculate $[H_3O^+]{\color{Blue}&space;}$ which derives from the first dissociation:
$H_2S&space;+&space;H_2O&space;\rightleftharpoons&space;HS^-&space;+&space;H_3O^+$$K_{a1}&space;=&space;1,1\cdot10^{-7}$
In order to do this, we consider $H_2S$ as a 'weak monoprotic acid and we use the approximate formula for the calculation of $[H_3O^+]{\color{Blue}&space;}$ :
$[H_3O^+]&space;\cong&space;\sqrt{K_{a1}\cdot&space;C_a}&space;=&space;\sqrt{1,1\cdot10^{-7}\cdot0,1}&space;=&space;1,05&space;\cdot&space;10^{-4}M$
We have now to calculate $[H_3O^+]{\color{Blue}&space;}$ resulting from the second dissociation equilibrium:
Substituting in the expression of the second one has dissociation constant:
$\bg_white&space;K_{a2}&space;=&space;\frac{[H_3O^+][S^{2-}]}{[HS^-]}&space;=&space;\frac{[1,4\cdot&space;10^{-5}&space;+&space;x][x]}{[1,4\cdot&space;10^{-5}&space;-&space;x]}$
It is obtained an expression of second degree of the form:
$\bg_white&space;x^2&space;+&space;(1,1\cdot&space;10^{-14}&space;+&space;1,4\cdot&space;10^{-5})x&space;-&space;(1,1\cdot&space;10^{-14}&space;\cdot&space;1,4\cdot&space;10^{-5})&space;=&space;0$
You are obtained by solving for x:
$\bg_white&space;x&space;\approx&space;10^{-14}$
Since this value is negligible with respect to $[H_3O^+]$ deriving from the first dissociation, you can be considered
$\bg_white&space;[H_3O^+]_{tot}&space;\cong&space;[H_3O^+]_{1^0diss}$
It will have that
$\bg_white&space;pH&space;=&space;-log&space;[H_3O^+]&space;=&space;-log&space;(1,4\cdot&space;10^{-5})&space;=&space;3,98$
Which it is precisely the same result that you get with the approximate calculation.
|
2018-07-21 03:44:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 21, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8507593274116516, "perplexity": 1474.888913802261}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592309.94/warc/CC-MAIN-20180721032019-20180721052019-00549.warc.gz"}
|
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-9th-edition/chapter-12-chemical-kinetics-exercises-page-598/62
|
Chemistry 9th Edition
Rate law = $k[NO_2]^2$ Overall balanced equation: $NO_2+CO\rightarrow NO+CO_2$
The rate law is determined from the slowest elementary step, which is given in the problem. Since the slowest elementary step requires 2 $NO_2$ molecules to collide with each other, the exponent on the concentration of $NO_2$ in the rate law is 2. The overall balanced equation can be found by adding together the reactions of the two elementary steps given in the problem.
|
2018-11-15 16:53:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8566317558288574, "perplexity": 379.7705501420772}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742793.19/warc/CC-MAIN-20181115161834-20181115183123-00008.warc.gz"}
|
https://probabilityandstatsproblemsolve.wordpress.com/tag/conditional-distribution/
|
## Practice Problem Set 7 – a discrete joint distribution
The practice problems presented here deal with a discrete joint distribution that is defined by multiplying a marginal distribution and a conditional distribution – similar to the joint distribution found here and here. Thus this post provides additional practice opportunities.
Practice Problems
Let $X$ be the value of a roll of a fair die. For $X=x$, suppose that $Y \lvert X=x$ has a binomial distribution with $n=4$ and $p=x / 10$.
Practice Problem 7-A
Compute the conditional binomial distributions $Y \lvert X=x$ where $x=1,2,3,4,5,6$.
Practice Problem 7-B
Calculate the joint probability function $P[X=x,Y=y]$ for $x=1,2,3,4,5,6$ and $y=0,1,2,3,4$.
Practice Problem 7-C
Determine the probability function for the marginal distribution of $Y$. Calculate the mean and variance of $Y$.
Practice Problem 7-D
Calculate the backward conditional probabilities $P[X=x \lvert Y=y]$ for all applicable $x$ and $y$.
Problems 7-A to 7-D are similar to the ones in this previous post.
Practice Problem 7-E
Calculate the mean and variance of $X$.
Practice Problem 7-F
Calculate the mean and variance of $Y$ (use the methods discussed here).
Practice Problem 7-G
Calculate the covariance $\text{Cov}(X,Y)$ and the correlation coefficient $\rho$.
Problems 7-E to 7-G are similar to the ones in this previous post.
.
.
.
.
.
.
.
.
Answers
Practice Problem 7-A
\displaystyle \begin{aligned} &P[Y=0 \lvert X=1]=0.6561 \\&P[Y=1 \lvert X=1]=0.2916 \\&P[Y=2 \lvert X=1]=0.0486 \\&P[Y=3 \lvert X=1]=0.0036 \\&P[Y=4 \lvert X=1]=0.0001 \end{aligned}
\displaystyle \begin{aligned} &P[Y=0 \lvert X=2]=0.4096 \\&P[Y=1 \lvert X=2]=0.4096 \\&P[Y=2 \lvert X=2]=0.1536 \\&P[Y=3 \lvert X=2]=0.0256 \\&P[Y=4 \lvert X=2]=0.0016 \end{aligned}
\displaystyle \begin{aligned} &P[Y=0 \lvert X=3]=0.2401 \\&P[Y=1 \lvert X=3]=0.4116 \\&P[Y=2 \lvert X=3]=0.2646 \\&P[Y=3 \lvert X=3]=0.0756 \\&P[Y=4 \lvert X=3]=0.0081 \end{aligned}
\displaystyle \begin{aligned} &P[Y=0 \lvert X=4]=0.1296 \\&P[Y=1 \lvert X=4]=0.3456 \\&P[Y=2 \lvert X=4]=0.3456 \\&P[Y=3 \lvert X=4]=0.1536 \\&P[Y=4 \lvert X=4]=0.0256 \end{aligned}
\displaystyle \begin{aligned} &P[Y=0 \lvert X=5]=0.0625 \\&P[Y=1 \lvert X=5]=0.25 \\&P[Y=2 \lvert X=5]=0.375 \\&P[Y=3 \lvert X=5]=0.25 \\&P[Y=4 \lvert X=5]=0.0625 \end{aligned}
\displaystyle \begin{aligned} &P[Y=0 \lvert X=6]=0.0256 \\&P[Y=1 \lvert X=6]=0.1536 \\&P[Y=2 \lvert X=6]=0.3456 \\&P[Y=3 \lvert X=6]=0.3456 \\&P[Y=4 \lvert X=6]=0.1296 \end{aligned}
Practice Problem 7-B
\displaystyle \begin{aligned} &P[Y=4,X=1]=\frac{0.0001}{6} \\&P[Y=4,X=2]=\frac{0.0016}{6} \\&P[Y=4,X=3]=\frac{0.0081}{6} \\&P[Y=4,X=4]=\frac{0.0256}{6} \\&P[Y=4,X=5]=\frac{0.0625}{6} \\&P[Y=4,X=6]=\frac{0.1296}{6} \end{aligned}
\displaystyle \begin{aligned} &P[Y=3,X=1]=\frac{0.0036}{6} \\&P[Y=3,X=2]=\frac{0.0256}{6} \\&P[Y=3,X=3]=\frac{0.0756}{6} \\&P[Y=3,X=4]=\frac{0.1536}{6} \\&P[Y=3,X=5]=\frac{0.25}{6} \\&P[Y=3,X=6]=\frac{0.3456}{6} \end{aligned}
\displaystyle \begin{aligned} &P[Y=2,X=1]=\frac{0.0486}{6} \\&P[Y=2,X=2]=\frac{0.1536}{6} \\&P[Y=2,X=3]=\frac{0.2646}{6} \\&P[Y=2,X=4]=\frac{0.3456}{6} \\&P[Y=2,X=5]=\frac{0.375}{6} \\&P[Y=2,X=6]=\frac{0.3456}{6} \end{aligned}
\displaystyle \begin{aligned} &P[Y=1,X=1]=\frac{0.2916}{6} \\&P[Y=1,X=2]=\frac{0.4096}{6} \\&P[Y=1,X=3]=\frac{0.4116}{6} \\&P[Y=1,X=4]=\frac{0.3456}{6} \\&P[Y=1,X=5]=\frac{0.25}{6} \\&P[Y=1,X=6]=\frac{0.1536}{6} \end{aligned}
\displaystyle \begin{aligned} &P[Y=0,X=1]=\frac{0.6561}{6} \\&P[Y=0,X=2]=\frac{0.4096}{6} \\&P[Y=0,X=3]=\frac{0.2401}{6} \\&P[Y=0,X=4]=\frac{0.1296}{6} \\&P[Y=0,X=5]=\frac{0.0625}{6} \\&P[Y=0,X=6]=\frac{0.0256}{6} \end{aligned}
Practice Problem 7-C
\displaystyle \begin{aligned} &P[Y=4]=\frac{0.2275}{6} \\&P[Y=3]=\frac{0.854}{6} \\&P[Y=2]=\frac{1.533}{6} \\&P[Y=1]=\frac{1.862}{6} \\&P[Y=0]=\frac{1.5235}{6} \end{aligned}
$\displaystyle E[Y]=1.4$
$\displaystyle E[Y^2]=3.22$
$\displaystyle Var[Y]=1.26$
Practice Problem 7-D
\displaystyle \begin{aligned} &P[X=1 \lvert Y=0]=\frac{0.6561}{1.5235}=0.4307 \\&P[X=2 \lvert Y=0]=\frac{0.4096}{1.5235}=0.2689 \\&P[X=3 \lvert Y=0]=\frac{0.2401}{1.5235}=0.1576 \\&P[X=4 \lvert Y=0]=\frac{0.1296}{1.5235}=0.0851 \\&P[X=5 \lvert Y=0]=\frac{0.0625}{1.5235}=0.0410 \\&P[X=6 \lvert Y=0]=\frac{0.0256}{1.5235}=0.0168 \end{aligned}
\displaystyle \begin{aligned} &P[X=1 \lvert Y=1]=\frac{0.2916}{1.862}=0.1566 \\&P[X=2 \lvert Y=1]=\frac{0.4096}{1.862}=0.2200 \\&P[X=3 \lvert Y=1]=\frac{0.4116}{1.862}=0.2211 \\&P[X=4 \lvert Y=1]=\frac{0.3456}{1.862}=0.1856 \\&P[X=5 \lvert Y=1]=\frac{0.25}{1.862}=0.1343 \\&P[X=6 \lvert Y=1]=\frac{0.1536}{1.862}=0.0825 \end{aligned}
\displaystyle \begin{aligned} &P[X=1 \lvert Y=2]=\frac{0.0486}{1.533}=0.0317 \\&P[X=2 \lvert Y=2]=\frac{0.1536}{1.533}=0.1002 \\&P[X=3 \lvert Y=2]=\frac{0.2646}{1.533}=0.1726 \\&P[X=4 \lvert Y=2]=\frac{0.3456}{1.533}=0.2254 \\&P[X=5 \lvert Y=2]=\frac{0.375}{1.533}=0.2446 \\&P[X=6 \lvert Y=2]=\frac{0.3456}{1.533}=0.2254 \end{aligned}
\displaystyle \begin{aligned} &P[X=1 \lvert Y=3]=\frac{0.0036}{0.854}=0.0042 \\&P[X=2 \lvert Y=3]=\frac{0.0256}{0.854}=0.0300 \\&P[X=3 \lvert Y=3]=\frac{0.0756}{0.854}=0.0885 \\&P[X=4 \lvert Y=3]=\frac{0.1536}{0.854}=0.1799 \\&P[X=5 \lvert Y=3]=\frac{0.25}{0.854}=0.2927 \\&P[X=6 \lvert Y=3]=\frac{0.3456}{0.854}=0.4047 \end{aligned}
\displaystyle \begin{aligned} &P[X=1 \lvert Y=4]=\frac{0.0001}{0.2275}=0.0004 \\&P[X=2 \lvert Y=4]=\frac{0.0016}{0.2275}=0.0070 \\&P[X=3 \lvert Y=4]=\frac{0.0081}{0.2275}=0.0356 \\&P[X=4 \lvert Y=4]=\frac{0.0256}{0.2275}=0.1125 \\&P[X=5 \lvert Y=4]=\frac{0.0625}{0.2275}=0.2747 \\&P[X=6 \lvert Y=4]=\frac{0.1296}{0.2275}=0.5697 \end{aligned}
Practice Problem 7-E
$\displaystyle E[X]=\frac{7}{2}=3.5$
$\displaystyle E[X^2]=\frac{91}{6}$
$\displaystyle Var[X]=\frac{35}{12}$
Practice Problem 7-F
$\displaystyle E[Y]=1.4$
$\displaystyle E[Y^2]=3.22$
$\displaystyle Var[Y]=1.26$
Practice Problem 7-G
$\displaystyle \text{Cov}(X,Y)=\frac{7}{6}$
$\displaystyle \rho=\frac{7}{6 \sqrt{3.675}}=0.60858$
Dan Ma statistical
Daniel Ma statistical
Dan Ma practice problems
Daniel Ma practice problems
Daniel Ma mathematics
Dan Ma math
Daniel Ma probability
Dan Ma probability
Daniel Ma statistics
Dan Ma statistics
Dan Ma mathematical
Daniel Ma mathematical
$\copyright$ 2019 – Dan Ma
Advertisements
## Practice Problems for Conditional Distributions, Part 2
The following are practice problems on conditional distributions. The thought process of how to work with these practice problems can be found in the blog post Conditionals Distribution, Part 2.
_____________________________________________________________________________________
Practice Problems
Practice Problem 1
Suppose that $X$ is the lifetime (in years) of a brand new machine of a certain type. The following is the density function.
$\displaystyle f(x)=\frac{1}{8 \sqrt{x}}, \ \ \ \ \ \ \ \ \ 1
You just purchase a 9-year old machine of this type that is in good working condition. Compute the following:
• What is the expected lifetime of this 9-year old machine?
• What is the expected remaining life of this 9-year old machine?
$\text{ }$
Practice Problem 2
Suppose that $X$ is the total amount of damages (in millions of dollars) resulting from the occurrence of a severe wind storm in a certain city. The following is the density function of $X$.
$\displaystyle f(x)=\frac{81}{(x+3)^4}, \ \ \ \ \ \ \ \ \ 0
Suppose that the next storm is expected to cause damages exceeding one million dollars. Compute the following:
• What is the expected total amount of damages for the next storm given that it will exceeds one million dollars?
• The city has a reserve fund of one million dollars to cover the damages from the next storm. Given the amount of damages for the next storm will exceeds one million dollars, what is the expected total amount of damages in excess of the amount in the reserve fund?
_____________________________________________________________________________________
$\text{ }$
$\text{ }$
$\text{ }$
$\text{ }$
$\text{ }$
$\text{ }$
$\text{ }$
$\text{ }$
$\text{ }$
$\text{ }$
$\text{ }$
$\text{ }$
_____________________________________________________________________________________
Answers
The thought process of how to work with these practice problems can be found in the blog post Conditionals Distribution, Part 2.
Practice Problem 1
$\displaystyle E(X \lvert X>9)=\frac{49}{3}=16.33 \text{ years}$
$\displaystyle E(X-9 \lvert X>9)=\frac{22}{3}=7.33 \text{ years}$
Practice Problem 2
$\displaystyle E(X \lvert X>1)=3 \text{ millions}$
$\displaystyle E(X-1 \lvert X>1)=2 \text{ millions}$
_____________________________________________________________________________________
$\copyright \ 2013 \text{ by Dan Ma}$
## Practice Problems for Conditional Distributions, Part 1
The following are practice problems on conditional distributions. The thought process of how to work with these practice problems can be found in the blog post Conditionals Distribution, Part 1.
_____________________________________________________________________________________
Description of Problems
Suppose $X$ and $Y$ are independent binomial distributions with the following parameters.
For $X$, number of trials $n=5$, success probability $\displaystyle p=\frac{1}{2}$
For $Y$, number of trials $n=5$, success probability $\displaystyle p=\frac{3}{4}$
We can think of these random variables as the results of two students taking a multiple choice test with 5 questions. For example, let $X$ be the number of correct answers for one student and $Y$ be the number of correct answers for the other student. For the practice problems below, passing the test means having 3 or more correct answers.
Suppose we have some new information about the results of the test. The problems below are to derive the conditional distributions of $X$ or $Y$ based on the new information and to compare the conditional distributions with the unconditional distributions.
Practice Problem 1
• New information: $X.
• Derive the conditional distribution for $X \lvert X.
• Derive the conditional distribution for $Y \lvert X.
• Compare these conditional distributions with the unconditional ones with respect to mean and probability of passing.
• What is the effect of the new information on the test performance of each of the students?
• Explain why the new information has the effect on the test performance?
Practice Problem 2
• New information: $X>Y$.
• Derive the conditional distribution for $X \lvert X>Y$.
• Derive the conditional distribution for $Y \lvert X>Y$.
• Compare these conditional distributions with the unconditional ones with respect to mean and probability of passing.
• What is the effect of the new information on the test performance of each of the students?
• Explain why the new information has the effect on the test performance?
Practice Problem 3
• New information: $Y=X+1$.
• Derive the conditional distribution for $X \lvert Y=X+1$.
• Derive the conditional distribution for $Y \lvert Y=X+1$.
• Compare these conditional distributions with the unconditional ones with respect to mean and probability of passing.
• What is the effect of the new information on the test performance of each of the students?
• Explain why the new information has the effect on the test performance?
_____________________________________________________________________________________
$\text{ }$
$\text{ }$
$\text{ }$
$\text{ }$
$\text{ }$
$\text{ }$
$\text{ }$
$\text{ }$
$\text{ }$
$\text{ }$
$\text{ }$
$\text{ }$
_____________________________________________________________________________________
Partial Answers
To let you know that you are on the right track, the conditional distributions are given below.
The thought process of how to work with these practice problems can be found in the blog post Conditional Distributions, Part 1.
Practice Problem 1
$\displaystyle P(X=0 \lvert X
$\displaystyle P(X=1 \lvert X
$\displaystyle P(X=2 \lvert X
$\displaystyle P(X=3 \lvert X
$\displaystyle P(X=4 \lvert X
____________________
$\displaystyle P(Y=1 \lvert X
$\displaystyle P(Y=2 \lvert X
$\displaystyle P(Y=3 \lvert X
$\displaystyle P(Y=4 \lvert X
$\displaystyle P(Y=5 \lvert X
Practice Problem 2
$\displaystyle P(X=1 \lvert X>Y)=\frac{5}{3386}=0.0013$
$\displaystyle P(X=2 \lvert X>Y)=\frac{160}{3386}=0.04$
$\displaystyle P(X=3 \lvert X>Y)=\frac{1060}{3386}=0.2728$
$\displaystyle P(X=4 \lvert X>Y)=\frac{1880}{3386}=0.4838$
$\displaystyle P(X=5 \lvert X>Y)=\frac{781}{3386}=0.2$
____________________
$\displaystyle P(Y=0 \lvert X>Y)=\frac{31}{3386}=0.008$
$\displaystyle P(Y=1 \lvert X>Y)=\frac{390}{3386}=0.1$
$\displaystyle P(Y=2 \lvert X>Y)=\frac{1440}{3386}=0.37$
$\displaystyle P(Y=3 \lvert X>Y)=\frac{1620}{3386}=0.417$
$\displaystyle P(Y=4 \lvert X>Y)=\frac{405}{3386}=0.104$
Practice Problem 3
$\displaystyle P(X=0 \lvert Y=X+1)=\frac{15}{8430}=0.002$
$\displaystyle P(X=1 \lvert Y=X+1)=\frac{450}{8430}=0.053$
$\displaystyle P(X=2 \lvert Y=X+1)=\frac{2700}{8430}=0.32$
$\displaystyle P(X=3 \lvert Y=X+1)=\frac{4050}{8430}=0.48$
$\displaystyle P(X=4 \lvert Y=X+1)=\frac{1215}{8430}=0.144$
____________________
$\displaystyle P(Y=1 \lvert Y=X+1)=\frac{15}{8430}=0.002$
$\displaystyle P(Y=2 \lvert Y=X+1)=\frac{450}{8430}=0.053$
$\displaystyle P(Y=3 \lvert Y=X+1)=\frac{2700}{8430}=0.32$
$\displaystyle P(Y=4 \lvert Y=X+1)=\frac{4050}{8430}=0.48$
$\displaystyle P(Y=5 \lvert Y=X+1)=\frac{1215}{8430}=0.144$
_____________________________________________________________________________________
$\copyright \ 2013 \text{ by Dan Ma}$
## Another Example on Calculating Covariance
In a previous post called An Example on Calculating Covariance, we calculated the covariance and correlation coefficient of a discrete joint distribution where the conditional mean $E(Y \lvert X=x)$ is a linear function of $x$. In this post, we give examples in the continuous case. Problem A is worked out and Problem B is left as exercise.
The examples presented here are also found in the post called Another Example of a Joint Distribution. Some of the needed calculations are found in this previous post.
____________________________________________________________________
Problem A
Let $X$ be a random variable with the density function $f_X(x)=\alpha^2 \ x \ e^{-\alpha x}$ where $x>0$. For each realized value $X=x$, the conditional variable $Y \lvert X=x$ is uniformly distributed over the interval $(0,x)$, denoted symbolically by $Y \lvert X=x \sim U(0,x)$. Obtain solutions for the following:
1. Calculate the density function, the mean and the variance for the conditional variable $Y \lvert X=x$.
2. Calculate the density function, the mean and the variance for the conditional variable $X \lvert Y=y$.
3. Use the fact that the conditional mean $E(Y \lvert X=x)$ is a linear function of $x$ to calculate the covariance $Cov(X,Y)$ and the correlation coefficient $\rho$.
Problem B
Let $X$ be a random variable with the density function $f_X(x)=4 \ x^3$ where $0. For each realized value $X=x$, the conditional variable $Y \lvert X=x$ is uniformly distributed over the interval $(0,x)$, denoted symbolically by $Y \lvert X=x \sim U(0,x)$. Obtain solutions for the following:
1. Calculate the density function, the mean and the variance for the conditional variable $Y \lvert X=x$.
2. Calculate the density function, the mean and the variance for the conditional variable $X \lvert Y=y$.
3. Use the fact that the conditional mean $E(Y \lvert X=x)$ is a linear function of $x$ to calculate the covariance $Cov(X,Y)$ and the correlation coefficient $\rho$.
____________________________________________________________________
Background Results
Here’s the idea behind the calculation of correlation coefficient in this post. Suppose $X$ and $Y$ are jointly distributed. When the conditional mean $E(Y \lvert X=x)$ is a linear function of $x$, that is, $E(Y \lvert X=x)=a+bx$ for some constants $a$ and $b$, it can be written as the following:
$\displaystyle E(Y \lvert X=x)=\mu_Y + \rho \ \frac{\sigma_Y}{\sigma_X} \ (x - \mu_X)$
Here, $\mu_X=E(X)$ and $\mu_Y=E(Y)$. The notations $\sigma_X$ and $\sigma_Y$ refer to the standard deviation of $X$ and $Y$, respectively. Of course, $\rho$ refers to the correlation coefficient in the joint distribution of $X$ and $Y$ and is defined by:
$\displaystyle \rho=\frac{Cov(X,Y)}{\sigma_X \ \sigma_Y}$
where $Cov(X,Y)$ is the covariance of $X$ and $Y$ and is defined by
$Cov(X,Y)=E[(X-\mu_X) \ (Y-\mu_Y)]$
or equivalently by $Cov(X,Y)=E(X,Y)-\mu_X \mu_Y$.
Just to make it clear, in the joint distribution of $X$ and $Y$, if the conditional mean $E(X \lvert Y=y)$ is a linear function of $y$, then we have:
$\displaystyle E(X \lvert Y=y)=\mu_X + \rho \ \frac{\sigma_X}{\sigma_Y} \ (y - \mu_Y)$
____________________________________________________________________
Discussion of Problem A
Problem A-1
Since for each $x$, $Y \lvert X=x$ has the uniform distribution $U(0,x)$, we have the following:
$\displaystyle f_{Y \lvert X=x}=\frac{1}{x}$ for $x>0$
$\displaystyle E(Y \lvert X=x)=\frac{x}{2}$
$\displaystyle Var(Y \lvert X=x)=\frac{x^2}{12}$
Problem A-2
In a previous post called Another Example of a Joint Distribution, the joint density function of $X$ and $Y$ is calculated to be: $f_{X,Y}(x,y)=\alpha^2 \ e^{-\alpha x}$. In the same post, the marginal density of $Y$ is calculated to be: $f_Y(y)=\alpha e^{-\alpha y}$ (exponentially distributed). Thus we have:
\displaystyle \begin{aligned} f_{X \lvert Y=y}(x \lvert y)&=\frac{f_{X,Y}(x,y)}{f_Y(y)} \\&=\frac{\alpha^2 \ e^{-\alpha x}}{\alpha \ e^{-\alpha \ y}} \\&=\alpha \ e^{-\alpha \ (x-y)} \text{ where } y
Thus the conditional variable $X \lvert Y=y$ has an exponential distribution that is shifted to the right by the amount $y$. Thus we have:
$\displaystyle E(X \lvert Y=y)=\frac{1}{\alpha}+y$
$\displaystyle Var(Y \lvert X=x)=\frac{1}{\alpha^2}$
Problem A-3
To compute the covariance $Cov(X,Y)$, one approach is to use the definition indicated above (to see this calculation, see Another Example of a Joint Distribution). Here we use the idea that the conditional mean $\displaystyle E(Y \lvert X=x)$ is linear in $x$. From the previous post Another Example of a Joint Distribution, we have:
$\displaystyle \sigma_X=\frac{\sqrt{2}}{\alpha}$
$\displaystyle \sigma_Y=\frac{1}{\alpha}$
Plugging in $\sigma_X$ and $\sigma_Y$, we have the following calculation:
$\displaystyle \rho \ \frac{\sigma_Y}{\sigma_X}=\frac{1}{2}$
$\displaystyle \rho = \frac{\sigma_X}{\sigma_Y} \times \frac{1}{2}=\frac{\sqrt{2}}{2}=\frac{1}{\sqrt{2}}=0.7071$
$\displaystyle Cov(X,Y)=\rho \ \sigma_X \ \sigma_Y=\frac{1}{\alpha^2}$
____________________________________________________________________
Answers for Problem B
Problem B-1
$\displaystyle E(Y \lvert X=x)=\frac{x}{2}$
$\displaystyle Var(Y \lvert X=x)=\frac{x^2}{12}$
Problem B-2
$\displaystyle f_{X \lvert Y=y}(x \lvert y)=\frac{4 \ x^2}{1-y^3}$ where $0 and $y
Problem B-3
$\displaystyle \rho=\frac{\sqrt{3}}{2 \ \sqrt{7}}=0.3273268$
$\displaystyle Cov(X,Y)=\frac{1}{75}$
____________________________________________________________________
$\copyright \ 2013$
## Another Example of a Joint Distribution
In an earlier post called An Example of a Joint Distribution, we worked a problem involving a joint distribution that is constructed from taking product of a conditional distribution and a marginial distribution (both discrete distributions). In this post, we work on similar problems for the continuous case. We work problem A. Problem B is left as exercises.
_________________________________________________________________
Problem A
Let $X$ be a random variable with the density function $f_X(x)=\alpha^2 \ x \ e^{-\alpha x}$ where $x>0$. For each realized value $X=x$, the conditional variable $Y \lvert X=x$ is uniformly distributed over the interval $(0,x)$, denoted symbolically by $Y \lvert X=x \sim U(0,x)$. Obtain solutions for the following:
1. Discuss the joint density function for $X$ and $Y$.
2. Calculate the marginal distribution of $X$, in particular the mean and variance.
3. Calculate the marginal distribution of $Y$, in particular, the density function, mean and variance.
4. Use the joint density in part A-1 to calculate the covariance $Cov(X,Y)$ and the correlation coefficient $\rho$.
_________________________________________________________________
Problem B
Let $X$ be a random variable with the density function $f_X(x)=4 \ x^3$ where $0. For each realized value $X=x$, the conditional variable $Y \lvert X=x$ is uniformly distributed over the interval $(0,x)$, denoted symbolically by $Y \lvert X=x \sim U(0,x)$. Obtain solutions for the following:
1. Discuss the joint density function for $X$ and $Y$.
2. Calculate the marginal distribution of $X$, in particular the mean and variance.
3. Calculate the marginal distribution of $Y$, in particular, the density function, mean and variance.
4. Use the joint density in part B-1 to calculate the covariance $Cov(X,Y)$ and the correlation coefficient $\rho$.
_________________________________________________________________
Discussion of Problem A
Problem A-1
The support of the joint density function $f_{X,Y}(x,y)$ is the unbounded lower triangle in the xy-plane (see the shaded region in green in the figure below).
Figure 1
The unbounded green region consists of vertical lines: for each $x>0$, $y$ ranges from $0$ to $x$ (the red vertical line in the figure below is one such line).
Figure 2
For each point $(x,y)$ in each vertical line, we assign a density value $f_{X,Y}(x,y)$ which is a positive number. Taken together these density values sum to 1.0 and describe the behavior of the variables $X$ and $Y$ across the green region. If a realized value of $X$ is $x$, then the conditional density function of $Y \lvert X=x$ is:
$\displaystyle f_{Y \lvert X=x}(y \lvert x)=\frac{f_{X,Y}(x,y)}{f_X(x)}$
Thus we have $f_{X,Y}(x,y) = f_{Y \lvert X=x}(y \lvert x) \times f_X(x)$. In our problem at hand, the joint density function is:
\displaystyle \begin{aligned} f_{X,Y}(x,y)&=f_{Y \lvert X=x}(y \lvert x) \times f_X(x) \\&=\frac{1}{x} \times \alpha^2 \ x \ e^{-\alpha x} \\&=\alpha^2 \ e^{-\alpha x} \end{aligned}
As indicated above, the support of $f_{X,Y}(x,y)$ is the region $x>0$ and $0 (the region shaded green in the above figures).
Problem A-2
The unconditional density function of $X$ is $f_X(x)=\alpha^2 \ x \ e^{-\alpha x}$ (given above in the problem) is the density function of the sum of two independent exponential variables with the common density $f(x)=\alpha e^{-\alpha x}$ (see this blog post for the derivation using convolution method). Since $X$ is the independent sum of two identical exponential distributions, the mean and variance of $X$ is twice that of the same item of the exponential distribution. We have:
$\displaystyle E(X)=\frac{2}{\alpha}$
$\displaystyle Var(X)=\frac{2}{\alpha^2}$
Problem A-3
To find the marginal density of $Y$, for each applicable $y$, we need to sum out the $x$. According to the following figure, for each $y$, we sum out all $x$ values in a horizontal line such that $y (see the blue horizontal line).
Figure 3
Thus we have:
\displaystyle \begin{aligned} f_Y(y)&=\int_y^\infty f_{X,Y}(x,y) \ dy \ dx \\&=\int_y^\infty \alpha^2 \ e^{-\alpha x} \ dy \ dx \\&=\alpha \int_y^\infty \alpha \ e^{-\alpha x} \ dy \ dx \\&= \alpha e^{-\alpha y} \end{aligned}
Thus the marginal distribution of $Y$ is an exponential distribution. The mean and variance of $Y$ are:
$\displaystyle E(Y)=\frac{1}{\alpha}$
$\displaystyle Var(Y)=\frac{1}{\alpha^2}$
Problem A-4
The covariance of $X$ and $Y$ is defined as $Cov(X,Y)=E[(X-\mu_X) (Y-\mu_Y)]$, which is equivalent to:
$\displaystyle Cov(X,Y)=E(X Y)-\mu_X \mu_Y$
where $\mu_X=E(X)$ and $\mu_Y=E(Y)$. Knowing the joint density $f_{X,Y}(x,y)$, we can calculate $Cov(X,Y)$ directly. We have:
\displaystyle \begin{aligned} E(X Y)&=\int_0^\infty \int_0^x xy \ f_{X,Y}(x,y) \ dy \ dx \\&=\int_0^\infty \int_0^x xy \ \alpha^2 \ e^{-\alpha x} \ dy \ dx \\&=\int_0^\infty \frac{\alpha^2}{2} \ x^3 \ e^{-\alpha x} \ dy \ dx \\&= \frac{3}{\alpha^2} \int_0^\infty \frac{\alpha^4}{3!} \ x^{4-1} \ e^{-\alpha x} \ dy \ dx \\&= \frac{3}{\alpha^2} \end{aligned}
Note that the last integrand in the last integral in the above derivation is that of a Gamma distribution (hence the integral is 1.0). Now the covariance of $X$ and $Y$ is:
$\displaystyle Cov(X,Y)=\frac{3}{\alpha^2}-\frac{2}{\alpha} \frac{1}{\alpha}=\frac{1}{\alpha^2}$
The following is the calculation of the correlation coefficient:
\displaystyle \begin{aligned} \rho&=\frac{Cov(X,Y)}{\sigma_X \ \sigma_Y} = \frac{\displaystyle \frac{1}{\alpha^2}}{\displaystyle \frac{\sqrt{2}}{\alpha} \ \frac{1}{\alpha}} \\&=\frac{1}{\sqrt{2}} = 0.7071 \end{aligned}
Even without the calculation of $\rho$, we know that $X$ and $Y$ are positively and quite strongly correlated. The conditional distribution of $Y \lvert X=x$ is $U(0,x)$ which increases with $x$. The calculation of $Cov(X,Y)$ and $\rho$ confirms our observation.
_________________________________________________________________
Answers for Problem B
Problem B-1
$\displaystyle f_{X,Y}(x,y)=4 \ x^2$ where $x>0$, and $0.
Problem B-2
$\displaystyle E(X)=\frac{4}{5}$
$\displaystyle Var(X)=\frac{2}{75}$
Problem B-3
$\displaystyle f_Y(y)=\frac{4}{3} \ (1- y^3)$
$\displaystyle E(Y)=\frac{2}{5}$
$\displaystyle Var(Y)=\frac{14}{225}$
Problem B-4
$\displaystyle Cov(X,Y)=\frac{1}{75}$
$\displaystyle \rho = \frac{\sqrt{3}}{2 \sqrt{7}}=0.327327$
_________________________________________________________________
## Mixing Bowls of Balls
We present problems involving mixture distributions in the context of choosing bowls of balls, as well as related problems involving Bayes’ formula. Problem 1a and Problem 1b are discussed. Problem 2a and Problem 2b are left as exercises.
____________________________________________________________
Problem 1a
There are two identical looking bowls. Let’s call them Bowl 1 and Bowl 2. In Bowl 1, there are 1 red ball and 4 white balls. In Bowl 2, there are 4 red balls and 1 white ball. One bowl is selected at random and its identify is kept from you. From the chosen bowl, you randomly select 5 balls (one at a time, putting it back before picking another one). What is the expected number of red balls in the 5 selected balls? What the variance of the number of red balls?
Problem 1b
Use the same information in Problem 1a. Suppose there are 3 red balls in the 5 selected balls. What is the probability that the unknown chosen bowl is Bowl 1? What is the probability that the unknown chosen bowl is Bowl 2?
____________________________________________________________
Problem 2a
There are three identical looking bowls. Let’s call them Bowl 1, Bowl 2 and Bowl 3. Bowl 1 has 1 red ball and 9 white balls. Bowl 2 has 4 red balls and 6 white balls. Bowl 3 has 6 red balls and 4 white balls. A bowl is chosen according to the following probabilities:
\displaystyle \begin{aligned}\text{Probabilities:} \ \ \ \ \ &P(\text{Bowl 1})=0.6 \\&P(\text{Bowl 2})=0.3 \\&P(\text{Bowl 3})=0.1 \end{aligned}
The bowl is chosen so that its identity is kept from you. From the chosen bowl, 5 balls are selected sequentially with replacement. What is the expected number of red balls in the 5 selected balls? What is the variance of the number of red balls?
Problem 2b
Use the same information in Problem 2a. Given that there are 4 red balls in the 5 selected balls, what is the probability that the chosen bowl is Bowl i, where $i = 1,2,3$?
____________________________________________________________
Solution – Problem 1a
Problem 1a is a mixture of two binomial distributions and is similar to Problem 1 in the previous post Mixing Binomial Distributions. Let $X$ be the number of red balls in the 5 balls chosen from the unknown bowl. The following is the probability function:
$\displaystyle P(X=x)=0.5 \binom{5}{x} \biggl[\frac{1}{5}\biggr]^x \biggl[\frac{4}{5}\biggr]^{4-x}+0.5 \binom{5}{x} \biggl[\frac{4}{5}\biggr]^x \biggl[\frac{1}{5}\biggr]^{4-x}$
where $X=0,1,2,3,4,5$.
The above probability function is the weighted average of two conditional binomial distributions (with equal weights). Thus the mean (first moment) and the second moment of $X$ would be the weighted averages of the two same items of the conditional distributions. We have:
$\displaystyle E(X)=0.5 \biggl[ 5 \times \frac{1}{5} \biggr] + 0.5 \biggl[ 5 \times \frac{4}{5} \biggr] =\frac{5}{2}$
$\displaystyle E(X^2)=0.5 \biggl[ 5 \times \frac{1}{5} \times \frac{4}{5} +\biggl( 5 \times \frac{1}{5} \biggr)^2 \biggr]$
$\displaystyle + 0.5 \biggl[ 5 \times \frac{4}{5} \times \frac{1}{5} +\biggl( 5 \times \frac{4}{5} \biggr)^2 \biggr]=\frac{93}{10}$
$\displaystyle Var(X)=\frac{93}{10} - \biggl( \frac{5}{2} \biggr)^2=\frac{61}{20}=3.05$
See Mixing Binomial Distributions for a more detailed explanation of the calculation.
____________________________________________________________
Solution – Problem 1b
As above, let $X$ be the number of red balls in the 5 selected balls. The probability $P(X=3)$ must account for the two bowls. Thus it is obtained by mixing two binomial probabilities:
$\displaystyle P(X=3)=\frac{1}{2} \binom{5}{3} \biggl(\frac{1}{5}\biggr)^3 \biggl(\frac{4}{5}\biggr)^2+\frac{1}{2} \binom{5}{3} \biggl(\frac{4}{5}\biggr)^3 \biggl(\frac{1}{5}\biggr)^2$
The following is the conditional probability $P(\text{Bowl 1} \lvert X=3)$:
\displaystyle \begin{aligned} P(\text{Bowl 1} \lvert X=3)&=\frac{\displaystyle \frac{1}{2} \binom{5}{3} \biggl(\frac{1}{5}\biggr)^3 \biggl(\frac{4}{5}\biggr)^2}{P(X=3)} \\&=\frac{16}{16+64} \\&=\frac{1}{5} \end{aligned}
Thus $\displaystyle P(\text{Bowl 1} \lvert X=3)=\frac{4}{5}$
____________________________________________________________
Answers for Problem 2
Problem 2a
Let $X$ be the number of red balls in the 5 balls chosen random from the unknown bowl.
$E(X)=1.2$
$Var(X)=1.56$
Problem 2b
$\displaystyle P(\text{Bowl 1} \lvert X=4)=\frac{27}{4923}=0.0055$
$\displaystyle P(\text{Bowl 2} \lvert X=4)=\frac{2304}{4923}=0.4680$
$\displaystyle P(\text{Bowl 3} \lvert X=4)=\frac{2592}{4923}=0.5265$
## An Example on Calculating Covariance
The practice problems presented here are continuation of the problems in this previous post.
Problem 1
Let $X$ be the value of one roll of a fair die. If the value of the die is $x$, we are given that $Y \lvert X=x$ has a binomial distribution with $n=x$ and $p=\frac{1}{4}$ (we use the notation $\text{binom}(x,\frac{1}{4})$ to denote this binomial distribution).
1. Compute the mean and variance of $X$.
2. Compute the mean and variance of $Y$.
3. Compute the covariance $Cov(X,Y)$ and the correlation coefficient $\rho$.
Problem 2
Let $X$ be the value of one roll of a fair die. If the value of the die is $x$, we are given that $Y \lvert X=x$ has a binomial distribution with $n=x$ and $p=\frac{1}{2}$ (we use the notation $\text{binom}(x,\frac{1}{2})$ to denote this binomial distribution).
1. Compute the mean and variance of $X$.
2. Compute the mean and variance of $Y$.
3. Compute the covariance $Cov(X,Y)$ and the correlation coefficient $\rho$.
Problem 2 is left as exercise. A similar problem is also found in this post.
Discussion of Problem 1
The joint variables $X$ and $Y$ are identical to the ones in this previous post. However, we do not plan on following the approach in the previous, which is to first find the probability functions for the joint distribution and then the marginal distribution of $Y$. The calculation of covariance in Problem 1.3 can be very tedious by taking this approach.
Problem 1.1
We start with the easiest part, which is the random variable $X$ (the roll of the die). The variance is computed by $Var(X)=E(X^2)-E(X)^2$.
(1)……$\displaystyle E(X)=\frac{1}{6} \biggl[1+2+3+4+5+6 \biggr]=\frac{21}{6}=3.5$
(2)……$\displaystyle E(X^2)=\frac{1}{6} \biggl[1^2+2^2+3^2+4^2+5^2+6^2 \biggr]=\frac{91}{6}$
(3)……$\displaystyle Var(X)=\frac{91}{6}-\biggl[\frac{21}{6}\biggr]^2=\frac{105}{36}=\frac{35}{12}$
Problem 1.2
We now compute the mean and variance of $Y$. The calculation of finding the joint distribution and then finding the marginal distribution of $Y$ is tedious and has been done in this previous post. We do not take this approach here. Instead, we find the unconditional mean $E(Y)$ by weighting the conditional mean $E(Y \lvert X=x)$. The weights are the probabilities $P(X=x)$. The following is the idea.
(4)……\displaystyle \begin{aligned} E(Y)&=E_X[E(Y \lvert X=x)] \\&= E(Y \lvert X=1) \times P(X=1) \\&+ E(Y \lvert X=2) \times P(X=2)\\&+ E(Y \lvert X=3) \times P(X=3) \\&+ E(Y \lvert X=4) \times P(X=4) \\&+E(Y \lvert X=5) \times P(X=5) \\&+E(Y \lvert X=6) \times P(X=6) \end{aligned}
We have $P(X=x)=\frac{1}{6}$ for each $x$. Before we do the weighting, we need to have some items about the conditional distribution $Y \lvert X=x$. Since $Y \lvert X=x$ has a binomial distribution, we have:
(5)……$\displaystyle E(Y \lvert X=x)=\frac{1}{4} \ x$
(6)……$\displaystyle Var(Y \lvert X=x)=\frac{1}{4} \ \frac{3}{4} \ x=\frac{3}{16} \ x$
For any random variable $W$, $Var(W)=E(W^2)-E(W)^2$ and $E(W^2)=Var(W)+E(W)^2$. The following is the second moment of $Y \lvert X=x$, which is needed in calculating the unconditional variance $Var(Y)$.
(7)……\displaystyle \begin{aligned} E(Y^2 \lvert X=x)&=\frac{3}{16} \ x+\biggl[\frac{1}{4} \ x \biggr]^2 \\&=\frac{3x}{16}+\frac{x^2}{16} \\&=\frac{3x+x^2}{16} \end{aligned}
We can now do the weighting to get the items of the variable $Y$.
(8)……\displaystyle \begin{aligned} E(Y)&=\frac{1}{6} \biggl[\frac{1}{4} +\frac{2}{4}+\frac{3}{4}+ \frac{4}{4}+\frac{5}{4}+\frac{6}{4}\biggr] \\&=\frac{7}{8} \\&=0.875 \end{aligned}
(9)……\displaystyle \begin{aligned} E(Y^2)&=\frac{1}{6} \biggl[\frac{3(1)+1^2}{16} +\frac{3(2)+2^2}{16}+\frac{3(3)+3^2}{16} \\&+ \frac{3(4)+4^2}{16}+\frac{3(5)+5^2}{16}+\frac{3(6)+6^2}{16}\biggr] \\&=\frac{154}{96} \\&=\frac{77}{48} \end{aligned}
(10)……\displaystyle \begin{aligned} Var(Y)&=E(Y^2)-E(Y)^2 \\&=\frac{77}{48}-\biggl[\frac{7}{8}\biggr]^2 \\&=\frac{161}{192} \\&=0.8385 \end{aligned}
Problem 1.3
The following is the definition of covariance of $X$ and $Y$:
(11)……$\displaystyle Cov(X,Y)=E[(X-\mu_X)(Y-\mu_Y)]$
where $\mu_X=E(X)$ and $\mu_Y=E(Y)$.
The definition (11) can be simplified as:
(12)……$\displaystyle Cov(X,Y)=E[XY]-E[X] E[Y]$
To compute $E[XY]$, we can use the joint probability function of $X$ and $Y$ to compute this expectation. But this is tedious. Anyone who wants to try can go to this previous post to obtain the joint distribution.
Note that the conditional mean $E(Y \lvert X=x)=\frac{x}{4}$ is a linear function of $x$. It is a well known result in probability and statistics that whenever a conditional mean $E(Y \lvert X=x)$ is a linear function of $x$, the conditional mean can be written as:
(13)……$\displaystyle E(Y \lvert X=x)=\mu_Y+\rho \ \frac{\sigma_Y}{\sigma_X} \ (x-\mu_X)$
where $\mu$ is the mean of the respective variable, $\sigma$ is the standard deviation of the respective variable and $\rho$ is the correlation coefficient. The following relates the correlation coefficient with the covariance.
(14)……$\displaystyle \rho=\frac{Cov(X,Y)}{\sigma_X \ \sigma_Y}$
Comparing (5) and (13), we have $\displaystyle \rho \frac{\sigma_Y}{\sigma_X}=\frac{1}{4}$ and
(15)……$\displaystyle \rho = \frac{\sigma_X}{4 \ \sigma_Y}$
Equating (14) and (15), we have $Cov(X,Y)=\frac{\sigma_X^2}{4}$. Thus we deduce that $Cov(X,Y)$ is one-fourth of the variance of $X$. Using $(3)$, we have:
(16)……$\displaystyle Cov(X,Y) = \frac{1}{4} \times \frac{35}{12}=\frac{35}{48}=0.72917$
Plug in all the items of (3), (10), and (16) into (14), we obtained $\rho=0.46625$. Both $\rho$ and $Cov(X,Y)$ are positive, an indication that both variables move together. When one increases, the other variable also increases. Thus makes sense based on the definition of the variables. For example, when the value of the die is large, the number of trials of $Y$ is greater (hence a larger mean).
A similar problem is also found in this post.
.
.
.
.
.
.
.
.
Answers to Problem 2
$\displaystyle E[X]=\frac{7}{2}$
$\displaystyle Var[X]=\frac{35}{12}$
$\displaystyle E[Y]=\frac{7}{4}$
$\displaystyle Var[Y]=\frac{77}{48}$
$\displaystyle \text{Cov}(X,Y)=\frac{35}{24}$
$\displaystyle \rho=\sqrt{\frac{5}{11}}=0.67419986$
Dan Ma statistical
Daniel Ma statistical
Dan Ma practice problems
Daniel Ma practice problems
Daniel Ma mathematics
Dan Ma math
Daniel Ma probability
Dan Ma probability
Daniel Ma statistics
Dan Ma statistics
Dan Ma mathematical
Daniel Ma mathematical
$\copyright$ 2012-2019 – Dan Ma
|
2019-04-21 06:23:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 427, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9555587768554688, "perplexity": 1456.3405340798085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578530253.25/warc/CC-MAIN-20190421060341-20190421082341-00107.warc.gz"}
|
https://www.physicsforums.com/threads/compute-the-volume-of-the-solid.433004/
|
# Homework Help: Compute the volume of the solid
1. Sep 28, 2010
### number0
1. The problem statement, all variables and given/known data
Computer the volume of the solid bounded by the xz plane, the yz plane, the xy plane, the planes x = 1 and y = 1, and the surface z = x2 + y2
2. Relevant equations
None.
3. The attempt at a solution
Since the solid is bounded by the xz plane, the yz plane, the xy plane, it is assumed that the values of x, y, and z all equal 0. And since x = 1 and y = 1, the limits of integration is:
0 $$\leq$$ x $$\leq$$ 1
0 $$\leq$$ y $$\leq$$ 1
Thus, the double integral is:
$$\int$$ $$\int$$ x2 + y4 dA
and the limits of integration is 0 $$\leq$$ x $$\leq$$ 1, 0 $$\leq$$ y $$\leq$$ 1.
After calculating the integral, I got the answer $$\frac{8}{15}$$. Can anyone verify my work?
2. Sep 28, 2010
### Staff: Mentor
That's what I get, too.
For future reference, here is the integral I evaluated, using LaTeX.
$$\int_{x = 0}^1 \int_{y = 0}^1 x^2 + y^4~dy~dx$$
Click the integral to see my LaTeX code.
Share this great discussion with others via Reddit, Google+, Twitter, or Facebook
|
2018-12-17 01:29:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7860713601112366, "perplexity": 538.2619152125991}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828018.77/warc/CC-MAIN-20181216234902-20181217020902-00011.warc.gz"}
|