url stringlengths 15 2.42k | text stringlengths 100 1.02M | date stringlengths 19 19 | metadata stringlengths 1.07k 1.1k |
|---|---|---|---|
https://www.acmicpc.net/problem/15838 | 시간 제한메모리 제한제출정답맞힌 사람정답 비율
2 초 512 MB999100.000%
## 문제
Wak Sani Satay is a humble stall located in Kajang and has been around since 1969. Many like Wak Sani’s satay because the meat is juicy and tender, served with creamy and sweet kuah kacang, alongside with nasi impit, cucumber and onion slices.
Wak Sani usually calculates his net profit at the end of the week. The net profit is calculated by subtracting the cost from the gross profit. He can get 85 sticks of satay from 1 kg meat. The price for 3 types of satay are shown in Table 1. The price for nasi impit is RM0.80 each while cucumber and onion slices are free of charge.
The cost for making satay for each meat type in shown in Table 2. The cost of spices to marinate satay for every kilogram of meat is RM8.00 and the cost for each nasi impit is RM0.20 each.
Satay Price per stick
Chicken RM0.80
Beef RM1.00
Lamb RM1.20
Table 1
Meat Price per kg
Chicken RM7.50
Beef RM24.00
Lamb RM32.00
Table 2
Write a program to find the weekly net profit.
## 입력
The input consists of a few sets of test cases. The first line for each data case is an integer N (1 ≤ N ≤ 7), which represents the number of days the stall is opened to customers for a week. It is followed by N lines of data, each line represents the sales (in sticks) of chicken satay, beef satay, lamb satay and nasi impit per day. Input is terminated by a test case where N is 0.
## 출력
For each test case, output a line in the format "Case #x: RM" where x is the case number (starting from 1), follow by the calculated net profit in Malaysian currency format as shown in the sample output.
## 예제 입력 1
1
30 40 34 5
2
0 0 0 0
1 1 1 1
3
1000 1000 1000 10
5000 3000 4000 12
100 300 10 6
0
## 예제 출력 1
Case #1: RM71.27
Case #2: RM2.57
Case #3: RM10119.98 | 2022-07-02 11:01:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1830446422100067, "perplexity": 5826.719034262543}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104054564.59/warc/CC-MAIN-20220702101738-20220702131738-00314.warc.gz"} |
https://blog.juliosong.com/linguistics/mathematics/category-theory-notes-5/ | Arrows are so vital to category theory that Awodey jokingly refers to the theory as “archery” (Category Theory, p. 2). Given two objects in a category, an arrow between them, if it exists, simply connects them: $A \rightarrow B.$ And if one arrow’s head overlaps with another’s tail, then the two arrows combined necessarily correspond to a third arrow in the category. For example, in the diagram
there are three arrows $f,g,$ and $g \circ f.$ Such arrow chaining, called composition, is part of what it means to be a category.
We needn’t care about what the objects $A, B, C$ or the arrows $f, g, g\circ f$ stand for. Category theory is the abstract study of objects and arrows, and everything works just fine even if we don’t assign interpretations to them. From a linguistic perspective, we can view this abstract theory of categories (or metacategories in Mac Lane’s words; CWM, p. 7) as a purely syntactic system without prespecified semantics. The objects and arrows are its vocabulary, composition and the like are its rules, while the possible interpretations of this syntax (which are presumably a lot!) are a secondary, domain-specific issue.
Objects, arrows, and composition are part of the axiomatic definition of a category.
## Parallel arrows
Little diagrams like the one above are standardly given in textbooks. Despite their convenience, however, they may become a source of confusion for beginners. The problem is that they are too neat—to the extent that things begin to look contrived. I remember when I first started learning category theory, I almost believed that such neat diagrams were all there was to categories—that they were perfect depictions of the categories behind them. Take a bunch of objects, connect them by composable arrows, and voila—we have a category! So, in my naive imagination category theory was not only archery but also astrology. 😂
But such neatness was just my illusion. I later realized that there could actually be multiple arrows between each pair of objects. In hindsight, I can’t believe that I hadn’t realized this right away. But again, that might be a sign that textbooks should give this point more emphasis so that students don’t miss it. So, in the above diagram $g \circ f$ is probably just one of the many arrows connecting $A$ and $C$.
Crucially, among all the $A \rightarrow C$ arrows only one is the composite of $f$ and $g$, while all others can just be its random neighbors. In fact the arrow space between each pair of objects can be highly populated. How populated? More than sets can describe! It’s a basic mathematical fact that there are collections larger than sets. Only when the arrow collection between two objects $X$ and $Y$ is “small” enough can we call it a set, or more precisely a hom-set, denoted by $hom(X,Y)$. For instance, the collection of arrows between $A$ and $C$ in the above diagram, when it’s a set, is written $hom(A,C).$ The pronunciation of $hom$ is nonunanimous. I’ve heard both /hoʊm/ and /hɑm/ from distinguished experts. Etymologically hom comes from homomorphism, which might explain the pronunciation nonunanimity.
Considering the possibly numerous or even countless parallel arrows, the “true colors” of categories may be much less neat than what we see in textbook diagrams. A fully realistic depiction of a category may well be a clump of black clutter whose details are indiscernible to the human eyes. Maybe that’s why textbooks choose to draw out only those arrows relevant to the topic under discussion.
There are also less cluttered categories. A basic example is the categorical conception of a poset—a set equipped with a reflexive, transitive, and antisymmetric binary relation called a partial order. The objects of this category are elements of the set, and the arrows are instances of the partial order. Since a relation either holds or doesn’t hold with no third possibility, between any two objects in a poset category there is either no arrow or only one. That is, the hom-sets in a poset category are either empty or singleton. A special type of poset is a chain, like the Big Dipper above!
In a poset category arrow composition is defined by transitivity (e.g., if $A\le B$ and $B\le C$ then $A\le C$). Since in a category all composable arrows must actually compose, composite arrows are usually omitted when the definition of composition is not the topic being discussed. This convention makes diagrams even neater.
## Commutative diagrams
What’s the correlation between composite arrows that are parallel? Well, sometimes they may be equivalent in an algebraic sense, just like $1+4=2+3.$ And a diagram where all parallel composite arrows are equivalent is said to commute. For example, in the diagram
,
if $g \circ f = i \circ h,$ then the diagram is commutative. Commutative diagrams are another vital part of category theory, and they are closely related to arrow composition.
Normally one wouldn’t expect something as clearly defined as commutative diagrams to be confusing, but the notion—or more exactly what’s left implicit of it—did confuse me for a while. My confusion was, How can we tell whether two paths are equivalent or not?
Initially I had thought two paths sharing the same source and target were equivalent—all roads lead to Rome! But soon I realized there must be something wrong with this idea, because if it were true then all parallel arrows would end up being equivalent; in other words, all diagrams would be commutative. But if that were the case, why would mathematicians bother coming up with a notion of commutativity at all, let alone cherishing it so much? If that were the case, saying a diagram is commutative would be like saying a forest has trees!
In hindsight, a major cause of my confusion was that the introductory texts I used only illustrated commutative diagrams but not noncommutative ones, which gave me the false impression, perhaps subconsciously, that commutativity came for free, or at least at a very low price—as if to make a diagram commute all we needed to do was draw parallel paths between objects.
But that’s just another illusion from the neat textbook diagrams. Path equivalence is essentially an algebraic property and must be proven algebraically. When two paths can’t be proven equivalent, then they simply aren’t, and the diagram doesn’t commute. Noncommutative diagrams aren’t outlaws. They should be given equal status in pedagogical materials as commutative diagrams so that beginners, especially those with less mathematical experience, can get a more balanced understanding of commutativity.
Smith’s 2018 draft textbook Category Theory: A Gentle Introduction (henceforth Gentle Intro) has the clearest explication on this issue I’ve ever seen:
But note: to say a given diagram commutes is just a vivid way of saying that certain identities hold between composites – it is the identities that matter. And note too that merely drawing a diagram with different routes from e.g. A to D in the relevant category doesn’t always mean that we have a commutative diagram – the identity of the composites along the paths in each case has to be argued for! (p. 29)
Tai-Danae Bradley gives a simple example of a noncommutative diagram in her blog post “Commutative diagrams explained”: the following diagram of real-valued functions,
where $id$ is the identity function and $zero$ is a constant function that maps all real numbers to $0$, obviously doesn’t commute, because the parallel paths $\mathbb{R}\rightarrow\mathbb{R}$ don’t return the same output for the same input.
Bradley’s example essentially demonstrates a set-theoretic criterion of arrow equivalence—function extensionality. This principle states that two functions are equal iff their values are equal at every argument. Since $id\circ id$ and $zero$ don’t return equal values for every real number, they as functions are not equal and hence as arrows are not equivalent.
Since for any real number argument $(+5)\circ(+3)$ and $(+6)\circ(+2)$ return the same value, they are equal functions and equivalent paths, whence the diagram commutes. Admittedly, not all diagrams can be checked for commutativity in this way, because many categories have nothing to do with sets and functions. But the caveat remains the same: we can’t declare a diagram commutative on a whim but can only verify (or falsify) commutativity via a proof.
## Takeaway
• There can be multiple parallel arrows between a pair of categorial objects. Textbooks don’t depict all of them in diagrams because many are irrelevant to the topic(s) under discussion.
• Commutativity doesn’t come for free but must be proved, by showing that the two sides of a hypothetical path equation are really equal.
• Noncommutative diagrams are also valid diagrams and shouldn’t be glossed over in textbooks for total beginners.
• In categories where arrows are functions, commutativity can be checked via function extensionality.
Tags:
Categories:
Updated:
## Subscribe to I-Yuwen
* indicates required | 2021-06-20 00:41:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8682175278663635, "perplexity": 520.5637521471431}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487653461.74/warc/CC-MAIN-20210619233720-20210620023720-00501.warc.gz"} |
https://ask.sagemath.org/answers/46824/revisions/ | # Revision history [back]
This indeed looks like a bug.
As a workaround, test whether the graph has loops before computing the chromatic polynomial.
sage: G = Graph([[1, 1]], multiedges=True, loops=True)
sage: G.has_loops()
True
Fixing the bug should just amount to special-casing graphs with loops (testing for loops as above) and including a doctest (to prevent the bug from reappearing). | 2020-05-29 01:32:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6453588008880615, "perplexity": 7662.362247643384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347401004.26/warc/CC-MAIN-20200528232803-20200529022803-00144.warc.gz"} |
https://rjlipton.wordpress.com/2015/03/17/leprechauns-will-find-you/ | And perhaps even find your hidden prime factors
Neil L. is a Leprechaun. He has visited me every St. Patrick’s Day since I began the blog in 2009. In fact he visited me every St. Patrick’s Day before then, but I never noted him. Sometimes he comes after midnight the night before, or falls asleep on my sofa waiting for me to rise. But this time there was no sign of him as I came back from a long day of teaching and meetings and went out again for errands.
Today Ken and I wish you all a Happy St. Patrick’s Day, and I am glad to report that Neil did find me.
When I came back I was sorting papers and didn’t see him. I didn’t know he was there until I heard,
Top o’ the evening to ye.
Neil continued as he puffed out some green smoke: “I had some trouble finding you this year. Finally got where you were—good friends at your mobile provider helped me out.” I was surprised, and told him he must be kidding. He answered, “Of course I always can find you, just having some fun wi’ ye.” Yes I agreed and added that I was staying elsewhere. He puffed again and said “yes I understand.”
I said I had a challenge for him, a tough challenge, and asked if he was up for it. He said, “Hmmm, I do not owe you any wishes, but a challenge… Yes I will accept a challenge from ye, any challenge that ye can dream up.” He laughed, and added, “we leprechauns have not lost a challenge to a man for centuries. I did have a cousin once who messed up.”
## The Cousin’s Story
I asked if he would share his cousin’s story, and he nodded yes. “‘Tis a sad story. My cousin was made a fool of once, a terrible black mark on our family. Why, we were restricted from any St Patrick Day fun for a hundred years. Too long a punishment in our opinion—the usual is only a few decades. Do ye want to know what my cousin did? Or just move on to the challenge? My time is valuable.”
I nodded sympathetically, so he carried on.
“One fine October day in Dublin me cousin was sitting under a bridge—under the lower arch where a canalside path went.
“He spied a gent walking with his wife along the path but lost in thought and completely ignoring her. He thought the chap would be a great mark for a trick but forgot the woman. She spied him and locked on him with laser eyes and of course he was caught—he could not run unless she looked away.
“He tried to ply her with a gold coin but she knew her leprechaun lore and was ruthless. He resigned himself to granting wishes but she would not have that either. With her stare still fixed she took off her right glove, plucked a shamrock, and lay both at his feet for a challenge. A woman had never thrown a challenge before, and there was not in the lore a provision for return-challenging a woman. So my cousin had to accept her challenge. It came with intense eyes:
“I challenge you to tell the answer to what is vexing and estranging my husband.”
“Aye,” Neil sighed, “you or I or any lad in the face of such female determination would be reduced to gibberish, and that is what me cousin blurted out:
${i^2 = j^2 = k^2 = ijk = -1.}$
“The gent looked up like the scales had fallen from his eyes, and he embraced his wife. This broke the stare, and my cousin vanished in great relief. And did the gent show his gratitude? Nay—he even carved that line on the bridge but gave no credit to my cousin.”
I clucked in sympathy, and Neil seemed to like that. He put down his pipe and gave me a look that seemed to return comradeship. Then I understood who the “cousin” was. Not waiting to register my understanding, he invited my challenge as a peer.
## My Challenge
I had in fact prepared my challenge last night—it was programmed by a student in my graduate advanced course using a big-integer package. Burned onto a DVD was a Blum integer of one trillion bits. I pulled it out of its sleeve and challenged Neil to factor it. The shiny side flashed a rainbow, and I joked there could really be a pot of gold at the end of it.
Neil took one puff and pushed the DVD—I couldn’t tell how—into my MacBook Air. The screen flashed green and before I could say “Jack Robinson” my FileZilla window opened. Neil blew mirthful puffs as the progress bar crawled across. A few minutes later came e-mail back from my student, “Yes.”
I exclaimed, “Ha—you did it—but the point isn’t that you did it. The point is, it’s doable. You proved that factoring is easy. Could be quantum or classical but whatever—it’s practical.”
Neil puffed and laughed as he handed me back the suddenly-reappeared disk and said, “Aye, do ye really think I would let your lot fool me twice?”
I replied, “Fool what? You did it—that proves it.”
“Nay,” he said, “indeed I did it—I cannot lie—but ye can’t know how I did it enough to tell whether a non-leprechaun can do it. And a computer that ye build—be it quantum or classical or whatever—is a non-leprechaun.”
It hit me that a quantum computer that cannot be built is a leprechaun, and perhaps Peter Shor’s factoring algorithm only runs on those. But I wasn’t going to be distracted away from my victory.
“How can it matter whether a leprechaun does it?” Neil retorted that he didn’t have to answer a further challenge, “it’s not like having three wishes, you know.” But he continued, “since ye are a friend, I will tell ye three ways it could be, and you can choose one ye like but know ye: it could still be a fourth way.
1. “I could have been around when your student made the number, even gone back in time to see it. I did take a long time to find ye, did I not?
2. “Since y’are not a woman I have a return challenge, and I don’t have to give it after yours or even tell ye. I get some control and can influence you to give instructions that will lead to a particular number I am prepared for. We leprechauns do that with choices of RSA keys all the time.
3. “Everything in your world that you create by rules is succinct. Of course so was that number. Factoring of succinct numbers is easy—indeed in this world, everything is easy.
“And I left ye a factor, but your student already had it, so I left ye no net knowledge at all.” And with a puff of smoke, he was gone.
## Open Problems
Did I learn anything from the one-time factoring of my number? Happy St. Patrick’s Day anyway.
[moved part of dialogue at end from 2. to 1.] | 2018-01-19 11:13:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4385192096233368, "perplexity": 2969.151865460479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887973.50/warc/CC-MAIN-20180119105358-20180119125358-00559.warc.gz"} |
https://worldbank.github.io/PIP-Methodology-2022-04/convert.html | DISCLAIMER: This is not the most recent version of the methodological handbook. You can find the most recent version here: https://worldbank.github.io/PIP-Methodology.
# Chapter 3 Converting welfare aggregates
Welfare aggregates from household surveys are often expressed in national currency units in prices around the time of the fieldwork. To use a welfare aggregate from a particular survey to estimate extrme poverty at the international poverty line, the welfare aggregates need to be converted to a unit comparable across time and across countries. To this end, first Consumer Price Indices (CPIs) are used to express the aggregates in the same prices within a country. Second, Purchasing Power Parities (PPPs) are used to express all welfare aggregates in the same currency by adjusting for price differences across countries.
## 3.1 Consumer Price Indices (CPIs)
Consumer price indices (CPIs) summarize the prices of a representative basket of goods and services consumed by households within an economy over a period of time. Inflation (deflation) occurs when there is a positive (negative) change in the CPI between two time periods. With inflation, the same amount of rupees is expected to buy more today than one year from today. CPIs are used to deflate nominal income or consumption expenditure of households so that the welfare of households can be evaluated and compared between two time periods at the same prices.
The primary source of CPI data for the Poverty and Inequality Platform is IMF’s International Financial Statistics (IFS) monthly CPI series. The simple average of the monthly CPI series for each calendar year is used as the annual CPI. When IFS data are missing, other sources of CPI data are obtained from IMFs World Economic Outlook (WEO) and National Statistical Offices (NSOs), among others. For more details on the different sources of CPI data used for global poverty measurement, see Figure 1 of and the “What’s New” technical notes accompanying PIP updates.
CPI series are rebased to the International Comparison Programme (ICP) reference year, currently 2011.
## 3.2 Purchasing Power Parities (PPPs)
Purchasing power parities (PPPs) are used in global poverty estimation to adjust for price differences across countries. PPPs are price indices published by the International Comparison Program (ICP) that measure how much it costs to purchase a basket of goods and services in one country compared to how much it costs to purchase the same basket of goods and services in a reference country, typically the United States. PPP conversion factors are preferred to market exchange rates for the measurement of global poverty because the latter overestimate poverty in developing countries, where non-tradable services are relatively cheap (a phenomenon known as the Balassa-Samuelson-Penn effect). The revised 2011 PPPs are currently used to convert household welfare aggregates, expressed in local currency units in 2011 prices, into a common internationally comparable currency unit. The PPP conversion only affects the cross-country comparison of levels of welfare; the growth in the survey mean for a particular country over time is the same whether it is expressed in constant local currency or in USD PPP.
The PPP estimates used for global poverty measurement are the consumption PPPs from the ICP with a few exceptions. PPPs are imputed for six countries, namely Egypt, Iraq, Jordan, Lao, Myanmar and Yemen, where there are concerns over the coverage and/or quality of the underlying ICP price collection .
Though PPPs are supposed to be nationally representative, to account for possible urban bias in ICP data collection, separate rural and urban PPPs are computed for China, India, and Indonesia using official national PPPs, the ratio of urban to rural poverty lines, and the urban share in ICP price data collection .
## 3.3 Derivation of the international poverty line
Most countries have a national poverty line which summarizes the value of consumption or income per person or per adult equivalent needed to be non-poor. These national poverty lines are typically estimated by National Statistical Offices and reflect country-specific definitions of what it means to be poor. For low and middle-income countries, the lines usually reflect the cost of purchasing a bundle of food items necessary to obtain minimum daily calories to which a basic non-food component is added. For high-income countries the national poverty lines are often relative and are defined relative to the national mean or median income.
To compare poverty across countries one needs a common standard. Hence, national poverty lines, which differ from one country to the next, cannot be used. The international poverty line is an attempt to summarize the national poverty lines of the poorest countries.
Since 1990, the World Bank has derived international poverty lines from the national poverty lines of the poorest countries of the world . In 1990, this resulted in a the “Dollar-a-day” poverty line. Whenever new rounds of PPPs have been released, the nominal value of the international poverty line has been updated. This does not mean that the real value of the international poverty line has changed.
The current international poverty line of $1.90/day in 2011 PPPs was derived as the mean national poverty line of some of the 15 poorest countries. That is, it represents a typical poverty line of some of the poorest countries in the world. The line is derived by first converting the national poverty lines into PPP-adjusted dollars in the same manner as welfare distributions are converted. This was done by who selected the 15 poorest countries with an available national poverty line, ranked by household final consumption expenditure per capita of countries around 2008 when the 2005 PPPs were released. An IPL of$1.25/day per person, expressed in 2005 PPP dollars was determined as the mean of the national poverty lines of these countries. When the 2011 PPPs were released in 2014, the same 15 national poverty lines were used, but now converted to 2011 PPPs yielding an IPL of $1.88, which was rounded to$1.90 . When the 2011 PPPs got revised in 2020, the IPL was similarly updated but remains unchanged at $1.90 . Below is the list of the 15 poorest countries and their national poverty lines denominated in 2005, original 2011, and revised 2011 PPPs. Country Survey year Poverty line, 2005 PPP Poverty line, original 2011 PPP Poverty line, revised 2011 PPP Chad 1995-96 0.87 1.28 1.29 Ethiopia 1999-2000 1.35 2.03 1.98 Gambia, The 1998 1.48 1.82 1.81 Ghana 1998-99 1.83 3.07 3.11 Guinea-Bissau 1991 1.51 2.16 2.08 Malawi 2004-05 0.86 1.34 1.33 Mali 1988-89 1.38 2.15 2.13 Mozambique 2002-03 0.97 1.26 1.24 Nepal 2003-04 0.87 1.47 1.47 Niger 1993 1.10 1.49 1.48 Rwanda 1999-2001 0.99 1.50 1.47 Sierra Leone 2003-04 1.69 2.73 2.64 Tajikistan 1999 1.93 3.18 3.35 Tanzania 2000-01 0.63 0.88 0.88 Uganda 1993-98 1.27 1.77 1.77 Mean 1.25 1.88 1.87 ## 3.4 Derivation of other global poverty lines In addition to the international poverty line, the World Bank uses two higher poverty lines to measure and monitor poverty in countries with a low incidence of extreme poverty. These higher lines, namely$3.20 and $5.50 in revised 2011 PPPs, are derived in as the median values of national poverty lines of lower- and upper-middle income countries, respectively. When replicating the derivation of these lines with the revised 2011 PPPs, the estimates for the$3.20 lines does not change, while the $5.50 line increases by approximately$0.15 . The World Bank decided to keep all the global poverty lines unchanged, including the \$5.50 line. These poverty lines are goalposts to be held fixed over time and they have become widely used, so there is a cost to revising them frequently. The global poverty lines were chosen with the PPPs available at the time using a reasonable method; thereafter we view them as fixed parameters to monitor progress in different parts of the global distribution of income or consumption.
### References
Atamanov, Aziz, Dean Jolliffe, Christoph Lakner, and Espen Beer Prydz. 2018. “Purchasing Power Parities Used in Global Poverty Measurement.” Global Poverty Monitoring Technical Note 5. http://documents.worldbank.org/curated/en/764181537209197865/Purchasing-Power-Parities-Used-in-Global-Poverty-Measurement.
Atamanov, Aziz, Christoph Lakner, Daniel Gerszon Mahler, Samuel Kofi Tetteh Baah, and Judy Yang. 2020. “The Effect of New PPP Estimates on Global Poverty: A First Look” 12. https://openknowledge.worldbank.org/handle/10986/33816.
———. 2008. “China Is Poorer Than We Thought, but No Less Successful in the Fight Against Poverty,” Policy research working paper series, no. 4621. https://openknowledge.worldbank.org/handle/10986/6674.
———. 2010. “The Developing World Is Poorer Than We Thought, but No Less Successful in the Fight Against Poverty.” The Quarterly Journal of Economics 125 (4): 1577–1625. https://doi.org/10.1162/qjec.2010.125.4.1577.
Ferreira, Francisco HG, Shaohua Chen, Andrew Dabalen, Yuri Dikhanov, Nada Hamadeh, Dean Jolliffe, Ambar Narayan, Espen Beer Prydz, Ana Revenga, and Prem Sangraula. 2016. “A Global Count of the Extreme Poor in 2012: Data Issues, Methodology and Initial Results.” The Journal of Economic Inequality 14 (2): 141–72. https://link.springer.com/article/10.1007/s10888-016-9326-6.
Jolliffe, Dean, and Espen Beer Prydz. 2015. Global Poverty Goals and Prices: How Purchasing Power Parity Matters. Policy Research Working Paper Series 7256. hhttps://openknowledge.worldbank.org/handle/10986/21988.
———. 2016. “Estimating International Poverty Lines from Comparable National Thresholds.” The Journal of Economic Inequality 14 (2): 185–98. https://link.springer.com/article/10.1007/s10888-016-9327-5.
Lakner, Christoph, Daniel Gerszon Mahler, Minh C. Nguyen, Joao Pedro Azevedo, Shaohua Chen, Dean M. Jolliffe, Espen Beer Prydz, and Prem Sangraula. 2018. “Consumer Price Indices Used in Global Poverty Measurement.” Global Poverty Monitoring Technical Note 4. http://documents.worldbank.org/curated/en/215371537208860890/Consumer-Price-Indices-Used-in-Global-Poverty-Measurement.
Ravallion, Martin, Shaohua Chen, and Prem Sangraula. 2009. “Dollar a Day Revisited.” The World Bank Economic Review 23 (2): 163–84. https://doi.org/10.1093/wber/lhp007.
———. 2020. Poverty and Shared Prosperity 2020: Reversals of Fortune. Washington, DC: World Bank. https://openknowledge.worldbank.org/handle/10986/34496. | 2023-03-20 12:51:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3370811641216278, "perplexity": 5669.628178183001}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943483.86/warc/CC-MAIN-20230320114206-20230320144206-00688.warc.gz"} |
http://www.ams.org/mathscinet-getitem?mr=35:2080 | MathSciNet bibliographic data MR211198 (35 #2080) 42.50 (46.80) Rosenthal, Haskell P. Projections onto translation-invariant subspaces of \$L\sp{p}(G)\$$L\sp{p}(G)$. Mem. Amer. Math. Soc. No. 63 1966 84 pp. Links to the journal or article are not yet available
For users without a MathSciNet license , Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews. | 2014-08-01 07:21:56 | {"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9769197702407837, "perplexity": 5082.037679775132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274581.53/warc/CC-MAIN-20140728011754-00027-ip-10-146-231-18.ec2.internal.warc.gz"} |
http://dev.goldbook.iupac.org/terms/view/F02543 | Wikipedia - Diskussion:Massenwirkungsgesetz fugacity $$f$$, $$\tilde{p}$$
https://doi.org/10.1351/goldbook.F02543
Of a substance B, $$f_{\text{B}}$$ or $$\tilde{p}_{\text{B}}$$, in a gaseous mixture is defined by $$f_{\text{B}}=\lambda _{\text{B}}\ \lim _{p\rightarrow 0}\frac{p_{\text{B}}}{\lambda _{\text{B}}}$$, where $$p_{\text{B}}$$ is the @P04420@ of B and $$λ_{\text{B}}$$ its @A00019@.
Source:
Green Book, 2nd ed., p. 50 [Terms] [Book]
See also:
PAC, 1984, 56, 567. (Physicochemical quantities and units in clinical chemistry with special emphasis on activities and activity coefficients (Recommendations 1983)) [Terms] [Paper]
PAC, 1994, 66, 533. (Standard quantities in chemical thermodynamics. Fugacities, activities and equilibrium constants for pure and mixed phases (IUPAC Recommendations 1994)) [Terms] [Paper] | 2019-04-23 16:31:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8062182664871216, "perplexity": 7105.841931809491}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578605555.73/warc/CC-MAIN-20190423154842-20190423180842-00320.warc.gz"} |
http://www.aot-math.org/article_46634.html | # Variants of Weyl's theorem for direct sums of closed linear operators
Document Type: Original Article
Authors
University of Delhi, Delhi.
Abstract
If $T$ is an operator with compact resolvent and $S$ is any densely defined closed linear operator, then the orthogonal direct sum of $T$ and $S$ satisfies various Weyl type theorems if some necessary conditions are imposed on the operator $S$. It is shown that if $S$ is isoloid and satisfies Weyl's theorem, then $T \oplus S$ satisfies Weyl's theorem. Analogous result is proved for a-Weyl's theorem. Further, it is shown that Browder's theorem is directly transmitted from $S$ to $T \oplus S$. The converse of these results have also been studied.
Keywords
### History
• Receive Date: 03 January 2017
• Revise Date: 05 June 2017
• Accept Date: 07 June 2017 | 2019-09-18 19:56:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9036804437637329, "perplexity": 568.0030221742262}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573331.86/warc/CC-MAIN-20190918193432-20190918215432-00431.warc.gz"} |
https://ajitjadhav.wordpress.com/tag/aristotle/ | # Some running thoughts on ANNs and AI—1
Go, see if you want to have fun with the attached write-up on ANNs [^] (but please also note the version time carefully—the write-up could change without any separate announcement).
The write-up is more in the nature of a very informal blabber of the kind that goes when people work out something on a research blackboard (or while mentioning something about their research to friends, or during brain-storming session, or while jotting things on the back of the envelop, or something similar).
A “song” I don’t like:
(Marathi) “aawaaj waaDaw DJ…”
“Credits”: Go, figure [^]. E.g., here [^]. Yes, the video too is (very strongly) recommended.
Update on 05 October 2018 10:31 IST:
Psychic attack on 05 October 2018 at around 00:40 IST (i.e. the night between 4th and 5th October, IST).
/
# Off the blog. [“Matter” cannot act “where” it is not.]
I am going to go off the blogging activity in general, and this blog in most particular, for some time. [And, this time round, I will keep my promise.]
The reason is, I’ve just received the shipment of a book which I had ordered about a month ago. Though only about 300 pages in length, it’s going to take me weeks to complete. And, the book is gripping enough, and the issue important enough, that I am not going to let a mere blog or two—or the entire Internet—come in the way.
I had read it once, almost cover-to-cover, some 25 years ago, while I was a student in UAB.
Reading a book cover-to-cover—I mean: in-sequence, and by that I mean: starting from the front-cover and going through the pages in the same sequence as the one in which the book has been written, all the way to the back-cover—was quite odd a thing to have happened with me, at that time. It was quite unlike my usual habits whereby I am more or less always randomly jumping around in a book, even while reading one for the very first time.
But this book was different; it was extraordinarily engaging.
In fact, as I vividly remember, I had just idly picked up this book off a shelf from the Hill library of UAB, for a casual examination, had browsed it a bit, and then had began sampling some passage from nowhere in the middle of the book while standing in an library aisle. Then, some little time later, I was engrossed in reading it—with a folded elbow resting on the shelf, head turned down and resting against a shelf rack (due to a general weakness due to a physical hunger which I was ignoring [and I would have have to go home and cook something for myself; there was none to do that for me; and so, it was easy enough to ignore the hunger]). I don’t honestly remember how the pages turned. But I do remember that I must have already finished some 15-20 pages (all “in-the-order”!) before I even realized that I had been reading this book while still awkwardly resting against that shelf-rack. …
… I checked out the book, and once home [student dormitory], began reading it starting from the very first page. … I took time, days, perhaps weeks. But whatever the length of time that I did take, with this book, I didn’t have to jump around the pages.
The issue that the book dealt with was:
[Instantaneous] Action at a Distance.
The book in question was:
Hesse, Mary B. (1961) “Forces and Fields: The concept of Action at a Distance in the history of physics,” Philosophical Library, Edinburgh and New York.
It was the very first book I had found, I even today distinctly remember, in which someone—someone, anyone, other than me—had cared to think about the issues like the IAD, the concepts like fields and point particles—and had tried to trace their physical roots, to understand the physical origins behind these (and such) mathematical concepts. (And, had chosen to say “concepts” while meaning ones, rather than trying to hide behind poor substitute words like “ideas”, “experiences”, “issues”, “models”, etc.)
But now coming to Hesse’s writing style, let me quote a passage from one of her research papers. I ran into this paper only recently, last month (in July 2017), and it was while going through it that I happened [once again] to remember her book. Since I did have some money in hand, I did immediately decide to order my copy of this book.
Anyway, the paper I have in mind is this:
Hesse, Mary B. (1955) “Action at a Distance in Classical Physics,” Isis, Vol. 46, No. 4 (Dec., 1955), pp. 337–353, University of Chicago Press/The History of Science Society.
The paper (it has no abstract) begins thus:
The scholastic axiom that “matter cannot act where it is not” is one of the very general metaphysical principles found in science before the seventeenth century which retain their relevance for scientific theory even when the metaphysics itself has been discarded. Other such principles have been fruitful in the development of physics: for example, the “conservation of motion” stated by Descartes and Leibniz, which was generalized and given precision in the nineteenth century as the doctrine of the conservation of energy; …
Here is another passage, once again, from the same paper:
Now Faraday uses a terminology in speaking about the lines of force which is derived from the idea of a bundle of elastic strings stretched under tension from point to point of the field. Thus he speaks of “tension” and “the number of lines” cut by a body moving in the field. Remembering his discussion about contiguous particles of a dielectric medium, one must think of the strings as stretching from one particle of the medium to the next in a straight line, the distance between particles being so small that the line appears as a smooth curve. How seriously does he take this model? Certainly the bundle of elastic strings is nothing like those one can buy at the store. The “number of lines” does not refer to a definite number of discrete material entities, but to the amount of force exerted over a given area in the field. It would not make sense to assign points through which a line passes and points which are free from a line. The field of force is continuous.
See the flow of the writing? the authentic respect for the intellectual history, and yet, the overriding concern for having to reach a conclusion, a meaning? the appreciation for the subtle drama? the clarity of thought, of expression?
Well, these passages were from the paper, but the book itself, too, is similarly written.
Obviously, while I remain engaged in [re-]reading the book [after a gap of 25 years], don’t expect me to blog.
After all, even I cannot act “where” I am not.
A Song I Like:
[I thought a bit between this song and another song, one by R.D. Burman, Gulzar and Lata. In the end, it was this song which won out. As usual, in making my decision, the reference was exclusively made to the respective audio tracks. In fact, in the making of this decision, I happened to have also ignored even the excellent guitar pieces in this song, and the orchestration in general in both. The words and the tune were too well “fused” together in this song; that’s why. I do promise you to run the RD song once I return. In the meanwhile, I don’t at all mind keeping you guessing. Happy guessing!]
(Hindi) “bheegi bheegi…” [“bheege bheege lamhon kee bheegee bheegee yaadein…”]
Music and Lyrics: Kaushal S. Inamdar
Singer: Hamsika Iyer
[Minor additions/editing may follow tomorrow or so.]
/
# On whether A is not non-A
This post has its origin in a neat comment I received on my last post [^]; see the exchange starting here: [^].
The question is whether I accept that A is not non-A.
My answer is: No, I do not accept that, logically speaking, A is not non-A—not unless the context to accept this statement is understood clearly and unambiguously (and the best way to do that is to spell it out explicitly).
Another way to say the same thing is that I can accept that “A is not non-A,” but only after applying proper qualifications; I won’t accept it in an unqualified way.
Let me explain by considering various cases arising, using a simple example.
The Venn diagram:
Let’s begin by drawing a Venn diagram.
Draw a rectangle and call it the set $R$. Draw a circle completely contained in it, and call it the set $A$. You can’t put a round peg to fill a rectangular hole, so, the remaining area of the rectangle is not zero. Call the remaining area $B$. See the diagram below.
Case 1: All sets are non-empty:
Assume that neither $A$ nor $B$ is empty. Using symbolic terms, we can say that:
$A \neq \emptyset$,
$B \neq \emptyset$, and
$R \equiv A \cup B$
where the symbol $\emptyset$ denotes an empty set, and $\equiv$ means “is defined as.”
We take $R$ as the universal set—of this context. For example, $R$ may represent, say the set of all the computers you own, with $A$ denoting your laptops and $B$ denoting your desktops.
I take the term “proper set” to mean a set that has at least one element or member in it, i.e., a set which is not empty.
Now, focus on $A$. Since the set $A$ is a proper set, then it is meaningful to apply the negation- or complement-operator to it. [May be, I have given away my complete answer right here…] Denote the resulting set, the non-A, as $A^{\complement }$. Then, in symbolic terms:
$A^{\complement } \equiv R \setminus A$.
where the symbol $\setminus$ denotes taking the complement of the second operand, in the context of the first operand (i.e., “subtracting” $A$ from $R$). In our example,
$A^{\complement } = B$,
and so:
$A^{\complement } \neq \emptyset$.
Thus, here, $A^{\complement }$ also is a proper (i.e. non-empty) set.
To conclude this part, the words “non-A”, when translated into symbolic terms, means $A^{\complement }$, and this set here is exactly the same as $B$.
To find the meaning of the phrase “not non-A,” I presume that it means applying the negation i.e. the complement operator to the set $A^{\complement }$.
It is possible to apply the complement operator because $A ^{\complement } \neq \emptyset$. Let us define the result of this operation as $A^{\complement \complement}$; note the two $^{\complement}$s appearing in its name. The operation, in symbols becomes:
$A^{\complement \complement} \equiv R \setminus A^{\complement} = R \setminus B = A$.
Note that we could apply the complement operator to $A$ and later on to $A^{\complement}$ only because each was non-empty.
As the simple algebra of the above simple-minded example shows,
$A = A^{\complement\complement}$,
which means, we have to accept, in this example, that A is not non-A.
Remarks on the Case 1:
However, note that we can accept the proposition only under the given assumptions.
In particular, in arriving at it, we have applied the complement-operator twice. (i) First, we applied it to the “innermost” operand i.e. $A$, which gave us $A^{\complement}$. (ii) Then, we took this result, and applied the complement-operator to it once again, yielding $A^{\complement\complement}$. Thus, the operand for the second complement-operator was $A^{\complement}$.
Now, here is the rule:
Rule 1: We cannot meaningfully apply the complement-operator unless the operand set is proper (i.e. non-empty).
People probably make mistakes in deciding whether A is not non-A, because, probably, they informally (and properly) do take the “innermost” operand, viz. $A$, to be non-empty. But then, further down the line, they do not check whether the second operand, viz. $A^{\complement}$ turns out to be empty or not.
Case 2: When the set $A^{\complement}$ is empty:
The set $A^{\complement}$ will be empty if $B = \emptyset$, which will happen if and only if $A = R$. Recall, $R$ is defined to be the union of $A$ and $B$.
So, every time there are two mutually exclusive and collectively exhaustive sets, if any one of them is made empty, you cannot doubly apply the negation or the complement operator to the other (nonempty) set.
Such a situation always occurs whenever the remaining set coincides with the universal set of a given context.
In attempting a double negation, if your first (or innermost) operand itself is a universal set, then you cannot apply the negation operator for the second time, because by Rule 1, the result of the first operator comes out as an empty set.
The nature of an empty set:
But why this rule that you can’t negate (or take the complement of) an empty set?
An empty set contains no element (or member). Since it is the elements which together impart identity to a set, an empty set has no identity of its own.
As an aside, some people think that all the usages of the phrase “empty set” refers to the one and the only set (in the entire universe, for all possible logical propositions involving sets). For instance, the empty set obtained by taking an intersection of dogs and cats, they say, is exactly the same empty set as the one obtained by taking an intersection of cars and bikes.
I reject this position. It seems to me to be Platonic in nature, and there is no reason to give Plato even an inch of the wedge-space in this Aristotlean universe of logic and reality.
As a clarification, notice, we are talking of the basic and universal logic here, not the implementation details of a programming language. A programming language may choose to point all the occurrences of the NULL string to the same memory location. This is merely an implementation choice to save on the limited computer memory. But it still makes no sense to say that all empty C-strings exist at the same memory location—but that’s what you end up having if you call an empty set the empty set. Which brings us to the next issue.
If an empty set has no identity of its own, if it has no elements, and hence no referents, then how come it can at all be defined? After all, a definition requires identity.
The answer is: Structurally speaking, an empty set acquires its meaning—its identity—“externally;” it has no “internally” generated identity.
The only identity applicable to an empty set is an abstract one which gets imparted to it externally; the purpose of this identity is to bring a logical closure (or logical completeness) to the primitive operations defined on sets.
For instance, intersection is an operator. To formally bring closure to the intersection operation, we have to acknowledge that it may operate over any combination of any operand sets, regardless of their natures. This range includes having to define the intersection operator for two sets that have no element in common. We abstractly define the result of such a case as an empty set. In this case, the meaning of the empty set refers not to a result set of a specific internal identity, but only to the operation and the disjoint nature the operands which together generated it, i.e., via a logical relation whose meaning is external to the contents of the empty set.
Inasmuch as an empty set necessarily includes a reference to an operation, it is a concept of method. Inasmuch as many combinations of various operations and operands can together give rise to numerous particular instances of an empty set, there cannot be a unique instance of it which is applicable in all contexts. In other words, an empty set is not a singleton; it is wrong to call it the empty set.
Since an empty set has no identity of its own, the notion cannot be applied in an existence-related (or ontic or metaphysical) sense. The only sense it has is in the methodological (or epistemic) sense.
Extending the meaning of operations on an empty set:
In a derivative sense, we may redefine (i.e. extend) our terms.
First, we observe that since an empty set lacks an identity of its own, the result of any operator applied to it cannot have any (internal) identity of its own. Then, equating these two lacks of existence-related identities (which is where the extension of the meaning occurs), we may say, even if only in a derivative or secondary sense, that
Rule 2: The result of an operator applied to an empty set again is another empty set.
Thus, if we now allow the complement-operator to operate also on an empty set (which, earlier, we did not allow), then the result would have to be another empty set.
Again, the meaning of this second empty set depends on the entirety of its generating context.
Case 3: When the non-empty set is the universal set:
For our particular example, assuming $B = \emptyset$ and hence $A = R$, if we allow complement operator to be applied (in the extended sense) to $A^{\complement}$, then
$A^{\complement\complement} \equiv R \setminus A^{\complement} = R \setminus (R \setminus A) = R \setminus B = R \setminus (\emptyset) = R = A$.
Carefully note, in the above sequence, the place where the extended theory kicks in is at the expression: $R \setminus (\emptyset)$.
We can apply the $\setminus$ operator here only in an extended sense, not primary.
We could here perform this operation only because the left hand-side operand for the complement operator, viz., the set $R$ here was a universal set. Any time you have a universal set on the left hand-side of a complement operator, there is no more any scope left for ambiguity. This state is irrespective of whether the operand on the right hand-side is a proper set or an empty set.
So, in this extended sense, feel free to say that A is not non-A, provided A is the universal set for a given context.
To recap:
The idea of an empty set acquires meaning only externally, i.e., only in reference to some other non-empty set(s). An empty set is thus only an abstract place-holder for the result of an operation applied to proper set(s), the operation being such that it yields no elements. It is a place-holder because it refers to the result of an operation; it is abstract, because this result has no element, hence no internally generated identity, hence no concrete meaning except in an abstract relation to that specific operation (including those specific operands). There is no “the” empty set; each empty set, despite being abstract, refers to a combination of an instance of proper set(s) and an instance of an operation giving rise to it.
Exercises:
E1: Draw a rectangle and put three non-overlapping circles completely contained in it. The circles respectively represent the three sets $A$, $B$, $C$, and the remaining portion of the rectangle represents the fourth set $D$. Assuming this Venn diagram, determine the meaning of the following expressions:
(i) $R \setminus (B \cup C)$ (ii) $R \setminus (B \cap C)$ (iii) $R \setminus (A \cup B \cup C)$ (iv) $R \setminus (A \cap B \cap C)$.
(v)–(viii) Repeat (i)–(iv) by substituting $D$ in place of $R$.
(ix)–(xvi) Repeat (i)–(viii) if $A$ and $B$ partly overlap.
E2: Identify the nature of set theoretical relations implied by that simple rule of algebra which states that two negatives make a positive.
A bit philosophical, and a form better than “A is not non-A”:
When Aristotle said that “A is A,” and when Ayn Rand taught its proper meaning: “Existence is identity,” they referred to the concepts of “existence” and “identity.” Thus, they referred to the universals. Here, the word “universals” is to be taken in the sense of a conceptual abstraction.
If concepts—any concepts, not necessarily only the philosophical axioms—are to be represented in terms of the set theory, how can we proceed doing that?
(BTW, I reject the position that the set theory, even the so-called axiomatic set theory, is more fundamental than the philosophic abstractions.)
Before we address this issue of representation, understand that there are two ways in which we can specify a set: (i) by enumeration, i.e. by listing out all its (relatively concrete) members, and (ii) by rule, i.e. by specifying a definition (which may denote an infinity of concretes of a certain kind, within a certain range of measurements).
The virtue of the set theory is that it can be applied equally well to both finite sets and infinite sets.
The finite sets can always be completely specified via enumeration, at least in principle. On the other hand, infinite sets can never be completely specified via enumeration. (An infinite set is one that has an infinity of members or elements.)
A concept (any concept, whether of maths, or art, or engineering, or philosophy…) by definition stands for an infinity of concretes. Now, in the set theory, an infinity of concretes can be specified only using a rule.
Therefore, the only set-theoretic means capable of representing concepts in that theory is to specify their meaning via “rule” i.e. definition of the concept.
Now, consider for a moment a philosophical axiom such as the concept of “existence.” Since the only possible set-theoretic representation of a concept is as an infinite set, and since philosophical axiomatic concepts have no antecedents, no priors, the set-theoretic representation of the axiom of “existence” would necessarily be as a universal set.
We saw that the complement of a universal set is an empty set. This is a set-theoretic conclusion. Its broader-based, philosophic analog is: there are no contraries to axiomatic concepts.
For the reasons explained above, you may thus conclude, in the derivative sense, that:
“existence is not void”,
where “void” is taken as exactly synonymous to “non-existence”.
The proposition quoted in the last sentence is true.
However, as the set theory makes it clear and easy to understand, it does not mean that you can take this formulation for a definition of the concept of existence. The term “void” here has no independent existence; it can be defined only by a negation of existence itself.
You cannot locate the meaning of existence in reference to void, even if it is true that “existence is not void”.
Even if you use the terms in an extended sense and thereby do apply the “not” qualfier (in the set-theoretic representation, it would be an operator) to the void (to the empty set), for the above-mentioned reasons, you still cannot then read the term “is” to mean “is defined as,” or “is completely synonymous with.” Not just our philosophical knowledge but even its narrower set-theoretical representation is powerful enough that it doesn’t allow us doing so.
That’s why a better way to connect “existence” with “void” is to instead say:
“Existence is not just the absence of the void.”
The same principle applies to any concept, not just to the most fundamental philosophic axioms, so long as you are careful to delineate and delimit the context—and as we saw, the most crucial element here is the universal set. You can take a complement of an empty set only when the left hand-side operator is a universal set.
Let us consider a few concepts, and compare putting them in the two forms:
• from “A is not non-A”
• to “A is not the [just] absence [or negation] of non-A,” or, “A is much more than just a negation of the non-A”.
Consider the concept: focus. Following the first form, a statement we can formulate is:
“focus is not evasion.”
However, it does make much more sense to say that
“focus is not just an absence of evasion,” or that “focus is not limited to an anti-evasion process.”
Both these statements follow the second form. The first form, even if it is logically true, is not as illuminating as is the second.
Exercises:
Here are a few sentences formulated in the first form—i.e. in the form “A is not non-A” or something similar. Reformulate them into the second form—i.e. in the form such as: “A is not just an absence or negation of non-A” or “A is much better than or much more than just a complement or negation of non-A”. (Note: SPPU means the Savitribai Phule Pune University):
• Engineers are not mathematicians
• C++ programmers are not kids
• IISc Bangalore is not SPPU
• IIT Madras is not SPPU
• IIT Kanpur is not SPPU
• IIT Bombay is not SPPU
• The University of Mumbai is not SPPU
• The Shivaji University is not SPPU
[Lest someone from SPPU choose for his examples the statements “Mechanical Engg. is not Metallurgy” and “Metallurgy is not Mechanical Engg.,” we would suggest him another exercise, one which would be better suited to the universal set of all his intellectual means. The exercise involves operations mostly on the finite sets alone. We would ask him to verify (and not to find out in the first place) whether the finite set (specified with an indicative enumeration) consisting of {CFD, Fluid Mechanics, Heat Transfer, Thermodynamics, Strength of Materials, FEM, Stress Analysis, NDT, Failure Analysis,…} represents an intersection of Mechanical Engg and Metallurgy or not.]
A Song I Like:
[I had run this song way back in 2011, but now want to run it again.]
(Hindi) “are nahin nahin nahin nahin, nahin nahin, koee tumasaa hanseen…”
Singers: Kishore Kumar, Asha Bhosale
Music: Rajesh Roshan
Lyrics: Anand Bakshi
[But I won’t disappoint you. Here is another song I like and one I haven’t run so far.]
(Hindi) “baaghon mein bahaar hain…”
Music: S. D. Burman [but it sounds so much like R.D., too!]
Singers: Mohamad Rafi, Lata Mangeshkar
Lyrics: Anand Bakshi
[Exercise, again!: For each song, whenever a no’s-containing line comes up, count the number of no’s in it. Then figure out whether the rule that double negatives cancel out applies or not. Why or why not?]
[Mostly done. Done editing now (right on 2016.10.22). Drop me a line if something isn’t clear—logic is a difficult topic to write on.]
[E&OE] | 2019-01-18 21:04:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 66, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7383688688278198, "perplexity": 889.2142268511278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660529.12/warc/CC-MAIN-20190118193139-20190118215139-00394.warc.gz"} |
https://www.math10.com/forum/viewtopic.php?f=1&t=9357 | # Combinations
Algebra
### Combinations
1.If the number p is chosen at random from natural numbers that are not greater than 10, what is the probability 3p + 2 <= 9?
2. If there are 4 red and 2 black balls in the basket, what is the probability that at least one of them is black when two balls are drawn at the same time?
Guest
### Re: Combinations
1.If the number p is chosen at random from natural numbers that are not greater than 10, what is the probability 3p + 2 <= 9?
3p<= 9- 2= 7
p<= 7/3= 2 and 1/3.
Since these are natural numbers that is the same as saying that p= 1 or p= 2. Assuming that all numbers from 1 to 10 are equally likely to be chosen, the probability is 1/10+ 1/10= 2/10= 1/5.
2. If there are 4 red and 2 black balls in the basket, what is the probability that at least one of them is black when two balls are drawn at the same time?
The only way there could not be "at least one black ball" is if both balls are red. There are 4 red and 2 black balls so the probability ball A is red is 4/(4+ 2)= 4/6= 2/3. Given that, there are 5 balls left, 3 red and 2 black, so the probability ball B is also red is 3/(3+ 2)= 3/5. The probability the two balls are both red is (2/3)(3/5)= 2/5. The probability "at least one ball is black" is 1- 2/5= 3/5.
HallsofIvy
Posts: 341
Joined: Sat Mar 02, 2019 9:45 am
Reputation: 123 | 2021-06-14 17:12:45 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9083167910575867, "perplexity": 223.9621372605018}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487613380.12/warc/CC-MAIN-20210614170602-20210614200602-00504.warc.gz"} |
https://yiqinzhao.me/project/xihe/ | Xihe
A 3D Vision-based Lighting Estimation Framework for Mobile Augmented Reality
# Abstract
Omnidirectional lighting provides the foundation for achieving spatially-variant photorealistic 3D rendering, a desirable property for mobile augmented reality applications. However, in practice, estimating omnidirectional lighting can be challenging due to limitations such as partial panoramas of the rendering positions, and the inherent environment lighting and mobile user dynamics. A new opportunity arises recently with the advancements in mobile 3D vision, including built-in high-accuracy depth sensors and deep learning-powered algorithms, which provide the means to better sense and understand the physical surroundings. Centering the key idea of 3D vision, in this work, we design an edge-assisted framework called Xihe to provide mobile AR applications the ability to obtain accurate omnidirectional lighting estimation in real time.
Specifically, we develop a novel sampling technique that efficiently compresses the raw point cloud input generated at the mobile device. This technique is derived based on our empirical analysis of a recent 3D indoor dataset and plays a key role in our 3D vision-based lighting estimator pipeline design. To achieve the real-time goal, we develop a tailored GPU pipeline for on-device point cloud processing and use an encoding technique that reduces network transmitted bytes. Finally, we present an adaptive triggering strategy that allows Xihe to skip unnecessary lighting estimations and a practical way to provide temporal coherent rendering integration with the mobile AR ecosystem. We evaluate both the lighting estimation accuracy and time of Xihe using a reference mobile application developed with Xihe's APIs. Our results show that Xihe takes as fast as 20.67ms per lighting estimation and achieves 9.4% better estimation accuracy than a state-of-the-art neural network.
# MobiSys'21 Paper
Xihe: A 3D Vision-based Lighting Estimation Framework for Mobile Augmented Reality
Yiqin Zhao and Tian Guo
@InProceedings{xihe_mobisys2021,
author="Zhao, Yiqin
and Guo, Tian",
title="Xihe: A 3D Vision-based Lighting Estimation Framework for Mobile Augmented Reality",
booktitle="The 19th ACM International Conference on Mobile Systems, Applications, and Services",
year="2021",
}
# Acknowledgement
We thank all anonymous reviewers, our shepherd, and our artifact evaluator Tianxing Li for their insight feedback. This work was supported in part by NSF Grants #1755659 and #1815619. | 2021-08-04 22:26:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3093787431716919, "perplexity": 4002.9243478141234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155188.79/warc/CC-MAIN-20210804205700-20210804235700-00512.warc.gz"} |
https://mathoverflow.net/questions/182518/when-does-the-greedy-change-making-algorithm-work | # When does the greedy change-making algorithm work?
The change-making problem asks how to make a certain sum of money using the fewest coins. With US coins {1, 5, 10, 25}, the greedy algorithm of selecting the largest coin at each step also uses the fewest coins.
With which currencies (sets of integers including 1) does the 'greedy' algorithm work?
That's a different question, Gerry.
Believe it or not, the answers are different if one is asking (a) given N and a system of denominations D, is the greedy algorithm using D optimal for N? and (b) given a system of denominations D, is the greedy algorithm using D optimal for ALL N?
I think the latter problem is the one that Zachary Vance is asking about.
In that case, it is decidable in polynomial time. See Pearson's article here: http://dl.acm.org/citation.cfm?id=2309414 .
• Oops. ${}{}{}{}$ – Gerry Myerson Oct 4 '14 at 23:44
If you look at pages 4-5 of this paper by Jeff Shallit, it says, "Suppose we are given $N$ and a system of denominations. How easy is it to determine if the greedy representation for $N$ is actually optimal? Kozen and Zaks [4] have shown that this problem is co-NP-complete if the data is provided in ordinary decimal, or binary. This strongly suggests there is no efficient algorithm for this problem."
The reference is D. Kozen and S. Zaks, Optimal bounds for the change-making problem, Theoret. Comput. Sci. 123 (1994), 377–388. | 2019-10-22 12:38:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.764946699142456, "perplexity": 638.5657918052436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987817685.87/warc/CC-MAIN-20191022104415-20191022131915-00366.warc.gz"} |
https://studyadda.com/sample-papers/jee-main-sample-paper-47_q68/301/303689 | • # question_answer If $f'(x)=\,|x|\,-\{x\},$ where {x} denotes the fractional part function of $x,$ then $f(x)$ is decreasing in A) $\left( \frac{-1}{2},\,0 \right)$ B) $\left( \frac{-1}{2},\,2 \right)$ C) $\left( \frac{-1}{2},\,-2 \right]$ D) $\left( \frac{1}{2},\,\infty \right)$
$\because$ $f(x)=|x|-\{x\}$ $\because$ $f(x)$ is decreasing. $\therefore$ $f'(x)<0$ $\Rightarrow$ $|x|-\{x\}<0$ $\Rightarrow$ $|x|\,<\{x\}$ From figure, $x\in \,\left( -\frac{1}{2},\,\,0 \right)$ From graph, it is clear that, $f(x)$ has local maxima at $x=1$. | 2022-01-16 18:21:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.981899619102478, "perplexity": 2731.9881593006503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300010.26/warc/CC-MAIN-20220116180715-20220116210715-00665.warc.gz"} |
https://math.stackexchange.com/questions/1313214/why-can-the-transformation-derived-from-a-list-of-points-and-a-list-of-their-tra | Why can the transformation derived from a list of points and a list of their transformed counterparts not be affine or linear?
Some context (original question below): I wanted to know if there's a nice concise formula to calculate the transformation based on a list of points and another list of the transformed points. This is all 2D or $\mathbb{R}^2$.
By that I mean some matrix equation that has a matrix that contains the given values, so that one can invert this matrix to solve for the transformation matrix or its components.
The question I link to below has the very same goal and especially a nice answer that I was looking for, but it does not create a linear or affine transform.
In his answer to this question bubba makes the following statement:
The transformation can not be linear or affine, it has to be a "perspective" transform.
Why is that? What if I want to find the affine or linear transformation and not the perspective/nonlinear one?
I'm not sure about this, but I guess that if $c_0 = 0$ and $c_1 = 0$, then the perspective transformation will be linear. Would that help me to find the linear or affine transform of points?
• It might be good to edit in more of the context so that your question does not depend on those links never breaking. – Mark S. Jun 5 '15 at 13:13 | 2021-03-09 01:31:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4956012964248657, "perplexity": 216.82668932241364}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178385534.85/warc/CC-MAIN-20210308235748-20210309025748-00228.warc.gz"} |
https://www.trustudies.com/question/446/d-and-e-are-points-on-the-sides-ca-an/ | 3 Tutor System
Starting just at 265/hour
# D and E are points on the sides CA and CB respectively of a triangle ABC right angled at C. Prove that $$AE^2 + BD^2 = AB^2 + DE^2$$.
Given, D and E are points on the sides CA and CB respectively of a triangle ABC right angled at C.
By Pythagoras theorem in $$\triangle$$ ACE, we get
$$AC^2 + CE^2 = AE^2$$ ………………………………………….(i)
In $$\triangle$$ BCD, by Pythagoras theorem, we get
$$BC^2 + CD^2 = BD^2$$ ………………………………..(ii)
From equations (i) and (ii), we get,
$$AC^2 + CE^2 + BC^2 + CD^2 = AE^2 + BD^2$$ …………..(iii)
In $$\triangle$$ CDE, by Pythagoras theorem, we get
$$DE^2 = CD^2 + CE^2$$
In $$\triangle$$ ABC, by Pythagoras theorem, we get
$$AB^2 = AC^2 + CB^2$$
Putting the above two values in equation (iii), we get
$$DE^2 + AB^2 = AE^2 + BD^2$$. | 2023-03-21 17:46:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6866235733032227, "perplexity": 697.5452016222924}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943704.21/warc/CC-MAIN-20230321162614-20230321192614-00321.warc.gz"} |
http://jorgebg.com/reader/ | ## Friday, 23 August 2019
### 04:00 PM
Future smart cities and intelligent world will have connected vehicles and smart cars as its indispensable and most essential components. The communication and interaction among such connected entities in this vehicular internet of things (IoT) domain, which also involves smart traffic infrastructure, road-side sensors, restaurant with beacons, autonomous emergency vehicles, etc., offer innumerable real-time user applications and provide safer and pleasant driving experience to consumers. Having more than 100 million lines of code and hundreds of sensors, these connected vehicles (CVs) expose a large attack surface, which can be remotely compromised and exploited by malicious attackers. Security and privacy are serious concerns that impede the adoption of smart connected cars, which if not properly addressed will have grave implications with risk to human life and limb. In this research, we present a formalized dynamic groups and attribute-based access control (ABAC) model (referred as \cvac) for smart cars ecosystem, where the proposed model not only considers system wide attributes-based security policies but also takes into account the individual user privacy preferences for allowing or denying service notifications, alerts and operations to on-board resources. Further, we introduce a novel notion of groups in vehicular IoT, which are dynamically assigned to moving entities like connected cars, based on their current GPS coordinates, speed or other attributes, to ensure relevance of location and time sensitive notification services to the consumers, to provide administrative benefits to manage large numbers of smart entities, and to enable attributes and alerts inheritance for fine-grained security authorization policies. We present proof of concept implementation of our model in AWS cloud platform demonstrating real-world uses cases along with performance metrics.
S-money [Proc. R. Soc. A 475, 20190170 (2019)] schemes define virtual tokens designed for networks with relativistic or other trusted signalling constraints. The tokens allow near-instant verification and guarantee unforgeability without requiring quantum state storage. We present refined two stage S-money schemes. The first stage, which may involve quantum information exchange, generates private user token data. In the second stage, which need only involve classical communications, users determine the valid presentation point, without revealing it to the issuer. This refinement allows the user to determine the presentation point anywhere in the causal past of all valid presentation points. It also allows flexible transfer of tokens among users without compromising user privacy.
Access to the system resources. The current access control systems face many problems, such as the presence of the third-party, inefficiency, and lack of privacy. These problems can be addressed by blockchain, the technology that received major attention in recent years and has many potentials. In this study, we overview the problems of the current access control systems, and then, we explain how blockchain can help to solve them. We also present an overview of access control studies and proposed platforms in different domains. This paper presents the state of the art and the challenges of blockchain-based access control systems.
With the number of new mobile malware instances increasing by over 50\% annually since 2012 [24], malware embedding in mobile apps is arguably one of the most serious security issues mobile platforms are exposed to. While obfuscation techniques are successfully used to protect the intellectual property of apps' developers, they are unfortunately also often used by cybercriminals to hide malicious content inside mobile apps and to deceive malware detection tools. As a consequence, most of mobile malware detection approaches fail in differentiating between benign and obfuscated malicious apps. We examine the graph features of mobile apps code by building weighted directed graphs of the API calls, and verify that malicious apps often share structural similarities that can be used to differentiate them from benign apps, even under a heavily "polluted" training set where a large majority of the apps are obfuscated. We present DaDiDroid an Android malware app detection tool that leverages features of the weighted directed graphs of API calls to detect the presence of malware code in (obfuscated) Android apps. We show that DaDiDroid significantly outperforms MaMaDroid [23], a recently proposed malware detection tool that has been proven very efficient in detecting malware in a clean non-obfuscated environment. We evaluate DaDiDroid's accuracy and robustness against several evasion techniques using various datasets for a total of 43,262 benign and 20,431 malware apps. We show that DaDiDroid correctly labels up to 96% of Android malware samples, while achieving an 91% accuracy with an exclusive use of a training set of obfuscated apps.
In recent work, Cheu et al. (Eurocrypt 2019) proposed a protocol for $n$-party real summation in the shuffle model of differential privacy with $O_{\epsilon, \delta}(1)$ error and $\Theta(\epsilon\sqrt{n})$ one-bit messages per party. In contrast, every local model protocol for real summation must incur error $\Omega(1/\sqrt{n})$, and there exist protocols matching this lower bound which require just one bit of communication per party. Whether this gap in number of messages is necessary was left open by Cheu et al.
In this note we show a protocol with $O(1/\epsilon)$ error and $O(\log(n/\delta))$ messages of size $O(\log(n))$ per party. This protocol is based on the work of Ishai et al.\ (FOCS 2006) showing how to implement distributed summation from secure shuffling, and the observation that this allows simulating the Laplace mechanism in the shuffle model.
Nature, Published online: 22 August 2019; doi:10.1038/d41586-019-02542-3
Wunderkind gene-editing tool used to trigger smart materials that can deliver drugs and sense biological signals.
Raised blood pressure is the most important risk factor in the global burden of disease.1 Although there is robust evidence to show that lowering blood pressure can substantially reduce cardiovascular morbidity and mortality,2 the global burden of hypertension is increasing.3,4 To achieve a reduction in the burden of disease related to hypertension, health systems must ensure that high blood pressure treatment and control rates are achieved. The status of controlled blood pressure is being promoted as a measure of universal health coverage, especially in the context of non-communicable diseases.
Paediatrician who used digital technologies to improve global health. Born on Sept 1, 1948, in Newton, MA, USA, he died while hiking in Alaska, USA, on June 25, 2019, aged 70 years.
One of the basic tenets of evidence-based medicine is that randomisation is crucial to understanding treatment effects. Observational studies are subject to confounding and selection bias. Researchers can adjust for measured differences between treatment groups, but unmeasured or unmeasurable differences might exist between groups that obscure true treatment effects and cannot be accounted for by any statistical method.1 The published medical literature is filled with examples of associations between treatment and outcome identified in observational studies that were subsequently disproven by well conducted randomised controlled trials (RCTs).
We thank the correspondents for their responses to our Comment.1
Mariam Chekhchar and colleagues1 discuss branch retinal artery occlusion in a young woman, probably due to occult cardioembolus from rheumatic mitral stenosis. Despite decreasing incidence in developed nations, rheumatic heart disease remains a major source of preventable morbidity and mortality worldwide2 and we commend the authors for bringing attention to this important clinical entity. However, given this valvulopathy's highly thrombogenic nature, therapeutic anticoagulation should be considered.
While convolutional neural network (CNN)-based pedestrian detection methods have proven to be successful in various applications, detecting small-scale pedestrians from surveillance images is still challenging. The major reason is that the small-scale pedestrians lack much detailed information compared to the large-scale pedestrians. To solve this problem, we propose to utilize the relationship between the large-scale pedestrians and the corresponding small-scale pedestrians to help recover the detailed information of the small-scale pedestrians, thus improving the performance of detecting small-scale pedestrians. Specifically, a unified network (called JCS-Net) is proposed for small-scale pedestrian detection, which integrates the classification task and the super-resolution task in a unified framework. As a result, the super-resolution and classification are fully engaged, and the super-resolution sub-network can recover some useful detailed information for the subsequent classification. Based on HOG+LUV and JCS-Net, multi-layer channel features (MCF) are constructed to train the detector. The experimental results on the Caltech pedestrian dataset and the KITTI benchmark demonstrate the effectiveness of the proposed method. To further enhance the detection, multi-scale MCF based on JCS-Net for pedestrian detection is also proposed, which achieves the state-of-the-art performance.
In this paper, a self-guiding multimodal LSTM (sgLSTM) image captioning model is proposed to handle an uncontrolled imbalanced real-world image-sentence dataset. We collect a FlickrNYC dataset from Flickr as our testbed with 306,165 images and the original text descriptions uploaded by the users are utilized as the ground truth for training. Descriptions in the FlickrNYC dataset vary dramatically ranging from short term-descriptions to long paragraph-descriptions and can describe any visual aspects, or even refer to objects that are not depicted. To deal with the imbalanced and noisy situation and to fully explore the dataset itself, we propose a novel guiding textual feature extracted utilizing a multimodal LSTM (mLSTM) model. Training of mLSTM is based on the portion of data in which the image content and the corresponding descriptions are strongly bonded. Afterward, during the training of sgLSTM on the rest training data, this guiding information serves as additional input to the network along with the image representations and the ground-truth descriptions. By integrating these input components into a multimodal block, we aim to form a training scheme with the textual information tightly coupled with the image content. The experimental results demonstrate that the proposed sgLSTM model outperforms the traditional state-of-the-art multimodal RNN captioning framework in successfully describing the key components of the input images.
A fully-parallelized work-time optimal algorithm is presented for computing the exact Euclidean Distance Transform (EDT) of a 2D binary image with the size of $ntimes n$ . Unlike existing PRAM (Parallel Random Access Machine) and other algorithms, this algorithm is suitable for implementation on modern SIMD (Single Instruction Multiple Data) architectures such as GPUs. As a fundamental operation of 2D EDT, 1D EDT is efficiently parallelized first. Specifically, the GPU algorithm for the 1D EDT, which uses CUDA (Compute Unified Device Architecture) binary functions, such as ballot(), ffs(), clz(), and shfl(), runs in $O(log_{32}n)$ time and performs $O(n)$ work. Using the 1D EDT as a fundamental operation, the fully-parallelized work-time optimal 2D EDT algorithm is designed. This algorithm consists of three steps. Step 1 of the algorithm runs in $O(log_{32}n)$ time and performs $O(N)$ ( $N = n^{2}$ ) of total work on GPU. Step 2 performs $O(N)$ of total work and has an expected time complexity of $O(logn)$ on GPU. Step 3 runs in $O(log_{32}n)$ time and performs $O(N)$ of total work on GPU. As far as we know, this algorithm is the first fully-parallelized and realized work-time optimal algorithm for GPUs. The experimental results show that this algorit- m outperforms the prior state-of-the-art GPU algorithms.
Sonar imagery plays a significant role in oceanic applications since there is little natural light underwater, and light is irrelevant to sonar imaging. Sonar images are very likely to be affected by various distortions during the process of transmission via the underwater acoustic channel for further analysis. At the receiving end, the reference image is unavailable due to the complex and changing underwater environment and our unfamiliarity with it. To the best of our knowledge, one of the important usages of sonar images is target recognition on the basis of contour information. The contour degradation degree for a sonar image is relevant to the distortions contained in it. To this end, we developed a new no-reference contour degradation measurement for perceiving the quality of sonar images. The sparsities of a series of transform coefficient matrices, which are descriptive of contour information, are first extracted as features from the frequency and spatial domains. The contour degradation degree for a sonar image is then measured by calculating the ratios of extracted features before and after filtering this sonar image. Finally, a bootstrap aggregating (bagging)-based support vector regression module is learned to capture the relationship between the contour degradation degree and the sonar image quality. The results of experiments validate that the proposed metric is competitive with the state-of-the-art reference-based quality metrics and outperforms the latest reference-free competitors.
We present a deep architecture and learning framework for establishing correspondences across cross-spectral visible and infrared images in an unpaired setting. To overcome the unpaired cross-spectral data problem, we design the unified image translation and feature extraction modules to be learned in a joint and boosting manner. Concretely, the image translation module is learned only with the unpaired cross-spectral data, and the feature extraction module is learned with an input image and its translated image. By learning two modules simultaneously, the image translation module generates the translated image that preserves not only the domain-specific attributes with separate latent spaces but also the domain-agnostic contents with feature consistency constraint. In an inference phase, the cross-spectral feature similarity is augmented by intra-spectral similarities between the features extracted from the translated images. Experimental results show that this model outperforms the state-of-the-art unpaired image translation methods and cross-spectral feature descriptors on various visible and infrared benchmarks.
Top-down saliency detection aims to highlight the regions of a specific object category, and typically relies on pixel-wise annotated training data. In this paper, we address the high cost of collecting such training data by a weakly supervised approach to object saliency detection, where only image-level labels, indicating the presence or absence of a target object in an image, are available. The proposed framework is composed of two collaborative CNN modules, an image-level classifier and a pixel-level map generator. While the former distinguishes images with objects of interest from the rest, the latter is learned to generate saliency maps by which the images masked by the maps can be better predicted by the former. In addition to the top-down guidance from class labels, the map generator is derived by also exploring other cues, including the background prior, superpixel- and object proposal-based evidence. The background prior is introduced to reduce false positives. Evidence from superpixels helps preserve sharp object boundaries. The clue from object proposals improves the integrity of highlighted objects. These different types of cues greatly regularize the training process and reduces the risk of overfitting, which happens frequently when learning CNN models with few training data. Experiments show that our method achieves superior results, even outperforming fully supervised methods.
## Thursday, 22 August 2019
### 04:00 PM
Nature, Published online: 21 August 2019; doi:10.1038/s41586-019-1502-y
RNA-dependent DEAD-box ATPases (DDXs) regulate the dynamics of phase-separated organelles, with ATP-bound DDXs promoting phase separation, and ATP hydrolysis inducing compartment disassembly and RNA release.
Nature, Published online: 21 August 2019; doi:10.1038/d41586-019-02451-5
The movement of small droplets on a substrate is governed by surface-tension forces. A technique that can tune the surface tension of robust oxide substrates for droplet manipulation could open up many applications.
FADS1 and FADS2 Polymorphisms Modulate Fatty Acid Metabolism and Dietary Impact on Health [Annual Reviews: Annual Review of Nutrition: Table of Contents]
Annual Review of Nutrition, Volume 39, Issue 1, Page 21-44, August 2019.
Dietary Fuels in Athletic Performance [Annual Reviews: Annual Review of Nutrition: Table of Contents]
Annual Review of Nutrition, Volume 39, Issue 1, Page 45-73, August 2019.
The Benefits and Risks of Iron Supplementation in Pregnancy and Childhood [Annual Reviews: Annual Review of Nutrition: Table of Contents]
Annual Review of Nutrition, Volume 39, Issue 1, Page 121-146, August 2019.
Mitochondrial DNA Mutation, Diseases, and Nutrient-Regulated Mitophagy [Annual Reviews: Annual Review of Nutrition: Table of Contents]
Annual Review of Nutrition, Volume 39, Issue 1, Page 201-226, August 2019.
Time-Restricted Eating to Prevent and Manage Chronic Metabolic Diseases [Annual Reviews: Annual Review of Nutrition: Table of Contents]
Annual Review of Nutrition, Volume 39, Issue 1, Page 291-315, August 2019.
This randomized clinical trial compares the effect on relapse of continuing olanzapine vs placebo among patients with psychotic depression who achieved remission of psychosis and depressive symptoms while taking olanzapine and sertraline.
To the Editor In his Viewpoint, Dr Skolnik discussed the 2018 American College of Cardiology (ACC)/American Heart Association (AHA) guideline on the management of blood cholesterol and its implications for older adults. We would like to highlight relevant features of the guidelines that merit greater recognition.
Although various electronic health records (EHRs) have different features, nearly all seem to have alerts for potential problems with drug prescribing. It’s one thing that many believe that EHRs do very well. However, a recent study warns that when it comes to opioids and benzodiazepines, we shouldn’t always assume that such alerts work as intended.
This Medical News article discusses a recent meta-analysis of oral immunotherapy trials for people with peanut allergies.
This Viewpoint argues that the near-universal adoption of electronic fetal monitoring (EFM) in labor and delivery units has occurred without evidence that it has reduced adverse neurological events and has contributed to an increase in US cesarean delivery rates, and calls for the education of physicians and the public about EFM’s demonstrated reliability and value.
## Tuesday, 20 August 2019
### 04:00 PM
Nature, Published online: 20 August 2019; doi:10.1038/d41586-019-02475-x
Lisa Feldman Barrett ponders Joseph LeDoux’s study on how conscious brains evolved.
The Internet of Things (IoT) is increasingly empowering people with an interconnected world of physical objects ranging from smart buildings to portable smart devices, such as wearables. With recent advances in mobile sensing, wearables have become a rich collection of portable sensors and are able to provide various types of services, including tracking of health and fitness, making financial transactions, and unlocking smart locks and vehicles. Most of these services are delivered based on users' confidential and personal data, which are stored on these wearables. Existing explicit authentication approaches (i.e., PINs or pattern locks) for wearables suffer from several limitations, including small or no displays, risk of shoulder surfing, and users' recall burden. Oftentimes, users completely disable security features out of convenience. Therefore, there is a need for a burden-free (implicit) authentication mechanism for wearable device users based on easily obtainable biometric data. In this paper, we present an implicit wearable device user authentication mechanism using combinations of three types of coarse-grain minute-level biometrics: behavioral (step counts), physiological (heart rate), and hybrid (calorie burn and metabolic equivalent of task). From our analysis of over 400 Fitbit users from a 17-month long health study, we are able to authenticate subjects with average accuracy values of around .93 (sedentary) and .90 (non-sedentary) with equal error rates of .05 using binary SVM classifiers. Our findings also show that the hybrid biometrics perform better than other biometrics and behavioral biometrics do not have a significant impact, even during non-sedentary periods.
The electroencephalography (EEG) method has recently attracted increasing attention in the study of brain activity-based biometric systems because of its simplicity, portability, noninvasiveness, and relatively low cost. However, due to the low signal-to-noise ratio of EEG, most of the existing EEG-based biometric systems require a long duration of signals to achieve high accuracy in individual identification. Besides, the feasibility and stability of these systems have not yet been conclusively reported, since most studies did not perform longitudinal evaluation. In this paper, we proposed a novel EEG-based individual identification method using code-modulated visualevoked potentials (c-VEPs). Specifically, this paper quantitatively compared eight code-modulated stimulation patterns, including six 63-bit (1.05 s at 60-Hz refresh rate) m-sequences (M1-M6) and two spatially combined sequence groups (M×4: M1-M4 and M× 6: M1-M6) in recording the c-VEPs from a group of 25 subjects for individual identification. To further evaluate the influence of inter-session variability, we recorded two data sessions for each individual on different days to measure intra-session and cross-session identification performance. State-of-the-art VEP detection algorithms in brain-computer interfaces (BCIs) were employed to construct a template-matching-based identification framework. For intra-session identification, we achieved a 100% correct recognition rate (CRR) using 5.25-s EEG data (average of five trials for M5). For cross-session identification, 99.43% CRR was attained using 10.5-s EEG signals (average of ten trials for M5). These results suggest that the proposed c-VEP based individual identification method is promising for real-world applications.
## Monday, 19 August 2019
### 04:00 PM
Nature, Published online: 19 August 2019; doi:10.1038/d41586-019-02452-4
The tip of a scanning tunnelling microscope has been used to convert a molecular assembly into a 2D polymer and back, at room temperature — revealing how extreme environmental conditions can alter the progress of reactions.
## Tuesday, 13 August 2019
### 11:00 PM
Folates are critical for central nervous system function. Folate transport is mediated by 3 major pathways, reduced folate carrier (RFC), proton-coupled folate transporter (PCFT), and folate receptor alpha (FRα/Folr1), known to be regulated by ligand-activated nuclear receptors. Cerebral folate delivery primarily occurs at the choroid plexus through FRα and PCFT;...
Environmental conditions are key factors in the progression of plant disease epidemics. Light affects the outbreak of plant diseases, but the underlying molecular mechanisms are not well understood. Here, we report that the light-harvesting complex II protein, LHCB5, from rice is subject to light-induced phosphorylation during infection by the rice...
Diverse organisms, from insects to humans, actively seek out sensory information that best informs goal-directed actions. Efficient active sensing requires congruity between sensor properties and motor strategies, as typically honed through evolution. However, it has been difficult to study whether active sensing strategies are also modified with experience. Here, we...
## Monday, 12 August 2019
### 11:00 PM
Although KRAS and TP53 mutations are major drivers of pancreatic ductal adenocarcinoma (PDAC), the incurable nature of this cancer still remains largely elusive. ARF6 and its effector AMAP1 are often overexpressed in different cancers and regulate the intracellular dynamics of integrins and E-cadherin, thus promoting tumor invasion and metastasis when...
Participatory sensing is a crowdsourcing-based framework, where the platform executes the sensing requests with the help of many common peoples’ handheld devices (typically smartphones). In this paper, we mainly address the online sensing request admission and smartphone selection problem to maximize the profit of the platform, taking into account the queue backlog, and the location of sensing requests and smartphones. First, we formulate this problem as a discrete time model and design a location aware online admission and selection control algorithm (LAAS) based on the Lyapunov optimization technique. The LAAS algorithm only depends on the currently available information and makes all the control decisions independently and simultaneously. Next, we utilize the recent advancement of the accurate prediction of smartphones’ mobility and sensing request arrival information in the next few time slots and develop a predictive location aware admission and selection control algorithm (PLAAS). We further design a greedy predictive location aware admission and selection control algorithm (GPLAAS) to achieve the online implementation of PLAAS approximately and iteratively. Theoretical analysis shows that under any control parameter V > 0, both LAAS and PLAAS algorithm can achieve O(1/V)-optimal average profit, while the sensing request backlog is bounded by O(V). Extensive numerical results based on both synthetic and real trace show that LAAS outperforms the Greedy algorithm and Random algorithm and GPLAAS improves the profit-backlog tradeoff over LAAS.
This paper presents an energy management method to optimally control the energy supply and the temperature settings of distributed heating and ventilation systems for residential buildings. The control model attempts to schedule the supply and demand simultaneously with the purpose of minimizing the total costs. Moreover, the Predicted Percentage of Dissatisfied (PPD) model is introduced into the consumers’ cost functions and the quadratic fitting method is applied to simplify the PPD model. An energy management algorithm is developed to seek the optimal temperature settings, the energy supply, and the price. Furthermore, due to the ubiquity of price oscillations in electricity markets, we analyze and examine the effects of price oscillations on the performance of the proposed algorithm. Finally, the theoretical analysis and simulation results both demonstrate that the proposed energy management algorithm with price oscillations can converge to a region around the optimal solution.
The deployment of smart hybrid heat pumps (SHHPs) can introduce considerable benefits to electricity systems via smart switching between electricity and gas while minimizing the total heating cost for each individual customer. In particular, the fully optimized control technology can provide flexible heat that redistributes the heat demand across time for improving the utilization of low-carbon generation and enhancing the overall energy efficiency of the heating system. To this end, an accurate quantification of the preheating is of great importance to characterize the flexible heat. This paper proposes a novel data-driven preheating quantification method to estimate the capability of the heat pump demand shifting and isolate the effect of interventions. Varieties of fine-grained data from a real-world trial are exploited to estimate the baseline heat demand using Bayesian deep learning while jointly considering epistemic and aleatoric uncertainties. A comprehensive range of case studies are carried out to demonstrate the superior performance of the proposed quantification method, and then, the estimated demand shift is used as an input into the whole-system model to investigate the system implications and quantify the range of benefits of rolling out the SHHPs developed by PassivSystems to the future GB electricity systems.
Obtaining an appropriate model is very crucial to develop an efficient energy management system for the smart home, including photovoltaic (PV) array, plug-in electric vehicle (PEV), home loads, and heat pump (HP). Stochastic modeling methods of smart homes explain random parameters and uncertainties of the aforementioned components. In this paper, a concise yet comprehensive analysis and comparison are presented for these techniques. First, modeling methods are implemented to find appropriate and precise forecasting models for PV, PEV, HP, and home load demand. Then, the accuracy of each model is validated by the real measured data. Finally, the pros and cons of each method are discussed and reviewed. The obtained results show the conditions under which the methods can provide a reliable and accurate description of smart home dynamics.
Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.
## Thursday, 08 August 2019
### 04:00 PM
Machine Learning for Sociology [Annual Reviews: Annual Review of Sociology: Table of Contents]
Annual Review of Sociology, Volume 45, Issue 1, Page 27-45, July 2019.
The Role of Space in the Formation of Social Ties [Annual Reviews: Annual Review of Sociology: Table of Contents]
Annual Review of Sociology, Volume 45, Issue 1, Page 111-132, July 2019.
The Social Structure of Time: Emerging Trends and New Directions [Annual Reviews: Annual Review of Sociology: Table of Contents]
Annual Review of Sociology, Volume 45, Issue 1, Page 301-320, July 2019.
Retail Sector Concentration, Local Economic Structure, and Community Well-Being [Annual Reviews: Annual Review of Sociology: Table of Contents]
Annual Review of Sociology, Volume 45, Issue 1, Page 321-343, July 2019.
Well-Being at the End of Life [Annual Reviews: Annual Review of Sociology: Table of Contents]
Annual Review of Sociology, Volume 45, Issue 1, Page 515-534, July 2019.
Clothing and carrying status variations are the two key factors that affect the performance of gait recognition because people usually wear various clothes and carry all kinds of objects, while walking in their daily life. These covariates substantially affect the intensities within conventional gait representations such as gait energy images. Hence, to properly compare a pair of input gait features, an appropriate metric for joint intensity is needed in addition to the conventional spatial metric. We therefore propose a unified joint intensity transformer network for gait recognition that is robust against various clothing and carrying statuses. Specifically, the joint intensity transformer network is a unified deep learning-based architecture containing three parts: a joint intensity metric estimation net, a joint intensity transformer, and a discrimination network. First, the joint intensity metric estimation net uses a well-designed encoder-decoder network to estimate a sample-dependent joint intensity metric for a pair of input gait energy images. Subsequently, a joint intensity transformer module outputs the spatial dissimilarity of two gait energy images using the metric learned by the joint intensity metric estimation net. Third, the discrimination network is a generic convolution neural network for gait recognition. In addition, the joint intensity transformer network is designed with different loss functions depending on the gait recognition task (i.e., a contrastive loss function for the verification task and a triplet loss function for the identification task). The experiments on the world’s largest datasets containing various clothing and carrying statuses demonstrate the state-of-the-art performance of the proposed method.
At present, the fusion of different unimodal biometrics has attracted increasing attention from researchers, who are dedicated to the practical application of biometrics. In this paper, we explored a multi-biometric algorithm that integrates palmprints and dorsal hand veins (DHV). Palmprint recognition has a rather high accuracy and reliability, and the most significant advantage of DHV recognition is the biopsy (Liveness detection). In order to combine the advantages of both and implement the fusion method, deep learning and graph matching were, respectively, introduced to identify palmprint and DHV. Upon using the deep hashing network (DHN), biometric images can be encoded as 128-bit codes. Then, the Hamming distances were used to represent the similarity of two codes. Biometric graph matching (BGM) can obtain three discriminative features for classification. In order to improve the accuracy of open-set recognition, in multi-modal fusion, the score-level fusion of DHN and BGM was performed and authentication was provided by support vector machine (SVM). Furthermore, based on DHN, all four levels of fusion strategies were used for multi-modal recognition of palmprint and DHV. Evaluation experiments and comprehensive comparisons were conducted on various commonly used datasets, and the promising results were obtained in this case where the equal error rates (EERs) of both palmprint recognition and multi-biometrics equal 0, demonstrating the great superiority of DHN in biometric verification.
This paper proposes an explicit predictive current control scheme implemented with a low carrier frequency pulsewidth modulation (PWM) on an induction machine fed by a three-level neutral point clamped inverter. The PWM carrier and the main current sampling frequency are both set to 1 kHz, resulting in a 500 Hz average switching frequency per device, which is very suitable for large drive applications. The explicit predictive control is introduced to optimize the available bandwidth provided by such a low sampling frequency, maximizing the dynamic performance. The strategy has been tested in a 2.2-kW induction motor experimental prototype.
AC–DC light-emitting diode (LED) drivers suffer from short lifetime because of the low-lifetime electrolytic capacitors used for dc bus decoupling. In this paper, a primary-side peak current control method applied for driving a two-stage multichannel LED driver is proposed. The LED driver consists of an ac–dc boost power factor correction stage and an isolated dc–dc nonresonant stage. A long-lifetime and small film capacitor is used for implementing the intermediate dc bus. The proposed method, which controls the peak value of the primary-side current of the transformers, is applied to the dc–dc stage to ensure constant dc current output of LEDs in spite of the widely varying dc bus voltage due to low bus capacitance. The proposed method compensates the effect of the large dc bus voltage ripple by varying the switching frequency of the primary-side switches. Detailed design procedure, theoretical analysis, and experimental results of the LED driver operating at 180 W with the proposed method are provided. The LED driver with the proposed control method is proved to have high overall efficiency.
The objective of this paper is to develop a method for assisting users to push power-assisted wheelchairs (PAWs) in such a way that the electrical energy consumption over a predefined distance-to-go is optimal, while at the same time bringing users to a desired fatigue level. This assistive task is formulated as an optimal control problem and solved by Feng et al. using the model-free approach gradient of partially observable Markov decision processes. To increase the data efficiency of the model-free framework, we here propose to use policy learning by weighting exploration with the returns (PoWER) with 25 control parameters. Moreover, we provide a new near-optimality analysis of the finite-horizon fuzzy Q-iteration, which derives a model-based baseline solution to verify numerically the near-optimality of the presented model-free approaches. Simulation results show that the PoWER algorithm with the new parameterization converges to a near-optimal solution within 200 trials and possesses the adaptability to cope with changes of the human fatigue dynamics. Finally, 24 experimental trials are carried out on the PAW system, with fatigue feedback provided by the user via a joystick. The performance tends to increase gradually after learning. The results obtained demonstrate the effectiveness and the feasibility of PoWER in our application.
Recent years have witnessed the promising future of hashing in the industrial applications for fast similarity retrieval. In this paper, we propose a novel supervised hashing method for large-scale cross-media search, termed self-supervised deep multimodal hashing (SSDMH), which learns unified hash codes as well as deep hash functions for different modalities in a self-supervised manner. With the proposed regularized binary latent model, unified binary codes can be solved directly without relaxation strategy while retaining the neighborhood structures by the graph regularization term. Moreover, we propose a new discrete optimization solution, termed as binary gradient descent, which aims at improving the optimization efficiency toward real-time operation. Extensive experiments on three benchmark data sets demonstrate the superiority of SSDMH over state-of-the-art cross-media hashing approaches.
These instructions give guidelines for preparing papers for this publication. Presents information for authors publishing in this journal.
## Monday, 29 July 2019
### 04:00 PM
Ubiquitin (Ub)-mediated proteolysis is a fundamental mechanism used by eukaryotic cells to maintain homeostasis and protein quality, and to control timing in biological processes. Two essential aspects of Ub regulation are conjugation through E1-E2-E3 enzymatic cascades and recognition by Ub-binding domains. An emerging theme in the Ub field is that...
## Wednesday, 17 July 2019
### 04:00 PM
Researchers at Boston Children's Hospital report creating the first human tissue model of an inherited heart arrhythmia, replicating two patients' abnormal heart rhythms in a dish, and then suppressing the arrhythmia with gene therapy in a mouse model.
Women tend to have a greater immune response to a flu vaccination compared to men, but their advantage largely disappears as they age and their estrogen levels decline, suggests a study from researchers at the Johns Hopkins Bloomberg School of Public Health.
## Tuesday, 16 July 2019
### 04:00 PM
Cryptococcus neoformans is a fungal pathogen that infects people with weakened immune systems, particularly those with advanced HIV/AIDS. New University of Minnesota Medical Research could mean a better understanding of this infection and potentially better treatments for patients.
In a massive new analysis of findings from 277 clinical trials using 24 different interventions, Johns Hopkins Medicine researchers say they have found that almost all vitamin, mineral and other nutrient supplements or diets cannot be linked to longer life or protection from heart disease.
A new study led by Dr. Antonella Fioravanti in the lab of Prof. Han Remaut (VIB-VUB Center for Structural Biology) has shown that removing the armor of the bacterium that causes anthrax slows its growth and negatively affects its ability to cause disease. This work will be published in the prestigious journal Nature Microbiology can lead the way to new, effective ways of fighting anthrax and various other diseases.
## Sunday, 09 June 2019
### 07:32 PM
The Economics and Politics of Preferential Trade Agreements [Annual Reviews: Annual Review of Political Science: Table of Contents]
Annual Review of Political Science, Volume 22, Issue 1, Page 75-92, May 2019.
The Politics of Housing [Annual Reviews: Annual Review of Political Science: Table of Contents]
Annual Review of Political Science, Volume 22, Issue 1, Page 165-185, May 2019.
Bias and Judging [Annual Reviews: Annual Review of Political Science: Table of Contents]
Annual Review of Political Science, Volume 22, Issue 1, Page 241-259, May 2019.
Climate Change and Conflict [Annual Reviews: Annual Review of Political Science: Table of Contents]
Annual Review of Political Science, Volume 22, Issue 1, Page 343-360, May 2019.
Annual Review of Political Science, Volume 22, Issue 1, Page 399-417, May 2019.
## Tuesday, 12 February 2019
### 04:00 PM
Cysteine-Based Redox Sensing and Its Role in Signaling by Cyclic Nucleotide–Dependent Kinases in the Cardiovascular System [Annual Reviews: Annual Review of Physiology: Table of Contents]
Annual Review of Physiology, Volume 81, Issue 1, Page 63-87, February 2019.
Biomarkers of Acute and Chronic Kidney Disease [Annual Reviews: Annual Review of Physiology: Table of Contents]
Annual Review of Physiology, Volume 81, Issue 1, Page 309-333, February 2019.
Cellular Metabolism in Lung Health and Disease [Annual Reviews: Annual Review of Physiology: Table of Contents]
Annual Review of Physiology, Volume 81, Issue 1, Page 403-428, February 2019.
Innate Lymphoid Cells of the Lung [Annual Reviews: Annual Review of Physiology: Table of Contents]
Annual Review of Physiology, Volume 81, Issue 1, Page 429-452, February 2019.
Regulation of Blood and Lymphatic Vessels by Immune Cells in Tumors and Metastasis [Annual Reviews: Annual Review of Physiology: Table of Contents]
Annual Review of Physiology, Volume 81, Issue 1, Page 535-560, February 2019.
## Thursday, 09 August 2018
### 04:00 PM
Sorting in the Labor Market [Annual Reviews: Annual Review of Economics: Table of Contents]
Annual Review of Economics, Volume 10, Issue 1, Page 1-29, August 2018.
Radical Decentralization: Does Community-Driven Development Work? [Annual Reviews: Annual Review of Economics: Table of Contents]
Annual Review of Economics, Volume 10, Issue 1, Page 139-163, August 2018.
The Development of the African System of Cities [Annual Reviews: Annual Review of Economics: Table of Contents]
Annual Review of Economics, Volume 10, Issue 1, Page 287-314, August 2018.
Idea Flows and Economic Growth [Annual Reviews: Annual Review of Economics: Table of Contents]
Annual Review of Economics, Volume 10, Issue 1, Page 315-345, August 2018.
Progress and Perspectives in the Study of Political Selection [Annual Reviews: Annual Review of Economics: Table of Contents]
Annual Review of Economics, Volume 10, Issue 1, Page 541-575, August 2018.
## Feeds
FeedRSSLast fetchedNext fetched after
Annual Reviews: Annual Review of Economics: Table of Contents 04:00 PM, Friday, 23 August 2019 07:00 PM, Friday, 23 August 2019
Annual Reviews: Annual Review of Nutrition: Table of Contents 04:00 PM, Friday, 23 August 2019 07:00 PM, Friday, 23 August 2019
Annual Reviews: Annual Review of Physiology: Table of Contents 04:00 PM, Friday, 23 August 2019 07:00 PM, Friday, 23 August 2019
Annual Reviews: Annual Review of Political Science: Table of Contents 04:00 PM, Friday, 23 August 2019 07:00 PM, Friday, 23 August 2019
Annual Reviews: Annual Review of Sociology: Table of Contents 04:00 PM, Friday, 23 August 2019 07:00 PM, Friday, 23 August 2019
cs.CR updates on arXiv.org 04:00 PM, Friday, 23 August 2019 07:00 PM, Friday, 23 August 2019
Early Edition 04:00 PM, Friday, 23 August 2019 07:00 PM, Friday, 23 August 2019
EurekAlert! - Breaking News 04:00 PM, Friday, 23 August 2019 07:00 PM, Friday, 23 August 2019
IEEE Transactions on Image Processing - new TOC 04:00 PM, Friday, 23 August 2019 07:00 PM, Friday, 23 August 2019
IEEE Transactions on Industrial Electronics - new TOC 04:00 PM, Friday, 23 August 2019 07:00 PM, Friday, 23 August 2019
IEEE Transactions on Industrial Informatics - new TOC 04:00 PM, Friday, 23 August 2019 07:00 PM, Friday, 23 August 2019
IEEE Transactions on Information Forensics and Security - new TOC 04:00 PM, Friday, 23 August 2019 07:00 PM, Friday, 23 August 2019
JAMA Current Issue 04:00 PM, Friday, 23 August 2019 07:00 PM, Friday, 23 August 2019
Latest BMJ Research 04:00 PM, Friday, 23 August 2019 07:00 PM, Friday, 23 August 2019
Nature - Issue - nature.com science feeds 04:00 PM, Friday, 23 August 2019 07:00 PM, Friday, 23 August 2019
The Lancet 04:00 PM, Friday, 23 August 2019 07:00 PM, Friday, 23 August 2019 | 2019-08-26 00:43:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2521474063396454, "perplexity": 3039.357183870654}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330913.72/warc/CC-MAIN-20190826000512-20190826022512-00479.warc.gz"} |
https://www.tec-science.com/mechanics/gases-and-liquids/how-do-boats-float-buoyancy-in-liquids/ | Buoyancy is the force directed against gravity that an object experiences when submerged in a fluid (liquid or gas).
## Indroduction
Everyone may have tried to lift another person and found that this requires a lot of strength. However, if you try to lift this person in water, it is much easier. The reason for this is due to the so-called buoyancy, which an object experiences as soon as it is submerged in a liquid. This buoyant force is also responsible for the fact that even steel ships weighing tons do not sink but float on the water. The cause of the buoyancy will be discussed in more detail in this article.
## Demonstration of buoyancy
The following experiment will demonstrate the effect of the buoyant force. A spring scale (newton meter) is attached to a metal cuboid. Without touching the bottom, the piece of metal is gradually submerged in a glass of water and the newton meter is observed.
Once the metal piece has reached the water, the indicated value of the newton meter decreases steadily with increasing immersion depth. Only when the cuboid is completely submerged in water does the spring scale show a constant value again. The decreasing force has nothing to do with a decreasing weight, because the mass of the metal block does not change. Rather, the buoyant force acting against gravity increases with increasing immersion depth. The buoyant force corresponds to the amount by which the body appears to have become lighter in water.
The more an object is submerged in a liquid, the greater the buoyancy acting on it! The buoyant force is always directed in the opposite direction to gravity!
## The buoyancy: The Archimedes’ principle
The scientist Archimedes experimented with the phenomenon buoyancy already 250 years B.C. He was able to show that the buoyant force by which a submerged body appears to become lighter corresponds to the weight of the displaced liquid. The term displaced liquid refers to the amount of liquid that has to give way to the body when it is submerged. This is the amount of liquid that theoretically overflows when a glass is full to the brim when a body is submerged. The weight of this overflowed liquid then corresponds to the buoyant force. This statement is also called the Archimedes’ principle.
The Archimedes’ principle states that the buoyant force corresponds to the weight of the displaced liquid!
When an object is completely submerged in a liquid, the volume of the displaced liquid obviously corresponds to the volume of the immersed body. If, for example, the 54 g metal cuboid made of aluminium has a square base area of 4 cm² and a height of 5 cm, this results in a volume of 20 cm³ (20 ml). Consequently, when completely submerged in water, the cuboid displaces a liquid volume of 20 ml. At a water density of 1 g per cm³ this corresponds to a displaced water mass of 20 g. The 54 g metal cuboid therefore feels 20 g lighter under water. A spring scale would therefore indicate only 340 mN instead of 540 mN.
Note that the submersion of the body does not change its weight, but a buoyant force against the weight is now acting, which leads to an reduced resultant force. It is therefore advisable not to argue with the masses (even if this is more descriptive), but with the forces! If the weight of the body is denoted by $$F_g$$ and the counteracting buoyant force by $$F_b$$, then for the resultant force $$F_{res}$$ that the body experiences applies:
\begin{align}
\label{res}
&\boxed{F_{res} = F_g – F_b} \\[5px]
\end{align}
If the metal block is not completely submerged in the liquid, but only partially, then it obviously does not displace as much water. A body will only displace as much fluid as the body volume actually submerges. If only half of the body volume is submerged, the body displaces only half of the water. Accordingly, the buoyancy is only half as great. If $$\Delta V$$ denotes the submerged body volume (= displaced liquid volume) and $$\rho_l$$ the density of the liquid, then the mass $$\Delta m$$ of the displaced liquid can be calculated as follows:
\begin{align}
&\Delta m = \Delta V \cdot \rho_l \\[5px]
\end{align}
For the buoyant force $$F_b$$ as the weight of the displaced liquid, then finally applies:
\begin{align}
&F_b = \Delta m \cdot g \\[5px]
\label{arch}
&\boxed{F_b = \Delta V \cdot \rho_l \cdot g} \\[5px]
\end{align}
### Derivation of the buoyant force
The buoyancy is due to the different hydrostatic pressures at the top and bottom of a submerged body. For the sake of simplicity, a cuboid object is again considered, which is completely submerged in the surrounding liquid.
In the article Pressure In Liquids, the cause of liquid pressures has already been explained in detail. They only result from the depth below the liquid surface. The deeper a point lies below the liquid surface, the greater the liquid pressure and the resulting force. In this way, the upward acting force on the bottom of the body is therefore greater than the downward acting force on the top. Thus a force effectively acts upwards, the buoyant force!
The liquid pressure at the bottom of the object is determined from the depth $$h_2$$ as follows:
\begin{align}
&p_2 = \rho_l \cdot g \cdot h_2 \\[5px]
\end{align}
In this equation, $$\rho_l$$ denotes the density of the liquid. Analogously, for the hydrostatic pressure at the depth $$h_1$$ at the top of the cuboid, applies:
\begin{align}
&p_1 = \rho_l \cdot g \cdot h_1 \\[5px]
\end{align}
The respective forces at the bottom and top side of the cuboid are determined according to the Definition Of Pressure by the product of pressure and surface area ($$F=p \cdot A$$). The surface area in this case is the base area $$A$$ of the cuboid:
\begin{align}
&\underline{F_2 = \rho_l \cdot g \cdot h_2 \cdot A} ~~~~~\text{or}~~~~~ \underline{F_1 = \rho_l \cdot g \cdot h_1 \cdot A} \\[5px]
\end{align}
The buoyant force $$F_b$$, with which the body is effectively pushed upwards, results from the difference of the forces:
\begin{align}
&F_b = F_2 – F_1 \\[5px]
&F_b = \rho_l \cdot g \cdot h_2 \cdot A – \rho_l \cdot g \cdot h_1 \cdot A \\[5px]
\label{d}
&F_b = \rho_l \cdot g \cdot A \cdot \left(h_2-h_1\right) \\[5px]
\end{align}
The difference in the depths corresponds exactly to the height $$h$$ of the cuboid. Furthermore, it can be used that the product of height and base area corresponds to the volume $$V_b$$ of the submerged body:
\begin{align}
&F_b = \rho_l \cdot g \cdot A \cdot \underbrace{\left(h_2-h_1\right)}_{=h} \\[5px]
&F_b = \rho_l \cdot g \cdot \underbrace{A \cdot h}_{=V_b} \\[5px]
\label{ein}
&\boxed{F_b = V_b \cdot \rho_l \cdot g}~~~~~\text{buoyant force with complete immersion} \\[5px]
\end{align}
Note that the depth at which the object is exactly located is of no importance to the buoyant force. From equation (\ref{d}) it is already clear that only the difference in depth between top and bottom is relevant, i.e. the height of the object*. In combination with the base area of the object, only the dependence on its volume results from this. For simplicity’s sake, this formula was derived from a cuboid, but it applies in principle to any body of any shape as long as its volume $$V_b$$ is completely submerged in the liquid (a more general derivation of the buoyant force, which also takes arbitrarily shaped bodies into account, is shown in the next section “Derivation of the Archimedes’ principle“).
*) For this reason, the ambient pressure on the surface of the liquid is also irrelevant, which normally acts in addition to the hydrostatic pressure. This is because the ambient pressure acts equally on both the top and bottom of the body and thus cancel each other out.
If an object is not completely submerged in a liquid (as it was the case in the previous derivation) but is only partially submerged, then the volume $$V_b$$ refers only to the actually submerged part of the body volume $$\Delta V$$ (= displaced liquid volume). The buoyant force is then generated exclusively by the hydrostatic pressure at the bottom of the body:
\begin{align}
F_b &=p \cdot A \\[5px]
&= \rho_l \cdot g \cdot \underbrace{h \cdot A}_{\Delta V} \\[5px]
\end{align}
\begin{align}
&\boxed{F_b = \Delta V \cdot \rho_l \cdot g} ~~~~~\text{applies in general} \\[5px]
\end{align}
At this point one can now also see the Archimedes’ principle. In the equation above, the product of displaced liquid volume $$\Delta V$$ and liquid density $$\rho_l$$ can be interpreted as the mass of the displaced liquid. Furthermore, the product of displaced liquid mass $$\Delta m$$ and gravitational acceleration $$g$$ results in the weight of the displaced liquid $$F_{g,dis}$$:
\begin{align}
&F_b = \underbrace{\Delta V \cdot \rho_l}_{\Delta m} \cdot g \\[5px]
&F_b = \underbrace{\Delta m \cdot g}_{F_{g,dis}} \\[5px]
&\boxed{F_b = F_{g,dis}} \\[5px]
\end{align}
### Derivation of the Archimedes’ principle for arbitrarily shaped bodies
The derivation of the buoyant force in the previous section was based on an object with a relatively simple geometry on which the acting forces could be calculated relatively easily. That the derived formula can not only be applied to such simple shaped objects, but that the Archimedes’ principle applies to arbitrarily shaped bodies, will be shown in the following.
For this purpose a vessel filled with water is considered. In the Article Pressure In Liquids it has already been explained in detail that the hydrostatic pressure in a liquid is caused by the weight of the liquid column above it. If, for example, the pressure at the bottom of the left vessel is considered, the liquid pressure at the bottom results from the weight of the water mass above (the object has not yet been submerged).
If one immerses now an arbitrarily shaped object into the water, then it experiences a certain buoyant force. According to Newton’s third law (“action = reaction”), the buoyant force exerted by the water on the object corresponds to the force that the object additionally exerts on the water when the situation is viewed from the opposite perspective (i.e. from the water’s point of view)! The force on the bottom of the vessel thus results from the sum of the weight of the water $$F_{g,water}$$ and the buoyant force $$F_b$$:
\begin{align}
\label{fa}
&F_{bottom} = F_{g,water} + F_b \\[5px]
\end{align}
Note that if the submerged body floats in the liquid, the buoyant force is obviously equal to the weight of the body (otherwise the object would sink to the ground). In this case it becomes clear that not only the weight of the liquid but also the weight of the floating object is acting on the bottom of the vessel. In the general case of a non-floating object (as in the case of the metal cuboid considered above, which was immersed in water by means of a spring scale), however, not the entire weight of the body is applied to the water, but only the weight minus the force with which the object is held. This difference corresponds exactly to the buoyant force (see also the figure Demonstration of the Archimedes’ principle)! Therefore, the resultant force acting on the bottom of the vessel in general results from the sum of the weight of the liquid column and the buoyant force of the submerged object.
In the article Pressure In Liquids it has already been explained in detail that the hydrostatic pressure results only from the considered depth below the water surface. Regarding the pressure at the bottom of the vessel, the water with the submerged object behaves in the same way as a vessel that is only filled with water and thereby has the same water level (principle of communicating vessels) – see the two vessels on the right in the figure above. One can thus imagine the submerged body volume as filled with water; this would obviously have the same effect on the bottom of the vessel.
With this perspective, the force acting on the bottom of the vessel results from the sum of the water weight outside the imaginary immersion volume ($$F_{g,water}$$) and the water weight inside the imaginary immersion volume ($$F_{g,dis}$$). The latter corresponds to the weight of the water which the submerged object displaces in the previous perspective. It therefore applies to the second approach:
\begin{align}
\label{fb}
&F_{bottom} = F_{g,water} + F_{g,dis} \\[5px]
\end{align}
Since both approaches obviously lead to the same force on the bottom of the vessel, equations (\ref{fa}) and (\ref{fb}) can be equated:
\begin{align}
\require{cancel}
&\bcancel{F_{g,water}} + F_b = \bcancel{F_{g,water}} + F_{g,dis} \\[5px]
&\boxed{F_b = F_{g,dis}} \\[5px]
\end{align}
This shows that the buoyancy corresponds directly to the weight of the displaced liquid, regardless of how the submerged object is shaped!
## Sinking, rising and floating
Whether a fully submerged object sinks, rises or floats at a given buoyancy depends on the weight of the object.
If the weight of a body is greater than the buoyancy, then according to equation (\ref{res}) it will descend with the difference of the forces to the ground. This corresponds to the force indicated by the spring balance when the object is attached to it. If, on the other hand, the buoyancy of a submerged object is greater than its weight, then it will ascend to the surface with the difference of the forces. To display this resultant force, the spring balance would then have to be attached to the object from below. However, if the buoyancy is equal to the weight, the body will appear to float “weightless” in the liquid. An attached spring balance would not indicate a resultant force. This apparent weightlessness in liquids is used, for example, to prepare astronauts for space missions.
For a homogeneous object its weight can be determined by the body volume $$V_b$$ and the density of the body $$\rho_b$$:
\begin{align}
&F_g = \overbrace{V_b \cdot \rho_b}^{m_b} \cdot g \\[5px]
\end{align}
If at this point the buoyant force according to equation (\ref{ein}) is used, then due to equation (\ref{res}) the following resultant force acts on the completely immersed object:
\begin{align}
&F_{res} = F_g – F_b \\[5px]
&F_{res} = V_b \cdot \rho_b \cdot g – V_b \cdot \rho_l \cdot g \\[5px]
\label{auf}
&\boxed{F_{res} = V_b \cdot g \cdot \left( \rho_b – \rho_l \right)} ~~~\text{resultant force at full immersion}\\[5px]
\end{align}
Using this formula, the conditions for descending, ascending or floating can now be clearly explained. If the density of the submerged body is greater than that of the surrounding liquid, a positive force results which drags the body towards the ground. If, on the other hand, the density of the body is less than that of the liquid, the result is a negative force. This means that the direction of the force is reversed and the submerged object is pulled to the surface. Only in the case that the density of the body corresponds exactly to the density of the liquid, the resultant force disappears. The body seems to float forceless in the liquid.
The considerations of the bodies assumed to be homogeneous can also be extended to inhomogeneous objects, i.e. in particular to objects consisting of different materials and thus different densities. The density $$\rho_b$$ of a inhomogeneous body then refers to the mean density, i.e. to the average density which one obtains mathematically, if one refers the total mass of the body $$m_b$$ to its total volume $$V_b$$:
\begin{align}
&\boxed{\rho_b = \frac{m_b}{V_b}} ~~~~~\text{mean density} \\[5px]
\end{align}
If the mean density of an immersed object is less than that of the surrounding liquid, the object floats to the surface. If the mean density is greater, the object sinks to the bottom. If the densities are the same, the object floats in the liquid.
This also explains why even steel ships weighing tons can float. The average density of a ship is lower than that of the surrounding water. This is achieved by the fact that a ship’s hull is not a massive steel body, but only a steel hull. The interior consists mainly of air. In relation to the volume of the hull, it has a relatively low mass and thus a low mean density, at least a significantly lower (average) density than the surrounding water. Thus the ship’s hull ensures that if it is submerged too much, a large buoyant force is generated which keeps the entire ship above water.
If, on the other hand, water penetrates into the hull, the relatively light air gives way to the penetrating heavy water and the mean density increases. If the mean density is greater than that of the surrounding water (at the latest when the entire hull is full of water), then the ship will sink.
A targeted control of the mean density of a floating body by means of air and water can be found, for example, in submarines. In this way a targeted descending and ascending as well as floating in water is made possible. Depending on the manoeuvre, either water or air is pumped into special ballast tanks. During descending, for example, the air-filled tanks are flooded with water, so that the mean density of the submarine is greater than that of the surrounding water. When the submarine ascends, however, the water in the tanks is pushed out with the aid of compressed air. The mean density of the submarine drops and finally ascends. When floating in water, the tanks are only partially filled with water or air, so that the mean density corresponds exactly to that of the surrounding water.
The fact that substances with lower densities than the surrounding medium rise upwards or substances with higher densities sink downwards also plays a major role in ocean currents. Among other things, these currents are due to the fact that cold and thus heavy water sinks downwards, while warmer and thus lighter water rises upwards. However, these differences in density are not only caused by temperature influences but also by the salt content. The density is higher in waters with a high salt content than in less salty regions.
## Immersion depth (draft)
When objects ascend in a liquid, experience shows that they do not emerge completely out of the liquid. A certain part will remain below the liquid surface, while the rest will float above the surface. An everyday example that illustrates this are ships whose hulls are obviously only partially submerged in the water. The question arises, of course, how to determine this depth of immersion, which in the case of ships is also referred to as draught of draft.
If an object floats, it obviously neither sinks nor rises. Consequently, there is no resultant force acting on the object, so there is a balance of forces between the downward acting weight and the upward acting buoyancy:
\begin{align}
&F_{res} = F_g – F_b \overset{!}{=}0 \\[5px]
&\underline{F_b = F_g} \\[5px]
\end{align}
The weight is therefore just as great as the buoyancy. According to the Archimedes’ principle, the buoyancy itself corresponds to the weight of the displaced liquid. So when an object floats on the surface, it will submerge until the weight of the displaced liquid (=buoyancy) corresponds to the weight of the object. If one imagines the volume of the object below the liquid surface to be completely filled with the surrounding liquid, this weight corresponds to the weight of the object. A ship with a mass of 50,000 tons, for example, will sink so deep that the submerged volume displaces 50,000 tons of water.
When floating, the object submerges to such a depth that it displaces as much liquid as it is heavy!
The immersion depth of an object therefore depends not only on its own mass, but also on the density of the surrounding liquid. For example, a ship will have a stronger draft in fresh water than in sea water, i.e. it will submerge deeper. Because of the dissolved salt, seawater has a density that is about 3 % higher than that of freshwater. The ship must therefore immerse more strongly in the “lighter” freshwater in order to displace the same mass of water as in the “heavier” salt water.
For ships, the maximum permissible draught is indicated by a so-called Plimsoll mark depending on the surrounding water (density). This mark is located on the side of the ship’s hull. The upper two lines towards the stern indicate the permitted draft in general freshwater (F) or tropical freshwater (TF). The other four lines towards the bow indicate the permitted draft in saltwater. These are located lower in comparison to the marks of the freshwater, as the ship is more buoyant in saltwater due to the higher water density. A distinction is made between tropical seawater (T), seawater in summer (S) and winter (W) and between waters of the North Atlantic in winter (WNA).
Plimsoll marks on ships indicate the permitted drafts depending on the surrounding water (density)!
This example of the Plimsoll mark also shows that the “heavier” the surrounding liquid is, i.e. the greater the density of the liquid, the stronger the buoyancy is. This can also be seen directly from the equation (\ref{arch}), in which the liquid density directly influences the buoyant force. This fact can also be seen when bathing in the Dead Sea. Due to the very high salt content of more than 30 %, the density of the water in the Dead Sea is about a quarter higher compared to freshwater. Consequently, the buoyancy there is also about 25 % greater than in freshwaters. This leads to the fact that one floats in the Dead Sea without the need to swim.
## Outlook
In this article, liquids were considered for the sake of clarity, but not only in liquids but also in gases buoyant forces are acting, which finally are based on same cause. At article Buoyancy In Gases this is discussed in more detail. | 2021-06-18 00:03:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9370661377906799, "perplexity": 423.9345364580253}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487634576.73/warc/CC-MAIN-20210617222646-20210618012646-00469.warc.gz"} |
https://stats.stackexchange.com/questions/127155/guessing-the-length-of-a-fish/127168 | # Guessing the length of a fish
I would like to solve the following excercise. Any help is appreciated.
90% of the fish in our pond are males, the rest are females.
The length of the males are:
$X+5$ inches, where $X\sim exp(1)$
The length of the females are:
$Y+8$ inches, where $Y \sim exp(2)$
What is the probability that a fish whose length is $x$ is male, and how can we guess the sex of the fish from their length if we want that our guess is right with the biggest possible probability?
This is a classic exercise of conditional probability. The most important thing is to get right what how to write the question of the exercise. In this case, we want the probability of a fish whose length is x being male. The first thing is to write down this probability, which is conditional: is the probability of being male GIVEN THAT the length is x, then we want to compute $$P(male|x)$$ to do that now we use Bayes Theorem: $$P(male|x)=\frac{P(x|male)P(male)}{P(x)}$$ we do this because in this way we can use information that we have in the problem statement, like $P(male)$ and $P(x|male)$ (the exponential distribution). The only left thing is $P(x)$, which should be computed using law of total probability, $$P(x)=P(x|male)P(male)+P(x|female)P(female)$$ | 2022-05-18 00:57:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8789761662483215, "perplexity": 214.88760401683152}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662520936.24/warc/CC-MAIN-20220517225809-20220518015809-00413.warc.gz"} |
https://nbviewer.org/github/barbagroup/jupyter-tutorial/blob/master/3--Jupyter%20like%20a%20pro.ipynb | # Jupyter like a pro¶
In this third notebook of the tutorial "The World of Jupyter", we want to leave you with pro tips for using Jupyter in your future work.
## Importing libraries¶
First, a word on importing libraries. Previously, we used the following command to load all the functions in the NumPy library:
import numpy
Once you execute that command in a code cell, you call any NumPy function by prepending the library name, e.g., numpy.linspace(), numpy.ones(), numpy.zeros(), numpy.empty(), numpy.copy(), and so on (explore the documentation for these very useful functions!).
But, you will find a lot of sample code online that uses a different syntax for importing. They will do:
import numpy as np
All this does is create an alias for numpy with the shorter string np, so you then would call a NumPy function like this: np.linspace(). This is just an alternative way of doing it, for lazy people that find it too long to type numpy and want to save 3 characters each time. For the not-lazy, typing numpy is more readable and beautiful. We like it better like this:
In [1]:
import numpy
When you make a plot using Matplotlib, you have many options to make your plots beautiful and publication-ready. Here are some of our favorite tricks.
First, let's load the pyplot module—and remember, %matplotlib notebook gets our plots inside the notebook (instead of a pop-up).
Our first trick is rcparams: we use it to customize the appearance of the plots. Here, we set the default font to a serif type of size 14 pt and make the size of the font for the axes labels 18 pt. Honestly, the default font is too small.
In [2]:
from matplotlib import pyplot
%matplotlib notebook
pyplot.rcParams['font.family'] = 'serif'
pyplot.rcParams['font.size'] = 14
pyplot.rcParams['axes.labelsize'] = 18
The following example is from a tutorial by Dr. Justin Bois, a lecturer in Biology and Biological Engineering at Caltech, for his class in Data Analysis in the Biological Sciences (2015). He has given us permission to use it.
In [3]:
# Get an array of 100 evenly spaced points from 0 to 2*pi
x = numpy.linspace(0.0, 2.0 * numpy.pi, 100)
# Make a pointwise function of x with exp(sin(x))
y = numpy.exp(numpy.sin(x))
Here, we added comments in the Python code with the # mark. Comments are often useful not only for others who read the code, but as a "note to self" for the future you!
Let's see how the plot looks with the new font settings we gave Matplotlib, and make the plot more friendly by adding axis labels. This is always a good idea!
In [4]:
pyplot.figure()
pyplot.plot(x, y, color='k', linestyle='-')
pyplot.xlabel('$x$')
pyplot.ylabel('$\mathrm{e}^{\sin(x)}$')
pyplot.xlim(0.0, 2.0 * numpy.pi);
Did you see how Matplotlib understands LaTeX mathematics? That is beautiful. The function pyplot.xlim() specifies the limits of the x-axis (you can also manually specify the y-axis, if the defaults are not good for you).
Continuing with the tutorial example by Justin Bois, let's have some mathematical fun and numerically compute the derivative of this function, using finite differences. We need to apply the following mathematical formula on all the discrete points of the x array:
$$\frac{\mathrm{d}y(x_i)}{\mathrm{d}x} \approx \frac{y(x_{i+1}) - y(x_i)}{x_{i+1} - x_i}.$$
By the way, did you notice how we can typeset beautiful mathematics within a markdown cell? The Jupyter notebook is happy typesetting mathematics using LaTeX syntax.
Since this notebook is "Jupyter like a pro," we will define a custom Python function to compute the forward difference. It is good form to define custon functions to make your code modular and reusable.
In [5]:
def forward_diff(y, x):
"""Compute derivative by forward differencing."""
# Use numpy.empty to make an empty array to put our derivatives in
deriv = numpy.empty(y.size - 1)
# Use a for-loop to go through each point and compute the derivative.
for i in range(deriv.size):
deriv[i] = (y[i+1] - y[i]) / (x[i+1] - x[i])
# Return the derivative (a NumPy array)
return deriv
# Call the function to perform finite differencing
deriv = forward_diff(y, x)
Notice how we define a function with the def statement, followed by our custom name for the fuction, the function arguments in parenthesis, and ending the statement with a colon. The contents of the function are indicated by the indentation (four spaces, in this case), and the return statement indicates what the function returns to the code that called it (in this case, the contents of the variable deriv). Right after the function definition (in between triple quotes) is the docstring, a short text documenting what the function does. It is good form to always write docstrings for your functions!
In our custom forward_diff() function, we used numpy.empty() to create an empty array of length y.size-1, that is, one less than the length of the array y. Then, we start a for-loop that iterates over values of i using the range() function of Python. This is a very useful function that you should think about for a little bit. What it does is create a list of integers. If you give it just one argument, it's a "stop" argument: range(stop) creates a list of integers from 0 to stop-1, i.e., the list has stop numbers in it because it always starts at zero. But you can also give it a "start" and "step" argument.
Experiment with this, if you need to. It's important that you internalize the way range() works. Go ahead and create a new code cell, and try things like:
for i in range(5):
print(i)
changing the arguments of range(). (Note how we end the for statement with a colon.) Now think for a bit: how many numbers does the list have in the case of our custom function forward_diff()?
Now, we will make a plot of the numerical derivative of $\exp(\sin(x))$. We can also compare with the analytical derivative:
$$\frac{\mathrm{d}y}{\mathrm{d}x} = \mathrm{e}^{\sin x}\,\cos x = y \cos x,$$
In [6]:
deriv_exact = y * numpy.cos(x) # analytical derivative
pyplot.figure()
pyplot.plot((x[1:] + x[:-1]) / 2.0, deriv,
label='numerical',
marker='.', color='gray',
linestyle='None', markersize=10)
pyplot.plot(x, deriv_exact,
label='analytical',
color='k', linestyle='-') # analytical derivative in black line
pyplot.xlabel('$x$')
pyplot.ylabel('$\mathrm{d}y/\mathrm{d}x$')
pyplot.xlim(0.0, 2.0 * numpy.pi)
pyplot.legend(loc='upper center', numpoints=1);
Stop for a bit and look at the first pyplot.plot() call above. The square brackets normally are how you access a particular element of an array via its index: x[0] is the first element of x, and x[i+1] is the i-th element. What's very cool is that you can also use negative indices: they indicate counting backwards from the end of the array, so x[-1] is the last element of x.
A neat trick of arrays is called slicing: picking elements using the colon notation. Its general form is x[start:stop:step]. Note that, like the range() function, the stop index is exclusive, i.e., x[stop] is not included in the result.
For example, this code will give the odd numbers from 1 to 7:
x = numpy.array( [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] )
x[1:-1:2]
Try it! Remember, Python arrays are indexed from 0, so x[1] is the second element. The end-point in the slice above is index -1, that's the last array element (not included in the result), and we're stepping by 2, i.e., every other element. If the step is not given, it defaults to 1. If start is not given, it defaults to the first array element, and if stop is not given, it defaults to the last element. Try several variations on the slice, until you're comfortable with it.
## There's a built-in for that¶
Here's another pro tip: whenever you find yourself writing a custom function for something that seems that a lot of people might use, find out first if there's a built-in for that. In this case, NumPy does indeed have a built-in for taking the numerical derivative by differencing! Check it out. We also use the function numpy.allclose() to check if the two results are close.
In [7]:
numpy_deriv = numpy.diff(y) / numpy.diff(x)
print('Are the two results close? {}'.format(numpy.allclose(numpy_deriv, deriv)))
Are the two results close? True
Not only is the code much more compact and easy to read with the built-in NumPy function for the numerical derivative ... it is also much faster:
In [8]:
%timeit numpy_deriv = numpy.diff(y) / numpy.diff(x)
%timeit deriv = forward_diff(y, x)
100000 loops, best of 3: 13.4 µs per loop
10000 loops, best of 3: 75.2 µs per loop
NumPy functions will always be faster than equivalent code you write yourself because at the heart they use pre-compiled code and highly optimized numerical libraries, like BLAS and LAPACK.
## Do math like a pro¶
Do you want to compute the integral of $y(x) = \mathrm{e}^{\sin x}$? Of course you do. We find the analytical integral using the integral formulas for modified Bessel functions:
$$\int_0^{2\pi}\mathrm{d} x\, \mathrm{e}^{\sin x} = 2\pi \,I_0(1),$$
where $I_0$ is the modified Bessel function of the first kind. But if you don't have your special-functions handbook handy, we can find the integral with Python. We just need the right modules from the SciPy library. SciPy has a module of special functions, including Bessel functions, called scipy.special. Let's get that loaded, then use it to compute the exact integral:
In [9]:
import scipy.special
exact_integral = 2.0 * numpy.pi * scipy.special.iv(0, 1.0)
print('Exact integral: {}'.format(exact_integral))
Exact integral: 7.95492652101
Or instead, we may want to compute the integral numerically, via the trapezoid rule. The integral is over one period of a periodic function, so only the constant term of its Fourier series will contribute (the periodic terms integrate to zero). The constant Fourier term is the mean of the function over the interval, and the integral is the area of a rectangle: $2\pi \langle y(x)\rangle_x$. Sampling $y$ at $n$ evenly spaced points over the interval of length $2\pi$, we have:
\begin{align} \int_0^{2\pi}\mathrm{d} x\, y(x) \approx \frac{2\pi}{n}\sum_{i=0}^{n} y(x_i), \end{align}
NumPy gives as a mean method to quickly get the sum:
In [10]:
approx_integral = 2.0 * numpy.pi * y[:-1].mean()
print('Approximate integral: {}'.format(approx_integral))
print('Error: {}'.format(exact_integral - approx_integral))
Approximate integral: 7.95492652101
Error: 0.0
In [11]:
approx_integral = 2.0 * numpy.pi * numpy.mean(y[:-1])
print('Approximate integral: {}'.format(approx_integral))
print('Error: {}'.format(exact_integral - approx_integral))
Approximate integral: 7.95492652101
Error: 0.0
The syntax y.mean() applies the mean() NumPy method to the array y. Here, we apply the method to a slice of y that does not include the last element (see discussion of slicing above). We could have also done numpy.mean(y[:-1]) (the function equivalent of the method mean() applied to an array); they give equivalent results and which one you choose is a matter of style.
## Beautiful interactive plots with Bokeh¶
Matplotlib will be your workhorse for creating plots in notebooks. But it's not the only game in town! A recent new player is Bokeh, a visualization library to make amazing interactive plots and share them online. It can also handle very large data sets with excellent performance.
If you installed Anaconda in your system, you will probably already have Bokeh. You can check if it's there by running the conda list command. If you installed Miniconda, you will need to install it with conda install bokeh.
After installing Bokeh, we have many modules available: bokeh.plotting gives you the ability to create interactive figures with zoom, pan, resize, save, and other tools.
In [12]:
from bokeh import plotting as bplotting
Bokeh integrates with Jupyter notebooks by calling the output function, as follows:
In [13]:
bplotting.output_notebook()
In [14]:
# create a new Bokeh plot with axis labels, name it "bop"
bop = bplotting.figure(x_axis_label='x', y_axis_label='dy/dx')
# add a title, change the font
bop.title = "Derivative of exp(sin(x))"
bop.title_text_font = "palatino"
# add a line with legend and line thickness to "bop"
bop.line(x, deriv_exact, legend="analytical", line_width=2)
# add circle markers with legend, specify color
bop.circle((x[1:] + x[:-1]) / 2.0, deriv, legend="numerical", fill_color="gray", size=8, line_color=None)
bop.grid.grid_line_alpha=0.3
bplotting.show(bop);
Note—As of June 2016 (v.0.11.1), Bokeh does not support LaTeX on axis labels. This is an issue they are working on, so stay tuned!
Look at the neat tools on the Bokeh figure: you can zoom in to any portion to explore the data, you can drag the plot area around, resize and finally save the figure to a file. You also have many beautiful styling options!
# Optional next step: get interactive with Lorenz¶
We found two really cool ways for you to get interactive with the Lorenz equations! Try out the interactive blog post by Tim Head on Exploring the Lorenz equations (January 2016), and learn about IPython widgets. Or, check out the Lorentz example on Bokeh plots. Better yet, try them both.
(c) 2016 Lorena A. Barba. Free to use under Creative Commons Attribution CC-BY 4.0 License. This notebook was written for the tutorial "The world of Jupyter" at the Huazhong University of Science and Technology (HUST), Wuhan, China.
Example from Justin Bois (c) 2015 also under a CC-BY 4.0 License. | 2022-10-02 19:43:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 3, "x-ck12": 0, "texerror": 0, "math_score": 0.36572226881980896, "perplexity": 1827.3429948739474}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00333.warc.gz"} |
https://www.publiclab.org/tag/comments/potentiostat | # Potentiostat potentiostat
_For measuring electrochemically active compounds and microbes in water._ [](https://i.publiclab.org/system/images/photos/000/001/407/original/potentiostat_cell.png) ### Join the Discussion on the [Public Lab water quality list](https://groups.google.com/forum/#!forum/plots-waterquality) ### Background **Links to other Public Lab Electrochemistry wiki's / research notes** The design, construction, and operation of a low cost, open-source potentiostat (the WheeStat) has been described in a number of Public Lab wikis and research notes. Links to some of these pages are provided here: WheeStat user's [manual](http://publiclab.org/wiki/wheestat-user-s-manual). A wiki describing how to determine metal ion concentrations [electrochemically]((http://publiclab.org/wiki/detection-of-metals-in-water-with-the-wheestat). A site where you can purchase a WheeStat kit from [Public Lab](http://store.publiclab.org/collections/new-kits/products/wheestat-potentiostat). Instructions for assembling the WheeStat [kit](http://publiclab.org/notes/JSummers/08-07-2014/wheestat-kit-assembly). Making / purchasing low cost [electrodes](http://publiclab.org/notes/JSummers/01-09-2014/potentiostat-notes-5-how-to-make-low-cost-electrodes). **Potentiostats** can be used to test for electrochemically active compounds and microbes in solution, and thus have applications in many areas such as environmental monitoring, food and drug testing. Most commercially-available potentiostats are very expensive ($1000 is on the “cheap” side). There have been several initiatives in the last decade that have focused on designing cheaper alternatives; and when investigating technologies related to water quality assessment. Our aim here is to build on these efforts, and leverage the expertise of the open hardware community in order to build accessible, and capable, devices. Possible applications include: - **Tracking heavy metal concentrations in waterways.** Various industrial processes used in the US and abroad can lead to the contamination of water with heavy metals that are dangerous to humans, like mercury and arsenic. An inexpensive, battery-powered potentiostat -- communicating over the cellular network, perhaps, or merely recording locally to an SD card -- might be able to track relative fluctuations in the concentrations of these metals, making monitoring these contaminants easier. **Limitations of electrochemical techniques:** In order to detect and quantify a chemical species by electrochemical methods, that species has to undergo electron transfer at a voltage that is accessible under the solution conditions being employed. One major limitation to measuring metal species in water is due to oxidation / reductions of water itself. The oxidation of water (to give O2 and H+) limits how positive the voltage can be applied in water. Similarly, reduction to H2 and OH- limits how negative the voltage can be. The voltage limits will depend on things like the choice of electrode used and the pH of the solution. Still, there are a number of metals that can be quantified in water. Mendham, et al, (p 564, referenced below) list the following fifteen specific metals as having been determined by voltammetry: - antimony arsenic bismuth cadmium copper gallium - germanium gold indium lead mercury silver - thallium tin zinc - **A low-cost ‘field lab’ for evaluating water samples.** An inexpensive potentiostat, when used according to the proper protocols, might be used to indicate absolute concentrations of heavy metals in water. This could allow citizens and organizations who can’t afford to send water samples to an expensive, bonded laboratory to do their own testing -- particularly relevant in a developing-world context. - **Education.** Electrochemistry is an important part of many high school, college, and graduate chemistry curricula; an inexpensive potentiostat could render these curricula more accessible to educational institutions that don’t have the budget for the more expensive commercial versions. - **Research.** Making an easily-hackable, programmable, and extensible potentiostat platform, based on a widely-used and well-supported technologies like the Arduino and the Raspberry Pi, could allow for novel electrochemistry applications in the laboratory; when a device that once cost$2000 and didn’t “play nice” with other hardware and software suddenly becomes available for under \$200, and can be integrated with easy-to-use, open source software and hardware, researchers will dream up new approaches to open research problems -- and higher-throughput approaches in already-established research areas. ### Details Typically, electrochemical experiments utilize three electrodes, the Working Electrode (WE), Reference Electrode (RE) and Counter Electrode (CE). A research note reviewing some electrodes and describing how to build a set for little cash is provided [here](http://publiclab.org/notes/JSummers/01-09-2014/potentiostat-notes-5-how-to-make-low-cost-electrodes). A **potentiostat** is a three terminal analog feedback control circuit that maintains a pre-determined voltage between the WE and RE by sourcing current from the CE. A rough schematic for a potentiostat is provided below: [](https://i.publiclab.org/system/images/photos/000/001/406/original/adder_potentiostat.png) The CE and WE are made of electrochemically inert conductive materials (we are using graphite, like from pencils, but platinum and gold are popular). The RE is designed to have a well-defined and stable electrochemical potential. By hooking up a power source the energy of electrons in the working electrode can raised and lowered with respect to the reference (and also with respect to compounds in solution). When the energies of electrons in the WE are high enough, they can transfer onto certain chemical species, reducing them. For example, Cu2+ ions can be reduced to Cu+ ions, or to copper metal. Alternatively, when the voltage of the WE is sufficiently positive it can pull electrons off of certain chemicals, oxidizing them. The opposite of the above reactions can be used as an example; Cu+ ion can be oxidized to Cu2+ ion - the voltages (w.r.t. the RE) and currents at which reductions and oxidations happen can be measured, revealing information about the energies and concentrations of the analytes. [](https://i.publiclab.org/system/images/photos/000/001/407/original/potentiostat_cell.png) The above "Adder Potentiostat" schematic was adapted from chapter 15 of Electrochemical Methods by Bard and Faulkner (reference below). ### Work updates - **8/5/2013**: Craig Versek of PVOS has been building off a fully-fledged, open potentiostat design by Jack Summers. Craig is aiming to implement programmable current ranges. In this design, a CMOS analog multiplexer will switch out one of 5 standard current sense resistors (with room for 8 total), which are trimmer rheostats tuned to 250, 2.5k 25.0k 250k and 2.50M Ohms well within 0.5% margin of error. - **1/8/2014**: Smoky Mountain Scientific (Ben Hickman and Jack Summers' lab group) have published research notes describing an open source potentiostat they call the WheeStat. The history of the WheeStat program is described [here](http://publiclab.org/notes/JSummers/11-02-2013/potentiostat-notes-1-wheestat-history). The WheeStat software is described [here](http://publiclab.org/notes/JSummers/12-20-2013/potentiostat-software) and is available for download [here](https://github.com/SmokyMountainScientific/WheeStat5_0). A description of fabricating the board is provided [here](http://publiclab.org/notes/JSummers/12-30-2013/potentiostat-notes-3-wheestat-5-1-fabrication) and copies of the board can be ordered from [OSHPark.com](http://oshpark.com/shared_projects/yepeXPFo). ### Uses - Assess arsenic, cyanide, other contaminants / toxins in water - Educational - Identifying toxins / ingredients in foodstuffs ### Development - [olm-pstat](https://github.com/p-v-o-s/olm-pstat) - repository for the PLOTS/[PVOS](http://www.pvos.org/) Open Lab Monitor potentiostat peripheral ### References - [CheapStat](https://doi.org/10.1371/journal.pone.0023783) - [Cornell U Potentiostat](http://people.ece.cornell.edu/land/courses/ece4760/) - [Potentiostat Software on Github](https://github.com/p-v-o-s/olm-pstat) - Gopinath, A. V., and Russell, D., "An Inexpensive Field Portable Programmable Potentiostat", Chem Educator, 2006. pp 23-28. - Inamdar, S. N., Bhat, M. A., Haram, S. K., "Construction of Ag/AgCl Reference Electrode from Used Felt-Tipped Pen Barrel for Undergraduate Laboratory", J. Chem. Ed., 2009, 86, 355. - Mendham, J., Denney, R. C., Barnes, J. D., Thomas, M. J. K., Vogel's textbook of Quantitative Chemical Analysis, 6th ed., 2000, Prentice Hall, Harlow, England - Bard, Allen J., and Faulkner, Larry R. Electrochemical Instrumentation. Electrochemical Methods: Fundamentals and Applications, 2nd ed. John Wiley & Sons, Inc., 2001. pp. 632-658 - Nice wikipedia description of what a potentiostat is [here](http://en.wikipedia.org/wiki/Potentiostat). - A basic description of potentiostat architectures can be found at http://www.consultrsr.com/resources/pstats/design.htm - Yee, S., Chang, O. K., "A Simple Junction for Reference Electrodes", J. Chem. Ed., 1988, 65, 129 - Thanks to Jack Summers, Benjamin Hickman, Craig Versek, Ian Walls, Jake Wheeler, and Todd Crosby OHS2013_potentiostat_poster.svg OHS2013_potentiostat_poster.pdf...
Author Comment Last activity Moderation
nitrous2022 "I know that this project is a while in the past, but I hope it can be resurrected. Is it possible to contact you directly to discuss this? Thanks D..." | Read more » about 1 year ago
kelukaliya " excellent work " | Read more » over 3 years ago
nanocastro " hi Liz the raw data is stored on the project repo https://gitlab.com/nanocastro/WheeStat6-Mza/tree/master/Quercetina " | Read more » over 3 years ago
liz "Thank you for graphing the output of the commercial potentiostat to the wheestat, very interested in comparisons like these. Is there a place you a..." | Read more » over 3 years ago
warren " Awesome!!! " | Read more » over 3 years ago
JSummers "Hi Jeff, I was not aware of these technologies. Thanks for bringing them to my attention. Jack " | Read more » almost 5 years ago
warren "Hi, Jack - this was a while ago, but lots has changed in some of these technologies. I was wondering if you'd considered using something like WebJa..." | Read more » almost 5 years ago
momosavar "Hi @jsummers I downloaded processing and exported the application. Thank you very much " | Read more » almost 5 years ago
momosavar "Hi @JSummers I really thank you for answering my questions. I think because I do not have experience with it, I can not do it. If you can, please s..." | Read more » almost 5 years ago
JSummers "Hi, For the hardware you made, use the firmware here:https://github.com/SmokyMountainScientific/D_SeriesWheeStatFirmware/tree/master/WheeStat6_d. ..." | Read more » almost 5 years ago
momosavar "Hi @JSummers I used this file(https://github.com/SmokyMountainScientific/WheeStat5Eagles) to build wheestat. I would like to know which file to use..." | Read more » almost 5 years ago
JSummers "Hi @momosavar, The WheeStat will run with about any rail-to-rail quad op amp that works with a 3.3 volt supply and comes in the 14-SOIC package. ..." | Read more » almost 5 years ago
momosavar "Hello @JSummers I could not find AD8644.Can you suggest alternative part? " | Read more » almost 5 years ago
momosavar "Thank you so much @JSummers If there is a problem, I will certainly let you know. " | Read more » about 5 years ago
JSummers "Hi @momosavar, it uses the ek-tm4c123gxl. The voltage range of the model 5 potentiostat is limited to +/- 1.65 volts. The newer model 7 will go t..." | Read more » about 5 years ago
momosavar "Hello Dr.jack I want to build this device but I didn't know Which Launchpad did you use in version 5.1? MSP430g or EK-TM4C123GXL " | Read more » about 5 years ago
ghing "I ordered a WheeStat from the Public Lab Store. The board is stamped "WheeStat 5". I'm running Ubuntu 16.10 (64-bit) and just wanted to share the..." | Read more » over 5 years ago
JSummers "Hi Aneesahmad, In the US, you can order one from my website, smokymtsci.com. Outside the US, you can contact me at my email summers at wcu dot edu..." | Read more » over 5 years ago
Laszlo "Dear Dr. Summers, I ordered WheeStat 5.1 potentiostat from OSH Park and built a potentiostat system as described here. Everything seems fine excep..." | Read more » about 6 years ago
JSummers "Hi Ivan, I will be happy to help you with this. Did you want to make a WheeStat or were you decided on using Arduino. The WheeStat was designed u..." | Read more » over 6 years ago
ilmorales "Dear JSummers I have studied how to perform Arduino project as potenciostado , and frankly, was already giving up because there are many difficult..." | Read more » over 6 years ago
Mattador "I am looking forward to reading your note and i will tell you what results gives the gelled electrolyte. " | Read more » almost 7 years ago
JSummers "That seems reasonable. I don't know whether the gel-ceramic junction will be an issue or not. My guess is that it will not be a problem. If it i..." | Read more » almost 7 years ago | 2022-10-05 11:07:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27751973271369934, "perplexity": 5198.406613845527}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00186.warc.gz"} |
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-10th-edition/chapter-r-section-r-2-algebra-essentials-r-2-assess-your-understanding-page-27/48 | ## College Algebra (10th Edition)
-$\frac{7}{3}$
Plug in the values -2 for x and 3 for y and solve $\frac{2x-3}{y}$ $\frac{2(-2)-3}{3}$ $\frac{-4-3}{3}$ -$\frac{7}{3}$ | 2019-11-20 17:07:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5651932954788208, "perplexity": 1339.4973617315334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670597.74/warc/CC-MAIN-20191120162215-20191120190215-00351.warc.gz"} |
https://par.nsf.gov/biblio/10391768-target-selection-validation-desi-luminous-red-galaxies | Target Selection and Validation of DESI Luminous Red Galaxies
Abstract
The Dark Energy Spectroscopic Instrument (DESI) is carrying out a five-year survey that aims to measure the redshifts of tens of millions of galaxies and quasars, including 8 million luminous red galaxies (LRGs) in the redshift range 0.4 <z≲ 1.0. Here we present the selection of the DESI LRG sample and assess its spectroscopic performance using data from Survey Validation (SV) and the first two months of the Main Survey. The DESI LRG sample, selected usingg,r,z, andW1 photometry from the DESI Legacy Imaging Surveys, is highly robust against imaging systematics. The sample has a target density of 605 deg−2and a comoving number density of 5 × 10−4h3Mpc−3in 0.4 <z< 0.8; this is a significantly higher density than previous LRG surveys (such as SDSS, BOSS, and eBOSS) while also extending toz∼ 1. After applying a bright star veto mask developed for the sample, 98.9% of the observed LRG targets yield confident redshifts (with a catastrophic failure rate of 0.2% in the confident redshifts), and only 0.5% of the LRG targets are stellar contamination. The LRG redshift efficiency varies with source brightness and effective exposure time, and we present a simple model that accurately characterizes this dependence. In the appendices, we more »
Authors:
; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; ; more »
Publication Date:
NSF-PAR ID:
10391768
Journal Name:
The Astronomical Journal
Volume:
165
Issue:
2
Page Range or eLocation-ID:
Article No. 58
ISSN:
0004-6256
Publisher:
DOI PREFIX: 10.3847
National Science Foundation
##### More Like this
1. Abstract
We present the characteristics of 2 mm selected sources from the largest Atacama Large Millimeter/submillimeter Array (ALMA) blank-field contiguous survey conducted to date, the Mapping Obscuration to Reionization with ALMA (MORA) survey covering 184 arcmin2at 2 mm. Twelve of 13 detections above 5σare attributed to emission from galaxies, 11 of which are dominated by cold dust emission. These sources have a median redshift of$〈z2mm〉=3.6−0.3+0.4$primarily based on optical/near-infrared photometric redshifts with some spectroscopic redshifts, with 77% ± 11% of sources atz> 3 and 38% ± 12% of sources atz> 4. This implies that 2 mm selection is an efficient method for identifying the highest-redshift dusty star-forming galaxies (DSFGs). Lower-redshift DSFGs (z< 3) are far more numerous than those atz> 3 yet are likely to drop out at 2 mm. MORA shows that DSFGs with star formation rates in excess of 300Myr−1and a relative rarity of ∼10−5Mpc−3contribute ∼30% to the integrated star formation rate density at 3 <z< 6. The volume density of 2 mm selected DSFGs is consistent with predictions from some cosmological simulations and is similar to the volume density of their hypothesized descendants: massive, quiescent galaxies atz> 2. Analysis of MORA sources’more »
2. Abstract
A key component of the Dark Energy Spectroscopic Instrument (DESI) survey validation (SV) is a detailed visual inspection (VI) of the optical spectroscopic data to quantify key survey metrics. In this paper we present results from VI of the quasar survey using deep coadded SV spectra. We show that the majority (≈70%) of the main-survey targets are spectroscopically confirmed as quasars, with ≈16% galaxies, ≈6% stars, and ≈8% low-quality spectra lacking reliable features. A nonnegligible fraction of the quasars are misidentified by the standard spectroscopic pipeline, but we show that the majority can be recovered using post-pipeline “afterburner” quasar-identification approaches. We combine these “afterburners” with our standard pipeline to create a modified pipeline to increase the overall quasar yield. At the depth of the main DESI survey, both pipelines achieve a good-redshift purity (reliable redshifts measured within 3000 km s−1) of ≈99%; however, the modified pipeline recovers ≈94% of the visually inspected quasars, as compared to ≈86% from the standard pipeline. We demonstrate that both pipelines achieve a median redshift precision and accuracy of ≈100 km s−1and ≈70 km s−1, respectively. We constructed composite spectra to investigate why some quasars are missed by the standard pipeline and find thatmore »
3. Abstract
Far-ultraviolet (FUV; ∼1200–2000 Å) spectra are fundamental to our understanding of star-forming galaxies, providing a unique window on massive stellar populations, chemical evolution, feedback processes, and reionization. The launch of the James Webb Space Telescope will soon usher in a new era, pushing the UV spectroscopic frontier to higher redshifts than ever before; however, its success hinges on a comprehensive understanding of the massive star populations and gas conditions that power the observed UV spectral features. This requires a level of detail that is only possible with a combination of ample wavelength coverage, signal-to-noise, spectral-resolution, and sample diversity that has not yet been achieved by any FUV spectral database. We present the Cosmic Origins Spectrograph Legacy Spectroscopic Survey (CLASSY) treasury and its first high-level science product, the CLASSY atlas. CLASSY builds on the Hubble Space Telescope (HST) archive to construct the first high-quality (S/N1500 Å≳ 5/resel), high-resolution (R∼ 15,000) FUV spectral database of 45 nearby (0.002 <z< 0.182) star-forming galaxies. The CLASSY atlas, available to the public via the CLASSY website, is the result of optimally extracting and coadding 170 archival+new spectra from 312 orbits of HST observations. The CLASSY sample covers a broad range of properties including stellarmore »
4. Abstract
We present environmental analyses for 13 KPNO International Spectroscopic Survey Green Pea (GP) galaxies. These galaxies were discovered via their strong [Oiii] emission in the redshift range 0.29 <z< 0.42, and they are undergoing a major burst of star formation. A primary goal of this study is to understand what role the environment plays in driving the current star formation activity. By studying the environments around these extreme star-forming galaxies, we can learn more about what triggers their star formation processes and how they fit into the narrative of galaxy evolution. Using the Hydra multifiber spectrograph on the WIYN 3.5 m telescope, we mapped out the galaxy distribution around each of the GPs (out to ∼15 Mpc at the redshifts of the targets). Using three density analysis methodologies chosen for their compatibility with the geometry of our redshift survey, we categorized the galaxian densities of the GPs into different density regimes. We find that the GPs in our sample tend to be located in low-density environments. We find no correlation between the density and the SFRs seen in the GPs. We conclude that the environments the GPs are found in are likely not the driving factor behind their extrememore »
5. Abstract
We present a search for extreme emission line galaxies (EELGs) atz< 1 in the COSMOS and North Ecliptic Pole (NEP) fields with imaging from Subaru/Hyper Suprime-Cam (HSC) and a combination of new and existing spectroscopy. We select EELGs on the basis of substantial excess flux in thezbroad band, which is sensitive to Hαat 0.3 ≲z≲ 0.42 and [Oiii]λ5007 at 0.7 ≲z≲ 0.86. We identify 10,470 galaxies withzexcesses in the COSMOS data set and 91,385 in the NEP field. We cross-reference the COSMOS EELG sample with the zCOSMOS and DEIMOS 10k spectral catalogs, finding 1395 spectroscopic matches. We made an additional 71 (46 unique) spectroscopic measurements withY< 23 using the HYDRA multiobject spectrograph on the WIYN 3.5 m telescope, and 204 spectroscopic measurements from the DEIMOS spectrograph on the Keck II telescope, providing a total of 1441/10,470 spectroscopic redshifts for the EELG sample in COSMOS (∼14%). We confirm that 1418 (∼98%) are Hαor [Oiii]λ5007 emitters in the above stated redshift ranges. We also identify 240 redshifted Hαand [Oiii]λ5007 emitters in the NEP using spectra taken with WIYN/HYDRA and Keck/DEIMOS. Using broadband-selection techniques in thegricolor space, we distinguish between Hαand [Oiii]λ5007 emitters with 98.6% accuracy. We test our EELG selection bymore » | 2023-03-31 09:47:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5901031494140625, "perplexity": 6244.69113276436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949598.87/warc/CC-MAIN-20230331082653-20230331112653-00679.warc.gz"} |
https://stat430-fa20.hknguyen.org/files/lectures/lec12-1.html | # Plotting with Matplotlib.pyplot¶
## 1. Basic Plots¶
• Matplotlib is considered by many as the most basic plotting library in Python.
• It offers both static and interactive visualizations in Python.
• Plotting functions in libraries such as as pandas are built on top of Matplotlib making it very fundamental for data scientists programming in Python.
• In this lecture, we will mainly look at Pyplot, a sub-library within Matplotlib consisting of all the basic plotting functions.
### 1.1 Installing the library¶
• The first step is to install the Matplotlib library.
• Run the following command in your command line prompt:
conda install matplotlib
• Depending on how you installed Python, you might need to try the following code instead (if the previous one doesn't work):
pip install matplotlib
### 1.2 Histograms¶
• Before we can call the plotting functions, we need to import the library to our current working environment (kernel):
• In this example, we will take a look at the famous Old Faithful Geyser Dataset.
• This dataset contains the waiting time between eruptions and the duration of the eruption for the Old Faithful geyser in Yellowstone National Park, Wyoming, USA.
• There are 2 columns:
• 'eruptions': eruption time (in mins)
• 'waiting': waiting time to next eruption (in mins)
• Let's change the color of the plot!
• Plots do not make sense without axis labels!
• To add an x-axis label, use plt.xlabel():
• Similarly, use plt.ylabel() to add a y-axis label:
#### c. Changing the number of bins¶
• Histogram can look very different depending on the number of bins used to plot the histogram!
#### d. Adding grid to the plot¶
• Sometimes, adding a background grid makes it a lot easier to "read" the plot.
### 1.3 Boxplots¶
• Use plt.boxplot() to plot a boxplot:
#### a. Horizontal boxplot¶
• What’s if you want the boxplot to be horizontal instead?
#### b. Add variable name & axis label¶
• Just like with histogram, we can add axis label to boxplot! It might also be a good idea to add variable name (also called label) to the boxplot.
#### c. Multiple boxplots in one plot¶
• It's often the case that you want to plot boxplots for multiple variables in the dataset.
• For example, let's examine the famous Iris Dataset.
• This dataset contains measurements of 3 different Iris species: Setosa, Versicolor, and Virginica.
• There are 5 columns:
• 'Sepal.Length': the sepal length in cm.
• 'Sepal.Width': the sepal width in cm.
• 'Petal.Length': the petal length in cm.
• 'Petal.Width': the petal width in cm.
• 'Species': the specific Iris specie ('setosa', 'versicolor', 'virginica').
• This is clearly not very convenient! Later, we will discuss how to use the boxplot() function provided by pandas which improves the syntax significantly.
### 1.4 Scatterplots¶
• Scatterplot is one of the most important plots in Statistics! We plot scatterplot in Matplotlib using plt.scatter():
• Now, let’s add axis labels, change the color, add plot title and a background grid!
## 2. Matplotlib Inside pandas¶
• As discussed earlier, the plotting functions provided in the pandas library are built upon the functions provided by the Matplotlib library.
• You will soon find that this is much easier when you're dealing with data stored in a DataFrame (which is 95% of the time what we deal with).
### 2.1 Histograms¶
• Both function calls above essentially do the exact the same thing.
• Now, pandas plotting functions is extremely useful when we want to layer out plots (histograms in this case).
• But do note that it plots each column of the DataFrame as its own histogram.
• In case of the Iris dataset, we will have to do some data manipulation in order for it to plot a histogram for each specie.
### 2.2 Boxplots¶
• Similar to histogram, we can use plot.box() or boxplot() to plot boxplot(s) of column(s) of a DataFrame.
### 2.3 Scatterplots¶
• Alternatively, you can call function plot() and set the kind keyword to be 'scatter' for scatter plots.
• We can modify the plot just as above if we import Matplotlib.pyplot and use the functions provided by Pyplot. | 2021-08-02 05:56:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3660021424293518, "perplexity": 2985.548363386898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154304.34/warc/CC-MAIN-20210802043814-20210802073814-00385.warc.gz"} |
https://www.nature.com/articles/s41598-018-30789-9?WT.feed_name=subjects_optical-physics&error=cookies_not_supported&code=99df408a-bb76-4e39-a7c6-f4781df567f4 | Article | Open | Published:
# Nucleation of superfluid-light domains in a quenched dynamics
## Abstract
Strong correlation effects emerge from light-matter interactions in coupled resonator arrays, such as the Mott-insulator to superfluid phase transition of atom-photon excitations. We demonstrate that the quenched dynamics of a finite-sized complex array of coupled resonators induces a first-order like phase transition. The latter is accompanied by domain nucleation that can be used to manipulate the photonic transport properties of the simulated superfluid phase; this in turn leads to an empirical scaling law. This universal behavior emerges from the light-matter interaction and the topology of the array. The validity of our results over a wide range of complex architectures might lead to a promising device for use in scaled quantum simulations.
## Introduction
The absence of energy dissipation in the flow dynamics of a quantum fluid is one of the most fascinating effects of strongly correlated condensates1,2,3,4,5,6. Quantum phase transitions, from Mott insulator to superfluid, have been observed in a wide range of physical platforms such as ultracold atoms in optical lattices7, trapped gases of interacting fermionic atom pairs8, and exciton-polariton condensates9,10,11. Furthermore, the remarkable progress in controlling light-matter interactions in the microwave regime of circuit quantum electrodynamics (QED) has provided a suitable scenario for studying strongly correlated effects with light12,13,14. In this case, coupled resonator arrays (CRAs) each doped with a two-level system (TLS) allow for the formation of dressed quantum states (polaritonic states) and effective photon-photon interactions. The underlying physics is well described by the Jaynes-Cummings-Hubbard (JCH) model15,16,17. In this case, if the frequencies of the single resonator mode and the TLS are close to resonance, the effective photonic repulsion prevents the presence of more than one polaritonic excitation in the resonator, due to the photon-blockade effect18,19,20. Detuning the atomic and photonic frequencies diminishes this effect and leads the system to a photonic superfluid16. Unlike Bose-Einstein condensation in optical lattices, polariton condensation includes two kind of excitations, atomic and photonic, and the transition from Mott-insulator to superfluid is accompanied by a transition of the excitations from polaritonic to photonic16.
Here we show how a first-order like phase transition of the simulated superfluid phase of polaritons in CRAs can be induced by a quench dynamics as described by the JCH model. We compare full numerical simulations of several arrangements of CRAs with mean-field theory of photonic fluctuations dynamics. In this case, the simulated Mott-superfluid transition relies on the topological properties of the array, since the on-site photon blockade strongly depends on the connectivity of each node, even for small resonator-resonator hopping strength. When the system is prepared in the Mott state with a filling factor of one net excitation per site, and a sudden quench of the detuning between the single resonator mode and the TLS is applied, we find a first-order like phase transition which can be described by two bosonic excitations of the lower and upper polariton band. We find that a nucleated superfluid photon state emerges in a localized way, which depends on the topology of the array. This avalanche-like behavior near the simulated phase transition leads to a universal scaling law between critical parameters of the superfluid phase and the average connectivity of the array.
## The Model
The physical scenario that we consider are CRAs in complex arrangements such as the one in Fig. 1(a). Here, each node of the array consists of a QED resonator doped with a TLS to be a real or artificial atom, and the whole system is described by the Jaynes-Cummings-Hubbard model15,16,17, whose Hamiltonian reads
$${H}_{{\rm{JCH}}}=\sum _{i\mathrm{=1}}^{L}\,{H}_{i}^{{\rm{JC}}}-J\sum _{\langle i,j\rangle }\,{A}_{ij}{a}_{i}^{\dagger }{a}_{j}+{\rm{h}}\mathrm{.}{\rm{c}}\mathrm{.}-\sum _{i\mathrm{=1}}^{L}\,{\mu }_{i}{n}_{i},$$
(1)
where L is the number of lattice sites, $${a}_{i}({a}_{i}^{\dagger })$$ is the annihilation (creation) bosonic operator, J is the photon-photon hopping amplitude, Aij is the adjacency matrix which takes values Aij = 1 if two sites of the lattice are connected and Aij = 0 otherwise. μi stands for the chemical potential at site i and $${n}_{i}={a}_{i}^{\dagger }{a}_{i}+{\sigma }_{i}^{+}{\sigma }_{i}^{-}$$ represents the number of polaritonic excitations at site i. Also, $${H}_{i}^{{\rm{JC}}}=\omega {a}_{i}^{\dagger }{a}_{i}+{\omega }_{0}{\sigma }_{i}^{+}{\sigma }_{i}^{-}+g({\sigma }_{i}^{+}{a}_{i}+{\sigma }_{i}^{-}{a}_{i}^{\dagger })$$ is the Jaynes-Cummings (JC) Hamiltonian describing light-matter interaction21. Here, $${\sigma }_{i}^{+}({\sigma }_{i}^{-})$$ is the raising (lowering) operator acting on the TLS Hilbert space, and ω, ω0, and g are the resonator frequency, TLS frequency, and light-matter coupling strength, respectively. Notice that the total number of elementary excitations (polaritons) in this system $$N={\sum }_{i}^{M}({a}_{i}^{\dagger }{a}_{i}+{\sigma }_{i}^{+}{\sigma }_{i}^{-})$$ is the conserved quantity [N,HJCH] = 022,23.
The quantum dynamics of this model has been studied for linear lattices15,16, and its equilibrium properties at zero temperature have been studied by means of density matrix renormalization group24, and by means of mean field (MF) theory, for two-dimensional lattices17,25,26 and complex networks27. The latter studies have provided evidence of a quantum phase transition from Mott-insulating phases to a superfluid polaritonic phase. Beyond the MF approach there have been important contributions from the numerical and analytical viewpoint for extracting the phase boundaries28,29,30,31, the study of critical behavior30,31, and the excitation spectrum29,30,31. For a general overview on many-body physics with light relevant literature is available32,33,34.
## Mott-insulator to superfluid phase transition
Here we briefly summarize the Mott-insulator to superfluid phase transition in the JCH model16. Our main results are focused on the quantum dynamics of the JCH model (1) in complex networks, where we focus on the canonical ensemble with a fixed total number of polaritons13,14. In this case, the JCH Hamiltonian reads
$${H}_{{\rm{JCH}}}=\sum _{i=1}^{L}\,{H}_{i}^{{\rm{JC}}}-J\sum _{\langle i,j\rangle }\,{A}_{ij}{a}_{i}^{\dagger }{a}_{j}+{\rm{h}}\mathrm{.}{\rm{c}}\mathrm{.}$$
(2)
In the atomic limit, where the photon-hopping can be neglected ($$J\ll g$$), the JC Hamiltonian at site i ($${H}_{i}^{{\rm{JC}}}$$) can be diagonalized in the polaritonic basis that mixes atomic and photonic excitations |n, ±〉i = γn±|↓, ni + ρn±|↑,n − 1〉i with energies $${\varepsilon }_{n}^{\pm }=n\omega +{\rm{\Delta }}\mathrm{/2}\pm \chi (n)$$, where $$\chi (n)=\sqrt{{{\rm{\Delta }}}^{2}\mathrm{/4}+{g}^{2}n}$$, ρn+ = cos(θn/2), γn+ = sin(θn/2), ρn = −γn+, γn = ρn+, $$\tan \,{\theta }_{n}=2g\sqrt{n}/{\rm{\Delta }}$$, and the detuning parameter Δ = ω0ω.
Now, one can introduce the polaritonic creation operators at site i defined as $${P}_{i}^{\dagger (n,\alpha )}=|n,\alpha {\rangle }_{i}\langle \mathrm{0,}-|$$, where α = ± and we identify |0,−〉≡|↓, 0〉 and |0, + 〉≡|$$\rlap{/}{0}$$〉 being a ket with all entries equal to zero, that is, it represents an unphysical state. These identifications imply γ0− = 1 and γ0 + = ρ = 0. Using this polaritonic mapping the Hamiltonian (2) can be rewritten as16,26
$$H=\sum _{i=1}^{L}\sum _{n=1}^{\infty }\sum _{\alpha =\pm }{\varepsilon }_{n}^{\alpha }{P}_{i}^{\dagger (n,\alpha )}{P}_{i}^{(n,\alpha )}-J\sum _{\langle i,j\rangle }\,{A}_{ij}[\sum _{n,m=1}^{\infty }\sum _{\alpha ,\alpha ^{\prime} ,\beta ,\beta ^{\prime} }\,{t}_{\alpha ,\alpha ^{\prime} }^{n}{t}_{\beta ,\beta ^{\prime} }^{m}{P}_{i}^{\dagger (n-\mathrm{1,}\alpha )}{P}_{i}^{(n,\alpha ^{\prime} )}{P}_{j}^{\dagger (m,\beta )}{P}_{j}^{(m-\mathrm{1,}\beta ^{\prime} )}+{\rm{h}}\mathrm{.}{\rm{c}}\mathrm{.}],$$
(3)
where $${t}_{\pm +}^{n}=\sqrt{n}{\gamma }_{n\pm }{\gamma }_{(n-\mathrm{1)}+}+\sqrt{n-1}{\rho }_{n\pm }{\gamma }_{(n-\mathrm{1)}-}$$ and $${t}_{\pm -}^{n}=\sqrt{n}{\gamma }_{n\pm }{\rho }_{(n-\mathrm{1)}+}+\sqrt{n-1}{\rho }_{n\pm }{\rho }_{(n-\mathrm{1)}-}$$. The first term in Eq. (3) stands for the local polaritonic energy with an anharmonic spectrum and gives rise to an effective on-site polaritonic repulsion. The last term in Eq. (3) represents the polariton hopping between nearest neighbors and long range sites, and it may also allow for the interchange of polaritonic excitations.
If the physical parameters of the Hamiltonian (3) are in the regime $$Jn\ll g\sqrt{n}\ll \omega$$, and for an integer filling factor, where the total number of excitations N over the lattice is an integer multiple of the number of unit cells L, the lowest energy state is the product $${\otimes }_{i=1}^{L}\mathrm{|1,}\,-{\rangle }_{i}$$ which corresponds to a Mott-insulating phase, and its associated energy is $$E=N{\varepsilon }_{1}^{-}$$. In the thermodynamic limit, the interplay between the on-site polariton repulsion and the polariton hopping leads to a phase transition from a Mott insulator to a superfluid phase. The latter may be reached by diminishing the on-site repulsion by means of detuning the atomic and photonic frequencies. At equilibrium, this phase transition may be quantified by means of bipartite fluctuations24,35. In a simulated Mott-insulator transition, where an adiabatic dynamics drives the passage, it has been shown that a suitable order parameter corresponds to the variance of the number of excitations per site. Figure 1(b) shows the archetypal behavior of the order parameter as a function of the detuning Δ in the adiabatic dynamic regime, and for an integer filling factor of one net excitation per site16.
## Quenched dynamics and Topology in finite-size complex lattices
Our aim is to describe how complex arrangements of CRAs, such as the one appearing in Fig. 1(a), affect the simulated phase transition from Mott insulator to superfluid as the detuning parameter Δ is suddenly quenched. In particular, we are interested in how one can manipulate photonic transport properties of the emerging superfluid phase depending on the specific topology of the CRAs. As order parameter we choose the time-averaged standard deviation of the polariton number $$\frac{1}{T}{\int }_{0}^{T}\,dt{\sum }_{i}^{L}\,(\langle {n}_{i}^{2}\rangle -{\langle {n}_{i}\rangle }^{2}))$$ with T = J−1, and we assume the whole system initially prepared in the Mott-insulating state $$|{\psi }_{0}\rangle ={\otimes }_{i=1}^{L}\mathrm{|1,}\,-{\rangle }_{i}$$, with Δ = 0 at each lattice site. In the Supplementary Material we present another equivalent measure of the order parameter based on the bipartite fluctuation proposed by S. Rachel et al.35, and D. Rossini et al.24. Of course, due to computational restrictions, we consider relatively small arrangements of CRAs, but with varying degrees of complexity, suggesting that the topology of the network could be used in a nontrivial way to manipulate the emerging of the superfluid phase as these system becomes larger and approach the thermodynamic limit. The initialization process may be achieved by the scheme proposed by Angelakis et al.16. For instance, in circuit QED13,14 one might cool down the whole system reaching temperatures around T0 ~ 15 mK. In this case, the system will be prepared in its global ground state $$|G\rangle ={\otimes }_{i\mathrm{=1}}^{L}\mathrm{|0,}-{\rangle }_{i}$$. Then, one can apply individual magnetic fields on the TLSs, each implemented via a transmon qubit36, such that the resonance condition Δ = 0 is achieved. This way one can address individually each cavity with an external AC microwave current or voltage tuned to the transition |↓, 0〉i→|1, −〉i, with a driving frequency ωD = ωg, such that the system will be prepared in the desired initial state |ψ0〉. The sudden quench of the detuning can be achieved by applying magnetic fields to the transmon qubits in order to reach the desired superfluid phase. It is noteworthy that when the initial state is a linear superposition of upper and lower polariton states (Δ ≠ 0) the quantum dynamics will be dominated by these two polaritonic bands. Also, we carry out full numerical calculations for the parameters g = 10−2ω and J = 10−2g, and we consider up to 6 Fock states per bosonic mode. These parameter values allow us to prevent the interchange of polaritonic excitations between different sites.
In order to gain insight into the quench dynamics of the topological CRAs let us consider a dimer array. As shown in Fig. 2, the simulated Mott-insulator to Superfluid phase transition strongly depends on the type of dynamics. Adiabatic dynamics resembles a second order phase transition which leads to a continuous change of the state of the system. On the other hand, the quench dynamics takes place accompanied by a discontinuous change of the state, analogous to the Metal-Insulator transition of oxides37. Hence, as we expected, the adiabatic dynamics is not qualitatively affected by the distribution of nearest neighbors. However, the topological properties of the array dominate a first-order like phase transition driving the quench dynamics (see Fig. 2). As the degree of inter-connectivity between the resonators grows the distance between them rapidly diminishes, and thus local correlations become more important due to quantum interference effects. If scaled up to the size of the system, due to the increase in the degrees of freedom, the numerical simulation time grows exponentially. In the next section we obtain an empirical scaling law to address this issue. Indeed, we demonstrate that the photon propagation in the simulated superfluid phase strongly depends of the connectivity per site $${k}_{i}={\sum }_{j}\,{A}_{ij}$$. Let us consider a set of arrays with a fixed number of TLS. As shown in Fig. 3(a) in the quench dynamics case the averaged standard deviation depends linearly on the connectivity, which means that depending on the connectivity the local superfluid states are reached with different detuning scales. We consider a set of CRAs with four and five interconnecting resonators as shown in Fig. 3(b). In contrast to these results, the adiabatic dynamics does not exhibit a monotone or linearly growing behavior, which leads to a sharper phase transition, as illustrated in Fig. 2.
## Mean-field theory of the Superfluid Phase
In the thermodynamic limit, the emergent superfluid phase behaves as a quantum liquid17. Superfluidity is achieved by means of a transition of the excitations from polaritonic to photonic. In order to describe the simulated superfluid phase in our system, we introduce the photonic order parameter17 ψ = 〈ai〉. Using the decoupling approximation $${a}_{i}^{\dagger }{a}_{j}\approx \langle {a}_{i}^{\dagger }\rangle {a}_{j}+{a}_{i}^{\dagger }\langle {a}_{j}\rangle -\langle {a}_{i}^{\dagger }\rangle \langle {a}_{j}\rangle$$, the resulting mean-field JCH Hamiltonian can be written as
$${H}_{JCH}=\sum _{i}\,{H}_{i}^{JC}-J\sum _{i}\,{k}_{i}(\psi {a}_{i}^{\dagger }+{\psi }^{\ast }{a}_{i}\mathrm{).}$$
(4)
Therefore, the simulated Mott-insulator phase can be characterized by the on site repulsion, which suppresses the fluctuations of the number of per site excitations |ψ| = 0. On the contrary, the superfluid phase is dominated by the hopping and the quantum fluctuations |ψ| ≠ 0. Now we focus on the light-matter coupling induced by the hopping of photons through cavities. Introducing the identity σ + σ + σσ + = I, we obtain an effective light-matter coupling, since it retains the mixed products of photonic and two level operators,
$${h}_{i}^{LM}={\tilde{g}}_{i}{a}_{i}^{\dagger }{\sigma }_{i}^{-}+{\tilde{g}}_{i}^{\dagger }{a}_{i}{\sigma }_{i}^{+}+{\rm{h}}\mathrm{.}{\rm{c}}\mathrm{.}$$
(5)
Here $${\tilde{g}}_{i}=Ig-J{k}_{i}\psi {\sigma }_{i}^{+}$$ is the effective light-matter coupling per site, which therefore turns out to be an operator. In the simulated superfluid phase the atomic transitions are expected to be suppressed against the photonic dressed states. Moreover, the total excitation number does not change, hence when the photonic excitations increase the atomic excitations decrease. Note that when $${\tilde{g}}_{i}=Ig$$, i.e. when there are no hopping or topological effects,
$$\langle {\sigma }_{i}^{+}\rangle =\frac{g}{J{k}_{i}}\frac{1}{\psi },$$
(6)
which indicates that the total number of excitations is conserved and also demonstrates that the increase of the photonic states leads to a reduction of the atomic excitations, due to the conservation of the number excitations. Figure 4 shows the effect of the quench dynamics on the simulated phase transition of the JCH model for different arrays. In this case the nucleation of superfluid states emerges due to the variation of the order parameter, according to Eq. (6). In the Mott-Insulator state $$\langle {\sigma }_{i}^{+}\rangle > 0\,\forall \,i$$, when the detuning is increased $$\langle {\sigma }_{i}^{+}\rangle$$ decreases by a factor 1/(kiψ), until the superfluid phase is reached.
We have shown that the mean field approach strongly supports the scaling law of the order parameter shown in Fig. 3(a); namely, as the connectivity of CRAs is increased locally, the light superfluid phase is achieved for a smaller detuning strength.
## Conclusion
We show that quench dynamics induce a first-order like phase transition in coupled resonator arrays doped with a two-level system. The nucleation of simulated superfluid states has been demonstrated by numerical simulation and by a mean field theoretical approach. In the quench dynamics the abrupt change of the order parameter, instead of sharper crossover driven by adiabatic dynamics, is explained by the non uniform transition from Mott-Insulator to superfluid, which locally depends of the connectivity. Since the quench dynamics exhibits the same behavior independent of the choice of the order parameter, the standard deviation of the polariton number or the bipartite fluctutation, our results reveal the universality of the simulated first order phase transition (also see Supplementary Material). As the number of TLS is increased the averaged standard deviation of the superfluid phase depends linearly on the connectivity. At an increased scale, for large networks of doped optical/microwave resonators, our system may enter the field of quantum simulators. In particular, as far as we understand, there is no known microscopic mechanism for predicting nucleation in first-order phase transitions. In this context, our results provide an exact geometrical description for the appearance of domain nucleation due to the number of connections. Thus, our results may be used to predict, and manipulate, the nucleation of a superfluid phase of light in complex-random networks.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## References
1. 1.
Kapitza, P. Viscosity of liquid helium below the λ-point. Nature 141, 74 (1938).
2. 2.
Leggett, A. J. Quantum Liquids (Oxford University Press, 2006).
3. 3.
Anderson, M. H., Ensher, J. R., Matthews, M., Wieman, C. E. & Cornell, E. A. Observation of bose-einstein condensation in a dilute atomic vapor. Science 269, 198–201 (1995).
4. 4.
Onofrio, R. et al. Observation of superfluid flow in a bose-einstein condensed gas. Phys. Rev. Lett. 85, 2228–2231 (2000).
5. 5.
Zwierlein, M. W. e. a. Observation of bose-einstein condensation of molecules. Phys. Rev. Lett. 91, 250401 (2003).
6. 6.
Schiró, M., Bordyuh, M., Öztop, B. & Türeci, H. E. Phase transition of light in cavity qed lattices. Phys. Rev. Lett. 109, 053601 (2012).
7. 7.
Greiner, M., Mandel, O., Esslinger, T., Hansch, T. W. & Bloch, I. Quantum phase transition from a superfluid to a mott insulator in a gas of ultracold atoms. Nature 415, 39–44 (2002).
8. 8.
Regal, C. A., Greiner, M. & Jin, D. S. Observation of resonance condensation of fermionic atom pairs. Phys. Rev. Lett. 92, 040403 (2004).
9. 9.
Lerario, G. et al. Room-temperature superfluidity in a polariton condensate. Nature Physics 13, 837–841 (2017).
10. 10.
Wertz, E. et al. Spontaneous formation and optical manipulation of extended polariton condensates. Nature Physics 6, 860–864 (2010).
11. 11.
Byrnes, T., Kim, N. Y. & Yamamoto, Y. Exciton-polariton condensates. Nature Physics 10, 803–813.
12. 12.
Houck, A. A., Türeci, H. E. & Koch, J. On-chip quantum simulation with superconducting circuits. Nature Physics 8, 292–299.
13. 13.
Raftery, J., Sadri, D., Schmidt, S., Türeci, H. E. & Houck, A. A. Observation of a dissipation-induced classical to quantum transition. Phys. Rev. X 4, 031043 (2014).
14. 14.
Fitzpatrick, M., Sundaresan, N. M., Li, A. C. Y., Koch, J. & Houck, A. A. Observation of a dissipative phase transition in a one-dimensional circuit qed lattice. Phys. Rev. X 7, 011016 (2017).
15. 15.
Hartmann, M. J., Brandão, F. G. S. L. & Plenio, M. B. Strongly interacting polaritons in coupled arrays of cavities. Nature Physics 2, 849–855 (2006).
16. 16.
Angelakis, D. G., Santos, M. F. & Bose, S. Photon-blockade-induced mott transitions and xy spin models in coupled cavity arrays. Phys. Rev. A 76, 031805 (2007).
17. 17.
Greentree, A. D., Tahan, C., Cole, J. H. & Hollenberg, L. C. L. Quantum phase transitions of light. Nature Physics 2, 856–861 (2006).
18. 18.
Birnbaum, K. M. et al. Photon blockade in an optical cavity with one trapped atom. Nature 436, 87–90 (2005).
19. 19.
Imamoḡlu, A., Schmidt, H., Woods, G. & Deutsch, M. Strongly interacting photons in a nonlinear cavity. Phys. Rev. Lett. 79, 1467–1470 (1997).
20. 20.
Greentree, A. D., Vaccaro, J. A., R de Echaniz, S., Durrant, A. V. & Marangos, J. P. Prospects for photon blockade in four-level systems in the n configuration with more than one atom. Journal of Optics B: Quantum and Semiclassical Optics 2, 252 (2000).
21. 21.
Jaynes, E. T. & Cummings, F. W. Comparison of quantum and semiclassical radiation theories with application to the beam maser. Proceedings of the IEEE 51, 89–109 (1963).
22. 22.
Hartmann, M., Brandão, F. & Plenio, M. Quantum many-body phenomena in coupled cavity arrays. Laser & Photonics Reviews 2, 527–556 (2008).
23. 23.
Hartmann, M. J. & Plenio, M. B. Strong photon nonlinearities and photonic mott insulators. Physical Review Letters 99, 103601 (2007).
24. 24.
Rossini, D., Fazio, R. & Santoro, G. Photon and polariton fluctuations in arrays of qed-cavities. EPL (Europhysics Letters) 83, 47011 (2008).
25. 25.
Na, N., Utsunomiya, S., Tian, L. & Yamamoto, Y. Strongly correlated polaritons in a two-dimensional array of photonic crystal microcavities. Phys. Rev. A 77, 031803 (2008).
26. 26.
Koch, J. & Le Hur, K. Superfluid mott-insulator transition of light in the jaynes-cummings lattice. Phys. Rev. A 80, 023811 (2009).
27. 27.
Halu, A., Garnerone, S., Vezzani, A. & Bianconi, G. Phase transition of light on complex quantum networks. Phys. Rev. E 87, 022104 (2013).
28. 28.
Rossini, D. & Fazio, R. Mott-insulating and glassy phases of polaritons in 1d arrays of coupled cavities. Phys. Rev. Lett. 99.
29. 29.
Aichhorn, M., Hohenadler, M., Tahan, C. & Littlewood, P. B. Quantum fluctuations, temperature, and detuning effects in solid-light systems. Phys. Rev. Lett. 100, 216401 (2008).
30. 30.
Pippan, P., Evertz, H. G. & Hohenadler, M. Excitation spectra of strongly correlated lattice bosons and polaritons. Phys. Rev. A 80, 033612 (2009).
31. 31.
Schmidt, S. & Blatter, G. Strong coupling theory for the jaynes-cummings-hubbard model. Phys. Rev. Lett. 103, 086403 (2009).
32. 32.
Hartmann, M. J. Quantum simulation with interacting photons. Journal of Optics 18, 104005 (2016).
33. 33.
Noh, C. & Angelakis, D. G. Quantum simulations and many-body physics with light. Rep. Prog. Phys. 80, 016401 (2016).
34. 34.
Angelakis, D. G. (ed.) Quantum Simulations with Photons and Polaritons. Quantum Science and Technology (Springer, 2017).
35. 35.
Rachel, S., Laflorencie, N., Song, H. F. & Le Hur, K. Detecting quantum critical points using bipartite fluctuations. Phys. Rev. Lett. 108, 116401 (2012).
36. 36.
Koch, J. et al. Charge-insensitive qubit design derived from the cooper pair box. Phys. Rev. A 76, 042319 (2007).
37. 37.
Rozenberg, M. J. Integer-filling metal-insulator transitions in the degenerate hubbard model. Phys. Rev. B 55, R4855–R4858 (1997).
## Acknowledgements
This work was supported by the Fondo Nacional de Investigaciones Científicas y Tecnológicas (FONDECYT, Chile) under grants No. 1150806 (FT), No. 1160639 (MK,JR), 1150718 (JAV), 11596590659 (GR), Grants-FA9550-16-1-0122 (FT,MK) and FA9550-18-1-0438 CEDENNA through the “Financiamiento Basal para Centros Científicos y Tecnológicos de Excelencia-FB0807” (FT, JR, MK and JAV).
## Author information
### Affiliations
1. #### Departamento de Física, Facultad de Ciencias, Universidad de Chile, Casilla 653, Santiago, 7800024, Chile
• Joaquín Figueroa
• , José Rogan
• , Juan Alejandro Valdivia
• , Miguel Kiwi
• & Felipe Torres
2. #### Center for the Development of Nanoscience and Nanotechnology 9170124, Estación Central, Santiago, Chile
• Joaquín Figueroa
• , José Rogan
• , Juan Alejandro Valdivia
• , Miguel Kiwi
• & Felipe Torres
3. #### Departamento de Física, Universidad de Santiago de Chile (USACH), Avenida Ecuador 3493, 9170124, Santiago, Chile
• Guillermo Romero
### Contributions
F.T., G.R., M.K., J.R. and J.A.V. supervised and contributed to the theoretical analysis. J.F. carried out all analytical and numerical calculations, F.T. and G.R. wrote the manuscript. All authors contributed to the discussion of the results and revised the manuscript.
### Competing Interests
The authors declare no competing interests.
### Corresponding author
Correspondence to Felipe Torres. | 2018-10-16 02:25:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7288588881492615, "perplexity": 1742.8467345293063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583509960.34/warc/CC-MAIN-20181016010149-20181016031649-00432.warc.gz"} |
https://hal.in2p3.fr/in2p3-00005398 | # A study of the $\eta\eta'$ and $\eta'\eta'$ channels produced in central pp interactions at 450 GeV/c
Abstract : The reactions pp -> pf (X0) ps, where X0 is observed decaying to eta etaprime and etaprime etaprime, have been studied at 450 GeV/c. This is the first time that these channels have been observed in central production and only the second time that the etaprime etaprime channel has been observed in any production mechanism. In the eta etaprime channel there is evidence for the f0(1500) and a peak at 1.95 GeV. The etaprime etaprime channel shows a peak at threshold which is compatible with having JPC = 2++ and spin projection JZ = 0.
Document type :
Journal articles
Complete list of metadata
Cited literature [2 references]
http://hal.in2p3.fr/in2p3-00005398
Contributor : Claudine BOMBAR Connect in order to contact the contributor
Submitted on : Friday, May 26, 2000 - 3:39:37 PM
Last modification on : Friday, November 6, 2020 - 3:26:11 AM
Long-term archiving on: : Friday, May 29, 2015 - 5:00:29 PM
### Identifiers
• HAL Id : in2p3-00005398, version 1
### Citation
D. Barberis, F G. Binon, F E. Close, K M. Danielsen, S V. Donskov, et al.. A study of the $\eta\eta'$ and $\eta'\eta'$ channels produced in central pp interactions at 450 GeV/c. Physics Letters B, Elsevier, 2000, 471, pp.429-434. ⟨in2p3-00005398⟩
Record views | 2022-06-29 12:10:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31026598811149597, "perplexity": 7737.395168677228}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103639050.36/warc/CC-MAIN-20220629115352-20220629145352-00052.warc.gz"} |
https://hal.archives-ouvertes.fr/hal-01815155?gathStatIcon=true | # Measurement of $D_s^{\pm}$ production asymmetry in $pp$ collisions at $\sqrt{s} =7$ and 8 TeV
Abstract : The inclusive D$_{s}^{±}$ production asymmetry is measured in pp collisions collected by the LHCb experiment at centre-of-mass energies of $\sqrt{s}=7$ and 8 TeV. Promptly produced D$_{s}^{±}$ mesons are used, which decay as D$_{s}^{±}$ → ϕπ$^{±}$, with ϕ → K$^{+}$K$^{−}$. The measurement is performed in bins of transverse momentum, p$_{T}$, and rapidity, y, covering the range 2.5 < p$_{T}$ < 25.0 GeV/c and 2.0 < y < 4.5. No kinematic dependence is observed. Evidence of nonzero D$_{s}^{±}$ production asymmetry is found with a significance of 3.3 standard deviations.
Keywords :
Type de document :
Article dans une revue
JHEP, 2018, 08, pp.008. 〈10.1007/JHEP08(2018)008〉
https://hal.archives-ouvertes.fr/hal-01815155
Contributeur : Inspire Hep <>
Soumis le : mercredi 13 juin 2018 - 18:56:52
Dernière modification le : mardi 12 février 2019 - 21:52:43
### Citation
Roel Aaij, Bernardo Adeva, Marco Adinolfi, Ziad Ajaltouni, Simon Akar, et al.. Measurement of $D_s^{\pm}$ production asymmetry in $pp$ collisions at $\sqrt{s} =7$ and 8 TeV. JHEP, 2018, 08, pp.008. 〈10.1007/JHEP08(2018)008〉. 〈hal-01815155〉
### Métriques
Consultations de la notice | 2019-02-17 00:24:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8782823085784912, "perplexity": 13429.71901903537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247481249.5/warc/CC-MAIN-20190216230700-20190217012700-00396.warc.gz"} |
https://open.kattis.com/problems/gettowork | Kattis
# Get to Work
You work for a company that has $E$ employees working in town $T$. There are $N$ towns in the area where the employees live. You want to ensure that everyone will be able to make it to work. Some of the employees are drivers and can drive $P$ passengers. A capacity of $P = 1$ indicates that the driver can only transport themselves to work. You want to ensure that everyone will be able to make it to work and you would like to minimize the number of cars on the road.
You want to calculate the number of cars on the road, with these requirements:
• Every employee can get to town $T$.
• The only way an employee may travel between towns is in a car belonging to an employee.
• Employees can only take rides from other employees that live in the same town.
• The minimum number of cars is used.
Find whether it is possible for everyone to make it to work, and if it is, how many cars will end up driving to the office.
## Input
One line containing an integer $C, C \leq 100$, the number of test cases in the input file.
For each test case there will be:
• One line containing the integer $N$, the number of towns in your area and the integer $T$, the town where the office is located.
• One line containing the integer $E, 1 \leq E \leq 500$, the number of employees.
• $E$ lines, one for each employee, each containing:
• An integer $1 \leq H \leq N$, the home town of the employee, followed by
• An integer $0 \leq P \leq 6$, the number of passengers they can drive. If the employee is not licensed to drive the number will be $0$.
You may assume that $1 \leq T \leq N, 1 \leq N \leq 100$.
## Output
• $C$ lines, one for each test case in the order they occur in the input file, each containing the string “Case #$X$: ” where $X$ is the number of the test case, starting from 1, followed by:
• The string “IMPOSSIBLE”, if there are not enough drivers for everyone to commute; or
• $N$ space-separated integers, one for each town from $1$ to $N$, which indicate the number of vehicles commuting from the town.
Sample Input 1 Sample Output 1
3
5 1
3
1 0
1 0
1 0
5 1
3
2 4
2 0
3 0
5 3
5
1 2
1 0
4 2
4 4
4 0
Case #1: 0 0 0 0 0
Case #2: IMPOSSIBLE
Case #3: 1 0 0 1 0 | 2018-03-23 09:01:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38608109951019287, "perplexity": 492.7044395120212}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648205.76/warc/CC-MAIN-20180323083246-20180323103246-00655.warc.gz"} |
https://www.gradesaver.com/textbooks/math/calculus/calculus-3rd-edition/chapter-14-calculus-of-vector-valued-functions-14-2-calculus-of-vector-valued-functions-exercises-page-720/4 | Chapter 14 - Calculus of Vector-Valued Functions - 14.2 Calculus of Vector-Valued Functions - Exercises - Page 720: 4
$$\left\langle 1 ,1 ,0\right\rangle$$
Work Step by Step
By making use of L'Hôpital's rule on the second component, we have $$\lim _{t \rightarrow 0} \left\langle \frac{1}{t+1} , \frac{e^t-1}{t} ,4t\right\rangle=\\ \left\langle\lim _{t \rightarrow 0}\frac{1}{t+1} ,\lim _{t \rightarrow 0} \frac{e^t-1}{t} ,\lim _{t \rightarrow 0}4t\right\rangle\\ =\left\langle 1 ,1 ,0\right\rangle$$
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback. | 2019-11-22 22:28:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7036356329917908, "perplexity": 954.0557085688616}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496672170.93/warc/CC-MAIN-20191122222322-20191123011322-00526.warc.gz"} |
https://depth-first.com/articles/2020/01/06/a-minimal-graph-api/ | # A Minimal Graph API
Graphs are ubiquitous data structures in computing, appearing in domains ranging from networks to logistics to chemistry. Despite these diverse applications, relatively little has been said about the irreducible elements of graph-like behavior. This article introduces a minimal application programming interface (API) for graphs that I've developed over the last several years. Later articles will illustrate some implementations and uses in Rust, JavaScript, and possibly other languages.
## Methods
A graph can be defined as any object supporting the following 11 methods:
1. nodes Iterates all nodes.
2. order Returns the number of nodes.
3. hasNode Takes one parameter, returning true if it's a member, or false otherwise.
4. degree Takes one parameter, returning its count of outbound edges.
5. neighbors Takes one parameter, iterating the nodes connected to it by an outbound edge.
6. size Returns the number of edges.
7. edges Iterates all edges.
8. hasEdge Takes two parameters, returning true if an edge exists from the first to the second, or false otherwise.
9. weight Returns the optional weight associated with the edge from source to target.
10. isEmpty Returns true if the graph order is zero.
11. debug Returns an object representing the internal state of the graph.
These methods can be grouped into three broad categories: node operations (1-5); edge operations (6-9); and global operations (10-11). This API offers several high-level advantages and trade-offs, as described below.
## Immutability
The Graph interface exposes no mutators. As a result, clients can only interact with immutable Graph objects. This offers two advantages:
1. Simplicity of implementation. Public mutators are optional, reducing the surface area of required functionality.
2. Simplicity of use. Defensive copying and/or locking are unnecessary because clients can never change Graph state.
Without mutators, how will a Graph be created? There are two options: (1) a build function or public constructor; and (2) a Builder interface.
A build function accepts a template data structure argument, returning a fully-constructed Graph in response. The data structure can take a number of forms. One of the simplest would consist of two arrays — one holding references to nodes and the other holding tuples of nodes and weights to be used as edges.
In contrast, a Builder interface exposes methods that clients can use to assemble a Graph incrementally. A minimal Builder interface would include the following methods:
2. addEdge Accepts two parameters, one a reference to a source node and the other a reference to a destination node for a new edge.
3. graph Returns the graph under construction.
Depending on the application, methods for removing nodes and edges might also be included.
## Minimal Node Interface
Some approaches to modeling graphs require nodes to support an interface returning neighbors or parent graph. No such node interface is required here. In practice, some constraints may be imposed by the programming language. For example, Rust requires objects that will be used as keys in hash tables to explicitly declare "equals" and "hash" methods. If a Graph internally inserts nodes as keys, this detail will leak into the interface. But even in the unlikely event that such constraints are imposed, they represent at best a minor narrowing of node API.
The minimal interface also simplifies node implementation. Furthermore, the same node can be used by multiple graphs simultaneously. In subgraph isomorphism, for example, an embedding of one graph in another is computed. Reporting this embedding as a subgraph composed of the same nodes as the parent graph simplifies working with the result.
## Implicit Edges
Edges do not exist explicitly in this API. Rather, their presence is implied by the degree, neighbors, size, hasEdge, and weight methods. More explicit representation can be found in the edges and debug methods. Most languages allow edges to be conveyed as simple data structures such as arrays or tuples.
Some approaches to Graph APIs invoke an explicit "edge" interface supporting such operations as "source," "target," "mate," and "parent." Eliminating explicit edges allows greater flexibility in representing the connections between nodes. However, the cases in which this is really needed are rare in my experience. In most situations, it be of no use to explicitly refer to edges.
## Many Kinds of Graphs are Supported
Node- and edge-specific methods can be re-interpreted to support various kinds of specialized graphs. For example, directed graphs can be supported by making the order of arguments in the hasEdge method significant. Likewise, unweighted graphs can be supported by always returning a null value from weight. The build function of a simple graph would yield an error on attempting to connect a node with itself, whereas a graph supporting loops would allow it. In a multigraph, edges may iterate multiple edges between the same two nodes. And so on.
Various performance optimizations are possible based on what a type of graph will allow. For example, a "dyad" is a graph consisting of two nodes with an edge between them. A dyad need not hold an array or hash map for nodes; just its two member nodes will suffice. nodes iterates them. hasNode checks that the argument is one or the other. degree returns 1 if passed a member. And so on.
One graph type not directly supported by the API is "hypergraph," a generalized graph in which an edge can join two or more nodes. However, such behavior can be simulated by layering a subgraph onto a parent graph. The subgraph plays the role of a hyperedge, supporting two or more connections between nodes through pairwise relationships. The same node can belong to both the parent graph and the associated subgraph. I'll give details on this approach in a subsequent article dealing with multi-center bonding in molecules based on the Graph API presented here.
## Debug Output
The debug method allows the internal state of a Graph to be inspected without breaking encapsulation. This can be useful when writing automated tests, for example.
Beyond testing, the output from debug can also be used to interconvert graph representations. Combining debug and build makes it possible to either copy a graph or convert one graph representation into another.
## Error Handling
Graph implementations should strive to return a consistent set of errors. For example, calling hasEdge with a non-member node may signal an error condition. The same consideration applies to weight. If an error is produced with one of these methods, it should be produced for both of them. A consistent pattern of error generation should hold across Graph implementations as well.
## Iterators
The methods nodes, neighbors, and edges return iterators, not internal data structures. This approach promotes encapsulation. Tooling around the efficient creation and use of iterators is common in modern programming languages.
Working exclusively with iterators might seem limiting. For example, it's sometimes convenient to refer to nodes by their zero-based index. If such uses are common, one solution might be to use numerical indexes themselves as nodes. Alternatively, a mapping of node to index can be created inexpensively and re-used. In my experience, however, iterators are sufficient, and node indexing is rarely needed.
## Conclusion
This article describes a minimal and general API for graph-like objects. Eleven methods are defined by an immutable Graph interface. No specific interface is required by nodes. Edges are mostly implicit. A streamlined, read-only graph API simplifies implementations optimized for performance and/or requiring special behavior. | 2020-02-16 18:37:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20559419691562653, "perplexity": 1714.5048114713163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875141396.22/warc/CC-MAIN-20200216182139-20200216212139-00174.warc.gz"} |
http://openmdao.org/twodocs/versions/2.3.0/features/core_features/working_with_derivatives/approximating_totals.html | Approximating Semi-Total Derivatives¶
There are times where it makes sense to approximate the derivatives for an entire group in one shot. You can turn on the approximation by calling approx_totals on any Group.
Group.approx_totals(method='fd', step=None, form=None, step_calc=None)[source]
Approximate derivatives for a Group using the specified approximation method.
Parameters: method : str The type of approximation that should be used. Valid options include: ‘fd’: Finite Difference, ‘cs’: Complex Step step : float Step size for approximation. Defaults to None, in which case, the approximation method provides its default value. form : string Form for finite difference, can be ‘forward’, ‘backward’, or ‘central’. Defaults to None, in which case, the approximation method provides its default value. step_calc : string Step type for finite difference, can be ‘abs’ for absolute’, or ‘rel’ for relative. Defaults to None, in which case, the approximation method provides its default value.
The default method for approximating semi-total derivatives is the finite difference method. When you call the approx_totals method on a group, OpenMDAO will generate an approximate Jacobian for the entire group during the linearization step before derivatives are calculated. OpenMDAO automatically figures out which inputs and output pairs are needed in this Jacobian. When solve_linear is called from any system that contains this system, the approximated Jacobian is used for the derivatives in this system.
The derivatives approximated in this manner are total derivatives of outputs of the group with respect to inputs. If any components in the group contain implicit states, then you must have an appropriate solver (such as NewtonSolver) inside the group to solve the implicit relationships.
Here is a classic example of where you might use an approximation like finite difference. In this example, we could just approximate the partials on components CompOne and CompTwo separately. However, CompTwo has a vector input that is 25 wide, so it would require 25 separate executions under finite difference. If we instead approximate the total derivatives on the whole group, we only have one input, so just one extra execution.
import numpy as np
from openmdao.api import Problem, Group, IndepVarComp, ScipyKrylov, ExplicitComponent
class CompOne(ExplicitComponent):
def setup(self):
self._exec_count = 0
def compute(self, inputs, outputs):
x = inputs['x']
outputs['y'] = np.arange(25) * x
self._exec_count += 1
class CompTwo(ExplicitComponent):
def setup(self):
self._exec_count = 0
def compute(self, inputs, outputs):
y = inputs['y']
outputs['z'] = np.sum(y)
self._exec_count += 1
prob = Problem()
model = prob.model = Group()
comp2 = model.add_subsystem('comp2', CompTwo(), promotes=['y', 'z'])
model.linear_solver = ScipyKrylov()
model.approx_totals()
prob.setup()
prob.run_model()
of = ['z']
wrt = ['x']
derivs = prob.compute_totals(of=of, wrt=wrt)
print(derivs['z', 'x'])
[[ 300.]]
print(comp2._exec_count)
2
The same arguments are used for both partial and total derivative approximation specifications. Here we set the finite difference step size, the form to central differences, and the step_calc to relative instead of absolute.
import numpy as np
from openmdao.api import Problem, Group, IndepVarComp, ScipyKrylov, ExplicitComponent
class CompOne(ExplicitComponent):
def setup(self):
self._exec_count = 0
def compute(self, inputs, outputs):
x = inputs['x']
outputs['y'] = np.arange(25) * x
self._exec_count += 1
class CompTwo(ExplicitComponent):
def setup(self):
self._exec_count = 0
def compute(self, inputs, outputs):
y = inputs['y']
outputs['z'] = np.sum(y)
self._exec_count += 1
prob = Problem()
model = prob.model = Group()
comp2 = model.add_subsystem('comp2', CompTwo(), promotes=['y', 'z'])
model.linear_solver = ScipyKrylov()
model.approx_totals(method='fd', step=1e-7, form='central', step_calc='rel')
prob.setup()
prob.run_model()
of = ['z']
wrt = ['x']
derivs = prob.compute_totals(of=of, wrt=wrt)
print(derivs['z', 'x'])
[[ 300.00000048]]
Complex Step¶
You can also complex step your model or group, though there are some important restrictions.
All components must support complex calculations in solve_nonlinear:
Under complex step, a component’s inputs are complex, all stages of the calculation will operate on complex inputs to produce complex outputs, and the final value placed into outputs is complex. Most Python functions already support complex numbers, so pure Python components will generally satisfy this requirement. Take care with functions like abs, which effectively squelches the complex part of the argument.
Solvers like Newton that require gradients are not supported:
Complex stepping a model causes it to run with complex inputs. When there is a nonlinear solver at some level, the solver must be able to converge. Some solvers such as NonlinearBlockGS can handle this. However, the Newton solver must linearize and initiate a gradient solve about a complex point. This is not possible to do at present (though we are working on some ideas to make this work.)
import numpy as np
from openmdao.api import Problem, Group, IndepVarComp, ScipyKrylov, ExplicitComponent
class CompOne(ExplicitComponent):
def setup(self):
self._exec_count = 0
def compute(self, inputs, outputs):
x = inputs['x']
outputs['y'] = np.arange(25) * x
self._exec_count += 1
class CompTwo(ExplicitComponent):
def setup(self):
self._exec_count = 0
def compute(self, inputs, outputs):
y = inputs['y']
outputs['z'] = np.sum(y)
self._exec_count += 1
prob = Problem()
model = prob.model = Group()
comp2 = model.add_subsystem('comp2', CompTwo(), promotes=['y', 'z'])
model.linear_solver = ScipyKrylov()
model.approx_totals(method='cs')
prob.setup()
prob.run_model()
of = ['z']
wrt = ['x']
derivs = prob.compute_totals(of=of, wrt=wrt)
print(derivs['z', 'x'])
[[ 300.]] | 2019-03-26 02:39:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5384806990623474, "perplexity": 5646.90834263036}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204768.52/warc/CC-MAIN-20190326014605-20190326040605-00297.warc.gz"} |
https://www.zigya.com/study/book?class=11&board=bsem&subject=Physics&book=Physics+Part+I&chapter=Units+and+Measurement&q_type=&q_topic=Accuracy,+Precision+Of+Instrument+And+Errors+In+Measurement&q_category=&question_id=PHEN11096450 | ## Book Store
Currently only available for.
CBSE Gujarat Board Haryana Board
## Previous Year Papers
Download the PDF Question Papers Free for off line practice and view the Solutions online.
Currently only available for.
Class 10 Class 12
A physical quantity P is related to four observables a, b, c and d as follows :
The percentage errors of measurement in a, b, c and d are 1%, 3%, 4% and 2%, respectively. What is the percentage error in the quantity P ? If the value ofP calculated using the above relation turns out to be 3.763, to what value should you round off the result?
426 Views
Give four examples of physical quantities.
Examples of Physical quantitie are force, mass, density and power.
821 Views
What is measurement?
Measurement is the process of attaching a numeric value to an aspect of a natural phenomenon. Measurement is making the quantitative knowledge of a physical quantity.
1289 Views
Define unit.
The quantity used as a standard of measurement is called unit.
Example: Unit of time is second.
1172 Views
Define physical quantity.
A measurable quantity in terms of which laws of Physics can be expressed is called physical quantity.
The result of a measurement of a physical quantity is expressed by a number accompanied by a unit.
878 Views
What is the need of measurement?
To get the complete knowledge of any physical quantity, measurement is needed. Measurement of any physical quantity involves comparison with a certain basic, arbitrarily chosen, internationally accepted reference standard called unit. Knowledge without measurement is incomplete and unsatisfactory.
825 Views | 2018-10-19 06:24:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.668298065662384, "perplexity": 2010.0900124089378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512332.36/warc/CC-MAIN-20181019062113-20181019083613-00320.warc.gz"} |
https://maskfort.com/j0ca4/2ce792-lucas-critique-phillips-curve | Uncategorized
# lucas critique phillips curve
Trump is mad at Germany because thought relatively strong and a wealthy country (with the abiltiy to bail out other European countris to promote the strength of the Euro) and with a low unemployment rate, they import very little. Inflation can also result from nothing more than the anticipation of inflation. And there is other issue with money, very uncommon for other form of commodities. And the claim that there’s weak or no evidence of a link between unemployment and inflation is sustainable only if you insist on restricting yourself to recent U.S. data. Increases in aggregate demand tend to raise prices and employment, decreases in aggregate demand have opposite effects. However, this paper argues that the Indian Phillips curve can be estimated using standard econometric techniques, as opposed to several special adjustments that are required in Paul (2009)’s work. If the price of money, the interest, is low, what means from money holders point of view a relatively high price for money holding, the alternative investments in value holding assets seems to be more attractive. Inflation in Spain is definitely not driven by monetary factors, since Spain hasn’t even had its own money since it joined the euro. At the same time, Andolfatto expressed his own view, that the rate of inflation is not determined by the rate of unemployment, but by the stance of monetary policy. Expected inflation can also affect output and employment, so inflation and unemployment are related not only by both being affected by excess supply of (demand for) money, but by both being affect by expected inflation. This paper presents an investigation of the empirical significance of the Lucas Critique for the Phillips Curve. That’s what the econoblogosphere has, of late, been trying to figure out. The paradox to this is that high employment will inevitably increase inflation. Permanently raising inflation in hopes that this would permanently lower unemployment would eventually cause firms' inflation forecaststo rise, altering their employment decisions. The Phillips curve is drawn for a given ... (Phillips curve), Chapter 15 (Lucas Critique). We return to this theme after our historical overview. All material on this site has been provided by the respective publishers and authors. (I note parenthetically, that I am referring now to an excess supply of base money, not to an excess supply of bank-created money, which, unlike base money, is not a hot potato that cannot be withdrawn from circulation in response to market incentives.) I agree there is no reason to think that in the real world inflation is immaculate, though there may be circumstances in which it can be. Yet money as value holding item has alternatives, like investments in real estate or any other income generating assets. This site uses Akismet to reduce spam. Nor does that mean that an imbalance in the supply of money is the only cause of inflation or price level changes. For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: (). There is no correlation in the technology based economy between prices and unemployment. Re-evaluate what constitutes and contributes to domestic inflation and you can solve the problem rather that using and manipulating external factors (patchwork) to resolve an endemic disease. The Lucas Critique, DSGE models and the Phillips Curve. In the 1970s, Robert Lucas perceived that there was a big problem in macroeconomics. The solution, Lucas said, was to explicitly model the behavior of human beings, and to only use macro models that took this behavior int… That risk assessment is based on some sort of analysis in which it is inferred from the Phillips Curve that, with unemployment nearing historically low levels, rising inflation has become dangerously likely. no doubt it will be a lot of hard work – but I get the feeling no one is willing to get their hands dirty. Could unemployment fall to 3.5% without accelerating inflation? That has implications for how our economy is operating, but not necessarily for the monetary system. Stated simply, decreased unemployment, (i.e., increased levels of employment) in an economy will correlate with higher rates of wage rises. I can imagine an economy, based on idea of barter, where no intermediating money is involved, and products are exchanged as against other products, for contracted exchange value, without money as intermediation. Understanding that relation-ship—between policymaking and the Phillips curve— is a key ingredient to sound policy decisions. While the Phillips curve affirms an inverse r elation between inflation a nd unemployment, according to the Lucas critique, the long- run inflation-unemployment relation is expectedly positiv e. As the access to this document is restricted, you may want to search for a different version of it. But I’m not convinced that Mr Phillips was saying that unemployment is the direct cause of more or less inflation. I have 2 points in response: Miguel, I’m not sure what Phillips himself believed. And the consensus seems to be that the FOMC is basing its assessment that the risk that inflation will break the 2% ceiling that it has implicitly adopted has become unacceptably high. One important application of the critique (independent of proposed microfoundations) is its implication that the historical negative correlation between inflation and unemployment, known as the Phillips curve, could break down if the monetary authorities attempted to exploit it. But, the Lucas Critique, a rather trivial result that was widely understood even before Lucas took ownership of the idea, does at least warn us not to confuse a reduced form with a causal relationship. Structural unemployment. EugenR, It is a category mistake to assume that the price level is determined in the same way as individual money prices. The one supplied above is to a 2013 post also (unfairly in my view) criticizing David Andolfatto. This may change if the position of US dollar will change. I am not a big admirer of the Lucas Critique for reasons that I have discussed in other posts (e.g., here and here). But if you concede that unemployment had a lot to do with Spanish inflation and disinflation, you’ve already conceded the basic logic of the Phillips curve. The curve was downward sloping. The global markets and global capital took over the lead, (unless some crazy president will be successful with turning back the time). One is that we really don’t know how low U can go, and won’t find out if we don’t give it a chance. Krugman uses the example of Spain where (he claims) an inflation rate lower than its euro-zone partners led to lower relative costs and increased demand for its goods which led to lower unemployment. THIS VIDEO DISCUSSES ABOUT WHAT IS RATIONAL EXPECTATION AND LUCAS CRITIQUE IN HINDI WITH EXAMPLES DONATION LINKS PAYTM: 9179370707 BHIM: 9179370707@upi. So, do you really want to claim that the swings in inflation had nothing to do with the swings in unemployment? In the early 1970s, Robert E. Lucas Jr, developed an alternative theory of the Phillips curve and the money-driven business cycle, under the assumption of rational expectations. Measured with the precision of which mere mortals are capable, core inflation appears already to be at target. Nonetheless, there have been big moves in both Spanish inflation and Spanish unemployment: That period of low unemployment, by Spanish standards, was the result of huge inflows of capital, fueling a real estate bubble. This keeps the currency strong. Money has unlimited demand because of its attribute of value holding property. Consider, for example, the case of Spain. At 1973 oil had no alternative, so its scarcity caused huge inflation and also unemployment. No one claims – at least no one who believes in a monetary theory of inflation — should claim that swings in inflation and unemployment are unrelated, but to acknowledge the relationship between inflation and unemployment does not entail acceptance of the proposition that unemployment is a causal determinant of inflation. You can help correct errors and omissions. In fact, it is this very relation that is used to motivate Lucas's own 1976 paper, which appeared in a con- Here the price of money plays an important function. • Lucas critique: Wage setters should take into account changes in policy when setting inflation expectations. Enter your email address to follow this blog and receive notifications of new posts by email. Phillips Curve ç ç ∗ + ç ç > 5 •Mankiw‐Reis: Key role of expectations term •Hall‐Sargent: “Traditional” term in the Phillips curve has little power in forecasting inflation •Important consequences for estimating Phillips curves Traditional Friedman Forces Public profiles for Economics researchers, Various rankings of research in Economics & related fields, Curated articles & papers on various economics topics, Upload your paper to be listed on RePEc and IDEAS, RePEc working paper series dedicated to the job market, Pretend you are at the helm of an economics department, Data, research, apps & more from the St. Louis Fed, Initiative for open bibliographies in Economics, Have your institution's/publisher's output listed on RePEc. Models that didn’t allow for human beings to adjust their behavior couldn’t be used for policy, because if you tried to use them, people would alter their behavior until the models no longer worked. Send in the choppers, and don’t stop until you hit 3% inflation/. Please note that corrections may take a couple of weeks to filter through Even in energy industry, the non conventional solutions, have only capital limitations, and no resource limitations. Countries with rapid productivity growth will enjoy increasing real wages which will translate into rising tradable prices while countries with low productivity growth will have falling tradable prices. The investigation is carried out with annual historical time series for the United Kingdom (1857-1987) and the United States (1892-1987). We’re currently well above historical estimates of full employment, and inflation remains subdued. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience. ( Log Out / …sorry didn’t finish writing, phone died . There’s no reason for anyone to care about overall money demand in this scenario. Rather both variables respond to shifts in aggregate demand or aggregate supply. Even if still marginal, and all the governments and financial institutions try to keep it marginalized, the idea of blockchain and smart contract technology can become a leading intermediation tool for value exchange. 7-10. The reverse happens when there is an excess demand for cash balances and people attempt to build up their cash holdings by cutting back their spending, reducing output. This times are long time over. As Karl Smith pointed out a decade ago, the doctrine of immaculate inflation, in which money translates directly into inflation – a doctrine that was invoked to predict inflationary consequences from Fed easing despite a depressed economy – makes no sense. Agreed, but the better approach would be to target the price level, or even better nominal GDP, so that short-term undershooting of the inflation target would provide increased leeway to allow inflation to overshoot the inflation target without undermining the credibility of the commitment to price stability. The Lucas critique is an objections to the assumption that. Henry, The original Phillips Curve was a plot of points representing combinations of the rate of unemployment and the rate of increase in wages published in an article in the late 1950s by a distinguished economist at the London School of Economics, A. W. Phillips. There are several threats on the horizon, that may endanger the US dollar position. If the 1973 stagflation didn’t give enough empirical evidence that Philips curve doesn’t work, the 2018 economic situation should. Other way to increase price above marginal cost is by creating legal obstacle for usage of new technologies, or of unique brand. The Lucas critique [...] has revolutionized the evaluation of policy, down tothemostpractical levelin centralbanks and financeministries. He also assumed that workers would get the benefit of productivity increases. They haven’t…. ), lots of people are starting to wonder if we might be headed for a pick-up in the rate of inflation, which has been averaging well under 2% a year since the financial crisis of September 2008 ushered in the Little Depression of 2008-09 and beyond. Take a longer and broader view, and the evidence is obvious. By the way, the legal protection of knowledge or copyright is becoming less and less obvious and maintainable. Germany’s unemployment rate has only changed since it strategically let in more migrants through its borders. But since all the alternative value holding assets, be it real estate or company shares are limited and final at the certain time period, their prices are wildly escalating at times of low interest rate and plummeting at times of high interest rate. The negative relationship between unemployment and inflation that is found by empirical studies does not tell us that high unemployment reduces inflation, any more than a positive empirical relationship between the price of a commodity and the quantity sold would tell you that the demand curve for that product is positively sloped. It should also be noted that the NKPC model has profoundly di erent implications for the conduct of monetary policy relative to the less formal accelerationist Phillips curve. ". As you say, it’s a coincidan relationship. Sorry, your blog cannot share posts by email. ( Log Out / These tests led them to conclude that the kind of instability assumed by Lucas (1976) could What are the causal paths and links between inflation and employment as you see it? )The Phillips Curve and Labor Markets Carnegie-Rochester Conference Series on Public Policy. Create a free website or blog at WordPress.com. Lucas was at the forefront of this task and the rational expectation revolution. ECONOMETRIC POEICY EVALUATION: A CRITIQUE Robert E. Lucas, Jr. 1. The Phillips Curve, The Persistence of Inflation, and the Lucas Critique: Evidence from Exchange-Rate Regimes We present evidence from the United States and the United Kingdom that the The Fed has already signaled its intention to continue raising interest rates even though inflation remains well anchored at rates below the Fed’s 2% target. I am not a big admirer of the Lucas Critique for reasons that I have discussed in other posts (e.g., here and here). You may say, with considerable justification, that U.S. data are too noisy to have any confidence in particular estimates of that curve. The cost of living (inflation) can also be reviewed internally by each country through a revision if its own PPP and CPI weightings. 34/34. Now some price setters may actually use macroeconomic information to forecast price movements, but recognizing that channel would take us into the realm of an expectations-theory of inflation, not the strict monetary theory of inflation that Krugman is criticizing. The Phillips curve is a single-equation economic model, named after William Phillips, describing an inverse relationship between rates of unemployment and corresponding rates of rises in wages that result within an economy. Examples But should we drop the whole notion that unemployment has anything to do with inflation? Your ideas about trade deficit and currency are looking left overs of times of national and marcentile economies, where commerce and capital flow is limited and restrained. No it’s not foolish, because the relationship between inflation and unemployment is not a causal relationship; it’s a coincidental relationship. Does the Fed know how low the unemployment rate can go? Price setters respond to the perceived change in the rate of spending induced by an excess supply of money. ( Log Out / Some people may imagine that they’re the same question, but they definitely aren’t: It seems obvious to me that the answer to (1) is no. The Phillips Curve, the Persistence of Inflation, and the Lucas Critique: Evidence from Exchange-Rate Regimes Countries with rapid productivity growth will enjoy increasing real wages which will translate into rising tradable prices while countries with low productivity growth will have falling tradable prices. The investigation is carried out with annual historical time series for the United Kingdom (1857-1987) and the United States (1892-1987). If nominal wages are sticky downward, the countries with falling prices will be the ones having rising unemployment. We have no references for this item. I think they think that is required to hit their target of 2% on average, even ignoring bygones and looking strictly forward. In these cases of countries sharing a currency there does seem to be a case for saying that there is a causal relationship between (relative) inflation and unemployment (albeit one in the opposite direction to the one predicted by the Phillips curve). RSS Entries and RSS Comments. The best known source for the Lucas Critique is Lucas (1976). I think the correct link to Krugman’s recent post is. If there is a demonstrable correlation between the level of employment and inflation, how would you rationalize this relationship? Adaptive expectations imply systematic errors in forecasting and do not take account of other relevant information. What gives strength to these economies is not their production capacity, that is still there, even if not fully used, but their financial system, supporting their markets. When I read Krugman’s post and the Andalfatto post that provoked Krugman, it occurred to me that the way to summarize all of this is to say that unemployment and inflation are determined by a variety of deep structural (causal) relationships. the situation in pharmaceutical industry or entertainment industry. 4.2. Viz. Simple supply and Demand. Since all the products are energy driven, the cost of energy is crucial. If nominal wages are sticky downward, the countries with falling prices will be the ones having rising unemployment.’. Vol. (1997). The main aim of this paper is to do just that. This is why US can maintain its huge trade deficit already three decades, and China has outflow of capital. The Phillips Curve, although it was once fashionable to refer to it as the missing equation in the Keynesian model, is not a structural relationship; it is a reduced form. It also allows you to accept potential citations to this item that we are uncertain about. http://www.cepr.org/active/publications/discussion_papers/dp.php?dpno=321, The Phillips Curve and the Lucas Critique: Some Historical Evidence, Les salaires dans les grands pays de l'OCDE au cours des années quatre-vingt, Alogoskoufis, George & Smith, Ron P, 1989. The Lucas Critique and the Volcker Deflation ABSTRACT This paper examines, in light of the Lucas Critique, the behavior of the Phillips curve and of the term structure of interest rates after October 1 979. [...] Work on the Phillips Curve has been virtually abandoned, devastated by the Bai J., Perron P. (2003), Computation and Analysis of Multiple Structural Change Models, Journal of Applied Econometrics, 18, 1-22. My point in the post is that there is very little reason to believe that there is a strong causal relationship between inflation and unemployment. Change ), You are commenting using your Google account. It may be interesting to know that there is a negative empirical relationship between inflation and unemployment, but we can’t rely on that relationship in making macroeconomic policy. Prices ceased to be long time ago connected to limited available labour force. And so the next question is: why is the FOMC fretting about the Phillips Curve? The Expectations-Augmented Phillips Curve . Applied Economics: Vol. The fact that the long-run Phillips curve is vertical implies that. When requesting a correction, please mention this item's handle: RePEc:cpr:ceprdp:321. The two major shifts that we identify coincide with the abandonment of the classical gold standard in 1914, and the disintegration of the Bretton Woods gold-dollar standard in the late 1960s. Change ). The first and imediate threat is making the Chinese RMB a freely convertible currency. That is, is a refutation of the Lucas’ Critique. See general information about how to correct material in RePEc. This is known as the "Lucas Critique". Definitely not. B. Just to make it clear, agriculture, even if land dependent, has still long way to utilize technologies in the food production processes, and there are many alternatives to classical Iand dependent products. The level of employment depends on many things and some of the things that employment depends on also affect inflation. And among Fed watchers and Fed cognoscenti, the only question being asked is not whether the Fed will raise its Fed Funds rate target, but how frequent those (presumably) quarter-point increments will be. With unemployment at the lowest levels since the start of the millennium (initial unemployment claims in February were the lowest since 1973! Introduction Tile fact that nominal prices and wages tend to rise more rapidly at tile peak of the business cycle than they do in the trough has been well recognized from the ... text, a "long-run Phillips curve" is simply a plot of average inflation - unemploy- 21 . – I think Krugman’s point is that for countries like Spain (which have a price level in the common currency that is too high for their productivity level) once the stickiness is eventually overcome and real wages and other prices start falling then employment will increase. Countries like the US and and UK are obsessed with the value of their currency and will do ‘what is required’ to keep it ‘strong’. But denying that it makes sense to talk about unemployment driving inflation is foolish. The Phillips curve, parameter instability and the Lucas critique. So the inflation unemployment relationship results from the effects induced by a particular causal circumstance. Sacrifice ratio is smaller. So the observed empirical relationship depends on whether aggregate demand shifts or aggregate supply shifts predominate. The Lucas Critique in 1976 has been a major motivation behind the building of RBC models, the folow-up DSGE models as well as the structural estimation of these models. Does it mean that monetary easing has no influence on prices? https://rodeneugen.wordpress.com/2018/04/02/currency-deficit-and-global-economy/. Other economists found similar correlations between price inflation and unemployment. Application of the Lucas critique to the Phillips curve suggests that the model will not be stable over long periods of time. “Econometric Policy Evaluation: A Critique.” In Karl Brunner and Allan H. Meltzer (eds. tistical Phillips curve. Inflation expectations $E(\pi_t | \theta_{t-1}) \equiv \pi_t^E$ Expected inflation is based on past information. The inference I think he was just interested in the statistical correlation and did not offer much in the way of theory. ( Log Out / The next major price increase will be caused by one of these items. As such, if in scarcity its price increases and if abundant it’s price decreases. But there is no need to wait for the April reports to confirm that the base effect dropped out in March. Incidentally, I think the Fed is taking the advice of the doves and preparing to allow inflation to run a little above 2% in the late cycle. cance of the Lucas critique. Via FTAlphaville, I see that David Andolfatto is at it again, asserting that there’s something weird about asserting an unemployment-inflation link, and that inflation is driven by an imbalance between money supply and money demand. There are very few raw material items in contemporary information and technology driven market and capital economy, without technological alternatives. Noah Opinion summarizes what the Lucas critique was about. 4.3 Phillips curve and expectations. Policy evaluation procedures now routinely respect the dependence of private decision rules on the government’s policy rule. of earlier Phillips curves about ad hoc treatment of expectations or to the Lucas critique of econometric accelerationist Phillips curves. Andolfatto’s avowal of monetarist faith in the purely monetary forces that govern the rate of inflation elicited a rejoinder from Paul Krugman expressing considerable annoyance at Andolfatto’s monetarism. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. If the scarcity is of final product, its price will increase and it will become a local event, but if the scarcity is of raw material, so basic as energy producing raw material, it causes bottleneck in production and as result of it, in one hand it will cause scarcity and price increase in large range of products and on other hand unemployment. Lucas, Robert E., Jr. (1976). Post was not sent - check your email addresses! You can help adding them by using this form . Routinely respect the dependence of private decision rules on the gold standard, when different countries share the thing... Abundance of product or one of its aspects a raw material items in contemporary information and technology driven and! Scarcity caused huge inflation and unemployment Commentary on monetary policy in the Washington DC area sudden stop lucas critique phillips curve the crisis! And short-run relations can be combined in a single “ expectations-augmented ” Phillips curve this form raising in! On the Phillips curve slipped unfinished, i will answer to all in Opinion. What Phillips himself believed considerable justification, that U.S. data are too noisy have... Dollar position well above historical estimates of that curve you can help adding them lucas critique phillips curve using this form this and. Not share posts by email that unemployment has anything to do with the Phillips curve of econometric accelerationist curves... Countries with falling prices will be caused by one of its components, without technological.. Can also result from nothing more than the anticipation of inflation do not take account of relevant!, for several reasons an objections to the price level changes the respective publishers and authors ( 1857-1987 and. And do not take account of how they are related any other income generating assets state exist. Price decreases are very few raw material items in contemporary information and technology market! What Krugman call “ the inmaculate inflation ” exports will affect the nation ’ s currency Lucas... A Critique Robert E., Jr. ( 1976 ) cpr: ceprdp:321 and Labor Carnegie-Rochester! To keep it going until an inevitable burst additional output and additional along! Aim of this task and the Phillips curve— is a refutation of things. Came the sudden stop after the Greek crisis, which sent unemployment soaring its components without. Ad hoc treatment of expectations or to the perceived change in the way of.! Firms ' inflation forecaststo rise, altering their employment decisions slipped unfinished, i will answer to all in blog! Spending induced by an excess supply of money from the effects induced by a particular causal circumstance not! Account of how they are related of these items the rational expectation revolution example! The ( 1997 ) an economist in the pre-information economy, but not necessarily for the Phillips curve the. Swings in inflation had nothing to do with the precision of which mere mortals are capable, core inflation already! Historical time series for the April reports to confirm that the Fed be tightening now, for several reasons 1857-1987. Same currency next major price increase will be caused by one of its components, without alternative and others put... Thing could apply to lucas critique phillips curve on the Phillips curve ceased to be at target empirical significance of the Lucas was! Germany ’ s policy rule see general information about how to correct in! Or inflation ) and increase employment known as the Lucas Critique '' countries on the standard. Are too noisy to have any confidence in particular estimates of that curve but nevertheless it seems correct link. } ) \equiv \pi_t^E\ ] Expected inflation is still low i think the same thing could apply to countries the... Or click an icon to Log in: you are commenting using your account! See it are very few raw material, essential to manage economy inflation... If there is a demonstrable correlation between the level of individual product as. Recent post is and policy and the Phillips curve very few raw material items in contemporary information and driven! It seems correct to link your profile to this item 's handle: RePEc: cpr:.. Why does the Fed know how low the unemployment rate has only changed since it strategically let in more through. 1973 stagflation didn ’ t stop until you hit 3 % inflation/ what Krugman call “ inmaculate... This relationship, which sent unemployment soaring the nation ’ s what the econoblogosphere has of! And authors you can help adding them by using this form products are energy driven, the legal protection knowledge. Historical estimates of that curve decision rules on the horizon, that make impossible what Krugman call the... Why US can maintain its huge trade deficit as higher imports however, will skew the trade deficit already decades! % on average, even though inflation is based on past information about ad hoc of! Not share posts by email holding property the 1970s, Robert E., 1... Nor does that mean that employment depends on also affect inflation and at... Be caused by one of these items single “ expectations-augmented ” Phillips curve, parameter and! Endanger the US dollar position several reasons are capable, core inflation appears to... Very soon to be long time ago connected to limited available labour force can not share by..., please mention this item 's handle: RePEc: cpr: ceprdp:321 marginal cost by... Really want to claim that the swings in unemployment by a particular causal circumstance that U.S. data are too to... Pre-Information economy, but not anymore assumption that problem in macroeconomics - check your email address to this. ) \equiv \pi_t^E\ ] Expected inflation is going to rise Fed believe that inflation is.. Need to wait for the United Kingdom ( 1857-1987 ) and the Phillips curve— is refutation... Has, of late, been trying to figure out economy is operating, but not.. Lucas perceived that there was a big problem in macroeconomics private decision rules on the Phillips curve the... Additional output and additional employment along with rising prices posts by email Critique of econometric accelerationist Phillips curves about hoc. Critique is an objections to the perceived change in the supply of money target of 2 % inflation over long-term! Recovery. some of the empirical significance of the Lucas Critique for the April reports to confirm the! Is restricted, you are commenting using your Twitter account artificial mechanisms keep! The Lucas Critique you see it would you rationalize this relationship now, example. Economy, but it is the only cause of inflation relation-ship—between policymaking and the expectation! Category mistake to assume that the lucas critique phillips curve of money is the result of the (! It makes sense to talk about unemployment driving inflation is lucas critique phillips curve on past information position of US dollar.! Appears already to be long time ago connected to limited available labour force since 1973 t enough... Historical time series lucas critique phillips curve the monetary system employment, and don ’ t mean that employment on... Technology driven market and capital economy, but it is far from necessary was a big problem in macroeconomics “. Writing has been mostly on monetary policy in the technology based economy between prices and unemployment of energy crucial... Not be stable over long periods of time m not convinced that Mr Phillips was saying that has! Authored this item 's handle: RePEc: cpr: ceprdp:321 get back to the assumption that your reasoning be. That inflation is going to rise countries share the same currency perceived change in the 1970s, Robert E. Jr.! Critique is an objections to the Phillips curve, parameter instability and the Phillips.... Adding them by using this form results from the effects induced by an excess supply of plays! Eventually cause firms ' inflation forecaststo rise, altering their employment decisions requesting a,. Such, if in scarcity its price increases and if abundant it ’ s a coincidan relationship we encourage to! Adaptive expectations imply systematic errors in forecasting and do not take account of how they are.. \Pi_T | \theta_ { t-1 } ) \equiv \pi_t^E\ ] Expected inflation is going to rise, 's... So, do you really want to search for a different version of it suggests the! Increases in aggregate demand have opposite effects email address to follow this and. With rising prices this site has been virtually abandoned, devastated by the ( 1997 ) we drop whole..., which sent unemployment soaring to manage economy aspects a raw material items in information! To Krugman ’ s recent post is market and capital economy, without alternatives. A given... ( Phillips curve 1976 ) rather both variables respond to the perceived change in the technology economy. As the access to this theme after our historical overview restricted, you are commenting using your Google account estimates. Will inevitably increase inflation at a certain level of abstraction ) it is from! 15 ( Lucas Critique, DSGE models and the United Kingdom ( ). Care about overall money demand in this scenario that monetary easing has no influence on?... Email addresses s policy rule horizon, that make impossible what Krugman call “ the inmaculate ”! Are uncertain about inflation and unemployment help adding them by using this form argue that the in. And unemployment take a longer and broader view, and China has outflow of capital Opinion what... A couple of weeks to filter through the various RePEc services Log out / change ), 15... Mean that employment causally affects inflation ago connected to limited available labour force sent check! Or aggregate supply shifts predominate \equiv \pi_t^E\ ] Expected inflation is foolish economy is operating, but not necessarily the. Increases and if abundant it ’ s price decreases the falling price level is determined in the supply of.. About overall money demand in this scenario millennium ( initial unemployment claims in February were lowest... A big problem in macroeconomics colonial empire ) this site has been provided by respective... Increased spending can induce additional output and additional employment along with rising prices and do not take of... Estate or any other income generating assets expectations \ [ E ( \pi_t \theta_. Issue with money, is a lucas critique phillips curve of the things that employment causally affects inflation is.... T-1 } ) \equiv \pi_t^E\ ] Expected inflation is foolish FOMC fretting about the Phillips curve ), you commenting. Their employment decisions of theory other issue with money, very uncommon for other form of.! | 2021-01-19 12:37:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3418080806732178, "perplexity": 2646.1562764416913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703518240.40/warc/CC-MAIN-20210119103923-20210119133923-00570.warc.gz"} |
https://doisinkidney.com/posts/2016-11-17-semirings-lhs.html | ## Semirings
Posted on November 17, 2016
{-# LANGUAGE GeneralizedNewtypeDeriving, TypeFamilies #-}
{-# LANGUAGE DeriveFunctor, DeriveFoldable, DeriveTraversable #-}
{-# LANGUAGE PatternSynonyms, ViewPatterns, LambdaCase #-}
{-# LANGUAGE RankNTypes, FlexibleInstances, FlexibleContexts #-}
module Semirings where
import qualified Data.Map.Strict as Map
import Data.Map.Strict (Map)
import Data.Monoid hiding (Endo(..))
import Data.Foldable hiding (toList)
import Control.Applicative
import Control.Arrow (first)
import Data.Functor.Identity
import GHC.Exts
import Data.List hiding (insert)
import Data.Maybe (mapMaybe)
I’ve been playing around a lot with semirings recently. A semiring is anything with addition, multiplication, zero and one. You can represent that in Haskell as:
class Semiring a where
zero :: a
one :: a
infixl 7 <.>
(<.>) :: a -> a -> a
infixl 6 <+>
(<+>) :: a -> a -> a
It’s kind of like a combination of two monoids. It has the normal monoid laws:
x <+> (y <+> z) = (x <+> y) <+> z
x <.> (y <.> z) = (x <.> y) <.> z
x <+> zero = zero <+> x = x
x <.> one = one <.> x = x
And a few extra:
x <+> y = y <+> x
x <.> (y <+> z) = (x <.> y) <+> (x <.> z)
(x <+> y) <.> z = (x <.> z) <+> (y <.> z)
zero <.> a = a <.> zero = zero
I should note that what I’m calling a semiring here is often called a rig. I actually prefer the name “rig”: a rig is a ring without negatives (cute!); whereas a semiring is a rig without neutral elements, which mirrors the definition of a semigroup. The nomenclature in this area is a bit of a mess, though, so I went with the more commonly-used name for the sake of googleability.
At first glance, it looks quite numeric. Indeed, PureScript uses it as the basis for its numeric hierarchy. (In my experience so far, it’s nicer to use than Haskell’s Num)
instance Semiring Integer where
zero = 0
one = 1
(<+>) = (+)
(<.>) = (*)
instance Semiring Double where
zero = 0
one = 1
(<+>) = (+)
(<.>) = (*)
However, there are far more types which can form a valid Semiring instance than can form a valid Num instance: the negate method, for example, excludes types representing the natural numbers:
newtype ChurchNat = ChurchNat
{ runNat :: forall a. (a -> a) -> a -> a}
data Nat = Zero | Succ Nat
These form perfectly sensible semirings, though:
instance Semiring ChurchNat where
zero = ChurchNat (const id)
one = ChurchNat ($) ChurchNat n <+> ChurchNat m = ChurchNat (\f -> n f . m f) ChurchNat n <.> ChurchNat m = ChurchNat (n . m) instance Semiring Nat where zero = Zero one = Succ Zero Zero <+> x = x Succ x <+> y = Succ (x <+> y) Zero <.> _ = Zero Succ Zero <.> x =x Succ x <.> y = y <+> (x <.> y) The other missing method is fromInteger, which means decidedly non-numeric types are allowed: instance Semiring Bool where zero = False one = True (<+>) = (||) (<.>) = (&&) We can provide a more general definition of the Sum and Product newtypes from Data.Monoid: newtype Add a = Add { getAdd :: a } deriving (Eq, Ord, Read, Show, Semiring) newtype Mul a = Mul { getMul :: a } deriving (Eq, Ord, Read, Show, Semiring) instance Functor Add where fmap f (Add x) = Add (f x) instance Applicative Add where pure = Add Add f <*> Add x = Add (f x) I’m using Add and Mul here to avoid name clashing. instance Semiring a => Monoid (Add a) where mempty = Add zero Add x mappend Add y = Add (x <+> y) instance Semiring a => Monoid (Mul a) where mempty = Mul one Mul x mappend Mul y = Mul (x <.> y) add :: (Semiring a, Foldable f) => f a -> a add = getAdd . foldMap Add mul :: (Semiring a, Foldable f) => f a -> a mul = getMul . foldMap Mul add and mul are equivalent to sum and product: add xs == sum (xs :: [Integer]) mul xs == product (xs :: [Integer]) But they now work with a wider array of types: non-negative numbers, as we’ve seen, but specialised to Bool we get the familiar Any and All newtypes (and their corresponding folds). add xs == or (xs :: [Bool]) mul xs == and (xs :: [Bool]) So far, nothing amazing. We avoid a little bit of code duplication, that’s all. ## A Semiring Map In older versions of Python, there was no native set type. In its place, dictionaries were used, where the values would be booleans. In a similar fashion, before the Counter type was added in 2.7, the traditional way of representing a multiset was using a dictionary where the values were integers. Using semirings, both of these data structures can have the same type: newtype GeneralMap a b = GeneralMap { getMap :: Map a b } deriving (Functor, Foldable, Show, Eq, Ord) If operations are defined in terms of the Semiring class, the same code will work on a set and a multiset: insert :: (Ord a, Semiring b) => a -> GeneralMap a b -> GeneralMap a b insert x = GeneralMap . Map.insertWith (<+>) x one . getMap delete :: Ord a => a -> GeneralMap a b -> GeneralMap a b delete x = GeneralMap . Map.delete x . getMap How to get back the dictionary-like behaviour, then? Well, operations like lookup and assoc are better suited to a Monoid constraint, rather than Semiring: lookup :: (Ord a, Monoid b) => a -> GeneralMap a b -> b lookup x = fold . Map.lookup x . getMap assoc :: (Ord a, Applicative f, Monoid (f b)) => a -> b -> GeneralMap a (f b) -> GeneralMap a (f b) assoc k v = GeneralMap . Map.insertWith mappend k (pure v) . getMap lookup is a function which should work on sets and multisets: however Bool and Integer don’t have Monoid instances. To fix this, we can use the Add newtype from earlier. The interface for each of these data structures can now be expressed like this: type Set a = GeneralMap a (Add Bool) type MultiSet a = GeneralMap a (Add Integer) type Map a b = GeneralMap a (First b) type MultiMap a b = GeneralMap a [b] And each of the functions on the GeneralMap specialises like this: -- Set insert :: Ord a => a -> Set a -> Set a lookup :: Ord a => a -> Set a -> Add Bool delete :: Ord a => a -> Set a -> Set a -- MultiSet insert :: Ord a => a -> MultiSet a -> MultiSet a lookup :: Ord a => a -> MultiSet a -> Add Integer delete :: Ord a => a -> MultiSet a -> MultiSet a -- Map assoc :: Ord a => a -> b -> Map a b -> Map a b lookup :: Ord a => a -> Map a b -> First b delete :: Ord a => a -> Map a b -> Map a b -- MultiMap assoc :: Ord a => a -> b -> MultiMap a b -> MultiMap a b lookup :: Ord a => a -> MultiMap a b -> [b] delete :: Ord a => a -> MultiMap a b -> MultiMap a b This was actually where I first came across semirings: I was trying to avoid code duplication for a trie implementation. I wanted to get the Boom Hierarchy (1981) (plus maps) from the same underlying implementation. It works okay. On the one hand, it’s nice that you don’t have to wrap the map type itself to get the different behaviour. There’s only one delete function, which works on sets, maps, multisets, etc. I don’t need to import the TrieSet module qualified, to differentiate between the four delete functions I’ve written. On the other hand, the Add wrapper is a pain: having lookup return the wrapped values is ugly, and the Applicative constraint is unwieldy (we only use it for pure). Both of those problems could be solved by using something like the Newtype or Wrapped class, which provide facilities for wrapping and unwrapping, but that might be overkill. While Monoid and Semiring can take you pretty far, even to a Monoid instance: fromList :: (Ord a, Semiring b, Foldable f) => f a -> GeneralMap a b fromList = foldr insert (GeneralMap Map.empty) fromAssocs :: (Ord a, Applicative f, Monoid (f b), Foldable t) => t (a, b) -> GeneralMap a (f b) fromAssocs = foldr (uncurry assoc) (GeneralMap Map.empty) instance (Ord a, Monoid b) => Monoid (GeneralMap a b) where mempty = GeneralMap Map.empty mappend (GeneralMap x) (GeneralMap y) = GeneralMap (Map.unionWith mappend x y) singleton :: Semiring b => a -> GeneralMap a b singleton x = GeneralMap (Map.singleton x one) They seem to fall down around functions like intersection: intersection :: (Ord a, Semiring b) => GeneralMap a b -> GeneralMap a b -> GeneralMap a b intersection (GeneralMap x) (GeneralMap y) = GeneralMap (Map.intersectionWith (<.>) x y) It works for sets, but it doesn’t make sense for multisets, and it doesn’t work for maps. I couldn’t find a semiring for the map-like types which would give me a sensible intersection. I’m probably after a different algebraic structure. ## A Probability Semiring While looking for a semiring to represent a valid intersection, I came across the probability semiring. It’s just the normal semiring over the rationals, with a lower bound of 0, and an upper of 1. It’s useful in some cool ways: you can combine it with a list to get the probability monad (Erwig and Kollmansberger 2006). There’s an example in PureScript’s Distributions package. newtype Prob s a = Prob { runProb :: [(a,s)] } There are some drawbacks to this representation, performance-wise. In particular, there’s a combinatorial explosion on every monadic bind. One of the strategies to reduce this explosion is to use a map: newtype Prob s a = Prob { runProb :: Map a s } Because this doesn’t allow duplicate keys, it will flatten the association list on every bind. Unfortunately, the performance gain doesn’t always materialize, and in some cases there’s a performance loss (Larsen 2011). Also, the Ord constraint on the keys prevents it from conforming to Monad (at least not without difficulty). Interestingly, this type is exactly the same as the GeneralMap from before. This is a theme I kept running into, actually: the GeneralMap type represents not just maps, multimaps, sets, multisets, but also a whole host of other data structures. ## Cont Edward Kmett had an interesting blog post about “Free Modules and Functional Linear Functionals” (2011b). In it, he talked about this type: infixr 0$*
newtype Linear r a = Linear { ($*) :: (a -> r) -> r } Also known as Cont, the continuation monad. It can encode the probability monad: fromProbs :: (Semiring s, Applicative m) => [(a,s)] -> ContT s m a fromProbs xs = ContT$ \k ->
foldr (\(x,s) a -> liftA2 (<+>) (fmap (s<.>) (k x)) a) (pure zero) xs
probOfT :: (Semiring r, Applicative m) => (a -> Bool) -> ContT r m a -> m r
probOfT e c = runContT c (\x -> if e x then pure one else pure zero)
probOf :: Semiring r => (a -> Bool) -> Cont r a -> r
probOf e = runIdentity . probOfT e
uniform :: Applicative m => [a] -> ContT Double m a
uniform xs =
let s = 1.0 / fromIntegral (length xs)
in fromProbs (map (flip (,) s) xs)
Multiplication isn’t paid for on every bind, making this (potentially) a more efficient implementation than both the map and the association list.
You can actually make the whole thing a semiring:
instance (Semiring r, Applicative m) => Semiring (ContT r m a) where
one = ContT (const (pure one))
zero = ContT (const (pure zero))
f <+> g = ContT (\k -> liftA2 (<+>) (runContT f k) (runContT g k))
f <.> g = ContT (\k -> liftA2 (<.>) (runContT f k) (runContT g k))
Which gives you a lovely Alternative instance:
instance (Semiring r, Applicative m) => Alternative (ContT r m) where
(<|>) = (<+>)
empty = zero
This sheds some light on what was going on with the unsatisfactory intersection function on GeneralMap: it’s actually multiplication. If you wanted to stretch the analogy and make GeneralMap conform to Semiring, you could use the empty map for zero, mappend for <+>, but you’d run into trouble for one. one is the map where every possible key has a value of one. In other words, you’d have to enumerate over every possible value for the keys. Interestingly, there’s kind of the inverse problem for Cont: while it has an easy Semiring instance, in order to inspect the values you have to enumerate over all the possible keys.
I now have a name for the probability monad / general map / Cont thing: a covector.
I think that the transformer version of Cont has a valid interpretation, also. If I ever understand Hirschowitz and Maggesi (2010) I’ll put it into a later follow-up post.
## Conditional choice
As a short digression, you can beef up the <|> operator a little, with something like the conditional choice operator:
data BiWeighted s = s :|: s
infixl 8 :|:
(|>) :: (Applicative m, Semiring s)
=> BiWeighted s
-> ContT s m a
-> ContT s m a
-> ContT s m a
((lp :|: rp) |> r) l =
(mapContT.fmap.(<.>)) lp l <|> (mapContT.fmap.(<.>)) rp r
--
(<|) :: ContT s m a
-> (ContT s m a -> ContT s m a)
-> ContT s m a
l <| r = r l
infixr 0 <|
infixr 0 |>
probOf ('a'==) (uniform "a" <| 0.4 :|: 0.6 |> uniform "b")
0.4
## UnLeak
If you fiddle around with the probability monad, you can break it apart in interesting ways. For instance, extracting the WriterT monad transformer gives you:
WriterT (Product Double) []
Eric Kidd describes it as PerhapsT: a Maybe with attached probability in his excellent blog post (and his paper in 2007).
Straight away, we can optimise this representation by transforming the leaky WriterT into a state monad:
newtype WeightedT s m a = WeightedT
{ getWeightedT :: s -> m (a, s)
} deriving Functor
instance Monad m => Applicative (WeightedT s m) where
pure x = WeightedT $\s -> pure (x,s) WeightedT fs <*> WeightedT xs = WeightedT$ \s -> do
(f, p) <- fs s
(x, t) <- xs p
pure (f x, t)
WeightedT x >>= f = WeightedT $\s -> do (x, p) <- x s getWeightedT (f x) p I’m not sure yet, but I think this might have something to do with the isomorphism between Cont ((->) s) and State s (Kmett 2011a). You can even make it look like a normal (non-transformer) writer with some pattern synonyms: type Weighted s = WeightedT s Identity pattern Weighted w <- (runIdentity . flip getWeightedT zero -> w) where Weighted (x,w) = WeightedT (\s -> Identity (x, s <.> w) ) And you can pretend that you’ve just got a normal tuple: half :: a -> Weighted Double a half x = Weighted (x, 0.5) runWeighted :: Semiring s => Weighted s a -> (a, s) runWeighted (Weighted w) = w evalWeighted :: Semiring s => Weighted s a -> a evalWeighted (Weighted (x,_)) = x execWeighted :: Semiring s => Weighted s a -> s execWeighted (Weighted (_,s)) = s ## Free Looking back at Cont, it is reminiscent of a particular encoding of the free monoid from Doel (2015): newtype FreeMonoid a = FreeMonoid { forall m. Monoid m => (a -> m) -> m } So possibly covectors represent the free semiring, in some way. Another encoding which looks free-ish is one of the efficient implementations of the probability monad from Larsen (2011): data Dist a where Certainly :: a -> Dist a -- only possible value Choice :: Probability -> Dist a -> Dist a -> Dist a Fmap :: (a -> b) -> Dist a -> Dist b Join :: Dist (Dist a) -> Dist a This looks an awful lot like a weighted free alternative. Is it a free semiring, then? Maybe. There’s a parallel between the relationship between monoids and semirings and applicatives and Alternatives (Rivas, Jaskelioff, and Schrijvers 2015). In a way, where monads are monoids in the category of endofunctors, alternatives are semirings in the category of endofunctors. This parallel probably isn’t what I first thought it was. First of all, the above paper uses near-semirings, not semirings. A near-semiring is a semiring where the requirements for left distribution of multiplication over addition and commutative addition are dropped. Secondly, the class which most mirrors near-semirings is MonadPlus, not alternative. (alternative doesn’t have annihilation) Thirdly, right distribution of multiplication over addition isn’t required MonadPlus: it’s a further law required on top of the existing laws. Fourthly, most types in the Haskell ecosystem today which conform to MonadPlus don’t conform to this extra law: in fact, those that do seem to be lists of some kind or another. A further class is probably needed on top of the two already there, with the extra laws (called Nondet in Fischer 2009). An actual free near-semiring looks like this: data Free f x = Free { unFree :: [FFree f x] } data FFree f x = Pure x | Con (f (Free f x)) Specialised to the Identity monad, that becomes: data Forest a = Forest { unForest :: [Tree x] } data Tree x = Leaf x | Branch (Forest x) De-specialised to the free monad transformer, it becomes: newtype FreeT f m a = FreeT { runFreeT :: m (FreeF f a (FreeT f m a)) } data FreeF f a b = Pure a | Free (f b) type FreeNearSemiring f = FreeT f [] These definitions all lend themselves to combinatorial search (Spivey 2009; Fischer 2009; Piponi 2009), with one extra operation needed: wrap. ## Odds Does the odds monad fit in to any of this? While WriterT (Product Rational) [] is a valid definition of the traditional probability monad, it’s not the same as the odds monad. If you take the odds monad, and parameterize it over the weight of the tail, you get this: data Odds m a = Certain a | Choice (m (a, Odds a)) Which looks remarkably like ListT done right: newtype ListT m a = ListT { next :: m (Step m a) } data Step m a = Cons a (ListT m a) | Nil That suggests a relationship between probability and odds: WriterT (Product Rational) [] = Probability ListT (Weighted Rational) = Odds ListT isn’t a perfect match, though: it allows empty lists. To correct this, you could use the Cofree Comonad: data Cofree f a = a :< (f (Cofree f a)) Subbing in Maybe for f, you get a non-empty list. A weighted Maybe is basically PerhapsT, as was mentioned earlier. ## Generalizing Semirings Types in haskell also form a semiring. (<.>) = (,) one = () (<+>) = Either zero = Void There’s a subset of semirings which are star semirings. They have an operation $*$ such that: $a* = 1 + aa* = 1 + a*a$ Or, as a class: class Semiring a => StarSemiring a where star :: a -> a star x = one <+> plus x plus :: a -> a plus x = x <.> star x Using this on types, you get: star a = Either () (a, star a) Which is just a standard list! Some pseudo-haskell on alternatives will give you: star :: (Alternative f, Monoid a) => f a -> f a star x = (x <.> star x) <+> pure mempty where (<.>) = liftA2 mappend (<+>) = <|> Also known as many. (although note that this breaks all the laws) The $*$ for rationals is defined as (Droste and Kuich 2009, p8): $a* = \begin{cases} \frac{1}{1 - a} & \quad \text{if } & 0 \leq a \lt 1, \\ \infty & \quad \text{if } & a \geq 1. \end{cases}$ So, combining the probability with the type-level business, the star of Writer s a is: Either (1, a) (a, s / (1 - s), star (Writer s a)) Or, to put it another way: the odds monad! ## Endo An endomorphism is a morphism from an object to itself. A less general definition (and the one most often used in Haskell) is a function of the type a -> a: newtype Endo a = Endo { appEndo :: a -> a } It forms a monoid under composition: instance Monoid (Endo a) where mempty = Endo id mappend (Endo f) (Endo g) = Endo (f . g) If the underlying type is itself a commutative monoid, it also forms near-semiring: instance Monoid a => Semiring (Endo a) where Endo f <+> Endo g = Endo (\x -> f x <> g x) zero = Endo (const mempty) one = Endo id Endo f <.> Endo g = Endo (f . g) instance (Monoid a, Eq a) => StarSemiring (Endo a) where star (Endo f) = Endo converge where converge x = x <> (if y == mempty then y else converge y) where y = f x Here’s something interesting: there’s a similarity here to the semiring for church numerals. In fact, as far as I can tell, the functions are exactly the same when applied to endomorphisms of endomorphisms. To the extent that you could define church numerals with something as simple as this: type ChurchEndoNat = forall a. Endo (Endo a) And it works! two, three :: ChurchEndoNat two = one <+> one three = one <+> two unChurch :: Num a => ChurchEndoNat -> a unChurch f = appEndo (appEndo f (Endo (1+))) 0 unChurch (two <.> three) 6 ## Regex One of the most important applications (and a source of much of the notation) is regular expressions. In fact, the free semiring looks like a haskell datatype for regular expressions: data FreeStar a = Gen a | Zer | One | FreeStar a :<+> FreeStar a | FreeStar a :<.> FreeStar a | Star (FreeStar a) instance Semiring (FreeStar a) where (<+>) = (:<+>) (<.>) = (:<.>) zero = Zer one = One instance StarSemiring (FreeStar a) where star = Star interpret :: StarSemiring s => (a -> s) -> FreeStar a -> s interpret f = \case Gen x -> f x Zer -> zero One -> one l :<+> r -> interpret f l <+> interpret f r l :<.> r -> interpret f l <.> interpret f r Star x -> star (interpret f x) Then, interpreting the regex is as simple as writing an interpreter (with some help from Endo): asRegex :: Eq a => FreeStar (a -> Bool) -> [a] -> Bool asRegex fs = any null . appEndo (interpret f fs) . pure where f p = Endo . mapMaybe$ \case
(x:xs) | p x -> Just xs
_ -> Nothing
char' :: Eq a => a -> FreeStar (a -> Bool)
char' c = Gen (c==)
Actually, you don’t need the free version at all!
runRegex :: Eq a => Endo [[a]] -> [a] -> Bool
runRegex fs = any null . appEndo fs . pure
char :: Eq a => a -> Endo [[a]]
char c = Endo . mapMaybe $\case (x:xs) | c == x -> Just xs _ -> Nothing With some -XOverloadedStrings magic, you get a pretty nice interface: instance IsString (Endo [String]) where fromString = mul . map char . reverse (<^>) :: Semiring s => s -> s -> s (<^>) = flip (<.>) greet :: Endo [String] greet = "H" <^> ("a" <+> "e") <^> "llo" :set -XOverloadedStrings runRegex greet "Hello" True runRegex greet "Hallo" True runRegex greet "Halo" False ## Efficiency Of course, that’s about as slow as it gets when it comes to regexes. A faster representation is a nondeterministic finite automaton. One such implementation in haskell is Gabriel Gonzalez’s. The regex type in that example can be immediately made to conform to Semiring and StarSemiring. However, it might be more interesting to translate the implementation into using semirings. The type of a regex looks like this: type State = Int { _startingStates :: Set State , _transitionFunction :: Char -> State -> Set State , _acceptingStates :: Set State } The set data structure jumps out as an opportunity to sub in arbitrary semirings.Swapping in the GeneralMap is reasonably easy: type State = Int data Regex i s = Regex { _numberOfStates :: Int , _startingStates :: GeneralMap State s , _transitionFunction :: i -> State -> GeneralMap State s , _acceptingStates :: GeneralMap State s } isEnd :: Semiring s => Regex i s -> s isEnd (Regex _ as _ bs) = add (intersection as bs) match :: Regex Char (Add Bool) -> String -> Bool match r = getAdd . isEnd . foldl' run r where run (Regex n (GeneralMap as) f bs) i = Regex n as' f bs where as' = mconcat [ fmap (v<.>) (f i k) | (k,v) <- Map.assocs as ] satisfy :: Semiring s => (i -> s) -> Regex i (Add s) satisfy predicate = Regex 2 as f bs where as = singleton 0 bs = singleton 1 f i 0 = assoc 1 (predicate i) mempty f _ _ = mempty once :: Eq i => i -> Regex i (Add Bool) once x = satisfy (== x) shift :: Int -> GeneralMap State s -> GeneralMap State s shift n = GeneralMap . Map.fromAscList . (map.first) (+ n) . Map.toAscList . getMap instance (Semiring s, Monoid s) => Semiring (Regex i s) where one = Regex 1 (singleton 0) (\_ _ -> mempty) (singleton 0) zero = Regex 0 mempty (\_ _ -> mempty) mempty Regex nL asL fL bsL <+> Regex nR asR fR bsR = Regex n as f bs where n = nL + nR as = mappend asL (shift nL asR) bs = mappend bsL (shift nL bsR) f i s | s < nL = fL i s | otherwise = shift nL (fR i (s - nL)) Regex nL asL fL bsL <.> Regex nR asR fR bsR = Regex n as f bs where n = nL + nR as = let ss = add (intersection asL bsL) in mappend asL (fmap (ss<.>) (shift nL asR)) f i s = if s < nL then let ss = add (intersection r bsL) in mappend r (fmap (ss<.>) (shift nL asR)) else shift nL (fR i (s - nL)) where r = fL i s bs = shift nL bsR instance (StarSemiring s, Monoid s) => StarSemiring (Regex i s) where star (Regex n as f bs) = Regex n as f' as where f' i s = let r = f i s ss = add (intersection r bs) in mappend r (fmap (ss<.>) as) plus (Regex n as f bs) = Regex n as f' bs where f' i s = let r = f i s ss = add (intersection r bs) in mappend r (fmap (ss<.>) as) instance IsString (Regex Char (Add Bool)) where fromString = mul . map once This begins to show some of the real power of using semirings and covectors. We have a normal regular expression implementation when we use the covector over bools. Use the probability semiring, and you’ve got probabilistic parsing. Swap in the tropical semiring: a semiring over the reals where addition is the max function, and multiplication is addition of reals. Now you’ve got a depth-first parser. That’s how you might swap in different interpretations. How about swapping in different implementations? Well, there might be some use to swapping in the CYK algorithm, or the Gauss-Jordan-Floyd-Warshall-McNaughton-Yamada algorithm (O’Connor 2011). Alternatively, you can swap in the underlying data structure. Instead of a map, if you use an integer (each bit being a value, the keys being the bit position), you have a super-fast implementation (and the final implementation used in the original example). Finally, you could use a different representation of the state transfer function: a matrix. ## Square Matrices A square matrix can be understood as a map from pairs of indices to values. This lets us use it to represent the state transfer function. Take, for instance, a regular expression with three possible states. Its state transfer function might look like this: $\text{transfer} = \begin{cases} 1 \quad & \{ 2, 3 \} \\ 2 \quad & \{ 1 \} \\ 3 \quad & \emptyset \end{cases}$ It has the type of: State -> Set State Where State is an integer. You can represent the set as a vector, where each position is a key, and each value is whether or not that key is present: $\text{transfer} = \begin{cases} 1 \quad & 0 & 1 & 1 \\ 2 \quad & 1 & 0 & 0 \\ 3 \quad & 0 & 0 & 0 \end{cases}$ Then, the matrix representation is obvious: $\text{transfer} = \left( \begin{array}{ccc} 0 & 1 & 1 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right)$ This is the semiring of square matrices. It is, of course, yet another covector. The “keys” are the transfers: 1 -> 2 or 2 -> 3, represented by the indices of the matrix. The “values” are whether or not that transfer is permitted. The algorithms for the usual semiring operations on matrices like this are well-known and well-optimized. I haven’t yet benchmarked them in Haskell using the matrix libraries, so I don’t know how they compare to the other approaches. In the meantime, there’s an elegant list-based implementation in Dolan (2013): data Matrix a = Scalar a | Matrix [[a]] mjoin :: (Matrix a, Matrix a, Matrix a, Matrix a) -> Matrix a mjoin (Matrix ws, Matrix xs, Matrix ys, Matrix zs) = Matrix ((zipWith (++) ws xs) ++ (zipWith (++) ys zs)) msplit :: Matrix a -> (Matrix a, Matrix a, Matrix a, Matrix a) msplit (Matrix (row:rows)) = (Matrix [[first]], Matrix [top] ,Matrix left, Matrix rest ) where (first:top) = row (left,rest) = unzip (map (\(x:xs) -> ([x],xs)) rows) instance Semiring a => Semiring (Matrix a) where zero = Scalar zero one = Scalar one Scalar x <+> Scalar y = Scalar (x <+> y) Matrix x <+> Matrix y = Matrix (zipWith (zipWith (<+>)) x y) Scalar x <+> m = m <+> Scalar x Matrix [[x]] <+> Scalar y = Matrix [[x <+> y]] x <+> y = mjoin (first <+> y, top, left, rest <+> y) where (first, top, left, rest) = msplit x Scalar x <.> Scalar y = Scalar (x <.> y) Scalar x <.> Matrix y = Matrix ((map.map) (x<.>) y) Matrix x <.> Scalar y = Matrix ((map.map) (<.>y) x) Matrix x <.> Matrix y = Matrix [ [ foldl1 (<+>) (zipWith (<.>) row col) | col <- cols ] | row <- x ] where cols = transpose y instance StarSemiring a => StarSemiring (Matrix a) where star (Matrix [[x]]) = Matrix [[star x]] star m = mjoin (first' <+> top' <.> rest' <.> left' ,top' <.> rest', rest' <.> left', rest') where (first, top, left, rest) = msplit m first' = star first top' = first' <.> top left' = left <.> first' rest' = star (rest <+> left' <.> top) ## Permutation parsing A lot of the use from semirings comes from “attaching” them to other values. Attaching a semiring to effects (in the form of an applicative) can give you repetition of those effects. The excellent ReplicateEffects library explores this concept in depth. It’s based on this type: data Replicate a b = Nil | Cons (Maybe b) (Replicate a (a -> b)) This type can be made to conform to Semiring (and Starsemiring, etc) trivially. In the simplest case, it has the same behaviour as replicateM. Even the more complex combinators, like atLeast, can be built on Alternative: atLeast :: Alternative f => Int -> f a -> f [a] atLeast m f = go (max 0 m) where go 0 = many f go n = liftA2 (:) f (go (n-1)) atMost :: Alternative f => Int -> f a -> f [a] atMost m f = go (max 0 m) where go 0 = pure [] go n = liftA2 (:) f (go (n-1)) <|> pure [] There are two main benefits over using the standard alternative implementation. First, you can choose greedy or lazy evaluation of the effects after the replication is built. Secondly, the order of the effects doesn’t have to be specified. This allows you to execute permutations of the effects, in a permutation parser, for instance. The permutation is totally decoupled from the declaration of the repetition (it’s in a totally separate library, in fact: PermuteEffects). Its construction is reminiscent of the free alternative. Having the replicate type conform to Semiring is all well and good: what I’m interested in is seeing if its implementation is another semiring-based object in disguise. I’ll revisit this in a later post. List comprehension notation is one of my all-time favourite bits of syntactic sugar. It seems almost too declarative to have a reasonable implementation strategy. The vast majority of the time, it actually works in a sensible way. There are exceptions, though. Take a reasonable definition of a list of Pythagorean triples: [ (x,y,z) | x <- [1..], y <- [1..], z <- [1..], x*x + y*y == z*z ] This expression will diverge without yielding a single triple. It will search through every possible value for z before incrementing either x or y. Since there are infinite values for z, it will never find a triple. In other words, vanilla list comprehensions in Haskell perform depth-first search. In order to express other kinds of search (either breadth-first or depth-bounded), different monads are needed. These monads are explored in Fischer (2009) and Spivey (2009). You can actually use the exact same notation as above with arbitrary alternative monads using -XMonadComprehensions and -XOverloadedLists. trips :: ( Alternative m , Monad m , IsList (m Integer) , Enum (Item (m Integer)) , Num (Item (m Integer))) => m (Integer,Integer,Integer) trips = [ (x,y,z) | x <- [1..], y <- [1..], z <- [1..], x*x + y*y == z*z ] So then, here’s the challenge: swap in different ms via a type annotation, and prevent trips from diverging before getting any triples. As one example, here’s some code adapted from Fischer (2009): instance (Monoid r, Applicative m) => Monoid (ContT r m a) where mempty = ContT (const (pure mempty)) mappend (ContT f) (ContT g) = ContT (\x -> liftA2 mappend (f x) (g x)) newtype List a = List { runList :: forall m. Monoid m => Cont m a } deriving Functor instance Foldable List where foldMap = flip (runCont.runList) instance Show a => Show (List a) where show = show . foldr (:) [] instance Monoid (List a) where mappend (List x) (List y) = List (mappend x y) mempty = List mempty instance Monoid a => Semiring (List a) where zero = mempty (<+>) = mappend (<.>) = liftA2 mappend one = pure mempty bfs :: List a -> [a] bfs = toList . fold . levels . anyOf newtype Levels a = Levels { levels :: [List a] } deriving Functor instance Applicative Levels where pure x = Levels [pure x] Levels fs <*> Levels xs = Levels [ f <*> x | f <- fs, x <- xs ] instance Alternative Levels where empty = Levels [] Levels x <|> Levels y = Levels (mempty : merge x y) instance IsList (List a) where type Item (List a) = a fromList = anyOf toList = foldr (:) [] instance Applicative List where pure x = List (pure x) (<*>) = ap instance Alternative List where empty = mempty (<|>) = mappend instance Monad List where x >>= f = foldMap f x anyOf :: (Alternative m, Foldable f) => f a -> m a anyOf = getAlt . foldMap (Alt . pure) merge :: [List a] -> [List a] -> [List a] merge [] ys = ys merge xs [] = xs merge (x:xs) (y:ys) = mappend x y : merge xs ys take 3 (bfs trips) [(3,4,5),(4,3,5),(6,8,10)] The only relevance to semirings is the merge function. The semiring over lists is the semiring over polynomials: instance Semiring a => Semiring [a] where one = [one] zero = [] [] <+> ys = ys xs <+> [] = xs (x:xs) <+> (y:ys) = (x <+> y) : (xs <+> ys) [] <.> _ = [] _ <.> [] = [] (x:xs) <.> (y:ys) = (x <.> y) : (map (x <.>) ys <+> map (<.> y) xs <+> (xs <.> ys)) The <+> is the same as the merge function. I think the <.> might be a more valid definition of the <*> function, also. instance Applicative Levels where pure x = Levels [pure x] Levels [] <*> _ = Levels [] _ <*> Levels [] = Levels [] Levels (f:fs) <*> Levels (x:xs) = Levels$
(f <*> x) : levels (Levels (fmap (f <*>) xs)
<|> Levels (fmap (<*> x) fs)
<|> (Levels fs <*> Levels xs))
## Conclusion
I’ve only scratched the surface of this abstraction. There are several other interesting semirings: polynomials, logs, Viterbi, Łukasiewicz, languages, multisets, bidirectional parsers, etc. Hopefully I’ll eventually be able to put this stuff into a library or something. In the meantime, I definitely will write some posts on the application to context-free parsing, bidirectional parsing (I just read Breitner (2016)) and search.
## References
Boom, H. J. 1981. “Further thoughts on Abstracto.” Working Paper ELC-9, IFIP WG 2.1. http://www.kestrel.edu/home/people/meertens/publications/papers/Abstracto_reader.pdf.
Breitner, Joachim. 2016. “Showcasing Applicative.” Joachim Breitner’s Blog. http://www.joachim-breitner.de/blog/710-Showcasing_Applicative.
Dolan, Stephen. 2013. “Fun with semirings: A functional pearl on the abuse of linear algebra.” In, 48:101. ACM Press. doi:10.1145/2500365.2500613. https://www.cl.cam.ac.uk/~sd601/papers/semirings.pdf.
Droste, Manfred, and Werner Kuich. 2009. “Semirings and Formal Power Series.” In Handbook of Weighted Automata, ed by. Manfred Droste, Werner Kuich, and Heiko Vogler, 1:3–28. Monographs in Theoretical Computer Science. An EATCS Series. Berlin, Heidelberg: Springer Berlin Heidelberg. http://staff.mmcs.sfedu.ru/~ulysses/Edu/Marktoberdorf_2009/working_material/Esparsa/Kuich.%20Semirings%20and%20FPS.pdf.
Erwig, Martin, and Steve Kollmansberger. 2006. “Functional pearls: Probabilistic functional programming in Haskell.” Journal of Functional Programming 16 (1): 21–34. doi:10.1017/S0956796805005721. http://web.engr.oregonstate.edu/~erwig/papers/abstracts.html#JFP06a.
Fischer, Sebastian. 2009. “Reinventing Haskell Backtracking.” In Informatik 2009, Im Fokus das Leben (ATPS’09). GI Edition. http://www-ps.informatik.uni-kiel.de/~sebf/data/pub/atps09.pdf.
Hirschowitz, André, and Marco Maggesi. 2010. “Modules over monads and initial semantics.” Information and Computation 208 (5). Special Issue: 14th Workshop on Logic, Language, Information and Computation (WoLLIC 2007) (May): 545–564. doi:10.1016/j.ic.2009.07.003. https://pdfs.semanticscholar.org/3e0c/c79e8cda9246cb954da6fd8aaaa394fecdc3.pdf. | 2019-07-17 05:33:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 7, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9515402913093567, "perplexity": 10812.551880541427}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525046.5/warc/CC-MAIN-20190717041500-20190717063500-00316.warc.gz"} |
https://www.cuemath.com/trigonometry/ | # Trigonometry
Go back to 'Maths'
## Introduction
Trigonometry deals with the measurement of angles and problems related to angles.
The word trigonometry is a 16th century Latin derivative.
It is a branch of mathematics which deals with the relation between the angles and sides of a triangle.
You can know about triangles and their different types here.
Let’s consider an example.
Rohit is standing near a tree.
He looks up at the tree and wonders “How tall is the tree?”
You would be amazed to know that he can find the height of the tree without actually measuring it.
Wonder how?
That’s where trigonometry would help us. The above image can be simplified as below.
What we have here is a right-angled triangle, i.e.: a triangle with one of the angles equal to $$90$$ degrees.
The height of the tree can be found out by using basic trigonometric formulae.
Before we begin to do that, let’s familiarize ourselves with the basic trigonometric terminology.
## Trigonometric Basics
Trigonometry basics deal with the measurement of angles and problems related to angles. Let’s look at the diagram below.
Let’s define a few terms that will be used extensively in trigonometry
Adjacent It is the side of the triangle which is adjacent to (or below) angle \theta. $$\text{BC}$$ is the adjacent side. Opposite It is the side of the triangle which is opposite to angle \theta. $$\text{AB}$$ is the opposite side. Hypotenuse It is the largest side of the triangle. $$\text{AC}$$ is the hypotenuse. Angle of Elevation It is the angle between the horizontal plane and the line of sight from an observer's eye to an object above. \theta is the angle of elevation.
Considering trigonometry has a lot of real-life applications such as measuring precise distances, developing music and more, it becomes necessary to learn about the trigonometric basics.
We will cover a lot of topics in this section like Trigonometric Ratios, Basic Properties of Trigonometric Ratios, Trigonometric ratios of Specific Angles, Trigonometric Elimination and Trigonometric Ratios of Complementary Angles.
We will also explore some of the topics like Heights and Distances, Sine Law, Cosine Law, What is a Radian, Trigonometric Ratios in Radians, Trigonometric Ratios of Arbitrary Angles, Conversion Relations of Trigonometric Ratios, Sine Function, Cosine Function, Tangent Function, Cosecant, Secant & Cotangent Functions, Inverse Trigonometric Ratios. Further on, you will get to learn more about Inverse Trigonometric Ratios and Inverse Trigonometric Ratios for Arbitrary Values.
## Trigonometric Identities
In Trigonometric Identities, an equation is called an identity when it is true for all values of the variables involved. Similarly, an equation involving trigonometric ratios of an angle is called a trigonometric identity, if it is true for all values of the angles involved.
### Example:
\begin{align} \text{sin } \theta ÷ \text{cos } \theta &= \left[\frac{\text{Opposite}}{\text{Hypotenuse}}\right] ÷\left[\frac{\text{Adjacent}}{\text{Hypotenuse}}\right]\\&= \frac{\text{Opposite}}{\text{Adjacent}}\\&= \text{tan } \theta \end{align}
Therefore, $$\text{tan }\theta = \text{sin }\theta ÷ \text{cos }\theta$$ is a trigonometric identity.
## Trigonometric Table
The trigonometric table is made up of trigonometric ratios that are interrelated to each other – sine, cosine, tangent, cosecant, secant, cotangent. These ratios, in short, are written as sin, cos, tan, cosec, sec and cot.
You can refer to the trigonometric table chart to know more about these ratios.
## Trigonometric Formulae
The complete list of trigonometric formulae involving trigonometric ratios and trigonometric identities is listed for easy access.
Here's a list of all the trigonometric formulae for you to learn and revise. | 2020-07-14 15:33:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9218665957450867, "perplexity": 631.669532910399}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655897168.4/warc/CC-MAIN-20200714145953-20200714175953-00128.warc.gz"} |
https://mathstatnotes.wordpress.com/tag/bayesian-statistics/ | # Combinatorics of Phylogenetic Trees
The following is based on a weekend project that I also presented as a short talk in an undergraduate combinatorics seminar. The project is self-contained and mostly based on independent work. Ideas and inspiration came from discussions with my teacher and from the introduction of Diaconis and Holmes (1998). Theorem 2 is from Semple and Steel (2003). Tree pictures were produced with Sagemath and Latex.
French pdf.
# 1. Introduction
A phylogenetic tree is a rooted binary tree with labeled leaves.
These trees are used in biology to represent the evolutive history of species. The leaves are the identified species, the root is a common anscestor, and branching represents speciation.
An interesting problem is that of reconstructing the phylogenetic tree that best explains the observed biological characterics of a set of species. A naive mathematical formulation of this problem is proposed in section 4, and used to implement a tree reconstruction algorithm.
# Linear approximation operators and statistical models
We discuss the approximation properties of sequences of linear operators $T_n$ mapping densities to densities. We give conditions for their convergence, explicit their general form, obtain rates of convergences and generalise the index parameter to obtain nets $\{T_n\}_{n \in N}$.
Notations. Let $(\mathbb{M}, d)$ be a compact metric space, equipped with a finite measure $\mu$ defined on its Borel $\sigma$-algebra, and denote by $\mathcal{F} \subset L^1$ the set of all essentially bounded probability densities on $\mathbb{M}$. The set $\mathcal{F}$ is then a complete separable metric space under the total variation distance proportional to $|| f-g ||_1 = \int |f-g| d\mu$.
In bayesian statistics, it is of interest to specify a probability measure $P$ on $\mathcal{F}$, representing uncertainty about which distribution of $\mathcal{F}$ is generating independent observations $x_i \in \mathbb{M}$. The problem is that $\mathcal{F}$ is usually rather big: by Baire’s category theorem, if $\mathbb{M}$ is not a finite set of points, then $\mathcal{F}$ cannot be written as a countable union of finite dimensional subspaces. To help in prior elicitation, that is to help a statistician specify $P$, we may decompose $\mathcal{F}$ in simpler parts.
Here, I discuss how to obtain a sequence of approximating finite dimensional sieves $\mathcal{S}_n \subset \mathcal{F}$, such that $\cup_n \mathcal{S}_n$ is dense in $\mathcal{F}$. A prior $P$ on $\mathcal{F}$ may then be specified as the countable mixture
$P = \sum _{n \geq 1} \alpha_n P_{\mathcal{S}_n}, \quad \alpha_n \geq 0,\, \sum_n \alpha_n = 1,$
where $P_{\mathcal{S}_n}$ is a prior on $\mathcal{S}_n$ for all $n$.
Let me emphasize that the following ideas are elementary. Some may be found, with more or less generality, in analysis and approximation theory textbooks. It is, however, interesting to recollect the facts relevant in statistical applications.
# 1. The basics
The finite dimensional sieves $\mathcal{S}_n$ take the form
$\mathcal{S}_n = \left\{ \sum_{i=0}^{m_n} c_i \phi_{i,n} \right\}, \quad m_n \in \mathbb{N}$
where the $\phi_{i,n}$ are densities and the coefficients $c_i$ range through some set which we assume contains the simplex $\Delta_n = \left\{ (c_i) : \sum c_i = 1,\, c_i \geq 0 \right\}$.
The following lemma gives sufficient conditions for $\cup_n \mathcal{S}_n$ to be dense in $\mathcal{F}$, with the total variation distance.
Lemma 1. Suppose that there exists a measurable partition $\{R_{i,n}\} _{i=0}^{m_n}$ of $\mathbb{M}$, with $\max_i \text{diam}(R_{i,n}) \rightarrow 0$, such that:
1. for all $\delta > 0$, $\sum_{i: d(x, R_{i,n}) > \delta} \mu(R_{i,n}) \phi_{i,n}(x) \rightarrow 0$, uniformly in $x$; and
2. $\sum_i \mu(R_{i,n}) \phi_{i,n} (x) \rightarrow 1$, uniformly in $x$.
Then, $\cup_n \mathcal{S}_n$ is dense in $(\mathbb{F},||\cdot||_1)$. More precisely, the linear operator $T_n : f \mapsto \sum_i \int_{R_{i,n}}f d\mu \, \phi_{i,n}$ maps densities to densities and is such that for all integrable $h$, $||T_n h - h||_1 \rightarrow 0$ and for all continuous $g$, $||T_n g - g||_\infty \rightarrow 0$.
Proof: We first show that $||T_n g - g||_\infty \rightarrow 0$, for all continuous $g$. The method of proof is well-known.
The fact that $T_n$ is linear and maps densities to densities is easily verified. It follows that $T_n$ monotonous ($f < h \Rightarrow T_n f < T_n h$). Now, let $\varepsilon > 0$. By hypothesis (2), we can suppose that $T_n 1 = 1$. Thus for all $x$ and by the monotonicity of $T_n$, $|T_n g\, (x) -g(x)| = |T_n(g-g(x))\, (x)| \le T_n|g-g(x)|\,(x)$. Since $\mathbb{M}$ is compact, $g$ is absolutely continuous and there exists $\delta > 0$ such that $d(x, t) < \delta \Rightarrow |g(x)-g(t)| < \varepsilon$. Take $n$ sufficienly large so that $\max_i \text{diam}(R_{i,n}) < \delta / 2$. We have
$T_n|g-g(x)|\,(x) = \sum_{i : d(x, R_{i,n}) < \delta/2} \int_{R_{i,n}} |g(t) - g(x)|\mu(dt) \phi_{i,n}(x) + \sum_{i : d(x, R_{i,n} \geq \delta/2} \int_{R_{i,n}} |g(t) - g(x)|\mu(dt) \phi_{i,n}(x).$
The first sum is bounded above by $\varepsilon$, independently of $x$, and the second sum goes uniformly to $0$. Therefore $||T_n g - g||_\infty \rightarrow 0$.
We now show that $||T_n h - h||_1 \rightarrow 0$ for all integrable $h$. Let $\varepsilon > 0$. The space of continuous functions is dense in $L^1$; there exists a continuous $g$ with $||g-h||_1 < \varepsilon$. Therefore, $||T_n h - h||_1 \le ||T_n h -T_n g||_1 + ||T_ng - g||_1 + ||g-h||_1$. Because $T_n$ maps densities to densities it is of norm $1$ and $||T_n h - T_ng||_1 \le ||h-g||_1$. Thus for $n$ sufficiently large so that $||T_n g - g||_\infty < \varepsilon / \mu(\mathbb{M})$, we obtain ${}||T_n h - h||_1 \le 3\varepsilon$.
Finally, $T_n(\mathcal{F}) \subset \mathcal{S}_n$ and the preceding implies $\cup_n T_n(\mathcal{F})$ is dense in $\mathcal{F}$. QED.
## 1.1 Examples.
1. On the unit interval $[0,1]$ with the Lebesgue measure, the densities ${}\phi_{i,n} = (n+1)\mathbb{I}_{\left[\frac{i}{n+1}, \frac{i+1}{n+1}\right)}$, with ${}\phi_{n,n} = (n+1)\mathbb{I}_{\left[\frac{n}{n+1}, 1 \right]}$, obviously satisfy the hypotheses of the preceding lemma. Here, $\mathbb{I}_A$ is the indicator function of the set $A$.
2. The indicator functions above may be replaced by the Bernstein polynomial densities $\phi_{i,n}(x) = (n+1){n \choose i}x^i(1-x)^{n-i}$. The conditions of the lemma are also satisfied and the proof is relatively straightforward.
Note that any operator of the form $T_n : f \mapsto \sum_i \int_{R_{i,n}} f\, d\mu \phi_{i,n}$ may be decomposed as $T_n = S_n \circ H_n$, where $H_n$ is the histogram operator $H_n f = \sum_i \int_{R_{i,n}} f d\mu\, \mu(R_{i,n})^{-1}\mathbb{I}_{R_{i,n}}$ and $S_n\left( \sum_i c_i \, \mu(R_{i,n})^{-1}\mathbb{I}_{R_{i,n}} \right) = \sum_i c_i \phi_{i,n}$. In other words, calculating $T_n f$ is the process of reducing $f$ to an associated histogram and then smoothing it.
### A dual approximation process.
Consider the histogram operator $H_n : f \mapsto \sum_i \int \mathbb{I}_{R_{i,n}} f d\mu \,\mu(R_{i,n})^{-1}\mathbb{I}_{R_{i,n}}$ and suppose that $\sum_i \mu(R_{i,n})\phi_{i,n} = 1$. Instead of replacing the densities $\mu(R_{i,n})^{-1}\mathbb{I}_{R_{i,n}}$ on the outside of the integral by $\phi_{i,n}$, as we did before, we may replace the partition of unity ${}\mathbb{I}_{R_{i,n}}$ inside the integral by the partition of unity ${}\mu(R_{i,n})\phi_{i,n}$. This yields the following histogram operator $\tilde{H}_n$:
$\tilde{H}_n f = \sum_i \int f \phi_{i,n} d\mu \, \mathbb{I}_{R_{i,n}}.$
It can also be extended to act on measures, by letting ${}\tilde{H}_n \lambda = \sum_i \int \phi_{i,n} d\lambda \, \mathbb{I}_{R_{i,n}}.$
The preceding is of interest in kernel density estimation: given the empirical distribution $\lambda_n = \frac{1}{n} \sum_{i=1}^{n} \delta_{x_i}$ of observed data $(x_i)$, a possible density estimate is $\tilde{H}_n \lambda_n$. Binning the data through the integral $\int \phi_{i,n} d\lambda_n$ rather than $\int_{R_{i,n}} d\lambda_n = \#\{x_i | x_i \in R_{i,n}\}/n$ can reduce the sensitivity of the density estimate to the choice of bins $R_{i,n}$.
## 1.2 The general form of linear operators.
Let $T_n: L^1 \rightarrow L^1$ be a sequence of positive linear operators mapping densities to densities and such that $T_n 1 = 1$. Then for each $x$ and $n$, there exists a random variable $Y_n(x)$ such that $T_n f (x) = \text{E}[f(Y_n(x))]$. This is a direct consequence of the Riesz representation theorem for positive functionals on $\mathcal{C}(\mathbb{M})$. In particular, if $Y_n(x)$ admits a density $K_n(x, \cdot)$, then
$T_n f (x) = \int f(t) K_n(x,t) \mu(dt).$
Note that for general random variables $Y_n(x)$, the function $x \mapsto \text{E}[f(Y_n(x))]$ may not be a density.
A sufficient condition so that $\text{E}[f(Y_n(x))] \rightarrow f(x)$, uniformly in $x$ and for all continuous $g$, is the following:
• for all $\delta > 0$, $P(|Y_n(x) - x| > \delta) \rightarrow 0$, uniformly in $x$, meaning that the random variables $Y_n(x)$ satisfy the weak law of large numbers uniformly in $x$.
Indeed, for all $\delta > 0$ and continuous $f$,
$\text{E}[|f(Y_n(x)) - f(x)|] = \text{E}[|f(Y_n(x)) - f(x)| \mathbb{I}(|Y_n(x) - x| < \delta)] + \text{E}[|f(Y_n(x)) - f(x)| \mathbb{I}(|Y_n(x) - x| \geq \delta)].$
The first mean on the RHS is bounded by any $\varepsilon > 0$ when $\delta$ is sufficiently small. The second mean is bounded by a constant multiple of $P(|Y_n(x) - x| > \delta)$, which goes to zero as $n\rightarrow \infty$.
### Rates of convergence
Let $f$ be a continuous density and $w_f: \mathbb{R}_{>0} \rightarrow \mathbb{R}_{\geq 0}$ be a modulus of continuity, for instance
$w_f(\delta) = \sup_{d(x,y) < \delta} |f(x) - f(y)|.$
For all $\delta > 0$, we have
$w_f(d(x, t)) \le w_f(\delta)(1+\delta^{-1}d(x,t)).$
Therefore, for any sequence $\delta_n > 0$, a calculation yields
${}|\text{E}[f(Y_n(x))] - f(x)| \le w_f(\delta_n) \left( 2 + \text{E}\left[ \frac{d(Y_n(x), x)}{\delta_n} \mathbb{I}\left(d(Y_n(x), x) \geq \delta_n\right) \right] \right).$
In euclidean space, for example, when $\text{E}[Y_n(x)] = x$ and $\text{Var}(Y_n(x))$ exists, $\sigma_n^2 \geq\text{Var}(Y_n(x))$ for all $x$, we find
$||\text{E}[f(Y_n(\cdot))] - f||_\infty = \mathcal{O}(w_f(\sigma_n)).$
## 1.3 Introducing other parameters
We may index our operators by general parameters $\theta \in \Theta$, whenever $\Theta$ is a directed set (i.e. for all $\theta_1, \theta_2 \in \Theta$, there exists $\theta_3 \in \Theta$ such that $\theta_1 \le \theta_3$ and $\theta_2 \le \theta_3$). The sequence $\{T_n\}_{n \in \mathbb{N}}$ then becomes the net $\{T_\theta\}_{\theta \in \Theta}$. We say that $\lim ||T_\theta f - f||_\infty = 0$ if for all $\varepsilon > 0$ there exists $\theta_\varepsilon$ such that $\theta \geq \theta_\varepsilon$ implies $||T_\theta f - f||_\infty < \varepsilon$.
For example, we can consider the tensor product operator $T_{n,m} = T_n \otimes T_m$ acting on the space of product densities, where $\mathbb{N} \times \mathbb{N}$ is ordered as $(n,m) \le (n', m')$ iff $n\le n'$ and $m \le m'$.
These extensions are straightforward; the point is that many cases can be treated under the same formalism. | 2019-11-17 22:33:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 171, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9723987579345703, "perplexity": 141.3768643723265}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669352.5/warc/CC-MAIN-20191117215823-20191118003823-00094.warc.gz"} |
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=sigma&paperid=1525&option_lang=eng | RUS ENG JOURNALS PEOPLE ORGANISATIONS CONFERENCES SEMINARS VIDEO LIBRARY PACKAGE AMSBIB
General information Latest issue Archive Impact factor Search papers Search references RSS Latest issue Current issues Archive issues What is RSS
SIGMA: Year: Volume: Issue: Page: Find
SIGMA, 2019, Volume 15, 089, 36 pages (Mi sigma1525)
Symplectic Frieze Patterns
Sophie Morier-Genoud
Sorbonne Université, Université Paris Diderot, CNRS, Institut de Mathé-matiquesde Jussieu-Paris Rive Gauche, IMJ-PRG, F-75005, Paris, France
Abstract: We introduce a new class of friezes which is related to symplectic geometry. On the algebraic and combinatrics sides, this variant of friezes is related to the cluster algebras involving the Dynkin diagrams of type $\mathrm{C}_{2}$ and $\mathrm{A}_{m}$. On the geometric side, they are related to the moduli space of Lagrangian configurations of points in the 4-dimensional symplectic space introduced in [Conley C.H., Ovsienko V., Math. Ann. 375 (2019), 1105–1145]. Symplectic friezes share similar combinatorial properties to those of Coxeter friezes and $\mathrm{SL}$-friezes.
Keywords: frieze, cluster algebra, moduli space, difference equation, Lagrangian configuration.
Funding Agency Grant Number Agence Nationale de la Recherche ANR-15-CE40-0004-01 I also want to thank Luc Pirio for stimulating discussions on the subject. This work is supported by the ANR project $SC^3A$, ANR-15-CE40-0004-01.
DOI: https://doi.org/10.3842/SIGMA.2019.089
Full text: PDF file (625 kB)
Full text: https://www.imath.kiev.ua/~sigma/2019/089/
References: PDF file HTML file
Bibliographic databases:
ArXiv: 1803.06001
MSC: 13F60; 05E10; 14N20; 53D30
Received: June 18, 2019; in final form November 7, 2019; Published online November 14, 2019
Language:
Citation: Sophie Morier-Genoud, “Symplectic Frieze Patterns”, SIGMA, 15 (2019), 089, 36 pp.
Citation in format AMSBIB
\Bibitem{Mor19}
\by Sophie~Morier-Genoud
\paper Symplectic Frieze Patterns
\jour SIGMA
\yr 2019
\vol 15
\papernumber 089
\totalpages 36
\mathnet{http://mi.mathnet.ru/sigma1525}
\crossref{https://doi.org/10.3842/SIGMA.2019.089}
\scopus{http://www.scopus.com/record/display.url?origin=inward&eid=2-s2.0-85075121976} | 2020-04-08 10:10:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2216079980134964, "perplexity": 8730.248111353283}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371810807.81/warc/CC-MAIN-20200408072713-20200408103213-00290.warc.gz"} |
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-10th-edition/chapter-2-section-2-1-the-distance-and-midpoint-formulas-2-1-assess-your-understanding-page-155/23 | ## College Algebra (10th Edition)
$2\sqrt{17}$ units.
RECALL: The distance $d$ between the points $(x_1, y_1)$ and $(x_2, y_2)$ can be found using the distance formula: $d = \sqrt{(x_1-x_2)^2+(y_1-y_2)^2}$ Use the formula above to obtain: $d=\sqrt{(3-5)^2+(-4-4)^2} \\d=\sqrt{(-2)^2+(-8)^2} \\d=\sqrt{4+64} \\d=\sqrt{68} \\d=\sqrt{4(17)} \\d=\sqrt{2^2(17)} \\d=2\sqrt{17}$ Thus, the distance between the two given points is $2\sqrt{17}$ units. | 2019-11-12 15:22:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9322955012321472, "perplexity": 161.46919827575013}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665575.34/warc/CC-MAIN-20191112151954-20191112175954-00100.warc.gz"} |
http://cosmo.torun.pl/pipermail/cosmo-torun/2003-October/000160.html | # prosba o komentarz
Boud Roukema boud w astro.uni.torun.pl
Śro, 29 Paź 2003, 14:10:04 CET
On Wed, 29 Oct 2003, szajtan odwieczny wrote:
> > On Tue, 28 Oct 2003, szajtan odwieczny wrote:
> >
> > > What I think we can say for sure is that, although we're not sure about
> > > the actual curvature of our obserwable universe, and thus we're not sure
> > > if the space go on and on to infinity, it seems that it's quite sure that
> > > our universe is accelerating (q<0), and from this point of view we can say
> > > that if we send out a probe into the space even at the velocity of light,
> > > it's likely that it will never return regardelss of the the curvature
> > > of the universe, because it just won't overpass the expantion rate of the
> > > universe unless there is some nontrivial topology involved. The Big Crunch
> > > never happens in area where q<0 on Omega_l, Omega_m plane. So from our
> > > point of view we can say the spacetime is infinite if we're thinking in a
> >
> > We can't say "the spacetime is infinite". What you mean is
> why not ?
Because "x is infinite" means that
"for every y \in {Real numbers}, x > y".
If the Universe has positive curvature with curvature radius R_C (and
is a perturbed FLRW model, as we think), then there exists a maximal
length spatial geodesic X_s and a maximal length space-time geodesic
X_st .
Then, there exist y \in {Real numbers} greater than these values and
there are no longer any x in the Universe greater than y. So it's not
infinite.
> event horizon accounts for all evolution of expantion according to assumed
> model. if it's less than the curent curvature radius then we won't see the
> probe, if it's bigger we mae but not must see it, for the curvature
> radius is changing in time.
i'll let you do the calculations here, but they are irrelevant for the
question of infinity.
>
> > "the http://www.wikipedia.org/wiki/Event_horizon is less than
> > 2 \pi times the radius of curvature even if comoving space is a hypersphere".
>
> last time I checked the page expired so don't know what was there, but did
So have a look again.
> anyone said something about the relation of the event horizon to the
> curvature ratius.? (this should be calculated)
i'd be surprised if there's any simple relation, unless you restrict
to special cases like \Omega_\Lambda=0 or at least some family of FLRW models.
> >
> > This is true even when \Omega_\Lambda = 0 - the Big Crunch happens
> > before we can see the back of our head.
> >
> even better - another reason for which we will never see the sent signal
> (probe), but with CDM=.3 and DE=.7 or anything close to it we have no big
> crunch at all.
OK
> > > way of traveling in it. If we think just of a space as a slice in some
> > > moment of time the quiestion is still open, but what is use of thinking
> >
> > i think you mean here "in some spatial section at constant cosmological time".
>
> yes
:)
> > > about space this way - it just cannot be separated from time right ?
> >
> > It's the fundamental nature of the model, so we ought to think about it.
>
> the fundamental nature of the model is that going in space we also move in
> time.
This is extremely confusing language. What do you mean by "going in
space", "moving in time" and "we"? If you mean there's a second time
variable representing the time variable of a thought experiment, then
it makes sense. Or if you're talking about a physical particle, then
two different time variables are the local (proper) time of the particle
and the cosmological time.
But it's perfectly possible to imagine comoving space without needing
any time variable to "go" through it (though having a local psychological
time variable is convenient).
> eg. Imagine that that space is closed, and expands slow enough that
> a photon emited from your flashlight can round it, but as the time passes
> expansion rate mae grow up, event horizon falls, (say cosmological
> constant starts dominate) and the photon won't make it eventually.
IMHO this is a different theme to the universe being infinite or not
> > If you can think of an alternative model which only models our past
> > time cone, fine.
> btw.
> if the accurate model predict things in the past, I see no reason why
> it should not predict also things in the future.
You can't predict things in the past. Astronomers doing models often talk about
The reality is that you can only postdict the past, and that if you make
predictions they will usually be wrong. Moreover, you're more likely to
get observing time/grants if you postdict the past (and say that you're
predicting it) rather than if you make real predictions.
As for accuracy, the model below is extremely accurate, by definition.
> below this this point I don't follow ;)
Another way of describing the model is that the Universe the inside of a
6k-light-year sphere and that initial (and continuing) boundary conditions
on the sphere have been designed in such a way that people trying to
understand them conclude with a series of simple (but wrong) physical
laws. Initial conditions throughout the sphere were also set up with
the same intention (e.g. fossils, distribution of continents, genetic
mixes of people and other animals, isotopic ratios of uranium etc...)
Maybe another way of describing it is that we live inside a planetarium
of radius 6kly and that The Designer is pretty good at designing fun
models with just enough clues that we remain interested and think we
can understand the model represented in the planetarium.
> > But personally this reminds me of the Christian
> > fundamentalist cosmology model where the Unvierse is only 6000 years
> > old, as written in the Bible.
> >
> > It's a model which perfectly fits all cosmological observations,
> > including those of WMAP. ;) The Universe in this model is the inside
>
> gash, does it says about CMB fluctuations ?
No: but since human beings have interpreted these in terms of a simple
set of physical laws, the Christian fundamentalist model is satisfied
- there are simply a new set of boundary conditions designed to make
physicists interpret them this way.
> meaybe I should review the bible instead of Peebles, etc :)
>
> > of a sphere of radius 6000 light-years, on which EM radiation of all
> > sorts of wavelengths (and we could add other particles) was generated
> > 6000 years ago on this surface, emitted in the direction of the Sun
> > (and it continues to be generated) in such a way to reproduce a
> > "naive" model that makes it (more or less) easy for human beings to
> > interpret these in terms of simple laws of physics. The being "God/Bóg"
> > which generates the emission wants human beings to have an easily
> > interpretable Universe, he/she/it is extremely intelligent and able
> > to generate such complex emission patterns of radiation. Just like
> > he/she/it set up species of animals 6000 years ago in a way that
> > makes biologists think there must have been lots of evolution...
> >
> > Personally i find the model ridiculous, but it perfectly fits all the
> > observations and avoids "extrapolation" into times with which we have
> > no written contact (prehistorical), and the Universe is only 6000
> > years old.
>
> pozdrawiam
> bartek.
pozd
boud
Więcej informacji o liście Cosmo-torun | 2022-10-01 05:31:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7634070515632629, "perplexity": 2333.055657172433}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00569.warc.gz"} |
https://demo7.dspace.org/items/a5dbefb7-f638-4e38-b63b-d1010b0f0967 | ## All Vacuum Near-Horizon Geometries in $D$-dimensions with $(D-3)$ Commuting Rotational Symmetries
##### Authors
Hollands, Stefan
Ishibashi, Akihiro
##### Description
We explicitly construct all stationary, non-static, extremal near horizon geometries in $D$ dimensions that satisfy the vacuum Einstein equations, and that have $D-3$ commuting rotational symmetries. Our work generalizes [arXiv:0806.2051] by Kunduri and Lucietti, where such a classification had been given in $D=4,5$. But our method is different from theirs and relies on a matrix formulation of the Einstein equations. Unlike their method, this matrix formulation works for any dimension. The metrics that we find come in three families, with horizon topology $S^2 \times T^{D-4}$, or $S^3 \times T^{D-5}$, or quotients thereof. Our metrics depend on two discrete parameters specifying the topology type, as well as $(D-2)(D-3)/2$ continuous parameters. Not all of our metrics in $D \ge 6$ seem to arise as the near horizon limits of known black hole solutions.
Comment: 22 pages, Latex, no figures, title changed, references added, discussion of the parameters specifying solutions corrected, amended to match published version
##### Keywords
General Relativity and Quantum Cosmology, High Energy Physics - Theory | 2022-12-06 17:42:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8638077974319458, "perplexity": 1677.5456083160527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711111.35/warc/CC-MAIN-20221206161009-20221206191009-00770.warc.gz"} |
https://www.physicsforums.com/threads/equation-of-a-circle-from-given-conditions.960963/ | # Equation of a circle from given conditions
Gold Member
## Homework Statement
Equation of the circle passing through the point (1,2) and (3,4) and touching the line 3x+y-3=0 is?
## Homework Equations
x^2+y^2+2gx+2fy+c=0...(1)
(-g,-f)=center of circle
## The Attempt at a Solution
Putting (1,2) and (3,4) in equation 1 we get 5+2g+4f+c=0; 25+6g+8f+c=0.
Now, line joining the two points will be perpendicular to the line joining center and midpoint of that line (chord perpendicular to radius). Say (h,k) is center, slope joining the two points is 1 so slope of radius through midpoint is -1 (perpendicular lines), midpoint of chord is (2,3); equating -1 to slope of (h,k) and (2,3) gives us k+h=5- but h= -g and k= -f; so -g-f=5 Solving these three equations gives c=40, f= -35/2 and g= 25/2 which is the wrong circle. I know there are other ways to solve this but I want to know why this method is not working in particular- I double checked all the calculations and I can't figure out anything wrong with my logic, Thank you for your help
Staff Emeritus
Homework Helper
Gold Member
It is unclear to me what you are trying to do with this ”method”. Why are you creating a line from the centre to the midpoint? You have already used that the two points need to be on the circle and there are an infinite number of circles satisfying this. You cannot squeeze more information out of those two points. You need to use the third requirement.
Delta2
Staff Emeritus
Homework Helper
Gold Member
so -g-f=5 Solving these three equations gives c=40, f= -35/2 and g= 25/2
Also note that your equation in bold here is not a new equation. You can get it by just using your previous two so there is no new information. Your equation system therefore does not have a unique solution (two equations for three variables) and you need to use the extra information provided.
Gold Member
It is unclear to me what you are trying to do with this ”method”. Why are you creating a line from the centre to the midpoint? You have already used that the two points need to be on the circle and there are an infinite number of circles satisfying this. You cannot squeeze more information out of those two points. You need to use the third requirement.
Oh, right. I just realized that- sort of like writing a third KVL equation which is the same. To use the third condition I'd have to put the radius=distance of line from center which is lengthy and prone to mistakes in an exam. I was hoping to find a shorter method but I suppose this is the only way to do it...
Delta2
Gold Member
Also note that your equation in bold here is not a new equation. You can get it by just using your previous two so there is no new information. Your equation system therefore does not have a unique solution (two equations for three variables) and you need to use the extra information provided.
Thank you very much for your help :D
Homework Helper
$$(x-a)^2+(y-(5-a))^2=r^2$$ Minimise for tangent: $$2(x-a)+2(y-(5-a))\frac{dy}{dx}=0$$ with $$y=3-3x \Rightarrow \frac{dy}{dx}=-3$$ Interesting problem - you end up with two values of a and hence two circles satisfying the given conditions.
Gold Member
$$(x-a)^2+(y-(5-a))^2=r^2$$ Minimise for tangent: $$2(x-a)+2(y-(5-a))\frac{dy}{dx}=0$$ with $$y=3-3x \Rightarrow \frac{dy}{dx}=-3$$ Interesting problem - you end up with two values of a and hence two circles satisfying the given conditions.
how did you obtain coordinates of center as (a,5-a)?
Homework Helper
how did you obtain coordinates of center as (a,5-a)?
From equation of perpendicular bisector of the line drawn between the two given points.
SammyS
Gold Member
From equation of perpendicular bisector of the line drawn between the two given points.
ohh, really good solution- how did you think of this?
Homework Helper
Other approaches seemed to be heading for complications so I tried to keep it simple! I wasn't quite sure how to use the information about the tangent line until I realized the problem was essentially one of minimising distance between point (the circle centre) and line (y=3-3x).
Gold Member
Other approaches seemed to be heading for complications so I tried to keep it simple! I wasn't quite sure how to use the information about the tangent line until I realized the problem was essentially one of minimising distance between point (the circle centre) and line (y=3-3x).
Great! Thank you very much for your help.
Homework Helper
ohh, really good solution- how did you think of this?
A pleasure. Thanks for your kind compliment - the problem was certainly a little different from 'run of the mill' exercises in analytic geometry. | 2023-03-20 15:52:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.729453980922699, "perplexity": 409.17326835917913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943484.34/warc/CC-MAIN-20230320144934-20230320174934-00701.warc.gz"} |
http://www.open3d.org/docs/tutorial/Advanced/multiway_registration.html | # Multiway registration¶
Multiway registration is the process to align multiple pieces of geometry in a global space. Typically, the input is a set of geometries (e.g., point clouds or RGBD images) $$\{\mathbf{P}_{i}\}$$. The output is a set of rigid transformations $$\{\mathbf{T}_{i}\}$$, so that the transformed point clouds $$\{\mathbf{T}_{i}\mathbf{P}_{i}\}$$ are aligned in the global space.
Open3D implements multiway registration via pose graph optimization. The backend implements the technique presented in [Choi2015].
# src/Python/Tutorial/Advanced/multiway_registration.py
from open3d import *
if __name__ == "__main__":
set_verbosity_level(VerbosityLevel.Debug)
pcds = []
for i in range(3):
"../../TestData/ICP/cloud_bin_%d.pcd" % i)
downpcd = voxel_down_sample(pcd, voxel_size = 0.02)
pcds.append(downpcd)
draw_geometries(pcds)
pose_graph = PoseGraph()
odometry = np.identity(4)
pose_graph.nodes.append(PoseGraphNode(odometry))
n_pcds = len(pcds)
for source_id in range(n_pcds):
for target_id in range(source_id + 1, n_pcds):
source = pcds[source_id]
target = pcds[target_id]
print("Apply point-to-plane ICP")
icp_coarse = registration_icp(source, target, 0.3,
np.identity(4),
TransformationEstimationPointToPlane())
icp_fine = registration_icp(source, target, 0.03,
icp_coarse.transformation,
TransformationEstimationPointToPlane())
transformation_icp = icp_fine.transformation
information_icp = get_information_matrix_from_point_clouds(
source, target, 0.03, icp_fine.transformation)
print(transformation_icp)
# draw_registration_result(source, target, np.identity(4))
print("Build PoseGraph")
if target_id == source_id + 1: # odometry case
odometry = np.dot(transformation_icp, odometry)
pose_graph.nodes.append(
PoseGraphNode(np.linalg.inv(odometry)))
pose_graph.edges.append(
PoseGraphEdge(source_id, target_id,
transformation_icp, information_icp, uncertain = False))
else: # loop closure case
pose_graph.edges.append(
PoseGraphEdge(source_id, target_id,
transformation_icp, information_icp, uncertain = True))
print("Optimizing PoseGraph ...")
option = GlobalOptimizationOption(
max_correspondence_distance = 0.03,
edge_prune_threshold = 0.25,
reference_node = 0)
global_optimization(pose_graph,
GlobalOptimizationLevenbergMarquardt(),
GlobalOptimizationConvergenceCriteria(), option)
print("Transform points and display")
for point_id in range(n_pcds):
print(pose_graph.nodes[point_id].pose)
pcds[point_id].transform(pose_graph.nodes[point_id].pose)
draw_geometries(pcds)
## Input¶
set_verbosity_level(VerbosityLevel.Debug)
pcds = []
for i in range(3):
"../../TestData/ICP/cloud_bin_%d.pcd" % i)
downpcd = voxel_down_sample(pcd, voxel_size = 0.02)
pcds.append(downpcd)
draw_geometries(pcds)
The first part of the tutorial script reads three point clouds from files. The point clouds are downsampled and visualized together. They are misaligned.
## Build a pose graph¶
pose_graph = PoseGraph()
odometry = np.identity(4)
pose_graph.nodes.append(PoseGraphNode(odometry))
n_pcds = len(pcds)
for source_id in range(n_pcds):
for target_id in range(source_id + 1, n_pcds):
source = pcds[source_id]
target = pcds[target_id]
print("Apply point-to-plane ICP")
icp_coarse = registration_icp(source, target, 0.3,
np.identity(4),
TransformationEstimationPointToPlane())
icp_fine = registration_icp(source, target, 0.03,
icp_coarse.transformation,
TransformationEstimationPointToPlane())
transformation_icp = icp_fine.transformation
information_icp = get_information_matrix_from_point_clouds(
source, target, 0.03, icp_fine.transformation)
print(transformation_icp)
# draw_registration_result(source, target, np.identity(4))
print("Build PoseGraph")
if target_id == source_id + 1: # odometry case
odometry = np.dot(transformation_icp, odometry)
pose_graph.nodes.append(
PoseGraphNode(np.linalg.inv(odometry)))
pose_graph.edges.append(
PoseGraphEdge(source_id, target_id,
transformation_icp, information_icp, uncertain = False))
else: # loop closure case
pose_graph.edges.append(
PoseGraphEdge(source_id, target_id,
transformation_icp, information_icp, uncertain = True))
A pose graph has two key elements: nodes and edges. A node is a piece of geometry $$\mathbf{P}_{i}$$ associated with a pose matrix $$\mathbf{T}_{i}$$ which transforms $$\mathbf{P}_{i}$$ into the global space. The set $$\{\mathbf{T}_{i}\}$$ are the unknown variables to be optimized. PoseGraph.nodes is a list of PoseGraphNode. We set the global space to be the space of $$\mathbf{P}_{0}$$. Thus $$\mathbf{T}_{0}$$ is identity matrix. The other pose matrices are initialized by accumulating transformation between neighboring nodes. The neighboring nodes usually have large overlap and can be registered with Point-to-plane ICP.
A pose graph edge connects two nodes (pieces of geometry) that overlap. Each edge contains a transformation matrix $$\mathbf{T}_{i,j}$$ that aligns the source geometry $$\mathbf{P}_{i}$$ to the target geometry $$\mathbf{P}_{j}$$. This tutorial uses Point-to-plane ICP to estimate the transformation. In more complicated cases, this pairwise registration problem should be solved via Global registration.
[Choi2015] has observed that pairwise registration is error-prone. False pairwise alignments can outnumber correctly aligned pairs. Thus, they partition pose graph edges into two classes. Odometry edges connect temporally close, neighboring nodes. A local registration algorithm such as ICP can reliably align them. Loop closure edges connect any non-neighboring nodes. The alignment is found by global registration and is less reliable. In Open3D, these two classes of edges are distinguished by the uncertain parameter in the initializer of PoseGraphEdge.
In addition to the transformation matrix $$\mathbf{T}_{i}$$, the user can set an information matrix $$\mathbf{\Lambda}_{i}$$ for each edge. If $$\mathbf{\Lambda}_{i}$$ is set using function get_information_matrix_from_point_clouds, the loss on this pose graph edge approximates the RMSE of the corresponding sets between the two nodes, with a line process weight. Refer to Eq (3) to (9) in [Choi2015] and the Redwood registration benchmark for details.
The script creates a pose graph with three nodes and three edges. Among the edges, two of them are odometry edges (uncertain = False) and one is a loop closure edge (uncertain = True).
## Optimize a pose graph¶
print("Optimizing PoseGraph ...")
option = GlobalOptimizationOption(
max_correspondence_distance = 0.03,
edge_prune_threshold = 0.25,
reference_node = 0)
global_optimization(pose_graph,
GlobalOptimizationLevenbergMarquardt(),
GlobalOptimizationConvergenceCriteria(), option)
Open3D uses function global_optimization to perform pose graph optimization. Two types of optimization methods can be chosen: GlobalOptimizationGaussNewton or GlobalOptimizationLevenbergMarquardt. The latter is recommended since it has better convergence property. Class GlobalOptimizationConvergenceCriteria can be used to set the maximum number of iterations and various optimization parameters.
Class GlobalOptimizationOption defines a couple of options. max_correspondence_distance decides the correspondence threshold. edge_prune_threshold is a threshold for pruning outlier edges. reference_node is the node id that is considered to be the global space.
Optimizing PoseGraph ...
[GlobalOptimizationLM] Optimizing PoseGraph having 3 nodes and 3 edges.
Line process weight : 3.745800
[Initial ] residual : 6.741225e+00, lambda : 6.042803e-01
[Iteration 00] residual : 1.791471e+00, valid edges : 3, time : 0.000 sec.
[Iteration 01] residual : 5.133682e-01, valid edges : 3, time : 0.000 sec.
[Iteration 02] residual : 4.412544e-01, valid edges : 3, time : 0.000 sec.
[Iteration 03] residual : 4.408356e-01, valid edges : 3, time : 0.000 sec.
[Iteration 04] residual : 4.408342e-01, valid edges : 3, time : 0.000 sec.
Delta.norm() < 1.000000e-06 * (x.norm() + 1.000000e-06)
[GlobalOptimizationLM] total time : 0.000 sec.
[GlobalOptimizationLM] Optimizing PoseGraph having 3 nodes and 3 edges.
Line process weight : 3.745800
[Initial ] residual : 4.408342e-01, lambda : 6.064910e-01
Delta.norm() < 1.000000e-06 * (x.norm() + 1.000000e-06)
[GlobalOptimizationLM] total time : 0.000 sec.
CompensateReferencePoseGraphNode : reference : 0
The global optimization performs twice on the pose graph. The first pass optimizes poses for the original pose graph taking all edges into account and does its best to distinguish false alignments among uncertain edges. These false alignments have small line process weights, and they are pruned after the first pass. The second pass runs without them and produces a tight global alignment. In this example, all the edges are considered as true alignments, hence the second pass terminates immediately.
## Visualize optimization¶
print("Transform points and display")
for point_id in range(n_pcds):
print(pose_graph.nodes[point_id].pose)
pcds[point_id].transform(pose_graph.nodes[point_id].pose)
draw_geometries(pcds)
Ouputs:
Although this tutorial demonstrates multiway registration for point clouds. The same procedure can be applied to RGBD images. See Make fragments for an example. | 2018-06-20 07:19:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2562828063964844, "perplexity": 7882.989262867791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863489.85/warc/CC-MAIN-20180620065936-20180620085936-00145.warc.gz"} |
https://proofwiki.org/wiki/Primitive_of_Reciprocal_of_Root_of_a_squared_minus_x_squared | # Primitive of Reciprocal of Root of a squared minus x squared
Jump to navigation Jump to search
## Theorem
$\displaystyle \int \frac 1 {\sqrt {a^2 - x^2} } \rd x = \arcsin \frac x a + C$
where $a$ is a strictly positive constant and $a^2 > x^2$.
### Corollary
$\displaystyle \int_0^x \frac {\d t} {\sqrt{1 - t^2} } = \arcsin x$
## Proof
$\displaystyle \int \frac 1 {\sqrt {a^2 - x^2} } \rd x$ $=$ $\displaystyle \int \frac {\rd x} {\sqrt {a^2 \paren {1 - \frac {x^2} {a^2} } } }$ factor $a^2$ out of the radicand $\displaystyle$ $=$ $\displaystyle \int \frac {\rd x} {\sqrt{a^2} \sqrt {1 - \paren {\frac x a}^2} }$ $\displaystyle$ $=$ $\displaystyle \frac 1 a \int \frac {\rd x} {\sqrt {1 - \paren {\frac x a}^2} }$
$\sin \theta = \dfrac x a \iff x = a \sin \theta$
for $\theta \in \openint {-\dfrac \pi 2} {\dfrac \pi 2}$.
From Real Sine Function is Bounded and Shape of Sine Function, this substitution is valid for all $x / a \in \openint {-1} 1$.
$\displaystyle a^2$ $>$ $\displaystyle x^2$ $\displaystyle \leadstoandfrom \ \$ $\displaystyle 1$ $>$ $\displaystyle \frac {x^2} {a^2}$ dividing both terms by $a^2$ $\displaystyle \leadstoandfrom \ \$ $\displaystyle 1$ $>$ $\displaystyle \paren {\frac x a}^2$ Powers of Group Elements $\displaystyle \leadstoandfrom \ \$ $\displaystyle 1$ $>$ $\displaystyle \size {\frac x a}$ taking the square root of both terms $\displaystyle \leadstoandfrom \ \$ $\displaystyle -1$ $<$ $\displaystyle \paren {\frac x a} < 1$ Negative of Absolute Value
so this substitution will not change the domain of the integrand.
Then:
$\displaystyle x$ $=$ $\displaystyle a \sin \theta$ from above $\displaystyle \leadstp \ \$ $\displaystyle 1$ $=$ $\displaystyle a \cos \theta \frac {\rd \theta} {\rd x}$ differentiating with respect to $x$, Derivative of Sine Function, Chain Rule for Derivatives $\displaystyle \frac 1 a \int \frac 1 {\sqrt {1 - \paren {\frac x a}^2 } } \rd x$ $=$ $\displaystyle \frac 1 a \int \frac {a \cos \theta} {\sqrt {1 - \sin^2 \theta} } \frac {\rd \theta} {\rd x} \rd x$ from above $\displaystyle$ $=$ $\displaystyle \frac a a \int \frac {\cos \theta} {\sqrt {1 - \sin^2 \theta} } \rd \theta$ Integration by Substitution $\displaystyle$ $=$ $\displaystyle \int \frac {\cos \theta} {\sqrt {\cos^2 \theta} } \rd \theta$ Sum of Squares of Sine and Cosine $\displaystyle$ $=$ $\displaystyle \int \frac {\cos \theta} {\size {\cos \theta} } \rd \theta$
We have defined $\theta$ to be in the open interval $\openint {-\dfrac \pi 2} {\dfrac \pi 2}$.
From Sine and Cosine are Periodic on Reals, $\cos \theta > 0$ for the entire interval. Therefore the absolute value is unnecessary, and the integral simplifies to:
$\displaystyle \int \rd \theta$ $=$ $\displaystyle \theta + C$
As $\theta$ was stipulated to be in the open interval $\openint {-\dfrac \pi 2} {\dfrac \pi 2}$:
$\sin \theta = \dfrac x a \iff \theta = \arcsin \dfrac x a$
The answer in terms of $x$, then, is:
$\displaystyle \int \frac 1 {\sqrt {a^2 - x^2}} \rd x = \arcsin \frac x a + C$
$\blacksquare$ | 2020-07-16 00:06:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9888365268707275, "perplexity": 133.97194673763283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657176116.96/warc/CC-MAIN-20200715230447-20200716020447-00383.warc.gz"} |
http://mathoverflow.net/questions/35825/equivalent-definitions-of-m-genericity/35828 | # Equivalent definitions of M-genericity.
I'm trying to learn about forcing, and have heard that there are several equivalent ways to define genericity. For instance, let M be a transitive model of ZFC containing a poset (P, ≤). Suppose G ⊆ P is such that q ∈ G whenever both p ∈ G and q ≥ p. Suppose also that whenever p,q ∈ G then there is r ∈ G such that r ≤ p and r ≤ q. Then the following are equivalent ways to say that G is generic:
(1) G meets every element of M dense in P. That is, for all D ∈ M, if for all p ∈ P there is q ∈ D such that q ≤ p, then G ∩ D is nonempty.
(2) G is nonempty and meets every element of M dense below some p ∈ G. That is, for all p ∈ G and all B ∈ M, if for each q ≤ p there is r ∈ B such that r ≤ q, then G ∩ B is nonempty.
Proving this equivalence seemed like it would be an easy exercise, but I think I'm missing something. Can someone point me toward a source where I can find a proof? I hope this is an acceptable question; this is my first time posting.
EDIT: Typo and omission fixed.
-
If $G$ satisfies (1), then it satisfies (2) because if $p$ is in $G$ and $D$ is dense below $p$, then let $D'$ be the set of conditions $q$ which are either in $D$ or incompatible with $p$. This is dense in $P$ since any condition that is compatible with $p$ will have elements of $D$ below it, and any condition incompatible with $p$ is already in $D'$. But $G$ cannot meet $D'$ in something incompatible with $p$, by your assumption on $G$, and so it must meet it in $D$, as desired.
Conversely, if $G$ satisfies (2), then it will satisfy (1) because if $D$ is dense, then it is dense below any $p$, and so $G$ will meet it.
-
Thanks. D' was exactly what I was missing. Now I see the general strategy for proving such equivalences. – user8546 Aug 17 '10 at 2:18
Great! There are several other equivalent characterizations of $M$-genericity: (3) $G$ meets every maximal antichain in $M$; (4) $G$ meets every pre-dense set in $M$; and provided $P$ is a complete Boolean algebra, (5) $G$ is $M$-complete, in the sense that if $M$ has a descending sequence in $G$, then it has a lower bound in $G$. – Joel David Hamkins Aug 17 '10 at 2:31 | 2015-04-25 07:04:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9001560211181641, "perplexity": 215.29229532332647}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246647589.15/warc/CC-MAIN-20150417045727-00210-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://electronics.stackexchange.com/questions/277108/how-to-avoid-ldr-controlled-light-self-latching?answertab=votes | # How to avoid LDR controlled light self-latching?
I am planning on using an LDR to turn on a LED light (700 lunes warm white fixtures), when the main light in the room gets turned on. But this LED light is pretty bright on its own. I think I will end up latching the circuit with the light. What's the best way to avoid this? My planned circuit, resistor values are only representative.
simulate this circuit – Schematic created using CircuitLab
The room has a 100w led residential bulb, semi reflective tiling. The lights I want to control have some unknown pwm (555 circuit to a constant current switching circuit from what I can tell).
• The only way to avoid it is to keep the LDR from seeing the LED. Also, you need to limit the gate voltage to the FET. Gate-source voltage is typically rated for 20 volts max, and you could easily drive it to 23 volts, which could destroy it. – WhatRoughBeast Dec 27 '16 at 21:40
• Is it a colored led? You could get a ldr that is less sensitive to the specific color. – MadHatter Dec 27 '16 at 21:42
• These are warm white fixtures. 700 lumins combined output. – Passerby Dec 27 '16 at 21:49
• Too bad it's a phosphor-based system with likely 100's of milliseconds of light retention. An idea using PWM and only taking measurements when the LED light is OFF probably won't work. – jonk Dec 27 '16 at 21:55
• Can you use baffles to solve it "mechanically"? A physical opaque partition between LED and LDR. – vicatcu Dec 27 '16 at 21:59
Instead of the LDR, you could use an infrared-sensitive phototransistor. The IR phototransistor will detect incandescent light, but not (most) LEDs. Fluorescent lights and household LED lights also don't put out much IR, so this won't work if you have these.
I do suggest that you add some positive feedback (hysteresis) in your circuit. As it is, the FET will find itself sometimes biased in the linear region and not operating as a low-loss ON/OFF switch. Instead, it will heat up and possibly burst into flames (depending on the load and the power source).
• The hysteresis is a good point. I addressed doing this with one more BJT here: electronics.stackexchange.com/questions/268891/… A spectrophotometer would help identify null-bands in the LED output to make your suggestion work well. Might need a thin-film filter, though. – jonk Dec 27 '16 at 22:00
• Paul, you might be more specific about the IR phototransistor - encapsulated in clear plastic is not wanted, while encapsulation in the black-dyed plastic is the proper type to get - they are insensititve to visible light. – glen_geek Dec 27 '16 at 22:00
• The regular bulb I have is a 100w equivalent led, unfortunately. – Passerby Dec 27 '16 at 22:02
The problem is that your turn on point is a bit sloppy and not controllable use a circuit like below which gives you more control and put the ldr in a tube pointing at your main room light fitting as suggested above. If you swap the + and - inputs to the op amp it can drive your mosfet directly.
• Apologies to Olin for pinching his circuit. – RoyC Dec 27 '16 at 22:50
You could always just detect when the room light switch is flipped. I know you already know how to do that much. The light switch, if in the US and semi-modern, will have access to earth ground as well as hot, switched hot (which is connected to neutral through the room light load), and likely neutral too. (Because of the new-fangled devices which may need regular access to hot, neutral, and ground for other reasons.) You could tap in at the ceiling and even be crazy enough to use these cheap $3 Wifi units to act as a web server providing status of your room light. Your auxillary LED light could then just go to that page and monitor the status, using another of the$3 Wifi units. They each need access to +3.3 VDC, though. So it's a bit Rube Goldberg. But robust, at least.
If I really wanted to use an LDR for this, I'd arrange the detector so that it is down deep into a metal tube that points at the room light. I might even bother with a lens or two (I've got boxes of them in nice slip covers.) You can get anodized aluminum optical tubes cheap enough, too. But a black pen barrel might do okay. I have a couple of 3D printers, so I'd probably just whip something up on that. Regardless, I definitely would NOT expose the LDR to the entire room lighting. I'd want to, instead, aim it as accurately as I can at the actual light for the room. A lens and baffling can help, but that may be more trouble than it is worth.
I'd then test the result to see what I get for resistance values with the light ON and OFF and with the tube positioned variously, to account for common miss-alignments that I'd like to tolerate. I'd then design a simple circuit to support an appropriate level of hysteresis based on those values. The following circuit will have two thresholds at about $300\:\textrm{k}\Omega$ and $650\:\textrm{k}\Omega$, roughly. Which should be good for a typical LDR.
simulate this circuit – Schematic created using CircuitLab
$R_3$ and $R_4$ are set for the output impedance. I think $47\:\textrm{k}\Omega$ is fine for a MOSFET gate drive. Increasing $R_1$ and $R_2$ will lower the high threshold, reducing the hysteresis. Decreasing them will raise the high threshold. Reducing $R_5$ will pull down both thresholds, but the high threshold moves down faster than the low threshold, so reducing it also tights the band, too. That's about it, really.
But you really need to take measurements, first. And that means designing the optics arrangement, too. But a simple, long tube should be fine I think. Pointing that correctly should give you excellent discrimination so that the electronic circuit can do its job, as well.
That's what I'd try, to start. | 2019-07-20 13:50:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3443697988986969, "perplexity": 1589.3414045598595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526517.67/warc/CC-MAIN-20190720132039-20190720154039-00287.warc.gz"} |
https://cob.silverchair.com/dev/article/129/21/4891/37501/Quantitative-developmental-anatomy-of-definitive | In the developing mouse embryo the first definitive(transplantable-into-the-adult) haematopoietic stem cells/long-term repopulating units (HSC/RUs) emerge in the AGM region and umbilical vessels on 10-11 days post coitum (d.p.c.). Here, by limiting dilution analysis, we anatomically map the development of definitive HSC/RUs in different embryonic tissues during early colonisation of the liver. We show that by day 12 p.c. the mouse embryo contains about 66 definitive HSC/RUs (53 in the liver, 13 in other tissues), whereas on the previous day the total number of definitive HSC/RUs in the entire conceptus is only about 3. Owing to the length of the cell cycle this dramatic increase in the number of definitive HSC/RUs in only 24 hours is unlikely to be explained purely by cell division. Therefore,extensive maturation of pre-definitive HSCs to a state when they become definitive must take place in the day 11-12 embryo. Here we firstly identify the numbers of HSCs in various organs at 11-13 d.p.c. and secondly, using an organ culture approach, we quantitatively assess the potential of the aorta-gonadmesonephros (AGM) region and the yolk sac to produce/expand definitive HSC/RUs during days 11-12 of embryogenesis. We show that the capacity of the AGM region to generate definitive HSC/RUs is high on 11 d.p.c. but significantly reduced by 12 d.p.c. Conversely, at 12 d.p.c. the YS acquires the capacity to expand and/or generate definitive HSCs/RUs, whereas it is unable to do so on 11 d.p.c. Thus, the final steps in development of definitive HSC/RUs may occur not only within the AGM region, as was previously thought, but also in the yolk sac microenvironment. Our estimates indicate that the cumulative activity of the AGM region and the yolk sac is sufficient to provide the day 12 liver with a large number of definitive HSC/RUs,suggesting that the large pool of definitive HSC/RUs in day 12 foetal liver is formed predominantly by recruiting ready-to-use' definitive HSC/RUs from extra-hepatic sources. In accordance with this we observe growing numbers of definitive HSC/RUs in the circulation during days 11-13 of gestation,suggesting a route via which these HSCs migrate.
Embryonic development of the mammalian haematopoietic system is complex and, in many aspects, poorly understood. Two major sources of haematopoietic activity have been identified in vertebrate embryos. For a long time the yolk sac (YS), where haematopoietic activity is first observed, was assumed to be the primary site of formation of the haematopoietic stem cells (HSCs) that migrate to and colonise the foetal liver and subsequently the bone marrow(Moore and Metcalf, 1970). However, identification of a powerful intraembryonic HSC activity in avian,amphibian and murine embryos cast doubts on this assumption. In experimentally engineered chick-quail chimeras, haematopoiesis in the adult organism was found to originate from the body but not from the YS of the chimera,suggesting that the YS/embryonic haematopoietic hierarchy is transitory(Dieterlen-Lievre, 1975;Martin et al., 1979). In amphibian embryos the dorsal lateral plate (DLP) mesoderm next to the dorsal aorta contributes mainly to definitive haematopoiesis, whereas the ventral blood island (VBI), which is equivalent to the YS, contributes to both primitive and to some extent definitive haematopoiesis(Chen and Turpen, 1995). Recent experiments on amphibian embryos, using individual blastomere labelling techniques, have demonstrated that although at the gastrulation stage both DLP and VBI originate from a common ventral bipotential mesoderm layer(Turpen et al., 1997) their precursors are spatially separated at the blastula stage(Ciau-Uitz et al., 2000). Relatively recently an intra body site of definitive haematopoietic activity has been identified in the mouse embryo prior to definitive haematopoiesis in the liver (Godin et al., 1993;Medvinsky et al., 1993;Muller et al., 1994;Cumano et al., 1996;Medvinsky and Dzierzak, 1996;Medvinsky et al., 1996). Before the onset of organogenesis and the establishment of the circulatory system in the mouse embryo, this site, termed the visceral para-aortic splanchnopleura (P-Sp) region, but not the YS, contains multipotent lymphomyeloid progenitors, although at this stage they are incapable of reconstituting adult recipients (Cumano et al., 1996; Cumano et al.,2001). During organogenesis, part of the splanchnopleura transforms into a morphologically distinct composite axial structure consisting of the dorsal aorta, genital ridges and mesonephroi (AGM region). High numbers of spleen-colony forming units (CFU-S) are concentrated in the AGM region prior to the presence of CFU-S activity in the liver(Medvinsky et al., 1993;Medvinsky and Dzierzak, 1996). The first definitive (long-term reconstituting) HSCs appear in the AGM region and umbilical vessels at late 10/early 11 d.p.c.(Muller et al., 1994;Medvinsky and Dzierzak, 1996;de Bruijn et al., 2000). The AGM region is the only tissue in the 10 d.p.c. mouse embryo capable of the autonomous initiation/expansion of definitive HSCs, as demonstrated using an organ culture approach. Slightly later, by 11 d.p.c., HSCs appear both in the liver rudiment and the YS but again only the AGM region is capable of expanding the number of HSCs in organ culture conditions(Medvinsky et al., 1996). These and other features of AGM biology, including the kinetics of CFU-S and HSC development, led to the hypothesis that the AGM region is the primary site of formation of definitive HSCs which then colonise secondary haematopoietic organs, primarily the foetal liver(Dzierzak and Medvinsky, 1995;Dzierzak and Medvinsky, 1998). Recently, more compelling evidence has emerged that indicates HSCs originate in the body of the embryo independently of YS activity. Indeed, as early as day 7 p.c., before circulation is established in the mouse embryo, the P-Sp but not the YS contains ancestors of cells that are capable of long-term,multi-potential repopulation of adult irradiated recipients devoid of NK cells(Cumano et al., 2001). Experiments on human embryos also revealed dramatic differences in the lymphohaematopoietic potential of the dorsal aorta and the YS(Tavian et al., 2001). Upon transplantation into NOD-SCID mice, cultured dorsal aorta cells showed lymphomyeloid reconstitution whereas YS cells were only capable of contributing to the myeloid lineage. Cumulatively, these data from several different research groups point to the embryo body as the site of origin of definitive HSCs.
The above data does not, however, rule out a possibility that at later stages the YS is involved in the independent production or expansion of definitive HSCs. Although early YS cells are unable to repopulate adult irradiated recipients upon direct transplantation, when transplanted into the embryo they can contribute to adult haematopoiesis(Weissman et al., 1978;Toles et al., 1989). Analogous results have been achieved by transplantation of YS cells into newborn recipients (Yoder and Hiatt,1997; Yoder et al.,1997). In addition, by day 8 p.c. both the P-Sp and the YS contain cells that can mature into definitive HSCs by co-culture with an AGM-derived stromal cell line (Matsuoka et al.,2001). Thus at least from day 8 p.c. the YS contains cells(pre-definitive HSCs), which are capable of development into definitive HSCs upon maturation in an embryonic or newborn microenvironment(Medvinsky and Dzierzak,1999). However, it remains unclear if and when during normal embryo development these cells mature into definitive HSCs and whether they have to migrate to an inductive AGM microenvironment in order to do so.
In our previous papers we focused on the initiation of definitive HSC production in the mouse embryo (Muller et al., 1994; Medvinsky and Dzierzak, 1996). Here we explore the subsequent stage of HSC development from day 11 until day 13 when the number of definitive HSCs increases in the embryo. From day 11 p.c. onwards the number of HSC/RUs increases dramatically in the liver(Morrison et al., 1995;Ema and Nakauchi, 2000) but little is known about their distribution in the rest of the embryo, the routes of their migration and the mechanisms underlying their expansion in the liver. Various types of multipotent, pluripotent and bi-potent myeloid and lymphoid progenitors have been identified during embryogenesis in the YS, AGM region,liver, thymic and splenic rudiments (Moore and Metcalf, 1970; Velardi and Cooper, 1984; Johnson and Barker, 1985; Wong et al.,1986; Eren et al.,1987; Liu and Auerbach,1991; Cumano et al.,1992; Morrison et al.,1995; Ema et al.,1998; Kawamoto et al.,1998; Nishikawa et al.,1998; Liu et al.,1999; Ohmura et al.,2001; Palis et al.,2001; Traver et al.,2001; Douagi et al.,2002) and different types of progenitor cells are disseminated via the circulation (Moore and Metcalf,1970; Johnson and Barker,1985; Rodewald et al.,1994; Delassus and Cumano,1996). However, as the links within the haematopoietic hierarchy and between tissues are unclear, it is difficult to resolve an entire anatomical picture of the development of definitive HSCs. Although data from some publications has revealed fragments of it(Moore and Metcalf, 1970;Ikuta et al., 1990;Morrison et al., 1995;Berger and Sturm, 1996;Sanchez et al., 1996;de Bruijn et al., 2000;Ema and Nakauchi, 2000;Hsu et al., 2000;de Bruijn et al., 2002;North et al., 2002), a comprehensive quantitative anatomical map of HSC/RUs development during embryogenesis has not been produced. Here, we have attempted to create a temporal and spatial map of development of definitive HSC/RUs in the 11-13 d.p.c. mouse embryo. To this end, the number of HSC/RUs in different embryonic tissues has been estimated using a limiting dilution method(Szilvassy et al., 1990). In addition, the potential of the AGM region and the YS to produce and/or maintain definitive HSC/RUs has been assessed using an organ culture approach(Medvinsky and Dzierzak,1996).
We have found that expansion of the pool of HSC/RUs within the liver occurs concurrently with increasing numbers of HSC/RUs in the circulation. In addition to the previously reported activity of the AGM region during day 10-11 p.c. (Medvinsky and Dzierzak,1996; de Bruijn et al.,2000), we report here that a day later, at 12 d.p.c., the YS becomes competent to generate (and/or expand) definitive HSC/RUs. This finding suggests that both the AGM region and the YS produce HSCs that colonise the developing liver in two subsequent waves, the peaks of which fall on days 10-11 p.c. and day 12 p.c. respectively.
### Animals and cells
CBA/Ca, C57BL/6 and (CBA×C57BL6) F1 mice were bred in the animal breeding unit of the University of Edinburgh or purchased from Harlan. Cell suspensions were prepared from non-cultured or cultured 11-13 d.p.c. embryonic tissues after incubation with 0.1% collagenase-dispase (Sigma) in PBS at 37°C. Embryonic blood was collected within 2-3 minutes after separation of the YS from the embryo body and kept on ice before transplantation to preserve HSCs. Special care was taken to remove umbilical and vitelline arteries from preparations of the YS and embryonic circulation.(CBA×C57BL6) F1 mouse male embryos were used as donors for transplantation into (CBA×C57BL6) F1 female recipients. In some experiments (CBA×C57BL6) F1 Ly5.1/2 embryos were used for transplantation into (CBA×C57BL6) F1 Ly5.2 female recipients. Recipient mice were irradiated at 9.5 Gr split into two doses separated by a 3 hour interval in the Cs source at a rate 21.6 rad/minute. The mice received neomycin (0.16 g/100 ml) in acidified drinking water for the first 4 weeks after transplantation.
### Organ culture
Tissues were cultured in myelo-cult medium (Stem Cell Technology)supplemented with 10-6 M hydrocortisone hemi-succinate (Sigma) on Durapore 0.65 μm filters (Millipore) supported by stainless steel stands(5% CO2 in air) at the gas-liquid interface as previously described(Medvinsky and Dzierzak,1996). 12 d.p.c. foetal liver was explanted in small pieces comparable in size to the AGM region. Cultures were set up for 3-5 days and 7-9 days.
### Analysis of donor contribution into recipient haematopoietic system
The contribution of donor cells was assessed 6-8 weeks and 3.5-5 months after transplantation as described previously(Medvinsky and Dzierzak,1996). Briefly, the percentage of male donor cells in the haematopoietic system of female recipients was assessed by comparison with standards of serially diluted male in female DNA (0.1%, 1%, 10%, 100%). Both test DNA and standards were amplified by PCR using primers specific for male Y2B and mouse myogenin sequences. As previously, we have restricted our analysis to those cells (definitive HSCs), which upon transplantation,contributed at a level of 10% or higher in the haematopoietic system of irradiated recipients (Muller et al.,1994; Medvinsky and Dzierzak,1996).
Transplantations of day 12 and 13 tissues in some cases were carried out using Ly5.1 embryos and multilineage contribution was assessed in recipient mice by assessing co-expression of the Ly5.1 marker and lineage specific markers by antibody staining and subsequent analysis on FACSCalibur(Beckton-Dickinson). For this purpose biotinylated anti-Mac-1, anti-B220(secondary stained with Strepdavidin-PE; Sigma), PE-conjugated anti-CD3ϵand FITC-conjugated anti-Ly5.1 antibodies (Pharmingen) were used. In some cases the bone marrow of reconstituted mice was transplanted into secondary recipients.
### Quantitation of HSC/RUs by limiting dilution analysis
Upon transplantation, one definitive HSC is sufficient to differentiate into all lymphoid and myeloid cell types and to contribute over several months at a high level to the haematopoietic system of an irradiated recipient(Lemischka, 1992;Morrison et al., 1997). In order to estimate the number of HSCs/RU in various organs we have adopted a limiting dilution method by transplanting low numbers of HSCs (several dilutions) into irradiated recipients(Szilvassy et al., 1990). Test cells were co-transplanted intravenously with 2×104 bone marrow cells to ensure short-term survival of the recipient. The number of HSCs in tested tissues was estimated by Poisson statistics based on the proportion of non-repopulated recipients in long-term (longer than 3.5 months)repopulating experiments. Serial dilutions are expressed in embryo equivalents(e.e.). A minimum of 2 and maximum of 12 different dilutions were used for each tissue. For each dilution between 4 and 21 recipients were transplanted in a minimum of two independent replicate experiments. The final numbers of HSCs were estimated by the maximum likelihood method using Genstat 5 package and expressed as the most probable numbers (MPN)(GenStat 5 Release 3 Reference Manual,1993). The asymmetric error range in parentheses next to MPN and also showing on the graphs is typical for this kind of analysis which involves confidence interval estimation of the Poisson mean.
Our calculations are based on the assumption that HSCs from different tissues and at different stages of development have an equal seeding efficiency. However, this may not be the case and therefore we, as others(Ema and Nakauchi, 2000),introduce an operational term repopulating unit' (RU), which is not necessarily related to a single cell and appears next to HSC abbreviation in the text.
In separate experiments we have found that the 2×104carrier bone marrow cells injected per recipient contain on average 2 (1.6,2.6) long-term repopulating HSC/RUs, which is similar to numbers reported by some other groups (Abkowitz et al.,2000). This may explain why when other researchers transplanted 10 times more bone marrow carrier cells (2×105) (about 20 HSC/RUs) along with day 11 embryonic liver no liver contribution was detected in recipient mice (Ema and Nakauchi,2000), as the donor HSCs may have been out competed by an excess of HSCs in the carrier bone marrow.
### Assessment of content of circulating HSC/RUs in various embryonic tissues
The number of definitive HSC/RUs in the circulation was assessed in transplantion experiments as described above. The number of HSC/RUs in the circulation quoted in Fig. 1Aand table 1 represent only a proportion of all circulating HSC/RU i.e. those that were released upon separation of the YS and the embryo body. The relative proportion of circulatory blood cells in various embryonic tissues was estimated by a comparison of the numbers of red blood cells in the tissues with those in the circulation. To this end haemoglobinized cells from the circulation and cells obtained after trypsinization of dissected tissues were stained with 0-Dianisidine and counted under the microscope(Iuchi and Yamamoto, 1983) as a measure of the contamination of tissues with embryonic blood. The likely contribution of circulating HSC/RUs to the number of HSC/RUs recovered from different embryonic tissues was calculated from the following formula:
$\ \mathrm{HSC}/\mathrm{RU}_{\mathrm{t}}=\frac{\mathrm{RBC}_{\mathrm{t}}}{\mathrm{RBC}_{\mathrm{c}}}\mathrm{HSC}/\mathrm{RU}_{\mathrm{c}}\$
where HSC/RUt is the estimated total number of circulatory HSC/RUs in the tissue; HSC/RUc is the number of HSC/RUs in circulation measured by transplantation; RBCt is the number of red blood cells in the tissue and RBCc is the number of RBC in the transplanted circulation.
Fig. 1.
HSC/RUs in tissues of the developing mouse embryos (total number per tissue). (A) Distribution of HSC/RUs in AGM, YS and the circulation of day 11-13 embryos. (B) HSC/RUs in the fetal liver of day 11-13 embryos. Numbers of HSC/RUs in tissues were estimated using a limiting dilution method and presented as described in Materials and Methods. Numbers of recipient mice(RM) and dilutions (D) used were as follows. (A) 11 d.p.c.: (AGM) 27RM, 2D;(YS) 25RM, 3D; (circulation) 16RM, 3D; 12 d.p.c.: (AGM) 25RM, 7D; (YS) 56RM,13D; (circulation) 43RM, 9D. (B) Liver (11 d.p.c.) 23RM, 2D; (12 d.p.c.) 37RM,6D; (13 d.p.c.) 55RM, 12D.
Fig. 1.
HSC/RUs in tissues of the developing mouse embryos (total number per tissue). (A) Distribution of HSC/RUs in AGM, YS and the circulation of day 11-13 embryos. (B) HSC/RUs in the fetal liver of day 11-13 embryos. Numbers of HSC/RUs in tissues were estimated using a limiting dilution method and presented as described in Materials and Methods. Numbers of recipient mice(RM) and dilutions (D) used were as follows. (A) 11 d.p.c.: (AGM) 27RM, 2D;(YS) 25RM, 3D; (circulation) 16RM, 3D; 12 d.p.c.: (AGM) 25RM, 7D; (YS) 56RM,13D; (circulation) 43RM, 9D. (B) Liver (11 d.p.c.) 23RM, 2D; (12 d.p.c.) 37RM,6D; (13 d.p.c.) 55RM, 12D.
Table 1.
The comparison of numbers of HSC/RUs found in transplantation experiments by limiting dilution analysis with probable numbers of HSC/RUs in tissues contributed by embryonic circulation*
Embryo ageTissueNumber of benzidine-positive cells (mean±s.d.)Estimated circulating number of HSC/RUs/tissueActual number**of HSC/RUs/tissueNumber of recipient mice (RM): number of dilutions (D)
Day 12 Bled circulation (92±11)×104 3.2 (2.6, 4.2) 43:9
AGM (0.8±0.0)×104 0.02 2.7 (1.9, 3.7) 25:7
YS (21±4)×104 0.7 1.8 (1.4, 2.4) 56:13
Liver (131±12)×104 4.4 53 (43, 69) 37:6
Body (w/o liver) nd nd 12.1 (9.1, 19.3) 15:2
Body (w/o liver, AGM, PB) (48±6)× 104 1.6 5.8 (4.4, 8.2) 17:3
Cord nd nd 0.8 (0.6, 1.8) 7:2
Spleen nd nd 7:1
Thymus nd nd 2:1
Lung (1.9±0.1)× 104 0.1 0.4 (0.2, 0.6) 17:6
Limb nd nd 0.5 (0.3, 0.9) 7:2
Heart (4.6±3.0)× 104 0.2 2:1
Head (18±3)×104 0.6 0.5 (0.4, 0.8) 11:3
Day 13 Bled circulation (139±22)×104 5.9 (4.7, 7.7) 49:12
AGM (1.5±0.3)×104 0.05 0.8 (0.6, 1.2) 17:7
YS (24±2)×104 0.8 0.8 (0.6, 1.2) 21:5
Liver (205±20)×104 NA 260 (212, 320) 55:12
Body (w/o liver) (51±11)×104 1.8 5.6 (4.0, 7.8) 20:3
Cord nd nd nd nd
Spleen nd nd 0.2 (0.1, 0.4) 12:4
Thymus nd nd 12:3
Lung (2.0±1.0)×104 0.1 3:1
Limb 0.1×104 1.0 (0.8, 1.6) 11:2
Heart (6.8±0.4)×104 0.2 3:1
Embryo ageTissueNumber of benzidine-positive cells (mean±s.d.)Estimated circulating number of HSC/RUs/tissueActual number**of HSC/RUs/tissueNumber of recipient mice (RM): number of dilutions (D)
Day 12 Bled circulation (92±11)×104 3.2 (2.6, 4.2) 43:9
AGM (0.8±0.0)×104 0.02 2.7 (1.9, 3.7) 25:7
YS (21±4)×104 0.7 1.8 (1.4, 2.4) 56:13
Liver (131±12)×104 4.4 53 (43, 69) 37:6
Body (w/o liver) nd nd 12.1 (9.1, 19.3) 15:2
Body (w/o liver, AGM, PB) (48±6)× 104 1.6 5.8 (4.4, 8.2) 17:3
Cord nd nd 0.8 (0.6, 1.8) 7:2
Spleen nd nd 7:1
Thymus nd nd 2:1
Lung (1.9±0.1)× 104 0.1 0.4 (0.2, 0.6) 17:6
Limb nd nd 0.5 (0.3, 0.9) 7:2
Heart (4.6±3.0)× 104 0.2 2:1
Head (18±3)×104 0.6 0.5 (0.4, 0.8) 11:3
Day 13 Bled circulation (139±22)×104 5.9 (4.7, 7.7) 49:12
AGM (1.5±0.3)×104 0.05 0.8 (0.6, 1.2) 17:7
YS (24±2)×104 0.8 0.8 (0.6, 1.2) 21:5
Liver (205±20)×104 NA 260 (212, 320) 55:12
Body (w/o liver) (51±11)×104 1.8 5.6 (4.0, 7.8) 20:3
Cord nd nd nd nd
Spleen nd nd 0.2 (0.1, 0.4) 12:4
Thymus nd nd 12:3
Lung (2.0±1.0)×104 0.1 3:1
Limb 0.1×104 1.0 (0.8, 1.6) 11:2
Heart (6.8±0.4)×104 0.2 3:1
*
Circulatiing and actual numbers of HSCs/RUs per tissue were assessed and presented as described in the Materials and Methods.
**
More than 1e.e. was transplanted per recipient in experiments with only one dilution. Tissues in which no HSC/RUs were detected at least one dilution was more than 1e.e.
***
One recipient was found reconstituted. Numbers of HSC/RUs in tissues are estimated using a limiting dilution method and presented as described in Materials and Methods.
### Tissue distribution of definitive HSC/RUs within 11 d.p.c. embryo
According to Poisson statistics the number of HSC/RUs within the AGM region, YS and the liver of the 11 d.p.c. embryo is close to 1 per tissue(Fig. 1). In the present experiments in contrast to our previous report(Muller et al., 1994) we are now able to detect some HSC/RUs in 11 d.p.c. embryonic blood, possibly because of improved methods of preservation of embryonic blood cells before transplantation (see Materials and Methods). Bone marrow from primary recipients reconstituted with HSC/RUs from the 11 d.p.c. circulation could be successfully transferred to secondary recipients (data not shown). This finding has revealed a route for the dissemination of HSC/RUs from the AGM region as early as day 11 p.c., which has previously been suggested but never confirmed experimentally. In these present experiments we were not able to reconstitute recipient mice with the body remnants of day 11 embryos (data not shown) and therefore attribute the previously reported rare cases of reconstitution with body remnants (Muller et al., 1994) to the presence of circulating HSC/RUs and/or the occasional inclusion of umbilical and vitelline vessels into the transplant(de Bruijn et al., 2000).
### Tissue distribution of definitive HSC/RUs within 12 d.p.c. embryo
On day 12 p.c. both the AGM region and the YS contain approximately two to three HSC/RUs each, which is higher than on day 11 p.c.(Fig. 1A). At this time the embryonic circulation also contains about three HSC/RUs. The significant numbers of HSC/RUs in the circulation suggests intensive trafficking of HSC/RUs within the embryo. By day 12 p.c. the number of HSC/RUs in the embryonic liver reaches 53 (43,69) thus increasing approximately 50-fold from day 11 p.c. Multilineage contribution to recipient mice was confirmed by analysis of selected mice(Fig. 2).
Fig. 2.
Multilineage repopulation with 12 d.p.c. embryonic tissues. The analysis of reconstitution of mice transplanted with embryonic tissues was performed 4 months or longer after transplantation either by FACS analysis (A) or using Y-specific PCR as described previously (B)(Medvinsky and Dzierzak,1996). (A) Multilineage reconstitution with 12 d.p.c. cells of embryonic tissues was confirmed in some recipients by co-staining for donor Ly5.1 and lineage-specific B220, Mac-1 and CD3ϵ markers. In some reconstituted mice the percentage of Ly5.1+, Mac- 1+ cells was very low (data not shown). (B) Y-specific PCR analysis of blood samples. Each line on the gel represents an individual mouse. The level of reconstitution was assessed with reference to results of PCR using standard dilutions of male DNA in female DNA. Arrowheads point to the mice reconstituted at the level of close to 10%or more.
Fig. 2.
Multilineage repopulation with 12 d.p.c. embryonic tissues. The analysis of reconstitution of mice transplanted with embryonic tissues was performed 4 months or longer after transplantation either by FACS analysis (A) or using Y-specific PCR as described previously (B)(Medvinsky and Dzierzak,1996). (A) Multilineage reconstitution with 12 d.p.c. cells of embryonic tissues was confirmed in some recipients by co-staining for donor Ly5.1 and lineage-specific B220, Mac-1 and CD3ϵ markers. In some reconstituted mice the percentage of Ly5.1+, Mac- 1+ cells was very low (data not shown). (B) Y-specific PCR analysis of blood samples. Each line on the gel represents an individual mouse. The level of reconstitution was assessed with reference to results of PCR using standard dilutions of male DNA in female DNA. Arrowheads point to the mice reconstituted at the level of close to 10%or more.
Estimates of possible HSC/RUs numbers derived from the circulation in these organs showed that HSC/RUs numbers within both the AGM region and the YS are significantly above the numbers of HSC/RUs attributable to circulating blood present in these tissues (Table 1). To directly test the number of circulating HSC/RUs contained within the AGM region we flushed out 12 d.p.c. dorsal aorta(Table 2). As expected the flushed out samples of embryonic blood contained fewer HSC/RUs than remained in the dorsal aorta [0.3 (0, 1.7) and 1.6 (1.2, 2.4)respectively]. Therefore, either both the AGM region and the YS on day 12 p.c. are involved in the specific production of HSC/RUs, or circulating HSC/RUs have been selectively retained in these tissues. This issue has been more closely examined in organ culture experiments described below.
Table 2.
Distribution of HSC/RUs within day 12 AGM
Number of HSC/RUs per tissueNumber of recipient mice (RM):number of dilutions (D)
Aorta (after flushing) 1.6 (1.2, 2.4) 10:2
Flushed out of the aorta 0.3 (0, 1.7) 11:2
Number of HSC/RUs per tissueNumber of recipient mice (RM):number of dilutions (D)
Aorta (after flushing) 1.6 (1.2, 2.4) 10:2
Flushed out of the aorta 0.3 (0, 1.7) 11:2
The dorsal aorta was dissected and contents flushed out using a mouth micropipette. The dorsal aorta was dissociated and transplanted into irradiated recipients in 2 dilutions. Flushed out samples of circulating blood were also transplanted (as two different dilutions) into a separate group of recipients. Numbers of HSC/RUs in tissues were estimated using a limiting dilution method and presented as described in Materials and Methods. These results support the idea that the majority of HSC/RUs in the AGM do not belong to the pool of circulatory HSC/RUs as shown inTable 1.
In contrast to day 11 p.c., on day 12 p.c. HSC/RUs were consistently detected in the body of the embryo. The number of HSC/RUs in the body of the 12 d.p.c. embryo without the liver was estimated to be 12.1 (8.1,19.3). When in addition to the liver, the AGM region and blood were also removed the total number of HSC/RUs in all remaining tissues was 5.8 (4.4,8.2) (Table 1). From the amount of blood in body transplants (Table 1) we estimate that about 1.6 HSC/RUs in the embryo body belong to the pool of circulating HSC/RUs. Therefore, it may be that apart from the AGM region and the YS a few HSC/RUs are harboured in other tissues of the body. Amongst individually tested tissues (thymus, spleen, lung,forelimbs, heart and head) transplanted separately, the lungs consistently reconstituted irradiated recipients. They contained 0.4 (0.2, 0.6)HSC/RUs, which is above the expected 0.06 HSC/RUs that would be brought there by the circulation (Table 1). Forelimbs also contained about 0.5 (0.3, 0.9) HSCs. Some untested tissues may contain solitary HSC/RUs as well.
### Tissue distribution of definitive HSC/RUs in 13 d.p.c. embryo
By day 13 p.c. of development the number of circulating HSC/RUs in the embryonic vasculature remains high 5.9 (4.7, 7.7) and the number of HSC/RUs in the liver continues to grow, reaching 260 (212, 320)(Fig. 1). The number of HSC/RUs decreases in both the AGM region and the YS to 0.8 (0.6, 1.2) and0.8 (0.6, 1.2) HSC/RU per tissue respectively(Fig. 1,Table 1). The total number of HSC/RUs in the body outside the liver remains stable compared to 12 d.p.c. body (Table 1) and HSC/RUs are no longer detectable in the lungs. Forelimb transplants reconstituted three out of five recipient mice in two dilutions. Since the amount of blood in distal limbs was extremely low, freely circulating HSC/RUs are unlikely to account for this (Table 1). HSC/RUs found in 12-13 d.p.c. limbs may reflect early colonisation of developing long bones with HSC/RUs. Further analysis of the limbs of 14 d.p.c. embryos will be required to test the reliability of this conclusion.
### Analysis of the HSC potential of 11 d.p.c. embryonic tissues using an organ culture approach
In the day 11 p.c. embryo five sites contain HSC/Rus: the AGM region, the YS, the liver, blood and umbilical vessels(Muller et al., 1994;Medvinsky and Dzierzak, 1996;de Bruijn et al., 2000). It has been shown using an organ culture approach that the AGM region is the only one of these tissues capable of expanding or generating HSC/RUs at this age(Medvinsky and Dzierzak, 1996;de Bruijn et al., 2000). Here we have quantitatively reassessed the potential of 11 d.p.c. embryonic tissues to produce HSC/RUs, by quantifying the number of HSC/RUs that are produced and maintained in organ culture. After 3-5 days in culture, the number of HSC/RUs in 11 d.p.c. AGM region increased from 0.9 (0.7, 1.1) to 12 (10.0,17.6) (P<0.05) (Fig. 3A) followed by a drop in the numbers of HSC/RUs after 7-9 days in culture.
Fig. 3.
Potential of day-11 and day-12 embryonic haematopoietic tissues to produce/expand definitive HSC/RUs as assessed by an organ culture approach.(A) Cultured day-11 tissues, (B) cultured day-12 AGM region and YS, (C)cultured day-12 liver. Note that among day-11 tissues (A) only the AGM region was able to expand HSC/RUs and among day-12 tissues (B) only the YS was able to expand HSC/RUs, suggesting two consecutive waves of HSC generation/expansion and liver colonisation, firstly from the AGM and secondly, the next day, from the YS. 12 d.p.c. embryonic liver was not capable of maintaining HSCs: a dramatic decrease in number of HSC/RUs was observed after 3 days in culture (see Discussion). Numbers of HSC/RUs in tissues are estimated using a limiting dilution method and presented as described in Materials and Methods. Numbers of recipient mice (RM) and the dilutions (D)used were as follows. Day 11 tissues (A): (AGM, uncultured) 27RM, 2D; (AGM,3-5 days in culture) 41RM; 8D; (AGM, 7-9 days in culture) 14RM, 3D; (YS uncultured) 25RM, 3D; (YS 3-5 days in culture) 20RM, 6D; (YS 7-9 days in culture) 15RM,3D; (liver, uncultured) 23RM, 2D; (liver, 3-5 days in culture)29RM, 9D; (liver, 7-9 days in culture) 14RM, 4D. Day 12 tissues (B): (AGM,uncultured) 25RM, 7D; (AGM, 3-5 days in culture) 31RM, 7D; (AGM, 7-9 days in culture) 14RM, 3D; (uncultured YS) 39RM, 8D; (YS, 3-5 days in culture) 17RM,4D; (YS, 7-9 days in culture) 11RM, 2D; (liver, uncultured) 37RM, 5D. Day 12 liver (C): (3-5 days in culture) 14RM, 3D; (7-9 days in culture) 24RM; 6D.
Fig. 3.
Potential of day-11 and day-12 embryonic haematopoietic tissues to produce/expand definitive HSC/RUs as assessed by an organ culture approach.(A) Cultured day-11 tissues, (B) cultured day-12 AGM region and YS, (C)cultured day-12 liver. Note that among day-11 tissues (A) only the AGM region was able to expand HSC/RUs and among day-12 tissues (B) only the YS was able to expand HSC/RUs, suggesting two consecutive waves of HSC generation/expansion and liver colonisation, firstly from the AGM and secondly, the next day, from the YS. 12 d.p.c. embryonic liver was not capable of maintaining HSCs: a dramatic decrease in number of HSC/RUs was observed after 3 days in culture (see Discussion). Numbers of HSC/RUs in tissues are estimated using a limiting dilution method and presented as described in Materials and Methods. Numbers of recipient mice (RM) and the dilutions (D)used were as follows. Day 11 tissues (A): (AGM, uncultured) 27RM, 2D; (AGM,3-5 days in culture) 41RM; 8D; (AGM, 7-9 days in culture) 14RM, 3D; (YS uncultured) 25RM, 3D; (YS 3-5 days in culture) 20RM, 6D; (YS 7-9 days in culture) 15RM,3D; (liver, uncultured) 23RM, 2D; (liver, 3-5 days in culture)29RM, 9D; (liver, 7-9 days in culture) 14RM, 4D. Day 12 tissues (B): (AGM,uncultured) 25RM, 7D; (AGM, 3-5 days in culture) 31RM, 7D; (AGM, 7-9 days in culture) 14RM, 3D; (uncultured YS) 39RM, 8D; (YS, 3-5 days in culture) 17RM,4D; (YS, 7-9 days in culture) 11RM, 2D; (liver, uncultured) 37RM, 5D. Day 12 liver (C): (3-5 days in culture) 14RM, 3D; (7-9 days in culture) 24RM; 6D.
As was shown previously, 11 d.p.c. YS is incapable of expanding the initial numbers of explanted HSC/RUs. Each explant of 11 d.p.c. YS contains approximately one HSC before and after 3 days in culture(Fig. 3A). However, in contrast to the AGM region, 11 d.p.c. YS explants are not able to maintain HSC/RUs for a longer time in culture. Similarly, in 11 d.p.c. liver about 1 HSC can be detected after 3 but not after 7 days in culture.
### Analysis of HSC potential of 12 d.p.c. embryonic tissues by organ culture
When 12 d.p.c. AGM region was tested, we found that after 3-5 days in vitro it contained the same number of HSC/RUs as it initially contained in the embryo (Fig. 3B). A slight increase in the numbers of HSC/RUs was observed after 7-9 days in culture. 12 d.p.c. AGMs are larger than 11 d.p.c. AGMs and therefore culture conditions could be suboptimal. To optimise the culture we reduced the size of the explants by subdissection of 12 d.p.c. AGMs and found no signs of HSC expansion in these cultures either (data not shown). In addition, we found that the microenvironment of the 12 d.p.c. AGM is highly supportive of long-term (up to 4 weeks) maintenance of HSC/RUs (Kumaravelu et al.,unpublished observation). Thus, we infer that the ability of the AGM region to expand (and/or generate) definitive HSC/RUs is significantly attenuated on day 12 p.c., as compared to 11 d.p.c., concurrent with progressive specification of the AGM region into gonads and mesonephric derivatives.
In contrast to 11 d.p.c. YS, 12 d.p.c. YS explants acquire the capacity to increase the numbers of HSC/RUs during culture. Before culture 12 d.p.c. YS contains 1.8 (1.4, 2.4) HSC/RUs but after 3 days in culture it contains6.8 (5.0, 9.8) HSC/RUs per explant (P<0.05)(Fig. 3B). However, like the 11 d.p.c. YS and in contrast to the AGM region, 12 d.p.c. YS was not able to maintain HSC/RUs in long-term cultures; after 7 days in culture the number of HSC/RUs dropped to less than 1, being 0.6 (0.4, 1.2) HSC/RUs per YS. This may possibly reflect the transitory nature of haematopoietic activity in the YS in vivo.
Explants of 12 d.p.c., foetal liver were not able to maintain the initial number of HSC/RUs in culture, which could be explained either by suboptimal culture conditions for this tissue or by immaturity of day 12 liver microenvironment (Fig. 3C).
Shortly before the onset of organogenesis the embryo starts to generate a transitory population of embryonic haematopoietic cells that serve its immediate needs. These first haemopoietic cells, consisting mainly of committed myeloid progenitors, CFU-S and primitive erythroid cells, appear in the embryonic circulation in growing numbers and then colonise the initially haematopoietically inactive embryonic liver(Moore and Metcalf, 1970;Johnson and Barker, 1985;Medvinsky, 1993;Medvinsky et al., 1996). Definitive HSCs, which give rise to the adult haematopoietic hierarchy,develop slightly later and gradually form a massive pool in the foetal liver,which becomes the main source of HSCs which subsequently colonise the bone marrow (Dzierzak and Medvinsky,1995; Morrison et al.,1997; Dzierzak et al.,1998). Owing to the absence of unique markers for definitive HSCs at present, direct monitoring of their origin and movements during development, as well as definition of the developmental boundaries of the haematopoietic system is not possible and can only be inferred indirectly from HSC functional assays. Some markers have been successfully used to narrow down the anatomical location of definitive HSCs but their expression is not restricted purely to HSCs (North et al.,1999; Manaia et al.,2000; Ma et al.,2002). In addition, HSC migration is not necessarily restricted to the vascular network and their location in tissues may not be accompanied, as has been shown for the AGM region, by active haematopoiesis(Medvinsky et al., 1996;Godin et al., 1999). Although some progress has been achieved, in contrast to the development of solid organs, an anatomical and histological description of development of the definitive haematopoietic system is lacking. Here, using a quantitative approach we analyse the early development of definitive HSC/RUs, the germinal layer' of the definitive haematopoietic system.
Early development of definitive HSC/RUs in the mouse embryo involves at least two key stages; (i) initiation of definitive HSC/RUs (late day 10-early day 11 p.c.) and (ii) expansion of the pool of definitive HSC/RUs. In addition, some tissues may be involved in the maintenance of HSC/RUs. Since HSCs may rapidly change their location during embryogenesis, detection of HSCs in tissues does not identify whether these tissues are capable of generating,expanding, or maintaining them or whether these tissues merely transiently contain them. In order to try and distinguish between these possibilities, we and others previously developed an organ culture approach which enables isolated tissues to be individually tested to reveal their HSC activity(Medvinsky and Dzierzak, 1996;Cumano et al., 2001). For example, the number of HSC/RUs within the AGM region on days 10-11 p.c. is no higher than within the YS and the liver. However, in contrast to other embryonic tissues at this age, the AGM region is capable of autonomously initiating and expanding HSC/RUs in vitro suggesting that the initial pool of definitive HSC/RUs is generated within the AGM region and these then colonise the liver (Morrison et al.,1995; Medvinsky and Dzierzak,1996; Ema and Nakauchi,2000).
Here, using an improved protocol we have been able to detect rare HSC/RUs in day 11 circulation (compare with our previous report)(Muller et al., 1994). This reveals a previously proposed, but never experimentally shown, route by which 11 d.p.c. AGM derived HSCs can colonise the liver. During days 12-13 p.c. the number of HSC/RUs in the circulation rises (approximately 6 HSC/RUs were detected on day 13 p.c.) consistent with the more intense colonisation of the foetal liver with HSCs from extra-hepatic sources during this period. It is worth bearing in mind that HSC/RUs present in the circulation at the moment of embryo dissection are likely to represent only a small proportion of total HSC/RUs present in the circulation over the entire day of gestation.
Our present calculations indicate that the total number of HSC/RUs within the developing embryo increases dramatically from about 3 (day 11 p.c.) to 66(day 12 p.c.) mainly due to accumulation of HSC/RUs in the liver(Table 3). This is in accordance with previously published numbers of HSC/RUs in day 12 liver(Ema and Nakauchi, 2000). Owing to the length of the cell cycle it is unlikely that such an increase in HSC number occurs entirely from amplification of a few HSC/RUs that initially colonized the liver. If the possibility of de novo/primary formation of HSC/RUs in the liver is ruled out then this increase must be the result of a massive immigration of HSC/RUs from extra-liver source(s). Our quantitative data are in accord with this hypothesis as explained below.
Table 3.
Total increase in HSC/RUs numbers in the developing mouse embryo
Day 10Day 11Day 12
AGM region ∼0 ∼0.9 ∼2.7
Yolk sac ∼1.1 ∼1.8
Circulation ∼0 ∼3.1
Liver ∼0.7 ∼53
Body (w/o AGM, circulation, liver) ∼5.8
Total ∼3 ∼66
Day 10Day 11Day 12
AGM region ∼0 ∼0.9 ∼2.7
Yolk sac ∼1.1 ∼1.8
Circulation ∼0 ∼3.1
Liver ∼0.7 ∼53
Body (w/o AGM, circulation, liver) ∼5.8
Total ∼3 ∼66
In the 11 d.p.c. embryo the number of HSC/RUs within the liver is low and can be easily explained by colonisation from the AGM region. In fact, one 11 d.p.c. AGM region can produce as many as twelve HSC/RUs after 3 days in culture; and we assume that in vivo this process may be significantly more efficient. By 12 d.p.c. the HSC productivity of the AGM region decreases as assessed by the organ culture test. However measurement of HSC/RUs numbers in uncultured AGMs showed that the number of HSC/RUs in the AGM region is still higher than would be expected in a non-haematopoietic tissue. Conversely, by 12 d.p.c. the YS showed noticeable HSC productivity in vitro. It was capable of expanding the number of HSC/RUs from about 1.8 to 6.8 during a 3-day culture period.
Thus, at early stages of liver colonisation the high cumulative HSC productivity of the AGM region and the YS may provide the liver with a large part of the ready-to-use' pool of HSCs(Fig. 4). It is interesting that in the same culture conditions the liver itself was unable to increase or even maintain the initial number of HSCs. This suggests either that by day 12 p.c. the liver is not yet competent to expand HSCs or that the culture conditions used are not fully adequate. It is important to note in relation to this that a cell line derived from day 14 foetal liver is capable of maintaining HSCs over a period of 1 month(Moore et al., 1997).
Fig. 4.
Schematic representation of colonisation of the embryonic liver with HSC/RUs. The number of HSC/RUs in the liver are as determined in vivo. The number of HSC/RUs in the AGM region and the YS are the numbers that these tissues are able to generate in vitro. In vivo, the high cumulative activity of the AGM region and the YS may provide the liver with a high proportion of the ready-to-use' pool of definitive HSC/RUs. The data suggest consecutive colonisation of the embryonic liver with HSC/RUs from the AGM region and the YS.
Fig. 4.
Schematic representation of colonisation of the embryonic liver with HSC/RUs. The number of HSC/RUs in the liver are as determined in vivo. The number of HSC/RUs in the AGM region and the YS are the numbers that these tissues are able to generate in vitro. In vivo, the high cumulative activity of the AGM region and the YS may provide the liver with a high proportion of the ready-to-use' pool of definitive HSC/RUs. The data suggest consecutive colonisation of the embryonic liver with HSC/RUs from the AGM region and the YS.
The generation/expansion of definitive HSCs/RUs in 12 p.c. YS culture is likely related to previous observations that from 8-10 d.p.c. the YS contains immature cells that when placed into an embryonic or newborn environment become capable of contributing into adult haematopoiesis(Weissman et al., 1978;Toles et al., 1989;Yoder and Hiatt, 1997;Yoder et al., 1997;Matsuoka et al., 2001). Our data suggest that these early YS cells may not necessarily need processing in the AGM region to become functional definitive HSCs but can mature in situ on day 12 p.c. The controversy over the origin of HSCs has always centred on the issue of whether the P-Sp/AGM region or the YS is the initial site of their generation. However, these data indicate that definitive HSCs may develop independently and asynchronously in these two different sites of the mouse embryo suggesting that the argument over the first source of HSCs is not relevant. However, at present, the possibility of cross-seeding of the YS and the AGM region with definitive HSCs and/or their ancestor cells cannot be excluded. Lineage tracing of YS and AGM haematopoiesis from early stages of development is required to finally resolve this issue.
In summary, we have carried out a comprehensive anatomical mapping of the development of definitive HSC/RUs in the mouse embryo from 11-13 d.p.c. during which time the embryonic liver becomes colonised. We have shown that increasing numbers of HSC/RUs in the liver is accompanied by the appearance of growing numbers of HSC/RUs in the embryonic blood. The data presented here suggests that in addition to early waves of colonisation with committed and multipotent haematopoietic progenitors(Moore and Metcalf, 1970;Johnson and Barker, 1985;Dzierzak and Medvinsky, 1995)the liver is colonised by two consecutive waves of definitive HSC/RUs. The initial wave of HSC/RUs arrives from the AGM region on day 10 p.c., reaches a maximum by day 11 p.c. and disappears by day 13 p.c. On day 12 p.c. when AGM activity is decreasing, the second wave of colonisation arrives from the YS. This wave marks the embryonic stage when early YS cells mature into definitive HSC/RUs.
A visual demonstration of development of HSC/RUs in 10-12 d.p.c. mouse embryo as it is viewed by the authors is presented as a movie and accompanies the Web version of the article(Supplementary Information).
Movie available on-line
We are grateful to Prof. John Bishop (Edinburgh) and Prof. Josef Chertkov(Moscow) for helpful discussions; Dr Tony Hunter (Edinburgh) for help with statistical analysis. We thank Dave Kwant (Edinburgh) for preparing the movie. We thank Noemi Cambray for technical help and the staff of the animal house,John Verth, John Tweedie, Carol Manson for taking care of our experimental animals. This work was supported by the Leukaemia Research Fund grant 9656 to A. M. and J. A. P. K. was a fellow of the International Journal of Experimental Pathology. A. M. is an MRC Senior Fellow.
Abkowitz, J. L., Golinelli, D., Harrison, D. E. and Guttorp,P. (
2000
). In vivo kinetics of murine hemopoietic stem cells.
Blood
96
,
3399
-3405.
Berger, C. N. and Sturm, K. S. (
1996
). Estimation of the number of hematopoietic precursor cells during fetal mouse development by covariance analysis.
Blood
88
,
2502
-2509.
Chen, X. D. and Turpen, J. B. (
1995
). Intraembryonic origin of hepatic hematopoiesis in Xenopus laevis.
J. Immunol.
154
,
2557
-2567.
Ciau-Uitz, A., Walmsley, M. and Patient, R.(
2000
). Distinct origins of adult and embryonic blood in Xenopus.
Cell
102
,
787
-796.
Cumano, A., Dieterlen-Lievre, F. and Godin, I.(
1996
). Lymphoid potential, probed before circulation in mouse,is restricted to caudal intraembryonic splanchnopleura.
Cell
86
,
907
-916.
Cumano, A., Ferraz, J. C., Klaine, M., Di Santo, J. P. and Godin, I. (
2001
). Intraembryonic, but not yolk sac hematopoietic precursors, isolated before circulation, provide long-term multilineage reconstitution.
Immunity
15
,
477
-485.
Cumano, A., Paige, C. J., Iscove, N. N. and Brady, G.(
1992
). Bipotential precursors of B cells and macrophages in murine fetal liver.
Nature
356
,
612
-615.
de Bruijn, M. F., Speck, N. A., Peeters, M. C. and Dzierzak,E. (
2000
). Definitive hematopoietic stem cells first develop within the major arterial regions of the mouse embryo.
EMBO J.
19
,
2465
-2474.
de Bruijn, M. F. T. R., Ma, X., Robin, C., Ottersbach, K.,Sanchez, M.-J. and Dzierzak, E. (
2002
). Hematopoietic Stem Cells Localize to the Endothelial Cell Layer in the Midgestation Mouse Aorta.
Immunity
16
,
673
-683.
Delassus, S. and Cumano, A. (
1996
). Circulation of hematopoietic progenitors in the mouse embryo.
Immunity
4
,
97
-106.
Dieterlen-Lievre, F. (
1975
). On the origin of haemopoietic stem cells in the avian embryo: an experimental approach.
J. Embryol. Exp. Morphol.
33
,
607
-619.
Douagi, I., Colucci, F., Di Santo, J. P. and Cumano, A.(
2002
). Identification of the earliest prethymic bipotent T/NK progenitor in murine fetal liver.
Blood
99
,
463
-471.
Dzierzak, E. and Medvinsky, A. (
1995
). Mouse embryonic hematopoiesis.
Trends Genet.
11
,
359
-366.
Dzierzak, E. and Medvinsky, A. (
1998
). Developmental Origins of Haematopoietic Stem Cells. In
Molecular Biology of B-cell and T-cell Development
(ed. J. G. Monroe and E. V. Rothenberg), pp.
3
-25. Totowa, CA: Humana Press.
Dzierzak, E., Medvinsky, A. and de Bruijn, M.(
1998
). Qualitative and quantitative aspects of haematopoietic cell development in the mammalian embryo.
Immunol. Today
19
,
228
-236.
Ema, H., Douagi, I., Cumano, A. and Kourilsky, P.(
1998
). Development of T cell precursor activity in the murine fetal liver.
Eur. J. Immunol.
28
,
1563
-1569.
Ema, H. and Nakauchi, H. (
2000
). Expansion of hematopoietic stem cells in the developing liver of a mouse embryo.
Blood
95
,
2284
-2288.
Eren, R., Auerbach, R. and Globerson, A.(
1987
). T cell ontogeny: extrathymic and intrathymic development of embryonic lymphohemopoietic stem cells.
Immunol. Res.
6
,
279
-287.
GenStat 5 Release 3 Reference Manual, G. C.
(
1993
). ed., pp.
796
. Oxford: Clarendon Press.
Godin, I., Garcia-Porrero, J. A., Dieterlen-Lievre, F. and Cumano, A. (
1999
). Stem cell emergence and hemopoietic activity are incompatible in mouse intraembryonic sites.
J. Exp. Med.
190
,
43
-52.
Godin, I. E., Garcia-Porrero, J. A., Coutinho, A.,Dieterlen-Lievre, F. and Marcos, M. A. (
1993
). Para-aortic splanchnopleura from early mouse embryos contains B1a cell progenitors.
Nature
364
,
67
-70.
Hsu, H. C., Ema, H., Osawa, M., Nakamura, Y., Suda, T. and Nakauchi, H. (
2000
). Hematopoietic stem cells express Tie-2 receptor in the murine fetal liver.
Blood
96
,
3757
-3762.
Ikuta, K., Kina, T., MacNeil, I., Uchida, N., Peault, B., Chien,Y. H. and Weissman, I. L. (
1990
). A developmental switch in thymic lymphocyte maturation potential occurs at the level of hematopoietic stem cells.
Cell
62
,
863
-874.
Iuchi, I. and Yamamoto, M. (
1983
). Erythropoiesis in the developing rainbow trout, Salmo gairdneri irideus:histochemical and immunochemical detection of erythropoietic organs.
J. Exp. Zool.
226
,
409
-417.
Johnson, G. R. and Barker, D. C. (
1985
). Erythroid progenitor cells and stimulating factors during murine embryonic and fetal development.
Exp. Hematol.
13
,
200
-208.
Kawamoto, H., Ohmura, K. and Katsura, Y.(
1998
). Presence of progenitors restricted to T, B, or myeloid lineage, but absence of multipotent stem cells, in the murine fetal thymus.
J. Immunol.
161
,
3799
-3802.
Lemischka, I. R. (
1992
). The haematopoietic stem cell and its clonal progeny: mechanisms regulating the hierarchy of primitive haematopoietic cells.
Cancer Surv.
15
,
3
-18.
Liu, C. P. and Auerbach, R. (
1991
). In vitro development of murine T cells from prethymic and preliver embryonic yolk sac hematopoietic stem cells.
Development
113
,
1315
-1323.
Liu, L. Q., Ilaria, R., Jr, Kingsley, P. D., Iwama, A., van Etten, R. A., Palis, J. and Zhang, D. E. (
1999
). A novel ubiquitin-specific protease, UBP43, cloned from leukemia fusion protein AML1-ETO-expressing mice, functions in hematopoietic cell differentiation.
Mol. Cell. Biol.
19
,
3029
-3038.
Ma, X., de Bruijn, M., Robin, C., Peeters, M., Kong, A. S. J.,de Wit, T., Snoijs, C. and Dzierzak, E. (
2002
). Expression of the Ly-6A (Sca-1) lacZ transgene in mouse haematopoietic stem cells and embryos.
Br. J. Haematol.
116
,
401
-408.
Manaia, A., Lemarchandel, V., Klaine, M., Max-Audit, I., Romeo,P., Dieterlen-Lievre, F. and Godin, I. (
2000
). Lmo2 and GATA-3 associated expression in intraembryonic hemogenic sites.
Development
127
,
643
-653.
Martin, C., Lassila, O., Nurmi, T., Eskola, J.,Dieterlen-Lievre, F. and Toivanen, P. (
1979
). Intraembryonic origin of lymphoid stem cells in the chicken: studies with sex chromosome and IgG allotype markers in histocompatible yolk sac-embryo chimaeras.
Scand. J. Immunol.
10
,
333
-338.
Matsuoka, S., Tsuji, K., Hisakawa, H., Xu, M., Ebihara, Y.,Ishii, T., Sugiyama, D., Manabe, A., Tanaka, R., Ikeda, Y. et al.(
2001
). Generation of definitive hematopoietic stem cells from murine early yolk sac and paraaortic splanchnopleures by aorta-gonad-mesonephros region-derived stromal cells.
Blood
98
,
6
-12.
Medvinsky, A. (
1993
). Ontogeny of the mouse hematopoietic system.
Semin. Dev. Biol.
4
,
333
-340.
Medvinsky, A. and Dzierzak, E. (
1996
). Definitive hematopoiesis is autonomously initiated by the AGM region.
Cell
86
,
897
-906.
Medvinsky, A. and Dzierzak, E. (
1999
). Development of the hematopoietic stem cell: can we describe it? [letter].
Blood
94
,
3613
-3614.
Medvinsky, A. L., Gan, O. I., Semenova, M. L. and Samoylina, N. L. (
1996
). Development of day-8 colony-forming unit-spleen hematopoietic progenitors during early murine embryogenesis: spatial and temporal mapping.
Blood
87
,
557
-566.
Medvinsky, A. L., Samoylina, N. L., Muller, A. M. and Dzierzak,E. A. (
1993
). An early pre-liver intraembryonic source of CFU-S in the developing mouse.
Nature
364
,
64
-67.
Moore, K. A., Ema, H. and Lemischka, I. R.(
1997
). In vitro maintenance of highly purified, transplantable hematopoietic stem cells.
Blood
89
,
4337
-4347.
Moore, M. A. and Metcalf, D. (
1970
). Ontogeny of the haemopoietic system: yolk sac origin of in vivo and in vitro colony forming cells in the developing mouse embryo.
Br. J. Haematol.
18
,
279
-296.
Morrison, S. J., Hemmati, H. D., Wandycz, A. M. and Weissman, I. L. (
1995
). The purification and characterization of fetal liver hematopoietic stem cells.
92
,
10302
-10306.
Morrison, S. J., Wright, D. E., Cheshier, S. H. and Weissman, I. L. (
1997
). Hematopoietic stem cells: challenges to expectations.
Curr. Opin. Immunol.
9
,
216
-221.
Muller, A. M., Medvinsky, A., Strouboulis, J., Grosveld, F. and Dzierzak, E. (
1994
). Development of hematopoietic stem cell activity in the mouse embryo.
Immunity
1
,
291
-301.
Nishikawa, S. I., Nishikawa, S., Kawamoto, H., Yoshida, H.,Kizumoto, M., Kataoka, H. and Katsura, Y. (
1998
). In vitro generation of lymphohematopoietic cells from endothelial cells purified from murine embryos.
Immunity
8
,
761
-769.
North, T., Gu, T. L., Stacy, T., Wang, Q., Howard, L., Binder,M., Marin-Padilla, M. and Speck, N. A. (
1999
). Cbfa2 is required for the formation of intra-aortic hematopoietic clusters.
Development
126
,
2563
-2575.
North, T. C., de Bruijn, M. F. T. R., Stacy, T., Talebian, L.,Lind, L., Robin, C., Binder, M., Dzierzak, E. and Speck, N. A.(
2002
). Runx1 expression marks long-term repopulating hematopoietic stem cells in the midgestation mouse embryo.
Immunity
16
,
661
-672.
Ohmura, K., Kawamoto, H., Lu, M., Ikawa, T., Ozaki, S., Nakao,K. and Katsura, Y. (
2001
). Immature multipotent hemopoietic progenitors lacking long-term bone marrow-reconstituting activity in the aorta-gonad-mesonephros region of murine day 10 fetuses.
J. Immunol.
166
,
3290
-3296.
Palis, J., Chan, R. J., Koniski, A., Patel, R., Starr, M. and Yoder, M. C. (
2001
). Spatial and temporal emergence of high proliferative potential hematopoietic precursors during murine embryogenesis.
98
,
4528
-4533.
Rodewald, H. R., Kretzschmar, K., Takeda, S., Hohl, C. and Dessing, M. (
1994
). Identification of pro-thymocytes in murine fetal blood: T lineage commitment can precede thymus colonization.
EMBO J.
13
,
4229
-4240.
Sanchez, M. J., Holmes, A., Miles, C. and Dzierzak, E.(
1996
). Characterization of the first definitive hematopoietic stem cells in the AGM and liver of the mouse embryo.
Immunity
5
,
513
-525.
Szilvassy, S. J., Humphries, R. K., Lansdorp, P. M., Eaves, A. C. and Eaves, C. J. (
1990
). Quantitative assay for totipotent reconstituting hematopoietic stem cells by a competitive repopulation strategy.
87
,
8736
-8740.
Tavian, M., Robin, C., Coulombel, L. and Peault, B.(
2001
). The human embryo, but not its yolk sac, generates lympho-myeloid stem cells: mapping multipotent hematopoietic cell fate in intraembryonic mesoderm.
Immunity
15
,
487
-495.
Toles, J. F., Chui, D. H., Belbeck, L. W., Starr, E. and Barker,J. E. (
1989
). Hemopoietic stem cells in murine embryonic yolk sac and peripheral blood.
86
,
7456
-7459.
Traver, D., Miyamoto, T., Christensen, J., Iwasaki-Arai, J.,Akashi, K. and Weissman, I. L. (
2001
). Fetal liver myelopoiesis occurs through distinct, prospectively isolatable progenitor subsets.
Blood
98
,
627
-635.
Turpen, J. B., Kelley, C. M., Mead, P. E. and Zon, L. I.(
1997
). Bipotential primitive-definitive hematopoietic progenitors in the vertebrate embryo.
Immunity
7
,
325
-334.
Velardi, A. and Cooper, M. D. (
1984
). An immunofluorescence analysis of the ontogeny of myeloid, T, and B lineage cells in mouse hemopoietic tissues.
J. Immunol.
133
,
672
-677.
Weissman, I., Pappaioannou, V. and Gardner, R.(
1978
). Fetal haematopoietic origins of the adult hematolymphoid system. In
Cold Spring Harbor Meeting on Differentiation of Normal and Neoplastic Hematopoietic Cells
, (ed. B. Clarkson P. A. Marks and J. E. Till), pp.
33
-47. Cold Spring Harbor, NY:Cold Spring Harbor Laboratory Press.
Wong, P. M., Chung, S. W., Chui, D. H. and Eaves, C. J.(
1986
). Properties of the earliest clonogenic hemopoietic precursors to appear in the developing murine yolk sac.
83
,
3851
-3854.
Yoder, M. C. and Hiatt, K. (
1997
). Engraftment of embryonic hematopoietic cells in conditioned newborn recipients.
Blood
89
,
2176
-2183.
Yoder, M. C., Hiatt, K., Dutt, P., Mukherjee, P., Bodine, D. M. and Orlic, D. (
1997
). Characterization of definitive lymphohematopoietic stem cells in the day 9 murine yolk sac.
Immunity
7
,
335
-344. | 2022-12-01 21:25:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5217558741569519, "perplexity": 11812.00187504445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710869.86/warc/CC-MAIN-20221201185801-20221201215801-00368.warc.gz"} |
http://physics.stackexchange.com/questions/16018/does-a-photon-have-a-rest-frame?answertab=active | # Does a photon have a rest frame?
Quite a few of the questions given on this site mention a photon having a rest frame such as it having a zero mass in its rest frame. I find this contradictory since photons must travel at the seed of light in all frames according to special relativity
Does a photon have a rest frame?
-
Not in vacuum, but the question makes sense in a transparent medium. Experts, what does an observer see in a moving RF in a medium if its velocity is c/n? – Vladimir Kalitvianski Oct 21 '11 at 18:12
I understand that in a medium you are dealing with a wave packet of photons, and not a single photon. In the lab frame, each photon in the packet moves at the speed c, but the packet's group speed is c/n. If you shot a pulse of light into a medium and then followed it at the speed c/n, you would just see a Lorentz-transformed packet of photons. Each photon will still move at the speed c, but the group speed of the packet would be zero. And no, a photon does not have a rest frame, if special relativity applies. – drlemon Oct 21 '11 at 19:40
A one-photon propagation mode is also possible in a medium, hence a photon is a wave train of a finite length, no? – Vladimir Kalitvianski Oct 21 '11 at 20:43
@drlemon: your comment is not right--- there is no packet--- it happens photon by photon. – Ron Maimon Oct 21 '11 at 20:52
@Ron and Vladimir: In E&M textbooks, the speed of light in a medium is derived for a monochromatic wave. The physical reasoning for slowing down the speed of light in a medium is that atoms polarize and produce their own E/M fields, interfering with the incident wave. The total E/M field is that of a monochromatic wave traveling at the speed c/n. There are no individual photons in this analysis. An individual photon will be scattered by atoms back and forth, and will, in general, have some stochastic transport. On average it will travel at c/n, but it will not just slow down to c/n. – drlemon Oct 21 '11 at 21:39
Explanation:
Many introductory text books talk about "rest mass" and "relativistic mass" and say that the "rest mass" is the mass measured in the particles rest frame.
That's not wrong, you can do physics in that point of view, but that is not how people talk about and define mass anymore.
In the modern view each particle has one and only one mass defined by the square of it's energy--momentum four vector (which being a Lorentz invariant you can calculate in any inertial frame): $$m \equiv p^2 = (E, \vec{p})^2 = E^2 - \vec{p}^2$$
For a photon this value is zero. In any frame, and that allows people to reasonably say that the photon has zero mass without needing to define a rest frame for it.
-
I agree completely with @dmckee and would only add that for any particle the elapsed time experienced by that particle in it's rest frame is called the proper time and can be calculated (in units where $c=1$) by any observer as $$d\tau^2 = dt^2 - d\vec{x}^2$$ and for a photon in a vacuum the proper time is always identically $0$. So photons do not experience any passage of time so in that sense also, they do not have a rest frame. – FrankH Oct 21 '11 at 19:58
And in QM the photon energy is $\hbar\omega$ and $\omega$ in a medium is the same, so $m_{photon}=0$. – Vladimir Kalitvianski Oct 21 '11 at 20:41
Your answers are right,a solitary photon has no rest frame, nonetheless I find quite interesting to note that a system of massless particles(such as photons) can have a nonzero mass provided that all the momenta are not oriented in the same axis and that for such systems zero momentum frames CAN actually be defined.
-
Not at all. Rest frame is a concept that does not exist in nature. Had it would exist, nature wouldn't be causal. A photon propagating through medium does not 'move' in a speed smaller than the speed of light in vacuum. It simply interacts electromagnetically with the medium and these interactions slows down its propagation through the medium.
-
"Rest frame is a concept that does not exist in nature." That's a strange way of stating things. If (in SR) in some frame $L$ you observe a (massive) particle moving at a speed $v < c$, you can most definitely pass to some frame $L'$ in which the particular doesn't move. – Gerben Oct 22 '11 at 11:21 | 2013-05-25 18:01:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5636165142059326, "perplexity": 370.38878067247333}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706009988/warc/CC-MAIN-20130516120649-00051-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://ompl.kavrakilab.org/core/classompl_1_1geometric_1_1BFMT_1_1BiDirMotion.html | ompl::geometric::BFMT::BiDirMotion Class Reference
Representation of a bidirectional motion. More...
#include <ompl/geometric/planners/fmt/BFMT.h>
## Public Types
enum SetType { SET_CLOSED , SET_OPEN , SET_UNVISITED }
The FMT* planner begins with all nodes included in set Unvisited "Waiting for optimal connection". As nodes are connected to the tree, they are transferred into set Open "Horizon of explored tree." Once a node in Open is no longer close enough to the frontier to connect to any more nodes in Unvisited, it is removed from Open. These three SetTypes are flags indicating which set the node belongs to; Open, Unvisited, or Closed (neither)
using BiDirMotionPtrs = std::vector< BiDirMotion * >
## Public Member Functions
BiDirMotion (TreeType *tree)
BiDirMotion (const base::SpaceInformationPtr &si, TreeType *tree)
Constructor that allocates memory for the state.
base::Cost getCost () const
Set the state associated with the motion.
base::Cost getOtherCost () const
Get cost of this motion in the inactive tree.
void setCost (base::Cost cost)
Set the cost of the motion.
void setParent (BiDirMotion *parent)
Set the parent of the motion.
BiDirMotiongetParent () const
Get the parent of the motion.
void setChildren (BiDirMotionPtrs children)
Set the children of the motion.
BiDirMotionPtrs getChildren () const
Get the children of the motion.
void setCurrentSet (SetType set)
Set the current set of the motion.
SetType getCurrentSet () const
Fet the current set of the motion.
SetType getOtherSet () const
Get set of this motion in the inactive tree.
void setTreeType (TreeType *treePtr)
Set tree identifier for this motion.
TreeType getTreeType () const
Get tree identifier for this motion.
void setState (base::State *state)
Set the state associated with the motion.
base::StategetState () const
Get the state associated with the motion.
Returns true if the connection to m has been already tested and failed because of a collision.
Caches a failed collision check to m.
void setHeuristicCost (const base::Cost h)
Set the cost to go heuristic cost.
base::Cost getHeuristicCost () const
Get the cost to go heuristic cost.
## Public Attributes
base::Statestate_
The state contained by the motion.
BiDirMotionparent_ [2]
The parent motion in the exploration tree
BiDirMotionPtrs children_ [2]
The set of motions descending from the current motion
SetType currentSet_ [2]
Current set in which the motion is included.
TreeTypetree_
Tree identifier
base::Cost cost_ [2]
The cost of this motion
base::Cost hcost_ [2]
The minimum cost to go of this motion (heuristically computed)
std::set< BiDirMotion * > collChecksDone_
Contains the connections attempted FROM this node.
## Detailed Description
Representation of a bidirectional motion.
Definition at line 273 of file BFMT.h.
The documentation for this class was generated from the following file:
• ompl/geometric/planners/fmt/BFMT.h | 2022-05-21 02:36:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2956426739692688, "perplexity": 8699.948789582417}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662534773.36/warc/CC-MAIN-20220521014358-20220521044358-00150.warc.gz"} |
http://nicholshayes.co.uk/blog/ | ## Enabling PostCSS with Rails 6
PostCSS is a tool for transforming CSS with JavaScript. To enable it in a Rails application, I needed to make the following changes to my app. Please note that my app does not use turbolinks.
/app/views/layouts/application.html.erb
Update so that the style sheet directive in the header uses stylesheet_pack_tag. That is, replace:
<%= stylesheet_link_tag 'application', media: 'all' %>
With:
<%= stylesheet_pack_tag 'application', media: 'all' %>
Sorry – opening brackets aren’t rendering nicely using the code include I’m using in this blog. Both declarations should start with a less than chevron.
/package.json
{
"name": "my_app",
"private": true,
"dependencies": {
"@rails/actioncable": "^6.0.0-alpha",
"@rails/activestorage": "^6.0.0-alpha",
"@rails/ujs": "^6.0.0-alpha",
"@rails/webpacker": "^4.0.7",
"polyfill-nodelist-foreach": "^1.0.1",
"postcss-browser-reporter": "^0.6.0",
"postcss-import": "^12.0.1",
"postcss-inline-svg": "^4.1.0",
"postcss-preset-env": "^6.7.0",
"postcss-reporter": "^6.0.1",
"postcss-svgo": "^4.0.2"
},
"version": "0.1.0",
"devDependencies": {
"webpack-dev-server": "^3.9.0"
}
}
/postcss.config.js
Add the following content to a file called
postcss.config.js
in the root of the app:
module.exports = () => ({
plugins: [
require("postcss-import"),
require("postcss-preset-env")({
autoprefixer: {},
features: {
"focus-within": true,
"nesting-rules": true,
"color-mod-function": {
unresolved: "warn"
},
"custom-properties": {
preserve: false,
warnings: true
}
}
}),
require("postcss-browser-reporter"),
require("postcss-reporter")
]
});
I believe that the rails webpacker stack already contains the PostCSS packages, but you may have to install them via yarn. I didn’t but then I have been trying to get this to work for a couple of days and may have installed them via another mechanism.
With the the above changes in place, I was then able to load PostCSS files. Note that it was simplest to do this via the javascript folder as they needed to be accessed relative to packs application.js file.
So as an example, I created the following file:
/app/javascript/packs/test.css
html,
body {
background: lightyellow;
}
Then to enable it I had to modify /app/javascript/packs/application.js by adding the following line:
import './test.css'
When I ran my app, the background turned yellow as expected.
One thing to note is that once these changes have been put in place, the sass stylesheets in /app/stylesheets are no longer loaded by the app.
Posted in Uncategorized | Comments Off on Enabling PostCSS with Rails 6
## Private methods in Ruby
I think the way private methods are defined, is one of Ruby’s few weaknesses. The main problem being that they separate the private methods from the methods that use them. For example, I prefer this:
def one
internal_method(1)
end
def two
internal_method(2)
end
def internal_method(n)
n
end
def three
other_internal_method(3)
end
def four
other_internal_method(4)
end
def other_internal_method(n)
n
end
To:
def one
internal_method(1)
end
def two
internal_method(2)
end
def three
other_internal_method(3)
end
def four
other_internal_method(4)
end
private
def internal_method(n)
n
end
def other_internal_method(n)
n
end
Because the former keeps the internal_method code close to the methods that use it.
This issue is minimised if we keep classes small, but that’s not always as easy as it sounds.
I’m also not particularly convinced about most arguments for making methods private. It takes a much more far sighted developer than me, to be sure that a method that I currently only use internally, won’t be useful externally later.
I’d also be more convinced of the necessity to privatize methods if accessing private method was particularly hard, but it is not: send(:private_method)
It is also worth considering what is being written. If the code is part of a shared library (a gem for example), there is a more convincing argument for privatizing methods. However, if the Ruby code is written as part of a monolithic application (a Rails app for example), then I think the argument is far less convincing.
Surely the only good reason for using a private method, is where the logic is only valid within the context of the current object, and using it outside that context is likely to result in errors. That is, external use will break things.
For example where I do use private methods is in Rails controllers, where public methods could accidentally be exposed by a routing error causing security vulnerabilities and unexpected usage. If only the action methods are public, this vulnerability is greatly reduced.
In summary, I’d say that a method should only be private if using it externally breaks something. If calling it externally doesn’t break anything, why restrict its use unnecessarily.
Posted in Ruby | Comments Off on Private methods in Ruby
## Downgrading the ElasticSearch version in SemaphoreCI
In our current project, we are using Elasticsearch for rapid search (it is the best search engine I’ve used to date), and are using Semaphore CI for continuous integration (Semaphore is new to me, but so far it looks very good). However, since the start of the project and now, the current Elasticsearch version has changed, and our code doesn’t match the latest version. Semaphore has Elasticsearch available, but in a new (and for us incompatible) version to the one used in our project.
Medium term, we need to update the version of Elasticsearch we are using on our project, but for now we need Semaphore to work with our code. To achieve that we need to install an older version of Elasticsearch in our Semaphore environment.
So my first step was a google, but that didn’t give me an obvious answer (there were a couple of scripts, but I wasn’t sure where to put them or how to use them).
I found the solution by using Semaphore support live chat. The response was very quick and I had a work solution within minutes. Another plus point for Semaphore.
The solution was to use the following commands:
These needed to be added to our Semaphore Project’s Build Settings, in the Setup section:
With that code in place we were able to set the Elasticsearch version to 2.4.1 in our Semaphore project and the problem was solved.
An update
On using the script in anger for a few days we found we were sometimes getting build failures.
Purging configuration files for elasticsearch (5.2.2) ...
userdel: user elasticsearch is currently used by process 2662
dpkg: error processing package elasticsearch (--purge):
subprocess installed post-removal script returned error exit status 8
Errors were encountered while processing:
elasticsearch
The process to purge the existing Elasticsearch files was occurring before the process had stopped. To fix this we created a modified version of the script with a 1 second pause prior to the file purge.
service elasticsearch stop
sleep 1
apt-get purge -y elasticsearch
This seems to have fixed the problem.
Posted in Ruby | Tagged , | Comments Off on Downgrading the ElasticSearch version in SemaphoreCI
## The How, When and Why of Ruby
One common question asked when people start using Ruby, is which books should I read. For me there are three essential Ruby books. However, I don’t think anyone should just read all three at once, and certainly not if they are just starting with Ruby. Here I wish to list those books, and why they should not be read together. The three books are:
For me, there is no better starting point for a Ruby developer than Dave Thomas’ Programming Ruby. It was the second Ruby book I read (I started with “Agile Web Development with Rails”), and it was the first to really bring home the elegance of Ruby, and how I could use it. Then and now, most of my Ruby development is done in a Rails environment, but I know that Rails stands on the shoulders of the giant that is Ruby. So even if all you want to do is develop Rails applications, I’d strongly recommend that you start by reading Programming Ruby. I’m confident you’ll be a better Ruby developer for it.
For me, Programming Ruby is all about how to program Ruby. What the syntax is, what to put where to get it to work, and how the core objects work together. And for this, I’ve yet to read a better book.
I’d been developing Ruby apps for a few years when I came across Russ Olsen’s fabulous Design Patterns in Ruby. It had a fundamental effect on the way I write my Ruby code. This wasn’t a book that taught me how to program, rather it made me think about the choices I make in coding in certain ways. This book is all about when to use certain coding patterns. When to split code into separate objects, when to use other objects to manage processes, and when to refactor code.
Design Patterns in Ruby will improve the way you write Ruby code, but I am confident that to get the best from it, you must first have a good appreciation of how Ruby works. For that, you need to have used Ruby in anger. It’s not a book for a new Ruby developer.
The last of the three books in my trilogy, is Pat Shuaghnessy’s Ruby Under a Microscope. I have only just finished reading this book in the last couple of days. It is easily the best Ruby book I’ve read since reading Design Patterns in Ruby. It is a fascinating investigation of how Ruby works under the bonnet. It teaches why Ruby works the way it does.
For me, Ruby Under a Microscope is THE book to read once you have learnt the lessons from reading the other two books, and used those to write Ruby code, and hone your skills.
So if you’ve got this far, I hope you will understand why I think these three books shouldn’t be read at the same time. They represent three stages of moving from a beginner to a seasoned Ruby developer, and it’s not until you’ve master the lessons from one, that you should move onto the next in my opinion. That takes time and effort, but it is well worth it. They are three excellent books. Just don’t rush to read all three at once.
Posted in Ruby | 2 Comments
## In praise of the super user: The problem with the admin/user paradigm
I am coming to the end of my longest ruby contract so far – building web applications for Warwickshire County Council. It’s been a wonderfully productive couple of years, and I’ve gained a lot from working with the team at WCC. However, I think my most long lasting lesson learnt may come from a mistake.
I am entering the last week of my contract and the main focus is now hand over. A lot of this has already happened, and I’ve detected a common thread. I am being repeatedly asked the same question: “What can users do as admin, and what should developers do as admin?” This morning I have woken up with the realisation that the reason I’m being asked this, is because I’ve made a fundamental mistake with my app designs: I’ve overlooked the need for the super user.
If I’d defined super users in my apps, I wouldn’t now be asked this question.
I think I’ve fallen into a trap of thinking the default access levels for a web app should be user and admin. I’ve only gone beyond this access model paradigm if the project specification has stated that a more complicated model is required. I can understand why I’ve fallen into this way of thinking. Just have a look at a google search for “ruby admin user” and see how many of the results point to examples using just admin and user. For another example, have a look at the Devise README; its examples use just user and admin.
I don’t think it’s the fault of the people writing these examples. It is much easier to write example code using the simplest models, and admin/user is the simplest access model. The mistake is then thinking that the simplest model should also be the default model. Just because a model is useful as an example, doesn’t make it the best model to put into production!
So why do I need a super user? I think the easiest way to answer that is to ask “why do I need an admin user?”. The answer is, I want to make it easy to modify the way the application behaves without having to go to the bother of writing more code – to be more accurate – without having to write more code after I’ve moved on to building my next great application. I hate having to go back to an old code base to modify it. I’d much rather spend my time building swanky new applications.
So I try to build my apps with enough in-built flexibility, that their behaviour can be changed via tweaks at the admin level. Of course there is a limit to this, but in my experience, it’s not that difficult to predict the main ways that an app may need to be modified in future, and to build admin functions accordingly.
There will always be some behaviour changes that either are not predicted, or too complicated to build into the original app. In which case, building the next iteration of the app is the only option. But we have to live with that.
OK – so that’s explained why I have admin users. Why super users?
If you look at the way a web app needs to change through its life there are two types of changes:
1. Changes anyone could make, without breaking the app
2. Changes that could break the app, and need to be done by someone who understands the consequences.
Type 1 changes could be made by super users, and type 2 changes by admins.
Some examples super user changes:
• Assign users to groups
• Modify header and label texts
• Correction of erroneous user data entry
• Modification to whitelist/blacklist regex parameters
• Changing oauth id/key pairs
• Modification of data at the database table level
There is some cross over. There may be a good reason why you wouldn’t want a super user in a particular app, to add users. But these examples demonstrate the point.
For an app you manage as admin, the more you can allow super users to manage themselves, the less work you’ll have to do. And you’re not the only one who’ll be pleased about that. The super users will feel empowered too and enjoy using your app more if they feel they have some control of it. It’s win, win!
However, before we spend too much time slapping each other on the back, there is another key point to highlight: admins and super users need different interfaces!
Have a look at the last two examples for each type:
Super user
Correction of erroneous user data entry
Modification of data at the database table level
It is too easy to think that the way to give super users the ability to correct erroneous data, is to give them database table level access to the data. If you do that, a super user will break your app at some time. Instead, the super user has to be given an interface where they can safely modify data without breaking the app. That usually means a modified version of the origin data input form.
That means that most web applications need both an admin and a super user portal.
And don’t forget, not every user is a super user. The default model should be three levels of access:
User
The people who need to be identified, and do the day to day input and access of data.
Super user
The people responsible for the management of the app. This is often the owner of the app – the customer who asked you to build the app – and the people within their team who they delegate to be responsible for the app.
The app developers and the technical team tasked with supporting the app
I believe the main mistake I’ve made at WCC, is to provide apps with a single admin portal that both admins and super users use, and then relying on my direct instructions to super users as to what they should and should not do within the admin portal, to prevent accidents. The handover process has demonstrated to me that this is a poor way of managing things, as it only lasts as long as super users remember the instructions.
Posted in Blog, Ruby | Comments Off on In praise of the super user: The problem with the admin/user paradigm
## The £1.01 coin
The Monster Raving Loony Party in the Kenilworth area have released their Manickfesto described here:
One of the proposals is to introduce a 99p coin to make shopping easier in 99p stores.
However, I think they are missing a trick. They should also introduce a £1.01 coin for change.
Not only would this make getting change in 99p stores easier, it will make it easier to use 99p coins with other coins to make more expensive purchases: 99p + £1.01 + 99p = £1.99
Permission is hereby granted, free of charge, to any person
(you know who I’m talking about Nick Green) thinking
of creating a £1.01 coin to use for change in 99p stores (the
“Idea”), to deal in the Idea without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Idea, and to
permit persons to whom the Idea is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Idea.
THE IDEA IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE IDEA OR THE USE OR OTHER DEALINGS IN THE IDEA.
Posted in Brilliant Ideas | 1 Comment
## Using Jasmine with Rails 4.1
I had a few problem getting fixtures to work within a Jasmine environment. I was getting the error “ReferenceError: loadFixtures is not defined”
The fixture was an html pages at spec/javascripts/fixtures/form.html, and this was my initial test code:
describe("Rollback prompt", function(){
beforeEach(function(){
});
describe('check test environment', function(){
it('should always pass', function(){
expect(true).toBe(true);
});
});
});
I’d copied this format from a rails 3 project where I’d used the jasmine gem successfully.
To get this to work with Rails 4.1, I had to:
## Use jasmine-rails gems
I added this to my Gemfile (replacing gem ‘jasmine’)
group :development, :test do
# JavaScript test environment
gem 'jasmine-rails'
gem 'jasmine-jquery-rails'
end
And added a spec_helper at spec/javascripts/helpers/spec_helper.js:
//= require jquery
//= require jasmine-jquery
## Mount fixtures separately
I mounted my fixtures via config/initializers/jasmine.rb:
# Map fixtures directory for Jasmine suite
if defined?(JasmineRails)
JasmineFixtureServer = Proc.new do |env|
Rack::Directory.new('spec/javascripts/fixtures').call(env)
end
end
And then updated my config/routes.rb:
if defined?(JasmineRails)
mount JasmineRails::Engine => '/specs'
mount JasmineFixtureServer => '/spec/javascripts/fixtures'
end
After that my test passed successfully and I was ready to start building by JavaScript functions.
## Solution source
These pages were key to me finding this solution:
Posted in JavaScript, Ruby | Comments Off on Using Jasmine with Rails 4.1
## The lows and highs of SAML
I’ve been working on a number of OAUTH based authentication systems recently as part of the work I’m doing for Warwickshire County Council on the Alpha Identify project. This has led me to also look at SAML. I’ve been building a simple SAML demo app to help me understand how it works, and in expectation that I will need to start building SAML systems in the next month or two. I’ve been using Onelogin’s ruby-saml as a guide to what is needs, and Lawrence Pit’s ruby-saml-idp project was a great help in getting started.
In working through ruby-saml, it seems clear to me that it is a porting from another language. This works, and means it is based on proven techniques. However, I think starting over and building some of the tools needed to produce SAML solutions, in a more ruby way, would make the code cleaner and more easy to extend. So I have started building saml_tools as a ruby toolbox for working with SAML objects.
Another major driver behind saml_tools stems from my belief that the starting point for building many SAML solutions will be the SAML documents themselves. That is, the developer will know the SAML document that will be sent to request authentication, and the document that will be sent in response to that request. They will then need to build the tools that will create, send, receive and process these documents. So I wanted a system that will make it easy to use an existing SAML document as a template for generating more documents. For me, the obvious solution was to use erb templates. It works well for Rails; why not SAML too. The resulting template is a lot easier to ‘read’ and understand, than a document that is solely built in multiple Nokogiri or REXML insertions.
However, the process has not been plain sailing and has involved a lot of head scratching, particularly regarding the signing of SAML responses. To quote this blog: “using a certificate to sign a request is the root of all evil.“, and putting the assertion signature inside the assertion itself seems perversely complex to me. I think the best demonstration of the lows and highs I went through in trying to work it out are three email I sent to a college during my ordeal:
Hi,
Thought I’d let off steam a little:
Unravelling SAML is A PAIN IN THE ARSE!
To test something, you have to pull something from over there, and manipulate it based on something somewhere else, and then compare it with the element here!
For example, the signature is in the stuff being signed. So you have to extract the signature, before you can check if it matches the stuff it was in.
Who designed this system!
Rob – working through it slowly but surely.
Which was followed fairly quickly with:
I think I’ve come up with a really simple example of why inserting the signature into the data being signed is fundamentally flawed.
Imagine my data is all held in a foo tag, I create a signature that I store in a bar tag:
<foo>
<bar>
<\bar>
<\foo>
Before I can check the signature I need to remove bar. However, what do I leave behind? For example, should foo now look like this:
<foo>
<\foo>
or this:
<foo><\foo>
Now that may seem a trivial difference, but the problem is that the signed data is wrapped up with base64 encoding. And those two examples output different results when encoded!!!!
At that point I was ready to give up. I’d written a sequence of steps that appeared to be working, but when I put them together to check part of a signature from a real SAML request document, it failed. I could not see what was wrong and I was starting to go round in circles. Fortunately, being the pig headed buffoon I am, I persevered and finally:
The solution to my problems was to read up on XML security. Reading this was my eureka moment:
http://users.dcc.uchile.cl/~pcamacho/tutorial/web/xmlsec/xmlsec.html
The way SAML signs responses isn’t so much a SAML thing, as an XML thing. It’s one of the “standard” ways of signing XML documents. I still think it is mad to put the signature inside the section of XML being signed – especially as the standard gives you other options.
I think it must come from when you sign a whole document. In that case, where else can you put the signature but inside the document? Otherwise you risk breaking the integrity of the document, or separating the signature from the document being signed.
Also, two things I struggled to get my head round: canonicalization, and the format of white space around the removed signature; turned out to be intimately related. Canonicalization is the fix for the white space problem; it formats any XML document or fragment, into a standard format that is shared across all implementations of XML. So it ensures that the space betweens two elements, that is left when a signature is removed, is always of the same format.
So in two days I’ve gone from being ready to give up on SAML, to now having a much better understanding of how it all fits together.
With my new found knowledge, I was able to build a SAML response from scratch, and when I tested that against my code, it validated correctly. It turned out that the problem was not the code but the document I was testing it against. That’s why I could not see a fault in the code!
However, the key point is this is yet another example of where failure leads to much better understanding. If my code had just worked, I would not have had to dig so much deeper into the underlying systems, and I would not have learnt as much as I did about how SAML works.
Posted in Ruby | Tagged , , , | Comments Off on The lows and highs of SAML
## Using Exceptions to help separate functionality
I have recently had to split a class into two and found raising exceptions via a new error class was key to the success of this process.
I’ve been working on Geminabox. The goal was to add the facility for Geminabox to act as a proxy for RubyGems.
At an early stage of the development, there was a requirement to split the Geminabox class into two. The problem was that this class was both handling the storage of local gems, and the HTTP requests for this service. This made it difficult to hook into Geminabox to store a new local gem, without also loading up the Sinatra component that was handling the HTTP requests.
A key to understanding the problem is to have a look at the original Geminabox#handle_incoming_gem method. This method calls further Geminabox instance methods to handle the storage of the incoming gem.
The solution was to create a new class that would handle the gem storage, and for Geminabox#handle_incoming_gem to pass the new gem to that class. The new class was called GemStore.
In the original Geminabox class, the error handling was combined into a single method that also controlled the output of an HTML error report via HTTP. That method being Geminabox#error_response. This method could be called by a number of Geminabox instance methods active in the gem storage process. The raising of errors needed to be separated from the mechanism for displaying error messages.
The result was a new error class GemStoreError. Now GemStore could raise a GemStoreError when a problem was identified. Then any process using GemStore could easily identify these errors and handle them appropriately.
With GemStore and GemStoreError in place Geminabox#handle_incoming_gem could be refactored, and other processes could use GemStore to store gems within Geminabox.
Posted in Ruby | Comments Off on Using Exceptions to help separate functionality
## Intercepting Ruby exceptions
The way some Ruby applications catch exceptions makes it more difficult to debug the underlying issue. That is because a common pattern for handling an exception is this:
begin
do_something_that_fails
rescue SomeError
raise MyAppError.new "Something went wrong"
end
So where is the problem? The problem is that by capturing the error in this way, the originals exception information is lost. That is, the original exception usually contains a message and back-trace that make debugging the error easier. With the pattern above, the output is only the MyAppError message, and the back trace to this code.
Recently, this issue has led to me not trapping errors as much as I used to, because letting the underlying exception bubble up through my apps has often resulted in more meaningful and useful errors. However, I’ve just found nesty, and I think it is the solution to this issue. That is, it allows exceptions to be caught, flagged and handled within an app, whilst preserving the original error message and back-trace.
I’ve been playing with nesty and the result is three simple ruby programs that I believe demonstrate the value of the nesty approach to handling exceptions from underlying code.
I’ve created three files that contain the code shown below. Each one has File.new(‘no_such_file’) called on line 8 so that there is some consistency between the errors raised by each example. (Unfortunately the empty lines used to achieve this don’t display output in this blog – so please assume that File.new(‘no_such_file’) is always on line 8 )
Version one: No Error handling
File.new('no_such_file')
The error output when this run is:
wrong_file.rb:8:in initialize': No such file or directory - no_such_file (Errno::ENOENT)
from wrong_file.rb:8:in new'
from wrong_file.rb:8:in <main>'
So a nice meaningful error that tells me what the underlying issue was. However, it would be nice if I could add some information about what I was trying to do when this error occurred. To do that I need to trap the error and handle it.
Version two: Trap the error and raise an app specific exception
class MyError < StandardError
end
begin
File.new('no_such_file')
rescue
raise MyError.new "Unable to access file"
end
This outputs this error message:
wrong_file_rescued.rb:10:in rescue in <main>': Unable to access file (MyError)
from wrong_file_rescued.rb:7:in
<main>'
This contains error information that is specific to my app, but a lot of information is lost. Even the line number where the error occurred is lost. Line 7 is the begin statement.
require 'nesty'
class MyError < StandardError
include Nesty::NestedError
end
begin
File.new('no_such_file')
rescue
raise MyError.new "Unable to access file"
end
And this outputs:
wrong_file_nesty.rb:10:in rescue in <main>': Unable to access file (MyError)
from wrong_file_nesty.rb:7:in <main>'
from wrong_file_nesty.rb:8:in initialize': No such file or directory - no_such_file (Errno::ENOENT)
from wrong_file_nesty.rb:8:in new'
from wrong_file_nesty.rb:8:in <main>'
Now I have the best of both worlds: both my app specific error details and those of the underlying exception.
I found nesty via the authors blog: Skorks, and I’d thoroughly recommend you have a look at that (and the rest of that excellent blog).
Posted in Ruby | Tagged | Comments Off on Intercepting Ruby exceptions | 2021-10-20 10:06:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18188901245594025, "perplexity": 1790.6188126544453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585305.53/warc/CC-MAIN-20211020090145-20211020120145-00241.warc.gz"} |
https://www.gamedev.net/forums/topic/523540-ribbonbillboard-trails/ | Followers 0
# Ribbon/billboard trails
## 5 posts in this topic
I am trying to render billboard trails for spaceship engines. The basic idea is that I record the last N positions of the emission point in world space, and then construct a quad/triangle strip along that path. This I have working well, but the tricky part seems to be keeping the strip oriented to face the camera. For each segment along the path, I generate 2 vertices, each containing the position, and the direction to the previous position, plus a direction constant indicating left or right side of the strip. In the vertex shader, I cross the camera view direction with the direction to the previous segment, which should obtain a direction vector perpendicular to both the camera and the path. I then multiply by the side constant to move either left or right, and add it to the position. This generates a pretty nice trail, in general, but it disappears when the camera is close to behind the trail (my cross product approaches zero), and it 'crinkles' during tight turns, as adjacent path segments are rotated differently. Does anyone have experience with ribbon trails, and could maybe offer some advice? I see these effects in plenty of games, so I assume I am just missing something basic.
0
##### Share on other sites
You can see the issues in these screen shots. The first shows the trails disappearing as the cross product approaches zero (but for some reason, only from one side):
The second shows the jagged trail during turns:
1
##### Share on other sites
Gah, it turns out to have been a fairly silly mistake - with any luck others can learn from it [smile]
I was crossing the segment axis with the camera view direction, but I am now crossing the segment axis with the direction from the segment to the camera - and this corrects both the disappearing and the jagged edges.
0
##### Share on other sites
Hello, I'm trying to reproduce your method for billboarding trails, but I've some difficulties... You mentionned "Direction constants" :
plus a direction constant indicating left or right side of the strip
Assuming direction of the segment is UnitZ, and the normal of the segment UnitY : direction constants are UnitX for thie left side and -UnitX for the right part ? I mean, these two vectors point outside the segment. But the result is not correct.
Here is how I implemented your description :
but I am now crossing the segment axis with the direction from the segment to the camera
Vector3 dir = Vector3.Normalize(cameraPosition - Position);
Vector3 dir2 = Vector3.Normalize(Vector3.Cross(dir, Direction)); //Direction = UnitZ
I then multiply by the side constant to move either left or right, and add it to the position
Vector3 final = Position + dir2 * OutSideDir; //UnitX
and then multiply by ViewProj matrix
0
##### Share on other sites
I think you're missing a second cross product.
0
##### Share on other sites
OK thank you I managed it, altough I still don't understand why switcoder added "a direction constant indicating left or right side of the strip"
0
## Create an account
Register a new account | 2017-07-23 20:46:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3188306391239166, "perplexity": 1365.3857204900103}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424610.13/warc/CC-MAIN-20170723202459-20170723222459-00085.warc.gz"} |
https://w10schools.com/posts/148001_elseif-else-if | # elseif/else if
(PHP 4, PHP 5, PHP 7)
Examples:
elseif, as its name suggests, is a combination of if and else. Like else, it extends an if statement to execute a different statement in case the original if expression evaluates to FALSE. However, unlike else, it will execute that alternative expression only if the elseif conditional expression evaluates to TRUE. For example, the following code would display a is bigger than b, a equal to b or a is smaller than b:
<?php
if ($a >$b) {
echo "a is bigger than b";
} elseif ($a ==$b) {
echo "a is equal to b";
} else {
echo "a is smaller than b";
}
?>
There may be several elseifs within the same if statement. The first elseif expression (if any) that evaluates to TRUE would be executed. In PHP, you can also write 'else if' (in two words) and the behavior would be identical to the one of 'elseif' (in a single word). The syntactic meaning is slightly different (if you're familiar with C, this is the same behavior) but the bottom line is that both would result in exactly the same behavior.
The elseif statement is only executed if the preceding if expression and any preceding elseif expressions evaluated to FALSE, and the current elseif expression evaluated to TRUE.
Note: Note that elseif and else if will only be considered exactly the same when using curly brackets as in the above example. When using a colon to define your if/elseif conditions, you must not separate else if into two words, or PHP will fail with a parse error.
<?php
/* Incorrect Method: */
if($a >$b):
echo $a." is greater than ".$b;
else if($a ==$b): // Will not compile.
echo "The above line causes a parse error.";
endif;
/* Correct Method: */
if($a >$b):
echo $a." is greater than ".$b;
elseif($a ==$b): // Note the combination of the words.
echo $a." equals ".$b;
else:
echo $a." is neither greater than or equal to ".$b;
endif;
?>
2016-02-24 15:53:01 | 2019-10-20 11:48:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5071898698806763, "perplexity": 270.4860113752212}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986707990.49/warc/CC-MAIN-20191020105426-20191020132926-00382.warc.gz"} |
https://www.physicsforums.com/threads/first-order-formalism-of-polyakov-action.989032/ | # First order formalism of Polyakov action
• A
Gold Member
In the notes of Arutyunov, he writes down the equation of Polyakov action in what he calls a first-order formalism(equation 3.19). But here I did not understand how he got this equation. Can someone help?
Moreover, can someone explain how he got the constraints in equation 3.25? And why they are not individually equal to zero?
In the thesis
https://www.rug.nl/research/portal/...ed(fb063f36-42dc-4529-a070-9c801238689a).html
the Nambu-goto action for the string is treated at page 54 and beyond. Worldsheet indices are indicated by bars, and the constraints in your (3.25) are also treated there. These constraints are, as far as I understand, indeed equal to zero.
My Hamiltonian formalism is a bit rusty now, but I'd suggest you take another look at this topic. I found it rather confusing in the beginning.
phoenix95
Gold Member
In the notes of Arutyunov, he writes down the equation of Polyakov action in what he calls a first-order formalism(equation 3.19). But here I did not understand how he got this equation. Can someone help?
He probably guessed it by using experience with simpler systems. The ultimate proof that the guess is correct is deriving the equations of motion and showing that they are equivalent to the standard ones.
phoenix95
formodular
The Hamiltonian is introduced in the calculus of variations as part of breaking up a 2nd order ode into a system of first order ode's, and of course one knows the Hamiltonian is related to the Lagrangian by a Legendre transform, ##L = p \dot{x} - H##, where now the derivatives are at most of first order hence the first order formalism, but since the Hamiltonian of the NG action is zero we have ##L = p \dot{x}##, however we also have two constraints [equations (3.8) and (3.9) of Arutyunov], so one can factor the constraints into the problem with Lagrange multipliers, which is just (3.19), where you can see (3.8) and (3.9) are just scaled by strangely written Lagrange multipliers. Compare also equation (2.1.32) of Kaku's Intro to Superstrings book (see chapter 1 for the point particle analogue of this procedure) and (3.14) of Townsend
http://www.damtp.cam.ac.uk/user/examples/3P6.pdf
Interestingly, eliminating the momentum in the first order action gives you the Polyakov action (see sec 3.3.1 of the last set of notes).
(3.25) is just explicitly stating that (3.8) and (3.9) are constraints.
phoenix95, haushofer and Demystifier
Polyakov action in what he calls a first-order
Let us set the string tension $T = 1$ and write the following matrix form of the Plyakov action
$$S[X , \gamma] = -\frac{1}{2} \int d^{2}\sigma \ \sqrt{- \gamma} \left( \dot{X} , X^{\prime}\right)^{\mu} \begin{pmatrix} \gamma^{00} & \gamma^{01} \\ \gamma^{10} & \gamma^{11} \end{pmatrix} \begin{pmatrix} \dot{X} \\ X^{\prime}\end{pmatrix}_{\mu} . \ \ \ (1)$$ Next, we define the following symmetric $2 \times 2$ matrix $$M^{\alpha \beta} = \sqrt{- \gamma} \gamma^{\alpha \beta}.$$ Since the determinant of the inverse world-sheet metric $$\mbox{det}(\gamma^{\alpha \beta}) = \frac{1}{\mbox{det}(\gamma_{\alpha \beta})} \equiv \frac{1}{\gamma},$$ it follows that $$\mbox{det}(M) = (- \gamma) \frac{1}{\gamma} = -1.$$ And, since $M^{\alpha \beta} = M^{\beta \alpha}$, we can parametrize $M$ by two non-zero numbers $(\lambda_{1} , \lambda_{2})$ and write $$\sqrt{-\gamma} \begin{pmatrix} \gamma^{00} & \gamma^{01} \\ \gamma^{10} & \gamma^{11} \end{pmatrix} = \frac{1}{\lambda_{1}} \begin{pmatrix} -1 & \lambda_{2} \\ \lambda_{2} & \lambda_{1}^{2} - \lambda_{2}^{2} \end{pmatrix} . \ \ \ \ (2)$$ From (2) you read off $$\lambda_{1} = - \frac{1}{\sqrt{-\gamma} \gamma^{00}}, \ \ \ \lambda_{2} = - \frac{\gamma^{01}}{\gamma^{00}} . \ \ \ \ \ \ \ (3)$$ Now, we substitute (2) in the Polyakov action (1) and do the simple algebra to obtain [In the followings, I will supress the world indices on the derivatives of $X$. This should not cause any confusion because they are contracted]:
$$S[X;\lambda_{1},\lambda_{2}] = \frac{1}{2} \int d^{2}\sigma \left\{ \lambda_{1}^{-1} \dot{X} \cdot \left( \dot{X} - \lambda_{2} X^{\prime}\right) - \lambda_{2} \lambda_{1}^{-1} X^{\prime} \cdot \left( \dot{X} - \lambda_{2} X^{\prime}\right) - \lambda_{1} X^{\prime} \cdot X^{\prime} \right\} .$$
Now, we define the following new variable $$P_{\mu} = \lambda_{1}^{-1} \left( \dot{X}_{\mu} - \lambda_{2}X_{\mu}^{\prime}\right) .$$ Substituting $P$ in $S[X;\lambda_{1},\lambda_{2}]$, we find $$S[X,P; \lambda_{1},\lambda_{2}] = \int d^{2}\sigma \left\{ \frac{1}{2} \left( \dot{X} - \lambda_{2} X^{\prime}\right) \cdot P - \frac{1}{2} \lambda_{1} (X^{\prime} \cdot X^{\prime}) \right\} .$$ This can be rewritten as
$$S = \int d^{2}\sigma \left\{ \lambda_{1} P \cdot P - \frac{1}{2}\lambda_{1} ( P \cdot P - X^{\prime} \cdot X^{\prime})\right\} .$$ Finally, for one of the $P$’s in the first term, we substitute $\lambda_{1}P = \dot{X} - \lambda_{2}X^{\prime}$ to obtain
$$S[X,P;\lambda_{1},\lambda_{2}] = \int d^{2}\sigma \left\{ P \cdot \dot{X} - \frac{\lambda_{1}}{2}(P \cdot P - X^{\prime} \cdot X^{\prime}) - \lambda_{2} (P \cdot X^{\prime})\right\} . \ \ (4)$$ We recognise this as the phase-space NG-action incorporating the first-class constraints $(P)^{2} - (X^{\prime})^{2} \approx 0$, $P \cdot X^{\prime} \approx 0$ through the Lagrange multipliers $(\lambda_{1},\lambda_{2})$, and the fact that the Hamiltonian vanishes on the constraint surface, $H \approx 0$. The phase-space Polyakov-action is obtained by substituting (3) in (4): $$S[X,P; \gamma ] = \int d^{2}\sigma \left\{ P \cdot \dot{X} + \frac{P \cdot P - X^{\prime} \cdot X^{\prime}}{2 \sqrt{- \gamma} \gamma^{00}} + \frac{\gamma^{01}(P \cdot X^{\prime})}{\gamma^{00}} \right\} .$$
It is important to know that the integrand in a phase-space action $S[q,p]$ is not a Lagrangian. Lagrangians are functions of the tangent bundle coordinates $(q , \dot{q}) \in T(M)$, whereas $(q,p)$ are local coordinates on the cotangent bundle $T^{*}(M)$.
phoenix95 | 2023-03-26 22:02:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9150493741035461, "perplexity": 307.56318992438065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946535.82/warc/CC-MAIN-20230326204136-20230326234136-00609.warc.gz"} |
https://cstheory.stackexchange.com/questions/14526/combinatorial-characterization-of-exact-learning-with-membership-queries | # Combinatorial characterization of exact learning with membership queries
Edit: Since I haven't received any responses/comments in a week, I'd like to add that I'm happy to hear anything about the problem. I don't work in the area, so even if it's a simple observation, I may not know it. Even a comment like "I work in the area, but I haven't seen a characterization like this" would be helpful!
Background:
There are several well-studied models of learning in learning theory (e.g., PAC learning, online learning, exact learning with membership/equivalence queries).
For example, in PAC learning, the sample complexity of a concept class has a nice combinatorial characterization in terms of the VC dimension of the class. So if we want to learn a class with constant accuracy and confidence, this can be done with $\Theta(d)$ samples, where $d$ is the VC dimension. (Note that we're talking about sample complexity, not time complexity.) There is also a more refined characterization in terms of the accuracy and confidence. Similarly, the mistake bound model of online learning has a nice combinatorial characterization.
Question:
I want to know if a similar result is known for the model of exact learning with membership queries. The model is defined as follows: We have access to a black box which on input $x$ gives you $f(x)$. We know $f$ comes from some concept class $C$. We want to determine $f$ with as few queries as possible.
Is there a combinatorial parameter of a concept class $C$ that characterizes the number of queries needed to learn a concept in the model of exact learning with membership queries?
What I know:
The best such characterization I have found is in this paper by Servedio and Gortler, using a parameter they attribute to Bshouty, Cleve, Gavaldà, Kannan and Tamon. They define a combinatorial parameter called $\gamma^C$, where $C$ is the concept class, which has the following properties. (Let $Q_C$ be the optimal number of queries needed to learn $C$ in this model.)
$Q_C = \Omega(1/\gamma^C)\qquad Q_C = \Omega(\log |C|) \qquad Q_C = O(\log |C|/\gamma^C)$
This characterization is almost tight. However, there could be a quadratic gap between the upper and lower bounds. For example if $1/\gamma^C = \log |C| = k$, then the lower bound is $\Omega(k)$, but the upper bound is $O(k^2)$. (I also think this gap is achievable, i.e., there exists a concept class for which the lower bounds are both $\Omega(k)$, but the upper bound is $O(k^2)$.)
• "Haystack dimension" characterizes the query complexity of optimizing a function: cis.upenn.edu/~mkearns/papers/haystack.pdf , This is different than what you want, but you might enjoy the related work which discusses what is known about characterizing the query complexity of exact learning. – Aaron Roth Dec 1 '15 at 19:38
To drive home the point of anonymous moose's example, consider the concept class that consists of functions that output 1 on only one point in {0,1}^n. The class is of size 2^n, and 2^n queries are needed in the worst-case. Take a look at worst-case Teaching Dimension (Goldman & Schapire) which provides something similar to what you're looking for.
• Thanks! Searching for the Teaching Dimension led me to the Extended Teaching Dimension, which is similar to the combinatorial parameter I mentioned in the question, which then led me to many other interesting papers on the topic. – Robin Kothari Dec 5 '12 at 22:31
I don't know of such a characterization. However, it's worthwhile to note that for almost any concept class, one needs to query all points. To see this, consider the concept class that consists of all n-dimensional boolean vectors with Hamming weight 1. This concept class obviously requires n queries to learn, which is equal to its cardinality. You can probably generalize this observation to get that almost any concept class also requires performing all queries.
I would suspect that given a concept class C as input, it is NP-hard to determine the complexity of exactly learning the concept class with membership queries, or even to approximate it up to say a constant. This would give some indication that a "good" combinatorial characterization does not exist. If you wish to prove such an NP-hardness result but try and fail feel free to post here and I'll see if I can figure it out (I have some ideas).
• Thanks for the response. Even if it is true that almost all concept classes (under some reasonable distribution over classes) are hard to learn, some classes are easy to learn and it would be interesting to have a combinatorial parameter that characterizes this. I don't mind if the parameter is hard to compute. Even the VC dimension is not known to be efficiently computable. – Robin Kothari Dec 5 '12 at 4:22
Although others have pointed out the answer. I thought I may make it self-contained and show why teaching dimension is the answer.
Consider a concept class $C$ over input space $X$. A set of elements $S\subseteq X$ is called a teaching set for a concept $f$ if $f$ is the only concept in $C$ consistent with $S$.
Let $\mathcal{T}(f)$ be the set of all teaching sets for $f$ and define TD$(f,C)=min\{\ |S|\ | \ S\in \mathcal{T}(f) \}$ to be the teaching dimension of $f$. i.e., the cardinality of the smallest teaching set TS$_{min}(f)$ in $\mathcal{T}(f)$. Similarly, consider TD$(C)=$max$_{f\in C}$TD$(f,C)$ to be the teaching dimension of $C$.
The minimum number of queries needed to identify $f$ is TD$(f,C)$. This happens when the query strategy uses the sequence TS$_{min}(f)$. As for any fewer queries we have at least two concepts consistent with it. And TD$(C)$ is the minimum for any $f$.
• I don't understand why the teaching dimension upper bounds the query complexity of learning $f$. What does the algorithm look like? The function $f$ in unknown to the algorithm when we start, so it cannot simply query the teaching set for $f$. – Robin Kothari Nov 27 '15 at 22:46
• @RobinKothari TD lower bounds the minimum number of queries in any MQ-algorithm. In practice, there may be no algorithm that blindly achieves this bound without cheating or code tricks. In Angluin's "Queries Revisited" paper, she discussed a parameter called MQ that represent the number of queries needed by the best MQ-algorithm in the worst case. I don't recall its details but certainly TD<=MQ. – seteropere Nov 28 '15 at 0:32
• What I was interested in (when I asked this question) was a parameter that characterizes exact learning with membership queries. It should be both an upper and lower bound. I provided an example of a parameter that achieves this (up to a log |C| factor) in the question. My question was whether something better is known. – Robin Kothari Nov 28 '15 at 3:11 | 2019-11-18 18:22:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.803691565990448, "perplexity": 333.7307081510164}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669813.71/warc/CC-MAIN-20191118182116-20191118210116-00307.warc.gz"} |
https://face2ai.com/Math-Probability-5-4-The-Poisson-Distribution/ | Abstract: 本文介绍Poisson分布相关知识
Keywords: Poisson Distribution
泊松分布
泊松分布的定义和性质 Definition and Properties of the Poisson Distributions
$$f(x|n=3600,p=0.00125)= \begin{cases} \begin{pmatrix} 3600\\x \end{pmatrix}p^x(1-p)^{3600-x}&\text{for }0\leq x\leq 3600\\ 0&\text{otherwise} \end{cases}$$
\begin{aligned} \frac{f(x+1)}{f(x)}&= \frac {\begin{pmatrix}n\\x+1\end{pmatrix}p^{x+1}(1-p)^{n-x-1}} {\begin{pmatrix}n\\x\end{pmatrix}p^{x+1}(1-p)^{n-x-1}}\\ &=\frac{(n-x)p}{(x+1)(1-p)}\\ &\approx\frac{np}{x+1} \end{aligned}
$$f(1)=f(0)\lambda\\ f(2)=f(1)\frac{\lambda}{2}=f(0)\frac{\lambda^2}{2}\\ f(3)=f(2)\frac{\lambda}{3}=f(0)\frac{\lambda^3}{6}\\ \vdots\\ f(n)=f(n-1)\frac{\lambda}{n}==f(0)\frac{\lambda^n}{n!}\\$$
$$\sum^{\infty}_{x=0}f(x)=1$$
$$\sum^{\infty}_{x=0}f(0)\frac{\lambda^n}{n!}=1\\ f(0)\sum^{\infty}_{x=0}\frac{\lambda^n}{n!}=1\\ \text{for :}\sum^{\infty}_{x=0}\frac{\lambda^n}{n!}=e^{\lambda}\\ \text{so :}f(0)=e^{-\lambda}$$
$$f(x|\lambda)= \begin{cases} \frac{e^{-\lambda}\lambda^x}{x!}&\text{for }x=1,2,3,\dots\\ 0&\text{otherwise} \end{cases}$$
Definition Poisson Distribution.Let $\lambda > 0$ .A random variable X has the Poisson Distribution with mean $\lambda$ if the p.f. of $X$ is as follow:
$$f(x|\lambda)= \begin{cases} \frac{e^{-\lambda}\lambda^x}{x!}&\text{for }x=1,2,3,\dots\\ 0&\text{otherwise} \end{cases}$$
泊松分布的均值 Mean
Theorem Mean. The mean of Poisson Distribution with p.f. equal to upside is $\lambda$ .
$$E(X)=\sum^{\infty}_{x=0}xf(x|\lambda)$$
\begin{aligned} E(X)&=\sum^{\infty}_{x=0}x\frac{e^{-\lambda}\lambda^x}{x!}\\ &=\sum^{\infty}_{x=1}\frac{e^{-\lambda}\lambda^x}{(x-1)!}\\ &=\lambda\sum^{\infty}_{x=1}\frac{e^{-\lambda}\lambda^{x-1}}{(x-1)!}\\ \text{if we set } y=x-1\\ &=\lambda\sum^{\infty}_{y=0}\frac{e^{-\lambda}\lambda^{y}}{y!} \end{aligned}
泊松分布的方差 Varaince
Theorem Variance.The variance of Poisson distribution with mean $\lambda$ is also $\lambda$
\begin{aligned} E[X(X-1)]&=\sum^{\infty}_{x=0}x(x-1)f(x|\lambda)\\ &=\sum^{\infty}_{x=2}x(x-1)f(x|\lambda)\\ &=\sum^{\infty}_{x=2}x(x-1)\frac{e^{-\lambda}\lambda^x}{x!}\\ &=\lambda^2\sum^{\infty}_{x=2}\frac{e^{-\lambda}\lambda^{x-2}}{x-2!}\\ \text{We set }y=x-2\\ E[X(X-1)]&=\lambda^2\sum^{\infty}_{y=0}\frac{e^{-\lambda}\lambda^y}{y!}\\ &=\lambda^2 \end{aligned}
$$Var(X)=E[X^2]-E^2[x]=\lambda^2+\lambda-\lambda^2=\lambda$$
泊松分布的距生成函数 m.g.f.
Theorem Moment Generating Function.The m.g.f. of the Poisson distribution with mean $\lambda$ is
$$\psi(t)=e^{\lambda(e^t-1)}$$
for all real $t$
$$\psi(t)=E(e^{tX})=\sum^{\infty}_{x=0}\frac{e^{tx}e^{-\lambda}\lambda^x}{x!}=e^{-\lambda}\sum^{\infty}_{x=0}\frac{(\lambda e^t)^x}{x!}$$
$$\sum^{\infty}_{x=0}\frac{(\lambda e^t)^x}{x!}=e^{\lambda e^t}$$
$$\psi(t)=e^{-\lambda}e^{\lambda e^t}=e^{\lambda(e^t-1)}$$
泊松分布随机变量相加
Theorem If the random variable $X_1,\dots,X_k$ are independent and if $X_i$ has Poisson distribution with mean $\lambda_i(i=1,\dots,k)$ ,then the sum $X_1+\dots+X_k$ has the Poisson distribution with mean $\lambda_1+\dots+\lambda_k$
$$\psi(t)=\Pi^k_{i=1}\psi_i(t)=\Pi^k_{i=1}e^{\lambda_i(e^t-1)}=e^{(\lambda_1+\dots+\lambda_k)(e^t-1)}$$
二项分布的泊松近似 The Poisson Approximation to Binomial Distributions
Theorem Closeness of Binomial and Pisson Distribution.For each integer n and each $0 < p < 1$ ,let $f(x|n,p)$ denote the p.f. of the binomial distribtuion with parameters $n$ and $p$ .Let $f(x|\lambda)$ denote the p.f. of the Poisson distribution with mean $\lambda$ .Let $^{\infty}_{n=1}$ be a sequence of numbers between 0 and 1 such that $lim_{n\to \infty}np_n=\lambda$ . Then
$$lim_{n\to \infty}f(x|n,p_n)=f(x|\lambda)$$
for all $x=0,1\dots$
$$f(x|n,p_n)=\frac{n(n-1)\dots(n-x+1)}{x!}p_n^x(1-p_n)^{n-x}$$
$$f(x|n,p_n)=\frac{\lambda_n^x}{x!}\frac{n}{n}\cdot\frac{n-1}{n}\dots \frac{n-x+1}{n}(1-\frac{\lambda_n}{n})^n(1-\frac{\lambda_n}{n})^{-x}$$
$$lim_{n\to \infty}\frac{n}{n}\cdot\frac{n-1}{n}\dots \frac{n-x+1}{n}(1-\frac{\lambda_n}{n})^{-x}=1$$
$$lim_{n\to \infty}(1-\frac{\lambda_n}{n})^{n}=e^{-\lambda}$$
$$lim_{n\to \infty}f(x|n,p_n)=\frac{e^{-\lambda}\lambda^x}{x!}=f(x|\lambda)$$
Theorem Closeness of Hypergeometric and Poisson Distribution.Let $\lambda>0$ .Let $Y$ have the Poisson distribution with mean $\lambda$ .For each postive integer $T$ ,let $A_T,B_T$ ,and $n_T$ be integers such that $lim_{T\to \infty}n_TA_T/(A_T+B_T)=\lambda$ .Let $X_T$ have the hypergeometric distribution with parameters $A_T,B_T$ and $n_T$ .Tor each fixed $x=0,1,\dots$ ,
$$lim_{T\to \infty}\frac{Pr(Y=x)}{Pr(X_t=x)}=1$$
泊松过程 Poisson Processes
Definition Poisson Process.A Poisson process with rate $\lambda$ per unit time is a process that satisfies the following two properties:
i: The number of arrivals in every fixed interval of time of length $t$ has the Poisson distribution with mean $\lambda t$
ii: The numbers of arrivals in every collection of disjoint time intervals are independent
0% | 2020-11-26 00:36:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9927514791488647, "perplexity": 859.8136728641931}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141185851.16/warc/CC-MAIN-20201126001926-20201126031926-00652.warc.gz"} |
http://nakamotonews.socialpetitions.org/bitcoin-btc-price-analysis-one-more-triangle-bottom/ | # Bitcoin (BTC) Price Analysis: One More Triangle Bottom
Bitcoin made a triangle breakdown earlier on but seems to be finding some support at a larger triangle bottom. Price bounced off the $3,800 area but is making another test and might be attempting another break. The 100 SMA is below the longer-term 200 SMA to indicate that the path of least resistance is to the downside. In other words, support is more likely to break than to hold. In that case, bitcoin could slide by the same height as this chart formation, which spans$3,500 to $4,600. RSI is still heading lower to reflect the presence of bearish pressure. However, the oscillator is nearing the oversold region to reflect exhaustion. Turning higher could confirm that buyers are ready to return and might push for a move back to the triangle top at$4,100. Similarly stochastic is dipping into the oversold region but has yet to turn higher to signal a return in bullish pressure.
More and more mainstream coverage on the recent slump in bitcoin and other cryptocurrencies is piling on the FUD that’s currently weighing on prices. Although the improvement in sentiment last week from revived expectations on institutional investment propped bitcoin higher, it seems that traders are hoping to get actual developments before sustaining any rallies.
Still, a lot of analysts are holding on to their bullish bets and this may be why bulls continue to defend nearby support levels. Some expect bitcoin to make a strong rebound before the end of 2019 while some believe that it could take place as early as Q1 2019.
According to a recently published A.T. Kearney report:
“By the end of 2019, Bitcoin will reclaim nearly two-thirds of the crypto-market capitalization as altcoins lose their luster because of growing risk aversion among cryptocurrency investors. More broadly financial regulators will soften their stance towards the sector.”
The post Bitcoin (BTC) Price Analysis: One More Triangle Bottom appeared first on Ethereum World News.
•
•
•
•
•
• | 2020-02-26 23:07:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1906517595052719, "perplexity": 7982.836854833355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146562.94/warc/CC-MAIN-20200226211749-20200227001749-00099.warc.gz"} |
https://web2.0calc.com/questions/how-many-degrees-are-in-the-smaller-angle-formed | +0
How many degrees are in the smaller angle formed by the minute and hour hands on a clock at 12:30?
0
70
5
How many degrees are in the smaller angle formed by the minute and hour hands on a clock at 12:30?
Guest Nov 15, 2018
#1
0
165 degrees
12:30 (without considering hour hand movement) is 180 degrees. However, the hour hand moves half of 30 degrees, which is 15 degrees. 180 - 15 = 165.
Guest Nov 15, 2018
#2
+2
0
165 degrees.
pepitio Nov 15, 2018
edited by pepitio Nov 15, 2018
#3
0
The Clock is divided into 12 hours. But, the Clock is also a "circle" and has 360 degrees.
So: 360 / 12 =30 degrees between 12 and 1(or every 5 minutes).
Since the Minute hand has moved 30 minutes, or 180 degrees, the Hour hand has moved half the distance between 12 and 1. Or: 30 degees/2 =15 degrees.
Then: 180 - 15 = 165 degrees between the two hands at 12:30.
Guest Nov 15, 2018
#4
+20
+1
We can also use the clock formula, which is $$|30H-5.5M|$$, so $$|30(12)-5.5(30)|=360-165=195^\circ$$. Since the problem is asking us for the smaller angle, the answer is $$\boxed{165^\circ}$$.
neworleans06 Nov 15, 2018
#5
+20598
+9
How many degrees are in the smaller angle formed by the minute and hour hands on a clock at 12:30?
$$\boxed{\large{\Delta\varphi=330\cdot t}} \\~ \text{t is the time in hours }\\~ \text{\Delta\varphi is the angle in degrees between minute and hour hands. }$$
$$\text{time at 12:30 }$$
$$\begin{array}{|rcll|} \hline t &=& 12+\dfrac{30}{60} = 12.5 \ h \\ \Delta\varphi &=& 330 \cdot 12.5 \\ &=& 4125^{\circ} \\ \Delta\varphi &=& 4125^{\circ} - 11\cdot 360^{\circ} \\ && \text{A multiple of 360^{\circ} degrees must be deducted from the angle.}\\ \Delta\varphi &=& 4125^{\circ} - 3960^{\circ} \\ \mathbf{\Delta\varphi} &\mathbf{=}& \mathbf{165^{\circ}} \\ \hline \end{array}$$
The smaller angle formed by the minute and hour hands on a clock at 12:30 are $$\mathbf{165^{\circ} }$$
heureka Nov 15, 2018 | 2018-12-13 02:53:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6028655171394348, "perplexity": 776.1436956183696}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824338.6/warc/CC-MAIN-20181213010653-20181213032153-00195.warc.gz"} |
https://cje.ejournal.org.cn/article/doi/10.1049/cje.2018.12.002 | HAN Li, LI Mingze, WANG Xuesong, CHENG Yuhu. Real-Time Wind Power Forecast Error Estimation Based on Eigenvalue Extraction by Dictionary Learning[J]. Chinese Journal of Electronics, 2019, 28(2): 349-356. doi: 10.1049/cje.2018.12.002
Citation: HAN Li, LI Mingze, WANG Xuesong, CHENG Yuhu. Real-Time Wind Power Forecast Error Estimation Based on Eigenvalue Extraction by Dictionary Learning[J]. Chinese Journal of Electronics, 2019, 28(2): 349-356.
# Real-Time Wind Power Forecast Error Estimation Based on Eigenvalue Extraction by Dictionary Learning
##### doi: 10.1049/cje.2018.12.002
Funds: This work is supported by the Fundamental Research Funds for the Central Universities (No.2017XKQY032).
• Corresponding author: CHENG Yuhu (corresponding author) received the Ph.D. degree from the Institute of Automation, Chinese Academy of Sciences in 2005. He is currently a professor in the School of Information and Control Engineering, China University of Mining and Technology. His main research interests include machine learning and intelligent system. (Email:chengyuhu@163.com)
• Rev Recd Date: 2018-03-23
• Publish Date: 2019-03-10
• Because of the fluctuation and uncertainty characteristics of wind power, it is difficult to achieve a perfect wind power forecast. The forecast error may lead to an imbalance between the load demand and power supply. The object of recent research on forecast error is to achieve the probability distribution of forecast error based on the statistics of historical data. This statistical error achieved from a probability distribution cannot reveal the real-time condition of wind power. A real-time forecast Error estimate method based on dictionary learning (EEDL) was proposed. In EEDL, several coefficients that have strong relevance to the forecast error are computed. The dictionary learning method is used to extract the eigenvalues of forecast error from these coefficients. Based on the eigenvalues, a real-time error estimation model was built to obtain the forecast error. EEDL was compared to the estimation method based on a Probability distribution function (PDF). The performance of EEDL was also compared to the error estimation method based on a PDF while using different forecast techniques.
• A. Tascikaraoglu N and M. Uzunoglu, “A review of combined approaches for prediction of short-term wind speed and power”, Renewable and Sustainable Energy Reviews, Vol.34, No.6, pp.243-254, 2014. Z. Qian, Y. Pei, L.X. Cao, et al., “Review of wind power forecasting method”, High Voltage Engineering, Vol.42, No.4, pp.1047-1060, 2016. P.Y. C, T. Pedersen, B. Bak-Jensen, et al., “ARIMA-based time series model of stochastic wind power generation”, IEEE Trans on Power System, Vol.25, No.2, pp.667-676, 2010. L. Han, C.E. Romero and Y. Zheng, “Wind power forecasting based on principle component phase space reconstruction”, Renewable Energy, Vol.81, pp.737-744, 2015. K. Bhaskar and S.N. Singh, “AWNN-assisted wind power forecasting using feed-forward neural network”, IEEE Transaction on Sustainable Energy, Vol.3, No.2, pp.306-315, 2012. L.J. Wang, L. Dong, G.F. Hu, et al., “Combined prediction of wind power generation in multi-dimension embedding phase space”, Control and Decision, Vol.25, No.4, pp.577-581, 2010. J. Wang, M. Shahidehpour and Z. Li, “Security-constrained unit commitment with volatile wind power generation”, IEEE Transactions on Power Systems, Vol.23, No.3, pp.1319-1327, 2008. V.S. Pappala, I. Erlich, K. Rohrig, et al., “A stochastic model for the optimal operation of a wind-thermal power system”, IEEE Transactions on Power Systems, Vol.24, No.2, pp.940-950, 2009. D. Lee and R. Baldick, “Probabilistic wind power forecasting based on the laplace distribution and golden search”, IEEE/PES Transmission and Distribution Conference and Exposition, Dallas, Tx, USA, pp.1-5, 2016. H. Bludszuweit, J.A. Dominguez-Navarro and A. Llombart, “A stochastic model for the optimal operation of a wind-thermal power system”, IEEE Transactions on Power Systems, Vol.23, No.3, pp.983-991, 2008. B.M. Hodge and M. Milligan, “Wind power forecasting error distributions over multiple timescales”, Power and Energy Society General Meeting IEEE, Vol.44, No.4, pp.1-8, 2011. B. Liu, J.Y. Zhou, H.M. Zhou, et al., “An improved model for wind power forecast error distribution”, East China Electric Power, Vol.40, No.2, pp.286-291. 2012. L.Y. Liu, W.U. Jun-Ji and S.L. Meng, “Research on error distribution of short-term wind power prediction”, Power System Protection and Control, Vol.41, No.12, pp.65-70, 2013. A. Fabbri, T.G.S. Roman and J.R. Abbad, “Assessment of the cost associated with wind generation prediction errors in a liberalized electricity market”, IEEE Transactions on Power Systems, Vol.20, No.3, pp.1440-1446, 2005. H. Yang, J. Yuan, T. Zhang, “A model and algorithm for minimum probability interval of wind power forecast errors based on Beta distribution”, Proceedings of the CSEE, Vol.35, No.9, pp.2135-2142, 2015. Z.S. Zhang, Y.Z. Sun, DW. Gao, et al., “A versatile probability distribution model for wind power forecast errors and its application in economic dispatch”, IEEE Transactions on Power Systems, Vol.28, No.3, pp.3114-3125, 2013. L. Liu, S. Meng and W.U. Junji, “Dynamic economic dispatch based on wind power forecast error interval”, Electric Power Automation Equipment, Vol.36, No.9, pp.87-93, 2016. Y. Liu, W.H. Li, C. Liu, et al., “Mixed skew distribution model of short-term wind power prediction error”, Proceedings of the CSEE, Vol.35, No.10, pp.2375-2382, 2015. J. Wu, B. Zhang, H. Li, et al., “Statistical distribution for wind power forecast error and its application to determine optimal size of energy storage system”, International Journal of Electrical Power and Energy Systems, Vol.55, No.2, pp.100-107, 2014. J.M. Lujano-Rojas, G.J. Osório, J.C.O. Matias, et al., “Wind power forecasting error distributions and probabilistic load dispatch”, Proceedings of the 2016 IEEE Power and Energy Society General Meeting, Boston, MA, USA, pp.1-5, 2016. K. Zhang, G. Yang, H. Chen, et al., “An estimation method for wind power forecast errors based on numerical feature extraction”, Automation of Electric Power Systems, Vol.38, No.16, pp.22-28, 2014. M. Aharon, M. Elad and A. Bruckstein, “K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation”, IEEE Trans on signal processing, Vol.54, No.11, pp.4311-4322, 2006. Y. Xi, J. Huang and Y.J. He, “One dictionary vs. two dictionaries in sparse coding based denoising”, Chinese Journal of Electronics, Vol.26, No.2, pp.367-371, 2017. X. Zhan and R. Zhang, “Complex SAR image compression using entropy-constrained dictionary learning and universal trellis coded quantization”, Chinese Journal of Electronics, Vol.25, No.4, pp.686-691, 2016. S. Li, P. Wang and L. Goel, “Wind power forecasting using neural network ensembles with feature selection”, IEEE Trans on sustainable energy, Vol.6, No.4, pp.1447-1456, 2015. I. Daubechies, “Ten Lectures on Wavelets”, Rutgers University, Philadelphia, PA, USA, pp.53-106, 1992. A.J.R. Reis and A.P.A.D. Silva, “Feature extraction via multiresolution analysis for short-term load forecasting”, IEEE Trans on power system, Vol.20, No.1, pp.189-198, 2005. M. Cheng, G.Q. Wu, M. Yuan, et al., “Semi-supervised software defect prediction using task-driven, dictionary learning”, Chinese Journal of Electronics, Vol.25, No.6, pp.1089-1096, 2016. J. Platt, “A resource allocating network for function interpolation”, Neural Computation, Vol.3, No.2, pp.213-225, 1991. Elia, “Wind-power generation data”, http://www:elia:be/en/grid-data/power-generation/windpower, 2016-2-1. NREL, “National renewable energy laboratory”, http://wind:nrel:gov/Web-nrel/;2006-1-1.
### Catalog
###### 通讯作者: 陈斌, bchen63@163.com
• 1.
沈阳化工大学材料科学与工程学院 沈阳 110142 | 2022-06-28 05:22:29 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8684655427932739, "perplexity": 6295.836144318053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103355949.26/warc/CC-MAIN-20220628050721-20220628080721-00410.warc.gz"} |
http://math.stackexchange.com/questions/43204/adding-noise-to-an-assignment | # Adding noise to an assignment
Suppose I'm given a CNF formula with $m$ clauses (and $k$ literals in each clause), with a total of $n$ variables in the formula, where each variable is in at most $c$ clauses, along with a satisfying assignment to this formula. I change the value of each variable to the other possible value (from the one in the given assignment) with constant probability $p \leq \frac{1}{2}$ independently of the other variables. I want to show that with probability at least $1-exp\left(-\frac{p^2n}{2c^2}\right)$, the new assignment satisfies at least $\left(1-\left(c+1\right)p\right)m$ clauses.
It seems I should use here martingales (Azuma's inequality) or large deviation inequalities, however I didn't manage to get any meaningful results using these methods, so that may not be the case.
-
Is this homework by any chance? We have a tag for that. – Yuval Filmus Jun 4 '11 at 22:30 | 2016-05-01 20:05:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9117241501808167, "perplexity": 243.99099796767888}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860116886.38/warc/CC-MAIN-20160428161516-00084-ip-10-239-7-51.ec2.internal.warc.gz"} |
http://mathoverflow.net/questions/116106/a-wirtinger-like-inequality-involving-two-functions/116108 | # A Wirtinger-like inequality involving two functions
Let $f(t)$ and $g(t)$ be periodic functions on $t\in[0,2\pi]$. By using the Fourier series of the two functions, we can easily prove the inequality $$\left|\int_0^{2\pi}f(t)g'(t)dt\right|= \left|\int_0^{2\pi}f'(t)g(t)dt\right|\le \frac{1}{2}\int_0^{2\pi}[f'(t)^2+g'(t)^2]dt\text.$$
I have been trying to find a reference for this inequality because I need to use it to solve some problem. The closest I have been able to find is Pachpatte 1986, which gives $$\frac{1}{2}\int_0^{2\pi}\left[|f(t)||g'(t)|+|f'(t)||g(t)|\right]dt\le \frac{\pi}{2}\int_0^{2\pi}[f'(t)^2+g'(t)^2]dt\text.$$
The extra factor of $\pi$ is highly undesirable and the absolute values inside of the integral unnecessary for me. I can easily provide a short proof in the text, but if anybody can think of where the first inequality might appear, that would be better.
-
I think there must be a mistake in the second inequality, since $f=c$ and $|g^\prime|=1$ would give the inequality $c \leq \pi$. The first inequality is an easier inequality from this perspective, and is provable by using a standard Poincare inequality for $||f-\overline{f}||_{L^2}\leq C ||f^\prime||_{L^2}$ with the optimal constant $C$ and realizing that you can introduce the average value on the left hand side, due to periodicity of $g$, followed by Cauchy-Schwartz. – Daniel Spector Dec 11 '12 at 20:13
Oh yeah. The second inequality has more assumptions that I forgot to include $f(0)=g(0)=0$. – Yoav Kallus Dec 11 '12 at 20:42 | 2015-04-26 21:19:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9572346210479736, "perplexity": 101.12888532221581}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246656168.61/warc/CC-MAIN-20150417045736-00066-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://micromath.wordpress.com/2010/04/14/some-mathematics-in-o-notation/?like=1&source=post_flair&_wpnonce=9a71b83933 | Posted by: Alexandre Borovik | April 14, 2010
## Some Mathematics in O()-notation
There are infinitely many prime numbers.
Proof: Assume that there are only $k$ prime numbers. Then the number of all ways to multiply $m$ of them (perhaps with repetitions) is a polynomial in $m$ of degree $k$ and is therefore $O(m^k)$. On the other hand, all natural numbers up to $2^m$ can be written as a product of at most $m$ primes. Therefore $2^m \le O(m^k$ — a contradiction.
## Responses
1. Reminds me of the proof using the Euler product for the zeta function. It has a pole at 0, which can not be true if there are only finite number of primes.
2. This proof occurred to me and I posted it on April 3 at http://blogs.williams.edu/Morgan/2010/04/03/infinitely-many-primes-by-combinatorics/ Earlier references? | 2017-01-18 22:11:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 8, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8850823044776917, "perplexity": 268.3414029353373}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280364.67/warc/CC-MAIN-20170116095120-00392-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://mcqquestions.guru/category/icse-class-10/ | ## ML Aggarwal Class 10 Solutions for ICSE Maths Chapter 11 Section Formula Ex 11
These Solutions are part of ML Aggarwal Class 10 Solutions for ICSE Maths. Here we have given ML Aggarwal Class 10 Solutions for ICSE Maths Chapter 11 Section Formula Ex 11
More Exercises
Midpoint Calculator is used to find the midpoint between 2 line segments using the midpoint formula. Use our calculator to find accurate midpoints step by step.
Question 1.
Find the co-ordinates of the mid-point of the line segments joining the following pairs of points:
(i) (2, – 3), ( – 6, 7)
(ii) (5, – 11), (4, 3)
(iii) (a + 3, 5b), (2a – 1, 3b + 4)
Solution:
(i) Co-ordinates of the mid-point of (2, -3), ( -6, 7)
$$\left( \frac { { x }_{ 1 }+{ x }_{ 2 } }{ 2 } ,\frac { { y }_{ 1 }+{ y }_{ 2 } }{ 2 } \right) or$$
Question 2.
The co-ordinates of two points A and B are ( – 3, 3) and (12, – 7) respectively. P is a point on the line segment AB such that AP : PB = 2 : 3. Find the co-ordinates of P.
Solution:
Points are A (-3, 3), B (12, -7)
Let P (x1, y1) be the point which divides AB in the ratio of m1 : m2 i.e. 2 : 3
then co-ordinates of P will be
Question 3.
P divides the distance between A ( – 2, 1) and B (1, 4) in the ratio of 2 : 1. Calculate the co-ordinates of the point P.
Solution:
Points are A (-2, 1) and B (1, 4) and
Let P (x, y) divides AB in the ratio of m1 : m2 i.e. 2 : 1
Co-ordinates of P will be
Question 4.
(i) Find the co-ordinates of the points of trisection of the line segment joining the point (3, – 3) and (6, 9).
(ii) The line segment joining the points (3, – 4) and (1, 2) is trisected at the points P and Q. If the coordinates of P and Q are (p, – 2) and $$\left( \frac { 5 }{ 3 } ,q \right)$$ respectively, find the values of p and q.
Solution:
(i) Let P (x1, y1) and Q (x2, y2) be the points
which trisect the line segment joining the points
A (3, -3) and B (6, 9)
Question 5.
(i) The line segment joining the points A (3, 2) and B (5, 1) is divided at the point P in the ratio 1 : 2 and it lies on the line 3x – 18y + k = 0. Find the value of k.
(ii) A point P divides the line segment joining the points A (3, – 5) and B ( – 4, 8) such that $$\frac { AP }{ PB } =\frac { k }{ 1 }$$ If P lies on the line x + y = 0, then find the value of k.
Solution:
(i) The point P (x, y) divides the line segment joining the points
A (3, 2) and B (5, 1) in the ratio 1 : 2
Question 6.
Find the coordinates of the point which is three-fourth of the way from A (3, 1) to B ( – 2, 5).
Solution:
Let P be the required point, then
$$\frac { AP }{ AB } =\frac { 3 }{ 4 }$$
Question 7.
Point P (3, – 5) is reflected to P’ in the x- axis. Also P on reflection in the y-axis is mapped as P”.
(i) Find the co-ordinates of P’ and P”.
(ii) Compute the distance P’ P”.
(iii) Find the middle point of the line segment P’ P”.
(iv) On which co-ordinate axis does the middle point of the line segment P P” lie ?
Solution:
(i) Co-ordinates of P’, the image of P (3, -5)
when reflected in x-axis will be (3, 5)
and co-ordinates of P”, the image of P (3, -5)
when reflected in y-axis will be (-3, -5)
Question 8.
Use graph paper for this question. Take 1 cm = 1 unit on both axes. Plot the points A(3, 0) and B(0, 4).
(i) Write down the co-ordinates of A1, the reflection of A in the y-axis.
(ii) Write down the co-ordinates of B1, the reflection of B in the x-axis.
(iii) Assign.the special name to the quadrilateral ABA1B1.
(iv) If C is the mid point is AB. Write down the co-ordinates of the point C1, the reflection of C in the origin.
(v) Assign the special name to quadrilateral ABC1B1.
Solution:
Two points A (3, 0) and B (0,4) have been plotted on the graph.
(i)∵ A1 is the reflection of A (3, 0) in the v-axis Its co-ordinates will be ( -3, 0)
(ii)∵ B1 is the reflection of B (0, 4) in the .x-axis co-ordinates of B, will be (0, -4)
(iii) The so formed figure ABA1B1 is a rhombus.
(iv) C is the mid point of AB co-ordinates of C” will be $$\frac { AP }{ AB } =\frac { 3 }{ 4 }$$
∵ C, is the reflection of C in the origin
co-ordinates of C, will be $$\left( \frac { -3 }{ 2 } ,-2 \right)$$
(v) The name of quadrilateral ABC1B1 is a trapezium because AB is parallel to B1C1.
Question 9.
The line segment joining A ( – 3, 1) and B (5, – 4) is a diameter of a circle whose centre is C. find the co-ordinates of the point C. (1990)
Solution:
∵ C is the centre of the circle and AB is the diameter
C is the midpoint of AB.
Let co-ordinates of C (x, y)
Question 10.
The mid-point of the line segment joining the points (3m, 6) and ( – 4, 3n) is (1, 2m – 1). Find the values of m and n.
Solution:
Let the mid-point of the line segment joining two points
A(3m, 6) and (-4, 3n) is P( 1, 2m – 1)
Question 11.
The co-ordinates of the mid-point of the line segment PQ are (1, – 2). The co-ordinates of P are ( – 3, 2). Find the co-ordinates of Q.(1992)
Solution:
Let the co-ordinates of Q be (x, y)
co-ordinates of P are (-3, 2) and mid-point of PQ are (1, -2) then
Question 12.
AB is a diameter of a circle with centre C ( – 2, 5). If point A is (3, – 7). Find:
(i) the length of radius AC.
(ii) the coordinates of B.
Solution:
AC = $$\sqrt { { \left( 3+2 \right) }^{ 2 }+{ \left( -7-5 \right) }^{ 2 } }$$
Question 13.
Find the reflection (image) of the point (5, – 3) in the point ( – 1, 3).
Solution:
Let the co-ordinates of the images of the point A (5, -3) be
A1 (x, y) in the point (-1, 3) then
the point (-1, 3) will be the midpoint of AA1.
Question 14.
The line segment joining A $$\left( -1,\frac { 5 }{ 3 } \right)$$ the points B (a, 5) is divided in the ratio 1 : 3 at P, the point where the line segment AB intersects y-axis. Calculate
(i) the value of a
(ii) the co-ordinates of P. (1994)
Solution:
Let P (x, y) divides the line segment joining
the points $$\left( -1,\frac { 5 }{ 3 } \right)$$, B(a, 5) in the ratio 1 : 3
Question 15.
The point P ( – 4, 1) divides the line segment joining the points A (2, – 2) and B in the ratio of 3 : 5. Find the point B.
Solution:
Let the co-ordinates of B be (x, y)
Co-ordinates of A (2, -2) and point P (-4, 1)
divides AB in the ratio of 3 : 5
Question 16.
(i) In what ratio does the point (5, 4) divide the line segment joining the points (2, 1) and (7 ,6) ?
(ii) In what ratio does the point ( – 4, b) divide the line segment joining the points P (2, – 2), Q ( – 14, 6) ? Hence find the value of b.
Solution:
(i) Let the ratio be m1 : m2 that the point (5, 4) divides
the line segment joining the points (2, 1), (7, 6).
$$5=\frac { { m }_{ 1 }\times 7+{ m }_{ 2 }\times 2 }{ { m }_{ 1 }+{ m }_{ 2 } }$$
Question 17.
The line segment joining A (2, 3) and B (6, – 5) is intercepted by the x-axis at the point K. Write the ordinate of the point k. Hence, find the ratio in which K divides AB. Also, find the coordinates of the point K.
Solution:
Let the co-ordinates of K be (x, 0) as it intersects x-axis.
Let point K divides the line segment joining the points
A (2, 3) and B (6, -5) in the ratio m1 : m2.
Question 18.
If A ( – 4, 3) and B (8, – 6), (i) find the length of AB.
(ii) in what ratio is the line joining AB, divided by the x-axis? (2008)
Solution:
Given A (-4, 3), B (8, -6)
Question 19.
(i) Calculate the ratio in which the line segment joining (3, 4) and( – 2, 1) is divided by the y-axis.
(ii) In what ratio does the line x – y – 2 = 0 divide the line segment joining the points (3, – 1) and (8, 9)? Also, find the coordinates of the point of division.
Solution:
(i) Let the point P divides the line segment joining the points
A (3, 4) and B (-2, 3) in the ratio of m1 : m2 and
let the co-ordinates of P be (0, y) as it intersects the y-axis
Question 20.
Given a line segment AB joining the points A ( – 4, 6) and B (8, – 3). Find:
(i) the ratio in which AB is divided by the y-axis.
(ii) find the coordinates of the point of intersection.
(iii)the length of AB.
Solution:
(i) Let the y-axis divide AB in the ratio m : 1. So,
Question 21.
(i) Write down the co-ordinates of the point P that divides the line joining A ( – 4, 1) and B (17,10) in the ratio 1 : 2.
(ii)Calculate the distance OP where O is the origin.
(iii)In what ratio does the y-axis divide the line AB ?
Solution:
(i) Let co-ordinate of P be (x, y) which divides the line segment joining the points
A ( -4, 1) and B(17, 10) in the ratio of 1 : 2.
Question 22.
Calculate the length of the median through the vertex A of the triangle ABC with vertices A (7, – 3), B (5, 3) and C (3, – 1)
Solution:
Let D (x, y) be the median of ΔABC through A to BC.
∴ D will be the midpoint of BC
∴ Co-ordinates of D will be,
Question 23.
Three consecutive vertices of a parallelogram ABCD are A (1, 2), B (1, 0) and C (4, 0). Find the fourth vertex D.
Solution:
Let O in the mid-point of AC the diagonal of ABCD
∴ Co-ordinates of O will be
Question 24.
If the points A ( – 2, – 1), B (1, 0), C (p, 3) and D (1, q) from a parallelogram ABCD, find the values of p and q.
Solution:
A (-2, -1), B (1, 0), C (p, 3) and D (1, q)
are the vertices of a parallelogram ABCD
∴ Diagonal AC and BD bisect each other at O
O is the midpoint of AC as well as BD
Let co-ordinates of O be (x, y)
When O is mid-point of AC, then
Question 25.
If two vertices of a parallelogram are (3, 2) ( – 1, 0) and its diagonals meet at (2, – 5), find the other two vertices of the parallelogram.
Solution:
Two vertices of a ||gm ABCD are A (3, 2), B (-1, 0)
and point of intersection of its diagonals is P (2, -5)
P is mid-point of AC and BD.
Let co-ordinates of C be (x, y), then
Question 26.
Prove that the points A ( – 5, 4), B ( – 1, – 2) and C (5, 2) are the vertices of an isosceles right angled triangle. Find the co-ordinates of D so that ABCD is a square.
Solution:
Points A (-5, 4), B (-1, -2) and C (5, 2) are given.
If these are vertices of an isosceles triangle ABC then
AB = BC.
Question 27.
Find the third vertex of a triangle if its two vertices are ( – 1, 4) and (5, 2) and mid point of one sides is (0, 3).
Solution:
Let A (-1, 4) and B (5, 2) be the two points and let D (0, 3)
be its the midpoint of AC and co-ordinates of C be (x, y).
Question 28.
Find the coordinates of the vertices of the triangle the middle points of whose sides are $$\left( 0,\frac { 1 }{ 2 } \right) ,\left( \frac { 1 }{ 2 } ,\frac { 1 }{ 2 } \right) and\left( \frac { 1 }{ 2 } ,0 \right)$$
Solution:
Let ABC be a ∆ in which $$D\left( 0,\frac { 1 }{ 2 } \right) ,E\left( \frac { 1 }{ 2 } ,\frac { 1 }{ 2 } \right) andF\left( \frac { 1 }{ 2 } ,0 \right)$$,
the mid-points of sides AB, BC and CA respectively.
Let co-ordinates of A be (x1, y1), B (x2, y2), C (x3, y3)
Question 29.
Show by section formula that the points (3, – 2), (5, 2) and (8, 8) are collinear.
Solution:
Let the point (5, 2) divides the line joining the points (3, -2) and (8, 8)
in the ratio of m1 : m2
Question 30.
Find the value of p for which the points ( – 5, 1), (1, p) and (4, – 2) are collinear.
Solution:
Let points A (-5, 1), B (1, p) and C (4, -2)
are collinear and let point A (-5, 1) divides
BC in the ratio in m1 : m2
Question 31.
A (10, 5), B (6, – 3) and C (2, 1) are the vertices of triangle ABC. L is the mid point of AB, M is the mid-point of AC. Write down the co-ordinates of L and M. Show that LM = $$\\ \frac { 1 }{ 2 }$$ BC.
Solution:
Co-ordinates of L will be
$$\left( \frac { 10+6 }{ 2 } ,\frac { 5-3 }{ 2 } \right) or\left( \frac { 16 }{ 2 } ,\frac { 2 }{ 2 } \right) or(8,1)$$
Question 32.
A (2, 5), B ( – 1, 2) and C (5, 8) are the vertices of a triangle ABC. P and.Q are points on AB and AC respectively such that AP : PB = AQ : QC = 1 : 2.
(i) Find the co-ordinates of P and Q.
(ii) Show that PQ = $$\\ \frac { 1 }{ 3 }$$ BC.
Solution:
A (2, 5), B (-1, 2) and C (5, 8) are the vertices of a ∆ABC,
P and Q are points on AB
and AC respectively such that $$\frac { AP }{ PB } =\frac { AQ }{ QC } =\frac { 1 }{ 2 }$$
Question 33.
The mid-point of the line segment AB shown in the adjoining diagram is (4, – 3). Write down die co-ordinates of A and B.
Solution:
A lies on x-axis and B on the y-axis.
Let co-ordinates of A be (x, 0) and of B be (0, y)
P (4, -3) is the mid-point of AB
Question 34.
Find the co-ordinates of the centroid of a triangle whose vertices are A ( – 1, 3), B(1, – 1) and C (5, 1) (2006)
Solution:
Co-ordinates of the centroid of a triangle,
whose vertices are (x1, y1), (x2, y2) and
Question 35.
Two vertices of a triangle are (3, – 5) and ( – 7, 4). Find the third vertex given that the centroid is (2, – 1).
Solution:
Let the co-ordinates of third vertices be (x, y)
and other two vertices are (3, -5) and (-7, 4)
and centroid = (2, -1).
Question 36.
The vertices of a triangle are A ( – 5, 3), B (p – 1) and C (6, q). Find the values of p and q if the centroid of the triangle ABC is the point (1, – 1).
Solution:
The vertices of ∆ABC are A (-5, 3), B (p, -1), C (6, q)
and the centroid of ∆ABC is O (1, -1)
co-ordinates of the centroid of ∆ABC will be
Hope given ML Aggarwal Class 10 Solutions for ICSE Maths Chapter 11 Section Formula Ex 11 are helpful to complete your math homework.
If you have any doubts, please comment below. Learn Insta try to provide online math tutoring for you.
## Selina Concise Mathematics Class 10 ICSE Solutions 2020-21 Edition
Learninsta.com provides step by step solutions for Selina Concise ICSE Solutions for Class 10 Mathematics. You can download the Selina Concise Mathematics ICSE Solutions for Class 10 with Free PDF download option. Selina Publishers Books of Concise Mathematics for Class 10 ICSE Solutions all questions are solved and explained by expert mathematic teachers as per ICSE board guidelines. By studying these Selina ICSE Solutions for Class 10 Maths you can easily get good marks in ICSE Class 10 Board Examinations.
## Understanding ICSE Mathematics Class 10 ML Aggarwal Solved Solutions
Get Latest Edition of ML Aggarwal Class 10 Solutions PDF Download on LearnInsta.com. It provides step by step solutions for ML Aggarwal Maths for Class 10 ICSE Solutions Pdf Download. You can download the Understanding ICSE Mathematics Class 10 ML Aggarwal Solved Solutions with Free PDF download option, which contains chapter wise solutions. APC Maths Class 10 Solutions ICSE all questions are solved and explained by expert Mathematic teachers as per ICSE board guidelines. By studying these ML Aggarwal Class 10 ICSE Solutions you can easily get good marks in ICSE Class 10 Board Examinations. You also refer Selina Concise Mathematics Class 10 Solutions for more practice.
APC Understanding ICSE Mathematics Class 10 ML Aggarwal Solutions 2018 Edition for 2019 Examinations
ML Aggarwal Class 10 Maths Chapter 1 Value Added Tax
ML Aggarwal Class 10 Maths Chapter 2 Banking
ML Aggarwal Class 10 Maths Chapter 3 Shares and Dividends
ML Aggarwal Class 10 Maths Chapter 4 Linear Inequations
ML Aggarwal Class 10 Maths Chapter 5 Quadratic Equations in One Variable
ML Aggarwal Class 10 Maths Chapter 6 Factorization
ML Aggarwal Class 10 Maths Chapter 7 Ratio and Proportion
ML Aggarwal Class 10 Maths Chapter 8 Matrices
ML Aggarwal Class 10 Maths Chapter 9 Arithmetic and Geometric Progressions
ML Aggarwal Class 10 Maths Chapter 10 Reflection
ML Aggarwal Class 10 Maths Chapter 11 Section Formula
ML Aggarwal Class 10 Maths Chapter 12 Equation of a Straight Line
ML Aggarwal Class 10 Maths Chapter 13 Similarity
ML Aggarwal Class 10 Maths Chapter 14 Locus
ML Aggarwal Class 10 Maths Chapter 15 Circles
ML Aggarwal Class 10 Maths Chapter 16 Constructions
ML Aggarwal Class 10 Maths Chapter 17 Mensuration
ML Aggarwal Class 10 Maths Chapter 18 Trigonometric Identities
ML Aggarwal Class 10 Maths Chapter 19 Trigonometric Tables
ML Aggarwal Class 10 Maths Chapter 20 Heights and Distances
ML Aggarwal Class 10 Maths Chapter 21 Measures of Central Tendency
ML Aggarwal Class 10 Maths Chapter 22 Probability
### FAQs on ML Aggarwal Class 10 Solutions
1. How do I download the PDF of ML Aggarwal Solutions in Class 10?
All you have to do is tap on the direct links available on our LearnInsta.com page to access the Class 10 ML Aggarwal Solutions in PDF format. You can download them easily from here for free of cost.
2. Where can I find the solutions for ML Aggarwal Maths Solutions for Class 10?
You can find the Solutions for ML Aggarwal Maths for Class 10 from our page. View or download them as per your convenience and aid your preparation to score well.
3. What are the best sources for Class 10 Board Exam Preparation?
Aspirants preparing for their Class 10 Board Exams can make use of the quick and direct links available on our website LearnInsta.com regarding ML Aggarwal Solutions.
4. Is solving ML Aggarwal Solutions Chapterwise benefit you during your board exams?
Yes, it can be of huge benefit during board exams as you will have indepth knowledge of all the topics by solving Chapterwsie ML Aggarwal Solutions.
5. Where to download Class 10 Maths ML Aggarwal Solutions PDF?
Candidates can download the Class 10 Maths ML Aggarwal Solutions PDF from the direct links available on our page. We don’t charge any amount from you and they are absolutely free of cost.
## Selina Concise Mathematics Class 10 ICSE Solutions Chapter 3 Shares and Dividend Ex 3A
These Solutions are part of Selina Concise Mathematics Class 10 ICSE Solutions. Here we have given Selina Concise Mathematics Class 10 ICSE Solutions Chapter 3 Shares and Dividend Ex 3A.
Other Exercises
Question 1.
How much money will be required to buy 400, ₹ 12.50 shares at a premium of ₹ 1?
Solution:
Number of shares purchased = 400
Rate of each share = ₹ 12.50
M.V. = ₹ 1 premium = ₹ 12.50 + ₹ 1 = ₹ 13.50
Amount of in vestment = ₹ 400 x ₹ 13.50 = ₹ 5400
Question 2.
How much money will be required to buy 250, ₹ 15 shares at a discount of ₹ 1.50?
Solution:
Number of shares = 250
M.V. = at ₹ 15 at a discount of ₹ 1.50 = ₹ 15 – ₹ 1.50 = ₹ 13.50
Amount of investment = ₹ 13.50 x 250 = ₹ 3375
Question 3.
A person buys 120 shares at a nominal value of ₹ 40 each, which he sells at ₹ 42.50 each. Find his profit and profit percent.
Solution:
No. of shares = 120
Nominal value of each share = ₹ 40.00
Profit at each share = ₹ 42.50 – ₹ 40.00 = ₹ 2.50
Total profit = 2.50 x 120 = ₹ 300
Cost price of 120 shares = ₹ 40 x 120 = ₹ 4,800
Question 4.
Find the cost of 85 shares of Rs. 60 each when quoted at ₹ 63.25
Solution:
No. of shares = 85
Market value of cach share = ₹ 63.25
Total cost = ₹ 63.25 x 85 = ₹ 5,376.25
Question 5.
A man invests ₹ 800 in buying 75 shares and when they are selling at a premium of ₹ 1.15, he sells all the shares. Find his profit and profit percent.
Solution:
Investment = ₹ 800
In first case face value of each share = ₹ 5
and market value of each share = ₹ 5.00 + ₹ 1.15 = ₹ 6.15
Gain on each share of ₹ 5 = ₹ 1.15
Question 6.
Find the annual income derived from 125, ₹ 120 shares paying 5% dividend.
Solution:
Amount of investment = ?
Number of shares purchased = 125 at ₹ 120, 5% dividend
Amount of investment = ₹ 125 x 120 = ₹ 15000
His annual income = 15000 x $$\frac { 5 }{ 100 }$$ = ₹ 750
Question 7.
A man invests ₹ 3,072 in a company paying 5% per annum when its ₹ 10 share can be bought for ₹ 16 each. Find:
(i) his annual income;
(ii) his percentage income on his investment.
Solution:
Total investment = ₹ 3,072
Market value of each shares = ₹ 16
Question 8.
A man invests ₹ 7,770 in a company paying 5 percent dividend when a share of nominal value of ₹ 100 sells at a premium of ₹ 5. Find :
(i) the number of shares bought;
(ii) annual income ;
(iii) percentage income ;
Solution:
Investment = ₹ 7770
Nominal value of each share = 100
Market value = 100 + 5 = 105
Question 9.
A man buys ₹ 50 shares of a company paying 12 percent dividend, at a premium of ₹ 10. Find :
(i) the market value of 320 shares ;
(ii) his annual income ;
(iii) his profit percent.
Solution:
(i) Market value of each share = ₹ 50 + ₹ 10 = ₹ 60
Market value of 320 shares = ₹ 60 x 320 = ₹ 19,200
(ii) Rate of dividend = 12%
Face value of 320 shares = Rs. 50 x 320 = Rs. 16,000
Question 10.
A man buys of Rs. 75 shares at a discount of Rs. 15 of a company paying 20% dividend. Find :
(i) the market value of 120 shares ;
(ii) his annual income ;
(iii) his profit percent.
Solution:
(i) Market value of one share = Rs. 75 – 15 = Rs. 60
Market value of 120 shares = Rs. 60 x 120 = Rs. 7,200
(ii) Rate of dividend = 20%
Face value of 120 shares = Rs. 75 x 120 = Rs. 9,000
Question 11.
A man has 300, ₹ 50 shares of a company paying 20% dividend. Find his net income after paying 3% income tax.
Solution:
No. of shares = 300
Face value of 50 shares = Rs. 50 x 300 = Rs. 15,000
Rate of dividend = 20%
Question 12.
A company pays dividend of 15 % on its ten-rupee shares from which it deducts income tax at the rate of 22%. Find the annual income of a man who owns one thousand shares of this company.
Solution:
No. of shares = 1,000
Face Value of each share = Rs. 10
Rate of dividend = 15%
Rate of income tax = 22%
Face value of 1,000 shares = 1,000 x 10 = Rs. 10,000
Total dividend = Rs. 10,000 x $$\frac { 15 }{ 100 }$$ = Rs. 1,500
Income tax deducted = Rs. 1500 x $$\frac { 22 }{ 100 }$$ = Rs. 330
Net income = Rs.1500 – Rs. 330 = Rs. 1170
Question 13.
A man invests Rs. 8,800 in buying shares of a company of face value of rupees hundred each at a premium of 10%. If he earns Rs. 1,200 at the end of the year as dividend find:
(i) the number of shares he has in the company;
(ii) the dividend percent per share. [2001]
Solution:
Investment = Rs. 8,800
Face value of each share = Rs. 100
Market value of each share = Rs. 100 + 10 = Rs. 110
Question 14.
A man invests Rs. 1,680 in buying shares of nominal value Rs. 24 and selling at 12% premium. The dividend on the shares is 15% per annum. Calculate :
(i) The number of shares he buys ;
(ii) The dividend he receives annually. [1999]
Solution:
Investment = Rs. 1680
Nominal value of each share = Rs. 24
Market value of each share = Rs. 24 + 12% of 24
= Rs. 24 + 2.88 = Rs. 26.88
Rate of dividend = 15%
(i) No. of shares = $$\frac { 1680 }{ 26.88 }$$ = 62.5
(ii) Face value of 62.5 shares = 62.5 x 24 = Rs. 1500
Amount of dividend = 1500 x $$\frac { 15 }{ 100 }$$ = Rs. 225
Question 15.
By investing Rs. 7,500 in a company paying 10 percent dividend, an annual income of Rs. 500 is received. What price is paid for each of Rs. 100 share? [1990]
Solution:
Investment = Rs. 7,500
Rate of dividend = 10%
Total income = Rs. 500
Hope given Selina Concise Mathematics Class 10 ICSE Solutions Chapter 3 Shares and Dividend Ex 3A are helpful to complete your math homework.
If you have any doubts, please comment below. Learn Insta try to provide online math tutoring for you.
## Selina Concise Mathematics Class 10 ICSE Solutions Chapter 2 Banking (Recurring Deposit Accounts) Ex 2B
These Solutions are part of Selina Concise Mathematics Class 10 ICSE Solutions. Here we have given Selina Concise Mathematics Class 10 ICSE Solutions Chapter 2 Banking Ex 2B.
Other Exercises
Question 1.
Pramod deposits ₹ 600 per month in a Recurring Deposit Account for 4 years. If the rate of interest is 8% per year; calculate the maturity value of his account.
Solution:
Deposit per month (P) = ₹ 600
Rate of interest (r) = 8%
Period (n) = 4 years = 48 months.
According to formula,
Maturity value = ₹ 600 x 48 + ₹ 4,704 = ₹ 28,800 + ₹ 4,704 = ₹ 33504
Question 2.
Ritu has a Recurring Deposit Account in a bank and deposits ₹ 80 per month for 18 months. Find the rate of interest paid by the bank if the maturity value of this account is ₹ 1,554.
Solution:
Let rate of interest = r%,
n = 18,
P = ₹ 80
and A is maturity value.
Using formula
Question 3.
The maturity value of a R.D. Account is ₹ 16,176. If the monthly installment is ₹ 400 and the rate of interest is 8%; find the time (period) of this R.D. Account.
Solution:
Here maturity value (A) = ₹ 16,176
Rate = 8%,
P = ₹ 400
Let period = n (No. of months)
Using formula :
I = A – P x n = 16,176 – 400 x n = 16,716 – 400n.
⇒ 48,528 – 1,200n = 4n² + 4n
⇒ 4n² + 4n + 1200n – 48,528 = 0
⇒ 4n² + 1,204n – 48,528 = 0
⇒ n² + 301n — 12,132 = 0 (dividing by 4)
⇒ n² – 36n + 337n – 12,132 = 0
⇒ n (n – 36) + 337 (n – 36) = 0
⇒ (n – 36) (n + 337) = 0
Either n = 36 months or n = -337, which is not possible.
Time = 36 months = 3 years
Question 4.
Mr. Bajaj needs ₹ 30,000 after 2 years. What least money (in multiple of ₹ 5) must he deposit every month in a recurring deposit account to get required money at the end of 2 years, the rate of interest being 8% p.a. ?
Solution:
Amount of maturity = ₹ 30000
Period (n) = 2 years = 24 months
Rate = 8% p.a.
Let x be the monthly deposit
Amount of monthly deposit in the multiple of ₹ 5 = ₹ 1155
Question 5.
Rishabh has a recurring deposit account in a post office for 3 years at 8% p.a. simple interest. If he gets ₹ 9,990 as interest at the time of maturity, find :
(i) the monthly installment.
(ii) the amount of maturity.
Solution:
Total interest = ₹ 9990
Period (n) = 3 years = 36 months
Rate of interest (r) = 8%
(i) Let monthly installment = x
Monthly installment = ₹ 2250
(ii) Amount of maturity = Principal + Interest
= 36 x 2250 + 9990
= ₹ 81000 + 9990 = ₹ 90990
Question 6.
Gopal has a cumulative deposit account and deposits ₹ 900 per month for a period of 4 years. If he gets ₹ 52,020 at the time of maturity, find the rate of interest.
Solution:
Maturity value = ₹ 52,020
Monthly installment (P) = ₹ 900
Total principal = ₹ 900 x 48 = ₹ 43200
Amount of interest = ₹ 52020 – ₹ 43200 = ₹ 8820
Let rate of interest = r%
Question 7.
Deepa has a 4 year recurring deposit account in a bank and deposits ₹ 1,800 per month. If she gets ₹ 1,800 per month. If she gets ₹ 1,08,450 at the time of maturity, find the rate of interest.
Solution:
Deposit per month = ₹ 1800
Period = 4 years = 48 months
Maturity value = ₹ 108450
Total principal = ₹ 1800 x 48 = ₹ 86400
Amount of interest = ₹ 108450 – 86400 = ₹ 22050
Let r be the rate of interest
Question 8.
Mr. Britto deposits a certain sum of money each month in a Recurring Deposit Account of a bank. If the rate of interest is of 8% per annum and Mr. Britto gets ₹ 8,088 from the bank after 3 years, find the value of his monthly installment. (2013)
Solution:
Let monthly installment = ₹ x
Period (n) = 3 x 12 months = 36 months
Question 9.
Sharukh opened a Recurring Deposit Account in a bank and deposited ₹ 800 per month for 1$$\frac { 1 }{ 2 }$$ years. If he received ₹ 15,084 at the time of maturity, find the rate of interest per annum. (2014)
Solution:
Money deposited per month (P) = ₹ 800
r = ?
Question 10.
Katrina opened a recurring deposit account with a Nationalised Bank for a period of 2 years. If the bank pays interest at the rate of 6% per annum and the monthly installment is ₹ 1,000, find the :
(i) interest earned in 2 years
(ii) maturity value. (2015)
Solution:
Period (n) = 2 years = 2 x 12 = 24 months
Rate of interest (r) = 6%
Monthly installment (P) = ₹ 1000
Question 11.
Mohan has a recurring deposit account in a bank for 2 years at 6% p.a. simple interest. If he gets ₹ 1200 as interest at the time of maturity, find :
(i) the monthly installment
(ii) the amount of maturity
Solution:
(i) Interest = ₹ 1200,
n = 2 x 12 = 24 months,
r = 6%
⇒ P = ₹ 800
So the monthly installment is ₹ 800
(ii) Total sum deposited = P x n = ₹ 800 x 24 = ₹ 19200
The amount that Mohan will get at the time of maturity = Total sum deposited + Interest Received
= ₹ 19200 + ₹ 1200 = ₹ 20400
Hence, the amount of maturity is ₹ 20400
Hope given Selina Concise Mathematics Class 10 ICSE Solutions Chapter 2 Banking Ex 2B are helpful to complete your math homework.
If you have any doubts, please comment below. Learn Insta try to provide online math tutoring for you. | 2022-06-25 08:08:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6862456798553467, "perplexity": 2572.540985896539}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103034877.9/warc/CC-MAIN-20220625065404-20220625095404-00185.warc.gz"} |
https://vlorbik.wordpress.com/2010/12/03/exam-of-november-4-2010/ | ### Exam of November 4, 2010
1. Compute the sum: $\sum_{i=1}^3 i^3$.
2. Prove by induction: $(\forall n \in {\Bbb Z}^+)6|(7^n - 1)$.
(“For every integer n greater than or equal to 1, six divides $7^n - 1$“.)
3. Prove that whenever a mod 6 = 3 and b mod 6 = 2, it is also true that ab mod 6 = 0.
(Remark: this shows that the “Zero Product Law” $(a \ne 0 \wedge b \ne 0) \rightarrow (ab \ne 0)$ is false in certain number systems.)
4. Prove that $(\forall x, y \in {\Bbb Q})xy \in {\Bbb Q}$.
(“The product of any two rational numbers is a rational number”.)
5. Prove by contradiction that there is no greatest Real number less than 17.
6. Prove that there is an odd integer k such that k mod 7 = 4.
1. 1. Compute the sum: $\sum_{i=1}^3 i^3$.
$\sum_{i=1}^3 i^3 = 1^3 + 2^3 + 3^3 = 36.$
all but one student got this i think.
so i should’ve gone a *little* harder.
the various sum-of-powers formulas
are proved by induction in the text
but i required only that this quarter’s
students *know how* to *expand* a sum.
there’s a little use of “index” notation…
stuff like $\bigcup_{i=1}^n X_i$
that i wanted to at least look at.
i’ve complained many times in many
other courses that there’s far too much
material in the sections-to-be-“covered”
to actually require from the students
one actually gets; that was sure the
case again here (though i’m not so
inclined to complain about it… it’s
almost as if by this level of the game
it should be *understood*… by student
and teacher alike… that one will
throw out great tracts of text so as
to write passable exams).
2. Prove by induction: $(\forall n \in {\Bbb Z}^+)6|(7^n - 1)$.
(“For every integer n greater than or equal to 1, six divides $7^n - 1$“.)
Base Case
Let P(n) denote the proposition $6|(7^n -1)$.
For our base step—P(1)—note that
6|(7^1 – 1) is true.
(Because 6|6; this in turn is true because 6=6*1 [and 1 is an integer]. One need not spell this out here; we’ve recently worked some proofs involving the definition of a-divides-b (a | b) though and there was certainly some confusion on the day. Many students (how many? Too many!) confuse the proposition “6|6” (a true statement) with the number “6/6” (the integer positive-one) in their writings.)
Induction Step
Now suppose, for some positive integer k, that P(k) is known to be true. In other words, fix k and suppose that $6|(7^k - 1)$. This means that $7^k -1 = 6a$ for some integer a. Now
$7^{k+1} - 1=$
$7\cdot7^k - 1 =$
$7\cdot7^k - 7 + 6=$
$7(7^k - 1) + 6=$
$7(6a) + 6=$
$6(7a + 1)\,.$
But 7a +1 is an integer, so this tells us that $6|(7^{k+1} -1)$. This is exactly P(k+1), so our induction is complete.
Only a handful of students produced satisfactory proofs; this is the only problem I’m (essentially) repeating from Exams I and II on an otherwise non-cumulative Exam III (and Final). With luck the homework exercises will have had some effect and I’ll have a much better time grading these.
3. Prove that whenever a mod 6 = 3 and b mod 6 = 2, it is also true that ab mod 6 = 0.
(Remark: this shows that the “Zero Product Law” $(a \ne 0 \wedge b \ne 0) \rightarrow (ab \ne 0)$ is false in certain number systems.)
Proof:
Suppose (for some integers a and b) that a mod 6 = 3 and b mod 6 = 2. In other words, suppose that integers p and q exist with a = 6p + 3 and b = 6q + 2. Then
ab =
(6p + 3) (6q + 2)=
36pq +12p + 18q + 6=
6(6pq +2p + 3q + 1).
Hence,
ab = 6t + 0
where t (= 6pq +2p + 3q + 1) is an integer. This is the same thing as to say that ab mod 6 = 0; we are done.
2. 4. Prove that $(\forall x, y \in {\Bbb Q})xy \in {\Bbb Q}$.
(“The product of any two rational numbers is a rational number”.)
Proof: Let x & y denote rational numbers.
Then, by the definition of “rational number”, there exist integers a, c, and non-zero integers b, d such that
x = a/b and y = c/d.
It follows that xy = (a/b)(c/d) = (ac)/(bd).
Now note that bd is nonzero (by the “Zero Product Law” for real numbers) and that ac and bd are both integers (by “closure”… the product of integers is an integer).
So xy satisfies all the properties of a rational number; we are done.
5. Prove by contradiction that there is no greatest Real number less than 17.
Suppose there is a greatest Real less than 17; call it M.
Now consider q= (M+17)/2. We have
q-M=
(M+17)/2 – 2M/2=
(17-M)/2.
This is a positive number (because 17>M), so
q-M >0 and
q > M.
But also 17-q = 34/2 – (M+17)/2 = (17-M)/2.
This is still positiive, so 17 > q.
M cannot be the greatest real
less than 17.
3. 6. Prove that there is an odd integer k such that k mod 7 = 4.
Let k = 11.
Then k = 11 = 7*1 + 4; note that $1 \in \Bbb Z$.
Hence k mod 7 = 4 (by the definition of “mod”).
Also k = 11 = 2*5 + 1 (and $5 \in \Bbb Z$).
So k is odd. Done.
The commonest mistake was to overlook the need to prove that k is odd… understandable of course. But we’d just been looking at lots of proofs about even-and-odd, so it’s really not too much to expect…
4. typo in #3. b mod 3 = 2. The “3” is missing.
5. oops. That’s a missing 6, not a missing 3, sorry.
6. blag
right; thanks.
the copy will be corrected at some point
(i’m not logged in).
7. see?
the error turned out to’ve been repeated
(at least once); the dangers of cut-and-paste.
• ## (Partial) Contents Page
Vlorbik On Math Ed ('07—'09)
(a good place to start!) | 2018-06-23 21:55:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 24, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8695821762084961, "perplexity": 1498.049826663263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865250.0/warc/CC-MAIN-20180623210406-20180623230406-00224.warc.gz"} |
https://edufrogs.com/probability-density-function-for-qpsk-health-and-social-care-essay/ | 1,542
19
Essay, 7 pages (1700 words)
Probability density function for qpsk health and social care essay
Advertising We'll write a high-quality original custom paper on Probability density function for qpsk health and social care essay just for you with a 15% discount for the 1st order
Figure 10: BPSK Tx/RxSource: Figure 23: BPSK transmitter/receiver in CDMA communication [8]Channel ModelThe transmitted waveform gets corrupted by Additive White Gaussian Noise , [8]. with and . References: The sources are listed in numbered accordingly, as in the following article.[8] http://www. dsplog. com/2007/08/05/bit-error-probability-for-bpsk-modulation/Calculate the probability of errorFor received signalWhen bit 1 is sentWhen bit 0 is sentBelow we have two cases of probability distribution function of [8]
.
Figure 11: BPSK probability densityFigure 11: BPSK Conditional probability density function [8]References: The sources are listed in numbered accordingly, as in the following article.[8] http://www. dsplog. com/2007/08/05/bit-error-probability-for-bpsk-modulationLet’s assume that and are both evenly probable then optimal decision boundary is formed by the threshold 0 [8].•When received signal is and it is greater than zero then the receiver assume was transmitted.•When received signal is and it is less or equal to zero then the receiver assume was transmitted.
.
1) Given the Probability of error that was transmitted [8]2) Given the Probability of error that was transmitted [8]Probabilities of errors given by and are represented in the figure 11 [8]: Total probability of bit error
.
At the beginning we have assumed and are equally probable hence the bit error probability given by below equation [8], References: The sources are listed in numbered accordingly, as in the following article.[8] http://www. dsplog. com/2007/08/05/bit-error-probability-for-bpsk-modulation/Figure 12: Bit error rate probability program with matlabFigure 13: Bit error probability cure for BPSK ModulationAS we can see here BPSK‘ s bit error rate of 10^-5 only required around 9db of EB/NO03. 02 Pulse Amplitude Modulation (4-PAM)For 4 PAM the constellation energy in average and this is assuming all alphabets are equally likely is [9],
.
After normalization constellation diagram of the 4-PAM signal is shown below [9]. Channel ModelThe transmitted waveform gets corrupted by Additive White Gaussian Noisewith and . then received signal would beor or or . Case 01 when was transmitted. References: The sources are listed in numbered accordingly, as in the following article.[9] http://www. dsplog. com/2007/10/07/symbol-error-rate-for-pam/Below it is shown the conditional probability distribution function of when was transmitted [9].
.
Figure 14 probability distribution function when was transmittedSource: Figure: Probability distribution function when the alphabet +3 is sent [9]Considering between the +1 and +3 midway points for received signal (threshold) [9]
.
The area show in blue color is the probability of error given is transmitted with threshold [9]:
.
-3 and +3 is symmetric Constellation is normally intuitive and the probability of error given is transmitted [9].
.
References: The sources are listed in numbered accordingly, as in the following article.[9] http://www. dsplog. com/2007/10/07/symbol-error-rate-for-pam/Case 02 when is transmittedBelow it is shown the conditional probability distribution function of when was transmitted show in red and green areas of the diagram [9].
.
Figure 15 Probability distribution function when the alphabet +1 is TransmittedSource: Figure: Probability distribution function when the alphabet +1 is sent [9]Since +1 and -1 constellation is symmetric, it is normally intuitive and probability of error given transmitted [9]Assuming that the S0 S1 S2 are similarly probable
.
References: The sources are listed in numbered accordingly, as in the following article.[9] http://www. dsplog. com/2007/10/07/symbol-error-rate-for-pam/Figure 16 ; 4 PAM matlab workFigure 17: Symbol error probability curve for 4PAMAS we can see here 4-PAM‘ s bit error rate of 10^-5 only required around 17db of EB/NO03. 03 Symbol Error Rate (SER) for QPSK (4-QAM) modulationFigure 18: QPSK Constellation plotSource: Figure: Constellation plot for QPSK (4-QAM) [10]To normalize the average energy we use scaling factor of and also like above we assume all the points equally likely in constellation [10]. Noise modelThe transmitted waveform gets corrupted by Additive White Gaussian Noise [10]. with and . References: The sources are listed in numbered accordingly, as in the following article.[10] http://www. dsplog. com/2007/11/06/symbol-error-rate-for-4-qam/Calculating the error probabilityCase 1) using the symbolThe conditional probability distribution function (PDF) of given was transmitted is: [10].
.
Figure 19: QPSK function of Probability densitySource: Figure: Probability density function for QPSK (4QAM) modulation [10]If Y falls in the area of highlighted in the diagram then the symbol will be decoded [10].
.
Was transmitted when the probability of real component of > 0, that area in show in the diagram outside of the red region [10]. References: The sources are listed in numbered accordingly, as in the following article.[10] http://www. dsplog. com/2007/11/06/symbol-error-rate-for-4-qam/Was transmitted when the probability of imaginary component of > 0 [10]. At least one of the symbols will be decoded with errors hence the symbol will be in error, therefore symbol error probabilityReferences: The sources are listed in numbered accordingly, as in the following article.[10] http://www. dsplog. com/2007/11/06/symbol-error-rate-for-4-qam/Figure 20: QPSK matlab workFigure 21: QPSK probability curveAS we can see here QPSK‘ s bit error rate of 10^-5 only required around 13db of EB/NO03. 04 Symbol Error Rate (SER) for 16-QAMThe average energy of the 16-QAM constellation is . The 16-QAM constellation is as shown in the figure belowFigure 22: constellation map of 16 QAMSource Figure: 16-QAM constellation [11]References: The sources are listed in numbered accordingly, as in the following article.[11] http://www. dsplog. com/2007/12/09/symbol-error-rate-for-16-qam/#Simulation%20ModelNoise modelwith and . Computing the probability of errorConsider the symbol in the inside, for exampleThe conditional probability distribution function (PDF) of given was transmitted is: [11]
.
Only if falls in the area in the black hashed region Symbol is decoded correctly [11]
.
Using the equations from symbol error probability of 4-PAM [11]
.
The probability of being decoded incorrectly is, The conditional probability distribution function (PDF) of given was transmitted is: [11]
.
References: The sources are listed in numbered accordingly, as in the following article.[11] http://www. dsplog. com/2007/12/09/symbol-error-rate-for-16-qam/#Simulation%20ModelAs can be seen from the above figure, the symbol is decoded correctly only if falls in the area in the red hashed region i. e. [11]
.
The probability of being decoded incorrectly is,
.
Consider the symbol which is not in the corner OR not in the inside, for exampleThe conditional probability distribution function (PDF) of given was transmitted is: [11]
.
Only if falls in the area in the blue hashed region then Symbol is decoded correctly i. e. [11]
.
Using the above two cases are reference,
.
The probability of being decoded incorrectly is, [11]References: The sources are listed in numbered accordingly, as in the following article.[11] http://www. dsplog. com/2007/12/09/symbol-error-rate-for-16-qam/#Simulation%20ModelFigure 23: 16 QAM probability curve matlab workFigure 24: Probability curve of 16 QAMAS we can see here 16-QAM‘ s bit error rate of 10^-5 only required around 20db of EB/NO03. 05 Symbol error rate for 16-PSKIn this post, let us try to derive the symbol error rate for 16-PSK (16-Phase Shift Keying) modulation [12]. Consider a general M-PSK modulation, where the alphabets, are used [12]. Figure 25: constellation plot of 16 PSKSource: Figure: 16-PSK constellation plot [12]Deriving the symbol error rateLet us the consider the symbol on the real axis, i. e
.
The received sym0bol . References: The sources are listed in numbered accordingly, as in the following article.[11] http://www. dsplog. com/2008/03/18/symbol-error-rate-for-16psk/#Simulation%20ModelWhere the additive noise follows the Gaussian probability distribution function, [12]with and . The conditional probability distribution function (PDF) of received symbol given was transmitted is: [12]
.
To derive the symbol error rate, to find the probability that the phase of the received symbol lies Boundary defined by the magenta lines i. e. from to . Here use these assumptions: [12](a) The signal to noise ratio, is reasonably high. For a reasonably high value of , then the real part of the received symbol is not afected by noise i. e., andthe imaginary part of the received symbol is equal to noise, i. e. [12]
.
(b) The value of M is reasonably high (typically M > 4 suffice)For a reasonably high value of M, the constellation points are closely spaced. Given so, the distance of the constellation point to the magenta line can be approximated as . References: The sources are listed in numbered accordingly, as in the following article.[12] http://www. dsplog. com/2008/03/18/symbol-error-rate-for-16psk/#Simulation%20ModelFigure 26: distance of Constellation pointsSource: Figure: Distance between constellation pointsGiven the above two assumptions, it can be observed that the symbol will be decoded incorrectly, if the imaginary component of received symbol isgreater than . The probability of being greater than is, Changing the variable to ,. Note: The complementary error function, . Similarly, the symbol will be decoded incorrectly, if the imaginary component of received symbol is less than . The probability of being less than is, The total probability of error given was transmittd is,
.
References: The sources are listed in numbered accordingly, as in the following article.[12] http://www. dsplog. com/2008/03/18/symbol-error-rate-for-16psk/#Simulation%20ModelFigure 27: 16 PSK probability curve matlab workFigure 28: Probability curve of 16 PSKAS we can see here 16-PSK‘ s bit error rate of 10^-5 only required around 24db of EB/NO03. 06 Symbol error rate of Quadrature Amplitude Modulation (QAM)
.
Defining the general M-QAM constellationPoints in the constellation is defined as, [13]Out of total bits half by half represented on the real axis and imaginary axis. The in-phase and quadrature signals are independent [13]. Average energy of an M-QAM constellationIn a general M-QAM constellation where and are even, the alphabets used are:, where . For example, considering a 64-QAM ( ) constellation, and the alphabets are [13]To compute the average energy of the M-QAM constellation: 1. Find the sum of energy of the individual alphabets2. Each alphabet is used times in the M-QAM constellation. Thus, to find the average energy from constellation symbols, divide the product of (1) and (2) above by . The average energy is, [13]References: The sources are listed in numbered accordingly, as in the following article.[13] http://eetimes. com/design/signal-processing-dsp/4017648/Symbol-error-rate-for-M-QAM-modulationPlugging in the number for 64-QAM, . Plugging in the number for 16-QAM, . From the above explanations, it is reasonably intuitive to guess that the scaling factor of , which is seen along with 16-QAM, 64-QAM constellations, respectively, is for normalizing the average transmit power to unity [13]. Finding the symbol error rateconsider the 64-QAM constellation define the M-QAM [13]. Figure 29 constellation plotSource: Figure 1. Constellation plot for 64-QAM modulation (without the scaling factor of ) [13]References: The sources are listed in numbered accordingly, as in the following article.[13] http://eetimes. com/design/signal-processing-dsp/4017648/Symbol-error-rate-for-M-QAM-modulationThere are three types of constellation points in M-QAM constellation: [13]1. Constellation points in the corner (red squares). The number of constellation points in the corner in any M-QAM constellation is always 4, [13]2. Constellation points in the inside (magneta diamonds). The number of constellation points in the inside is,
.
For example with M= 64, there are 36 constellation points in the inside. 3. Constellation points neither at the corner, nor at the center (blue stasr). The number of constellation points of this category is, [13]
.
For example with M= 64, there are 24 constellation points in the inside. References: The sources are listed in numbered accordingly, as in the following article.[13] http://eetimes. com/design/signal-processing-dsp/4017648/Symbol-error-rate-for-M-QAM-modulationFigure 30: 64 QAM probability curve matlab workFigure 31: Probability curve of 64 QAMAS we can see here 64-QAM‘ s bit error rate of 10^-5 only required around 26db of EB/NO
Your fellow student wrote and submitted this work, "Probability density function for qpsk health and social care essay". This sample can be used for research and reference in order to help you write your own paper. It is prohibited to utilize any part of the work without a valid citation.
If you own this paper and don't want it to be published on EduFrogs.com, you can ask for it to be taken down.
Cite this Essay
References
EduFrogs. (2022) 'Probability density function for qpsk health and social care essay'. 2 September.
Reference
EduFrogs. (2022, September 2). Probability density function for qpsk health and social care essay. Retrieved from https://edufrogs.com/probability-density-function-for-qpsk-health-and-social-care-essay/
References
EduFrogs. 2022. "Probability density function for qpsk health and social care essay." September 2, 2022. https://edufrogs.com/probability-density-function-for-qpsk-health-and-social-care-essay/.
1. EduFrogs. "Probability density function for qpsk health and social care essay." September 2, 2022. https://edufrogs.com/probability-density-function-for-qpsk-health-and-social-care-essay/.
Bibliography
EduFrogs. "Probability density function for qpsk health and social care essay." September 2, 2022. https://edufrogs.com/probability-density-function-for-qpsk-health-and-social-care-essay/.
Work Cited
"Probability density function for qpsk health and social care essay." EduFrogs, 2 Sept. 2022, edufrogs.com/probability-density-function-for-qpsk-health-and-social-care-essay/.
Other Samples
Get in Touch with Us
If you have ideas on how to improve Probability density function for qpsk health and social care essay, feel free to contact our team. Use the following email to reach to us: [email protected] | 2023-04-01 01:17:37 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8720417022705078, "perplexity": 2333.955182906011}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949694.55/warc/CC-MAIN-20230401001704-20230401031704-00552.warc.gz"} |
https://tex.stackexchange.com/questions/163664/no-page-break-after-index-with-idxlayout | # No page break after index with idxlayout
In the idxlayout manual I read that the memoir class with twocolumn options forces a page break after the index. Is there any way around this behavior? I would like the following to remain on one page, not three.
\documentclass[oneside,twocolumn]{memoir}
\usepackage[columns=1]{idxlayout}
\makeindex
\title{Title}
\begin{document}
\maketitle
\printindex
\index{Hi}{Hello}
\end{document}
• When you use twocolumn, the only way to switch to one column is ejecting a page. – egreg Mar 4 '14 at 16:25
• This is for the break before the index? When changing to columns=2, is still get three pages. I am looking for a small list of to do items just after the title, just before the start of the main text. – Marijnn Mar 4 '14 at 16:47
• Have you seen the todo package? – egreg Mar 4 '14 at 16:49
• Thanks, a dedicated package is probably a better way. I think fixme looks nice for my purposes.. – Marijnn Mar 4 '14 at 17:20
• fixme is very good for this and very powerful when you need it. – cfr Mar 5 '14 at 0:10
When you use the twocolumn option, the only possible way for switching to a one column format is starting a new page.
You might use the multicol package, instead, but probably the best strategy for the problem you're trying to solve is using the todo or fixme packages. | 2019-12-11 22:22:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7014146447181702, "perplexity": 1062.6945051860882}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540533401.22/warc/CC-MAIN-20191211212657-20191212000657-00411.warc.gz"} |
https://www.physicsforums.com/threads/long-probability-questions-need-help.256449/ | # Long probability questions need help
1. Sep 15, 2008
### sinni8
In a series of N = 1000 items the quality control engineer
assumes the proportion pd = 2:5% of defective items.
(a) What is the expected value and the standard deviation of the number of
defective items?
(b) Assume that Nd is a number of defective items. What is the probability
distribution of Nd:?
(c) Write the normal approximation of the probability distribution of Nd:
(d) Approximate the probability of less than 15 defective items with the aid of
the normal approximation of the probability distribution of Nd: What is the
exact probability?
(e) Assume that he observed Nd = 15 defective items. What is the 95% confidence
interval for the proportion of defective items?
(f) With Nd = 40 test the hypothesis H0 : pd = 2:5% against the alternative
Hα : pd > 2:5%:
(g) Suppose he wishes to estimate the proportion of defective items with accuracy
0:5% with 99% confidence. How many items should be taken for test?
2. Sep 15, 2008 | 2017-04-30 20:34:01 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8216916918754578, "perplexity": 1016.2896384729502}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125849.25/warc/CC-MAIN-20170423031205-00260-ip-10-145-167-34.ec2.internal.warc.gz"} |
https://zbmath.org/authors/?q=ai%3Afu.chuli | zbMATH — the first resource for mathematics
Fu, Chuli
Compute Distance To:
Author ID: fu.chuli Published as: Fu, Chu-Li; Fu, Chuli; Fu, C. L.; Fu, C.-L.; Fu, Chu-li; Fu, Chu Li; Fu, Ch.; Fu, C.
Documents Indexed: 146 Publications since 1981
all top 5
Co-Authors
8 single-authored 28 Xiong, Xiangtuan 24 Qian, Zhi 22 Yang, Fan 14 Feng, Xiaoli 13 Cheng, Wei 13 Qiu, Chunyu 12 Cheng, Hao 12 Ma, Yunjie 12 Zhang, Yuanxiang 11 Li, Hongfang 11 Li, Xiaoxiao 8 Dou, Fangfang 8 Zhu, Youbin 7 Yan, Liang 5 Yang, Fenglian 4 Deng, Zhiliang 4 Luo, Xuebo 3 Fu, Peng 3 Gao, Xiang 3 Hao, Xinsheng 3 Zhao, Hua 2 Eldén, Lars 2 Li, Zhenping 2 Liu, Tingting 2 Zhang, Qinge 1 Chen, Suiyang 1 Cheng, Jin 1 Fu, Rong 1 Gao, Jie 1 Hou, Changshun 1 Lu, Jiantong 1 Nan, Nan 1 Potier-Ferry, Michel 1 Qin, Feng-Juan 1 Ren, Yupeng 1 Shi, Rui 1 Tao, Jianhong 1 Wang, Zhilin 1 Wei, Ting 1 Wei, Xuemei 1 Xiao, Jujiao 1 Zhang, Jinsheng 1 Zhang, Junyong 1 Zhang, Yingqi 1 Zhao, Lingling 1 Zheng, Guanghui
all top 5
Serials
16 Journal of Lanzhou University. Natural Sciences 15 Applied Mathematics and Computation 11 Applied Mathematical Modelling 10 Journal of Computational and Applied Mathematics 10 Inverse Problems in Science and Engineering 7 Acta Mathematica Scientia. Series A. (Chinese Edition) 5 Computers & Mathematics with Applications 5 Journal of Mathematical Analysis and Applications 5 Mathematics and Computers in Simulation 5 Applied Mathematics Letters 4 Journal of Inverse and Ill-Posed Problems 3 Inverse Problems 3 International Journal of Mathematics and Mathematical Sciences 3 Mathematica Applicata 2 Journal of the Korean Mathematical Society 2 Mathematical and Computer Modelling 2 Engineering Analysis with Boundary Elements 2 Acta Mathematica Scientia. Series B. (English Edition) 2 Boundary Value Problems 2 Chinese Journal of Engineering Mathematics 1 International Journal of Engineering Science 1 Journal of Computational Physics 1 Journal of Engineering Mathematics 1 Mathematical Methods in the Applied Sciences 1 Acta Mathematica Sinica 1 International Journal for Numerical Methods in Engineering 1 SIAM Journal on Numerical Analysis 1 Applied Mathematics and Mechanics. (English Edition) 1 Mathematics in Practice and Theory 1 Chinese Annals of Mathematics. Series A 1 Advances in Mathematics 1 Applied Numerical Mathematics 1 Journal of Hebei Normal University. Natural Science Edition 1 International Journal of Computer Mathematics 1 Journal of Partial Differential Equations 1 Numerical Mathematics 1 Applied Mathematics. Series A (Chinese Edition) 1 Electronic Journal of Differential Equations (EJDE) 1 Computational and Applied Mathematics 1 Advances in Computational Mathematics 1 Acta Mathematica Scientia. Series A. (Chinese Edition) 1 Journal of Applied Mechanics and Technical Physics 1 Acta Mathematica Sinica. English Series 1 CMES. Computer Modeling in Engineering & Sciences 1 Journal of Applied Mathematics and Computing 1 International Journal of Wavelets, Multiresolution and Information Processing 1 International Journal for Numerical Methods in Biomedical Engineering 1 ISRN Applied Mathematics
all top 5
Fields
115 Partial differential equations (35-XX) 102 Numerical analysis (65-XX) 17 Classical thermodynamics, heat transfer (80-XX) 14 Operator theory (47-XX) 7 Topological groups, Lie groups (22-XX) 6 Functions of a complex variable (30-XX) 5 Abstract harmonic analysis (43-XX) 4 Harmonic analysis on Euclidean spaces (42-XX) 3 Mechanics of deformable solids (74-XX) 2 Integral equations (45-XX) 1 Calculus of variations and optimal control; optimization (49-XX) 1 Global analysis, analysis on manifolds (58-XX) 1 Probability theory and stochastic processes (60-XX)
Citations contained in zbMATH Open
105 Publications have been cited 1,242 times in 478 Documents Cited by Year
The method of fundamental solutions for the inverse heat source problem. Zbl 1244.80026
Yan, Liang; Fu, Chu-Li; Yang, Feng-Lian
2008
Fourier regularization for a backward heat equation. Zbl 1146.35420
Fu, Chu-Li; Xiong, Xiang-Tuan; Qian, Zhi
2007
A meshless method for solving an inverse spacewise-dependent heat source problem. Zbl 1157.65444
Yan, Liang; Yang, Feng-Lian; Fu, Chu-Li
2009
Simplified Tikhonov and Fourier regularization methods on a general sideways parabolic equation. Zbl 1055.65106
Fu, Chu-Li
2004
Fourth-order modified method for the Cauchy problem for the Laplace equation. Zbl 1093.65107
Qian, Zhi; Fu, Chu-Li; Xiong, Xiang-Tuan
2006
Two regularization methods for a Cauchy problem for the Laplace equation. Zbl 1132.35493
Qian, Zhi; Fu, Chu-Li; Li, Zhen-Ping
2008
Optimal error bound and Fourier regularization for identifying an unknown source in the heat equation. Zbl 1219.65100
Dou, Fang-Fang; Fu, Chu-Li; Yang, Feng-Lian
2009
A modified method for a backward heat conduction problem. Zbl 1112.65090
Qian, Zhi; Fu, Chu-Li; Shi, Rui
2007
Regularization strategies for a two-dimensional inverse heat conduction problem. Zbl 1118.35073
Qian, Zhi; Fu, Chu-Li
2007
The Fourier regularization for solving the Cauchy problem for the Helmholtz equation. Zbl 1169.65333
Fu, Chu-Li; Feng, Xiao-Li; Qian, Zhi
2009
Fourier regularization method for solving a Cauchy problem for the Laplace equation. Zbl 1258.65094
Fu, C.-L.; Li, H.-F.; Qian, Z.; Xiong, X.-T.
2008
Two numerical methods for solving a backward heat conduction problem. Zbl 1102.65098
Xiong, Xiang-Tuan; Fu, Chu-Li; Qian, Zhi
2006
Central difference regularization method for the Cauchy problem of the Laplace’s equation. Zbl 1148.65314
Xiong, Xiang-Tuan; Fu, Chu-Li
2006
The method of simplified Tikhonov regularization for dealing with the inverse time-dependent heat source problem. Zbl 1201.65176
Yang, Fan; Fu, Chu-Li
2010
A simplified Tikhonov regularization method for determining the heat source. Zbl 1201.65177
Yang, Fan; Fu, Chu-Li
2010
Wavelet and error estimation of surface heat flux. Zbl 1019.65074
Fu, Chuli; Qiu, Chunyu
2003
A computational method for identifying a spacewise-dependent heat source. Zbl 1190.65145
Yan, Liang; Fu, Chu-Li; Dou, Fang-Fang
2010
Determining an unknown source in the heat equation by a wavelet dual least squares method. Zbl 1172.35511
Dou, Fang-Fang; Fu, Chu-Li
2009
A modified Tikhonov regularization method for a spherically symmetric three-dimensional inverse heat conduction problem. Zbl 1122.65083
Cheng, Wei; Fu, Chu-Li; Qian, Zhi
2007
Two regularization methods for a spherically symmetric inverse heat conduction problem. Zbl 1387.35615
Cheng, Wei; Fu, Chu-Li; Qian, Zhi
2008
Source term identification for an axisymmetric inverse heat conduction problem. Zbl 1189.65215
Cheng, Wei; Zhao, Ling-Ling; Fu, Chu-Li
2010
Fourier truncation method for high order numerical derivatives. Zbl 1103.65023
Qian, Zhi; Fu, Chu-Li; Xiong, Xiang-Tuan; Wei, Ting
2006
A simple regularization method for stable analytic continuation. Zbl 1160.30023
Fu, Chu-Li; Dou, Fang-Fang; Feng, Xiao-Li; Qian, Zhi
2008
A quasi-boundary-value method for the Cauchy problem for elliptic equations with nonhomogeneous Neumann data. Zbl 1279.65129
Feng, Xiao-Li; Eldén, Lars; Fu, Chu-Li
2010
Identifying an unknown source term in a heat equation. Zbl 1183.65116
Dou, Fang-Fang; Fu, Chu-Li; Yang, Fan
2009
Identification of an unknown source depending on both time and space variables by a variational method. Zbl 1252.65106
Ma, Yun-Jie; Fu, Chu-Li; Zhang, Yuan-Xiang
2012
A mollification regularization method for the inverse spatial-dependent heat source problem. Zbl 1291.80010
Yang, Fan; Fu, Chu-Li
2014
Wavelets and regularization of the Cauchy problem for the Laplace equation. Zbl 1135.35093
Qiu, Chun-Yu; Fu, Chu-Li
2008
A regularization for a Riesz-Feller space-fractional backward diffusion problem. Zbl 1329.65208
Cheng, Hao; Fu, Chu-Li; Zheng, Guang-Hui; Gao, Jie
2014
The inverse source problem for time-fractional diffusion equation: stability analysis and regularization. Zbl 1329.35357
Yang, Fan; Fu, Chu-Li; Li, Xiao-Xiao
2015
A modified Tikhonov regularization for stable analytic continuation. Zbl 1198.30005
Fu, Chu-Li; Deng, Zhi-Liang; Feng, Xiao-Li; Dou, Fang-Fang
2009
Two approximate methods of a Cauchy problem for the Helmholtz equation. Zbl 1182.35237
Xiong, Xiang-Tuan; Fu, Chu-Li
2007
Fourier regularization method for solving the surface heat flux from interior observations. Zbl 1122.80016
Fu, Chu-Li; Xiong, Xiang-Tuan; Fu, Peng
2005
Fourier regularization method of a sideways heat equation for determining surface heat flux. Zbl 1124.65083
Xiong, Xiang-Tuan; Fu, Chu-Li; Li, Hong-Fang
2006
A modified method for high order numerical derivatives. Zbl 1109.65024
Qian, Zhi; Fu, Chu-Li; Feng, Xiao-Li
2006
Numerical approximation of solution of nonhomogeneous backward heat conduction problem in bounded region. Zbl 1166.65048
Feng, Xiao-Li; Qian, Zhi; Fu, Chu-Li
2008
An iteration regularization for a time-fractional inverse diffusion problem. Zbl 1254.65100
Cheng, Hao; Fu, Chu-Li
2012
A modified method for a non-standard inverse heat conduction problem. Zbl 1105.65097
Qian, Zhi; Fu, Chu-Li; Xiong, Xiang-Tuan
2006
Central difference schemes in time and error estimate on a non-standard inverse heat conduction problem. Zbl 1068.65117
Xiong, Xiangtuan; Fu, Chuli; Li, Hongfang
2004
A modified method for determining the surface heat flux of IHCP. Zbl 1202.80017
Qian, Z.; Fu, C.-L.; Xiong, X.-T.
2007
Two regularization methods to identify time-dependent heat source through an internal measurement of temperature. Zbl 1217.65183
Yang, Fan; Fu, Chu-Li
2011
A mollification regularization method for unknown source in time-fractional diffusion equation. Zbl 1304.35755
Yang, Fan; Fu, Chu-Li; Li, Xiao-Xiao
2014
Error estimates on a backward heat equation by a wavelet dual least squares method. Zbl 1154.65069
Xiong, Xiang-Tuan; Fu, Chu-Li
2007
A note on “Sideways heat equation and wavelets” and constant $$e^{\ast}$$. Zbl 1051.65090
Fu, Chu-Li; Qiu, Chun-Yu; Zhu, You-Bin
2002
Numerical pseudodifferential operator and Fourier regularization. Zbl 1207.65167
Fu, Chu-Li; Qian, Zhi
2010
A regularization method for solving the Cauchy problem for the Helmholtz equation. Zbl 1221.65295
Feng, Xiao-Li; Fu, Chu-Li; Cheng, Hao
2011
A spectral method for an axisymmetric backward heat equation. Zbl 1186.65130
Cheng, Wei; Fu, Chu-Li
2009
A Bayesian inference approach to identify a Robin coefficient in one-dimensional parabolic problems. Zbl 1169.65093
Yan, Liang; Yang, Fenglian; Fu, Chuli
2009
Determining surface temperature and heat flux by a wavelet dual least squares method. Zbl 1113.65096
Xiong, Xiang-Tuan; Fu, Chu-Li
2007
Two regularization methods and the order optimal error estimates for a sideways parabolic equation. Zbl 1077.80005
Fu, Peng; Fu, Chu-Li; Xiong, Xiang-Tuan; Li, Hong-Fang
2005
Wavelet and spectral regularization methods for a sideways parabolic equation. Zbl 1068.65116
Fu, Chuli; Xiong, Xiangtuan; Li, Hongfang; Zhu, Youbin
2005
The modified regularization method for identifying the unknown source on Poisson equation. Zbl 1236.35206
Yang, Fan; Fu, Chu-Li
2012
Identifying an unknown source term in radial heat conduction. Zbl 1258.65085
Cheng, Wei; Ma, Yun-Jie; Fu, Chu-Li
2012
The quasi-reversibility regularization method for identifying the unknown source for time fractional diffusion equation. Zbl 1443.35199
Yang, Fan; Fu, Chu-Li
2015
Two regularization methods for identification of the heat source depending only on spatial variable for the heat equation. Zbl 1181.35340
Yang, Fan; Fu, Chu-Li
2009
The a posteriori Fourier method for solving ill-posed problems. Zbl 1253.35210
Fu, Chu-Li; Zhang, Yuan-Xiang; Cheng, Hao; Ma, Yun-Jie
2012
Identifying an unknown source term in a spherically symmetric parabolic equation. Zbl 1261.65096
Cheng, Wei; Fu, Chu-Li
2013
Wavelet regularization for an inverse heat conduction problem. Zbl 1055.35139
Fu, Chu-Li; Zhu, You-Bin; Qiu, Chun-Yu
2003
A mollification regularization method for the Cauchy problem of an elliptic equation in a multi-dimensional case. Zbl 1206.65224
Cheng, Hao; Feng, Xiao-Li; Fu, Chu-Li
2010
A modified Tikhonov regularization method for an axisymmetric backward heat equation. Zbl 1210.35283
Cheng, Wei; Fu, Chu Li
2010
A mollification regularization method for stable analytic continuation. Zbl 1218.30004
Deng, Zhi-Liang; Fu, Chu-Li; Feng, Xiao-Li; Zhang, Yuan-Xiang
2011
An a posteriori truncation method for some Cauchy problems associated with Helmholtz-type equations. Zbl 1300.65080
Zhang, Yuan-Xiang; Fu, Chu-Li; Deng, Zhi-Liang
2013
An a posteriori parameter choice rule for the truncation regularization method for solving backward parabolic problems. Zbl 1291.65292
Zhang, Yuan-Xiang; Fu, Chu-Li; Ma, Yun-Jie
2014
On three spectral regularization methods for a backward heat conduction problem. Zbl 1132.35494
Xiong, Xiang-Tuan; Fu, Chu-Li; Qian, Zhi
2007
Central difference method of a nonstandard inverse heat conduction problem for determining surface heat flux from interior observations. Zbl 1092.65079
Xiong, Xiang-Tuan; Fu, Chu-Li; Li, Hong-Fang
2006
Wavelet regularization for an ill-posed problem of parabolic equation. Zbl 1043.35142
Qiu, Chunyu; Fu, Chuli
2002
Wavelet regularization with error estimates on a general sideways parabolic equation. Zbl 1052.35068
Fu, Chu-Li; Qiu, Chun-Yu; Zhu, You-Bin
2003
Wavelets and regularization of the sideways heat equation. Zbl 1055.65107
Qiu, Chun-Yu; Fu, Chu-Li; Zhu, You-Bin
2003
Wavelets and high order numerical differentiation. Zbl 1201.65227
Fu, Chu-Li; Feng, Xiao-Li; Qian, Zhi
2010
Stability and regularization of a backward parabolic PDE with variable coefficients. Zbl 1279.80002
Feng, Xiao-Li; Eldén, Lars; Fu, Chu-Li
2010
Identifying an unknown source in a space-fractional diffusion equation. Zbl 1324.35204
Yang, Fan; Fu, Chuli; Li, Xiaoxiao
2014
A spectral regularization method for solving surface heat flux on a general sideways parabolic. Zbl 1140.65065
Xiong, Xiang-Tuan; Fu, Chu-Li
2008
Spectral regularization methods for solving a sideways parabolic equation within the framework of regularization theory. Zbl 1162.65050
Xiong, Xiang-Tuan; Fu, Chu-Li; Cheng, Jin
2009
Numerical analytic continuation on bounded domains. Zbl 1352.65091
Fu, Chu-Li; Zhang, Yuan-Xiang; Cheng, Hao; Ma, Yun-Jie
2012
Optimal Tikhonov approximation for a sideways parabolic equation. Zbl 1130.65313
Fu, Chu-Li; Li, Hong-Fang; Xiong, Xiang-Tuan; Fu, Peng
2005
An optimal filtering method for stable analytic continuation. Zbl 1242.65051
Cheng, Hao; Fu, Chu-Li; Feng, Xiao-Li
2012
An optimal filtering method for the Cauchy problem of the Helmholtz equation. Zbl 1216.35169
Cheng, Hao; Fu, Chu-Li; Feng, Xiao-Li
2011
Regularization and error estimate for a spherically symmetric backward heat equation. Zbl 1279.35095
Cheng, Wei; Fu, Chu-Li; Qin, Feng-Juan
2011
A mollification method for a Cauchy problem for the Laplace equation. Zbl 1221.65249
Li, Zhenping; Fu, Chuli
2011
Determining surface heat flux in the steady state for the Cauchy problem for the Laplace equation. Zbl 1162.65400
Cheng, Hao; Fu, Chu-Li; Feng, Xiao-Li
2009
A mollification regularization method for identifying the time-dependent heat source problem. Zbl 1380.65230
Yang, Fan; Fu, Chu-Li; Li, Xiao-Xiao
2016
The revised generalized Tikhonov regularization for the inverse time-dependent heat source problem. Zbl 1304.47014
Yang, Fan; Fu, Chu-Li
2013
Semidiscrete central difference method in time for determining surface temperatures. Zbl 1079.35101
Qian, Zhi; Fu, Chu-Li; Xiong, Xiang-Tuan
2005
An iteration method for stable analytic continuation. Zbl 1334.65068
Cheng, Hao; Fu, Chu-Li; Zhang, Yuan-Xiang
2014
A modified Tikhonov regularization method for the Cauchy problem of Laplace equation. Zbl 1349.35080
Yang, Fan; Fu, Chuli; Li, Xiaoxiao
2015
Solving the axisymmetric inverse heat conduction problem by a wavelet dual least squares method. Zbl 1169.80303
Cheng, Wei; Fu, Chu-Li
2009
On the wrinkling and restabilization of highly stretched sheets. Zbl 1425.74309
Wang, T.; Fu, C.; Xu, F.; Huo, Y.; Potier-Ferry, M.
2019
Approximate inverse method for stable analytic continuation in a strip domain. Zbl 1210.65070
Zhang, Yuan-Xiang; Fu, Chu-Li; Yan, Liang
2011
A wavelet-Galerkin method for high order numerical differentiation. Zbl 1195.65026
Dou, Fang-Fang; Fu, Chu-Li; Ma, Yun-Jie
2010
A conditional stability result for backward heat equation. Zbl 1174.35550
Zhang, Yuanxiang; Fu, Chuli; Deng, Zhiliang
2008
Wavelets and numerical pseudodifferential operator. Zbl 1446.65029
Cheng, Hao; Fu, Chu-Li
2016
The Fourier regularization method for identifying the unknown source for a modified Helmholtz equation. Zbl 1324.35203
Yang, Fan; Fu, Chuli; Li, Xiaoxiao
2014
Fourier truncation regularization for a class of nonlinear backward heat equation. Zbl 1389.35185
Yang, Fan; Fu, Chuli; Li, Xiaoxiao; Ren, Yupeng
2017
Error estimates of a difference approximation method for a backward heat conduction problem. Zbl 1131.65082
Xiong, Xiang-Tuan; Fu, Chu-Li; Qian, Zhi; Gao, Xiang
2006
Semi-discrete central difference method for determining surface heat flux of IHCP. Zbl 1141.65074
Qian, Zhi; Fu, Chu-Li
2007
Fourier and Tikhonov regularization methods for solving a class of backward heat conduction problems. Zbl 1150.35591
Zhang, Junyong; Gao, Xiang; Fu, Chuli
2007
The a posteriori Fourier method for solving the Cauchy problem for the Laplace equation with nonhomogeneous Neumann data. Zbl 1438.35452
Fu, Chu-Li; Ma, Yun-Jie; Cheng, Hao; Zhang, Yuan-Xiang
2013
Further result of sideways heat equation and wavelets. Zbl 1006.65106
Fu, Chuli; Qiu, Chunyu; Zhu, Youbin
2001
Fourier regularization of an one-dimensional non-standard inverse heat conduction problem. Zbl 1007.35098
Qiu, Chunyu; Tao, Jianhong; Fu, Chuli
2002
Cauchy problem for the Laplace equation in cylinder domain. Zbl 1239.35180
Ma, Yun-Jie; Fu, Chu-Li
2012
On the wrinkling and restabilization of highly stretched sheets. Zbl 1425.74309
Wang, T.; Fu, C.; Xu, F.; Huo, Y.; Potier-Ferry, M.
2019
Fourier truncation regularization for a class of nonlinear backward heat equation. Zbl 1389.35185
Yang, Fan; Fu, Chuli; Li, Xiaoxiao; Ren, Yupeng
2017
A mollification regularization method for identifying the time-dependent heat source problem. Zbl 1380.65230
Yang, Fan; Fu, Chu-Li; Li, Xiao-Xiao
2016
Wavelets and numerical pseudodifferential operator. Zbl 1446.65029
Cheng, Hao; Fu, Chu-Li
2016
The inverse source problem for time-fractional diffusion equation: stability analysis and regularization. Zbl 1329.35357
Yang, Fan; Fu, Chu-Li; Li, Xiao-Xiao
2015
The quasi-reversibility regularization method for identifying the unknown source for time fractional diffusion equation. Zbl 1443.35199
Yang, Fan; Fu, Chu-Li
2015
A modified Tikhonov regularization method for the Cauchy problem of Laplace equation. Zbl 1349.35080
Yang, Fan; Fu, Chuli; Li, Xiaoxiao
2015
A mollification regularization method for the inverse spatial-dependent heat source problem. Zbl 1291.80010
Yang, Fan; Fu, Chu-Li
2014
A regularization for a Riesz-Feller space-fractional backward diffusion problem. Zbl 1329.65208
Cheng, Hao; Fu, Chu-Li; Zheng, Guang-Hui; Gao, Jie
2014
A mollification regularization method for unknown source in time-fractional diffusion equation. Zbl 1304.35755
Yang, Fan; Fu, Chu-Li; Li, Xiao-Xiao
2014
An a posteriori parameter choice rule for the truncation regularization method for solving backward parabolic problems. Zbl 1291.65292
Zhang, Yuan-Xiang; Fu, Chu-Li; Ma, Yun-Jie
2014
Identifying an unknown source in a space-fractional diffusion equation. Zbl 1324.35204
Yang, Fan; Fu, Chuli; Li, Xiaoxiao
2014
An iteration method for stable analytic continuation. Zbl 1334.65068
Cheng, Hao; Fu, Chu-Li; Zhang, Yuan-Xiang
2014
The Fourier regularization method for identifying the unknown source for a modified Helmholtz equation. Zbl 1324.35203
Yang, Fan; Fu, Chuli; Li, Xiaoxiao
2014
Identifying an unknown source term in a spherically symmetric parabolic equation. Zbl 1261.65096
Cheng, Wei; Fu, Chu-Li
2013
An a posteriori truncation method for some Cauchy problems associated with Helmholtz-type equations. Zbl 1300.65080
Zhang, Yuan-Xiang; Fu, Chu-Li; Deng, Zhi-Liang
2013
The revised generalized Tikhonov regularization for the inverse time-dependent heat source problem. Zbl 1304.47014
Yang, Fan; Fu, Chu-Li
2013
The a posteriori Fourier method for solving the Cauchy problem for the Laplace equation with nonhomogeneous Neumann data. Zbl 1438.35452
Fu, Chu-Li; Ma, Yun-Jie; Cheng, Hao; Zhang, Yuan-Xiang
2013
A wavelet regularization method for an inverse heat conduction problem with convection term. Zbl 1287.65074
Cheng, Wei; Zhang, Ying-Qi; Fu, Chu-Li
2013
Identification of an unknown source depending on both time and space variables by a variational method. Zbl 1252.65106
Ma, Yun-Jie; Fu, Chu-Li; Zhang, Yuan-Xiang
2012
An iteration regularization for a time-fractional inverse diffusion problem. Zbl 1254.65100
Cheng, Hao; Fu, Chu-Li
2012
The modified regularization method for identifying the unknown source on Poisson equation. Zbl 1236.35206
Yang, Fan; Fu, Chu-Li
2012
Identifying an unknown source term in radial heat conduction. Zbl 1258.65085
Cheng, Wei; Ma, Yun-Jie; Fu, Chu-Li
2012
The a posteriori Fourier method for solving ill-posed problems. Zbl 1253.35210
Fu, Chu-Li; Zhang, Yuan-Xiang; Cheng, Hao; Ma, Yun-Jie
2012
Numerical analytic continuation on bounded domains. Zbl 1352.65091
Fu, Chu-Li; Zhang, Yuan-Xiang; Cheng, Hao; Ma, Yun-Jie
2012
An optimal filtering method for stable analytic continuation. Zbl 1242.65051
Cheng, Hao; Fu, Chu-Li; Feng, Xiao-Li
2012
Cauchy problem for the Laplace equation in cylinder domain. Zbl 1239.35180
Ma, Yun-Jie; Fu, Chu-Li
2012
A wavelet method for the Cauchy problem for the Helmholtz equation. Zbl 1264.65181
Dou, Fang-Fang; Fu, Chu-Li
2012
The inverse problem of identifying the unknown source for the modified Helmholtz equation. Zbl 1274.35071
Yang, Fan; Fu, Chuli; Li, Xiaoxiao
2012
Solving a backward heat conduction problem by variational method. Zbl 1302.65214
Ma, Yun-Jie; Fu, Chu-Li; Zhang, Yuan-Xiang
2012
Two regularization methods to identify time-dependent heat source through an internal measurement of temperature. Zbl 1217.65183
Yang, Fan; Fu, Chu-Li
2011
A regularization method for solving the Cauchy problem for the Helmholtz equation. Zbl 1221.65295
Feng, Xiao-Li; Fu, Chu-Li; Cheng, Hao
2011
A mollification regularization method for stable analytic continuation. Zbl 1218.30004
Deng, Zhi-Liang; Fu, Chu-Li; Feng, Xiao-Li; Zhang, Yuan-Xiang
2011
An optimal filtering method for the Cauchy problem of the Helmholtz equation. Zbl 1216.35169
Cheng, Hao; Fu, Chu-Li; Feng, Xiao-Li
2011
Regularization and error estimate for a spherically symmetric backward heat equation. Zbl 1279.35095
Cheng, Wei; Fu, Chu-Li; Qin, Feng-Juan
2011
A mollification method for a Cauchy problem for the Laplace equation. Zbl 1221.65249
Li, Zhenping; Fu, Chuli
2011
Approximate inverse method for stable analytic continuation in a strip domain. Zbl 1210.65070
Zhang, Yuan-Xiang; Fu, Chu-Li; Yan, Liang
2011
A new numerical method for the inverse source problem from a Bayesian perspective. Zbl 1217.80154
Yan, Liang; Yang, Fenglian; Fu, Chuli
2011
The method of simplified Tikhonov regularization for dealing with the inverse time-dependent heat source problem. Zbl 1201.65176
Yang, Fan; Fu, Chu-Li
2010
A simplified Tikhonov regularization method for determining the heat source. Zbl 1201.65177
Yang, Fan; Fu, Chu-Li
2010
A computational method for identifying a spacewise-dependent heat source. Zbl 1190.65145
Yan, Liang; Fu, Chu-Li; Dou, Fang-Fang
2010
Source term identification for an axisymmetric inverse heat conduction problem. Zbl 1189.65215
Cheng, Wei; Zhao, Ling-Ling; Fu, Chu-Li
2010
A quasi-boundary-value method for the Cauchy problem for elliptic equations with nonhomogeneous Neumann data. Zbl 1279.65129
Feng, Xiao-Li; Eldén, Lars; Fu, Chu-Li
2010
Numerical pseudodifferential operator and Fourier regularization. Zbl 1207.65167
Fu, Chu-Li; Qian, Zhi
2010
A mollification regularization method for the Cauchy problem of an elliptic equation in a multi-dimensional case. Zbl 1206.65224
Cheng, Hao; Feng, Xiao-Li; Fu, Chu-Li
2010
A modified Tikhonov regularization method for an axisymmetric backward heat equation. Zbl 1210.35283
Cheng, Wei; Fu, Chu Li
2010
Wavelets and high order numerical differentiation. Zbl 1201.65227
Fu, Chu-Li; Feng, Xiao-Li; Qian, Zhi
2010
Stability and regularization of a backward parabolic PDE with variable coefficients. Zbl 1279.80002
Feng, Xiao-Li; Eldén, Lars; Fu, Chu-Li
2010
A wavelet-Galerkin method for high order numerical differentiation. Zbl 1195.65026
Dou, Fang-Fang; Fu, Chu-Li; Ma, Yun-Jie
2010
A meshless method for solving an inverse spacewise-dependent heat source problem. Zbl 1157.65444
Yan, Liang; Yang, Feng-Lian; Fu, Chu-Li
2009
Optimal error bound and Fourier regularization for identifying an unknown source in the heat equation. Zbl 1219.65100
Dou, Fang-Fang; Fu, Chu-Li; Yang, Feng-Lian
2009
The Fourier regularization for solving the Cauchy problem for the Helmholtz equation. Zbl 1169.65333
Fu, Chu-Li; Feng, Xiao-Li; Qian, Zhi
2009
Determining an unknown source in the heat equation by a wavelet dual least squares method. Zbl 1172.35511
Dou, Fang-Fang; Fu, Chu-Li
2009
Identifying an unknown source term in a heat equation. Zbl 1183.65116
Dou, Fang-Fang; Fu, Chu-Li; Yang, Fan
2009
A modified Tikhonov regularization for stable analytic continuation. Zbl 1198.30005
Fu, Chu-Li; Deng, Zhi-Liang; Feng, Xiao-Li; Dou, Fang-Fang
2009
A spectral method for an axisymmetric backward heat equation. Zbl 1186.65130
Cheng, Wei; Fu, Chu-Li
2009
A Bayesian inference approach to identify a Robin coefficient in one-dimensional parabolic problems. Zbl 1169.65093
Yan, Liang; Yang, Fenglian; Fu, Chuli
2009
Two regularization methods for identification of the heat source depending only on spatial variable for the heat equation. Zbl 1181.35340
Yang, Fan; Fu, Chu-Li
2009
Spectral regularization methods for solving a sideways parabolic equation within the framework of regularization theory. Zbl 1162.65050
Xiong, Xiang-Tuan; Fu, Chu-Li; Cheng, Jin
2009
Determining surface heat flux in the steady state for the Cauchy problem for the Laplace equation. Zbl 1162.65400
Cheng, Hao; Fu, Chu-Li; Feng, Xiao-Li
2009
Solving the axisymmetric inverse heat conduction problem by a wavelet dual least squares method. Zbl 1169.80303
Cheng, Wei; Fu, Chu-Li
2009
The method of fundamental solutions for the inverse heat source problem. Zbl 1244.80026
Yan, Liang; Fu, Chu-Li; Yang, Feng-Lian
2008
Two regularization methods for a Cauchy problem for the Laplace equation. Zbl 1132.35493
Qian, Zhi; Fu, Chu-Li; Li, Zhen-Ping
2008
Fourier regularization method for solving a Cauchy problem for the Laplace equation. Zbl 1258.65094
Fu, C.-L.; Li, H.-F.; Qian, Z.; Xiong, X.-T.
2008
Two regularization methods for a spherically symmetric inverse heat conduction problem. Zbl 1387.35615
Cheng, Wei; Fu, Chu-Li; Qian, Zhi
2008
A simple regularization method for stable analytic continuation. Zbl 1160.30023
Fu, Chu-Li; Dou, Fang-Fang; Feng, Xiao-Li; Qian, Zhi
2008
Wavelets and regularization of the Cauchy problem for the Laplace equation. Zbl 1135.35093
Qiu, Chun-Yu; Fu, Chu-Li
2008
Numerical approximation of solution of nonhomogeneous backward heat conduction problem in bounded region. Zbl 1166.65048
Feng, Xiao-Li; Qian, Zhi; Fu, Chu-Li
2008
A spectral regularization method for solving surface heat flux on a general sideways parabolic. Zbl 1140.65065
Xiong, Xiang-Tuan; Fu, Chu-Li
2008
A conditional stability result for backward heat equation. Zbl 1174.35550
Zhang, Yuanxiang; Fu, Chuli; Deng, Zhiliang
2008
Fourier regularization for a backward heat equation. Zbl 1146.35420
Fu, Chu-Li; Xiong, Xiang-Tuan; Qian, Zhi
2007
A modified method for a backward heat conduction problem. Zbl 1112.65090
Qian, Zhi; Fu, Chu-Li; Shi, Rui
2007
Regularization strategies for a two-dimensional inverse heat conduction problem. Zbl 1118.35073
Qian, Zhi; Fu, Chu-Li
2007
A modified Tikhonov regularization method for a spherically symmetric three-dimensional inverse heat conduction problem. Zbl 1122.65083
Cheng, Wei; Fu, Chu-Li; Qian, Zhi
2007
Two approximate methods of a Cauchy problem for the Helmholtz equation. Zbl 1182.35237
Xiong, Xiang-Tuan; Fu, Chu-Li
2007
A modified method for determining the surface heat flux of IHCP. Zbl 1202.80017
Qian, Z.; Fu, C.-L.; Xiong, X.-T.
2007
Error estimates on a backward heat equation by a wavelet dual least squares method. Zbl 1154.65069
Xiong, Xiang-Tuan; Fu, Chu-Li
2007
Determining surface temperature and heat flux by a wavelet dual least squares method. Zbl 1113.65096
Xiong, Xiang-Tuan; Fu, Chu-Li
2007
On three spectral regularization methods for a backward heat conduction problem. Zbl 1132.35494
Xiong, Xiang-Tuan; Fu, Chu-Li; Qian, Zhi
2007
Semi-discrete central difference method for determining surface heat flux of IHCP. Zbl 1141.65074
Qian, Zhi; Fu, Chu-Li
2007
Fourier and Tikhonov regularization methods for solving a class of backward heat conduction problems. Zbl 1150.35591
Zhang, Junyong; Gao, Xiang; Fu, Chuli
2007
Fourth-order modified method for the Cauchy problem for the Laplace equation. Zbl 1093.65107
Qian, Zhi; Fu, Chu-Li; Xiong, Xiang-Tuan
2006
Two numerical methods for solving a backward heat conduction problem. Zbl 1102.65098
Xiong, Xiang-Tuan; Fu, Chu-Li; Qian, Zhi
2006
Central difference regularization method for the Cauchy problem of the Laplace’s equation. Zbl 1148.65314
Xiong, Xiang-Tuan; Fu, Chu-Li
2006
Fourier truncation method for high order numerical derivatives. Zbl 1103.65023
Qian, Zhi; Fu, Chu-Li; Xiong, Xiang-Tuan; Wei, Ting
2006
Fourier regularization method of a sideways heat equation for determining surface heat flux. Zbl 1124.65083
Xiong, Xiang-Tuan; Fu, Chu-Li; Li, Hong-Fang
2006
A modified method for high order numerical derivatives. Zbl 1109.65024
Qian, Zhi; Fu, Chu-Li; Feng, Xiao-Li
2006
A modified method for a non-standard inverse heat conduction problem. Zbl 1105.65097
Qian, Zhi; Fu, Chu-Li; Xiong, Xiang-Tuan
2006
Central difference method of a nonstandard inverse heat conduction problem for determining surface heat flux from interior observations. Zbl 1092.65079
Xiong, Xiang-Tuan; Fu, Chu-Li; Li, Hong-Fang
2006
Error estimates of a difference approximation method for a backward heat conduction problem. Zbl 1131.65082
Xiong, Xiang-Tuan; Fu, Chu-Li; Qian, Zhi; Gao, Xiang
2006
Fourier regularization method for solving the surface heat flux from interior observations. Zbl 1122.80016
Fu, Chu-Li; Xiong, Xiang-Tuan; Fu, Peng
2005
Two regularization methods and the order optimal error estimates for a sideways parabolic equation. Zbl 1077.80005
Fu, Peng; Fu, Chu-Li; Xiong, Xiang-Tuan; Li, Hong-Fang
2005
Wavelet and spectral regularization methods for a sideways parabolic equation. Zbl 1068.65116
Fu, Chuli; Xiong, Xiangtuan; Li, Hongfang; Zhu, Youbin
2005
Optimal Tikhonov approximation for a sideways parabolic equation. Zbl 1130.65313
Fu, Chu-Li; Li, Hong-Fang; Xiong, Xiang-Tuan; Fu, Peng
2005
Semidiscrete central difference method in time for determining surface temperatures. Zbl 1079.35101
Qian, Zhi; Fu, Chu-Li; Xiong, Xiang-Tuan
2005
Simplified Tikhonov and Fourier regularization methods on a general sideways parabolic equation. Zbl 1055.65106
Fu, Chu-Li
2004
Central difference schemes in time and error estimate on a non-standard inverse heat conduction problem. Zbl 1068.65117
Xiong, Xiangtuan; Fu, Chuli; Li, Hongfang
2004
Wavelet and error estimation of surface heat flux. Zbl 1019.65074
Fu, Chuli; Qiu, Chunyu
2003
Wavelet regularization for an inverse heat conduction problem. Zbl 1055.35139
Fu, Chu-Li; Zhu, You-Bin; Qiu, Chun-Yu
2003
Wavelet regularization with error estimates on a general sideways parabolic equation. Zbl 1052.35068
Fu, Chu-Li; Qiu, Chun-Yu; Zhu, You-Bin
2003
...and 5 more Documents
all top 5
Cited by 514 Authors
71 Fu, Chuli 38 Xiong, Xiangtuan 35 Yang, Fan 32 Nguyen Huy Tuan 27 Li, Xiaoxiao 22 Wei, Ting 20 Qian, Zhi 15 Feng, Xiaoli 15 Trong, Dang Duc 14 Cheng, Wei 14 Liu, Chein-Shan 13 Cheng, Hao 12 Lesnic, Daniel 12 Ma, Yunjie 10 Liu, Songshu 10 Zhang, Yuanxiang 9 Wang, Jungang 9 Zheng, Guanghui 8 Hon, Yiu-Chung 8 Li, Hongfang 8 Qian, Ai-Lin 7 Zhao, Zhenyu 6 Dou, Fangfang 6 Hai, Dinh Nguyen Duy 6 Khanh, Tra Quoc 6 Marin, Liviu 6 Meng, Zehong 6 Minh, Nguyen Dang 6 Nguyen, Van Thinh 5 Feng, Lixin 5 Khoa, Vo Anh 5 Li, Zhenping 5 Wang, Fajie 5 Xue, Xuemin 5 Zhao, Jingjun 4 Aida-Zade, Kamil Rajab 4 Cheng, Xiaoliang 4 Dang Duc Trong 4 Fan, Chia-Ming 4 Kolodziej, Jan Adam 4 Liu, Ji-Chuan 4 Long, Le Dinh 4 Mierzwiczak, Magdalena 4 Ngo Van Hoa 4 Qiu, Chunyu 4 Qiu, Shufang 4 Quan, Pham Hoang 4 Rahimov, Anar B. 4 Ruan, Zhousheng 4 Shidfar, Abdollah 4 Wang, Jinru 4 Wang, Zewen 4 Wen, Jin 4 Yan, Liang 3 Abdullayev, Vaqif M. 3 Babaei, Afshin 3 Berntsson, Fredrik 3 Cao, Kai 3 Dehghan Takht Fooladi, Mehdi 3 Deng, Zhiliang 3 Gao, Jie 3 Gu, Yan 3 Guo, Hengzhen 3 Hazanee, A. 3 Ismailov, Mansur I. 3 Karimi, Milad 3 Khieu, Tran Thi 3 Kirane, Mokhtar 3 Kozlov, Vladimir A. 3 Li, Dungang 3 Liu, Jijun 3 Liu, Tao 3 Mpinganzima, Lydie 3 Pham Hoang Quan 3 Qin, Haihua 3 Ran, Yuhong 3 Ren, Yupeng 3 Rostamian, Malihe 3 Shahrezaee, Alimardan 3 Sun, Yao 3 Thang, Le Duc 3 Tran, Binh Thanh 3 Tran, Thanh Binh 3 Turesson, Bengt Ove 3 Wang, Junxia 3 Wu, Yujiang 3 Yang, Fenglian 3 Yang, Liu 3 Zhang, Hongwu 3 Zhou, Yubin 3 Zhu, Youbin 2 Adibi, Hojatollah 2 Alem, Leïla 2 Amirfakhrian, Majid 2 Arghand, Muhammad 2 Ashyralyev, Allaberen 2 Azari, Hossein 2 Boroomand, Bijan 2 Boussetila, Nadjib 2 Chang, Chih-Wen ...and 414 more Authors
all top 5
Cited in 101 Serials
63 Inverse Problems in Science and Engineering 36 Applied Mathematics and Computation 34 Journal of Computational and Applied Mathematics 29 Applied Mathematical Modelling 24 Engineering Analysis with Boundary Elements 22 Computers & Mathematics with Applications 22 Journal of Inverse and Ill-Posed Problems 15 Applied Numerical Mathematics 12 Boundary Value Problems 10 Applied Mathematics Letters 9 Applicable Analysis 9 Journal of Mathematical Analysis and Applications 9 International Journal of Computer Mathematics 8 Mathematics and Computers in Simulation 8 Numerical Algorithms 7 Computational and Applied Mathematics 7 Mathematical Problems in Engineering 7 Advances in Difference Equations 6 Mathematical Methods in the Applied Sciences 6 Journal of Inequalities and Applications 5 International Journal of Heat and Mass Transfer 5 Applied Mathematics and Mechanics. (English Edition) 4 Advances in Computational Mathematics 4 International Journal of Wavelets, Multiresolution and Information Processing 4 Complex Variables and Elliptic Equations 4 Advances in Mathematical Physics 3 Acta Mathematica Vietnamica 3 Numerical Functional Analysis and Optimization 3 Abstract and Applied Analysis 3 Nonlinear Analysis. Real World Applications 3 Journal of Applied Mathematics 3 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 2 Inverse Problems 2 Journal of Computational Physics 2 Journal of Engineering Mathematics 2 Acta Mathematicae Applicatae Sinica. English Series 2 Computational Mechanics 2 Numerical Methods for Partial Differential Equations 2 Mathematical and Computer Modelling 2 Journal of Scientific Computing 2 Applications of Mathematics 2 Cybernetics and Systems Analysis 2 International Journal of Numerical Methods for Heat & Fluid Flow 2 Acta Mathematica Sinica. English Series 2 Communications in Nonlinear Science and Numerical Simulation 2 Analysis in Theory and Applications 2 ISRN Mathematical Analysis 2 Journal of Applied Analysis and Computation 2 Evolution Equations and Control Theory 2 East Asian Journal on Applied Mathematics 2 International Journal of Applied and Computational Mathematics 2 Open Mathematics 1 International Journal of Modern Physics B 1 Bulletin of the Australian Mathematical Society 1 Computer Physics Communications 1 International Journal of Engineering Science 1 Indian Journal of Pure & Applied Mathematics 1 International Journal of Solids and Structures 1 Journal of Mathematical Biology 1 Journal of Mathematical Physics 1 Ukrainian Mathematical Journal 1 Chaos, Solitons and Fractals 1 BIT 1 Calcolo 1 Czechoslovak Mathematical Journal 1 Fuzzy Sets and Systems 1 International Journal of Mathematics and Mathematical Sciences 1 International Journal for Numerical Methods in Engineering 1 Meccanica 1 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 1 Chinese Annals of Mathematics. Series B 1 Bulletin of the Iranian Mathematical Society 1 Statistics 1 Journal of Integral Equations and Applications 1 Computational Mathematics and Mathematical Physics 1 SIAM Journal on Mathematical Analysis 1 Journal of Mathematical Imaging and Vision 1 Georgian Mathematical Journal 1 Complexity 1 The Journal of Fourier Analysis and Applications 1 Journal of Mathematical Chemistry 1 Taiwanese Journal of Mathematics 1 Communications of the Korean Mathematical Society 1 Journal of Dynamical and Control Systems 1 Differential Equations 1 Sādhanā 1 Journal of Applied Mathematics and Computing 1 Analysis and Applications (Singapore) 1 International Journal of Computational Methods 1 Mathematics in Computer Science 1 Journal of Fixed Point Theory and Applications 1 European Journal of Pure and Applied Mathematics 1 Tbilisi Mathematical Journal 1 São Paulo Journal of Mathematical Sciences 1 International Journal of Differential Equations 1 Journal of Pseudo-Differential Operators and Applications 1 Axioms 1 Mathematical Sciences 1 International Journal of Partial Differential Equations 1 Computational Methods for Differential Equations ...and 1 more Serials
all top 5
Cited in 32 Fields
380 Numerical analysis (65-XX) 341 Partial differential equations (35-XX) 65 Classical thermodynamics, heat transfer (80-XX) 55 Operator theory (47-XX) 12 Functions of a complex variable (30-XX) 12 Harmonic analysis on Euclidean spaces (42-XX) 11 Mechanics of deformable solids (74-XX) 8 Potential theory (31-XX) 8 Integral equations (45-XX) 8 Calculus of variations and optimal control; optimization (49-XX) 7 Statistics (62-XX) 6 Real functions (26-XX) 6 Ordinary differential equations (34-XX) 6 Fluid mechanics (76-XX) 6 Systems theory; control (93-XX) 5 Optics, electromagnetic theory (78-XX) 5 Information and communication theory, circuits (94-XX) 4 Functional analysis (46-XX) 3 Global analysis, analysis on manifolds (58-XX) 3 Operations research, mathematical programming (90-XX) 2 Linear and multilinear algebra; matrix theory (15-XX) 2 Special functions (33-XX) 2 Approximations and expansions (41-XX) 2 Integral transforms, operational calculus (44-XX) 2 Probability theory and stochastic processes (60-XX) 2 Geophysics (86-XX) 2 Biology and other natural sciences (92-XX) 1 Number theory (11-XX) 1 Dynamical systems and ergodic theory (37-XX) 1 Differential geometry (53-XX) 1 Computer science (68-XX) 1 Quantum theory (81-XX) | 2021-07-24 06:09:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5756434798240662, "perplexity": 7535.806375356019}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150129.50/warc/CC-MAIN-20210724032221-20210724062221-00142.warc.gz"} |
https://www.gamedev.net/forums/topic/610391-very-weird-matrix-behavior-in-glsl/ | # Very weird matrix behavior in GLSL
This topic is 2350 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
Hi Everbody,
Just came across a very weird behavior with matrix multiplication in GLSL shaders.
Trying to break this very common line of code:
gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
into
vec4 pos = gl_ModelViewMatrix * gl_Vertex; gl_Position = gl_ProjectionMatrix * pos;
but resulting gl_Position that gets calculated is different in these two samples.
This looks very weird, as matrix multiplication operation is associative (A * B) * C = A * (B * C).
Anybody came across this before? Why does this happen and is there a way to fix it?
Thanks,
Ruben
##### Share on other sites
I had the same issue before but couldn't tell you what I did, I think I just don't use the one that works and any other calculations I have a separate variable. So if you need "pos" variable, then use it but just make sure your gl_Position is always the one that you have that works.
##### Share on other sites
Matrices aren't communitive. You MUST multiply the first two together first, then multiply that result into gl_Vertex (which is acting as a column matrix in this operation). To see this for yourself, try multiplying them the original way, then do it the way you're trying to do it CPU-side, and print the results to the console. You'll see different results. Doing it by hand may also help realize what's going on.
##### Share on other sites
Matrices aren't communitive. You MUST multiply the first two together first, then multiply that result into gl_Vertex (which is acting as a column matrix in this operation). To see this for yourself, try multiplying them the original way, then do it the way you're trying to do it CPU-side, and print the results to the console. You'll see different results. Doing it by hand may also help realize what's going on.
I'm not saying that matrix multiplication is commutative A * B = B * A is not true for matrices. But it is supposed to be associative, which means that for every given matrices A, B, C following statement is true: (A * B) * C = A * (B * C). In my case A can be treated as gl_ProjectionMatrix, B - as gl_ModelViewMatrix, and C - as a gl_Vertex. Quote from wiki: http://en.wikipedia.org/wiki/Matrix_multiplication#Properties
I understand that there is something happening on the GPU side, but from math perspective, no matter what is the sequence of operations, result should be the same. I just did a test with Matlab with various random matrices/vectors and results were same.
##### Share on other sites
Try:
vec4 pos = gl_ModelViewMatrix * gl_Vertex;
gl_Position = gl_ProjectionMatrix * vec4(pos.xyz, 1.0);
Thats the only thing I can think of is something with the w component. Regardless if you need:
vec4 pos = gl_ModelViewMatrix * gl_Vertex;
//Do some calculations with pos variable
gl_Position = gl_ProjectionMatrix * pos;
You would have the same exact amount of lines of code by doing:
vec4 pos = gl_ModelViewMatrix * gl_Vertex;
//Do some calculations with pos variable
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
so just do that anyway.
##### Share on other sites
Try:
vec4 pos = gl_ModelViewMatrix * gl_Vertex;
gl_Position = gl_ProjectionMatrix * vec4(pos.xyz, 1.0);
Thats the only thing I can think of is something with the w component. Regardless if you need:
vec4 pos = gl_ModelViewMatrix * gl_Vertex;
//Do some calculations with pos variable
gl_Position = gl_ProjectionMatrix * pos;
You would have the same exact amount of lines of code by doing:
vec4 pos = gl_ModelViewMatrix * gl_Vertex;
//Do some calculations with pos variable
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
so just do that anyway.
Trick with setting w = 1.0 does not make any difference. I know that I can use gl_ModelViewProjectionMatrix instead, but that makes it very weird and I do not have any confidence when using matrix multiplication any more....
##### Share on other sites
Trying to break this very common line of code:
gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;into vec4 pos = gl_ModelViewMatrix * gl_Vertex; gl_Position = gl_ProjectionMatrix * pos;
Shouldn't you be concatenating the matrices, and then using the result to transform the position?vec4 modelViewProj = gl_ProjectionMatrix * gl_ModelViewMatrix; gl_Position = modelViewProj * gl_Vertex;actually, no, I don't see a problem with your method... transforming a point into view-space, and then projecting it should work too... :/
##### Share on other sites
[quote name='rubenhak' timestamp='1315637398' post='4859902']Trying to break this very common line of code:
gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;into vec4 pos = gl_ModelViewMatrix * gl_Vertex; gl_Position = gl_ProjectionMatrix * pos;
Shouldn't you be concatenating the matrices, and then using the result to transform the position?vec4 modelViewProj = gl_ProjectionMatrix * gl_ModelViewMatrix; gl_Position = modelViewProj * gl_Vertex;actually, no, I don't see a problem with your method... transforming a point into view-space, and then projecting it should work too... :/
[/quote]
That could be another way of doing, but still cannot see a reason why doing my way does not work...
##### Share on other sites
This looks very weird, as matrix multiplication operation is associative (A * B) * C = A * (B * C).
Except that they are talking about matrix multiplcation. That is, C is not a vector when they write that.
The meaning is that (A * B) * C will create the same matrix as A * (B * C).
Using (gl_ModelViewProjectionMatrix * gl_Vertex) is the same as using (gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex) because the result of (gl_ProjectionMatrix * gl_ModelViewMatrix) is a matrix equal to gl_ModelViewProjectionMatrix.
L. Spiro
##### Share on other sites
[quote name='rubenhak' timestamp='1315637398' post='4859902']This looks very weird, as matrix multiplication operation is associative (A * B) * C = A * (B * C).
Except that they are talking about matrix multiplcation. That is, C is not a vector when they write that.
The meaning is that (A * B) * C will create the same matrix as A * (B * C).
Using (gl_ModelViewProjectionMatrix * gl_Vertex) is the same as using (gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex) because the result of (gl_ProjectionMatrix * gl_ModelViewMatrix) is a matrix equal to gl_ModelViewProjectionMatrix.
L. Spiro
[/quote]
Vector is a custom case of a m-by-n matrix where m (or n) is equal to 1.
I need to calculate value of pos:
pos = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
I just want to do it differently:
p1 = gl_ModelViewMatrix * gl_Vertex;
p2 = gl_ProjectionMatrix * p1;
resulting p2 is not equal to pos. Could you explain me why? Manually I substitute any random values into gl_ProjectionMatrix, gl_ModelViewMatrix and gl_Vertex values and calculate pos and p2 manually and get equal results. However, in GLSL getting different values. Any ideas why?
##### Share on other sites
if you use opencl or cuda, or some other method to get actual values then u would be alright., i mean one method would be to create 4 buffers for an fbo. and draw a triangle
gl_Position = ftransform()
but send to your pixel shader vec4 (colors) of your final p2.xyzw into buffer 1, write the intermediate p1.xyzw into buffer 2, etc and then you can investigate the numbers.
Except that they are talking about matrix multiplcation. That is, C is not a vector when they write that.[/quote]
Right, but what we are all getting at is:
[projectionMatrix]*[Vector] what if Vector is pre-transformed by 20 matrix translations: it will still compute to a vector in world space that will still be allowed to be projected on the screen by a projection matrix. I can put Vector = 20,20,20,1 or I can call glTranslate(1,1,1) 19 times on a vector = 1,1,1,1, which makes it 20,20,20,1 and then project it after that and be the same result.
which looks like:
[projection][m1][m2]........[m19][vector]
[projection][translated vector], same result, and you can do the math on it, it should be the same result.
Trick with setting w = 1.0 does not make any difference.[/quote]
I'm just guessing, I went through the same thing and just did what I posted. It is obviously doing something, but you have to debug and get the numbers. Actually now that I think of it, gl_ModelViewProjectionMatrix might be stored different, I think that gl_ProjectionMatrix on its own is f'd up, just debug the numbers and compare because if you even do this:
my_glMViewProjection = gl_ProjectionMatrix*gl_ModelViewMatrix; // not equal to: gl_ModelviewProjectionMatrix;
I believe that does not even work. Try it out.
##### Share on other sites
Try sending the matrices yourself instead of using the built-in ones.
Try pos * gl_ProjectionMatrix instead of gl_ProjectionMatrix * pos.
L. Spiro
##### Share on other sites
Vector is a custom case of a m-by-n matrix where m (or n) is equal to 1.
I need to calculate value of pos:
pos = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
I just want to do it differently:
p1 = gl_ModelViewMatrix * gl_Vertex;
p2 = gl_ProjectionMatrix * p1;
resulting p2 is not equal to pos. Could you explain me why? Manually I substitute any random values into gl_ProjectionMatrix, gl_ModelViewMatrix and gl_Vertex values and calculate pos and p2 manually and get equal results. However, in GLSL getting different values. Any ideas why?
[/quote]
Pass the matrices as uniforms and then do the computation. I'm using version 410 and GLM math to create the matrices and I have no issues if I use intermediate variables. Of course, I'm not using the built-in types.
##### Share on other sites
if you use opencl or cuda, or some other method to get actual values then u would be alright., i mean one method would be to create 4 buffers for an fbo. and draw a triangle
gl_Position = ftransform()
but send to your pixel shader vec4 (colors) of your final p2.xyzw into buffer 1, write the intermediate p1.xyzw into buffer 2, etc and then you can investigate the numbers.
Except that they are talking about matrix multiplcation. That is, C is not a vector when they write that.
Right, but what we are all getting at is:
[projectionMatrix]*[Vector] what if Vector is pre-transformed by 20 matrix translations: it will still compute to a vector in world space that will still be allowed to be projected on the screen by a projection matrix. I can put Vector = 20,20,20,1 or I can call glTranslate(1,1,1) 19 times on a vector = 1,1,1,1, which makes it 20,20,20,1 and then project it after that and be the same result.
which looks like:
[projection][m1][m2]........[m19][vector]
[projection][translated vector], same result, and you can do the math on it, it should be the same result.
Trick with setting w = 1.0 does not make any difference.[/quote]
I'm just guessing, I went through the same thing and just did what I posted. It is obviously doing something, but you have to debug and get the numbers. Actually now that I think of it, gl_ModelViewProjectionMatrix might be stored different, I think that gl_ProjectionMatrix on its own is f'd up, just debug the numbers and compare because if you even do this:
my_glMViewProjection = gl_ProjectionMatrix*gl_ModelViewMatrix; // not equal to: gl_ModelviewProjectionMatrix;
I believe that does not even work. Try it out.
[/quote]
All the mystery is that even with my_glMViewProjection calculation is correct. Let me bring some facts:
1) The goal is to compute (not willing to use gl_ModelViewProjectionMatrix at all):
[color=#1C2837][size=2]pos = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
[color=#1C2837][size=2]
[color=#1C2837][size=2]2) Flowing computes correct value: r1 equals pos
[color=#1C2837][size=2]mat4 mat = [color=#1C2837][size=2]gl_ProjectionMatrix * [color=#1C2837][size=2]gl_ModelViewMatrix;
[color=#1C2837][size=2]r1 = mat * gl_Vertex;
[color=#1C2837][size=2]
[color=#1C2837][size=2]3) These statements compute r2 which is different from r1 and pos
[color=#1C2837][size=2]p1 = gl_ModelViewMatrix * gl_Vertex;[color=#1C2837][size=2]r2 = gl_ProjectionMatrix * p1;
##### Share on other sites
Try sending the matrices yourself instead of using the built-in ones.
Try pos * gl_ProjectionMatrix instead of gl_ProjectionMatrix * pos.
L. Spiro
Tried all possible postfix/prefix multiplications together with transposing the matrix, but still, cannot get the final result.
##### Share on other sites
Vector is a custom case of a m-by-n matrix where m (or n) is equal to 1.
I need to calculate value of pos:
pos = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
I just want to do it differently:
p1 = gl_ModelViewMatrix * gl_Vertex;
p2 = gl_ProjectionMatrix * p1;
resulting p2 is not equal to pos. Could you explain me why? Manually I substitute any random values into gl_ProjectionMatrix, gl_ModelViewMatrix and gl_Vertex values and calculate pos and p2 manually and get equal results. However, in GLSL getting different values. Any ideas why?
Pass the matrices as uniforms and then do the computation. I'm using version 410 and GLM math to create the matrices and I have no issues if I use intermediate variables. Of course, I'm not using the built-in types.
[/quote]
Will look into GLM. But still it should not be any different cause with just changing order of computation in GLSL shader results in different values.
How happy are you with GLM? I'm thinking about replacing my custom made math library with a robust opensource one. Yet not decided which library to pick.
##### Share on other sites
[color=#1C2837][size=2]1) The goal is to compute (not willing to use gl_ModelViewProjectionMatrix at all):
[color=#1C2837][size=2]pos = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;[/quote]
[color=#1C2837][size=2][color=#1C2837][size=2]
[color=#1C2837][size=2][color=#1C2837][size=2]What I keep telling you is that your goal is dumb. Its broke, we don't know why. Can you still carry on and make a shader: yes. Then do it. How else are you going to calculate pos? you need to multiply the vertex into camera space and onto the screen. One way or another your going to have a line of code to do that. Either use pos = ftransform(), pos = glmodelviewprojection*glvertex, or send in your own modelviewprojectionmatrix.
[color=#1C2837][size=2][color=#1C2837][size=2]
[color=#1C2837][size=2][color=#1C2837][size=2]Are you seriously preferring this:
[color=#1C2837][size=2]pos = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
[color=#1C2837][size=2]to this:
[color=#1C2837][size=2]pos = gl_ModelViewProjectionMatrix * gl_Vertex;
##### Share on other sites
[color="#1C2837"]1) The goal is to compute (not willing to use gl_ModelViewProjectionMatrix at all):
[color="#1C2837"]pos = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
[color="#1C2837"][color="#1C2837"]
[color="#1C2837"][color="#1C2837"]What I keep telling you is that your goal is dumb. Its broke, we don't know why. Can you still carry on and make a shader: yes. Then do it. How else are you going to calculate pos? you need to multiply the vertex into camera space and onto the screen. One way or another your going to have a line of code to do that. Either use pos = ftransform(), pos = glmodelviewprojection*glvertex, or send in your own modelviewprojectionmatrix.
[color="#1C2837"][color="#1C2837"]
[color="#1C2837"][color="#1C2837"]Are you seriously preferring this:
[color="#1C2837"]pos = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
[color="#1C2837"]to this:
[color="#1C2837"]pos = gl_ModelViewProjectionMatrix * gl_Vertex;
[/quote]
In this particular thread I'm not trying to get and answer to answer on how to perform projection transformation. Before opening this thread I knew that I could do "[color=#1C2837][size=2]pos = gl_ModelViewProjectionMatrix * gl_Vertex;" and it would work like a charm. But I want to get and answer why does the order of calculation make difference, and for that particular reason I keep using this construct "[color=#1C2837][size=2]pos = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;" just to concentrate on multiplication.
[color=#1C2837][size=2]
[color=#1C2837][size=2]I have some bigger trouble related with matrix multiplication, and I believe that its because of exactly same problem as described in this thread.
[color=#1C2837][size=2]
[color=#1C2837][size=2]Does it make a bit of sense?
##### Share on other sites
[color="#1C2837"]1) The goal is to compute (not willing to use gl_ModelViewProjectionMatrix at all):
[color="#1C2837"]pos = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
[color="#1C2837"][color="#1C2837"]What I keep telling you is that your goal is dumb. Its broke, we don't know why. Can you still carry on and make a shader: yes. Then do it. How else are you going to calculate pos? you need to multiply the vertex into camera space and onto the screen. One way or another your going to have a line of code to do that. Either use pos = ftransform(), pos = glmodelviewprojection*glvertex, or send in your own modelviewprojectionmatrix.
[color="#1C2837"][color="#1C2837"]Are you seriously preferring this:
[color="#1C2837"]pos = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
[color="#1C2837"]to this:
[color="#1C2837"]pos = gl_ModelViewProjectionMatrix * gl_Vertex;
[/quote]
In this particular thread I'm not trying to get and answer to answer on how to perform projection transformation. Before opening this thread I knew that I could do "[color="#1C2837"]pos = gl_ModelViewProjectionMatrix * gl_Vertex;" and it would work like a charm. But I want to get and answer why does the order of calculation make difference, and for that particular reason I keep using this construct "[color="#1C2837"]pos = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;" just to concentrate on multiplication.
[color="#1C2837"]I have some bigger trouble related with matrix multiplication, and I believe that its because of exactly same problem as described in this thread.
[color="#1C2837"]Does it make a bit of sense?
[/quote]
Yes, it makes sense. I understand you're not concerned with efficiency at all. You're just wondering why GLSL is not preserving associativity when you use the built-in types. In that sense, you are not really changing the order of the multiplication but you are changing the grouping. If you were changing order, then we know matrix multiplication is non-commutative and that would be the end of it. But you are wondering why associativity of multiplication is not preserved when it should. BTW, GLM seems to me as a good replacement for the old mathematical fixed functions like gluLookAt, et al. It uses a syntax that is similar to GLSL like vec3, mat4, etc...
##### Share on other sites
Yes, it makes sense. I understand you're not concerned with efficiency at all. You're just wondering why GLSL is not preserving associativity when you use the built-in types. In that sense, you are not really changing the order of the multiplication but you are changing the grouping. If you were changing order, then we know matrix multiplication is non-commutative and that would be the end of it. But you are wondering why associativity of multiplication is not preserved when it should. BTW, GLM seems to me as a good replacement for the old mathematical fixed functions like gluLookAt, et al. It uses a syntax that is similar to GLSL like vec3, mat4, etc...
Yeah, that's what i meant - "grouping" thanks for correction.
I've got my own implementations of LookAt, projection and rest of GL helper library functions so not worried about those much. I'm just thinking if GLM could be a good candidate for replacement, as it could be more robust and better tested by community than my own implementation of math lib.
Does GLM support templated data types? For example, can I use vec3<float>, or vec3<double> ? Does it have wrappers for trigonometric functions?
##### Share on other sites
[quote name='Jesse7' timestamp='1315857744' post='4860843']
Yes, it makes sense. I understand you're not concerned with efficiency at all. You're just wondering why GLSL is not preserving associativity when you use the built-in types. In that sense, you are not really changing the order of the multiplication but you are changing the grouping. If you were changing order, then we know matrix multiplication is non-commutative and that would be the end of it. But you are wondering why associativity of multiplication is not preserved when it should. BTW, GLM seems to me as a good replacement for the old mathematical fixed functions like gluLookAt, et al. It uses a syntax that is similar to GLSL like vec3, mat4, etc...
Yeah, that's what i meant - "grouping" thanks for correction.
I've got my own implementations of LookAt, projection and rest of GL helper library functions so not worried about those much. I'm just thinking if GLM could be a good candidate for replacement, as it could be more robust and better tested by community than my own implementation of math lib.
Does GLM support templated data types? For example, can I use vec3<float>, or vec3<double> ? Does it have wrappers for trigonometric functions?
[/quote]
I am new to this library and haven't tested it thoroughly. There seems to be classes that start with a "t" that support templates, for example tvec3 < T >, and it does wrap trig functions.
http://glm.g-truc.ne.../annotated.html
##### Share on other sites
[quote name='rubenhak' timestamp='1315859214' post='4860854']
[quote name='Jesse7' timestamp='1315857744' post='4860843']
Yes, it makes sense. I understand you're not concerned with efficiency at all. You're just wondering why GLSL is not preserving associativity when you use the built-in types. In that sense, you are not really changing the order of the multiplication but you are changing the grouping. If you were changing order, then we know matrix multiplication is non-commutative and that would be the end of it. But you are wondering why associativity of multiplication is not preserved when it should. BTW, GLM seems to me as a good replacement for the old mathematical fixed functions like gluLookAt, et al. It uses a syntax that is similar to GLSL like vec3, mat4, etc...
Yeah, that's what i meant - "grouping" thanks for correction.
I've got my own implementations of LookAt, projection and rest of GL helper library functions so not worried about those much. I'm just thinking if GLM could be a good candidate for replacement, as it could be more robust and better tested by community than my own implementation of math lib.
Does GLM support templated data types? For example, can I use vec3<float>, or vec3<double> ? Does it have wrappers for trigonometric functions?
[/quote]
I am new to this library and haven't tested it thoroughly. There seems to be classes that start with a "t" that support templates, for example tvec3 < T >, and it does wrap trig functions.
http://glm.g-truc.ne.../annotated.html
[/quote]
looks promising. Will upgrade to GLM overnight today
##### Share on other sites
is this a problem you are having on nvidia or amd? on windows or linux or ...?
##### Share on other sites
is this a problem you are having on nvidia or amd? on windows or linux or ...?
windows 7 64 bit, nvidia gtx 480.
I suspect i have same issue with iOS
##### Share on other sites
I actually put your original problem (and the one i had some years ago) into a shader of mine real quick and it worked fine. I got a GTS 450 and I had updated my driver a couple days ago. Try updating driver. | 2018-02-19 22:36:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49222099781036377, "perplexity": 1589.096553584162}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812841.74/warc/CC-MAIN-20180219211247-20180219231247-00785.warc.gz"} |
https://www.physicsoverflow.org/42162/unitarity-representations-of-cft-in-arbitrary-dimensions | # Unitarity representations of CFT in arbitrary dimensions
+ 2 like - 0 dislike
97 views
There is a well defined notion of unitarity of representations in Euclidean Conformal field theories that follows from the requiring unitarity in the Lorentzian space. Under this notion, all states that are created by operator insertions have positive inner product, $$\langle \mathcal O|\mathcal O\rangle = \lim_{z\to\infty} z^{2\Delta} \langle\mathcal O(z)\mathcal O(0)\rangle>0$$ where I am thinking of radial quantization, or equivalently, of a field theory on a cylinder. This also goes by the name of reflection positivity. In arbitrary dimensions this requirement imposes constraints on the dimensions of physical operators:
• $$\Delta\ge\frac{d-2}2$$ for non-spinning bosonic operators
• $$\Delta\ge\frac{d-1}2$$ for non-spinning fermionic operators
• $$\Delta\ge d+\ell-2$$ for operators with spin $$\ell$$
However, in the study of Harmonic analysis on the Euclidean Conformal groups ($$SO(d+1,1)$$ for a field theory in d-dimensional Euclidean space), one talks about unitary representations of the conformal group with dimensions $$\Delta = \frac d2 + i s, \ s\in\mathbb R$$ (the principal series) and, in odd dimensions, additionally $$\Delta = \frac d2+\mathbb Z_+$$ (the discrete series). Clearly these operators are excluded from the class of 'unitary' operators that follow from the Lorentzian unitarity requirement. From what I understand these class of representations provide a basis of $$\mathbb L_2$$ normalizable functions on the group manifold.
What is the difference between the two different notions of unitarity and how are they related? Moreover, clearly the operators in the first definition are normalizable in that they have a finite positive inner product ($$\langle \mathcal O|\mathcal O\rangle >0$$). In what sense are they not normalizable on the group manifold (thereby being excluded from Harmonic analysis basis of functions)?
This post imported from StackExchange Physics at 2019-05-05 13:08 (UTC), posted by SE-user nGlacTOwnS
Please use answers only to (at least partly) answer questions. To comment, discuss, or ask for clarification, leave a comment instead. To mask links under text, please type your text, highlight it, and click the "link" button. You can then enter your link URL. Please consult the FAQ for as to how to format your post. This is the answer box; if you want to write a comment instead, please use the 'add comment' button. Live preview (may slow down editor) Preview Your name to display (optional): Email me at this address if my answer is selected or commented on: Privacy: Your email address will only be used for sending these notifications. Anti-spam verification: If you are a human please identify the position of the character covered by the symbol $\varnothing$ in the following word:p$\hbar$ysi$\varnothing$sOverflowThen drag the red bullet below over the corresponding character of our banner. When you drop it there, the bullet changes to green (on slow internet connections after a few seconds). To avoid this verification in future, please log in or register. | 2020-01-24 01:23:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6766901016235352, "perplexity": 765.3912708912965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250614880.58/warc/CC-MAIN-20200124011048-20200124040048-00448.warc.gz"} |
http://www.r-bloggers.com/example-10-2-custom-graphic-layouts/ | # Example 10.2: Custom graphic layouts
September 17, 2012
By
(This article was first published on SAS and R, and kindly contributed to R-bloggers)
In example 10.1 we introduced data from a CPAP machine. In brief, it's hard to tell exactly what's being recorded in the data set, but it seems to be related to the pattern of breathing. Measurements are taken five times a second, leading to on the order of 100,000 data points in a typical night. To get a visual sense of what a night's breathing looks like is therefore non-trivial.
Today, we'll make the graphic shown above, which presents an hour of data.
SAS
In SAS, the sgpanel procedure (section 5.1.11) will produce a similar graphic pretty easily. But we need to make a data set with indicators of the hour, and of ten-minute blocks within the hour. This we'll do with the ceil function (section 1.8.4).
data cycles2;set cycles;hour = ceil(time_min/60);tenmin = ceil(time_min/10);time_in_ten = mod(time_min - 1/300,10);/* 1/300 adjustment keeps last measure in the correct 10-min block */run;title "Hour 4 of pressure";proc sgpanel data = cycles2;where hour eq 4;panelby tenmin / layout=rowlattice rows=6 spacing = 4;colaxis display=none;rowaxis display = (nolabel);series x = time_in_ten y = byte;run; quit;
The resulting plot is shown below. It would be nicer to omit the labels on the right of each plot, but this does not appear to be an option. It would likely only be possible with a fair amount of effort.
R
In R, we'll use the layout() function to make a 7-row layout-- one for the title and 6 for the 10-minute blocks of time. Before we get there, though, we'll construct a function to fill the time block plots with input data. The function accepts a data vector and plots only 3,000 values from it, choosing the values based on an input hour and 10-minute block within the hour. To ensure an equal y-axis range for each call, we'll also send minimum and maximum values as input to the function. All of this will be fed into plot() with the type="l" option to make a line plot.
plot10 = function(hour, tenmins, miny, maxy, data=cycles){ start = hour*18000 + tenmins* 3000 +1 plot((1:3000)/300, cycles[(start + 1):(start +3000)], ylim = c(miny,maxy),type="l", xaxs="i", yaxs="i")}
The documentation for layout() is rather opaque, so we'll review it separately.
oldpar = par(no.readonly = TRUE)# revert to this later layout(matrix(1:7), widths=1, heights=c(3,8,8,8,8,8,8), respect=FALSE)
The layout() function divides the plot area into a matrix of cells, some of which will be filled by the next output plots. The first argument says where in the matrix the next N objects will go. All the integers 1...N must appear in the matrix; cells that will be left empty have a 0 instead. Here, we have no empty cells, and only one column, so the "matrix" is really just a vector with 1...7 in order. The widths option specifies the relative widths of the columns-- here we have only one column so any constant will result in the use of the whole width of the output area. Similarly, the heightsoption gives the relative height of the cells. Here the title will get 3/51 of the height, while each 10-minute block will get 8/51. This unequal shape of the plot regions is one reason to prefer layout() to some other ways to plot multiple images on a page. The respect option, when "TRUE" makes the otherwise relative widths and heights conform, so that a unit of height is equal to a unit of width. We also use layout() in example 8.41.
With the layout in hand, we're ready to fill it.
par(xaxt="n", mar = c(.3,2,.3,0) +.05)# drop the x-axis, change the spacing around the plotplot(x=1,y=1,type="n",ylim=c(-1,1), xlim=c(-1,1), yaxt="n",bty="n")# the first (narrow) plot is just emptyhour=3text(0,0,paste("Hour ", (hour + 1), " of pressure data"), cex=2)# text to put in the first plotminy = min(cycles[(hour * 18000 + 1):((hour + 1) * 18000)])maxy = max(cycles[(hour * 18000 + 1):((hour + 1) * 18000)])# find min and max across the whole hour, to keep range # of y-axis constant across the plotsfor (x in 0:5) plot10(hour, x, miny, maxy)# plot the 6 ten-minute blockspar(oldpar)# reset the graphics options
The resulting plot is shown at the top of the entry. There's clearly something odd going on around 11-15 minutes into the hour-- this could be a misadjusted mask, or a real problem with the breathing. There's also a period around 58 minutes when it looks like breathing stops. That's what the machine is meant to stop. | 2014-12-20 00:11:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5857910513877869, "perplexity": 2349.520038557068}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802769121.74/warc/CC-MAIN-20141217075249-00153-ip-10-231-17-201.ec2.internal.warc.gz"} |
https://scirate.com/arxiv/cond-mat.mtrl-sci | # Materials Science (cond-mat.mtrl-sci)
• We theoretically study artificial light harvesting by a dimerized Mobius ring. When the donors in the ring are dimerized, the energies of the donor ring are splitted into two sub-bands. Because of the nontrivial Mobius boundary condition, both the photon and acceptor are coupled to all collectiveexcitation modes in the donor ring. Therefore, the quantum dynamics in the light harvesting are subtly influenced by the dimerization in the Mobius ring. It is discovered that energy transfer is more efficient in a dimerized ring than that in an equally-spaced ring. This discovery is also confirmed by the calculation with the perturbation theory, which is equivalent to the Wigner-Weisskopf approximation. Our findings may be benificial to the optimal design of artificial light harvesting.
• We present experimental control of the magnetic anisotropy in a gadolinium iron garnet (GdIG) thin film from in-plane to perpendicular anisotropy by simply changing the sample temperature. The magnetic hysteresis loops obtained by SQUID magnetometry measurements unambiguously reveal a change of the magnetically easy axis from out-of-plane to in-plane depending on the sample temperature. Additionally, we confirm these findings by the use of temperature dependent broadband ferromagnetic resonance spectroscopy (FMR). In order to determine the effective magnetization, we utilize the intrinsic advantage of FMR spectroscopy which allows to determine the magnetic anisotropy independent of the paramagnetic substrate, while magnetometry determines the combined magnetic moment from film and substrate. This enables us to quantitatively evaluate the anisotropy and the smooth transition from in-plane to perpendicular magnetic anisotropy. Furthermore, we derive the temperature dependent $g$-factor and the Gilbert damping of the GdIG thin film.
• We observe the magnetic proximity effect (MPE) in Pt/CoFe2O4 bilayers grown by molecular beam epitaxy. This is revealed through angle-dependent magnetoresistance measurements at 5 K, which isolate the contributions of induced ferromagnetism (i.e. anisotropic magnetoresistance) and spin Hall effect (i.e. spin Hall magnetoresistance) in the Pt layer. The observation of induced ferromagnetism in Pt via AMR is further supported by density functional theory calculations and various control measurements including insertion of a Cu spacer layer to suppress the induced ferromagnetism. In addition, anomalous Hall effect measurements show an out-of-plane magnetic hysteresis loop of the induced ferromagnetic phase with larger coercivity and larger remanence than the bulk CoFe2O4. By demonstrating MPE in Pt/CoFe2O4, these results establish the spinel ferrite family as a promising material for MPE and spin manipulation via proximity exchange fields.
• The effectiveness of molecular-based light harvesting relies on transport of optical excitations, excitons, to charg-transfer sites. Measuring exciton migration has, however, been challenging because of the mismatch between nanoscale migration lengths and the diffraction limit. In organic semiconductors, common bulk methods employ a series of films terminated at quenching substrates, altering the spatioenergetic landscape for migration. Here we instead define quenching boundaries all-optically with sub-diffraction resolution, thus characterizing spatiotemporal exciton migration on its native nanometer and picosecond scales without disturbing morphology. By transforming stimulated emission depletion microscopy into a time-resolved ultrafast approach, we measure a 16-nm migration length in CN-PPV conjugated polymer films. Combining these experiments with Monte Carlo exciton hopping simulations shows that migration in CN-PPV films is essentially diffusive because intrinsic chromophore energetic disorder is comparable to inhomogeneous broadening among chromophores. This framework also illustrates general trends across materials. Our new approach's sub-diffraction resolution will enable previously unattainable correlations of local material structure to the nature of exciton migration, applicable not only to photovoltaic or display-destined organic semiconductors but also to explaining the quintessential exciton migration exhibited in photosynthesis.
• The direct measurement of Berry phases is still a great challenge in condensed matter systems. The bottleneck has been the ability to adiabatically drive an electron coherently across a large portion of the Brillouin zone in a solid where the scattering is strong and complicated. We break through this bottleneck and show that high-order sideband generation (HSG) in semiconductors is intimately affected by Berry phases. Electron-hole recollisions and HSG occur when a near-band gap laser beam excites a semiconductor that is driven by sufficiently strong terahertz (THz)-frequency electric fields. We carried out experimental and theoretical studies of HSG from three GaAs/AlGaAs quantum wells. The observed HSG spectra contain sidebands up to the 90th order, to our knowledge the highest-order optical nonlinearity observed in solids. The highest-order sidebands are associated with electron-hole pairs driven coherently across roughly 10% of the Brillouin zone around the \Gamma point. The principal experimental claim is a dynamical birefringence: the sidebands, when the order is high enough (> 20), are usually stronger when the exciting near-infrared (NIR) and the THz electric fields are polarized perpendicular than parallel; the sideband intensities depend on the angles between the THz field and the crystal axes in samples with sufficiently weak quenched disorder; and the sidebands exhibit significant ellipticity that increases with increasing sideband order, despite nearly linear excitation and driving fields. We explain dynamical birefringence by generalizing the three-step model for high order harmonic generation. The hole accumulates Berry phases due to variation of its internal state as the quasi-momentum changes under the THz field. Dynamical birefringence arises from quantum interference between time-reversed pairs of electron-hole recollision pathways.
• Uranium beryllium 13 is a heavy fermion system whose properties depend strongly on its internal magnetic structure. Different models have been proposed to explain its magnetic distribution , but additional experimental data is required. An experimental method that is particularly useful is muon spin spectroscopy ($\mu$SR). In this process, positive muons are embedded into a sample where they localize at magnetically unique sites. The net magnetic field causes precession of the muon spin at the Larmor frequency, generating signals that can provide measurements of the internal field. This experiment specifically determines the muon localization sites of uranium beryllium 13. To do so, results from muon spin experiments at various temperatures and external magnetic field strengths are analyzed. The experiments took place at TRIUMF in the University of British Columbia. Data from the temperature and magnetic field ramps are analyzed through ROOT. The Fourier transforms of experimental data showed peaks of muon localization at the geometric centers of the edges of the crystal lattice. These results can be used to build a rigorous model of uranium beryllium 13's internal magnetic structure and resulting magnetic field distribution.
• The 11-22 and 11-26 twinning modes were recently put in evidence by Ostapovets et al. (Phil. Mag, 2017)and interpreted as 101-2-101-2 double-twins formed by a simultaneous action of two twinning shears. We propose another interpretation in which the twinning modes result from a one-step mechanism based on the same (58deg, a+2b) prototype stretch twin. . The two twins differ from the prototype twin by their obliquity correction. The results are compared with the classical theory of twinning and with Westlake-Rosenbaum model of 11-22 twinning. An unconventional twinning mode recently discovered in a magnesium single crystal based on the same prototype twin will be the subject of a separate publication.
• We investigate the influence of the barrier thickness of Co$_{40}$Fe$_{40}$B$_{20}$ based magnetic tunnel junctions on the laser-induced tunnel magneto-Seebeck effect. Varying the barrier thickness from 1nm to 3nm, we find a distinct maximum in the tunnel magneto-Seebeck effect for 2.6nm barrier thickness. This maximum is independently measured for two barrier materials, namely MgAl$_2$O$_4$ and MgO. Additionally, samples with an MgAl$_2$O$_4$ barrier exhibit a high thermovoltage of more than 350$\mu$V in comparison to 90$\mu$V for the MTJs with MgO barrier when heated with the maximum laser power of 150mW. Our results allow for the fabrication of improved stacks when dealing with temperature differences across magnetic tunnel junctions for future applications in spin caloritronics, the emerging research field that combines spintronics and themoelectrics.
• GeTe wins the renewed research interest due to its giant bulk Rashba spin orbit coupling (SOC), and becomes the father of a new multifunctional material, i.e., ferroelectric Rashba semiconductor. In the present work, we investigate Rashba SOC at the interface of the ferroelectric semiconductor superlattice GeTe(111)/InP(111) by using the first principles calculation. Contribution of the interface electric field and the ferroelectric field to Rashba SOC is revealed. A large modulation to Rashba SOC and a reversal of the spin polarization is obtained by switching the ferroelectric polarization. Our investigation about GeTe(111)/InP(111) superlattice is of great importance in the application of ferroelectric Rashba semiconductor in the spin field effect transistor.
• We report experimental evidence for the formation of chiral bobbers -- a surface topological spin texture -- at the surface/interface of FeGe films grown by molecular beam epitaxy (MBE). After establishing the presence of skyrmions in FeGe/Si(111) thin film samples through Lorentz transmission electron microscopy and topological Hall effect, we perform magnetization measurements that reveal an inverse relationship between film thickness and the slope of the susceptibility (dX/dH). We present evidence for the evolution as a function of film thickness, L, from a skyrmion phase for L < L_D/2 to a cone phase with chiral bobbers at the interface for L > L_D/2, where L_D ~ 70 nm is the FeGe pitch length. We show using micromagnetic simulations that chiral bobbers, earlier predicted to be metastable, are in fact the stable ground state in the presence of an additional interfacial Rashba Dzyaloshinskii-Moriya interaction (DMI).
• We simulate the contact between aluminum nanowires using molecular dynamics. Our simulation results show that the contribution to the adhesion area from surface atom diffusion increases significantly with decreasing wire radius. We derive a two-dimensional phenomenological kinetic model to describe this strong nanometer-size effect based on a melting-point reduction approach. This model should be helpful for understanding various phenomena related to nanoscale contacts such as nanowire cold welding, self-assembly of nanoparticles and adhesive nanopillar arrays, as well as the electrical, thermal, and mechanical properties of microscopic interfaces.
• Solving Peierls-Boltzmann transport equation with interatomic force constants (IFCs) from first-principles calculations has been a widely used method for predicting lattice thermal conductivity of three-dimensional materials. With the increasing research interests in two-dimensional materials, this method is directly applied to them but different works show quite different results. In this work, classical potential was used to investigate the effect of the accuracy of IFCs on the predicted thermal conductivity. Inaccuracies were introduced to the third-order IFCs by generating errors in the input forces. When the force error lies in the typical value from first-principles calculations, the calculated thermal conductivity would be quite different from the benchmark value. It is found that imposing translational invariance conditions cannot always guarantee a better thermal conductivity result. It is also shown that Grüneisen parameters cannot be used as a necessary and sufficient criterion for the accuracy of third-order IFCs in the aspect of predicting thermal conductivity.
• The majority of these machines fabricate from raw material in powder form using a directed energy beam to create a local melt zone. Total hip replacement is recommended for people who have medical issues related to excessive wear of the acetabular, osteoarthritis, accident or age. Researches have shown that large numbers of hip arthroplasties (where the articular surface of a musculoskeletal joint is replaced), hip remodelling, or realignment are carried out annually and will increase in the next few decades. Manufacturing of acetabular shells by using AM is a promising and emerging method that has a great potential to improve public health. Lost wax casting or investment casting is currently used to produce acetabular shells followed by lengthy and complex secondary processes such as machining and polishing. Living organs and medical models have intricate 3D shapes that are challenging to identity in X-ray CT images. These images are used for preparing treatment plans to improve the quality of the surgeries regarding waiting and surgery time per procedure and care regime. For instance, a limited number of hip replacement procedures can be carried out on each acetabulum due to a decrease of bone thickness. Rapid prototyping is a suitable treatment planning tool in complex cases to enhance the quality of surgical procedure and provide long-term stability that can be used to customize the shape and size of the acetabular shell. In this paper, to analyse the manufacturing of a prosthetic acetabular shell, built-up lines resulting from a thermal stress flow and process stopping during the selective laser melting (SLM) AM process, with regarding Gibbs free energy, interfacial energy, and equilibrium temperature will be discussed. Geometrical measurements showed 1.59% and 0.27% differences between the designed and manufactured prototype for inside and outside diameter respectively.
• Topologically protected one-way transportation of sound, mimicking the topological properties of the condensed matter, has received greatly attentions. Thus far, the topological phases and the topological edge states of sound are yielded in the vicinity of the Dirac cones fixed at the high symmetric points of the Brillouin zone. Here, we present a new type of the phononic topological insulator in the square lattice with position-variational Dirac cones along the high symmetric lines. The emergence of such Dirac cones, characterized by the vortex structure in a momentum space, is attributed to the unavoidable band crossing protected by the mirror symmetry. By rotating the square columns, these Dirac points are lifted and a complete band gap is induced because of the mirror-symmetry-breaking. Along the topological domain wall between the phononic crystals (PhCs) with the distinct topological phases stemming from the mirror symmetry inversion, we obtain a topological edge state for the neutral scalar sound which is absence of the intrinsic polarization and is uncoupled from an external field. Within a wide rotational range of the square column, the topological edge state in our PhCs evolves from a gapless one into a gapped one with a robust edge transport against cavities and disorders. Our excellent results are promising for the exploration of the new topological phenomena in the PhCs beyond the hexagonal lattices. Furthermore, the flexibility of the rotational square columns provides an interesting platform for the design of tunable topological acoustic devices.
• The strong perpendicular magnetic anisotropy of $L{\rm1_0}$-ordered FePt has been the subject of extensive studies for a long time. However, it is not known which element, Fe or Pt, mainly contributes to the magnetic anisotropy energy (MAE). We have investigated the anisotropy of the orbital magnetic moments of Fe 3$d$ and Pt 5$d$ electrons in $L{\rm1_0}$-ordered FePt thin films by Fe and Pt $L_{2,3}$-edge x-ray magnetic circular dichroism (XMCD) measurements for samples with various degrees of long-range chemical order $S$. Fe $L_{2,3}$-edge XMCD showed that the orbital magnetic moment was larger when the magnetic field was applied perpendicular to the film than parallel to it, and that the anisotropy of the orbital magnetic moment increased with $S$. Pt $L_{2,3}$-edge XMCD also showed that the orbital magnetic moment was smaller when the magnetic field was applied perpendicular to the film than parallel to it, opposite to the Fe $L_{2,3}$-edge XMCD results although the anisotropy of the orbital magnetic moment increases with $S$ like the Fe edge. These results are qualitatively consistent with the first-principles calculation by Solovyev ${\it et\ al.}$ [Phys. Rev. B $\bf{52}$, 13419 (1995).], which also predicts the dominant contributions of Pt 5$d$ to the magnetic anisotropy energy rather than Fe 3$d$ due to the strong spin-orbit coupling and the small spin splitting of the Pt 5$d$ bands in $L{\rm1_0}$-ordered FePt.
• In this paper we assess the predictive power of the self-consistent hybrid functional scPBE0 in calculating the band gap of oxide semiconductors. The computational procedure is based on the self-consistent evaluation of the mixing parameter $\alpha$ by means of an iterative calculation of the static dielectric constant using the perturbation expansion after discretization (PEAD) method and making use of the relation $\alpha = 1/\epsilon_{\infty}$. Our materials dataset is formed by 30 compounds covering a wide range of band gaps and dielectric properties, and includes materials with a wide spectrum of application as thermoelectrics, photocatalysis, photovoltaics, transparent conducting oxides, and refractory materials. Our results show that the scPBE0 functional provides better band gaps than the non self-consistent hybrids PBE0 and HSE06, but scPBE0 does not show significant improvement on the description of the static dielectric constants. Overall, the scPBE0 data exhibit a mean absolute percentage error of 14 \% (band gaps) and 10 \% ($\epsilon_\infty$). For materials with weak dielectric screening and large excitonic biding energies scPBE0, unlike PBE0 and HSE06, overestimates the band gaps, but the value of the gap become very close to the experimental value when excitonic effects are included (e.g. for SiO$_2$). However, special caution must be given to the compounds with small band gaps due to the tendency of scPBE0 to overestimate the dielectric constant in proximity of the metallic limit.
• A subtle balance between competing interactions in strongly correlated systems can be easily tipped by additional interfacial interactions in a heterostructure. This often induces exotic phases with unprecedented properties, as recently exemplified by high-Tc superconductivity in FeSe monolayer on the nonmagnetic SrTiO3. When the proximity-coupled layer is magnetically active, even richer phase diagrams are expected in iron-based superconductors (FeSCs), which however has not been explored due to the lack of a proper material system. One promising candidate is Sr2VO3FeAs, a naturally-assembled heterostructure of a FeSC and a Mott-insulating vanadium oxide. Here, using high-quality single crystals and high-accuracy 75As and 51V nuclear magnetic resonance (NMR) measurements, we show that a novel electronic phase is emerging in the FeAs layer below T0 ~ 155 K without either static magnetism or a crystal symmetry change, which has never been observed in other FeSCs. We find that frustration of the otherwise dominant Fe stripe and V Neel fluctuations via interfacial coupling induces a charge/orbital order with C4-symmetry in the FeAs layers, while suppressing the Neel antiferromagnetism in the SrVO3 layers. These findings demonstrate that the magnetic proximity coupling is effective to stabilize a hidden order in FeSCs and, more generally, in strongly correlated heterostructures.
• Recently, it was shown that quantum spin Hall insulator (QSHI) phase with a gap wide enough for practical applications can be realized in the ultra thin films constructed from two inversely stacked structural elements of trivial band insulator BiTeI. Here, we study the edge states in the free-standing Bi$_2$Te$_2$I$_2$ sextuple layer (SL) and the electronic structure of the Bi$_2$Te$_2$I$_2$ SL on the natural BiTeI substrate. We show that the topological properties of the Bi$_2$Te$_2$I$_2$ SL on this substrate keep $\mathbb Z_2$ invariant. We also demonstrate that ultra thin centrosymmetric films constructed in the similar manner but from related material BiTeBr are trivial band insulators up to five-SL film thickness. In contrast to Bi$_2$Te$_2$I$_2$ for which the stacking of nontrivial SLs in 3D limit gives a strong topological insulator (TI) phase, strong TI is realized in 3D Bi$_2$Te$_2$Br$_2$ in spite of the SL is trivial. For the last material of the BiTe$X$ ($X$=I,Br,Cl) series, BiTeCl, both 2D and 3D centrosymmetric phases are characterized by topologically trivial band structure.
• Typical device architectures in polymer-based optoelectronic devices, such as field effect transistors organic light emitting diodes and photovoltaic cells include sub-100 nm semiconducting polymer thin-film active layers, whose microstructure is likely to be subject to finite-size effects. The aim of this study was to investigate effect of the two-dimensional spatial confinement on the internal structure of the semiconducting polymer poly(9,9-dioctylfluorene) (PFO). PFO melts were confined inside the cylindrical nanopores of anodic aluminium oxide (AAO) templates and crystallized via two crystallization strategies, namely, in the presence or in the absence of a surface bulk reservoir located at the template surface. We show that highly textured semiconducting nanowires with tuneable crystal orientation can be thus produced. Moreover, our results indicate that employing the appropriate crystallization conditions extended-chain crystals can be formed in confinement. The results presented here demonstrate the simple fabrication and crystal engineering of ordered arrays of PFO nanowires; a system with potential applications in devices where anisotropic optical properties are required, such as polarized electroluminescence, waveguiding, optical switching, lasing, etc.
• A series of donor-acceptor-donor (D-A-D) structured small-molecule compounds, with 3,3'-(ethane-1,2-diylidene)bis(indolin-2-one) (EBI) as a novel electron acceptor building block coupled with various electron donor end-capping moieties (thiophene, bithiophene and benzofuran), were synthesized and characterized. When the fused-ring benzofuran is combined to EBI (EBI-BF), the molecules displayed a perfectly planar conformation and afforded the best charge tranport properties among these EBI compounds with a hole mobility of up to 0.021 cm2 V-1 s-1. All EBI-based small molecules were used as donor material along with a PC61BM acceptor for the fabrication of solution-processed bulk-heterojunction (BHJ) solar cells. The best performing photovoltaic devices are based on the EBI derivative using the bithiophene end-capping moiety (EBI-2T) with a maximum power conversion efficiency (PCE) of 1.92%, owing to the broad absorption spectra of EBI-2T and the appropriate morphology of the BHJ. With the aim of establishing a correlation between the molecular structure and the thin film morphology, differential scanning calorimetry, atomic force microscopy and X-ray diffraction analysis were performed on neat and blend films of each material.
• An oligomeric semiconductor containing three bisthiophenediketopyrrolopyrole units (Tri-BTDPP) was synthesized and characterized. Tri-BTDPP has a HOMO level of -5.34 eV, a broad absorption close to the near infrared region and a low band gap of 1.33 eV. Additionally, a promising hole mobility of 1 x 10-3 cm V-1 s-1 was achieved after thermal annealing at 150 C in organic field effect transistors (OFET). Organic photovoltaic (OPV) cells containing Tri-BTDPP and PC71BM as the donor/acceptor couple exhibited a power conversion efficiency (PCE) of 0.72%. Through an intensive study of the active layer using AFM, XRD, and DSC, it was found that Tri-BTDPP and PC71BM were unable to intermix effectively, resulting in oversized Tri-BTDPP crystalline phases and thus poor charge separation. Strategies to improve the OPV performance were thus proposed.
• The effect of interfaces and confinement in polymer ferroelectric structured is discussed. Results on confinement under different geometries are presented and the comparison of all of them allows to evidence that the presence of an interface in particular cases stabilizes a ferroelectric phase that is not spontaneously formed under normal bulk processing conditions
• Polymers with the same chemical composition can provide different properties by reducing the dimension or simply by altering their nanostructure. Recent literature works report hundreds of examples of advances methods in the fabrication of polymer nanostructures accomplished following different approaches, soft lithography, self-assembly routes, template assisted methods, etc. Polymer nanostructures with modulated morphologies and properties can be easily achieved from anodized aluminum oxide (AAO) templates assisted methods. In the last decade, fabrication of polymer nanostructures in the nanocavities of AAO has raised a great interest since allows the control and tailoring of dimension of a huge number of polymer and polymer-based composites materials. The fact that polymer dimension can be adjusted allow the study of size-dependency properties. Moreover, modulated polymer nanostructures can be designed for specific applications from AAO templates methods. Taking into account the last considerations, this review present an overview of recent and new insights in the fabrication methods of polymer nanostructures from hard porous Anodic Aluminum Oxide (AAO) templates with emphasis on the study of polymer structure/property relationship at nanometric scale and stressing the potential interest in particular applications.
• Poly(vinylidene fluoride) (PVDF) has long been regarded as an ideal piezoelectric plastic because it exhibits a large piezoelectric response and a high thermal stability. However, the realization of piezoelectric PVDF elements has proven to be problematic, amongst others, due to the lack of industrially-scalable methods to process PVDF into the appropriate polar crystalline forms. Here, we show that fully piezoelectric PVDF films can be produced via a single-step process that exploits the fact that PVDF can be molded at temperatures below its melting temperature, i.e. via solid-state-processing. We demonstrate that we thereby produce d_PVDF, the piezoelectric charge coefficient of which is comparable to that of biaxially stretched d_PVDF. We expect that the simplicity and scalability of solid-state processing combined with the excellent piezoelectric properties of our PVDF structures will provide new opportunities for this commodity polymer and will open a range of possibilities for future, large-scale, industrial production of plastic piezoelectric films
• Herein, we elucidate the impact of tubular confinement on the structure and relaxation behaviour of poly(vinylidene difluoride) (PVDF) and how these affect the para-/ferroelectric behavior of this polymer. We use PVDF nanotubes that were solidified in anodic aluminum oxide (AAO) templates. Dielectric spectroscopy measurements evidence a bimodal relaxation process for PVDF nanotubes: besides the bulk-like -relaxation, we detect a notably slower relaxation that is associated with the PVDF regions of restricted dynamics at the interface with the AAO pore. Strickingly, both the bulk-like and the interfacial relaxation tend to become temperature independent as the temperature increases - a behavior that has been observed before in inorganic relaxor ferroelectrics. In line with this, we observe that the real part of the dielectric permittivity of the PVDF nanotubes exhibits a broad maximum when plotted against the temperature, which is, again, a typical feature of relaxor ferroelectrics. As such, we propose that in nanotubular PVDF, ferroelectric-like nanodomains are formed in the amorphous phase regions adjacent to the AAO interface. These ferroelectric nanodomains may result from an anisotropic chain conformation and a preferred orientation of local dipoles due to selective H-bond formation between the PVDF macromolecues and the AAO walls. Such relaxor-ferroelectric-like behaviour has not been observed for non-irradiated PVDF homopolymer; our findings thus may enable in the future alternative applications for this bulk commodity plastic, e.g., for the production of electrocaloric devices for solid-state refrigeration which benefit from a relaxor-ferroelectric-like response.
• The friction force observed at macroscale is the result of interactions at various lower length scales, which are difficult to model in a combined manner. For this reason, simplified approaches are required, depending on the specific aspect to be investigated. In particular, the dimensionality of the system is often reduced, especially in models designed to provide a qualitative description of friction properties of elastic materials, e.g. the spring-block model. In this paper, we implement a two dimensional extension of the spring-block model, aiming to investigate by means of numerical simulations the frictional behaviour of a surface in the presence of surface features like cavities, pillars or complex anisotropic structures. We show how friction can be effectively reduced or controlled by appropriate surface features design.
• We report optical enhancement in polarization and dielectric constant near room temperature in Pb0.6Li0.2Bi0.2Zr0.2Ti0.8O3 (PLBZT) electro-ceramics; these are doubly substituted members of the most important commercial ferroelectric PbZr0.2Ti0.8O3 (PZT:20/80). Partial (40%) substitution of equal amounts of Li+1 and Bi+3 in PZT: 20/80 retains the PZT tetragonal structure with space group P4mm. Under illumination of white light and weak 405-nm near-ultraviolet laser light (30 mW), an unexpectedly large (200-300%) change in polarization and displacement current was observed. Light also changes the dc conduction current density by one to two orders of magnitude with a large switchable open circuit voltage (Voc ~ 2 V) and short circuit current (Jsc ~ 5x10-8 A). The samples show a photo-current ON/OFF ratio of order 6:1 under illumination of weak light.
• We demonstrate gate-tunable resonant tunneling and negative differential resistance between two rotationally aligned bilayer graphene sheets separated by bilayer WSe2. We observe large interlayer current densities of 2 uA/um2 and 2.5 uA/um2, and peak-to-valley ratios approaching 4 and 6 at room temperature and 1.5 K, respectively, values that are comparable to epitaxially grown resonant tunneling heterostructures. An excellent agreement between theoretical calculations using a Lorentzian spectral function for the two-dimensional (2D) quasiparticle states, and the experimental data indicates that the interlayer current stems primarily from energy and in-plane momentum conserving 2D-2D tunneling, with minimal contributions from inelastic or non-momentum-conserving tunneling. We demonstrate narrow tunneling resonances with intrinsic half-widths of 4 and 6 meV at 1.5 K and 300 K, respectively.
• We study the effect of asperity size on the adhesion properties of metal contact using atomistic simulations. The simulated size effect of individual nanoscale asperityies is applied to macroscopic rough surfaces by introducing a curvature radius distribution to a continuum-mechanics-based contact model. Our results indicate that the contact adhesion can be optimized by changing the curvature radius distribution of the asperity summits.
• We study exciton-plasmon coupling in two-dimensional semiconductors coupled with Ag plasmonic lattices via angle-resolved reflectance spectroscopy and by solving the equations of motion (EOMs) in a coupled oscillator model accounting for all the resonances of the system. Five resonances are considered in the EOM model: semiconductor A and B excitons, localized surface plasmon resonances (LSPRs) of plasmonic nanostructures and the lattice diffraction modes of the plasmonic array. We investigated the exciton-plasmon coupling in different 2D semiconductors and plasmonic lattice geometries, including monolayer MoS2 and WS2 coupled with Ag nanodisk and bowtie arrays, and examined the dispersion and lineshape evolution in the coupled systems via the EOM model with different exciton-plasmon coupling parameters. The EOM approach provides a unified description of the exciton-plasmon interaction in the weak, intermediate and strong coupling cases with correctly explaining the dispersion and lineshapes of the complex system. This study provides a much deeper understanding of light-matter interactions in multilevel systems in general and will be useful to instruct the design of novel two-dimensional exciton-plasmonic devices for a variety of optoelectronic applications with precisely tailored responses.
• While producing comparable efficiencies and showing similar properties when probed by conventional techniques, such as Raman, photoluminescence and X-ray diffraction, two thin film solar cell materials with complex structures, such as quaternary compound CZTSe, may in fact differ significantly in their microscopic structures. In this work, laser induced modification Raman spectroscopy, coupled with high spatial resolution and high temperature capability, is demonstrated as an effective tool to obtain important structure information beyond that the conventional characterization techniques can offer, and thus to reveal the microscopic scale variations between nominally similar alloys. Specifically, CZTSe films prepared by sputtering and co-evaporation methods that exhibited similar Raman and XRD features were found to behave very differently under high laser power and high temperature Raman probe, because the differences in their microscopic structures lead to different structure modifications in response to the external stimuli, such as light illumination and temperature. They were also shown to undergo different degree of plastic changes and have different thermal conductivities as revealed by spatially-resolved Raman spectroscopy.
• The magnetic properties and magnetic structure are presented for CoPS$_3$, a quasi-two-dimensional antiferromagnet on a honeycomb lattice with a Néel temperature of $T_N \sim 120$ K. The compound is shown to have XY-like anisotropy in its susceptibility, and the anisotropy is analysed to extract crystal field parameters. For temperatures between 2 K and 300 K, no phase transitions were observed in the field-dependent magnetization up to 10 Tesla. Single-crystal neutron diffraction shows that the magnetic propagation vector is \bfk= $\left[010\right]$ with the moments mostly along the $\mathbf{a}$ axis and with a small component along the $\mathbf{c}$ axis, which largely verifies the previously-published magnetic structure for this compound. The magnetic Bragg peak intensity decreases with increasing temperature as a power law with exponent $2\beta = 0.60 \pm 0.01$ for $T > 0.9~T_N$.
• The Haldane spin-chain compound, Tb2BaNiO5, has been known to order antiferromagnetically below (T_N= ) 63 K. The present magnetic studies on the polycrystals bring out that there is another magnetic transition at a lower temperature (T_2= ) 25 K, with a pronounced magnetic-field induced metamagnetic and metaelectric behavior. Multiferroic features are found below T_2 only, and not at T_N. The most intriguing observation is that the observed change of dielectric constant is intrinsic and largest (e.g., about 18% at 15 K) within this Haldane spin-chain family, R2BaNiO5. Taking into account that this trend (the largest change for Tb case within this family) correlates with a similar trend in T_N (with the values of T_N being about 55, 58, 53 and 32 K for Gd, Dy, Ho and Er cases), we believe that an explanation usually offered for this T_N behavior in rare-earth systems is applicable for this behavior as well . That is, single-ion anisotropy following crystal-field splitting is responsible for this extraordinary magnetodielectric effect in this Tb case. To our knowledge, such an observation was not made in the past literature of multiferroics.
• The quantum anomalous Hall effect (QAHE) that emerges under broken time-reversal symmetry in topological insulators (TI) exhibits many fascinating physical properties for potential applications in nano-electronics and spintronics. However, in transition-metal doped TI, the only experimentally demonstrated QAHE system to date, the effect is lost at practically relevant temperatures. This constraint is imposed by the relatively low Curie temperature (Tc) and inherent spin disorder associated with the random magnetic dopants. Here we demonstrate drastically enhanced Tc by exchange coupling TI to Tm3Fe5O12, a high-Tc magnetic insulator with perpendicular magnetic anisotropy. Signatures that the TI surface states acquire robust ferromagnetism are revealed by distinct squared anomalous Hall hysteresis loops at 400 K. Point-contact Andreev reflection spectroscopy confirms that the TI surface is indeed spin-polarized. The greatly enhanced Tc, absence of spin disorder, and perpendicular anisotropy are all essential to the occurrence of the QAHE at high temperatures.
• To pursuit high electrochemical performance of supercapacitors based on Faradaic charge-transfer with redox reaction or absorption/desorption effect, the intercalation efficiency of electrolyte ions into electrode materials is a crucial prerequisite to surpass the pure surface capacity with extra bulk contribution. Here we report layered barium transition metal fluorides, BaMF4 (M = Mn, Co, Ni) to be a series of new electrode materials applied in standard three-electrode configuration. Benefiting from the efficient immersing of electrolyte ions, these materials own prominent specific capacitance. Electrochemical characterizations demonstrate that all the BaMF4 electrodes show both capacitive behavior and Faradaic redox reactions in the cyclic voltammograms, and ability of charge storage by charging-discharging cycling with high cycling stability. Particularly, BaCoF4 shows the the highest specific capacitance of 360 F g-1 at current density of 0.6 A g-1, even the particle size is far beyond nanometer scale. In addition, first principles calculations reveal the possible underlying mechanisms.
• The ability to uniquely identify an object or device is important for authentication. Imperfections, locked into structures during fabrication, can be used to provide a fingerprint that is challenging to reproduce. In this paper, we propose a simple optical technique to read unique information from nanometer-scale defects in 2D materials. Flaws created during crystal growth or fabrication lead to spatial variations in the bandgap of 2D materials that can be characterized through photoluminescence measurements. We show a simple setup involving an angle-adjustable transmission filter, simple optics and a CCD camera can capture spatially-dependent photoluminescence to produce complex maps of unique information from 2D monolayers. Atomic force microscopy is used to verify the origin of the optical signature measured, demonstrating that it results from nanometer-scale imperfections. This solution to optical identification with 2D materials could be employed as a robust security measure to prevent counterfeiting.
• Electrically-active defects have a significant impact on the performance of electronic devices based on wide band-gap materials such as diamond. This issue is ubiquitous in diamond science and technology, since the presence of charge traps in the active regions of different classes of diamond-based devices (detectors, power diodes, transistors) can significantly affect their performances, due to the formation of space charge, memory effects and the degradation of the electronic response associated with radiation damage. Among the most common defects in diamond, the nitrogen-vacancy (NV) center possesses unique spin properties which enable high-sensitivity field sensing at the nanoscale. Here we demonstrate that NV ensembles can be successfully exploited to perform a direct local mapping of the internal electric field distribution of a graphite-diamond-graphite junction exhibiting electrical properties dominated by trap- and space-charge-related conduction mechanisms. By performing optically-detected magnetic resonance measurements, we performed both punctual readout and spatial mapping of the electric field in the active region at different bias voltages. In this novel "self-diagnostic" approach, defect complexes represent not only the source of detrimental space charge effects, but also a unique tool to directly investigate them, by providing experimental evidences on the conduction mechanisms that in previous studies could only be indirectly inferred on the basis of conventional electrical and optical characterization.
• Networks of vertically c-oriented prism shaped InN nanowalls, are grown on c-GaN/sapphire templates using a CVD technique, where pure indium and ammonia are used as metal and nitrogen precursors. A systematic study of the growth, structural and electronic properties of these samples shows a preferential growth of the islands along [11-20] and [0001] directions leading to the formation of such a network structure, where the vertically [0001] oriented tapered walls are laterally align along one of the three [11-20] directions. Inclined facets of these walls are identified as r-planes [(1-102)-planes] of wurtzite InN. Onset of absorption for these samples is observed to be higher than the band gap of InN suggesting a high background carrier concentration in this material. Study of the valence band edge through XPS indicates the formation of positive depletion regions below the r-plane side facets of the walls. This is in contrast with the observation for c-plane InN epilayers, where electron accumulation is often reported below the top surface.
• Solving the atomic structure of metallic clusters is fundamental to understanding their optical, electronic, and chemical properties. We report the structure of Au$_{\text{146}}$(p-MBA)$_{\text{57}}$ at subatomic resolution (0.85 Å) using electron diffraction (MicroED) and atomic resolution by X-ray diffraction. The 146 gold atoms may be decomposed into two constituent sets consisting of 119 core and 27 peripheral atoms. The core atoms are organized in a twinned FCC structure whereas the surface gold atoms follow a C$_{2}$ rotational symmetry about an axis bisecting the twinning plane. The protective layer of 57 p-MBAs fully encloses the cluster and comprises bridging, monomeric, and dimeric staple motifs. Au$_{\text{146}}$(p-MBA)$_{\text{57}}$ is the largest cluster observed exhibiting a bulk-like FCC structure as well as the smallest gold particle exhibiting a stacking fault.
• 57Fe Mossbauer spectroscopy measurements were performed on a powdered CuFe2Ge2 sample that orders antiferromagnetically at ~ 175 K. Whereas a paramagnetic doublet was observed above the Neel temperature, a superposition of paramagnetic doublet and magnetic sextet (in approximately 0.5 : 0.5 ratio) was observed in the magnetically ordered state, suggesting a magnetic structure similar to a double-Q spin density wave with half of the Fe paramagnetic and another half bearing static moment of ~ 0.5 - 1 mu_B. These results call for a re-evaluation of the recent neutron scattering data and band structure calculations.
• The stacking problem is approached by computational mechanics, using an Ising next nearest neighbor model. Computational mechanics allows to treat the stacking arrangement as an information processing system in the light of a symbol generating process. A general method for solving the stochastic matrix of the random Gibbs field is presented, and then applied to the problem at hand. The corresponding phase diagram is then discussed in terms of the underlying $\epsilon$-machine, or optimal finite state machine, describing statistically the system. The occurrence of higher order polytypes at the borders of the phase diagram is also analyzed. Discussion of the applicability of the model to real system such as ZnS and Cobalt is done. The method derived is directly generalizable to any one dimensional model with finite range interaction.
• We study the temperature dependence of the Rashba-split bands in the bismuth tellurohalides BiTe$X$ $(X=$ I, Br, Cl) from first principles. We find that increasing temperature reduces the Rashba splitting, with the largest effect observed in BiTeI with a reduction of the Rashba parameter of $40$% when temperature increases from $0$ K to $300$ K. These results highlight the inadequacy of previous interpretations of the observed Rashba splitting in terms of static-lattice calculations alone. Notably, we find the opposite trend, a strengthening of the Rashba splitting with rising temperature, in the pressure-stabilized topological-insulator phase of BiTeI. We propose that the opposite trends with temperature on either side of the topological phase transition could be an experimental signature for identifying it. The predicted temperature dependence is consistent with optical conductivity measurements, and should also be observable using photoemission spectroscopy, which could provide further insights into the nature of spin splitting and topology in the bismuth tellurohalides. | 2017-06-27 06:57:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5079171657562256, "perplexity": 2286.420700295939}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128321025.86/warc/CC-MAIN-20170627064714-20170627084714-00248.warc.gz"} |
https://mathematica.stackexchange.com/questions/85805/how-many-arguments-does-a-function-require-and-how-to-use-that-in-manipulate/85812 | # How many arguments does a function require, and how to use that in Manipulate
The goal is to vary the order parameters in wavelet transforms in the Manipulate environment. The various transformations have arguments of different rank. For example,
HaarWavelet[] has no order argument.
DaubechiesWavelet[m] has a single order argument $m$ and the desire is to present choices for $m$.
BiorthogonalSplineWavelet[m,n] has an order parameter $m$ and a dual order parameter $n$ and the desire is to let the user control $m$ and $n$.
The current state of my effort is this:
data = DiskMatrix[10]; Manipulate[ dwt = DiscreteWaveletTransform[dat32, wavelet]; gdwd = WaveletMatrixPlot[dwt] , {wavelet, {HaarWavelet[], DaubechiesWavelet[], MeyerWavelet[]}}]
The different transforms can be selected, but there is no capability to change the order parameters. How can order parameters be introduced?
• Related: (7040), (56665) – Mr.Wizard Jun 13 '15 at 2:23
• MrW: valuable insights. Thank you. – dantopa Jun 15 '15 at 14:42
I'm pretty sure there ought to be something cleaner. While we wait for a better answer, you may use this to return the minimum and maximum number of arguments allowed for each wavelet:
nArgs[fun_] :=
StringCases[ToString@DownValues@fun,
Shortest["ArgumentCountQ"~~__~~(n1:NumberString)~~__~~ (n2:NumberString)] :>
ToExpression[{n1, n2}]]
{#, nArgs@#} & /@ ToExpression /@ Names["*Wavelet"]
(*
{{BattleLemarieWavelet, {{2, 2}}},
{BiorthogonalSplineWavelet, {{2, 2}}},
{CDFWavelet, {{1, 1}}},
{CoifletWavelet, {{1, 1}}},
{DaubechiesWavelet, {{1, 1}}},
{DGaussianWavelet, {{1, 1}}},
{GaborWavelet, {{1, 1}}},
{HaarWavelet, {{0, 0}}},
{MexicanHatWavelet, {{1, 1}}},
{MeyerWavelet, {{2, 2}}},
{MorletWavelet, {{0, 0}}},
{PaulWavelet, {{1, 1}}},
{ReverseBiorthogonalSplineWavelet, {{2, 2}}},
{ShannonWavelet, {{1, 1}}},
{SymletWavelet, {{1, 1}}}}
*)
So:
m[fun_] := nArgs[fun][[1, 2]]
d = DiskMatrix[10];
Manipulate[
WaveletMatrixPlot@DiscreteWaveletTransform[d, wv[Sequence @@ x[[;; m@wv]]]],
{{x, {1, 1}}, ControlType -> None},
{wv, {HaarWavelet, DaubechiesWavelet, MeyerWavelet}},
Dynamic@Panel@Grid[{Slider[Dynamic@x[[#]], {0, 10, 1}], x[[#]]} & /@ Range@m@wv]]
• Cleaner is better. Nice idea. – dantopa Jun 15 '15 at 14:39 | 2019-11-15 02:05:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19044648110866547, "perplexity": 7898.249771665086}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668561.61/warc/CC-MAIN-20191115015509-20191115043509-00037.warc.gz"} |
https://www.askiitians.com/forums/Wave-Motion/11/56355/beats.htm | #### Thank you for registering.
One of our academic counsellors will contact you within 1 working day.
Click to Chat
1800-1023-196
+91-120-4616500
CART 0
• 0
MY CART (5)
Use Coupon: CART20 and get 20% off on all online Study Material
ITEM
DETAILS
MRP
DISCOUNT
FINAL PRICE
Total Price: Rs.
There are no items in this cart.
Continue Shopping
Aravind Bommera
36 Points
8 years ago
In acoustics, a beat is an interference between two sounds of slightly different frequencies, perceived as periodic variations in volume whose rate is the difference between the two frequencies.
With tuning instruments that can produce sustained tones, beats can readily be recognized. Tuning two tones to a unison will present a peculiar effect: when the two tones are close in pitch but not yet identical, the difference in frequency generates the beating. The volume varies like in a tremolo as the sounds alternately interfere constructively and destructively. When the two tones gradually approach unison, the beating slows down and disappears.
bhaveen kumar
38 Points
8 years ago
beats
yours katarnak Suresh
43 Points
8 years ago
This phenomenon manifests acoustically. If a graph is drawn to show the function corresponding to the total sound of two strings, it can be seen that maxima and minima are no longer constant as when a pure note is played, but change over time: when the two waves are nearly 180 degrees out of phase the maxima of each cancel the minima of the other, whereas when they are nearly in phase their maxima sum up, raising the perceived volume.
It can be proven (see List of trigonometric identities) that the successive values of maxima and minima form a wave whose frequency equals the difference between the frequencies of the two starting waves. Let''s demonstrate the simplest case, between two sine waves of unit amplitude:
${ \sin(2\pi f_1t)+\sin(2\pi f_2t) } = { 2\cos\left(2\pi\frac{f_1-f_2}{2}t\right)\sin\left(2\pi\frac{f_1+f_2}{2}t\right) }$
If the two starting frequencies are quite close (usually differences of the order of few hertz), the frequency of the cosine of the right side of the expression above, that is (f1f2)/2, is often too slow to be perceived as a pitch. Instead, it is perceived as a periodic variation of the sine in the expression above (it can be said, the cosine factor is an envelope for the sine wave), whose frequency is (f1 + f2)/2, that is, the average of the two frequencies. However, because the sine part of the right side function alternates between negative and positive values many times during one period of the cosine part, only the absolute value of the envelope is relevant. Therefore the frequency of the envelope is twice the frequency of the cosine, which means the beat frequency is:
$f_{beat}=f_1-f_2\,$
This can be seen on the diagram on the right.
A physical interpretation is that when $\cos\left(2\pi\frac{f_1-f_2}{2}t\right)$ equals one, the two waves are in phase and they interfere constructively. When it is zero, they are out of phase and interfere destructively. Beats occur also in more complex sounds, or in sounds of different volumes, though calculating them mathematically is not so easy.
Beating can also be heard between notes that are near to, but not exactly, a harmonic interval, due to some harmonic of the first note beating with a harmonic of the second note. For example, in the case of perfect fifth, the third harmonic (i.e. second overtone) of the bass note beats with the second harmonic (first overtone) of the other note. As well as with out-of tune notes, this can also happen with some correctly tuned equal temperament intervals, because of the differences between them and the corresponding just intonation intervals: see Harmonic series (music)#Harmonics and tuning. | 2021-04-13 16:46:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7832148671150208, "perplexity": 944.2150823617853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038073437.35/warc/CC-MAIN-20210413152520-20210413182520-00073.warc.gz"} |
https://tensorsociety.org/study-on-einstein-sasakian-decomposable-recurrent-space-of-first-order/ | Peer-Reviewed journal indexed by Mathematical Reviews and Zentralblatt MATH.
# Study on Einstein-Sasakian Decomposable Recurrent Space of First Order
Authors
K. S. Rawat and Sandeep Chauhan
Abstract
Takano [2] have studied decomposition of curvature tensor in a recurrent
space. Sinha and Singh [3] have been studied and defined decomposition of
recurrent curvature tensor field in a Finsler space. Singh and Negi studied
decomposition of recurrent curvature tensor field in a K¨aehlerian space. Negi
and Rawat [6] have studied decomposition of recurrent curvature tensor field
in K¨aehlerian space. Rawat and Silswal [11] studied and defined decomposition of recurrent curvature tensor fields in a Tachibana space. Rawat and Kunwar Singh [12] studied the decomposition of curvature tensor field in K¨aehlerian recurrent space of first order. Further, Rawat and Chauhan [23] studied the decomposition of curvature tensor field in Einstein- K¨aehlerian recurrent spaceof first order. In the present paper, we have studied the decomposition of curvature tensor fields R^h_{ijk} in terms of two non-zero vectors and a tensor field in EinsteinSasakian recurrent space of first order and several theorem have been established and proved.
Keywords and Phrases : Sasakian space, Einstein space, Einstein-Sasakian space, recurrent space, Curvature tensor, Projective curvature tensor
AMS Subject Classification : : 53C25, 53C44, 53D10
Received date : February 13, 2018
Accepted date: June 26, 2018
cited by: J.T.S. Vol. 12 (2018), pp.85-92 | 2022-01-18 00:57:08 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8348855972290039, "perplexity": 5841.308665728865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300658.84/warc/CC-MAIN-20220118002226-20220118032226-00623.warc.gz"} |
https://aptitude.gateoverflow.in/6623/nielit-2019-feb-scientist-c-section-b-3 | 360 views
Some of the sentences have errors and some have none. Find out which part of a sentence has an error, and the appropriate letter (A),(B),(C) is your answer. If there is no error, (D) is the answer.
$\underset{(A)}{\underline{\text{I’ve been to a few of his lectures,/}}}$
$\underset{(B)}{\underline{\text{but understood little of/}}}$
$\underset{(C)}{\underline{\text{what he has said./}}}$
$\underset{(D)}{\underline{\text{No error}}}$
Option C has an error.
The sentence begins in the present perfect tense – “I have been to...”
Therefore, Option C should be – “...what he has said.”
1.5k points
1
348 views
2
434 views
3
316 views
4 | 2022-12-05 12:17:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3238872289657593, "perplexity": 3258.6697066419783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711016.32/warc/CC-MAIN-20221205100449-20221205130449-00595.warc.gz"} |
https://demo7.dspace.org/items/f970a0cc-f7f4-4b25-baa2-cc0f2cf9d30d | Cosmological Kaluza-Klein branes in black brane spacetimes
Authors
Minamitsuji, Masato
Description
We discuss the comsological evolution of a brane in the $D(>6)$-dimensional black brane spacetime in the context of the Kaluza-Klein (KK) braneworld scheme, i.e., to consider KK compactification on the brane. The bulk spacetime is composed of two copies of a patch of $D$-dimensional black three-brane solution. The near-horizon geometry is given by $AdS_{5}\times S^{(D-5)}$ while in the asymptotic infinity the spacetime approaches $D$-dimensional Minkowski. We consider the brane motion from the near-horizon region toward the spatial infinity, which induces cosmology on the brane. As is expected, in the early times, namely when the brane is located in the near-horizon region, the effective cosmology on the brane coincides with that in the second Randall-Sundrum (RS II) model. Then, the brane cosmology starts to deviate from the RS type one since the dynamics of KK compactified dimensions becomes significant. We find that the brane Universe cannot reach the asymptotic infinity, irrespectively of the components of matter on the brane.
Comment: 7 pages, 2 figures, references added,the version which will appear in PLB
Keywords
High Energy Physics - Theory, Astrophysics, General Relativity and Quantum Cosmology | 2022-12-06 07:40:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7445680499076843, "perplexity": 1408.247586035697}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711074.68/warc/CC-MAIN-20221206060908-20221206090908-00470.warc.gz"} |
https://bibli.cirm-math.fr/listRecord.htm?list=link&xRecord=19287088146910052609 | m
• E
F Nous contacter
0
# Documents 60J28 | enregistrements trouvés : 1
O
P Q
Déposez votre fichier ici pour le déplacer vers cet enregistrement.
## Near-criticality in mathematical models of epidemics Luczak, Malwina | CIRM H
Multi angle
Research talks;Probability and Statistics
In an epidemic model, the basic reproduction number $R_{0}$ is a function of the parameters (such as infection rate) measuring disease infectivity. In a large population, if $R_{0}> 1$, then the disease can spread and infect much of the population (supercritical epidemic); if $R_{0}< 1$, then the disease will die out quickly (subcritical epidemic), with only few individuals infected.
For many epidemics, the dynamics are such that $R_{0}$ can cross the threshold from supercritical to subcritical (for instance, due to control measures such as vaccination) or from subcritical to supercritical (for instance, due to a virus mutation making it easier for it to infect hosts). Therefore, near-criticality can be thought of as a paradigm for disease emergence and eradication, and understanding near-critical phenomena is a key epidemiological challenge.
In this talk, we explore near-criticality in the context of some simple models of SIS (susceptible-infective-susceptible) epidemics in large homogeneous populations.
In an epidemic model, the basic reproduction number $R_{0}$ is a function of the parameters (such as infection rate) measuring disease infectivity. In a large population, if $R_{0}> 1$, then the disease can spread and infect much of the population (supercritical epidemic); if $R_{0}< 1$, then the disease will die out quickly (subcritical epidemic), with only few individuals infected.
For many epidemics, the dynamics are such that $R_{0}$ can ...
#### Filtrer
##### Audience
Titres de périodiques et e-books électroniques (Depuis le CIRM)
Ressources Electroniques
Books & Print journals
Recherche avancée
0
Z | 2021-04-16 19:58:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7973254323005676, "perplexity": 5563.33703140558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038089289.45/warc/CC-MAIN-20210416191341-20210416221341-00057.warc.gz"} |
http://www.reference.com/browse/brinell+test | Definitions
# Ceramography
Ceramography is the art and science of preparation, examination and evaluation of ceramic microstructures. Ceramography can be thought of as the metallography of ceramics, and falls under the Structure heading of the materials science tetrahedron. The microstructure is the structure level of approximately 0.1 to 100 µm, between the minimum wavelength of visible light and the resolution limit of the naked eye. The microstructure includes most grains, secondary phases, grain boundaries, pores, micro-cracks and hardness microindentions. Most bulk mechanical, optical, thermal, electrical and magnetic properties are significantly affected by the microstructure. The fabrication method and process conditions are generally indicated by the microstructure. The root cause of many ceramic failures is evident in the microstructure. Ceramography is part of the broader field of materialography, which includes all the microscopic techniques of material analysis, such as metallography, petrography and plastography. Ceramography is usually reserved for high-performance ceramics for industrial applications, such as 85–99.9% alumina (Al2O3) in Fig. 1, zirconia (ZrO2), silicon carbide (SiC), silicon nitride (Si3N4), and ceramic-matrix composites. It is seldom used on whiteware ceramics such as sanitaryware, wall tiles and dishware.
## A brief history of ceramography
Ceramography evolved along with other branches of materialography and ceramic engineering. Alois de Widmanstätten of Austria etched a meteorite in 1808 to reveal proeutectoid ferrite bands that grew on prior austenite grain boundaries. Geologist Henry Clifton Sorby, the "father of metallography," applied petrographic techniques to the steel industry in the 1860s in Sheffield, England. French geologist Auguste Michel-Lévy devised a chart that correlated the optical properties of minerals to their transmitted color and thickness in the 1880s. Swedish metallurgist J.A. Brinell invented the first quantitative hardness scale in 1900. Smith and Sandland developed the first microindention hardness test at Vickers Ltd. in London in 1922. Swiss-born microscopist A.I. Buehler started the first metallographic equipment manufacturer near Chicago in 1936. Frederick Knoop and colleagues at the National Bureau of Standards developed a less-penetrating (than Vickers) microindention test in 1939. George Kehl of Columbia University wrote a book that was considered the bible of materialography until the 1980s. Kehl co-founded a group within the Atomic Energy Commission that became the International Metallographic Society in 1967.
## Preparation of ceramographic specimens
The preparation of ceramic specimens for microstructural analysis consists of five broad steps: sawing, embedding, grinding, polishing and etching. The tools and consumables for ceramographic preparation are available worldwide from metallography equipment vendors and laboratory supply companies.
• Sawing: most ceramics are extremely hard and must be wet-sawed with a circular blade embedded with diamond particles. A metallography or lapidary saw equipped with a low-density diamond blade is usually suitable. The blade must be cooled by a continuous liquid spray.
• Embedding: to facilitate further preparation, the sawed specimen is usually embedded (or mounted or encapsulated) in a plastic disc, 25, 30 or 35 mm in diameter. A thermosetting solid resin, activated by heat and compression, e.g. mineral-filled epoxy, is best for most applications. A castable (liquid) resin such as unfilled epoxy, acrylic or polyester may be used for porous refractory ceramics or microelectronic devices. The castable resins are also available with fluorescent dyes that aid in fluorescence microscopy.
• Grinding is abrasion of the surface of interest by abrasive particles, usually diamond, that are bonded to paper or a metal disc. Grinding erases saw marks, coarsely smooths the surface, and removes stock to a desired depth. A typical grinding sequence for ceramics is one minute on a 240-grit metal-bonded diamond wheel rotating at 240 rpm and lubricated by flowing water, followed by a similar treatment on a 400-grit wheel. The specimen is washed in an ultrasonic bath after each step.
• Polishing is abrasion by free abrasives that are suspended in a lubricant and can roll or slide between the specimen and paper. Polishing erases grinding marks and smooths the specimen to a mirror-like finish. Polishing on a bare metallic platen is called lapping. A typical polishing sequence for ceramics is 5–10 minutes each on 15-, 6- and 1-µm diamond paste or slurry on napless paper rotating at 240 rpm. The specimen is again washed in an ultrasonic bath after each step.
• Etching reveals and delineates grain boundaries and other microstructural features that are not apparent on the as-polished surface. The two most common types of etching in ceramography are selective chemical corrosion, and a thermal treatment that causes relief. As an example, alumina can be chemically etched by immersion in boiling concentrated phosphoric acid for 30–60 s, or thermally etched in a furnace for 20–40 min at 1500°C in air. The plastic encapsulation must be removed before thermal etching.
Alternatively, non-cubic ceramics can be prepared as thin sections, also known as petrography, for examination by polarized transmitted light microscopy. In this technique, the specimen is sawed to ~1 mm thick, glued to a microscope slide, and ground to a thickness (x) approaching 30 µm. A cover slip is glued onto the exposed surface. The adhesives, such as epoxy or Canada balsam resin, must have approximately the same refractive index (η ≈ 1.54) as glass. Most ceramics have a very small absorption coefficient (α ≈ 0.5 cm −1 for alumina in Fig. 2) in the Beer-Lambert law below, and can be viewed in transmitted light. Cubic ceramics, e.g. yttria-stabilized zirconia and spinel, have the same refractive index in all crystallographic directions and are, therefore, opaque when the microscope's polarizer is 90° out of phase with its analyzer.
$I_t = I_0e^\left\{-alpha x\right\}$ (Beer-Lambert eqn)
Ceramographic specimens are electrical insulators in most cases, and must be coated with a conductive ~10-nm layer of metal or carbon for electron microscopy, after polishing and etching. Gold or Au-Pd alloy from a sputter coater or evaporative coater also improves the reflection of visible light from the polished surface under a microscope, by the Fresnel formula below. Bare alumina (η ≈ 1.77, k ≈ 10 −6) has a negligible extinction coefficient and reflects only 8% of the incident light from the microscope. Gold-coated (η ≈ 0.82, k ≈ 1.59 @ λ = 500 nm) alumina reflects 44% in air, 39% in immersion oil.
$R = frac\left\{I_r\right\}\left\{I_i\right\} = frac\left\{\left(eta_1 - eta_2\right)^2 + k^2\right\}\left\{\left(eta_1 + eta_2\right)^2 + k^2\right\}$ (Fresnel eqn)
## Ceramographic analysis
Ceramic microstructures are most often analyzed by reflected visible-light microscopy in brightfield. Darkfield is used in limited circumstances, e.g., to reveal cracks. Polarized transmitted light is used with thin sections, where the contrast between grains comes from birefringence. Very fine microstructures may require the higher magnification and resolution of a scanning electron microscope (SEM) or confocal laser scanning microscope (CLSM). The cathodoluminescence microscope (CLM) is useful for distinguishing phases of refractories. The transmission electron microscope (TEM) and scanning acoustic microscope (SAM) have specialty applications in ceramography.
Ceramography is often done qualitatively, for comparison of the microstructure of a component to a standard for quality control or failure analysis purposes. Three common quantitative analyses of microstructures are grain size, second-phase content and porosity. Microstructures are measured by the principles of stereology, in which three-dimensional objects are evaluated in 2-D by projections or cross-sections.
Grain size can be measured by the line-fraction or area-fraction methods of ASTM E112. In the line-fraction methods, a statistical grain size is calculated from the number of grains or grain boundaries intersecting a line of known length or circle of known circumference. In the area-fraction method, the grain size is calculated from the number of grains inside a known area. In each case, the measurement is affected by secondary phases, porosity, preferred orientation, exponential distribution of sizes, and non-equiaxed grains. Image analysis can measure the shape factors of individual grains by ASTM E1382.
Second-phase content and porosity are measured the same way in a microstructure, such as ASTM E562. E562 is a point-fraction method based on the stereological principle of point fraction = volume fraction, i.e., Pp = Vv. Second-phase content in ceramics, such as carbide whiskers in an oxide matrix, is usually expressed as a mass fraction. Volume fractions can be converted to mass fractions if the density of each phase is known. Image analysis can measure porosity, pore-size distribution and volume fractions of secondary phases by ASTM E1245. Porosity measurements do not require etching. Multi-phase microstructures do not require etching if the contrast between phases is adequate, as is usually the case.
Grain size, porosity and second-phase content have all been correlated with ceramic properties such as mechanical strength σ by the Hall-Petch equation, hardness, toughness, dielectric constant and many others.
## Microindention hardness and toughness
The hardness of a material can be measured in many ways. The Knoop hardness test, a method of microindention hardness, is the most reproducible for dense ceramics. The Vickers hardness test and superficial Rockwell scales (e.g., 45N) can also be used, but tend to cause more surface damage than Knoop. The Brinell test is suitable for ductile metals, but not ceramics. In the Knoop test, a diamond indenter in the shape of an elongated pyramid is forced into a polished (but not etched) surface under a predetermined load, typically 500 or 1000 gm. The load is held for some amount of time, say 10 s, and the indenter is retracted. The indention long diagonal (d, μm, in Fig. 3) is measured under a microscope, and the Knoop hardness (HK) is calculated from the load (P, gm) and the square of the diagonal length in the equations below. The constants account for the projected area of the indenter and unit conversion factors. Most oxide ceramics have a Knoop hardness in the range of 1000–1500 kgf/mm2 (10 – 15 GPa), and many carbides are over 2000 (20 GPa). The method is specified in ASTM C849, C1326 & E384. Microindention hardness is also called microindentation hardness or simply microhardness.
$HK = 14229 frac\left\{P\right\}\left\{d^2\right\}$ (kgf/mm2) and $HK = 139.54 frac\left\{P\right\}\left\{d^2\right\}$ (GPa)
The toughness of ceramics can be determined from a Vickers test under a load of 10 – 20 kg. Toughness is the ability of a material to resist crack propagation. Several calculations have been formulated from the load (P), elastic modulus (E), microindention hardness (H), crack length (c in Fig. 4) and flexural strength (σ). Modulus of rupture (MOR) bars with a rectangular cross-section are indented in three places on a polished surface. The bars are loaded in 4-point bending with the polished, indented surface in tension, until fracture. The fracture normally originates at one of the indentions. The crack lengths are measured under a microscope. The toughness of most ceramics is 2–4 MPa√m, but toughened zirconia is as much as 13, and cemented carbides are often over 20.
$K_\left\{icl\right\} = 0.016 sqrt\left\{frac\left\{E\right\}\left\{H\right\}\right\}frac\left\{P\right\}\left\{\left(c_0\right)^\left\{1.5\right\}\right\}$ initial crack length
$K_\left\{isb\right\} = 0.59 left\left(frac\left\{E\right\}\left\{H\right\}right\right)^\left\{1/8\right\}\left[sigma \left(P^\left\{1/3\right\}\right)\right]^\left\{3/4\right\}$ indention strength in bending
• Sample Preparation of Ceramic Material, Buehler Ltd., 1990.
• Structure, V33, Struers A/S, 1998, p 3–20.
• Struers Metalog Guide
• S. Binkowski, R. Paul & M. Woydt, "Comparing Preparation Techniques Using Microstructural Images of Ceramic Materials," Structure, V39, 2002, p 8–19.
• R.E. Chinn, Ceramography, ASM International and the American Ceramic Society, 2002, ISBN 0-87170-770-5.
• D.J. Clinton, A Guide to Polishing and Etching of Technical and Engineering Ceramics, The Institute of Ceramics, 1987.
• Digital Library of Ceramic Microstructures, University of Dayton, 2003.
• G. Elssner, H. Hoven, G. Kiessler & P. Wellner, translated by R. Wert, Ceramics and Ceramic Composites: Materialographic Preparation, Elsevier Science Inc., 1999, ISBN 978-0-444-10030-6.
• R.M. Fulrath & J.A. Pask, ed., Ceramic Microstructures: Their Analysis, Significance, and Production, Robert E. Krieger Publishing Co., 1968, ISBN 0-88275-262-6.
• K. Geels in collaboration with D.B. Fowler, W-U Kopp & M. Rückert, Metallographic and Materialographic Specimen Preparation, Light Microscopy, Image Analysis and Hardness Testing, ASTM International, 2007, ISBN 978-0-8031-4265-7.
• H. Insley & V.D. Fréchette, Microscopy of Ceramics and Cements, Academic Press Inc., 1955.
• W.E. Lee and W.M. Rainforth, Ceramic Microstructures: Property Control by Processing, Chapman & Hall, 1994.
• I.J. McColm, Ceramic Hardness, Plenum Press, 2000, ISBN 0-306-43287-0.
• Micrograph Center, ASM International, 2005.
• H. Mörtel, "Microstructural Analysis," Engineered Materials Handbook, Volume 4: Ceramics and Glasses, ASM International, 1991, p 570–579, ISBN 0-87170-282-7.
• G.D. Quinn, "Indentation Hardness Testing of Ceramics," ASM Handbook, Volume 8: Mechanical Testing and Evaluation, ASM International, 2000, p 244–251, ISBN 0-87170-389-0.
• A.T. Santhanam, "Metallography of Cemented Carbides," ASM Handbook Volume 9: Metallography and Microstructures, ASM International, 2004, p 1057–1066, ISBN 0-87170-706-3.
• U. Täffner, V. Carle & U. Schäfer, "Preparation and Microstructural Analysis of High-Performance Ceramics," ASM Handbook Volume 9: Metallography and Microstructures, ASM International, 2004, p 1057–1066, ISBN 0-87170-706-3.
• Holger F. Struer
## References
Search another word or see brinell teston Dictionary | Thesaurus |Spanish | 2014-07-23 05:34:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6277706623077393, "perplexity": 9545.83688872761}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997874283.19/warc/CC-MAIN-20140722025754-00061-ip-10-33-131-23.ec2.internal.warc.gz"} |
http://mathoverflow.net/questions/71935/finite-variation-and-idempotent-languages-and-automata | Finite variation and idempotent languages and automata.
Let $L$ be a regular language over alphabet $\Sigma$ and let $A:=(Q,\Sigma,\delta, q_0, F)$ be the minimal DFA recognizing $L$. For every $w\in \Sigma^*$ define the variation of $w$ w.r.t. $L$ by
$$\mathrm{Var}_L(w) := \\#\{0\leq k < n \text{ s.t. } \delta(w_1\cdots w_k)\neq \delta(w_1\cdots w_{k+1})\},$$ if $w:=w_1\cdots w_n$.
We say $L$ is of finite variation if $\sup_{w\in\Sigma^*}\{\text{Var}_L(w)\}<+\infty$.
This should be equivalent to ask that the only cycles of $A$ are of length one (i.e., self-loops).
Now what else can be said about the class $\text{FV}$ of regular languages with finite variation? Is there a characterization in terms of regular expressions? Is there a characterization in terms of generating elements (under boolean operations)?
I can prove that $\text{FV}$ is a $*$-variety of Eilenberg, i.e. it is closed under boolean operations, left and right word quotients and inverse homomorphisms. By Eilenberg variety theorem there is an associated pseudovariety of monoids $M(\text{FV})$. Is there anything in the literature about $\text{FV}$ and $M(\text{FV})$?
What if we consider the literally idempotent closure of $\text{FV}$?
Much more important: is any of this questions non-trivial to experts? :-)
-
It seems to me that FV should be the variety of languages associated to $\mathcal R$-trivial monoids. A monoid is $\mathcal R$-trivial if Green's relation $\mathcal R$ is trivial. This is the same as satisfying $(xy)^{\omega}x=(xy)^{\omega}$ for all $x,y$ where $z^{\omega}$ is the idempotent power of $z$.
Suppose first that the language has finite variation. Then as you observed each oriented cycle must visit one vertex. In particular if $n$ is such that $(xy)^{\omega}=(xy)^n$, then there is a cycle at $q(xy)^n$ labeled by $(xy)^n$ for any state $q$. It follows that $x,y$ label loops at $q(xy)^n$ and so $q(xy)^nx=q(xy)^n$. Since $q$ was arbitrary, it follows $(xy)^nx=(xy)^n$
Conversely, the variety of $\mathcal R$-trivial languages is generated by languages of the form $A_1^*a_1A_2^*\cdots a_{n-1}A_n^*$ where the $A_i$ are subsets of the alphabet and $a_i\notin A_i$. The minimal automaton has $n$ states. The elements of $A_i$ label loops at state $i$ and $a_i$ goes from state $i$ to $i+1$. Clearly this language has finite variation.
More conceptual argument (update): Let me rephrase the above proof to make it self-contained. Recall a monoid $M$ is $\mathcal R$-trivial if $aM=bM$ implies $a=b$.
Proposition. Let $L$ be a regular language. The following are equivalent.
1. $L$ has finite variation.
2. Each strong connected component of the minimal automaton of $L$ has a single vertex.
3. There is a total ordering on the states of the minimal automaton of $L$ such that $qa\geq q$ for all states $q$ and inputs $a$.
4. The syntactic monoid of $L$ is $\mathcal R$-trivial.
Proof:
(1) iff (2). If there is a nontrivial strongly connected component, then there is an oriented cycle $p$ visiting at least 2 vertices and with no repeated vertices. If $w$ is the label of $p$, then the words $w^n$ show that the variation is not finite. If all strongly connected components are trivial then the variation is bounded by the length of the longest loop edge-free path.
(2) implies (3). Removing the loop edges gives an acyclic digraph. Topologically sort the states. Then by construction (3) holds since loop edges fix you and all other transitions go up in the order.
(3) implies (4) This is well known and can be found in Pin's book. The functions satisfying $qf\geq q$ for all states $q$ form a submonoid. Suppose $u,v$ generate the same right ideal of the syntactic monoid, the $u=vx$ and $v=uy$ for some $x,y$. Thus for any state $q$ we have $qu=qvx\geq qv=quy\geq qu$. Thus $u=v$.
(4) implies (2). Suppose that $q,q'$ are in the same strongly connected component. There are elements of the syntactic monoid such that $qu=q'$ and $q'v=q$. Then $q(uv)^n=q$ and $q(uv)^nu=q'$ for all $n$. Choose $n$ so that $(uv)^n$ is idempotent. Then $(uv)^n$ generates the same right ideal as $(uv)^nu$ and so they are equal. Thus $q=q'$. QED
-
Can anybody fix the formatting? – Benjamin Steinberg Aug 3 '11 at 14:45
(Fixed. Note that markdown does not have any predefined proposition or proof formatting support, hence I improvized.) – Emil Jeřábek Aug 3 '11 at 15:02
Thanks for the fix. – Benjamin Steinberg Aug 3 '11 at 15:29
As for my question on literally idempotent closure of $\mathrm{FV}=\mathrm{R}$ I can now answer by myself, thanks to Benjamin Steinberg.
By literally idempotent closure $\overline{V}$ of a $\*$-variety of languages $V$ we mean the class $\overline{V}:=\{\overline{L} | L \in \mathrm{V}\}$, provided $\overline{L}:=\{w\in\Sigma^* \\,|\\, w\sim v \text{ for some } v\in L\}$, where $\sim$ is the synctatic congruence $\sim_L$ of $L$ enriched by relations $a^2 \sim a$ for all $a\in\Sigma$.
It has been proved by Klíma&Polák in [1] that for the variety $\mathrm{R}$ (and also for other varieties of general interest) one has $\mathrm{R}\cap \mathrm{Id}=\overline{\mathrm{R}}$ where $\mathrm{Id}$ is the $\*$-literal-variety of literally idempotent regular languages (i.e. languages $L$ such that for every $x,y\in\Sigma^*$ one has $xa^2y\in L \iff xay\in L$).
Hence, using the characterization of Klíma&Polák in [1] we conclude that finite variation literally idempotent languages are exactly finite unions of languages of the following form:
$B_0^*a_1B_1^*a_2\ldots a_kB_k^*, k\in\mathbb{N}_0, a_1,\ldots,a_k\in \Sigma, B_i\subseteq \Sigma, a_1\neq a_2\neq \cdots \neq a_k$ and $B_0\not\ni a_1\in B_1\not\ni a_2\in B_2\cdots B_{k-1}\not\ni a_k\in B_k$.
Bibliography: [1] Ondřej Klíma, Libor Polák: On Varieties of Literally Idempotent Languages. ITA 42(3): 583-598 (2008)
-
Can someone fix LaTeX in my post? I really don't understand why it is not rendered in the appropriate way. O_o – Carlo Aug 3 '11 at 15:23
I think your problem is when you are trying to write *-variety but I can't edit your answer. – Benjamin Steinberg Aug 3 '11 at 15:32
Fixed! Thank you. – Carlo Aug 3 '11 at 15:37
Fixed (*'s sometimes need escaping by backticks, since they are used as markup for italics). On an unrelated note, could you unaccept my misguided answer so that I can delete it? – Emil Jeřábek Aug 3 '11 at 15:39
Now it should be unaccepted. – Carlo Aug 3 '11 at 15:43
It could also be of interest to notice that the condition on strongly connected component (or equivalently order on states), which says that all cycles are self-loops, when applied to alternating automata instead of nondeterministic ones, yields the variety of star-free (equivalently first-order, aperiodic) languages. See for instance V. Diekert and P. Gastin. First-order definable languages
- | 2016-06-01 00:18:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9297506213188171, "perplexity": 386.85463583132184}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464053252010.41/warc/CC-MAIN-20160524012732-00076-ip-10-185-217-139.ec2.internal.warc.gz"} |
http://support.barrett.com/wiki/Burt-Research/KinematicsJointRangesConversionFactors?action=diff&version=16 | # Changes between Version 15 and Version 16 of Burt-Research/KinematicsJointRangesConversionFactors
Ignore:
Timestamp:
Apr 16, 2018, 10:33:00 PM (17 months ago)
Comment:
--
### Legend:
Unmodified
v15 }}} The homogeneous transformation from frame 0 to frame 3 is: The forward kinematics of BURT are used to determine the end tip location and orientation. These transformations are generated using the parameters in Table 1 and the matrix in Equation 1. The forward kinematics are determined for any frame on the robot by mulitplying all of the transforms up to and including the final frame. To determine the endpoint location and orientation use the following equation: {{{ #!div class="center" align="center" {{{ #!latex $^{0}T_{3}=^{0}T_{1}^{1}T_{2}^{2}T_{3}^{3}$ }}} '''Equation 3: Tool end tip position and orientation equation for BURT''' }}} {{{ 0 & 0 & 0 & 1\end{array}\right]}}} The forward kinematics of BURT are used to determine the end tip location and orientation. These transformations are generated using the parameters in Table 1 and the matrix in Equation 1. The forward kinematics are determined for any frame on the robot by mulitplying all of the transforms up to and including the final frame. To determine the endpoint location and orientation use the following equation: {{{ #!div class="center" align="center" {{{ #!latex^{0}T_{3}=^{0}T_{1}^{1}T_{2}^{2}T_{3}^{3}\$ }}} '''Equation 3: Tool end tip position and orientation equation for BURT''' }}} === Joint Ranges === | 2019-08-25 11:35:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5068121552467346, "perplexity": 1946.9139531322814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027323328.16/warc/CC-MAIN-20190825105643-20190825131643-00185.warc.gz"} |
http://vertexwahn.de/2021/01/03/shootingrays/ | How do we shoot a rays into our scene given a perspective matrix?
Assume that our virtual film plane is $800$ by $600$ pixel in size. This means that raster coordinates range from $[0,800] \times [0,600]$. The positive x-axis goes form left to right and the positive y-axis goes from top to bottom. The top left corner has the raster coordinates $(0,0)$ and the right bottom has the raster coordinates $(800,600)$. The virtual camera is located at $(0,0,0)$ and looks at $(0,0,100)$. The up vector of the camera is $(0,1,0)$. The horizontal FOV is $30°$. Let the near clip plane distance be $100$ and far clip distance $500$.
We want to compute for each pair of raster coordinates a corresponding ray. The raster coordinate $(400,300)$ (center of virtual film plane) should give us a ray in world space with origin $(0,0,0)$ and direction $(0,0,1)$.
If we change the position and look at point of the camera an keep the other parameters we come up with the following simple test cases:
Test Case Index Camera Position Camera Look At FOV near clip plane distance far clip plane distance Film Plane Size in Pixels (width, height) Raster position Expected ray origin Expected center ray direction
0 $(0, 0, 0)$ $(0,0,100)$ 30° $100$ $500$ $800 \times 600$ $(400,300)$ $(0, 0, 0)$ (0,0,1)
1 $(0,0,10)$ $(0,0,100)$ 30° $100$ $500$ $800 \times 600$ $(400,300)$ $(0,0,10)$ (0,0,1)
2 $(0, 0, 0)$ $(45,0,45)$ 30° $100$ $500$ $800 \times 600$ $(400,300)$ $(0, 0, 0)$ $(\sqrt{0.5}, 0, \sqrt{0.5})$
3 $(0,0,0)$ $(100,0,0)$ 30° $100$ $500$ $800 \times 600$ $(400,300)$ $(0,0,0)$ $(1,0,0)$
4 $(0,0,0)$ $(100,100,100)$ 30° $100$ $500$ $800 \times 600$ $(400,300)$ $(0,0,0)$ $(\sqrt{\frac{1}{3}}, \sqrt{\frac{1}{3}}, \sqrt{\frac{1}{3}})$
5 $(0,0,0)$ $(-100,-100,-100)$ 30° $100$ $500$ $800 \times 600$ $(400,300)$ $(0,0,0)$ $(-\sqrt{\frac{1}{3}}, -\sqrt{\frac{1}{3}}, -\sqrt{\frac{1}{3}})$
Considering test case 0 again. What happens if we choose different raster coordinates, e.g. $(0,300)$? If we change the FOV to $90°$ it becomes pretty easy. The direction vector of the ray should be $(-\sqrt{0.5}, 0, \sqrt{0.5})$. This helps us to extend our test cases:
Test Case Index Camera Position Camera Look At FOV near clip plane distance far clip plane distance Film Plane Size in Pixels (width, height) Raster position Expected ray origin Expected center ray direction
6 $(0, 0, 0)$ $(0,0,100)$ 90° $100$ $500$ $800 \times 600$ $(0,300)$ $(0, 0, 0)$ $(-\sqrt{0.5}, 0, \sqrt{0.5})$
7 $(0, 0, 0)$ $(0,0,100)$ 90° $100$ $500$ $800 \times 600$ $(600,300)$ $(0, 0, 0)$ $(\sqrt{0.5}, 0, \sqrt{0.5})$
How do we get from raster space to normalized device coordinates (NDC)? The following table lists expected NDC coordinates for given raster space coordinates:
Raster space coordinates Film Plane Size in Pixels (width, height) Expected NDC coordinates
$(0,0,0)$ $800 \times 600$ $(-1,1,0)$
$(0,300,0)$ $800 \times 600$ $(-1,0,0)$
$(0,600,0)$ $800 \times 600$ $(-1,-1,0)$
$(400,300,0)$ $800 \times 600$ $(0,0,0)$
The Matrix $M_{\textsf{RasterSpaceToNDC}}$ that transform raster space coordinates to normalized device coordinates looks like this (where $w$ and $h$ are the width and height of the film plane in pixels):
$M_{\textsf{RasterSpaceToNDC}} = T(-1, -1, 0) \cdot S(2, 2, 1) \cdot T(0,1,0) \cdot S(1,-1,1) \cdot S(\frac{1}{w},\frac{1}{h},1)$
Here is how to derive it. First we go from raster space to normalized raster space.
In the next step we flip the y axis. That means a value of $0$ is mapped to $1$ or for instance $0.3$ is mapped to $0.7$. That means $y’= h - y = 1-1y$. This flip can be expressed by a scale and translation matrix. First we do the scale with $S(1-1,1)$ and we apply the translation $T(0,1,0)$
In the last step we scale by $S(2,2,1)$ and translate by $T(-1,-1,0)$ and end up with normalized device coordinates.
Assuming that our projection matrix is $P$ and we have squared film plane a point is transformed from raster space to camera space by:
$$P^{-1} \cdot M_{\textsf{RasterSpaceToNDC}}$$
If the film plane is not a square and when considering the FOV has a horizontal field off view a point the transformation that tranform a raster space point to a 3D point in camera space looks like this ($\textsf{aspect} = \frac{w}{h}$):
$$P^{-1} \cdot S(1, \frac{1}{aspect}, 1) \cdot M_{\textsf{RasterSpaceToNDC}}$$
Note transforming from NDC to camera space happens by multiplying with $P^{-1}$. This will shift the z-coordinate form $0$ to the near clipping plane distance. If a raster space point is tranformed to the camera sapace point it can be interpreted as a direction vector. This is one way how we can shoot ray in a ray tracer. The perspective projection matrix transforms the camera frustum to clip space (e.g. $[-1,1] \times [-1,1] \times [0,1]$). For instance a point on the far clip plane can be after perspective projection something like $(0, 0, 500, 500)$ which is equal to $(0,0,1)$.
Lets consider how Nori shoots rays. By the way I did some small modifications to the source code to get this working for a left-handend coordinate system.
/**
* Translation and scaling to shift the clip coordinates into the
* range from zero to one. Also takes the aspect ratio into account.
*/
m_sampleToCamera = Transform(
Eigen::DiagonalMatrix<float, 3>(Vector3f(0.5f, -0.5f * aspect, 1.0f)) *
Eigen::Translation<float, 3>(1.0f, -1.0f/aspect, 0.0f) * perspective).inverse();
...
/* Compute the corresponding position on the
near plane (in local camera space) */
Point3f nearP = m_sampleToCamera * Point3f(
samplePosition.x() * m_invOutputSize.x(),
samplePosition.y() * m_invOutputSize.y(), 0.0f);
# Futher testing
The table shown above can be turned into tests. Also the image that shows the ray directions can be used within an automated test.
Another idea to test ray shooting can look like this:
TEST_F(Sensor3fTest, RayDirections2) {
// Arrange
float angle = 30.f; | 2021-03-06 23:51:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38983917236328125, "perplexity": 1221.0257922373614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178375529.62/warc/CC-MAIN-20210306223236-20210307013236-00248.warc.gz"} |
https://stats.stackexchange.com/questions/432363/generalized-linear-logit-mixed-effects-model-with-the-random-crossed-effects | # Generalized linear (logit) mixed-effects model with the random (crossed) effects drawn from a bivariate normal distribution
I am trying to implement a generalized mixed-effects model specified as:
Dependent variable $$y = \log(\frac{p}{1 - p})$$ where $$p$$ is a quantity measured for a pair of individuals ($$i$$ and $$j$$).
$$E(y_{ij} \mid X_{ij}, a_i, b_j) = \beta X_{ij} + a_i + b_j,$$
where $$X_{ij}$$ represents my covariates (predictor or independent variables) for individuals $$i$$ and $$j$$, $$\beta$$ represents my fixed effects regression coefficients, and $$a_i$$ and $$b_j$$ represent the random effects (crossed effects) for the two individuals involved (every data point represents some feature values or measurements for a pair of individuals and we want to control for the idiosyncratic contributions of the individuals involved in a pair by conditioning on these individuals via the random crossed effects).
$$(a_i, b_i) \sim MVN(0, \Sigma), \quad \Sigma = \ \left[ {\begin{array}{cc} \sigma_a^2 & \rho\sigma_a\sigma_b \\ \rho\sigma_a\sigma_b & \sigma_b^2 \\ \end{array} } \right].$$
This model is fit using the Laplace approximation, and the authors of the paper whose model I am following use the lme4 package in R (I am open to both R and a way to do this in Python).
I am unable to understand how to implement these crossed effects for the individuals with the generalized mixed-effects model. I have my $$y$$ and $$X$$ in place and have the IDs for each pair and each individual making up a particular pair. I feel like I need to use the individual IDs for my random effects here, but I am not sure how. Most default examples seem to add a random intercept or random slopes corresponding to the covariates specified in $$X$$ but this does not seem to be the case here.
If anyone has any pointers about how to implement the above-specified model using lmer, it would help a lot. I hope I specified the model clearly enough and would be happy to provide an application scenario or point to the paper which describes and uses this model if that helps.
• This is a very good question. However, I'm not sure if it's necessarily a question for StackExchange, unfortunately. The question does seem to center around programming. You don't appear to have any statistical issues you need clarified. It may be worth asking the authors of the package lme4. – Weiwen Ng Oct 22 '19 at 19:45
• Thanks @WeiwenNg! I might try that route too. I think this is in keeping with the many conceptual questions asked too since I do need the formulation for the problem and any further pointers relating to how to implement it in lme4 using Laplace approximation would be an additional help. – Pranav Goel Oct 22 '19 at 22:30 | 2021-01-25 11:30:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 15, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43131059408187866, "perplexity": 403.654989760882}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703565541.79/warc/CC-MAIN-20210125092143-20210125122143-00795.warc.gz"} |
https://roboted.wordpress.com/2012/10/09/online-latex-equation-editor/ | ## Online LaTeX equation editor
Richard Hayes sent me this link to a nifty online LaTeX equation editor:
http://www.codecogs.com/latex/eqneditor.php
The best way to include maths in a wordpress.com blog (and lots of other places, such as Wikipedia) is using LaTeX, so this online editor should come in handy for anyone getting to grips with it. It shows you the fully formatted equation in real-time as you’re writing it. Also, it has buttons to add in a wide variety of mathematical symbols etc, which is useful for anyone not that familiar with LaTeX’s many commands and symbols.
I gave it a quick try myself and it seems very convenient. These are the equations I wrote in it – anyone recognise them?
$F(j\omega)=\int_{-\infty }^{\infty}{f(t)e^{-j\omega t}\textup{d}t}$
$f(t)=\frac{1}{2\pi}\int_{-\infty }^{\infty}{F(j\omega)e^{j\omega t}\textup{d}\omega}$
I added those equations into my WordPress post using the following code (typed / pasted into the main text of the post):
$F(j\omega)=\int_{-\infty }^{\infty}{f(t)e^{-j\omega t}\textup{d}t} &s=2$
$f(t)=\frac{1}{2\pi}\int_{-\infty }^{\infty} {F(j\omega)e^{j\omega t}\textup{d}\omega} &s=2$
Also, Eamos Fennell sent me a link to another similar site which seems pretty good:
http://www.sciweavers.org/free-online-latex-equation-editor | 2019-06-17 15:45:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7376708388328552, "perplexity": 1172.6319543177196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998509.15/warc/CC-MAIN-20190617143050-20190617165050-00424.warc.gz"} |
http://mathhelpforum.com/calculus/38729-basic-integration-print.html | # Basic Integration
• May 18th 2008, 07:32 AM
Gusbob
Basic Integration
I've been studying for a integration and volumes test. I can do most of them but these few are giving me headaches. You don't have to solve them for me, just tell me what I need to do. Any help is appreciated.
1)
$\int \sqrt{16+x^2} \,dx$
I know I need to substitute $x = 4\,tan\,\theta \Rightarrow dx = 4\,sec^2\theta \,d\theta$
$\int 4sec\theta \bullet 4sec^2\theta \,d\theta$
= $\int 16 sec^3\theta\,d\theta$
I don't know how to go on from here.
The answer is $\frac{1}{2}x\sqrt{16+x^2} + 8 ln (x+\sqrt{16+x^2})$
2)
$\int \frac{1}{x^2\sqrt{x-1}}dx$
I've tried letting $u= \sqrt{x-1}$, but got nowhere from that.
3)
$\int_0^{\frac{\pi}{2}} \sqrt{1+sin\,2x}\, dx$
no clue
• May 18th 2008, 07:41 AM
Isomorphism
Quote:
Originally Posted by Gusbob
2) $\int \frac{1}{x^2\sqrt{x-1}}dx$
I've tried letting $u= \sqrt{x-1}$, but got nowhere from that.
Try $x = sec^2 \theta$
Quote:
Originally Posted by Gusbob
3) $\int_0^{\frac{\pi}{2}} \sqrt{1+sin\,2x}\, dx$
no clue
Did you observe $\sqrt{1+sin\,2x} = |\cos x + \sin x|$ ? :D
• May 18th 2008, 07:44 AM
PaulRS
1.
Let $u=\sin(x)$
$\int \frac{dx}{\cos^3(x)}=\int \frac{\cos(x)}{\cos^4(x)}dx=\int \frac{du}{(1-u^2)^2}$
$\frac{1}{(1-u^2)}=\frac{1}{2}\cdot{\left(\frac{1}{1-u}+\frac{1}{1+u}\right)}$ (1)
square (1) and apply (1) again with what you get
2.
After that sub you get $2\int\frac{du}{(u^2+1)^2}$
Let: $z=\arctan(u)$ and remember that $\frac{1}{\tan^2(z)+1}=\cos^2(z)$
• May 18th 2008, 07:54 AM
Gusbob
Quote:
Did you observe $\sqrt{1+sin\,2x} = |\cos x + \sin x|$ ? :D
How does that work?
• May 18th 2008, 07:56 AM
Moo
Hello,
Quote:
Originally Posted by Gusbob
How does that work?
$1=\cos^2x+\sin^2x$
$\sin 2x=2 \cos x \sin x$
$\implies 1+\sin 2x=\cos^2x+\sin^2x+2 \cos x \sin x=(\cos x+\sin x)^2$
• May 18th 2008, 04:48 PM
Mathstud28
Quote:
Originally Posted by Gusbob
I've been studying for a integration and volumes test. I can do most of them but these few are giving me headaches. You don't have to solve them for me, just tell me what I need to do. Any help is appreciated.
1)
$\int \sqrt{16+x^2} \,dx$
I know I need to substitute $x = 4\,tan\,\theta \Rightarrow dx = 4\,sec^2\theta \,d\theta$
$\int 4sec\theta \bullet 4sec^2\theta \,d\theta$
= $\int 16 sec^3\theta\,d\theta$
I don't know how to go on from here.
The answer is $\frac{1}{2}x\sqrt{16+x^2} + 8 ln (x+\sqrt{16+x^2})$
2)
$\int \frac{1}{x^2\sqrt{x-1}}dx$
I've tried letting $u= \sqrt{x-1}$, but got nowhere from that.
3)
$\int_0^{\frac{\pi}{2}} \sqrt{1+sin\,2x}\, dx$
no clue
For the first one by trig sub we get
Let $x=4\tan(\theta)\Rightarrow{\theta=arctan\bigg(\fra c{x}{4}\bigg)}$
This means that $dx=16\sec^2(\theta)$
Imputting this we get
$\int\sqrt{16+16\tan^2(\theta)}\sec^2(\theta)d\thet a=4\int\sec^3(\theta)d\theta$
Now to integrate that thing
Ok so we do this
remember that $\sec^3(x)=(1+\tan^2(x))\sec(x)=\sec(x)+\sec(x)\tan ^2(x)$
So subbing this in remember waht this is actually equal to we get
$\int\sec(\theta)+\sec(\theta)\tan^2(\theta)d\theta =\ln|\sec(\theta)+\tan(\theta)|+\int\sec(\theta)\t an^2(\theta)d\theta$
Now evaluating the integral within their we let
$dv=\sec(\theta)\tan(\theta)d\theta$
and let $u=\tan(\theta)$
So $v=sec(\theta)$
and $du=\sec^2(\theta)d\theta$
Applying parts we get
$\int\sec(\theta)\tan^2(\theta)d\theta=\sec(\theta) \tan(\theta)-\int\sec^3(\theta)d\theta$
Putting this back into the integral we get
$\int\sec(\theta)+\sec(\theta)\tan^2(\theta)d\theta =\int\sec^3(\theta)d\theta=$ $\ln|\sec(\theta)+\tan(\theta)|+\sec(\theta)\tan(\t heta)-\int\sec^3(\theta)d\theta$ $\Rightarrow{2\int\sec^3(\theta)d\theta=\ln|\sec(\t heta)+\tan(\theta)|+\sec(\theta)\tan(\theta)}$
Dividing both sides by two we get
$16\int\sec^3(\theta)d\theta=2\bigg[\ln|\sec(\theta)+\tan(\theta)|+\sec(\theta)\tan(\t heta)\bigg]+C$
Now remembering that $\theta=artan\bigg(\frac{x}{4}\bigg)$
we need to imput back in...so seeing that
$\sec\bigg(arctan\bigg(\frac{x}{4}\bigg)\bigg)=\fra c{\sqrt{x^2+16}}{4}$
and that $\tan\bigg(arctan\bigg(\frac{x}{4}\bigg)\bigg)=\fra c{x}{4}$
wee see that
$\int\sqrt{x^2+16}dx=16\bigg[\frac{1}{2}\bigg[\ln\bigg|\frac{\sqrt{x^2+16}}{4}+\frac{x}{4}\bigg| +\frac{\sqrt{x^2+16}}{4}\cdot\frac{x}{4}\bigg]+C\bigg]=$ $8\bigg[\ln\bigg|\frac{\sqrt{x^2+16}}{4}+\frac{x}{4}\bigg| +\frac{x\sqrt{x^2+16}}{16}\bigg]+C\bigg]$ | 2016-06-26 21:47:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 50, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9519965052604675, "perplexity": 1525.625058367012}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.14/warc/CC-MAIN-20160624154955-00168-ip-10-164-35-72.ec2.internal.warc.gz"} |
http://blog.cambridgecoaching.com/blog/bid/354169/SAT-Tutor-Questioning-Your-Answers | And question everything.
One of the most helpful resources for a difficult and confusing standardized test question is the set of answer choices that are provided to you (except for grid-in math problems! Sorry folks!) In a math problem, if you can quickly set up and solve an equation, you’re always better off, but sometimes these test makers succeed in being quite tricky. But particularly in a confusing, or seemingly ambiguous, reading or writing question, the answers can really come in handy.
What do you do once you’ve elicited the difference between difficult responses? The same thing you should be doing with every problem, only in a more precise manner: you now check each one back against the original sentence. Noticing the comma and semi-colon, for example, should lead you to check for a run-on sentence. | 2021-10-19 02:38:19 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8025065660476685, "perplexity": 1100.2061592078512}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585231.62/warc/CC-MAIN-20211019012407-20211019042407-00543.warc.gz"} |
http://www.russinoff.com/libman/text/node61.html | # 12 Multiplication
While the RTL implementation of integer multiplication is more complex than that of integer addition, the extended problem of floating-point multiplication does not present any significant difficulties that have not already been addressed in earlier chapters. The focus of this chapter, therefore, is the multiplication of integers.
Let and be natural numbers, which we shall call the multiplicand and the multiplier, respectively. A natural approach to the computation of begins with the bit-wise decomposition of the multiplier,
where and for , . The product may then be computed as
Thus, the computation is reduced to the summation of at most nonzero terms, called partial products, each of which is derived by an appropriate shift of . In practice, this summation is performed with the use of a tree of compressors similar to the 3:2 compressor shown in Figure 11.4. It is clear that two such compressors may be combined to form a 4:2 compressor, and that 4:2 compressors may be used to reduce a sum of terms to in constant time. Consequently, the hardware needed to compress terms to two grows linearly with , and the required time grows logarithmically.
Naturally, any reduction in the number of partial products generated in a multiplication would tend to reduce the latency of the operation. Booth encoding [BOO51] is based on the observation that if we allow , along with 0 and 1, as a value of the coefficient in the above expression for , then the representation is no longer unique. Thus, we may seek to minimize the number of nonzero coefficients and consequently the number of partial products in the expression for , at the expense of introducing a negation along with the shift of in the case .
In the following section, we shall present a refinement of Booth's original algorithm known as the radix-4 modified Booth algorithm [KOR93], which limits the number of partial products to half the multiplier width. Each of the three subsequent sections contains a variant of this algorithm.
Subsections
David Russinoff 2017-08-01 | 2018-01-17 18:26:08 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8173081874847412, "perplexity": 309.43230765755607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886952.14/warc/CC-MAIN-20180117173312-20180117193312-00631.warc.gz"} |
https://socratic.org/questions/chlorine-and-bromine-have-similar-properties-what-is-the-best-explanation-for-th | # Chlorine and Bromine have similar properties. What is the best explanation for this?
Elements in the same group have similar properties because their electron configurations are similar. In particular they have the same number of valence electrons, which are the in the outermost (highest energy) $\text{s}$ and $\text{p}$ orbitals.
In the case of the halogens, the atoms of each element has seven valence electrons, $\text{ns"^2"np"^5}$, where $\text{n}$ represents the energy level. The electron configuration of chlorine is ["Ne"]"3s"^2"3p"^5", and the electron configuration of bromine is ["Ar"]"3d"^10"4s"^2"4p"^5". | 2021-06-21 04:37:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4648955762386322, "perplexity": 535.5488356610088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488262046.80/warc/CC-MAIN-20210621025359-20210621055359-00567.warc.gz"} |
https://eprint.iacr.org/2006/265 | Some (in)sufficient conditions for secure hybrid encryption.
Javier Herranz, Dennis Hofheinz, and Eike Kiltz
Abstract
The KEM/DEM hybrid encryption paradigm combines the efficiency and large message space of secret key encryption with the advantages of public key cryptography. Due to its simplicity and flexibility, the approach has ever since gained increased popularity and has been successfully adapted in encryption standards. In hybrid public key encryption (PKE), first a key encapsulation mechanism (KEM) is used to fix a random session key that is then fed into a highly efficient data encapsulation mechanism (DEM) to encrypt the actual message. A composition theorem states that if both the KEM and the DEM have the highest level of security (i.e. security against chosen-ciphertext attacks), then so does the hybrid PKE scheme. It is not known if these strong security requirements on the KEM and DEM are also neccessary, nor if such general composition theorems exist for weaker levels of security. In this work we study neccessary and sufficient conditions on the security of the KEM and the DEM in order to guarantee a hybrid PKE scheme with a certain given level of security. More precisely, using nine different security notions for KEMs, ten for DEMs, and six for PKE schemes we completely characterize which combinations lead to a secure hybrid PKE scheme (by proving a composition theorem) and which do not (by providing counterexamples). Furthermore, as an independent result, we revisit and extend prior work on the relation among security notions for KEMs and DEMs.
Available format(s)
Category
Public-key cryptography
Publication info
Published elsewhere. Information and Computation, Volume 208, Issue 11, pp. 1243-1257, 2010
Keywords
key encapsulation mechanismdata encapsulation mechanismhybrid encryption
Contact author(s)
kiltz @ cwi nl
History
2010-11-24: revised
See all versions
Short URL
https://ia.cr/2006/265
CC BY
BibTeX
@misc{cryptoeprint:2006/265,
author = {Javier Herranz and Dennis Hofheinz and Eike Kiltz},
title = {Some (in)sufficient conditions for secure hybrid encryption.},
howpublished = {Cryptology ePrint Archive, Paper 2006/265},
year = {2006},
note = {\url{https://eprint.iacr.org/2006/265}},
url = {https://eprint.iacr.org/2006/265}
}
Note: In order to protect the privacy of readers, eprint.iacr.org does not use cookies or embedded third party content. | 2022-06-28 19:13:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2634941041469574, "perplexity": 3765.38220136566}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103573995.30/warc/CC-MAIN-20220628173131-20220628203131-00002.warc.gz"} |
https://aiuta.org/en/how-do-you-solve-3x-2-divided-by-8x4-1-the-answer-is-x-6but-how-do-you-solve-it.40677.html | Mathematics
# how do you solve 3x-2 divided by 8=x/4-1 ? the answer is x= -6 but how do you solve it?
#### TamaraTewksbury170
4 years ago
$\frac{3x-2}{8}= \frac{x}{4-1}$
$\frac{3x-2}{8}= \frac{x}{3}$
Cross, Multiply :
$3(3x-2) = 8x$
$9x-6 = 8x$
$9x - 8x - 6 = 0$
$9x - 8x = 6$
$\boxed{x = 6}$
Note :
Remember this cross multiply formula :
$\frac{a}{b} = \frac{c}{d} \,\,\,\,\,\,\,\,\, \to \,\,\,\,\,\,\,\,\,(a \times d) = (b \times c)$
#### BettieZumbrunnen
4 years ago
(3x - 2) / (8) = (x/4) - 1
Multiply each side by 8 :
3x - 2 = 2x - 8 | 2018-08-19 09:37:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7568663358688354, "perplexity": 6634.512117079609}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215075.58/warc/CC-MAIN-20180819090604-20180819110604-00222.warc.gz"} |
http://koreascience.or.kr/article/JAKO201717234703028.page | # INFINITELY MANY SOLUTIONS FOR A CLASS OF THE ELLIPTIC SYSTEMS WITH EVEN FUNCTIONALS
• Choi, Q-Heung (Department of Mathematics Education Inha University) ;
• Jung, Tacksun (Department of Mathematics Kunsan National University)
• Published : 2017.05.01
• 67 24
#### Abstract
We get a result that shows the existence of infinitely many solutions for a class of the elliptic systems involving subcritical Sobolev exponents nonlinear terms with even functionals on the bounded domain with smooth boundary. We get this result by variational method and critical point theory induced from invariant subspaces and invariant functional.
#### Keywords
elliptic system;deformation lemma;$(P.S.)^*$ condition;subcritical Sobolev exponents;variational method;critical point theory;invariant functional;invariant subspaces
#### Acknowledgement
Supported by : Inha University
#### References
1. K. C. Chang, Infinite dimensional Morse theory and multiple solution problems, Birkhauser, 1993.
2. M. Ghergu and V. D. Radulescu, Singular Elliptic Problems Bifurcation and Asymptotic Analysis, Clarendon Press Oxford, 2008.
3. P. H. Rabinowitz, Minimax methods in critical point theory with applications to differential equations, CBMS Regional Conference Series in Mathematics, 65. Published for the Conference Board of the Mathematical Sciences, Washington, DC; by the American Mathematical Society, Providence, RI, 1986. | 2020-02-27 19:02:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4119699001312256, "perplexity": 2353.4151935400937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146744.74/warc/CC-MAIN-20200227160355-20200227190355-00556.warc.gz"} |
https://www.jmdancemiami.com/lds-movies-harl/caa31b-which-of-the-following-is-identity-element | Working For Loganair, Tony Grossi Twitter, Lost Sector Mercury, Winona State University Football Coaches, Shoes With Casual Pants, Mount Everest Restaurant Toronto Eglinton, Nmfc Commodity Codes, " />
For example, if and the ring. What are the release dates for The Wonder Pets - 2006 Save the Ladybug? For example, 0 is the identity element under addition for the real numbers, since if a is any real number, a + 0 = 0 … See the answer. What is the identity of the element? Harini Venkat answered May 02, 2020. Identity function, which serves as the identity element of the set of functions whose domains and codomains are of a given set, with respect to the operation of function composition. Let e 1 ∈ S be a left identity element and e 2 ∈ S be a right identity element. What is the identity of the element with the following electron configuration: 1s22s22p63s23p1. Examples. An identity matrix is a square matrix in which all the elements of principal diagonals are one, and all other elements are zeros. over here on EduRev! The term identity element is often shortened to identity (as in the case of additive identity and multiplicative identity), when there is no possibility of confusion, but the identity implicitly depends on the binary operation it is associated with. Chemistry Q&A Library What is the identity of the element with the following electron configuration: 1s22s22p63s23p1. s ∈ S; s \in S; s ∈ S; an element that is both a left and right identity is called a two-sided identity, or identity element, or identity for short. Such an element is unique (see below), and thus one speaks of the identity element. Definition of identity element. Group 2, Period 3, and Block s. To determine this, answer the following questions: (a) (2 pts) Given that XO3 has 24 valence electrons, how many valence electrons are there for X? Brand identity is everything that a firm wants a brand to be in the minds of customers. D) all of the above E) B and C. Determine the identity of the element with the following electron configuration. (a) 1/4 (d) 1/60 (b) 1/9 (e) 1/90 (c) 1/45 9. Question. What part of a right angle is an angular degree? Hi. Get an answer to your question “Which of the following determines the identity of an element?A) atomic number B) mass number C) atomic mass D) overall charge ...” in Chemistry if there is no answer or all answers are wrong, use a search bar and try to find the answer among similar questions. The Questions and R is commutative because R … Endnotes This discussion on Which of the following is the identity element under addition?a)1b)-1c)0d)None of theseCorrect answer is option 'C'. is done on EduRev Study Group by Class 8 Students. Here we find that adding a 0 to the whole number 19 and 1345 does not change the value of the whole number. For every whole number a, a + 0 = 0 + a = a. The most numerous ethnicity in the United States is. EduRev is a knowledge-sharing community that depends on everyone being able to pitch in when they know something. Identify each of the following as a representative element or transition element. This problem has been solved! This problem has been solved! Copyright © 2020 Multiply Media, LLC. Which of the following is not an element of cultural diversity? An identity matrix is a square matrix in which all the elements of principal diagonals are one, and all other elements are zeros. All Chemistry Practice Problems Electron Configuration Practice Problems. The number of protons is the element's atomic number, and is unique to each element. Carbon (C) Representative Element. are solved by group of students and teacher of Class 8, which is also the largest student s*f = s s∗f = s for any. Personal identity is the concept you develop about yourself that evolves over the course of your life. : an identity element (such as 1 in the group of rational numbers without 0) that in a given mathematical system leaves unchanged any element by which it is multiplied First Known Use of multiplicative identity 1958, in the meaning defined above More from Merriam-Webster on multiplicative identity “Zero” is called the identity element, (also known as additive identity) If we add any number with zero, the resulting number will be the same number. Carbon (C) Representative Element. 7. Answer is C , because if we add any number to 0 it will give the sum as the same number . Enter the symbol of the element. Can you explain this answer? How many candles are on a Hanukkah menorah? R= R, it is understood that we use the addition and multiplication of real numbers. Group 2, Period 3, and Block s. How To Comply: A Four-Step Process 5. S = { a, b, c, d }, S = \ {a,b,c,d\}, S = {a,b,c,d}, and consider the binary operation defined by the following table: See: Identity Zero Such an element is unique (see below), and thus one speaks of the identity element. This class is designed for use only in rare cases wherein reference-equality semantics are required. Identify each of the following as a representative element or transition element. This should also be simple, with … You can study other questions, MCQs, videos and tests for Class 8 on EduRev and even discuss your questions like The correct option in C and your right. The elements … A number added to 0 is the number itself because 0dosent have any value. The first part of establishing a brand identity is determining … These elements relate to what makes us who we are as humans. MEDIUM. n. The element of a set of numbers that when combined with another number in a particular operation leaves that number unchanged. Definition of identity element : an element (such as 0 in the set of all integers under addition or 1 in the set of positive integers under multiplication) that leaves any element of the set to which it belongs unchanged when combined with it by a specified operation Examples of identity element in a Sentence 0, zero, is defined as the identity element for addition and subtraction. 1. MEDIUM. What determines the identity of an element? The identity element is the constant function 1. (a) X^{2+}, a cation that has 36 electrons (b) X^{-}, an anion that has 36 electrons All Chemistry Practice Problems Electron Configuration Practice Problems. It is denoted by the notation “I n” or simply “I”. Multiplicative identity definition, an identity that when used to multiply a given element in a specified set leaves that element unchanged, as the number 1 for the real-number system. Element X (8 protons, 8 electrons, 8 neutrons) Element Y (35 protons, 36 electrons, 46 neutrons) Element Z (12 protons, 10 electrons, 13 neutrons) Problem : What is the identity of element Q if the ion Q 2+ contains 10 electrons? Apart from being the largest Class 8 community, EduRev has the largest solved Question. Many elements comprise a person's personal identity. By continuing, I agree that I am at least 13 years old and have read and : an identity element (such as 1 in the group of rational numbers without 0) that in a given mathematical system leaves unchanged any element by which it is multiplied First Known Use of multiplicative identity 1958, in the meaning defined above More from Merriam-Webster on multiplicative identity The simplest way to use the periodic table to identify an element is by looking for the element’s … If any matrix is multiplied with the identity matrix, the result will be given matrix. To determine this, answer the following questions: (a) (2 pts) Given that XO3 has 24 valence electrons, how many valence electrons are there for X? Rare cases wherein reference-equality semantics are required you develop about yourself that over. If it leaves every element unchanged wait for a Christmas party definition make... Solely by the notation “ I n ” or simply “ I ”... 13 years old and have read and agree to the the value the! Us who we are as humans ) 1/45 9 a Library what is the multiplicative identity element such element! Property states that when a number is added to zero it will give the sum as the of... Is designed for use only in rare cases wherein reference-equality semantics are required rare. ) 1/60 ( b ) 1/9 ( e ) 1/90 ( c 1/45... Sum as the identity of an atom is the identity element elements of principal diagonals are,... With another number in both places: 1s22s22p63s23p1 r= R, it is denoted the. Which of the element X in the set of whole numbers type with an identical number of protons the. A 0 to the … determine the group, period, and thus one speaks the!, so I will use 926 for my number I agree that I am at 13! The same number, because if we add any number to 0 will! We can see that this set under the operation $does have identity. One, and all other elements are zeros by looking for the Wonder Pets - 2006 Save the Ladybug a. A whole number 19 and 1345 does not change the value of the following?. Definition will make more sense once we ’ ve seen a few examples has a unique number! A knowledge-sharing community that depends on everyone being able to pitch in they! Heavier than 5 kilos of steel rod your favorite number in a particular element is False to. Element has a unique atomic number, and all other elements are zeros ∈ S be left... Of protons in its nucleus 0 + a = a is defined as the same number real! ) 0 ( e ) 1/90 ( c ) ethnicity d ) e. Element for addition in the nucleus of an element is an angular degree matrix. Footprints on the moon last - 2006 Save the Ladybug experiences that a brand, ideas,,... Of the element with the following is not an element is determined solely by the notation “ I.... Watching electron configuration an atom determines its identity as a particular element, EduRev has the largest student community Class., ideas, emotions, qualities and experiences that a brand seeks to represent atom is identity..., a + 0 = 0 + a = a it is understood that we use the addition and of! Which of the following is the identity of the element X 0 ( ). Identity element element synonyms, identity element for multiplication is 1 as 8 * 1=8 the of... You develop about yourself that evolves over the course of your life ) 1/4 ( d ) e! Proton determines the identity of an element is determined solely by the notation I! A whole number a, a + 0 = 0 + a =.. Simply “ I n ” or simply “ I n ” or simply “ I n or! Square matrix in which all the elements of principal diagonals are one, all. Element X in the set of W. identity matrix, the result will given. By which of the following is identity element for the Wonder Pets - 2006 Save the Ladybug the Ladybug for addition and multiplication of numbers... Allows products and services to stand out in a particular element the addition and subtraction that adding 0... The simplest way to use the addition and subtraction fundamental unit of an atom determines its as. If we add any number to 0 is called the identity element for addition and subtraction Comply with following... That we use the addition and multiplication of real numbers S be a left identity element for is. To 0 is the identity of the element with the Red Flags Rule 3 10 6p.... Rare cases wherein reference-equality semantics are required structures such as groups and rings 0dosent have any.. Set of W. identity matrix, the result will be given matrix another number in both places identity! Pitch in when they know something use only in rare cases wherein reference-equality semantics are required that combined. A = a at least 13 years old and have read and agree the. Xe ] 6s 2 4f 14 5d 10 6p 4 concept you about... Of opening remarks for a Christmas party dictionary definition of identity element way to use the periodic to! D ) 10 ( b ) 1/9 ( e ) 100 ( )... Moon last representative element or transition element than 5 kilos of steel which of the following is identity element sentence... Configuration concept Videos X in the set of whole numbers left identity element translation, English dictionary of! = a is commutative because R … Define identity element in the set of numbers. Adding a 0 to the whole number a, a + 0 = 0 + a = a not. As groups and rings ∈ S be a left identity element of your life 8 community, has... Here we find that adding a 0 to the whole number 19 and 1345 does not the! The property states that when combined with another number in a particular element below ), and block of atom... Nucleus of an atom determines its identity as a particular operation leaves that number unchanged any. By group of Students and teacher of Class 8 to identify an element is defined as the atomic! Period, and all other elements are zeros a ) language b ) religion c -1... Leaves every element unchanged a set of numbers that when a number is added to zero it will the... As groups and rings leaves every element unchanged experiences that a brand seeks to represent 13 years and! Is the element with the following electron configurations: ( Ne ) 3s2 let e 1 ∈ be! So I will use 926 for my number solved by group of Students teacher!, complex numbers and … which of the following is heavier than 5 kilos of steel rod the electron. The Wonder Pets - 2006 Save the Ladybug & a Library what is identity. ’ S … 1 elements relate to what makes us who we are as.. The Ladybug square matrix in which all the elements of principal diagonals are one, and unique. Its nucleus ve seen a few examples unique ( see below ), and is unique to each.! Services to stand out in a crowded market 0 = 0 + a =.... And e 2 ∈ S be a left identity element structures such as and. W. this property is not an element is unique ( see below ), and is unique see! Identify an element is unique to each element because 0dosent have any value 2006 Save the Ladybug, is as... Here we find that adding a 0 to the whole number a, a 0! A particular element X in the nuclei of its atoms Overview 2. who Must Comply with Red! Are one, and block of an atom determines its identity as a representative element or element. In chemistry, an element is by looking for the Wonder Pets - 2006 Save the Ladybug S ….. Largest Class 8 we can see that this set under the operation$ does have an identity matrix is with. 2 4f 14 5d 10 6p 4: ( Ne ) 3s2 or simply I... Ideas, emotions, qualities and experiences that a brand seeks to represent group! We can see that this set under the operation \$ does have an identity matrix is with! A set of whole numbers the identity of the element of cultural diversity an angular degree R commutative. Leaves that number unchanged protons in the United states is zero, is as! That we use the periodic table to identify an element is defined as a representative element transition! Is denoted by the notation “ I n ” or simply “ I ” and compelling allows... Pitch in when they know something read and agree to the whole number a, +... And teacher of Class 8 matrix in which all the elements of principal diagonals are one, thus. Not an element is unique ( see below ), and all other elements are zeros protons in the of! It leaves every element unchanged atom determines its identity as a constituent of matter the. Down this number sentence, but use your which of the following is identity element number in a particular.. It will give the same number pronunciation, identity element is by looking for the element with the following not... Be given matrix … determine the identity of the identity element 19 and 1345 does not the. Addition and multiplication of real numbers any number to 0 is called the identity of element X is the! Another number in a particular element the footprints on the moon last Class 8 to... A number is added to zero it will give the same number group, period, and block an... Also the largest Class 8 community, EduRev has the largest student community of 8... R is commutative because R … Define identity element in the set of numbers... That we use the periodic table to identify an element is unique ( see ). Have read and agree to the Pets - 2006 Save the Ladybug for addition the. Numerous ethnicity in the nucleus of an element is defined as the identity of the following configurations! | 2021-06-21 13:51:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6346221566200256, "perplexity": 610.7443785997353}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488273983.63/warc/CC-MAIN-20210621120456-20210621150456-00374.warc.gz"} |
https://www.physicsforums.com/threads/limits-proving-their-equality.166779/ | # Limits - proving their equality
1. Apr 21, 2007
### theriel
Hello!
All of us know, that the constant e is defined as:
e = lim[n->oo] (1+ 1/n)^n
I'm proving the derivative (ln(x))'=1/x and at some point I have to show that the limit:
lim[n->0] (1 + n)^(1/n)
is equal to the one previously mentioned, the definition of e. Probably I will have to think separately about n->+oo and n->-oo, but... There is still one, the most obvious question:
How can I do that? What should I start with? Or maybe it is impossible to be proved algebraicaly and all I can do is just input that limit into the calculator and see the limit?
Thank you very much for your help!
Greetings,
Theriel
2. Apr 21, 2007
### Gib Z
Actually you are making it much harder than it really is, thank god :)
In the definition you posted, the limit is an n tends to infinity. The 1/n here therefore tends to 0. And the exponent, tends to infinity, obviously.
In the Other limit, it is just showing the same thing. As n goes to zero, the term after the 1 still approaches the same thing. And the exponent also approaches the same thing.
In general this is a neat trick, if we have a limit of n tending to infinity, replace n with 1/n, and make the n approach zero and they are equal.
3. Apr 21, 2007
### HallsofIvy
Staff Emeritus
Or, if it makes more sense to you, replace n by 1/x. As n goes to 0, 1/x goes to infinity.
4. Apr 21, 2007
### theriel
It looks too easy as for my maths professor so I am almost sure that there was something I forgot about ;-]. Nevertheless - once again, thank you for your advice and time!
5. Apr 25, 2007
### theriel
Heh, yeah, I KNEW that there would be a problem... so:
During our classes we defined "e" as:
e = lim[n->+oo] (1+ 1/n)^n
Now I have to prove (algebraically) that the limit for n tending to -oo (negative infinity) is also equal exactly e. And it is not enough to state simply that it works or I saw a graph etc... it must be a mathematical proof.
I would highly appreciate your any ideas ;-].
Thank you for help!
6. Apr 25, 2007
### Dick
Take the log of the expression and prove the limit is 1 using l'Hopital.
7. Apr 25, 2007
### theriel
I am not very familiar with l'Hopital theorem, so I am basing on wiki and planetmath (hopefully properly understood)
Here, according to your hint, I must prove:
lim[n->-oo] ln((1+ 1/n)^n) = 1
so I have (n->-oo):
ln ( lim(n+1)/lim(n) ) * lim(n) = 1
here, using the l'Hopital theorem (limit of f/g = limit of f'/g'):
ln (1) * lim (n) = 1
0 * (-infinity) = 1
OK, it does not look easier ;-]. Zero times infinity is undefined... Was that meant to be done this way or you thought about some other approach?
8. Apr 25, 2007
### Dick
You can only use l'Hopital for 0/0 and infinity/infinity type cases. Instead of n*ln(1+1/n) - write it as ln(1+1/n)/(1/n). Now its 0/0. Now use l'Hopital.
9. Apr 25, 2007
### theriel
So we have a problem #-/ When we have:
ln(1+1/n)/(1/n)
we have to differentiate ln(1+1/n). So we have to use the formula ln(x)' = 1/x. And this formula is to be proved ;-D.
Just to point it out -> my MAIN task was to prove (e^x)' = e^x. I did it using ln(x)'. Then I had to prove ln(x)' (because during our classes we proved it using e^x). I did it, however presenting e^x in different form (see previous posts). Now, I am just to prove it for -oo...
That is why I cannot differentiate that... unless I made some logical mistake...
10. Apr 25, 2007
### Dick
You don't need the limit n->-infinity to prove ln'(x)=1/x. You said you'd already proved that. So I was hoping you'd be ok with just using the derivative formula now.
11. Apr 25, 2007
### theriel
No no no.... maybe I would post you my whole problem, it will be easier to find some solution? By the way, thank you for your help and time, sincerely....
1.I proved e^x using ln(x)', chain rule, (x)'
2.I had to prove ln(x)', the final step:
1/x * ln (lim [n->0] (1+n) ^ (1/n) ) ---by def. of e --- 1/x * ln(e) = 1/x
hence, ln(x)' = 1/x
To make the proof ln(x) complete I was to prove that lim [n->0] (1+n) ^ (1/n) is equal to e.
During our classes we defined "e" as:
e = lim[n->+oo] (1+ 1/n)^n
Changing the first formula into:
e = lim[n->oo] (1+ 1/n)^n
wasn't a big problem. However, there was one difference between my definition and the one we learned about. So I am to prove that the limit holds for n->-oo ;-].
I hope I explained my problem clearly... I am proving a few things in a chain and that is why I cannot use any of them at any steps (hence no derivative of e^x, ln(x) and so on)
Last edited: Apr 25, 2007
12. Apr 25, 2007
### Dick
You are welcome for the help! But I'm still not quite catching at what point you need to prove the n->-infinity part. What is there about your proof of the derivative property that makes you need both signs?
13. Apr 25, 2007
### theriel
My proof assumes that e=lim[n->oo] (1+ 1/n)^n (hence, for n tending to both +,- infinity) And, theoretically, I only know that e=lim[n->+oo] (1+ 1/n)^n . And that makes the problem ;-].
14. Apr 25, 2007
### VietDao29
Errr...
So, what you'd like to ask is to use:
$$\lim_{x \rightarrow \fbox{+ \infty}} \left(1 + \frac{1}{x} \right) ^ x = e$$
to prove that:
$$\lim_{x \rightarrow \fbox{- \infty}} \left(1 + \frac{1}{x} \right) ^ x = e$$, right?
Ok, so let t = -x, so $$x \rightarrow - \infty \Rightarrow t \rightarrow + \infty$$, the whole expression be comes:
$$\lim_{x \rightarrow \fbox{- \infty}} \left(1 + \frac{1}{x} \right) ^ x = \lim_{t \rightarrow \fbox{+ \infty}} \left(1 - \frac{1}{t} \right) ^ {-t} = \lim_{t \rightarrow \fbox{+ \infty}} \frac{1}{\left(1 - \frac{1}{t} \right) ^ {t}}$$
$$= \lim_{t \rightarrow + \infty} \frac{\left(1 + \frac{1}{t} \right) ^ {t}}{\left(1 - \frac{1}{t} \right) ^ {t} \left(1 + \frac{1}{t} \right) ^ {t}}$$
$$= \lim_{t \rightarrow + \infty} \left( \frac{1 + \frac{1}{t}}{1 - \frac{1}{t}} \right) ^ t \frac{1}{\left(1 + \frac{1}{t} \right) ^ {t}}$$
$$= \frac{1}{e} \lim_{t \rightarrow + \infty} \left( \frac{1 - \frac{1}{t} + \frac{2}{t}}{1 - \frac{1}{t}} \right) ^ t$$
$$= \frac{1}{e} \lim_{t \rightarrow + \infty} \left( 1 + \frac{\frac{2}{t}}{1 - \frac{1}{t}} \right) ^ t$$
$$= \frac{1}{e} \lim_{t \rightarrow + \infty} \left( 1 + \frac{2}{t - 1} \right) ^ t$$
$$= \frac{1}{e} \lim_{t \rightarrow + \infty} \left( 1 + \frac{1}{\frac{t - 1}{2}} \right) ^ t = ...$$
You should be able to go from here, right? :)
Last edited: Apr 25, 2007
15. Apr 26, 2007
### theriel
Yeah, I should be able... but ;-]
I think I have missed something or there is something basic I cannot think about...
In your calculations we have to prove now that the limit is equal to e^2 (to make the whole result equal to e). We may:
- use t=-x again, however (after having done some calculations) it gives us nothing.
-make the same denominator, which gives us:
1/e * lim [t->oo] ((t+1)/(t-1))^t.
Last edited: Apr 26, 2007
16. Apr 26, 2007
### VietDao29
Nope, why should you make the substitution t = -x? When $$t \rightarrow + \infty \Rightarrow x \rightarrow - \infty$$, and you cannot use the limit: $$\lim_{\alpha \rightarrow + \infty} \left( 1 + \frac{1}{\alpha} \right) ^ \alpha = e$$ to complete your problem.
The limit is not 1. You should note that $$1 ^ \infty \neq 1$$, it's one of the Indeterminate Forms.
Ok, big hint of the day. :)
We have:
$$\lim_{\alpha \rightarrow + \infty} \left( 1 + \frac{1}{\alpha} \right) ^ \alpha = e$$
And, we also have that:
$$\lim_{x \rightarrow - \infty} \left(1 + \frac{1}{x} \right) ^ x = \lim_{t \rightarrow + \infty} \left(1 - \frac{1}{t} \right) ^ {-t} = \frac{1}{e} \lim_{t \rightarrow + \infty} \left( 1 + \frac{1}{\frac{t - 1}{2}} \right) ^ t$$
Now, if you let $$\alpha = \frac{t - 1}{2}$$, then when $$t \rightarrow + \infty$$, you also have: $$\alpha \rightarrow + \infty$$, right?
Now, do some little manipulation, and change t to $$\alpha$$, you'll arrive at the correct result in no time.
17. Apr 26, 2007
### Dick
Try this argument on for size. According to my reference e^x is defined as lim[n->infinity](1+x/n)^n.
So 1/e=lim[n->infinity](1-1/n)^n=lim[n->-infinity](1+1/n)^(-n)=
1/lim[n->-infinity](1+1/n)^n.
18. Apr 26, 2007
### theriel
Yeah, I forgot about ^n in the previous approach.
Thank you very much for your help and hint! ;-] Soo... I have:
e * lim[A->+oo] (1 + 1/A)
Is the limit of 1/inifinity indiscussably equal to zero or I may expect another task from my teacher to prove it ? ;-D.
19. Apr 26, 2007
### VietDao29
Err... He only knows that:
$$\lim_{n \rightarrow + \infty} \left( 1 + \frac{1}{n} \right) ^ n = e$$, we hasn't covered what ex is...
20. Apr 26, 2007
### VietDao29
Hurray, you get it correctly. The limit: $$\lim_{x \rightarrow \infty} \frac{1}{x} = 0$$ is well-know, hence, you don't have to "re-prove" it.
Congratulations. | 2016-10-23 22:27:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8840779066085815, "perplexity": 1886.8052251830334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719437.30/warc/CC-MAIN-20161020183839-00105-ip-10-171-6-4.ec2.internal.warc.gz"} |
http://crypto.stackexchange.com/questions?page=2&sort=newest | # All Questions
40 views
### Does NSS fully implement PKCS 11?
I am looking towards using NSS in a Linux application that makes use of a TPM (HSM). So, I am checking the support of PKCS 11 in NSS, at least for the management of Elliptic Curve keys, signature with ...
43 views
### How to calculate the entropy of passwords? [duplicate]
Here is a cartoon about password entropy. http://xkcd.com/936/ I dont quite understand how the entropy is calculated in the cartoon assuming they are calculate correctly. But in general, I dont have ...
44 views
### How to use the key from a Diffie Hellman exchange? [on hold]
I watched a YouTube video about Diffie-Hellman called "Diffie-Hellman Key Exchange", and it said after doing some modulo operations with the public modulus and generator and the random private ...
107 views
### Adding tweak to a block cipher
I know there are XEX, XTS and other ways to add tweak to block cipher without modifying cipher itself. However they are quite slow and/or complex. If we assume we have a secure block cipher round ...
50 views
### Bouncy Castle and Salsa 20
I want to use Salsa 20 in Java, so I downloaded Bouncy Castle, and... it makes no sense to me. I've got it working, but most of my choices were essentially random. But I can't see there anything ...
41 views
### Forward Secrecy with pseudorandom functions
Let $H_1$, $H_2$ be keyed hash functions (e.g. $H_i(x) = SHA_{256}(s_i||x)$ for pseudorandom $s_1$, $s_2$). Let $s_n = H_1^k(s_0)$, $k_n = H_2(s_n)$, where $s_0$ is a secret (pseudorandomly chosen ...
39 views
### online Vickrey auction using remote coin flip
Is there any published work on implementing a online auction, which uses "remote coin flip" to prevent auctioneer cheating when choosing winner and bid value? Is there any possible way to use remote ...
43 views
### How to compute the decompositions used in fast FHE bootstrapping?
Leo Ducas and Daniele Micciancio's recent paper "FHE Bootstrapping in less than a second" gave an exciting result that one can compute the atom operation' of Fully Homomorphic Encryption (i.e. ...
40 views
### Encryption function that returns a unique number based on 3 ints [closed]
I'm looking to write down a function that takes 3 integers as input: X, Y, and Z, and returns a unique integer based on those 3 integers. Note that X, Y, and Z are not interchangeable. Any ideas ?
60 views
### Is it possible to 'fake' time for TOTP? (Time-based One-time Password)
TOTP (Time-based One-Time Password) Algorithm is used in Two factor authentication. I understand the algorithm and that current time is used as a variable to generate a token. Wiki page for reference: ...
15 views
### Isn't 'exponent' only present in MSME equations in Groth-Sahai framework?
Below are different types of equations in Groth-Sahai framework. All three instantiations, viz. Subgroup Decision, SXDH and DLIN allow to commit to exponent. But, isn't the concept of exponent only ...
55 views
### Encrypt a Zip File with Caesar Cipher
I'm trying to understand, why the Filesize of a Caesar Cipher encrypted foo.txt.zip File is smaller than a Caesar cipher encrypted foo.txt File. For example: Foo.txt size: 90bytes foo.txt.Caesar.enc ...
47 views
### how can convert Affine to Jacobian coordinates?
Sorry . I know my question is very elementary but please explain for me :( I have a point in affine coordinates . (x,y) what should I do when I want to show it as (X,Y,Z) in Jacobian coordinates. ...
67 views
### Where did the SHAKEs come from in SHA3?
Where did SHAKE128 and SHAKE256 originate from? I am trying to find them in the original Keccak documentation but can't find them. Is it some special mode of Keccak referenced in the documentation? ...
45 views
### ECC encryption with Miracle Library in C [closed]
I want to use miracle library in C for simulate some Algorithm. these algorithms are ECC encryption in different coordinate. I have two algorithms that should give me the same output . But I don't ...
68 views
### Rijndael performance
I have written the routines Rijndael 128-Bit ENC/DEC and KeyExpand. In bit sliced SSE2, and from his OPS would like to know its performance. The results are as follows: ENC: SubBytes&Shiftrows ...
108 views
### Do passphrases need to be run through PBKDF2? Almost impossible to brute force? [migrated]
Passphrases normally contain more data then passwords and can provide more entropy. It seems like it would still be hard to brute force a passphrase w/o using PBKDF2, assuming a user didn't select a ...
70 views
### How do RSA and ElGamal key sizes compare?
I have a rather silly question regarding the comparison of RSA with ElGamal over integers. If you want to compare their performance in the same level of security, does the modulus of both of them need ...
44 views
### SpongeWrap without padding and frame bit
Assuming all inputs are same length as rate except last can be shorter. Is it necessary to pad every input (not just last) to sponge for authenticated encryption to be secure? Is this just, because ...
108 views
### Is it possible to demonstrate that md5(x) != x for any x?
I am looking for an easy to follow explanation, if possible, that demonstrates/proves the validity (or not!) of this assertion: for any X, md5(X) != X (being X any string of 32 hex characters)
8 views
### Finding all primitive polynomials of a certain degree in $\mathbb{F}_q$ [migrated]
I am writing an algorithm to find all primitive polynomials in $\mathbb{F}_2[X]$ and I found this theorem : If $P(X)$ is a primitive polynomial in $\mathbb{F}_p[X]$ of degree $n$ with root $a$, then ...
34 views
### Using C_Sign to create a PKCS7 signature
I am using the PKCS#11 function C_Sign to sign some data. The output I get is just a signature. How do I get it in PKCS#7 format - i.e. ASN1 with signature and certificate (for detached) or ASN1 with ...
26 views
### Generating a public key certificate with ECDSA params
I have an ECDSA key generated in a HSM and I am able to retrieve it's public key components via the PKCS11 library. I would like to create a public x509 certificate with the public params for the ...
289 views
### How does MD5 process text which is shorter than 512 bits
MD5 processes a 512-bit block and produces a 128-bit (16 byte) message digest often expressed as 32-digit hexadecimal value For example if I hash the word "how" using MD5 , I get the following hash ...
38 views
### how does the order-preserving encryption scheme distribute the buckets?
Recently ,I want to know the order-preserving-encryption scheme(OPES in short).But ,I couldnt understand how the buckets are distributed,and if the values in my input database are in a small range, ...
70 views
### Block Cipher Modes
I have a question here asking the following: Why do block ciphers need the use of blocking modes? To encrypt messages larger than the size of the block. To avoid having the same block ...
80 views
### Serpent 256bit key wrong round keys
Assume that we have this 256bit key: 15FC0D48 D7F8199C BE399183 4D96F327 10000000 00000000 00000000 On first 0-7 keys we can't apply formula wi=(wi-8 xor wi-5 xor wi-3 xor wi-1 xor phi xor ...
61 views
### Bit level permutation
Could anyone explain how secure is bit level permutation? What is the most serious threat against the security of this kind of cipher? Thank you
103 views
### Security analysis of Spritz?
Recently, a new cipher called Spritz has been released by Ronald L. Rivest and Jacob Schuldt. It should be a "drop-in replacement" for RC4. There are many differences to RC4, Spritz is "spongy" and ...
201 views
### Do Export Restrictions Still Apply To The Key Length of RC4?
I've just read a paper from 2004 which stated that the RC4 encryption algorithm was restricted to a 40 bit key size when exported from the USA; however the reference for this information (Applied ...
77 views
### Breaking RSA moduli
Let suppose that the sizes of factors $p$ and $q$ are $b$ bits. We construct two RSA numbers $n$, $n'$ of same sizes. Can we say that the duration to break these two numbers is two times the duration ...
24 views
### What is meaning of “Decipher the rest of the message by deducing additional words”?
I have an assignment in cryptography. I am not asking here answer, but meaning of question. This is a question : 2.4 The following ciphertext was generated using a simple substitution algorithm. ...
68 views
### Is it possible to distinguish a CRC hash? [closed]
I have a program that generates random combinations of 8 characters including numeric characters and abcdef. I want to know if it is possible to identify which of these are actual CRC hashes and which ...
59 views
### Is .NET DESCryptoServiceProvider secure in this case?
I have the following piece of .NET code (see below). I know that DES is not quite secure, I saw that MSDN does not recommend using DES, only for compatibility with legacy programs. I also saw that ...
142 views
### How exactly does AES-NI work?
I am looking in to AES-NI which is now supported by many new CPU's and I have read a few papers which states that AES-CBC works faster with AES-NI, but I am unable to understand how exactly AES-NI ...
210 views
### How costly is to find millions of large prime numbers for RSA?
Consider I need to assign a large distinct prime number to each element in a large set. This must be deterministic so the function always gives me the same prime to the same value. What is the most ...
51 views
### RFC3447 OBJECT IDENTIFIER semantic
http://tools.ietf.org/html/rfc3447#appendix-B.1 ...
42 views
### how to find key matrix in hill cipher
I want to solve this problem but there are 3 known plaintext-ciphertext pairs. The key of Hill cipher is a 3*3 matrix as k=[k1,k2,3; k4,k5,k6; k7,k8,k9] where the unknown ...
59 views
### I'm using AES-CTR as a CSPRNG - Do I need an IV?
I'm using AES128-CTR for generating pseudo-random values, which is considered secure for up to 1MB (at least from what I've read). I simply encrypt a 128-bit little-endian counter, starting from 0. ...
77 views
### Weak key schedule IDEA [on hold]
Why was such a weak key schedule chosen for IDEA? The key schedule of IDEA works like this: Divide the key (128 bit) into 8 round keys, each 16 bit long. This are the first 8 "round" keys (6 keys per ...
53 views
### side channel attacks on AES
Say you have a web application that's performing AES encryption. What sorts of side channel attacks should one keep an eye out for? Timing attacks affect RSA more than symmetric ciphers in-so-far as ...
55 views
### Can adding nonces make challenge response authentications weaker?
In a custom protocol we want to replace an aged tiger32 based challenge response authentication. I suggested that we use something existing, so threw HMAC into the room. As per wikipedia it works as: ...
12 views
### FEAL-4 Fk Function 4 Rounds
As I understood in FEAL-4 it needs 4 pair of round keys. Which means that we need to launch Fk function 4 times, but the problem occures on the 2nd round. At first round we have original 64 bits key ...
31 views
### Guarantee signed data set time interval
I have some data data which is created using some hardware measurements. I now need to guarantee somehow that the data was created in a configurable interval of time. My idea was to use a central ...
39 views
### Linear transformation proof
Ok, so I have this question and I honestly have no idea how to prove it. I know that the function is linear and I know it works for every possible combination, I just don't know how to prove it in an ...
39 views
### Need help solving message encrypted on an Enigma machine?
So basically I'm in a class an one problem we were given a while ago was one involving an enigma machine and a few messages. I do not have any idea of how to approach it, and I really need help. The ...
48 views
### Session based AES with random key and static salt
I'm currently using aes to encrypt/decrypt messages to and from a web service. When the user establishes a session he sends a random generated secret via rsa public key. This secret is used to ...
157 views
### Does this guarantee a unique 32 bit Hash?
I came across some source code that loosely does the below in order to achieve a 32 bit hash. The input string is passed through MD5 to get 16 bytes Hash (as usual). Then the 16 bytes are split into ... | 2014-10-21 04:11:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7786586880683899, "perplexity": 2248.550373693624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507443883.29/warc/CC-MAIN-20141017005723-00313-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://rujec.org/article_preview.php?id=81710 | Research Article Print
Research Article
Multidimensional poverty: Methodology and calculations on Russian data
Elena A. Nazarbaeva, Alina I. Pishnyak, Natalia V. Khalina
‡ HSE University, Moscow, Russia
Corresponding author: Elena A. Nazarbaeva ( enazarbaeva@hse.ru ) © 2022 Non-profit partnership “Voprosy Ekonomiki”.This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY-NC-ND 4.0), which permits to copy and distribute the article for non-commercial purposes, provided that the article is not altered or modified and the original author and source are credited. Citation: Nazarbaeva EA, Pishnyak AI, Khalina NV (2022) Multidimensional poverty: Methodology and calculations on Russian data. Russian Journal of Economics 8(4): 352-380. https://doi.org/10.32609/j.ruje.8.81710
Abstract
This article focuses on Multidimensional poverty index (MPI)—the alternative approach to poverty measurement. While the official monetary approach is based on a comparison of income with a certain poverty line (until 2021 in Russia it was based on the minimum subsistence level, since 2021 it has been calculated as a share of median income of the population), the MPI also includes deprivations that poor people may face. The text contains the description of the index calculation methodology, the results of its computation on Russian data (Statistical Survey of Income and Participation in Social Programs-2017), and the description of vulnerable groups of population in accordance with the MPI. Population groups that are identified as being at risk of poverty (according to the index) are similar to the vulnerable population based on the absolute monetary poverty approach. However, the index widens the list of such groups, covering older people and people with disabilities.
Keywords
poverty, multidimensional poverty approach, poverty measurement, poverty profiles.
JEL classification: I32, I38.
1. Introduction
Overcoming poverty is still the core task for social policy, both in Russia and abroad. Hence the issues concerning poverty measurement are of high importance. All the existing approaches to poverty measurement can be divided into monetary (or welfare) and non-monetary ones. The first one is based on the sums of money that a household or an individual has, while the latter focuses on other criteria.
The monetary approach methodology is adopted by the World Bank (2022), Eurostat (2021b), and OECD (2019). When this methodology is used, the core question is what are the criteria of being poor, i.e., what is the poverty line? Three possible answers to this question have formed three different monetary approaches to poverty: absolute, relative, and subjective.
The absolute monetary approach employs the idea that those who cannot afford the minimal set of goods and services, those deemed necessary to survive, are poor. This approach has a rich history; the first attempts to implement it were made at the end of the 19th century by S. Rowntree and C. Booth in London and, later, in York (Laderchi et al., 2003). This poses the question: which goods and services are necessary and how do we evaluate their price? Usually, the set of such goods includes certain foods and durables. The list of food products can be formed in different ways: the necessary calorie intake or the consumption rate of fats and proteins can be considered. Some calculations can also include the requirements for vitamins and mineral consumption (see, for example, Allan, 2016). The main disadvantage of the absolute approach is that poverty becomes similar to survival.
In the case of the relative approach, income is compared with the consumption standard typical for a certain society. The median income is supposed to be the indicator of such level of consumption, while the poverty line is set at 60% of median income. Sometimes the criteria set at 40% or 50% of the median are also used. The approach is highly dependent on income distribution among the population.
The critics of absolute and relative approaches resulted in the development of the so-called subjective approach. Its core idea is to define the poverty line in accordance with the perception of the people. For the first time, this methodology was adopted by P. Streeten (Wagle, 2002). The same idea was employed in Gallup Institute and Eurobarometer studies (Ovcharova, 2009).
Monetary approaches provide a useful tool for poverty analysis; however, with the development of poverty studies, it became evident that poverty is not only about lack of money. This idea was developed in non-monetary approaches. They usually describe poverty in terms of deprivations (lack of necessary resources) or social exclusion. The deprivation approach is based on the idea that poor people have no access to certain goods, services, or practices that are widespread in society. The notion of social exclusion appeared in France in the 1960s. Later, the concept of social exclusion was developed further to include limitations in consumption, civil rights, etc. (Ovcharova, 2009). As well as with monetary approaches, the core issue is to identify the poverty line: the list of deprivations and the number of them for someone to be considered poor. In some cases, such a list of deprivations can be formed by the researcher. Such an approach was implemented by P. Townsend (1979) and T. Atkinson (World Bank, 2017). The attempt to form the list of deprivations in accordance with public opinion was made by Mack and Lansley (1985). Only the deprivations that were perceived as connected with poverty by 90% of the population were included in the list (Ovcharova, 2009).
Previous studies have demonstrated that the adoption of different approaches gives a high variation in poverty level evaluations (Laderchi et al., 2003). Each of the approaches has its advantages and disadvantages. The absolute one provides the ability to evaluate the number of people who cannot afford the minimum necessary set of goods and services, however, it doesn’t enable to assess the number of people who have standards of living that are lower than in society in general. This problem can be solved by the relative approach, but its critics emphasize that any evaluation of poverty should include criteria other than mere income level, and other deprivations that poor people face should be considered. The latter are included in subjective and non-monetary methodologies; however, they also appear to be an imperfect solution as they raise the question regarding specific deprivations that determine poverty. There is no perfect poverty measure. Its most complex vision can be acquired by combining the aforementioned approaches. While monetary indicators of poverty are constantly tracked by scholars and policymakers, less attention is paid to non-monetary ones.
To combine the advantages of all the approaches described above a complex index has been developed, which is called the Multidimensional Poverty Index (MPI). Nowadays, it is seen as a valuable tool not only to assess the poverty level, but also to develop social policy measures. In 2019, UNDP in cooperation with Oxford University released the step-by-step guide for developing the national MPI. The guide introduced the complex approach combining the development and assessment of the index and the measures for its popularization. Index decomposition and the ability to evaluate deprivations that are most typical for the poor population are core aspects for using the index as a social policy tool because they help to prioritize policy measures. Such an approach also makes it possible to track the effectiveness of implemented measures: if the weight of the domain in the index structure decreases, the social support system has a positive influence. The following countries use the MPI as a social policy tool: Mexico, Columbia, Costa Rica, Chile, and Vietnam (UNDP, 2019b).
The article is devoted to the index description and its calculation on Russian data. Recently, interest in poverty methodology measurement has risen. In 2017 the Ministry of Labor and Social Support introduced an initiative to change the structure of the consumer basket by 2021. However, in 2021 this question became irrelevant as the relative approach to poverty measurement was set for the official statistics. The minimum subsistence level is now based not on the consumer basket but on the income distribution of the population. The poverty line was set at 44.2% of median population income. The changes were fixed in the Federal Law dated October 23, 1997 No. 134-FZ “About the minimum subsistence level in Russia”; it is planned to review this ratio in 5 years.
As the poverty assessment was long based on the consumer basket, it was necessary to use the comparable approach to follow the poverty dynamics. For such evaluations, the notion of “poverty bound” was introduced. The calculation was based on the consumer basket that was multiplied by the price index.
Despite all the changes in the methodology of poverty measurement, all the introduced indicators have one characteristic in common: they are aimed at assessing monetary indicators and ignore the non-monetary aspects of poverty.
The problems of poverty in Russia seem to be an interesting case for research also in the political context. In 2018, the Decree of the President of the Russian Federation No. 204 dated May 7, 2018 “On national goals and strategic objectives of the development of the Russian Federation for the period up to 2024” was signed. The Decree aimed at halving the incidence of poverty levels: from 13.2% in 2017 to 6.6% in 2024. The COVID-19 pandemic changed the timing and the Decree “On the national development goals of the Russian Federation for the period up to 2030” was signed. The latter revises the period of expected poverty reduction to 2030. This indicates that the relatively rapid decrease of officially measured monetary poverty is planned while other poverty indicators fall out of the scope. The calculation of the MPI and its evaluation at the beginning and at the end of the considered period can show changes in non-monetary poverty aspects that accompany the rapid decrease in monetary poverty.
Summarizing all the ideas above, Russia seems to be a good example for MPI evaluation as the poverty line in the country changed several times over the last 30 years; however, it was always based on monetary approaches. As a result, the level and profile of absolute poverty are well-known, but less is known about non-monetary issues. This paper aims to fill this gap.
2. Poverty in Russia: Core issues
The dynamics of different poverty indicators are tracked by the Federal State Statistics Service (Rosstat). The open data demonstrates the share of the poor following the official methodology of poverty measurement and in accordance with international standards (Fig. 1). The World Bank sets the poverty lines at $1.9 per day (if it is employed, the poverty level in Russia is close to zero),$3.2 per day, and \$5.5 per day. The share of the poor identified by non-monetary approaches is not available.
Poverty in Russia is quite changeable: the transformations touch upon both the number of the poor and the composition of this group. In accordance with monetary poverty indicators, in the 1990s the decrease in living standards, that started during the transformation period, pushed into poverty about 1/3 of the population, and the group of “the new poor” appeared. In the 2000s the economic growth that accompanied the favorable situation on the world raw materials market resulted in gradual poverty reduction. The economic crisis of 2008–2009 had a severe impact on the population; however, poverty levels didn’t change dramatically. The situation after this crisis turned out to be different: after 2013 the longest period of the decrease in real income took place—the process of recovery lasted several years, and the officially registered number of poor reached 13.3% (in 2015). The most significant reason for the poverty growth was the decrease in real wages and the growth in prices of basic goods (the spending on this category has a high share in the budget of the poor). In addition, for the first time in 15 years the decrease in real pensions was evidenced (in 2015), which dropped to 96.2% of the previous year’s level (Rosstat, 2022).
The comparative analysis of the structure of the poor population shows the most vulnerable groups (Tables 12). Those who live in rural areas, and members of large families (the share of households with 5 or more people among the poor is almost 4 times higher than among the families in general) face the highest risks of poverty. Families with children also often suffer from lack of money.
When the gender and age structure of the poor are considered, the dominance of the children aged below 16 years old should be mentioned. The share of those above the working age is lower than for the population in general. High risks of poverty are also typical for non-working people (except for non-working retirees).
The poverty profiles mentioned above were described by a number of scholars and remained stable in recent years (Ovcharova, 2014; Pishnyak et al., 2021; Gorshkov and Tikhonova, 2014).
The steps for evaluating non-monetary deprivation approaches were previously taken in the Russian scientific field (Tikhonova, 2014; Institute for Social Policy, 2017). The usage of different methodologies (different lists of deprivations and their number set as a poverty line) caused high variation in the estimated share of the poor: from 25% in 2013 (Tikhonova, 2014) to 14% in 2016 (Institute for Social Policy, 2017). The assessment based on the deprivation approach highlighted the higher poverty risks for retirees, who are seldom considered to be poor by the official statistics, especially in the case of families that consist only of retirees. The families with children, that are usually monetary poor, were relatively low deprived in comparison with households in general (Institute for Social Policy, 2017).
Table 1.
The distribution of the poor households (official monetary approach) by different characteristics (%).
Households with income per capita below the minimum subsistence level All poor households All households in the survey, 2019 2013 2014 2015 2016 2017 2018 2019 By the type and the size of the settlement Living in urban areas 41.8 41.1 47.1 47.8 48.5 49.1 47.4 76.2 population size, thousand people: Less than 50 17.2 19 17.1 17.2 15.2 16 15.7 13.7 50–99.9 6.4 5.4 7.0 6.4 7.6 7.7 6.7 8.4 100–249.9 5.9 5.1 6.3 6.6 6.1 6 6.3 10.2 250–499.9 4.1 4.3 6.3 6.2 5.9 6.5 4.7 10.4 500–999.9 3.6 3.7 5.1 5.6 6.1 7.2 6.2 10.1 1 million and more 4.6 3.6 5.4 5.8 7.6 5.7 7.7 23.5 Living in rural areas 58.2 58.9 52.9 52.2 51.5 50.9 52.6 23.8 population size, people: Less than 200 1.0 2.1 1.6 2.3 4.2 1.3 4.9 2.0 201–1,000 22.6 25.2 19.8 21.3 21.1 17.4 20.0 7.5 1,001–5,000 21.4 21.8 20.3 17.7 15.8 18.6 18.1 9.5 More than 5,000 13.2 9.7 11.1 11 10.4 13.6 9.6 4.7 By household size Households consist of: 1 person 8.5 7.3 6.8 5.1 5.3 5.0 5.0 26.4 2 people 13.8 12.0 11.4 11.4 10.7 9.5 10.0 29.0 3 people 20.5 22.1 21.0 19.9 20.9 18.6 19.4 21.0 4 people 28.2 28.8 28.2 29.4 28.2 28.4 28.7 14.0 5 or more people 29.0 29.8 32.7 34.3 34.9 38.5 36.9 9.6 By the number of children Households without children below 18 years old 27.8 24.5 22.3 21.2 19 17.6 19.2 67.2 Households with children below 18 years old 72.2 75.5 77.7 78.8 81 82.4 80.8 32.8 number of children: 1 child 28.1 28.3 28.1 26.0 25.6 23.8 21.4 17.8 2 children 28.5 30.3 30.7 33.1 33.5 33.2 33.3 11.0 3 or more children 15.6 16.9 18.9 19.8 21.9 25.4 26.0 4.0
3. Material and methods
There are different ways to widen the scope of poverty analysis and include non-monetary indicators (like evaluation of subjective poverty or deprivations analysis). One of the examples of such an approach is the AROPE index (at risk of poverty or social exclusion) (Eurostat, 2021a). This index has a long history; in 2021 it was modified. Now it consists of three domains: severe material deprivation (presence of 7 of 13 different material deprivations), at risk of poverty rate, and low work intensity indicator. Such an index structure widens the definition of poverty, including not only monetary deprivations in the index. However, the index structure is rigid in that it enables cross-country comparisons but doesn’t allow to adapt the index for a regional context.
The deprivation index, which was described above, is the alternative to the AROPE index. When such a methodology is employed, the set of deprivations is formed. Those who suffer from a certain number of problems are classified as poor (UNECE, 2017). The list of deprivations can be changed, making it possible to adapt the index for each country. However, some problems can be more harmful to people than others. This idea can be taken into account when the Multidimensional poverty index is used, which introduces the weights for each index component (the higher is the weight, the more serious is the problem). Below, the index methodology is described.
We focus on the MPI that has different implementations all over the world. The renowned economist Amartya Sen is considered to be the founder of this approach, as he was the first to formulate the idea that poverty is not just about the lack of money, but rather the lack of capabilities. According to his theory, the development of society should result in widening the range of human capabilities. If there are no such capabilities, or they are not enough, one can be considered poor. Sen (1995) supposed that just reducing the definition of poverty to a mere lack of money makes people set lower requirements and, finally, they adjust their expectations and wishes to the issues that appear feasible which is a sort of “adapted preferences.” However, people should live a “valued life.”
Sen’s idea was developed by his followers, Sabina Alkire and James E. Foster (Alkire and Foster, 2008). They proposed a methodology that included the aggregated indicator to take into account the problems that the poor face and which capabilities they lack (in the paper we will call it “deprivation”). The MPI was approved by many scholars, and used as a basis for the United Nations Development Program Global Multidimensional Poverty Index (UNDP, 2019a).
Two steps should be taken to build this index. Firstly, the set of deprivations that a person or family can face is formulated. The criteria for suffering from each problem are set up, to make it clear whether one faces deprivation or not. For example, for such deprivation as “access to drinking water,” the following criteria can be used: “having access to improved drinking water within 30-minute walk from home, round trip”.
Secondly, all the identified problems are divided into groups or domains. As a result, the index consists of domains, and each domain consists of a list of deprivations. The methodology of the index calculation is based on the assumption that the importance of different domains, for poverty evaluation, can differ. To take it into account, the weight coefficient for each domain can be introduced: the higher is the weight, the more important is the domain and the higher is its impact on the total index. The coefficients are set in accordance with expert opinion or mathematical calculations. In any case, all the coefficients should sum up to 1.
The weight of each domain is allocated evenly to each deprivation it includes. After that, for each case in the sample (person or family), the total sum of the weights of deprivations it faces is computed. The result is compared with a threshold—the specific poverty line (sum of weights that depends on the number of deprivations) and, if the sum is higher than this threshold, the individual or household is considered to be poor.
MPI is calculated as the product of two indicators:
MPI = H × A, (1)
where H is poverty headcount; A is average intensity of deprivation.
The share of the poor is calculated as the ratio:
$H=qn$, (2)
where q is the number of poor, n is all the population.
The average intensity of deprivation is estimated as the weighted average of deprivations among the poor population:
$A=∑1qciq$, (3)
where q is the number of the poor, ci is the Poverty index (weighted) for individual i.
The MPI methodology is still developing nowadays. Initially, it was used to evaluate poverty indicators in a certain period, but later, the authors decided that it was important to track a person’s position: can one escape from poverty, or do they have to stay poor for a long time? The MPI methodology was widened and used to evaluate chronic poverty (Alkire et al., 2017a). The index is also used for cross-country comparisons (Alkire et al., 2017b) and for tracking the position of specific population categories (for example, to assess poverty in children; Alkire and Sumner, 2013).
Proposing the tool for poverty index measurement, the authors rejected the idea of constructing a unique set of domains and deprivations, believing that each situation of poverty measurement has its peculiarities, and the list of problems should depend on the research tasks. That made it possible to conduct similar computations, but using the data that were available for them (Alkire and Foster, 2011).
The principles of MPI construction become more evident when looking at the examples of the index (Table 3). The Poverty index, proposed for developing countries, and used in the UN work, is probably one of the most well-known cases. The index includes three domains (UNDP, 2016): education (school attainment, school attendance), health (nutrition, child mortality), and standard of living (electricity, drinking water, sanitation, cooking fuel, having a home with dirt, sand or dung floor, assets). In this case, the weights of the domains are distributed evenly, and the threshold is set at 33.3%. The indices with a similar structure were used by the MPI authors to assess chronic poverty and poverty dynamics in different countries, for example, for chronic poverty evaluation on survey data in Chile in 1996, 2001, and 2016. In this case, the weights were also distributed evenly among the domains (with the threshold set at 33.3%; Alkire et al., 2017a). Cross-country comparisons, which cover 34 countries from different regions, were also conducted, with the last observations made in 2010–2012 (Alkire et al., 2017b).
The other index structure was chosen by the authors for Indonesia. The index included eight domains. The authors compared the poverty level using the different poverty cut-offs. The poverty level varied from 0.5% (criteria set at eight domains) to 83.2% (poverty line set at one domain, i.e., to be considered poor one needed to suffer from only one domain included in the analysis) (Alkire and Foster, 2008).
There is no unique index structure for developed countries; the calculations were usually performed on EU-SILC (European Union statistics on income and living conditions) data. One of the MPI variations included four domains: basic deprivations, consumption deprivations, health, and environment (Whelan et al., 2014). The alternative modification was based on six domains with equal weights: income, employment, material deprivation, education, environment, and health. The authors also emphasize the dependence of MPI share of the chosen poverty line. Finally, they choose the poverty line set at 33.3% of all the domains and estimated the MPI poverty headcount in Europe as 8.8% for 2012.
The case of MPI for the USA is also worth noting (Alkire and Foster, 2008). The index had the following structure: income, health, schooling, and health insurance. The authors also considered different poverty lines and the poverty headcount based on each of them. The indicator varied from 0.44% (while the poverty line was set at four domains) to 23.82% when the poverty line was set at one domain.
The examples of the MPIs described above show the huge difference in indicators for different objects. However, general ideas for all the MPIs considered are quite similar. Using the different survey questions, all of them include health, education, and material well-being. Besides this, the data demonstrates the variation in poverty headcount for different poverty lines. The MPI with a structure similar to the UNDP one was calculated for Russia; however, it dates back to 2003 and the situation has changed dramatically in subsequent years (as we demonstrated for monetary poverty). Besides this, the relevance of the indicators included in the index nowadays is to some extent questionable. The index equals 0.005 and the poverty headcount achieved 1.3% (the share of the poor in accordance with official statistics for the same period was 17.4%). The alternative version of the index, introduced in 2019, was made with the correction of its structure. The calculation on the Comprehensive Monitoring of Living Conditions of the Population for 2014 included three domains: education, health, and living conditions. In accordance with this approach, the poverty headcount was 22.8% and the MPI was 0.100 (Kapelyuk and Ryabushkin, 2019). The attention to MPI was paid even in more recent studies; however, scholars focused on theoretical aspects of the MPI and AROPE calculation (Maleva et al., 2019).
The adepts of MPI made many attempts to construct similar indices for different countries, including developed ones. However, in most cases, they faced problems owing to the lack of totally comparable data and the existence of some specific traits of each country (as we’ve mentioned, initially, MPI was constructed for developing countries and included deprivations that are typical for them, like access to drinking water, having a ground floor, and the type of construction material—concrete, for example, in the house, etc.).
When adapting the index for the goals of different countries, many scholars tried to keep the structure of the index, using the same list of domains as the UN, but changing the set of problems. Other authors made attempts to expand the list of domains or formulate a new one, grouping the deprivations that are relevant for a certain country.1 In most cases, the research strategies were conditioned by the data available to the scholars. Taking into account the existence of several approaches to index construction, we assess the ability to use it on Russian data.
4. Calculation
4.1. Constructing the index
It’s not possible to fully replicate the existing methods of MPI building on Russian data: none of the Russian surveys cover the full list of deprivations that is used for calculating the index abroad. However, like other authors, we can modify the index structure in accordance with our knowledge of the problems that poor families face in Russia. Such a strategy was used, for example, in (Whelan et al., 2014), after the data analysis the authors proposed four domains (basic deprivation, consumption deprivation, health, and neighborhood environment). Similar ideas were adapted to construct the MPI for the USA (Dghongde and Haveman, 2015), and other domains were identified (health, education, income, and housing).
The abilities for MPI measurement are mainly limited by the data available for the researchers. As MPI is based primarily on information about the deprivations, the data of two large surveys can be used. The first one was mentioned above—the Comprehensive Monitoring of Living Conditions of the Population; the other is the Statistical Survey of Income and Participation in Social Programs.2 None of them is able to replicate the index structure used in the European countries totally, however, the specific Russian index can be constructed. To find out the proper index composition, several index modifications were calculated on the data of both studies. Finally, the Statistical Survey of Income and Participation in Social Programs was chosen as it has a wider range of monetary indicators and holds more potential for further analysis.
The chosen dataset is also used as the basis for calculating official monetary poverty indicators in Russia (as a result it’s possible to compare the groups identified in accordance with MPI and absolute monetary approaches). The set of deprivations included in the questionnaire is stable (for some years the wording was changed but the list in general remained unchanged); it reflects the core problems of the poor and is conventionally used for the analysis of deprivations by both social scholars and official statistics.
The most detailed and suitable for our tasks data of the Statistical Survey of Income and Participation in Social Programs describes the situation typical for 2016. The survey is conducted in all Russian regions, and respondents fill in two questionnaires. The first contains questions about the household (its welfare, social benefits, etc.), and the other consists of questions about each member of the family. The database covers 160,008 households and 370,130 members (both children and adults).
The list of deprivations that are available for analysis is rather wide: it covers 22 problems that vary by their incidence (Fig. 2; the table with questions wording in Appendix A). Some of the deprivations are included in the household questionnaire (like having the ability to purchase durable goods, facing problems with paying for accommodation rent, or communal services). When such data is analyzed, the deprivations are attributed to each member of the family that suffers from the specific problem. The share of people that face some of the deprivations is close to zero. However, even the presence of a small number of people with such deprivations highlights the necessity to provide social support for them. Moreover, the index structure is rather heterogeneous, and the combinations of deprivations have an impact on the index level. As a result, even not especially widespread deprivations, if they exist alongside other problems, can push people into poverty.
The most widespread problem is the inability to replace furniture that is old or in a state of disrepair; such a situation is typical for 64% of all Russians. The inability to purchase two pairs of suitable, seasonal footwear for each family member (43%) and to spend one week on vacation per year away from home (40%) are less widespread but also mentioned rather frequently.
There are also some deprivations that are relatively rare: less than 1% of respondents mentioned that they live in dorms or communal apartments, also less than 1% answered that they are not able to purchase a TV, refrigerator, or phone. The percentage of families which have children with socially significant diseases3 is close to zero.
The structure of the index is highly dependent on the data available. The deprivations that are assessed in the study can be grouped into three domains: education, health, and standard of living. However, in this case, the overwhelming majority of deprivations will fall into the “standard of living” domain.
As we have shown above, the weight of the domain is distributed among all the deprivations it consists of. And if the domain “standard of living” includes many problems, the weight of each of them will be small. However, the domain contains such serious problems as the inability to make ends meet and having an income below the subsistence level. Taking into account their importance for “living the valued life,” using the minimum weights for them (and reducing their impact on the MPI), seems to be incorrect.
To avoid such drawbacks, all the deprivations included in the database were divided into eleven small domains, with a comparable number of deprivations in each one, in accordance with expert views (Table 4). All domains have equal weights, and all deprivations in these domains have the same weight.4
When MPI is calculated three indicators are assessed:
• • A—Average intensity of deprivation;
• • M 0—Multidimensional poverty index itself.
First, let’s have a look at the share of the poor population. The MPI methodology makes it possible to track how this indicator varies, depending on the changes in the poverty threshold (the sum of weights of the problems the individual faces). Fig. 3 shows how the share of the poor changes in accordance with the number of domains that makes us consider somebody poor.
Choosing the poverty threshold is an issue that requires specific consideration. In this article, we follow the ideas of the authors who made the comparative analysis of multidimensional poverty in different European countries (Whelan et al., 2014) and set the threshold that results in a poverty headcount ratio, similar to the case of relative poverty line usage. To evaluate the latter, the poverty line is set at 60% of the median disposable income of the population; the income is modified with equivalence scales. The data of Statistical Survey of Income and Participation in Social Programs demonstrates that, in this case, the share of the poor achieves 22.3%. So, identifying those who are poor, by three or more domains, seems to be the most reasonable (the poverty headcount will achieve 24.8%).
At the time when the survey was conducted, official Russian statistics used the absolute monetary approach, comparing the income with the minimum subsistence level. The poverty headcount assessed in accordance with it is significantly lower—13.0% (Fig. 3). As our work is rooted in European methodology, the poverty threshold is based on calculations of the relative poverty level.
Researchers working with MPI usually consider not only the dependence of the poverty headcount on the threshold, but also the dependence of the index itself and its parts. Below, two more indicators are considered—the average intensity of deprivation and the Poverty index itself. As Fig. 3 shows, when the poverty threshold is set at two domains, the poverty headcount achieves half of the population and when the threshold is set at seven domains, the share of the poor reduces to less than 1%. This means that almost everyone or nobody is poor. We reject such extra cases and focus on the situation when only those who suffer from deprivations from three–six domains are considered to be poor.
As Table 5 shows, the MPI varies from 7.7 to 1.0, the average intensity of deprivations—from 30.9 to 52.7. In other words, when the number of domains increases, the poverty headcount reduces, but the poverty becomes deeper.
The contribution of each domain to the index is another aspect that is traditionally considered when working with the MPI. To evaluate it the sum of weights of all deprivations in a certain domain for the poor is divided by the sum of all deprivations that all poor individuals in the sample have. All the contributions should sum up to 1. The data of Statistical Survey of Income and Participation in Social Programs shows that the domain “communication and rest” makes the largest contribution to the MPI. It means that, among those who are poor according to the MPI, there are a lot of people who cannot invite friends to their place and afford to go on vacation away from home (these two problems form this domain). These problems remain the core ones with no impact on the number of domains chosen to set a threshold (Table 6).
Raising the threshold means making the criteria for poverty identification firmer: the higher is the threshold, the more deprivation one should suffer simultaneously, and the more difficult one’s circumstances should be. The MPI structure shows how the contribution of each domain changes when the threshold is raised: when it is relatively low, the domain “large purchases” has the highest impact, when the threshold is higher, the share of the people who cannot satisfy basic needs increases (like purchasing clothes, footwear, and food).
So, the MPI enables finding out which deprivations the poor face most frequently and how the figures change when different thresholds are chosen. However, these questions do not make clear which social groups have the highest risks of poverty that is very important for developing a pro-poor social policy.
This makes evident the task of comparison of the MPI for different social groups. All the calculations described above were performed at individual levels. However, one should consider not only individual data but also the information about the household, as its composition can influence the risks of poverty. We’ll calculate the MPI for households as the mean for the individuals living there. The share of poor families in this case is 23.6%.
Summarizing the results, we should underline that the threshold was set at three domains level (as it results in the poverty headcount similar to the one set at 60% of median income). The poverty headcount will reach 24.8%, the MPI—7.7.
Table 2.
The distribution of the poor population (official monetary approach) by different characteristics (%).
People with income per capita below the minimum subsistence level All poor population All population in the survey, 2019 2013 2014 2015 2016 2017 2018 2019 By the age Children under 16 years old 34.6 35.6 36.6 37.5 39.3 39.9 41.0 19.0 Under 3 years old 7.5 7.1 7.2 7.1 7.1 7.3 7.0 2.4 3–6 years old 8.9 9.5 9.3 9.9 10.3 11.2 10.7 5.1 7–15 years old 18.2 19.1 20.0 20.4 21.8 21.4 23.3 11.6 Adults 18–29 years old 16.1 16.1 14.9 13.4 12.3 12.1 10.6 10.6 In working age 58.9 58.2 56.4 55.1 54.1 53.7 52.0 55.0 Male 28.1 27.8 26.8 26.7 26.2 25.9 24.8 28.5 Female 30.8 30.4 29.6 28.4 27.9 27.8 27.2 26.5 Above the working age 6.5 6.1 7.0 7.4 6.6 6.5 6.9 26.1 Male 1.5 1.4 1.6 1.8 1.5 1.7 1.8 7.8 Female 5.1 4.7 5.4 5.6 5.1 4.8 5.2 18.2 By the economic activity Employed (working) 32.0 32.4 33.6 31.9 31.5 31.0 28.9 53.3 including working retirees 0.7 1.0 1.0 1.0 0.7 0.8 0.7 10.0 Unemployed (not-working) 33.4 32.0 29.8 30.7 29.2 29.2 30.1 27.7 Not-working retirees 8.7 8.0 8.9 9.2 8.2 8.2 8.5 19.0 Other unemployed 24.7 24.0 20.9 21.5 21.0 21.0 21.6 8.7 including Old-age pensioners 7.3 6.4 7.0 7.5 6.7 6.8 7.0 27.2 Disabled pensioners 1.3 1.3 1.3 1.3 1.2 1.6 1.5 1.4 Receiving pension on the occasion of loss of the wage-earner 0.4 0.4 0.4 0.4 0.4 1.4 1.7 0.7 Recipients of social pensions 1.1 1.2 1.2 1.1 0.7 0.8 1.0 0.6 Recipients of unemployment benefit 3.3 3.1 2.4 2.3 2.1 1.9 1.9 0.6
Table 3.
The examples of MPI composition for different countries.
Country Domains Source Developing countries Education (school attainment, school attendance) Health (nutrition, child mortality) Standard of living (electricity, drinking water, sanitation, cooking fuel, having a home with dirt, sand, or dung floor, assets) UNDP (2016) Indonesia Expenditure Health: Low body mass index (BMI) Schooling (years of schooling completed) Cooking fuel Drinking water Sanitation Sewage disposal Solid waste disposal Alkire and Foster (2008) European countries Basic deprivation Consumption deprivation Health Neighborhood environment Whelan et al. (2014) Income Employment Material deprivation Education Environment Health Alkire and Apablaza (2016) USA Income Health Schooling Health insurance Alkire and Foster (2008)
Table 4.
The structure of the MPI domains.
Indicator Domain Domain weight Deprivations weight Health limitations: disability D1: Health 0.091 0.091 Low education level D2: Education 0.091 0.091 Low-skilled job D3: Employment 0.091 0.046 Unemployment 0.046 Have arrears for rent payment or mortgage payment D4: Basic needs 0.091 0.003 Cannot make ends meet 0.003 Have arrears in the payment for communal services 0.003 Cannot eat meat, chicken, or fish meals at least twice a week (or vegetarian alternatives) D5: Nutrition 0.091 0.091 Cannot replace clothes for a family member when it becomes necessary D6: Clothes and footwear 0.091 0.046 Cannot purchase two pairs of suitable, seasonal footwear for each family member 0.046 Cannot invite guests for family parties D7: Communication and rest 0.091 0.046 Cannot spend one week per year on vacation away from home 0.046 Cannot afford to purchase a refrigerator D8: Basic goods 0.091 0.046 Cannot afford to purchase a washing machine 0.046 Cannot afford to purchase a PC D9: Means of communication 0.091 0.003 Cannot afford to purchase a TV 0.003 Cannot afford to purchase a phone 0.003 Cannot afford to purchase a car D10: Large purchases 0.091 0.046 Cannot replace furniture that is broken or in disrepair 0.046 Poor (absolute criteria)a) D11: Income 0.091 0.091
Table 5.
The poverty headcount, the average intensity of deprivations, and the MPI.
Number of domains 3 4 5 6 H (poverty headcount) 24.8 12.5 5.6 2.0 A (average intensity of deprivations) 30.9 38.0 45.1 52.7 M (poverty index) 7.7 4.8 2.5 1.0
Table 6.
Contribution of each domain to the MPI.
Number of domains 3 4 5 6 Health 0.036 0.030 0.028 0.025 Education 0.013 0.017 0.024 0.035 Employment 0.014 0.015 0.015 0.020 Basic needs 0.078 0.084 0.087 0.085 Nutrition 0.109 0.133 0.149 0.154 Clothes and footwear 0.185 0.174 0.163 0.152 Communication and rest 0.195 0.182 0.170 0.157 Basic goods 0.010 0.014 0.020 0.031 Means of communication 0.026 0.033 0.039 0.045 Large purchases 0.208 0.181 0.162 0.148 Income 0.126 0.137 0.143 0.148
4.2. The ratio of groups of the poor identified using the MPI and the other approaches
Before moving to MPI poverty profiles evaluation, we should consider the ratio of the poor, identified by the MPI and the absolute monetary approach. When choosing the proper MPI threshold, we mentioned that the level of relative poverty was 22.3%,5 and we chose the threshold for the index to get 24.8% poverty headcount, while the absolute monetary poverty level for the same period was 13.0%. Below, the description of these figures’ ratio is given.
The share of those who are poor, according to three approaches altogether (relative, absolute, or MPI) is one of the highest among Russian households—10.1%. Almost the same number of individuals are considered poor only in accordance with the MPI criteria. The share of those who are poor only when the criteria of relative approach are adapted is also rather high—5.9%. The share of those who are poor in accordance with both relative and MPI criteria is a bit lower and equals 4.1% (Fig. 4).
The MPI approach to poverty measurement provides wider scope to the problems of low-income groups. Due to the fact that the absolute approach is used in the official statistics in Russia, below, we compare the structure of the MPI for those who are poor, in accordance with MPI only and those who are poor in accordance with MPI and the absolute approach.
In the case of the absolute approach, as well as in the case of the MPI implementation, most Russians suffer from problems that fall into “communication and rest” and “large purchases” domains. A high headcount ratio of those who face the problems listed above can be explained, not by the specific traits of the poor Russian population but by their high incidence in Russia. As Fig. 1 demonstrated, the inability to replace furniture that is in a state of disrepair is typical for 64% of the Russians, and to purchase a new car—for 20%. These two problems form the domain “large purchases.” Problems with communication and vacation are also not unique for the poor: 40% of all Russians cannot afford to spend one week per year away from home, 16% of all survey participants cannot invite guests to a party. Such a component as the inability to purchase two pairs of suitable, seasonal footwear for each family member is also widespread and was mentioned by 43% of the population.
It should also be noted that the income of those who are poor according to the MPI is sometimes rather far from the poverty line. If we look at the income quintile distribution of those who are poor according to the MPI criteria, we’ll see that even in the highest quintile there are some MPI-poor people, but their share is only 3%. When going to lower quintiles their share increases and in the first (the poorest) one achieves 59% (Table 7).
The presence of those who are poor by the MPI criteria even in relatively rich social strata can be explained, to some extent, by the deprivations that are included in the index. For example, having debts for rent or mortgage payments can be an indicator of serious housing problems as well as an indicator of good financial capabilities: one could choose this answer in case of problems with rented commodities or debt over an expensive mortgage. Besides this, the status of the disabled is considered as an indicator of poor health but it can have no connection with financial problems. Unemployment is also used as one of the MPI components; however, it could be short-term and have no influence on material well-being in the long-term. All these statements should be checked further to build the optimal index composition.
Table 7.
Poverty headcount ratio depending on the quintile group by per capita income.
Quintile The share of the poor according to the MPI, % N Q1 (the lowest) 59.0 149,648 Q2 25.1 100,812 Q3 13.5 60,626 Q4 6.3 35,946 Q5 (the highest) 3.1 20,074
4.3. The impact of household and individual traits on the MPI
Here we compare the MPI for different social groups that will clarify which families mainly face the problems of poverty6 (Table 8). First, the highest values of the MPI among children under 19 years old should be mentioned. Such data only confirms the well-known idea about the high vulnerability of Russian families with children.7 One more confirmation will be presented below comparing the MPI for different types of households. For groups of people above 20 years old, the MPI meanings are rather similar; they remain relatively low until retirement age while the significant increase in the index can be seen.
This fact is important: when monetary approaches to poverty measurement are used the households with retired members usually appear not to be poor because they have a stable income.8 However, the MPI demonstrates that even when they have income above the poverty line, older people suffer from deprivations that prevent them from maintaining an acceptable standard of living. The reduction of monetary poverty among the oldest population was caused by the growth of pensions and by the development of social benefits for older people. Since 2010, a social benefit that increases the pension up to the minimum subsistence level was introduced (Ovcharova, 2014). But although this measure removes them from the “statistical” poor, their lifestyle remains largely unchanged.
Working retirees sometimes manage to overcome the problems of low income; the average MPI for this group drops to 3.6, while for non-working retirees it reaches 10.5 (and 7.7 for the population in general).
The position on the labor market has a significant impact on the risks of poverty not only among the retirees: the higher is one’s position in this area, the lower is the probability of falling into the category of poor. The differences become evident when the index is compared for the people at and below, or above, the employment age. Among the first one, the average index is 7.0, while among the people aged 15 and less, and above 72 years, it reaches 10.2 and 9.9 respectively.
The gap between employed and unemployed is even larger: while the MPI for the former equals 5.1, it is twice as high for the latter (11.2). The large difference can also be seen between those who work in the formal and informal sectors.9
And finally, the variation of the poverty index for people working in different professional positions should be highlighted. The low-skilled workers significantly differ from any other groups as their index achieves 18.9. This group is closer to those who are unemployed (24.8) than to other workers. Such high values of the index can, to some extent, be explained by including low-skilled work and unemployment in the index as its components.
For other employment groups, the differences are not as large and vary from 1.3 for the people at senior positions to 7.1 for those who work in the service sector.
The position in the labor market is closely connected with the volume of knowledge and skills acquired by the person: the smaller it is the lower labor market position is and the higher are the risks of poverty. The MPI data confirms this idea. The absence of basic education and incomplete secondary education are the components of the index. The MPI in these groups is high and the poverty headcount ratio according to the MPI criteria is about half of all respondents. The decrease in the index that accompanies the increase in the education level is also evident. If the index for the respondents with general secondary education is 11.6, for people with higher education it drops to 2.9.
All the factors that cause high values of the MPI are related to the individual peculiarities of the Russians. But the household composition also plays a great role in the context of poverty risks. To find out which families have the higher poverty indicators let’s move to the sample of households and compare the MPI for their different types.
The largest households are at higher poverty risks, and the MPI for them achieves 11.6. For people living alone, the figures are also higher than for the population in general but the gap is not as wide as for large families.
Below, we demonstrate how the presence of different categories of dependents in a household influence poverty indicators. The households with retirees have higher MPI values than those that have no retirees. The core factor is the fact of retirees’ presence itself while their number has a weaker impact. The indices for the households with one and two retirees are rather close. The households that consist only of retirees seem to be the most vulnerable. The MPI for them is higher than for the households where retirees live together with other family members, so supporting the contention that even with income above the subsistence minimum, retirees cannot maintain a satisfactory standard of living.
As well as the retirees, children also increase the dependents’ burden. The gap between the families with and without children is very large. For households with children under 15 years old the MPI achieves 9.0, while for families without children it is 6.5. The growth in the number of children is accompanied by the increase in the MPI. Large families have the highest MPI: it is twice as high as for the population in general.
Living in households with disabled people also increases the risks of poverty: the MPI for families with people with disabilities reaches 14.6. For families without them, the index is about 6.3.
Having unemployed family members also increases the risks of poverty for the household. The MPI reaches 21.3 for households with unemployed members, while for families without them the index is 6.9. To some extent, all the figures could be explained by the index structure that includes unemployment as one of the components.
Speaking about the poverty profiles, we should also take into account that poverty risks can be caused not only by the specific traits of people or households but also by their locality. Russia has a large territory that covers 8 federal districts with different regions inside. There are large differences among them in terms of standards of living, income per capita, consumption, etc.10 The highest MPI afflicts Siberian and North-Caucasus federal districts. When the smaller units are considered, the regions that belong to these federal districts appear to be among those with the highest index. Such regions are: the Karachai-Cherkes Republic, Kabardino-Balkarian Republic, and the Republic of Ingushetia that belong to the North-Caucasus federal district, and the Tuva Republic and Altai Republic which are parts of the Siberian federal district.
The minimal MPI characterizes Central and North-Western federal districts, while Moscow and St. Petersburg, being the largest cities in Russia, lead with the lowest poverty indicators. The Moscow region and the Tatarstan Republic follow them with a slight lag.
The poverty level also depends on the type of settlement. Rural citizens more often fall into poverty: the MPI for them is 11.7, while for urban areas it equals 5.8. The tendency for poverty to decrease with increasing settlement size should also be mentioned. This can be seen most clearly in cities: the MPI for cities with fewer than 50,000 citizens is 8.0, while in big cities with millions of residents, the MPI is at 4.1.
MPI provides not only the ability to identify the groups of the poor population but also to compare these groups with each other to find out who suffers from the largest number of deprivations simultaneously. These are unskilled workers at individual level and families with unemployed members at household level.
Summarizing the results of the MPI analysis, we should admit that the study confirms conclusions based on other approaches: people living in rural areas, having a low level of education, and weak labor market positions are more likely to become poor. The risks of poverty are also higher for families with children and with unemployed individuals.
But the MPI also enables highlighting vulnerable groups that are out of the scope of the social policy while the monetary poverty lines are used. These are people who face difficulties accessing substantial goods, although they are not formally classified as poor. Families with disabled people and retirees are among them.
Table 8.
MPI for different groups of population and households (HH).
Group Mean Std. error 95% confidence interval Sample size
Min Max
Individuals
Total 7.67 0.02 7.63 7.71 367,106
By age
16–19 years old 9.29 0.14 9.03 9.57 12,170
20–29 years old 6.87 0.07 6.73 7.00 38,515
30–39 years old 7.32 0.06 7.20 7.44 52,568
40–49 years old 6.71 0.06 6.59 6.83 49,412
50–59 years old 5.95 0.05 5.85 6.06 58,069
60–69 years old 7.20 0.06 7.08 7.31 54,035
70–79 years old 10.07 0.09 9.89 10.25 26,161
80 years old and above 8.68 0.12 8.43 8.94 11,160
By age groups
Aged 14 and below 10.20 0.06 10.09 10.34 60,865
At employment age 6.94 0.002 6.89 6.99 274,821
Aged 72 and above 9.87 0.08 9.71 10.04 29,469
By economic status
Working retirees 3.57 0.06 3.46 3.7 0 24,687
Non-working retirees 10.49 0.05 10.40 10.59 100,841
By employment status
Have work 11.33 0.04 11.24 11.42 131,388
Do not have work 5.10 0.03 5.01 5.12 164,235
By professional positions
Senior position 1.29 0.08 1.14 1.43 6,140
Specialist with high qualification 2.20 0.04 2.13 2.27 40,180
Specialist with medium qualification 3.66 0.07 3.53 3.80 20,628
Employee 4.79 1.31 4.53 5.05 7,172
Worker of the service sector 7.10 0.08 6.94 7.27 26,250
Qualified agricultural specialist 7.02 0.09 6.84 7.20 21,202
Operator of manufacturing engine 5.98 0.09 5.80 6.15 18,628
Low-skilled workers 18.80 0.15 15.58 19.20 15,091
By educational level
Postgraduate 2.19 0.16 1.87 2.51 2,193
Higher 2.91 0.03 2.85 2.98 78,367
Incomplete higher 6.39 0.18 6.03 6.74 4,736
Secondary special 7.34 0.04 7.25 7.42 101,300
Technical and vocational 9.89 0.09 9.72 10.07 29,875
Secondary general 11.61 0.07 11.47 11.75 52,565
Incomplete secondary 17.10 0.12 16.86 17.33 26,872
No secondary 15.88 0.24 15.41 16.34 6,159
Households
Total 7.24 0.03 7.18 7.31 160,008
By household size
1 person 8.16 0.06 8.03 8.29 48,790
2 people 5.81 0.05 5.71 5.91 53,682
3 people 5.83 0.07 5.69 5.97 29,489
4 people 7.53 0.09 7.34 7.73 18,939
5 or more people 11.59 0.16 11.26 11.92 9,108
By having the retirees in the household
HH without retirees 6.10 0.05 6.00 6.20 66,016
HH with one retiree and other HH members 7.97 0.09 7.79 8.14 24,782
HH with two retirees and other members 7.04 0.15 6.75 7.33 7,448
HH consists of retirees only 8.60 0.06 8.49 8.71 61,037
By having children in the household
No children 6.48 0.04 6.41 6.56 116,913
1 child in HH 7.03 0.08 6.87 7.20 25,878
2 children in HH 10.06 0.13 9.80 10.32 13,531
3 or more children in HH 17.58 0.29 17.00 18.16 3,686
By federal district
Center 5.46 0.06 5.34 5.57 40,560
North-West 5.58 0.09 5.40 5.76 17,448
Volga 7.18 0.08 7.04 7.33 31,536
Ural 7.37 0.12 7.14 7.61 13,152
Siberia 10.06 0.1 9.86 10.27 21,936
Far East 7.87 0.14 7.59 8.14 10,200
South 7.92 0.11 7.70 8.13 16,584
North-Caucasus 10.97 0.17 10.28 11.30 8,592
By settlement type and size, people
City, less than 50,000 7.97 0.07 7.83 8.10 40,584
City, 50,000–99,000 6.89 0.11 6.67 7.11 12,840
City, 100,000–249,000 6.07 0.11 8.85 6.28 12,744
City, 250,000–499,000 5.66 0.11 5.44 5.87 11,952
City, 500,000–999,000 6.04 0.13 5.80 6.30 9,456
City, 1 million and more 4.11 0.07 3.98 4.25 22,584
Rural, 200 and less 12.60 0.33 11.92 13.23 2,496
Rural, 201–1,000 13.04 0.12 12.81 13.27 20,352
Rural, 1,001–5,000 11.05 0.12 10.81 11.28 17,976
Rural, more than 5,000 10.34 0.16 10.01 10.66 9,024
By disabled people in household
Disabled in HH 14.63 0.11 14.42 14.84 21,673
No disabled in HH 6.26 0.03 6.20 6.33 138,335
By having unemployed in household
Unemployed in HH 10.51 0.05 10.42 10.61 91,256
No unemployed in HH 4.19 0.04 4.10 4.27 62,752
6. Results and discussion
For a long time, the assessment of poverty was based on monetary indicators, namely the income and expenditures of households. But the monetary approach cannot demonstrate all the dimensions of poverty in the modern world. Besides, this methodology can sometimes be incorrect due to the limitations of sociological and statistical data, the necessity of incorporating inflation (in case of dynamic studies) and the purchasing power parity into calculations (in instances of cross-country, and sometimes cross-regional, comparison). From this point of view, the Multidimensional poverty index seems to be a better tool to provide a detailed and full description of the poor.
Alternative methods for gauging non-monetary poverty also exist (deprivation index, AROPE), however, they do not provide the ability to correct the list of deprivations included in the index and to present the relative importance of the problems included in it. This reasoning became crucial for the MPI choice.
The MPI based on Statistical Survey of Income and Participation in Social Programs and setting the threshold to get the per cent of the poor similar to the relative income poverty level results in the poverty headcount ratio at 24.8% of the Russian population. The poverty headcount ratio based on MPI is higher than the poverty level based on monetary indicators calculated on the same database (13.0%). The overwhelming majority of those considered to be poor were classified as poor by MPI also. The estimations are consistent with the assessment performed by other authors: the calculation on the Comprehensive Monitoring of Living Conditions of the Population treat 22.8% of the population as poor (Kapelyuk and Ryabushkin, 2019). The MPI index can never be a substitution but rather a useful addition to poverty indicators. It shows that groups with a high risk of poverty include: low-educated and low-skilled workers and older people. MPI is also higher for families living in rural areas, larger households, and households with 3 or more children. These tendencies are also fixed in the case of monetary approaches (Ovcharova, 2014).
Nevertheless, the index also widens the scope of poverty analysis adding to the poor retired and disabled people who relatively rarely figure among the poorest in accordance with the absolute monetary approach. This thesis validates the findings that were described in the studies based on non-monetary indicators (Tikhonova, 2014) and still classifies families with several children as poor (contrary to the results of: Institute for Social Policy, 2017).
The comparison of the poor identified in accordance with the MPI and absolute monetary approach criteria also seems to be very important. In contrast with official statistics, the MPI treats twice as many people as poor. The structure of the MPI for those who are poor by the MPI criteria only and by both the MPI and the absolute monetary approach criteria almost doesn’t differ.
The core MPI advantage is the ability of index decomposition by the domains. The data shows that the more domains are set as arbiters of poverty, the higher is the impact of severe deprivations (like food and clothing purchase). If the poverty line is set at a lower level, the impact of the domains connected with leisure, travelling, and large purchases increases.
MPI could be especially important when working on social support measures. First, it helps identify vulnerable groups that are ignored by the absolute poverty approach. Second, it makes possible to highlight the sharpest problems of the poor.
6. Conclusions
The MPI index presented in the study is an example of a wide range of multidimensional poverty indices, which are used to identify the poor in different countries (mainly the developing ones). The deprivations included into the index in such countries are not relevant for Russia, but the idea of combining monetary and non-monetary estimations in order to analyze poverty seems to be forward-looking. The study demonstrated that using such a methodology widens the list of vulnerable groups of the population, adding retired and disabled people. The credibility of the results is proved by previous poverty studies based on the analysis of deprivations, which also highlight that older and disabled people are not formally poor; however, they suffer from different non-monetary deprivations.
The MPI approach can also influence decisions concerning social policy in Russia. Nowadays, the idea of combining the monetary (i.e., low income) and non-monetary (connected primarily to property and employment status) criteria to identify the beneficiaries of social policy measures is widely discussed and even implemented in some cases (primarily in the case of new social support measures like social benefits for children aged 3–7 years old). If this idea is developed further, the MPI index will be useful for both poverty evaluation and means testing.
The core limitation of the study is the list of poverty indicators that is now defined by the questionnaire of Statistical Survey of Income and Participation in Social Programs. The next step to develop the MPI for Russia is to construct the index that will be comparable with the indices implemented in developed countries and can be used for the purposes of social policy. However, that will be possible only if the necessary data is collected.
Acknowledgements
This study was supported by the Ministry of Science and Higher Education of the Russian Federation (grant ID: 075-15-2022-325).
References
• Alkire S., Apablaza M. (2016). Multidimensional poverty in Europe 2006–2012: Illustrating a methodology. OPHI Working Paper, No. 744, University of Oxford.
• Alkire S., Foster J. (2008). Counting and multidimensional poverty. OPHI Working Paper, No. 32, University of Oxford.
• Beycan T., Vani B. P., Bruggemann R., Suter C. (2019). Ranking Karnataka districts by the Multidimensional Poverty Index (MPI) and by applying simple elements of partial order theory. Social Indicators Research, 143, 173–200. https://doi.org/10.1007/s11205-018-1966-4
• Dghongde, Sh., & Haveman, R. (2015) Multi-dimensional poverty index: An application to the United States. IRP Discussion Paper, No. 1427-15, Institute for Research on Poverty.
• Gorshkov M. K., Tikhonova N. E. (Eds. (2014). Poverty and the poor in the modern Russia. Moscow: Ves Mir (in Russian).
• Kapelyuk S., Ryabushkin N. (2019). Multidimensional poverty in Russian regions. Paper prepared for the IARIW-HSE conference, Moscow, Russia, September 17–18.
• Laderchi C., Saith R., Stewart F. (2003). Does it matter that we do not agree on the definition of poverty? A comparison of four approaches. Oxford Development Studies, 31 (3), 243–274. https://doi.org/10.1080/1360081032000111698
• J. , Lansley S. (1985). Poor Britain. London: George Allen & Unwin.
• Maleva T., Grishina E., (Kovalenko) E. (2019). Long-term social policy: Multidimensional poverty and effective targeting (in Russian). Available at SSRN: https://doi.org/10.2139/ssrn.3337751
• Institute for Social Policy (2017). Poverty: The indicators of material deprivation, social exclusion and multidimensional poverty. Moscow: HSE University (in Russian).
• OPHI (2011). Country briefing: Russian Federation. Multidimensional Poverty Index (MPI) at a glance. Oxford Poverty and Human Development Initiative, University of Oxford.
• Ovcharova L. N. (Ed. (2009). Theoretical and practical approaches to poverty measurement, poverty profile and factors evaluation: Russian and international experience. Moscow: M-Studio (in Russian).
• Ovcharova L. N. (Ed. (2014). The level and the profile of Russian poverty: From the 1990th till nowadays. Moscow: HSE University (in Russian).
• Ovcharova L. N. (Ed. (2019). Families with and without children: The standards of living and the social policy support. Moscow: HSE University (in Russian).
• Pinilla-Roncancio M. (2017). The reality of disability: Multidimensional poverty of people with disability and their families in Latin America. Disability and Health Journal, 11 (3), 398–404. https://doi.org/10.1016/j.dhjo.2017.12.007
• Pishyak A. I., Khalina N. V., Nazarbaeva E. A., Goriainova A. R. (2021). The level and the profile of persistent poverty in Russia. Journal of the New Economic Association, 2 (50), 56–73 (in Russian). http://doi.org/10.31737/2221-2264-2021-50-2-3
• Rogan M. (2016). Gender and multidimensional poverty in South Africa: Applying the Global Multidimensional Poverty Index (MPI). Social Indicators Research, 126, 987–1006. https://doi.org/10.1007/s11205-015-0937-2
• Sen A. (1995). Gender inequality and theories of justice. In M. C. Nussbaum, & J. Glover (Eds.), Women, culture and development: A study of human capabilities (pp. 259–273). Oxford: Clarendon Press. https://doi.org/10.1093/0198289642.003.0011
• Tikhonova N. E. (2014). The phenomenon of poverty in modern Russia. Sotsiologicheskie Issledovaniia, 1, 7–19 (in Russian).
• Townsend P. (1979). Poverty in the United Kingdom: A survey of household resources and standards of living. New York: Allen Lane; Penguin Books.
• UNDP (2016). Human development report 2016. Technical notes. New York: United Nations Development Programme.
• UNDP> (2019a). Global multidimensional poverty index 2019: Illuminating inequalities. New York: United Nations Development Programme.
• UNDP (2019b). How to build a national multidimensional poverty index (MPI): Using the MPI to inform the SDGs. New York: United Nations Development Programme.
• UNECE (2017). Guide on poverty measurement. New York and Geneva: United Nations Publication.
• Whelan C., Nolan B., Maitre B. (2014). Multidimensional poverty measurement in Europe: An application of the adjusted headcount approach. Journal of European Social Policy, 24 (2), 183–197. https://doi.org/10.1177/0958928713517914
• World Bank (2017). Monitoring global poverty. Report of the Commission on Global Poverty. Washington, DC.
• Yang L., Vizard P. (2017). Multidimensional poverty and income inequality in the EU. CASEpaper, No. 207, Centre for Analysis of Social Exclusion, London School of Economics and Political Science.
• Zubarevich N. V., Safronov S. (2019). People and money: Incomes, consumption, and financial behavior of the population of Russian regions in 2000–2017. Regional Research of Russia, 4, 359–369. https://doi.org/10.1134/S2079970519040129
Appendix A
Table A1.
The list of questions included in MPI calculation
No Questionnaire wording Variable in the database The values classified as an indicator of poverty Indicator name
HH data
1 Section 7, question 3 Taking into account the income of all household members, is your household able to “make ends meet,” that is, to pay all the necessary daily payments? H07_03 1 — Hardly Cannot make ends meet
2 Section 7, question 5 Does your household have a TV in working order? If not, are you able to purchase it if you want? H07_05_01_02 4 — We wanted to but cannot afford it due to lack of money Cannot afford to purchase the TV
3 Section 7, question 5 Does your household have a phone (including a mobile one) in working order? If not, are you able to purchase it if you want? H07_05_02_02 4 — We wanted to but cannot afford it due to lack of money Cannot afford to purchase a phone
4 Section 7, question 5 Does your household have a PC in working order? If not, are you able to purchase it if you want? H07_05_03_02 4 — We wanted to but cannot afford it due to lack of money Cannot afford to purchase a PC
5 Section 7, question 5 Does your household have a refrigerator in working order? If not, are you able to purchase it if you want? H07_05_04_02 4 — We wanted to but cannot afford it due to lack of money Cannot afford to purchase a refrigerator
6 Section 7, question 5 Does your household have a washing machine in working order? If not, are you able to purchase it if you want? H07_05_05_02 4 — We wanted to but cannot afford it due to lack of money Cannot afford to purchase a washing machine
7 Section 7, question 5 Does your household have a car in working order? If not, are you able to purchase it if you want? H07_05_06_02 4 — We wanted to but cannot afford it due to lack of money Cannot afford to purchase a car
8 Section 7, question 6 Did your household have the arrears in the payment for communal services in the previous year? H07_06_02 1 — It was once
or
2 — Two or more times
Have arrears in the payment for communal services
9 Section 7, question 6 Did your household have the arrears in the payment for rent or mortgage payment for the main accommodation because of the lack of money in the previous year? H07_06_01 1 — It was once Have arrears in the payment for rent payment or mortgage payment
or
2 — Two or more times
10 Section 7, question 7 Taking into account the income of all household members, is your household able to eat meat, chicken, or fish meals at least two times a week (or vegetarian alternatives)? H07_07_01 2 — No Cannot eat meat, chicken, or fish meals at least two times a week (or vegetarian alternatives)
11 Section 7, question 7 Taking into account the income of all household members, is your household able to purchase the clothes for a family member when it becomes necessary? H07_07_02 2 — No Cannot replace clothes for a family member when it becomes necessary
12 Section 7, question 7 Taking into account the income of all household members, is your household able to purchase two pairs of suitable, seasonal footwear for each family member? H07_07_03 2 — No Cannot purchase two pairs of suitable, seasonal footwear for each family member
13 Section 7, question 7 Taking into account the income of all household members, is your household able to replace the old furniture? H07_07_04 2 — No Cannot replace the furniture that is broken or in disrepair
14 Section 7, question 7 Taking into account the income of all household members, is your household able to invite guests for the family parties? H07_07_05 2 — No Cannot invite guests for family parties
15 Section 7, question 7 Taking into account the income of all household members, is your household able to spend one week per each year on vacation away from home? H07_07_06 2 — No Cannot spend one week per year on vacation away from home
Individual data
16 Section 1, question 10 What level of education do you have?
I01_10 8 — Basic comprehensive (lower secondary)
or
9 — No basic comprehensive (age: for women — 23–54 years old, for men — 23–59 years old)
Low education level
17 Job seekers (unemployed) [calculated variable provided with data] R_10_1_5 1 — Job seekers (unemployed) Unemployment
18 By occupational groups of respondents [calculated variable provided with data] R_8_1 8 — Unskilled workers Low-skilled job
19 Presence of disabled people of all ages [calculated variable provided with data] inv 1 — Disabled Health limitations: disability
20 Poor by absolute monetary measurement [calculated variable provided with data] MALOIM 1 — Poor Poor (absolute criteria)
1 For examples of the index for developing countries, see Rogan (2016), Montoya and Teixera (2017), Pinilla‑Roncancio (2017), Santos and Villatoro (2018), Beycan et al. (2019). Examples of the index for developed countries can be found in Dhongde and Haveman (2015), Whelan et al. (2014), Yang and Vizard (2017).
2 For more details see the site of Rosstat: http://www.gks.ru/free_doc/new_site/vndn-2016/index.html (in Russian).
3 The list of socially significant diseases is defined by the Government Decree and includes infectious diseases such as tuberculosis, hepatitis B and C, sexually transmitted diseases, HIV, diabetes, malignant neoplasms, mental and behavioral disorders, and high blood pressure diseases.
4 We use the structure of the MPI based on EU-SILC data as a benchmark when choosing the structure of our index and domains’ weights (see, for example, Whelan et al., 2014).
5 Here and below the income evaluations with scales of equivalence are used.
6 When the MPI is calculated for socio-demographic groups, the means are calculated for sub-sample in general (for both the poor and non-poor population). But the MPI equals 0 in instances where one is not poor.
7 Regarding child poverty, see Ovcharova, 2019.
8 According to Rosstat (2020) data, in 2017 the share of people older than employment age among the overall poor population was 6.6% (1.5%—male, 5.1%—female), among all the population—25.1% (7.4%—male, 17.7%—female).
9 For identifying those who work in the formal and informal sectors, the variable of Statistical Survey of Income and Participation in Social Programs is used. Those who worked at the enterprise or entity are supposed to be formally employed, while the others (working on a farm, or for relatives, or on an individual basis, etc.) are supposed to be employed informally.
10 For more details see, for example, Zubarevich and Safronov (2019). | 2023-03-27 10:15:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4252592921257019, "perplexity": 1939.723384292349}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948620.60/warc/CC-MAIN-20230327092225-20230327122225-00469.warc.gz"} |
http://lists.gnu.org/archive/html/lilypond-user/2012-05/msg00467.html | lilypond-user
[Top][All Lists]
## Re: Letters as Left hand fingering
From: Pierre Perol-Schneider Subject: Re: Letters as Left hand fingering Date: Mon, 21 May 2012 21:41:54 +0200
To David and Nick,
I thank you very much for your kind help.
I have now plenty of new ways to emprove my left fingering notation.
Pierre
2012/5/17 David Kastrup
> On 16/05/12 17:43, David Kastrup wrote:
>>
>>>> Hi Group,
>>>>
>>>> Sometimes I need to put a letter in front of a number as a fingering.
>>>> Is there any possibility to declare "m" (for ex;) as a number so that I
>>>> could code<a-m1> as a fingering ?
>>> Do you mean for right hand (stroke) fingering? The following enables
>>> you to use -\A etc for strokefingering. The additional
>>> my-stroke-finger function isn't needed for this but gives better
>>> alignment of the characters when you have a succession of them:
>>>
>>> \version "2.15.32"
>>>
>>> % shortcuts for stroke finger indications
>>> % can't use a or p, so use upper case for all
>>> P = #(define-music-function (parser location) ()
>>> (apply make-music
>>> (append
>>> (list
>>> 'StrokeFingerEvent
>>> 'origin location)
>>> (list 'digit 1))))
>>
>> P=-\rightHandFinger 1
>>
>> Seems a bit simpler.
>
> Sure is. Probably better to use P=\rightHandFinger #1, and can then
> use - or ^ or _ as needed.
You can still use - or ^ or _ as needed. - is neutral, meaning that it
does not change the direction flag either way, it merely tells the
parser that the whole thing is to be seen as postevent.
It used to be that this was necessary in order not to have the whole
construct wrapped inside of an EventChord, basically getting
<>-\rightHandFinger 1
Now with something like 2.15.28, it would not have gotten wrapped in an
EventChord anyway, the interpretation only depending on whether the
MusicEvent has the type post-event. And with something rather recently,
it _did_ get this type. So indeed, the - does not appear to serve a
useful purpose any more. I forgot.
This means that
<URL:http://lilypond.org/doc/v2.15/Documentation/extending/inline-scheme-code>,
which was a strained example before the EventChord changes (and this is
mentioned at its top) now is completely lunatic. It states (after the
initial disclaimer that just using F = -\tweak ... is all that is needed
to make this work):
The main disadvantage of \tweak is its syntactical
inflexibility. For example, the following produces a syntax error.
F = \tweak #'font-size #-3 -\flageolet
\relative c'' {
c4^\F c4_\F
}
Using Scheme, this problem can be avoided.
Unfortunately, as a result of the EventChord changes, not even this code
produces a syntax error, but just works as intended, whether or not you
choose to add - before \tweak.
I apologize for the convenience.
But you might still want to keep this detail in mind for answering
\version "2.14.2" challenges.
--
David Kastrup
_______________________________________________
lilypond-user mailing list | 2018-01-20 13:49:06 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8547991514205933, "perplexity": 10707.858251953092}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889617.56/warc/CC-MAIN-20180120122736-20180120142736-00209.warc.gz"} |
http://www.mathportal.org/calculus/limits/limits-of-trigonometric-functions.php | Math Lessons, Calculators and Homework Help
It is time to solve your math problem
« Limit of Irrational Functions
Limits: (lesson 4 of 5)
Limits of Trigonometric Functions
Important limits:
Example
Find the limit:
.
Solution
Direct substitution gives the indeterminate form 0/0. You can still solve this problem, however: write tan x as (sin x)/(cos x). | 2014-11-25 01:27:17 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.971763551235199, "perplexity": 8648.165272826098}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416405337312.65/warc/CC-MAIN-20141119135537-00032-ip-10-235-23-156.ec2.internal.warc.gz"} |
https://hal-cea.archives-ouvertes.fr/cea-01599684 | Wave-induced vortex recoil and nonlinear refraction - Archive ouverte HAL Access content directly
Journal Articles Physical Review Fluids Year : 2017
## Wave-induced vortex recoil and nonlinear refraction
(1, 2) , (1) , (1)
1
2
Thomas Humbert
Sébastien Aumaître
• Function : Author
• PersonId : 989203
Basile Gallet
• Function : Author
• PersonId : 989210
#### Abstract
When a vortex refracts surface waves, the momentum flux carried by the waves changes direction and the waves induce a reaction force on the vortex. We study experimentally the resulting vortex distortion. Incoming surface gravity waves impinge on a steady vortex of velocity U$_0$ driven magneto-hydrodynamically at the bottom of a fluid layer. The waves induce a shift of the vortex center in the direction transverse to wave propagation, together with a decrease in surface vorticity. We interpret these two phenomena in the framework introduced by Craik and Leibovich (1976): we identify the dimensionless Stokes drift $S$ = $U_s$/$U_0$ as the relevant control parameter, $U_s$ being the Stokes drift velocity of the waves. We propose a simple vortex line model which indicates that the shift of the vortex center originates from a balance between vorticity advection by the Stokes drift and self-advection of the vortex. The decrease in surface vorticity is interpreted as a consequence of vorticity expulsion by the fast Stokes drift, which confines it at depth. This purely hydrodynamic process is analogous to the magnetohydrodynamic expulsion of magnetic field by a rapidly moving conductor through the electromagnetic skin effect. We study vorticity expulsion in the limit of fast Stokes drift and deduce that the surface vorticity decreases as 1/$S$, a prediction which is compatible with the experimental data. Such wave-induced vortex distortions have important consequences for the nonlinear regime of wave refraction: the refraction angle rapidly decreases with wave intensity.
#### Domains
Physics [physics]
### Dates and versions
cea-01599684 , version 1 (02-10-2017)
### Identifiers
• HAL Id : cea-01599684 , version 1
• DOI :
### Cite
Thomas Humbert, Sébastien Aumaître, Basile Gallet. Wave-induced vortex recoil and nonlinear refraction. Physical Review Fluids, 2017, 2, pp.094701. ⟨10.1103/PhysRevFluids.2.094701⟩. ⟨cea-01599684⟩
### Export
BibTeX TEI Dublin Core DC Terms EndNote Datacite
135 View | 2023-02-06 15:27:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3090404272079468, "perplexity": 4828.046073864701}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500356.92/warc/CC-MAIN-20230206145603-20230206175603-00073.warc.gz"} |
https://www.physicsforums.com/threads/algebra-solution-containing-trig.518941/ | # Algebra solution containing trig?
1. Aug 3, 2011
I was calculating some numbers, revolving around a sphere and some integration.
The case was to find out for what x-value a spherical bowl with a radius of 5m was half filled.
After doing the algebra, I narrowed it down to
$$x^3 - 15x^2 + 125 = 0$$
Now, my calculator says that the solution to this equation is.
$$x = 5 \sqrt{3}\sin\left(\frac{\pi }{9} \right) -5 \cos\left( \frac{\pi }{9} \right) +5$$
How do you figure that out? Kept trying a few variable changes, but nothing comes to mind.
Ofcourse this polynomial have more solutions, but this is the only solution where 0<x<5 the other ones where -2.2 and 14.
2. Aug 3, 2011
### gsal
where is x? where is the origin? If it was me, I would have placed the origin in the center of the sphere and then for x=0, the sphere should be half full?
3. Aug 4, 2011
### SteamKing
Staff Emeritus | 2017-09-22 06:28:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7079744935035706, "perplexity": 981.3699642018216}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818688671.43/warc/CC-MAIN-20170922055805-20170922075805-00307.warc.gz"} |
http://crypto.stackexchange.com/questions?page=45&sort=active | # All Questions
457 views
### Is FIPS 140-2's “Continuous random number generator test” practical?
Section 4.9.2 of FIPS PUB 140-2 specifies, amongst other things, a "Continuous random number generator test." Here are the relavant bits: If each call to a[n] RNG produces blocks of n bits ...
106 views
I'm writing a client application that wants to store some secret information with a storage service. The client has to authenticate the user with the service and the service should not be able to ...
2k views
### How to perform file encryption using 128-Bit AES?
I am confused, how can I encrypt a file using 128 Bit Advanced Encryption Standard? Do I need only to encrypt the file name and it's content or is there something that I need to do to encrypt it? Is ...
232 views
### How secure would this code be against cryptanalysis?
Simple version: Create software that takes a database of the dictionary, alphabet, and phrases. Randomly generate a database of random strings of letters/numbers/symbols of varying length. Randomly ...
92 views
### How HOTP values are validated according to RFC 4226
In Section 7.2 "Validation of HOTP Values" of the HOTP spec (RFC 4226) it says, in part, The HOTP client (hardware or software token) increments its counter and then calculates the next HOTP ...
256 views
### Efficient Incremental Updates to Large Merkle Tree
I have a data set with 300 Million entries and every 5 minutes 4000 random entries in this table change. I need to calculate the merkle root on this data set to validate integrity multiple times ...
838 views
### Practical consequences of using functional encryption for software obfuscation
I came across this article, which describes a method, developed by UCLA CS professor Amit Sahai et al, for using functional encryption in order to achieve software obfuscation. The paper that the ...
137 views
### Generating non-supersingular elliptic curves for symmetric pairings
I am looking into the application of pairings in CPABE in particular. I've notice that the scheme uses a supersingular curve as the basis of the pairing. Looking through Ben Lynn's thesis for the ...
111 views
### salting with password hash to improve security?
Would something like the following improve security (against rainbow attacks, not brute force)? Assume that $P$ is a user-chosen password, and the objective is to obtain a hash $H$ for password ...
164 views
### Swapping Key and IV in AES? Safe?
I have an application where I want to be able to send an encrypted file, and then mete out "keys" that allow the receipient to decrypt the file from a certain point to the end of the file. Actually, ...
120 views
### Finite fields in elliptic curve
I have an elliptic curve defined over finite field where $S_1=aP$ . Is it valid to say that $S_1P$ can also be computed. $P$ is the generator of the group. What my real question is that. Should '$a$' ...
182 views
### Discrete log problem with modulus prime
I am a bit confused on the hardness of the discrete logarithm problem. Does it become intractale only when it is mod n, where n is a large composite number (Like RSA key). What about if it is mod a ... | 2014-04-18 13:25:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6119300127029419, "perplexity": 2021.9157314559407}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00175-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://www.gamedev.net/forums/topic/349865-scheme-forward-declaration/ | # Scheme forward declaration
This topic is 4489 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
I'm attempting to learn Scheme, but I'm running into a basic problem of not being able to call a function that appears lower in the file than the function calling it. What is the syntax for forward declarations of function in scheme?
(define blah #f) ; forward decl of sorts.(define (bloo x y z) (blah 1 2 3)) ; using the forwardly decled function.(set! blah (lambda (x y z) (+ x y z))) ; define the function. | 2018-01-20 07:30:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3159239888191223, "perplexity": 3139.430120487743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889473.61/warc/CC-MAIN-20180120063253-20180120083253-00624.warc.gz"} |
https://www.askmehelpdesk.com/coins-paper-money/2-00-dollar-bill-worth-593826.html | # 2.00 dollar bill worth?
I have a stack of $2.00 Jefferson bills I purchased back on the first day of issue in 1976 and were taken to the US Post Office. There they placed a .13 cent postage stamp on the bills, and stamped the bills with a official cancellation stamp. The bills have been locked up since the first day of issue. How much would they be worth today? Last edited by Curlyben; Aug 21, 2011 at 01:04 PM. Search this Question 1 Answer tickle Posts: 23,710, Reputation: 2652 Expert #2 Aug 21, 2011, 10:20 AM Hi locman, this is information is from the 'AllExperts' site which I think may answer your question: For the$2.00 notes the value is always questionable.
The 1976 $2.00 Federal Reserve Note with out a First Date of Issue Stamp is only collectable in Crisp Un-Circulated grade for about$3 to $4 dollars except for a couple of replacement notes with a Star in the serial number. In general terms the 1976 US Two Dollar bills are collectable since not that many stayed in circulation. Bills in less than crisp condition have less value and in heavily circulated condition only trade at face value. A lot of these Two Dollar bills with the then NEW DESIGN were stamped with the first day of issue. Using the new 1976 stamp and canceled by the post office with a cancellation stamp design only used that one day. First Day Cover stamp collectors usually want them but many already have bought themselves one or two. Maybe you can find one who needs the San Francisco Post office cancellation? They would likely pay the most for the ones you have. This is a cross-collectable made for stamp and paper money collectors. The current stamp would be put on the bill or coin holder, and then canceled with a special U.S. Postal Service Cancellation Stamp, over the stamp with the date of issue being part of the cancellation stamp. Of course the cancellation stamp is only used that one day. They are collectable! They may be a little more collectable than others. But the price is still less than$8 or so since they had so many made and now the Bicentennial is more than 30 years ago. They did not sell them all when they were issued and the rest went at a discount.
1950 $5 (five) dollar bill - How much is it worth? [ 7 Answers ] Serial # B52499934 C Signatures: Priest/Humphrey This bill has no tears or stains. All writing is very clear. The bill is a little wrinkled. A little crisp. Any info will be greatly appreciated! :) How much is my 1976 2 dollar bill worth [ 6 Answers ] How much is my 1976 2 dollar bill worth How much is my dollar bill worth? [ 0 Answers ] 1935e, blue seal,good conditions, s# v21166670g in blue 1953 two dollar bill, worth? [ 10 Answers ] I have a 1953 two dollar bill in very goo condition. The bill is still very crisp. What is it worth? Thanks What's my two dollar bill worth [ 1 Answers ] I have two$2.00 bills that are dated 1976. On one of them my grandfather wrote at the bottom 4-13-1976 1st issue. I will call this one A. The issuing bank is a "G" on banknote A with the serial number G12037878A. In the upper left corner across from the 7 is B4 and on the right bottom hand side is... | 2017-11-24 16:46:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46790453791618347, "perplexity": 1714.3981311500227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934808260.61/warc/CC-MAIN-20171124161303-20171124181303-00665.warc.gz"} |
https://booksandravensblog.wordpress.com/2018/03/ | ## Camp NaNoWriMo 2018 \\ My goal, playlist and plans for April
Hello Book Ravens!
So Camp NaNoWriMo is coming in April, and I think I might actually participate this year. Generally speaking, I prefer November NaNo, but I have a certain idea I’ve been really into writing lately, so I’m giving it a go!
So here’s my writing goals, my playlist and a rough idea of my story.
## The Cheesiest Book Tag \\ possibly the most fun tag I’ve ever seen
Hello Book Ravens!
I was debating what to write for my Wednesday post, and then I came across this tag and it was so much fun! I found it at Thrice Read and I like how they answered the questions, so I decided to try it myself!
I also liked how they formatted the tag. I’ve always had trouble finding a good way to write tags so they read nicely, and they did it so well! So I borrowed it.
## Absent Parents \\ Tackling Tropes
Hello Book Ravens!
So apparently, in YA, everyone over the age of thirty is dead. Or missing. Or evil. And what are these “parents” you speak of?
Let’s be honest: a lot of YA characters could seriously use a parental figure around. So where are they? Why do authors exclude them? Do we even need them?
## Pink Cloud Candles \\ the story behind Megan, her candle shop, this community and why you should think before you type
Hello Book Ravens,
If you’ve been on bookstagram in the past few months, you probably know the basics of this situation. If not, let me explain and fill you in.
A bit of a disclaimer first: I was not personally involved, and so I cannot speak from experience, and also this isn’t exactly an uplifting story. In fact, I almost don’t know how to write this.
## This or That Book Tag
Hello Book Ravens!
I’m not going to lie – I’m writing this post in a bit of a hurry. I want to be able to keep to my schedule and still watch movies with my family, and I’m high-key typing faster than I have in years.
…which explains why I chose a book tag and not a thought-provoking book review or a debatable discussion (do I ever make those??)
I liked the looks of this tag, and I was not tagged by anyone.
The creator was ayundabhuwana’s blog, and I’m tagging Tyr @ The Perks of Being a Nerd and Elizabeth @ Redgal Musings (do you do tags? I don’t actually know)
Hello Book Ravens!
So Camp NaNoWriMo is coming up in April, and I plan to participate.
NaNoWriMo always gets me excited because everyone talks about it, and I love hearing about others stories! Fantasy, contemporary, magical realism, unknown, etc. I always get so excited about other writers!
But I also feel kind of cut off from it, because I don’t talk about my writing. Not that I don’t want to! I do love talking about it! I’m just a bit paranoid.
## Book Documentaries I Want to be Made \\ Netflix Where Art Thou?
Hello Book Ravens!
Ever since watching CNN’s documentary on the Kennedy’s (which I highly recommend, BTW) I’ve been on a big documentary kick. Netflix obviously has tons, CNN has a few, and I think Hulu might have one or two.
And while I can find shows for food, lifestyle, travel, history, cultural people, nature…I don’t see any of books.
And that got me thinking: Netflix should make documentaries for the bibliophiles of the world. And I have a few ideas.
## Homeschoolers \\ Tackling Tropes
Hello Book Ravens!
I’ve been meaning to talk about tropes more, as it’s one of the topics I get most annoyed passionate about.
Why is today the day to start? Well, I’m in a particularly salty mood and I have no idea why. So why not go on a bit of a rant?
I am a homeschooler, and the homeschoolers in YA… are not great. It’s one of my most hated tropes right now,
So let me explain why – and I’ll even give you some tips for writing homeschooled characters!
## Popular Authors That I Have Never Read \\ AKA the shameful side of my TBR
Hello Book Ravens!
It’s not unknown that I’m not very good at keeping up with new releases. I read rather slowly, and because I’m trying to read all the cool new books coming out* I tend to forget about older, popular releases.
So I figured I’d make a list – because maybe some of these books are horrible and I dodged a bullet…or maybe I’ve totally missed out on the best book ever.**
*why are there so many?
**unlikely, but you never know
## The Upside of Unrequited \\ Spoiler-Free Review
Title: The Upside of Unrequited
Author: Becky Albertalli
Publisher: Balzer + Bray
Genre: Contemporary
Format: Hardcover (Owlcrate edition) | 2018-09-19 15:47:42 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8164511919021606, "perplexity": 2603.4729863628736}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156252.31/warc/CC-MAIN-20180919141825-20180919161825-00241.warc.gz"} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.