Dataset Viewer (auto-converted to Parquet)

Columns: url (string, 15–2.42k chars) · text (string, 100–1.02M chars) · date (string, 19 chars) · metadata (string, 1.07k–1.1k chars)
https://www.acmicpc.net/problem/15838
Time limit: 2 s · Memory limit: 512 MB · Submissions: 9 · Accepted: 9 · Solvers: 9 · Ratio: 100.000%

## Problem

Wak Sani Satay is a humble stall located in Kajang and has been around since 1969. Many like Wak Sani's satay because the meat is juicy and tender, served with creamy and sweet kuah kacang, alongside nasi impit, cucumber, and onion slices. Wak Sani usually calculates his net profit at the end of the week. The net profit is calculated by subtracting the cost from the gross profit. He can get 85 sticks of satay from 1 kg of meat. The prices for the 3 types of satay are shown in Table 1. The price for nasi impit is RM0.80 each, while cucumber and onion slices are free of charge. The cost of making satay for each meat type is shown in Table 2. The cost of spices to marinate satay is RM8.00 for every kilogram of meat, and the cost for each nasi impit is RM0.20.

| Satay   | Price per stick |
|---------|-----------------|
| Chicken | RM0.80          |
| Beef    | RM1.00          |
| Lamb    | RM1.20          |

Table 1

| Meat    | Price per kg |
|---------|--------------|
| Chicken | RM7.50       |
| Beef    | RM24.00      |
| Lamb    | RM32.00      |

Table 2

Write a program to find the weekly net profit.

## Input

The input consists of several test cases. The first line of each test case is an integer N (1 ≤ N ≤ 7), the number of days the stall is open to customers during the week. It is followed by N lines of data; each line gives one day's sales (in sticks) of chicken satay, beef satay, and lamb satay, and the number of nasi impit sold. Input is terminated by a test case where N is 0.

## Output

For each test case, output a line in the format "Case #x: RM" where x is the case number (starting from 1), followed by the calculated net profit in Malaysian currency format as shown in the sample output.

## Sample Input 1

1
30 40 34 5
2
0 0 0 0
1 1 1 1
3
1000 1000 1000 10
5000 3000 4000 12
100 300 10 6
0

## Sample Output 1

Case #1: RM71.27
Case #2: RM2.57
Case #3: RM10119.98
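A sketch of one way to compute the answer (constants are taken from Tables 1 and 2; the per-stick cost spreads the meat and spice price of 1 kg over its 85 sticks; the function name is mine, not part of the problem):

```python
# Prices (Table 1), meat costs (Table 2), and fixed costs from the statement.
STICKS_PER_KG = 85
STICK_PRICE = {"chicken": 0.80, "beef": 1.00, "lamb": 1.20}
MEAT_COST_PER_KG = {"chicken": 7.50, "beef": 24.00, "lamb": 32.00}
SPICE_COST_PER_KG = 8.00
NASI_PRICE, NASI_COST = 0.80, 0.20

def net_profit(days):
    """days: list of (chicken, beef, lamb, nasi_impit) daily sales."""
    total = 0.0
    for c, b, l, n in days:
        gross = (c * STICK_PRICE["chicken"] + b * STICK_PRICE["beef"]
                 + l * STICK_PRICE["lamb"] + n * NASI_PRICE)
        # Each stick carries 1/85 of the (meat + spice) cost of a kilogram.
        cost = (c * (MEAT_COST_PER_KG["chicken"] + SPICE_COST_PER_KG)
                + b * (MEAT_COST_PER_KG["beef"] + SPICE_COST_PER_KG)
                + l * (MEAT_COST_PER_KG["lamb"] + SPICE_COST_PER_KG)) / STICKS_PER_KG
        cost += n * NASI_COST
        total += gross - cost
    return total

# First sample case: one day with sales 30 40 34 5.
print(f"Case #1: RM{net_profit([(30, 40, 34, 5)]):.2f}")  # Case #1: RM71.27
```

The same function reproduces the other two sample answers (RM2.57 and RM10119.98), which suggests the judge allows fractional kilograms of meat.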
2022-07-02 11:01:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1830446422100067, "perplexity": 5826.719034262543}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104054564.59/warc/CC-MAIN-20220702101738-20220702131738-00314.warc.gz"}
https://blog.juliosong.com/linguistics/mathematics/category-theory-notes-5/
Arrows are so vital to category theory that Awodey jokingly refers to the theory as “archery” (Category Theory, p. 2). Given two objects in a category, an arrow between them, if it exists, simply connects them: $A \rightarrow B.$ And if one arrow’s head overlaps with another’s tail, then the two arrows combined necessarily correspond to a third arrow in the category. For example, in the diagram there are three arrows $f, g,$ and $g \circ f.$ Such arrow chaining, called composition, is part of what it means to be a category. We needn’t care about what the objects $A, B, C$ or the arrows $f, g, g\circ f$ stand for. Category theory is the abstract study of objects and arrows, and everything works just fine even if we don’t assign interpretations to them. From a linguistic perspective, we can view this abstract theory of categories (or metacategories in Mac Lane’s words; CWM, p. 7) as a purely syntactic system without prespecified semantics. The objects and arrows are its vocabulary, composition and the like are its rules, while the possible interpretations of this syntax (of which there are presumably many!) are a secondary, domain-specific issue. Objects, arrows, and composition are part of the axiomatic definition of a category.

## Parallel arrows

Little diagrams like the one above are standardly given in textbooks. Despite their convenience, however, they may become a source of confusion for beginners. The problem is that they are too neat—to the extent that things begin to look contrived. I remember when I first started learning category theory, I almost believed that such neat diagrams were all there was to categories—that they were perfect depictions of the categories behind them. Take a bunch of objects, connect them by composable arrows, and voilà—we have a category! So, in my naive imagination category theory was not only archery but also astrology. 😂 But such neatness was just my illusion.
I later realized that there could actually be multiple arrows between each pair of objects. In hindsight, I can’t believe that I hadn’t realized this right away. But again, that might be a sign that textbooks should give this point more emphasis so that students don’t miss it. So, in the above diagram $g \circ f$ is probably just one of the many arrows connecting $A$ and $C$. Crucially, among all the $A \rightarrow C$ arrows only one is the composite of $f$ and $g$, while all the others are just its random neighbors. In fact, the arrow space between each pair of objects can be highly populated. How populated? More than sets can describe! It’s a basic mathematical fact that there are collections larger than sets. Only when the arrow collection between two objects $X$ and $Y$ is “small” enough can we call it a set, or more precisely a hom-set, denoted by $hom(X,Y)$. For instance, the collection of arrows between $A$ and $C$ in the above diagram, when it’s a set, is written $hom(A,C).$ The pronunciation of $hom$ is nonunanimous. I’ve heard both /hoʊm/ and /hɑm/ from distinguished experts. Etymologically hom comes from homomorphism, which might explain the pronunciation nonunanimity. Considering the possibly numerous or even countless parallel arrows, the “true colors” of categories may be much less neat than what we see in textbook diagrams. A fully realistic depiction of a category may well be a clump of black clutter whose details are indiscernible to the human eye. Maybe that’s why textbooks choose to draw out only those arrows relevant to the topic under discussion. There are also less cluttered categories. A basic example is the categorical conception of a poset—a set equipped with a reflexive, transitive, and antisymmetric binary relation called a partial order. The objects of this category are elements of the set, and the arrows are instances of the partial order.
Since a relation either holds or doesn’t hold, with no third possibility, between any two objects in a poset category there is either no arrow or only one. That is, the hom-sets in a poset category are either empty or singleton. A special type of poset is a chain, like the Big Dipper above! In a poset category arrow composition is defined by transitivity (e.g., if $A\le B$ and $B\le C$ then $A\le C$). Since in a category all composable arrows must actually compose, composite arrows are usually omitted when the definition of composition is not the topic being discussed. This convention makes diagrams even neater.

## Commutative diagrams

What’s the relationship between parallel composite arrows? Well, sometimes they may be equivalent in an algebraic sense, just like $1+4=2+3.$ And a diagram where all parallel composite arrows are equivalent is said to commute. For example, in the diagram, if $g \circ f = i \circ h,$ then the diagram is commutative. Commutative diagrams are another vital part of category theory, and they are closely related to arrow composition. Normally one wouldn’t expect something as clearly defined as commutative diagrams to be confusing, but the notion—or more exactly what’s left implicit in it—did confuse me for a while. My confusion was: how can we tell whether two paths are equivalent or not? Initially I had thought two paths sharing the same source and target were equivalent—all roads lead to Rome! But soon I realized there must be something wrong with this idea, because if it were true then all parallel arrows would end up being equivalent; in other words, all diagrams would be commutative. But if that were the case, why would mathematicians bother coming up with a notion of commutativity at all, let alone cherishing it so much? If that were the case, saying a diagram is commutative would be like saying a forest has trees!
In hindsight, a major cause of my confusion was that the introductory texts I used only illustrated commutative diagrams but not noncommutative ones, which gave me the false impression, perhaps subconsciously, that commutativity came for free, or at least at a very low price—as if to make a diagram commute all we needed to do was draw parallel paths between objects. But that’s just another illusion born of the neat textbook diagrams. Path equivalence is essentially an algebraic property and must be proven algebraically. When two paths can’t be proven equivalent, then they simply aren’t, and the diagram doesn’t commute. Noncommutative diagrams aren’t outlaws. They should be given equal status with commutative diagrams in pedagogical materials so that beginners, especially those with less mathematical experience, can get a more balanced understanding of commutativity. Smith’s 2018 draft textbook Category Theory: A Gentle Introduction (henceforth Gentle Intro) has the clearest explication of this issue I’ve ever seen:

But note: to say a given diagram commutes is just a vivid way of saying that certain identities hold between composites – it is the identities that matter. And note too that merely drawing a diagram with different routes from e.g. A to D in the relevant category doesn’t always mean that we have a commutative diagram – the identity of the composites along the paths in each case has to be argued for! (p. 29)

Tai-Danae Bradley gives a simple example of a noncommutative diagram in her blog post “Commutative diagrams explained”: the following diagram of real-valued functions, where $id$ is the identity function and $zero$ is a constant function that maps all real numbers to $0$, obviously doesn’t commute, because the parallel paths $\mathbb{R}\rightarrow\mathbb{R}$ don’t return the same output for the same input. Bradley’s example essentially demonstrates a set-theoretic criterion of arrow equivalence—function extensionality.
This principle states that two functions are equal iff their values are equal at every argument. Since $id\circ id$ and $zero$ don’t return equal values for every real number, they are not equal as functions and hence not equivalent as arrows. Since for any real number argument $(+5)\circ(+3)$ and $(+6)\circ(+2)$ return the same value, they are equal functions and equivalent paths, whence the diagram commutes. Admittedly, not all diagrams can be checked for commutativity in this way, because many categories have nothing to do with sets and functions. But the caveat remains the same: we can’t declare a diagram commutative on a whim but can only verify (or falsify) commutativity via a proof.

## Takeaway

• There can be multiple parallel arrows between a pair of categorial objects. Textbooks don’t depict all of them in diagrams because many are irrelevant to the topic(s) under discussion.
• Commutativity doesn’t come for free but must be proved, by showing that the two sides of a hypothetical path equation are really equal.
• Noncommutative diagrams are also valid diagrams and shouldn’t be glossed over in textbooks for total beginners.
• In categories where arrows are functions, commutativity can be checked via function extensionality.
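Bradley's two squares can be checked mechanically in the category of sets and functions. This is only a toy sketch (it samples integer arguments rather than quantifying over all reals), but it shows extensionality doing the work:

```python
# Arrow composition in the category of sets and functions.
compose = lambda g, f: (lambda x: g(f(x)))

id_ = lambda x: x      # the identity arrow on R
zero = lambda x: 0     # the constant-zero arrow

# id . id and zero are parallel R -> R paths, but they already disagree
# at x = 1, so Bradley's diagram does not commute.
print(compose(id_, id_)(1), zero(1))  # 1 0

# (+5) . (+3) and (+6) . (+2) agree at every argument (extensionality),
# so that square does commute.
p = compose(lambda x: x + 5, lambda x: x + 3)
q = compose(lambda x: x + 6, lambda x: x + 2)
print(all(p(x) == q(x) for x in range(-1000, 1000)))  # True
```

One disagreeing argument falsifies commutativity; proving it requires an argument covering every input, which here is the pointwise identity $(x+3)+5 = (x+2)+6$.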
2021-06-20 00:41:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8682175278663635, "perplexity": 520.5637521471431}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487653461.74/warc/CC-MAIN-20210619233720-20210620023720-00501.warc.gz"}
https://ask.sagemath.org/answers/46824/revisions/
This indeed looks like a bug. As a workaround, test whether the graph has loops before computing the chromatic polynomial.

    sage: G = Graph([[1, 1]], multiedges=True, loops=True)
    sage: G.has_loops()
    True

Fixing the bug should just amount to special-casing graphs with loops (testing for loops as above) and including a doctest (to prevent the bug from reappearing).
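A pure-Python sketch (not Sage's implementation; the function and graph representation are mine) of why loops deserve a special case: under deletion-contraction, any multigraph containing a loop has chromatic polynomial identically zero, so a loop test is the natural base case.

```python
# Evaluate the chromatic polynomial of a multigraph at integer k by
# deletion-contraction.  A loop makes proper coloring impossible, so
# P(G, k) = 0 -- the special case the workaround above tests for.
def chromatic_poly(vertices, edges, k):
    """vertices: iterable of hashables; edges: list of (u, v) pairs."""
    vertices = set(vertices)
    # Base case 1: a loop can never be properly colored.
    if any(u == v for u, v in edges):
        return 0
    # Parallel edges impose the same constraint once; deduplicate them.
    simple = {frozenset((u, v)) for u, v in edges}
    # Base case 2: no edges left -> k independent choices per vertex.
    if not simple:
        return k ** len(vertices)
    u, v = tuple(next(iter(simple)))
    deleted = [tuple(e) for e in simple - {frozenset((u, v))}]
    # Contract v into u; this may create loops, caught by base case 1.
    contracted = [(u if a == v else a, u if b == v else b) for a, b in deleted]
    return (chromatic_poly(vertices, deleted, k)
            - chromatic_poly(vertices - {v}, contracted, k))

print(chromatic_poly([1], [(1, 1)], 5))                        # loop -> 0
print(chromatic_poly([1, 2, 3], [(1, 2), (2, 3), (1, 3)], 3))  # K3 -> 6
```

The loop check sits naturally at the top of the recursion because contracting an edge of a triangle, for instance, produces a loop-free multigraph, while contracting an edge parallel to another produces a loop.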
2020-05-29 01:32:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6453588008880615, "perplexity": 7662.362247643384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347401004.26/warc/CC-MAIN-20200528232803-20200529022803-00144.warc.gz"}
https://rjlipton.wordpress.com/2015/03/17/leprechauns-will-find-you/
And perhaps even find your hidden prime factors Neil L. is a Leprechaun. He has visited me every St. Patrick’s Day since I began the blog in 2009. In fact he visited me every St. Patrick’s Day before then, but I never noted him. Sometimes he comes after midnight the night before, or falls asleep on my sofa waiting for me to rise. But this time there was no sign of him as I came back from a long day of teaching and meetings and went out again for errands. Today Ken and I wish you all a Happy St. Patrick’s Day, and I am glad to report that Neil did find me. When I came back I was sorting papers and didn’t see him. I didn’t know he was there until I heard, Top o’ the evening to ye. Neil continued as he puffed out some green smoke: “I had some trouble finding you this year. Finally got where you were—good friends at your mobile provider helped me out.” I was surprised, and told him he must be kidding. He answered, “Of course I always can find you, just having some fun wi’ ye.” Yes I agreed and added that I was staying elsewhere. He puffed again and said “yes I understand.” I said I had a challenge for him, a tough challenge, and asked if he was up for it. He said, “Hmmm, I do not owe you any wishes, but a challenge… Yes I will accept a challenge from ye, any challenge that ye can dream up.” He laughed, and added, “we leprechauns have not lost a challenge to a man for centuries. I did have a cousin once who messed up.” ## The Cousin’s Story I asked if he would share his cousin’s story, and he nodded yes. “‘Tis a sad story. My cousin was made a fool of once, a terrible black mark on our family. Why, we were restricted from any St Patrick Day fun for a hundred years. Too long a punishment in our opinion—the usual is only a few decades. Do ye want to know what my cousin did? Or just move on to the challenge? My time is valuable.” I nodded sympathetically, so he carried on. 
“One fine October day in Dublin me cousin was sitting under a bridge—under the lower arch where a canalside path went. “He spied a gent walking with his wife along the path but lost in thought and completely ignoring her. He thought the chap would be a great mark for a trick but forgot the woman. She spied him and locked on him with laser eyes and of course he was caught—he could not run unless she looked away. “He tried to ply her with a gold coin but she knew her leprechaun lore and was ruthless. He resigned himself to granting wishes but she would not have that either. With her stare still fixed she took off her right glove, plucked a shamrock, and laid both at his feet for a challenge. A woman had never thrown a challenge before, and there was not in the lore a provision for return-challenging a woman. So my cousin had to accept her challenge. It came with intense eyes: “I challenge you to tell the answer to what is vexing and estranging my husband.” “Aye,” Neil sighed, “you or I or any lad in the face of such female determination would be reduced to gibberish, and that is what me cousin blurted out: ${i^2 = j^2 = k^2 = ijk = -1.}$ “The gent looked up like the scales had fallen from his eyes, and he embraced his wife. This broke the stare, and my cousin vanished in great relief. And did the gent show his gratitude? Nay—he even carved that line on the bridge but gave no credit to my cousin.” I clucked in sympathy, and Neil seemed to like that. He put down his pipe and gave me a look that seemed to return comradeship. Then I understood who the “cousin” was. Not waiting to register my understanding, he invited my challenge as a peer. ## My Challenge I had in fact prepared my challenge last night—it was programmed by a student in my graduate advanced course using a big-integer package. Burned onto a DVD was a Blum integer of one trillion bits. I pulled it out of its sleeve and challenged Neil to factor it.
The shiny side flashed a rainbow, and I joked there could really be a pot of gold at the end of it. Neil took one puff and pushed the DVD—I couldn’t tell how—into my MacBook Air. The screen flashed green and before I could say “Jack Robinson” my FileZilla window opened. Neil blew mirthful puffs as the progress bar crawled across. A few minutes later came e-mail back from my student, “Yes.” I exclaimed, “Ha—you did it—but the point isn’t that you did it. The point is, it’s doable. You proved that factoring is easy. Could be quantum or classical but whatever—it’s practical.” Neil puffed and laughed as he handed me back the suddenly-reappeared disk and said, “Aye, do ye really think I would let your lot fool me twice?” I replied, “Fool what? You did it—that proves it.” “Nay,” he said, “indeed I did it—I cannot lie—but ye can’t know how I did it enough to tell whether a non-leprechaun can do it. And a computer that ye build—be it quantum or classical or whatever—is a non-leprechaun.” It hit me that a quantum computer that cannot be built is a leprechaun, and perhaps Peter Shor’s factoring algorithm only runs on those. But I wasn’t going to be distracted away from my victory. “How can it matter whether a leprechaun does it?” Neil retorted that he didn’t have to answer a further challenge, “it’s not like having three wishes, you know.” But he continued, “since ye are a friend, I will tell ye three ways it could be, and you can choose one ye like but know ye: it could still be a fourth way. 1. “I could have been around when your student made the number, even gone back in time to see it. I did take a long time to find ye, did I not? 2. “Since y’are not a woman I have a return challenge, and I don’t have to give it after yours or even tell ye. I get some control and can influence you to give instructions that will lead to a particular number I am prepared for. We leprechauns do that with choices of RSA keys all the time. 3. 
“Everything in your world that you create by rules is succinct. Of course so was that number. Factoring of succinct numbers is easy—indeed in this world, everything is easy. “And I left ye a factor, but your student already had it, so I left ye no net knowledge at all.” And with a puff of smoke, he was gone. ## Open Problems Did I learn anything from the one-time factoring of my number? Happy St. Patrick’s Day anyway. [moved part of dialogue at end from 2. to 1.]
2018-01-19 11:13:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4385192096233368, "perplexity": 2969.151865460479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887973.50/warc/CC-MAIN-20180119105358-20180119125358-00559.warc.gz"}
https://worldbank.github.io/PIP-Methodology-2022-04/convert.html
DISCLAIMER: This is not the most recent version of the methodological handbook. You can find the most recent version here: https://worldbank.github.io/PIP-Methodology.

# Chapter 3 Converting welfare aggregates

Welfare aggregates from household surveys are often expressed in national currency units in prices around the time of the fieldwork. To use a welfare aggregate from a particular survey to estimate extreme poverty at the international poverty line, the welfare aggregate needs to be converted to a unit comparable across time and across countries. To this end, first, Consumer Price Indices (CPIs) are used to express the aggregates in the same prices within a country. Second, Purchasing Power Parities (PPPs) are used to express all welfare aggregates in the same currency by adjusting for price differences across countries.

## 3.1 Consumer Price Indices (CPIs)

Consumer price indices (CPIs) summarize the prices of a representative basket of goods and services consumed by households within an economy over a period of time. Inflation (deflation) occurs when there is a positive (negative) change in the CPI between two time periods. With inflation, the same amount of rupees is expected to buy more today than one year from today. CPIs are used to deflate the nominal income or consumption expenditure of households so that the welfare of households can be evaluated and compared between two time periods at the same prices. The primary source of CPI data for the Poverty and Inequality Platform is the IMF's International Financial Statistics (IFS) monthly CPI series. The simple average of the monthly CPI series for each calendar year is used as the annual CPI. When IFS data are missing, CPI data are obtained from other sources, including the IMF's World Economic Outlook (WEO) and National Statistical Offices (NSOs). For more details on the different sources of CPI data used for global poverty measurement, see Figure 1 of Lakner et al. (2018) and the "What's New" technical notes accompanying PIP updates.
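A minimal sketch of the deflation step described above. The CPI values and the function name are hypothetical, not PIP data:

```python
# Deflate a nominal welfare aggregate to reference-year (2011) prices by
# dividing out CPI growth between the reference year and the survey year.
# The index values below are made up for illustration (2011 = 100).
cpi = {2011: 100.0, 2015: 128.0}

def to_reference_prices(welfare_nominal, survey_year, ref_year=2011):
    return welfare_nominal * cpi[ref_year] / cpi[survey_year]

# 640 LCU in 2015 prices buys what 500 LCU bought in 2011:
print(to_reference_prices(640.0, 2015))  # 500.0
```

The same ratio applied in reverse would inflate a pre-2011 aggregate up to 2011 prices.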
CPI series are rebased to the International Comparison Program (ICP) reference year, currently 2011.

## 3.2 Purchasing Power Parities (PPPs)

Purchasing power parities (PPPs) are used in global poverty estimation to adjust for price differences across countries. PPPs are price indices published by the International Comparison Program (ICP) that measure how much it costs to purchase a basket of goods and services in one country compared with how much it costs to purchase the same basket in a reference country, typically the United States. PPP conversion factors are preferred to market exchange rates for the measurement of global poverty because the latter overestimate poverty in developing countries, where non-tradable services are relatively cheap (a phenomenon known as the Balassa-Samuelson-Penn effect). The revised 2011 PPPs are currently used to convert household welfare aggregates, expressed in local currency units in 2011 prices, into a common internationally comparable currency unit. The PPP conversion only affects the cross-country comparison of levels of welfare; the growth in the survey mean for a particular country over time is the same whether it is expressed in constant local currency or in USD PPP. The PPP estimates used for global poverty measurement are the consumption PPPs from the ICP, with a few exceptions. PPPs are imputed for six countries, namely Egypt, Iraq, Jordan, Lao PDR, Myanmar, and Yemen, where there are concerns over the coverage and/or quality of the underlying ICP price collection. Though PPPs are supposed to be nationally representative, to account for possible urban bias in ICP data collection, separate rural and urban PPPs are computed for China, India, and Indonesia using official national PPPs, the ratio of urban to rural poverty lines, and the urban share in ICP price data collection.
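Putting the two steps together, a hedged sketch of converting an annual local-currency aggregate into 2011 PPP dollars per person per day. All numbers and the helper name are hypothetical, not PIP code:

```python
# Step 1: deflate to 2011 local prices with the CPI ratio.
# Step 2: divide by the country's 2011 consumption PPP (LCU per PPP dollar).
# Step 3: convert the annual figure into a daily one.
def to_ppp_dollars_per_day(annual_welfare_lcu, cpi_survey_year, cpi_2011, ppp_2011):
    in_2011_prices = annual_welfare_lcu * cpi_2011 / cpi_survey_year
    return in_2011_prices / ppp_2011 / 365.0

# e.g. 300,000 LCU per year, survey-year CPI of 150 vs 100 in 2011,
# and a PPP of 25 LCU per PPP dollar:
print(round(to_ppp_dollars_per_day(300_000, 150.0, 100.0, 25.0), 2))  # 21.92
```

Note that replacing the PPP with another country's PPP changes only the level, not the growth rate over time, matching the point made above.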
## 3.3 Derivation of the international poverty line

Most countries have a national poverty line, which summarizes the value of consumption or income per person (or per adult equivalent) needed to be non-poor. These national poverty lines are typically estimated by National Statistical Offices and reflect country-specific definitions of what it means to be poor. For low- and middle-income countries, the lines usually reflect the cost of purchasing a bundle of food items necessary to obtain minimum daily calories, to which a basic non-food component is added. For high-income countries, the national poverty lines are often relative, defined with respect to the national mean or median income. To compare poverty across countries one needs a common standard; national poverty lines, which differ from one country to the next, cannot be used. The international poverty line is an attempt to summarize the national poverty lines of the poorest countries. Since 1990, the World Bank has derived international poverty lines from the national poverty lines of the poorest countries of the world. In 1990, this resulted in the "Dollar-a-day" poverty line. Whenever new rounds of PPPs have been released, the nominal value of the international poverty line has been updated. This does not mean that the real value of the international poverty line has changed. The current international poverty line of $1.90/day in 2011 PPPs was derived as the mean national poverty line of 15 of the poorest countries. That is, it represents a typical poverty line of some of the poorest countries in the world. The line is derived by first converting the national poverty lines into PPP-adjusted dollars in the same manner as welfare distributions are converted. Ravallion, Chen, and Sangraula (2009) selected the 15 poorest countries with an available national poverty line, ranked by household final consumption expenditure per capita, around 2008, when the 2005 PPPs were released.
An IPL of $1.25/day per person, expressed in 2005 PPP dollars, was determined as the mean of the national poverty lines of these countries. When the 2011 PPPs were released in 2014, the same 15 national poverty lines were used, but now converted to 2011 PPPs, yielding an IPL of $1.88, which was rounded to $1.90. When the 2011 PPPs were revised in 2020, the IPL was similarly updated but remained unchanged at $1.90. Below is the list of the 15 poorest countries and their national poverty lines denominated in 2005, original 2011, and revised 2011 PPPs.

| Country | Survey year | Poverty line, 2005 PPP | Poverty line, original 2011 PPP | Poverty line, revised 2011 PPP |
|---|---|---|---|---|
| Chad | 1995-96 | 0.87 | 1.28 | 1.29 |
| Ethiopia | 1999-2000 | 1.35 | 2.03 | 1.98 |
| Gambia, The | 1998 | 1.48 | 1.82 | 1.81 |
| Ghana | 1998-99 | 1.83 | 3.07 | 3.11 |
| Guinea-Bissau | 1991 | 1.51 | 2.16 | 2.08 |
| Malawi | 2004-05 | 0.86 | 1.34 | 1.33 |
| Mali | 1988-89 | 1.38 | 2.15 | 2.13 |
| Mozambique | 2002-03 | 0.97 | 1.26 | 1.24 |
| Nepal | 2003-04 | 0.87 | 1.47 | 1.47 |
| Niger | 1993 | 1.10 | 1.49 | 1.48 |
| Rwanda | 1999-2001 | 0.99 | 1.50 | 1.47 |
| Sierra Leone | 2003-04 | 1.69 | 2.73 | 2.64 |
| Tajikistan | 1999 | 1.93 | 3.18 | 3.35 |
| Tanzania | 2000-01 | 0.63 | 0.88 | 0.88 |
| Uganda | 1993-98 | 1.27 | 1.77 | 1.77 |
| Mean | | 1.25 | 1.88 | 1.87 |

## 3.4 Derivation of other global poverty lines

In addition to the international poverty line, the World Bank uses two higher poverty lines to measure and monitor poverty in countries with a low incidence of extreme poverty. These higher lines, namely $3.20 and $5.50 in revised 2011 PPPs, are derived as the median values of the national poverty lines of lower- and upper-middle-income countries, respectively. When replicating the derivation of these lines with the revised 2011 PPPs, the estimate for the $3.20 line does not change, while the $5.50 line increases by approximately $0.15. The World Bank decided to keep all the global poverty lines unchanged, including the $5.50 line. These poverty lines are goalposts to be held fixed over time and they have become widely used, so there is a cost to revising them frequently.
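The derivation can be checked directly from the table of 15 national poverty lines: averaging each column reproduces the $1.25, $1.88, and $1.87 means quoted above (values copied from the table; only the rounding convention is my assumption):

```python
# National poverty lines of the 15 poorest countries, per PPP round:
# (2005 PPP, original 2011 PPP, revised 2011 PPP).
lines = [
    (0.87, 1.28, 1.29), (1.35, 2.03, 1.98), (1.48, 1.82, 1.81),
    (1.83, 3.07, 3.11), (1.51, 2.16, 2.08), (0.86, 1.34, 1.33),
    (1.38, 2.15, 2.13), (0.97, 1.26, 1.24), (0.87, 1.47, 1.47),
    (1.10, 1.49, 1.48), (0.99, 1.50, 1.47), (1.69, 2.73, 2.64),
    (1.93, 3.18, 3.35), (0.63, 0.88, 0.88), (1.27, 1.77, 1.77),
]
means = [round(sum(col) / len(col), 2) for col in zip(*lines)]
print(means)  # [1.25, 1.88, 1.87]
```

The middle figure, $1.88, is the one that was rounded up to the current $1.90 line.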
The global poverty lines were chosen with the PPPs available at the time using a reasonable method; thereafter, we view them as fixed parameters to monitor progress in different parts of the global distribution of income or consumption.

### References

Atamanov, Aziz, Dean Jolliffe, Christoph Lakner, and Espen Beer Prydz. 2018. “Purchasing Power Parities Used in Global Poverty Measurement.” Global Poverty Monitoring Technical Note 5. http://documents.worldbank.org/curated/en/764181537209197865/Purchasing-Power-Parities-Used-in-Global-Poverty-Measurement.

Atamanov, Aziz, Christoph Lakner, Daniel Gerszon Mahler, Samuel Kofi Tetteh Baah, and Judy Yang. 2020. “The Effect of New PPP Estimates on Global Poverty: A First Look.” Global Poverty Monitoring Technical Note 12. https://openknowledge.worldbank.org/handle/10986/33816.

Chen, Shaohua, and Martin Ravallion. 2008. “China Is Poorer Than We Thought, but No Less Successful in the Fight Against Poverty.” Policy Research Working Paper 4621. https://openknowledge.worldbank.org/handle/10986/6674.

———. 2010. “The Developing World Is Poorer Than We Thought, but No Less Successful in the Fight Against Poverty.” The Quarterly Journal of Economics 125 (4): 1577–1625. https://doi.org/10.1162/qjec.2010.125.4.1577.

Ferreira, Francisco H. G., Shaohua Chen, Andrew Dabalen, Yuri Dikhanov, Nada Hamadeh, Dean Jolliffe, Ambar Narayan, Espen Beer Prydz, Ana Revenga, and Prem Sangraula. 2016. “A Global Count of the Extreme Poor in 2012: Data Issues, Methodology and Initial Results.” The Journal of Economic Inequality 14 (2): 141–72. https://link.springer.com/article/10.1007/s10888-016-9326-6.

Jolliffe, Dean, and Espen Beer Prydz. 2015. Global Poverty Goals and Prices: How Purchasing Power Parity Matters. Policy Research Working Paper 7256. https://openknowledge.worldbank.org/handle/10986/21988.

———. 2016. “Estimating International Poverty Lines from Comparable National Thresholds.” The Journal of Economic Inequality 14 (2): 185–98. https://link.springer.com/article/10.1007/s10888-016-9327-5.
http://www.ams.org/mathscinet-getitem?mr=35:2080
MathSciNet bibliographic data: MR211198 (35 #2080) 42.50 (46.80)

Rosenthal, Haskell P. Projections onto translation-invariant subspaces of $L^{p}(G)$. Mem. Amer. Math. Soc. No. 63 (1966), 84 pp.
http://dev.goldbook.iupac.org/terms/view/F02543
fugacity, $$f$$, $$\tilde{p}$$

https://doi.org/10.1351/goldbook.F02543

Of a substance B, $$f_{\text{B}}$$ or $$\tilde{p}_{\text{B}}$$, in a gaseous mixture is defined by $$f_{\text{B}}=\lambda _{\text{B}}\ \lim _{p\rightarrow 0}\frac{p_{\text{B}}}{\lambda _{\text{B}}}$$, where $$p_{\text{B}}$$ is the partial pressure of B and $$\lambda_{\text{B}}$$ its absolute activity.

Source: Green Book, 2nd ed., p. 50

See also: PAC, 1984, 56, 567 (Physicochemical quantities and units in clinical chemistry with special emphasis on activities and activity coefficients (Recommendations 1983)); PAC, 1994, 66, 533 (Standard quantities in chemical thermodynamics. Fugacities, activities and equilibrium constants for pure and mixed phases (IUPAC Recommendations 1994)).
http://www.aot-math.org/article_46634.html
# Variants of Weyl's theorem for direct sums of closed linear operators

Document Type: Original Article

Authors: University of Delhi, Delhi.

Abstract: If $T$ is an operator with compact resolvent and $S$ is any densely defined closed linear operator, then the orthogonal direct sum of $T$ and $S$ satisfies various Weyl-type theorems if certain necessary conditions are imposed on the operator $S$. It is shown that if $S$ is isoloid and satisfies Weyl's theorem, then $T \oplus S$ satisfies Weyl's theorem. An analogous result is proved for a-Weyl's theorem. Further, it is shown that Browder's theorem is directly transmitted from $S$ to $T \oplus S$. The converses of these results have also been studied.

### History

• Receive Date: 03 January 2017
• Revise Date: 05 June 2017
• Accept Date: 07 June 2017
https://ajitjadhav.wordpress.com/tag/aristotle/
# Some running thoughts on ANNs and AI—1

Go, see if you want to have fun with the attached write-up on ANNs [^] (but please also note the version time carefully—the write-up could change without any separate announcement). The write-up is more in the nature of a very informal blabber of the kind that goes on when people work out something on a research blackboard (or while mentioning something about their research to friends, or during a brain-storming session, or while jotting things on the back of an envelope, or something similar).

A “song” I don’t like:

(Marathi) “aawaaj waaDaw DJ…”

“Credits”: Go, figure [^]. E.g., here [^]. Yes, the video too is (very strongly) recommended.

Update on 05 October 2018 10:31 IST: Psychic attack on 05 October 2018 at around 00:40 IST (i.e. the night between 4th and 5th October, IST).

/

# Off the blog. [“Matter” cannot act “where” it is not.]

I am going to go off the blogging activity in general, and this blog in most particular, for some time. [And, this time round, I will keep my promise.]

The reason is, I’ve just received the shipment of a book which I had ordered about a month ago. Though only about 300 pages in length, it’s going to take me weeks to complete. And, the book is gripping enough, and the issue important enough, that I am not going to let a mere blog or two—or the entire Internet—come in the way.

I had read it once, almost cover-to-cover, some 25 years ago, while I was a student at UAB. Reading a book cover-to-cover—I mean: in-sequence, and by that I mean: starting from the front-cover and going through the pages in the same sequence as the one in which the book has been written, all the way to the back-cover—was quite odd a thing to have happened with me, at that time. It was quite unlike my usual habits whereby I am more or less always randomly jumping around in a book, even while reading one for the very first time. But this book was different; it was extraordinarily engaging.
In fact, as I vividly remember, I had just idly picked up this book off a shelf from the Hill library of UAB, for a casual examination, had browsed it a bit, and then had begun sampling some passage from nowhere in the middle of the book while standing in a library aisle. Then, some little time later, I was engrossed in reading it—with a folded elbow resting on the shelf, head turned down and resting against a shelf rack (due to a general weakness due to a physical hunger which I was ignoring [and I would have had to go home and cook something for myself; there was no one to do that for me; and so, it was easy enough to ignore the hunger]).

I don’t honestly remember how the pages turned. But I do remember that I must have already finished some 15-20 pages (all “in-the-order”!) before I even realized that I had been reading this book while still awkwardly resting against that shelf-rack. … … I checked out the book, and once home [student dormitory], began reading it starting from the very first page. … I took time, days, perhaps weeks. But whatever the length of time that I did take, with this book, I didn’t have to jump around the pages.

The issue that the book dealt with was: [Instantaneous] Action at a Distance.

The book in question was: Hesse, Mary B. (1961) “Forces and Fields: The concept of Action at a Distance in the history of physics,” Philosophical Library, Edinburgh and New York.

It was the very first book I had found, I even today distinctly remember, in which someone—someone, anyone, other than me—had cared to think about the issues like the IAD, the concepts like fields and point particles—and had tried to trace their physical roots, to understand the physical origins behind these (and such) mathematical concepts. (And, had chosen to say “concepts” while meaning ones, rather than trying to hide behind poor substitute words like “ideas”, “experiences”, “issues”, “models”, etc.)
But now coming to Hesse’s writing style, let me quote a passage from one of her research papers. I ran into this paper only recently, last month (in July 2017), and it was while going through it that I happened [once again] to remember her book. Since I did have some money in hand, I did immediately decide to order my copy of this book.

Anyway, the paper I have in mind is this: Hesse, Mary B. (1955) “Action at a Distance in Classical Physics,” Isis, Vol. 46, No. 4 (Dec., 1955), pp. 337–353, University of Chicago Press/The History of Science Society.

The paper (it has no abstract) begins thus:

The scholastic axiom that “matter cannot act where it is not” is one of the very general metaphysical principles found in science before the seventeenth century which retain their relevance for scientific theory even when the metaphysics itself has been discarded. Other such principles have been fruitful in the development of physics: for example, the “conservation of motion” stated by Descartes and Leibniz, which was generalized and given precision in the nineteenth century as the doctrine of the conservation of energy; …

Here is another passage, once again, from the same paper:

Now Faraday uses a terminology in speaking about the lines of force which is derived from the idea of a bundle of elastic strings stretched under tension from point to point of the field. Thus he speaks of “tension” and “the number of lines” cut by a body moving in the field. Remembering his discussion about contiguous particles of a dielectric medium, one must think of the strings as stretching from one particle of the medium to the next in a straight line, the distance between particles being so small that the line appears as a smooth curve. How seriously does he take this model? Certainly the bundle of elastic strings is nothing like those one can buy at the store.
The “number of lines” does not refer to a definite number of discrete material entities, but to the amount of force exerted over a given area in the field. It would not make sense to assign points through which a line passes and points which are free from a line. The field of force is continuous.

See the flow of the writing? the authentic respect for the intellectual history, and yet, the overriding concern for having to reach a conclusion, a meaning? the appreciation for the subtle drama? the clarity of thought, of expression?

Well, these passages were from the paper, but the book itself, too, is similarly written.

Obviously, while I remain engaged in [re-]reading the book [after a gap of 25 years], don’t expect me to blog. After all, even I cannot act “where” I am not.

A Song I Like:

[I thought a bit between this song and another song, one by R.D. Burman, Gulzar and Lata. In the end, it was this song which won out. As usual, in making my decision, the reference was exclusively made to the respective audio tracks. In fact, in the making of this decision, I happened to have also ignored even the excellent guitar pieces in this song, and the orchestration in general in both. The words and the tune were too well “fused” together in this song; that’s why. I do promise you to run the RD song once I return. In the meanwhile, I don’t at all mind keeping you guessing. Happy guessing!]

(Hindi) “bheegi bheegi…” [“bheege bheege lamhon kee bheegee bheegee yaadein…”]
Music and Lyrics: Kaushal S. Inamdar
Singer: Hamsika Iyer

[Minor additions/editing may follow tomorrow or so.]

/

# On whether A is not non-A

This post has its origin in a neat comment I received on my last post [^]; see the exchange starting here: [^]. The question is whether I accept that A is not non-A.
My answer is: No, I do not accept that, logically speaking, A is not non-A—not unless the context to accept this statement is understood clearly and unambiguously (and the best way to do that is to spell it out explicitly). Another way to say the same thing is that I can accept that “A is not non-A,” but only after applying proper qualifications; I won’t accept it in an unqualified way. Let me explain by considering various cases arising, using a simple example.

The Venn diagram:

Let’s begin by drawing a Venn diagram. Draw a rectangle and call it the set $R$. Draw a circle completely contained in it, and call it the set $A$. You can’t put a round peg to fill a rectangular hole, so the remaining area of the rectangle is not zero. Call the remaining area $B$. See the diagram below.

Case 1: All sets are non-empty:

Assume that neither $A$ nor $B$ is empty. Using symbolic terms, we can say that:

$A \neq \emptyset$, $B \neq \emptyset$, and $R \equiv A \cup B$

where the symbol $\emptyset$ denotes an empty set, and $\equiv$ means “is defined as.” We take $R$ as the universal set—of this context. For example, $R$ may represent, say, the set of all the computers you own, with $A$ denoting your laptops and $B$ denoting your desktops.

I take the term “proper set” to mean a set that has at least one element or member in it, i.e., a set which is not empty.

Now, focus on $A$. Since the set $A$ is a proper set, it is meaningful to apply the negation- or complement-operator to it. [Maybe I have given away my complete answer right here…] Denote the resulting set, the non-A, as $A^{\complement }$. Then, in symbolic terms:

$A^{\complement } \equiv R \setminus A$

where the symbol $\setminus$ denotes taking the complement of the second operand, in the context of the first operand (i.e., “subtracting” $A$ from $R$). In our example, $A^{\complement } = B$, and so: $A^{\complement } \neq \emptyset$. Thus, here, $A^{\complement }$ also is a proper (i.e. non-empty) set.
To conclude this part, the words “non-A”, when translated into symbolic terms, mean $A^{\complement }$, and this set here is exactly the same as $B$.

To find the meaning of the phrase “not non-A,” I presume that it means applying the negation, i.e. the complement operator, to the set $A^{\complement }$. It is possible to apply the complement operator because $A^{\complement } \neq \emptyset$. Let us define the result of this operation as $A^{\complement \complement}$; note the two $^{\complement}$s appearing in its name. The operation, in symbols, becomes:

$A^{\complement \complement} \equiv R \setminus A^{\complement} = R \setminus B = A$.

Note that we could apply the complement operator to $A$ and later on to $A^{\complement}$ only because each was non-empty. As the simple algebra of the above simple-minded example shows, $A = A^{\complement\complement}$, which means we have to accept, in this example, that A is not non-A.

Remarks on the Case 1:

However, note that we can accept the proposition only under the given assumptions. In particular, in arriving at it, we have applied the complement-operator twice. (i) First, we applied it to the “innermost” operand, i.e. $A$, which gave us $A^{\complement}$. (ii) Then, we took this result, and applied the complement-operator to it once again, yielding $A^{\complement\complement}$. Thus, the operand for the second complement-operator was $A^{\complement}$.

Now, here is the rule:

Rule 1: We cannot meaningfully apply the complement-operator unless the operand set is proper (i.e. non-empty).

People probably make mistakes in deciding whether A is not non-A because, probably, they informally (and properly) do take the “innermost” operand, viz. $A$, to be non-empty. But then, further down the line, they do not check whether the second operand, viz. $A^{\complement}$, turns out to be empty or not.
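For whatever it is worth, the Case 1 bookkeeping can also be mirrored with finite sets in code. The following Python sketch uses arbitrary illustrative elements of my own (they are not from the post):

```python
R = frozenset(range(10))   # the universal set of this context
A = frozenset({0, 1, 2})   # a proper (i.e. non-empty) subset of R
B = R - A                  # the remaining area; also non-empty here

A_c = R - A     # non-A: the complement of A taken within R
A_cc = R - A_c  # not non-A: the complement operator applied a second time

# Both operands were proper sets, so Rule 1 permitted both applications,
# and the double complement recovers A exactly.
print(A_c == B, A_cc == A)  # True True
```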
Case 2: When the set $A^{\complement}$ is empty:

The set $A^{\complement}$ will be empty if $B = \emptyset$, which will happen if and only if $A = R$. Recall, $R$ is defined to be the union of $A$ and $B$.

So, every time there are two mutually exclusive and collectively exhaustive sets, if any one of them is made empty, you cannot doubly apply the negation or the complement operator to the other (nonempty) set. Such a situation always occurs whenever the remaining set coincides with the universal set of a given context. In attempting a double negation, if your first (or innermost) operand itself is a universal set, then you cannot apply the negation operator for the second time, because by Rule 1, the result of the first operator comes out as an empty set.

The nature of an empty set:

But why this rule that you can’t negate (or take the complement of) an empty set? An empty set contains no element (or member). Since it is the elements which together impart identity to a set, an empty set has no identity of its own.

As an aside, some people think that all the usages of the phrase “empty set” refer to the one and the only set (in the entire universe, for all possible logical propositions involving sets). For instance, the empty set obtained by taking an intersection of dogs and cats, they say, is exactly the same empty set as the one obtained by taking an intersection of cars and bikes. I reject this position. It seems to me to be Platonic in nature, and there is no reason to give Plato even an inch of the wedge-space in this Aristotelian universe of logic and reality.

As a clarification, notice, we are talking of the basic and universal logic here, not the implementation details of a programming language. A programming language may choose to point all the occurrences of the NULL string to the same memory location. This is merely an implementation choice to save on the limited computer memory.
But it still makes no sense to say that all empty C-strings exist at the same memory location—but that’s what you end up having if you call an empty set the empty set. Which brings us to the next issue.

If an empty set has no identity of its own, if it has no elements, and hence no referents, then how come it can at all be defined? After all, a definition requires identity.

The answer is: Structurally speaking, an empty set acquires its meaning—its identity—“externally;” it has no “internally” generated identity. The only identity applicable to an empty set is an abstract one which gets imparted to it externally; the purpose of this identity is to bring a logical closure (or logical completeness) to the primitive operations defined on sets.

For instance, intersection is an operator. To formally bring closure to the intersection operation, we have to acknowledge that it may operate over any combination of any operand sets, regardless of their natures. This range includes having to define the intersection operator for two sets that have no element in common. We abstractly define the result of such a case as an empty set. In this case, the meaning of the empty set refers not to a result set of a specific internal identity, but only to the operation and the disjoint nature of the operands which together generated it, i.e., via a logical relation whose meaning is external to the contents of the empty set.

Inasmuch as an empty set necessarily includes a reference to an operation, it is a concept of method. Inasmuch as many combinations of various operations and operands can together give rise to numerous particular instances of an empty set, there cannot be a unique instance of it which is applicable in all contexts. In other words, an empty set is not a singleton; it is wrong to call it the empty set.

Since an empty set has no identity of its own, the notion cannot be applied in an existence-related (or ontic or metaphysical) sense.
The only sense it has is in the methodological (or epistemic) sense.

Extending the meaning of operations on an empty set:

In a derivative sense, we may redefine (i.e. extend) our terms. First, we observe that since an empty set lacks an identity of its own, the result of any operator applied to it cannot have any (internal) identity of its own. Then, equating these two lacks of existence-related identities (which is where the extension of the meaning occurs), we may say, even if only in a derivative or secondary sense, that

Rule 2: The result of an operator applied to an empty set again is another empty set.

Thus, if we now allow the complement-operator to operate also on an empty set (which, earlier, we did not allow), then the result would have to be another empty set. Again, the meaning of this second empty set depends on the entirety of its generating context.

Case 3: When the non-empty set is the universal set:

For our particular example, assuming $B = \emptyset$ and hence $A = R$, if we allow the complement operator to be applied (in the extended sense) to $A^{\complement}$, then

$A^{\complement\complement} \equiv R \setminus A^{\complement} = R \setminus (R \setminus A) = R \setminus B = R \setminus (\emptyset) = R = A$.

Carefully note, in the above sequence, the place where the extended theory kicks in is at the expression: $R \setminus (\emptyset)$. We can apply the $\setminus$ operator here only in an extended sense, not primary. We could here perform this operation only because the left hand-side operand for the complement operator, viz. the set $R$ here, was a universal set.

Any time you have a universal set on the left hand-side of a complement operator, there is no scope left for ambiguity. This holds irrespective of whether the operand on the right hand-side is a proper set or an empty set.

So, in this extended sense, feel free to say that A is not non-A, provided A is the universal set for a given context.
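Case 3 can be sketched the same way; the comment marks the step that is legitimate only in the extended sense discussed above (again, illustrative values of my own):

```python
R = frozenset(range(5))
A = R           # A coincides with the universal set, so B = R \ A is empty
A_c = R - A     # an empty set: primary-sense complementation must stop here
A_cc = R - A_c  # extended sense only: R \ (an empty set) = R = A

print(A_c == frozenset(), A_cc == A)  # True True
```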
To recap:

The idea of an empty set acquires meaning only externally, i.e., only in reference to some other non-empty set(s). An empty set is thus only an abstract place-holder for the result of an operation applied to proper set(s), the operation being such that it yields no elements. It is a place-holder because it refers to the result of an operation; it is abstract, because this result has no element, hence no internally generated identity, hence no concrete meaning except in an abstract relation to that specific operation (including those specific operands). There is no “the” empty set; each empty set, despite being abstract, refers to a combination of an instance of proper set(s) and an instance of an operation giving rise to it.

Exercises:

E1: Draw a rectangle and put three non-overlapping circles completely contained in it. The circles respectively represent the three sets $A$, $B$, $C$, and the remaining portion of the rectangle represents the fourth set $D$. Assuming this Venn diagram, determine the meaning of the following expressions:

(i) $R \setminus (B \cup C)$
(ii) $R \setminus (B \cap C)$
(iii) $R \setminus (A \cup B \cup C)$
(iv) $R \setminus (A \cap B \cap C)$
(v)–(viii) Repeat (i)–(iv) by substituting $D$ in place of $R$.
(ix)–(xvi) Repeat (i)–(viii) if $A$ and $B$ partly overlap.

E2: Identify the nature of set theoretical relations implied by that simple rule of algebra which states that two negatives make a positive.

A bit philosophical, and a form better than “A is not non-A”:

When Aristotle said that “A is A,” and when Ayn Rand taught its proper meaning: “Existence is identity,” they referred to the concepts of “existence” and “identity.” Thus, they referred to the universals. Here, the word “universals” is to be taken in the sense of a conceptual abstraction. If concepts—any concepts, not necessarily only the philosophical axioms—are to be represented in terms of the set theory, how can we proceed doing that?
(BTW, I reject the position that the set theory, even the so-called axiomatic set theory, is more fundamental than the philosophic abstractions.)

Before we address this issue of representation, understand that there are two ways in which we can specify a set: (i) by enumeration, i.e. by listing out all its (relatively concrete) members, and (ii) by rule, i.e. by specifying a definition (which may denote an infinity of concretes of a certain kind, within a certain range of measurements).

The virtue of the set theory is that it can be applied equally well to both finite sets and infinite sets. The finite sets can always be completely specified via enumeration, at least in principle. On the other hand, infinite sets can never be completely specified via enumeration. (An infinite set is one that has an infinity of members or elements.)

A concept (any concept, whether of maths, or art, or engineering, or philosophy…) by definition stands for an infinity of concretes. Now, in the set theory, an infinity of concretes can be specified only using a rule. Therefore, the only set-theoretic means capable of representing concepts in that theory is to specify their meaning via “rule,” i.e. definition of the concept.

Now, consider for a moment a philosophical axiom such as the concept of “existence.” Since the only possible set-theoretic representation of a concept is as an infinite set, and since philosophical axiomatic concepts have no antecedents, no priors, the set-theoretic representation of the axiom of “existence” would necessarily be as a universal set.

We saw that the complement of a universal set is an empty set. This is a set-theoretic conclusion. Its broader-based, philosophic analog is: there are no contraries to axiomatic concepts. For the reasons explained above, you may thus conclude, in the derivative sense, that: “existence is not void”, where “void” is taken as exactly synonymous to “non-existence”. The proposition quoted in the last sentence is true.
However, as the set theory makes it clear and easy to understand, it does not mean that you can take this formulation for a definition of the concept of existence. The term “void” here has no independent existence; it can be defined only by a negation of existence itself. You cannot locate the meaning of existence in reference to void, even if it is true that “existence is not void”.

Even if you use the terms in an extended sense and thereby do apply the “not” qualifier (in the set-theoretic representation, it would be an operator) to the void (to the empty set), for the above-mentioned reasons, you still cannot then read the term “is” to mean “is defined as,” or “is completely synonymous with.” Not just our philosophical knowledge but even its narrower set-theoretical representation is powerful enough that it doesn’t allow us to do so.

That’s why a better way to connect “existence” with “void” is to instead say: “Existence is not just the absence of the void.”

The same principle applies to any concept, not just to the most fundamental philosophic axioms, so long as you are careful to delineate and delimit the context—and as we saw, the most crucial element here is the universal set. You can take a complement of an empty set only when the left hand-side operand is a universal set.

Let us consider a few concepts, and compare putting them in the two forms:

• from “A is not non-A”
• to “A is not the [just] absence [or negation] of non-A,” or, “A is much more than just a negation of the non-A”.

Consider the concept: focus. Following the first form, a statement we can formulate is: “focus is not evasion.” However, it does make much more sense to say that “focus is not just an absence of evasion,” or that “focus is not limited to an anti-evasion process.” Both these statements follow the second form. The first form, even if it is logically true, is not as illuminating as is the second.

Exercises:

Here are a few sentences formulated in the first form—i.e.
in the form “A is not non-A” or something similar. Reformulate them into the second form—i.e. in the form such as: “A is not just an absence or negation of non-A” or “A is much better than or much more than just a complement or negation of non-A”. (Note: SPPU means the Savitribai Phule Pune University):

• Engineers are not mathematicians
• C++ programmers are not kids
• IISc Bangalore is not SPPU
• IIT Madras is not SPPU
• IIT Kanpur is not SPPU
• IIT Bombay is not SPPU
• The University of Mumbai is not SPPU
• The Shivaji University is not SPPU

[Lest someone from SPPU choose for his examples the statements “Mechanical Engg. is not Metallurgy” and “Metallurgy is not Mechanical Engg.,” we would suggest him another exercise, one which would be better suited to the universal set of all his intellectual means. The exercise involves operations mostly on the finite sets alone. We would ask him to verify (and not to find out in the first place) whether the finite set (specified with an indicative enumeration) consisting of {CFD, Fluid Mechanics, Heat Transfer, Thermodynamics, Strength of Materials, FEM, Stress Analysis, NDT, Failure Analysis,…} represents an intersection of Mechanical Engg and Metallurgy or not.]

A Song I Like:

[I had run this song way back in 2011, but now want to run it again.]

(Hindi) “are nahin nahin nahin nahin, nahin nahin, koee tumasaa hanseen…”
Singers: Kishore Kumar, Asha Bhosale
Music: Rajesh Roshan
Lyrics: Anand Bakshi

[But I won’t disappoint you. Here is another song I like and one I haven’t run so far.]

(Hindi) “baaghon mein bahaar hain…”
Music: S. D. Burman [but it sounds so much like R.D., too!]
Singers: Mohamad Rafi, Lata Mangeshkar
Lyrics: Anand Bakshi

[Exercise, again!: For each song, whenever a no’s-containing line comes up, count the number of no’s in it. Then figure out whether the rule that double negatives cancel out applies or not. Why or why not?]

[Mostly done. Done editing now (right on 2016.10.22).
Drop me a line if something isn’t clear—logic is a difficult topic to write on.] [E&OE]
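The complement relations discussed above can be checked with Python's built-in sets; the finite universal set here is an arbitrary illustrative stand-in, chosen only to make the point concrete:

```python
# Complements only make sense relative to a universal set; U here is an
# arbitrary finite stand-in chosen purely for illustration.
U = set(range(10))
A = {1, 2, 3}

complement_of_empty = U - set()     # the complement of the empty set...
assert complement_of_empty == U     # ...is the universal set itself

non_A = U - A                       # "non-A", relative to U
assert A & non_A == set()           # "A is not non-A": they are disjoint
assert U - non_A == A               # but A is pinned down only once U is fixed
```

Change `U` and `non_A` changes with it — which is the sense in which the complement has no meaning without a delimited context.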
2019-01-18 21:04:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 66, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7383688688278198, "perplexity": 889.2142268511278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660529.12/warc/CC-MAIN-20190118193139-20190118215139-00394.warc.gz"}
https://www.math10.com/forum/viewtopic.php?f=1&t=9357
# Combinations

### Combinations

1. If the number p is chosen at random from the natural numbers not greater than 10, what is the probability that 3p + 2 <= 9?

2. If there are 4 red and 2 black balls in the basket, what is the probability that at least one of them is black when two balls are drawn at the same time?

— Guest

### Re: Combinations

1. From 3p + 2 <= 9 we get 3p <= 7, so p <= 7/3 = 2 1/3. Since p is a natural number, that is the same as saying p = 1 or p = 2. Assuming that all numbers from 1 to 10 are equally likely to be chosen, the probability is 1/10 + 1/10 = 2/10 = 1/5.

2. The only way there could not be "at least one black ball" is if both balls are red. There are 4 red and 2 black balls, so the probability the first ball is red is 4/(4 + 2) = 2/3. Given that, there are 5 balls left, 3 red and 2 black, so the probability the second ball is also red is 3/5. The probability the two balls are both red is (2/3)(3/5) = 2/5, so the probability of "at least one black ball" is 1 - 2/5 = 3/5.

— HallsofIvy
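Both answers can be confirmed by brute-force enumeration (a quick sanity check, not part of the original thread):

```python
from fractions import Fraction
from itertools import combinations

# Problem 1: p uniform on {1, ..., 10}; P(3p + 2 <= 9)?
favourable = [p for p in range(1, 11) if 3 * p + 2 <= 9]
p1 = Fraction(len(favourable), 10)
assert p1 == Fraction(1, 5)

# Problem 2: 4 red + 2 black balls, two drawn together;
# P(at least one black)?
balls = ["R"] * 4 + ["B"] * 2
draws = list(combinations(range(6), 2))
with_black = [d for d in draws if any(balls[i] == "B" for i in d)]
p2 = Fraction(len(with_black), len(draws))
assert p2 == Fraction(3, 5)
```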
2021-06-14 17:12:45
https://yiqinzhao.me/project/xihe/
Xihe: A 3D Vision-based Lighting Estimation Framework for Mobile Augmented Reality

# Abstract

Omnidirectional lighting provides the foundation for achieving spatially-variant photorealistic 3D rendering, a desirable property for mobile augmented reality applications. However, in practice, estimating omnidirectional lighting can be challenging due to limitations such as partial panoramas of the rendering positions, and the inherent environment lighting and mobile user dynamics. A new opportunity arises recently with the advancements in mobile 3D vision, including built-in high-accuracy depth sensors and deep learning-powered algorithms, which provide the means to better sense and understand the physical surroundings. Centering the key idea of 3D vision, in this work, we design an edge-assisted framework called Xihe to provide mobile AR applications the ability to obtain accurate omnidirectional lighting estimation in real time. Specifically, we develop a novel sampling technique that efficiently compresses the raw point cloud input generated at the mobile device. This technique is derived based on our empirical analysis of a recent 3D indoor dataset and plays a key role in our 3D vision-based lighting estimator pipeline design. To achieve the real-time goal, we develop a tailored GPU pipeline for on-device point cloud processing and use an encoding technique that reduces network transmitted bytes. Finally, we present an adaptive triggering strategy that allows Xihe to skip unnecessary lighting estimations and a practical way to provide temporal coherent rendering integration with the mobile AR ecosystem. We evaluate both the lighting estimation accuracy and time of Xihe using a reference mobile application developed with Xihe's APIs. Our results show that Xihe takes as fast as 20.67ms per lighting estimation and achieves 9.4% better estimation accuracy than a state-of-the-art neural network.
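The abstract does not spell out its sampling technique; as a rough, generic illustration of the kind of point-cloud compression such a pipeline relies on (my own sketch of a standard voxel-grid downsampler, not Xihe's actual method), one could write:

```python
import numpy as np

def voxel_downsample(points, voxel=0.1):
    """Generic compression: keep one centroid per occupied voxel.
    An illustrative stand-in, not Xihe's actual sampling technique."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    counts = np.bincount(inv).astype(float)
    out = np.empty((len(counts), 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inv, weights=points[:, dim]) / counts
    return out

rng = np.random.default_rng(0)
cloud = rng.random((10_000, 3))            # dense raw point cloud
small = voxel_downsample(cloud, voxel=0.25)
assert len(small) <= 4 ** 3                # at most one point per voxel
```

The appeal of a scheme like this for an edge-assisted design is that the number of transmitted points is bounded by the number of occupied voxels, independent of the raw sensor resolution.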
# MobiSys'21 Paper

Xihe: A 3D Vision-based Lighting Estimation Framework for Mobile Augmented Reality
Yiqin Zhao and Tian Guo

@InProceedings{xihe_mobisys2021,
  author    = "Zhao, Yiqin and Guo, Tian",
  title     = "Xihe: A 3D Vision-based Lighting Estimation Framework for Mobile Augmented Reality",
  booktitle = "The 19th ACM International Conference on Mobile Systems, Applications, and Services",
  year      = "2021",
}

# Acknowledgement

We thank all anonymous reviewers, our shepherd, and our artifact evaluator Tianxing Li for their insightful feedback. This work was supported in part by NSF Grants #1755659 and #1815619.
2021-08-04 22:26:45
https://mathoverflow.net/questions/182518/when-does-the-greedy-change-making-algorithm-work
# When does the greedy change-making algorithm work?

The change-making problem asks how to make a certain sum of money using the fewest coins. With US coins {1, 5, 10, 25}, the greedy algorithm of selecting the largest coin at each step also uses the fewest coins. With which currencies (sets of integers including 1) does the 'greedy' algorithm work?

That's a different question, Gerry. Believe it or not, the answers are different if one is asking (a) given N and a system of denominations D, is the greedy algorithm using D optimal for N? and (b) given a system of denominations D, is the greedy algorithm using D optimal for ALL N? I think the latter problem is the one that Zachary Vance is asking about. In that case, it is decidable in polynomial time. See Pearson's article here: http://dl.acm.org/citation.cfm?id=2309414 .

• Oops. – Gerry Myerson Oct 4 '14 at 23:44

If you look at pages 4-5 of this paper by Jeff Shallit, it says, "Suppose we are given $N$ and a system of denominations. How easy is it to determine if the greedy representation for $N$ is actually optimal? Kozen and Zaks [4] have shown that this problem is co-NP-complete if the data is provided in ordinary decimal, or binary. This strongly suggests there is no efficient algorithm for this problem." The reference is D. Kozen and S. Zaks, Optimal bounds for the change-making problem, Theoret. Comput. Sci. 123 (1994), 377–388.
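The gap between question (a) and question (b) can be made concrete with a small brute-force check (an illustrative script, not Pearson's polynomial-time test):

```python
def greedy(n, coins):
    """Coins used by the greedy algorithm (largest denomination first)."""
    used = 0
    for c in sorted(coins, reverse=True):
        used += n // c
        n %= c
    return used

def optimal(n, coins):
    """Minimum coins for n, by dynamic programming."""
    best = [0] + [float("inf")] * n
    for i in range(1, n + 1):
        best[i] = min(best[i - c] + 1 for c in coins if c <= i)
    return best[n]

# US coins: greedy matches the optimum for every amount checked.
us = [1, 5, 10, 25]
assert all(greedy(n, us) == optimal(n, us) for n in range(1, 100))

# {1, 3, 4} is a classic non-canonical system: greedy makes 6 as 4+1+1,
# while the optimum is 3+3.
assert greedy(6, [1, 3, 4]) == 3
assert optimal(6, [1, 3, 4]) == 2
```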
2019-10-22 12:38:24
https://studyadda.com/sample-papers/jee-main-sample-paper-47_q68/301/303689
If $f'(x)=\,|x|\,-\{x\},$ where $\{x\}$ denotes the fractional part of $x,$ then $f(x)$ is decreasing in

A)  $\left( \frac{-1}{2},\,0 \right)$                       B)  $\left( \frac{-1}{2},\,2 \right)$

C)  $\left( \frac{-1}{2},\,-2 \right]$                      D)  $\left( \frac{1}{2},\,\infty \right)$

Solution: Given $f'(x)=|x|-\{x\}$. Since $f(x)$ is decreasing, $f'(x)<0$, i.e. $|x|-\{x\}<0$, i.e. $|x|<\{x\}$. Comparing the graphs of $|x|$ and $\{x\}$, this holds for $x\in \left( -\frac{1}{2},\,0 \right)$. Hence option [a] is correct.
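A quick numerical check (not part of the original solution) confirms that $f'(x)=|x|-\{x\}$ is negative on $(-1/2,\,0)$ and not immediately outside it:

```python
import math

# f'(x) = |x| - {x}, with {x} = x - floor(x) the fractional part.
def fprime(x):
    return abs(x) - (x - math.floor(x))

# Negative throughout the interior of (-1/2, 0)...
assert all(fprime(-0.5 + k * 0.01) < 0 for k in range(1, 50))
# ...and non-negative just outside it.
assert fprime(-0.51) > 0
assert fprime(0.01) == 0.0
```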
2022-01-16 18:21:33
https://math.stackexchange.com/questions/1313214/why-can-the-transformation-derived-from-a-list-of-points-and-a-list-of-their-tra
Why can the transformation derived from a list of points and a list of their transformed counterparts not be affine or linear? Some context (original question below): I wanted to know if there's a nice concise formula to calculate the transformation based on a list of points and another list of the transformed points. This is all 2D or $\mathbb{R}^2$. By that I mean some matrix equation that has a matrix that contains the given values, so that one can invert this matrix to solve for the transformation matrix or its components. The question I link to below has the very same goal and especially a nice answer that I was looking for, but it does not create a linear or affine transform. In his answer to this question bubba makes the following statement: The transformation can not be linear or affine, it has to be a "perspective" transform. Why is that? What if I want to find the affine or linear transformation and not the perspective/nonlinear one? I'm not sure about this, but I guess that if $c_0 = 0$ and $c_1 = 0$, then the perspective transformation will be linear. Would that help me to find the linear or affine transform of points? • It might be good to edit in more of the context so that your question does not depend on those links never breaking. – Mark S. Jun 5 '15 at 13:13
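For the affine case the question asks about, three or more point correspondences do determine the map, by linear least squares on homogeneous coordinates (a standard construction, sketched here with NumPy; it is not the perspective fit from the linked answer):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map dst ~= A @ p + t from 2D point pairs."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    X = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coordinates
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)   # 3x2 parameter matrix
    return M[:2].T, M[2]                          # A (2x2), t (2,)

# Recover a known transform: axis-aligned scaling plus a shift.
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(2 * x + 1, 3 * y - 1) for x, y in src]
A, t = fit_affine(src, dst)
assert np.allclose(A, [[2, 0], [0, 3]])
assert np.allclose(t, [1, -1])
```

With four general-position correspondences, an affine map can only approximate the mapping in the least-squares sense; exactly reproducing all four pairs generally requires the perspective (projective) transform, which is why bubba's answer needs one.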
2021-03-09 01:31:07
https://www.trustudies.com/question/446/d-and-e-are-points-on-the-sides-ca-an/
# D and E are points on the sides CA and CB respectively of a triangle ABC right angled at C. Prove that $$AE^2 + BD^2 = AB^2 + DE^2$$.

Given: D and E are points on the sides CA and CB respectively of a triangle ABC right angled at C.

By the Pythagoras theorem in $$\triangle ACE$$: $$AC^2 + CE^2 = AE^2$$ …(i)

By the Pythagoras theorem in $$\triangle BCD$$: $$BC^2 + CD^2 = BD^2$$ …(ii)

Adding equations (i) and (ii): $$AC^2 + CE^2 + BC^2 + CD^2 = AE^2 + BD^2$$ …(iii)

By the Pythagoras theorem in $$\triangle CDE$$: $$DE^2 = CD^2 + CE^2$$

By the Pythagoras theorem in $$\triangle ABC$$: $$AB^2 = AC^2 + CB^2$$

Substituting these two values in equation (iii): $$DE^2 + AB^2 = AE^2 + BD^2$$.
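The identity can also be sanity-checked numerically by placing the right angle C at the origin (an illustration alongside the proof, not a substitute for it):

```python
import random

def sq_dist(p, q):
    """Squared Euclidean distance between 2D points p and q."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

random.seed(1)
for _ in range(100):
    a, b = random.uniform(1, 10), random.uniform(1, 10)
    A, B = (a, 0.0), (0.0, b)        # right angle at C = (0, 0)
    D = (random.uniform(0, a), 0.0)  # D on side CA
    E = (0.0, random.uniform(0, b))  # E on side CB
    # AE^2 + BD^2 should equal AB^2 + DE^2
    assert abs(sq_dist(A, E) + sq_dist(B, D)
               - sq_dist(A, B) - sq_dist(D, E)) < 1e-9
```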
2023-03-21 17:46:56
http://jorgebg.com/reader/
## Friday, 23 August 2019

### 04:00 PM

Future smart cities and the intelligent world will have connected vehicles and smart cars as their indispensable and most essential components. The communication and interaction among such connected entities in this vehicular internet of things (IoT) domain, which also involves smart traffic infrastructure, road-side sensors, restaurants with beacons, autonomous emergency vehicles, etc., offer innumerable real-time user applications and provide a safer and more pleasant driving experience to consumers. Having more than 100 million lines of code and hundreds of sensors, these connected vehicles (CVs) expose a large attack surface, which can be remotely compromised and exploited by malicious attackers. Security and privacy are serious concerns that impede the adoption of smart connected cars, and if not properly addressed they will have grave implications, with risk to human life and limb. In this research, we present a formalized dynamic groups and attribute-based access control (ABAC) model (referred to as \cvac) for the smart cars ecosystem, where the proposed model not only considers system-wide attribute-based security policies but also takes into account the individual user privacy preferences for allowing or denying service notifications, alerts and operations to on-board resources. Further, we introduce a novel notion of groups in vehicular IoT, which are dynamically assigned to moving entities like connected cars, based on their current GPS coordinates, speed or other attributes, to ensure relevance of location- and time-sensitive notification services to the consumers, to provide administrative benefits to manage large numbers of smart entities, and to enable attribute and alert inheritance for fine-grained security authorization policies. We present a proof-of-concept implementation of our model in the AWS cloud platform demonstrating real-world use cases along with performance metrics.

S-money [Proc. R. Soc.
A 475, 20190170 (2019)] schemes define virtual tokens designed for networks with relativistic or other trusted signalling constraints. The tokens allow near-instant verification and guarantee unforgeability without requiring quantum state storage. We present refined two stage S-money schemes. The first stage, which may involve quantum information exchange, generates private user token data. In the second stage, which need only involve classical communications, users determine the valid presentation point, without revealing it to the issuer. This refinement allows the user to determine the presentation point anywhere in the causal past of all valid presentation points. It also allows flexible transfer of tokens among users without compromising user privacy. Access to the system resources. The current access control systems face many problems, such as the presence of the third-party, inefficiency, and lack of privacy. These problems can be addressed by blockchain, the technology that received major attention in recent years and has many potentials. In this study, we overview the problems of the current access control systems, and then, we explain how blockchain can help to solve them. We also present an overview of access control studies and proposed platforms in different domains. This paper presents the state of the art and the challenges of blockchain-based access control systems. With the number of new mobile malware instances increasing by over 50\% annually since 2012 [24], malware embedding in mobile apps is arguably one of the most serious security issues mobile platforms are exposed to. While obfuscation techniques are successfully used to protect the intellectual property of apps' developers, they are unfortunately also often used by cybercriminals to hide malicious content inside mobile apps and to deceive malware detection tools. 
As a consequence, most mobile malware detection approaches fail to differentiate between benign and obfuscated malicious apps. We examine the graph features of mobile apps' code by building weighted directed graphs of the API calls, and verify that malicious apps often share structural similarities that can be used to differentiate them from benign apps, even under a heavily "polluted" training set where a large majority of the apps are obfuscated. We present DaDiDroid, an Android malware detection tool that leverages features of the weighted directed graphs of API calls to detect the presence of malware code in (obfuscated) Android apps. We show that DaDiDroid significantly outperforms MaMaDroid [23], a recently proposed malware detection tool that has been proven very efficient in detecting malware in a clean, non-obfuscated environment. We evaluate DaDiDroid's accuracy and robustness against several evasion techniques using various datasets for a total of 43,262 benign and 20,431 malware apps. We show that DaDiDroid correctly labels up to 96% of Android malware samples, while achieving a 91% accuracy with an exclusive use of a training set of obfuscated apps.

In recent work, Cheu et al. (Eurocrypt 2019) proposed a protocol for $n$-party real summation in the shuffle model of differential privacy with $O_{\epsilon, \delta}(1)$ error and $\Theta(\epsilon\sqrt{n})$ one-bit messages per party. In contrast, every local model protocol for real summation must incur error $\Omega(1/\sqrt{n})$, and there exist protocols matching this lower bound which require just one bit of communication per party. Whether this gap in number of messages is necessary was left open by Cheu et al. In this note we show a protocol with $O(1/\epsilon)$ error and $O(\log(n/\delta))$ messages of size $O(\log(n))$ per party.
This protocol is based on the work of Ishai et al. (FOCS 2006) showing how to implement distributed summation from secure shuffling, and the observation that this allows simulating the Laplace mechanism in the shuffle model.

Nature, Published online: 22 August 2019; doi:10.1038/d41586-019-02542-3 Wunderkind gene-editing tool used to trigger smart materials that can deliver drugs and sense biological signals.

Raised blood pressure is the most important risk factor in the global burden of disease.1 Although there is robust evidence to show that lowering blood pressure can substantially reduce cardiovascular morbidity and mortality,2 the global burden of hypertension is increasing.3,4 To achieve a reduction in the burden of disease related to hypertension, health systems must ensure that high blood pressure treatment and control rates are achieved. The status of controlled blood pressure is being promoted as a measure of universal health coverage, especially in the context of non-communicable diseases.

Paediatrician who used digital technologies to improve global health. Born on Sept 1, 1948, in Newton, MA, USA, he died while hiking in Alaska, USA, on June 25, 2019, aged 70 years.

One of the basic tenets of evidence-based medicine is that randomisation is crucial to understanding treatment effects. Observational studies are subject to confounding and selection bias. Researchers can adjust for measured differences between treatment groups, but unmeasured or unmeasurable differences might exist between groups that obscure true treatment effects and cannot be accounted for by any statistical method.1 The published medical literature is filled with examples of associations between treatment and outcome identified in observational studies that were subsequently disproven by well conducted randomised controlled trials (RCTs).
We thank the correspondents for their responses to our Comment.1 Mariam Chekhchar and colleagues1 discuss branch retinal artery occlusion in a young woman, probably due to occult cardioembolus from rheumatic mitral stenosis. Despite decreasing incidence in developed nations, rheumatic heart disease remains a major source of preventable morbidity and mortality worldwide2 and we commend the authors for bringing attention to this important clinical entity. However, given this valvulopathy's highly thrombogenic nature, therapeutic anticoagulation should be considered. While convolutional neural network (CNN)-based pedestrian detection methods have proven to be successful in various applications, detecting small-scale pedestrians from surveillance images is still challenging. The major reason is that the small-scale pedestrians lack much detailed information compared to the large-scale pedestrians. To solve this problem, we propose to utilize the relationship between the large-scale pedestrians and the corresponding small-scale pedestrians to help recover the detailed information of the small-scale pedestrians, thus improving the performance of detecting small-scale pedestrians. Specifically, a unified network (called JCS-Net) is proposed for small-scale pedestrian detection, which integrates the classification task and the super-resolution task in a unified framework. As a result, the super-resolution and classification are fully engaged, and the super-resolution sub-network can recover some useful detailed information for the subsequent classification. Based on HOG+LUV and JCS-Net, multi-layer channel features (MCF) are constructed to train the detector. The experimental results on the Caltech pedestrian dataset and the KITTI benchmark demonstrate the effectiveness of the proposed method. To further enhance the detection, multi-scale MCF based on JCS-Net for pedestrian detection is also proposed, which achieves the state-of-the-art performance. 
In this paper, a self-guiding multimodal LSTM (sgLSTM) image captioning model is proposed to handle an uncontrolled imbalanced real-world image-sentence dataset. We collect a FlickrNYC dataset from Flickr as our testbed with 306,165 images and the original text descriptions uploaded by the users are utilized as the ground truth for training. Descriptions in the FlickrNYC dataset vary dramatically ranging from short term-descriptions to long paragraph-descriptions and can describe any visual aspects, or even refer to objects that are not depicted. To deal with the imbalanced and noisy situation and to fully explore the dataset itself, we propose a novel guiding textual feature extracted utilizing a multimodal LSTM (mLSTM) model. Training of mLSTM is based on the portion of data in which the image content and the corresponding descriptions are strongly bonded. Afterward, during the training of sgLSTM on the rest of the training data, this guiding information serves as additional input to the network along with the image representations and the ground-truth descriptions. By integrating these input components into a multimodal block, we aim to form a training scheme with the textual information tightly coupled with the image content. The experimental results demonstrate that the proposed sgLSTM model outperforms the traditional state-of-the-art multimodal RNN captioning framework in successfully describing the key components of the input images.

A fully-parallelized work-time optimal algorithm is presented for computing the exact Euclidean Distance Transform (EDT) of a 2D binary image with the size of $n \times n$. Unlike existing PRAM (Parallel Random Access Machine) and other algorithms, this algorithm is suitable for implementation on modern SIMD (Single Instruction Multiple Data) architectures such as GPUs. As a fundamental operation of 2D EDT, 1D EDT is efficiently parallelized first.
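For reference, the 1D exact distance transform that this abstract builds on can be written as a simple serial two-pass scan (my sketch of the textbook algorithm; the paper's contribution is its CUDA parallelization, which is not shown here):

```python
def edt_1d(row):
    """Exact 1D distance transform: distance to the nearest 0 pixel.
    Serial two-pass reference version of the 1D step."""
    INF = float("inf")
    n = len(row)
    d = [0 if v == 0 else INF for v in row]
    for i in range(1, n):                # forward pass: nearest 0 on the left
        d[i] = min(d[i], d[i - 1] + 1)
    for i in range(n - 2, -1, -1):       # backward pass: nearest 0 on the right
        d[i] = min(d[i], d[i + 1] + 1)
    return d

assert edt_1d([1, 1, 0, 1, 1, 1, 0]) == [2, 1, 0, 1, 2, 1, 0]
```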
Specifically, the GPU algorithm for the 1D EDT, which uses CUDA (Compute Unified Device Architecture) binary functions, such as ballot(), ffs(), clz(), and shfl(), runs in $O(\log_{32} n)$ time and performs $O(n)$ work. Using the 1D EDT as a fundamental operation, the fully-parallelized work-time optimal 2D EDT algorithm is designed. This algorithm consists of three steps. Step 1 of the algorithm runs in $O(\log_{32} n)$ time and performs $O(N)$ ($N = n^{2}$) of total work on GPU. Step 2 performs $O(N)$ of total work and has an expected time complexity of $O(\log n)$ on GPU. Step 3 runs in $O(\log_{32} n)$ time and performs $O(N)$ of total work on GPU. As far as we know, this algorithm is the first fully-parallelized and realized work-time optimal algorithm for GPUs. The experimental results show that this algorithm outperforms the prior state-of-the-art GPU algorithms.

Sonar imagery plays a significant role in oceanic applications since there is little natural light underwater, and light is irrelevant to sonar imaging. Sonar images are very likely to be affected by various distortions during the process of transmission via the underwater acoustic channel for further analysis. At the receiving end, the reference image is unavailable due to the complex and changing underwater environment and our unfamiliarity with it. To the best of our knowledge, one of the important usages of sonar images is target recognition on the basis of contour information. The contour degradation degree for a sonar image is relevant to the distortions contained in it. To this end, we developed a new no-reference contour degradation measurement for perceiving the quality of sonar images. The sparsities of a series of transform coefficient matrices, which are descriptive of contour information, are first extracted as features from the frequency and spatial domains.
The contour degradation degree for a sonar image is then measured by calculating the ratios of extracted features before and after filtering this sonar image. Finally, a bootstrap aggregating (bagging)-based support vector regression module is learned to capture the relationship between the contour degradation degree and the sonar image quality. The results of experiments validate that the proposed metric is competitive with the state-of-the-art reference-based quality metrics and outperforms the latest reference-free competitors. We present a deep architecture and learning framework for establishing correspondences across cross-spectral visible and infrared images in an unpaired setting. To overcome the unpaired cross-spectral data problem, we design the unified image translation and feature extraction modules to be learned in a joint and boosting manner. Concretely, the image translation module is learned only with the unpaired cross-spectral data, and the feature extraction module is learned with an input image and its translated image. By learning two modules simultaneously, the image translation module generates the translated image that preserves not only the domain-specific attributes with separate latent spaces but also the domain-agnostic contents with feature consistency constraint. In an inference phase, the cross-spectral feature similarity is augmented by intra-spectral similarities between the features extracted from the translated images. Experimental results show that this model outperforms the state-of-the-art unpaired image translation methods and cross-spectral feature descriptors on various visible and infrared benchmarks. Top-down saliency detection aims to highlight the regions of a specific object category, and typically relies on pixel-wise annotated training data. 
In this paper, we address the high cost of collecting such training data by a weakly supervised approach to object saliency detection, where only image-level labels, indicating the presence or absence of a target object in an image, are available. The proposed framework is composed of two collaborative CNN modules, an image-level classifier and a pixel-level map generator. While the former distinguishes images with objects of interest from the rest, the latter is learned to generate saliency maps by which the images masked by the maps can be better predicted by the former. In addition to the top-down guidance from class labels, the map generator is derived by also exploring other cues, including the background prior, superpixel- and object proposal-based evidence. The background prior is introduced to reduce false positives. Evidence from superpixels helps preserve sharp object boundaries. The clue from object proposals improves the integrity of highlighted objects. These different types of cues greatly regularize the training process and reduce the risk of overfitting, which happens frequently when learning CNN models with few training data. Experiments show that our method achieves superior results, even outperforming fully supervised methods.

## Thursday, 22 August 2019

### 04:00 PM

Nature, Published online: 21 August 2019; doi:10.1038/s41586-019-1502-y RNA-dependent DEAD-box ATPases (DDXs) regulate the dynamics of phase-separated organelles, with ATP-bound DDXs promoting phase separation, and ATP hydrolysis inducing compartment disassembly and RNA release.

Nature, Published online: 21 August 2019; doi:10.1038/d41586-019-02451-5 The movement of small droplets on a substrate is governed by surface-tension forces. A technique that can tune the surface tension of robust oxide substrates for droplet manipulation could open up many applications.
FADS1 and FADS2 Polymorphisms Modulate Fatty Acid Metabolism and Dietary Impact on Health [Annual Reviews: Annual Review of Nutrition: Table of Contents] Annual Review of Nutrition, Volume 39, Issue 1, Page 21-44, August 2019. Dietary Fuels in Athletic Performance [Annual Reviews: Annual Review of Nutrition: Table of Contents] Annual Review of Nutrition, Volume 39, Issue 1, Page 45-73, August 2019. The Benefits and Risks of Iron Supplementation in Pregnancy and Childhood [Annual Reviews: Annual Review of Nutrition: Table of Contents] Annual Review of Nutrition, Volume 39, Issue 1, Page 121-146, August 2019. Mitochondrial DNA Mutation, Diseases, and Nutrient-Regulated Mitophagy [Annual Reviews: Annual Review of Nutrition: Table of Contents] Annual Review of Nutrition, Volume 39, Issue 1, Page 201-226, August 2019. Time-Restricted Eating to Prevent and Manage Chronic Metabolic Diseases [Annual Reviews: Annual Review of Nutrition: Table of Contents] Annual Review of Nutrition, Volume 39, Issue 1, Page 291-315, August 2019. This randomized clinical trial compares the effect on relapse of continuing olanzapine vs placebo among patients with psychotic depression who achieved remission of psychosis and depressive symptoms while taking olanzapine and sertraline. To the Editor In his Viewpoint, Dr Skolnik discussed the 2018 American College of Cardiology (ACC)/American Heart Association (AHA) guideline on the management of blood cholesterol and its implications for older adults. We would like to highlight relevant features of the guidelines that merit greater recognition. Although various electronic health records (EHRs) have different features, nearly all seem to have alerts for potential problems with drug prescribing. It’s one thing that many believe that EHRs do very well. However, a recent study warns that when it comes to opioids and benzodiazepines, we shouldn’t always assume that such alerts work as intended. 
This Medical News article discusses a recent meta-analysis of oral immunotherapy trials for people with peanut allergies. This Viewpoint argues that the near-universal adoption of electronic fetal monitoring (EFM) in labor and delivery units has occurred without evidence that it has reduced adverse neurological events and has contributed to an increase in US cesarean delivery rates, and calls for the education of physicians and the public about EFM’s demonstrated reliability and value. ## Tuesday, 20 August 2019 ### 04:00 PM Nature, Published online: 20 August 2019; doi:10.1038/d41586-019-02475-x Lisa Feldman Barrett ponders Joseph LeDoux’s study on how conscious brains evolved. The Internet of Things (IoT) is increasingly empowering people with an interconnected world of physical objects ranging from smart buildings to portable smart devices, such as wearables. With recent advances in mobile sensing, wearables have become a rich collection of portable sensors and are able to provide various types of services, including tracking of health and fitness, making financial transactions, and unlocking smart locks and vehicles. Most of these services are delivered based on users' confidential and personal data, which are stored on these wearables. Existing explicit authentication approaches (i.e., PINs or pattern locks) for wearables suffer from several limitations, including small or no displays, risk of shoulder surfing, and users' recall burden. Oftentimes, users completely disable security features out of convenience. Therefore, there is a need for a burden-free (implicit) authentication mechanism for wearable device users based on easily obtainable biometric data. In this paper, we present an implicit wearable device user authentication mechanism using combinations of three types of coarse-grain minute-level biometrics: behavioral (step counts), physiological (heart rate), and hybrid (calorie burn and metabolic equivalent of task). 
From our analysis of over 400 Fitbit users from a 17-month long health study, we are able to authenticate subjects with average accuracy values of around .93 (sedentary) and .90 (non-sedentary) with equal error rates of .05 using binary SVM classifiers. Our findings also show that the hybrid biometrics perform better than other biometrics and behavioral biometrics do not have a significant impact, even during non-sedentary periods. The electroencephalography (EEG) method has recently attracted increasing attention in the study of brain activity-based biometric systems because of its simplicity, portability, noninvasiveness, and relatively low cost. However, due to the low signal-to-noise ratio of EEG, most of the existing EEG-based biometric systems require a long duration of signals to achieve high accuracy in individual identification. Besides, the feasibility and stability of these systems have not yet been conclusively reported, since most studies did not perform longitudinal evaluation. In this paper, we proposed a novel EEG-based individual identification method using code-modulated visual evoked potentials (c-VEPs). Specifically, this paper quantitatively compared eight code-modulated stimulation patterns, including six 63-bit (1.05 s at 60-Hz refresh rate) m-sequences (M1-M6) and two spatially combined sequence groups (M×4: M1-M4 and M×6: M1-M6) in recording the c-VEPs from a group of 25 subjects for individual identification. To further evaluate the influence of inter-session variability, we recorded two data sessions for each individual on different days to measure intra-session and cross-session identification performance. State-of-the-art VEP detection algorithms in brain-computer interfaces (BCIs) were employed to construct a template-matching-based identification framework. For intra-session identification, we achieved a 100% correct recognition rate (CRR) using 5.25-s EEG data (average of five trials for M5).
For cross-session identification, 99.43% CRR was attained using 10.5-s EEG signals (average of ten trials for M5). These results suggest that the proposed c-VEP based individual identification method is promising for real-world applications. ## Monday, 19 August 2019 ### 04:00 PM Nature, Published online: 19 August 2019; doi:10.1038/d41586-019-02452-4 The tip of a scanning tunnelling microscope has been used to convert a molecular assembly into a 2D polymer and back, at room temperature — revealing how extreme environmental conditions can alter the progress of reactions. ## Tuesday, 13 August 2019 ### 11:00 PM Folates are critical for central nervous system function. Folate transport is mediated by 3 major pathways, reduced folate carrier (RFC), proton-coupled folate transporter (PCFT), and folate receptor alpha (FRα/Folr1), known to be regulated by ligand-activated nuclear receptors. Cerebral folate delivery primarily occurs at the choroid plexus through FRα and PCFT;... Environmental conditions are key factors in the progression of plant disease epidemics. Light affects the outbreak of plant diseases, but the underlying molecular mechanisms are not well understood. Here, we report that the light-harvesting complex II protein, LHCB5, from rice is subject to light-induced phosphorylation during infection by the rice... Diverse organisms, from insects to humans, actively seek out sensory information that best informs goal-directed actions. Efficient active sensing requires congruity between sensor properties and motor strategies, as typically honed through evolution. However, it has been difficult to study whether active sensing strategies are also modified with experience. Here, we... ## Monday, 12 August 2019 ### 11:00 PM Although KRAS and TP53 mutations are major drivers of pancreatic ductal adenocarcinoma (PDAC), the incurable nature of this cancer still remains largely elusive. 
ARF6 and its effector AMAP1 are often overexpressed in different cancers and regulate the intracellular dynamics of integrins and E-cadherin, thus promoting tumor invasion and metastasis when... Participatory sensing is a crowdsourcing-based framework, where the platform executes the sensing requests with the help of many common peoples’ handheld devices (typically smartphones). In this paper, we mainly address the online sensing request admission and smartphone selection problem to maximize the profit of the platform, taking into account the queue backlog, and the location of sensing requests and smartphones. First, we formulate this problem as a discrete time model and design a location aware online admission and selection control algorithm (LAAS) based on the Lyapunov optimization technique. The LAAS algorithm only depends on the currently available information and makes all the control decisions independently and simultaneously. Next, we utilize the recent advancement of the accurate prediction of smartphones’ mobility and sensing request arrival information in the next few time slots and develop a predictive location aware admission and selection control algorithm (PLAAS). We further design a greedy predictive location aware admission and selection control algorithm (GPLAAS) to achieve the online implementation of PLAAS approximately and iteratively. Theoretical analysis shows that under any control parameter V > 0, both LAAS and PLAAS algorithm can achieve O(1/V)-optimal average profit, while the sensing request backlog is bounded by O(V). Extensive numerical results based on both synthetic and real trace show that LAAS outperforms the Greedy algorithm and Random algorithm and GPLAAS improves the profit-backlog tradeoff over LAAS. This paper presents an energy management method to optimally control the energy supply and the temperature settings of distributed heating and ventilation systems for residential buildings. 
The control model attempts to schedule the supply and demand simultaneously with the purpose of minimizing the total costs. Moreover, the Predicted Percentage of Dissatisfied (PPD) model is introduced into the consumers’ cost functions and the quadratic fitting method is applied to simplify the PPD model. An energy management algorithm is developed to seek the optimal temperature settings, the energy supply, and the price. Furthermore, due to the ubiquity of price oscillations in electricity markets, we analyze and examine the effects of price oscillations on the performance of the proposed algorithm. Finally, the theoretical analysis and simulation results both demonstrate that the proposed energy management algorithm with price oscillations can converge to a region around the optimal solution. The deployment of smart hybrid heat pumps (SHHPs) can introduce considerable benefits to electricity systems via smart switching between electricity and gas while minimizing the total heating cost for each individual customer. In particular, the fully optimized control technology can provide flexible heat that redistributes the heat demand across time for improving the utilization of low-carbon generation and enhancing the overall energy efficiency of the heating system. To this end, an accurate quantification of the preheating is of great importance to characterize the flexible heat. This paper proposes a novel data-driven preheating quantification method to estimate the capability of the heat pump demand shifting and isolate the effect of interventions. Varieties of fine-grained data from a real-world trial are exploited to estimate the baseline heat demand using Bayesian deep learning while jointly considering epistemic and aleatoric uncertainties. 
A comprehensive range of case studies are carried out to demonstrate the superior performance of the proposed quantification method, and then, the estimated demand shift is used as an input into the whole-system model to investigate the system implications and quantify the range of benefits of rolling out the SHHPs developed by PassivSystems to the future GB electricity systems. Obtaining an appropriate model is very crucial to develop an efficient energy management system for the smart home, including photovoltaic (PV) array, plug-in electric vehicle (PEV), home loads, and heat pump (HP). Stochastic modeling methods of smart homes explain random parameters and uncertainties of the aforementioned components. In this paper, a concise yet comprehensive analysis and comparison are presented for these techniques. First, modeling methods are implemented to find appropriate and precise forecasting models for PV, PEV, HP, and home load demand. Then, the accuracy of each model is validated by the real measured data. Finally, the pros and cons of each method are discussed and reviewed. The obtained results show the conditions under which the methods can provide a reliable and accurate description of smart home dynamics. Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication. ## Thursday, 08 August 2019 ### 04:00 PM Machine Learning for Sociology [Annual Reviews: Annual Review of Sociology: Table of Contents] Annual Review of Sociology, Volume 45, Issue 1, Page 27-45, July 2019. The Role of Space in the Formation of Social Ties [Annual Reviews: Annual Review of Sociology: Table of Contents] Annual Review of Sociology, Volume 45, Issue 1, Page 111-132, July 2019. The Social Structure of Time: Emerging Trends and New Directions [Annual Reviews: Annual Review of Sociology: Table of Contents] Annual Review of Sociology, Volume 45, Issue 1, Page 301-320, July 2019. 
Retail Sector Concentration, Local Economic Structure, and Community Well-Being [Annual Reviews: Annual Review of Sociology: Table of Contents] Annual Review of Sociology, Volume 45, Issue 1, Page 321-343, July 2019. Well-Being at the End of Life [Annual Reviews: Annual Review of Sociology: Table of Contents] Annual Review of Sociology, Volume 45, Issue 1, Page 515-534, July 2019. Clothing and carrying status variations are the two key factors that affect the performance of gait recognition because people usually wear various clothes and carry all kinds of objects, while walking in their daily life. These covariates substantially affect the intensities within conventional gait representations such as gait energy images. Hence, to properly compare a pair of input gait features, an appropriate metric for joint intensity is needed in addition to the conventional spatial metric. We therefore propose a unified joint intensity transformer network for gait recognition that is robust against various clothing and carrying statuses. Specifically, the joint intensity transformer network is a unified deep learning-based architecture containing three parts: a joint intensity metric estimation net, a joint intensity transformer, and a discrimination network. First, the joint intensity metric estimation net uses a well-designed encoder-decoder network to estimate a sample-dependent joint intensity metric for a pair of input gait energy images. Subsequently, a joint intensity transformer module outputs the spatial dissimilarity of two gait energy images using the metric learned by the joint intensity metric estimation net. Third, the discrimination network is a generic convolution neural network for gait recognition. In addition, the joint intensity transformer network is designed with different loss functions depending on the gait recognition task (i.e., a contrastive loss function for the verification task and a triplet loss function for the identification task). 
The experiments on the world’s largest datasets containing various clothing and carrying statuses demonstrate the state-of-the-art performance of the proposed method. At present, the fusion of different unimodal biometrics has attracted increasing attention from researchers, who are dedicated to the practical application of biometrics. In this paper, we explored a multi-biometric algorithm that integrates palmprints and dorsal hand veins (DHV). Palmprint recognition has a rather high accuracy and reliability, and the most significant advantage of DHV recognition is the biopsy (Liveness detection). In order to combine the advantages of both and implement the fusion method, deep learning and graph matching were, respectively, introduced to identify palmprint and DHV. Upon using the deep hashing network (DHN), biometric images can be encoded as 128-bit codes. Then, the Hamming distances were used to represent the similarity of two codes. Biometric graph matching (BGM) can obtain three discriminative features for classification. In order to improve the accuracy of open-set recognition, in multi-modal fusion, the score-level fusion of DHN and BGM was performed and authentication was provided by support vector machine (SVM). Furthermore, based on DHN, all four levels of fusion strategies were used for multi-modal recognition of palmprint and DHV. Evaluation experiments and comprehensive comparisons were conducted on various commonly used datasets, and the promising results were obtained in this case where the equal error rates (EERs) of both palmprint recognition and multi-biometrics equal 0, demonstrating the great superiority of DHN in biometric verification. This paper proposes an explicit predictive current control scheme implemented with a low carrier frequency pulsewidth modulation (PWM) on an induction machine fed by a three-level neutral point clamped inverter. 
The PWM carrier and the main current sampling frequency are both set to 1 kHz, resulting in a 500 Hz average switching frequency per device, which is very suitable for large drive applications. The explicit predictive control is introduced to optimize the available bandwidth provided by such a low sampling frequency, maximizing the dynamic performance. The strategy has been tested in a 2.2-kW induction motor experimental prototype. AC–DC light-emitting diode (LED) drivers suffer from short lifetime because of the low-lifetime electrolytic capacitors used for dc bus decoupling. In this paper, a primary-side peak current control method applied for driving a two-stage multichannel LED driver is proposed. The LED driver consists of an ac–dc boost power factor correction stage and an isolated dc–dc nonresonant stage. A long-lifetime and small film capacitor is used for implementing the intermediate dc bus. The proposed method, which controls the peak value of the primary-side current of the transformers, is applied to the dc–dc stage to ensure constant dc current output of LEDs in spite of the widely varying dc bus voltage due to low bus capacitance. The proposed method compensates the effect of the large dc bus voltage ripple by varying the switching frequency of the primary-side switches. Detailed design procedure, theoretical analysis, and experimental results of the LED driver operating at 180 W with the proposed method are provided. The LED driver with the proposed control method is proved to have high overall efficiency. The objective of this paper is to develop a method for assisting users to push power-assisted wheelchairs (PAWs) in such a way that the electrical energy consumption over a predefined distance-to-go is optimal, while at the same time bringing users to a desired fatigue level. This assistive task is formulated as an optimal control problem and solved by Feng et al. 
using the model-free approach gradient of partially observable Markov decision processes. To increase the data efficiency of the model-free framework, we here propose to use policy learning by weighting exploration with the returns (PoWER) with 25 control parameters. Moreover, we provide a new near-optimality analysis of the finite-horizon fuzzy Q-iteration, which derives a model-based baseline solution to verify numerically the near-optimality of the presented model-free approaches. Simulation results show that the PoWER algorithm with the new parameterization converges to a near-optimal solution within 200 trials and possesses the adaptability to cope with changes of the human fatigue dynamics. Finally, 24 experimental trials are carried out on the PAW system, with fatigue feedback provided by the user via a joystick. The performance tends to increase gradually after learning. The results obtained demonstrate the effectiveness and the feasibility of PoWER in our application. Recent years have witnessed the promising future of hashing in the industrial applications for fast similarity retrieval. In this paper, we propose a novel supervised hashing method for large-scale cross-media search, termed self-supervised deep multimodal hashing (SSDMH), which learns unified hash codes as well as deep hash functions for different modalities in a self-supervised manner. With the proposed regularized binary latent model, unified binary codes can be solved directly without relaxation strategy while retaining the neighborhood structures by the graph regularization term. Moreover, we propose a new discrete optimization solution, termed as binary gradient descent, which aims at improving the optimization efficiency toward real-time operation. Extensive experiments on three benchmark data sets demonstrate the superiority of SSDMH over state-of-the-art cross-media hashing approaches. These instructions give guidelines for preparing papers for this publication. 
Presents information for authors publishing in this journal. ## Monday, 29 July 2019 ### 04:00 PM Ubiquitin (Ub)-mediated proteolysis is a fundamental mechanism used by eukaryotic cells to maintain homeostasis and protein quality, and to control timing in biological processes. Two essential aspects of Ub regulation are conjugation through E1-E2-E3 enzymatic cascades and recognition by Ub-binding domains. An emerging theme in the Ub field is that... ## Wednesday, 17 July 2019 ### 04:00 PM Researchers at Boston Children's Hospital report creating the first human tissue model of an inherited heart arrhythmia, replicating two patients' abnormal heart rhythms in a dish, and then suppressing the arrhythmia with gene therapy in a mouse model. Women tend to have a greater immune response to a flu vaccination compared to men, but their advantage largely disappears as they age and their estrogen levels decline, suggests a study from researchers at the Johns Hopkins Bloomberg School of Public Health. ## Tuesday, 16 July 2019 ### 04:00 PM Cryptococcus neoformans is a fungal pathogen that infects people with weakened immune systems, particularly those with advanced HIV/AIDS. New University of Minnesota Medical Research could mean a better understanding of this infection and potentially better treatments for patients. In a massive new analysis of findings from 277 clinical trials using 24 different interventions, Johns Hopkins Medicine researchers say they have found that almost all vitamin, mineral and other nutrient supplements or diets cannot be linked to longer life or protection from heart disease. A new study led by Dr. Antonella Fioravanti in the lab of Prof. Han Remaut (VIB-VUB Center for Structural Biology) has shown that removing the armor of the bacterium that causes anthrax slows its growth and negatively affects its ability to cause disease. 
This work, to be published in the prestigious journal Nature Microbiology, can lead the way to new, effective ways of fighting anthrax and various other diseases.

## Sunday, 09 June 2019

### 07:32 PM

The Economics and Politics of Preferential Trade Agreements [Annual Reviews: Annual Review of Political Science: Table of Contents] Annual Review of Political Science, Volume 22, Issue 1, Page 75-92, May 2019. The Politics of Housing [Annual Reviews: Annual Review of Political Science: Table of Contents] Annual Review of Political Science, Volume 22, Issue 1, Page 165-185, May 2019. Bias and Judging [Annual Reviews: Annual Review of Political Science: Table of Contents] Annual Review of Political Science, Volume 22, Issue 1, Page 241-259, May 2019. Climate Change and Conflict [Annual Reviews: Annual Review of Political Science: Table of Contents] Annual Review of Political Science, Volume 22, Issue 1, Page 343-360, May 2019. Annual Review of Political Science, Volume 22, Issue 1, Page 399-417, May 2019.

## Tuesday, 12 February 2019

### 04:00 PM

Cysteine-Based Redox Sensing and Its Role in Signaling by Cyclic Nucleotide–Dependent Kinases in the Cardiovascular System [Annual Reviews: Annual Review of Physiology: Table of Contents] Annual Review of Physiology, Volume 81, Issue 1, Page 63-87, February 2019. Biomarkers of Acute and Chronic Kidney Disease [Annual Reviews: Annual Review of Physiology: Table of Contents] Annual Review of Physiology, Volume 81, Issue 1, Page 309-333, February 2019. Cellular Metabolism in Lung Health and Disease [Annual Reviews: Annual Review of Physiology: Table of Contents] Annual Review of Physiology, Volume 81, Issue 1, Page 403-428, February 2019. Innate Lymphoid Cells of the Lung [Annual Reviews: Annual Review of Physiology: Table of Contents] Annual Review of Physiology, Volume 81, Issue 1, Page 429-452, February 2019.
Regulation of Blood and Lymphatic Vessels by Immune Cells in Tumors and Metastasis [Annual Reviews: Annual Review of Physiology: Table of Contents] Annual Review of Physiology, Volume 81, Issue 1, Page 535-560, February 2019. ## Thursday, 09 August 2018 ### 04:00 PM Sorting in the Labor Market [Annual Reviews: Annual Review of Economics: Table of Contents] Annual Review of Economics, Volume 10, Issue 1, Page 1-29, August 2018. Radical Decentralization: Does Community-Driven Development Work? [Annual Reviews: Annual Review of Economics: Table of Contents] Annual Review of Economics, Volume 10, Issue 1, Page 139-163, August 2018. The Development of the African System of Cities [Annual Reviews: Annual Review of Economics: Table of Contents] Annual Review of Economics, Volume 10, Issue 1, Page 287-314, August 2018. Idea Flows and Economic Growth [Annual Reviews: Annual Review of Economics: Table of Contents] Annual Review of Economics, Volume 10, Issue 1, Page 315-345, August 2018. Progress and Perspectives in the Study of Political Selection [Annual Reviews: Annual Review of Economics: Table of Contents] Annual Review of Economics, Volume 10, Issue 1, Page 541-575, August 2018. 
Buoyancy is the force directed against gravity that an object experiences when submerged in a fluid (liquid or gas).

## Introduction

Everyone may have tried to lift another person and found that this requires a lot of strength. However, if you try to lift this person in water, it is much easier. The reason for this is the so-called buoyancy, which an object experiences as soon as it is submerged in a liquid. This buoyant force is also the reason why even steel ships weighing tons do not sink but float on the water. The cause of buoyancy is discussed in more detail in this article.

## Demonstration of buoyancy

The following experiment demonstrates the effect of the buoyant force. A spring scale (newton meter) is attached to a metal cuboid. Without touching the bottom, the piece of metal is gradually submerged in a glass of water and the newton meter is observed. Once the metal piece has reached the water, the value indicated by the newton meter decreases steadily with increasing immersion depth. Only when the cuboid is completely submerged in water does the spring scale show a constant value again.

The decreasing force has nothing to do with a decreasing weight, because the mass of the metal block does not change. Rather, the buoyant force acting against gravity increases with increasing immersion depth. The buoyant force corresponds to the amount by which the body appears to have become lighter in water.

The more an object is submerged in a liquid, the greater the buoyancy acting on it! The buoyant force is always directed in the opposite direction to gravity!

## The buoyancy: The Archimedes’ principle

The scientist Archimedes experimented with the phenomenon of buoyancy as early as 250 BC. He was able to show that the buoyant force by which a submerged body appears to become lighter corresponds to the weight of the displaced liquid.
The term displaced liquid refers to the amount of liquid that has to give way to the body when it is submerged. This is the amount of liquid that would theoretically overflow from a glass filled to the brim when the body is submerged. The weight of this overflowed liquid then corresponds to the buoyant force. This statement is also called the Archimedes’ principle.

The Archimedes’ principle states that the buoyant force corresponds to the weight of the displaced liquid!

When an object is completely submerged in a liquid, the volume of the displaced liquid obviously corresponds to the volume of the immersed body. If, for example, a 54 g metal cuboid made of aluminium has a square base area of 4 cm² and a height of 5 cm, this results in a volume of 20 cm³ (20 ml). Consequently, when completely submerged in water, the cuboid displaces a liquid volume of 20 ml. At a water density of 1 g per cm³, this corresponds to a displaced water mass of 20 g. The 54 g metal cuboid therefore feels 20 g lighter under water. A spring scale would therefore indicate only 340 mN instead of 540 mN.

Note that submerging the body does not change its weight; rather, a buoyant force now acts against the weight, which leads to a reduced resultant force. It is therefore advisable to argue not with the masses (even if this is more descriptive), but with the forces! If the weight of the body is denoted by $$F_g$$ and the counteracting buoyant force by $$F_b$$, then the resultant force $$F_{res}$$ that the body experiences is:

\begin{align} \label{res} &\boxed{F_{res} = F_g - F_b} \\[5px] \end{align}

If the metal block is not completely submerged in the liquid, but only partially, then it obviously does not displace as much water. A body displaces only as much fluid as the volume of the body that is actually submerged. If only half of the body volume is submerged, the body displaces only half of the water. Accordingly, the buoyancy is only half as great.
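The numbers in this example can be verified with a short script; a minimal sketch, assuming the rounded gravity g = 10 m/s² that the article's values imply (54 g weighing 540 mN):

```python
g = 10.0                # m/s^2, rounded as the article's values imply (54 g -> 540 mN)
rho_water = 1000.0      # kg/m^3 (= 1 g/cm^3)

m_body = 0.054          # kg: mass of the aluminium cuboid
V_body = 4e-4 * 0.05    # m^3: 4 cm^2 base area x 5 cm height = 20 cm^3

F_g = m_body * g                    # weight of the cuboid: 0.54 N
F_b_full = V_body * rho_water * g   # buoyant force when fully submerged: 0.20 N
F_res = F_g - F_b_full              # what the spring scale indicates: 0.34 N

print(round(F_g, 3), round(F_b_full, 3), round(F_res, 3))  # 0.54 0.2 0.34

# A body only half submerged displaces half the water,
# so the buoyant force is only half as great:
F_b_half = (V_body / 2) * rho_water * g
print(F_b_half / F_b_full)  # 0.5
```

With the exact g = 9.81 m/s² the same script gives 530 mN, 196 mN and 334 mN; the rounded values quoted in the text correspond to g = 10 m/s².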
If $$\Delta V$$ denotes the submerged body volume (= displaced liquid volume) and $$\rho_l$$ the density of the liquid, then the mass $$\Delta m$$ of the displaced liquid can be calculated as follows:

\begin{align} &\Delta m = \Delta V \cdot \rho_l \\[5px] \end{align}

For the buoyant force $$F_b$$ as the weight of the displaced liquid, we finally obtain:

\begin{align} &F_b = \Delta m \cdot g \\[5px] \label{arch} &\boxed{F_b = \Delta V \cdot \rho_l \cdot g} \\[5px] \end{align}

### Derivation of the buoyant force

The buoyancy is due to the different hydrostatic pressures at the top and bottom of a submerged body. For the sake of simplicity, a cuboid object is again considered, which is completely submerged in the surrounding liquid. In the article Pressure In Liquids, the cause of liquid pressure has already been explained in detail. It results only from the depth below the liquid surface. The deeper a point lies below the liquid surface, the greater the liquid pressure and the resulting force. In this way, the upward acting force on the bottom of the body is greater than the downward acting force on the top. Thus a force effectively acts upwards: the buoyant force!

The liquid pressure at the bottom of the object is determined from the depth $$h_2$$ as follows:

\begin{align} &p_2 = \rho_l \cdot g \cdot h_2 \\[5px] \end{align}

In this equation, $$\rho_l$$ denotes the density of the liquid. Analogously, for the hydrostatic pressure at the depth $$h_1$$ at the top of the cuboid, applies:

\begin{align} &p_1 = \rho_l \cdot g \cdot h_1 \\[5px] \end{align}

The respective forces at the bottom and top side of the cuboid are determined according to the Definition Of Pressure by the product of pressure and surface area ($$F=p \cdot A$$).
The surface area in this case is the base area $$A$$ of the cuboid:

\begin{align} &\underline{F_2 = \rho_l \cdot g \cdot h_2 \cdot A} ~~~~~\text{or}~~~~~ \underline{F_1 = \rho_l \cdot g \cdot h_1 \cdot A} \\[5px] \end{align}

The buoyant force $$F_b$$, with which the body is effectively pushed upwards, results from the difference of the forces:

\begin{align} &F_b = F_2 - F_1 \\[5px] &F_b = \rho_l \cdot g \cdot h_2 \cdot A - \rho_l \cdot g \cdot h_1 \cdot A \\[5px] \label{d} &F_b = \rho_l \cdot g \cdot A \cdot \left(h_2-h_1\right) \\[5px] \end{align}

The difference in the depths corresponds exactly to the height $$h$$ of the cuboid. Furthermore, we can use the fact that the product of height and base area corresponds to the volume $$V_b$$ of the submerged body:

\begin{align} &F_b = \rho_l \cdot g \cdot A \cdot \underbrace{\left(h_2-h_1\right)}_{=h} \\[5px] &F_b = \rho_l \cdot g \cdot \underbrace{A \cdot h}_{=V_b} \\[5px] \label{ein} &\boxed{F_b = V_b \cdot \rho_l \cdot g}~~~~~\text{buoyant force with complete immersion} \\[5px] \end{align}

Note that the depth at which the object is located is of no importance to the buoyant force. From equation (\ref{d}) it is already clear that only the difference in depth between top and bottom is relevant, i.e. the height of the object*. In combination with the base area of the object, only the dependence on its volume results from this. For simplicity's sake, this formula was derived for a cuboid, but it applies in principle to a body of any shape as long as its volume $$V_b$$ is completely submerged in the liquid (a more general derivation of the buoyant force, which also takes arbitrarily shaped bodies into account, is shown in the next section "Derivation of the Archimedes' principle").

*) For this reason, the ambient pressure on the surface of the liquid, which normally acts in addition to the hydrostatic pressure, is also irrelevant.
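The pressure-difference derivation can be verified numerically: the force difference between the bottom and top faces reproduces $$V \cdot \rho_l \cdot g$$ regardless of the chosen depth. A sketch, using the 4 cm² × 5 cm cuboid from the earlier example and an arbitrary immersion depth:

```python
rho_l = 1000.0   # liquid density in kg/m^3 (water)
g = 9.81         # gravitational acceleration in m/s^2
A = 4e-4         # base area in m^2 (4 cm^2)
h1 = 0.10        # depth of the top face in m (arbitrary choice)
h2 = h1 + 0.05   # depth of the bottom face (cuboid height 5 cm)

F1 = rho_l * g * h1 * A   # downward force on the top face
F2 = rho_l * g * h2 * A   # upward force on the bottom face
F_b = F2 - F1             # net upward force = buoyancy

V = A * (h2 - h1)         # submerged volume
# The pressure difference gives exactly V * rho_l * g, independent of h1:
assert abs(F_b - V * rho_l * g) < 1e-12
print(F_b)                # about 0.196 N
```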
This is because the ambient pressure acts equally on both the top and the bottom of the body, so its contributions cancel each other out.

If an object is not completely submerged in a liquid (as was the case in the previous derivation) but is only partially submerged, then the volume $$V_b$$ refers only to the actually submerged part of the body volume $$\Delta V$$ (= displaced liquid volume). The buoyant force is then generated exclusively by the hydrostatic pressure at the bottom of the body:

\begin{align} F_b &=p \cdot A \\[5px] &= \rho_l \cdot g \cdot \underbrace{h \cdot A}_{\Delta V} \\[5px] \end{align}

\begin{align} &\boxed{F_b = \Delta V \cdot \rho_l \cdot g} ~~~~~\text{applies in general} \\[5px] \end{align}

At this point one can now also see the Archimedes' principle. In the equation above, the product of displaced liquid volume $$\Delta V$$ and liquid density $$\rho_l$$ can be interpreted as the mass of the displaced liquid. Furthermore, the product of displaced liquid mass $$\Delta m$$ and gravitational acceleration $$g$$ results in the weight of the displaced liquid $$F_{g,dis}$$:

\begin{align} &F_b = \underbrace{\Delta V \cdot \rho_l}_{\Delta m} \cdot g \\[5px] &F_b = \underbrace{\Delta m \cdot g}_{F_{g,dis}} \\[5px] &\boxed{F_b = F_{g,dis}} \\[5px] \end{align}

### Derivation of the Archimedes' principle for arbitrarily shaped bodies

The derivation of the buoyant force in the previous section was based on an object with a relatively simple geometry, on which the acting forces could be calculated relatively easily. That the derived formula is not restricted to such simply shaped objects, and that the Archimedes' principle applies to arbitrarily shaped bodies, will be shown in the following. For this purpose a vessel filled with water is considered. In the article Pressure In Liquids it has already been explained in detail that the hydrostatic pressure in a liquid is caused by the weight of the liquid column above it.
If, for example, the pressure at the bottom of the left vessel is considered, the liquid pressure at the bottom results from the weight of the water mass above (the object has not yet been submerged). If one now immerses an arbitrarily shaped object into the water, then it experiences a certain buoyant force. According to Newton's third law ("action = reaction"), the buoyant force exerted by the water on the object corresponds to the force that the object additionally exerts on the water when the situation is viewed from the opposite perspective (i.e. from the water's point of view)!

The force on the bottom of the vessel thus results from the sum of the weight of the water $$F_{g,water}$$ and the buoyant force $$F_b$$:

\begin{align} \label{fa} &F_{bottom} = F_{g,water} + F_b \\[5px] \end{align}

Note that if the submerged body floats in the liquid, the buoyant force is obviously equal to the weight of the body (otherwise the object would sink to the ground). In this case it becomes clear that not only the weight of the liquid but also the weight of the floating object acts on the bottom of the vessel. In the general case of a non-floating object (as in the case of the metal cuboid considered above, which was immersed in water by means of a spring scale), however, not the entire weight of the body is applied to the water, but only the weight minus the force with which the object is held. This difference corresponds exactly to the buoyant force (see also the figure Demonstration of the Archimedes' principle)! Therefore, the resultant force acting on the bottom of the vessel generally results from the sum of the weight of the liquid column and the buoyant force of the submerged object.

In the article Pressure In Liquids it has already been explained in detail that the hydrostatic pressure results only from the considered depth below the water surface.
Regarding the pressure at the bottom of the vessel, the water with the submerged object behaves in the same way as a vessel that is only filled with water and thereby has the same water level (principle of communicating vessels) – see the two vessels on the right in the figure above. One can thus imagine the submerged body volume as filled with water; this would obviously have the same effect on the bottom of the vessel. With this perspective, the force acting on the bottom of the vessel results from the sum of the water weight outside the imaginary immersion volume ($$F_{g,water}$$) and the water weight inside the imaginary immersion volume ($$F_{g,dis}$$). The latter corresponds to the weight of the water which the submerged object displaces in the previous perspective. For the second approach, it therefore applies:

\begin{align} \label{fb} &F_{bottom} = F_{g,water} + F_{g,dis} \\[5px] \end{align}

Since both approaches obviously lead to the same force on the bottom of the vessel, equations (\ref{fa}) and (\ref{fb}) can be equated:

\begin{align} \require{cancel} &\bcancel{F_{g,water}} + F_b = \bcancel{F_{g,water}} + F_{g,dis} \\[5px] &\boxed{F_b = F_{g,dis}} \\[5px] \end{align}

This shows that the buoyancy corresponds directly to the weight of the displaced liquid, regardless of how the submerged object is shaped!

## Sinking, rising and floating

Whether a fully submerged object sinks, rises or floats at a given buoyancy depends on the weight of the object. If the weight of a body is greater than the buoyancy, then according to equation (\ref{res}) it will descend towards the ground with the difference of the forces. This corresponds to the force indicated by the spring balance when the object is attached to it. If, on the other hand, the buoyancy of a submerged object is greater than its weight, then it will ascend to the surface with the difference of the forces. To display this resultant force, the spring balance would have to be attached to the object from below.
However, if the buoyancy is equal to the weight, the body will appear to float "weightless" in the liquid. An attached spring balance would not indicate a resultant force. This apparent weightlessness in liquids is used, for example, to prepare astronauts for space missions.

For a homogeneous object, its weight can be determined from the body volume $$V_b$$ and the density of the body $$\rho_b$$:

\begin{align} &F_g = \overbrace{V_b \cdot \rho_b}^{m_b} \cdot g \\[5px] \end{align}

If at this point the buoyant force according to equation (\ref{ein}) is used, then due to equation (\ref{res}) the following resultant force acts on the completely immersed object:

\begin{align} &F_{res} = F_g - F_b \\[5px] &F_{res} = V_b \cdot \rho_b \cdot g - V_b \cdot \rho_l \cdot g \\[5px] \label{auf} &\boxed{F_{res} = V_b \cdot g \cdot \left( \rho_b - \rho_l \right)} ~~~\text{resultant force at full immersion}\\[5px] \end{align}

Using this formula, the conditions for descending, ascending or floating can now be clearly explained. If the density of the submerged body is greater than that of the surrounding liquid, a positive force results which drags the body towards the ground. If, on the other hand, the density of the body is less than that of the liquid, the result is a negative force. This means that the direction of the force is reversed and the submerged object is pushed towards the surface. Only in the case that the density of the body corresponds exactly to the density of the liquid does the resultant force vanish. The body then seems to float force-free in the liquid.

These considerations, which assumed homogeneous bodies, can also be extended to inhomogeneous objects, i.e. in particular to objects consisting of different materials and thus different densities. The density $$\rho_b$$ of an inhomogeneous body then refers to the mean density, i.e.
to the average density which one obtains mathematically by dividing the total mass of the body $$m_b$$ by its total volume $$V_b$$:

\begin{align} &\boxed{\rho_b = \frac{m_b}{V_b}} ~~~~~\text{mean density} \\[5px] \end{align}

If the mean density of an immersed object is less than that of the surrounding liquid, the object floats to the surface. If the mean density is greater, the object sinks to the bottom. If the densities are the same, the object floats in the liquid.

This also explains why even steel ships weighing tons can float. The average density of a ship is lower than that of the surrounding water. This is achieved by the fact that a ship's hull is not a massive steel body, but only a steel shell. The interior consists mainly of air. In relation to the volume of the hull, the ship has a relatively low mass and thus a low mean density, at least a significantly lower (average) density than the surrounding water. Thus the ship's hull ensures that, if it is submerged too much, a large buoyant force is generated which keeps the entire ship above water. If, on the other hand, water penetrates into the hull, the relatively light air gives way to the penetrating heavy water and the mean density increases. If the mean density becomes greater than that of the surrounding water (at the latest when the entire hull is full of water), then the ship will sink.

A targeted control of the mean density of a floating body by means of air and water can be found, for example, in submarines. In this way, targeted descending and ascending as well as floating in water is made possible. Depending on the manoeuvre, either water or air is pumped into special ballast tanks. During descending, for example, the air-filled tanks are flooded with water, so that the mean density of the submarine is greater than that of the surrounding water. When the submarine ascends, however, the water in the tanks is pushed out with the aid of compressed air.
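The density comparison above can be summed up in a small helper function. This is a sketch; the density values in the example calls are rough illustrative figures, not data from the article.

```python
def buoyancy_behaviour(mean_density_body, density_liquid):
    """Classify a fully submerged body as sinking, rising or floating,
    based on its mean density relative to the surrounding liquid."""
    if mean_density_body > density_liquid:
        return 'sinks'    # positive resultant force, directed downwards
    if mean_density_body < density_liquid:
        return 'rises'    # negative resultant force, directed upwards
    return 'floats'       # resultant force vanishes

print(buoyancy_behaviour(7870.0, 1000.0))   # solid steel in water: 'sinks'
print(buoyancy_behaviour(500.0, 1000.0))    # hull filled mostly with air: 'rises'
print(buoyancy_behaviour(1000.0, 1000.0))   # neutrally buoyant: 'floats'
```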
The mean density of the submarine drops and the boat finally ascends. When floating in water, the tanks are only partially filled with water or air, so that the mean density corresponds exactly to that of the surrounding water.

The fact that substances with lower densities than the surrounding medium rise upwards, while substances with higher densities sink downwards, also plays a major role in ocean currents. Among other things, these currents are due to the fact that cold and thus heavy water sinks downwards, while warmer and thus lighter water rises upwards. However, these differences in density are not only caused by temperature influences but also by the salt content. The density is higher in waters with a high salt content than in less salty regions.

## Immersion depth (draft)

When objects ascend in a liquid, experience shows that they do not emerge completely out of the liquid. A certain part will remain below the liquid surface, while the rest floats above the surface. An everyday example that illustrates this are ships, whose hulls are obviously only partially submerged in the water. The question arises, of course, how to determine this depth of immersion, which in the case of ships is also referred to as draught or draft.

If an object floats, it obviously neither sinks nor rises. Consequently, there is no resultant force acting on the object, so there is a balance of forces between the downward acting weight and the upward acting buoyancy:

\begin{align} &F_{res} = F_g - F_b \overset{!}{=}0 \\[5px] &\underline{F_b = F_g} \\[5px] \end{align}

The weight is therefore just as great as the buoyancy. According to the Archimedes' principle, the buoyancy itself corresponds to the weight of the displaced liquid. So when an object floats on the surface, it will submerge until the weight of the displaced liquid (= buoyancy) corresponds to the weight of the object.
If one imagines the volume of the object below the liquid surface to be completely filled with the surrounding liquid, the weight of this liquid corresponds to the weight of the object. A ship with a mass of 50,000 tons, for example, will sink so deep that the submerged volume displaces 50,000 tons of water.

When floating, the object submerges to such a depth that it displaces as much liquid as it is heavy!

The immersion depth of an object therefore depends not only on its own mass, but also on the density of the surrounding liquid. For example, a ship will have a stronger draft in fresh water than in sea water, i.e. it will submerge deeper. Because of the dissolved salt, seawater has a density that is about 3 % higher than that of freshwater. The ship must therefore immerse more deeply in the "lighter" freshwater in order to displace the same mass of water as in the "heavier" salt water.

For ships, the maximum permissible draught is indicated by a so-called Plimsoll mark, depending on the surrounding water (density). This mark is located on the side of the ship's hull. The upper two lines towards the stern indicate the permitted draft in general freshwater (F) or tropical freshwater (TF). The other four lines towards the bow indicate the permitted draft in saltwater. These are located lower in comparison to the marks for freshwater, as the ship is more buoyant in saltwater due to the higher water density. A distinction is made between tropical seawater (T), seawater in summer (S) and winter (W), and the waters of the North Atlantic in winter (WNA).

Plimsoll marks on ships indicate the permitted drafts depending on the surrounding water (density)!

This example of the Plimsoll mark also shows that the "heavier" the surrounding liquid is, i.e. the greater the density of the liquid, the stronger the buoyancy. This can also be seen directly from equation (\ref{arch}), in which the liquid density directly influences the buoyant force.
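For a floating body, equating weight and buoyancy gives a simple rule: the submerged volume fraction equals the ratio of the body's mean density to the liquid density. A sketch of this relation; the ship density of 500 kg/m³ is an assumed illustrative value, and 1030 kg/m³ stands for seawater being about 3 % denser than freshwater:

```python
def submerged_fraction(mean_density_body, density_liquid):
    """Fraction of the body volume below the surface when floating,
    from the force balance F_b = F_g."""
    frac = mean_density_body / density_liquid
    if frac > 1.0:
        raise ValueError("body denser than liquid: it sinks instead of floating")
    return frac

rho_ship = 500.0   # assumed mean density of a ship's hull in kg/m^3
print(submerged_fraction(rho_ship, 1000.0))   # freshwater: 0.5
print(submerged_fraction(rho_ship, 1030.0))   # seawater: ~0.485, a smaller draft
```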
This fact can also be experienced when bathing in the Dead Sea. Due to the very high salt content of more than 30 %, the density of the water in the Dead Sea is about a quarter higher than that of freshwater. Consequently, the buoyancy there is also about 25 % greater than in freshwater. This leads to the fact that one floats in the Dead Sea without the need to swim.

## Outlook

In this article, liquids were considered for the sake of clarity, but buoyant forces act not only in liquids but also in gases, and they are ultimately based on the same cause. In the article Buoyancy In Gases this is discussed in more detail.
https://stats.stackexchange.com/questions/127155/guessing-the-length-of-a-fish/127168
# Guessing the length of a fish

I would like to solve the following exercise. Any help is appreciated.

90% of the fish in our pond are males, the rest are females. The length of the males is $X+5$ inches, where $X\sim exp(1)$. The length of the females is $Y+8$ inches, where $Y \sim exp(2)$. What is the probability that a fish whose length is $x$ is male, and how can we guess the sex of a fish from its length if we want our guess to be right with the biggest possible probability?

## Answer

This is a classic exercise in conditional probability. The most important thing is to write down correctly what the exercise asks. In this case, we want the probability that a fish of length $x$ is male. This probability is conditional: it is the probability of being male GIVEN THAT the length is $x$, so we want to compute $$P(male \mid x)$$ To do that we use Bayes' theorem: $$P(male \mid x)=\frac{P(x \mid male)P(male)}{P(x)}$$ We do this because in this way we can use information given in the problem statement, like $P(male)$ and $P(x \mid male)$ (the exponential distribution). The only thing left is $P(x)$, which should be computed using the law of total probability: $$P(x)=P(x \mid male)P(male)+P(x \mid female)P(female)$$ (Strictly speaking, since length is continuous, $P(x \mid male)$ and $P(x \mid female)$ are probability densities rather than probabilities, but Bayes' theorem takes the same form.)
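The Bayes computation above can be sketched in Python. One assumption to flag: $exp(1)$ and $exp(2)$ are read here as exponential distributions with rate parameters 1 and 2; if the problem means scale parameters instead, the densities change accordingly.

```python
import math

P_MALE, P_FEMALE = 0.9, 0.1

def density_male(x):
    """Density of a male's length: 5 + Exp(rate=1), so nonzero only for x >= 5."""
    return math.exp(-(x - 5.0)) if x >= 5.0 else 0.0

def density_female(x):
    """Density of a female's length: 8 + Exp(rate=2), so nonzero only for x >= 8."""
    return 2.0 * math.exp(-2.0 * (x - 8.0)) if x >= 8.0 else 0.0

def p_male_given_length(x):
    """Posterior probability that a fish of length x is male (Bayes' theorem)."""
    num = density_male(x) * P_MALE
    den = num + density_female(x) * P_FEMALE
    return num / den

# Between 5 and 8 inches only males are possible:
print(p_male_given_length(7.0))   # 1.0
# Above 8 inches both sexes are possible; guess whichever posterior exceeds 1/2:
print(p_male_given_length(9.0))   # about 0.38, so guess female here
```

The optimal guessing rule follows directly: declare "male" whenever `p_male_given_length(x) > 0.5` and "female" otherwise, which maximizes the probability of a correct guess for each observed length.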
https://nbviewer.org/github/barbagroup/jupyter-tutorial/blob/master/3--Jupyter%20like%20a%20pro.ipynb
# Jupyter like a pro

In this third notebook of the tutorial "The World of Jupyter", we want to leave you with pro tips for using Jupyter in your future work.

## Importing libraries

First, a word on importing libraries. Previously, we used the following command to load all the functions in the NumPy library:

```python
import numpy
```

Once you execute that command in a code cell, you call any NumPy function by prepending the library name, e.g., numpy.linspace(), numpy.ones(), numpy.zeros(), numpy.empty(), numpy.copy(), and so on (explore the documentation for these very useful functions!). But, you will find a lot of sample code online that uses a different syntax for importing. They will do:

```python
import numpy as np
```

All this does is create an alias for numpy with the shorter string np, so you then would call a NumPy function like this: np.linspace(). This is just an alternative way of doing it, for lazy people that find it too long to type numpy and want to save 3 characters each time. For the not-lazy, typing numpy is more readable and beautiful. We like it better like this:

In [1]:
```python
import numpy
```

When you make a plot using Matplotlib, you have many options to make your plots beautiful and publication-ready. Here are some of our favorite tricks. First, let's load the pyplot module—and remember, %matplotlib notebook gets our plots inside the notebook (instead of a pop-up). Our first trick is rcParams: we use it to customize the appearance of the plots. Here, we set the default font to a serif type of size 14 pt and make the size of the font for the axes labels 18 pt. Honestly, the default font is too small.

In [2]:
```python
from matplotlib import pyplot
%matplotlib notebook

pyplot.rcParams['font.family'] = 'serif'
pyplot.rcParams['font.size'] = 14
pyplot.rcParams['axes.labelsize'] = 18
```

The following example is from a tutorial by Dr. Justin Bois, a lecturer in Biology and Biological Engineering at Caltech, for his class in Data Analysis in the Biological Sciences (2015).
He has given us permission to use it.

In [3]:
```python
# Get an array of 100 evenly spaced points from 0 to 2*pi
x = numpy.linspace(0.0, 2.0 * numpy.pi, 100)

# Make a pointwise function of x with exp(sin(x))
y = numpy.exp(numpy.sin(x))
```

Here, we added comments in the Python code with the # mark. Comments are often useful not only for others who read the code, but as a "note to self" for the future you!

Let's see how the plot looks with the new font settings we gave Matplotlib, and make the plot more friendly by adding axis labels. This is always a good idea!

In [4]:
```python
pyplot.figure()
pyplot.plot(x, y, color='k', linestyle='-')
pyplot.xlabel('$x$')
pyplot.ylabel('$\mathrm{e}^{\sin(x)}$')
pyplot.xlim(0.0, 2.0 * numpy.pi);
```

Did you see how Matplotlib understands LaTeX mathematics? That is beautiful. The function pyplot.xlim() specifies the limits of the x-axis (you can also manually specify the y-axis, if the defaults are not good for you).

Continuing with the tutorial example by Justin Bois, let's have some mathematical fun and numerically compute the derivative of this function, using finite differences. We need to apply the following mathematical formula on all the discrete points of the x array:

$$\frac{\mathrm{d}y(x_i)}{\mathrm{d}x} \approx \frac{y(x_{i+1}) - y(x_i)}{x_{i+1} - x_i}.$$

By the way, did you notice how we can typeset beautiful mathematics within a markdown cell? The Jupyter notebook is happy typesetting mathematics using LaTeX syntax.

Since this notebook is "Jupyter like a pro," we will define a custom Python function to compute the forward difference. It is good form to define custom functions to make your code modular and reusable.

In [5]:
```python
def forward_diff(y, x):
    """Compute derivative by forward differencing."""

    # Use numpy.empty to make an empty array to put our derivatives in
    deriv = numpy.empty(y.size - 1)

    # Use a for-loop to go through each point and compute the derivative.
    for i in range(deriv.size):
        deriv[i] = (y[i+1] - y[i]) / (x[i+1] - x[i])

    # Return the derivative (a NumPy array)
    return deriv

# Call the function to perform finite differencing
deriv = forward_diff(y, x)
```

Notice how we define a function with the def statement, followed by our custom name for the function, the function arguments in parenthesis, and ending the statement with a colon. The contents of the function are indicated by the indentation (four spaces, in this case), and the return statement indicates what the function returns to the code that called it (in this case, the contents of the variable deriv). Right after the function definition (in between triple quotes) is the docstring, a short text documenting what the function does. It is good form to always write docstrings for your functions!

In our custom forward_diff() function, we used numpy.empty() to create an empty array of length y.size-1, that is, one less than the length of the array y. Then, we start a for-loop that iterates over values of i using the range() function of Python. This is a very useful function that you should think about for a little bit. What it does is create a list of integers. If you give it just one argument, it's a "stop" argument: range(stop) creates a list of integers from 0 to stop-1, i.e., the list has stop numbers in it because it always starts at zero. But you can also give it a "start" and "step" argument. Experiment with this, if you need to. It's important that you internalize the way range() works. Go ahead and create a new code cell, and try things like:

```python
for i in range(5):
    print(i)
```

changing the arguments of range(). (Note how we end the for statement with a colon.)

Now think for a bit: how many numbers does the list have in the case of our custom function forward_diff()?

Now, we will make a plot of the numerical derivative of $\exp(\sin(x))$.
We can also compare with the analytical derivative:

$$\frac{\mathrm{d}y}{\mathrm{d}x} = \mathrm{e}^{\sin x}\,\cos x = y \cos x.$$

In [6]:
```python
deriv_exact = y * numpy.cos(x)  # analytical derivative

pyplot.figure()
pyplot.plot((x[1:] + x[:-1]) / 2.0, deriv, label='numerical',
            marker='.', color='gray', linestyle='None', markersize=10)
pyplot.plot(x, deriv_exact, label='analytical',
            color='k', linestyle='-')  # analytical derivative in black line
pyplot.xlabel('$x$')
pyplot.ylabel('$\mathrm{d}y/\mathrm{d}x$')
pyplot.xlim(0.0, 2.0 * numpy.pi)
pyplot.legend(loc='upper center', numpoints=1);
```

Stop for a bit and look at the first pyplot.plot() call above. The square brackets normally are how you access a particular element of an array via its index: x[0] is the first element of x, and x[i] is the (i+1)-th element. What's very cool is that you can also use negative indices: they indicate counting backwards from the end of the array, so x[-1] is the last element of x.

A neat trick of arrays is called slicing: picking elements using the colon notation. Its general form is x[start:stop:step]. Note that, like the range() function, the stop index is exclusive, i.e., x[stop] is not included in the result. For example, this code will give the odd numbers from 1 to 7:

```python
x = numpy.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
x[1:-1:2]
```

Try it! Remember, Python arrays are indexed from 0, so x[1] is the second element. The end-point in the slice above is index -1, that's the last array element (not included in the result), and we're stepping by 2, i.e., every other element. If the step is not given, it defaults to 1. If start is not given, it defaults to the first array element, and if stop is not given, it defaults to the last element. Try several variations on the slice, until you're comfortable with it.
## There's a built-in for that

Here's another pro tip: whenever you find yourself writing a custom function for something that seems like a lot of people might use, find out first if there's a built-in for that. In this case, NumPy does indeed have a built-in for taking the numerical derivative by differencing! Check it out. We also use the function numpy.allclose() to check if the two results are close.

In [7]:
```python
numpy_deriv = numpy.diff(y) / numpy.diff(x)
print('Are the two results close? {}'.format(numpy.allclose(numpy_deriv, deriv)))
```

Are the two results close? True

Not only is the code much more compact and easy to read with the built-in NumPy function for the numerical derivative ... it is also much faster:

In [8]:
```python
%timeit numpy_deriv = numpy.diff(y) / numpy.diff(x)
%timeit deriv = forward_diff(y, x)
```

100000 loops, best of 3: 13.4 µs per loop
10000 loops, best of 3: 75.2 µs per loop

NumPy functions will always be faster than equivalent code you write yourself because at the heart they use pre-compiled code and highly optimized numerical libraries, like BLAS and LAPACK.

## Do math like a pro

Do you want to compute the integral of $y(x) = \mathrm{e}^{\sin x}$? Of course you do. We find the analytical integral using the integral formulas for modified Bessel functions:

$$\int_0^{2\pi}\mathrm{d} x\, \mathrm{e}^{\sin x} = 2\pi \,I_0(1),$$

where $I_0$ is the modified Bessel function of the first kind. But if you don't have your special-functions handbook handy, we can find the integral with Python. We just need the right modules from the SciPy library. SciPy has a module of special functions, including Bessel functions, called scipy.special. Let's get that loaded, then use it to compute the exact integral:

In [9]:
```python
import scipy.special

exact_integral = 2.0 * numpy.pi * scipy.special.iv(0, 1.0)
print('Exact integral: {}'.format(exact_integral))
```

Exact integral: 7.95492652101

Or instead, we may want to compute the integral numerically, via the trapezoid rule.
The integral is over one period of a periodic function, so only the constant term of its Fourier series will contribute (the periodic terms integrate to zero). The constant Fourier term is the mean of the function over the interval, and the integral is the area of a rectangle: $2\pi \langle y(x)\rangle_x$. Sampling $y$ at $n$ evenly spaced points over the interval of length $2\pi$, we have:

$$\int_0^{2\pi}\mathrm{d} x\, y(x) \approx \frac{2\pi}{n}\sum_{i=0}^{n-1} y(x_i).$$

NumPy gives us a mean method to quickly get the sum:

In [10]:
```python
approx_integral = 2.0 * numpy.pi * y[:-1].mean()
print('Approximate integral: {}'.format(approx_integral))
print('Error: {}'.format(exact_integral - approx_integral))
```

Approximate integral: 7.95492652101
Error: 0.0

In [11]:
```python
approx_integral = 2.0 * numpy.pi * numpy.mean(y[:-1])
print('Approximate integral: {}'.format(approx_integral))
print('Error: {}'.format(exact_integral - approx_integral))
```

Approximate integral: 7.95492652101
Error: 0.0

The syntax y.mean() applies the mean() NumPy method to the array y. Here, we apply the method to a slice of y that does not include the last element (see the discussion of slicing above). We could have also done numpy.mean(y[:-1]) (the function equivalent of the method mean() applied to an array); they give equivalent results and which one you choose is a matter of style.

## Beautiful interactive plots with Bokeh

Matplotlib will be your workhorse for creating plots in notebooks. But it's not the only game in town! A recent new player is Bokeh, a visualization library to make amazing interactive plots and share them online. It can also handle very large data sets with excellent performance. If you installed Anaconda in your system, you will probably already have Bokeh. You can check if it's there by running the conda list command. If you installed Miniconda, you will need to install it with conda install bokeh.
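In the spirit of "there's a built-in for that": NumPy also ships a trapezoid-rule routine, which for these periodic samples agrees with the mean-based formula above (the first and last samples coincide, so the endpoint half-weights merge into one full weight). A sketch; note that numpy.trapz was renamed numpy.trapezoid in NumPy 2.0, so we pick whichever name exists:

```python
import numpy

x = numpy.linspace(0.0, 2.0 * numpy.pi, 100)
y = numpy.exp(numpy.sin(x))

# numpy.trapz (NumPy 1.x) became numpy.trapezoid (NumPy 2.0)
trapezoid = getattr(numpy, "trapezoid", None) or numpy.trapz

trapz_integral = trapezoid(y, x)                  # built-in trapezoid rule
mean_integral = 2.0 * numpy.pi * y[:-1].mean()    # the mean-based formula above

print(trapz_integral, mean_integral)  # both about 7.9549265, i.e. 2*pi*I0(1)
```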
After installing Bokeh, we have many modules available: `bokeh.plotting` gives you the ability to create interactive figures with zoom, pan, resize, save, and other tools.

```python
from bokeh import plotting as bplotting
```

Bokeh integrates with Jupyter notebooks by calling the output function, as follows:

```python
bplotting.output_notebook()
```

```python
# create a new Bokeh plot with axis labels, name it "bop"
bop = bplotting.figure(x_axis_label='x', y_axis_label='dy/dx')

# add a title, change the font
bop.title = "Derivative of exp(sin(x))"
bop.title_text_font = "palatino"

# add a line with legend and line thickness to "bop"
bop.line(x, deriv_exact, legend="analytical", line_width=2)

# add circle markers with legend, specify color
bop.circle((x[1:] + x[:-1]) / 2.0, deriv, legend="numerical",
           fill_color="gray", size=8, line_color=None)

bop.grid.grid_line_alpha = 0.3

bplotting.show(bop)
```

Note—As of June 2016 (v0.11.1), Bokeh does not support LaTeX on axis labels. This is an issue they are working on, so stay tuned!

Look at the neat tools on the Bokeh figure: you can zoom in to any portion to explore the data, drag the plot area around, resize, and finally save the figure to a file. You also have many beautiful styling options!

# Optional next step: get interactive with Lorenz

We found two really cool ways for you to get interactive with the Lorenz equations! Try the interactive blog post by Tim Head on Exploring the Lorenz equations (January 2016), and learn about IPython widgets. Or check out the Lorenz example on Bokeh plots. Better yet, try them both.

(c) 2016 Lorena A. Barba. Free to use under the Creative Commons Attribution CC-BY 4.0 License. This notebook was written for the tutorial "The world of Jupyter" at the Huazhong University of Science and Technology (HUST), Wuhan, China. Example from Justin Bois (c) 2015, also under a CC-BY 4.0 License.
https://www.publiclab.org/tag/comments/potentiostat
# Potentiostat

_For measuring electrochemically active compounds and microbes in water._

[![potentiostat_cell.png](https://i.publiclab.org/system/images/photos/000/001/407/medium/potentiostat_cell.png)](https://i.publiclab.org/system/images/photos/000/001/407/original/potentiostat_cell.png)

### Join the Discussion on the [Public Lab water quality list](https://groups.google.com/forum/#!forum/plots-waterquality)

### Background

**Links to other Public Lab Electrochemistry wikis / research notes**

The design, construction, and operation of a low-cost, open-source potentiostat (the WheeStat) has been described in a number of Public Lab wikis and research notes. Links to some of these pages are provided here:

- WheeStat user's [manual](http://publiclab.org/wiki/wheestat-user-s-manual).
- A wiki describing how to determine metal ion concentrations [electrochemically](http://publiclab.org/wiki/detection-of-metals-in-water-with-the-wheestat).
- A site where you can purchase a WheeStat kit from [Public Lab](http://store.publiclab.org/collections/new-kits/products/wheestat-potentiostat).
- Instructions for assembling the WheeStat [kit](http://publiclab.org/notes/JSummers/08-07-2014/wheestat-kit-assembly).
- Making / purchasing low-cost [electrodes](http://publiclab.org/notes/JSummers/01-09-2014/potentiostat-notes-5-how-to-make-low-cost-electrodes).

**Potentiostats** can be used to test for electrochemically active compounds and microbes in solution, and thus have applications in many areas such as environmental monitoring and food and drug testing. Most commercially available potentiostats are very expensive ($1000 is on the "cheap" side). Several initiatives in the last decade have focused on designing cheaper alternatives, particularly for technologies related to water quality assessment. Our aim here is to build on these efforts and leverage the expertise of the open hardware community in order to build accessible, capable devices.
Possible applications include:

- **Tracking heavy metal concentrations in waterways.** Various industrial processes used in the US and abroad can lead to the contamination of water with heavy metals that are dangerous to humans, like mercury and arsenic. An inexpensive, battery-powered potentiostat -- communicating over the cellular network, perhaps, or merely recording locally to an SD card -- might be able to track relative fluctuations in the concentrations of these metals, making monitoring these contaminants easier.

**Limitations of electrochemical techniques:** In order to detect and quantify a chemical species by electrochemical methods, that species has to undergo electron transfer at a voltage that is accessible under the solution conditions being employed. One major limitation to measuring metal species in water is due to oxidation / reduction of water itself. The oxidation of water (to give O2 and H+) limits how positive a voltage can be applied in water. Similarly, reduction to H2 and OH- limits how negative the voltage can be. The voltage limits will depend on things like the choice of electrode used and the pH of the solution. Still, there are a number of metals that can be quantified in water. Mendham, et al. (p. 564, referenced below) list the following fifteen metals as having been determined by voltammetry: antimony, arsenic, bismuth, cadmium, copper, gallium, germanium, gold, indium, lead, mercury, silver, thallium, tin, and zinc.

- **A low-cost 'field lab' for evaluating water samples.** An inexpensive potentiostat, when used according to the proper protocols, might be used to indicate absolute concentrations of heavy metals in water. This could allow citizens and organizations who can't afford to send water samples to an expensive, bonded laboratory to do their own testing -- particularly relevant in a developing-world context.
- **Education.** Electrochemistry is an important part of many high school, college, and graduate chemistry curricula; an inexpensive potentiostat could make these curricula more accessible to educational institutions that don't have the budget for the more expensive commercial versions.

- **Research.** Making an easily hackable, programmable, and extensible potentiostat platform, based on widely used and well-supported technologies like the Arduino and the Raspberry Pi, could allow for novel electrochemistry applications in the laboratory. When a device that once cost $2000 and didn't "play nice" with other hardware and software suddenly becomes available for under $200, and can be integrated with easy-to-use, open source software and hardware, researchers will dream up new approaches to open research problems -- and higher-throughput approaches in already-established research areas.

### Details

Typically, electrochemical experiments utilize three electrodes: the Working Electrode (WE), Reference Electrode (RE), and Counter Electrode (CE). A research note reviewing some electrodes and describing how to build a set for little cash is provided [here](http://publiclab.org/notes/JSummers/01-09-2014/potentiostat-notes-5-how-to-make-low-cost-electrodes).

A **potentiostat** is a three-terminal analog feedback control circuit that maintains a pre-determined voltage between the WE and RE by sourcing current from the CE. A rough schematic for a potentiostat is provided below:

[![adder_potentiostat.png](https://i.publiclab.org/system/images/photos/000/001/406/medium/adder_potentiostat.png)](https://i.publiclab.org/system/images/photos/000/001/406/original/adder_potentiostat.png)

The CE and WE are made of electrochemically inert conductive materials (we are using graphite, like from pencils, but platinum and gold are popular). The RE is designed to have a well-defined and stable electrochemical potential.
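The feedback action described above can be illustrated with a toy numerical model: a proportional controller adjusts the counter-electrode current until the measured WE-RE voltage matches the setpoint. This is only a sketch of the control idea, not a circuit simulation; the cell resistance and gain values are made up for the example.

```python
# Toy model of potentiostat feedback control (illustrative values only).
setpoint = 0.5           # desired WE-RE voltage (V) -- arbitrary
cell_resistance = 100.0  # effective cell resistance (ohms) -- made up
gain = 0.001             # proportional controller gain (A/V) -- made up

ce_current = 0.0
for _ in range(1000):
    measured = ce_current * cell_resistance  # WE-RE voltage in this toy cell
    error = setpoint - measured
    ce_current += gain * error               # controller nudges the CE current

final_voltage = ce_current * cell_resistance
print(final_voltage)  # converges to the 0.5 V setpoint
```

In a real potentiostat this loop is closed in analog hardware by the op amps in the schematic, far faster than any software loop could manage.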
By hooking up a power source, the energy of electrons in the working electrode can be raised and lowered with respect to the reference (and also with respect to compounds in solution). When the energies of electrons in the WE are high enough, they can transfer onto certain chemical species, reducing them. For example, Cu2+ ions can be reduced to Cu+ ions, or to copper metal. Alternatively, when the voltage of the WE is sufficiently positive, it can pull electrons off of certain chemicals, oxidizing them. The opposite of the above reactions can serve as an example: Cu+ ion can be oxidized to Cu2+ ion. The voltages (w.r.t. the RE) and currents at which reductions and oxidations happen can be measured, revealing information about the energies and concentrations of the analytes.

[![potentiostat_cell.png](https://i.publiclab.org/system/images/photos/000/001/407/medium/potentiostat_cell.png)](https://i.publiclab.org/system/images/photos/000/001/407/original/potentiostat_cell.png)

The above "Adder Potentiostat" schematic was adapted from chapter 15 of Electrochemical Methods by Bard and Faulkner (referenced below).

### Work updates

- **8/5/2013**: Craig Versek of PVOS has been building off a fully-fledged, open potentiostat design by Jack Summers. Craig is aiming to implement programmable current ranges. In this design, a CMOS analog multiplexer will switch out one of 5 standard current sense resistors (with room for 8 total), which are trimmer rheostats tuned to 250, 2.5k, 25.0k, 250k, and 2.50M Ohms, well within a 0.5% margin of error.

- **1/8/2014**: Smoky Mountain Scientific (Ben Hickman and Jack Summers' lab group) have published research notes describing an open source potentiostat they call the WheeStat. The history of the WheeStat program is described [here](http://publiclab.org/notes/JSummers/11-02-2013/potentiostat-notes-1-wheestat-history).
The WheeStat software is described [here](http://publiclab.org/notes/JSummers/12-20-2013/potentiostat-software) and is available for download [here](https://github.com/SmokyMountainScientific/WheeStat5_0). A description of fabricating the board is provided [here](http://publiclab.org/notes/JSummers/12-30-2013/potentiostat-notes-3-wheestat-5-1-fabrication), and copies of the board can be ordered from [OSHPark.com](http://oshpark.com/shared_projects/yepeXPFo).

### Uses

- Assess arsenic, cyanide, and other contaminants / toxins in water
- Educational
- Identifying toxins / ingredients in foodstuffs

### Development

- [olm-pstat](https://github.com/p-v-o-s/olm-pstat) - repository for the PLOTS/[PVOS](http://www.pvos.org/) Open Lab Monitor potentiostat peripheral

### References

- [CheapStat](https://doi.org/10.1371/journal.pone.0023783)
- [Cornell U Potentiostat](http://people.ece.cornell.edu/land/courses/ece4760/)
- [Potentiostat Software on Github](https://github.com/p-v-o-s/olm-pstat)
- Gopinath, A. V., and Russell, D., "An Inexpensive Field Portable Programmable Potentiostat", Chem. Educator, 2006, pp. 23-28.
- Inamdar, S. N., Bhat, M. A., and Haram, S. K., "Construction of Ag/AgCl Reference Electrode from Used Felt-Tipped Pen Barrel for Undergraduate Laboratory", J. Chem. Ed., 2009, 86, 355.
- Mendham, J., Denney, R. C., Barnes, J. D., and Thomas, M. J. K., Vogel's Textbook of Quantitative Chemical Analysis, 6th ed., Prentice Hall, Harlow, England, 2000.
- Bard, Allen J., and Faulkner, Larry R., "Electrochemical Instrumentation", in Electrochemical Methods: Fundamentals and Applications, 2nd ed., John Wiley & Sons, Inc., 2001, pp. 632-658.
- A nice Wikipedia description of what a potentiostat is, [here](http://en.wikipedia.org/wiki/Potentiostat).
- A basic description of potentiostat architectures can be found at http://www.consultrsr.com/resources/pstats/design.htm
- Yee, S., and Chang, O. K., "A Simple Junction for Reference Electrodes", J. Chem. Ed., 1988, 65, 129.

Thanks to Jack Summers, Benjamin Hickman, Craig Versek, Ian Walls, Jake Wheeler, and Todd Crosby.

Attachments: OHS2013_potentiostat_poster.svg, OHS2013_potentiostat_poster.pdf

### Comments

- **nitrous2022** (about 1 year ago): "I know that this project is a while in the past, but I hope it can be resurrected. Is it possible to contact you directly to discuss this? Thanks D..."
- **kelukaliya** (over 3 years ago): "excellent work"
- **nanocastro** (over 3 years ago): "hi Liz the raw data is stored on the project repo https://gitlab.com/nanocastro/WheeStat6-Mza/tree/master/Quercetina"
- **liz** (over 3 years ago): "Thank you for graphing the output of the commercial potentiostat to the wheestat, very interested in comparisons like these. Is there a place you a..."
- **warren** (over 3 years ago): "Awesome!!!"
- **JSummers** (almost 5 years ago): "Hi Jeff, I was not aware of these technologies. Thanks for bringing them to my attention. Jack"
- **warren** (almost 5 years ago): "Hi, Jack - this was a while ago, but lots has changed in some of these technologies. I was wondering if you'd considered using something like WebJa..."
- **momosavar** (almost 5 years ago): "Hi @jsummers I downloaded processing and exported the application. Thank you very much"
- **momosavar** (almost 5 years ago): "Hi @JSummers I really thank you for answering my questions. I think because I do not have experience with it, I can not do it. If you can, please s..."
- **JSummers** (almost 5 years ago): "Hi, For the hardware you made, use the firmware here: https://github.com/SmokyMountainScientific/D_SeriesWheeStatFirmware/tree/master/WheeStat6_d. ..."
- **momosavar** (almost 5 years ago): "Hi @JSummers I used this file (https://github.com/SmokyMountainScientific/WheeStat5Eagles) to build wheestat. I would like to know which file to use..."
- **JSummers** (almost 5 years ago): "Hi @momosavar, The WheeStat will run with about any rail-to-rail quad op amp that works with a 3.3 volt supply and comes in the 14-SOIC package. ..."
- **momosavar** (almost 5 years ago): "Hello @JSummers I could not find AD8644. Can you suggest alternative part?"
- **momosavar** (about 5 years ago): "Thank you so much @JSummers If there is a problem, I will certainly let you know."
- **JSummers** (about 5 years ago): "Hi @momosavar, it uses the ek-tm4c123gxl. The voltage range of the model 5 potentiostat is limited to +/- 1.65 volts. The newer model 7 will go t..."
- **momosavar** (about 5 years ago): "Hello Dr. Jack I want to build this device but I didn't know Which Launchpad did you use in version 5.1? MSP430g or EK-TM4C123GXL"
- **ghing** (over 5 years ago): "I ordered a WheeStat from the Public Lab Store. The board is stamped "WheeStat 5". I'm running Ubuntu 16.10 (64-bit) and just wanted to share the..."
- **JSummers** (over 5 years ago): "Hi Aneesahmad, In the US, you can order one from my website, smokymtsci.com. Outside the US, you can contact me at my email summers at wcu dot edu..."
- **Laszlo** (about 6 years ago): "Dear Dr. Summers, I ordered WheeStat 5.1 potentiostat from OSH Park and built a potentiostat system as described here. Everything seems fine excep..."
- **JSummers** (over 6 years ago): "Hi Ivan, I will be happy to help you with this. Did you want to make a WheeStat or were you decided on using Arduino. The WheeStat was designed u..."
- **ilmorales** (over 6 years ago): "Dear JSummers I have studied how to perform Arduino project as potenciostado, and frankly, was already giving up because there are many difficult..."
- **Mattador** (almost 7 years ago): "I am looking forward to reading your note and i will tell you what results gives the gelled electrolyte."
- **JSummers** (almost 7 years ago): "That seems reasonable. I don't know whether the gel-ceramic junction will be an issue or not. My guess is that it will not be a problem. If it i..."
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-10th-edition/chapter-r-section-r-2-algebra-essentials-r-2-assess-your-understanding-page-27/48
## College Algebra (10th Edition)

$-\frac{7}{3}$

Plug in the values $-2$ for $x$ and $3$ for $y$ and solve:

$$\frac{2x-3}{y} = \frac{2(-2)-3}{3} = \frac{-4-3}{3} = -\frac{7}{3}$$
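A quick numerical check of the substitution (not part of the original solution), using exact fractions so no rounding enters:

```python
from fractions import Fraction

x, y = -2, 3
result = Fraction(2 * x - 3, y)  # (2(-2) - 3) / 3
print(result)  # -7/3
```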