https://stats.stackexchange.com/questions/374475/linear-model-accuracy-of-predictions-with-confidence-interval
# Linear Model - accuracy of predictions with confidence interval

I am fitting a linear model to predict a variable which is a type of performance of animal behaviour. Let's call it performance. When the model makes a prediction of performance for an animal, I want to use this to perform Monte Carlo simulation from a normal distribution. My performance prediction will be the mean of the sampling distribution, but I'm getting stuck on how I should calculate a variance for each animal's performance. It's quite important, because the variability of an animal's performance is equally if not more important than the performance prediction itself. What I'd like is some sort of confidence interval for the predictions from the linear model; then I could choose a variance based on the width of the interval. I've already tried predict(model, data, type="confidence") in R, but it returns tiny intervals for each point that don't really make sense for what I need. How can I "predict the unpredictability" of the performance response variable?

Let's say you have a simple linear regression model relating an outcome variable Y to a predictor variable X. The predict() function has two options for this type of model: 1. type = "confidence"; 2. type = "prediction". The first option is to be used when you are interested in estimation and the second when you are interested in prediction. For example, use the first option (type = "confidence") when you are interested in estimating the mean value of Y for all the subjects in the target population who have an X value equal to x, where x is known and falls within the range of observed X values for the sample of subjects who generated the data used to fit the model.
Use the second option (type = "prediction") when you are interested in predicting the individual value of Y for a randomly selected subject from the target population who has an X value equal to x, where x is known and falls within the range of observed X values for the sample of subjects who generated the data. (This subject is NOT part of the sample of subjects whose data were used to fit the model.)

Comment: If I understand things correctly, you have multiple animals and possibly multiple "runs" per animal, and it seems that, for each of these "runs", you are measuring "performance" as well as 10 different variables which may predict this "performance". For simplicity's sake, assume for a moment that all animals have 50 "runs" and that you only care about one predictor variable. A mixed model is akin to fitting a collection of linear models - one per animal - such that each model relates "performance" to the predictor variable expected to predict it using the data from the 50 "runs". The mixed model can assume that the effect of the predictor variable under consideration is different across animals. The mixed effects model will estimate the effect of this predictor variable for the "typical" animal. However, you can then assess this effect for all animals in your sample. Some will have higher effects than the one corresponding to the "typical" animal and some will have lower effects. I am not sure what you mean by "inconsistent" performance. Perhaps you mean that the predictor variable of interest might not have an effect on performance? Or might have a negligible effect?

• Thanks for your reply - I have tried type="prediction" but I have found that the size of the interval returned is exactly the same for every point. This contradicts what I need - the intuition is that some animals have a very wide interval because they are inconsistent performers, while some have very narrow intervals because they are less prone to inconsistency.
What I'm looking for is an interval for each prediction that varies in size depending on the predictor variables for that observation. Does that make sense? – John F Oct 30 '18 at 18:47
• Can you explain more about your study design? How many animals do you have? How many observations of what type for each animal? – Isabella Ghement Oct 30 '18 at 19:11
• There are about 30k animals. For each animal I have between 1 and 50ish "runs", i.e. how they've performed in the past. I'm trying to predict their performances in the next run based on about 10 variables from their past runs, i.e. animals with a high value for variable x tend to perform better. But what I'm stumped with is: how can I quantify whether animals with a low value for y tend to perform inconsistently? For these animals I want to set up the Monte Carlo sampler to use a more suitable (greater) variance. – John F Oct 30 '18 at 19:56
• Interesting - why not use some kind of mixed effects modeling to accommodate all animals? It seems like linear modeling might not be sufficient in this context. – Isabella Ghement Oct 30 '18 at 21:09
• I'm going to look into that. Thanks for the idea! – John F Oct 31 '18 at 1:13
http://math.stackexchange.com/questions/70001/understanding-conditional-probabilities-in-bayes-classifiers-in-the-wikipedia-pa/70035
# Understanding conditional probabilities in Bayes classifiers in the wikipedia page example

Why is $P(\text{height} \mid \text{male}) = 1.5789$? This means the probability of height given male? The talk page has a similar question, unanswered: the example added last August about sex classification is puzzling me; could anyone tell me how to compute $P(\text{height} \mid \text{man})$? The author gave the value 1.5789 with a note stating that "probability distribution over one is OK. It is the area under the bell curve that is equal to one", which also puzzles me.

- The value $1.5789$ that is calculated is the density of the probability mass at $x = 6$ with units mass/foot; it is not the probability that a randomly selected male has height exactly $6$ feet. To get a probability, you have to multiply the value of the probability density by a length. In other words, the probability that a randomly selected male has height between $5$ feet, $11\frac{1}{2}$ inches and $6$ feet, $\frac{1}{2}$ inches is approximately $1.5789 \times \frac{1}{12}$ (because the unit of height ($x$ axis) is a foot and thus the length in question is $1$ inch = $\frac{1}{12}$ foot). Note that the value of the probability that we thus obtain is an approximation, but a very good approximation in this instance. (To get an exact value, we would need to compute the value of an integral, but $1.5789/12 = 0.1315\ldots$ is good enough for gummint purposes.) As a practical matter, heights are often recorded to the nearest inch, and so when someone says a particular male is $6$ feet tall, people usually take it to mean that the person is between $5$'$11\frac{1}{2}$" and $6$'$\frac{1}{2}$" anyway. But in this sense of the phrase, the probability that a randomly chosen male is $6$ feet tall is $13.15\%$, not $157.89\%$.

- It looks like in this case we should use a Normal distribution for the training set, so start by calculating everything you need for the distribution:
1. $\mu = \frac{6 + 5.92 + 5.58 + 5.92}{4} = 5.855$ - the mean value
2. Next find the variance $\sigma^2$ and substitute into the formula for the Normal distribution, with $f(x = 6)$ as the testing value.

- The male height distribution is approximated from the training set to be a normal distribution with mean $\mu = 5.855$ and variance $\sigma^2 = 0.035$. The likelihood of seeing a height of 6 feet given that the sample is male is therefore $$P(6 \mid \textrm{male}) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{ (6 - \mu)^2}{2\sigma^2} \right) \approx 1.579$$ which is the quoted likelihood in the article.

- It is the use of the word "probability" in the wikipedia article instead of "likelihood" (as you correctly call it) that is the cause of the confusion. In fact, earlier in the article, it says "Then, the probability (emphasis added) of some value given a class, $P(x = v \mid c)$, can be computed by plugging $v$ into the equation for a Normal distribution parameterized by $\mu_c$ and $\sigma^2_c$", which is nonsensical from a probability theory viewpoint but not uncommon in applied statistical circles. –  Dilip Sarwate Oct 5 '11 at 13:43
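The arithmetic above is easy to check directly. The snippet below recomputes the density and the "density times 1/12 foot" probability from the answers, using the four training heights quoted from the Wikipedia example:

```python
import math

heights = [6.00, 5.92, 5.58, 5.92]  # male training heights in feet (Wikipedia example)
n = len(heights)
mu = sum(heights) / n                                # 5.855
var = sum((h - mu) ** 2 for h in heights) / (n - 1)  # sample variance ~ 0.0350

def normal_density(x, mu, var):
    """Value of the normal pdf at x: a density (per foot), not a probability."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

density = normal_density(6.0, mu, var)  # ~ 1.5789, the value quoted in the article
# Probability of a height within half an inch of 6 ft: density * (1 inch / 12 in-per-ft)
prob_near_6ft = density * (1.0 / 12.0)  # ~ 0.1316
```

Note the variance uses the n-1 (sample) divisor; that is what reproduces the quoted 0.035 and hence 1.5789.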
http://apple.stackexchange.com/questions/23268/why-is-verify-backups-in-the-time-machine-icon-menu-disabled/25071
# Why is “Verify Backups” in the Time Machine icon menu disabled?

I learned in "Upgraded to Lion: Time Machine spends a LOT of time indexing backup" that if I hold down the option key and click the Time Machine icon in the menu bar, the "Back Up Now" menu item changes to "Verify Backups". But for some reason, on my MacBook Air, "Verify Backups" is disabled (greyed out). How do I find out why? I'm running OS X 10.7.1, and have FileVault turned on for "Macintosh HD" (my internal storage). My Time Machine backup is to an external USB HD, which I've encrypted. Could that be why? Does Lion not support "Verify Backups" with an encrypted Time Machine HD?

- Intriguing! I would presume that once you have your external drive mounted on the desktop, Core Storage will have the keys needed to encrypt/decrypt the volume. Can you confirm whether Disk Utility can verify the volume? Once that's done, can you re-check the menu for the official "verify" through Time Machine? –  bmike Aug 26 '11 at 22:25

Disk Utility verified the volume and reported "The volume LaCie appears to be OK." But "Verify Backups" is still disabled. –  Daryl Spitzer Aug 26 '11 at 23:00

Wow - I'm reaching at straws - I don't have any ideas unless you haven't made a backup lately and it knows it already just verified that backup and no changes have "happened" - really a long shot - I'll sit back and wait until I have a more solid idea... –  bmike Aug 26 '11 at 23:21

In light of Jesse's answer below, can you please mark this "unsolved"? Clearly it has nothing to do with network vs. local storage. I will investigate the matter further. –  cksum Sep 15 '11 at 2:59

Okay, so I did a bit of digging, and perhaps found a clue as to why the option is not accessible (to some at least). Read this MacWorld hint; specifically, look down at the comment from joekewe (on Feb 08, '11 06:08:14PM). From a web search, it looks like "Verify Time Machine Backup" should be called "Verify Time Capsule Backup".
My system log shows that I attempted to verify my Time Machine backup:

    com.apple.backupd[70595]: Backup verification requested by user.

but (I believe) because it is a locally mounted volume, it didn't do anything and didn't leave any more entries in the log. From this older knowledge base article, it looks like Verify Time Machine was added when Time Capsule was having trouble. It might be the case that he's onto something. The option is also grayed out for me, but I have a local TM disk connected via USB. Can anyone confirm they can use the feature with their Time Capsules? It may have been that they actually fixed the menu option, in that it should have been grayed out for Snow Leopard users connected through USB (read the comments, as many people talk about having the service do nothing), but it wasn't. So in Lion, it behaves correctly.

- cksum, perhaps you should add the specifics so readers don't have to read the Macworld hints comment: it appears "Verify Backups" only works with Time Capsules. –  Daryl Spitzer Aug 29 '11 at 20:34

I'll wait a little while for confirmation from a Time Capsule owner (or for another answer) before accepting this. –  Daryl Spitzer Aug 29 '11 at 20:35

I understand Daryl, but I am not in favour of paraphrasing a suspected solution. Firstly, because it's unconfirmed, but most importantly, because it is always better to go to the source. The information is there should users wish to pursue the matter (and it's not buried inside a 105-page BBB thread either, only a few comments to sort through); I was just making it available for you (and all those that may wonder about the feature). It is by no means a definite resolution to your problem.
But hopefully someone with a Time Capsule sees this and we can get confirmation :) –  cksum Aug 29 '11 at 21:18

I'll accept this answer, but will change it if a Time Capsule owner discovers that "Verify Backups" doesn't work with it. –  Daryl Spitzer Aug 31 '11 at 22:12

"Verify Backups" is applicable only to backups on network drives. It will be greyed out if you are backing up to a local drive. Apple should have been clearer in indicating this. From Finder > Help > Verify your backup disk
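The log check described in the answer can be reproduced against a sample excerpt. The file path and the second log line here are hypothetical stand-ins; on a Lion-era Mac the real messages would live in /var/log/system.log:

```shell
# Write a sample log excerpt (stand-in for /var/log/system.log on 10.7;
# only the first line below is quoted from the answer, the rest is illustrative).
cat > /tmp/tm_log_sample <<'EOF'
Aug 26 14:59:58 mba com.apple.backupd[70595]: Backup verification requested by user.
Aug 26 15:00:03 mba com.apple.backupd[70595]: Starting standard backup
EOF

# Count how many verification requests backupd recorded.
grep -c 'Backup verification requested' /tmp/tm_log_sample
```

If the count stays at the request line with no follow-up entries, that matches the answer's observation that verification silently does nothing for locally mounted volumes.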
https://ph.iahv.org/household-elements-and-the-risk-of-extreme-covid/
# Household Elements And The Risk Of Extreme Covid

This document describes the duties of military commanders for environmental protection during the preparation and execution of military actions. It also recognises the need for "a harmonisation of environmental rules and policies for all NATO-led military actions". It instructs NATO commanders to apply "best practicable and feasible environmental protection measures", with the aim of reducing the environmental impacts attributable to military activity. The document is complemented by a number of other NATO Environmental Protection Standardization Agreements and Allied Joint Environmental Protection Publications, which are all focused on protecting the environment during NATO-led military activities. The objective of the STEEEP is to integrate environmental protection and energy efficiency rules into technical requirements and specifications for armaments, equipment and materials on ships, and the ship-to-shore interface, in Allied and partner countries' naval forces.

A clumped distribution may be seen in plants that drop their seeds straight to the ground, such as oak trees; it can also be seen in animals that live in social groups. Uniform distribution is observed in vegetation that secretes substances inhibiting the growth of nearby individuals. It can also be seen in territorial animal species, such as penguins that maintain a defined territory for nesting. The territorial defensive behaviors of each individual create a regular pattern of distribution of similar-sized territories and individuals within those territories. A continuum exists from closed populations, which are geographically isolated from, and lack exchange with, other populations of the same species, to open populations that show varying degrees of connectedness.

There are two principal types of competition, namely interference and exploitative. Interference competition occurs when one individual directly harms another.
Interference may be dramatic, as in deadly aggression, or subtle, as when social interactions reduce the time available for gathering resources or increase the risk of predation. Exploitative competition happens when one individual consumes a resource, such as food, that otherwise would have been consumed by another individual.

A multivariable logistic regression model was used to estimate the association between the presence of children in households, the number of people living in a household, and property types on the risk of hospitalization with COVID symptoms. We ran three models, all adjusted for age, gender, race/ethnicity, income, close contact, essential worker status, and the county-level community transmission rate. Models examining the exposures of the number of people living in the household and property types were also adjusted for the presence of children in the household. These variables were chosen based on hypothesized causal associations and confounders, and directed acyclic graphs were developed for each model. Study participants were people screened for enrollment into the Communities, Households, and SARS/CoV-2 Epidemiology COVID Cohort study who completed an initial baseline assessment.

It is considered an important subject capable of throwing light on the nature of population education. The statistic is the mean grade point average, $\bar{x}$, of the sample of 100 college students. The sample is a random selection of 100 college students in the United States. Or, we might use $\hat{p}$, the proportion in a random sample of 1000 likely American voters who approve of the president's job performance, to estimate p, the proportion of all likely American voters who approve of the president's job performance. Parameter: a parameter is any summary quantity, like an average or percentage, that describes the whole population.
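The statistic-versus-parameter distinction in the passage above can be made concrete in a few lines. The GPA population below is invented purely for illustration:

```python
import random

random.seed(42)
# A hypothetical population of 100,000 GPAs, clipped to the 0.0-4.0 scale.
population = [min(4.0, max(0.0, random.gauss(3.0, 0.4))) for _ in range(100_000)]

# Parameter: a summary of the whole population (usually unknown in practice).
parameter = sum(population) / len(population)

# Statistic: the mean GPA of a random sample of 100 students, used to estimate it.
sample = random.sample(population, 100)
statistic = sum(sample) / len(sample)
```

The sample mean will not equal the population mean exactly, but with a sample of 100 it typically lands within a few hundredths of it, which is the sense in which a statistic estimates a parameter.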
Strategic management is the ongoing planning, monitoring, analysis and assessment of all requirements an organization needs to … Although the researchers would not have a precise number, as long as the sample is large enough and the study adequately controlled, they should have a number that gives them a reasonably good idea of the prevalence of fixed-mobile substitution among that demographic.

According to the United States Census Bureau, the world's population was about 7.5 billion in 2019, and the 7 billion mark was surpassed on 12 March 2012. According to a separate estimate by the United Nations, Earth's population exceeded seven billion in October 2011, a milestone that offers unprecedented challenges and opportunities to all of humanity, according to UNFPA.

What's more, labeling a population "at risk" or "vulnerable" without providing any historical context implies that greater vulnerability is an inherent characteristic of that population, although this vulnerability often stems from centuries of exploitation by Europeans. Yet by the mid-1980s, more critically minded scientists determined that drier conditions in the Sahel were an effect of large-scale climatic shifts, namely changes in ocean surface temperatures, not of local human actions. This was a direct echo of accusations made by 19th-century French colonial officials to justify their own incursions into the region.

Infection with the COVID-19 virus can result in serious complications and hundreds of thousands of deaths, particularly among older people and those who have existing health conditions. What share of a community must be immune in order to achieve herd immunity? The more contagious a disease is, the larger the proportion of the population that must be immune to the disease to stop its spread. It's estimated that 94% of the population must be immune to interrupt the chain of transmission.
Often, a percentage of the population must be capable of getting a disease in order for it to spread. If the proportion of the population that is immune to the disease is greater than this threshold, the spread of the disease will decline. Researchers and policymakers share the task of choosing appropriately from among the alternative rural definitions currently available or creating their own unique definitions.

These are only a few examples of how one might use standard deviation, but many more exist. Generally, calculating standard deviation is valuable any time it is desired to know how far from the mean a typical value from a distribution may be.

In 2021, NATO adopted an ambitious Climate Change and Security Action Plan to mainstream climate change concerns into NATO's political and military agenda. In 2006, NATO's Science Committee merged with the CCMS to form the Science for Peace and Security Programme to develop initiatives on emerging security challenges, including environmental protection issues like water management and the prevention of natural disasters, and energy security.

Advancing inclusive research is complex, involving genomic intricacies and intersecting social drivers of health. Achieving broader diversity, equity, and inclusion in clinical trials requires nothing less than a universal commitment to diverse, equitable and inclusive research that can lead to better medical treatments for more people. It is time to move from "should" to "must." Rather than suggest, we feel the FDA should require the development and implementation of enrollment plans centered around increasing diversity for all Phase 2 through 4 trials. Furthermore, because of accessibility issues, marginalized tribes or villages may not provide data at all, making the data biased toward certain areas or groups.
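The contagiousness-threshold relationship described above has a standard closed form in the simple SIR model: threshold = 1 - 1/R0, where R0 is the basic reproduction number. The specific R0 below is back-solved to reproduce the 94% figure quoted in the text (a measles-like value), not taken from the article:

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of a population that must be immune to halt spread (simple SIR model)."""
    return 1.0 - 1.0 / r0

# A measles-like basic reproduction number (roughly 12-18) yields thresholds
# in the low-to-mid 90s; r0 ~ 16.7 reproduces the 94% quoted in the text.
threshold = herd_immunity_threshold(16.7)  # ~ 0.94

# A disease with R0 = 2 needs only half the population immune.
half = herd_immunity_threshold(2.0)  # 0.5
```

This makes the text's qualitative claim quantitative: the larger R0 is, the closer the required immune fraction gets to 100%.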
Dense population clusters typically coincide with geographical locations commonly called cities, or urban or metropolitan areas; sparsely populated areas are known as rural. These terms do not have globally agreed-upon definitions, but they are useful in general discussions about population density and geographic location. Studies of human populations often take place at or below the city level, in places like Manhattan, which is part of New York City, New York, United States.

When you measure a certain observation from a given unit, such as a person's response to a Likert-scaled item, that observation is known as a response (see Figure 8.2). In other words, a response is a measurement value provided by a sampled unit. Each respondent will provide different responses to different items in an instrument.

If the population grows indefinitely, fewer and fewer resources will be available to sustain the population. This process, in which per capita population growth changes when population density changes, is referred to as density dependence.

In conclusion, to improve causal inference and the policies and actions based on this knowledge, the population sciences need to expand and deepen theorizing about who and what makes populations and their means. At a time when the topic of causality in the sciences remains hotly debated by philosophers and researchers alike, all parties nevertheless agree that "the question of how probabilistic accounts of causality can mesh with mechanistic accounts of causality desperately needs answering". As my article makes clear, the idea and reality of "population" reside at the nexus of this question.
Clarifying the substantive defining features of populations, together with who and what structures the dynamic and emergent distributions of their characteristics and components, is thus essential to both analyzing and altering causal processes. Growing opposition to the narrow population control focus led to a major change in population control policies in the early 1980s.

A population is a group of individuals of the same species occupying a particular geographic area. Populations may be relatively small and closed, as on an island or in a valley, or they may be more diffuse and without a clear boundary between them and a neighboring population of the same species. For species that reproduce sexually, the members of a population interbreed either exclusively with members of their own population or, where populations intergrade, to a greater degree than with members of other populations.

Additionally, because transportation has become easier and more frequent, diseases can spread quickly to new regions. In both instances, there is a sequential change in species until a roughly permanent community develops. Voracious feeders and rapid reproducers, Asian carp may outcompete native species for food and could drive them to extinction. It competes with native species for these resources and alters nursery habitats for other fish by removing aquatic plants. Another species, the silver carp, competes with native fish that feed on zooplankton.
https://mathoverflow.net/questions/162844/aleph-looks-like-mathbb-n
# $\aleph$ looks like $\mathbb N$?

We all know the notation $\aleph_\lambda$ for the $\lambda$th (or, I guess, $\lambda+1$st) infinite cardinal number; in particular $\aleph_0$ is the cardinality of the set of natural numbers $\mathbb N$. Out of curiosity: is it the case that historically, the Hebrew letter $\aleph$ (aleph) was chosen because it sort of looks like the letter N?

• The $\aleph$ notation appears in Cantor's Contributions to the Founding of the Theory of Transfinite Numbers. As far as I can tell, N wasn't used there to denote the set of natural numbers. This suggests that the answer is 'no'. Apr 8, 2014 at 22:07
• It's a shame, too. At least on two occasions on math.SE someone used $\aleph$ to denote $\Bbb N$. And not to mention people who are not native Hebrew speakers writing $\aleph$ in all sorts of ways which are not even wrong... But then again, it's always nice to have my students recognize a mathematical symbol and act all surprised that it's not just Greek letters and weird symbols! Apr 8, 2014 at 23:38
• @AsafKaragila Have you seen the MathSciNet review of the second edition of Bourbaki's set theory (MR0154814)? The last sentence reads: "In the first edition, all alephs except those appearing in exponents were printed upside down; in the new edition the exception has been removed." Apr 9, 2014 at 0:35
• @Andreas: Quite amusing. I suppose that at the time it wasn't trivial to find an $\aleph$ glyph in Europe... Apr 9, 2014 at 0:38

According to not necessarily reliable internet sources, Georg Cantor "told his colleagues and friends that he was proud of his choice of the letter aleph to symbolize the transfinite numbers, since aleph was the first letter of the Hebrew alphabet and he saw in the transfinite numbers a new beginning in mathematics: the beginning of the actual infinite."
Edit: According to less sketchy internet sources, "The choice was particularly clever, as Cantor was pleased to admit, because the Hebrew aleph was also a symbol for the number one. Since the transfinite cardinal numbers were themselves infinite unities, the aleph could be taken to represent a new beginning for mathematics." From 'Georg Cantor and the battle for transfinite set theory' at http://ad.infinitum.simons-rock.edu/Dauben-Cantor.pdf, with footnote: "Cantor explained his choice of the alephs to denote the transfinite cardinal numbers in a letter to Felix Klein of April 30, 1895. The original letter is in the Klein Nachlass, Universitatsbibliothek, Gottingen, and may also be read in a draft version in Cantor's letter-book for 1890-1895, pp. 142-143, also kept in the archives of the Niedersachsische Staats- und Universitatsbibliothek, Gottingen. See also Dauben 1979/1990, pp. 179-183; Meschkowski 1991, pp. 354-355."

• The Mystery of the Aleph: Mathematics, the Kabbalah, and the Search for Infinity, by Amir D. Aczel. Apr 8, 2014 at 22:04
• @BjørnKjos-Hanssen I do not have the book, but here is a quote from a review: "the number of whole numbers is aleph-null; the number of irrational numbers, aleph-one." It is not clear to me if this mistake comes from the author or from the reviewer. publishersweekly.com/978-1-56858-105-7 Jun 24, 2018 at 12:02

There is another explanation for Cantor's choice of aleph: not the numerical value of the character, but its occurrence in the word denoting infinity. Here is a quotation from the article by Yuval Ne'eman, "Issai Schur died here: some background comments, in memoriam" (xxi–xxx), MR1985185 (regarding Jewish mathematics professors): "Another interesting case is that of Georg Cantor (1845-1918), probably the most original and creative mind in nineteenth century mathematics.
In this case, conversion to Christianity had already taken place in his parents' generation, but he identified with Jewish destinies and used the Hebrew letter aleph $\aleph$ for his systematics of infinity (in Hebrew, ein-sof), which starts with an $\aleph$, and for which he was criticized by editors".

• I highly doubt this explanation, as prior to the birth of modern spoken Hebrew, the Hebrew word for infinity ("ein-sof") only appeared in esoteric mystical texts in Hebrew, and I'd be quite surprised to discover that Cantor (who was actually Christian) was familiar with those texts. – Haim Apr 9, 2014 at 1:31
• Cantor may not have had much contact with Judaism, but I suspect that many of the Protestant theologians with whom he did have contact would have had a good working knowledge of Hebrew: certainly biblical Hebrew, but maybe also some of those esoteric texts. Apr 9, 2014 at 4:17
• I tend to agree with @Haim on this issue. This sounds more like a later interpretation that would have earned a very large [citation needed] or some other Wikipedia notice for lack of citations or foundation. Apr 9, 2014 at 4:21
https://codereview.stackexchange.com/questions/226091/income-tax-calculator
# Income tax calculator [closed]

I've just started coding and have written a program to calculate your tax to be paid. Is there anywhere where I can improve this?

```python
while True:
    try:
        income = int(input("Enter your taxable income: "))  # input line reconstructed; original lost in extraction
    except ValueError:
        print("Sorry, I didn't understand that please enter taxable income as a number")
        continue
    else:
        break

if income <= 18200:
    tax = 0
elif income <= 37000:
    tax = (income - 18200) * 0.19
elif income <= 90000:
    tax = (income - 37000) * 0.235 + 3572
elif income <= 180000:
    tax = (income - 90000) * 0.37 + 20797
else:
    tax = (income - 180000) * 0.45 + 54097

print("you owe", tax, "dollars in tax!")
```

• identation isnt proper – Lalit Verma Aug 14 '19 at 7:35
• @LalitVerma At first I thought the try line was just indented too far, but actually all the rest of the code is at the wrong level... – Graipher Aug 14 '19 at 7:54
• To properly indent your code: remove the current code, paste your original code, select your freshly-pasted code and hit Ctrl+K. Or use an editor to give every line of code 4 extra spaces, that does the same thing. Considering indentation is very important in Python, you'll have to fix it to comply with our help center (which requires code to work). – Mast Aug 14 '19 at 8:35
• AFAICT that's not working code, even if indented properly. The code after the try block can't be executed, because it'll either break or continue before reaching it. – l0b0 Aug 14 '19 at 8:48
• That depends on how wrong the indentation is. If the rest of the code is supposed to be outside the while loop it would work.
– Graipher Aug 14 '19 at 8:53

Given the bad indentation of the original code, I will assume you meant to post code that looks like this:

```python
while True:
    try:
        income = int(input("Enter your taxable income: "))  # input line reconstructed from context
    except ValueError:
        print("Sorry, I didn't understand that please enter taxable income as a number")
        continue
    else:
        break

if income <= 18200:
    tax = 0
elif income <= 37000:
    tax = (income - 18200) * 0.19
elif income <= 90000:
    tax = (income - 37000) * 0.235 + 3572
elif income <= 180000:
    tax = (income - 90000) * 0.37 + 20797
else:
    tax = (income - 180000) * 0.45 + 54097

print("you owe", tax, "dollars in tax!")
```

I would suggest you use an IDE or a linter, both of which will point out syntax problems in the code.

The idea behind the code can be summarized as:

1. Get the income from user input
2. Calculate the tax
3. Print the amount owed

This is a good structure to have. You have three clear boundaries and have split the work appropriately. You can make this explicit with well named functions. I would lay it out like this:

```python
def get_income():
    while True:
        try:
            income = int(input("Enter your taxable income: "))  # input line reconstructed from context
        except ValueError:
            print("Sorry, I didn't understand that please enter taxable income as a number")
            continue
        else:
            break
    return income


def compute_tax(income):
    if income <= 18200:
        tax = 0
    elif income <= 37000:
        tax = (income - 18200) * 0.19
    elif income <= 90000:
        tax = (income - 37000) * 0.235 + 3572
    elif income <= 180000:
        tax = (income - 90000) * 0.37 + 20797
    else:
        tax = (income - 180000) * 0.45 + 54097
    return tax


if __name__ == "__main__":
    income = get_income()
    tax = compute_tax(income)
    print("you owe", tax, "dollars in tax!")
```

The advantage of this is that your code for working out how much tax is owed is easy to use in another python module.

```python
# tax_credits.py
from tax import compute_tax
...
```

If we look at how the tax is computed, the pattern is very clear. At each tax bracket we subtract an amount, multiply by a percentage, and add back an amount:

    tax = (income - S) * P + A

We could change the code to first figure out the tax bracket the income falls into, then compute the tax amount.
While this doesn't look any better now, it will be beneficial to explore this path.

```python
def compute_tax(income):
    if income <= 18200:
        S, P, A = 0, 0, 0  # Values picked so the tax amount is always 0.
    elif income <= 37000:
        S, P, A = 18200, 0.19, 0
    elif income <= 90000:
        S, P, A = 37000, 0.235, 3572
    elif income <= 180000:
        S, P, A = 90000, 0.37, 20797
    else:
        S, P, A = 180000, 0.45, 54097
    tax = (income - S) * P + A
    return tax
```

Since this repeated code now looks a bit easier to manage, let's turn it into a loop. I'll leave the details out as there are a few features of python that might be new to you, and are worth looking up yourself.

```python
def compute_tax(income):
    # (cutoff, percent, additive)
    # S is reused from the previous iteration of the loop,
    # so no need to store it.
    tax_brackets = (
        (18200, 0, 0),
        (37000, 0.19, 0),
        (90000, 0.235, 3572),
        (180000, 0.37, 20797),
    )
    # The final bracket, used if the income is bigger than any cutoff.
    last_bracket = (None, 0.45, 54097)

    previous_cutoff = 0
    for cutoff, percent, additive in tax_brackets:
        if income <= cutoff:
            break
        previous_cutoff = cutoff
    else:
        # If we get here we never found a bracket to stop in.
        _, percent, additive = last_bracket  # line reconstructed: fall back to the final bracket
    tax = (income - previous_cutoff) * percent + additive
    return tax
```

We should probably include some sanity checks in case somebody incorrectly changes values. As an example, we could check each cutoff is bigger than the last.

Note that this code is a bit denser than the original, and I don't know if I would recommend using a loop. If the original code ever gets more complex this is how I would try and simplify it. But until then I would leave the explicit if/elif/else statements in the previous suggestion.

```python
def get_income():
    while True:
        try:
            income = int(input("Enter your taxable income: "))  # input line reconstructed from context
        except ValueError:
            print("Sorry, I didn't understand that please enter taxable income as a number")
            continue
        else:
            break
    return income
```

The logic here is pretty solid. I would be hesitant to change too much. The only thing I do not like is int, as it is perfectly reasonable to earn a fractional unit of the currency.
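The cutoff sanity check suggested above — each cutoff bigger than the last — could be sketched like this (an illustration of the idea, not code from the original answer):

```python
def check_brackets(tax_brackets):
    """Raise ValueError unless the bracket cutoffs are strictly increasing."""
    cutoffs = [cutoff for cutoff, _percent, _additive in tax_brackets]
    for lower, upper in zip(cutoffs, cutoffs[1:]):
        if lower >= upper:
            raise ValueError(f"cutoffs must increase, got {lower} before {upper}")


# Passes silently for the brackets used above.
check_brackets(((18200, 0, 0), (37000, 0.19, 0),
                (90000, 0.235, 3572), (180000, 0.37, 20797)))
```

Running it once at module import time means a mis-edited table fails loudly instead of silently taxing the wrong bracket.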
There are a lot of ways you could code a function like get_income. Here is one alternative that removes a few unnecessary parts:

```python
def get_income():
    while True:
        try:
            # Returning directly removes the need for continue/else/break;
            # float (rather than int) allows fractional amounts. The exact
            # input line is reconstructed, not from the original answer.
            return float(input("Enter your taxable income: "))
        except ValueError:
            print("Sorry, I didn't understand that please enter taxable income as a number")
```

• wow thanks for that, I definitely have alot to learn – AMG_ Aug 14 '19 at 14:45

Hello @AMG_ and welcome to codereview.

One good thing here is checking that the user input is a number. It's a good habit to always assume user input is broken, and give them an indication of what's going wrong. You can also do other validation at this stage. What should happen if, for example, they enter a negative number?

By the way, although python does support try...except...else the else is a bit of a niche feature. It's more common, and hence easier for python programmers to see that it's correct, if you put the break at the end of the try section.

One cause for concern is the magic numbers in this code. If, say, the government changed the 0.19 threshold from 37k to 38k, you'd need to remember to change the 37000 in both its own band and the 90000 band. Moreover, because I know the idea behind marginal tax rates, I know what 20797 is meant to refer to. Even though I'm aware of it now, I'm not checking whether the calculation is accurate. If the government changed the 37000 tax band, all those numbers would need to change and it would be terribly easy to miss one. This code would be more maintainable if there were a list of thresholds all in one place, and the contribution from lower tax bands were calculated by the code rather than hard coded.

I would split the bit of code which calculates the tax into a function. It's generally considered good practice to split code which does user interaction (display and keyboard parsing) away from code which does calculations. The behaviour would be the same, but it's much easier to follow what's going on.

• Hi, thanks for that.
I am just learning so could you talk me through how i could change it to have thresholds? and the splitting also? – AMG_ Aug 14 '19 at 13:45
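A sketch of the "list of thresholds all in one place" idea from the answer above (not from the original thread; note the question's 0.235 rate contradicts its own hard-coded offset 20797, which corresponds to a 0.325 band, so 0.325 is assumed here — exactly the kind of inconsistency the answer warns about):

```python
# Marginal rates in one table: (lower threshold, rate). The contribution
# of each lower band is computed by the loop rather than hard-coded.
BANDS = [
    (18200, 0.19),
    (37000, 0.325),  # assumed; the question's 0.235 contradicts its offset 20797
    (90000, 0.37),
    (180000, 0.45),
]


def compute_tax(income):
    tax = 0.0
    for i, (threshold, rate) in enumerate(BANDS):
        if income <= threshold:
            break
        # Top of this band: the next threshold, or the income itself
        # for the unbounded last band.
        upper = BANDS[i + 1][0] if i + 1 < len(BANDS) else income
        tax += rate * (min(income, upper) - threshold)
    return tax
```

With this layout, changing one threshold automatically updates every band above it; compute_tax(200000) reproduces the question's (200000 - 180000) * 0.45 + 54097 = 63097.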
https://queslers.com/chef-and-brain-speed-codechef-solution/
# Chef and Brain Speed CodeChef Solution

## Problem – Chef and Brain Speed CodeChef Solution

In ChefLand, human brain speed is measured in bits per second (bps). Chef has a threshold limit of X bits per second above which his calculations are prone to errors. If Chef is currently working at Y bits per second, is he prone to errors?

If Chef is prone to errors print YES, otherwise print NO.

#### Input Format

The only line of input contains two space separated integers X and Y — the threshold limit and the rate at which Chef is currently working.

#### Output Format

If Chef is prone to errors print YES, otherwise print NO.

You may print each character of the string in uppercase or lowercase (for example, the strings yes, Yes, yEs, and YES will all be treated as identical).

#### Constraints

• $1 \leq X, Y \leq 100$

#### Sample 1:

Input: 7 9
Output: YES

Explanation: Chef's current brain speed of 9 bps is greater than the threshold of 7 bps, hence Chef is prone to errors.

#### Sample 2:

Input: 6 6
Output: NO

Explanation: Chef's current brain speed of 6 bps is not greater than the threshold of 6 bps, hence Chef is not prone to errors.

#### Sample 3:

Input: 31 53
Output: YES

Explanation: Chef's current brain speed of 53 bps is greater than the threshold of 31 bps, hence Chef is prone to errors.

#### Sample 4:

Input: 53 8
Output: NO

Explanation: Chef's current brain speed of 8 bps is not greater than the threshold of 53 bps, hence Chef is not prone to errors.
### Chef and Brain Speed CodeChef Solution in C++17

```cpp
#include <iostream>
using namespace std;

int main() {
    int X, Y;
    cin >> X >> Y;
    if (Y > X) {
        cout << "YES";
    } else {
        cout << "NO";
    }
}
```

### Chef and Brain Speed CodeChef Solution in Python3

```python
x, y = map(int, input().split())
if x >= y:
    print("no")
else:
    print("yes")
```

### Chef and Brain Speed CodeChef Solution in Java

```java
import java.util.*;
import java.lang.*;
import java.io.*;

class Codechef {
    public static void main(String[] args) throws java.lang.Exception {
        Scanner sc = new Scanner(System.in);
        int x = sc.nextInt();
        int y = sc.nextInt();
        if (y > x) {
            System.out.println("yes");
        } else {
            System.out.println("no");
        }
    }
}
```
https://aimsciences.org/article/doi/10.3934/dcds.2018127
# American Institute of Mathematical Sciences

June 2018, 38(6): 2965-2985. doi: 10.3934/dcds.2018127

## Lozi-like maps

1 Department of Mathematical Sciences, Indiana University-Purdue University Indianapolis, 402 N. Blackford Street, Indianapolis, IN 46202, USA
2 Department of Mathematics, Faculty of Science, University of Zagreb, Bijenička 30, 10 000 Zagreb, Croatia

**Supported in part by the NEWFELPRO Grant No. 24 HeLoMa, and in part by the Croatian Science Foundation grant IP-2014-09-2285

Received September 2017. Published April 2018.

Fund Project: This work was partially supported by a grant number 426602 from the Simons Foundation to Michał Misiurewicz

We define a broad class of piecewise smooth plane homeomorphisms which have properties similar to the properties of Lozi maps, including the existence of a hyperbolic attractor. We call those maps Lozi-like. For those maps one can apply our previous results on kneading theory for Lozi maps. We show strong numerical evidence that there exist Lozi-like maps that have kneading sequences different from those of Lozi maps.

Citation: Michał Misiurewicz, Sonja Štimac. Lozi-like maps. Discrete & Continuous Dynamical Systems - A, 2018, 38 (6): 2965-2985. doi: 10.3934/dcds.2018127

##### References:

[1] M. Brin and G. Stuck, Introduction to Dynamical Systems, Cambridge University Press, Cambridge, 2002.
[2] Z. Elhadj, Lozi Mappings: Theory and Applications, CRC Press, Boca Raton, FL, 2014.
[3] Y. Ishii, Towards a kneading theory for Lozi mappings Ⅰ. A solution of the pruning front conjecture and the first tangency problem, Nonlinearity, 10 (1997), 731-747.
[4] R. Lozi, Un attracteur etrange(?) du type attracteur de Hénon, J. Phys. Colloques (Coll. C5), 39 (1978), 9-10. doi: 10.1051/jphyscol:1978505.
[5] M. Misiurewicz, Strange attractor for the Lozi mappings, Ann. New York Acad. Sci., 357 (1980), 348-358.
[6] M. Misiurewicz and S.
Štimac, Symbolic dynamics for Lozi maps, Nonlinearity, 29 (2016), 3031-3046. doi: 10.1088/0951-7715/29/10/3031.

Figures: positions of some distinguished points; the set of parameters; the triangle $\Theta$ and positions of some distinguished points; attractor for the Lozi map with parameters described by (F1') and (F2'), with the $y$-coordinate stretched by factor $7/4$; graphs of (F1') and (F2'); equations (F1') and (F2') as inequalities.
https://civilengineering.blog/tag/profile-of-high-masonry-gravity-dam/
## Profile of high masonry gravity dam

Mr. G. Molesworth gave the following formulae for fixing the profile. These formulae are applicable to masonry gravity dams only.

$x = \sqrt{\frac{1.76\,y^{3}}{p_a + 1.06\,y}}$

But the value of x should not exceed yρ at any cost.

$Z = \frac{y}{36.5\,p_a}$

where

x = D/S offset from the vertical line, also known as the axis of the dam, at a depth y below the maximum reservoir level.
Z = U/S offset from the vertical line at a depth of y metres.
b1 = base width of the dam at 4h below the full reservoir level.
a = thickness of the dam at the full reservoir level; it is taken as 0.4 b1.
pa = allowable compressive stress for masonry in t/m², the value of which may vary between 77 and 110 t/m².
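A quick numerical illustration of the two formulae (the function name and the sample values y = 20 m and pa = 90 t/m² are assumptions for illustration, pa being a mid-range allowable stress):

```python
import math


def molesworth_offsets(y, p_a):
    """Profile offsets at depth y (metres) below the maximum reservoir
    level, for an allowable compressive stress p_a in t/m^2."""
    x = math.sqrt(1.76 * y**3 / (p_a + 1.06 * y))  # downstream (D/S) offset
    z = y / (36.5 * p_a)                           # upstream (U/S) offset
    return x, z


x, z = molesworth_offsets(20.0, 90.0)  # roughly x = 11.25 m, z = 0.0061 m
```

As expected, the upstream offset is tiny compared with the downstream one, which is what gives a gravity dam its characteristic near-vertical upstream face.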
http://www.piday.org/calculators/compound-interest-calculator/
Compound Interest Calculator

$$A = P\left(1 + \frac{r}{n}\right)^{nt}$$

where

A = total amount
P = principal, or amount of money deposited
r = annual interest rate
n = number of times compounded per year
t = time in years
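The formula is straightforward to evaluate in code; here is a minimal sketch (the example figures are illustrative, not from the page):

```python
def compound_amount(principal, rate, times_per_year, years):
    """Total amount A = P * (1 + r/n) ** (n * t)."""
    return principal * (1 + rate / times_per_year) ** (times_per_year * years)


# e.g. $1,000 at 5% annual interest, compounded monthly for 10 years:
amount = compound_amount(1000, 0.05, 12, 10)  # about 1647.01
```

Increasing `times_per_year` raises the total slightly, approaching the continuous-compounding limit P·e^(rt).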
https://web2.0calc.com/questions/urgent-please-help-discounts
An item is regularly priced at \$65. It is now priced at a discount of 55% off the regular price. Find the price now.

Jan 25, 2019

$$55\% \text{ off means you pay } (100\% - 55\%) = 45\%\\ 45\% \text{ of } \$65 = (0.45)(65) = \$29.25$$
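The same arithmetic as a short code sketch:

```python
regular_price = 65.00
discount = 0.55
sale_price = regular_price * (1 - discount)  # 65 * 0.45 = 29.25
```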
http://statisfaction.wordpress.com/category/general/
# Statisfaction

## BayesComp on wikidot

Posted in General by Pierre Jacob on 16 April 2013

Since the classic picture of Bayes is actually not a picture of Bayes, we might as well use Ryan.

Hey, just a quick note about BayesComp, a new wiki about Bayesian Computational Statistics (see this outdated but well-written introduction if you really don't know what that is), as Xian pointed out. It is organised by the ISBA Section on Bayesian Computation, notably Peter Green and Nicolas Chopin so far. If the community gets into it, it could become the nerve centre for online resources about Bayesian Computation, which so far are quite scattered and poorly advertised. Good luck to BayesComp!

## Marine Biogeochemical Data Assimilation Symposium in Hobart, 27th-30th May

Posted in General by Pierre Jacob on 10 April 2013

Tessellated Pavement, Eaglehawk Neck, Tasman Peninsula

Hello, at the end of May CSIRO (Marine and Atmospheric Research, Hobart), and in particular Emlyn Jones, organise a conference on this topic, subtitled: New Pathways to Understanding and Managing Marine Ecosystems: Quantifying Uncertainty and Risk Using Biophysical-Statistical Models of the Marine Environment.

## 100 Savvy Sites on Statistics and Quantitative Analysis

Posted in General by Pierre Jacob on 8 April 2013

Hello hello, just a quick post to advertise the following list of statistics-related blogs and websites. Click on the badge to access it. We will be back soon with more content! Cheers, Pierre

## Dropbox Space Race

Posted in Geek, General by Julyan Arbel on 1 December 2012

Hi, in addition to the referral program (you refer a new user, you win an extra 0.5 GB), the Dropbox Space Race will give you 3 GB of extra space (for 2 years) if you register with your email from a competing university. The best schools will get more space. Here are the top 100 schools. Come on, there is no French school in the top 100! Thanks Nicolas for the info.
## Bayesian Condom Use

Posted in General, Statistics by Pierre Jacob on 29 November 2012

HIV transmission model among Female Sex Workers; diagram taken from the paper by Dureau et al.

Hey there, what do you do when you see the word "condom" in the title of a new arXiv entry?! You click with wild excitement of course! And you end up reading

## International Year of Statistics 2013

Posted in General, Statistics by Pierre Jacob on 19 November 2012

Britney hears about the forthcoming International Year of Statistics

Hey, at Statisfaction's headquarters (located inside a volcanic crater on a distant planet), we received an email from Jeffrey Myers of the American Statistical Association to advertise the International Year of Statistics, 2013! To quote the webpage, the goals of Statistics2013 include:

• increasing public awareness of the power and impact of Statistics on all aspects of society;
• nurturing Statistics as a profession, especially among young people; and
• promoting creativity and development in the sciences of Probability and Statistics.

Those are great goals that we obviously support! Statistics is an important field of applied mathematics and has been for a while now, but public awareness still has to increase. At cocktail parties, it still isn't super sexy to admit that you're a statistician. It should be! And it's good that some people are working on that at Amstat, at Tumblr, at NYTimes, at Rstudio and elsewhere. We'll go on blogging here, maybe with new contributors and more technical posts shortly. Stay tuned! Pierre

## A glimpse of Inverse Problems

Posted in General, Seminar/Conference, Statistics by JB Salomond on 15 November 2012

Hi folks! Last Tuesday a seminar on Bayesian procedures for inverse problems took place at CREST. We had time for two presentations by young researchers, Bartek Knapik and Kolyan Ray.
Both presentations deal with the problem of observing a noisy version of a linear transform of the parameter of interest, $Y = K\mu + \frac{1}{\sqrt{n}} Z$, where $K$ is a linear operator and $Z$ a Gaussian white noise. Both presentations considered asymptotic properties of the posterior distribution (their papers can be found on arXiv, here for Bartek's, and here for Kolyan's). There is a wide literature on asymptotic properties of the posterior distribution in direct models. When looking at the concentration of $f$ toward a true distribution $f_0$ given the data, with respect to some distance $d(\cdot,\cdot)$, a well-known problem is to derive concentration rates, that is, the rate $\epsilon_n$ such that $\pi(d(f,f_0) > \epsilon_n | X^n) \to 0$. For inverse problems, the usual methods as introduced by Ghosal, Ghosh and van der Vaart (2000) usually fail, and thus results in this setting are in general difficult to obtain.

Bartek presented some very refined results in the conjugate case. He manages to get results on the concentration rates of the posterior distribution, on Bayesian credible sets, and on Bernstein–von Mises theorems – which state that the posterior is asymptotically Gaussian – when estimating a linear functional of the parameter of interest. Kolyan gave some general conditions on the prior to achieve a concentration rate, and proved that these techniques lead to optimal concentration rates for classical models. I knew only a little about inverse problems, but both talks were very accessible and I will surely get more involved in this field!

## Just for the fun of it…

Posted in General, Statistics by Pierre Jacob on 6 November 2012

On this useful series of posts from Freakonometrics, I stumbled upon this 1996 article published in Ecological Applications: Discussion: Should Ecologists Become Bayesians? It was a really fun and surprising read to me, so I felt like sharing.
Most surprising was the argument that established Frequentism had a better track record than Bayesian stats. What a weird remark from a researcher! Hopefully the atmosphere among ecologists has changed since 1996 (and people have learned about Bayesian model choice), but I think that such articles explain why experienced Bayesian statisticians spend time writing replies like “Not only defended but also applied”: The perceived absurdity of Bayesian inference, and the recently-arXived anti-Bayesian moment and its passing, for instance.

## New job, same blog

Posted in General by Pierre Jacob on 4 October 2012

Hello, after this long and idle summer, here’s a little update on my research life™. Having completed my PhD (Xi’an and Robin kindly blogged about it there and there) in France, I am now a Research Fellow at the National University of Singapore (NUS), in the Department of Statistics and Applied Probability. I’m going to work mostly with Ajay Jasra on Sequential Monte Carlo theory and methodology. NUS seems like the perfect place to work long hours: there’s space, whiteboards, printers, air conditioning, food courts and even a gym. There’s also a bunch of very prestigious statisticians here, but I still don’t know how much interaction I can expect with them. I still plan to blog here about conferences, papers, software, etc. It seems like a good time to give my final impressions about getting a PhD in France, before I forget. All in all, I can’t complain about my personal case: it was a wonderful time for me, mostly thanks to Xi’an.

## Recent Advances in Sequential Monte Carlo / Warwick 2012

Posted in General, Seminar/Conference by Pierre Jacob on 21 September 2012

Hello, this blog is not dead! And it’s gonna get more active soon. These last few days, a workshop on Sequential Monte Carlo methods was held at the University of Warwick (link to the webpage).
It was a very exciting meeting, efficiently organised by Arnaud Doucet, Adam Johansen, Anthony Lee and Murray Pollock, and hosted by CRiSM. For those who couldn’t attend, here’s a little summary of my experience (or more exactly, just a bunch of links). Since SMC methods are at the core of my research, I was naturally interested in all the talks (which is exceptional for 3 days of workshop, filled with 30 talks!). It was probably a good time for a workshop on SMC, since there’s a lot of recent activity in the field. My impression is that this renewed interest is mainly due to a handful of recent developments. One of these was illustrated at this workshop by recent work from Alexandre Bouchard-Côté and colleagues called “Entangled Monte Carlo”, as well as by my own presentation: I talked about a new resampling scheme that avoids global interactions between all the particles, and resorts only to multiple pairwise interactions. This is ongoing work with Pierre Del Moral, Anthony Lee, Lawrence Murray and Gareth Peters, that I might talk about again in more detail in the future! Cheers!
http://gradestack.com/NTSE-Complete-Course/Exploring-Universe/Stellar-Distances/19172-3852-38420-study-wtw
# Stellar Distances

The stars are so far away from us that they appear to be fixed. To measure these large distances we make use of the following units:

1. Light year: a light year is the distance travelled by light, moving at a speed of 3 × 10^8 m/s, in one year. One light year = 9.46 × 10^15 m.
2. Parsec: a parsec is the distance at which the radius of the Earth's orbit (one astronomical unit) subtends an angle of one arcsecond. 1 parsec = 3.26 light years.
3. Astronomical unit (AU): one astronomical unit is the mean distance of the Earth from the Sun. 1 AU = 1.496 × 10^11 m.
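These definitions can be cross-checked numerically. The sketch below uses the rounded constants quoted above and assumes a Julian year of 365.25 days; it recovers the 3.26 light-years-per-parsec conversion.

```python
import math

C = 3.0e8                      # speed of light in m/s (rounded)
YEAR = 365.25 * 24 * 3600      # seconds in one Julian year (assumed)
AU = 1.496e11                  # astronomical unit in m

# Light year: distance light travels in one year.
light_year = C * YEAR          # roughly 9.46e15 m

# Parsec: distance at which 1 AU subtends an angle of one arcsecond.
one_arcsecond = math.radians(1.0 / 3600.0)
parsec = AU / math.tan(one_arcsecond)

print(f"1 light year = {light_year:.3e} m")
print(f"1 parsec     = {parsec / light_year:.2f} light years")
```

For such a tiny angle, AU/tan(θ) and AU/θ agree to many digits, which is why the small-angle version of the parsec definition is usually quoted.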
https://studysoup.com/tsg/647334/elementary-linear-algebra-with-applications-9-edition-chapter-5-1-problem-36
# Elementary Linear Algebra with Applications, 9th Edition: Chapter 5.1, Problem 36 (ISBN 9780132296540)

Problem 36: Let S = {v1, v2, v3} be a set of nonzero vectors in R^n such that any two vectors in S are orthogonal. Prove that S is linearly independent.

Solution: Suppose c1 v1 + c2 v2 + c3 v3 = 0. Taking the dot product of both sides with v_i kills every term except the i-th one, since the pairwise dot products vanish by orthogonality; this leaves c_i (v_i · v_i) = 0. Because v_i is nonzero, v_i · v_i > 0, so c_i = 0 for each i. The only linear combination equal to the zero vector is therefore the trivial one, so S is linearly independent.
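A standard proof dots a supposed linear dependency with each vector in turn; the key facts can be checked numerically. The vectors below are a hypothetical pairwise-orthogonal set in R^3 chosen purely for illustration.

```python
# An assumed pairwise-orthogonal set of nonzero vectors in R^3
# (hypothetical example vectors; any such set behaves the same way).
S = [(1.0, 1.0, 0.0), (1.0, -1.0, 0.0), (0.0, 0.0, 2.0)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Pairwise orthogonality: every distinct pair has zero dot product.
assert all(dot(S[i], S[j]) == 0.0
           for i in range(3) for j in range(3) if i != j)

# Key step of the proof: dotting c1*v1 + c2*v2 + c3*v3 = 0 with v_i
# leaves c_i * (v_i . v_i) = 0, and v_i . v_i > 0 for nonzero v_i.
print([dot(v, v) for v in S])  # strictly positive squared norms

def det3(m):
    # 3x3 determinant; nonzero means the rows are linearly independent.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

print(det3(S))  # nonzero, confirming linear independence
```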
http://mathhelpforum.com/statistics/91883-cube.html
# Math Help - Cube

1. ## Cube

Select 3 vertices of a cube at random. What is the probability that they belong to the same face?

2. Originally Posted by Apprentice123: Select 3 vertices of a cube at random. What is the probability that they belong to the same face?

There are eight vertices in a cube. How many sets of three vertices are there? How many of those 3-sets have all three vertices in the same face of the cube?
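Following the hints above, the answer can be obtained by direct enumeration. A quick sketch, modelling the vertices as 0/1 coordinate triples and each face as the set of four vertices sharing a fixed coordinate:

```python
from itertools import combinations, product
from fractions import Fraction

vertices = list(product((0, 1), repeat=3))  # the 8 cube vertices

# Each face fixes one coordinate to 0 or 1 and contains 4 vertices.
faces = [frozenset(v for v in vertices if v[axis] == value)
         for axis in range(3) for value in (0, 1)]

triples = list(combinations(vertices, 3))   # C(8,3) = 56 in total
same_face = sum(1 for t in triples
                if any(set(t) <= face for face in faces))

# 6 faces x C(4,3) = 24 favourable triples, so the probability is 24/56.
print(Fraction(same_face, len(triples)))  # 3/7
```

No triple is counted twice, since two distinct faces of a cube share at most two vertices.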
https://www.physicsforums.com/threads/help-with-a-riemann-surface.368162/
Help with a Riemann surface

1. Jan 8, 2010, wofsy

I am having trouble describing the Riemann surface of log(z) + log(z-a).

2. Jan 9, 2010, mathman

I am very rusty on this subject, but did you try working with the combined log, log(z^2 - za)?

Last edited: Jan 9, 2010
https://www.gradesaver.com/textbooks/math/algebra/elementary-algebra/chapter-3-equations-and-problem-solving-3-3-more-on-solving-equations-and-problem-solving-problem-set-3-3-page-115/50
## Elementary Algebra

The formula for the volume of a right circular cone is:

V = $\frac{1}{3}$ $\times$ $\pi$ $\times$ $r^{2}$ $\times$ h

Substitute 324$\pi$ for V and 9 for r to obtain:

324$\pi$ = $\frac{1}{3}$ $\times$ $\pi$ $\times$ $9^{2}$ $\times$ h
324$\pi$ = $\frac{1}{3}$ $\times$ $\pi$ $\times$ 81 $\times$ h

Multiply both sides by 3:

972$\pi$ = $\pi$ $\times$ 81 $\times$ h

Divide both sides by 81$\pi$:

h = 12
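The same rearrangement can be checked numerically; a short sketch solving V = (1/3)πr²h for h with the given values:

```python
import math

V = 324 * math.pi   # given volume
r = 9               # given radius

# V = (1/3) * pi * r**2 * h  rearranges to  h = 3*V / (pi * r**2)
h = 3 * V / (math.pi * r ** 2)
print(h)  # approximately 12
```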
http://crypto.stackexchange.com/questions/3397/why-does-nets-ecb-mode-implementation-append-a-constant-block-to-my-ciphertext?answertab=active
# Why does .NET's ECB mode implementation append a constant block to my ciphertext?

Consider the following code and output:

    public static void Main()
    {
        DESCryptoServiceProvider symAlg = new DESCryptoServiceProvider();
        symAlg.BlockSize = 64;
        symAlg.GenerateKey();
        symAlg.Mode = CipherMode.ECB;

        testCipher(symAlg, new byte[] { 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
                                        0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08 });
        testCipher(symAlg, new byte[] { 0xC1, 0xC2, 0xC3, 0xC4, 0xC5, 0xC6, 0xC7, 0xC8,
                                        0xD1, 0xD2, 0xD3, 0xD4, 0xD5, 0xD6, 0xD7, 0xD8,
                                        0xE1, 0xE2, 0xE3, 0xE4, 0xE5, 0xE6, 0xE7, 0x8E,
                                        0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 });
        Console.ReadKey();
    }

    public static void testCipher(SymmetricAlgorithm symAlg, byte[] plainText)
    {
        ICryptoTransform xfrm;

        xfrm = symAlg.CreateEncryptor();
        byte[] encrypted = xfrm.TransformFinalBlock(plainText, 0, plainText.Length);

        xfrm = symAlg.CreateDecryptor();
        byte[] decrypted = xfrm.TransformFinalBlock(encrypted, 0, encrypted.Length);

        Console.WriteLine(new string('=', 23));
        writeBlocks(plainText);
        writeBlocks(encrypted);
        writeBlocks(decrypted);
    }

    private static void writeBlocks(byte[] blocks)
    {
        for (int i = 0; i < blocks.Length; i += 8)
            Console.WriteLine(BitConverter.ToString(blocks, i, 8));
        Console.WriteLine();
    }

Output:

    =======================
    01-02-03-04-05-06-07-08
    01-02-03-04-05-06-07-08

    1E-0C-3E-59-93-5C-23-6E
    1E-0C-3E-59-93-5C-23-6E
    6F-AC-50-69-34-D0-B1-61   // NOTE THIS

    01-02-03-04-05-06-07-08
    01-02-03-04-05-06-07-08

    =======================
    C1-C2-C3-C4-C5-C6-C7-C8
    D1-D2-D3-D4-D5-D6-D7-D8
    E1-E2-E3-E4-E5-E6-E7-8E
    00-00-00-00-00-00-00-00

    F9-9A-77-30-3B-31-7F-D2
    D8-B5-B2-C6-E7-E7-0F-90
    0E-90-DF-AF-56-C0-DE-84
    65-5D-E0-7D-5A-7A-0F-D9
    6F-AC-50-69-34-D0-B1-61   // AND THIS

    C1-C2-C3-C4-C5-C6-C7-C8
    D1-D2-D3-D4-D5-D6-D7-D8
    E1-E2-E3-E4-E5-E6-E7-8E
    00-00-00-00-00-00-00-00

I understand the weakness of ECB.
What I don't understand is why on earth .NET would force me to append a final block that is the length of the key and only varies with the key. No matter how weak the encryption was to start with, isn't this worse? I have a specific reason for using ECB: Encrypt array of int for individual retrieval. However, decrypting just one block (without the constant final block) yields:

    System.Security.Cryptography.CryptographicException was unhandled Message=Bad Data.

Now, knowing that it is constant for a given key, I can perform a bogus encryption at application initialization and cache the final block, feeding it back into my decryption operation rather than putting it in the data stream. But why would .NET encourage me to store this with the data, and is it only .NET that does this?

- Is .NET automatically adding padding? Sometimes these libraries will always add padding. Thus if your plaintext falls on a block boundary, they will add another complete block of padding. Perhaps you can change the padding to none? To test, send in an incomplete last block and see if it only pads to the block length instead of adding an entire extra block. – mikeazo Jul 31 '12 at 13:29
- Note: DES is not secure. The key size is much too small. You want to use TripleDES. – mikeazo Jul 31 '12 at 13:33
- @mikeazo: sure, make me fix my writeBlocks function! ;) just a moment... – shannon Jul 31 '12 at 13:34
- You are correct, mike! Further, it was able to deduce the correct length of the final block. The last byte must be the length of the padding, so a block-aligned message requires an extra block to represent it. – shannon Jul 31 '12 at 13:49
- And thanks for the DES reminder. – shannon Jul 31 '12 at 13:49

## 1 Answer

I believe what you are seeing is that .NET automatically uses PKCS #7 padding. This will always add padding; thus, if your plaintext is a complete block length, one extra block of padding will be added.
The reason the ciphertext ends up being the same in both of your test cases is that the same padding is added in both cases (see the PaddingMode enumeration for details on PKCS #7). You can have .NET use no padding via the Padding property.

- You are absolutely correct. Thanks! With the padding mode set to None, I'm able to decrypt a single block. As a side note, it also now refuses to encrypt a partial block, which I don't require. – shannon Jul 31 '12 at 13:55
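The accepted explanation is easy to reproduce outside .NET. Below is a minimal pure-Python sketch of PKCS #7 padding (an illustration of the scheme, not the .NET implementation): because the pad length is stored in the final byte, a message that already ends on a block boundary must gain a whole extra block of constant plaintext, and under ECB that constant block encrypts to the same ciphertext block for every block-aligned message under a given key, which is exactly the repeated trailing block in the question.

```python
def pkcs7_pad(data: bytes, block_size: int = 8) -> bytes:
    # PKCS #7 always pads: the pad length (1..block_size) is written
    # into every padding byte, so block-aligned input gains a full block.
    pad_len = block_size - (len(data) % block_size)
    return data + bytes([pad_len]) * pad_len

def pkcs7_unpad(data: bytes) -> bytes:
    # The last byte says how many bytes of padding to strip.
    return data[:-data[-1]]

aligned = bytes(range(1, 17))        # 16 bytes: two full 8-byte blocks
padded = pkcs7_pad(aligned)
print(len(padded))                   # 24: a third, all-0x08 block was added
assert padded.endswith(b"\x08" * 8)
assert pkcs7_unpad(padded) == aligned
```

A real unpadder should also validate that every pad byte equals the pad length before stripping, to reject corrupted data.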
http://motls.blogspot.com/2011/09/italian-seismological-witch-hunt-began.html
## Thursday, September 22, 2011

### Italian seismological witch hunt began

In June, this blog informed about the looming trial against 6 Italian seismologists and one public official who were going to be charged with manslaughter (a less serious charge than murder). Some victims also demand a modest fee of \$67 million from the 6-7 scientists. The trial started this week.

They're charged with manslaughter because in 2009 they didn't tell the people in advance that there would be a magnitude 6.3 earthquake that would kill 309 inhabitants of L'Aquila. They told them that such a big earthquake was very unlikely and evacuation wasn't a sensible idea. By saying this, they supposedly killed 309 people, the prosecution argues.

Well, the very idea that you kill another person by telling or not telling him something seems extremely bizarre to me. At most, you could say that such an event could be classified as a "lie". But even this description is false because they didn't say invalid things deliberately. Instead, it was a particular prediction that was falsified by the subsequent events.

It is not clear to me whether the people who support the harassment of the scientists actually realize that big earthquakes can't be predicted – and it's pretty likely that there exist very good reasons why it will always be impossible to predict them, regardless of the progress in science. Vibrations and tensions influence each other and randomly add up or cancel one another, and in some cases smaller vibrations may help bigger masses to cross a tipping point. While the release of the tension may be predictable, the growth of a quake is a black swan, an unusual event that may be unpredictable for fundamental reasons. At any rate, no big earthquake has ever been predicted in advance. The scientists said that there was virtually no danger because the best methods available to them led them to conclude that there was virtually no danger.
Some people claim that the scientists should have said that there was some danger, like 5%, that a tragedy was going to occur. Except that this is not what the reality actually was: their methodology clearly indicated that the risk was vastly smaller than 5%. The probability that the previous smaller tremors were a sign of a coming big earthquake is small enough that the average number of people who would die during a hypothetical chaotic evacuation would exceed the expected number of casualties of the potential big earthquake (which is low because the probability is low). Because of all these considerations, the scientists recommended everyone not to panic.

There is no currently available, scientifically solid way to disprove the opinion that the probability of a big earthquake was indeed extremely tiny and that the earthquake only occurred by chance, because unlikely events with a nonzero probability sometimes have to occur. One could even suggest that it's plausible that the bigger earthquake had nothing to do with the previous tremors. It was surely bad luck for the 309 casualties and their families in L'Aquila, but to suggest that this bad luck is the scientists' fault is just incredible. Rare natural catastrophes aren't caused by humans. And you can't, or shouldn't, decimate the seismological community of a nation just because the nation was affected by an earthquake.

Giampaolo Giuliani, who "predicted" a big earthquake (by pointing out some "radon activity", a methodology that looks completely crackpotterish) around the same time, has been celebrated as an Italian national hero, but he was just a fearmonger who turned out to be "lucky" in one particular episode. If the Italian folks believe that Giuliani can actually predict earthquakes, why don't they ask him to predict other earthquakes that are going to happen in the world? Why didn't Giuliani or his Asian colleagues warn the people ahead of the Himalayan earthquake that killed 100 people today?
Such things are happening all the time. I think that they actually know that he has no miraculous scientific or supernatural skills: he was just lucky. But they act irrationally, celebrating a man without a good reason and attacking 7 other people without a reason, too. It's a typical superstitious attitude to the world: people attempt to give an "anthropomorphic" shape to events whose causes they don't understand.

Scientists are no shamans who can predict everything about the future: this is simply not possible, especially not in fields like seismology. Certain "regular enough" things can be predicted; most others cannot. Everyone who claims to be able to do such things (e.g. the members of the IPCC) is a fraudster. Scientists are no Jesuses Christs who can take the responsibility for the life and decisions of everyone else (and absorb all their guilt). The only thing that scientists or scientifically trained public officials may do is to learn a particular technique that's been developed by science and apply it. In some contexts the technique is reliable; in others it's not. The latter includes predictions of large earthquakes. Whether our current limitations in predicting large earthquakes are temporary or will stay with us, it's a fact that must be taken into account while making any judgments about the guilt of anyone in recent years.

Those scientists did exactly what they should have done: they applied their knowledge of the discipline – and Italy doesn't have too many experts whose knowledge of seismology matches or beats that of the indicted ones – and they deduced the consequences. The conclusion of this procedure was that the danger was negligible. They shared the conclusion with the people. It just happened that a rather big earthquake was coming, but it's not the scientists' fault, and according to current seismology, the elevated threat couldn't have been predicted.
There could have been other situations in which science would know that an earthquake was much more likely, but this wasn't one of them. There are surely many uneducated people in L'Aquila who are sad or angry and who are looking for scapegoats. But earthquakes – much like weather events – don't have an anthropomorphic culprit. They just follow from the laws of Nature. If some people in L'Aquila and the Italian courts are so intellectually challenged that they're not capable of understanding how Nature works according to science, I respect that, but at least I urge them to replace their superstitions about witches with superstitions about gods of quakes who can't ever be beaten. Pray to those gods and sacrifice your assets to them if you need to believe prehistoric superstitions, but please don't try to link these superstitions to people who don't have anything to do with them.

The only systematic way similar scientists could protect their skin in a similarly science-hostile environment (hostile also to science's basic property that it is not infallible: falsified predictions are actually the main events driving scientific progress) would be to say that there is a danger at all times. A big earthquake may come at any moment, even without previous warning tremors. Earthquakes in Italy are possible, so the only possible solution would be to evacuate most of Italy forever. This is clearly unrealistic. Everyone who lives in an area with a nonzero frequency of earthquakes must get used to a nonzero risk that such an earthquake may arrive at an unexpected time. In 2009 a bigger earthquake followed smaller ones, but be sure that this is not always the case (or at least the delay may be so short that it doesn't give you enough time for any preemptive maneuvers). Trying to generalize this single natural event and promote it to a law of seismology that every scientist is obliged to worship is totally unscientific and irrational.
While the single event may have changed some people's lives, it's still just one earthquake, and seismology must take it into account together with thousands of other events (and non-events). The science extracted from all these observations says something else than what the blood-thirsty laymen who only think about the single 2009 L'Aquila earthquake seem to have concluded.

Imbeciles vs Galileo in a court room. So please stop this inhuman theater that resembles the trials against Galileo Galilei or Giordano Bruno. You are helping to create the image of Italy as a nation of vengeful and irrational savages: the image of Italy as the country of Galileo Galilei seems much more flattering. While Italian prosecutors believe that earthquakes are caused by seismologists, Rick Perry's collaborator unfortunately believes that tornadoes are caused by homosexuals.

#### snail feedback (7) :

It is obvious that you are as bigot as you think i am. You are talking about things that you don't know. Noone is hunting witches here, there are responsibilities and responsibles. And this has nothing to do with unforseen earthquakes, but it's clear that you feel really cool on defending those poor scientists that are prosecuted by the inquisition. But i'm pretty sure you're not going to post this, as you didn't post my last reply.

I have posted all your replies. Of course that you and similar assholes are hunting witches. There are responsibilities and responsible people, but in a country that has a basic respect for our scientific understanding of the world, seismologists can't be made responsible for the casualties in an earthquake because they didn't cause the earthquake; and they couldn't reliably predict it. The very fact that you're making the seismologists responsible for the casualties *means* that you are hunting witches. It is *exactly* what the trials against witches were always all about.
Someone's daughter died a week after a strange and unpopular new neighbor moved into the village, so the neighbor - a witch - was clearly "responsible" for the daughter's death. Except that she wasn't and everyone who acts as if she had is an idiotic and evil asshole just like you. Again, you know you are talking about things you don't know. If you do, please write here EXACTLY the charge that has been made to those people. If it is "not having predicted that an earthquake would occurr", i'll get down on my knees and kiss your feet. For example, you don't know that a major safety conference was held a week before the quake, it lasted half an hour, and the official document of that summit was released after the quake... Ciao obnoxious jerk, it doesn't matter whether a conference lasts half an hour or 50 hours and whether it releases a document or a YouTube music video clip. Regardless of those things, it's still true that large earthquakes occur randomly and at most some probability distribution and patterns may be predicted which is exactly what those people did. Italy hired a group of 6+1 people that were most likely to say something sensible about seismology but it still doesn't change the fact that individual earthquakes such as this one occur independently of expectations, conferences, and documents. If assholes like you were meaning to harass the people in the case that the region is affected by a natural catastrophe, i.e. you wanted to hold the scientists responsible for earthquakes that would occur and their casualties, you should have told the scientists in advance. I guess that they wouldn't serve on your committee under these insane conditions. A scientist - or anyone else - can't be held responsible for an earthquake because he has no way to influence whether it will occur at a given place and given time. 
I think you may find this account of the story interesting and a little bit less biased: http://www.nature.com/news/2011/110913/full/477264a.html Cheers, G.
http://mathhelpforum.com/calculus/78272-deriving-taylor-maclaurin-polynomials.html
# Thread: Deriving Taylor and Maclaurin Polynomials

1. ## Deriving Taylor and Maclaurin Polynomials

I am just reading about Taylor and Maclaurin Polynomials. I was wondering how they were derived. Could someone please show me how, in both a rigorous mathematical way and an intuitive way. I hate to just use formulas when I don't have at least some grasp of the theory behind them. As a toddler my parents gave me a set of real tools (yes, they were insane) and I commenced dismantling everything. I took apart tables, telephones and finally the TV, which put an end to my tools. Anyway, I appreciate the insight from whoever - the more the merrier. ManyArrows

2. Take a look at Taylor Series Expansion by mtu.edu for a derivation. Basically, you look at a function and say, "How could I expand this in a power series?" The Taylor Series is one way to expand a function as a power series. Expanding a function in this way can lead to some useful approximations. For instance, let's look at sin(x) about zero:

About zero, $\sin (x) = x-\frac{x^3}{6}+\frac{x^5}{120}+ ...$

So one can make approximations to whatever order they wish. For instance, to zeroth order, sin(x) about zero = 0. To first order, sin(x) about zero = x. To third order, sin(x) about zero $= x-\frac{x^3}{6}$. Etc.

For example, consider the limit

$\lim_{x\to 0 } \frac {\sin (x)}{3x - (1/2)x^3} = ?$

Notice that in the limit as x goes to zero, the Taylor series to any order is exactly equal to sin(x):

$\lim_{x\to 0 } \sin (x) = \lim_{x\to 0 } x = \lim_{x\to 0 } \left(x-\frac{x^3}{6}\right) = \lim_{x\to 0 } \left(x-\frac{x^3}{6}+\frac{x^5}{120}\right) = ...$

So,

$\lim_{x\to 0 } \frac {\sin (x)}{3x - (1/2)x^3} = \lim_{x\to 0 } \frac {x-\frac{x^3}{6}}{3x - (1/2)x^3} = \frac {1}{3}$

In physics we make approximations using Taylor series all the time, because instruments can only measure to a certain degree of accuracy anyway.

3. ## One down

Ok, that explains the mathematical derivation. It was easy to follow, thank you. Now though, what is the theory behind it?
Derivations are a way to prove an argument, right? So, when Mr. Taylor and Mr. Maclaurin were pondering the universe, what led them down this path? Also, when they say that Maclaurin polynomials are accurate close to zero, how close is close? And the same with Taylor: how close to the reference point is close? Does this have something to do with the margin of error? Also, I haven't learned power series yet. For some reason my textbook has Taylor and Maclaurin in chpt 9 and power series in chpt 11. Thanks, ManyArrows

4. Well, if you were to take the Taylor or Maclaurin series out to infinite order, it would be exactly equal to the function. The only caveat is that the nth derivative at the given point must be defined for the function. So a function like $f(x) = \sqrt{x}$ does not have a Maclaurin series, but it does have a Taylor series about non-zero points.

5. Strange about your book and the Taylor/power-series thing. Check out the wiki on power series: Power series - Wikipedia, the free encyclopedia. Notice how the Taylor and Maclaurin series are really just specific types of power series.

6. ## Textbook weirdness

The chapter on the Taylor series also didn't explain why they were useful. I read it and was like: why do I need to approximate a function that I already have? Luckily the class notes explained it, which I read after the chapter. I am taking calc 2 online after taking calc 1 ten years ago. It's a lot of fun self-teaching math. ManyArrows
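The partial sums discussed in the thread are easy to check numerically. The sketch below (the helper name `sin_taylor` is my own) builds the Maclaurin partial sums of sin(x) and compares them against `math.sin`, showing how the error shrinks as the order grows:

```python
import math

def sin_taylor(x, order):
    """Maclaurin partial sum of sin(x), keeping powers up to x**order."""
    total = 0.0
    for n in range(order + 1):
        k = 2 * n + 1                      # sin's series has only odd powers
        if k > order:
            break
        total += (-1) ** n * x ** k / math.factorial(k)
    return total

x = 0.5
for order in (1, 3, 5):
    approx = sin_taylor(x, order)
    print(f"order {order}: {approx:.9f}  error {abs(approx - math.sin(x)):.2e}")
```

Near zero the low-order sums are already excellent, which is why the limit in post 2 can be evaluated by swapping sin(x) for its third-order polynomial.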
https://eng.libretexts.org/Courses/Canada_College/Circuits_and_Devices%3A_Laboratory/10%3A_Parallel_RLC_Circuits/10.05%3A_Procedure
# 10.5: Procedure ## 9.5.1: RC Circuit 1. Using Figure 9.4.1 with a 10 V p-p 10 kHz source, R = 1 k$$\Omega$$, and C = 10 nF, determine the theoretical capacitive reactance and circuit impedance, and record the results in Table 9.6.1 (the experimental portion of this table will be filled out in step 6). Using the current divider rule, compute the resistor and capacitor currents and record them in Table 9.6.2. 2. Build the circuit of Figure 9.4.1 using R = 1 k$$\Omega$$, and C = 10 nF. A common method to measure current using the oscilloscope is to place a small current sense resistor in line with the current of interest. If the resistor is much smaller than the surrounding reactances it will have a minimal effect on the current. Because the voltage and current of the resistor are always in phase with each other, the relative phase of the current in question must be the same as that of the sensing resistor’s voltage. Each of the three circuit currents will be measured separately and with respect to the source in order to determine relative phase. To measure the total current, place a 10 $$\Omega$$ resistor between ground and the bottom connection of the parallel components. Set the generator to a 10 V p-p sine wave at 10 kHz. Make sure that the Bandwidth Limit of the oscilloscope is engaged for both channels. This will reduce the signal noise and make for more accurate readings. Also, consider using waveform averaging, particularly to clean up signals derived via the Math function. 3. Place probe one across the generator and probe two across the sense resistor. Measure the voltage across the sense resistor, calculate the corresponding total current via Ohm’s law and record in Table 9.6.2. Along with the magnitude, be sure to record the time deviation between the sense waveform and the input signal (from which the phase may be determined eventually). 4. 
Remove the main sense resistor and place one 10 $$\Omega$$ resistor between the capacitor and ground to serve as the capacitor current sense. Place a second 10 $$\Omega$$ resistor between the resistor and ground to sense the resistor current. Leave probe one at the generator and move probe two across the sense resistor in the resistor branch. Repeat the Ohm's law process to obtain its current, recording the magnitude and phase angle in Table 9.6.2. Finally, move probe two so that it is across the capacitor’s sense resistor. Measure and record the appropriate values in Table 9.6.2. Note that if you are using a four channel oscilloscope, simultaneous input, resistor and capacitor measurements are possible. 5. Move probe one to the resistor’s sense resistor and leave probe two at the capacitor’s sense resistor. Save a picture of the oscilloscope displaying the voltage waveforms representing $$i_R$$, $$i_C$$ and $$i_{in}$$ (i.e., the Math waveform computed from $$i_R + i_C$$). 6. Compute the deviations between the theoretical and experimental values of Table 9.6.2 and record the results in the final columns of Table 9.6.2. Based on the experimental values, determine the experimental Z and $$X_C$$ values via Ohm’s law ($$X_C = V_C/i_C, Z = V_{in}/i_{in})$$ and record back in Table 9.6.1 along with the deviations. 7. Create a phasor plot showing $$i_{in}$$, $$i_C$$, and $$i_R$$. Include both the time domain display from step 4 and the phasor plot with the technical report. ## 9.5.2: RL Circuit 8. Replace the capacitor with the 10 mH inductor (i.e. Figure 9.4.2), and repeat steps 1 through 7 in like manner, using Tables 9.6.3 and 9.6.4. ## 9.5.3: RLC Circuit 9. Using Figure 9.4.3 with both the 10 nF capacitor and 10 mH inductor (and a third sense resistor), repeat steps 1 through 7 in like manner, using Tables 9.6.5 and 9.6.6. Note that it will not be possible to see all four waveforms simultaneously in step 5 if a two channel oscilloscope is being used. 
For a four channel oscilloscope, place a probe across each of the three sense resistors. 10.5: Procedure is shared under a CC BY-NC-SA license and was authored, remixed, and/or curated by James M. Fiore.
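The theoretical values requested in step 1 follow from the reactance formula and the current divider rule. A minimal sketch, assuming the step 1 component values (R = 1 kΩ, C = 10 nF, 10 kHz source; the variable names are my own):

```python
import cmath
import math

f = 10e3     # source frequency (Hz)
R = 1e3      # resistor (ohms)
C = 10e-9    # capacitor (farads)
V = 10.0     # peak-to-peak source voltage

Xc = 1 / (2 * math.pi * f * C)     # capacitive reactance, ~1.59 kilohms
Zc = complex(0, -Xc)               # capacitor impedance, -j*Xc
Z = (R * Zc) / (R + Zc)            # parallel combination of R and C

i_in = V / Z                       # total current (Ohm's law, phasor form)
i_R = i_in * Zc / (R + Zc)         # current divider: resistor branch
i_C = i_in * R / (R + Zc)          # current divider: capacitor branch

print(f"Xc = {Xc:.1f} ohms, |Z| = {abs(Z):.1f} ohms")
print(f"|i_in| = {abs(i_in) * 1e3:.2f} mA at {math.degrees(cmath.phase(i_in)):.1f} deg")
```

Working with complex phasors keeps the branch currents consistent by construction: the two divider currents sum exactly to the total current, and the phase of each current falls out of `cmath.phase` for comparison with the time shifts measured in steps 3 and 4.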
http://biblioteca.posgraduacaoredentor.com.br/?q=ISOMETRIC+PARTICLE
## Quantização da partícula não relativística em espaços curvos como superfícies do Rn; Quantization of the non-relativistic particle in curved spaces as surfaces of Rn

Resende, Maria Fernanda Araujo de. Source: Biblioteca Digitais de Teses e Dissertações da USP. Type: Master's dissertation.

In this work we study the problem of constructing a quantum theory for a particle moving non-relativistically in a curved space, treated as a submanifold of a Euclidean one, putting somewhat more emphasis on the geometric aspect involved in this approach, since other works on the same subject do not. Besides showing that the consequent use of a theory of constrained systems does not help to remove the ambiguities of the quantum formulation, which are directly related to operator ordering, we also present, through a specific quantization carried out under Dirac's prescription, elements that allow one to construct a quantum formalism that is not only covariant but also free of any quantum correction. In addition, we make some general remarks on the other possible classical approaches to the same problem, aiming to construct quantum theories associated with the system under consideration.
## Investigação cinética de modos geodésicos de baixas frequências em plasmas magnetizados; Kinetic investigation of low frequency geodesic modes in magnetized plasmas

Sgalla, Reneé Jordashe Franco. Source: Biblioteca Digitais de Teses e Dissertações da USP. Type: Doctoral thesis.

Owing to their importance in drift-wave turbulence and to their application for plasma diagnostics, the investigation of zonal flows (ZF) and geodesic acoustic modes (GAM) has attracted considerable attention in the plasma physics literature. In this thesis we first consider the effects of equilibrium poloidal and toroidal rotation on these modes; we then investigate diamagnetic effects on GAMs within a two-fluid model that includes parallel ion viscosity; and in the final part we consider Landau damping and diamagnetic effects simultaneously in the study of GAMs, this time within the gyrokinetic model. Diamagnetic effects are caused by terms involving density and temperature gradients that come from the equilibrium Maxwellian function. The coupling between the poloidal harmonics, $m = \pm1$, and the radial derivatives of macroscopic plasma quantities is responsible for the increase in the frequency of the high-frequency GAM and for the instability of the low-frequency GAM. This type of instability, which is proportional to the electron diamagnetic frequency and to the ratio between the temperature and density gradients...

## Compositional analysis for an unbiased measure of soil aggregation

Parent, Leon E.; de Almeida, Cinara X.; Hernandes, Amanda; Egozcue, Juan J.; Gulser, Coskun; Bolinder, Martin A.; Katterer, Thomas; Andren, Olof; Parent, Serge E.; Anctil, Francois; Centurion, Jose F.; Natale, William. Source: Elsevier B.V.
Type: Other. Format: pp. 123-131, English.

Soil aggregation is an index of soil structure measured by mean weight diameter (MWD) or scaling factors often interpreted as fragmentation fractal dimensions (D-f). However, the MWD provides a biased estimate of soil aggregation due to spurious correlations among aggregate-size fractions and scale-dependency. The scale-invariant D-f is based on weak assumptions to allow particle counts, is sensitive to the selection of the fractal domain, and may frequently exceed a value of 3, implying that D-f is a biased estimate of aggregation. Aggregation indices based on mass may be computed without bias using compositional analysis techniques. Our objective was to elaborate compositional indices of soil aggregation and to compare them to MWD and D-f using a published dataset describing the effect of 7 cropping systems on aggregation. Six aggregate-size fractions were arranged into a sequence of D-1 balances of building blocks that portray the process of soil aggregation. Isometric log-ratios (ilrs) are scale-invariant and orthogonal log contrasts or balances that possess the Euclidean geometry necessary to compute a distance between any two aggregation states, known as the Aitchison distance (A(x,y)). Close correlations (r>0.98) were observed between MWD...

## Novo design de laminados oclusais ultrafinos CAD/CAM de resina composta e cerâmica para o tratamento de erosão severa [New design of ultrathin CAD/CAM composite-resin and ceramic occlusal veneers for the treatment of severe erosion]

Schlichting, Luís Henrique. Type: Doctoral thesis. Format: 163 pp., illustrated (graphs, tables).

## Avaliação da resistência à fadiga e do modo de fratura de restaurações adesivas implantossuportadas em cerâmica e resina composta sobre pilares personalizados de zircônia para região de pré-molares [Evaluation of the fatigue resistance and fracture mode of implant-supported adhesive ceramic and composite-resin restorations on customized zirconia abutments in the premolar region]

Oderich, Elisa. Type: Doctoral thesis. Format: 181 pp., illustrated (tables, graphs).
## (Paleo)ecology of coccolithophores in the submarine canyons of the central Portuguese continental margin: environmental, sedimentary and oceanographic implications

Guerreiro, Catarina Alexandra Vicente, 1978-. Doctoral thesis, Geology (Palaeontology and Stratigraphy), Universidade de Lisboa, Faculdade de Ciências, 2013.

This thesis aims to contribute to the knowledge of coccolithophores from coastal-neritic-oceanic transitional settings, their distribution offshore central Portugal, and their potential as a (paleo)ecological and (paleo)oceanographic proxy in the context of submarine canyons. In order to achieve a good understanding of the relationship of coccolithophores with the environmental setting, results were interpreted on a multidisciplinary basis, integrating a significant data set concerning the hydrological characteristics of surface waters of the central Portuguese margin (i.e. nutrients, chlorophyll, temperature, salinity, turbidity, wind data) and seabed sedimentological characteristics (i.e. sediment bulk composition, particle size and sediment accumulation). The most striking variations in phytoplankton communities off central Portugal occurred along the coastal-oceanic lateral gradient. Two principal groups of taxa of opposite ecological behaviour were observed in the photic layer, with K-selected taxa preferentially distributed in the open ocean, and r-selected taxa preferentially occurring in more coastal-neritic regions. Such a gradient was also reflected in coccolith assemblages preserved in surface sediments on the seabed...

## A study of the pneumatic conveying of non-spherical particles in a turbulent horizontal channel flow

Laín, S.; Sommerfeld, M.
Source: Brazilian Society of Chemical Engineering. Type: Journal article.

In this work, the pneumatic conveying of non-spherical isometric particles with different degrees of non-sphericity is studied. The solids mass loading fraction is small enough in order to have a dilute flow, so inter-particle collisions can be neglected. As a first approximation, only the aerodynamic drag force acting on the particles is considered, neglecting the lift forces and the particle rotation. The drag coefficient is calculated using the correlations of Haider and Levenspiel (1989) and Ganser (1993). The numerical simulations are compared with experimental data in a narrow six-meter-long horizontal channel flow laden with quartz and duroplastic particles with mean diameters of 185 and 240 µm, respectively (Kussin, 2004).

## Length and shape variants of the bacteriophage T4 head: mutations in the scaffolding core genes 68 and 22

Keller, B; Dubochet, J; Adrian, M; Maeder, M; Wurtz, M; Kellenberger, E. Type: Journal article.

The shape and size of the bacteriophage T4 head are dependent on genes that determine the scaffolding core and the shell of the prohead. Mutants of the shell proteins affect mainly the head length. Two recently identified genes (genes 67 and 68) and one already known gene (gene 22), whose products are scaffold constituents, have been investigated. Different types of mutants were shown to strongly influence the proportion of aberrantly shaped particles. By model building, these shape variants could be represented as polyhedral bodies derived from icosahedra, through outgrowths along different polyhedral axes. The normal, prolate particle is obtained by elongation along a fivefold axis.
The mutations of the three core genes (genes 67, 68, and 22) affect the width mainly by lateral outgrowths of the prolate particle, although small and large isometric particles are also found. Many of the aberrant particles are multitailed, suggesting a correlation between tail attachment sites and shape.

## Cryo-reconstructions of P22 polyheads suggest that phage assembly is nucleated by trimeric interactions among coat proteins

Parent, Kristin N; Sinkovits, Robert S; Suhanovsky, Margaret M; Teschke, Carolyn M; Egelman, Edward H; Baker, Timothy S. Type: Journal article.

Bacteriophage P22 forms an isometric capsid during normal assembly, yet when the coat protein (CP) is altered at a single site, helical structures (polyheads) also form. The structures of three distinct polyheads obtained from F170L and F170A variants were determined by cryo-reconstruction methods. An understanding of the structures of aberrant assemblies such as polyheads helps to explain how amino acid substitutions affect the CP, and these results can now be put into the context of CP pseudo-atomic models. F170L CP forms two types of polyhead and each has the CP organized as hexons (oligomers of six CPs). These hexons have a skewed structure similar to that in procapsids (precursor capsids formed prior to dsDNA packaging), yet their organization differs completely in polyheads and procapsids. F170A CP forms only one type of polyhead, and though this has hexons organized similarly to hexons in F170L polyheads, the hexons are isometric structures like those found in mature virions. The hexon organization in all three polyheads suggests that nucleation of procapsid assembly occurs via a trimer of CP monomers, and this drives formation of a T = 7, isometric particle. These variants also form procapsids, but they mature quite differently: F170A expands spontaneously at room temperature...
## Two viruses from adult honey bees (Apis mellifera Linnaeus)

Bailey, L.; Gibbs, A.J.; Woods, R.D. Source: INRA (French National Institute for Agronomic Research). Type: Journal article (postprint), English.

Two viruses were isolated from honey bees. When fed to, sprayed on, or injected into healthy bees either virus made the bees become trembly within a few days, but whereas bees infected with one virus died quickly (acute "paralysis"), bees infected with the other survived for several days after first showing symptoms (chronic "paralysis"). Purified preparations of acute bee paralysis virus (ABPV) contained isometric particles about 28 mμ in diameter, whereas those of chronic bee paralysis virus (CBPV) contained particles of irregular shape about 27 × 45 mμ. Both viruses occurred in apparently healthy bees, but only CBPV particles were numerous in diseased bees from colonies naturally affected with the disease called "bee paralysis." On inoculation to healthy bees the symptoms caused by CBPV resembled those of the naturally occurring disease more than did those caused by ABPV.

## Multi-walled carbon nanotubes: sampling criteria and aerosol characterization

Chen, Bean T.; Schwegler-Berry, Diane; McKinney, Walter; Stone, Samuel; Cumpston, Jared L.; Friend, Sherri; Porter, Dale W.; Castranova, Vincent; Frazer, David G. Type: Journal article.

This study intends to develop protocols for sampling and characterizing multi-walled carbon nanotube (MWCNT) aerosols in workplaces or during inhalation studies. Manufactured dry powder containing MWCNTs, combined with soot and metal catalysts, forms complex morphologies and diverse shapes. The aerosols examined in this study were produced using an acoustical generator.
Representative samples were collected from an exposure chamber using filters and a cascade impactor for microscopic and gravimetric analyses. Results from filters showed that a density of 0.008–0.10 particles per µm² of filter surface provided adequate samples for particle counting and sizing. Microscopic counting indicated that MWCNTs, resuspended at a concentration of 10 mg/m³, contained 2.7 × 10⁴ particles/cm³. Each particle structure contained an average of 18 nanotubes, resulting in a total of 4.9 × 10⁵ nanotubes/cm³. In addition, fibrous particles within the aerosol had a count median length of 3.04 µm and a width of 100.3 nm, while the isometric particles had a count median diameter of 0.90 µm. A combination of impactor and microscopic measurements established that the mass median aerodynamic diameter of the mixture was 1.5 µm. It was also determined that the mean effective density of well-defined isometric particles was between 0.71 and 0.88 g/cm³...

## Particle Creation in the Bell-Szekeres Spacetime

Feinstein, A.; Sebastián, M. A. Pérez. Type: Journal article.

The quantization of a real massless scalar field in a spacetime produced in a collision of two electromagnetic plane waves with constant wave fronts is considered. The background geometry in the interaction region, the Bell-Szekeres solution, is locally isometric to the conformally flat Bertotti-Robinson universe filled with a uniform electric field. It is shown that before the waves interact the Bogoliubov coefficients relating different observers are trivial and no vacuum polarization takes place. In the non-singular interaction region neutral scalar particles are produced with number of created particles and spectrum typical of gravitational wave collision.; Comment: 18 pages. To appear in Class. Quantum Grav. 12, November (1995)

## Coherent States for Transparent Potentials

Samsonov, Boris F.
Type: Journal article.

Darboux transformation operators that produce multisoliton potentials are analyzed as operators acting in a Hilbert space. Isometric correspondence between Hilbert spaces of states of a free particle and a particle moving in a soliton potential is established. It is shown that the Darboux transformation operator is unbounded but closed and cannot realize an isometric mapping between Hilbert spaces. A quasispectral representation of such an operator in terms of continuum bases is obtained. Different types of coherent states of a multisoliton potential are introduced. Measures that realize the resolution of the identity operator in terms of the projectors on the coherent-state vectors are calculated. It is shown that when these states are related with free-particle coherent states by a bounded symmetry operator the measure is defined by ordinary functions, and in the case of a semibounded symmetry operator the measure is defined by a generalized function.

## Geometry of 2d spacetime and quantization of particle dynamics

Type: Journal article.

We analyze classical and quantum dynamics of a particle in 2d spacetimes with constant curvature which are locally isometric but globally different. We show that global symmetries of spacetime specify the symmetries of physical phase-space and the corresponding quantum theory. To quantize the systems we parametrize the physical phase-space by canonical coordinates. Canonical quantization leads to unitary irreducible representations of the $SO_\uparrow (2,1)$ group.; Comment: 12 pages, LaTeX2e, submitted for publication

## Two black hole initial data

Leski, Szymon. Type: Journal article.

Misner initial data are a standard example of time-symmetric initial data with two apparent horizons. Compact formulae describing such data are presented in the cases of equal or non-equal masses (i.e.
isometric or non-isometric horizons). The interaction energy in the "Schwarzschild + test particle" limit of the Misner data is analyzed.; Comment: 4 pages, RevTeX4, journal version, a reference added, minor corrections

## Geometry of Schroedinger Space-Times II: Particle and Field Probes of the Causal Structure

Blau, Matthias; Hartong, Jelle; Rollier, Blaise. Type: Journal article.

We continue our study of the global properties of the z=2 Schroedinger space-time. In particular, we provide a codimension 2 isometric embedding which naturally gives rise to the previously introduced global coordinates. Furthermore, we study the causal structure by probing the space-time with point particles as well as with scalar fields. We show that, even though there is no global time function in the technical sense (Schroedinger space-time being non-distinguishing), the time coordinate of the global Schroedinger coordinate system is, in a precise way, the closest one can get to having such a time function. In spite of this and the corresponding strongly Galilean and almost pathological causal structure of this space-time, it is nevertheless possible to define a Hilbert space of normalisable scalar modes with a well-defined time-evolution. We also discuss how the Galilean causal structure is reflected and encoded in the scalar Wightman functions and the bulk-to-bulk propagator.; Comment: 32 pages

## Isometric Entanglement of Particle Positions in Quantum Bound Systems

Ducharme, Robert J. Type: Journal article.

It is shown that the role of a scalar potential in the Schrödinger equation for a steady-state two-particle system is equivalent to an isometric entanglement of the position coordinates of the particles in space and time. The entangled coordinates of each particle are complex quantities related through the entangling transformation to the real positions of both particles.
The transformation takes into account all of the states in the Hilbert space of the composite system. Transforming the Schrödinger equation into these entangled coordinates eliminates the scalar potential.; Comment: 7 pages

## Minimizing properties of critical points of quasi-local energy

Chen, PoNing; Wang, Mu-Tao; Yau, Shing-Tung. Type: Journal article.

In relativity, the energy of a moving particle depends on the observer, and the rest mass is the minimal energy seen among all observers. The Wang-Yau quasi-local mass for a surface in spacetime introduced in [7] and [8] is defined by minimizing the quasi-local energy associated with admissible isometric embeddings of the surface into the Minkowski space. A critical point of the quasi-local energy is an isometric embedding satisfying the Euler-Lagrange equation. In this article, we prove results regarding both local and global minimizing properties of critical points of the Wang-Yau quasi-local energy. In particular, under a condition on the mean curvature vector we show a critical point minimizes the quasi-local energy locally. The same condition also implies that the critical point is globally minimizing among all axially symmetric embeddings provided the image of the associated isometric embedding lies in a totally geodesic Euclidean 3-space.; Comment: Accepted by Comm. Math. Phys

## Effects of Particle sizes, Non-Isometry and Interactions in Compressible Polymer Mixtures

Gujrati, P. D. Type: Journal article.

It is well known that the Schwarzschild solution describes the gravitational field outside a compact spherically symmetric mass distribution in General Relativity. In particular, it describes the gravitational field outside a point particle. Nevertheless, the exact solution of Einstein's equations with a $\delta$-type source corresponding to a point particle is not known.
In the present paper, we prove that the Schwarzschild solution in isotropic coordinates is the asymptotically flat static spherically symmetric solution of Einstein's equations with $\delta$-type energy-momentum tensor corresponding to a point particle. Solution of Einstein's equations is understood in the generalized sense after integration with a test function. Metric components are locally integrable functions for which nonlinear Einstein's equations are mathematically defined. The Schwarzschild solution in isotropic coordinates is locally isometric to the Schwarzschild solution in Schwarzschild coordinates but differs essentially globally. It is topologically trivial neglecting the world line of a point particle. Gravity attraction at large distances is replaced by repulsion at the particle neighbourhood.; Comment: 15 pages, references added, 1 figure
https://worldbuilding.stackexchange.com/questions/72121/virtual-particles-as-propellant
Virtual particles as propellant

I've read that even in a "perfect" vacuum, matter exists in the form of virtual particles that spontaneously pop into existence only to be immediately annihilated. This suggests to me that it might be possible to use these particles during their short existence as a propellant by accelerating them before they annihilate. Problem is, this seems to violate the conservation of momentum, because the virtual particles would immediately cease to exist after their mass is accelerated to create momentum. My question is this: what would be necessary in order to get around the conservation of momentum issue here?

This question asks for hard science. All answers to this question should be backed up by equations, empirical evidence, scientific papers, other citations, etc. Answers that do not satisfy this requirement might be removed. See the tag description for more information.

• – Joe Bloggs Feb 24 '17 at 9:04

My question is this: what would be necessary in order to get around the conservation of momentum issue here?

There are two options.

Photon Thruster

This is the more conventional of the two options, and it ignores virtual particles. E=mc^2 is often mistakenly described as the energy you get by converting mass into energy. This isn't really what it says. It says that energy and mass are equivalent. That's why it's known as the mass-energy equivalence. Even E=mc^2 is only the special case for mass at rest, but there's no need to get into that. You can rearrange the equation to read m = E/c^2. This means that energy, for the purposes of calculating momentum, has mass. If you fire enough high-energy photons out the back of a spacecraft, you will get thrust. This is known as a photon thruster or photon rocket. Photon thrusters are extremely inefficient. Because c^2 is very large, about 9e16 m^2/s^2, the thrust is very, very, very low. 1 GJ, roughly the energy in a lightning bolt, has a rest mass of about 10 micrograms.
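The arithmetic above is easy to check numerically. The following Python sketch (illustrative, not part of the answer) applies m = E/c^2 and the ideal photon-rocket relation F = P/c:

```python
# Mass-energy equivalence and ideal photon-rocket thrust.
# Illustrative only; the values match the figures quoted in the answer.
C = 299_792_458.0  # speed of light, m/s

def energy_to_mass(e_joules):
    # m = E / c^2
    return e_joules / C**2

def photon_thrust(power_watts):
    # An ideal photon rocket radiating power P produces thrust F = P / c.
    return power_watts / C

lightning_bolt = 1e9  # 1 GJ
print(energy_to_mass(lightning_bolt) * 1e9)  # mass in micrograms: ~11.1
print(photon_thrust(1_000) * 1e6)            # thrust per kW in micronewtons: ~3.34
```

This reproduces both the "about 10 micrograms" figure and the ~3.33 µN/kW photon-rocket thrust-to-power level quoted later in the same answer.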
You're better off using that energy to fire out a small amount of mass at very high speed; then you have an ion thruster, one of the most efficient thrusters currently available. But as efficient as they are, even ion thrusters will run out of mass. So long as there's energy available, like from solar panels, you can get thrust out of a photon thruster. Over a long enough time it can add up to something noticeable. Photons from waste heat bouncing off the inside of the craft in an unbalanced fashion are what caused the Pioneer Anomaly.

RF resonant cavity thruster aka EmDrive

The infamous EmDrive being tested at NASA is supposed to work with no reaction mass and a higher efficiency than a photon thruster. They did measure a thrust in a vacuum and at different orientations, ruling out a lot of errors. It did pass peer review: Measurement of Impulsive Thrust from a Closed Radio-Frequency Cavity in Vacuum. The hype on this got out of control, so let's rein it in. No, NASA did not prove the EmDrive works, but they did eliminate a lot of potential conventional explanations. The paper claims to have observed a thrust of about 1.2 mN/kW in a vacuum (which is very, very low), at different orientations, and at different power settings. They don't speculate on why, but they do note it's about two orders of magnitude over what they'd expect from a photon thruster.

The 1.2 mN/kW performance parameter is over two orders of magnitude higher than other forms of "zero-propellant" propulsion, such as light sails, laser propulsion, and photon rockets having thrust-to-power levels in the 3.33–6.67 μN/kW (or 0.0033–0.0067 mN/kW) range.

There's any number of non-crackpot conventional and unconventional explanations, including pushing off of "quantum vacuum virtual plasma", but other physicists say that's not possible. Most physicists are EXTREMELY skeptical that this actually works.
Because the thrusts involved are so low, because existing quantum theory is so successful, and because the error bars on this experiment are so high, their money is on experimental error. Here's a bunch of physicists and rocket scientists on the EmDrive. We won't really know until it flies in space, and even then I expect physicists to remain very skeptical and call for more experiments to eliminate all error. But hey, probably good enough for a sci-fi story!

• I was getting all ready to make nasty comments about using the EmDrive in a hard science article till I read the last line. Patience is, as always, a virtue. +1 – kingledion Mar 3 '17 at 18:01
• @kingledion Yeah. That it's passed peer review by NASA in a vacuum chamber elevates it well above the usual quackery, though the headlines have blown the findings WAAAAY out of proportion. I'll link to the actual published paper. – Schwern Mar 3 '17 at 18:06

You don't need mass to have momentum

Photons are massless particles that have momentum. As @b.Lorenz points out in the comments, they do have a relativistic mass, since they have energy and by relativity energy is equivalent to mass. But they do not have a classical mass. The particles that you are talking about popping into existence seem like the electron-positron pairs that form from high-energy gamma photons. Since photons are required to produce this pair, any momentum you could get from them would also be in the photons that formed them (due to conservation of momentum). So, if you want to use a 'reaction mass' that you don't have to carry with you, then you want to use the photons themselves as the 'reaction mass' for your thruster. I talk about the various aspects of a photonic thruster in this post, in an equation-oriented way. The advantage of the photons is that the relativistic mass/energy of the photons that is expended in firing them is the same as the relativistic mass/energy needed to impart the photons their momentum in the first place.
That means, if you are using a fusion reactor, the mass of the waste-product particles involved in fusion will be lower than the mass of the fuel for the fusion by the same amount as the mass/energy transferred to the photons (minus efficiency losses).

• Not that I assume this is the same thing, but would it be similar to a Dyson bladeless fan? It seems to follow a similar concept that pushes air without having a blade or mass to do so, but I am probably totally off base, and that is why it is a comment XD – ggiaquin16 Feb 23 '17 at 21:02
• Yeah, the Dyson fan would be a very simple way of describing this, only instead of operating on air, it would operate on virtual particles. – Adam Feb 23 '17 at 21:47
• Your statement about photons having momentum but not having mass is only partially correct. Their rest mass is 0, but since they carry energy, they have relativistic mass too. So if the spaceship uses its onboard energy source (fission, fusion, or antimatter reactor, simple fire, or anything you want) to emit photons, its mass will decrease by h*f/c**2 kg for every photon. – b.Lorenz Feb 24 '17 at 20:04
• @b.Lorenz You are certainly right. From the prospective of the perspective (punny?) interstellar traveler, this is a good thing, however: you still don't need to effectively bring any fuel mass with you. The same mass that you use to generate the energy is your reaction mass as well. – kingledion Feb 24 '17 at 20:11

I think you got a wrong grasp of the theory: in vacuum, pairs of particle and antiparticle are created and immediately annihilated. Momentum is conserved here.
The only known (but not yet proven) mechanism that materializes these virtual particles is the event horizon of a black hole: any pair which materializes on the horizon is separated by the gravitational pull. The half within the horizon falls into the black hole, while the half outside the horizon can escape the black hole as a real particle. This is called Hawking radiation. Therefore it looks pretty impractical to have a black hole in a starship to propel it.

• There's also the Casimir effect, but this isn't of much help either – Mithrandir24601 Feb 25 '17 at 9:41
• In Star Trek didn't the Romulans have singularities propelling their ships? – Willk Feb 26 '17 at 0:56
• @Will, I think the tag hard-science rules out the Star Trek thingy – L.Dutch Feb 27 '17 at 6:18

Apply a mechanism that induces a physical effect on the vacuum, causing sufficient numbers of the virtual particles to emerge moving in the same direction. Assume the virtual particles have a component of motion. If the particles are charged and move through a chamber as a collimated flux, an electromagnetic field can be applied to decelerate them. Basically it is a gigantic particle accelerator that, instead of accelerating the virtual particles, "steals" momentum from them. The process of polarizing the emergence of virtual particles from the quantum vacuum is a purely hypothetical concept, hypothetical to the point of calling it a fiction, as there is no known physical mechanism to justify or explain it. But it could pass muster in science fiction.
The scale and power of the linear particle accelerator required to transfer momentum from the polarized virtual-particle flux is uncertain, but it is presumably many orders of magnitude beyond anything that human technology can achieve in the foreseeable future. The quantum vacuum is effectively a sea of virtual particles in all possible momentum states. Any propulsive system utilizing them would have to work by inducing an excess of virtual particles to emerge with a momentum spectrum capable of acting as a propulsion system. John Baez's FAQ on virtual particles deals with both momentum and energy conservation of virtual particles. This is explained in terms of quantum mechanical perturbation theory.

Some descriptions of this phenomenon instead say that the energy of the system becomes uncertain for a short period of time, that energy is somehow "borrowed" for a brief interval. This is just another way of talking about the same mathematics. However, it obscures the fact that all this talk of virtual states is just an approximation to quantum mechanics, in which energy is conserved at all times. The way I've described it also corresponds to the usual way of talking about Feynman diagrams, in which energy is conserved, but virtual particles can carry amounts of energy not normally allowed by the laws of motion.

This is explained here in clearer terms:

The concept of virtual particles arises in the perturbation theory of quantum field theory, an approximation scheme in which interactions (in essence, forces) between actual particles are calculated in terms of exchanges of virtual particles. Such calculations are often performed using schematic representations known as Feynman diagrams, in which virtual particles appear as internal lines.
By expressing the interaction in terms of the exchange of a virtual particle with four-momentum q, where q is given by the difference between the four-momenta of the particles entering and leaving the interaction vertex, both momentum and energy are conserved at the interaction vertices of the Feynman diagram.

It is worth considering virtual particles in relation to momentum and energy conservation. However, quantum mechanics has definitively answered this: the conservation laws apply and aren't a problem.

Note, for those who care about this sort of thing: the hypothetical virtual-particle propulsion system proposed above is undoubtedly a form of Maxwell's demon and would have all the usual problems associated with it.

References:

Froning, H. D., Jr. (1980) "Propulsion Requirements for a Quantum Interstellar Ramjet", Journal of the British Interplanetary Society, Vol. 33, No. 7, pp. 265-270.
Froning, H. D., Jr. (1985) "Use of Vacuum Energies for Interstellar Flight", MDC paper H1496, 36th Congress of the International Astronautical Federation, October, Stockholm, Sweden.
Froning, H. D., Jr. (2003) "Investigation of a 'Quantum Ramjet for Interstellar Flight'", MDAC paper G7887, AIAA/SAE/ASME 17th Joint Propulsion Conference, Colorado Springs, July, CO.
Froning, H. D., Jr. and Roach, R. L. (2002) "Preliminary Simulations of Vehicle Interactions with the Quantum Vacuum by Fluid Dynamic Approximations", paper AIAA-2002-3925, American Institute of Aeronautics and Astronautics, Washington, DC.
Froning, H. D., Jr., Barrett, Terence W., and Hathaway, George (1998) "Experiments Involving Specially Conditioned EM Radiation, Gravitation and Matter", paper AIAA-98-3138, American Institute of Aeronautics and Astronautics, Washington, DC.
Minami, Y. (2008) "Preliminary Theoretical Considerations for Getting Thrust via Squeezed Vacuum", Journal of the British Interplanetary Society, Vol. 33, No. 7, pp. 315-321.
Fiction where the concept has been used:

Charles Sheffield, The McAndrew Chronicles (New York: Tor Books, 1983)
Arthur C. Clarke, The Songs of Distant Earth (New York: Ballantine/Del Rey, 1986)

Perhaps you could attempt using "squeezed light", so you could use destructive quantum interference to cancel out the virtual particles in certain areas, thereby creating areas of "low pressure". This so-called "pressure difference" in the energy being exerted by virtual particles in certain areas could be used to create a propulsion-like effect.

• Can you give more details or a reference on how to squeeze light? – L.Dutch Mar 3 '17 at 8:28
• You need to note the rules for hard-science answers. For a normal answer, this does not address conservation of momentum. You can't make a force that violates Newton's 3rd Law. And light consists of "real" particles, not virtual ones: electromagnetic forces invoke virtual particles. – JDługosz Mar 4 '17 at 20:09
• But, welcome to Worldbuilding! Fill in your profile, stick around; browse the existing bank of questions and answers, and please do contribute again. – JDługosz Mar 4 '17 at 20:11
https://projecteuclid.org/euclid.jam/1394807687
## Journal of Applied Mathematics

J. Appl. Math., Volume 2013, Special Issue (2013), Article ID 821737, 7 pages.

### Approximation Analysis for a Common Fixed Point of Finite Family of Mappings Which Are Asymptotically $k$-Strict Pseudocontractive in the Intermediate Sense

#### Abstract

We introduce an iterative process which converges strongly to a common fixed point of a finite family of uniformly continuous asymptotically ${k}_{i}$-strict pseudocontractive mappings in the intermediate sense for $i=1,2,\dots ,N$. The projection of ${x}_{0}$ onto the intersection of closed convex sets ${C}_{n}$ and ${Q}_{n}$ for each $n\ge 1$ is not required. Moreover, the restriction that the interior of common fixed points is nonempty is not required. Our theorems improve and unify most of the results that have been proved for this important class of nonlinear mappings.

#### Article information

Source: J. Appl. Math., Volume 2013, Special Issue (2013), Article ID 821737, 7 pages.
Dates: First available in Project Euclid: 14 March 2014
Digital Object Identifier: doi:10.1155/2013/821737
Mathematical Reviews number (MathSciNet): MR3064879
Zentralblatt MATH identifier: 1266.47100

#### Citation

Zegeye, H.; Shahzad, N. "Approximation Analysis for a Common Fixed Point of Finite Family of Mappings Which Are Asymptotically $k$-Strict Pseudocontractive in the Intermediate Sense." J. Appl. Math. 2013, Special Issue (2013), Article ID 821737, 7 pages. doi:10.1155/2013/821737.

#### References

• Q. H. Liu, “Convergence theorems of the sequence of iterates for asymptotically demicontractive and hemi-contractive mappings,” Nonlinear Analysis, vol. 26, pp. 1838–1842, 1996.
• Y. X. Tian, S.-S. Chang, J. Huang, X. Wang, and J. K. Kim, “Implicit iteration process for common fixed points of strictly asymptotically pseudocontractive mappings in Banach spaces,” Fixed Point Theory and Applications, vol.
2008, Article ID 324575, 12 pages, 2008. • T.-H. Kim and H.-K. Xu, “Convergence of the modified Mann's iteration method for asymptotically strict pseudo-contractions,” Nonlinear Analysis, vol. 68, no. 9, pp. 2828–2836, 2008. • D. R. Sahu, H.-K. Xu, and J.-C. Yao, “Asymptotically strict pseudocontractive mappings in the intermediate sense,” Nonlinear Analysis, vol. 70, no. 10, pp. 3502–3511, 2009. • C. S. Hu and G. Cai, “Convergence theorems for equilibrium problems and fixed point problems of a finite family of asymptotically $k$-strictly pseudocontractive mappings in the intermediate sense,” Computers & Mathematics with Applications, vol. 61, no. 1, pp. 79–93, 2011. • H. Zegeye, M. Robdera, and B. Choudhary, “Convergence theorems for asymptotically pseudocontractive mappings in the intermediate sense,” Computers & Mathematics with Applications, vol. 62, no. 1, pp. 326–332, 2011. • H. Zegeye and N. Shahzad, “Convergence of Manns type iteration method for generalized asymptotically nonexpansive mappings,” Computers and Mathematics With Applications, vol. 62, no. 11, pp. 4007–4014, 2011. • P.-E. Maingé, “Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization,” Set-Valued Analysis, vol. 16, no. 7-8, pp. 899–912, 2008. • J. G. O'Hara, P. Pillay, and H.-K. Xu, “Iterative approaches to convex feasibility problems in Banach spaces,” Nonlinear Analysis, vol. 64, no. 9, pp. 2022–2042, 2006. • W. Takahashi, Nonlinear Functional Analysis-Fixed Point Theory and Applications, Yokohama Publishers, Yokohama, Japan, 2000. • T.-H. Kim and H.-K. Xu, “Strong convergence of modified Mann iterations for asymptotically nonexpansive mappings and semigroups,” Nonlinear Analysis, vol. 64, no. 5, pp. 1140–1152, 2006.
https://matplotlib.org/3.2.2/api/image_api.html
# matplotlib.image

class matplotlib.image.AxesImage(ax, cmap=None, norm=None, interpolation=None, origin=None, extent=None, filternorm=1, filterrad=4.0, resample=False, **kwargs)[source]

Bases: matplotlib.image._ImageBase

Parameters:

ax : Axes
    The axes the image will belong to.
cmap : str or Colormap, default: rcParams["image.cmap"] (default: 'viridis')
    The Colormap instance or registered colormap name used to map scalar data to colors.
norm : Normalize
    Maps luminance to 0-1.
interpolation : str, default: rcParams["image.interpolation"] (default: 'antialiased')
    Supported values are 'none', 'antialiased', 'nearest', 'bilinear', 'bicubic', 'spline16', 'spline36', 'hanning', 'hamming', 'hermite', 'kaiser', 'quadric', 'catrom', 'gaussian', 'bessel', 'mitchell', 'sinc', 'lanczos'.
origin : {'upper', 'lower'}, default: rcParams["image.origin"] (default: 'upper')
    Place the [0, 0] index of the array in the upper left or lower left corner of the axes. The convention 'upper' is typically used for matrices and images.
extent : tuple, optional
    The data axes (left, right, bottom, top) for making image plots registered with data plots. Default is to label the pixel centers with the zero-based row and column indices.
filternorm : bool, default: True
    A parameter for the antigrain image resize filter (see the antigrain documentation). If filternorm is set, the filter normalizes integer values and corrects the rounding errors. It doesn't do anything with the source floating point values; it corrects only integers according to the rule of 1.0, which means that any sum of pixel weights must be equal to 1.0. So, the filter function must produce a graph of the proper shape.
filterrad : float > 0, default: 4
    The filter radius for filters that have a radius parameter, i.e. when interpolation is one of: 'sinc', 'lanczos' or 'blackman'.
resample : bool, default: False
    When True, use a full resampling method. When False, only resample when the output image is larger than the input image.
**kwargs : Artist properties

Parameters:

norm
    The normalizing object which scales data, typically into the interval [0, 1]. If None, norm defaults to a colors.Normalize object which initializes its scaling based on the first data processed.
cmap : str or Colormap instance
    The colormap used to map normalized data values to RGBA colors.

format_cursor_data(self, data)[source]
    Return a string representation of data.
    Note: This method is intended to be overridden by artist subclasses. As an end-user of Matplotlib you will most likely not call this method yourself. The default implementation converts ints and floats and arrays of ints and floats into a comma-separated string enclosed in square brackets.

get_cursor_data(self, event)[source]
    Return the image value at the event position or None if the event is outside the image.

get_extent(self)[source]
    Return the image extent as tuple (left, right, bottom, top).

get_window_extent(self, renderer=None)[source]
    Get the axes bounding box in display space. The bounding box's width and height are nonnegative. Subclasses should override for inclusion in the bounding box "tight" calculation. Default is to return an empty bounding box at 0, 0. Be careful when using this function; the results will not update if the window extent of the artist changes. The extent can change due to any changes in the transform stack, such as changing the axes limits, the figure size, or the canvas used (as is done when saving a figure). This can lead to unexpected behavior where interactive figures will look fine on the screen, but will save incorrectly.

make_image(self, renderer, magnification=1.0, unsampled=False)[source]
    Normalize, rescale, and colormap this image's data for rendering using renderer, with the given magnification. If unsampled is True, the image will not be scaled, but an appropriate affine transformation will be returned instead.
    Returns:
    image : (M, N, 4) uint8 array
        The RGBA image, resampled unless unsampled is True.
x, yfloatThe upper left corner where the image should be drawn, in pixel space. transAffine2DThe affine transformation from image to pixel space. set_extent(self, extent)[source] Set the image extent. Parameters: extent4-tuple of floatThe position and size of the image as tuple (left, right, bottom, top) in data coordinates. Notes This updates ax.dataLim, and, if autoscaling, sets ax.viewLim to tightly fit the image, regardless of dataLim. Autoscaling state is not changed, so following this with ax.autoscale_view() will redo the autoscaling in accord with dataLim. class matplotlib.image.BboxImage(bbox, cmap=None, norm=None, interpolation=None, origin=None, filternorm=1, filterrad=4.0, resample=False, interp_at_native=<deprecated parameter>, **kwargs)[source] Bases: matplotlib.image._ImageBase The Image class whose size is determined by the given bbox. cmap is a colors.Colormap instance norm is a colors.Normalize instance to map luminance to 0-1 kwargs are an optional list of Artist keyword args contains(self, mouseevent)[source] Test whether the mouse event occurred within the image. get_transform(self)[source] Return the Transform instance used by this artist. get_window_extent(self, renderer=None)[source] Get the axes bounding box in display space. The bounding box' width and height are nonnegative. Subclasses should override for inclusion in the bounding box "tight" calculation. Default is to return an empty bounding box at 0, 0. Be careful when using this function, the results will not update if the artist window extent of the artist changes. The extent can change due to any changes in the transform stack, such as changing the axes limits, the figure size, or the canvas used (as is done when saving a figure). This can lead to unexpected behavior where interactive figures will look fine on the screen, but will save incorrectly. 
property interp_at_native make_image(self, renderer, magnification=1.0, unsampled=False)[source] Normalize, rescale, and colormap this image's data for rendering using renderer, with the given magnification. If unsampled is True, the image will not be scaled, but an appropriate affine transformation will be returned instead. Returns: image(M, N, 4) uint8 arrayThe RGBA image, resampled unless unsampled is True. x, yfloatThe upper left corner where the image should be drawn, in pixel space. transAffine2DThe affine transformation from image to pixel space. class matplotlib.image.FigureImage(fig, cmap=None, norm=None, offsetx=0, offsety=0, origin=None, **kwargs)[source] Bases: matplotlib.image._ImageBase cmap is a colors.Colormap instance norm is a colors.Normalize instance to map luminance to 0-1 kwargs are an optional list of Artist keyword args get_extent(self)[source] Return the image extent as tuple (left, right, bottom, top). make_image(self, renderer, magnification=1.0, unsampled=False)[source] Normalize, rescale, and colormap this image's data for rendering using renderer, with the given magnification. If unsampled is True, the image will not be scaled, but an appropriate affine transformation will be returned instead. Returns: image(M, N, 4) uint8 arrayThe RGBA image, resampled unless unsampled is True. x, yfloatThe upper left corner where the image should be drawn, in pixel space. transAffine2DThe affine transformation from image to pixel space. set_data(self, A)[source] Set the image array. zorder = 0 class matplotlib.image.NonUniformImage(ax, *, interpolation='nearest', **kwargs)[source] Parameters: interpolation{'nearest', 'bilinear'} **kwargsAll other keyword arguments are identical to those of AxesImage. get_extent(self)[source] Return the image extent as tuple (left, right, bottom, top). 
make_image(self, renderer, magnification=1.0, unsampled=False)[source] Normalize, rescale, and colormap this image's data for rendering using renderer, with the given magnification. If unsampled is True, the image will not be scaled, but an appropriate affine transformation will be returned instead. Returns: image(M, N, 4) uint8 arrayThe RGBA image, resampled unless unsampled is True. x, yfloatThe upper left corner where the image should be drawn, in pixel space. transAffine2DThe affine transformation from image to pixel space. set_array(self, *args)[source] Retained for backwards compatibility - use set_data instead. Parameters: Aarray-like set_cmap(self, cmap)[source] set the colormap for luminance data Parameters: cmapcolormap or registered colormap name set_data(self, x, y, A)[source] Set the grid for the pixel centers, and the pixel values. Parameters: x, y1D array-likesMonotonic arrays of shapes (N,) and (M,), respectively, specifying pixel centers. Aarray-like(M, N) ndarray or masked array of values to be colormapped, or (M, N, 3) RGB array, or (M, N, 4) RGBA array. set_filternorm(self, s)[source] Set whether the resize filter normalizes the weights. See help for imshow. Parameters: filternormbool set_filterrad(self, s)[source] Set the resize filter radius only applicable to some interpolation schemes -- see help for imshow set_interpolation(self, s)[source] Parameters: sstr, NoneEither 'nearest', 'bilinear', or None. set_norm(self, norm)[source] Set the normalization instance. Parameters: normNormalize Notes If there are any colorbars using the mappable for this norm, setting the norm of the mappable will reset the norm, locator, and formatters on the colorbar to default. class matplotlib.image.PcolorImage(ax, x=None, y=None, A=None, cmap=None, norm=None, **kwargs)[source] Make a pcolor-style plot with an irregular rectangular grid. This uses a variation of the original irregular image code, and it is used by pcolorfast for the corresponding grid type. 
cmap defaults to its rc setting cmap is a colors.Colormap instance norm is a colors.Normalize instance to map luminance to 0-1 get_cursor_data(self, event)[source] Return the image value at the event position or None if the event is outside the image. make_image(self, renderer, magnification=1.0, unsampled=False)[source] Normalize, rescale, and colormap this image's data for rendering using renderer, with the given magnification. If unsampled is True, the image will not be scaled, but an appropriate affine transformation will be returned instead. Returns: image(M, N, 4) uint8 arrayThe RGBA image, resampled unless unsampled is True. x, yfloatThe upper left corner where the image should be drawn, in pixel space. transAffine2DThe affine transformation from image to pixel space. set_array(self, *args)[source] Retained for backwards compatibility - use set_data instead. Parameters: Aarray-like set_data(self, x, y, A)[source] Set the grid for the rectangle boundaries, and the data values. Parameters: x, y1D array-likes or NoneMonotonic arrays of shapes (N + 1,) and (M + 1,), respectively, specifying rectangle boundaries. If None, will default to range(N + 1) and range(M + 1), respectively. Aarray-like(M, N) ndarray or masked array of values to be colormapped, or (M, N, 3) RGB array, or (M, N, 4) RGBA array. matplotlib.image.composite_images(images, renderer, magnification=1.0)[source] Composite a number of RGBA images into one. The images are composited in the order in which they appear in the images list. Parameters: imageslist of ImagesEach must have a make_image method. For each image, can_composite should return True, though this is not enforced by this function. Each image must have a purely affine transformation with no shear. rendererRendererBase instance magnificationfloatThe additional magnification to apply for the renderer in use. tupleimage, offset_x, offset_yReturns the tuple: image: A numpy array of the same type as the input images. 
            offset_x, offset_y: The offset of the image (left, bottom) in the output figure.

matplotlib.image.imread(fname, format=None)[source]
    Read an image from a file into an array.
    Parameters:
        fname : str or file-like
            The image file to read: a filename, a URL or a file-like object opened in read-binary mode.
        format : str, optional
            The image file format assumed for reading the data. If not given, the format is deduced from the filename. If nothing can be deduced, PNG is tried.
    Returns:
        imagedata : numpy.array
            The image data. The returned array has shape (M, N) for grayscale images, (M, N, 3) for RGB images, and (M, N, 4) for RGBA images.
    Notes
        Matplotlib can only read PNGs natively. Further image formats are supported via the optional dependency on Pillow. Note, URL strings are not compatible with Pillow. Check the Pillow documentation for more information.

matplotlib.image.imsave(fname, arr, vmin=None, vmax=None, cmap=None, format=None, origin=None, dpi=100, *, metadata=None, pil_kwargs=None)[source]
    Save an array as an image file.
    Parameters:
        fname : str or PathLike or file-like
            A path or a file-like object to store the image in. If format is not set, then the output format is inferred from the extension of fname, if any, and from rcParams["savefig.format"] (default: 'png') otherwise. If format is set, it determines the output format.
        arr : array-like
            The image data. The shape can be one of MxN (luminance), MxNx3 (RGB) or MxNx4 (RGBA).
        vmin, vmax : scalar, optional
            vmin and vmax set the color scaling for the image by fixing the values that map to the colormap color limits. If either vmin or vmax is None, that limit is determined from the arr min/max value.
        cmap : str or Colormap, optional
            A Colormap instance or registered colormap name. The colormap maps scalar data to colors. It is ignored for RGB(A) data. Defaults to rcParams["image.cmap"] (default: 'viridis').
        format : str, optional
            The file format, e.g. 'png', 'pdf', 'svg', ... The behavior when this is unset is documented under fname.
        origin : {'upper', 'lower'}, optional
            Indicates whether the (0, 0) index of the array is in the upper left or lower left corner of the axes. Defaults to rcParams["image.origin"] (default: 'upper').
        dpi : int
            The DPI to store in the metadata of the file. This does not affect the resolution of the output image.
        metadata : dict, optional
            Metadata in the image file. The supported keys depend on the output format; see the documentation of the respective backends for more information.
        pil_kwargs : dict, optional
            If set to a non-None value, always use Pillow to save the figure (regardless of the output format), and pass these keyword arguments to PIL.Image.save. If the 'pnginfo' key is present, it completely overrides metadata, including the default 'Software' key.

matplotlib.image.pil_to_array(pilImage)[source]
    Load a PIL image and return it as a numpy array.
    Returns:
        numpy.array
            The array shape depends on the image type: (M, N) for grayscale images, (M, N, 3) for RGB images, and (M, N, 4) for RGBA images.

matplotlib.image.thumbnail(infile, thumbfile, scale=0.1, interpolation='bilinear', preview=False)[source]
    Make a thumbnail of the image in infile with output filename thumbfile. See Image Thumbnail.
    Parameters:
        infile : str or file-like
            The image file -- must be PNG, or Pillow-readable if you have Pillow installed.
        thumbfile : str or file-like
            The thumbnail filename.
        scale : float, optional
            The scale factor for the thumbnail.
        interpolation : str, optional
            The interpolation scheme used in the resampling. See the interpolation parameter of imshow for possible values.
        preview : bool, optional
            If True, the default backend (presumably a user interface backend) will be used which will cause a figure to be raised if show is called. If it is False, the figure is created using FigureCanvasBase and the drawing backend is selected as savefig would normally do.
    Returns:
        figure : Figure
            The figure instance containing the thumbnail.
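The imread/imsave pair documented above can be exercised with a short round trip. A minimal sketch, assuming matplotlib (with Pillow available for writing) is installed; the file name and array values are arbitrary:

```python
import os
import tempfile

import matplotlib
matplotlib.use("Agg")  # headless backend; no display required
import matplotlib.image as mimage
import numpy as np

# Build a small (M, N, 3) RGB array: a red ramp across columns,
# a green ramp down rows.
arr = np.zeros((4, 5, 3))
arr[..., 0] = np.linspace(0, 1, 5)
arr[..., 1] = np.linspace(0, 1, 4)[:, None]

path = os.path.join(tempfile.mkdtemp(), "ramp.png")
mimage.imsave(path, arr)    # format inferred from the .png extension
back = mimage.imread(path)  # PNG is read as floats in [0, 1]

print(back.shape[:2])  # (4, 5) -- same M x N as the saved array
```

Per the notes above, only PNG is read natively; other formats go through the optional Pillow dependency.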
2021-09-26 04:38:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20528976619243622, "perplexity": 10375.49079785332}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057796.87/warc/CC-MAIN-20210926022920-20210926052920-00337.warc.gz"}
http://www.mathcaptain.com/geometry/rectangular-prism.html
A prism is a polyhedron with two congruent and parallel faces whose lateral faces are parallelograms. Prisms are often distinguished by the shape of their base polygon. A prism with rectangular bases is a rectangular prism. A rectangular prism has two ends and four sides. ## What is a Rectangular Prism A prism with rectangular bases is a rectangular prism: a 3-dimensional solid object with six faces that are rectangles. A rectangular prism has two ends and four sides, and opposite faces have the same area. Rectangular Prism Vertices: A vertex is a point on a 3-D shape where two or more edges meet (the plural of vertex is vertices). A rectangular prism has eight vertices. ## Volume of a Rectangular Prism The volume of a right rectangular prism, $V$, is the product of $A$, the area of the base, and $h$, the height of the prism. The base of the rectangular prism is a rectangle, so the area of the base is the product of length and breadth (width). The formula for the volume of a right rectangular prism is $V$ = $lwh$. Formula for Volume of a Rectangular Prism The formula for the volume of any right prism is $\Rightarrow\ V$ = $Ah$ Where, $A$ = Area of the base and $h$ = Perpendicular height Now if the prism is a rectangular prism, $\Rightarrow$ Area of the base = $lw$ $\Rightarrow\ V$ = $A \times h$ = $lw \times h$ $\Rightarrow\ V$ = $lwh$ The volume of a rectangular prism is found by the formula $V$ = $lwh$. Formula: Volume of a rectangular prism = $lwh$. Where, '$l$', '$w$', '$h$' are the length, width and height of the prism. ## Surface Area of a Rectangular Prism To find the surface area of a rectangular prism, first find the area of each face of the prism. A rectangular prism has six faces, but they form three pairs of equal areas: the top and bottom, the left side and right side, and the front and back. 
Length, width and height are the three dimensions of a rectangular prism used to find its surface area. The surface area of a right rectangular prism is $2(lh + hb + bl)$. Surface Area of Rectangular Prism Formula: The surface area of a rectangular prism is found by adding the areas of all the faces of the prism; opposite faces have the same area. $\Rightarrow$ Surface Area = $2 \times\ Area\ of\ Front\ +\ 2\ \times\ Area\ of\ Side\ +\ 2\ \times\ Area\ of\ Base$ $\Rightarrow\ 2(lh)\ +\ 2(bh)\ +\ 2(bl)$ $\Rightarrow\ 2(lh\ +\ hb\ +\ bl)$ Formula: Surface Area of Rectangular Prism = $2(lh + hb + bl)$. Where, '$l$' is length, '$b$' is breadth and '$h$' is the height of the rectangular prism. ## Lateral Area of a Rectangular Prism The lateral surface area is the combined area of the faces that are not bases. The lateral surface area $L$ of any right rectangular prism is equal to the perimeter of the base times the height of the prism: $\Rightarrow\ L$ = $Ph$ Where, $P$ is the perimeter of a base and $h$ is the height of the prism. The perimeter of the rectangular base is $P$ = $2(l + b)$, where '$l$' and '$b$' are the length and breadth of the prism. Formula: Lateral Surface Area of Rectangular Prism = $Ph$ = $2h(l + b)$. Where, '$l$', '$b$' and '$h$' are the length, breadth and height of the prism. ## Rectangular Prism Examples Below are some examples involving rectangular prisms: Example 1: Find the lateral surface area of a box in the shape of a prism whose bases are rectangles with side lengths $10$ cm and $12$ cm, and whose height is $5$ cm. 
Solution: Given: the dimensions of the rectangular prism are Length $(l)$ = $10$ cm, Breadth $(b)$ = $12$ cm, Height $(h)$ = $5$ cm. Step 1: The perimeter of the base is $P$ = $2(l + b)$ $\Rightarrow\ P$ = $2(10 + 12)$ = $2(22)$ = $44$ $\Rightarrow\ P$ = $44$ cm Step 2: Lateral Surface Area of Rectangular Prism = $Ph$ = $2h(l + b)$ $\Rightarrow\ LSA$ = $44 \times\ 5$ = $220$ Hence the lateral surface area of the box is $220\ cm^{2}$. Example 2: Find the volume of a rectangular solid with length $9$, width $6$, and height $5$. Solution: Given: the dimensions of the rectangular prism are Length $(l)$ = $9$, Width $(w)$ = $6$, Height $(h)$ = $5$. Step 1: Find the area of the base of the prism: Area of the base = $l \times\ w$ $\Rightarrow\ A$ = $9 \times 6$ = $54$ $\Rightarrow\ A$ = $54$ square units Step 2: Volume of a rectangular prism = $Ah$ $\Rightarrow\ V$ = $54 \times 5$ = $270$ Hence the volume of the rectangular prism is $270$ cubic units.
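The three formulas worked through above can be checked with a few lines of code. A minimal sketch (not part of the original lesson); the function names are my own:

```python
# Rectangular prism formulas from this section.
def volume(l, w, h):
    """V = lwh: base area (l*w) times height."""
    return l * w * h

def surface_area(l, b, h):
    """SA = 2(lh + hb + bl): three pairs of equal faces."""
    return 2 * (l * h + h * b + b * l)

def lateral_area(l, b, h):
    """LSA = P*h = 2h(l + b): base perimeter times height."""
    return 2 * h * (l + b)

print(lateral_area(10, 12, 5))  # 220, as in Example 1
print(volume(9, 6, 5))          # 270, as in Example 2
```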
2018-06-19 12:12:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9099582433700562, "perplexity": 416.3929412410715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267862929.10/warc/CC-MAIN-20180619115101-20180619135101-00092.warc.gz"}
http://psychology.wikia.com/wiki/Genotype_frequency?oldid=142799
# Genotype frequency In population genetics, the genotype frequency is the frequency or proportion (i.e. 0 < f < 1) of genotypes in a population. It may be denoted thus: $f(\mathbf{AA})$ Compare allele frequency. The Hardy-Weinberg law predicts genotype frequencies from allele frequencies under certain conditions, in which case: $f(\mathbf{AA}) = p^2$ $f(\mathbf{Aa}) = 2pq$ $f(\mathbf{aa}) = q^2$ Genotype frequencies may be represented by a De Finetti diagram.
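Under the stated Hardy-Weinberg conditions, the three genotype frequencies follow directly from the allele frequencies. A minimal sketch; the value of p is illustrative:

```python
# Hardy-Weinberg genotype frequencies from allele frequencies p and q = 1 - p.
p = 0.7      # frequency of allele A (illustrative value)
q = 1 - p    # frequency of allele a

f_AA = p ** 2
f_Aa = 2 * p * q
f_aa = q ** 2

print(round(f_AA, 2), round(f_Aa, 2), round(f_aa, 2))  # 0.49 0.42 0.09
```

The three frequencies necessarily sum to 1, since $(p + q)^2 = p^2 + 2pq + q^2 = 1$.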
2014-03-12 12:33:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8615707755088806, "perplexity": 9703.134655615975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021763647/warc/CC-MAIN-20140305121603-00044-ip-10-183-142-35.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/169278-find-cartesian-form.html
# Thread: Find the cartesian form 1. ## Find the cartesian form Find the cartesian form of r^2 = sec(2*theta). I know I have to use the conversion formulas x = r*cos(theta) and y = r*sin(theta), but it's not clear to me how. I've been having trouble with this kind of problem and figure I should get help before I run into more trouble down the line. :s Would appreciate help with this. 2. You have an implicit function in the form $f(r, \theta) = r^{2} - \frac{1}{\cos 2 \theta}=0$ that can be 'transformed' into an implicit function in the form $f(x,y)=0$ by setting $r^{2}= x^{2} + y^{2}$ and $\theta = \tan^{-1} \frac{y}{x}$... Kind regards $\chi$ $\sigma$ 3. Originally Posted by mcsquared Find the cartesian form of r^2 = sec(2*theta). I know I have to use the conversion formulas x = r*cos(theta) and y = r*sin(theta), but it's not clear to me how. I've been having trouble with this kind of problem and figure I should get help before I run into more trouble down the line. :s Would appreciate help with this. This is the same as $r^2\cos(2\theta)=1 \iff r^2(2\cos^2(\theta)-1)=1 \iff 2(r\cos(\theta))^2-r^2=1$ Now just use what you wrote above $\displaystyle r^2 = x^2 + y^2$ $\displaystyle \sec{\theta} = \frac{1}{\cos{\theta}}$ $\displaystyle \cos{2\theta} = \cos^2{\theta} - \sin^2{\theta}$ $\displaystyle x = r\cos{\theta} \equiv \cos{\theta} = \frac{x}{r} = \frac{x}{\sqrt{x^2 + y^2}}$ $\displaystyle y = r\sin{\theta} \equiv \sin{\theta} = \frac{y}{r} = \frac{y}{\sqrt{x^2 + y^2}}$.
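Combining the replies, the substitutions reduce r^2 = sec(2*theta) to x^2 - y^2 = 1, since r^2*cos(2*theta) = r^2*(cos^2(theta) - sin^2(theta)) = x^2 - y^2. A quick numeric check of that reduction (my own addition, not part of the thread):

```python
import math

# Sample the polar curve r^2 = sec(2*theta), convert each point to
# cartesian coordinates, and confirm it lands on x^2 - y^2 = 1.
for theta in (0.1, 0.3, 0.6):  # keep cos(2*theta) > 0 so r is real
    r = math.sqrt(1 / math.cos(2 * theta))
    x, y = r * math.cos(theta), r * math.sin(theta)
    print(round(x * x - y * y, 10))  # 1.0 for every sample angle
```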
2017-10-20 22:56:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8244949579238892, "perplexity": 368.22510834333536}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824357.3/warc/CC-MAIN-20171020211313-20171020231313-00066.warc.gz"}
https://www.ncatlab.org/nlab/show/equivariant+rationalization
# nLab equivariant rationalization # Contents ## Definition Let $G$ be a finite group (or more generally a compact Lie group). Say that an equivariantly connected and nilpotent topological G-space $X$ (i.e. all fixed loci $X^H$ are connected and nilpotent, for $H \subset_{clsd} G$, with a common bound on nilpotency as $H$ ranges) is rational if all its equivariant homotopy groups $\pi_n\big( X^H \big)$ (for $n \in \mathbb{N}$ and $H \subset_{clsd} G$) admit the structure of rational vector spaces. Given any topological G-space $X$, a rationalization of $X$ is a morphism (a $G$-equivariant continuous function) $X \overset{ \;\;\; \eta^{\mathbb{Q}}_X \;\;\; }{\longrightarrow} L_{\mathbb{Q}}X$ to a rational $G$-space $L_{\mathbb{Q}}X$ which induces isomorphisms on all rationalized equivariant homotopy groups: $\big( \eta^{\mathbb{Q}}_X(G/H) \big)_\ast \;\colon\; \pi_\bullet( X^H ) \otimes \mathbb{Q} \overset{\simeq}{\longrightarrow} \pi_\bullet \Big( \big( L_{\mathbb{Q}}X \big)^H \Big) \,.$ ## Properties ### Via Elmendorf's theorem In other words, after regarding them, via Elmendorf's theorem, as (∞,1)-presheaves on the orbit category $G Orbits$ of $G$, the equivariant homotopy types of rational $G$-spaces and their rationalizations are equivalently stage-wise over $G/H \in G Orbits$ plain rational spaces and rationalizations, respectively. It follows from the fundamental theorem of dg-algebraic equivariant rational homotopy theory that, at least on equivariantly simply connected topological G-spaces, equivariant rationalization is given by the derived adjunction unit of the equivariant PL de Rham complex Quillen adjunction. ## References • Peter May, Section II.3 in: Equivariant homotopy and cohomology theory, CBMS Regional Conference Series in Mathematics, vol. 91, Published for the Conference Board of the Mathematical Sciences, Washington, DC, 1996. 
With contributions by M. Cole, G. Comezana, S. Costenoble, A. D. Elmendorf, J. P. C. Greenlees, L. G. Lewis, Jr., R. J. Piacenza, G. Triantafillou, and S. Waner. (ISBN: 978-0-8218-0319-6, pdf) • Georgia Triantafillou, Section 2.6 in: Equivariant minimal models, Trans. Amer. Math. Soc. vol. 274, pp. 509-532 (1982) (jstor:1999119) • Laura Scull, p. 11 of: A model category structure for equivariant algebraic models, Transactions of the American Mathematical Society 360 (5), 2505-2525, 2008 (doi:10.1090/S0002-9947-07-04421-2) Last revised on October 4, 2020 at 13:17:52. See the history of this page for a list of all contributions to it.
2022-08-08 16:03:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 20, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7724307179450989, "perplexity": 2141.3569875124645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570868.47/warc/CC-MAIN-20220808152744-20220808182744-00627.warc.gz"}
https://tug.org/pipermail/pdftex/2002-June/002741.html
# [pdftex] textcolor, lineno and pdfTeX Guenter Resch guenterresch at utanet.at Thu Jun 13 19:51:17 CEST 2002 ```> So to use \color along with {lineno}, you will need to restrict your uses > of \textcolor to within single lines only. OK ... even if I understood only half of what you said, this sounds like bad news. Is there a possibility for a new LaTeX command that repeats the \textcolor command, let's say on each letter? That would be a nasty workaround, but the document is a draft only ... > Maybe there is an alternative package that does work better with pdfTeX ? The only alternative *I* know is numline, which comes with the following description: | LaTeX style file for putting line numbers on margins of at least some | documents which will survive such treatment. It works by modifying the LaTeX | output routine, so do not expect that this will work on anything but | simple text! If it does then you are a very lucky person. And I need to say ... they are absolutely right :( numline successfully messed up all documents I tried it on so far (I'm using the document classes from the KOMA script package). Maybe there is an alternative for highlighting text other than \textcolor?
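For what it's worth, the "repeat \textcolor" idea can be sketched as a word-by-word loop, so that each colored group stays inside a single line, which is what lineno needs. This is a hedged sketch, not something from the thread: \colorwords and \cw@loop are made-up names, and it assumes the color (or xcolor) package is loaded.

```latex
% Sketch: color a phrase one word at a time instead of as one \textcolor group.
% \colorwords and \cw@loop are illustrative names, not a real package API.
\makeatletter
\newcommand\colorwords[2]{\cw@loop{#1}#2 \@nil}% note the trailing space
\def\cw@loop#1#2 #3\@nil{%
  \textcolor{#1}{#2} % color one word, then re-emit the interword space
  \ifx\relax#3\relax\else\cw@loop{#1}#3\@nil\fi}% recurse on the remainder
\makeatother

% Usage: \colorwords{red}{this draft passage may wrap across several lines}
```

Coloring per word (rather than per letter) should already be enough, since the quoted restriction only concerns color groups that cross line breaks.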
2021-05-18 19:51:33
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8673461675643921, "perplexity": 3647.862177971659}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991514.63/warc/CC-MAIN-20210518191530-20210518221530-00212.warc.gz"}
https://www.tubesandmore.com/schematics/motorola/51m2u
# Motorola 51M2U Schematic Model: 51M2U Manufacturer: Motorola (Galvin) Description: HS-283 ## Schematics Content: Please note: this content is a computer-generated interpretation of the above schematic and is provided only to help assist you in locating this schematic. For the actual text, please consult the schematic above. Thank you. #### Page 1: PAGE 22-116 MOTOROLA MODELS 51M1U, 51M2U GENERAL INFORMATION TYPE - Three-power (AC/DC, Battery) portable radio receiver. Four miniature type tubes and a selenium rectifier are used in a superheterodyne circuit. Model 51M1U: Green. Model 51M2U: Maroon. TUNING RANGE - 535 to 1620 Kc. IF - 455 Kc. TUBE COMPLEMENT - 1R5 Converter; 1U4 IF Amplifier; 3S4 Power Amplifier; selenium-type rectifier for AC/DC operation. POWER SUPPLY - Operates from 117V AC/DC (15 watts) or from the following batteries: 2 1-1/2V flashlight cells (Eveready); 1 67-1/2V "B" battery (Eveready #467). OPERATING INSTRUCTIONS TO OPEN FRONT COVER. The front cover is opened by pushing upward on the "M" bar located in the center of the cover. The receiver is automatically turned on when the front cover is opened and raised to a vertical position. TO OPEN BACK COVER. The back cover may be opened by gently pulling it at the top. When closing the cover, be careful not to pinch the power line cord or other leads between the cover and the cabinet. 117 VOLT AC OR DC OPERATION. The power cord is located inside the cabinet and may be reached by opening the back cover. Pass the line cord through the slot on the side of the receiver, and plug it into any 117 volt AC or DC power outlet. If the receiver does not operate from DC power, reverse the plug in the power outlet. When operating from AC power, reception may sometimes be improved by reversing the power plug in the outlet. It is not necessary that batteries be installed if the receiver is to be operated only from house power lines. BATTERY OPERATION. Open the back cover and install the batteries, following the instructions on the label inside the back cover (or see Figure 1). Insert the line cord plug into the receptacle on the chassis, or the receiver will not play from batteries. If the receiver is to be operated for a long period of time from 117 volts AC or DC, or is to be placed in storage, remove the batteries and store them in a cool place. IMPORTANT: Never leave low or run-down batteries in the receiver, as they will leak or swell and damage it. TUNING CONTROL. Stations are tuned in with the right-hand knob. The markings around the tuning knob may be read in kilocycles by adding one zero to the figures. VOLUME CONTROL. The left-hand knob controls volume. TO TURN OFF. Closing the front cover will automatically turn off the receiver. ANTENNA. A loop antenna is built into the front cover. Because of the slightly directional characteristics of the loop antenna, reception from some stations may be improved by rotating the entire receiver. In extremely noisy locations, rotate the receiver until minimum noise and maximum signal pickup are obtained. BATTERY REPLACEMENT. If low volume or fuzzy tone is noticed when operating from batteries, replace the flashlight cells. Normally, the 67-1/2V "B" battery will last for 3 or 4 changes of the flashlight cells. The condition of the batteries will not affect the operation of the receiver from 117 volts AC or DC. Complete battery replacement instructions will be found inside the cabinet back cover (or see Figure 1). (C) John F. Rider #### Page 2: PAGE 22-117 MOTOROLA MODELS 51M1U, 51M2U FIGURE 1. BATTERY INSTALLATION. When playing from house current, pass the line cord through the slot in the cabinet. For battery operation, plug the line cord into the receptacle on the chassis or the set will not play. Use one Eveready #467 "B" battery and two 1.5 volt flashlight cells (any size "D" flashlight cell); install the "A" batteries with the spring contacts against the bottom of the batteries. Coil the line cord in the cabinet when not in use. ALIGNMENT NOTE: The receiver may be operated either from batteries or from the commercial power lines during alignment. If AC power is used, it is recommended that an isolation transformer be placed between the power line and the receiver. If an isolation transformer is not available, connect the low side of the signal generator to B- through a .1 mf capacitor. 1. Connect a low range output meter across the speaker voice coil. 2. Connect the low side of the signal generator to B-. 3. Set the signal generator for 400 cycle, 30% modulation. 4. Turn the receiver volume control to maximum. 5. Use a small fibre screwdriver for aligning the IF and diode transformers. 6. As stages are brought into alignment, reduce the signal generator input to keep the output of the receiver at approximately .05 watt (.05 watt = .40 volts on the output meter) to avoid overloading the receiver. 7. See Figure 2 for adjusting locations and the following chart for procedure. ALIGNMENT CHART IF ALIGNMENT - Step 1: dummy antenna .1 mf; generator connected to grid of converter (pin 6, 1R5); generator frequency 455 Kc; gang fully open; adjust 1, 2 & 3 (IF cores) for maximum. RF ALIGNMENT - Step 2: dummy antenna .1 mf; generator connected to grid of converter (pin 6, 1R5); generator frequency 1620 Kc; gang fully open; adjust 4 (Osc) for maximum. Step 3: Install the chassis in the cabinet, leaving the output meter connected to the speaker. Step 4: Radiate the 1400 Kc generator signal to the loop*; tune for maximum; adjust 5 (Ant) for maximum. The trimmer is reached through the hole under the plug button on the side of the cabinet. * Connect the generator output across a 5" diameter, 5 turn loop and couple it inductively to the receiver loop. Keep the loops at least [spacing illegible in OCR] apart. #### Page 3: PAGE 22-118 MOTOROLA SERVICE. The chassis of this receiver is isolated from the AC power line circuit by a capacitor-choke assembly to eliminate the shock hazard when handling the receiver. However, as an additional precaution when aligning or servicing the receiver from AC, an isolation transformer should be inserted between the power line and the chassis. The tubes are exposed when the rear cover is opened. It is not necessary to remove the chassis to replace tubes. To remove the chassis: 1. Pull off the two control knobs on the front of the cabinet. 2. Open the rear cover and remove the batteries. 3. Remove the two Phillips head screws holding the chassis to the cabinet ("A" - "A" in Figure 1). 4. Slide the chassis out of the cabinet. 5. Disconnect the two leads from the chassis to the loop antenna hinges. FIGURE 3. REAR VIEW OF RECEIVER #### Page 4: PAGE 22-119 MOTOROLA MODELS 51M1U, 51M2U FIGURE 4. TOP VIEW OF CHASSIS FIGURE 5. BOTTOM VIEW OF CHASSIS [The remaining text on this page is chassis-layout figure labels that did not survive OCR.] #### Page 5: PAGE 22-120 MOTOROLA MODELS 51M1U, 51M2U [Figure labels only; not recoverable from the OCR text.] #### Page 6: PAGE 22-121 MOTOROLA MODELS 51M1U, 51M2U FIGURE 7. SCHEMATIC DIAGRAM OF CHASSIS USING MULTIPLE CERAMIC CAPACITOR-RESISTOR PLATE Legible schematic notes: trimmer frequency range 535-1620 Kc on gang; all resistances in ohms, K = one thousand (000) ohms; voltage measurements made with [meter type illegible in OCR]. #### Pages 7-8: PAGE 22-122 and PAGE 22-123 MOTOROLA MODELS 51M1U, 51M2U REPLACEMENT PARTS LIST. NOTE: When ordering parts, specify model number of set in addition to part number and description of part. [The itemized parts table - chassis electrical parts (variable gang, resistors, ceramic capacitors, choke, selenium rectifier, antenna loop and cover assemblies, oscillator coil, switches, IF and output transformers), chassis mechanical parts, and cabinet parts for Models 51M1U and 51M2U, including a PM speaker with 3.2 ohm voice coil - is too garbled in this OCR extraction to reproduce reliably.] (C) John F. Rider
2020-07-09 18:34:53
https://mathoverflow.net/questions/319597/whats-equal-the-below-power-nested-radical
# What does the power nested radical below equal? A copy of this question was posted on Math.SE without receiving a convincing answer; I would like to know what MO says about the nested radical below in power form. It is well known (Viète's formula) that $$\frac{2}{\pi}=\sqrt{\frac12}\cdot\sqrt{\frac12+\frac12\sqrt{\frac12}}\cdot\sqrt{\frac12+\frac12\sqrt{\frac12+\frac12\sqrt{\frac12}}}\cdot\sqrt{\frac12+\frac12\sqrt{\frac12\cdots}}\cdots$$ My idea is to ask what the product above becomes if it is instead taken as a power tower, as shown below: $$A=\sqrt{\frac12}^{\sqrt{\frac12+\frac12\sqrt{\frac12}}^{\sqrt{\frac12+\frac12\sqrt{\frac12+\frac12\sqrt{\frac12}}}^{\sqrt{\frac12+\frac12\sqrt{\frac12\cdots}}}}}$$ ? • This is the same question. You define (at least I think you do, you didn't indicate in what order exactly the exponential is supposed to be unwrapped, but I'll read it from left to right) $a_1=1/\sqrt{2}$, and then $a_n=a_{n-1}^{q_n}$, with $q_n$ denoting the $n$th square root in the original expression. So $\log a_n = q_n \log a_{n-1} = q_n \cdots q_2\log a_1$, and now you get your answer from the original result. – Christian Remling Dec 27 '18 at 23:02
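Under the comment's left-to-right reading, $\log A = (q_2 q_3\cdots)\log q_1 = \frac{2/\pi}{q_1}\log q_1$, which evaluates to $A = 2^{-\sqrt{2}/\pi}$. A quick numerical check of both the product and the tower (this sketch is not from the thread; the closed form is derived from the comment's observation):

```python
import math

def viete_terms(n):
    # q_1 = sqrt(1/2), q_{k+1} = sqrt(1/2 + q_k / 2): the successive radicals
    q = math.sqrt(0.5)
    for _ in range(n):
        yield q
        q = math.sqrt(0.5 + 0.5 * q)

terms = list(viete_terms(60))

# Viete's product converges to 2/pi
product = math.prod(terms)

# Left-to-right power tower: a_n = a_{n-1} ** q_n
a = terms[0]
for q in terms[1:]:
    a **= q

# Closed form implied by the comment: A = 2 ** (-sqrt(2)/pi)
closed = 2 ** (-math.sqrt(2) / math.pi)
```

Since $q_n \to 1$ geometrically, 60 terms already give both limits to machine precision.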
2019-11-14 19:39:44
http://math.stackexchange.com/questions/257389/proving-convergence-of-the-expectation
# Proving convergence of the expectation Let $Z$ be a random variable distributed as $N(0,1)$. Find the following limit: $$\lim_{n\to\infty}\mathbb{E}\left[\frac{1}{\sqrt{n}-Z}\right]$$ How does one go about proving it? - The expected value does not exist: the function $\dfrac{f(z)}{\sqrt{n}-z}$ (where $f$ is the probability density function) is not absolutely integrable because of the singularity at $z=\sqrt{n}$. However, the Cauchy principal value of this improper integral does exist. That is, if $I_r(t) = 1$ for $|t|\ge r$ and $0$ for $|t|<r$, then ${\mathbb E}\left[ \dfrac{I_r(\sqrt{n}-Z)}{\sqrt{n}-Z} \right]$ exists, and for the limit of this as $r \to 0$ I get $\sqrt{\dfrac{\pi}{2}} e^{-n/2} \text{erfi}(\sqrt{n/2})$, which goes to $0$ as $n \to \infty$.
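The closed form in the answer can be checked numerically; `scipy.integrate.quad` with `weight='cauchy'` computes Cauchy principal values directly (a sketch, not part of the original post — the helper names are mine):

```python
import numpy as np
from scipy import integrate, special

def pv_expectation(n, cutoff=40.0):
    """Cauchy principal value of E[1/(sqrt(n) - Z)] for Z ~ N(0, 1)."""
    c = np.sqrt(n)
    phi = lambda z: np.exp(-z ** 2 / 2) / np.sqrt(2 * np.pi)
    # quad(..., weight='cauchy', wvar=c) computes the PV of
    # integral of phi(z)/(z - c) dz, so negate to get phi(z)/(c - z)
    val, _ = integrate.quad(phi, -cutoff, cutoff + c,
                            weight='cauchy', wvar=c)
    return -val

def closed_form(n):
    # sqrt(pi/2) * exp(-n/2) * erfi(sqrt(n/2)), as given in the answer
    return np.sqrt(np.pi / 2) * np.exp(-n / 2) * special.erfi(np.sqrt(n / 2))
```

Both agree for moderate $n$, and the closed form decays like $1/\sqrt{n}$, consistent with the limit being $0$.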
2014-10-26 04:27:22
https://math.stackexchange.com/questions/3149702/if-d-divides-2n-and-d-doesnt-divide-n-then-d-is-even
# If $d$ divides $2n$ and $d$ doesn't divide $n$, then $d$ is even I have encountered a proof regarding dihedral groups where this fact is used: If $$d\mid 2n$$ and $$d\nmid n$$, then $$d$$ is even and $${d\over 2}\mid n$$. I can't seem to understand why this is true. If $$d\nmid n$$, then there are $$q,r \in \mathbb{Z}$$ so that $$0 < r < d$$ and $$n = qd + r$$. On the other hand, $$d\mid 2n$$ means that there is $$m \in \mathbb{Z}$$ so that $$2n = md$$. We need to somehow use these two facts. Also, my second question is: how can we naturally generalize this result? • Hint: if $d$ was odd, $m$ would be even. – Wojowu Mar 15 at 19:28 • $2n =md$. And $2$ is prime. So $2|m$ or $2|d$. If $2|m$ then $n = \frac m2 d$ and $d|n$. If $2|d$ then $d$ is even. General: if $d|pn$ for a prime $p$ and $d\nmid n$ then $p|d$. Even more general: if $d|ab$ then $\frac d{\gcd(d,b)}|a$. – fleablood Mar 15 at 21:22 Suppose $$d$$ is odd. Then $$d$$ and $$2$$ are relatively prime and $$d\mid 2n$$, so by Euclid's lemma we have $$d\mid n$$. A contradiction. So $$d$$ must be even. We could have a more general situation. Say $$d\mid pn$$ for some prime $$p$$ and $$d$$ doesn't divide $$n$$. Then $$p\mid d$$. The proof goes exactly the same as for $$p=2$$. It's also possible to see what's going on simply by keeping track of factors of $$2$$; note that this kind of analysis could also be generalized to other prime factors, as has been illustrated in previously posted answers. Let $$n:= 2^aQ,\ 2n=2^{a+1}Q,\ d:=2^bP$$, where $$Q$$ represents the product of all of the odd prime factors of $$n$$ and $$P$$ represents the product of all of the odd prime factors of $$d$$. We could write out all of those factors and explicitly show this, but it should be plain that $$d\mid 2n \Rightarrow P\mid Q$$. Note that thus far the exponents $$a,b$$ might be $$0$$, so we have not assumed that $$d$$ is even.
$$d\mid 2n \Rightarrow b\le a+1$$ $$d\nmid n \Rightarrow b>a$$ Together, these establish $$b=a+1$$, meaning that $$d$$ has at least one factor of $$2$$ and is even, even if $$a=0$$. This also illustrates the second point: the exponent of $$2$$ in $$d\over 2$$ is simply $$b-1=a$$. Hence $$\frac{d}{2} \mid n$$. If you know that the remainder $$r$$ in $$n=qd+r$$ with $$0\lt r\lt d$$ is unique, then from $$2n=md$$ we get $$n=2n-n=md-(qd+r)=(m-q)d-r=(m-q-1)d+(d-r)=q'd+r'$$ with $$0\lt r'=d-r\lt d$$, so that, by uniqueness of the remainder, we have $$r'=r$$, i.e. $$d-r=r$$, hence $$d=2r$$. If $$d\mid 2n$$ then $$2n=kd$$. Since $$kd$$ is even, $$2\mid kd$$. Euclid's lemma: 1) $$2\mid k$$ or 2) $$2\mid d$$. 1) If $$2\mid k$$ then $$k=2k'$$, so $$2n=2k'd$$; $$n=k'd$$, i.e., $$d\mid n$$, a contradiction. 2) Hence $$2\mid d$$, and we are done. By below $$\ d\mid pn\iff\!\!\!\! \overbrace{d\mid n}^{\large\color{#0a0}{(d,p)}\ =\ \color{#c00}1}\!\!$$ or $$\ \overbrace{{d/\color{#c00}p}\mid n}^{\large\color{#0a0}{ (d,p)}\ =\ \color{#c00} p}\!,\$$ by $$\ \color{#0a0}{(d,p)}\mid \color{#c00}p\,$$ prime. Lemma $$\,\ d\mid an\iff\smash[t]{\overbrace{ d/\color{#0a0}{(d,a)}\,\mid\, n,\,}\ }$$ where $$\,\ (x,y) := \gcd(x,y)$$ Proof $$\quad\ d\mid an\iff d\mid dn,an\iff d\mid (dn,an)=(d,a)n\iff d/(d,a)\mid n$$ • Convention: $\ d/p\mid n\$ means $\ d/p\,$ is an integer, so $\,p\mid d\ \$ – Bill Dubuque Mar 15 at 20:23 Intuitively: if $$d|ab$$ and $$d\nmid b$$ then "some part of $$d$$ must divide $$a$$". So if $$d|2n$$ but $$d\nmid n$$ then some (non-trivial) part must divide $$2$$, and that part must be $$2$$, so $$d$$ is even. ..... That's intuition. Let's make a proof. ..... Suppose $$d|ab$$. Let $$\gcd(d,b) = g$$ and let $$d = d'g$$ and $$b = b'g$$. It's easy to see that $$d'$$ and $$b'$$ are relatively prime: if $$b'$$ and $$d'$$ had any non-trivial factor, $$k$$, in common then $$kg$$ would be a common divisor of $$b$$ and $$d$$, contradicting that $$g$$ is the greatest common divisor.
So $$d=d'g$$ and $$ab = ab'g$$ and $$d'g|ab'g$$ so $$d'|ab'$$. But $$d'$$ and $$b'$$ are relatively prime so they have no factors in common. So $$d'|a$$. Now if $$d|b$$ then $$d'g|b'g$$ so $$d'|b'$$ but $$d'$$ and $$b'$$ are relatively prime so $$d' = 1$$ and $$d = \gcd(d,b)$$. Lemma 1: $$d|b \iff \gcd(d,b) = d$$. If $$d|ab$$ the $$d'|a$$ but if $$d\not \mid b$$ then $$\gcd(d,b)=g \ne d$$ so $$d' = \frac dg > 1$$. $$d'\ne 1$$ and $$d'|a$$. So Lemma 2: If $$d|ab$$ but $$d\not \mid b$$ then $$d' = \frac d{\gcd(d,b)} > 1$$ and $$d'|a$$. So if $$d|2n$$ and $$d\not \mid n$$ then $$\frac d{\gcd(d,n)} > 1$$ and $$\frac d{\gcd(d,n)} |2$$. So $$\frac d{\gcd(d,n)} = 2$$ and $$d = \frac d{\gcd(d,n)}\gcd(d,n) = 2\gcd(d,n)$$. And $$d$$ is even. ===== Actually, this may be the best most general Theorem: Theorem: If $$d|mn$$ then $$\frac d{\gcd(d,n)}|m$$. I'll leave the proof to you, and I'll leave it to you to figure out how that implies if $$d|2n$$ and $$d\not\mid n$$ then $$d$$ is even.
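Both the original claim and fleablood's general theorem are easy to sanity-check by brute force over small integers (a quick check, not part of the thread):

```python
from math import gcd

checked = 0
for n in range(1, 300):
    for d in range(1, 2 * n + 1):
        if (2 * n) % d == 0:
            # general theorem with m = 2: d | mn  =>  d/gcd(d,n) divides m
            assert 2 % (d // gcd(d, n)) == 0
            if n % d != 0:
                # d | 2n and d does not divide n  =>  d is even and (d/2) | n
                assert d % 2 == 0 and n % (d // 2) == 0
                checked += 1
```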
2019-04-23 12:32:37
https://perone.github.io/episuite/icu_simulation_notebook.html
# ICU - Simulating ICU occupation

This section shows how to use the ICU simulation from Episuite. The main class used for the ICU simulation is ICUSimulation, which is used by specifying admissions and a duration distribution. For more information about how the simulation is performed, please see the class documentation.

Note: In the example below, we will use a sample dataset that comes embedded in Episuite, with real data from the SARS-CoV-2 outbreak in the south of Brazil. This dataset can be accessed using the admissions_sample() function from the data module.

See also: Module episuite.distributions - documentation of the episuite.distributions module. Module episuite.data - documentation of the episuite.data module. Forecasting critical care bed requirements for COVID-19 patients in England - this simulator is mainly based on this work by Jombart et al. [JNJ+20]. Analysis of the SARS-CoV-2 outbreak in Rio Grande do Sul / Brazil - this article by Perone [Per20] used this simulator and describes how it works.

[1]: import episuite
from matplotlib import pyplot as plt
from episuite import icu, durations, distributions
from episuite import data

The first step is to prepare the admissions that we want to use for the simulation. These can be observed admissions (corrected for right-censoring) or projected admissions used to simulate different scenarios.

[2]: sample_data = data.admissions_sample()

[3]: sample_data_admissions = sample_data.groupby("DATE_START").size().sort_index()

[4]: fig = plt.figure(figsize=(15, 4))
plt.show()

## Durations (length of stay)

Let's now prepare the duration distribution for the observed length of stay (LoS).

[5]: dur = durations.Durations(sample_data)

[6]: fig = plt.figure(figsize=(15, 5))
dur.plot.timeplot(n_boot=100)
plt.show()

As we can see in the figure above, the LoS for the ICU occupation varies a lot at the beginning of the pandemic and then stabilizes later, with a drop at the end due to a bias present in the dataset.
This bias would ideally be corrected before doing nowcasting or forecasting for different scenarios. Now we are going to get a bootstrap distribution for the LoS and then instantiate the ICUSimulation using this distribution of stays and the admissions we observed.

[7]: duration_bootstrap = dur.get_bootstrap()

We will now simulate 5 rounds to incorporate the uncertainty of the LoS distribution. Usually you would do more than 50 rounds.

[8]: results = icu_sim.simulate(5)

We can now compute confidence intervals and inspect the simulation results. The method get_simulation_results() will give you a dataframe indexed by day, with each simulation as a column, representing different occupancy values for each day and taking the LoS uncertainty into account.

[9]: results.get_simulation_results()
[9]:
              0    1    2    3    4
2020-03-18  2.0  2.0  2.0  2.0    2
2020-03-19  3.0  3.0  3.0  3.0    3
2020-03-20  5.0  5.0  5.0  5.0    5
2020-03-21  6.0  6.0  6.0  6.0    6
2020-03-22  8.0  8.0  8.0  8.0    7
...         ...  ...  ...  ...  ...
2021-06-17  0.0  0.0  0.0  0.0    1
2021-06-18  0.0  0.0  0.0  0.0    1
2021-06-19  0.0  0.0  0.0  0.0    1
2021-06-20  0.0  0.0  0.0  0.0    1
2021-06-21  0.0  0.0  0.0  0.0    1

461 rows × 5 columns

To compute confidence intervals, we just have to call the hdi() method. This returns a dataframe with the confidence intervals (lb95 = lower bound of the .95 HDI, ub95 = upper bound of the .95 HDI).

[10]: df = results.hdi()
[11]: df.head()
[11]:
         date  lb95  ub95  lb50  ub50  mean_val  median_val
0  2020-03-18   2.0   2.0   2.0   2.0       2.0         2.0
1  2020-03-19   3.0   3.0   3.0   3.0       3.0         3.0
2  2020-03-20   5.0   5.0   5.0   5.0       5.0         5.0
3  2020-03-21   6.0   6.0   6.0   6.0       6.0         6.0
4  2020-03-22   7.0   8.0   8.0   8.0       7.8         8.0

## Visualization

[12]: fig = plt.figure(figsize=(15, 5))
df["mean_val"].plot()
plt.show()

We can see here the results of the simulation and the uncertainty for each day. Note that after admissions stop, it takes more than a month for occupancy to drop to zero.
This also shows how concerning COVID-19 hospitalizations are: they rise quickly but take a long time to dissipate.

[13]: fig = plt.figure(figsize=(16, 6))
results.plot.lineplot()
plt.show()
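The core idea of this kind of simulation can be sketched independently of Episuite: for each admission, draw a length of stay and count how many stays overlap each day. This is a minimal sketch under assumed inputs — the gamma LoS distribution, the admission counts and the helper name below are illustrative, not Episuite's API:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_occupancy(daily_admissions, los_sampler, horizon):
    """Count occupied beds per day, given admissions and sampled lengths of stay."""
    occupancy = np.zeros(horizon, dtype=int)
    for day, n_adm in enumerate(daily_admissions):
        for _ in range(n_adm):
            los = max(1, int(round(los_sampler())))  # stays last at least one day
            occupancy[day:day + los] += 1            # slice clips at the horizon
    return occupancy

# illustrative admissions (same shape as the sample's first days), then a stop
admissions = [2, 3, 5, 6, 8, 7, 0, 0, 0, 0]
occ = simulate_occupancy(admissions, lambda: rng.gamma(2.0, 4.0), horizon=60)
```

Repeating this with bootstrap-resampled LoS distributions, one run per resample, yields the per-column occupancy trajectories from which the HDI bands above are computed.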
2022-05-26 17:46:53
http://www.nag.com/numeric/MB/manual64_24_1/html/G13/g13bjf.html
# NAG Toolbox: nag_tsa_multi_inputmod_forecast (g13bj)

## Purpose

nag_tsa_multi_inputmod_forecast (g13bj) produces forecasts of a time series (the output series) which depends on one or more other (input) series via a previously estimated multi-input model for which the state set information is not available. The future values of the input series must be supplied. In contrast with nag_tsa_multi_inputmod_forecast_state (g13bh), the original past values of the input and output series are required. Standard errors of the forecasts are produced. If future values of some of the input series have been obtained as forecasts using ARIMA models for those series, this may be allowed for in the calculation of the standard errors.

## Syntax

[para, xxy, rmsxy, mrx, fva, fsd, sttf, nsttf, ifail] = g13bj(mr, mt, para, kfc, nev, nfv, xxy, kzef, rmsxy, mrx, parx, isttf, 'nser', nser, 'npara', npara)

[para, xxy, rmsxy, mrx, fva, fsd, sttf, nsttf, ifail] = nag_tsa_multi_inputmod_forecast(mr, mt, para, kfc, nev, nfv, xxy, kzef, rmsxy, mrx, parx, isttf, 'nser', nser, 'npara', npara)

## Description

nag_tsa_multi_inputmod_forecast (g13bj) has two stages. The first stage is essentially the same as a call to the model estimation function nag_tsa_multi_inputmod_estim (g13be), with zero iterations. In particular, all the parameters remain unchanged in the supplied input series transfer function models and output noise series ARIMA model. The internal nuisance parameters associated with the pre-observation period effects of the input series are estimated where requested, and so are any backforecasts of the output noise series. The output components $z_t$ and $n_t$, and residuals $a_t$, are calculated exactly as in Section [Description] in (g13be), and the state set for forecasting is constituted.
The second stage is essentially the same as a call to the forecasting function nag_tsa_multi_inputmod_forecast_state (g13bh). The same information is required, and the same information is returned.

Use of nag_tsa_multi_inputmod_forecast (g13bj) should be confined to situations in which the state set for forecasting is unknown. Forecasting from the original data is relatively expensive because it requires recalculation of the state set.

nag_tsa_multi_inputmod_forecast (g13bj) returns the state set for use in producing further forecasts using nag_tsa_multi_inputmod_forecast_state (g13bh), or for updating the state set using nag_tsa_multi_inputmod_update (g13bg).

## References

Box G E P and Jenkins G M (1976) Time Series Analysis: Forecasting and Control (Revised Edition) Holden–Day

## Parameters

### Compulsory Input Parameters

1: mr($7$) – int64int32nag_int array

The orders vector $(p,d,q,P,D,Q,s)$ of the ARIMA model for the output noise component. $p$, $q$, $P$ and $Q$ refer respectively to the number of autoregressive ($\varphi$), moving average ($\theta$), seasonal autoregressive ($\Phi$) and seasonal moving average ($\Theta$) parameters. $d$, $D$ and $s$ refer respectively to the order of non-seasonal differencing, the order of seasonal differencing and the seasonal period.

Constraints:
• $p$, $d$, $q$, $P$, $D$, $Q$, $s\ge 0$;
• $p+q+P+Q>0$;
• $s\ne 1$;
• if $s=0$, $P+D+Q=0$;
• if $s>1$, $P+D+Q>0$;
• $d+s\times(P+D)\le n$;
• $p+d-q+s\times(P+D-Q)\le n$.

2: mt($4$,nser) – int64int32nag_int array

The transfer function model orders $b$, $p$ and $q$ of each of the input series. The data for input series $i$ is held in column $i$. Row 1 holds the value $b_i$, row 2 holds the value $q_i$ and row 3 holds the value $p_i$.
For a simple input, $b_i=q_i=p_i=0$. Row 4 holds the value $r_i$, where $r_i=1$ for a simple input, and $r_i=2$ or $3$ for a transfer function input. The choice $r_i=3$ leads to estimation of the pre-period input effects as nuisance parameters, and $r_i=2$ suppresses this estimation. This choice may affect the returned forecasts and the state set. When $r_i=1$, any nonzero contents of rows 1, 2 and 3 of column $i$ are ignored.

Constraint: ${\mathbf{mt}}(4,i)=1$, $2$ or $3$, for $i=1,2,\dots,{\mathbf{nser}}-1$.

3: para(npara) – double array

Estimates of the multi-input model parameters. These are in order, firstly the ARIMA model parameters: $p$ values of $\varphi$ parameters, $q$ values of $\theta$ parameters, $P$ values of $\Phi$ parameters, $Q$ values of $\Theta$ parameters. These are followed by the transfer function model parameter values $\omega_0,\omega_1,\dots,\omega_{q_1}$, $\delta_1,\dots,\delta_{p_1}$ for the first of any input series, and similarly for each subsequent input series. The final component of para is the value of the constant $c$.

4: kfc – int64int32nag_int scalar

Must be set to $1$ if the constant was estimated when the model was fitted, and $0$ if it was held at a fixed value. This only affects the degrees of freedom used in calculating the estimated residual variance.

Constraint: ${\mathbf{kfc}}=0$ or $1$.

5: nev – int64int32nag_int scalar

The number of original (undifferenced) values in each of the input and output time series.

6: nfv – int64int32nag_int scalar

The number of forecast values of the output series required.

Constraint: ${\mathbf{nfv}}>0$.
7: xxy(ldxxy,nser) – double array

ldxxy, the first dimension of the array, must satisfy the constraint $\mathit{ldxxy}\ge({\mathbf{nev}}+{\mathbf{nfv}})$. The columns of xxy must contain, in the first nev places, the past values of each of the input and output series, in that order. In the next nfv places, the columns relating to the input series (i.e., columns $1$ to ${\mathbf{nser}}-1$) contain the future values of the input series which are necessary for construction of the forecasts of the output series $y$.

8: kzef – int64int32nag_int scalar

Must be set to $0$ if the relevant nfv values of the forecasts (fva) are to be held in the output series column (nser) of xxy (which is otherwise unchanged) on exit, and must not be set to $0$ if the values of the input component series $z_t$ and the values of the output noise component $n_t$ are to overwrite the contents of xxy on exit.

9: rmsxy(nser) – double array

The first ${\mathbf{nser}}-1$ elements of rmsxy must contain the estimated residual variance of the input series ARIMA models. In the case of those inputs for which no ARIMA model is available, or whose effects are to be excluded in the calculation of forecast standard errors, the corresponding entry of rmsxy should be set to $0$.

10: mrx($7$,nser) – int64int32nag_int array

The orders array for each of the input series ARIMA models. Thus, column $i$ contains values of $p$, $d$, $q$, $P$, $D$, $Q$, $s$ for input series $i$. In the case of those inputs for which no ARIMA model is available, the corresponding orders should be set to $0$.

11: parx(ldparx,nser) – double array

ldparx, the first dimension of the array, must satisfy the constraint $\mathit{ldparx}\ge\mathit{nce}$, where $\mathit{nce}$ is the maximum number of parameters in any of the input series ARIMA models. If there are no input series, then $\mathit{ldparx}\ge 1$.
Values of the parameters ($\varphi$, $\theta$, $\Phi$ and $\Theta$) for each of the input series ARIMA models. Thus column $i$ contains ${\mathbf{mrx}}(1,i)$ values of $\varphi$, ${\mathbf{mrx}}(3,i)$ values of $\theta$, ${\mathbf{mrx}}(4,i)$ values of $\Phi$ and ${\mathbf{mrx}}(6,i)$ values of $\Theta$, in that order. Values in the columns relating to those input series for which no ARIMA model is available are ignored.

12: isttf – int64int32nag_int scalar

The dimension of the array sttf as declared in the (sub)program from which nag_tsa_multi_inputmod_forecast (g13bj) is called.

Constraint: ${\mathbf{isttf}}\ge(P\times s)+d+(D\times s)+q+\max(p,Q\times s)+\mathit{ncf}$, where $\mathit{ncf}=\sum(b_i+q_i+p_i)$ and the summation is over all input series for which $r_i>1$.

### Optional Input Parameters

1: nser – int64int32nag_int scalar

Default: the dimension of the arrays mt, mrx, rmsxy and the second dimension of the arrays xxy, parx. (An error is raised if these dimensions are not equal.) The number of input and output series. There may be any number of input series (including none), but only one output series.

2: npara – int64int32nag_int scalar

Default: the dimension of the array para. The exact number of $\varphi$, $\theta$, $\Phi$, $\Theta$, $\omega$, $\delta$, $c$ parameters, so that ${\mathbf{npara}}=p+q+P+Q+{\mathbf{nser}}+\sum(p+q)$, the summation being over all the input series. ($c$ must be included whether its value was previously estimated or was set fixed.)
### Input Parameters Omitted from the MATLAB Interface

ldxxy ldparx wa iwa mwa imwa

### Output Parameters

1: para(npara) – double array

The parameter values may be updated using an additional iteration in the estimation process.

2: xxy(ldxxy,nser) – double array

$\mathit{ldxxy}\ge({\mathbf{nev}}+{\mathbf{nfv}})$. If ${\mathbf{kzef}}=0$ then xxy is unchanged, except that the relevant nfv values in the column relating to the output series (column nser) contain the forecast values (fva); but if ${\mathbf{kzef}}\ne 0$ then the columns of xxy contain the corresponding values of the input component series $z_t$ and the values of the output noise component $n_t$, in that order.

3: rmsxy(nser) – double array

${\mathbf{rmsxy}}({\mathbf{nser}})$ contains the estimated residual variance of the output noise ARIMA model, which is calculated from the supplied series. Otherwise rmsxy is unchanged.

4: mrx($7$,nser) – int64int32nag_int array

Unchanged, except for column nser which is used as workspace.

5: fva(nfv) – double array

The required forecast values for the output series. (Note that these are also output in column nser of xxy if ${\mathbf{kzef}}=0$.)

6: fsd(nfv) – double array

The standard errors for each of the forecast values.

7: sttf(isttf) – double array

The nsttf values of the state set based on the first nev sets of (past) values of the input and output series.

8: nsttf – int64int32nag_int scalar

The number of values in the state set array sttf.

9: ifail – int64int32nag_int scalar

${\mathbf{ifail}}={\mathbf{0}}$ unless the function detects an error (see [Error Indicators and Warnings]).
## Error Indicators and Warnings

Errors or warnings detected by the function:

${\mathbf{ifail}}=1$: On entry, ${\mathbf{kfc}}<0$, or ${\mathbf{kfc}}>1$, or $\mathit{ldxxy}<({\mathbf{nev}}+{\mathbf{nfv}})$, or ${\mathbf{nfv}}\le 0$.

${\mathbf{ifail}}=2$: On entry, ldparx is too small, or npara is inconsistent with the orders specified in arrays mr and mt; or one of the $r_i$, stored in ${\mathbf{mt}}(4,i)$, does not equal $1$, $2$ or $3$.

${\mathbf{ifail}}=3$: On entry or during execution, one or more sets of $\delta$ parameters do not satisfy the stationarity or invertibility test conditions.

${\mathbf{ifail}}=4$: On entry, iwa is too small for the final forecasting calculations. This is a highly unlikely error, and would probably indicate that nfv was abnormally large.

${\mathbf{ifail}}=5$: On entry, iwa is too small by a very considerable margin. No information is supplied about the requisite minimum size.

${\mathbf{ifail}}=6$: On entry, iwa is too small, but the requisite minimum size is returned in $\mathit{mwa}(1)$.

${\mathbf{ifail}}=7$: On entry, imwa is too small, but the requisite minimum size is returned in $\mathit{mwa}(1)$.

${\mathbf{ifail}}=8$: This indicates a failure in nag_linsys_real_posdef_solve_1rhs (f04as), which is used to solve the equations giving the latest estimates of the parameters.

${\mathbf{ifail}}=9$: This indicates a failure in the inversion of the second derivative matrix associated with parameter estimation.

${\mathbf{ifail}}=10$: On entry or during execution, one or more sets of the ARIMA ($\varphi$, $\theta$, $\Phi$ or $\Theta$) parameters do not satisfy the stationarity or invertibility test conditions.

${\mathbf{ifail}}=11$: On entry, isttf is too small.

## Accuracy

The computations are believed to be stable.
The time taken by nag_tsa_multi_inputmod_forecast (g13bj) is approximately proportional to the product of the length of each series and the square of the number of parameters in the multi-input model.

## Example

```
function nag_tsa_multi_inputmod_forecast_example
mr = [int64(1);0;0;0;0;1;4];
mt = [int64(0),0,0,0,1,0; ...
      0,0,0,0,0,0; ...
      0,0,0,0,1,0; ...
      1,1,1,1,3,0];
para = [0.495; 0.238; -0.367; -3.876; 4.516; 2.474; 8.629; 0.688; -82.858];
kfc = int64(1);
nev = int64(40);
nfv = int64(8);
xxy = [ 1,  1,  0,  0, 8.075, 105;  1,  0,  1,  0, 7.819, 119; ...
        1,  0,  0,  1, 7.366, 119;  1, -1, -1, -1, 8.113, 109; ...
        2,  1,  0,  0, 7.38,  117;  2,  0,  1,  0, 7.134, 135; ...
        2,  0,  0,  1, 7.222, 126;  2, -1, -1, -1, 7.768, 112; ...
        3,  1,  0,  0, 7.386, 116;  3,  0,  1,  0, 6.965, 122; ...
        3,  0,  0,  1, 6.478, 115;  3, -1, -1, -1, 8.105, 115; ...
        4,  1,  0,  0, 8.06,  122;  4,  0,  1,  0, 7.684, 138; ...
        4,  0,  0,  1, 7.58,  135;  4, -1, -1, -1, 7.093, 125; ...
        5,  1,  0,  0, 6.129, 115;  5,  0,  1,  0, 6.026, 108; ...
        5,  0,  0,  1, 6.679, 100;  5, -1, -1, -1, 7.414,  96; ...
        6,  1,  0,  0, 7.112, 107;  6,  0,  1,  0, 7.762, 115; ...
        6,  0,  0,  1, 7.645, 123;  6, -1, -1, -1, 8.639, 122; ...
        7,  1,  0,  0, 7.667, 128;  7,  0,  1,  0, 8.08,  136; ...
        7,  0,  0,  1, 6.678, 140;  7, -1, -1, -1, 6.739, 122; ...
        8,  1,  0,  0, 5.569, 102;  8,  0,  1,  0, 5.049, 103; ...
        8,  0,  0,  1, 5.642,  89;  8, -1, -1, -1, 6.808,  77; ...
        9,  1,  0,  0, 6.636,  89;  9,  0,  1,  0, 8.241,  94; ...
        9,  0,  0,  1, 7.968, 104;  9, -1, -1, -1, 8.044, 108; ...
       10,  1,  0,  0, 7.791, 119; 10,  0,  1,  0, 7.024, 126; ...
       10,  0,  0,  1, 6.102, 119; 10, -1, -1, -1, 6.053, 103; ...
       11,  1,  0,  0, 5.941,   0; 11,  0,  1,  0, 5.386,   0; ...
       11,  0,  0,  1, 5.811,   0; 11, -1, -1, -1, 6.716,   0; ...
       12,  1,  0,  0, 6.923,   0; 12,  0,  1,  0, 6.939,   0; ...
       12,  0,  0,  1, 6.705,   0; 12, -1, -1, -1, 6.914,   0; ...
        0,  0,  0,  0, 0,       0;  0,  0,  0,  0, 0,       0];
kzef = int64(1);
rmsxy = [0; 0; 0; 0; 0.172; 0];
mrx = [int64(0),0,0,0,2,0; ...
       0,0,0,0,0,0; ...
       0,0,0,0,2,0; ...
       0,0,0,0,0,0; ...
       0,0,0,0,1,0; ...
       0,0,0,0,1,0; ...
       0,0,0,0,4,0];
parx = [0, 0, 0, 0,  1.6743, 0; ...
        0, 0, 0, 0, -0.9505, 0; ...
        0, 0, 0, 0,  1.4605, 0; ...
        0, 0, 0, 0, -0.4862, 0; ...
        0, 0, 0, 0,  0.8993, 0];
isttf = int64(20);
[paraOut, xxyOut, rmsxyOut, mrxOut, fva, fsd, sttf, nsttf, ifail] = ...
    nag_tsa_multi_inputmod_forecast(mr, mt, para, kfc, nev, nfv, xxy, kzef, rmsxy, mrx, parx, isttf)
```

```
paraOut =
    0.4950
    0.2380
   -0.3391
   -3.8886
    4.5139
    2.4789
    8.6290
    0.6880
  -82.8580

xxyOut =
   -0.3391   -3.8886         0         0  188.6028  -79.3751
   -0.3391         0    4.5139         0  199.4379  -84.6127
   -0.3391         0         0    2.4789  204.6834  -87.8232
   -0.3391    3.8886   -4.5139   -2.4789  204.3834  -91.9402
   -0.6782   -3.8886         0         0  210.6229  -89.0560
   -0.6782         0    4.5139         0  208.5905  -77.4262
   -0.6782         0         0    2.4789  205.0696  -80.8703
   -0.6782    3.8886   -4.5139   -2.4789  203.4065  -87.6242
   -1.0173   -3.8886         0         0  206.9738  -86.0678
   -1.0173         0    4.5139         0  206.1317  -87.6283
   -1.0173         0         0    2.4789  201.9196  -88.3812
   -1.0173    3.8886   -4.5139   -2.4789  194.8194  -75.6979
   -1.3564   -3.8886         0         0  203.9738  -76.7287
   -1.3564         0    4.5139         0  209.8837  -75.0412
   -1.3564         0         0    2.4789  210.7052  -76.8277
   -1.3564    3.8886   -4.5139   -2.4789  210.3730  -80.9125
   -1.6955   -3.8886         0         0  205.9421  -85.3580
   -1.6955         0    4.5139         0  194.5753  -89.3937
   -1.6955         0         0    2.4789  185.8662  -86.6496
   -1.6955    3.8886   -4.5139   -2.4789  185.5090  -84.7094
   -2.0346   -3.8886         0         0  191.6056  -78.6824
   -2.0346         0    4.5139         0  193.1941  -80.6734
   -2.0346         0         0    2.4789  199.8958  -77.3402
   -2.0346    3.8886   -4.5139   -2.4789  203.4970  -76.3583
   -2.3737   -3.8886         0         0  214.5519  -80.2896
   -2.3737         0    4.5139         0  213.7702  -79.9104
   -2.3737         0         0    2.4789  216.7963  -76.9015
   -2.3737    3.8886   -4.5139   -2.4789  206.7803  -79.3024
   -2.7128   -3.8886         0         0  200.4157  -91.8142
   -2.7128         0    4.5139         0  185.9409  -84.7420
   -2.7128         0         0    2.4789  171.4951  -82.2613
   -2.7128    3.8886   -4.5139   -2.4789  166.6735  -83.8565
   -3.0519   -3.8886         0         0  173.4176  -77.4771
   -3.0519         0    4.5139         0  176.5733  -84.0353
   -3.0519         0         0    2.4789  192.5940  -88.0211
   -3.0519    3.8886   -4.5139   -2.4789  201.2606  -87.1045
   -3.3910   -3.8886         0         0  207.8790  -81.5993
   -3.3910         0    4.5139         0  210.2493  -85.3721
   -3.3910         0         0    2.4789  205.2616  -85.3495
   -3.3910    3.8886   -4.5139   -2.4789  193.8741  -84.3790
   -3.7301   -3.8886         0         0  185.6167  -84.6003
   -3.7301         0    4.5139         0  178.9692  -82.7953
   -3.7301         0         0    2.4789  169.6066  -82.3091
   -3.7301    3.8886   -4.5139   -2.4789  166.8325  -82.4095
   -4.0692   -3.8886         0         0  172.7331  -82.6360
   -4.0692         0    4.5139         0  178.5789  -82.7481
   -4.0692         0         0    2.4789  182.7389  -82.8036
   -4.0692    3.8886   -4.5139   -2.4789  183.5818  -82.8311
         0         0         0         0         0         0
         0         0         0         0         0         0

rmsxyOut =
         0
         0
         0
         0
    0.1720
   20.7599

mrxOut =
     0     0     0     0     2     1
     0     0     0     0     0     0
     0     0     0     0     2     0
     0     0     0     0     0     0
     0     0     0     0     1     0
     0     0     0     0     1     1
     0     0     0     0     4     4

fva =
   93.3977   96.9577   86.0463   77.5887   82.1393   96.2755   98.3451   93.5774

fsd =
    4.5563    6.2172    7.0933    7.3489    7.3941    7.5823    8.1445    8.8536

sttf =
    6.0530  193.8741    2.0790   -2.8580   -3.5906   -2.5203    0    0    0    0
         0         0         0         0         0         0    0    0    0    0

nsttf =
     6

ifail =
     0
```

The same example may be invoked through the short name g13bj, with identical results.
http://giuliozhou.com/2018/08/07/infinite-fun-space.html
## Adventures in Infinite Fun Space: Mathematical Intuition Through Daydreaming

Lately, I've been reading through Iain M. Banks' 5th Culture novel Excession. This book in particular gives the reader a more detailed understanding of the Culture's Minds, the AI machines that (using a minuscule fraction of their actual abilities) plan and run their post-scarcity, anarcho-communistic society. Along the way, I stumbled upon a section describing what Minds do when they're bored.

The Minds call it Infinite Fun Space: the space of all possible mathematical worlds, free to explore and to play in. It is infinitely more expressive than boring base reality and much more varied; base reality is, after all, just a special case. From time to time the Minds have to go back to it to fix some local mess, but their hearts are in Infinite Fun Space. As described in Wikipedia:

> The mental capabilities of Minds are described in Excession to be vast enough to run entire universe-simulations inside their own imaginations, exploring metamathical (a fictional branch of metamathematics) scenarios, an activity addictive enough to cause some Minds to totally withdraw from caring about our own physical reality into "Infinite Fun Space", their own, ironic and understated term for this sort of activity.

I found this to be rather apt terminology to describe my recent activities aimed at improving my mathematical intuition. A step down from simulating universes, I've recently been tinkering with linear algebra concepts through what I might also refer to as "mathematical daydreaming". Yes, I know it sounds rather silly, but since the goal is to develop good intuition, being able to move fluidly between ideas is key. Starting with the most basic of principles (e.g., what do the rows and columns of a matrix mean? what does a dot product do?), I've been slowly populating this space with various properties and theorems that I've derived (informally) from first principles.
I try to explore the neighborhood around existing concepts, sometimes drawing connections between previously disparate groups. All the while, I try to ask myself why certain concepts exist to begin with. Sometimes this process is guided by a concept that I want to understand (e.g. SVD), which I then work towards by first thinking about the related basic concepts (such as dyads: how to do a rank-1 matrix approximation using a dyad, recalling that the product of two matrices $A$/$B$ can be seen as a sum of dyads formed from their columns/rows). Just earlier today, I was reasoning about the optimality of the simplex algorithm and how the properties of symmetric matrices relate to the covariance matrix (in the meantime recalling that $x^TAx$ is equivalent to the elementwise product of the "self-dyad" $xx^T$ with $A$, summed). For additional intuition, I sometimes write toy programs to visualize and numerically evaluate illustrative examples.

[Animation: the growth of components along the eigenvectors of $A$ when applying $A$ to the unit square.]

[Animation: the effect of adding between 0 and 5 to the diagonal of $A$ on its eigenvectors; a friendly reminder that adding to the diagonal increases the eigenvalues but has no effect on the eigenvectors themselves.]

Overall, this has helped a ton with retention as well as fluency. I'm far more comfortable reasoning about eigenvalues and linear transformations than I was just a few weeks ago. It's sometimes quite slow (occasionally I spend days thinking about a single concept only to find a simple, uninteresting solution from a seemingly unrelated source), but I'm quite surprised how easy (and fun!) it is to get lost in thought. Similar to the drawbacks of search engines, I find I learn more by bouncing around until I can half-approximate a proof than by trying to directly prove it with no context.
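The dyad facts mentioned above are easy to check numerically. Here is a small numpy sketch (my own illustration, not the exact toy programs from the post) verifying them, along with the diagonal-shift reminder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric matrix: symmetrize a random one so eigh applies.
A = rng.standard_normal((4, 4))
A = (A + A.T) / 2
B = rng.standard_normal((4, 4))

# A matrix product is a sum of dyads: AB = sum_k (k-th column of A)(k-th row of B)^T.
AB = sum(np.outer(A[:, k], B[k, :]) for k in range(4))
assert np.allclose(AB, A @ B)

# x^T A x is the elementwise product of the "self-dyad" xx^T with A, summed.
x = rng.standard_normal(4)
assert np.isclose(x @ A @ x, np.sum(np.outer(x, x) * A))

# Adding c to the diagonal (A + cI) shifts every eigenvalue by c
# but leaves the eigenvectors unchanged (up to sign, for distinct eigenvalues).
c = 3.0
w, V = np.linalg.eigh(A)
w_shifted, V_shifted = np.linalg.eigh(A + c * np.eye(4))
assert np.allclose(w + c, w_shifted)
assert np.allclose(np.abs(V), np.abs(V_shifted))
```

The `np.abs` comparison is there because an eigenvector is only determined up to sign; for a matrix with repeated eigenvalues the whole eigenspace could rotate, which this sketch sidesteps by using a random (hence almost surely non-degenerate) matrix.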
I have yet to replicate this in other areas such as probability, statistics, and optimization, but I hope to do so at some point. One important thing to note is that this requires at least some familiarity with the topic. For an area with unknown unknowns (such as abstract algebra), I'm guessing that I'll need at least a semi-formal pass at the material before I can start playing with it on my own. Eventually, I hope to achieve a better balance between working on concrete problems and this mathematical daydreaming thing. In the meantime, I'll continue playing around in Infinite Finite and Slowly Growing Fun Space.
http://googology.wikia.com/wiki/User_blog:LittlePeng9/Levels_of_ITTMs
I got so much into these ITTM levels and the $$\tau$$ ordinal I defined that I decided to create a whole separate blog post for them, partially because the blog post linked above doesn't really fit that purpose.

## Definitions

I assume familiarity with infinite time Turing machines (ITTMs) and all related topics, like accidentally writable ordinals. (Recall the standard ITTM ordinals: $$\lambda$$ is the supremum of the writable ordinals, $$\zeta$$ of the eventually writable ordinals, and $$\Sigma$$ of the accidentally writable ordinals, with $$\lambda<\zeta<\Sigma$$.)

Let's call an ordinal $$\alpha$$ a level if there exists an ITTM M such that $$\alpha$$ is the least upper bound of all ordinals accidentally written by M when run on empty input.

Let's call an ordinal $$\alpha$$ achievable if there exists an ITTM M such that $$\alpha$$ is accidentally written by M when run on empty input, but no larger ordinal is.

Define $$\tau$$ to be the smallest ordinal which isn't a level of any ITTM.

## Simple facts

I may use these facts without explicitly mentioning them.

Fact 1: Every achievable ordinal is a level.

Assume $$\alpha$$ is achievable by machine M. Then $$\alpha$$ is the largest element of the set of ordinals accidentally written by M. If a set has a largest element, then that element is its least upper bound, so $$\alpha$$ is the level of M.

Fact 2: If an ordinal is a level and is a successor, then it's achievable.

Suppose not: let $$\alpha+1$$ be a non-achievable successor level. It's the level of some machine M. By an argument similar to the one used above, $$\alpha+1$$ cannot be the greatest ordinal accidentally written by M, as it'd then be achievable. So M accidentally writes only ordinals which are $$\leq\alpha$$. Thus the least upper bound of these ordinals is at most $$\alpha$$, so the level of M couldn't be $$\alpha+1$$, a contradiction.

Fact 3: $$\Sigma$$ is a non-achievable level.

Take the universal machine U which, on a scratch tape, simulates the operation of all ITTMs, and writes down on the output tape, successively, every accidentally writable real. Among these will be all accidentally writable ordinals, so their least upper bound will be $$\Sigma$$. So the level of U is $$\Sigma$$.
On the other hand, if $$\Sigma$$ were achievable, it'd be accidentally writable, which it isn't.

Fact 4: No ordinal larger than $$\Sigma$$ is a level. In particular, $$\tau\leq\Sigma+1$$.

The least upper bound of all accidentally writable ordinals is $$\Sigma$$, so the least upper bound of any subset of these is at most $$\Sigma$$. As levels are such upper bounds, the first part of the claim follows. Thus $$\Sigma+1$$ is not a level, so the least non-level, $$\tau$$, is at most $$\Sigma+1$$.

## Lemmata

The lemmata here are indirectly connected to $$\tau$$, levels and/or achievable ordinals.

Lemma 1: Eventually writable ordinals are closed under addition and multiplication.

To prove the lemma, we show that if $$\alpha,\beta$$ are eventually writable, then so are $$\alpha+\beta$$ and $$\alpha\beta$$.

First, addition. We construct a machine P which simulates machines M and N, eventually writing $$\alpha$$ and $$\beta$$ respectively, in parallel. At every step of both machines we check if their outputs are both ordinals. If so, we write on the output the following ordering: we have numbers $$a_k$$ ordered using the ordinal written by M, and $$b_k$$ ordered using the ordinal written by N, plus we set $$a_k\prec b_l$$ for all $$k,l$$. If the output of M or N changes, we erase the output of P and repeat this process. Eventually the outputs of M and N will stabilize, and so will the output of P. It's easy to see that P eventually writes $$\alpha+\beta$$.

For multiplication, we create $$\beta$$ "blocks" of ordinals, each with $$\alpha$$ ordinals, ordered lexicographically. More formally, we have $$a_{0,k}$$ ordered by $$\alpha$$, same with $$a_{1,k}$$ and so on, where the first number in the subscript goes through all ordinals below $$\beta$$. We also set $$a_{\gamma,k}\prec a_{\delta,l}$$ if $$\gamma<\delta$$, so we order these blocks using $$\beta$$. We apply this similarly to addition, resulting in a machine Q eventually writing $$\alpha\beta$$.
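Lemma 1's machines only ever manipulate codes for well-orderings. As a finite, ordinary-computation sketch (my own illustration; a real ITTM manipulates orderings of the naturals coded on its tapes), the sum and product orderings from the proof can be written down directly:

```python
from functools import cmp_to_key

def sum_order(lt_a, lt_b):
    """Order of type alpha+beta on tagged pairs: (0, k) lives in the alpha-copy,
    (1, k) in the beta-copy, and the whole alpha-copy precedes the beta-copy
    (the a_k before b_l rule from the proof)."""
    def lt(x, y):
        (sx, kx), (sy, ky) = x, y
        if sx != sy:
            return sx < sy
        return lt_a(kx, ky) if sx == 0 else lt_b(kx, ky)
    return lt

def prod_order(lt_a, lt_b):
    """Order of type alpha*beta on pairs (j, k): beta many blocks, each a copy
    of alpha; blocks are compared by their index using beta's order."""
    def lt(x, y):
        (jx, kx), (jy, ky) = x, y
        if jx != jy:
            return lt_b(jx, jy)
        return lt_a(kx, ky)
    return lt

# Finite sanity checks with the usual order on naturals.
nat_lt = lambda a, b: a < b

# Type 2+3: least element (0, 0), greatest (1, 2).
lt_sum = sum_order(nat_lt, nat_lt)
pts = [(0, k) for k in range(2)] + [(1, k) for k in range(3)]
key = lambda lt: cmp_to_key(lambda x, y: -1 if lt(x, y) else (1 if lt(y, x) else 0))
ranked_sum = sorted(pts, key=key(lt_sum))
assert ranked_sum[0] == (0, 0) and ranked_sum[-1] == (1, 2)

# Type 2*3: 6 points, least (0, 0), greatest (2, 1).
lt_prod = prod_order(nat_lt, nat_lt)
points = [(j, k) for j in range(3) for k in range(2)]
ranked_prod = sorted(points, key=key(lt_prod))
assert ranked_prod[0] == (0, 0) and ranked_prod[-1] == (2, 1)
```

The ITTM content of the lemma is entirely in the restart-and-stabilize bookkeeping, which this finite sketch does not model; it only shows the order constructions themselves.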
Lemma 2: $$\zeta^{0^\triangledown}=\zeta$$ (here $$\triangledown$$ is the lightface jump operator for ITTMs).

Note that $$0^\triangledown$$ is eventually writable: we can make a machine U which marks which machines have halted. By stage $$\gamma$$ every machine that ever halts will have halted, and the output will stabilize.

Assume a real $$r$$ is eventually written from $$0^\triangledown$$ by the machine M. We construct a non-oracle machine N which simulates U and M in parallel. M, instead of the full oracle, uses the output of U. When the output of U changes, we restart the simulation of M and let it use the modified output of U. Eventually, U will write down $$0^\triangledown$$, so M will be able to operate as if it had access to the full oracle. So N will eventually write $$r$$. It follows that $$\zeta^{0^\triangledown}\leq\zeta$$; of course $$\zeta^{0^\triangledown}\geq\zeta$$, and the result follows.

Lemma 3: Suppose the output of a machine M eventually stabilizes. Then it stabilizes at a stage which is less than $$\zeta$$.

The proof of this fact uses the Main Proposition of P. D. Welch. From it, it follows that every non-halting machine repeats its stage-$$\zeta$$ configuration at stage $$\Sigma$$. Hence, if the output of the machine changes between these two stages, it will change infinitely many times. So, if the output of M stabilizes, it stabilizes before $$\Sigma$$. Now we can modify the proof of the Main Proposition to eventually write a stage at which the whole output (not only one cell) stabilizes; since that stage is then an eventually writable ordinal, it is less than $$\zeta$$, thereby proving this lemma.

Lemma 4: Neither $$\zeta$$ nor any larger ordinal can be accidentally written at a stage less than $$\zeta$$.

Assume not: let machine M accidentally write $$\zeta$$ at stage $$\alpha<\zeta$$. Then $$\alpha$$ is eventually writable by some machine N. Construct a machine P which looks at the ordinals written by N, simulates M up to the point specified by such an ordinal, and returns the content of M's output. Eventually, N will stabilize with $$\alpha$$ on its tape, and P will simulate $$\alpha$$ steps of M and stabilize with its output.
But by assumption this would mean $$\zeta$$ is eventually writable, a contradiction. The same goes for every larger ordinal.

Lemma 5: Let $$\triangledown^\alpha$$ be the $$\triangledown$$ operator transfinitely iterated up to $$\alpha$$, for $$\alpha$$ eventually writable. Then the computational strength of $$\triangledown^\alpha$$ doesn't depend on the representation of $$\alpha$$, and for every such $$\alpha$$ it is eventually computable. Indeed, we can set up an eventual computation so that the output of the machine is always either $$0^{\triangledown^\alpha}$$ or something strictly weaker than it.

The first two claims are proved in this paper. For the third claim we show it for the construction in that paper. This goes by transfinite induction: as in theorems 12 and 13 below, the computation of $$0^\triangledown$$ is either finished, or is computable up to some point, so it works for $$\alpha=1$$. We just have to note that exactly the same works for $$\alpha$$ a successor. For limit $$\alpha$$, either the results for all ordinals below $$\alpha$$ are already correctly computed, and from them we can compute the correct $$0^{\triangledown^\alpha}$$, or some $$\beta<\alpha$$ isn't ready. By induction, the result for $$\beta$$ is computable from some lower ordinal $$\delta<\beta$$, from which it follows that the partial result of the whole computation is computable from $$0^{\triangledown^\delta}$$, which is what we actually wanted to prove.

Lemma 6: Ordinals of the form $$\lambda^{0^{\triangledown^\alpha}}$$ for $$\alpha<\zeta$$ are unbounded in $$\zeta$$.

Three proofs, two short, one not:

1. It's quite clear that $$\alpha$$ is computable from $$0^{\triangledown^\alpha}$$, so $$\lambda^{0^{\triangledown^\alpha}}>\alpha$$.
2. There are $$\zeta$$ ordinals in question, no two of which are equal, and each smaller than $$\zeta$$, so they must be cofinal with $$\zeta$$; the result follows.
3. Assume $$\alpha$$ is not computable from any of the $$0^{\triangledown^\beta}$$.
It must then be that $$\alpha$$ contributes a new eventually decidable degree. But the hierarchy $$0^{\triangledown^\beta}$$ contains all eventually writable degrees (link), contradiction.

## Theorems

Theorem 1: Every writable ordinal is achievable.

Let machine M write $$\alpha$$. We construct a machine M' in the following way: first it simulates M on a scratch tape, which will eventually halt and give us $$\alpha$$. We then copy $$\alpha$$ onto the output tape of M'. As M' accidentally writes only $$\alpha$$, this ordinal is achieved by M'.

Theorem 2: $$\lambda$$ is a level. It is also achievable.

We first prove that $$\lambda$$ is a level. Take a machine U which simulates all ITTMs, but this time only writes on its output the ordinals which appear on the output of halting machines. Every writable ordinal will be written at some point, and every ordinal accidentally written will be writable, so their least upper bound is $$\lambda$$. Thus $$\lambda$$ is the level of U.

To show it's achievable, we construct a machine which eventually writes the sum of all writable ordinals, which is easily seen to be $$\lambda$$. Fix some ordering $$T$$ of ITTMs. We construct an ordering in the following way: $$n_0\prec n_1\prec ...\prec n_k\prec ...$$ will form an initial segment of length $$\omega$$. This corresponds to the $$T$$-indexes of all the machines which haven't halted (yet). After that, we have numbers $$a_{1,1},a_{1,2},a_{1,3},...,a_{1,k},...$$, which within themselves are ordered using the writable ordinal written by the $$T$$-least machine which has already halted. Then we have $$a_{2,k}$$ ordered similarly using the second $$T$$-least halting machine, and so on. We also set $$n_k\prec a_{p,q}$$ for all $$k,p,q$$ and $$a_{k,m}\prec a_{l,n}$$ for $$k<l$$. One can easily verify this is the ordering we are looking for and that it's eventually writable. Also, at no point will we have written a larger ordinal, as any sum of writable ordinals can't exceed the sum of all of them.
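The universal machines in fact 3 and theorem 2 rest on simulating all ITTMs in parallel. The scheduling trick behind that is ordinary dovetailing; here is a finite sketch of it for ordinary generators (my own illustration; the transfinite bookkeeping of a real ITTM simulation is not modelled):

```python
from itertools import count, islice

def dovetail(machine_factory):
    """Interleave the runs of machines 0, 1, 2, ...: at round n, machine n is
    started and every machine already running is advanced by one step, so
    every machine eventually gets arbitrarily many steps."""
    running = []
    for n in count():
        running.append(machine_factory(n))
        for m in running:
            yield next(m)

def machine(n):
    # Toy "machine" n: announce (n, step) pairs forever.
    for step in count():
        yield (n, step)

# The first few outputs already visit several machines.
first = list(islice(dovetail(machine), 10))
assert (0, 0) in first and (1, 0) in first and (2, 0) in first
```

The triangular visiting order is exactly why every machine's entire (countable-length) run is eventually covered, which is what lets U enumerate everything any machine ever writes.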
Theorem 3: Eventually writable achievable ordinals are closed under addition and multiplication.

We can use lemma 1 together with the following observation: if machines M and N write no ordinals larger than $$\alpha,\beta$$, then machines P and Q won't write any ordinal larger than $$\alpha+\beta,\alpha\beta$$. So, if $$\alpha,\beta$$ are achievable by M and N, then $$\alpha+\beta,\alpha\beta$$ are achievable by P and Q.

Corollary 1: $$\tau\geq\lambda^\omega$$

Using theorems 1 and 2 we see that every ordinal $$\leq\lambda$$ is achievable. These ordinals are also eventually writable. Because every ordinal below $$\lambda^\omega$$ is expressible using ordinals up to $$\lambda$$ together with addition and multiplication, we can apply theorem 3 to show that every ordinal up to $$\lambda^\omega$$ is achievable and thus a level. So the least non-level is at least $$\lambda^\omega$$.

I'm sure the above result can be extended much, much further, but it's quite difficult to ensure machines won't write anything larger than the expected result.

Theorem 4: If $$\alpha+1$$ is achievable, then so is $$\alpha$$.

Let machine M achieve $$\alpha+1$$. We construct M' which simulates M and, for every ordinal written by M, checks if it's a successor ordinal. If it is, we remove its largest element, thus decreasing the ordinal by 1. Because M writes $$\alpha+1$$, M' will write $$\alpha$$, and as M writes nothing above $$\alpha+1$$, M' will write nothing greater than $$\alpha$$. Thus $$\alpha$$ is achievable by M'.

Theorem 5: If $$\alpha$$ is achievable, then so is $$\alpha+1$$.

Let machine M achieve $$\alpha$$. Machine M' will simulate M and, for every ordinal M writes, add a new largest element to it, thus increasing it by 1. By reasoning similar to theorem 4, M' achieves $$\alpha+1$$.

Corollary 2: $$\tau$$ is either a limit ordinal, or the successor of such.

Assume not; then $$\tau=\alpha+2$$ for some $$\alpha$$. Then $$\alpha+1<\tau$$, so $$\alpha+1$$ is a level, and since it's a successor, by fact 2 it's achievable.
By theorem 5, $$\alpha+1+1=\tau$$ is then also achievable, and thus a level, which contradicts the definition of $$\tau$$.

Theorem 6: Every achievable ordinal is eventually writable.

Assume M achieves $$\alpha$$. Machine N will simulate M and keep track of the largest ordinal M has written so far. We store this ordinal on N's output tape. If M writes some new ordinal, we compare it with the one stored by N. If the former is larger, we overwrite N's output with it. After infinitely many changes we might get some mess as a result, but then we just overwrite it with the next ordinal M writes. Because $$\alpha$$ is the largest ordinal written by M, N's output will eventually stabilize with $$\alpha$$. Thus $$\alpha$$ is eventually writable.

Corollary 3: $$\tau\leq\zeta+1$$

$$\zeta+1$$ is not eventually writable, thus it's not achievable. As it's a non-achievable successor, by fact 2 it's not a level. So, as in fact 4, we get our bound on $$\tau$$.

Theorem 7: $$\tau$$ is either the first non-achievable ordinal or the successor of it.

Let $$\alpha$$ be the least non-achievable ordinal. Every ordinal below $$\alpha$$ is achievable and thus a level. We also have that $$\alpha+1$$ is not achievable (or we could derive a contradiction from theorem 4) and thus, being a successor, not a level. So either $$\alpha$$ isn't a level, in which case $$\alpha=\tau$$, or $$\alpha$$ is a level, in which case $$\alpha+1=\tau$$.

Theorem 8: "$$\alpha$$ is achievable by machine M" is $$0^\triangledown$$-decidable. Thus "$$\alpha$$ is achievable" is $$0^\triangledown$$-decidable.

Note that with $$0^\triangledown$$ we can obviously solve the halting problem. Let machine P simulate M and check if $$\alpha$$ is ever written; P halts iff it is. Then let machine Q simulate M and check if it ever writes an ordinal larger than $$\alpha$$; Q halts iff it does. Now give a machine N access to $$0^\triangledown$$ and ask it whether P halts and Q doesn't. This happens exactly when $$\alpha$$ is achievable by machine M.
So N decides this question. The second claim follows if machine N asks a similar query about every ITTM.

Theorem 9: $$\tau$$ is $$0^\triangledown$$-writable.

We will prove that the least non-achievable ordinal $$\alpha$$ is $$0^\triangledown$$-writable. Run the universal machine U used in fact 3, and for every ordinal it writes, check if it is achievable. Eventually, as there are non-achievable accidentally writable ordinals (e.g. $$\zeta+1$$), we will hit an ordinal greater than, or equal to, $$\alpha$$. As writable ordinals have no gaps, and by relativisation neither do $$0^\triangledown$$-writable ordinals, we get that $$\alpha$$ is $$0^\triangledown$$-writable too. From this and theorem 7 we clearly see that $$\tau$$ is $$0^\triangledown$$-writable as well.

Corollary 4: $$\tau<\zeta$$

As $$\tau$$ is $$0^\triangledown$$-writable, we get $$\tau<\lambda^{0^\triangledown}$$, and, by relativisation of $$\lambda<\zeta$$ and lemma 2, we have $$\lambda^{0^\triangledown}<\zeta^{0^\triangledown}=\zeta$$. The result follows.

So we have finally shown that there are eventually writable non-levels, which disproves my conjecture of $$\tau=\zeta+1$$. Showing $$\tau<\lambda^{0^\triangledown}$$ is actually a much stronger result, because there is a whole hierarchy of $$\zeta$$ different oracle levels between $$\lambda$$ and $$\zeta$$, but I don't doubt $$\tau<\zeta$$ is a lot easier to understand.

It turns out my argument for theorem 8 is incorrect - I assumed that, given an ordinal $$\alpha$$ and $$0^\triangledown$$, we can say things about non-oracle machines allowed to look at $$\alpha$$, but that is actually exactly the difference between $$0^\triangledown$$ and $$0^\blacktriangledown$$ - the former is only allowed to look at machines starting from empty input. Mystery solved.

Theorem 10: If $$\zeta$$ is not a level, then levels are unbounded in $$\zeta$$.

Take any ordinal $$\alpha<\zeta$$. Let M eventually write $$\alpha+1$$. It's obvious that the level of M is greater than $$\alpha$$.
By lemma 3, the output of M stabilizes strictly before $$\zeta$$; thus, by lemma 4, M will not accidentally write $$\zeta$$ nor any greater ordinal. So the set of ordinals accidentally written by M is a subset of $$\zeta$$, and its level is at most $$\zeta$$. By the theorem's assumption, $$\zeta$$ is not a level, so the level of M is between $$\alpha$$ and $$\zeta$$. Because $$\alpha$$ was arbitrary, it follows that levels are unbounded in $$\zeta$$.

Earlier I claimed I had proved the above fact without the assumption that $$\zeta$$ is not a level. However, I haven't found the required reference, thus I cannot really prove that.

Conjecture 1: Levels are unbounded in $$\zeta$$.

Question 1: Is $$\zeta$$ a level?

Note that from theorem 6 it follows that $$\zeta$$ is not achievable, so the only possibility for it to be a level is that there exists a machine M writing an unbounded set of eventually writable ordinals. I doubt this possibility, as I don't think there is any way of determining whether a given ordinal is eventually writable.

The next theorem is an extension of theorem 3:

Theorem 11: Let $$F$$ be a nondecreasing, ITTM-computable function from ordinals to ordinals. If $$\alpha$$ is achievable, then so is $$F(\alpha)$$.

Let $$\alpha$$ be achievable by machine M. Let machine N do the following: it simulates M and, for every ordinal $$\beta$$ M accidentally writes, N computes $$F(\beta)$$ and writes it on the output. Because $$\alpha$$ is accidentally written by M, N accidentally writes $$F(\alpha)$$. Because M writes no ordinal above $$\alpha$$ and $$F$$ is nondecreasing, N writes no ordinal above $$F(\alpha)$$, so $$F(\alpha)$$ is achieved by N.

As an example, consider the following function: let $$F(\alpha)$$ be the least admissible ordinal above $$\alpha$$. This function is obviously nondecreasing, and we will argue that it's computable. Let $$\lambda'$$ be the supremum of ordinals writable with $$\alpha$$ as an oracle.
From relativisation of the results for $$\lambda$$, it follows that $$\lambda'$$ is accidentally writable with $$\alpha$$ and is a limit of admissibles. From that, there is an accidentally writable ordinal greater than $$F(\alpha)$$. We can use the universal machine U with oracle $$\alpha$$ to search for ordinals and then check whether they are admissible and greater than $$\alpha$$. Checking admissibility is tricky, but for a candidate ordinal $$\beta$$ we can just write $$L_\beta$$ (which is computable from $$\beta$$) and see if it satisfies the axioms of KP set theory. If we find one, we search for smaller ones greater than $$\alpha$$. If there is none, we have just computed $$F(\alpha)$$.

From this we see that achievable ordinals extend pretty far - since $$\lambda$$ is achievable, so is the least admissible ordinal above $$\lambda$$ (from the above theorem). We can easily extend this to the least recursively inaccessible above $$\lambda$$ and so on.

Theorem 12: $$\lambda^{0^\triangledown}$$ is achievable.

We will simulate two machines: the first one, U, eventually computes $$0^\triangledown$$ as in the proof of lemma 2. We also simulate the machine from the proof of theorem 2, which we denote M, but we allow it to use the partial output of U as an oracle. Again, if the output of U changes, we restart M. The output of U at every stage before $$\lambda=\gamma$$ will be computable, because there will be a clockable ordinal larger than the current stage which we can use to simulate U exactly up to that point. So before that stage the output of the whole machine won't exceed $$\lambda$$. At stage $$\lambda$$, however, the output of U will stabilize with $$0^\triangledown$$ and M will work with the full oracle. By the argument from theorem 2 we will accidentally write $$\lambda^{0^\triangledown}>\lambda$$ and no larger ordinal. Thus $$\lambda^{0^\triangledown}$$ is achievable.

Theorem 13: Every $$0^\triangledown$$-writable ordinal is achievable.

Let U be as above.
Note that at every stage below $$\lambda$$, the set of machines that have halted so far is computable (as clockables are unbounded in $$\lambda$$), and at stage $$\lambda=\gamma$$ we already have the full $$0^\triangledown$$ written. Let machine M compute $$\alpha\geq\lambda$$ from $$0^\triangledown$$. We now allow M to use the output of U so far, and M restarts whenever the output of U changes. Now, at every stage $$<\lambda$$, the output of U so far is computable, so M can't write an ordinal $$\geq\lambda$$; after stage $$\lambda$$, M can work with the full $$0^\triangledown$$, so it computes $$\alpha$$. So our composed machine writes only ordinals less than $$\lambda$$ together with $$\alpha\geq\lambda$$, and hence it achieves $$\alpha$$.

It follows that every ordinal not exceeding $$\lambda^{0^\triangledown}$$ is achievable. Can we do better? Yes we can!

Theorem 14: Let $$\alpha$$ be an eventually writable ordinal. Every ordinal below $$\lambda^{0^{\triangledown^\alpha}}$$ is achievable, where $$\triangledown^\alpha$$ denotes the transfinitely iterated lightface jump operator.

We proceed exactly as in theorem 13, but we use the machine U which we proved to exist in lemma 5.

Theorem 15: Every eventually writable ordinal is achievable.

From lemma 6 we know that every ordinal $$\alpha<\zeta$$ is below some $$\lambda^{0^{\triangledown^\beta}}$$ for $$\beta<\zeta$$, and every ordinal below $$\lambda^{0^{\triangledown^\beta}}$$ is achievable, so $$\alpha$$ is achievable.

Corollary 5: $$\zeta$$ is the least unachievable ordinal. $$\zeta\leq\tau\leq\zeta+1$$.

The first part follows from theorems 6 and 15. The second claim uses theorem 7.

So now we have proved conjecture 1, because every sequence unbounded in $$\zeta$$ consists of achievables. We have also learned everything we can about achievable ordinals: they are exactly the eventually writable ordinals. Levels, however, are still quite a mystery. Question 1 still stands, and here is its extension:

Question 2: Is every level, other than $$\Sigma$$, eventually writable?
Is there an ITTM which accidentally writes some ordinals above $$\zeta$$, but not all of them?

## Update

Theorem 16: Suppose M is an ITTM and the ordinals accidentally written by M are bounded above by an accidentally writable ordinal. Then they are bounded above by an eventually writable ordinal.

Run a universal machine U. Whenever it produces an accidentally writable ordinal $$\alpha$$, write it to the output, pause the simulation of U and start running M from the empty tape. If we see that M accidentally writes an ordinal greater than $$\alpha$$, stop simulating it and continue simulating U. By assumption, at some point U writes an ordinal $$\beta$$ greater than anything that M writes, which means that once we start simulating M at that point, the simulation just keeps going forever, and the output stabilizes with $$\beta$$ on it, making $$\beta$$ eventually writable.

Corollary 6: Let $$\alpha<\Sigma$$ be a level. Then $$\alpha<\zeta$$. Hence $$\tau=\zeta$$.

Let M be a machine with level $$\alpha$$. Then $$\alpha$$ is an accidentally writable upper bound on the ordinals accidentally written by M, so by theorem 16 there is an upper bound which is less than $$\zeta$$. But clearly $$\alpha$$ is the least upper bound, making $$\alpha<\zeta$$. In particular, $$\zeta$$ is not a level, and by theorem 15 every smaller ordinal is a level, so by definition $$\tau=\zeta$$.

The last sentence of the above proof settles question 1 (in the negative: $$\zeta$$ is not a level), and theorem 16 shows that the answer to the first part of question 2 is affirmative. The second part is true for a different reason: we can modify a universal machine to only write successor ordinals, and this machine will write, for example, $$\zeta+1$$, but not $$\zeta+\omega$$.

Corollary 7: If an ITTM accidentally writes ordinals unbounded in $$\zeta$$, then it accidentally writes ordinals unbounded in $$\Sigma$$. The same holds if an ITTM accidentally writes an ordinal which isn't eventually writable.

This is simply the contrapositive of theorem 16.
http://tng.tuxfamily.org/index.php?title=Developers:Documenting_source_code
# Doxygen

TODO: The way the script API documentation is generated will change in the future. How we will do this is unclear at the moment.

## Basic idea behind Doxygen

Doxygen analyzes code, searches it for special tags and generates documentation of the analyzed code according to the found tags. Doxygen was designed for languages of the C family (including C++ and Java) and can be extended to other languages of this kind. In order to make Doxygen work with TNG, one has to strictly follow some guidelines during programming.

Within the CMake build system, the target doc was introduced to compile all Doxygen parts. They consist of:

• The main documentation, /doc/src. It provides the major information and collects the individual documentation parts. Usually, no code parsing takes place at this stage. CMake lets Doxygen scan all *.dox files found in /doc/src. All contained files ending in *.in are preprocessed by CMake and stored in its build tree to be considered by Doxygen. The main documentation is generated in HTML format and stored in /doc/html. The CMake configuration file is /doc/CMakeLists.txt.

• The module API documentation, /modules/NAME/doc/src. Each module may provide its own Doxygen documentation. Besides all files found in /modules/NAME/doc/src, which will be processed by Doxygen, all source files recursively found in /modules/NAME/NAME will be parsed. Each module should provide an HTML link referencing the main documentation. Additionally, each module may provide documentation for its SWIG interface, which requires special considerations. The module documentations are stored in /doc/html/modules/NAME and /doc/html/modules_swig/NAME. The CMake configuration file is /modules/NAME/CMakeLists.txt, using the subroutines in /cmake_subroutines/DoxygenMakeDocForModule.cmake. The Doxygen configuration template /modules/NAME/doc/src/Doxyfile.in is usually a copy of /doc/src/Doxyfile.in.
## Special considerations for TNG

Special care is required in order to let Doxygen generate proper documentation. Particular care is required for the interaction with SWIG.

Custom commands

The following additional commands may be used in the documentation blocks:

This should be placed on each custom page in the main doc. It provides a small menu referencing the index page and most major documentation pages. When adding a custom page, it should be modified in the corresponding CMakeLists.txt.

\ref_to_parent_doc (in modules)

This should be placed on the index page in each module. It provides a link to the main documentation.

## Custom Doxygen sections

When generating a module documentation, special sections (flags) are defined which are known by Doxygen. Depending on which sections are defined, one can publish custom documentation code. These code blocks can be defined using the Doxygen commands \cond, \if and \ifnot. A small example illustrates it:

```cpp
//! \cond API_DOC
//! does something (only available in C++ doc)
void doSomething();
//! \endcond

//! this method is available in C++ and SWIG doc
void doSomething2();

//! \if SWIG_DOC This method is documented in SWIG doc only \endif
void swigCode();

/** \if API_DOC
 *  \brief computes a value.
 *  \param value is the value to be set.
 *  \else
 *  \brief in Lua, it returns a value.
 *  \endif
 */
void setAValue(double & value);
```

Enabled sections are:

API_DOC
This is enabled if the C++ API reference is created.

SWIG_DOC
This is enabled if the SWIG reference is created.

## Custom preprocessor definitions

SWIG parses *.i files in order to know the header files to generate a binding from. However, there is no easy way to let Doxygen understand SWIG input files. What can be exploited is that SWIG ignores the header parts which are guarded by

```cpp
#ifndef SWIG
...;
#endif
```

Doxygen can be configured to understand such predefines. When creating the SWIG API documentation, the preprocessor directive #define SWIG will be emulated by Doxygen.
Note: Doxygen will parse all source files of a module. Therefore, source files that are not parsed by SWIG must wrap their contents in

```cpp
#ifndef SWIG
...;
#endif
```

Otherwise their contents would be added to the scripting documentation, which is wrong.

## Important commands

Widely used Doxygen commands found in the source files are:

• \brief
• \code \endcode
• \def
• \if \endif
• \param
• \todo
http://math.stackexchange.com/questions/318251/forall-x-y-in-mathbb-r-x-lt-y-implies-exists-z-s-t-x-lt-z-lt-y
# $\forall x,y \in \mathbb{R},\ x \lt y \implies \exists z$ s.t. $x \lt z \lt y$.

I am going through Apostol's and wondering if I am answering this question correctly. It is as follows.

If $x$ and $y$ are arbitrary real numbers with $x \lt y$, prove that there is at least one real $z$ satisfying $x \lt z \lt y$.

Choose $n$ such that $n \gt \displaystyle \frac{1}{y-x}$. Let $\displaystyle z=x+\frac{1}{n}$; then $y>z>x$.

- Well, you need to prove that such an $n$ exists. And that your $z$ actually satisfies the inequalities you claim. This is far from the easiest way of doing this question though. – Chris Eagle Mar 1 '13 at 22:56
- This is one of those cases where drawing a picture will tell you how to do this in as easy a way as @Chris hints. – Lubin Mar 1 '13 at 22:58
- There is a more direct approach. Your proof works, but you have to show that there is such an $n$. On the other hand, there is a simple formula for one $z$ between $x$ and $y$. – Thomas Andrews Mar 1 '13 at 23:03
- Is there any easy way to show that such an n exists? – AlexHeuman Mar 1 '13 at 23:04
- If not, what would be a hint towards a simpler method? – AlexHeuman Mar 1 '13 at 23:04

Just take the average of $x$ and $y$.
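To spell out the accepted hint: with $z=\frac{x+y}{2}$ no Archimedean argument is needed, since

```latex
x \lt y
\;\implies\;
x = \frac{x+x}{2} \lt \frac{x+y}{2} \lt \frac{y+y}{2} = y ,
```

so $z=\frac{x+y}{2}$ satisfies $x \lt z \lt y$ directly. (The original approach also works, but producing an $n \gt \frac{1}{y-x}$ requires the Archimedean property of $\mathbb{R}$.)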
https://www.math.ku.dk/ansatte/?pure=da%2Fpublications%2Fon-some-new-invariants-for-strong-shift-equivalence-for-shifts-of-finite-type(390ceb10-e035-11de-ba73-000ea68e967b).html
## On some new invariants for strong shift equivalence for shifts of finite type

Publication: Journal article, peer-reviewed

### Documents

• 0809 Submitted manuscript, 124 KB, PDF document

We introduce a new computable invariant for strong shift equivalence of shifts of finite type. The invariant is based on an invariant introduced by Trow, Boyle, and Marcus, but has the advantage of being readily computable. We summarize briefly a large-scale numerical experiment aimed at deciding strong shift equivalence for shifts of finite type given by irreducible $2\times 2$-matrices with entry sum less than 25, and give examples illustrating the power of the new invariant, i.e., examples where the new invariant can disprove strong shift equivalence whereas the other invariants that we use cannot.

Original language: English. Journal of Number Theory, volume 132, pages 502-510 (9 pages). ISSN 0022-314X. Published - 2012
https://stacks.math.columbia.edu/tag/09TU
Proposition 98.8.4. Let $S$ be a scheme. Let $f : X \to B$ be a morphism of algebraic spaces over $S$. Let $\mathcal{F}$ be a quasi-coherent sheaf on $X$. If $f$ is of finite presentation and separated, then $\mathrm{Quot}_{\mathcal{F}/X/B}$ is an algebraic space. If $\mathcal{F}$ is of finite presentation, then $\mathrm{Quot}_{\mathcal{F}/X/B} \to B$ is locally of finite presentation. Proof. By Lemma 98.8.2 we have that $\mathrm{Quot}_{\mathcal{F}/X/B}$ is a sheaf in the fppf topology. Let $\textit{Quot}_{\mathcal{F}/X/B}$ be the stack in groupoids corresponding to $\mathrm{Quot}_{\mathcal{F}/X/S}$, see Algebraic Stacks, Section 93.7. By Algebraic Stacks, Proposition 93.13.3 it suffices to show that $\textit{Quot}_{\mathcal{F}/X/B}$ is an algebraic stack. Consider the $1$-morphism of stacks in groupoids $\textit{Quot}_{\mathcal{F}/X/S} \longrightarrow \mathcal{C}\! \mathit{oh}_{X/B}$ on $(\mathit{Sch}/S)_{fppf}$ which associates to the quotient $\mathcal{F}_ T \to \mathcal{Q}$ the coherent sheaf $\mathcal{Q}$. By Theorem 98.6.1 we know that $\mathcal{C}\! \mathit{oh}_{X/B}$ is an algebraic stack. By Algebraic Stacks, Lemma 93.15.4 it suffices to show that this $1$-morphism is representable by algebraic spaces. Let $T$ be a scheme over $S$ and let the object $(h, \mathcal{G})$ of $\mathcal{C}\! \mathit{oh}_{X/B}$ over $T$ correspond to a $1$-morphism $\xi : (\mathit{Sch}/T)_{fppf} \to \mathcal{C}\! \mathit{oh}_{X/B}$. The $2$-fibre product $\mathcal{Z} = (\mathit{Sch}/T)_{fppf} \times _{\xi , \mathcal{C}\! \mathit{oh}_{X/B}} \textit{Quot}_{\mathcal{F}/X/S}$ is a stack in setoids, see Stacks, Lemma 8.6.7. The corresponding sheaf of sets (i.e., functor, see Stacks, Lemmas 8.6.7 and 8.6.2) assigns to a scheme $T'/T$ the set of surjections $u : \mathcal{F}_{T'} \to \mathcal{G}_{T'}$ of quasi-coherent modules on $X_{T'}$. 
Thus we see that $\mathcal{Z}$ is representable by an open subspace (by Flatness on Spaces, Lemma 76.9.3) of the algebraic space $\mathit{Hom}(\mathcal{F}_ T, \mathcal{G})$ from Proposition 98.3.10. $\square$

Comment #7792 by Laurent Moret-Bailly: In the proof, the "coherent sheaf $\mathcal{Q}$" is only quasi-coherent of finite presentation.
https://cameramath.com/expert-q&a/Algebra/Determine-the-axis-of-symmetry-of-the-function-below-Determine-the-axis
Algebra Question

Determine the axis of symmetry of the function below

$$y=x^2-7x+12$$

$$x=\frac{7}{2}$$
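As a worked step (using the standard vertex formula for a parabola $y=ax^2+bx+c$, here with $a=1$, $b=-7$):

```latex
x \;=\; -\frac{b}{2a} \;=\; -\frac{-7}{2\cdot 1} \;=\; \frac{7}{2} .
```

Equivalently, completing the square gives $y=\left(x-\frac{7}{2}\right)^2-\frac{1}{4}$, so the axis of symmetry is the vertical line $x=\frac{7}{2}$.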
https://chat.stackexchange.com/transcript/3740/2019/7/7
4:15 AM A new tag was created by TuoTuo. 2 I'm told that $SU(n)$ acts transitively on $S^{2n-1} \subset \mathbb{C}^n$ by matrix multiplication. Yet I can't find a proof of this anywhere, so I was trying to construct a proof on my own by mimicking a version of the proof that I know for showing $SO(n)$ acts transitively on $S^n$. My proof... 9 hours later… 1:41 PM yesterday, by Martin Sleziak I suggest a synonym dg-algebras $\to$ differential-graded-algebras. The tag (dg-algebras) was created in July 2017 (shortly before allowed length of tag names was increased). A short tag-excerpt was also created at the time. The tag (differential-graded-algebras) was created recently (July 2019...
https://blog.hellholestudios.top/orange-boy-can-you-solve-it-out-ep-15/
# FACT

ZJS likes to show off his wonderful calculator. His calculator can do the FACT operation, which decomposes an integer into a product of primes. For example, Fact(30) will display 2*3*5, which is a string of length 5. Fact(60) will display 22*3*5 (the exponent is written directly after the prime), which is a string of length 6. Now he wonders: among all integers i under a given number N, what is the maximum length of the string Fact(i) produces? If there are many answers, print any.

Note: Fact(1)'s length is 0.

# Example

N=30: the output should be 30, because 30 is the only number whose length is 5.

N=65: the output should be 60, because 60 is the only number whose length is 6.

# Constraints

Subtask 1 (33%): 2<=N<=1000
Subtask 2 (33%): 2<=N<=1e9
Subtask 3 (33%): 2<=N<=1e12
Subtask 4 (1%): 2<=N<=1e18

# Hacker XGN

Anyone wanna use greedy? Test the data 9240. The answer should be 12 (one possible output is 8580).

A short brute-force program for you to look for the rules:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll; // long long, so larger inputs don't overflow

string toString(ll num) {
    string ans = "";
    while (num != 0) {
        char c = (num % 10 + '0');
        ans = c + ans;
        num /= 10;
    }
    return ans;
}

string fact(ll num) {
    if (num == 1) return ""; // Fact(1) is the empty string, length 0
    string s = "";
    for (ll i = 2; i * i <= num; i++) {
        if (num % i == 0) {
            ll cnt = 0;
            while (num % i == 0) {
                num /= i;
                cnt++;
            }
            if (cnt == 1) {
                s += toString(i) + "*";
            } else {
                // exponent follows the prime directly, e.g. 2^2 -> "22"
                s += toString(i) + toString(cnt) + "*";
            }
        }
    }
    if (num != 1) {
        s += toString(num) + "*"; // leftover prime factor
    }
    s.pop_back(); // drop the trailing '*'
    return s;
}

int main() {
    ll n;
    cin >> n;
    ll mx = 0;
    size_t mxl = 0;
    for (ll i = 2; i <= n; i++) {
        string s = fact(i);
        if (s.length() >= mxl) {
            mxl = s.length();
            mx = i;
        }
        // cout << i << " fact=" << fact(i) << " length=" << fact(i).length() << endl;
    }
    cout << mx << " fact=" << fact(mx) << " length=" << mxl << endl;
    return 0;
}
```

# Solution By MonkeyKing

A greedy solution can be constructed by working through the digits greedily.
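As a sanity check of the hack above, here is a small Python reimplementation of the same factorization string (assuming the prime-immediately-followed-by-exponent format that the C++ program produces); it confirms that 8580 beats 9240:

```python
def fact_str(num):
    """Factorization string in the calculator's format, e.g. fact_str(60) == "22*3*5"."""
    parts = []
    i = 2
    while i * i <= num:
        if num % i == 0:
            cnt = 0
            while num % i == 0:
                num //= i
                cnt += 1
            # the exponent is written directly after the prime, with no separator
            parts.append(str(i) if cnt == 1 else str(i) + str(cnt))
        i += 1
    if num != 1:
        parts.append(str(num))  # leftover prime factor
    return "*".join(parts)

print(fact_str(9240), len(fact_str(9240)))  # 23*3*5*7*11 11
print(fact_str(8580), len(fact_str(8580)))  # 22*3*5*11*13 12
```

So a greedy that simply maximizes the number of prime factors would pick 9240 (length 11) and miss 8580 (length 12).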
https://www.global-sci.org/intro/article_detail/ata/4495.html
Volume 30, Issue 3

Some Results on the Polar Derivative of a Polynomial

A. Mir & B. Dar, Analysis in Theory and Applications, 30 (2014), pp. 306-317. Published online: 2014-10. ISSN 1573-8175. doi: 10.4208/ata.2014.v30.n3.7

• Abstract

Let $P(z)$ be a polynomial of degree $n$ and for any complex number $\alpha$, let $D_{\alpha}P(z)=nP(z)+(\alpha -z)P'(z)$ denote the polar derivative of $P(z)$ with respect to $\alpha$. In this paper, we obtain certain inequalities for the polar derivative of a polynomial with restricted zeros. Our results generalize and sharpen some well-known polynomial inequalities.

• Keywords

Polynomial, zeros, polar derivative, Bernstein inequality.

• MSC

30A10, 30C10, 30D15
http://www.newcomplexlight.org/category/senza-categoria/
Topological Photonics Inverse Problem by Machine Learning

Topological concepts open many new horizons for photonic devices, from integrated optics to lasers. The complexity of large-scale topological devices asks for an effective solution of the inverse problem: how best to engineer the topology for a specific application? We introduce a novel machine learning approach to the topological inverse problem. We train a neural network system with the band structure of the Aubry-Andre-Harper model and then adopt the network for solving the inverse problem. Our application is able to identify the parameters of a complex topological insulator in order to obtain protected edge states at target frequencies. One challenging aspect is handling the multivalued branches of the direct problem and discarding unphysical solutions. We overcome this problem by adopting a self-consistent method to select only the physically relevant solutions. We demonstrate our technique in a realistic topological laser design and by resorting to the widely available open-source TensorFlow library. Our results are general and scalable to thousands of topological components. This new inverse design technique based on machine learning potentially extends the applications of topological photonics, for example, to frequency combs, quantum sources, neuromorphic computing and metrology. Pilozzi, Farrelly, Marcucci, Conti in ArXiv:1803.02875

Rainbow gravity modifies general relativity by introducing an energy-dependent metric, which is expected to have a role in the quantum theory of black holes and in quantum gravity at the Planck energy scale. We show that rainbow gravity can be simulated in the laboratory by nonlinear waves in nonlocal media, such as those occurring in Bose-condensed gases and nonlinear optics. We reveal that at a classical level, a nonlocal nonlinear Schrödinger equation may emulate the curved space-time in the proximity of a rotating black hole, as dictated by the rainbow gravity scenario.
We also demonstrate that a fully quantized analysis is possible. By the positive $\mathcal{P}$-representation, we study superradiance and show that the instability of a black-hole and the existence of an event horizon are inhibited by an energy dependent metric. Our results open the way to a number of fascinating experimental tests of quantum gravity theories and quantum field theory in curved manifolds, and also demonstrate that these theories may be novel tools for open problems in nonlinear quantum physics.
https://csedoubts.gateoverflow.in/2777/tspgecet-2019-cse-18
Given $R = ABCDE$ and the FD set $\left\{\begin{matrix} AB \rightarrow CD,\\ E \rightarrow A,\\ D \rightarrow E\end{matrix}\right.$. If we decompose $R$ into BCNF, how many tables will we get?

1. $3$
2. $5$
3. $4$
4. $2$
http://en.wikipedia.org/wiki/Randomized_weighted_majority_algorithm
# Randomized weighted majority algorithm

The randomized weighted majority algorithm is an algorithm from machine learning theory.[1] It improves the mistake bound of the weighted majority algorithm. For example, suppose each day we get a prediction from some number of experts about whether the stock market is up or down, and we use these to make our own prediction. Our goal is to do nearly as well as the best of the experts in hindsight.

## Motivation

In machine learning, the weighted majority algorithm (WMA) is a meta-learning algorithm which "predicts from expert advice". The algorithm:

initialize all experts to weight 1.
for each round:
    poll all the experts and predict based on a weighted majority vote of their predictions.
    cut in half the weights of all experts that made a mistake.

Suppose there are $n$ experts and the best expert makes $m$ mistakes. Then the weighted majority algorithm makes at most $2.4(\log_2 n + m)$ mistakes.

## Randomized weighted majority (RWM)

A bound of $2.4(\log_2 n + m)$ is not so good if the best expert makes a mistake 20% of the time. Can we do better? Yes — we would like to improve the dependence on $m$. Instead of predicting based on a majority vote, we use the weights as probabilities — this is randomized weighted majority. If $w_i$ is the weight of expert $i$ and $W = \sum_i w_i$, then we will follow expert $i$ with probability $\frac{w_i}{W}$. Our goal is to bound the worst-case expected number of mistakes, assuming that the adversary (the world) has to select one of the answers as correct before we make our coin toss. Why is this better in the worst case? The worst case for the deterministic weighted majority algorithm was when the weights split 50/50; now that case is not so bad, since we also have a 50/50 chance of getting it right. Also, to trade off the dependence on $m$ against $\log_2 n$, we will generalize to multiplying by $\beta < 1$, instead of necessarily $\frac{1}{2}$.
## Analysis

At the $t$-th round, define $F_t$ to be the fraction of weight on the wrong answers; then $F_t$ is the probability that we make a mistake on the $t$-th round. Let $M$ denote the total number of mistakes made so far. Since expectation is additive, $E[M] = \sum_t F_t$. On the $t$-th round, $W$ becomes $W(1-(1-\beta)F_t)$. Reason: on the $F_t$ fraction of the weight, we are multiplying by $\beta$. So $W_{final} = n(1-(1-\beta)F_1)(1-(1-\beta)F_2)\cdots$ Let $m$ be the number of mistakes of the best expert so far; that expert's final weight is $\beta^m$, so we can use the inequality $W_{final} \geq \beta^m$. Now we solve. First, take the natural log of both sides. We get: $m\ln\beta \leq \ln(n) + \sum_t \ln(1-(1-\beta)F_t)$. Using the expansion $\ln(1-x) = -x - \frac{x^2}{2} - \frac{x^3}{3} - \dots$, we have $\ln(1-(1-\beta)F_t) \leq -(1-\beta)F_t$, so $m\ln\beta \leq \ln(n) - (1-\beta)\sum_t F_t$. Now substitute $E[M] = \sum_t F_t$, and the result is: $E[M] \leq \frac{m\ln(1/\beta) + \ln(n)}{1-\beta}$ Let's see if we made any progress: if $\beta = \frac{1}{2}$ we get $E[M] \leq 1.39m + 2\ln(n)$; if $\beta = \frac{3}{4}$ we get $E[M] \leq 1.15m + 4\ln(n)$, so the dependence on $m$ improves. Roughly, bounds of the form $(1+\epsilon)m + \epsilon^{-1}\ln(n)$ are achievable.

## Uses of randomized weighted majority

Randomized weighted majority can be used to combine multiple algorithms and do nearly as well as the best of them in hindsight. It can also be applied in situations where the experts are making choices that cannot be combined (or cannot be combined easily), for instance in repeated game playing or the online shortest path problem. In the online shortest path problem, each expert tells you a different way to drive to work. You pick one route using randomized weighted majority; later you find out how well you would have done, and penalize the experts appropriately. To do this right, we want to generalize from a "loss" of 0 or 1 to losses in [0,1].
The goal is for the expected loss to be not much worse than the loss of the best expert. We generalize by penalizing with $\beta^{loss}$, meaning that two examples of loss $\frac{1}{2}$ give the same weight as one example of loss 1 and one example of loss 0 (the analysis still goes through).

## Extensions

- "Bandit" problem
- Efficient algorithms for some cases with many experts
- Sleeping experts/"specialists" setting
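The procedure described above is short enough to state directly in code. A Python sketch (the function name and interface are mine, not from the article):

```python
import random

def rwm(expert_predictions, outcomes, beta=0.5, seed=0):
    """Randomized weighted majority: follow expert i with probability w_i / W.

    expert_predictions[t][i] is expert i's prediction in round t and
    outcomes[t] is the correct answer; returns (mistakes, final weights).
    """
    rng = random.Random(seed)
    n = len(expert_predictions[0])
    weights = [1.0] * n
    mistakes = 0
    for preds, truth in zip(expert_predictions, outcomes):
        # sample an expert with probability proportional to its weight
        r = rng.random() * sum(weights)
        acc = 0.0
        guess = preds[-1]  # fallback guards against float round-off
        for w, p in zip(weights, preds):
            acc += w
            if r <= acc:
                guess = p
                break
        if guess != truth:
            mistakes += 1
        # multiply the weight of every expert that was wrong by beta
        weights = [w * beta if p != truth else w
                   for w, p in zip(weights, preds)]
    return mistakes, weights
```

With $\beta = \frac{1}{2}$ the analysis above bounds the *expected* mistake count by $1.39m + 2\ln n$; any single run is random, but the weight of a perfect expert stays at 1 while the others decay geometrically.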
https://math.stackexchange.com/questions/4344130/an-example-of-adding-points-on-an-elliptic-curve-in-elliptic-curves-number-theo
# An example of adding points on an elliptic curve in Elliptic Curves, Number Theory and Cryptography by Lawrence Washington.

This question addresses an example that appears in Lawrence Washington's book, Elliptic Curves, Number Theory and Cryptography, on page 16, Example 2.1. The author asserts that the following statements are true on the elliptic curve $$y^2 ={\frac{x(x+1)(2x+1)}{6}}:$$ $$(0,0)+(1,1)=({\frac{1}{2}},-{\frac{1}{2}}),$$ and $$({\frac{1}{2}},-{\frac{1}{2}})+(1,1)=(24,-70).$$ I understand how to work these calculations out by using the chord and tangent process on this elliptic curve, as we worked out when we studied Chapter 1. And while I get the observation that $$(x,y)$$ is a rational point on this curve if and only if $$(x,-y)$$ is as well (as the left-hand side is $$y^2$$), I am not clear why these formulae hold as a consequence of the addition laws given on page 14 of the book, because this curve (which arises in the context of the Lucas pyramid problem) is not in short Weierstrass form. In fact, if we write it as $$6y^2=x(x+1)(2x+1),$$ wouldn't we have to multiply both sides by $$(6^3)(2^2)$$ and change variables to convert it to generalized Weierstrass form? Wouldn't this then affect the formulae for the addition laws that we derive? They may not be the formulae we use when we apply the chord and tangent process — the points themselves will be transformed by the change in variables. I guess I don't feel that these calculations are applications of the group law, as they clearly are for the congruent number elliptic curves in the book; those curves ($$y^2=x^3-n^2x$$) are in short Weierstrass form. Is there something obvious that I am missing? Sorry if I am nitpicking! Just trying to understand what is happening.
• I am not familiar with this book, but if you have a smooth projective plane cubic curve with a fixed point $O$ (i.e., an elliptic curve), then the "chord and tangent" process for adding points is a group law - so you don't need your curve to be in Weierstrass form, just go ahead and do the operation. Dec 29, 2021 at 4:15

Let us explicitly add $$P_1=(0,0)$$ and $$P_2=(1,1)$$, which are ($$\Bbb Q$$-rational) points in $$E(\Bbb Q)$$ for the elliptic curve $$E$$ with affine equation: $$E\ :\qquad y^2 =\frac 16x(x+1)(2x+1)$$ that appears on page 16 in the cited book. Strictly speaking, the above equation is not in the shape considered in the previous pages of the book, where the most general equation considered so far was $$(2.1)$$: $$y^2 +a_1xy + a_3y = x^3+a_2x^2+a_4x+a_6\ .$$ (And there is no definition of an elliptic curve in the previous 15 pages, so take $$(2.1)$$ as a "working definition".) But there is a simple change of variables $$y=3Y$$, $$x=3X$$, to put the equation in such a shape, for instance by dividing both sides by $$3^2$$ to get: \begin{aligned} \left(\frac y3\right)^2 &=\frac x3\left(\frac x3+\frac 13\right)\left(\frac x3+\frac 16\right)\ ,\text{ i.e.}\\ Y^2 &= X\left(X+\frac 13\right)\left(X+\frac 16\right)\ . \end{aligned} We tacitly pass from the curve $$E$$ with equation "in the $$(x,y)$$-world" to the equation "in the $$(X,Y)$$-world" as needed, and let $$E'$$ be the latter corresponding curve. For $$E'$$ we are in the setting of $$(2.1)$$, so we proceed as there. The corresponding points are $$P_1'=(0,0)$$, $$P'_2=\left(\frac 13,\frac 13\right)$$. To compute $$P_1'+P_2'$$ we have to solve a system of equations (given by the equation of the line $$P_1'P_2'$$ and by the equation of $$E'$$). But with the given substitution, we may want to solve the corresponding system in the $$(x,y)$$-world, since there are no denominators. (I hate denominators when typing.) Then we proceed as follows.
The line through these points is the line $$y=x$$ (with slope $$1$$) and in order to get the intersection point, we solve the system with the two equations: \left\{ \begin{aligned} y &= x\ ,\\ y^2 &=\frac 16x(x+1)(2x+1)\ . \end{aligned} \right. We expect two of the solutions to correspond to the given points $$P_1$$, $$P_2$$, and the third solution is a point $$R_3$$, $$R_3 =\left(\frac 12,\frac 12\right)\ ,$$ as seen by verifying it. (Bézout ensures there are (at most) three solutions.) Now, in order to get the sum $$P_1+P_2$$ in $$(E(\Bbb Q),+)$$ we have to build $$P_3:=-R_3$$, so we draw the line through $$R_3$$ and the infinity point, its equation is $$x=\frac 12$$, and intersect it again with the elliptic curve. We obtain the claimed point $$P_3:=P_1+P_2:=-R_3=\left(\frac 12,-\frac 12\right)\ .$$ Note that on page 15, some lines above, the author mentions the formula for computing the opposite of a point $$P=(x,y)$$ on the curve $$(2.1)$$, which is: $$-P=(x,\ -y-a_1x-a_3)\ .$$ Our curve $$E'$$ is in the form $$(2.1)$$, and for it $$a_1=a_3=0$$, so the formula for the opposite of a point $$(X,Y)$$ is $$(X,-Y)$$. Now pass to the $$(x,y)$$-world to see that it is also valid inside it. At a later point, an elliptic curve $$(E,O)$$ is defined as a "good" cubic curve $$E$$ together with a specified rational point $$O$$ on it, and the addition of two rational points $$P_1,P_2$$ is defined as follows. Draw the line $$P_1P_2$$ and intersect it with $$E$$. There is a third point in the intersection (possibly needing to count multiplicities for this), denote it by $$R_3$$. Then draw the line $$OR_3$$ and intersect it with $$E$$; the third point is the point $$P_3:=P_1+P_2$$. It turns out, after these definitions are made, that this operation is indeed a group operation, and then $$P_3=-R_3$$. Since cryptography is the final target, it may be useful to have code reproducing the given situation.
We add on $$E'$$:

EE = EllipticCurve([0, 1/3 + 1/6, 0, 1/3/6, 0])
P1 = EE.point((0, 0))
P2 = EE.point((1/3, 1/3))
print(f'P1 + P2 = {P1 + P2}')
print(f'P1 + 2*P2 = {P1 + 2*P2}')

And we obtain:

P1 + P2 = (1/6 : -1/6 : 1)
P1 + 2*P2 = (8 : -70/3 : 1)

This is the projective version of the points. They are $$(1/6, -1/6)$$ and $$(8, -70/3)$$. Now multiply each component by $$3$$ to get the points from Example 2.1, page 16 in the book, which are $$(1/2, -1/2)$$ and $$(24, -70)$$.

• This is a different "change of variables" than the one given in Ex. 1.5 on page 8. Very interesting! I will try to understand both in the context of this particular curve. Jan 29 at 14:48
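The same two additions can also be checked without Sage, working directly in the $$(x,y)$$-world with exact rational arithmetic. A sketch in plain Python — the sum-of-roots shortcut below is mine, obtained by substituting the chord $$y=\lambda x+\nu$$ into $$6y^2=x(x+1)(2x+1)$$:

```python
from fractions import Fraction as F

def on_curve(p):
    """Check 6*y^2 == x*(x + 1)*(2x + 1)."""
    x, y = p
    return 6 * y * y == x * (x + 1) * (2 * x + 1)

def add_distinct(p, q):
    """Chord addition of two distinct points on 6y^2 = x(x+1)(2x+1).

    Substituting the chord y = lam*x + nu into the curve equation gives
    2x^3 + (3 - 6*lam^2)x^2 + (1 - 12*lam*nu)x - 6*nu^2 = 0,
    so the three intersection abscissas sum to (6*lam^2 - 3)/2.
    """
    (x1, y1), (x2, y2) = p, q
    lam = (y2 - y1) / (x2 - x1)            # slope of the chord
    nu = y1 - lam * x1                     # intercept of the chord
    x3 = (6 * lam * lam - 3) / 2 - x1 - x2
    y3 = -(lam * x3 + nu)                  # reflect the third intersection
    return (x3, y3)

P1, P2 = (F(0), F(0)), (F(1), F(1))
P3 = add_distinct(P1, P2)    # the book's (1/2, -1/2)
P4 = add_distinct(P3, P2)    # the book's (24, -70)
```

This only handles the chord case (two distinct points); doubling a point would need the tangent slope instead.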
https://nanonaren.wordpress.com/
## Measurable Spaces – Problem (42/365)

The chapter on measurable spaces introduces a $\sigma$-algebra over the real numbers $R = (-\infty, \infty)$. The Borel algebra, $\mathcal{B}(R)$, is the smallest $\sigma$-algebra $\sigma(\mathcal{A})$ where $\mathcal{A}$ is the algebra generated by finite disjoint sums of intervals of the form $(a,b]$. By the direct product of algebras we also get algebras over higher dimensions $\mathcal{B}(R^n) = \mathcal{B}(R) \otimes \dots \otimes \mathcal{B}(R)$. We also get a legal $\sigma$-algebra for the infinite direct product $\mathcal{B}(R^\infty) = \mathcal{B}(R) \otimes \mathcal{B}(R) \otimes \dots$. The book asks to show that certain sets are members of $\mathcal{B}(R^\infty)$. Show that the following are Borel sets. $\displaystyle \{ x \in R^\infty : \sup \inf x_n > a \} \\ \{ x \in R^\infty : \inf \sup x_n \le a \}$ Take the first case. Note that $\sup \inf x_n > a$ is not satisfied if for every $i$, $\inf \{ x_k : k \ge i \} \le a$. This can only happen if there are an infinite number of coordinates whose value is $\le a$. For a finite subset $I \subset \mathbb{N}$ of the natural numbers, let $\displaystyle S(I) = \{ x \in R^\infty : x_i \le a \text{ if } i \in I \text{ and } x_i > a \text{ otherwise} \}.$ Each set $S(I)$ is a Borel set, since it is a countable intersection of cylinder sets, one per coordinate. Therefore, $\{ x \in R^\infty : \sup \inf x_n > a \} = \cup_{I} S(I)$ (a countable union, since there are only countably many finite subsets of $\mathbb{N}$) is a Borel set. A similar argument is made for the other. ## Measurable Spaces – Problem (41/365) Let $\mathcal{D} = \{ D_1, D_2, \dots \}$ be a countable decomposition of $\Omega$ and $\mathcal{A} = \sigma(\mathcal{D})$ be the $\sigma$-algebra generated by $\mathcal{D}$. Are there only countably many sets in $\mathcal{A}$? The answer is no. We can show this by exhibiting uncountably many distinct sets in $\mathcal{A}$.
Every natural number $n$ can be written as $\sum_{i=0}^\infty a_i 2^i$ where $a_i \in \{0,1\}$ and only a finite number of $a_i = 1$ (because if an infinite number of $a_i = 1$ the sum is $\infty$), so the natural numbers correspond to the *finite* $\{0,1\}$-sequences. Note that since $\mathcal{D}$ is a decomposition, we can write every set in $\mathcal{A}$ as a countable (because this is a $\sigma$-algebra) union of a subset of $\mathcal{D}$. This means we can encode every set in $\mathcal{A}$ as $\cup_{i=1}^\infty (A_i \cap D_i)$ where $A_i \in \{ \emptyset, \Omega \}$, i.e., as an arbitrary $\{0,1\}$-sequence. Since $\mathcal{D}$ is countable, there is a bijection between the natural numbers and $\mathcal{D}$; however, $\mathcal{A}$ is not countable, since the set of indices with $A_i = \Omega$ may be an arbitrary (possibly infinite) subset of $\mathbb{N}$, and there are uncountably many such subsets. ## Measurable Spaces – Problem (40/365) The past few problems looked at how probability works when we have an infinite sample space. It didn't cover how one can actually assign probabilities to such spaces. That will be the next task. Before that, the book covers the topic of $\sigma$-algebras which form the algebra of events on top of which we can assign a measure. Given a set $\Omega$, and a set of subsets $\mathcal{A}$, we say that $\mathcal{A}$ is an algebra if $\Omega \in \mathcal{A}$ and it is closed under unions and complementation. A $\sigma$-algebra adds to that the requirement that it also be closed under countable unions. The pair $(\Omega, \mathcal{A})$ is called a measurable space. Let $\mathcal{A}_1, \mathcal{A}_2$ be $\sigma$-algebras of $\Omega$. Are the following systems of sets $\sigma$-algebras? $\displaystyle \mathcal{A}_1 \cap \mathcal{A}_2 = \{ A : A \in \mathcal{A}_1 \text{ and } A \in \mathcal{A}_2 \} \\ \mathcal{A}_1 \cup \mathcal{A}_2 = \{ A : A \in \mathcal{A}_1 \text{ or } A \in \mathcal{A}_2 \}$ The intersection of $\sigma$-algebras is also a $\sigma$-algebra: $\Omega \in \mathcal{A}_1 \cap \mathcal{A}_2$, and a countable union $A_1 \cup A_2 \cup \dots$ of sets in the intersection is contained in both $\mathcal{A}_1$ and $\mathcal{A}_2$, hence in the intersection (closure under complements is similar).
However, the union of $\sigma$-algebras is not always a $\sigma$-algebra. For instance, let $\mathcal{A}_1 = \{ A, \bar{A}, \emptyset, \Omega \}$ and $\mathcal{A}_2 = \{ B, \bar{B}, \emptyset, \Omega \}$, chosen so that $A \cup B$ is none of $A, \bar{A}, B, \bar{B}, \emptyset, \Omega$; then their union does not contain $A \cup B$. ## Probability Foundations – Problem (39/365) Let $\mu$ be a finite measure on an algebra $\mathcal{A}$, $A_n \in \mathcal{A}$ for $n = 1,2,\dots$ and $A = \lim_n A_n$ (i.e. $A=\overline{\lim}A_{n}=\underline{\lim}A_{n}$). Show that $\mu(A) = \lim_n \mu(A_n)$. $\displaystyle \overline{\lim}A_{n} = \cap_{n=1}^{\infty}\cup_{k=n}^{\infty}A_{k} \\ = \left(\cup_{k=1}^{\infty}A_{k}\right)\cap\left(\cup_{k=2}^{\infty}A_{k}\right)\cap\dots \\ = \lim_{n}\left(\cup_{k=n}^{\infty}A_{k}\right) \\ \underline{\lim}A_{n} = \cup_{n=1}^{\infty}\cap_{k=n}^{\infty}A_{k} \\ = \left(\cap_{k=1}^{\infty}A_{k}\right)\cup\left(\cap_{k=2}^{\infty}A_{k}\right)\cup\dots \\ = \lim_{n}\left(\cap_{k=n}^{\infty}A_{k}\right)$ Since $A=\overline{\lim}A_{n}=\underline{\lim}A_{n}$, $\displaystyle \lim_{n}\left(\cup_{k=n}^{\infty}A_{k}\right) = \lim_{n}\left(\cap_{k=n}^{\infty}A_{k}\right).$ Because $\cap_{k=n}^{\infty}A_{k} \subseteq A_n \subseteq \cup_{k=n}^{\infty}A_{k}$, the sequence $A_n$ is squeezed between two monotone sequences with the same limit $A$; applying $\mu$ and using the continuity of the finite measure along monotone sequences, $\mu(A) = \lim_n \mu(A_n)$.
$\displaystyle \text{Since } A \in \mathcal{A},\ A - A_1 \in \mathcal{A} \text{, and } A - \cup_{i=1}^k A_i \in \mathcal{A} \\ \text{Let } B_i = \cup_{k=i}^\infty A_k \text{ (so } B_1 = A \text{ and } B_i - B_{i+1} = A_i\text{)} \\ \sum_{i=1}^\infty \mu(A_i) \\ = \sum_{i=1}^\infty \mu(B_i - B_{i+1}) \\ = \sum_{i=1}^\infty \left(\mu(B_i) - \mu(B_{i+1})\right) \\ = \mu(B_1) - \mu(B_2) + \mu(B_2) - \mu(B_3) + \mu(B_3) - \mu(B_4) + \dots \\ = \mu(B_1) - \lim_n \mu(B_n) \\ = \mu(A) - \lim_n \mu(B_n) \\ \le \mu(A)$ ## Probability Foundations – Problem (37/365) A problem similar to the previous post. Let $\Omega$ be a countable set and $\mathcal{A}$ the collection of all its subsets. Put $\mu(A) = 0$ if $A$ is finite and $\mu(A) = \infty$ if $A$ is infinite. Show that the set function $\mu$ is finitely additive but not countably additive. To see that it is finitely additive, let $A,B \in \mathcal{A}$ be disjoint. $\displaystyle \mu(A \cup B) = \mu(A) + \mu(B) = 0 + 0 = 0 \text{ if both finite} \\ \mu(A \cup B) = \mu(A) + \mu(B) = \infty \text{ if either one infinite}$ To show that it is not countably additive, consider the case where $\Omega = \mathbb{N}$ is the set of natural numbers. Then $\displaystyle \mu(\{ 1, 2, 3, \dots \}) = \infty, \\ \text{but } \sum_{i=1}^\infty \mu(\{i\}) = 0$ ## Probability Foundations – Problem (36/365) I did say it would be one post a day and it already looks like I'll only achieve it as an expectation of posts per day. So, let me catch up first by solving more problems. This time from chapter 2 of the book: Mathematical Foundations of Probability Theory. This chapter introduces us to how we can extend the probability framework we had for finite sample spaces. The key problem we face is that in the finite case we were simply able to assign a probability to each $\omega \in \Omega$ and therefore get $P(X \subseteq \Omega) = \sum_{x \in X} p(x)$. But we can no longer follow this approach for an infinite sample space. Anyway, the problem asks the following. Let $\Omega$ be the set of rational numbers in $[0,1]$.
Let $\mathcal{A}$ be the algebra of sets where each set takes on one of these forms: $\{r : a < r < b \}$, $\{r : a \le r < b \}$, $\{r : a < r \le b \}$, $\{r : a \le r \le b \}$ and $P(A) = b - a$. Show that $P(A)$ is a finitely additive set function but not countably additive. Let $A < B \in \mathcal{A}$ be disjoint sets with $A \cup B \in \mathcal{A}$; for the union of two disjoint intervals to again be an interval they must be adjacent, so $\sup A = \inf B$. Then we see that $P(\cdot)$ is finitely additive. $\displaystyle \text{We can write } P(A) = b - a = \sup A - \inf A \\ P(A \cup B) = \sup (A \cup B) - \inf (A \cup B) \\ = \sup B - \inf A \\ = (\sup A + P(B)) - \inf A \\ = P(A) + P(B)$ To show that $P(\cdot)$ is not countably additive, we need an infinite sequence of disjoint sets in $\mathcal{A}$ whose union is also in $\mathcal{A}$, but whose sum of probabilities is not equal to the probability of the union. The intervals $(\frac{1}{2},1], (\frac{1}{3}, \frac{1}{2}], \dots, (\frac{1}{n+1}, \frac{1}{n}], \dots$ do not work here: their union (within $\Omega$) has probability $1 - 0 = 1$, and the sum of their probabilities telescopes to the same value, $\displaystyle \sum_{n=1}^\infty P\left(\left( \frac{1}{n+1}, \frac{1}{n} \right]\right) = \sum_{n=1}^\infty \left(\frac{1}{n} - \frac{1}{n+1}\right) = 1.$ Instead, recall that $\Omega$ consists of the rationals in $[0,1]$ and is therefore countable: $\Omega = \{r_1, r_2, \dots\}$ is a disjoint countable union of the singletons $\{r\} = \{x : r \le x \le r\}$, each of which belongs to $\mathcal{A}$ with $P(\{r\}) = r - r = 0$. Countable additivity would then force $\displaystyle P(\Omega) = \sum_{i=1}^\infty P(\{r_i\}) = 0,$ contradicting $P(\Omega) = 1 - 0 = 1$. So $P$ is finitely additive but not countably additive.
https://mathzsolution.com/proof-of-z-mz%E2%8A%97zz-nz%E2%89%85z-gcdmathbbz-mmathbbz-otimes_mathbbz-mathbbz-n-mathbbz-cong-mathbbz-gcdmnmathbbz/
# Proof of $(\mathbb{Z}/m\mathbb{Z}) \otimes_\mathbb{Z} (\mathbb{Z} / n \mathbb{Z}) \cong \mathbb{Z}/ \gcd(m,n)\mathbb{Z}$

I've just started to learn about the tensor product and I want to show the isomorphism in the title. Can you tell me if my proof is right: $\mathbb{Z}/m\mathbb{Z}$ and $\mathbb{Z} / n \mathbb{Z}$ are both finite free $\mathbb{Z}$-modules with the basis consisting of one single element $\{ 1 \}$. So $(\mathbb{Z}/m\mathbb{Z}) \otimes_\mathbb{Z} (\mathbb{Z} / n \mathbb{Z})$ has the basis $\{ 1 \otimes 1 \}$. Therefore, any element in $(\mathbb{Z}/m\mathbb{Z}) \otimes_\mathbb{Z} (\mathbb{Z} / n \mathbb{Z})$ is of the form $(ab) 1 \otimes 1$ and any element in $\mathbb{Z}/ \gcd(m,n)\mathbb{Z}$ is of the form $k 1 = k$ where $k \in \{ 0, \dots , \gcd(n,m) \}$. I would like to construct an isomorphism that maps $ab$ to some $k$. Let this map be $ab (1 \otimes 1) \mapsto ab \bmod \gcd(n,m)$. This is a homomorphism between modules: it maps $0$ to $0$ because it maps the empty sum to the empty sum. It also fulfills $f(a + b) = f(a) + f(b)$ because there is only one element, $a = 1$. It is surjective. So all I need to show is that it is injective. But that is clear too because if $ab \equiv 0 \bmod \gcd(m,n)$ then both $a \equiv 0 \bmod n$ and $b \equiv 0 \bmod m$ so the kernel is trivial. Many thanks for your help!! The better way to define a homomorphism from $\mathbb{Z}/m\mathbb{Z}\otimes \mathbb{Z}/n\mathbb{Z}$ to $\mathbb{Z}/\gcd(m,n)\mathbb{Z}$ is via the universal property.
Note that the map $\mathbb{Z}/m\mathbb{Z}\times \mathbb{Z}/n\mathbb{Z}\to \mathbb{Z}/\gcd(m,n)\mathbb{Z}$ defined by $(a \bmod m,\ b \bmod n) \mapsto ab \bmod \gcd(m,n)$ is well-defined and also bilinear; thus by the universal property of the tensor product, there is a linear map $f:\mathbb{Z}/m\mathbb{Z}\otimes \mathbb{Z}/n\mathbb{Z}\to \mathbb{Z}/\gcd(m,n)\mathbb{Z}$ such that $f(a\otimes b) = ab \bmod \gcd(m,n)$. Verify that the linear map $g:\mathbb{Z}/\gcd(m,n)\mathbb{Z}\to\mathbb{Z}/m\mathbb{Z}\otimes\mathbb{Z}/n\mathbb{Z}$ defined by $g(k) = k(1\otimes 1)$ is well-defined, and we also have $g\circ f=1$, $f\circ g=1$; thus $f$ is an isomorphism. To see that $g$ is well-defined, you may use the equality $\gcd(m,n)=am+bn$ for some integers $a,b\in\mathbb{Z}$.
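As a quick numerical sanity check (not a substitute for the proof), we can verify that the candidate map really is independent of the chosen representatives and additive in each argument, for one concrete pair $m, n$:

```python
from math import gcd

def f(a, b, m, n):
    """The candidate bilinear map (a mod m, b mod n) -> a*b mod gcd(m, n)."""
    return (a * b) % gcd(m, n)

m, n = 12, 18
d = gcd(m, n)  # here d = 6
for a in range(2 * m):
    for b in range(2 * n):
        # well-defined: shifting a representative by m (or n) changes a*b by
        # a multiple of m (or n), hence by a multiple of gcd(m, n)
        assert f(a + m, b, m, n) == f(a, b, m, n)
        assert f(a, b + n, m, n) == f(a, b, m, n)
        # additive in the first argument (half of bilinearity over Z)
        assert (f(a, b, m, n) + f(1, b, m, n)) % d == f(a + 1, b, m, n)
```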
https://math.stackexchange.com/questions/3221497/red-black-tree-insertion-deletion-complexity-proof
Red-Black-Tree Insertion & Deletion Complexity proof

I'm struggling with two propositions in my algorithms book and I'm unsure how to prove them. For insertion, it seems entirely plausible that it takes up to O(log(n)) recolorings and at most one restructuring (since the tree is already a sorted binary search tree before the new node is inserted), but I don't see how to turn that intuition into a proof. Can someone help?

Tarjan, Robert. (1983). Updating a balanced search tree in O(1) rotations.
https://questioncove.com/updates/4f0837b4e4b014c09e639f0a
could you please he… - QuestionCove

OpenStudy (anonymous): $\Large \begin{array}{l} A = \frac{1}{2}(b + c)h\\ c = \frac{{2A}}{h} - b \end{array}$

OpenStudy (anonymous): my friend thinks the answer is c = (2A - bh)/h

OpenStudy (anonymous): $\Large \begin{array}{l} A = \frac{1}{2}(b + c)h\\ (b + c) = \frac{A}{{\frac{1}{2}h}}\\ c = \frac{{2A}}{h} - b \end{array}$ Let's assign some arbitrary values and test it: b = 2, c = 3, h = 4. A = (1/2)(2+3)(4) = 10, and c = ((2*10)/4) - 2 = (20/4) - 2 = 5 - 2 = 3. Seems legit.

OpenStudy (anonymous): easy: 1. multiply both sides by 2, 2. divide both sides by h, 3. subtract b from both sides. done!

OpenStudy (anonymous): could someone please guide me through each step
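The "assign arbitrary values and test it" idea from the thread can be replayed in a few lines of Python (the function name is mine). Note also that the friend's answer c = (2A - bh)/h is algebraically the same expression as c = 2A/h - b:

```python
def solve_for_c(A, b, h):
    # A = (1/2)(b + c)h  ->  2A = (b + c)h  ->  2A/h = b + c  ->  c = 2A/h - b
    return 2 * A / h - b

# replay the thread's check: b = 2, c = 3, h = 4 gives A = 10
A = 0.5 * (2 + 3) * 4
c = solve_for_c(A, 2, 4)
```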
https://en.wikipedia.org/wiki/Kosaraju%27s_algorithm
# Kosaraju's algorithm

In computer science, Kosaraju's algorithm (also known as the Kosaraju–Sharir algorithm) is a linear time algorithm to find the strongly connected components of a directed graph. Aho, Hopcroft and Ullman credit it to an unpublished paper from 1978 by S. Rao Kosaraju. The same algorithm was independently discovered by Micha Sharir and published by him in 1981. It makes use of the fact that the transpose graph (the same graph with the direction of every edge reversed) has exactly the same strongly connected components as the original graph.

## The algorithm

The primitive graph operations that the algorithm uses are to enumerate the vertices of the graph, to store data per vertex (if not in the graph data structure itself, then in some table that can use vertices as indices), to enumerate the out-neighbours of a vertex (traverse edges in the forward direction), and to enumerate the in-neighbours of a vertex (traverse edges in the backward direction); however the last can be done without, at the price of constructing a representation of the transpose graph during the forward traversal phase. The only additional data structure needed by the algorithm is an ordered list L of graph vertices, that will grow to contain each vertex once.

If strong components are to be represented by appointing a separate root vertex for each component, and assigning to each vertex the root vertex of its component, then Kosaraju's algorithm can be stated as follows.

1. For each vertex u of the graph, mark u as unvisited. Let L be empty.
2. For each vertex u of the graph do Visit(u), where Visit(u) is the recursive subroutine:
   If u is unvisited then:
   1. Mark u as visited.
   2. For each out-neighbour v of u, do Visit(v).
   3. Prepend u to L.
   Otherwise do nothing.
3. For each element u of L in order, do Assign(u,u), where Assign(u,root) is the recursive subroutine:
   If u has not been assigned to a component then:
   1. Assign u as belonging to the component whose root is root.
   2. For each in-neighbour v of u, do Assign(v,root).
   Otherwise do nothing.

Trivial variations are to instead assign a component number to each vertex, or to construct per-component lists of the vertices that belong to it. The unvisited/visited indication may share a storage location with the final assignment of a root for a vertex.

The key point of the algorithm is that during the first (forward) traversal of the graph edges, vertices are prepended to the list L in post-order relative to the search tree being explored. This means it does not matter whether a vertex v was first Visited because it appeared in the enumeration of all vertices or because it was the out-neighbour of another vertex u that got Visited; either way v will be prepended to L before u is, so if there is a forward path from u to v then u will appear before v on the final list L (unless u and v both belong to the same strong component, in which case their relative order in L is arbitrary). As given above, the algorithm for simplicity employs depth-first search, but it could just as well use breadth-first search as long as the post-order property is preserved.

The algorithm can be understood as identifying the strong component of a vertex u as the set of vertices which are reachable from u both by backwards and forwards traversal. Writing ${\displaystyle F(u)}$ for the set of vertices reachable from ${\displaystyle u}$ by forward traversal, ${\displaystyle B(u)}$ for the set of vertices reachable from ${\displaystyle u}$ by backwards traversal, and ${\displaystyle P(u)}$ for the set of vertices which appear strictly before ${\displaystyle u}$ on the list L after phase 2 of the algorithm, the strong component containing a vertex ${\displaystyle u}$ appointed as root is ${\displaystyle B(u)\cap F(u)=B(u)\setminus (B(u)\setminus F(u))=B(u)\setminus P(u)}$.
Set intersection is computationally costly, but it is logically equivalent to a double set difference, and since ${\displaystyle B(u)\setminus F(u)\subseteq P(u)}$ it becomes sufficient to test whether a newly encountered element of ${\displaystyle B(u)}$ has already been assigned to a component or not.

## Complexity

Provided the graph is described using an adjacency list, Kosaraju's algorithm performs two complete traversals of the graph and so runs in Θ(V+E) (linear) time, which is asymptotically optimal because there is a matching lower bound (any algorithm must examine all vertices and edges). It is the conceptually simplest efficient algorithm, but is not as efficient in practice as Tarjan's strongly connected components algorithm and the path-based strong component algorithm, which perform only one traversal of the graph. If the graph is represented as an adjacency matrix, the algorithm requires Ο(V²) time.
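The two phases above can be sketched in Python. This is a minimal illustration, not the article's own code; the function name and the adjacency-list representation (a dict mapping each vertex to its out-neighbours) are my own choices:

```python
def kosaraju_scc(vertices, out_edges):
    """Map each vertex to the root of its strongly connected component,
    following the two phases described above: a forward DFS that prepends
    vertices to L in post-order, then a backward pass assigning roots."""
    # Build the in-neighbour (transpose) adjacency during setup, since the
    # second phase traverses edges backwards.
    in_edges = {u: [] for u in vertices}
    for u in vertices:
        for v in out_edges.get(u, []):
            in_edges[v].append(u)

    visited = set()
    L = []  # will contain every vertex exactly once

    def visit(u):
        if u in visited:
            return
        visited.add(u)
        for v in out_edges.get(u, []):
            visit(v)
        L.insert(0, u)  # prepend: post-order relative to the DFS tree

    for u in vertices:
        visit(u)

    root = {}  # vertex -> appointed root of its component

    def assign(u, r):
        if u in root:
            return
        root[u] = r
        for v in in_edges[u]:
            assign(v, r)

    for u in L:  # in order, so each unassigned u starts a new component
        assign(u, u)
    return root
```

For the cycle 0 → 1 → 2 → 0 with an extra edge 2 → 3, the function groups {0, 1, 2} into one component and leaves 3 in its own.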
2016-09-25 01:16:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 10, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.740013837814331, "perplexity": 578.1377780146205}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738659680.65/warc/CC-MAIN-20160924173739-00173-ip-10-143-35-109.ec2.internal.warc.gz"}
https://socratic.org/questions/an-isotope-of-iron-has-28-neutrons-if-the-atomic-mass-of-the-isotope-is-54-how-m
# An isotope of iron has 28 neutrons. If the atomic mass of the isotope is 54, how many protons does it have?

Nov 1, 2016

The iron atom is characterized by an atomic number $Z = 26$. That is, every iron nucleus contains $26$ protons: 26 massive, positively charged nuclear particles, by definition. Of course, the nucleus also contains 28 neutrons, 28 neutral, massive nuclear particles, to give the ${}^{54}Fe$ isotope, which is about 4% abundant; check on this.
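The arithmetic behind the answer is the bookkeeping identity that the mass number counts protons plus neutrons, sketched here in Python (the helper name is mine):

```python
def protons(mass_number, neutrons):
    # Mass number A = protons + neutrons, so the proton count
    # (the atomic number Z) is A minus the neutron count.
    return mass_number - neutrons

# Iron-54: A = 54 and N = 28 give Z = 26, iron's atomic number.
```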
2019-09-21 06:48:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 3, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4044952094554901, "perplexity": 2721.976930411846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574286.12/warc/CC-MAIN-20190921063658-20190921085658-00105.warc.gz"}
https://math.stackexchange.com/questions/3786234/complex-eigenvalues-of-a-matrix-in-conjugate-pairs-or-not
# Complex eigenvalues of a matrix in conjugate pairs (or not)

I have learnt that in a matrix, if there are complex eigenvalues, they should come as conjugate pairs. Also, I know that, in a diagonal matrix, the eigenvalues are the diagonal elements. So how about the following matrix? $$\begin{pmatrix} i & 0\\ 0& 2 \end{pmatrix}$$ Shouldn't the eigenvalues be $$i$$ and $$2$$, where $$i$$ doesn't have a conjugate pair?! I appreciate your help to clarify my mistake.

• Complex eigenvalues of a matrix with real entries come as conjugate pairs – J. W. Tanner Aug 10 at 14:21
• What are you talking about? This holds for real matrices. – A learner Aug 10 at 14:21

Recall that the eigenvalues of a matrix $$A$$ are the zeroes of its characteristic polynomial $$\chi_A(x) = \det (x I - A)$$. Of course it is entirely possible for the roots of $$\chi_A$$ to not occur in pairs of complex conjugates, as shown by your example. However, if we restrict the coefficients of $$\chi_A$$ to be real (e.g. if your matrix $$A$$ is real) then we will find that any complex roots occur in pairs of conjugates, by the complex conjugate root theorem. $$Av = \lambda v \implies \bar A \bar v = \bar \lambda \bar v$$ and so $$\lambda$$ being an eigenvalue of $$A$$ implies $$\bar\lambda$$ is an eigenvalue of $$\bar A$$. Thus, when $$A$$ is real (so $$\bar A = A$$), its eigenvalues come in conjugate pairs.
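For a 2×2 matrix the characteristic polynomial can be solved directly, which makes the conjugate-pair behaviour easy to check numerically. A minimal sketch (not from the thread; the helper name and example matrices are mine):

```python
import cmath

def eigenvalues_2x2(a, b, c, d):
    # Roots of the characteristic polynomial x^2 - (a+d)x + (ad - bc),
    # i.e. of det(xI - A) for A = [[a, b], [c, d]].
    tr = a + d
    det = a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

# A real matrix with complex spectrum: a 90-degree rotation.
lam1, lam2 = eigenvalues_2x2(0, -1, 1, 0)
# lam1 and lam2 are complex conjugates, as the theorem predicts.

# The question's non-real diagonal matrix diag(i, 2): its eigenvalues
# i and 2 are NOT a conjugate pair, because the entries are not real.
mu1, mu2 = eigenvalues_2x2(1j, 0, 0, 2)
```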
2020-09-25 14:26:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9475905299186707, "perplexity": 138.8920729362356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400226381.66/warc/CC-MAIN-20200925115553-20200925145553-00301.warc.gz"}
https://ehrlich-angefangen.icu/wiki/LaTeX-W%C3%B6rterbuch:_Abk%C3%BCrzungsverzeichnis-tre2230lq0q
# Texmaker list of symbols

Both the glossaries package and the glossaries-extra extension package provide the package option symbols, which creates a new list labelled symbols with the default title given by the language-sensitive \glssymbolsgroupname (Symbols). This list can be referenced with type=symbols. If you don't use this package option then you can use the default main glossary instead, but with the default title.

List of LaTeX mathematical symbols (OeisWiki). All the predefined mathematical symbols from the TeX package are listed below. More symbols are available from extra packages. Contents: 1 Greek letters; 2 Unary operators; 3 Relation operators; 4 Binary operators; 5 Negated binary.

I want to create a list of symbols used in my thesis. The list has to be pretty much like the list of figures or tables. I looked around and found out that a nomenclature can be used for this purpose. But I couldn't create a file. I was wondering if anyone could give me a minimal example and quickly explain how to create this list.

You might need to print the list of symbols or list of abbreviations for your LaTeX document. The nomencl package can be used for this purpose. You need to load the nomencl package in the preamble of your document. The command \makenomenclature will instruct LaTeX to open the nomenclature file filename.nlo corresponding to your LaTeX file filename.tex and to write the information from your.
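The glossaries symbols option described above can be sketched in a minimal document. This is an illustrative example only; the entry name and symbol are made-up placeholders:

```latex
\documentclass{article}
% The 'symbols' option creates a list titled by \glssymbolsgroupname.
\usepackage[symbols]{glossaries}
\makeglossaries

% Hypothetical sample entry, filed under the symbols list via type=symbols.
\newglossaryentry{vmax}{type=symbols,
  name={\ensuremath{v_{\max}}},
  description={maximum velocity}}

\begin{document}
The peak speed is \gls{vmax}.
\printglossary[type=symbols]
\end{document}
```

As the text notes, without the symbols option the entry would simply land in the default main glossary instead.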
LaTeX Mathematical Symbols. The more unusual symbols are not defined in base LaTeX (NFSS) and require \usepackage{amssymb}.

1 Greek and Hebrew letters: α \alpha, κ \kappa, ψ \psi, ϝ \digamma, Δ \Delta, Θ \Theta, β \beta, λ \lambda, ρ \rho, ε \varepsilon, Γ \Gamma, Υ \Upsilon, χ \chi, µ \mu, σ \sigma, ϰ \varkappa, Λ \Lambda, Ξ \Xi, δ \delta, ν \nu, τ \tau, ϕ \varphi, Ω \Omega, ϵ \epsilon, θ \theta.

The Comprehensive LaTeX Symbol List, Scott Pakin <scott+clsl@pakin.org>, 19 January 2017. Abstract: This document lists 14283 symbols and the corresponding LaTeX commands that produce them. Some of these symbols are guaranteed to be available in every LaTeX2e system; others require extra fonts.

If the heading "List of Symbols" is to be renamed "Symbolverzeichnis", this can be done as follows: \renewcommand{\symheadingname}{Symbolverzeichnis}. 2 Creating the list of symbols: at the start, the symbols are defined inside the symdef environment; the list of symbols is then inserted at the desired position; the corresponding symbol commands are used within.

LaTeX symbols have either names (denoted by backslash) or special characters. They are organized into seven classes based on their role in a mathematical expression. This is not a comprehensive list. Refer to the external references at the end of this article for more information. Letters are rendered in italic font; numbers are upright/roman. \imath and \jmath make dotless i and j.

I would like to create a separate List of Symbols and List of Abbreviations on different pages. I could list both symbols and abbreviations under the same heading as follows (untitled.tex): \usepackage{nomencl} \makenomenclature \renewcommand{\nomname}{Symbols and Constants} \printnomenclature. The symbol $\alpha$ \nomenclature{$\alpha$}{Elevation Angle} and.

The glossaries package v4.46: a guide for beginners. Nicola L.C.
Talbot, dickimaw-books.com, 2020-03-19. Abstract: The glossaries package is very flexible, but this means that it has a lot of options, and since a user guide is supposed to provide a complete list of all the high-level

6.1.2 Creating Glossaries, Lists of Symbols or Acronyms (glossaries package). There are a number of packages available to assist producing a list of acronyms (such as the acronym package) or a glossary (such as the nomencl package). You can see a list of available packages in the OnLine TeX Catalogue's Topic Index. Here, I've chosen to describe the glossaries package.

Before using Texmaker, you must configure the editor and latex related commands via the Configure Texmaker command in the Options menu (Preferences under macosx). 1.1 Configuring the editor. Before compiling your first document, you must set the encoding used by the editor (Configure Texmaker -> Editor -> Editor Font Encoding). Then, you should use the same encoding in the preamble.

Printing a list of abbreviations or symbols is one of these things (like so many) LaTeX provides a very simple and elegant solution for. The nomencl package implements a few basic commands to do that. First load the package in the preamble. The \makenomenclature command is required for the generation of the nomenclature file (.nlo). Commenting it out is a convenient way to switch it off.

Texmaker includes wizards for the following tasks: generate a new document or a letter or a tabular environment; create tables, tabulars, figure environments, and so forth; export a LaTeX document via TeX4ht (HTML or ODT format). Some of the LaTeX tags and mathematical symbols can be inserted in one click, and users can define an unlimited number of snippets with keyboard triggers.

Texmaker. Texmaker is a free, modern and cross-platform LaTeX editor for Linux, macOS and Windows systems that integrates many tools needed to develop documents with LaTeX, in just one application.
Texmaker includes unicode support, spell checking, auto-completion, code folding and a built-in pdf viewer with synctex support and continuous view mode. Texmaker is easy to use and to configure.

Chapter 4, Creating Lists. Contents: a bulleted list; nested bulleted lists; numbered lists; compact lists; in-paragraph lists; lists with customized symbols; restarting numbering; a definition list; layout of lists.

A bulleted list: \documentclass{article} \begin{document} \section*{Useful packages} LaTeX provides several packages for designing the layout: \begin{itemize} \item.

In this tutorial, we will learn about the grouping of a list of symbols. We will see both the 'longtable' and 'nomencl' packages for this purpose.

Here is the list of 14 best free LaTeX editors for Windows: TeXmaker. TeXmaker is an impressive and steady LaTeX editor, which is available for free. This cross-platform LaTeX editor has a variety of features to offer: Unicode support, PDF viewer, auto-completion, syntax highlighting, etc. You can create various LaTeX documents, such as technical articles, bibliographies and journals, very easily.

If you want to change the symbol for all items of the list, you should preferably use the enumitem package, which I will explain using the example of ordered lists. Ordered lists: changing this environment is a little more tricky, because there's a lot more logic involved, and the easiest solution is probably using the enumerate or enumitem packages.

A list of abbreviations and symbols is common in many scientific documents. These types of lists can be created with LaTeX by means of the nomencl package. This article explains how to create nomenclatures, customizing the ordering and subgrouping of the symbols.

The Comprehensive LaTeX Symbol List ends with an index of all the symbols in the document and various additional useful terms. 1.2 Frequently Requested Symbols. There are a number of symbols that are requested over and over again on comp.text.tex.
If you're looking for such a symbol, the following list will help you find it quickly.

Finding and inserting special symbols into your LaTeX document: http://detexify.kirelabs.org/classify.html NOTE: Apparently I mispronounced this as de-texT-i..

You can often save yourself a lot of work by creating your bibliography with a dedicated program. In this practical tip we show you how to create such a listing with LaTeX.

CTAN: symbols-a4.pdf, 8.7 MiB, a comprehensive overview with 14283 symbols on 338 pages (as of 19 January 2017).

LaTeX glossary and list of acronyms. 15 January 2014 by tom. According to Wikipedia, a glossary is an alphabetical list of terms in a particular domain of knowledge with the definitions for those terms. It doesn't come as a surprise that there are several LaTeX packages that assist with the generation of glossaries. Among them are the nomencl package, the glossary package, and.

Numbering per enumerate level (symbol/label : example): level 1, arabic numerals ("1."); level 2, lowercase latin letters ("(b)"); level 3, lowercase roman numerals ("iii."); level 4, uppercase latin letters ("D"). Note: at levels 1, 3 and 4 the dot "." is part of the numbering, and at level 2 the parentheses "( )" are. Input: \begin{enumerate} \item first level \begin{enumerate} \item.

List structures in LaTeX are simply environments which essentially come in three types: itemize for a bullet list; enumerate for an enumerated list; and description for a descriptive list. All lists follow the basic format: \begin{list_type} \item The first item \item The second item \item The third etc \ldots \end{list_type} All three of these types of lists can have multiple paragraphs.

Special Symbols in LaTeX. The LaTeX language has a wide variety of special symbols for which markup commands have already been defined. These range from accents and Greek letters to exotic mathematical operators.
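The level-dependent numbering described above can be seen in a minimal nested enumerate; the label comments state the default article-class output:

```latex
\documentclass{article}
\begin{document}
\begin{enumerate}
  \item first level          % arabic: 1.
  \begin{enumerate}
    \item second level       % lowercase letter: (a)
    \begin{enumerate}
      \item third level      % lowercase roman: i.
      \begin{enumerate}
        \item fourth level   % uppercase letter: A.
      \end{enumerate}
    \end{enumerate}
  \end{enumerate}
\end{enumerate}
\end{document}
```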
These LaTeX symbols are grouped together more or less according to function. Some of these symbols are primarily for use in text; most of them are mathematical symbols and can only be used in math mode.

LaTeX Symbols, with LaTeX Tutorial: LaTeX Installation, Download LaTeX, LaTeX Editors, How to use LaTeX, LaTeX Symbols, LaTeX List, LaTeX File Types, LaTeX Fonts, LaTeX Table, LaTeX Texmaker etc.

Download Texmaker - a text editor that integrates many tools needed to develop documents with LaTeX commands, but also references such as labels, footnotes and indexes.

Detexify is an attempt to simplify this search. How do I use it? Just draw the symbol you are looking for into the square area above and look what happens! My symbol isn't found! The symbol may not be trained enough or it is not yet in the list of supported symbols. In the first case you can do the training yourself.

Jump start: Place \usepackage{glossaries} and \makeglossaries in your preamble (after \usepackage{hyperref} if present). Then define any number of \newglossaryentry and \newacronym glossary and acronym entries in your preamble (recommended) or before first use in your document proper. Finally add a \printglossaries call to locate the glossaries list within your document structure.

Put \usepackage[<options>]{nomencl} in the preamble of your document. Put \makenomenclature in the preamble of your document. Issue the \nomenclature command (see Section 2.2) for each symbol you want to have included in the nomenclature list.

In this chapter we will tackle matters related to input encoding, typesetting diacritics and special characters. In the following document, we will refer to special characters for all symbols other than the lowercase letters a-z, uppercase letters A-Z, figures 0-9, and English punctuation marks. Some languages usually need a dedicated input system to ease document writing.

Comparison of TeX editors: tables of editor properties.
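The glossaries "jump start" above can be sketched as a minimal document; the sample entries are made up for illustration:

```latex
\documentclass{article}
\usepackage{glossaries}
\makeglossaries

% Hypothetical sample entries defined in the preamble, as recommended.
\newglossaryentry{latex}{name={LaTeX},
  description={a document preparation system}}
\newacronym{scc}{SCC}{strongly connected component}

\begin{document}
\Gls{latex} documents can define an acronym such as \gls{scc} once
and reuse it; the first use is expanded automatically.
\printglossaries
\end{document}
```

Building the lists additionally requires running the makeglossaries helper script between LaTeX runs.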
Properties of TeX editors:

| Editor | Editing style | Native operating systems | Latest stable version | Costs | License | Configurable | Integrated viewer |
|---|---|---|---|---|---|---|---|
| AUCTeX | Source | Linux, macOS, Windows | 12.1 (2017-12-10) | Free | GPL | Yes | Yes |
| Authorea | Source / partial-WYSIWYG | Online | N/A | Free | Proprietary | Yes | Yes |

Auto-Latex Equations.

The command \newtheorem{theorem}{Theorem} has two parameters: the first one is the name of the environment that is defined, the second one is the word that will be printed, in boldface font, at the beginning of the environment. Once this new environment is defined it can be used normally within the document, delimited with the marks \begin{theorem} and \end{theorem}.

Latex Symbols - a comprehensive list of Latex symbols taken from the Comprehensive TeX Archive Network. Online Tutorial - an online tutorial of Latex containing tips about bibliographies, typesetting math equations, and theorems. Beamer User Guide - a complete guide to the Beamer document class as well as an easy to follow tutorial. LaTeX Systems to Install on Personal Computers: Latex.

Symbols » Greek Alphabet. Greek alphabet / letters in LaTeX. Learn the LaTeX commands to display the Greek alphabet. A rendered preview of all letters is shown alongside all commands in a nice table. The following table shows the whole Greek alphabet along with the commands. The usage is pretty easy: you can basically type the name of the letter and put a backslash in front of.

### List of LaTeX mathematical symbols - OeisWiki

• Texmaker is a free and open source LaTeX editor software app filed under office software and made available by Pascal Brachet for Windows. The review for Texmaker has not been completed yet, but it was tested by an editor here on a PC and a list of features has been compiled; see below. If you would like to submit a review of this software download, we welcome your input and encourage you to.
• The Comprehensive LaTeX Symbol List (PDF; 9.3 MB) - a listing of all symbols. Detexify searches for LaTeX commands matching symbols drawn with the mouse. A readable introduction to ISO-31-conformant formula typesetting in LaTeX can be found at moritz-nadler.de (PDF; 283 kB). A formula editor that renders the source unprompted as you type can be found at mathb.in.

• List of Abbreviations - Latex. Sandareka, 5:28 AM. Latex, List of Abbreviations, TechnicalEnvision. This is a short post about generating a list of abbreviations for your document with Latex. While there are many packages available for this, I'm going to use the glossaries package. You can find the user manual of the glossaries package from here. Here is the basic example: \documentclass.

TeXMaker for Windows is a free, modern and cross-platform LaTeX editor for Linux, macOS and Windows systems that integrates many tools needed to develop documents with LaTeX, in just one application. TeXMaker includes Unicode support, spell checking, auto-completion, code folding and a built-in PDF viewer with synctex support and continuous view mode.

Texmaker 5.0.4 (German): the LaTeX download Texmaker supports you in creating TeX documents, offers many options and converts several document types.

3.5 Table of symbols and notation. It is sometimes useful to give the reader a table with the symbols and the notation used in the thesis (Fig. 4). The nomencl package automatically generates such a list with the MakeIndex program. It is otherwise possible to manually create the table with the tabular environment. 3.6 Appendices.

Here is the list of best LaTeX editors for you: 1. TeXmaker; 2. Lyx; 3. TexWorks; 4. TexStudio; 5. TeXnic; 6. RTextDoc; 7. Kile; 8. JEdit; 9. DMelt; 10. Overleaf; 11. Papeeria; Bonus: Plugins and Add-ons.

1. TeXmaker. TeXmaker is a free, modern, and cross-platform functional LaTeX editor for Windows, Linux, and macOS.
It is also the most popular tool, possessing various features.

The "Missing $ inserted" error is probably caused by the underscores and bars. These characters in LaTeX have special meaning in math mode (which is delimited by $ characters). Try escaping them; e.g. update\_element instead of update_element. However, if you're trying to display code, a better solution would be to use the \verb command, which will typeset the text in a monospaced font and will handle the special characters.

### Generating a List of Symbols - LaTeX

• Texmaker: User manual. Contents: 1. Configuring Texmaker; 1.1 Configuring the editor; 1.2 Configuring the latex related commands; 1.3 Configuring the spell checker; 2. Editing a TeX document; 2.1 Usual commands; 2.2 Setting the preamble of a TeX document; 2.3 Structure of a document; 2.4 Browsing your document; 2.5 Formatting your text; 2.6 Spacings; 2.7 Inserting a list; 2.8 Inserting a.

• In Texmaker you can, for example, simply enable quick build (Schnelles Übersetzen). On the command line you use the pdflatex command: cd Tutorial.LaTeX; pdflatex einsteiger.tex. However LaTeX is invoked: make sure your input file is saved, since regardless of working environment and text editor, LaTeX accesses the saved file in the background. If everything.

• For the acronym package, possible option values are: footnote -- output the long form as a footnote; nohyperlinks -- if hyperref is loaded, the linking is suppressed; printonlyused -- list only abbreviations that are actually used. In printonlyused mode the option withpage can additionally be used; this additionally shows in the list of abbreviations the page number.

• A list of abbreviations and symbols is common in many scientific documents. These types of lists can be created with LaTeX by means of the nomencl package.
Steps: include the package: \usepackage{nomencl}. Follow it with the instruction to make the nomenclature: \makenomenclature. Define each entry using the \nomenclature command. Finally, display all entries using the \printnomenclature command.

• Typeset mathematical double stroke symbols; dowith -- apply a command to a list of items; download -- allow LaTeX to download files using an external process; dox -- extend the doc package; dozenal -- typeset documents using base twelve numbering (also called dozenal); dpcircling -- decorated text boxes using TikZ; dpfloat -- support for double-page floats; dprogress -- LaTeX-relevant log information for debugging.
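The nomencl steps described above can be sketched as a minimal document. The entries are made-up placeholders, and producing the printed list requires a MakeIndex run between LaTeX runs (of the form `makeindex file.nlo -s nomencl.ist -o file.nls`):

```latex
\documentclass{article}
\usepackage{nomencl}
\makenomenclature   % instructs LaTeX to write entries to the .nlo file

\begin{document}
The symbol $\alpha$\nomenclature{$\alpha$}{Elevation angle}
and the constant $g$\nomenclature{$g$}{Gravitational acceleration}
appear in the text.

\printnomenclature  % typesets the processed .nls file
\end{document}
```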
Relational Operators (math mode) Symbol: Command: Comment \equiv \approx \propto \simeq \sim \neq \geq \gg \ll: Logic Symbols (math mode) Symbol: Command: Comment \bullet \neg \wedge \vee \oplus \Rightarrow \Leftrightarrow \exists \forall: Set Symbols. A Beginner's Guide to LATEX David Xiao dxiao@cs.princeton.edu September 12, 2005 1 Introduction LATEX is the standard mathematical typesetting program.This document is for people who have never used LATEX before and just want a quick crash course to get started.I encourage all students in mathematics an ### Listofsymbols - Symbolverzeichnis mit Late • The underbrace symbol will put a flipped { sign under the bracketed text. This can be usefull for showing combining like terms, because a number can also be placed under the underbrace. It can be added with the command \underbrace{text}, or for the added number under the underbrace.. ### 6.1.2 Creating Glossaries, Lists of Symbols or Acronyms .. • Texmaker is the software that calls the compiler for us which is why we have to adjust the settings there. In principle this setup works with any other Latex editor or for the command line. Just use the same commands we will now add to the Texmaker settings. Go to Texmaker, Options -> Texmaker configuration and adjust the settings as follows for the biblatex option. Also check the Build. • 上述三种列表都是基于 list 列表环境定制的,也就是说 list 环境是功能最强大的列表环境,只是由于它使用起来比较麻烦,所以很少被使用。其实上述的三种列表在通过一定的扩展后可以产生许多样式的列表,这些基本就可以满足平时的需要了。下面说一下怎么进行功能扩展。enumitem 宏包可以对. • The tabular environment is the default L a T e X method to create tables. You must specify a parameter to this environment, {c c c} tells LaTeX that there will be three columns and that the text inside each one of them must be centred. Open an example in Overleaf. Creating a simple table in L a T e X. The tabular environment is more flexible, you can put separator lines in between each column • Ein Angebot von. tipps+tricks; Office; Office Tabelle in LaTeX erstellen - so gelingt's . 
Von Michael Mierke am 10. April 2019 09:16 Uhr; Sie möchten Ihre Ergebnisse in einer Tabelle in LaTeX. • List of symbols or abbreviations (nomenclature) By Tom May14,2012 Jeff Clark, LaTeX Tutorial, Revised February 26, 2002. David Xiao, A Beginner's Guide to LATEX, September 2005 • 4.2 Numeriete Listen \begin{enumerate} \item Ein Stichpunkt \item Noch ein Stichpunkt \end{enumerate} Ein Stichpunkt Noch ein Stichpunkt L A T E X macht automatisch eine neue Zeile in der die Liste beginnt. Wir verwenden Cookies. Wenn Sie weiter auf unseren Seiten surfen, stimmen Sie der Nutzung von Cookies zu. mehr Informationen hier . Sascha Frank Last modified: Thu Jan 26 17:24:23 CET 2017. • Special LaTeX characters. Besides the common upper- and lowercase letters, digits and punctuation characters, that can simply by typed with the editor, some characters are reserved for LaTeX commands. They cannot be used directly in the source. Usually they can be printed if preceded by a backslash: \ documentclass {article} \ usepackage {array} \ usepackage {booktabs} \begin {document} \begin. ### Texmaker (free cross-platform latex editor • Musik-Downloads für Smartphone und Player. Mit Autorip gratis bei jedem CD-Kauf • 2 Formatted text Using the \mboxis fine and gets the basic result. Yet, there is an alternative that offers a little more flexibility. You may recall from Tutorial 7 — Formatting 1 the intro- duction of font formatting commands, such as \textrm, \textit, \textbf, etc • In the below example code, I have displayed five different forms of enumerate list: a) the default enumerate list, b) enumerate list with roman numerals, c) list with roman numbers and no separation space in top and between items, d) list with capital roman numbers, and e) list starting from 5 • the symbol &; each row has the same number of cells (i.e. same number of &)3 which must be equal to that declared in the definition cols. 
\hline can be placed in the first row or at the end of a row \\ and it draws an horizontal line as wide as the entire table. \cline{n-m} draws an horizontal line from the left of column n up to th ### List of symbols or abbreviations (nomenclature) - texblo Read 581 answers by scientists with 971 recommendations from their colleagues to the question asked by M.M. Noor on Jan 5, 201 Textmodus Symbole Symbole & \& _ \_ \ldots \textbullet $\$ ˆ \^{} | \textbar \ \textbackslash % \% ˜ \~{} # \# § \S Akzente ò \'o ó \'o ô \^o õ \~o o \=o o˙ \.o ö \o o¸ \c o o \v o ő \H o ç \c c o. \d o o ¯ \b o oo \t oo œ\oe Œ\OE æ \ae Æ\AE å \aa Å\AA ø \o Ø\O ł \l Ł \L ı \i \j ¡ ~' ¿ ?' Trennzeichen '' '' {\{ [[ (( <\textless '' and we have access to all the commands, symbols, environments, etc., that are in the package. Common Packages. This section will cover the packages released by the American Mathematical Society, as well as xypic and fancyhdr. AMS Math packages . The American Mathematical Society has produced several packages for use with LaTeX. These packages allow much of the mathematical formatting we have. Mit dem Software-Paket LaTeX ist die Nutzung von Kleiner-gleich- und Größer-gleich-Zeichen relativ einfach möglich. Folgendermaßen sollten Sie vorgehen, um im LaTeX-Editor Texmaker Kleiner gleich ≤ oder Größer gleich ≥ schreiben zu können ### Texmaker - Wikipedi MiKTeX Packages. There are currently 3903 packages in the MiKTeX package repository. These packages have been updated recently: anonymous-acm; lwarp; miktex-luatex-bin-2.9; miktex-luatex-bin-x64-2.9; miktex-runtime-bin-2.9; miktex-runtime-bin-x64-2.9; miktex-yap-bin-2.9; miktex-yap-bin-x64-2.9; xepersian-hm; acro; animate; apa7 ; beamerdarkthemes; biblatex-ext; caption; draftwatermark; erewhon. Was ist LATEX?GrundaufbauTextsatzBilder, Gleitobjekte, LabelsMathematikLiteraturverzeichnis EinekurzeEinführunginLATEX AlbertGeorgPassegger FakultätfürMathematik. 
Trademark symbols are the property of their respective trademark owners; there is no intention of infringement, and the usage is to the benefit of the trademark owner.

3. User's guide. 1 Getting started. 1.1 A minimal file. Before using the listings package, you should be familiar with the LaTeX typesetting system. You need not be an expert. Here is a minimal file for listings. % \documentclass

Union (∪) and intersection (∩) symbols in LaTeX can be produced via the \cup and \cap commands while in math mode. No extra packages are required to use these symbols.

TeXmaker, because it is cross-platform: you can write your text on different systems without any problem, with a good library of symbols, and use the web. LyX is not in the LaTeX way because

Perhaps you will also find something ready-made in "The Comprehensive LaTeX Symbol List". Regards, Thorsten. (TeX und LaTeX, Fragen und Antworten - TeXwelt)

Greek letters, set and relation symbols, arrows, binary operators, etc.: too many to remember, and in fact they would overwhelm this tutorial if I tried to list them all. Therefore, for a complete reference document, try symbols.pdf. We will of course see some of these symbols used throughout the tutorial. Fractions: to create a fraction, you must use the \frac{numerator}{denominator} command.

### Chapter 4, Creating Lists TeXblog

The amssymb package is needed to display mathematical symbols such as ℝ. The amsthm package is needed for mathematical theorem environments. The graphicx package allows graphics in pdf or jpg format to be included in the document.
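In math mode, the fraction and set-symbol commands just mentioned look like this (a minimal sketch; no extra packages needed):

```latex
% \frac, \cup, \cap and \leq all work in plain LaTeX math mode
\[
  \frac{1}{2} + \frac{1}{3} = \frac{5}{6},
  \qquad
  (A \cup B) \cap C \subseteq A \cup (B \cap C),
  \qquad
  a \leq b
\]
```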
The booktabs package enables well-set tables and commands such as \midrule.

*As rendered in your browser. Notes: I want to keep the HTML commands list free from hacks resorting to the use of the symbol font, but this can mean that some of the entity codes listed here will not display properly in older browsers. The LaTeX commands assume that you are in normal text mode. The $'s around a command mean that it has to be used in maths mode, and they will temporarily put you into maths mode.

TeXMaker is a modern, cross-platform LaTeX editor that features a built-in PDF viewer, as well as numerous editing tools required for creating LaTeX documents. A reliable LaTeX editor with code folding and auto-completion capabilities. LaTeX is a flexible markup language, mainly used for the communication and publication of scientific papers in fields such as physics and mathematics.

LaTeX mathematics cheat sheet (10 minute read). On this page: fractions; Greek letters; logic; operators; relations; sets; super-/subscript (exponents/indices); others. LaTeX is the de facto standard typesetting system for scientific writing. A lot of the nice-looking equations you see in books and all around the web are written using LaTeX commands.

Texmaker: one of the best LaTeX editors available and among the most user-friendly LaTeX IDEs for newcomers. In Texmaker, the wizards and live preview easily make it users' first choice. Though you may not be able to find mathematical symbols and a document summary here, its work efficiency makes people go for it without a second thought. Key features: citing tool, inserting images, helpers.
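The packages discussed above can be combined in a minimal preamble sketch; the package choices follow the description, while the body line is purely illustrative:

```latex
\documentclass{article}
\usepackage{amssymb}   % symbols such as \mathbb{R}
\usepackage{amsthm}    % theorem/proof environments
\usepackage{graphicx}  % \includegraphics for pdf/jpg figures
\usepackage{booktabs}  % \toprule, \midrule, \bottomrule for tables
\begin{document}
For every $x \in \mathbb{R}$ we have $x \leq x + 1$.
\end{document}
```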
What's new in Portable Texmaker 5.0.4: the URL used to check for a new version has been fixed (the website has moved to https); a bug after closing the internal pdf viewer during a session (not the embedded one) has been fixed.

CircuiTikz is a set of LaTeX macros designed to make it easy to draw electrical networks in scientific publications. It provides a convenient syntax based on to-paths to place the various components. The examples below are from the CircuiTikz examples page. The author of CircuiTikz is Massimo Redaelli. To run the examples you need to download and install the CircuiTikz files first. Many modern LaTeX compilers will locate and offer to download missing packages for you, without you worrying about it.

1.3 Using physics in your LaTeX document: to use the physics package, simply insert \usepackage{physics} in the preamble of your document, before \begin{document}.

Math I: Super/Subscripts and Common Commands. Introduction: this tutorial will show you how to do some basic typesetting of math symbols, equations and matrices. Although LaTeX in its basic form (i.e. when you start to write it in WinEdt) can do almost everything in math that the normal user will want to do, we can make it more user-friendly by telling it to load certain packages into its preamble.

### Grouping of List of Symbols in LaTeX (LaTeX Advanced)

LaTeX2ε short description (LATEX2ε-Kurzbeschreibung), version 2.3, 10 April 2003, by Walter Schmidt, Jörg Knappen, Hubert Partl and Irene Hyna. LaTeX is a typesetting system that can be used for many kinds of documents, from simple letters to complete books.

Why You Need Your Own Lists: LyX provides excellent list environments, including itemize, enumerate and description. If those don't fit your needs you can usually use a package to do what you need. But once in a while, you can't find a pre-designed list fitting your needs. Then you must build your own list environments and put them in a layout file in order to use them in LyX.
The style and color scheme of TeXstudio can be selected. The modern variant is closer to Texmaker 1.9. The symbol list can either appear tabbed (the old behaviour, "tabbed" activated) or can have small symbol tabs beside the symbol lists, which leaves more room for the symbols.

If you want to discuss a possible contribution before (or instead of) making a pull request, we suggest you raise the topic first on the LATEX-L list or drop a line to the team. Historic LaTeX: Ulrik Vieth has collected historic versions of LaTeX, from LaTeX 2.0 for TeX 1.0 (released on 11 December 1983) onwards.

Texmaker is considered to be one of the best LaTeX editors for the GNOME desktop environment. It presents a great user interface, which results in a good user experience. It is also considered to be one of the most useful LaTeX editors out there. If you perform PDF conversion often, you'll find Texmaker to be faster relative to other LaTeX editors.
https://www.flexiprep.com/Expected-Exam-Questions/Mathematics/Class-11/CBSE-Class-11-Mathematical-Induction-Assignments.html
# CBSE Class 11-Mathematics: Mathematical Induction Assignments (For CBSE, ICSE, IAS, NET, NRA 2022)

Prove the following using the principle of mathematical induction:

1) Prove that for any positive integer number is divisible by
2) Prove that for all positive integers n.
3) For every is divisible by .
4) Prove by induction that
5) For every is a multiple of
6) For every
7) For all where
8) If , then
9) Prove that for every .
10) Prove that
11) For all is divisible by .

Solution to Problem 1:

Let the statement be defined by: is divisible by

Step 1: Basic step. We first show that P(1) is true. Let and calculate: is divisible by. Hence P(1) is true.

Step 2: Inductive hypothesis. We now assume that P(k) is true: is divisible by, which is equivalent to , where B is a positive integer.

Step 3: Inductive step. We now consider the algebraic expression; expand it and group like terms. Hence it is also divisible by, and therefore the statement P(k+1) is true.

Solution to Problem 2:

The statement P(n) is defined by

Step 1: Basic step. We first show that P(1) is true. Left side: . Right side: . Hence P(1) is true.

Step 2: Inductive hypothesis. We now assume that P(k) is true.

Step 3: Inductive step. Factor on the right side; set to a common denominator and group. We have started from the statement P(k) and have shown that , which is the statement P(k+1).

Solution to Problem 3:

Let P(n): is divisible by .

Step 1: Basic step. P(1) is just that is divisible by , which is trivial.

Step 2: Inductive hypothesis. We now assume that P(k) is true, i.e., is divisible by .

Step 3: Inductive step. We must prove P(k+1). Now, the first term is divisible by since P(k) is true, and the second term is a multiple of . Hence the last quantity is divisible by .

Solution to Problem 4:

The statement P(n) is defined by

Step 1: Basic step. We first show that P(1) is true. Left side: . Right side: . Hence P(1) is true.
Step 2: Inductive hypothesis. We now assume that P(k) is true.

Step 3: Inductive step. We must prove P(k+1). Taking the LHS: , which is the statement P(k+1).

Solution to Problem 6:

Let the statement P(n) be defined by: for all

Step 1: Basic step. Let . So P(1) is true.

Step 2: Inductive hypothesis. We now assume that P(k) is true; that is,

Step 3: Inductive step. Let n = k + 1. Then: Now So > 3k + 1. Then P(n) holds for n = k + 1, and thus for all

Solution to Problem 7:

Let the statement P(n) be defined by , where

Step 1: Basic step. Let , which is true. So P(1) is true.

Step 2: Inductive hypothesis. We now assume that P(k) is true.

Step 3: Inductive step. Let n = k + 1. Taking the LHS: from the hypothesis we know that ; also . Now kx^2 is a positive quantity, so we can say that , which is P(k+1).

Solution to Problem 11:

Let the statement P(n) be defined by: for all , is divisible by .

Step 1: Basic step. Let n = 1. Then the expression evaluates to , which is clearly divisible by 5.

Step 2: Inductive hypothesis. We now assume that P(k) is true; that is, that is divisible by .

Step 3: Inductive step. Let n = k + 1. Then: The first term in has as a factor (explicitly), and the second term is divisible by (by assumption). Since we can factor a out of both terms, the entire expression must be divisible by .

Then P(n) holds for n = k + 1, and thus for all n.
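Because the problem statements themselves did not survive extraction, here is a numeric sanity check of the three-step template using an assumed stand-in claim in the spirit of Problem 11 (that 7^n - 2^n is divisible by 5); the claim is illustrative only, not necessarily the original statement.

```python
def claim(n):
    """Assumed stand-in statement P(n): 7**n - 2**n is divisible by 5."""
    return (7 ** n - 2 ** n) % 5 == 0

# Step 1 (basic step): P(1) holds, since 7 - 2 = 5.
assert claim(1)

# Steps 2-3 (inductive step): the key identity is
#   7**(k+1) - 2**(k+1) = 7*(7**k - 2**k) + 5*2**k,
# so divisibility by 5 carries over from P(k) to P(k+1).
for k in range(1, 50):
    assert 7 ** (k + 1) - 2 ** (k + 1) == 7 * (7 ** k - 2 ** k) + 5 * 2 ** k
    assert claim(k + 1)
```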
https://cran.dme.ufro.cl/web/packages/scanstatistics/vignettes/introduction.html
# Introduction to scanstatistics ## What are scan statistics? Scan statistics are used to detect anomalous clusters in spatial or space-time data. The gist of the methodology, at least in this package, is this: 1. Monitor one or more data streams at multiple locations over intervals of time. 2. Form a set of space-time clusters, each consisting of (1) a collection of locations, and (2) an interval of time stretching from the present to some number of time periods in the past. 3. For each cluster, compute a statistic based on both the observed and the expected responses. Report the clusters with the largest statistics. ## Main functions ### Scan statistics • scan_eb_poisson: computes the expectation-based Poisson scan statistic (Neill et al. 2005). • scan_pb_poisson: computes the (population-based) space-time scan statistic (Kulldorff 2001). • scan_eb_negbin: computes the expectation-based negative binomial scan statistic (Tango, Takahashi, and Kohriyama 2011). • scan_eb_zip: computes the expectation-based zero-inflated Poisson scan statistic (Allévius and Höhle 2017). • scan_permutation: computes the space-time permutation scan statistic (Kulldorff et al. 2005). • scan_bayes_negbin: computes the Bayesian Spatial scan statistic (Neill, Moore, and Cooper 2006), extended to a space-time setting. ### Zone creation • knn_zones: Creates a set of spatial zones (groups of locations) to scan for anomalies. Input is a matrix in which rows are the enumerated locations, and columns the $$k$$ nearest neighbors. To create such a matrix, the following two functions are useful: • coords_to_knn: use stats::dist to get the $$k$$ nearest neighbors of each location into a format usable by knn_zones. • dist_to_knn: use an already computed distance matrix to get the $$k$$ nearest neighbors of each location into a format usable by knn_zones. • flexible_zones: An alternative to knn_zones that uses the adjacency structure of locations to create a richer set of zones. 
The additional input is an adjacency matrix, but otherwise works as knn_zones. ### Miscellaneous • score_locations: Score each location by how likely it is to have an ongoing anomaly in it. This score is heuristically motivated. • top_clusters: Get the top $$k$$ space-time clusters, either overlapping or non-overlapping in the spatial dimension. • df_to_matrix: Convert a data frame with data for each location and time point to a matrix with locations along the column dimension and time along the row dimension, with the selected data as values. ## Example: Brain cancer in New Mexico To demonstrate the scan statistics in this package, we will use a dataset of the annual number of brain cancer cases in the counties of New Mexico, for the years 1973-1991. This data was studied by Kulldorff et al. (1998), who detected a cluster of cancer cases in the counties Los Alamos and Santa Fe during the years 1986-1989, though the excess of brain cancer in this cluster was not deemed statistically significant. The data originally comes from the package rsatscan (Kleinman 2015), which provides an interface to the program SaTScan, but it has been aggregated and extended for the scanstatistics package. 
To get familiar with the counties of New Mexico, we begin by plotting them on a map using the data frames NM_map and NM_geo supplied by the scanstatistics package:

```r
library(scanstatistics)
library(ggplot2)

data(NM_map)
data(NM_geo)

# Plot map with labels at centroids
ggplot() +
  geom_polygon(data = NM_map,
               mapping = aes(x = long, y = lat, group = group),
               color = "grey", fill = "white") +
  geom_text(data = NM_geo,
            mapping = aes(x = center_long, y = center_lat, label = county)) +
  ggtitle("Counties of New Mexico")
```

We can further obtain the yearly number of cases and the population for each county for the years 1973-1991 from the data table NM_popcas provided by the package:

```r
data(NM_popcas)
head(NM_popcas)
##   year     county population count
## 1 1973 bernalillo     353813    16
## 2 1974 bernalillo     357520    16
## 3 1975 bernalillo     368166    16
## 4 1976 bernalillo     378483    16
## 5 1977 bernalillo     388471    15
## 6 1978 bernalillo     398130    18
```

It should be noted that Cibola county was split from Valencia county in 1981, and cases in Cibola have been counted to Valencia in the data.

### A scan statistic for Poisson data

The Poisson distribution is a natural first option when dealing with count data. The scanstatistics package provides the two functions scan_eb_poisson and scan_pb_poisson with this distributional assumption. The first is an expectation-based1 scan statistic for univariate Poisson-distributed data proposed by Neill et al. (2005), and we focus on this one in the example below. The second scan statistic is the population-based scan statistic proposed by Kulldorff (2001).

#### Theoretical motivation

For the expectation-based Poisson scan statistic, the null hypothesis of no anomaly states that at each location $$i$$ and duration $$t$$, the observed count is Poisson-distributed with expected value $$\mu_{it}$$: $H_0 \! : Y_{it} \sim \textrm{Poisson}(\mu_{it}),$ for locations $$i=1,\ldots,m$$ and durations $$t=1,\ldots,T$$, with $$T$$ being the maximum duration considered.
Under the alternative hypothesis, there is a space-time cluster $$W$$ consisting of a spatial zone $$Z \subset \{1,\ldots,m\}$$ and a time window $$D = \{1, 2, \ldots, d\} \subseteq \{1,2,\ldots,T\}$$ such that the counts in $$W$$ have their expected values inflated by a factor $$q_W > 1$$ compared to the null hypothesis: $H_1 \! : Y_{it} \sim \textrm{Poisson}(q_W \mu_{it}), ~~(i,t) \in W.$ For locations and durations outside of this window, counts are assumed to be distributed as under the null hypothesis. Calculating the scan statistic then involves three steps: • For each space-time window $$W$$, find the maximum likelihood estimate of $$q_W$$, treating all $$\mu_{it}$$’s as constants. • Plug the estimated $$q_W$$ into (the logarithm of) a likelihood ratio with the likelihood of the alternative hypothesis in the numerator and the likelihood under the null hypothesis (in which $$q_W=1$$) in the denominator, again for each $$W$$. • Take the scan statistic as the maximum of these likelihood ratios, and the corresponding window $$W^*$$ as the most likely cluster (MLC). #### Using the Poisson scan statistic The first argument to any of the scan statistics in this package should be a matrix (or array) of observed counts, whether they be integer counts or real-valued “counts”. In such a matrix, the columns should represent locations and the rows the time intervals, ordered chronologically from the earliest interval in the first row to the most recent in the last. 
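The three calculation steps in the theoretical motivation above can be sketched numerically. The following is a from-scratch Python illustration of the log likelihood ratio for a single window (the package itself is R); the counts and baselines are made-up numbers, not the New Mexico data.

```python
import math

def eb_poisson_llr(counts, baselines):
    """Log likelihood ratio for one space-time window W under the
    expectation-based Poisson model: H1 inflates all means by q_W >= 1."""
    C = sum(counts)      # total observed count in W
    B = sum(baselines)   # total expected count in W under H0
    q = max(1.0, C / B)  # MLE of q_W, constrained to q_W >= 1
    # log LR = C*log(q) - (q - 1)*B; it is 0 whenever C <= B
    return C * math.log(q) - (q - 1.0) * B

# A window at (or below) its baseline scores 0; excess counts score higher.
low = eb_poisson_llr([2, 3, 1], [2.0, 3.0, 2.0])   # C = 6 <= B = 7
high = eb_poisson_llr([9, 8, 7], [2.0, 3.0, 2.0])  # C = 24 > B = 7
```

The scan statistic is then the maximum of this quantity over all windows W, and the maximizing window is the most likely cluster.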
In this example we would like to detect a potential cluster of brain cancer in the counties of New Mexico during the years 1986-1989, so we begin by retrieving the count and population data from that period and reshaping them to a matrix using the helper function df_to_matrix:

```r
library(dplyr)
##
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
##
##     filter, lag
## The following objects are masked from 'package:base':
##
##     intersect, setdiff, setequal, union

counts <- NM_popcas %>%
  filter(year >= 1986 & year < 1990) %>%
  df_to_matrix(time_col = "year", location_col = "county", value_col = "count")
```

#### Spatial zones

The second argument to scan_eb_poisson should be a list of integer vectors, each such vector being a zone, which is the name for the spatial component of a potential outbreak cluster. Such a zone consists of one or more locations grouped together according to their similarity across features, and each location is numbered as the corresponding column index of the counts matrix above (indexing starts at 1). In this example, the locations are the counties of New Mexico and the features are the coordinates of the county seats. These are made available in the data table NM_geo. Similarity will be measured using the geographical distance between the seats of the counties, taking into account the curvature of the earth. A distance matrix is calculated using the spDists function from the sp package, which is then passed to dist_to_knn (with $$k=15$$ neighbors) and on to knn_zones:

```r
library(sp)
library(magrittr)

# Remove Cibola since cases have been counted towards Valencia. Ideally, this
# should be accounted for when creating the zones.
zones <- NM_geo %>%
  filter(county != "cibola") %>%
  select(seat_long, seat_lat) %>%
  as.matrix %>%
  spDists(x = ., y = ., longlat = TRUE) %>%
  dist_to_knn(k = 15) %>%
  knn_zones
```

#### Baselines

The advantage of expectation-based scan statistics is that parameters such as the expected value can be modelled and estimated from past data, e.g. by some form of regression. For the expectation-based Poisson scan statistic, we can use a (very simple) Poisson GLM to estimate the expected value of the count in each county and year, accounting for the different populations in each region. Similar to the counts argument, the expected values should be passed as a matrix to the scan_eb_poisson function:

```r
mod <- glm(count ~ offset(log(population)) + 1 + I(year - 1985),
           family = poisson(link = "log"),
           data = NM_popcas %>% filter(year < 1986))

ebp_baselines <- NM_popcas %>%
  filter(year >= 1986 & year < 1990) %>%
  mutate(mu = predict(mod, newdata = ., type = "response")) %>%
  df_to_matrix(value_col = "mu")
```

Note that the population numbers are (perhaps poorly) interpolated from the censuses conducted in 1973, 1982, and 1991.

#### Calculation

We can now calculate the Poisson scan statistic. To give us more confidence in our detection results, we will perform 999 Monte Carlo replications, by which data is generated using the parameters from the null hypothesis and a new scan statistic calculated. This is then summarized in a $$P$$-value, calculated as the proportion of times the replicated scan statistics exceeded the observed one. The output of scan_eb_poisson is an object of class "scanstatistic", which comes with the print method seen below.
```r
set.seed(1)
poisson_result <- scan_eb_poisson(counts = counts,
                                  zones = zones,
                                  baselines = ebp_baselines,
                                  n_mcsim = 999)
print(poisson_result)
## Data distribution: Poisson
## Type of scan statistic: expectation-based
## Setting: univariate
## Number of locations considered: 32
## Maximum duration considered: 4
## Number of spatial zones: 415
## Number of Monte Carlo replicates: 999
## Monte Carlo P-value: 0.005
## Gumbel P-value: 0.004
## Most likely event duration: 4
## ID of locations in MLC: 15, 26
```

As we can see, the most likely cluster for an anomaly stretches from 1986-1989 and involves the locations numbered 15 and 26, which correspond to the counties

```r
counties <- as.character(NM_geo$county)
counties[c(15, 26)]
## [1] "losalamos" "santafe"
```

These are the same counties detected by Kulldorff et al. (1998), though their analysis was retrospective rather than prospective as ours was. Ours was also data dredging as we used the same study period with hopes of detecting the same cluster.

#### A heuristic score for locations

We can score each county according to how likely it is to be part of a cluster in a heuristic fashion using the function score_locations, and visualize the results on a heatmap as follows:

```r
# Calculate scores and add column with county names
county_scores <- score_locations(poisson_result, zones)
county_scores %<>%
  mutate(county = factor(counties[-length(counties)],
                         levels = levels(NM_geo$county)))

# Create a table for plotting
score_map_df <- merge(NM_map, county_scores, by = "county", all.x = TRUE) %>%
  arrange(group, order)

# As noted before, Cibola county counts have been attributed to Valencia county
score_map_df[score_map_df$subregion == "cibola", ] %<>%
  mutate(relative_score = score_map_df %>%
           filter(subregion == "valencia") %>%
           select(relative_score) %>%
           .[[1]] %>%
           .[1])

ggplot() +
  geom_polygon(data = score_map_df,
               mapping = aes(x = long, y = lat, group = group,
                             fill = relative_score),
               color = "grey") +
  scale_fill_gradient(low = "#e5f5f9", high = "darkgreen",
                      guide = guide_colorbar(title = "Relative\nScore")) +
  geom_text(data = NM_geo,
            mapping = aes(x = center_long, y = center_lat, label = county),
            alpha = 0.5) +
  ggtitle("County scores")
```

A warning though: the score_locations function can be quite slow for large data sets. This might change in future versions of the package.

#### Finding the top-scoring clusters

Finally, if we want to know not just the most likely cluster, but say the five top-scoring space-time clusters, we can use the function top_clusters. The clusters returned can either be overlapping or non-overlapping in the spatial dimension, according to our liking.

```r
top5 <- top_clusters(poisson_result, zones, k = 5, overlapping = FALSE)

# Find the counties corresponding to the spatial zones of the 5 clusters.
top5_counties <- top5$zone %>%
  purrr::map(get_zone, zones = zones) %>%
  purrr::map(function(x) counties[x])

# Add the counties corresponding to the zones as a column
top5 %<>% mutate(counties = top5_counties)
```

The top_clusters function includes Monte Carlo and Gumbel $$P$$-values for each cluster. These $$P$$-values are conservative, since secondary clusters from the original data are compared to the most likely clusters from the replicate data sets.

## Concluding remarks

Other univariate scan statistics can be calculated practically in the same way as above, though the distribution parameters need to be adapted for each scan statistic.

# Feedback

If you think this package lacks some functionality, or that something needs better documentation, I happily accept feedback either here at GitHub or via email at benjak@math.su.se. I'm also very interested in applying the methods in this package (current and future) to new problems, so if you know of any suitable public datasets, please tell me! A dataset with a multivariate response (e.g. multiple counter variables) would be of particular interest for some of the scan statistics that will appear in future versions of the package.
# References

Allévius, Benjamin, and Michael Höhle. 2017. "An expectation-based space-time scan statistic for ZIP-distributed data." Stockholm University.

Kleinman, Ken. 2015. Rsatscan: Tools, Classes, and Methods for Interfacing with Satscan Stand-Alone Software. https://CRAN.R-project.org/package=rsatscan.

Kulldorff, Martin. 2001. "Prospective time periodic geographical disease surveillance using a scan statistic." Journal of the Royal Statistical Society, Series A (Statistics in Society) 164: 61–72.

Kulldorff, Martin, William F. Athas, Eric J. Feuer, Barry A. Miller, and Charles R. Key. 1998. "Evaluating Cluster Alarms: A Space-Time Scan Statistic and Brain Cancer in Los Alamos." American Journal of Public Health 88 (9): 1377–80.

Kulldorff, Martin, Richard Heffernan, Jessica Hartman, Renato M. Assunção, and Farzad Mostashari. 2005. "A space-time permutation scan statistic for disease outbreak detection." PLoS Medicine 2 (3): 0216–24.

Neill, Daniel B., Andrew W. Moore, and Gregory F. Cooper. 2006. "A Bayesian Spatial Scan Statistic." Advances in Neural Information Processing Systems 18: 1003.

Neill, Daniel B., Andrew W. Moore, Maheshkumar Sabhnani, and Kenny Daniel. 2005. "Detection of Emerging Space-Time Clusters." In Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining, 218–27. ACM.

Tango, Toshiro, Kunihiko Takahashi, and Kazuaki Kohriyama. 2011. "A Space-Time Scan Statistic for Detecting Emerging Outbreaks." Biometrics 67 (1): 106–15.

1. Expectation-based scan statistics use past non-anomalous data to estimate distribution parameters, and then compare observed cluster counts from the time period of interest to these estimates. In contrast, population-based scan statistics compare counts in a cluster to those outside, only using data from the period of interest, and do so conditional on the observed total count.
https://or.stackexchange.com/questions/6347/approximately-evenly-spaced-subsequence
# Approximately evenly-spaced subsequence

Suppose you have $$n$$ time series data where the times are in arithmetic series with difference $$d$$, and suppose further that some of the data are missing at random (say, $$p=.30$$). Now suppose you want to find an approximately evenly-spaced subsequence of length $$l$$ of your data, with common difference approximately $$k > d$$ and $$k$$ not necessarily a multiple of $$d$$.

I feel like I ought to be able to formulate this as an integer program - specifically, a boolean program where the variable $$a$$ is a vector of size $$n$$ with $$a_i = 1$$ if the $$i$$-th observation is in the subsequence, $$0$$ otherwise, and $$a^T \mathbf{1} = l$$. The information that the original sequence is an arithmetic series with missing data could be discarded, since it seems the problem ought to have an approximate solution for an arbitrary sequence. The solution may also depend on how you enforce the objective that the subsequence is approximately evenly spaced.

Does anyone know if this problem has been studied before?

• Out of curiosity, do you have a practical application for this? May 28, 2021 at 14:06

• Thank you for the detailed reply. I am training a deep vision time series model where the time series has missing data, at random. The model takes advantage of regular spacing of its input time series. The question is then how to get a regularly spaced subsequence from the portion of the time series that is available, which subsequence I would then feed to the model. May 28, 2021 at 16:49

I have never encountered a problem like this in the literature, but here is one possible way of formulating the problem as a MIP.
Notation:

• $$n$$: length of the number series
• $$l$$: desired length of the subsequence
• $$q_i$$: number at position $$i$$ in the original number series ($$i \in I = \{1,\ldots,n\}$$)
• $$r_j$$: number at position $$j$$ in the subsequence ($$j \in J = \{1,\ldots,l\}$$)

We will need the following binary variables:

• $$x_{i,j} \in \{0,1\}$$: 1 if and only if $$q_i = r_j$$

Using this notation, we can express that a number in the sequence can correspond to at most one number in the subsequence:

1. $$\sum_{j \in J}(x_{i,j}) \leq 1 \: \: \forall i \in I$$

Furthermore, each position in the subsequence needs to correspond to exactly one number in the original sequence:

2. $$\sum_{i \in I}(x_{i,j}) = 1 \: \: \forall j \in J$$

We also want the numbers to be in the correct sequence, i.e. $$r_j \le r_{j+1}$$. Using our $$x$$ variables, we can express that quite easily. For all $$j \in \{1,\ldots,l-1\}$$:

3. $$\sum_{i \in I}(x_{i,j} \cdot q_i) \leq \sum_{i \in I}(x_{i,j+1} \cdot q_i)$$

Now, this defines the problem, but the objective is still missing and depends on how you define "approximately evenly-spaced". A simple interpretation would be to minimize the difference between the two consecutive numbers with the largest and the two consecutive numbers with the smallest gap. For this purpose we could introduce a continuous variable $$y_1$$ which would be greater than or equal to the gap between the numbers that are the furthest apart, and a second continuous variable $$y_2$$ which would be smaller than or equal to the difference between the two numbers that are closest. Using these two variables we can add the following two constraints for every $$j \in \{1,\ldots,l-1\}$$:

4. $$\sum_{i \in I}(x_{i,j+1}\cdot q_i) - \sum_{i \in I}(x_{i,j}\cdot q_i) \leq y_1$$

5. $$\sum_{i \in I}(x_{i,j+1}\cdot q_i) - \sum_{i \in I}(x_{i,j}\cdot q_i) \geq y_2$$

We could then minimize $$y_1 - y_2$$. If this is 0, we have an evenly spaced sequence.
We can visually verify whether this gives a good result:

Example 1: (plot omitted)

Example 2: (plot omitted)

The first example doesn't look great; the second one, however, is perfect, as the red markers (subsequence) are evenly spaced. The problem with the first sequence is that it is indeed not possible to get $$y_1 - y_2$$ below 20, but visually this result isn't pleasing, since the largest gap is twice as big as the smallest gap. A visually much more pleasant result is achieved if we maximize $$y_2$$ instead. Just like before, the difference between the largest and the smallest gap is 20; however, the largest gap is now 140 and the smallest 120, which means that the gaps appear almost identical.

Example 1a (maximizing $$y_2$$): (plot omitted)

It seems like what you really want to minimize is the relative difference between $$y_2$$ and $$y_1$$, or the variance of the gaps. There is a problem with that, since MIP technology is limited to linear objectives. However, I think maximizing $$y_2$$ (or possibly a linear combination of $$y_2$$ and $$y_1$$, e.g. $$y_2-0.01y_1$$) will do something very similar. Whether this approach is useful in practice depends entirely on your actual numbers (size of $$n$$, size of $$l$$ in comparison to $$n$$). Alternatively, a heuristic approach, or maybe constraint programming (with a formulation very similar to this MIP model), which would allow non-linear objectives, should also work well.
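As a side note, for this one-dimensional setting the maximize-$$y_2$$ objective (largest possible minimum gap between chosen values) can also be solved without a MIP solver, via binary search on the gap combined with a greedy feasibility check. The sketch below illustrates that alternative under my own assumptions (it always keeps the first value, and assumes $$l \le n$$); it is not an implementation of the MIP above.

```python
# Illustrative sketch: choose l values from a sorted sequence so as to
# maximize the minimum gap between consecutive chosen values.
# Binary search on the candidate gap; a greedy scan checks feasibility.

def max_min_gap_subsequence(q, l):
    """Return (best_gap, subsequence) maximizing the minimum gap
    between consecutive picks. Assumes l <= len(q)."""
    q = sorted(q)

    def pick(gap):
        # Greedily take the first value, then each later value that is
        # at least `gap` away from the last one taken, up to l values.
        chosen = [q[0]]
        for v in q[1:]:
            if len(chosen) == l:
                break
            if v - chosen[-1] >= gap:
                chosen.append(v)
        return chosen

    lo, hi = 0, q[-1] - q[0]
    while lo < hi:  # find the largest gap for which l picks are feasible
        mid = (lo + hi + 1) // 2
        if len(pick(mid)) >= l:
            lo = mid
        else:
            hi = mid - 1
    return lo, pick(lo)
```

For instance, on the (made-up) times [0, 10, 30, 40, 50, 70, 100] with l = 4, the best achievable minimum gap is 30, attained by [0, 30, 70, 100].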
https://tex.stackexchange.com/questions/309009/how-to-create-a-table-where-a-column-is-text-that-wraps?noredirect=1
# how to create a table where a column is text that wraps?

The basic tabular environment seems oriented toward short entries, not text that wraps. What to do when the third column in this case can be multiple lines of text that should wrap?

\begin{tabular}{|c|c|c|}\hline
Vendor & Website & Description \\ \hline
Amazon & \href{https://amazon.com}{amazon.com} & long and windy description goes here \\
Avnet & \href{https://avnet.com}{avnet.com} & blah blah blah ... \\
\end{tabular}

• There is a p{<width>} type column (as opposed to c) that formats as paragraph text. – Steven B. Segletes May 11 '16 at 4:33

I use these small column definitions a lot for this kind of thing --- I hope they can help (and give hints on how to modify them to your taste):

\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{array, ragged2e}
\newcolumntype{P}[1]{>{}p{#1\linewidth}<{}}
\newcolumntype{R}[1]{>{\RaggedRight\arraybackslash}p{#1\linewidth}<{}}
\newcolumntype{M}[1]{>{\RaggedRight\arraybackslash}m{#1\linewidth}<{}}
\newcommand{\longtext}{This is quite a long test, but not so long as the text you get with \texttt{lipsum}, just to check things out.}
\begin{document}
\begin{tabular}{|c|c|P{0.3}|} \hline kind: & \texttt{P}-column & \longtext \\ \hline \end{tabular}

\begin{tabular}{|c|c|R{0.3}|} \hline kind: & \texttt{R}-column & \longtext \\ \hline \end{tabular}

\begin{tabular}{|c|c|M{0.3}|} \hline kind: & \texttt{M}-column & \longtext \\ \hline \end{tabular}
\end{document}

Normally justified text comes out badly in narrow columns; I normally prefer using \raggedright or, better, \RaggedRight for this kind of text. The \arraybackslash is there to avoid the infamous Misplaced \noalign error...
https://xianblog.wordpress.com/tag/genetics/
## down with Galton (and Pearson and Fisher…)

Posted in Books, Statistics, University life on July 22, 2019 by xi'an

In the last issue of Significance, which I read in Warwick prior to the conference, there is a most interesting article on Galton's eugenics, his heritage at University College London (UCL), and the overall trouble with honouring prominent figures of the past with memorials like named buildings or lectures… The starting point of this debate is a protest from some UCL students and faculty about UCL having a lecture room named after the late Francis Galton, who was a professor there and who further donated at his death most of his fortune to the university towards creating a professorship in eugenics. The protests are about Galton's involvement in the eugenics movement of the late 19th and early 20th century, as well as his professing racist opinions. My first reaction after reading about these protests was why not?! Named places or lectures, as well as statues and other memorials, have a limited utility, especially when the named person is long dead, and they certainly do not contribute to making a scientific theory [associated with the said individual] more appealing or more valid. And since "humans are [only] humans", to quote Stephen Stigler speaking in this article, it is unrealistic to expect great scientists to be perfect, all the more if one multiplies the codes for ethical or acceptable behaviours across ages and cultures. It is also more rational to use amphitheatre MS.02 and lecture room AC.18 rather than associate them with one name chosen out of many alumni or former professors. Predictably, another reaction of mine was why bother?!, as removing Galton's name from the items it is attached to is highly unlikely to change current views on eugenics or racism. On the contrary, it seems to detract from opposing the present versions of these ideologies.
Witness, for instance, some recent proposals linking genes and some form of academic success. Another of my (multiple) reactions was that, as stated in the article, these views of Galton's reflected the views and prejudices of the time, when the notions of races and inequalities between races (as well as genders and social classes) were almost universally accepted, including in scientific publications like the proceedings of the Royal Society and Nature. Karl Pearson launched the Annals of Eugenics in 1925 (after he started Biometrika) with the very purpose of establishing a scientific basis for eugenics. (An editorship that Ronald Fisher would later take over, along with his views on the differences between races, believing that "human groups differ profoundly in their innate capacity for intellectual and emotional development".) Starting from these prejudiced views, Galton set up a scientific and statistical approach to support them, by accumulating data and possibly modifying some of these views. But without much empathy for the consequences, as shown in this terrible quote I found when looking for more material: "I should feel but little compassion if I saw all the Damaras in the hand of a slave-owner, for they could hardly become more wretched than they are now…" As it happens, my first exposure to Galton was in my first probability course at ENSAE, when a terrific professor was peppering his lectures with historical anecdotes and used to mention Galton's data-gathering trip to Namibia, literally measuring local inhabitants in keeping with his physiognomical views, also reflected in the above attempt of his to superpose photographs to achieve the "ideal" thief…

## A precursor of ABC-Gibbs

Posted in Books, R, Statistics on June 7, 2019 by xi'an

All ABC algorithms, including ABC-PaSS introduced here, require that statistics are sufficient for estimating the parameters of a given model.
As mentioned above, parameter-wise sufficient statistics as required by ABC-PaSS are trivial to find for distributions of the exponential family. Since many population genetics models do not follow such distributions, sufficient statistics are known for the most simple models only. For more realistic models involving multiple populations or population size changes, only approximately-sufficient statistics can be found. While Gibbs sampling is not mentioned in the paper, this is indeed a form of ABC-Gibbs, with the advantage of not facing convergence issues thanks to the sufficiency. The drawback is that this setting is restricted to exponential families and hence difficult to extrapolate to non-exponential distributions, as using almost-sufficient (or not) summary statistics leads to incompatible conditionals and thus jeopardises the convergence of the sampler. When thinking a wee bit more about the case treated by Kousathanas et al., I am actually uncertain about the validation of the sampler. When the tolerance is equal to zero, this is not an issue as it reproduces the regular Gibbs sampler. Otherwise, each conditional ABC step amounts to introducing an auxiliary variable represented by the simulated summary statistic. Since the distribution of this summary statistic depends on more than the parameter for which it is sufficient, in general, it should also appear in the conditional distribution of other parameters. At least from this Gibbs perspective, it thus relies on incompatible conditionals, which makes the conditions proposed in our own paper all the more relevant.

## contemporary issues in hypothesis testing

Posted in Statistics on September 26, 2016 by xi'an

This week [at Warwick], among other things, I attended the CRiSM workshop on hypothesis testing, giving the same talk as at ISBA last June.
There was a most interesting and unusual talk by Nick Chater (from Warwick) about the psychological aspects of hypothesis testing, namely about the unnatural features of an hypothesis in everyday life, i.e., how far this formalism stands from human psychological functioning. Or what we know about it. And then my Warwick colleague Tom Nichols explained how his recent work on permutation tests for fMRIs, published in PNAS, testing hypotheses that should come out null on real data and finding a high rate of false positives, got the medical imaging community all up in arms due to over-simplified reports in the media questioning the validity of 15 years of research on fMRI and the related 40,000 papers! For instance, some of the headlines questioned the entire research in the area. Or transformed a software bug missing the boundary effects into a major flaw. (See this podcast on Not So Standard Deviations for a thoughtful discussion of the issue.) One conclusion of this story is to be wary of assertions when submitting a hot story to journals with a substantial non-scientific readership! The afternoon talks were equally exciting, with Andrew explaining to us live from New York why he hates hypothesis testing and prefers model building. With the birthday model as an example. And David Draper gave an encompassing talk about the distinctions between inference and decision, proposing a Jaynes information criterion and illustrating it on Mendel's historical [and massaged!] pea dataset. The next morning, Jim Berger gave an overview of the frequentist properties of the Bayes factor, with in particular a novel [to me] upper bound on the Bayes factor associated with a p-value (Sellke, Bayarri and Berger, 2001),

B₁₀(p) ≤ 1/(−e p log p),

with the specificity that B₁₀(p) is not testing the original hypothesis [problem] but a substitute where the null is the hypothesis that p is uniformly distributed, versus a non-parametric alternative that p is more concentrated near zero.
This reminded me of our PNAS paper on the impact of summary statistics upon Bayes factors. And of some forgotten reference studying Bayesian inference based solely on the p-value… It is too bad I had to rush back to Paris, as this made me miss the last talks of this fantastic workshop centred on maybe the most important aspect of statistics!

## a general framework for updating belief functions

Posted in Books, Statistics, University life on July 15, 2013 by xi'an

Pier Giovanni Bissiri, Chris Holmes and Stephen Walker have recently arXived the paper related to Stephen's talk in London for Bayes 250. When I heard the talk (of which some slides are included below), my interest was aroused by the facts that (a) the approach they investigated could start from a statistic, rather than from a full model, with obvious implications for ABC, & (b) the starting point could be the dual to the prior x likelihood pair, namely the loss function. I thus read the paper with this in mind. (And rather quickly, which may mean I skipped important aspects. For instance, I did not get into Section 4 to any depth. Disclaimer: I wasn't nor am I a referee for this paper!) The core idea is to stick to a Bayesian (hardcore?) line when missing the full model, i.e. the likelihood of the data, but wishing to infer about a well-defined parameter like the median of the observations. This parameter is model-free in that some degree of prior information is available in the form of a prior distribution. (This is thus the dual of frequentist inference: instead of a likelihood w/o a prior, they have a prior w/o a likelihood!) The approach in the paper is to define a "posterior" by using a functional type of loss function that balances fidelity to prior and fidelity to data.
The prior part (of the loss) ends up with a Kullback-Leibler loss, while the data part (of the loss) is an expected loss with respect to $l(\theta,x)$, ending up with the definition of a "posterior" proportional to $\exp\{ -l(\theta,x)\} \pi(\theta)$, the loss thus playing the role of a negative log-likelihood. I like very much the problematic developed in the paper, as I think it is connected with the real world and the complex modelling issues we face nowadays. I also like the insistence on coherence, like the updating principle when switching former posterior for new prior (a point sorely missed in this book!) The distinction between M-closed, M-open, and M-free scenarios is worth mentioning, if only as an entry to the Bayesian processing of pseudo-likelihood and proxy models. I am however not entirely convinced by the solution presented therein, in that it involves a rather large degree of arbitrariness. In other words, while I agree on using the loss function as a pivot for defining the pseudo-posterior, I am reluctant to put the same faith in the loss as in the log-likelihood (maybe a frequentist atavistic gene somewhere…) In particular, I think some of the choices are either hard or impossible to make and remain unprincipled (despite a call to the LP on page 7). I also consider the M-open case as remaining unsolved, as finding a convergent assessment about the pseudo-true parameter brings little information about the real parameter and the lack of fit of the superimposed model. Given my great expectations, I ended up being disappointed by the M-free case: there is no optimal choice for the substitute to the loss function, which sounds very much like a pseudo-likelihood (or log thereof). (I thought the talk was more conclusive about this, I presumably missed a slide there!) Another great expectation was to read about the proper scaling of the loss function (since L and wL are difficult to separate, except for monetary losses).
The authors propose a "correct" scaling based on balancing the faithfulness terms for a single observation, but this is not a completely tight argument (dependence on parametrisation and prior, notion of a single observation, &tc.) The illustration section contains two examples, one of which is a full-size or at least challenging genetic data analysis. The loss function is based on a logistic pseudo-likelihood and it provides results where the Bayes factor is in agreement with a likelihood ratio test using Cox' proportional hazard model. The issue about keeping the baseline function as unknown reminded me of the Robbins-Wasserman paradox Jamie discussed in Varanasi. The second example offers a nice feature of putting uncertainties onto box-plots, although I cannot trust very much the 95% credible sets. (And I do not understand why a unique loss would come to be associated with the median parameter, see p.25.) Watch out: Tomorrow's post contains a reply from the authors!

## top model choice week (#3)

Posted in Statistics, University life on June 19, 2013 by xi'an

To conclude this exciting week, there will be a final seminar by Veronika Rocková (Erasmus University) on Friday, June 21, at 11am at ENSAE in Room 14. Here is her abstract:

11am: Fast Dynamic Posterior Exploration for Factor Augmented Multivariate Regression, by Veronika Rocková

Advancements in high-throughput experimental techniques have facilitated the availability of diverse genomic data, which provide complementary information regarding the function and organization of gene regulatory mechanisms. The massive accumulation of data has increased demands for more elaborate modeling approaches that combine the multiple data platforms. We consider a sparse factor regression model, which augments the multivariate regression approach by adding a latent factor structure, thereby allowing for dependent patterns of marginal covariance between the responses.
In order to enable the identification of parsimonious structure, we impose spike and slab priors on the individual entries in the factor loading and regression matrices. The continuous relaxation of the point mass spike and slab enables the implementation of a rapid EM inferential procedure for dynamic posterior model exploration. This is accomplished by considering a nested sequence of spike and slab priors and various factor space cardinalities. Identified candidate models are evaluated by a conditional posterior model probability criterion, permitting trans-dimensional comparisons. Patterned sparsity manifestations such as an orthogonal allocation of zeros in factor loadings are facilitated by structured priors on the binary inclusion matrix. The model is applied to a problem of integrating two genomic datasets, where expression of microRNAs is related to the expression of genes with an underlying connectivity pathway network.

## The Windup Girl

Posted in Books on February 23, 2013 by xi'an

"The scientists here carry the haunted look of people who know they are under siege. They know that beyond a few doors, all manners of apocalyptic terrors wait to swallow them."

The book by Paolo Bacigalupi was standing among a shelf of recommended reads at Waterstones near UCL, during my last visit there, and the connection with William Gibson made on the cover pushed me to buy the book. Plus the Hugo and Nebula Awards. And the cover, of course. I took advantage of this trip to Hamburg to read The Windup Girl and I found the book definitely a great read.

"Flotsam of the Old Expansion. An ancient piece of driftwood left at high tide, from the time petroleum was cheap and men and women crossed the globe in hours instead of weeks."

The Windup Girl has indeed some flavour of Gibson's Neuromancer and Stephenson's Snow Crash; however, the story is more psychological and less technological than those two classics.
There is a darker tone to the novel, as Earth is suffering both from the end of oil and from various food plagues that destroyed most crops, not to mention deadly new viruses. The new powers are the big genetically-engineered-seed producers, while part of the World has been eradicated. (The power is now produced by genetically engineered mammoths called megodonts.) And pollution is strictly kept under control.

"It has the markings of an engineering virus. DNA shifts don't look like ones that would reproduce in the wild. Blister rust has no reason to jump the animal kingdom barrier. Nothing is encouraging it, it is not easily transferred. The differences are marked. It's as though we're looking into its future."

The story is set in Thailand, which has somehow miraculously salvaged a huge seed bank and which manages to keep those crop companies at bay. Of course, things are deteriorating as the book begins, otherwise there would be no story. What I like the most about The Windup Girl is this bleak vision of a harsh future, set in Asia and told through four different story threads belonging to completely separate cultures (Thai, Chinese, American, and new-Japanese), thus avoiding the usual ethnocentrism of such novels. As mentioned above, the story is definitely not as technological or geeky as cyberpunk novels and it does not even qualify as genepunk, as the amount of genetics involved in the story is somehow limited (except for three newly created races all impacting the plot). But the dystopian universe created by Paolo Bacigalupi is definitely both convincing and mesmerising, while not requiring so many suspensions of disbelief. The characters are all well-set, with the proper degree of greyness in their ethics, and the political manoeuvring is realistic.
I also feel The Windup Girl is quite in tune with (my) current worries about the future fate of humanity faced with rapid climate change, an increasing frequency of natural disasters, and correlated insect invasions. Lastly, the relation of some of the characters to (Thai) Buddhism is an interesting peculiarity of the novel. So a book truly worth recommending! (In Spanish, the title of the book is La Chica Mecánica, which I find less appealing than the multilayered Windup Girl! The multiple covers on this 'Og page are actually virtual covers suggested by fans, follow the links to get the whole story.)

## genetics

Posted in Books, Kids, Travel, University life on April 9, 2012 by xi'an

Today, I was reading in the science leaflet of Le Monde about a new magnitude in sequencing cancerous tumors (wrong link, I know…). This made me wonder whether the sequence of (hundreds of) mutations leading from a normal cell to a cancerous one could be reconstituted in the way a genealogy is. (This reminds me of another exciting genetic article I read in the Eurostar back from London on Thursday, in the Economist, about the colonization of Madagascar by 30 women from the Malay archipelago: "The island was one of the last places on Earth to be settled, receiving its earliest migrants in the middle of the first millennium AD…") As a double coincidence, I was reading La Recherche yesterday in the métro to Dauphine, whose central theme this month is heredity beyond genetics. (Double because this also connected with the meeting in London.) The keyword is epigenetics, namely the activation or inactivation of a gene and the hereditary transmission of this character w/o a genetic mutation. This is quite interesting as it implies the heritability of some acquired traits, i.e. forces one to reconsider the nature versus nurture debate. (This sentence is another input due to Galton!)
It also implies that a much faster rate of species differentiation due to environmental changes (than the purely genetic one) is possible, which may sound promising in the light of the fast climate changes we are currently facing. However, what I do not understand is why the journal included a paper on the consequences of epigenetics on the Darwinian theory of evolution and… intelligent design. Indeed, I do not see why the inclusion of different vectors in the hereditary process would contradict Darwin’s notion of natural selection. Or even why considering a scientific modification or replacement of the current Darwinian theory of evolution would be an issue. Charles Darwin wrote his book in 1859, prior to the start of genetics, and the immense advances made since then led to modifications and adjustments from his original views. Without involving any irrational belief in the process.
https://collegephysicsanswers.com/openstax-solutions/myopic-person-sees-her-contact-lens-prescription-400-d-what-her-far-point
Question

A myopic person sees that her contact lens prescription is –4.00 D. What is her far point?

$25.0\textrm{ cm}$

# OpenStax College Physics Solution, Chapter 26, Problem 18 (Problems & Exercises) (2:15)

Video Transcript

This is College Physics Answers with Shaun Dychko. This person has a contact lens prescription with a power of negative 4.00 diopters and the question is what is their far point without any contact lenses on?
So their far point with the contact lenses will be infinity because the goal of the lenses is to create normal vision, which has a far point of infinity. So this change in power is the difference between the power for normal vision— looking at the far point— minus the power with no correction when looking at their far point and so this is 1 over the normal object distance of the far point which is infinity plus 1 over the image distance, which is the distance between the lens and the retina and then we'll subtract from that the power for uncorrected vision for this person, which is 1 over their far point which is what we have to find here plus 1 over the same image distance as before because this is a distance inside the eye between the lens and the retina. So this 1 over d i works out to zero here because it's plus 1 over d i minus positive 1 over d i and so this makes zero there and we are left with 1 over normal object distance in the far point minus 1 over object distance without correction. So the normal far point is infinity so this fraction is zero then and so we have a negative 1 over d o is ΔP. So negative 1 over their far point without correction is this correction provided by the contact lens. So we can raise both sides to exponent negative 1 and we have d o then is negative 1 over ΔP— I guess I multiplied both sides by negative 1 here too by the way to move the negative sign to the other side— so that's negative of 1 over negative 4.00 diopters which is positive 0.250 meters, which is 25.0 centimeters. So their far point is 25.0 centimeters—coincidentally the same distance as the normal near point—meaning this is as far away as they can see clearly without correction.
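The transcript's algebra collapses to d₀ = −1/ΔP. A quick numerical check of that arithmetic (the helper name below is my own, not from the site):

```python
# For a myopic eye, the corrective power satisfies ΔP = -1/d_o,
# where d_o is the uncorrected far point, so d_o = -1/ΔP.

def far_point_from_prescription(delta_P):
    """Uncorrected far point in metres, given the corrective lens
    power delta_P in dioptres (negative for myopia)."""
    return -1.0 / delta_P

d_o = far_point_from_prescription(-4.00)
print(d_o)  # 0.25 m, i.e. 25.0 cm
```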
http://codeforces.com/problemset/problem/793/A
A. Oleg and shares

time limit per test: 1 second
memory limit per test: 256 megabytes
input: standard input
output: standard output

Oleg the bank client checks share prices every day. There are n share prices he is interested in. Today he observed that each second exactly one of these prices decreases by k rubles (note that each second exactly one price changes, but at different seconds different prices can change). Prices can become negative. Oleg found this process interesting, and he asked Igor the financial analyst, what is the minimum time needed for all n prices to become equal, or is it impossible at all? Igor is busy right now, so he asked you to help Oleg. Can you answer this question?

Input

The first line contains two integers n and k (1 ≤ n ≤ 10^5, 1 ≤ k ≤ 10^9) — the number of share prices, and the amount of rubles by which a price decreases each second. The second line contains n integers a1, a2, ..., an (1 ≤ ai ≤ 10^9) — the initial prices.

Output

Print the only line containing the minimum number of seconds needed for the prices to become equal, or «-1» if it is impossible.

Examples

Input
3 3
12 9 15
Output
3

Input
2 2
10 9
Output
-1

Input
4 1
1 1000000000 1000000000 1000000000
Output
2999999997

Note

Consider the first example. Suppose the third price decreases in the first second and becomes equal to 12 rubles, then the first price decreases and becomes equal to 9 rubles, and in the third second the third price decreases again and becomes equal to 9 rubles. In this case all prices become equal to 9 rubles in 3 seconds. There could be other possibilities, but this minimizes the time needed for all prices to become equal. Thus the answer is 3.

In the second example we can notice that the parity of the first and second prices is different and never changes within the described process. Thus the prices can never become equal.

In the third example the following scenario can take place: first the second price drops, then the third price, and then the fourth price.
It happens 999999999 times, and, since in one second only one price can drop, the whole process takes 999999999 * 3 = 2999999997 seconds. We can note that this is the minimum possible time.
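The reasoning in the note — every price must drop to the minimum price, each k-ruble drop costs one second, and the task is impossible whenever some difference is not a multiple of k — can be sketched as a short program. The function name below is mine, not part of the problem:

```python
# Sketch of a solution: sum (a_i - min) / k over all prices, or -1 if some
# difference is not divisible by k (the prices can then never meet).
def min_seconds(k, prices):
    lo = min(prices)
    total = 0
    for a in prices:
        diff = a - lo
        if diff % k != 0:
            return -1          # e.g. the parity argument in the second example
        total += diff // k
    return total

if __name__ == "__main__":
    print(min_seconds(3, [12, 9, 15]))                       # 3
    print(min_seconds(2, [10, 9]))                           # -1
    print(min_seconds(1, [1, 10**9, 10**9, 10**9]))          # 2999999997
```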
https://www.shaalaa.com/question-bank-solutions/physical-properties-amines-arrange-following-increasing-order-their-basic-strength-ch3nh2-ch3-2nh-ch3-3n-c6h5nh2-c6h5ch2nh2_9463
# Arrange the following in increasing order of their basic strength CH3NH2, (CH3)2NH, (CH3)3N, C6H5NH2, C6H5CH2NH2 - CBSE (Science) Class 12 - Chemistry #### Question Arrange the following in increasing order of their basic strength CH3NH2, (CH3)2NH, (CH3)3N, C6H5NH2, C6H5CH2NH2. #### Solution Considering the inductive effect and the steric hindrance of the alkyl groups, CH3NH2, (CH3)2NH, and (CH3)3N can be arranged in increasing order of their basic strengths as: (CH_3)_3N < CH_3NH_2 < (CH_3)_2NH In C6H5NH2, N is directly attached to the benzene ring. Thus, the lone pair of electrons on the N-atom is delocalized over the benzene ring. In C6H5CH2NH2, N is not directly attached to the benzene ring, so its lone pair is not delocalized over the benzene ring. Therefore, the electrons on the N-atom are more easily available for protonation in C6H5CH2NH2 than in C6H5NH2, i.e., C6H5CH2NH2 is more basic than C6H5NH2. Again, due to the −I effect of the C6H5 group, the electron density on the N-atom in C6H5CH2NH2 is lower than that on the N-atom in (CH3)3N. Therefore, (CH3)3N is more basic than C6H5CH2NH2. Thus, the given compounds can be arranged in increasing order of their basic strengths as follows: C_6H_5NH_2 < C_6H_5CH_2NH_2 < (CH_3)_3N < CH_3NH_2 < (CH_3)_2NH
http://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-12th-edition/chapter-4-section-4-4-multiplying-polynomials-4-4-exercises-page-304/13
## Intermediate Algebra (12th Edition) $18k^4+12k^3+6k^2$ Using $a(b+c)=ab+ac$ or the Distributive Property, the product of the given expression, $6k^2(3k^2+2k+1) ,$ is \begin{array}{l}\require{cancel} 6k^2(3k^2)+6k^2(2k)+6k^2(1) \\\\= 18k^4+12k^3+6k^2 .\end{array}
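The distributive expansion above can be double-checked mechanically by multiplying coefficient lists — a small illustrative sketch, not part of the textbook solution:

```python
# Multiply two polynomials given as coefficient lists (lowest degree first),
# then check 6k^2 * (3k^2 + 2k + 1) = 18k^4 + 12k^3 + 6k^2.
def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# 6k^2 -> [0, 0, 6];  3k^2 + 2k + 1 -> [1, 2, 3]
print(poly_mul([0, 0, 6], [1, 2, 3]))  # [0, 0, 6, 12, 18], i.e. 6k^2 + 12k^3 + 18k^4
```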
https://solvedlib.com/solve-step-by-step-if-you-do-not-know-do-not,428165
# Solve step by step, if you do not know do not answer, thanks — 6. Rectangular pipe ###### Question: A rectangular metal pipe of length L with sides (a, b) is placed along the x axis and centered at the origin. The potential on the sides is zero: V(x, y = ±a/2, z) = V(x, y, z = ±b/2) = 0. The potential at x = −L/2 is V1, the potential at x = L/2 is V2. Find the potential inside the pipe.
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=tmf&paperid=5168&option_lang=eng
TMF, 1986, Volume 68, Number 2, Pages 172–186 (Mi tmf5168) Solitons of the nonlinear Schrödinger equation generated by the continuum V. P. Kotlyarov, E. Ya. Khruslov Abstract: A study is made of the large-time asymptotic behavior of the solutions of the nonlinear Schrödinger equation with attraction that tend to zero as $x\to+\infty$ and to a finite-gap solution of the equation as $x\to-\infty$. It is shown that in the region of the leading edge such solutions decay in the limit $t\to\infty$ into an infinite series of solitons with variable phases, the solitons being generated by the continuous spectrum of the operator $L$ of the corresponding Lax pair. English version: Theoretical and Mathematical Physics, 1986, 68:2, 751–761 Bibliographic databases: Citation: V. P. Kotlyarov, E. Ya. Khruslov, “Solitons of the nonlinear Schrödinger equation generated by the continuum”, TMF, 68:2 (1986), 172–186; Theoret. and Math. Phys., 68:2 (1986), 751–761 Citation in format AMSBIB \Bibitem{KotKhr86} \by V.~P.~Kotlyarov, E.~Ya.~Khruslov \paper Solitons of the nonlinear Schr\"odinger equation generated by the continuum \jour TMF \yr 1986 \vol 68 \issue 2 \pages 172--186 \mathnet{http://mi.mathnet.ru/tmf5168} \mathscinet{http://www.ams.org/mathscinet-getitem?mr=871046} \zmath{https://zbmath.org/?q=an:0621.35092} \transl \jour Theoret. and Math. Phys.
\yr 1986 \vol 68 \issue 2 \pages 751--761 \crossref{https://doi.org/10.1007/BF01035537} \isi{http://gateway.isiknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&DestLinkType=FullRecord&DestApp=ALL_WOS&KeyUT=A1986G528100002} • http://mi.mathnet.ru/eng/tmf5168 • http://mi.mathnet.ru/eng/tmf/v68/i2/p172 This publication is cited in the following articles: 1. V. P. Kotlyarov, “Asymptotic solitons of the sine-Gordon equation”, Theoret. and Math. Phys., 80:1 (1989), 679–689 2. R. F. Bikbaev, R. A. Sharipov, “Asymptotics at $t\to\infty$ of the solution to the Cauchy problem for the Korteweg–de Vries equation in the class of potentials with finite-gap behavior as $x\to\pm\infty$”, Theoret. and Math. Phys., 78:3 (1989), 244–252 3. Anders, I, “Asymptotic solitons of the Johnson equation”, Journal of Nonlinear Mathematical Physics, 7:3 (2000), 284 4. V. B. Baranetskii, V. P. Kotlyarov, “Asymptotic behavior in the trailing edge domain of the solution of the KdV equation with an initial condition of the “threshold type””, Theoret. and Math. Phys., 126:2 (2001), 175–186 5. Anders, I, “Soliton asymptotics of nondecaying solutions of the modified Kadomtsev-Petviashvili-I equation”, Journal of Mathematical Physics, 42:8 (2001), 3673 6. Egorova, I, “On the Cauchy problem for the Korteweg-de Vries equation with steplike finite-gap initial data: I. Schwartz-type perturbations”, Nonlinearity, 22:6 (2009), 1431 7. Kotlyarov V., Minakov A., “Riemann–Hilbert problem to the modified Korteveg-de Vries equation: Long-time dynamics of the steplike initial data”, J Math Phys, 51:9 (2010), 093506 8. A. Minakov, “Asymptotics of rarefaction wave solution to the mKdV equation”, Zhurn. matem. fiz., anal., geom., 7:1 (2011), 59–86 9.
Minakov A., “Long-time behavior of the solution to the mKdV equation with step-like initial data”, J. Phys. A: Math. Theor., 44:8 (2011), 085206 10. Egorova I., Teschl G., “On the Cauchy Problem for the Kortewegde Vries Equation With Steplike Finite-Gap Initial Data II. Perturbations With Finite Moments”, J Anal Math, 115 (2011), 71–101 11. V. Kotlyarov, A. Minakov, “Step-initial function to the mKdV equation: hyper-elliptic long-time asymptotics of the solution”, Zhurn. matem. fiz., anal., geom., 8:1 (2012), 38–62 12. Zhu J. Wang L. Qiao Zh., “Inverse Spectral Transform For the Ragnisco-Tu Equation With Heaviside Initial Condition”, J. Math. Anal. Appl., 474:1 (2019), 452–466
https://library.kiwix.org/datascience.stackexchange.com_en_all_2021-04/A/question/10000.html
## What is the difference between a (dynamic) Bayes network and a HMM? I have read that HMMs, Particle Filters and Kalman filters are special cases of dynamic Bayes networks. However, I only know HMMs and I don't see the difference to dynamic Bayes networks. It would be nice if your answer could be similar to the following, but for Bayes networks: ## Hidden Markov Models A Hidden Markov Model (HMM) is a 5-tuple $\lambda = (S, O, A, B, \Pi)$: • $S \neq \emptyset$: A set of states (e.g. "beginning of phoneme", "middle of phoneme", "end of phoneme") • $O \neq \emptyset$: A set of possible observations (audio signals) • $A \in \mathbb{R}^{|S| \times |S|}$: A stochastic matrix which gives the probabilities $(a_{ij})$ of getting from state $i$ to state $j$. • $B \in \mathbb{R}^{|S| \times |O|}$: A stochastic matrix which gives the probabilities $(b_{kl})$ of getting observation $l$ in state $k$. • $\Pi \in \mathbb{R}^{|S|}$: Initial distribution to start in one of the states. It is usually displayed as a directed graph, where each node corresponds to one state $s \in S$ and the transition probabilities are denoted on the edges. Hidden Markov Models are called "hidden" because the current state is hidden. The algorithms have to guess it from the observations and the model itself. They are called "Markov" because for the next state only the current state matters. For HMMs, you give a fixed topology (number of states, possible edges). Then there are 3 possible tasks: • Evaluation: given an HMM $\lambda$, how likely is it to get observations $o_1, \dots, o_t$ (Forward algorithm) • Decoding: given an HMM $\lambda$ and observations $o_1, \dots, o_t$, what is the most likely sequence of states $s_1, \dots, s_t$ (Viterbi algorithm) • Learning: learn $A, B, \Pi$: Baum-Welch algorithm, which is a special case of Expectation maximization. ## Bayes networks Bayes networks are directed acyclic graphs (DAGs) $G = (\mathcal{X}, \mathcal{E})$.
The nodes represent random variables $X \in \mathcal{X}$. For every $X$, there is a probability distribution which is conditioned on the parents of $X$: $$P(X|\text{parents}(X))$$ • Inference: Given some variables, get the most likely values of the other variables. Exact inference is NP-hard. Approximately, you can use MCMC. • Learning: How you learn those distributions depends on the exact problem (source): • known structure, fully observable: maximum likelihood estimation (MLE) • known structure, partially observable: Expectation Maximization (EM) or Markov Chain Monte Carlo (MCMC) • unknown structure, fully observable: search through model space • unknown structure, partially observable: EM + search through model space ## Dynamic Bayes networks I guess dynamic Bayes networks (DBNs) are also directed probabilistic graphical models. The variability seems to come from the network changing over time. However, it seems to me that this is equivalent to only copying the same network and connecting every node at time $t$ with the corresponding node at time $t+1$. Is that the case? I asked someone about this and they said: "HMMs are just special cases of dynamic Bayes nets, with each time slice containing one latent variable, dependent on the previous one to give a Markov chain, and one observation dependent on each latent variable. DBNs can have any structure that evolves over time." – ashley – 2017-11-27T22:12:34.983 • You can also learn the topology of an HMM. • When doing inference with BNs, besides asking for maximum likelihood estimates, you can also sample from the distributions, estimate the probabilities, or do whatever else probability theory lets you. • A DBN is just a BN copied over time, with some (not necessarily all) nodes chained from past to the future. • In this sense, an HMM is a simple DBN with just two nodes in each time-slice and one of the nodes chained over time. – KT.
– 2016-02-03T09:25:34.987 HMMs are not equivalent to DBNs; rather, they are a special case of DBNs in which the entire state of the world is represented by a single hidden state variable. Other models within the DBN framework generalize the basic HMM, allowing for more hidden state variables (see the second paper above for the many varieties). Finally, no, DBNs are not always discrete. For example, linear Gaussian state models (Kalman Filters) can be conceived of as continuous-valued HMMs, often used to track objects in space. I'd recommend looking through these two excellent review papers:
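To make the Evaluation task above concrete, here is a minimal forward-algorithm sketch in plain Python. The two-state model and all probabilities are made-up illustrative numbers, not from the question:

```python
# Forward algorithm for HMM evaluation: computes P(o_1..o_t | lambda).
# A = transition matrix, B = emission matrix, pi = initial distribution.
def forward(obs, A, B, pi):
    n_states = len(pi)
    # alpha[i] = P(o_1..o_k, state_k = i), updated one observation at a time
    alpha = [pi[i] * B[i][obs[0]] for i in range(n_states)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[i] * A[i][j] for i in range(n_states)) * B[j][o]
            for j in range(n_states)
        ]
    return sum(alpha)

A  = [[0.7, 0.3], [0.4, 0.6]]   # illustrative state-transition matrix
B  = [[0.9, 0.1], [0.2, 0.8]]   # illustrative emission matrix (2 observations)
pi = [0.5, 0.5]
print(forward([0, 1, 0], A, B, pi))  # ≈ 0.099375
```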
https://www.physicsforums.com/threads/maxwell-boltzmann-distribution-question.282670/
# Maxwell Boltzmann Distribution Question Hello everyone ## Homework Statement The equivalent of the Maxwell-Boltzmann distribution for a two-dimensional gas is $P(v) = Cv e^{-\frac{mv^2}{kT}}$ Determine $C$ so that $\int_0^\infty P(v)dv = N$ Not really sure ## The Attempt at a Solution I wasn't really sure how to tackle this question so I figured I'd integrate $P(v)$ since the question says that'll equal N. $\int_0^\infty P(v)dv$ $\int_0^\infty Cv e^{-\frac{mv^2}{kT}} dv$ $C\int_0^\infty v e^{-\frac{mv^2}{kT}} dv$ $u = \frac{mv^2}{kT}$ $\frac{du}{dv} = \frac{2mv}{kT}$ $dv = \frac{kT\,du}{2mv}$ $C\int_0^\infty v e^{-u} \frac{kT\,du}{2mv}$ $C\int_0^\infty e^{-u} \frac{kT\,du}{2m}$ $\frac{CkT}{2m} \int_0^\infty e^{-u} du$ $= \frac{CkT}{2m} \bigg[-e^{-u}\bigg]_0^\infty$ $= \frac{CkT}{2m} \bigg[-e^{-\frac{mv^2}{kT}}\bigg]_0^\infty$ I'm not really sure where to go from here. How would I evaluate this between infinity and zero? Thanks G01 Homework Helper Gold Member HINT: You said the entire integral has to equal N, correct? Well, your last line is equal to the integral. So if A = B and B = C ... ? ok, so you're saying $N = \frac{CkT}{2m} \bigg[-e^{-\frac{mv^2}{kT}}\bigg]_0^\infty$ which yeah, makes sense. But do you want me to rearrange it to make C the subject while not evaluating the integral? G01 Homework Helper Gold Member ok, so you're saying $N = \frac{CkT}{2m} \bigg[-e^{-\frac{mv^2}{kT}}\bigg]_0^\infty$ which yeah, makes sense. But do you want me to rearrange it to make C the subject while not evaluating the integral? No. Evaluate the integral and then solve for C. That should be your answer. No. Evaluate the integral and then solve for C. That should be your answer. I thought so. This might seem stupid, but I really don't know how to evaluate the integral when one of the limits is $\infty$. Could you shed some light on that please? G01 Homework Helper Gold Member I thought so.
**G01:** Evaluate with a finite upper limit first:

$$N = \frac{CkT}{2m}\Big[-e^{-\frac{mv^2}{kT}}\Big]_0^{t}$$

Now take that whole resulting expression and take the limit as $t\rightarrow\infty$. Now can you solve for C?

**OP:** OK, so $\Big[-e^{-\frac{mv^2}{kT}}\Big]_0^t$ will give me $\lim_{t \to \infty}\big(-e^{-\frac{mt^2}{kT}} + 1\big)$, yeah? Sorry if this is annoying; I've never actually done something like this, which seems a bit strange considering a question requires it. I appreciate the help. I used a site that provides a nice integral and limit calculator: http://www.numberempire.com/limitcalculator.php From that, I get

$$\lim_{v\to 0}\left(-\frac{kT\,e^{-\frac{mv^2}{kT}}}{2m}\right) = -\frac{kT}{2m}$$

So:

$$C \times \left[-\frac{kT\,e^{-\frac{mv^2}{kT}}}{2m}\right]_0^{\infty} = C\times \frac{kT}{2m}$$

Have I missed something obvious?
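The thread stops just short of the last step: since the bracket evaluates to 1, $N = \frac{CkT}{2m}$, so $C = \frac{2mN}{kT}$. A quick numerical sketch confirms the normalization; the values $m/(kT) = 1$ and $N = 1$ below are illustrative choices for the check, not part of the original problem.

```python
import math

# Check: for P(v) = C v exp(-a v^2) with a = m/(kT), the thread's result
# N = C*kT/(2m) = C/(2a) implies C = 2*a*N. Integrate numerically and
# verify the total probability comes out to N.
a = 1.0          # stands in for m/(kT); illustrative value
N = 1.0
C = 2.0 * a * N  # C = 2mN/(kT), solving the thread's final equation for C

# crude trapezoidal integration; the integrand is negligible beyond v = 10
dv = 1e-4
total = 0.0
v = 0.0
while v < 10.0:
    f0 = C * v * math.exp(-a * v * v)
    f1 = C * (v + dv) * math.exp(-a * (v + dv) ** 2)
    total += 0.5 * (f0 + f1) * dv
    v += dv

print(round(total, 4))  # 1.0, i.e. the integral recovers N
```

The same substitution used in the thread shows why: the integral collapses to $C/(2a)\int_0^\infty e^{-u}\,du = C/(2a)$, independent of the cutoff once it is large enough.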
http://k-skrzypek.pl/determine-the-maximum-shear-stress-developed-in-the-40-mm-diameter-shaft.html
# Determine the maximum shear stress developed in the 40-mm-diameter shaft

This page aggregates torsion and shear-stress homework problems built around one question: determine the maximum shear stress developed in a 40-mm-diameter shaft, and the maximum torque T that can be transmitted.

Key relations used throughout:

- Torsional shear stress in a circular shaft: τmax = Tc/J, where T is the torque, c the outer radius of the shaft, and J the polar moment of inertia of the cross-sectional area. For a hollow circular section, J = (π/2)(R⁴ − r⁴). The shear stress varies linearly from zero at the axis to a maximum at the outside surface of the shaft.
- Transverse shear stress in a beam: τ = VQ/(It), which gives the shear stress at any vertical point in the cross section of a narrow rectangular beam. The shear stress in a beam is not uniform; it varies from zero at the outer fibres to a maximum at the neutral axis. For a circular section the maximum is 4/3 of the average shear stress, i.e. 33% more than the average.

Worked example (tubular shaft): a tube with outer radius R = 40 mm and inner radius r = 30 mm carries T = 2100 N·m = 2100 × 10³ N·mm. Then J = (π/2)(40⁴ − 30⁴) = 2.75 × 10⁶ mm⁴, so τmax = Tc/J = 2100 × 10³ × 40 / (2.75 × 10⁶) ≈ 30.55 MPa.

Representative problems from the collection:

- The copper pipe has an outer diameter of 40 mm and an inner diameter of 37 mm. If it is tightly secured to the wall at A and three torques are applied to it, determine the absolute maximum shear stress developed in the pipe.
- A solid shaft is transmitting 1 MW at 240 r.p.m. Find a suitable diameter for the shaft if the maximum torque transmitted exceeds the mean by 25% and the maximum shear stress is limited to 60 N/mm².
- The shearing stress of a solid shaft is not to exceed 40 N/mm² when the torque transmitted is 20 000 N·m. Find the minimum diameter of the shaft.
- A solid shaft, 100 mm in diameter, transmits 75 kW at 150 rev/min. Determine the maximum shear stress.
- A solid circular shaft of 60 mm diameter transmits a torque of 1600 N·m. Determine the maximum shear stress, and the angle of twist for a length of 2 m.
- A shaft of 50 mm diameter transmits a torque of 800 N·m. Determine the maximum shear stress induced.
- If the gears are subjected to the torques shown, determine the maximum shear stress developed in segments AB and BC of the A-36 steel shaft.
- The solid 30-mm-diameter shaft is used to transmit the torques applied to the gears; determine the absolute maximum shear stress in the shaft.
- If gears A and B remove 1 kW and 2 kW, respectively, determine the maximum shear stress developed in the shaft within regions AB and BC.
- The A-36 hollow steel shaft is 2 m long and has an outer diameter of 40 mm. If it is fixed at its ends A and B and subjected to a torque of 750 N·m, determine the maximum shear stress in the shaft.
- A rotating shaft, 40 mm in diameter, is made of steel FeE 580 (Syt = 580 N/mm²). It is subjected to a steady torsional moment of 250 N·m and a bending moment of 1250 N·m. Calculate the factor of safety based on (i) the maximum principal stress theory and (ii) the maximum shear stress theory.
- A hollow steel shaft 2540 mm long must transmit a torque of 34 kN·m; the maximum shear stress allowed in the shaft is 80 MPa and the ratio of the inner diameter to the outer diameter is 3/4.
- The steel shaft is made from two segments: AC has a diameter of 12 mm and CB has a diameter of 25 mm. Determine the maximum shear stress developed in each segment.
- A solid steel shaft of 60 mm diameter is subjected to a torque of 5 kN·m at the free end. Determine the maximum shear stress and the angle of twist (take C = 85 GPa).
- Determine the required diameter d of the shaft to the nearest mm if the allowable shear stress for the material is τallow = 50 MPa.
- Determine the minimum shaft diameter at the coupling for a 4-stage pump operating at 3,560 RPM where the maximum horsepower at the end of the curve is 850 bhp. Use 4140 shaft material with a limiting stress value of 6,500 psi.
- Compute the average shear stress developed in a plate (10 mm thickness) under the action of a piston (40 mm diameter) subjected to a force of 50 kN.
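The torsion formula τmax = Tc/J that these problems rely on can be checked with a short script. The first calculation reproduces the 2100 N·m tubular-shaft numbers that appear on this page (outer radius 40 mm, inner radius 30 mm); the second sizes a solid shaft for the 1 MW, 240 r.p.m. problem, assuming the 25% torque-overshoot variant and τallow = 60 MPa. This is an illustrative sketch, not design software.

```python
import math

def j_hollow(R, r):
    """Polar moment of inertia of a hollow circular section, mm^4."""
    return math.pi / 2.0 * (R ** 4 - r ** 4)

def tau_max(T, c, J):
    """Maximum torsional shear stress in MPa (T in N*mm, c in mm, J in mm^4)."""
    return T * c / J

# Tubular shaft: T = 2100 N*m, outer radius 40 mm, inner radius 30 mm.
tau = tau_max(2100e3, 40.0, j_hollow(40.0, 30.0))
print(round(tau, 2))  # 30.56 MPa, matching the ~30.55 MPa worked on the page

# Sizing a solid shaft: 1 MW at 240 rpm, tau_allow = 60 MPa, T_max = 1.25*T_mean.
P, N_rpm, tau_allow = 1e6, 240.0, 60.0
T_mean = P / (2.0 * math.pi * N_rpm / 60.0) * 1e3      # mean torque, N*mm
T_max = 1.25 * T_mean
d = (16.0 * T_max / (math.pi * tau_allow)) ** (1.0 / 3.0)  # from tau = 16T/(pi d^3)
print(round(d, 1))  # required diameter, about 161.6 mm
```

The solid-shaft branch uses J = πd⁴/32 with c = d/2, which collapses to τ = 16T/(πd³); solving that for d gives the cube-root expression above.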
2022-06-25 11:51:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6885690093040466, "perplexity": 1188.0294780473173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103034930.3/warc/CC-MAIN-20220625095705-20220625125705-00357.warc.gz"}
http://wscg.zcu.cz/wscg2003/Papers_2003/I19.htm
# Shading by Spherical Linear Interpolation using De Moivre’s Formula

Anders Hast, University of Gavle, Creative Media Lab., S-801 76 Gavle, Sweden. e-mail: aht@hig.se http://

Keywords: Shading, Normalization, Slerp, De Moivre's formula.

Abstract: In the classical shading algorithm according to Phong, the normal is interpolated across the scanline, requiring a computationally expensive normalization in the inner loop. In the simplified and faster method by Gouraud, the intensity is interpolated instead, leading to faster but less accurate shading. In this paper we use a third way of doing the interpolation, namely spherical linear interpolation of the normals across the scanline. This has been explored before, however, the shading computation requires the evaluation of a cosine in the inner loop and this is too expensive to be efficient. By reformulating the original approach in a suitable way, De Moivre’s formula can be used directly for computing the intensity so that no normalization is needed. Hence, no trigonometric functions, divisions or square roots are necessary to compute in the inner loop. Unfortunately the setup for each scanline will be rather slow unless some efficient reformulation of the necessary trigonometric calculations can be found. We suggest this problem for future research.
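The key property the abstract relies on can be illustrated with plain spherical linear interpolation of normals (this is a sketch of ordinary slerp, not the paper's De Moivre reformulation): the interpolated vector stays unit length, so no per-pixel normalization is needed.

```python
import math

def slerp(n0, n1, t):
    """Spherical linear interpolation between two unit normals n0 and n1."""
    dot = sum(a * b for a, b in zip(n0, n1))
    omega = math.acos(max(-1.0, min(1.0, dot)))  # angle between the normals
    s = math.sin(omega)                          # assumes the normals are not parallel
    w0 = math.sin((1 - t) * omega) / s
    w1 = math.sin(t * omega) / s
    return tuple(w0 * a + w1 * b for a, b in zip(n0, n1))

# Halfway between two perpendicular unit normals: the result is already unit
# length, so Phong's expensive per-pixel normalization is unnecessary.
n = slerp((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 0.5)
length = math.sqrt(sum(c * c for c in n))
print(round(length, 6))  # 1.0
```

The paper's contribution is computing the sin terms above incrementally via De Moivre's formula so that no trigonometric functions remain in the inner loop.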
2019-02-18 06:37:39
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8293152451515198, "perplexity": 948.6760840087607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247484689.3/warc/CC-MAIN-20190218053920-20190218075920-00601.warc.gz"}
http://jokerwang.com/wp-content/one/278.html
In trapezoid $$ABCD$$, leg $$\overline{BC}$$ is perpendicular to bases $$\overline{AB}$$ and $$\overline{CD}$$, and diagonals $$\overline{AC}$$ and $$\overline{BD}$$ are perpendicular. Given that $$AB=\sqrt{11}$$ and $$AD=\sqrt{1001}$$, find $$BC^2$$. (2000 AIME II, Problem 8)
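A quick coordinate computation (a sketch, placing B at the origin, BC along the y-axis, and AB, CD horizontal) reduces the problem to a quadratic in BC² and confirms the answer 110:

```python
import math

# Place B = (0, 0), C = (0, h), A = (sqrt(11), 0), D = (x, h).
# AC ⟂ BD gives -sqrt(11)*x + h^2 = 0, so x = h^2 / sqrt(11).
# AD^2 = (x - sqrt(11))^2 + h^2 = 1001 then yields a quadratic in h^2:
#   (h^2)^2 - 11*h^2 - 10890 = 0
h2 = (11 + math.sqrt(11**2 + 4 * 10890)) / 2
print(h2)  # 110.0

# Sanity check against the original distances:
h = math.sqrt(h2)
x = h2 / math.sqrt(11)
AD = math.hypot(x - math.sqrt(11), h)
print(round(AD**2, 6))  # 1001.0
```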
2017-06-27 17:16:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6703083515167236, "perplexity": 456.0896954970325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128321497.77/warc/CC-MAIN-20170627170831-20170627190831-00370.warc.gz"}
https://moviecultists.com/where-trig-functions-are-positive-and-negative
# Where trig functions are positive and negative?

The distance from a point to the origin is always positive, but the signs of the x and y coordinates may be positive or negative. Thus, in the first quadrant, where x and y coordinates are all positive, all six trigonometric functions have positive values.

## Where are the trig functions negative?

Based on the unit circle, the negative angle identities (also called "odd/even" identities) tell you how to find the trig functions at -x in terms of the trig functions at x. In other words, they relate trig values at opposite angles x and -x. For example, sin(-x) = -sin(x), cos(-x) = cos(x), and tan(-x) = -tan(x).

## What quadrants is each trig function positive and negative?

• In Quadrant I both x and y are positive,
• in Quadrant II x is negative (y is still positive),
• in Quadrant III both x and y are negative, and
• in Quadrant IV x is positive again, and y is negative.

## What quadrants are the 6 trig functions positive and negative?

It follows that:

• sine is positive in quadrants I and II: points above the x-axis have positive y-values.
• sine is negative in quadrants III and IV: points below the x-axis have negative y-values.
• cosine is positive in quadrants I and IV: ...
• cosine is negative in quadrants II and III:

## Where is sin positive and negative?

For angles with their terminal arm in Quadrant II, since sine is positive and cosine is negative, tangent is negative. For angles with their terminal arm in Quadrant III, since sine is negative and cosine is negative, tangent is positive.

## Trigonometry - The signs of trigonometric functions

### How do you know when a sin is negative?

The sine ratio is y/r, and the hypotenuse r is always positive. So the sine will be negative when y is negative, which happens in the third and fourth quadrants.

### Is Tan positive or negative?

In the second quadrant (II), sine (and cosec) are positive.
In the third quadrant (III), tan (and cotan) are positive. In the fourth quadrant (IV), cos (and sec) are positive. These just follow from the sign (+ or -) of x or y for each quadrant, as we saw above.

### What quadrant is sin positive?

• All trig functions (sin, cos, tan, sec, csc, cot) are positive in the first quadrant.
• Sine is positive in the second quadrant.
• Tangent is positive in the third quadrant.
• Cosine is positive in the fourth quadrant.

### Where is csc negative?

Since $-\frac{5\pi}{6}$ is in the third quadrant, where both x and y are negative, cosine, sine, secant, and cosecant will be negative, while tangent and cotangent will be positive.

### What is negative cosecant?

Comparing the two cosecant functions discloses that the cosecant of a negative angle equals the negative of the cosecant of the positive angle: ∴ csc(−θ) = −csc(θ). This negative identity is called the cosecant negative-angle identity and is frequently used as a formula in trigonometry.

### What quadrants is sine negative?

The trigonometric functions sine and cosine are both positive in the first quadrant, but in the third quadrant both are negative. Hence in these two quadrants they have the same sign. However, in the second quadrant, while sine is positive, cosine is negative. And in the fourth quadrant, cosine is positive and sine is negative.

### What is Cosec in math?

Cosecant is one of the six trigonometric ratios, also denoted cosec or csc. The cosecant formula is given by the length of the hypotenuse divided by the length of the opposite side in a right triangle.

### What are the symbols in trigonometry?

Their names and abbreviations are sine (sin), cosine (cos), tangent (tan), cotangent (cot), secant (sec), and cosecant (csc).

### What happens when an angle is negative?

2 Answers. Negative angles have to do with the direction of rotation that you consider in order to measure angles.
Normally you start counting your angles from the positive side of the x-axis in an anti-clockwise direction of rotation: ... "negative" is the equivalent of these words in math.

### What is a negative angle identity?

Negative angle identities are trigonometric identities that show the relationships between trigonometric functions when we take the trigonometric function of a negative angle. These identities are as follows: sin(-x) = -sin(x), cos(-x) = cos(x).

### Is there a negative angle?

Angles are measured in degrees. One complete rotation is measured as 360°. Angle measure can be positive or negative, depending on the direction of rotation. ... Positive angles (Figure a) result from counterclockwise rotation, and negative angles (Figure b) result from clockwise rotation.

### Is CSC positive or negative?

Sine and cosecant are positive in Quadrant 2, tangent and cotangent are positive in Quadrant 3, and cosine and secant are positive in Quadrant 4.

### Can a tangent be negative?

The tangent function is negative whenever sine or cosine, but not both, are negative: the second and fourth quadrants. Tangent is also equal to the slope of the terminal side.

### Is quadrant 3 positive or negative?

In Quadrant I, both the x- and y-coordinates are positive; in Quadrant II, the x-coordinate is negative, but the y-coordinate is positive; in Quadrant III both are negative; and in Quadrant IV, x is positive but y is negative.

### What are the 4 quadrants?

The x and y axes divide the plane into four graph quadrants. These are formed by the intersection of the x and y axes and are named Quadrant I, II, III, and IV. In words, we call them the first, second, third, and fourth quadrants.

### What quadrant is sin less than 0?

The only quadrant where x is positive, so cos(x) > 0, and y is negative, so sin(x) < 0, is Quadrant IV.

### Where is tan equal to?

The tangent of x is defined to be its sine divided by its cosine: tan x = sin x / cos x.
The cotangent of x is defined to be the cosine of x divided by the sine of x: cot x = cos x / sin x.

### Is tan always positive?

This can be summed up as follows: in the fourth quadrant, Cos is positive; in the first, All are positive; in the second, Sin is positive; and in the third quadrant, Tan is positive. This is easy to remember, since it spells "CAST".

### Where is tan less than 0?

Therefore: in Quadrant IV, cos(θ) > 0, sin(θ) < 0 and tan(θ) < 0 (Cosine positive). The quadrants in which cosine, sine and tangent are positive are often remembered using a favorite mnemonic.
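The quadrant rules above (often remembered by the "CAST"/"ASTC" mnemonic) and the negative-angle identities are easy to verify numerically; a small Python sketch:

```python
import math

# A representative angle in each quadrant (I, II, III, IV).
angles = [math.radians(d) for d in (45, 135, 225, 315)]

signs = [(math.sin(a) > 0, math.cos(a) > 0, math.tan(a) > 0) for a in angles]
# Quadrant I: all positive; II: only sin; III: only tan; IV: only cos.
print(signs)  # [(True, True, True), (True, False, False), (False, False, True), (False, True, False)]

# Negative-angle ("odd/even") identities: sin is odd, cos is even, tan is odd.
x = 0.7
assert math.isclose(math.sin(-x), -math.sin(x))
assert math.isclose(math.cos(-x), math.cos(x))
assert math.isclose(math.tan(-x), -math.tan(x))
```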
2022-05-25 22:57:04
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8664736747741699, "perplexity": 932.506033773689}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662594414.79/warc/CC-MAIN-20220525213545-20220526003545-00706.warc.gz"}
https://en.bmstu.wiki/Monkey_X
# Monkey X

Paradigm: multi-paradigm: structured, imperative, object-oriented, modular, reflective, generic, concurrent
Designed by: Mark Sibly
Developer: Blitz Research Ltd
First appeared: 2011-03-01
Stable release: 0.86(E) / 02.02.2016
Typing discipline: static, weak, strong (optional), safe, nominative, partly inferred
Platform: Windows, OS X, Linux
Operating system: Microsoft Windows, Mac OS X, Linux
License: zlib, proprietary (commercial)
Website: www.monkey-x.com
Influenced by: BlitzBasic, BlitzMax, C, C++, C#, JavaScript, Java

Monkey is an object-oriented, translated programming language whose compiler translates it into native source code for several target platforms. Officially, Monkey code can be translated into the following programming languages: C++, C#, Java, JavaScript and ActionScript. Nevertheless, this list can be extended by writing your own translators; thus, the Monkey community has successfully developed compilers for Python and BlitzMax. Monkey is a dialect of BASIC, but it also clearly shows the influence of Java. The language has static typing, provides modularity, and supports abstraction, encapsulation, inheritance, and polymorphism, as well as interfaces, generic types, properties, iterators and exceptions.

## History

The language was developed by Mark Sibly, founder of Blitz Research Ltd. He is known to some developers for such game-creation tools as BlitzBasic, Blitz3D and BlitzMax. Monkey is an evolution of Blitz Research Ltd's previous product line. Cross-platform support is achieved by translation. All the Monkey compiler does is check and translate code, assemble a valid project for the platform, and run native tools to build the application. Consequently, to build the final application you must install the SDKs for all required platforms. This avoids the use of special launchers and plug-ins to run the final applications; it looks as if the application had been written natively. Of course, the translated code is not easy to read and not always optimal, but at the same time it gives you all the benefits of native development.
## Target platforms

• Windows
• Mac OS X
• Android
• iOS
• HTML5
• Flash
• XNA
• PlayStation Vita

## Compiler

The compiler is written in Monkey itself, although at an intermediate stage it was written in BlitzMax. The source code of the compiler is fully open source; therefore, if necessary, you can modify it and recompile it with Monkey, using the Stdcpp (standard C++) target.

## Preprocessor

Monkey uses a simple preprocessor to separate platform-specific parts of the code, set additional configuration settings, and include or exclude blocks of code depending on the build configuration.

## Using native code

Platform-dependent code can be written in the platform's native language. Using the Extern directive, native classes and functions can be exposed to Monkey code, giving access to platform-specific features. This makes it possible to extend the functionality of your application almost without restrictions.

## Modules

Language capabilities can be extended using modules, which can be written in Monkey or in the platform's native language. Monkey comes with the following modules:

• monkey (basic language features)
• brl (a set of classes and functions for streaming)
• reflection
• os (module for working with the operating system)
• dom (module for the DOM tree of an HTML document)
• mojo (2D framework)
• opengl (module for working with OpenGL)

In addition to this list, the developer community has written more than 20 add-on modules, including ports of different physics engines (Box2D, Chipmunk and Fling), GUI systems, modules for working with fonts, modules implementing IAP (in-app purchase), and modules for working with XML, JSON and various services.

## Games development

### Mojo

The mojo module that comes with Monkey is used for games development. This module provides developers with a cross-platform API for working with 2D graphics, sound and input devices.
The framework's features are somewhat limited, mainly because of the need to support multiple platforms. Not all features available on one platform are available on another, and if some feature is not available on at least one of the platforms, it will not be included in mojo. Of course, this is somewhat radical, but at the same time you can be sure that your application will work properly on all platforms. The second reason for such modest functionality is the ease of adding new platforms. Technology changes rapidly; new devices and operating systems appear. For this reason, the ability to quickly add support for a new platform provides a distinct advantage over other similar tools.

### Game frameworks

Of course, the functionality of mojo is not enough to write a complete game. A game is not just graphics, sound and input devices, but also user interface, various states, animation, physics, and more. Unfortunately, mojo does not cover these, but this is where the game frameworks and other modules created by the Monkey community come in.

• Diddy. One of the most popular frameworks for Monkey. Beyond the core framework itself, it provides a lot of additional functionality.
• FantomEngine. The creator of the framework is the author of «Monkey Game Development», whose examples were made using fantomEngine.
• Flixel.
• Playniax. The only commercial framework, but with good reviews. The author is the developer of the eponymous framework for BlitzMax.

### 3D

If your goal is to create 3D games, you should use the opengl module (which does not work on all platforms) or the minib3d framework.

## Disadvantages

Monkey has its disadvantages, like any software.

### IDE

This is probably the most serious problem: the lack of a proper development environment. Although Monkey ships with two IDEs (Monk and Ted), neither of them can be considered complete, and writing major projects in them is rather problematic.
To solve this problem, you can use the commercial Jungle IDE (there is a free lite version) or one of several plug-ins for popular text editors. However, the problem with IDEs unfortunately remains one of the key issues.

### The lack of supporting tools

Most professional tools for creating games come with editors for levels, sprites, animations, etc. You will not see such tools in Monkey; there are only the language, the modules and the IDE. This is where third-party software, both paid and free, comes in. Importing projects from these tools is usually not a big problem; besides, on the official forum you can find ready-made solutions, such as importing texture atlases from TexturePacker, importing tile maps from Dame and Tiled, etc.

### HTML5 productivity

The HTML5 version of mojo uses the 2D context, which affects the performance of games. Unfortunately, WebGL is not supported in IE, and a feature that is not supported everywhere is not used at all. To remedy this situation, you may want to use an experimental patch for mojo, Mojo HTML5 GL, which replaces the 2D context with WebGL and gives a significant performance boost.

## Programs written using Monkey

• Zombie Trailer Park — Flash and iOS
• Pirate Solitaire — iOS, Android and Flash
• Jet Worm — iPhone and iPad
• Blotty Pots — Android, iOS, WP7
• New Star Soccer Mobile — Android, iOS, Flash and HTML5

## Sample code

### Main function

```monkey
#Rem
    This example relies on the 'mojo' module, so it will not compile with a non-game target.
    Mojo comes with all versions of Monkey X, and is implemented for most targets.
    Classes and functions such as 'Image', 'App', 'LoadImage', and 'DrawImage' are provided by Mojo.

    NOTES:
    * Multi-line comments are described with the preprocessor ala "#Rem".
    * Single-line comments are represented with apostrophes. (Similar to Visual Basic)
    * Variable-naming standards are generally user-dictated.
    * Monkey is statically typed, however it does support automatic type resolution.
    * 'End' may be used to end a scope, however, specific forms of 'End' may also be used for clarity. ("End Method" for example)
    * Monkey's compiler is generally "multi-pass", so the placement of elements does not matter.
      This can also lead to different stylistic choices, such as placing fields at the end of classes.
    * This is a modular language (Some Java parallels can be made), however, Monkey uses files to
      represent modules, not classes. This example uses a class because it's dictated by Mojo.
      Monkey is also not strictly object-oriented, however, it does fully support polymorphism, and similar strategies.
    * This example uses spaces instead of tab-characters for the sake of consistency, such practices
      are discouraged in realistic applications.
#End

' This will enable strict-mode. (This makes the compiler less lenient about code structure)
Strict

' Imports:

' Import the standard Mojo module. (Required for this example)
Import mojo

' Like several C-like languages, but unlike most BASIC languages,
' Monkey uses the 'Main' function as an entry point.
Function Main:Int()
    ' By simply creating a new 'Game' object, the application will be started.
    New Game()

    ' Return the default response to the system.
    ' Zero: No errors found. This is system specific.
    ' This point may or may not be reached when the application closes.
    Return 0
End
```

### Main class

```monkey
' This will act as our main class. Multiple inheritance is
' not supported in Monkey, however, interfaces are.
' The 'Final' specifier works similarly to Java, and is not explicitly needed.
Class Game Extends App Final
    ' Fields:
    Field player:Player

    ' Methods:

    ' These override the 'App' class's methods (Your own methods may also be written in this class):

    ' Though, technically 'OnCreate' is a method, some consider it a type of constructor, and may label it as such.
    ' 'OnCreate' is called automatically when an 'App' object is created.
    Method OnCreate:Int()
        #Rem
            Most media should be stored in a folder called "ProjectNameHere.data".
            The 'LoadImage' command will load an 'Image' object from the path specified.
            Mojo assumes that what you're loading is in the "ProjectNameHere.data" folder by default.

            Variables, especially local variables may also use the ":=" operator,
            in order to use automatic type deduction.
        #End

        Local img:Image = LoadImage("PathHere.png")
        ' Alternative: Local img:= LoadImage("PathHere.png")

        #Rem
            Create a new instance of our 'Player' class using the image we loaded.
            As you can see, 'player' is a field, and because of this, an implicit use of 'Self'
            can be assumed if there is no name conflict. People familiar with languages similar
            to C++ would know this pointer/reference as 'this'.

            Monkey is garbage collected, so there is no need to deallocate this object from the heap later on.
        #End
        player = New Player(img, 100, 100)

        #Rem
            This will set the update-rate to the rate we specify (X times per-second).
            This update rate is also implicitly applied to the draw/render rate; however,
            uses of 'OnRender' are target and system defined, and are therefore decoupled
            from the main update routine.

            Setting this to zero will result in a system-defined update-rate. Doing such a thing
            will hint to Mojo that it should attempt to make this application update and render,
            as many times as possible.
        #End
        SetUpdateRate(60)

        ' The return values of the 'App' class's commands are currently placeholders.
        ' Monkey's documentation advises that you return zero by default.
        ' Returning can technically be optional under certain conditions. (Not always recommended)
        Return 0
    End

    #Rem
        The 'OnUpdate' method is called automatically several times per second.
        The number of times this is called is based on the update-rate.

        Mojo is generally good about implementing fixed-rate behavior, it will attempt
        to update the application more than render if profitable. This does not "save you"
        from the use of delta-timing, or similar techniques, however.
    #End
    Method OnUpdate:Int()
        ' Add '1.0' to the player object's 'x' variable.
        ' Adding ".0" to the end of a literal can be used to
        ' explicitly describe it as floating-point ('Float').
        player.x += 1.0

        ' If the value of 'x' exceeds the number we specify (In case a literal), set it to zero:
        ' This could also be done using 'Mod', the modulus operator.
        ' (Represented by '%' in several C-like languages)
        If (player.x > 100) Then
            player.x = 0 ' Once again, 'Self' is implicit.
        Endif ' 'End' could also be used here, instead.

        ' Everything went according to plan, now return zero.
        Return 0
    End

    #Rem
        The 'OnRender' method is usually called as many times as 'OnUpdate', however,
        this is system and target dependent, the update-rate is used as a hint for this,
        not a demand. For this reason, having any code that mutates "in-application" data
        is considered variable and in some ways non-standard.

        Normally, all graphical/drawing operations must be done in here. However, a
        non-standard target-dependent way of rendering in 'OnUpdate' can be done using
        the 'BeginRender' and 'EndRender' commands. (Not recommended)

        Actions such as loading resources should be done in 'OnCreate' or 'OnUpdate'.
    #End
    Method OnRender:Int()
        ' Clear the screen, then display a color based on the values specified (RGB, floating-point).
        ' Explicit usage of ".0" is not needed here, as there is no integer overload.
        ' An alternate overload may be used, which clears the screen using a system/Mojo defined color.
        Cls(32.0, 64.0, 128.0)

        ' Call our 'player' object's 'Draw' command.
        ' In the event that 'player' is 'Null', this will throw an error.
        player.Draw()

        ' Everything went according to plan, now return zero.
        Return 0
    End
End
```

### Player class

```monkey
' The 'Player' class, as referenced previously (Placement does not matter):
Class Player
    ' Declare all of our fields (Class-local variables):

    ' These two variables will act as our position on the screen.
    ' (Alternatively, an 'Array' or third-party class could be used)
    Field x:Float, y:Float

    ' This will be a reference to an 'Image' object we'll specify.
    Field image:Image

    ' Constructor(s):

    ' Overloading 'New' mainly works the same way as constructors in other languages.
    ' Returning is generally not recommended for constructors.
    Method New(img:Image, x:Float=100, y:Float=100)
        ' Due to the arguments using the same names, 'Self'
        ' is required to resolve our fields' names:
        Self.image = img
        Self.x = x
        Self.y = y
    End

    ' Methods:

    ' This will be our main render-method for this object:
    Method Draw:Void()
        ' Draw the 'image' object to the screen using our 'x' and 'y' fields.
        DrawImage(image, x, y)

        ' Returning in a 'Void' function is not required. (Some still recommend it)
        Return
    End
End
```
2022-12-06 01:32:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20947352051734924, "perplexity": 5616.142842854714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711064.71/warc/CC-MAIN-20221205232822-20221206022822-00321.warc.gz"}
https://buboflash.eu/bubo5/show-dao2?d=150915518
#elisp

There are two text representations for non-ASCII characters in Emacs strings (and in buffers): unibyte and multibyte.

GNU Emacs Lisp Reference Manual: String Basics

…al characters in a string using the functions aref and aset (see Array Functions). However, note that length should not be used for computing the width of a string on display; use string-width (see Size of Displayed Text) instead. There are two text representations for non-ASCII characters in Emacs strings (and in buffers): unibyte and multibyte. For most Lisp programming, you don’t need to be concerned with these two representations. See Text Representations, for details. Sometimes key sequences are represented as unibyte st…
2022-08-20 03:33:49
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8256787657737732, "perplexity": 6360.607339398155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573876.92/warc/CC-MAIN-20220820012448-20220820042448-00280.warc.gz"}
https://math.stackexchange.com/questions/493704/how-to-compare-sqrt6a5-sqrt3-to-sqrt200a
# How to compare $(\sqrt{6a})(5\sqrt{3})$ to $\sqrt{200a}$?

This is a GRE question: I chose answer A because quantity A can be written as $\sqrt{6 \times 25 \times 3 a}$, which is $\sqrt{450 a}$, always greater than quantity B. But someone claims that the relationship cannot be determined from the above statement, because different values of a give different answers. In GRE questions, are we allowed to take the value 0 as well as other values?

• Well, you don't know whether $a > 0$ or $a = 0$, or maybe $a$ is even complex, so you can't say from the information given. – Daniel Fischer Sep 14 '13 at 19:27
• I hope this was on a practice exam and not the real thing because otherwise this would be very unethical. – Cameron Williams Sep 14 '13 at 19:28
• You can take any value of a you wish, since a is not defined: 0, negative reals, or whatever else you wish. :) – Ram Sep 14 '13 at 19:36

Yes, $a$ can take on the value of $0$ and/or any other (assuming positive real) value. $a$ is an unknown, and as such, while your answer is correct for a non-zero (assuming positive) value of $a$, option $(c)$ is correct if $a = 0$. Hence, option $(d)$ is the correct answer. Of course, if $a$ is complex (and non-real), then we have no way of comparing quantity $A$ or quantity $B$.

• Good answer. As a person who has taken the GRE General Test recently, I just wanted to note that all the variables in the GRE General Test are assumed to be real. :) This is explicitly stated in the beginning of the test. If this assumption is lifted, many of the quantitative comparison problems (as above) would have the answer "The relationship cannot be determined from the information given." Just thought this would be beneficial for people taking the exam. – Prism Sep 14 '13 at 19:38
• @Prism I figured as much, but didn't know for sure, so thanks for the information! – Namaste Sep 14 '13 at 19:39
• @amWhy: Nice answer, there is an extra dollar sign at the end. +1 – Amzoti Sep 15 '13 at 14:09
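A quick numeric check of the two quantities (Quantity A = √(6a)·5√3 = √(450a), Quantity B = √(200a)) shows why a = 0 forces answer (d):

```python
import math

def quantity_a(a):
    return math.sqrt(6 * a) * 5 * math.sqrt(3)   # simplifies to sqrt(450a)

def quantity_b(a):
    return math.sqrt(200 * a)

# For any a > 0, Quantity A is strictly larger...
for a in (0.5, 1, 7, 1000):
    assert quantity_a(a) > quantity_b(a)

# ...but at a = 0 the two quantities are equal,
# so the relationship cannot be determined without knowing a.
print(quantity_a(0), quantity_b(0))  # 0.0 0.0
```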
2019-08-23 23:58:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8142430186271667, "perplexity": 359.4872038350743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319155.91/warc/CC-MAIN-20190823235136-20190824021136-00208.warc.gz"}
https://math.stackexchange.com/questions/1088423/how-do-you-prove-the-eigenvalues-and-eigenvectors-of-matrices-are-the-same
# How do you prove the eigenvalues and eigenvectors of matrices are the same? [duplicate] This question already has an answer here: I wondered if anyone could help me with a couple of proofs. The question is: Let $A$ be a given $n \times m$ matrix. The collection of scalars $\lambda_i$ and associated $n \times 1$ vectors $q_i$ that solve the equation $Aq=\lambda q$ are known as eigenvalues and eigenvectors of $A$, respectively. Show that: i. Suppose $n=m$. Then for any non-singular $n \times n$ square matrix $G$ the eigenvalues of $G^{-1}AG$ are the same as those of $A$. ii. If $A^{-1}$ exists then it shares the same eigenvectors $q_i$ as $A$ with corresponding eigenvalues $\lambda_i^{-1}$. Thanks in advance! ## marked as duplicate by Dietrich Burde, user99914, Mark Fantini, Namaste, Thomas Jan 2 '15 at 13:36 This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question. ## 2 Answers 1) \begin{align*}\det(\lambda I-G^{-1}AG)&=\det(\lambda G^{-1}IG-G^{-1}AG)\\ &=\det(G^{-1}(\lambda I-A)G)\\ &=\det (G^{-1})\cdot \det(\lambda I-A)\cdot \det (G)\\ &=\det (G)^{-1}\cdot \det(\lambda I-A)\cdot \det (G)\\ &=\underbrace{\det (G)^{-1}\cdot \det(G)}_{=1}\cdot \det(\lambda I-A)\\ &=\det(\lambda I-A)\end{align*} therefore $A$ and $G^{-1}AG$ have the same characteristic polynomial, and thus the same eigenvalues. 2) $$Au=\lambda u \implies \underbrace{A^{-1}A}_{=I} u=\lambda A^{-1}u\implies u=\lambda A^{-1}u\implies A^{-1}u=\frac{1}{\lambda}u$$ • Thanks, but I thought the characteristic equation was $|A-\lambda I|$? Instead of the other way round? Or is it just like that for this equation? Thanks – Will Smith Jan 2 '15 at 13:54 • It doesn't matter, both are correct. With your formula you'll get something like $(a_1-\lambda)(a_2-\lambda)...(a_k-\lambda)$, and with my formula you'll get $(-1)^k(\lambda-a_1)...(\lambda-a_k)$.
But as you can see $$(a_1-\lambda)(a_2-\lambda)...(a_k-\lambda)=(-1)^k(\lambda-a_1)...(\lambda-a_k)$$ – idm Jan 2 '15 at 16:50 $$Av=\lambda v.$$ Then $$(G^{-1}AG)(G^{-1}v)=G^{-1}Av=G^{-1}\lambda v,$$ so $G^{-1}AG$ has $\lambda$ as an eigenvalue with vector $G^{-1}v$. 2. $Av=\lambda v$, then $v=\lambda A^{-1} v$, and then $A^{-1} v=\frac{1}{\lambda}v$.
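Both claims can be sanity-checked on a small example. The sketch below (Python, illustrative; the matrices are arbitrary choices, not from the answers) uses the fact that for a $2 \times 2$ matrix the characteristic polynomial is $\lambda^2 - \operatorname{tr}\lambda + \det$, so equal trace and determinant imply equal eigenvalues:

```python
def matmul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    # inverse of a 2x2 matrix
    d = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / d, -X[0][1] / d], [-X[1][0] / d, X[0][0] / d]]

A = [[2, 1], [1, 2]]   # eigenvalues 1 and 3
G = [[1, 1], [0, 1]]   # any non-singular matrix
B = matmul(matmul(inv2(G), A), G)

trace = lambda X: X[0][0] + X[1][1]
det = lambda X: X[0][0] * X[1][1] - X[0][1] * X[1][0]
print(trace(A), trace(B))  # equal
print(det(A), det(B))      # equal

# A^{-1} keeps the eigenvector (1, 1) of A, now with eigenvalue 1/3.
Ainv = inv2(A)
v = [1, 1]
Av = [Ainv[0][0] * v[0] + Ainv[0][1] * v[1],
      Ainv[1][0] * v[0] + Ainv[1][1] * v[1]]
print(Av)  # (1/3, 1/3) up to float error
```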
2019-06-18 05:24:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9505941271781921, "perplexity": 347.4661829580649}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998607.18/warc/CC-MAIN-20190618043259-20190618065259-00158.warc.gz"}
http://openstudy.com/updates/4fcb1440e4b0c6963ad4d60c
## Whimsical how do you integrate (sin(x))^2 without using the trig identity cos(2x)=1-2(sin(x))^2 one year ago 1. Aamal can we use any other trig identity there? 2. Whimsical erm... you can't use any trig identity or integration by parts 3. Aamal yeah, integration by parts gives our result 4. brinethery Maybe u-substitution for this one? 5. Whimsical is it impossible to integrate this without using a trig identity or integration by parts? can we use the chain rule and integrate it into $-1/3\cos^3(x)$ 6. brinethery 7. Whimsical ok i understand now thank you 8. brinethery You can also use the good ol' table of integrals but I'm sure your teacher doesn't want that. http://integral-table.com/integral-table.html#SECTION00007000000000000000
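A numerical check (Python, illustrative; not part of the original thread) shows why the chain-rule guess from post 5 fails: the standard antiderivative $x/2 - \sin(2x)/4$ differentiates back to $\sin^2 x$, while $-\cos^3(x)/3$ does not:

```python
import math

def f(x):
    return math.sin(x) ** 2

F = lambda x: x / 2 - math.sin(2 * x) / 4  # standard antiderivative of sin^2
G = lambda x: -math.cos(x) ** 3 / 3        # the chain-rule guess from the thread

def numeric_deriv(h_func, x, h=1e-6):
    # central finite difference
    return (h_func(x + h) - h_func(x - h)) / (2 * h)

x = 0.7
print(numeric_deriv(F, x), f(x))  # agree: F' = sin^2
print(numeric_deriv(G, x), f(x))  # disagree: G' = cos^2 * sin, so the guess is wrong
```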
2014-04-23 08:23:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9526538252830505, "perplexity": 6351.087883267807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
https://stockingisthenewplanking.com/what-quadrants-are-csc-in/
## What quadrants are csc in? In Quadrant II, $\csc\theta$ is positive, while $\sec\theta$ and $\cot\theta$ are negative. ### Where is csc on a graph? The cosecant goes down to the top of the sine curve and up to the bottom of the sine curve. After using the asymptotes and reciprocal as guides to sketch the cosecant curve, you can erase those extra lines, leaving just y = csc x. The figure that follows shows what this function looks like all on its own. #### What quadrants is cot positive? All trig functions (sin, cos, tan, sec, csc, cot) are positive in the first quadrant. • Sine is positive in the second quadrant. • Tangent is positive in the third quadrant. • Cosine is positive in the fourth quadrant. • What quadrant is csc positive and sec negative? Quadrant II, where sine is positive and cosine is negative. What is sec cosec cot? Secant (sec) is the reciprocal of cosine (cos). Cosecant (cosec) is the reciprocal of sine (sin). Cotangent (cot) is the reciprocal of tangent (tan). ## What is the graph of secant? As with tangent and cotangent, the graph of secant has asymptotes. This is because secant is defined as the reciprocal of cosine. The cosine graph crosses the x-axis at two places on the period interval, so the secant graph has two asymptotes, which divide the period interval into three smaller sections. ### What is the period of csc? From the graphs of the secant and cosecant functions, we see that the periods of secant and cosecant are both $2\pi$. #### Which quadrants are csc, sec, and cot positive in? csc is positive in QI and QII, sec in QI and QIV, and cot in QI and QIII. What are the graphs of tan, cot, sec and csc? Graphs of $\tan x$, $\cot x$, $\sec x$ and $\csc x$ are not as common as the sine and cosine curves that we met earlier in this chapter. However, they do occur in engineering and science problems. They are interesting curves because they have discontinuities.
How to find the trigonometric ratios csc, sec, cot? The formulas given below can be used to find the trigonometric ratios csc, sec and cot. csc θ = Hypotenuse / Opposite side. sec θ = Hypotenuse / Adjacent side. cot θ = Adjacent side / Opposite side. Example 1 : In the right triangle shown below, find the values of csc B, sec B, cot B.
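The quadrant signs described above can be verified directly from the reciprocal definitions. The sketch below (Python, illustrative; not part of the original page) evaluates csc, sec and cot at one sample angle inside each quadrant:

```python
import math

def csc(t): return 1 / math.sin(t)
def sec(t): return 1 / math.cos(t)
def cot(t): return math.cos(t) / math.sin(t)

# one sample angle inside each quadrant
angles = {
    "QI": math.pi / 4, "QII": 3 * math.pi / 4,
    "QIII": 5 * math.pi / 4, "QIV": 7 * math.pi / 4,
}
for q, t in angles.items():
    print(q, csc(t) > 0, sec(t) > 0, cot(t) > 0)
# QI: all positive; QII: only csc; QIII: only cot; QIV: only sec
```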
2023-03-31 13:23:11
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9458534717559814, "perplexity": 2966.39792115958}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949642.35/warc/CC-MAIN-20230331113819-20230331143819-00511.warc.gz"}
https://rdrr.io/cran/refund/man/mfpca.sc.html
# mfpca.sc: Multilevel functional principal components analysis by... In refund: Regression with Functional Data ## Description Decomposes functional observations using functional principal components analysis. A mixed model framework is used to estimate scores and obtain variance estimates. ## Usage mfpca.sc( Y = NULL, id = NULL, visit = NULL, twoway = FALSE, argvals = NULL, nbasis = 10, pve = 0.99, npc = NULL, makePD = FALSE, center = TRUE, cov.est.method = 2, integration = "trapezoidal" ) ## Arguments Y The user must supply a matrix of functions on a regular grid. id Must be supplied: a vector containing the id information used to identify clusters. visit A vector containing information used to identify visits. Defaults to NULL. twoway logical, indicating whether to carry out twoway ANOVA and calculate visit-specific means. Defaults to FALSE. argvals function argument. nbasis number of B-spline basis functions used for estimation of the mean function and bivariate smoothing of the covariance surface. pve proportion of variance explained: used to choose the number of principal components. npc prespecified value for the number of principal components (if given, this overrides pve). makePD logical: should positive definiteness be enforced for the covariance surface estimate? Defaults to FALSE. Only FALSE is currently supported. center logical: should an estimated mean function be subtracted from Y? Set to FALSE if you have already demeaned the data using your favorite mean function estimate. cov.est.method covariance estimation method. If set to 1, a one-step method that applies a bivariate smooth to the y(s_1)y(s_2) values is used. This can be very slow. If set to 2 (the default), a two-step method that obtains a naive covariance estimate which is then smoothed is used. Only 2 is currently supported. integration quadrature method for numerical integration; only "trapezoidal" is currently supported.
## Details This function computes a multilevel FPC decomposition for a set of observed curves, which may be sparsely observed and/or measured with error. A mixed model framework is used to estimate level 1 and level 2 scores. MFPCA was proposed in Di et al. (2009), with variations for MFPCA with sparse data in Di et al. (2014). mfpca.sc uses penalized splines to smooth the covariance functions, as described in Di et al. (2009) and Goldsmith et al. (2013). ## Value An object of class mfpca containing: Yhat FPC approximation (projection onto leading components) of Y, estimated curves for all subjects and visits. Yhat.subject estimated subject-specific curves for all subjects. Y the observed data. scores n \times npc matrix of estimated FPC scores for level 1 and level 2. mu estimated mean function (or a vector of zeroes if center==FALSE). efunctions d \times npc matrix of estimated eigenfunctions of the functional covariance, i.e., the FPC basis functions for levels 1 and 2. evalues estimated eigenvalues of the covariance operator, i.e., variances of FPC scores for levels 1 and 2. npc number of FPCs: either the supplied npc, or the minimum number of basis functions needed to explain proportion pve of the variance in the observed curves for levels 1 and 2. sigma2 estimated measurement error variance. eta the estimated visit-specific shifts from the overall mean. ## Author(s) Julia Wrobel jw3134@cumc.columbia.edu, Jeff Goldsmith jeff.goldsmith@columbia.edu, and Chongzhi Di ## References Di, C., Crainiceanu, C., Caffo, B., and Punjabi, N. (2009). Multilevel functional principal component analysis. Annals of Applied Statistics, 3, 458–488. Di, C., Crainiceanu, C., Caffo, B., and Punjabi, N. (2014). Multilevel sparse functional principal component analysis. Stat, 3, 126–143. Goldsmith, J., Greven, S., and Crainiceanu, C. (2013). Corrected confidence bands for functional data using principal components. Biometrics, 69(1), 41–51.
## Examples ## Not run: data(DTI) DTI = subset(DTI, Nscans < 6) ## example where all subjects have fewer than 6 scans id = DTI$ID Y = DTI$cca mfpca.DTI = mfpca.sc(Y=Y, id = id, twoway = TRUE) ## End(Not run) refund documentation built on July 1, 2021, 9:06 a.m.
2022-01-21 08:04:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45434877276420593, "perplexity": 4107.916448200857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302740.94/warc/CC-MAIN-20220121071203-20220121101203-00602.warc.gz"}
https://hal-univ-bourgogne.archives-ouvertes.fr/hal-01591646
# Transverse foliations on the torus $\mathbb T^2$ and partially hyperbolic diffeomorphisms on 3-manifolds Abstract : In this paper, we prove that given two $C^1$ foliations $F$ and $G$ on $\mathbb{T}^2$ which are transverse, there exists a non-null-homotopic loop ${\{\Phi_t\}_{t\in[0,1]}}$ in $\mathrm {Diff}^{1}(\mathbb T^2)$ such that ${\Phi_t(\mathcal{F})\pitchfork \mathcal{G}}$ for every $t\in[0,1]$, and $\Phi_0=\Phi_1= \mathrm {Id}$. As a direct consequence, we get a general process for building new partially hyperbolic diffeomorphisms on closed $3$-manifolds. [4] built a new example of dynamically coherent non-transitive partially hyperbolic diffeomorphism on a closed $3$-manifold; the example in [4] is obtained by composing the time $t$ map, $t>0$ large enough, of a very specific non-transitive Anosov flow by a Dehn twist along a transverse torus. Our result shows that the same construction holds starting with any non-transitive Anosov flow on an oriented $3$-manifold. Moreover, for a given transverse torus, our result explains which type of Dehn twists lead to partially hyperbolic diffeomorphisms. Document type: Journal article. Commentarii Mathematici Helvetici, European Mathematical Society, 2017, 92 (3), pp. 513-550. 〈10.4171/CMH/418〉 https://hal-univ-bourgogne.archives-ouvertes.fr/hal-01591646 Contributor: IMB - Université de Bourgogne <> Submitted on: Thursday, September 21, 2017 - 4:49:32 PM Last modified: Thursday, January 11, 2018 - 6:12:20 AM ### Citation Christian Bonatti, Jinhua Zhang. Transverse foliations on the torus $\mathbb T^2$ and partially hyperbolic diffeomorphisms on 3-manifolds. Commentarii Mathematici Helvetici, European Mathematical Society, 2017, 92 (3), pp. 513-550. 〈10.4171/CMH/418〉. 〈hal-01591646〉 ### Metrics Record views
2018-01-24 05:41:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4228208661079407, "perplexity": 939.373457238406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084893397.98/warc/CC-MAIN-20180124050449-20180124070449-00573.warc.gz"}
https://www.r-bloggers.com/2014/02/hodograph-drawing/
# Introduction

The polar graph known as a hodograph can be useful for vector plots, and also for showing variation within nearly-cyclical time series data. The oce R package should have a function to create hodographs, but as usual my first step is to write isolated code, testing to find the right match between the function and real-world needs. The code chunk given below is such a test, using the built-in dataset named co2, a time series starting in 1959. The hodograph shows the variation of CO2 from its value in 1959, so the data start at zero radius. Climatologists will see why this makes sense, and climate-change deniers will think it's part of a hoax. I will leave documentation of the function for a later time, conscious of the fact that the argument list and the aesthetics of the output are likely to change with use.

# Methods

First, define hodograph(), with arguments that suffice for a simple problem of a periodic signal x=x(t) to be plotted in polar fashion with radius indicating x and angle indicating t modulo 1 year.

```r
hodograph <- function(x, y, t, rings, ringlabels = TRUE,
                      tcut = c("daily", "yearly"), ...)
{
    tcut <- match.arg(tcut)
    if (missing(t)) {
        stop("x-y method not coded yet\n")
    } else {
        if (!missing(y)) {
            stop("cannot give y if t is given\n")
        }
        if (tcut == "yearly") {
            ## x=x(t)
            t <- as.POSIXlt(t)
            start <- ISOdatetime(1900 + as.POSIXlt(t[1])$year, 1, 1, 0, 0, 0,
                                 tz = attr(t, "tzone"))
            day <- as.numeric(julian(t, origin = start))
            xx <- x * cos(day/365 * 2 * pi)
            yy <- x * sin(day/365 * 2 * pi)
            ## axes
            if (missing(rings))
                rings <- pretty(sqrt(xx^2 + yy^2))
            rscale <- 1.04 * max(rings)
            theta <- seq(0, 2 * pi, length.out = 200)
            plot(xx, yy, asp = 1, xlim = rscale * c(-1.1, 1.1),
                 ylim = rscale * c(-1.1, 1.1), type = "n",
                 xlab = "", ylab = "", axes = FALSE)
            ## month lines
            month <- c("Jan", "Feb", "Mar", "Apr", "May", "Jun",
                       "Jul", "Aug", "Sep", "Oct", "Nov", "Dec")
            day <- c(31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31)
            rscale <- max(rings)
            for (m in 1:12) {
                ## boundaries are for non leap years
                phi <- 2 * pi * (sum(day[1:m]) - day[1])/sum(day)
                lines(rscale * 1.1 * cos(phi) * c(0, 1),
                      rscale * 1.1 * sin(phi) * c(0, 1), col = "gray")
                phi <- 2 * pi * (0.5/12 + (m - 1)/12)
                text(1.15 * rscale * cos(phi), 1.15 * rscale * sin(phi), month[m])
            }
            for (r in rings) {
                if (r > 0) {
                    gx <- r * cos(theta)
                    gy <- r * sin(theta)
                    lines(gx, gy, col = "gray")
                    if (ringlabels)
                        text(gx[1], 0, format(r))
                }
            }
            points(xx, yy, ...)
        } else {
            stop("only tcut=\"yearly\" works at this time\n")
        }
    }
}
```

This may be tested as follows

```r
data(co2)
year <- as.numeric(time(co2))
t0 <- as.POSIXlt("1959-01-01 00:00:00", tz = "UTC")
t <- t0 + (year - year[1]) * 365 * 86400
par(mar = rep(1, 4))
hodograph(x = co2 - co2[1], t = t, tcut = "yearly", type = "l", ringlabels = FALSE)
```

# Results

The plot is informative. I've looked at the co2 data before, without really noticing the interannual variation, which is clearly seen as variation in the spacing of the spiraling data trace. For comparison, consider a conventional time-series plot.

```r
plot(co2)
```

# Conclusions

The function is useful as it is, but some improvements are indicated. For example, the ring labels are often over-written by the axes, and the only solution on offer presently is to skip the labels.
2021-06-19 15:44:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39265716075897217, "perplexity": 3585.8990162994824}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487648373.45/warc/CC-MAIN-20210619142022-20210619172022-00154.warc.gz"}
https://proofwiki.org/wiki/Definition:Group_Axioms
Definition:Group Axioms Definition A group is an algebraic structure $\struct {G, \circ}$ which satisfies the following four conditions: $(G \, 0)$: Closure: $\forall a, b \in G: a \circ b \in G$. $(G \, 1)$: Associativity: $\forall a, b, c \in G: a \circ \paren {b \circ c} = \paren {a \circ b} \circ c$. $(G \, 2)$: Identity: $\exists e \in G: \forall a \in G: e \circ a = a = a \circ e$. $(G \, 3)$: Inverse: $\forall a \in G: \exists b \in G: a \circ b = e = b \circ a$. These four stipulations are called the group axioms. Also known as The group axioms are also known as the group postulates, but the latter term is less indicative of the nature of these statements. The numbering of the axioms themselves is to a certain extent arbitrary. For example, some sources do not include $G \, 0$ on the grounds that it is taken for granted that $\circ$ is closed in $G$. However, in the treatment of more abstract aspects of group theory it is recommended that this axiom be taken into account.
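For a concrete instance, the sketch below (Python, illustrative; not part of the original page) verifies all four axioms by brute force for the integers modulo $5$ under addition:

```python
from itertools import product

# Check the four group axioms for (Z_5, + mod 5) by exhaustive search.
G = range(5)
op = lambda a, b: (a + b) % 5

closure = all(op(a, b) in G for a, b in product(G, G))               # G0
assoc = all(op(a, op(b, c)) == op(op(a, b), c)
            for a, b, c in product(G, G, G))                         # G1
e = 0
identity = all(op(e, a) == a == op(a, e) for a in G)                 # G2
inverses = all(any(op(a, b) == e == op(b, a) for b in G) for a in G) # G3
print(closure, assoc, identity, inverses)  # True True True True
```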
2019-04-19 03:07:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9207301139831543, "perplexity": 181.50559830152133}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578526966.26/warc/CC-MAIN-20190419021416-20190419043416-00128.warc.gz"}
https://stats.stackexchange.com/questions/465861/what-are-the-moment-conditions-in-the-gmm-method-also-gmm-vs-iv-vs-2sls
# What are the "moment conditions" in the GMM method? Also: GMM vs IV vs 2SLS? I keep seeing talk of 'moment conditions' or 'moment equations', but don't exactly understand the context. Consider a very standard regression model: $$y_i = \beta x_i + u_i$$ where $$u_i$$ is an error term, and suppose all the classic linear regression assumptions hold. If I relax the exogeneity assumption, i.e., $$\mathbb{E}(u|x) \neq 0$$ (also side question: Why does this imply that $$\mathbb{E}(u_i x_i)\neq0$$?), then using OLS here will produce biased estimates, right? Is $$\mathbb{E}(u_i | x_i)=0$$ the 'moment condition' in OLS? Is it $$\mathbb{E}(u_i x_i) =0$$? My second question is whether GMM, 2SLS, and IV are specifically distinct from one another. My book says that when we have $$K$$ endogenous regressors and $$K$$ instruments (exactly identified) we use IV. In the case of being over-identified, where we have $$J>K$$ IVs, we use GMM. What about the under-identified case? Finally, what's the best way to distinguish between these different methods? For instance, what is the difference between using GMM in an over-identified case and trying to use IV in that case? Thanks for any help. The moment condition is the exogeneity condition $$\mathbb{E}(u_i x_i) = 0$$. ($$\mathbb{E}(u_i | x_i)=0$$ is not a moment condition. It is an equality of random variables.) OLS is a special case of Method of Moments estimator where the estimates are given by the sample analogue of a population moment condition. For OLS, the sample analogue of $$\mathbb{E}(u_i x_i) = 0$$ is $$\sum e_i x_i = 0,$$ where $$e_i = y_i - \hat{\beta} x_i$$. This sample condition characterizes $$\hat{\beta}$$. As the terminology suggests, GMM is a generalization of method of moments. The IV estimator is a special case of GMM, where the moment condition is $$\mathbb{E}(u_i z_i) = 0$$ with $$z_i$$ being a vector of IV's. (For OLS, $$z_i = x_i$$. Exogenous regressors are examples of instruments.)
When the system is over-identified, the sample version of $$\mathbb{E}(u_i z_i) = 0$$ need not have a solution. Therefore one minimizes an appropriate quadratic form instead---this is what makes GMM "generalized", compared to MM. Strictly speaking, 2SLS is an algorithm that implements the IV estimator, rather than an estimator. Trivially, you can find other equivalent algorithms that implement IV. This slight abuse of terminology is, however, standard. GMM is, of course, not restricted to IV. See, for example, Hansen's seminal application of GMM to the equity premium puzzle. It does not make sense to speak of estimation for under-identified models---they are unidentified.
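The sample moment conditions can be illustrated with a small simulation. The sketch below (Python, illustrative; the data-generating process, coefficients, and sample size are assumptions, not from the answer) builds an endogenous regressor with one valid instrument and solves the sample analogues of the OLS and IV moment conditions in the exactly identified, no-intercept case:

```python
import random

random.seed(1)
n = 10_000
beta = 2.0

z = [random.gauss(0, 1) for _ in range(n)]   # instrument: correlated with x, not with u
v = [random.gauss(0, 1) for _ in range(n)]   # source of endogeneity
u = [0.8 * v[i] + random.gauss(0, 1) for i in range(n)]
x = [z[i] + v[i] for i in range(n)]          # E(u x) != 0, but E(u z) = 0
y = [beta * x[i] + u[i] for i in range(n)]

# Sample analogues of the moment conditions:
# OLS solves sum(e_i x_i) = 0, IV solves sum(e_i z_i) = 0.
beta_ols = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
beta_iv = sum(zi * yi for zi, yi in zip(z, y)) / sum(zi * xi for zi, xi in zip(z, x))
print(beta_ols, beta_iv)  # OLS is biased upward; IV lands near the true beta = 2
```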
2022-01-22 17:12:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 19, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7283304333686829, "perplexity": 1266.4581697282044}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303868.98/warc/CC-MAIN-20220122164421-20220122194421-00711.warc.gz"}
https://kwant-project.org/doc/1/reference/generated/kwant.lattice.general
# kwant.lattice.general kwant.lattice.general(prim_vecs, basis=None, name='', norbs=None) Create a Bravais lattice of any dimensionality, with any number of sites. Parameters: prim_vecs : 2d array-like of floats The primitive vectors of the Bravais lattice. basis : 2d array-like of floats The coordinates of the basis sites inside the unit cell. name : string or sequence of strings Name of the lattice, or sequence of names of all of the sublattices. If the name of the lattice is given, the names of sublattices (if any) are obtained by appending their number to the name of the lattice. norbs : int or sequence of ints, optional The number of orbitals per site on the lattice, or a sequence of the number of orbitals of sites on each of the sublattices. Returns: lattice : either Monatomic or Polyatomic Resulting lattice. Notes This function is largely an alias to the constructors of corresponding lattices.
2017-07-25 00:41:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3547363579273224, "perplexity": 1732.4438446421273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424945.18/warc/CC-MAIN-20170725002242-20170725022242-00466.warc.gz"}
https://www.askmehelpdesk.com/labor-law/what-laws-regarding-per-diem-when-working-out-town-586336.html
# What are the laws regarding per diem when working out of town? What are the laws regarding per diem when working out of town? Last edited by Assistant; Jul 20, 2011 at 03:35 PM. wendydawn1 Posts: 2, Reputation: 1 New Member #2 Jul 8, 2011, 01:53 PM Is your employer required to pay perdigm when sending you out of town to work? Is it mandatory for an employer to pay perdigm when sending you out of town to work? tickle Posts: 23,562, Reputation: 2633 Expert #3 Jul 8, 2011, 02:03 PM No, per diem (not "perdigm") is not mandatory; it is only required if it is written into your contract with your employer. It usually means a food allowance while away working. Tick AK lawyer Posts: 12,587, Reputation: 977 Expert #4 Jul 8, 2011, 02:06 PM Originally Posted by wendydawn1 what are the laws regarding perdigm when working out of town? Congratulations. First thread in the new Labor Law subforum, evidently. You are asking about "per diem"? It comes from the Latin, meaning, literally, "by day". The term usually has to do with payments to an employee to compensate him/her for the extra expenses (meals and room) of living on the road. What exactly do you want to know about it? I doubt that there are any such laws, but if you want us to look it up, you had better tell us the state or other jurisdiction you are in. Fr_Chuck Posts: 80,628, Reputation: 7627 Expert #5 Jul 8, 2011, 02:24 PM And I get to merge the first two questions in the new forum. ## Check out some similar questions! I have been working out of town for almost 2 years and have not received any per diem for this. Am I entitled to compensation for this even though the housing is paid for and I have use of a company vehicle which gas is included? My company has not taxed my per diem for the last 10 months. I was just told that after one year on this jobsite, which is out of town, I will be taxed on my per diem. Can that happen?
Is per diem taxable when out of state working [ 1 Answers ] Sorry, let me rephrase that, I am working in Georgia, on a construction job working for a contractor in Idaho, I would like to know what the taxes would be on $3000 claiming 4 dependants. I need to know if you can break it down for me , FICA, SS,FED. Thank you Is per diem taxable when out of state working [ 1 Answers ] This is my situation, I am working out of state and I am receiving$50 a day per diem for food, etc... is this taxable? I alos would like to know how much tax would be taken out of \$3000? Mother has custody of the child and father has joint custody of the child .Mother has left child with her parents for four days now. Her father(grandparent ) just contacted the father to let him know that she has not returned home and is recorded to have a drug problem in the past which was...
http://everettsprojects.com/2018/01/30/mnist-adversarial-examples.html
Convolutional neural networks appear to be wildly successful at image recognition tasks, but they are far from perfect. They are known to be susceptible to attacks called adversarial examples, in which an image that is clearly of one class to a human observer can be modified in such a way that the neural network misclassifies it. In some cases, the necessary modifications may even be imperceptible to humans. In this post we will explore the topic of adversarial examples using the convolutional neural network I created for a Kaggle competition and then later visualized. To do so I will use the hand-drawn digits that the neural network used as a validation set during training, and show that the neural network correctly classifies them 99.74% of the time. I will then use a library called CleverHans to compute adversarial examples that cause this accuracy to plummet. I will also introduce 10 brand new digits that I have drawn myself and show that they are classified correctly with high confidence. I will then try to compute perturbations that push the model into classifying each of these new example digits as each of the other nine possible digits. These perturbed examples will be visualized to show that the changes required for misclassification are often not as significant as you might expect.

/home/everett/.local/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type. from ._conv import register_converters as _register_converters
Using TensorFlow backend.
/usr/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6 return f(*args, **kwds)

With the validation set that was used in training my MNIST convnet, we can verify that the validation accuracy is actually 99.74%, as expected.
The normal validation accuracy is: 0.9973809523809524

Now we want to see if CleverHans is working. To do so we will initialize the FastGradientMethod attack object, which uses the Fast Gradient Sign Method (FGSM) to generate adversarial examples. The parameters used in this attack are exactly the same as those provided in the Keras tutorial on the CleverHans GitHub repo. We’ll then create the adversarial examples based on the validation data and check the classification accuracy.

The adversarial validation accuracy is: 0.2007142857142857

The classification accuracy has dropped from a respectable 99.74% to just 20.07% using an FGSM attack. This is one of the simplest adversarial attack methods available, and I expect that better results are possible with more sophisticated methods. To get a feel for what the FGSM attack has done, let’s visualize one of the digits beside the corresponding adversarial example:

The normal digit is predicted to be a [7]
The adversarial example digit is predicted to be an [8]

The attack has perturbed the classification from a 7, which is correct, to an 8, which is obviously not. I don’t expect that any competent human would mistake the second image for an 8. The attack wasn’t imperceptible, so they may question why there appears to be a bunch of white noise in the image, but if pressed to identify the digit I am confident that nearly everyone will answer that it’s a 7.

Now let’s consider the 10 brand new digits I have created for this exercise. Each of these digits was drawn in Inkscape using a Wacom tablet. I then exported the svg files to a 28 x 28 pixel png image and inverted the colors to get white digits on a black background, just like the original MNIST data my model used.

The normal classifications are: [0 1 2 3 4 5 6 7 8 9]
The normal classification confidences are: [1. 0.99881727 1. 1. 1. 0.99999976 0.9999995 1. 1. 0.99999774]
The normal classification accuracy is: 1.0

My convnet does a fine job of identifying the new digits, correctly classifying each one with a minimum confidence of 99.88% (on the digit one). Now let’s see what happens when the examples are perturbed adversarially. This time we will use the Basic Iterative Method for attacks, which is an extension of the FGSM attack that can achieve misclassification with more subtle perturbations.

The adversarial classifications are: [9 2 8 8 8 3 8 8 3 8]
The adversarial classification confidences are: [0.9999982 1. 1. 1. 0.99999964 1. 1. 1. 1. 1.]
The adversarial classification accuracy is: 0.0

The classification accuracy on the adversarial versions of the new digits has dropped to 0%, and my convnet is alarmingly confident in these misclassifications. This time the minimum confidence is for the zero digit, which it has predicted is a nine with 99.99982% confidence. Once again, I do not expect any competent human being to make a similar mistake on the above examples. Clearly my MNIST convnet is susceptible to adversarial examples, which isn’t surprising given that it was never trained on data that resembles these attacks. It is effectively over-fit to normal-looking images of digits that were created in good faith, and adversarial examples expose this over-fitting in a dramatic fashion. An important thing to note, however, is that the above attacks aren’t targeted towards a specific misclassification; they simply move towards the easiest misclassification they can find. A malicious actor might find more utility in forcing a specific misclassification if they intended to exploit my neural network in the real world. Let’s now consider how susceptible my convnet is to targeted adversarial attacks. The first column in the above image represents the input digit, and the next ten digits on each row are attempts to perturb it into the digits zero through nine. The bottom row represents the target digit of the perturbation.
A green border around a digit indicates that my convnet correctly classified the adversarial example as the original input digit, a red border means the digit was misclassified as the target digit, and a yellow border means the digit was misclassified, but not as the target. The diagonals are all correctly classified, since they represent attempts to perturb a digit towards itself; we will not consider these diagonal entries when determining the accuracy of the model. Counting the green digits off the diagonal, we can see that only five of the ninety attacks were correctly classified, and four more were misclassified as an unintended digit. Four of these five failed attacks are on the digit eight, suggesting that eights are not as easy to perturb as the other digits. Regardless, with 80 out of 90 attacks succeeding, we achieved an 88.9% success rate in forcing specific misclassifications. It is no wonder adversarial examples have been such a topic of interest in machine learning circles over the past few years. Now that I know my MNIST convnet is susceptible to adversarial attacks, it might be interesting to try a similar technique while treating my network like a black box. To do this I would need to construct a parallel model which is used to find adversarial attacks that are likely to work on the original black-box model. Such attacks are known to work in practice, and there is even a CleverHans tutorial that implements one. I might also try to improve this model by incorporating adversarial examples during training. Or I could instead switch to using the new Capsule Networks introduced only a few months ago by Geoffrey Hinton. Capsule Networks purport to be more resistant to adversarial examples due to the way in which they encode certain features like position, size, orientation, and stroke thickness. It will be interesting to see just how resilient they are against targeted attacks as time passes.
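Both untargeted attacks used in this post only need the gradient of the loss with respect to the input image, so they can be sketched without any deep-learning framework. Below is a minimal illustration on a tiny softmax classifier where that gradient has a closed form; the model, weights, and epsilon values are illustrative stand-ins, not the CleverHans API or the parameters used above.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def input_grad(W, b, x, y):
    """Gradient of the cross-entropy loss w.r.t. the input x.

    For p = softmax(W @ x + b) and one-hot label y,
    dL/dx = W.T @ (p - onehot)."""
    p = softmax(W @ x + b)
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    return W.T @ (p - onehot)

def fgsm(W, b, x, y, eps):
    """Fast Gradient Sign Method: one step of size eps in the
    direction that increases the loss, clipped to valid pixel range."""
    g = input_grad(W, b, x, y)
    return np.clip(x + eps * np.sign(g), 0.0, 1.0)

def bim(W, b, x, y, eps, alpha, n_iter):
    """Basic Iterative Method: repeated small FGSM steps of size
    alpha, kept inside an eps-ball around the original input."""
    x_adv = x.copy()
    for _ in range(n_iter):
        g = input_grad(W, b, x_adv, y)
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay in the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # stay a valid image
    return x_adv
```

For a real convnet the only change is that a framework's automatic differentiation supplies `input_grad`, which is exactly the part CleverHans automates; BIM's smaller repeated steps are why its perturbations can be subtler than a single FGSM step of the same total budget.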
https://socratic.org/questions/is-sulfur-a-nonmetal
# Is sulfur a nonmetal?

Sulfur is an abundant non-metal, and one of the few elements that can be found naturally in its elemental form. Under normal conditions, sulfur occurs as the ${S}_{8}$ molecule, a bright yellow powder.
https://codereview.stackexchange.com/questions/35824/form-processing-controller-action
# Form-processing controller action

I have a fairly ugly controller method I would like to refactor / extract. I can only test this in an integration-type test, which is something of a code smell. The method processes the form and needs to do one of 5 things, depending on which button was pressed:

1. if submitted and valid (button 1) => update some values on the entity and re-render the form
2. if submitted and valid (button 2) => perform another controller action that shows a .pdf print of the entity
3. if submitted and valid (button 3) => save entity and redirect
4. if not submitted or not valid => render form (plus errors)

In code it looks kind of like this:

    protected function processForm(Request $request, MyEntity $entity)
    {
        $form = $this->createForm(new MyEntityType, $entity);

        if ($form->handleRequest($request)->isValid()) {
            $alteredEntity = SomeClass::performStuffOnEntity($entity);

            if ($form->get('button1')->isClicked()) {
                $form = $this->createForm(new MyEntityType, $alteredEntity);
            } elseif ($form->get('button2')->isClicked()) {
                return $this->pdfPreview($alteredEntity);
            } elseif ($form->get('button3')->isClicked()) {
                return $this->persistAndRedirectToEntity($alteredEntity);
            }
        }

        return $this->render('MyBundle:MyEntity:new.html.twig', array(
            'form' => $form->createView(),
        ));
    }

There are actually two buttons like button1; I left one out for brevity of the example.

Ideas:

1. I have tried to extract this into an EntityFormWizard of some sort, but this ended up as a cluttered object with too many dependencies (Router, Templating, Form) which was also a pain to test.
2. Using FormEvents I wanted to extract at least the altering of the entity (depending on which button was pressed) into a FormEventListener, but the only place where I can alter the entity is the FormEvents::PRE_SET_DATA event, and in there I have trouble figuring out which button was clicked.
Do I have to live with my integration test for this behavior, or is there a way to extract it and test it with a unit test?

• This question appears to be off-topic because it seems like you are asking us how to write code for you, or how to do something other than increase readability, performance, speed, etc. – Malachi Nov 22 '13 at 4:49
• I disagree; I think that rearranging code for testability is on topic. However, we haven't been given enough context to understand the problem. This might be a situation where it makes sense to post the code of greatest concern here, and the rest on GitHub or something. – 200_success Nov 22 '13 at 6:54
• Hi, I'm not trying to get my code written for me. Everything in here exists already. I'm just looking for a better abstraction or advice on how to extract that method cleverly. If I extract it as it is, I end up with exactly the same dependencies as in the controller, so I wouldn't really gain anything. – user1777136 Nov 22 '13 at 10:03
https://www.maa.org/press/maa-reviews/markov-chains-and-mixing-times-0
# Markov Chains and Mixing Times

###### David A. Levin and Yuval Peres

Publisher: American Mathematical Society
Publication Date: 2017
Number of Pages: 447
Format: Hardcover
Edition: 2
Price: 84.00
ISBN: 9781470429621
Category: Monograph

[Reviewed by Rick Durrett, on 09/8/2019]

Discrete-time Markov chains with a finite state space are the first stochastic process that one encounters in probability. At each time $n$, $X_{n}$ is in some state $x$, and it jumps to state $y$ at time $n+1$ with probability $p(x,y)$. The Markov property says that, given the current state, the rest of the past is irrelevant for predicting the future. You can think of the system as a board game in which on each turn you “roll a die” and use it to determine the new square you move to. The importance of Markov chains comes from the fact that many systems have this property and there is a rich theory for determining the asymptotic behavior of the system. In most cases of interest, the asymptotic behavior is that the system converges to equilibrium. In an introductory stochastic process course, which usually covers topics such as Poisson processes, renewal theory, continuous-time Markov chains, and perhaps Brownian motion, this is all there is time for. However, convergence theory for Markov chains is far from the end of the story. In many situations, such as randomized algorithms that arise from computer science and Markov chain Monte Carlo (basically a long-winded way of saying simulation), it is important to understand the amount of time it takes for the system to reach equilibrium, because that will dictate whether the algorithm is useful or not. Chapter 4 introduces some of the many metrics that are used to quantify the time to reach equilibrium. Chapters 5 and 6 introduce two methods (coupling and strong stationary times) that are used to upper bound the time to equilibrium.
Coupling is a method for defining two processes on the same space so they agree with each other “as much as possible.” When one process is a Markov chain started at a fixed state and the second is the chain started in equilibrium, then one minus the probability they agree is an upper bound on the “total variation distance” between the current state and equilibrium. The second method, called strong stationary times, constructs random times at which the process is exactly in equilibrium. Chapter 7 discusses the typically harder problem of giving lower bounds on the mixing times. The book then turns to important examples: 8. Shuffling cards, 9. Random walks on networks. Chapter 10 considers the problem of the time it takes to first visit a specified state, the hitting time, while Chapter 11 discusses the cover time, i.e., the time to visit all of the states. Chapter 12, which closes Part I, discusses eigenvalues. When the transition matrix can be diagonalized, the convergence rate can be computed explicitly. However, in many examples one uses the size of the spectral gap between 1 and the next largest eigenvalue. Part II considers more advanced material. There are a number of famous examples: the Ising model of magnetism from physics, gene rearrangement from biology, the simple exclusion process, which is one of the first interacting particle systems, and the Lamplighter walk from mathematics, in which an individual walks along an infinite sequence of lamps; on each step he/she either jumps or changes the state of the lamp. The system may sound like a joke, but its analysis is no laughing matter and the answers are exotic. Other special techniques are introduced in Part II, but perhaps the most intriguing notion here is the cutoff time. For an example, consider the random walk on a random d-regular graph with $d \geq 3$.
The time to converge to equilibrium is asymptotically $c_{d} \log n$ but the total variation distance goes from 1 to 0 in a window of size $\sqrt{ \log n}$ around  $c_{d} \log n$. The phenomenon that the total variation distance goes from 1 to 0 in a window that is much smaller than the mixing time is “cutoff.” Determining when this happens is a very interesting problem. In theory, Part I of the book can be used for a course. Two flow charts show how one might go about this. This could work for a collection of graduate students, even ones from a variety of application fields, but I personally wouldn’t try it on Duke undergraduates. Rick Durrett is a professor in the Mathematics Department at Duke University.
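The total variation quantities discussed in the review are easy to experiment with numerically. The sketch below (my own illustration, not code from the book) computes the worst-case distance $d(t) = \max_x \|P^t(x,\cdot) - \pi\|_{TV}$ and the mixing time $t_{mix}(\varepsilon)$ for the lazy simple random walk on an $n$-cycle, one of the book's standard early examples.

```python
import numpy as np

def lazy_cycle(n):
    """Transition matrix of the lazy simple random walk on an
    n-cycle: stay put with prob 1/2, step left/right with prob 1/4."""
    P = np.zeros((n, n))
    for i in range(n):
        P[i, i] = 0.5
        P[i, (i - 1) % n] = 0.25
        P[i, (i + 1) % n] = 0.25
    return P

def tv_distance(mu, nu):
    """Total variation distance: half the L1 distance."""
    return 0.5 * np.abs(mu - nu).sum()

def worst_case_tv(P, pi, t):
    """d(t) = max over starting states x of ||P^t(x, .) - pi||_TV."""
    Pt = np.linalg.matrix_power(P, t)
    return max(tv_distance(Pt[x], pi) for x in range(P.shape[0]))

def mixing_time(P, pi, eps=0.25, t_max=10_000):
    """Smallest t with d(t) <= eps (the usual definition of t_mix)."""
    for t in range(1, t_max + 1):
        if worst_case_tv(P, pi, t) <= eps:
            return t
    return None
```

For the lazy cycle the stationary distribution $\pi$ is uniform, laziness makes the chain aperiodic so $d(t) \to 0$, and the mixing time grows on the order of $n^2$, so doubling $n$ roughly quadruples $t_{mix}$.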
https://zbmath.org/serials/?q=se%3A193
## Reports on Mathematical Physics Short Title: Rep. Math. Phys. Publisher: Elsevier (Pergamon), Oxford; Polish Scientific Publishers PWN, Warszawa ISSN: 0034-4877 Online: http://www.sciencedirect.com/science/journal/00344877 Comments: Indexed cover-to-cover Documents Indexed: 2,377 Publications (since 1970) References Indexed: 2,332 Publications with 44,625 References. all top 5 ### Latest Issues 89, No. 3 (2022) 89, No. 2 (2022) 89, No. 1 (2022) 88, No. 3 (2021) 88, No. 2 (2021) 88, No. 1 (2021) 87, No. 3 (2021) 87, No. 2 (2021) 87, No. 1 (2021) 86, No. 3 (2020) 86, No. 2 (2020) 86, No. 1 (2020) 85, No. 3 (2020) 85, No. 2 (2020) 85, No. 1 (2020) 84, No. 3 (2019) 84, No. 2 (2019) 84, No. 1 (2019) 83, No. 3 (2019) 83, No. 2 (2019) 83, No. 1 (2019) 82, No. 3 (2018) 82, No. 2 (2018) 82, No. 1 (2018) 81, No. 3 (2018) 81, No. 2 (2018) 81, No. 1 (2018) 80, No. 3 (2017) 80, No. 2 (2017) 80, No. 1 (2017) 79, No. 3 (2017) 79, No. 2 (2017) 79, No. 1 (2017) 78, No. 3 (2016) 78, No. 2 (2016) 78, No. 1 (2016) 77, No. 3 (2016) 77, No. 2 (2016) 77, No. 1 (2016) 76, No. 3 (2015) 76, No. 2 (2015) 76, No. 1 (2015) 75, No. 3 (2015) 75, No. 2 (2015) 75, No. 1 (2015) 74, No. 3 (2014) 74, No. 2 (2014) 74, No. 1 (2014) 73, No. 3 (2014) 73, No. 2 (2014) 73, No. 1 (2014) 72, No. 3 (2013) 72, No. 2 (2013) 72, No. 1 (2013) 71, No. 3 (2013) 71, No. 2 (2013) 71, No. 1 (2013) 70, No. 3 (2012) 70, No. 2 (2012) 70, No. 1 (2012) 69, No. 3 (2012) 69, No. 2 (2012) 69, No. 1 (2012) 68, No. 3 (2011) 68, No. 2 (2011) 68, No. 1 (2011) 67, No. 3 (2011) 67, No. 2 (2011) 67, No. 1 (2011) 66, No. 3 (2010) 66, No. 2 (2010) 66, No. 1 (2010) 65, No. 3 (2010) 65, No. 2 (2010) 65, No. 1 (2010) 64, No. 3 (2009) 64, No. 1-2 (2009) 63, No. 3 (2009) 63, No. 2 (2009) 63, No. 1 (2009) 62, No. 3 (2008) 62, No. 2 (2008) 62, No. 1 (2008) 61, No. 3 (2008) 61, No. 2 (2008) 61, No. 1 (2008) 60, No. 3 (2007) 60, No. 2 (2007) 60, No. 1 (2007) 59, No. 3 (2007) 59, No. 2 (2007) 58, No. 3 (2006) 58, No. 2 (2006) 58, No. 
1 (2006) 57, No. 3 (2006) 57, No. 2 (2006) 57, No. 1 (2006) 56, No. 3 (2005) 56, No. 2 (2005) 56, No. 1 (2005) ...and 106 more Volumes all top 5 ### Authors 26 Pulmannová, Sylvia 21 Albeverio, Sergio A. 21 Asanov, Gennadii S. 20 Sławianowski, Jan Jerzy 20 Śniatycki, Jędrzej 17 Mrugała, Ryszard 16 Gudder, Stanley P. 15 Jamiołkowski, Andrzej 15 Kijowski, Jerzy 14 Exner, Pavel 13 Fei, Shaoming 13 Lahti, Pekka Johannes 12 Maczynski, Maciej J. 11 Dvurečenskij, Anatolij 11 Ohya, Masanori 11 Uhlmann, Armin 11 Woronowicz, Stanisław Lech 10 Bates, Larry M. 10 de León Rodríguez, Manuel 10 Jadczyk, Arkadiusz 10 Marcinek, Władysław 10 Marmo, Giuseppe 10 Michalski, Miłosz R. 10 Popov, Igor’ Yur’evich 10 Rudolph, Gerd 10 Rzewuski, Jan 10 Scarfone, Antonio Maria 10 Streit, Ludwig 9 Cariñena, José F. 9 Chruściński, Dariusz 9 Guz, Wawrzyniec 9 Ingarden, Roman Stanislaw 9 Khan, Subuhi 9 Matsumoto, Makoto 9 Narnhofer, Heide 9 Pusz, Wiesław 9 Sakthivel, Rathinasamy 9 Vladimirov, Vsevolod A. 8 Błaszak, Maciej 8 Foulis, David James 8 Godlewski, Piotr 8 Kholevo, Aleksandr Semënovich 8 Sewell, Geoffrey L. 7 Belavkin, Viacheslav 7 Cornwell, J. F. 7 Cushman, Richard H. 7 Dereziński, Jan 7 Gesztesy, Fritz 7 Grudziński, Hubert 7 Jenčová, Anna 7 Kaniadakis, Giorgio 7 Lassner, Gerd 7 Maćkowiak, Jan 7 Martín de Diego, David 7 Messina, Antonino 7 Odzijewicz, Anatol 7 Posiewnik, Andrzej 7 Wojnar, Ryszard 6 Accardi, Luigi 6 Băleanu, Dumitru I. 6 Boya, Luis Joaquín 6 Bugajski, Sławomir 6 De Graaf, Jan 6 de Oliveira, César R. 6 Dobrev, Vladimir K. 6 Dorninger, Dietmar W. 6 Fassari, Silvestro 6 Garecki, Janusz 6 Gołubowska, Barbara 6 Grabowski, Janusz 6 Kanatchikov, Igor V. 6 Karwowski, Witold 6 Kishimoto, Akitaka 6 Koshmanenko, Volodymyr Dmytrovych 6 Länger, Helmut M. 6 Lulek, Tadeusz 6 Marsden, Jerrold Eldon 6 Martens, Agnieszka 6 Milewski, Jan 6 Mozrzymas, Jan 6 Napiorkowski, Kazimierz 6 Napoli, Anna 6 Palev, Tchavdar D. 6 Prykarpatsky, Anatoliy Karolevych 6 Puta, Mircea 6 Ramm, Alexander G. 
6 Shen, Shoufeng 6 Staszewski, Przemysław 6 Stavrinos, Panayiotis C. 6 Tulczyjew, Włodzimierz Marek 6 Urbański, Paweł 6 van der Schaft, Arjan J. 6 Verbeure, André F. 6 Ylinen, Kari 5 Akashi, Shigeo 5 Aldaya, Victor 5 Alicki, Robert 5 Athanasiadis, Christodoulos E. 5 Baumgärtel, Hellmut 5 Berezans’kyĭ, Yuriĭ Makarovych ...and 2,123 more Authors all top 5 ### Fields 964 Quantum theory (81-XX) 382 Differential geometry (53-XX) 332 Functional analysis (46-XX) 297 Partial differential equations (35-XX) 285 Dynamical systems and ergodic theory (37-XX) 248 Statistical mechanics, structure of matter (82-XX) 246 Operator theory (47-XX) 236 Mechanics of particles and systems (70-XX) 210 Global analysis, analysis on manifolds (58-XX) 166 Topological groups, Lie groups (22-XX) 158 Relativity and gravitational theory (83-XX) 152 Probability theory and stochastic processes (60-XX) 133 Nonassociative rings and algebras (17-XX) 85 Ordinary differential equations (34-XX) 57 Mathematical logic and foundations (03-XX) 54 Fluid mechanics (76-XX) 52 Linear and multilinear algebra; matrix theory (15-XX) 49 Order, lattices, ordered algebraic structures (06-XX) 49 Group theory and generalizations (20-XX) 49 Information and communication theory, circuits (94-XX) 46 Optics, electromagnetic theory (78-XX) 42 Special functions (33-XX) 39 Mechanics of deformable solids (74-XX) 36 Associative rings and algebras (16-XX) 34 Calculus of variations and optimal control; optimization (49-XX) 33 Measure and integration (28-XX) 28 Manifolds and cell complexes (57-XX) 26 Systems theory; control (93-XX) 23 General and overarching topics; collections (00-XX) 22 Algebraic topology (55-XX) 21 Difference and functional equations (39-XX) 18 Combinatorics (05-XX) 17 Abstract harmonic analysis (43-XX) 17 Classical thermodynamics, heat transfer (80-XX) 14 Several complex variables and analytic spaces (32-XX) 14 Statistics (62-XX) 14 Numerical analysis (65-XX) 12 Number theory (11-XX) 12 Algebraic geometry 
(14-XX) 12 Real functions (26-XX) 12 Biology and other natural sciences (92-XX) 11 Harmonic analysis on Euclidean spaces (42-XX) 11 Integral equations (45-XX) 10 Category theory; homological algebra (18-XX) 10 Astronomy and astrophysics (85-XX) 9 History and biography (01-XX) 8 Computer science (68-XX) 7 Functions of a complex variable (30-XX) 7 Approximations and expansions (41-XX) 7 Geometry (51-XX) 7 General topology (54-XX) 6 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 5 General algebraic systems (08-XX) 5 Integral transforms, operational calculus (44-XX) 4 Field theory and polynomials (12-XX) 4 $$K$$-theory (19-XX) 4 Potential theory (31-XX) 3 Commutative algebra (13-XX) 3 Operations research, mathematical programming (90-XX) 1 Sequences, series, summability (40-XX) 1 Convex and discrete geometry (52-XX) ### Citations contained in zbMATH Open 1,511 Publications have been cited 9,319 times in 7,392 Documents Cited by Year Reduction of symplectic manifolds with symmetry. Zbl 0327.58005 Marsden, Jerrold; Weinstein, Alan 1974 The ”transition probability” in the state space of a $$^*$$-algebra. Zbl 0355.46040 Uhlmann, A. 1976 Linear transformations which preserve trace and positive semidefiniteness of operators. Zbl 0252.47042 Jamiolkowski, A. 1972 Functional calculus for sesquilinear forms and the purification map. Zbl 0327.46032 Pusz, W.; Woronowicz, S. L. 1975 Quasi-entropies for finite quantum systems. Zbl 0629.46061 Petz, Dénes 1986 Positive maps of low dimensional matrix algebras. Zbl 0347.46063 Woronowicz, S. L. 1976 On fractional derivatives with exponential kernel and their discrete versions. Zbl 1384.26025 2017 Twisted second quantization. Zbl 0707.47039 Pusz, W.; Woronowicz, S. L. 1989 Nonholonomic reduction. Zbl 0798.58026 Bates, Larry; Śniatycki, Jȩdrzej 1993 Parallel transport and “quantum holonomy” along density operators. 
Zbl 0644.46058 Uhlmann, Armin 1986 When is the Wigner quasi-probability density non-negative? Zbl 0324.60018 Hudson, R. L. 1974 Symmetric polynomials and the center of the symmetric group ring. Zbl 0288.20014 Jucys, A.-A. A. 1974 Topological algebras of operators. Zbl 0252.46087 Lassner, G. 1972 On the Hamiltonian formulation of nonholonomic mechanical systems. Zbl 0817.70010 van der Schaft, A. J.; Maschke, B. M. 1994 Fibre bundles associated with space-time. Zbl 0204.29802 Trautman, A. 1970 Various approaches to conservative and nonconservative nonholonomic systems. Zbl 0931.37023 Marle, Charles-Michel 1998 Quantum dynamical semigroups and the neutron diffusion equation. Zbl 0372.47020 Davies, E. B. 1977 Bilinear equations and resonant solutions characterized by Bell polynomials. Zbl 1396.35054 Ma, Wen-Xiu 2013 Sequential products on effect algebras. Zbl 1023.81001 Gudder, Stan; Greechie, Richard 2002 Canonical structure of classical field theory in the polymomentum phase space. Zbl 0947.70020 Kanatchikov, Igor V. 1998 Properties of quantum Markovian master equations. Zbl 0392.47017 Gorini, Vittorio; Frigerio, Alberto; Verri, Maurizio; Kossakowski, Andrzej; Sudarshan, E. C. G. 1978 A spectral Legendre-Gauss-Lobatto collocation method for a space-fractional advection diffusion equations with variable coefficients. Zbl 1292.65109 Bhrawy, A. H.; Baleanu, D. 2013 Approximate controllability of fractional neutral stochastic system with infinite delay. Zbl 1263.93039 Sakthivel, R.; Ganesh, R.; Suganya, S. 2012 Quantum error correcting codes from the compression formalism. Zbl 1120.81011 Choi, Man-Duen; Kribs, David W.; Życzkowski, Karol 2006 Lie algebroids and Poisson-Nijenhuis structures. Zbl 1005.53061 Grabowski, Janusz; Urbański, Paweł 1997 Some ideas about quantization. Zbl 0418.58011 Frønsdal, Christian 1979 On the detailed balance condition for non-Hamiltonian systems. 
Zbl 0363.60114 Alicki, Robert 1976 The Hamiltonian and Lagrangian approaches to the dynamics of nonholonomic systems. Zbl 0929.70009 Koon, Wang Sang; Marsden, Jerrold E. 1997 Spaces of white noise distributions: Constructions, descriptions, applications. I. Zbl 0814.60034 Kondratev, Yu. G.; Streit, L. 1993 Generalized algebra within a nonextensive statistics. Zbl 1125.82300 Nivanen, L.; Le Méhauté, A.; Wang, Q. A. 2003 Lowest weight representations of the Schrödinger algebra and generalized heat/Schrödinger equations. Zbl 0884.22009 Dobrev, V. K.; Doebner, H.-D.; Mrugalla, Ch. 1997 Theory of Finsler spaces with $$(\alpha,\beta)$$-metric. Zbl 0772.53009 Matsumoto, Makoto 1992 On the relation between classical and quantum-mechanical entropy. Zbl 0444.60100 Wehrl, Alfred 1979 Some remarks on the $$\delta$$ ’-interaction in one dimension. Zbl 0638.70016 Šeba, Petr 1986 Some aspects of quantum information theory and their applications to irreversible processes. Zbl 0709.94011 Ohya, Masanori 1989 Poisson reduction for nonholonomic mechanical systems with symmetry. Zbl 1120.37314 Koon, Wang Sang; Marsden, Jerrold E. 1998 Symmetry and reduction in implicit generalized Hamiltonian systems. Zbl 0978.37046 Blankenstein, G.; van der Schaft, A. J. 2001 Complete controllability of stochastic evolution equations with jumps. Zbl 1244.93028 Sakthivel, R.; Ren, Y. 2011 On Randers spaces of constant flag curvature. Zbl 1048.53054 Bao, David; Robles, Colleen 2003 Contact structure in thermodynamic theory. Zbl 0742.58022 Mrugała, Ryszard; Nulton, James D.; Schön, J. Christian; Salamon, Peter 1991 An extension of Hamiltonian systems to the thermodynamic phase space: towards a geometry of nonreversible processes. Zbl 1210.80001 Eberard, D.; Maschke, B. M.; van der Schaft, A. J. 2007 Controllability of nonlocal fractional differential systems of order $$\alpha \in (1,2]$$ in Banach spaces. 
Zbl 1285.93023 Li, Kexue; Peng, Jigen; Gao, Jinghuai 2013 Step algebras of semi-simple subalgebras of Lie algebras. Zbl 0285.17005 Mickelsson, Jouko 1973 Brownian motion of a quantum harmonic oscillator. Zbl 0365.60071 1976 Fractional variational principles with delay within Caputo derivatives. Zbl 1195.49030 2010 One-dimensional model of a quantum nonlinear harmonic oscillator. Zbl 1161.81344 Cariñena, José F.; Rañada, Manuel F.; Santander, Mariano 2004 Approximate controllability of nonlinear impulsive differential systems. Zbl 1141.93015 Sakthivel, R.; Mahmudov, N. I.; Kim, J. H. 2007 Superposition rules, Lie theorem, and partial differential equations. Zbl 1153.34004 Cariñena, José F.; Grabowski, Janusz; Marmo, Giuseppe 2007 Operator theory in the $$C^*$$-algebra framework. Zbl 0793.46039 Woronowicz, S. L.; Napiórkowski, K. 1992 Birkhoffian formulations of nonholonomic constrained systems. Zbl 0988.70013 Guo, Yongxin; Luo, S. K.; Shang, M.; Mei, F. X. 2001 Quantum information theory. Zbl 0366.94032 Ingarden, Roman S. 1976 Some geometric aspects of variational calculus in constrained systems. Zbl 1038.37052 Gràcia, Xavier; Marín-Solano, Jesús; Muñoz-Lecanda, Miguel-C. 2003 The integration of semi-infinite Toda chain by means of inverse spectral problem. Zbl 0652.35098 Berezanskij, Yu. M. 1986 Free quantum motion on a branching graph. Zbl 0749.47038 Exner, P.; Šeba, P. 1989 $$\alpha$$-divergence of the non-commutative information geometry. Zbl 0806.62006 Hasegawa, Hiroshi 1993 Convex and linear effect algebras. Zbl 0956.46002 Gudder, S.; Pulmannová, S.; Bugajski, S.; Beltrametti, E. 1999 The property lattice of spatially separated quantum systems. Zbl 0632.46068 Gisin, N. 1986 New applications of fractional variational principles. Zbl 1166.58304 Baleanu, Dumitru 2008 Canonical construction of differential operators intertwining representations of real semisimple Lie groups. Zbl 0694.22008 Dobrev, V. K. 
1988 Controllability of semilinear differential equations and inclusions via semigroup theory in Banach spaces. Zbl 1185.93016 Górniewicz, L.; Ntouyas, S. K.; O’Regan, D. 2005 On Finsler spaces with Kropina metric. Zbl 0389.53008 Shibata, Choko 1978 A new proof of Wigner’s theorem. Zbl 1161.81381 Győry, Máté 2004 Examples of gauge conservation laws in nonholonomic systems. Zbl 0887.58016 Bates, Larry; Graumann, Hugo; MacDonnell, Creighton 1996 Reduction of nonholonomic mechanical systems with symmetries. Zbl 0973.37505 Cantrijn, Frans; de León, Manuel; Marrero, Juan Carlos; Martín de Diego, David 1998 Controllability results for impulsive functional differential inclusions. Zbl 1130.93310 Benchohra, M.; Górniewicz, L.; Ntouyas, S. K.; Ouahab, A. 2004 Linear connections and curvature tensors in the geometry of parallelizable manifolds. Zbl 1147.53021 Youssef, Nabil L.; Sid-Ahmed, Amr M. 2007 Cluster-state quantum computation. Zbl 1110.81054 Nielsen, Michael A. 2006 Semiclassical principal symbols and Gutzwiller’s trace formula. Zbl 0794.58046 Meinrenken, Eckhard 1992 Extensions of representations and cohomology. Zbl 0445.22013 Pinczon, G.; Simon, J. 1979 Covariant measurements and uncertainty relations. Zbl 0447.62011 Kholevo, A. S. 1979 Weighted entropy. Zbl 0222.62004 Guiaşu, Silviu 1971 Two theorems about $$C_p$$. Zbl 0258.47022 Grümm, H. R. 1973 On field theoretic generalizations of a Poisson algebra. Zbl 0905.58008 Kanatchikov, Igor V. 1997 Stochastic Hamiltonian dynamical systems. Zbl 1147.37032 Lázaro-Camí, Joan-Andreu; Ortega, Juan-Pablo 2008 Real slices of complex space-time in general relativity. Zbl 0361.53029 Rozga, Krysztof 1977 Geometry of nonholonomic constraints. Zbl 0900.70194 Cushman, R.; Kemppainen, D.; Śniatycki, J.; Bates, L. 1995 Universality of Fedosov’s construction for star products of Wick type on pseudo-Kähler manifolds. Zbl 1046.53058 Neumaier, Nikolai 2003 Magnetic curves in cosymplectic manifolds. 
Zbl 1353.53033 Druţă-Romaniuc, Simona-Luiza; Inoguchi, Jun-ichi; Munteanu, Marian Ioan; Nistor, Ana Irina 2016 Density operators as an arena for differential geometry. Zbl 0822.46087 Uhlmann, Armin 1993 Measurement, filtering and control in quantum open dynamical systems. Zbl 1056.81050 Belavkin, V. P. 1999 On the Duffin-Kemmer-Petiau formulation of the covariant Hamiltonian dynamics in field theory. Zbl 0984.81067 Kanatchikov, Igor V. 2000 Completely positive quasi-free maps of the CCR-algebra. Zbl 0436.46054 Demoen, Bart; Vanheuverzwijn, Paul; Verbeure, A. 1979 A dynamical system approach to phase transitions for $$p$$-adic Potts model on the Cayley tree of order two. Zbl 1271.82018 Mukhamedov, Farrukh 2012 The role of type III factors in quantum field theory. Zbl 1140.81427 Yngvason, Jakob 2005 New exact traveling wave solutions of some nonlinear higher-dimensional physical models. Zbl 1284.35374 Kim, Hyunsoo 2012 Controllability of nonlinear fractional delay dynamical systems. Zbl 1378.93022 Nirmala, R. Joice; Balachandran, K.; Rodríguez-Germa, L.; Trujillo, J. J. 2016 Galoisian approach to integrability of Schrödinger equation. Zbl 1238.81090 Acosta-Humánez, Primitivo B.; Morales-Ruiz, Juan J.; Weil, Jacques-Arthur 2011 An effective method of investigation of positive maps on the set of positive definite operators. Zbl 0348.60108 Jamiolkowski, A. 1974 Finslerian metric functions over the product $$\mathbb{R}\times M$$ and their potential applications. Zbl 0922.53006 Asanov, G. S. 1998 Geometry of the Prytz planimeter. Zbl 0952.53010 Foote, Robert L. 1998 The Pauli problem, state reconstruction and quantum-real numbers. Zbl 1110.81007 Corbett, J. V. 2006 Resolvents of self-adjoint extensions with mixed boundary conditions. Zbl 1143.47017 Pankrashkin, Konstantin 2006 On point interactions realised as Ter-Martirosyan-Skornyakov Hamiltonians. Zbl 1384.82017 Michelangeli, Alessandro; Ottolini, Andrea 2017 A note on covariant dynamical semigroups. 
Zbl 0794.47026 Holevo, A. S. 1993 Asymptotics of bound state for laterally coupled waveguides. Zbl 1040.78510 Popov, I. Yu. 1999 On Randers spaces of scalar curvature. Zbl 0362.53051 1977 Singular reduction of implicit Hamiltonian systems. Zbl 1065.37040 Blankenstein, Guido; Ratiu, Tudor S. 2004 Eigenvalues and normalized eigenfunctions of discontinuous Sturm-Liouville problem with transmission conditions. Zbl 1092.34046 Mukhtarov, O. Sh.; Kadakal, Mahir; Muhtarov, F. Ş. 2004 On a factor associated with the unordered phase of $$\lambda$$-model on a Cayley tree. Zbl 1135.82305 Mukhamedov, Farruh 2004 Twisted canonical anticommutation relations. Zbl 0752.17035 Pusz, W. 1989 Fractional Schrödinger equation with singular potentials of higher order. Zbl 07335488 Altybay, Arshyn; Ruzhansky, Michael; Sebih, Mohammed Elamine; Tokmagambetov, Niyaz 2021 Fractional diffusion with time-dependent diffusion coefficient. Zbl 1488.35556 Costa, F. S.; de Oliveira, E. Capelas; Plata, Adrian R. G. 2021 A $$K$$-contact Lagrangian formulation for nonconservative field theories. Zbl 1487.70095 Gaset, Jordi; Gràcia, Xavier; Muñoz-Lecanda, Miguel C.; Rivas, Xavier; Román-Roy, Narciso 2021 Expansion of bundles of light rays in the Lemaître-Tolman models. Zbl 07458548 Krasiński, Andrzej 2021 Operator means in JB-algebras. Zbl 07458558 Wang, Shuzhou; Wang, Zhenhua 2021 Automorphisms of effect algebras with respect to convex sequential product. Zbl 07335484 Zhang, Jinhua; Ji, Guoxing 2021 Two-qutrit entangled $$f$$-coherent states. Zbl 07335487 Dehghani, A.; Mojaveri, B.; Jafarzadeh Bahrbeig, R. 2021 A generalized Euler probability distribution. Zbl 07371596 Mouayn, Zouhaïr; El Moize, Othmane 2021 Existence and nonexistence of wave operators for time-decaying harmonic oscillators. 
Zbl 1447.81190 Ishida, Atsuhide; Kawamoto, Masaki 2020 $$\mathbb{Z}_2\times\mathbb{Z}_2$$-generalizations of infinite-dimensional Lie superalgebra of conformal type with complete classification of central extensions. Zbl 1441.17005 Aizawa, N.; Isaac, P. S.; Segar, J. 2020 Solutions of nonlocal Schrödinger equation via the Caputo-Fabrizio definition for some quantum systems. Zbl 07388504 Bouzenna, Fatma El-Ghenbazia; Korichi, Zineb; Meftah, Mohammed Tayeb 2020 New dynamics of the classical and nonlocal Gross-Pitaevskii equation with a parabolic potential. Zbl 07304320 Liu, Shimin; Hua, Wu; Zhang, Da-Jun 2020 Matrix spectral problems and integrability aspects of the Błaszak-Marciniak lattice equations. Zbl 07304324 Wang, Deng-Shan; Li, Qian; Wen, Xiao-Yong; Liu, Ling 2020 Bruce, Andrew James; Duplij, Steven 2020 Quantum-mechanical explicit solution for the confined harmonic oscillator model with the von Roos kinetic energy operator. Zbl 1451.81229 Jafarov, E. I.; Nagiyev, S. M.; Jafarova, A. M. 2020 Nonlocal phenomena in quantum mechanics with fractional calculus. Zbl 1451.81059 Atman, Kazim Gökhan; Şirin, Hüseyin 2020 Two-dimensional observables and spectral resolutions. Zbl 1441.81004 Dvurečenskij, Anatolij; Lachman, Dominik 2020 Ground state photon number at large distance. Zbl 1441.81135 Amour, Laurent; Jager, Lisette; Nourrigat, Jean 2020 On $$q$$-special matrix functions using quantum algebraic techniques. Zbl 1441.81113 Dwivedi, Ravi; Sahai, Vivek 2020 The $$f$$-deformation. II: $$f$$-deformed quantum mechanics in one dimension. Zbl 1441.81110 2020 Non-translation-invariant Gibbs measures of an SOS model on a Cayley tree. Zbl 1474.82004 Rahmatullaev, Muzaffar M.; Abraev, B. U. 2020 Order structures of $$(\mathcal{D,E})$$-quasi-bases and constructing operators for generalized Riesz systems. Zbl 07271101 Inoue, Hiroshi 2020 Symmetries, explicit solutions and conservation laws for some time space fractional nonlinear systems. Zbl 07271107 Singla, Komal; Rana, M. 
2020 Stateless quantum structures and extremal graph theory. Zbl 1451.81016 Voràček, Václav 2020 Quantum algebra $$\varepsilon_q(2)$$ and 2D $$q$$-Bessel functions. Zbl 1441.17015 Riyasat, Mumtaz; Khan, Subuhi; Nahid, Tabinda 2019 Some new Karamata type inequalities and their applications to some entropies. Zbl 1441.47020 2019 A hierarchy of integrable differential-difference equations and Darboux transformation. Zbl 1441.37072 Fan, Fang-Cheng; Shi, Shao-Yun; Xu, Zhi-Guo 2019 Open problem in orthogonal polynomials. Zbl 1441.81090 Alhaidari, Abdulaziz D. 2019 On bound electron pairs on the half-line. Zbl 1441.81148 Kerner, Joachim 2019 Generalizations of 2-dimensional diagonal quantum channels with constant Frobenius norm. Zbl 1441.81037 Sergeev, Ivan 2019 Construction of $$p$$-adic covariant quantum fields in the framework of white noise analysis. Zbl 1441.46055 Arroyo-Ortiz, Edilberto; Zúñiga-Galindo, W. A. 2019 Cauchy matrix type solutions for the nonlocal nonlinear Schrödinger equation. Zbl 1441.35218 Feng, Wei; Zhao, Song-Lin 2019 Exact combinatorial approach to finite coagulating systems through recursive equations. Zbl 1441.82020 Łepek, MichaŁ; Kukliński, PaweŁ; Fronczak, Agata; Fronczak, Piotr 2019 Hydrogenoid spectra with central perturbations. Zbl 1441.81086 Gallone, Matteo; Michelangeli, Alessandro 2019 On Killing magnetic curves in $$\mathrm{SL}(2,\mathbb{R})$$ geometry. Zbl 1441.78006 Erjavec, Zlatko 2019 One-mode Wigner quasi-probability distribution function for entangled coherent states generated by beam splitter and cavity QED. Zbl 1441.81027 Mirzaei, S.; Najarbashi, G. 2019 A generalized Vitali set from nonextensive statistics. Zbl 1441.28001 Gomez, Ignacio S. 2019 Number of eigenvalues of non-self-adjoint Schrödinger operators with dilation analytic complex potentials. Zbl 1441.81099 Someyama, Norihiro 2019 Quantum logics defined by sets of numerical events. 
Zbl 1441.81013 Dorninger, Dietmar; Länger, Helmut 2019 Derivation of generalized Einstein’s equations of gravitation in inertial systems based on a sink flow model of particles. Zbl 1441.83007 Wang, Xiao-Song 2019 Electromagnetic self-force of a point charge from the rate of change of the momentum of its retarded self-field. Zbl 1441.78011 Hnizdo, V.; Vaman, G. 2019 Quantum statistical manifold: the linear growth case. Zbl 1441.81015 Naudts, Jan 2019 Nonlinear evolution equations in Minkowski space-time. Zbl 1441.83002 Koumantos, Panagiotis N. 2019 Variations on the linear harmonic oscillator: Fourier analysis of a fractional Schrödinger equation. Zbl 1441.81092 Bezák, Viktor 2019 On symmetry groups and conservation laws for space-time fractional inhomogeneous nonlinear diffusion equation. Zbl 1441.35014 Feng, Wei 2019 On the bound states of magnetic Laplacians on wedges. Zbl 1441.35181 Exner, Pavel; Lotoreichik, Vladimir; Pérez-Obiol, Axel 2018 Multiplicity of self-adjoint realisations of the $$(2+1)$$-fermionic model of Ter-Martirosyan-Skornyakov type. Zbl 1398.81093 Michelangeli, Alessandro; Ottolini, Andrea 2018 Adaptation of the Alicki-Fannes-Winter method for the set of states with bounded energy and its use. Zbl 1398.81043 Shirokov, M. E. 2018 Schrödinger wave functional in quantum Yang-Mills theory from precanonical quantization. Zbl 1441.81129 Kanatchikov, Igor V. 2018 Simple fractal calculus from fractal arithmetic. Zbl 1402.28005 Aerts, Diederik; Czachor, Marek; Kuna, Maciej 2018 Finding discrete Bessel and Tricomi convolutions of certain special polynomials. Zbl 1402.33006 Khan, Subuhi; Ali, Mahvish; Naikoo, Shakeel Ahmad 2018 Remarks on multisymplectic reduction. Zbl 1402.53060 Echeverría-Enríquez, Arturo; Muñoz-Lecanda, Miguel C.; Román-Roy, Narciso 2018 On the sharpness of spectral estimates for graph Laplacians. Zbl 1402.81160 Kurasov, Pavel; Serio, Andrea 2018 Lower and upper bounds on nonunital qubit channel capacities. 
Zbl 1441.81104 Filippov, Sergey N. 2018 Shannon entropy reinterpreted. Zbl 1402.94044 Truffet, Laurent 2018 Tomographic portrait of quantum channels. Zbl 1402.81101 Amosov, G. G.; Mancini, Stefano; Manko, V. I. 2018 Pseudo-Finsler spaces modeled on a pseudo-Minkowski space. Zbl 1402.58006 2018 Energy levels of a physical system and eigenvalues of an operator with a singular potential. Zbl 1441.34004 2018 Analysis of stochastic quantization for the fractional Edwards measure. Zbl 1441.60039 Bock, Wolfgang; da Silva, José Luís; Fattler, Torben 2018 The Proca field in curved spacetimes and its zero mass limit. Zbl 1441.83015 Schambach, Maximilian; Sanders, Ko 2018 Quantized curvature in loop quantum gravity. Zbl 1441.83013 2018 On local Tsallis entropy of relative dynamical systems. Zbl 1402.94043 2018 On the number of eigenvalues of the biharmonic operator on $$\mathbb{R}^3$$ perturbed by a complex potential. Zbl 1402.31003 Hulko, Artem 2018 Unified geometrical basis for the generalized Ehlers identities and Raychaudhuri equations. Zbl 1402.83031 Mychelkin, Eduard G.; Makukov, Maxim A. 2018 Lie conformal algebras of planar Galilean type. Zbl 1402.81224 Han, Xiu; Wang, Dengyin; Xia, Chunguang 2018 Potential functions admitted by well-known spherically symmetric static spacetimes. Zbl 1402.22020 Jamal, Sameerah; Shabbir, Ghulam 2018 Geometrically induced spectral effects in tubes with a mixed Dirichlet-Neumann boundary. Zbl 1402.81162 Bakharev, Fedor L.; Exner, Pavel 2018 Chebyshev, Legendre, Hermite and other orthonormal polynomials in $$D$$ dimensions. Zbl 1402.33010 Doria, Mauro M.; Coelho, Rodrigo C. V. 2018 Generalized integrable hierarchies of AKNS type, super Dirac type and super NLS-mKdV type. Zbl 1402.37081 Wang, Xinyang; Shen, Shoufeng; Li, Zhihui; Li, Chunxia; Ye, Yujian 2018 Nontranslation invariant Gibbs measures for models with uncountable set of spin values on a Cayley tree. Zbl 1398.82013 Rozikov, U. A.; Botirov, G. I. 
2018 Asymptotic stability of the relativistic Boltzmann equation on Bianchi type I space-time with a hard potential. Zbl 1398.76205 Takou, Etienne; Ciake Ciake, F. L. 2018 On fractional derivatives with exponential kernel and their discrete versions. Zbl 1384.26025 2017 On point interactions realised as Ter-Martirosyan-Skornyakov Hamiltonians. Zbl 1384.82017 Michelangeli, Alessandro; Ottolini, Andrea 2017 Quantum field theory applications of Heun type functions. Zbl 1384.33035 Birkandan, T.; Hortaçsu, M. 2017 On a two-particle bound system on the half-line. Zbl 1384.70010 Kerner, Joachim; Mühlenbruch, Tobias 2017 Dual wavefunction of the symplectic ice. Zbl 1387.81230 Motegi, Kohei 2017 A Loomis-Sikorski theorem and functional calculus for a generalized Hermitian algebra. Zbl 1384.81042 Foulis, David J.; Jenčová, Anna; Pulmannová, Sylvia 2017 Theoretical foundations of incorporating local boundary conditions into nonlocal problems. Zbl 1384.35045 Aksoylu, Burak; Beyer, Horst Reinhard; Celiker, Fatih 2017 Joint measurability through Naimark’s dilation theorem. Zbl 1384.81009 Beneduci, Roberto 2017 States and synaptic algebras. Zbl 1384.81007 Foulis, David J.; Jenčová, Anna; Pulmannová, Sylvia 2017 Quantum privacy and Schur product channels. Zbl 1387.81118 Levick, Jeremy; Kribs, David W.; Pereira, Rajesh 2017 Topological properties of a curved spacetime. Zbl 1387.58012 Agrawal, Gunjan; Shrivastava, Sampada; Godani, Nisha; Sinha, Soami Pyari 2017 Mean ergodic theorems for Lorentz operators. Zbl 1384.83003 Koumantos, Panagiotis N. 2017 Quantum dot with attached wires: resonant states completeness. Zbl 1384.81149 Popov, I. Y.; Popov, A. I. 2017 Asymptotic analysis of reduced Navier-Stokes equations by homotopy renormalization method. Zbl 1384.35069 Wang, Chunyan; Gao, Wenjie 2017 The relativistic Boltzmann equation on Bianchi type I space time for hard potentials. Zbl 1384.76048 Noutchegueme, Norbert; Takou, Etienne; Tchuengue, E. 
Kamdem 2017 Lie symmetries and conserved quantities of the constraint mechanical systems on time scales. Zbl 1384.70015 Cai, Ping-Ping; Fu, Jing-Li; Guo, Yong-Xin 2017 A fixed energy fixed angle inverse scattering in interior transmission problem. Zbl 1384.74020 Chen, Lung-Hui 2017 A divergence theorem for pseudo-Finsler spaces. Zbl 1387.58013 Minguzzi, E. 2017 An advanced kinetic theory for morphing continuum with inner structures. Zbl 1387.76093 Chen, James 2017 Non-Markovianity of geometrical qudit decoherence. Zbl 1387.81224 Siudzińska, Katarzyna 2017 A regular analogue of the Smilansky model: spectral properties. Zbl 1384.81023 Barseghyan, Diana; Exner, Pavel 2017 Bogolyubov inequality for the ground state and its application to interacting rotor systems. Zbl 1384.82008 Wojtkiewicz, Jacek; Pusz, Wiesław; Stachura, Piotr 2017 Quasi-exact Coulomb dynamics of $$n+1$$ charges $$n-1$$ of which are equal. Zbl 1384.78005 Skrypnik, W. I. 2017 Duality for graded manifolds. Zbl 1384.58006 Grabowski, Janusz; Jóźwikowski, Michał; Rotkiewicz, Mikołaj 2017 On the Lagrangian 1-form structure of the hyperbolic Calogero-Moser system. Zbl 1384.81048 Jairuk, Umpon; Tanasittikosol, Monsit; Yoo-Kong, Sikarin 2017 Costratification in terms of coherent states. Zbl 1384.81052 Fuchs, Erik 2017 Magnetic curves in cosymplectic manifolds. Zbl 1353.53033 Druţă-Romaniuc, Simona-Luiza; Inoguchi, Jun-ichi; Munteanu, Marian Ioan; Nistor, Ana Irina 2016 Controllability of nonlinear fractional delay dynamical systems. Zbl 1378.93022 Nirmala, R. Joice; Balachandran, K.; Rodríguez-Germa, L.; Trujillo, J. J. 2016 A few continuous and discrete dynamical systems. 
Zbl 1351.37254 Zhang, Yufeng; Rui, Wenjuan 2016 ...and 1156 more Documents all top 5 ### Cited by 7,634 Authors 887 Journal of Mathematical Physics 355 Reports on Mathematical Physics 294 International Journal of Theoretical Physics 231 Communications in Mathematical Physics 225 Journal of Geometry and Physics 164 Letters in Mathematical Physics 142 International Journal of Geometric Methods in Modern Physics 130 Linear Algebra and its Applications 118 Journal of Mathematical Analysis and Applications 100 Reviews in Mathematical Physics 87 Theoretical and Mathematical Physics 87 Journal of Functional Analysis 83 Quantum Information Processing 79 Infinite Dimensional Analysis, Quantum Probability and Related Topics 71 Advances in Difference Equations 70 Journal of High Energy Physics 68 General Relativity and Gravitation 67 Applied Mathematics and Computation 67 Foundations of Physics 66 Open Systems & Information Dynamics 55 Chaos, Solitons and Fractals 54 Journal of Statistical Physics 52 Annales Henri Poincaré 51 Journal of Geometric Mechanics 46 New Journal of Physics 45 Journal of Physics A: Mathematical and Theoretical 44 Differential Geometry and its Applications 43 Physica D 42 Nonlinear Dynamics 42 International Journal of Quantum Information 41 Advances in Mathematical Physics 40 Linear and Multilinear Algebra 40 Journal of Differential Equations 40 Regular and Chaotic Dynamics 40 Journal of Nonlinear Mathematical Physics 39 International Journal of Modern Physics A 39 Publications of the Research Institute for Mathematical Sciences, Kyoto University 39 Journal of Mathematical Sciences (New York) 39 Communications in Nonlinear Science and Numerical Simulation 37 Advances in Mathematics 36 Ukrainian Mathematical Journal 35 Computers & Mathematics with Applications 35 Acta Applicandae Mathematicae 34 Mathematica Slovaca 34 Advances in Applied Clifford Algebras 33 Annales de l’Institut Henri Poincaré. 
Physique Théorique 32 Transactions of the American Mathematical Society 31 Physics Letters. A 31 Journal of Algebra 31 Journal of Nonlinear Science 31 Abstract and Applied Analysis 30 Annales de l’Institut Henri Poincaré. Nouvelle Série. Section A. Physique Théorique 30 International Journal of Mathematics 29 Physica A 24 Modern Physics Letters A 24 Proceedings of the American Mathematical Society 24 Applied Mathematics Letters 24 Physical Review A, Third Series 23 Acta Mechanica 23 Classical and Quantum Gravity 22 Archive for Rational Mechanics and Analysis 22 Communications in Algebra 22 Mathematische Annalen 22 SIGMA. Symmetry, Integrability and Geometry: Methods and Applications 21 Integral Equations and Operator Theory 20 Mathematical Methods in the Applied Sciences 20 Nuclear Physics. B 20 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 20 Physical Review Letters 20 $$p$$-Adic Numbers, Ultrametric Analysis, and Applications 19 Mathematical Problems in Engineering 19 Analysis and Mathematical Physics 18 International Journal of Modern Physics B 18 Journal of Soviet Mathematics 18 Mathematical Physics, Analysis and Geometry 17 Functional Analysis and its Applications 17 Journal of Statistical Mechanics: Theory and Experiment 16 Annales de l’Institut Fourier 16 Russian Journal of Mathematical Physics 16 Advances in High Energy Physics 15 Nonlinearity 15 Automatica 15 Algebras and Representation Theory 15 Acta Mathematica Sinica. English Series 15 Lobachevskii Journal of Mathematics 15 Entropy 15 AIMS Mathematics 14 Journal of Computational and Applied Mathematics 14 Mathematische Nachrichten 14 Results in Mathematics 14 Journal of Modern Optics 14 Complex Analysis and Operator Theory 14 Studies in History and Philosophy of Science. Part B. Studies in History and Philosophy of Modern Physics 13 Mathematical Notes 13 Physics Reports 13 Inventiones Mathematicae 13 Mathematische Zeitschrift 13 Positivity 13 Nonlinear Analysis. 
Real World Applications 13 Mediterranean Journal of Mathematics ...and 554 more Journals 2,742 Quantum theory (81-XX) 1,135 Differential geometry (53-XX) 970 Partial differential equations (35-XX) 939 Functional analysis (46-XX) 908 Operator theory (47-XX) 898 Dynamical systems and ergodic theory (37-XX) 814 Mechanics of particles and systems (70-XX) 562 Ordinary differential equations (34-XX) 504 Statistical mechanics, structure of matter (82-XX) 482 Global analysis, analysis on manifolds (58-XX) 476 Nonassociative rings and algebras (17-XX) 445 Relativity and gravitational theory (83-XX) 437 Probability theory and stochastic processes (60-XX) 366 Information and communication theory, circuits (94-XX) 356 Topological groups, Lie groups (22-XX) 343 Linear and multilinear algebra; matrix theory (15-XX) 268 Systems theory; control (93-XX) 217 Real functions (26-XX) 199 Numerical analysis (65-XX) 149 Mathematical logic and foundations (03-XX) 149 Calculus of variations and optimal control; optimization (49-XX) 148 Fluid mechanics (76-XX) 144 Associative rings and algebras (16-XX) 140 Group theory and generalizations (20-XX) 129 Order, lattices, ordered algebraic structures (06-XX) 125 Combinatorics (05-XX) 122 Mechanics of deformable solids (74-XX) 114 Special functions (33-XX) 106 Algebraic geometry (14-XX) 98 Optics, electromagnetic theory (78-XX) 89 Difference and functional equations (39-XX) 89 Statistics (62-XX) 81 Manifolds and cell complexes (57-XX) 81 Computer science (68-XX) 80 Biology and other natural sciences (92-XX) 71 Number theory (11-XX) 63 Classical thermodynamics, heat transfer (80-XX) 62 Abstract harmonic analysis (43-XX) 58 Harmonic analysis on Euclidean spaces (42-XX) 57 Several complex variables and analytic spaces (32-XX) 53 Measure and integration (28-XX) 52 Algebraic topology (55-XX) 49 Integral equations (45-XX) 40 Category theory; homological algebra (18-XX) 40 Functions of a complex variable (30-XX) 40 Integral transforms, operational 
calculus (44-XX) 36 Operations research, mathematical programming (90-XX) 34 General topology (54-XX) 32 Geometry (51-XX) 26 General and overarching topics; collections (00-XX) 23 Field theory and polynomials (12-XX) 23 Convex and discrete geometry (52-XX) 21 History and biography (01-XX) 20 Commutative algebra (13-XX) 20 Astronomy and astrophysics (85-XX) 20 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 19 General algebraic systems (08-XX) 19 Approximations and expansions (41-XX) 11 Potential theory (31-XX) 9 $$K$$-theory (19-XX) 6 Geophysics (86-XX) 3 Sequences, series, summability (40-XX) 2 Mathematics education (97-XX)
https://maharashtraboardsolutions.com/class-7-maths-solutions-chapter-6-practice-set-26/
Maharashtra Board Class 7 Maths Solutions Chapter 6 Indices Practice Set 26

Question 1. Complete the table below:

| Sr. No. | Indices (number in index form) | Base | Index | Multiplication form | Value |
|---------|-------------------------------|------|-------|---------------------|-------|
| i. | $$3^4$$ | 3 | 4 | 3 × 3 × 3 × 3 | 81 |
| ii. | $$16^3$$ | | | | |
| iii. | | (-8) | 2 | | |
| iv. | | | | $$\frac{3}{7} \times \frac{3}{7} \times \frac{3}{7} \times \frac{3}{7}$$ | $$\frac{81}{2401}$$ |
| v. | $$(-13)^4$$ | | | | |

Solution:

| Sr. No. | Indices (number in index form) | Base | Index | Multiplication form | Value |
|---------|-------------------------------|------|-------|---------------------|-------|
| i. | $$3^4$$ | 3 | 4 | 3 × 3 × 3 × 3 | 81 |
| ii. | $$16^3$$ | 16 | 3 | 16 × 16 × 16 | 4096 |
| iii. | $$(-8)^2$$ | -8 | 2 | (-8) × (-8) | 64 |
| iv. | $$\left(\frac{3}{7}\right)^{4}$$ | $$\frac{3}{7}$$ | 4 | $$\frac{3}{7} \times \frac{3}{7} \times \frac{3}{7} \times \frac{3}{7}$$ | $$\frac{81}{2401}$$ |
| v. | $$(-13)^4$$ | -13 | 4 | (-13) × (-13) × (-13) × (-13) | 28561 |

Question 2. Find the value of:
i. $$2^{10}$$
ii. $$5^3$$
iii. $$(-7)^4$$
iv. $$(-6)^3$$
v. $$9^3$$
vi. $$8^1$$
vii. $$\left(\frac{4}{5}\right)^{3}$$
viii. $$\left(-\frac{1}{2}\right)^{4}$$

Solution:
i. $$2^{10}$$ = 2 × 2 × 2 × 2 × 2 × 2 × 2 × 2 × 2 × 2 = 1024
ii. $$5^3$$ = 5 × 5 × 5 = 125
iii. $$(-7)^4$$ = (-7) × (-7) × (-7) × (-7) = 2401
iv. $$(-6)^3$$ = (-6) × (-6) × (-6) = -216
v. $$9^3$$ = 9 × 9 × 9 = 729
vi. $$8^1$$ = 8
vii. $$\left(\frac{4}{5}\right)^{3} = \frac{4}{5} \times \frac{4}{5} \times \frac{4}{5} = \frac{64}{125}$$
viii. $$\left(-\frac{1}{2}\right)^{4} = \left(-\frac{1}{2}\right) \times \left(-\frac{1}{2}\right) \times \left(-\frac{1}{2}\right) \times \left(-\frac{1}{2}\right) = \frac{1}{16}$$
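The tabulated values can be verified with a short script. This is an illustrative sketch, not part of the solutions page: `index_value` is a hypothetical helper name, and `fractions.Fraction` is used so that the rational entries such as $$\frac{81}{2401}$$ stay exact instead of becoming floats.

```python
from fractions import Fraction

def index_value(base, exponent):
    """Evaluate a number given in index form by repeated multiplication."""
    result = Fraction(1)
    for _ in range(exponent):
        result *= Fraction(base)
    return result

# Entries from the practice set
print(index_value(3, 4))                # 81
print(index_value(16, 3))               # 4096
print(index_value(-8, 2))               # 64
print(index_value(Fraction(3, 7), 4))   # 81/2401
print(index_value(-13, 4))              # 28561
```

Repeated multiplication mirrors the "multiplication form" column; for larger exponents Python's built-in `**` operator gives the same results directly.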
https://moodle.student.cnwl.ac.uk/course/info.php?id=4591
### Equality & Diversity The College is situated in one of the most multi-cultural and diverse communities in the country. We see this diversity as our strength and the student population at the college reflects this diversity. Embracing the concepts of diversity and equality is about making an effort to recognise that people in the College are of different cultural, ethnic, racial and gender backgrounds and have different religions, nationalities, ages, physical and mental abilities. The College is committed to providing an environment where all individuals are given the opportunity to achieve their potential, and the Equality & Diversity unit works towards this goal.
http://msm.omsu.ru/EN/jrn45.html
Mathematical structures and modeling. № 1(45). Omsk: OmSU, 2018. 171 p. ISSN (print): 2222-8772. ISSN (online): 2222-8799. For researchers, post-graduate students and senior students. Journal issue in one file.

Fundamental mathematics and physics

This article obtains necessary and sufficient conditions for the central limit theorem to hold for symmetric functions of random variables in which the scale normalization is carried out by regularly varying sequences of arbitrary positive order. These conditions include the so-called minimal conditions of weak dependence.

Keywords: symmetric functions of random variables, central limit theorem, minimal conditions of the weak dependence

A.V. Levichev, A.Yu. Palyanov. Analysis in Spacetime Bundles Based on Groups U(1,1) and U(2): The Case of SU(2,2)-Actions in Their 2- and 4-Covers

In this paper (a direct continuation of A.V. Levichev and A.Yu. Palyanov, Mathematical structures and modeling, 2016, N. 4(40), 24-38) we continue to develop the mathematical tools needed to analyze homogeneous vector bundles on the basis of the parallelizing group U(1,1). In particular, we study the compatibility of the coherence condition (introduced by Paneitz and Segal) with the SU(2,2)-action on finite covers of U(1,1) and of U(2): this compatibility is violated for 2-covers but holds for 4-covers. In terms of the U(2)-approach, we correct the proof of a Paneitz-Segal formula (fundamental for the implementation of any induced representation) given in Corollary 4.1.2 of S. M. Paneitz and I. E. Segal, J. Funct. Anal. 47 (1982), 78-142.
Keywords: parallelizations of space-time bundles, conformal SU(2,2)-actions in U(1,1), U(2), 2- and 4-covers, coherence condition and induced representations The article is devoted to philosophical and mathematical aspects of use of the computer in the formalized mathematical proof. The problem of computer proofs reliability is reflected in a demand of experimental mathematics in scientific knowledge. Keywords: philosophy of mathematical proofs, crisis of recomplexity of proofs, trust to computer proofs Article considers problem-oriented training in the higher mathematics, problematic approach to specific objectives and the idea of methodological pragmatism. The methodological pragmatism is expressed in criticism of the idea of absolute justification of mathematics and the educational argument. Keywords: methodological pragmatism, problem training, philosophy of mathematical education Ethical problems associated with the work of a quantum time machine are investigated. Keywords: quantum time machine, ethical problems, intertemporal transitions Can Mass Be Negative? Overcoming the force of gravity is an important part of space travel and a significant obstacle preventing many seemingly reasonable space travel schemes to become practical. Science fiction writers like to imagine materials that may help to make space travel easier. Negative mass -- supposedly causing anti-gravity -- is one of the popular ideas in this regard. But can mass be negative? In this paper, we show that negative masses are not possible -- their existence would enable us to create energy out of nothing, which contradicts to the energy conservation law. Keywords: negative mass, equivalence principle, anti-gravity, energy conservation law J. McClure, O. Kosheleva, V. 
Kreinovich The Sums of $$\mathbf{m_i\cdot v_i}$$ and $$\mathbf{m_i\cdot v_i^2}$$ Are Preserved, Why not Sum of $$\mathbf{m_i\cdot v_i^3}$$: A Pedagogical Remark Students studying physics sometimes ask a natural question: the momentum -- sum of $$m_i\cdot v_i$$ -- is preserved, the energy -- one half of the sum of $$m_i\cdot v_i^2$$ -- is preserved, why not sum of $$m_i\cdot v_i^3$$? In this paper, we give a simple answer to this question. Keywords: momentum conservation law, energy conservation law, teaching physics Applied Mathematics and Modeling L.A. Volodchenkova,A.K. Guts Modeling the Equilibrium Evolution of the Forest Biocenosis on Solid Felling It is shown that the restoration of the forest after felling can be described as a Nash equilibrium. Keywords: restoration of the forest, evolution, the Nash equilibrium For the class of systems specified in the title we prove necessary and sufficient criteria for exponential stability and dichotomy of solutions of the Cauchy problem in terms of the zeros $$\left(\lambda,\mu\right)$$ of determinant of a matrix pencil -- symbol of the functional-differential operator in the left part of the system. An illustrative example is considered. Keywords: transition to the difference Cauchy problem, characterization of the spectrum of resolving operator Calculation algorithm of a block criterion of the interval forecastability for dynamic indicator based on Tarsitano-Lombardo’s nonparametric correlation coefficient was proposed and tested. It was experimentally shown that the value of the block criterion of the interval forecastability, which was calculated from prehistory of dynamic indicator values, makes it possible to quantitatively estimate the appropriateness of performing interval forecasting of the dynamic indicator based on its statistical properties. The interval forecasting is to determine the interval from two predetermined intervals in which the future value of the indicator will be located. 
The forecasting is based on probability estimates of these events. In this case, the separation division of the intervals is set by the calculation method based on statistical characteristics of the dynamic indicator. The proposed calculation algorithm of the block criterion of the dynamic indicator interval forecastability was implemented in the programming language "R". Keywords: interval forecasting, interval predictability estimating, dynamic indicator, nonparametric correlation, interval forecasting accuracy E.V. Rabinovich, P.I. Vaynmaster, G.S. Shefel Elimination of Data Redundancy in Seismic Monitoring of Hydraulic Fracturing The important stage of hydraulic fracturing monitoring is constructing a graphical model of fracturing area. However, the real data recorded in the process of hydraulic fracturing are characterized by high intensity of seismic activity in the monitoring area, which hinders correct constructing of such model. Therefore, the problem of eliminating the redundancy of seismic data is important. The original algorithm is considered in this article, which allows eliminating redundancy by detection of local hypocenters of seismic activity. The algorithm is based on the methods of cluster and factor analysis. Factor analysis is used to identify similar characteristics and reduce the number of variables used to describe a seismic source. Cluster analysis is used to detect clouds of seismic emission and find their local hypocenters. The results obtained are used to construct a three-dimensional graphic fracture model. Keywords: hydraulic fracturing, fracture, factor and cluster analysis V.R. Shagiev, A.M. Akhtyamov Identification of Pipe Fastening Using the Minimum Number of Natural Frequencies The oscillations of a pipeline with a liquid are considered. 
Previously, it was shown that if the liquid does not flow through the pipeline, then by all its frequencies of flexural vibrations of the pipeline, the type of fixing of the pipeline is uniquely determined up to permutations of fastenings at its ends. The problem of identifying boundary conditions was also solved for nine natural frequencies. In this paper, the number of eigenvalues with which one can uniquely reduce the boundary conditions, up to permutations of the fastenings at its ends, is reduced to five. The number of spectral data was reduced due to the fact that if a linear system of 9 equations was previously solved, then in the present paper a system of five nonlinear equations is solved with respect to four unknown coefficients from the boundary conditions reduced to the canonical form. An example of solving this inverse problem is presented. Two counterexamples are also given in which it is shown that fewer eigenvalues for identification are generally insufficient. In the first counterexample, it is shown that four nonzero eigenfrequencies are still not enough to identify the type of anchorage of the pipeline. The second counterexample shows that in some cases information is needed about whether zero is an eigenvalue. Keywords: boundary conditions, natural frequencies, eigenvalues, clamping, free support, floating fixing, free end, pipeline Transcript of the remote presentation at the conference "Reflective Theater of the Situation Center"\ 29.11.2016. It has been edited by the author. Keywords: subjective psychology, psychophysics, behaviorism, cognitive psychology, reflexive approach The issues of creating intelligent automated training system with the function of reflection are considered. Using the reflexive games theory of V.A. Lefebvre, the structure of the scheduler of training system introduces a reflexive intelligent agent (RIA), which provides trade-offs in the interaction models of a student, teacher, subject tutor and practitioner. 
The basic components of the reflexive perception model of RIA of student and learning situation are described, taking into account the specifics of decision-making for different stages of learning. Keywords: modeling, individualization of learning, automated learning, reflexive governance, reflexive intelligent agent The constructor of subjects of reflexive games is considered as a component of support for reflexive control in the project ”Gen-Guru”. The project is implemented with the use of cross-technologies of situational center. Key ideas of V. A. Lefebvre’s model system are analyzed. List for the options of detail models to improve the accuracy of modeling situations is suggested. We consider the specifics of modeling such subjects as individuals, and robots. Keywords: reflexive control, models of V. A. Lefebvre, details of the models, subjects Computer Sciences E.A. Tyumentsev About Formal Definition of an Abstraction The article proposes a formal definition of abstraction as a mapping of a set of entities into a set of subwords of all words of some alphabet, a hierarchy of abstractions as relations on the set of subwords of a language and interpretation. Also received a number of limitations, which has an abstraction mechanism. Keywords: abstraction, abstarction barrier, inheritance, Grady Booch, Barbara Liskov, Robert Martin In the article ”About The Formalization Of The Software Development Process”, the definitions of the software development process as a process of editing the text of the program, the speed of the process, and a sufficient condition for linear speed are formulated. In determining the speed of the development process, the existence of a limit is required. However, the required limit may not exist. In this paper, we define the speed of software development process, which does not require the existence of a limit, and reformulates a sufficient condition for linear velocity in accordance with the new definition. 
Keywords: formalization, development process, software, productivity T.V. Vahniy, A.K. Guts, I.Yu. Pahotin Determining the Optimal Set of Tools for Computer System Security by Monte-Carlo Method The article presents a software application that allows based on game theory to find the most optimal set of tools for computer information protection by Monte-Carlo method. Keywords: information security, theory of games, hacker attacks, optimal strategy, software product, Monte-Carlo method The article is devoted to the analysis of the themes of the unified state exam in informatics and the construction on its basis of the structural model of the course for the vocational guidance school of the Faculty of Computer Science. The analysis of information resources, which are sources of tasks for preparation, has been carried out. The structural model is supplemented with topics that ensure consistency and coherence of topics. Keywords: structural model, computer science, information and telecommunication technologies, unified state examination, vocational guidance, additional education
http://en.wikipedia.org/wiki/Heim_theory
Heim theory

Heim theory, initially proposed by the late German physicist Burkhard Heim, is an attempt to develop a theory of everything in theoretical physics. Heim attempted to resolve incompatibilities between quantum theory and general relativity. To meet that goal, he developed a mathematical approach based on quantizing spacetime, and proposed the "metron" as a (two-dimensional) quantum of (multidimensional) space. Part of the theory is formulated in terms of difference operators; Heim called this type of mathematical formalism Selector calculus.[1] Heim theory's six-dimensional model was later extended to eight and twelve dimensions, in collaboration with Walter Dröscher.[2][3][4] Dröscher and Jochem Hauser, professor and former head of the Aero- and Aerothermodynamics Department of the European Space Agency,[5] have attempted to apply Heim theory to nonconventional space propulsion and faster-than-light concepts, as well as the origin of dark matter.[4][6]

[Figure: artist's conception of a warp drive design. Heim theory proposes timelike extra dimensions of space to permit faster-than-light travel.]

Heim theory has been criticized because much of the original work and subsequent theories were not initially peer reviewed.[7] Heim eventually published some of his work in 1977,[8] and more recently aspects of Extended Heim Theory have also been submitted to the scientific community's inspection.[9]

Overview

The mathematics behind Heim's theory requires extending spacetime with extra dimensions; various formulations by Heim and his successors involve six, eight, or twelve dimensions. Within the quantum spacetime of Heim theory, elementary particles are represented as "hermetry forms", or multidimensional structures of space. Heim has claimed that his theory yields particle masses directly from fundamental physical constants and that the resulting masses are in agreement with experiment.
This claim was disputed by physicist John Reed in 2006, who subsequently changed his mind after further research and now thinks there is something to Heim's theory.[10] On the Physics Forums, on September 4, 2007, Reed wrote: "I'm more convinced now that there is really something to his theory. I don't understand much of the math yet. It's very complicated and different from anything I'm familiar with. I have a Ph.D. in physics so I know something about physics."

[Figure: the uds baryon decuplet, formed by combinations of three u, d, or s quarks in spin-3/2 baryons.]

Heim theory uses composite particles like baryons to explain quintessence. For Heim, this composite nature was an expression of internal, six-dimensional structure. After his death, others have continued with his multi-dimensional "quantum hyperspace" framework. Most notable are the theoretical generalizations put forth by Walter Dröscher, who worked in collaboration with Heim at some length. Their combined theories are also known as "Heim-Dröscher" theories or Extended Heim theory.[11] There are some differences between the original Heim theory and the extended versions proposed by his successors. For example, in its original version Heim theory has six dimensions, i.e., the four of normal space-time plus two extra timelike dimensions. Dröscher first extended this to eight and claimed that this yields quantum electrodynamics along with the "particle zoo" of mesons and baryons. Later, four more dimensions were used to arrive at the twelve-dimensional version, which involves extra gravitational forces; one of these corresponds to quintessence.[11] Although it purports to unify quantum mechanics and gravitation, the original Heim theory cannot be considered a theory of everything because it does not incorporate all known experimental data. In particular, it gives predictions only for properties of individual particles, without making detailed predictions about how they interact.
The theory also allows for particle states that don't exist in the Standard Model, including a neutral electron and two extra light neutrinos, among many other extra states. Presently, there is no known mechanism for the exclusion of these extra particles, nor an explanation for their non-observation.[12] Although it is claimed that Heim theory can incorporate the modern structure of particle physics,[11] the available results predict the masses of composite hadrons rather than quarks, and do not include gluons or the W and Z bosons,[13] which are experimentally very well established.[14][15][16] In Heim theory, quarks are interpreted as "condensation zones" of the six-dimensional internal structure of the particles,[17] and the gluons are asserted to be associated with one of the "hermetry forms".[18]

History

A small group of physicists is now trying to bring the theory to the attention of the scientific community, by publishing and copy-editing Heim's work, and by checking and expanding the relevant calculations. A series of presentations of Heim theory was made by Hauser, Dröscher and von Ludwiger; papers based on these were published in conference proceedings by the American Institute of Physics in 2005 and 2010 (see the table of contents in [19]).[20] One article won a prize for the best paper received in 2005 by the AIAA Nuclear and Future Flight Technical Committee.[21] Von Ludwiger's presentation was to the First European Workshop on Field Propulsion, January 20-22, 2001, at the University of Sussex. Dröscher claimed to have successfully extended Heim's six-dimensional theory, which had been sufficient for derivation of the mass formula, to an eight-dimensional theory which included particle interactions.
Marc Millis, former head of the Breakthrough Propulsion Physics Program, mentioned the Dröscher/Heim concepts in a 2010 International Astronautical Federation conference contribution as "Not yet rigorously articulated".[22][23][24]

Predictions

Dröscher and Hauser developed the category of non-ordinary matter in 2008.[25] Heim theory predicts a neutral electron,[26] although in a popular talk Heim noted that while a neutral electron is allowed by his theory, it is not required.[27] It would be difficult to reconcile a prediction of a neutral electron with the lack of any observation of the particle.[28][verification needed] According to the Totalitarian principle that every interaction not forbidden must occur, such a light neutral particle should be one of the possible end products of the decay of every known elementary particle,[29] and so theoretically has a small probability of occurring in every experiment involving particle collisions.

Predictions for experimental masses

Particle name    | Theoretical mass (MeV/c²) | Experimental mass (MeV/c²) | Absolute error | Relative error | Standard deviations
Proton           | 938.27959                 | 938.272029 ± 0.000080      | 0.00756        | 0.00000776     | 94.5
Neutron          | 939.57337                 | 939.565360 ± 0.000081      | 0.00801        | 0.00000853     | 98.9
Electron         | 0.51100343                | 0.510998918 ± 0.000000044  | 0.00000451     | 0.00000883     | 102.5
Neutral electron | 0.51617049                | Unobserved                 | N/A            | N/A            | N/A

Particle type | Particle name | Theoretical mass (MeV/c²) | Measured mass (MeV/c²)           | Theoretical mean life (10⁻⁸ s) | Measured mean life (10⁻⁸ s)
Lepton        | Ele-neutrino  | 0.381 × 10⁻⁸              | < 5 × 10⁻⁸                       | Infinite                       | Infinite
Lepton        | Mu-neutrino   | 0.00537                   | < 0.17                           | Infinite                       | Infinite
Lepton        | Tau-neutrino  | 0.010752                  | < 18.2                           | Infinite                       | Infinite
Lepton        | Neutrino 4    | 0.021059                  | Excluded by LEP (unless > 45000) | Infinite                       | N/A
Lepton        | Neutrino 5    | 0.207001                  | Excluded by LEP (unless > 45000) | Infinite                       | N/A
Lepton        | Electron      | 0.51100343                | 0.51099907 ± 0.00000015          | Infinite                       | Infinite
Lepton        | Muon          | 105.65948493              | 105.658389 ± 0.000034            | 219.94237553                   | 219.703 ± 0.004
Baryon        | Proton        | 938.27959246              | 938.27231 ± 0.00026              | Infinite                       | Infinite
Baryon        | Neutron       | 939.57336128              | 939.56563 ± 0.00028              | 917.33526856 × 10⁸             | (886.7 ± 1.9) × 10⁸

The predicted masses were claimed to have been derived by Heim using only four parameters: h (Planck's constant), G (the gravitational constant), and the vacuum permittivity and permeability.

Predictions for a quantum gravity force

In the 1950s, Heim had predicted what he termed a 'contrabary' effect whereby photons, under the influence of a strong magnetic field in a certain configuration, could be transformed into 'gravito-photons', which would provide an artificial gravity force. This idea caused great interest at the time.[30] A series of experiments by Martin Tajmar et al., partly funded by the European Space Agency, may have produced the first evidence of artificial gravity[31] (about 18 orders of magnitude greater than what general relativity predicts). As of late 2006, groups at Berkeley and elsewhere were attempting to reproduce this effect. By applying their 'gravito-photon' theory to bosons, Dröscher and Hauser were able to predict the size and direction of the effect. A further prediction of Heim-Dröscher theory shows how a different arrangement of the experiment by Tajmar et al. could produce a vertical force against the direction of the Earth's gravity. However, in July 2007, a group in Canterbury, New Zealand, reported that they failed to reproduce Tajmar et al.'s effect, concluding that, based on the accuracy of the experiment, any such effect, if it exists, must be 21 times smaller than that predicted by the theory proposed by Tajmar in 2006.[32] Tajmar et al., however, interpreted a trend in the Canterbury data of the order expected, though almost hidden by noise.
They also reported on their own improved laser-gyro measurements of the effect, but this time found 'parity breaking', in that they noted an effect only for clockwise spin, whilst for the Canterbury group there was only an anti-clockwise effect.[33] In the same paper, the Heim-theory explanation of the effect is, for the first time, cited as a possible cause of the artificial gravity. Tajmar has recently found additional support from Gravity Probe B results.[34]

Selector calculus

Selector calculus is a form of calculus employed by Burkhard Heim in formulating his theory of physics. Its differencing operator is intended to be analogous to taking derivatives of functions: $\eth$ (which Heim calls Metrondifferential in German) is defined to be the backward difference operator $\nabla$, i.e. $\eth\psi(n) = \psi(n) - \psi(n-1)$. The summation operator is intended to be analogous to integration; instead of the integral sign, Heim substitutes a bold italicised capital S for the typical integral sign. In this case

$S^{n_2}_{n_1} \phi\, \eth n = S^{n_2}_{n_1} \eth \psi = \sum_{n=n_1}^{n_2} \left( \psi(n) - \psi(n-1) \right) = \psi(n_2) - \psi(n_1 - 1),$

where $\phi\, \eth n = \eth \psi$.[35]

Mainstream response

Physicist Gerhard Bruhn has criticized Heim theory for mathematical errors (confusion between positive definite and indefinite matrices) and for overly restrictive and unphysical assumptions that cause it to apply only to manifolds with a Euclidean geometry ("flat manifolds"), and not to the Minkowski geometry required by general relativity. In addition, he observes that it has not been properly peer-reviewed, and criticizes the later proponents of the theory for misleading claims of academic affiliation.[36]

First and second publication in a peer-reviewed scientific journal

• Burkhard Heim (1977). "Vorschlag eines Weges einer einheitlichen Beschreibung der Elementarteilchen (Recommendation of a Way to a Unified Description of Elementary Particles)".
Zeitschrift für Naturforschung 32a: 233–243. Bibcode:1977ZNatA..32..233H.
• Hauser, J., Dröscher, W. (2010). "Emerging Physics for Novel Field Propulsion Science". Paper presented at the Space, Propulsion & Energy Sciences International Forum SPESIF-2010, Johns Hopkins - APL, Laurel, Maryland, 23–25 February 2010, published by the American Institute of Physics. [5]

References

1. Heim, Burkhard (1998) [1980]. "Chapter 3". Elementarstrukturen der Materie - Einheitliche strukturelle Quantenfeldtheorie der Materie und Gravitation. Resch Verlag. pp. 99–172. ISBN 3-85382-008-5.
2. Burkhard Heim: Elementarstrukturen der Materie 1 - IGW. Igw-resch-verlag.at. Retrieved 2010-10-17.
3. Burkhard Heim: Elementarstrukturen der Materie 1 - IGW. Igw-resch-verlag.at. Retrieved 2010-10-17.
4. List of Publications.
5. AIAA-Forschungspreis für Professor Dr. Jochem Häuser (AIAA research prize for Professor Dr. Jochem Häuser), idw-online.de, 2005-05-11; Prof. Dr. Jochem Häuser, Ostfalia HaW Campus Suderburg.
6.
7. v. Ludwiger, L. (2001, January 28). Zum Tode des Physikers Burkhard Heim (On the death of the physicist Burkhard Heim). Feldkirchen-Westerham. [1]
8. Burkhard Heim (1977). "Vorschlag eines Weges einer einheitlichen Beschreibung der Elementarteilchen (Recommendation of a Way to a Unified Description of Elementary Particles)". Zeitschrift für Naturforschung 32a: 233–243. Bibcode:1977ZNatA..32..233H.
9. Hauser, J., Dröscher, W. (2010). "Emerging Physics for Novel Field Propulsion Science". Paper presented at the Space, Propulsion & Energy Sciences International Forum SPESIF-2010, Johns Hopkins - APL, Laurel, Maryland, 23–25 February 2010, published by the American Institute of Physics. [2]
10. J. Reed (2006, 2007); quoted in "Rise and Fall of the Heim Theory". Retrieved 16 June 2007.
11. igw-resch-verlag.at/resch_verlag/burkhard_heim/band3.html
12. Introduction to Heim's Mass Formula. Selected Results.
13. Walter Dröscher, Jochem Hauser. Coupled Gravitational Fields. A New Paradigm for Propulsion Science.
14. R. Brandelik et al. (TASSO collaboration) (1979). "Evidence for Planar Events in e+e- Annihilation at High Energies". Phys. Lett. B 86 (2): 243–249. Bibcode:1979PhLB...86..243B. doi:10.1016/0370-2693(79)90830-X.
15. G. Arnison et al. (UA1 collaboration) (1983). "Experimental Observation of Isolated Large Transverse Energy Electrons with Associated Missing Energy at $\sqrt{s}$ = 540 GeV". Phys. Lett. B 122: 103–116. Bibcode:1983PhLB..122..103A. doi:10.1016/0370-2693(83)91177-2.
16. S. Eidelman et al. (2004). "Review of Particle Properties". Phys. Lett. B 592: 1. arXiv:astro-ph/0406663. Bibcode:2004PhLB..592....1P. doi:10.1016/j.physletb.2004.06.001.
17. Burkhard Heim. "IGW Research" (in German).
18. Walter Dröscher and Jochem Hauser. Extended Heim Theory, Physics of Spacetime, and Field Propulsion, 10 April 2006.
19. Hauser, Dröscher, and von Ludwiger. "Heim theory presentation". American Institute of Physics journal. Retrieved 6 November 2010.
20. Walter Droescher @ Astrophysics Data System, adsabs.harvard.edu.
21. [3] (PDF) "Guidelines for a Space Propulsion Device Based on Heim's Quantum Theory", Walter Dröscher et al., 11–14 July 2004.
22. Bio: Marc Millis, grc.nasa.gov; pdf p. 5, Table 1.
23. Marc G. Millis, Erik W. Davis. Frontiers of Propulsion Science. American Inst. of Aeronautics and Astronautics, Reston 2009, ISBN 978-1-56347-956-4, pp. 218–221.
24. Marc Millis on Hyperspace Propulsion & Hyperdrive to Epsilon Eridani?, centauri-dreams.org, January 2006.
25. Hauser, J., private communication to H. Deasy, July 2008.
26. T. Auerbach and I. von Ludwiger, "Heim's Theory of Elementary Particle Structures", Journal of Scientific Exploration, Vol. 6, No. 3, pp. 217–231, 1992; available here [4]
27. See end of chapter 9.2 (p. 73) of Heim's MBB presentation (1976).
28. Abraham Seiden, Particle Physics: A Comprehensive Introduction, Addison Wesley (2004); ISBN 978-0-8053-8736-0.
29. B. R. Martin and G. Shaw, Particle Physics, Wiley (2nd edition, 1997), ISBN 978-0-471-97285-3.
30. Testing Heim's theories, New Scientist.
31. esa.int
32. R.D. Graham, R.B. Hurst, R.J. Thirkettle, C.H. Rowe, P.H. Butler (2008). "Experiment to Detect Frame Dragging in a Lead Superconductor". Physica C: Superconductivity 468 (5): 383. Bibcode:2008PhyC..468..383G. doi:10.1016/j.physc.2007.11.011.
33. Search for Frame-Dragging-Like Signals Close to Spinning Superconductors.
34. [0707.3806] Search for Frame-Dragging-Like Signals Close to Spinning Superconductors. Arxiv.org. Retrieved 2010-10-17.
35. Burkhard Heim, Elementarstrukturen der Materie - Einheitliche strukturelle Quantenfeldtheorie der Materie und Gravitation, Resch Verlag (1980, 1998), ISBN 3-85382-008-5. Selector calculus is covered in chapter 3 (pp. 99–172).
36. Remarks on Burkhard Heim's IGW Successors J. Hauser and W. Droescher and their Theory, Gerhard W. Bruhn, Darmstadt University of Technology, March 29, 2006.
37. (Elementary structures of matter: Unified structural quantum field theory of matter and gravitation, Volume 1)
38. (Elementary structures of matter: Unified structural quantum field theory of matter and gravitation, Volume 2)
39. (Structures of the physical world and its immaterial aspect)
40. Burkhard Heim: Elementarstrukturen der Materie 1 - IGW. Igw-resch-verlag.at. Retrieved 2010-10-17.
41. (Introduction to Burkhard Heim: Unified description of the world)
42. Burkhard Heim: Einführung in Burkhard Heim - IGW. Igw-resch-verlag.at. Retrieved 2010-10-17.
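The summation formula in the Selector calculus section above is an ordinary telescoping sum, which can be checked numerically. The quadratic sequence psi below is an arbitrary example, not anything from Heim's work:

```ruby
# Check the telescoping identity behind Heim's summation operator:
# summing the backward differences of psi from n1 to n2 recovers
# psi(n2) - psi(n1 - 1). The sequence psi is an arbitrary example.
psi = ->(n) { n * n + 3 * n }                  # example sequence
eth = ->(f, n) { f.call(n) - f.call(n - 1) }   # backward difference (Heim's eth)
n1, n2 = 2, 10
lhs = (n1..n2).sum { |n| eth.call(psi, n) }
rhs = psi.call(n2) - psi.call(n1 - 1)
puts lhs == rhs   # prints true (both equal 126)
```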
https://hal.inria.fr/hal-01428963v2
# Areas of Attention for Image Captioning

Thoth - Apprentissage de modèles à partir de données massives, Inria Grenoble - Rhône-Alpes, LJK - Laboratoire Jean Kuntzmann

Abstract: We propose "Areas of Attention", a novel attention-based model for automatic image captioning. Our approach models the dependencies between image regions, caption words, and the state of an RNN language model, using three pairwise interactions. In contrast to previous attention-based approaches that associate image regions only to the RNN state, our method allows a direct association between caption words and image regions. During training these associations are inferred from image-level captions, akin to weakly-supervised object detector training. These associations help to improve captioning by localizing the corresponding regions during testing. We also propose and compare different ways of generating attention areas: CNN activation grids, object proposals, and spatial transformer nets applied in a convolutional fashion. Spatial transformers give the best results: they allow for image-specific attention areas, and can be trained jointly with the rest of the network. Our attention mechanism and spatial transformer attention areas together yield state-of-the-art results on the MSCOCO dataset.

Document type: conference paper. ICCV - International Conference on Computer Vision, Oct 2017, Venice, Italy. IEEE, pp. 1251-1259, 2017, doi:10.1109/ICCV.2017.140. Cited literature: 48 references. https://hal.inria.fr/hal-01428963
Contributor: Thoth Team. Submitted: Friday, 25 August 2017 - 16:06:00. Last modified: Friday, 7 September 2018 - 13:56:03.

### Files

pedersoli17iccv.pdf - files produced by the author(s)

### Citation

Marco Pedersoli, Thomas Lucas, Cordelia Schmid, Jakob Verbeek. Areas of Attention for Image Captioning. ICCV - International Conference on Computer Vision, Oct 2017, Venice, Italy. IEEE, pp. 1251-1259, 2017, doi:10.1109/ICCV.2017.140.
〈hal-01428963v2〉
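As a rough illustration of the abstract's "three pairwise interactions" between image regions, caption words, and the RNN state, one can imagine scoring each region by combining pairwise similarity terms and normalizing with a softmax. The vectors and the additive scoring form below are illustrative assumptions, not the paper's actual model or code:

```ruby
# Hypothetical sketch: attend over image regions using pairwise terms.
# score(region) = <region, word> + <region, state>; the word-state term
# is shared by all regions, so it would cancel in the softmax anyway.
def dot(a, b)
  a.zip(b).sum { |x, y| x * y }
end

def softmax(xs)
  m = xs.max
  exps = xs.map { |x| Math.exp(x - m) }  # shift for numerical stability
  s = exps.sum
  exps.map { |e| e / s }
end

regions = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]  # made-up region features
word    = [0.9, 0.1]                             # made-up caption-word embedding
state   = [0.2, 0.8]                             # made-up RNN state

scores    = regions.map { |r| dot(r, word) + dot(r, state) }
attention = softmax(scores)  # one weight per region, sums to 1
```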
http://stats.stackexchange.com/questions/7869/what-statistical-technique-would-be-appropriate-for-optimising-the-weights
# What statistical technique would be appropriate for optimising the weights?

Background: I have the following data (an example):

:heading1 => { :weight => 25, :views => 0, :conversions => 0}
:heading2 => { :weight => 25, :views => 0, :conversions => 0}
:heading3 => { :weight => 25, :views => 0, :conversions => 0}
:heading4 => { :weight => 25, :views => 0, :conversions => 0}
}
total_views = 0

I have to serve these headings based on their weights. Every time a heading is served, its views count is incremented by one and total_views is also incremented. And whenever a user clicks on a served heading, its conversions count is incremented by one. I've written a program (in Ruby) which performs this well.

Question: I need to auto-optimise the best-converting heading. Consider the following views and conversions for all headings:

heading1: views => 50, conversions => 30
heading2: views => 50, conversions => 10
heading3: views => 50, conversions => 15
heading4: views => 50, conversions => 5

I need to automatically increase the weights of the heading(s) which are converting more, and vice versa. The sum of the weights will always be 100. Is there any standard algorithm/formula/technique to do this? There might be some other parameters that need to be predefined before making these calculations, but I have not been able to work them out.

- To be honest, this may be something more appropriate for StackOverflow. –  Christopher Aden Mar 4 '11 at 7:29 Christopher, thanks for your comments. Now, I have posted it there too. I posted here because there may be some statistics technique to achieve this. –  Imran Mar 4 '11 at 7:31 @Imran, I edited your question to make it more appropriate for this site. My question is: what is your optimisation criterion? What do you expect from this optimisation? Are there any long-term criteria for this optimisation? –  mpiktas Mar 4 '11 at 7:52 @mpiktas, thanks for your kindness. 1. Optimisation criterion?
After x views (or conversions), check which item has the highest conversion rate and increase its weight. This will consequently decrease the weights of the low-converting headings to keep the total weight at 100. How much should be increased, and relative to what, I am not sure about yet; I need help on this too. 2. Expectation: the better-converting headings should be served more often, and vice versa. Long-term criteria: sorry, not sure about that either. I'd really appreciate it if you could help with the info provided. I may explain more as we move along –  Imran Mar 4 '11 at 9:22 I need some point to start with. We can fine-tune it after we have something working. I hope you understand. Thanks again. –  Imran Mar 4 '11 at 9:37 Your problem is a standard problem in the area of Reinforcement Learning and can be reformulated as an n-armed bandit problem, where you have to find the best arms to pull to optimise the overall gain. In this case one arm = one header, and the gain is 1 on a conversion and 0 otherwise. I really recommend reading the book by Sutton and Barto, especially chapter 2, where the basic technique to solve the n-armed-bandit problem is explained in detail (how to select, how to increase weights, how to cope with gains changing over time, etc.). It is truly a great (and not unnecessarily complicated) read. Edit: Here are some detailed explanations as an outline of how RL works in the case of the OP. Some parts are rather similar to Matt's answer, but not all. In Reinforcement Learning (RL) we differentiate between "exploration" and "exploitation". During exploration we search the space of available actions (aka headings to show) to find a better or the best one, while in exploitation we just use the actions/headings which we already know to be good. To measure how good an action is, we record the reward the action gained when used and use this value to estimate the reward of further usage of this action.
In our case the expected mean reward of an action/heading is simply its conversion rate. Some definitions:

$h_i$ = heading i
$Q(h_i)$ = expected reward of heading i = $conversions_i / views_i$

How to select an action or heading? One option is to greedily select the action with the highest conversion-rate estimate. However, we would not be able to find new or better solutions this way (no exploration at all). What we actually want is a balance between exploration and exploitation, so we use a procedure called softmax selection:

$weight(h_i)=\frac{\exp(Q(h_i)/\tau)}{\sum_j \exp(Q(h_j)/\tau)}$ (see softmax selection in the book by Sutton and Barto)

Calculate these weights and then select an action randomly with respect to them (see e.g. the function sample() in R).

By setting the parameter $\tau$, one can control the balance between exploration and exploitation. As $\tau$ approaches 0 we get back to pure greedy action selection; as $\tau$ becomes large enough, all weights become equal and we are reduced to pure random sampling (no exploitation).

How to update the rewards? One can use this formula:

$Q_{k+1}(h)=Q_k(h) + \alpha\,(r_{k+1}-Q_k(h))$ (see the formula in the book of Sutton)

where k denotes the k-th time the heading h has been shown, and $r_k = 1$ if a conversion happened the k-th time header h was shown, $r_k = 0$ otherwise.

For the step size $\alpha$ you can e.g. choose:

• $1/k$, which in your case should lead to convergence (more recent rewards / conversions are weighted less);
• a constant, which will result in no convergence (all rewards are weighted equally) but allows the system to adapt to changes, e.g. when the notion of which header is best changes over time.

Final remark: How to set the parameters $\tau$ and $\alpha$ is problem dependent. I suggest running some simulations to see which values work best for you. For this, and in general, I can only recommend reading chapter 2 of the linked book. It is worth it.
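To make the two formulas above concrete, here is a small Ruby sketch (Ruby since the OP's program is in Ruby; the function names `softmax_weights` and `update_q` are mine, not from the book):

```ruby
# Softmax action selection and incremental value updates for the
# n-armed-bandit view of heading serving (illustrative sketch).

def softmax_weights(q_values, tau)
  # weight(h_i) = exp(Q(h_i)/tau) / sum_j exp(Q(h_j)/tau), scaled to sum to 100
  exps = q_values.map { |q| Math.exp(q / tau) }
  total = exps.sum
  exps.map { |e| 100.0 * e / total }
end

def update_q(q, reward, alpha)
  # Q_{k+1}(h) = Q_k(h) + alpha * (r_{k+1} - Q_k(h))
  q + alpha * (reward - q)
end

# Conversion-rate estimates from the question's example (conversions / views):
q = [30.0 / 50, 10.0 / 50, 15.0 / 50, 5.0 / 50]
weights = softmax_weights(q, 0.2)
# heading1 gets the largest weight; raising tau flattens the weights towards
# pure exploration, lowering it approaches greedy selection.
```

Serving then amounts to sampling a heading index with probability proportional to these weights, and calling `update_q` with reward 1 or 0 after each serve.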
Another remark from practical experience with this formula (if the headers to show change constantly): do not always use the softmax-selection formula. I rather suggest choosing the best header p% of the time a header is to be shown, and selecting another header the remaining (100-p)% of the time, to find a possibly better one. On the other hand, if your goal is to find the best header among a fixed number of headers, it is better to always use softmax selection and set $\alpha=1/k$ and $\tau$ close to zero. - @steffen +1, for the links. –  mpiktas Mar 4 '11 at 13:30 thanks steffen. I'm looking into it and trying to find whether there is some Ruby implementation of the n-armed bandit problem. Thanks again. –  Imran Mar 7 '11 at 11:32 @Imran: 1. If this is "the" answer, then please accept it by clicking the green check mark ;) 2. There is no need to look for such an implementation. It is really easy, I am sure you can do it yourself! –  steffen Mar 7 '11 at 11:57 @Imran: I updated the answer accordingly. Hope it is enough to get started with coding. –  steffen Mar 10 '11 at 7:32 @Imran: Sorry to hear that. All the best for you and your family. –  steffen Mar 11 '11 at 21:23 So, if I understand correctly, you are trying to arrive at a weighting scheme that maximises the total number of conversions. I am assuming that you are going to reset the conversion values for each heading after each weight change. A simple solution could then be something like this (I'm not familiar with your syntax, so hopefully you understand this pseudocode):

heading1: {ConvPView = conversions / views}
heading2: {ConvPView = conversions / views}
heading3: {ConvPView = conversions / views}
heading4: {ConvPView = conversions / views}
AvgeConvPView = (h1.cpv + h2.cpv + h3.cpv + h4.cpv) / 4
heading1: {weight = weight + ((ConvPView - AvgeConvPView) * weightChangeConstant)}
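A runnable Ruby rendering of that pseudocode (my interpretation, not code from the answer: shift each weight by its deviation from the average conversion rate, then renormalise so the weights still sum to 100):

```ruby
# Adjust heading weights towards the better-converting headings
# (illustrative rendering of the pseudocode above).

def adjust_weights(headings, weight_change_constant)
  rates = headings.map { |h| h[:conversions].to_f / h[:views] }
  avg_rate = rates.sum / rates.size
  raw = headings.zip(rates).map do |h, r|
    # clamp at zero so a very poor heading cannot go negative
    [h[:weight] + (r - avg_rate) * weight_change_constant, 0.0].max
  end
  total = raw.sum
  raw.map { |w| 100.0 * w / total } # renormalise so the weights sum to 100
end

headings = [
  { weight: 25, views: 50, conversions: 30 },
  { weight: 25, views: 50, conversions: 10 },
  { weight: 25, views: 50, conversions: 15 },
  { weight: 25, views: 50, conversions: 5 },
]
new_weights = adjust_weights(headings, 50)
# heading1's weight rises to ~40 and heading4's falls to ~15 with these inputs.
```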
https://journal.riverpublishers.com/index.php/JRSS/article/download/2736/2081?inline=1
Estimation of $R=Pr(Y>X)$ for a Family of Lifetime Distributions by the Transformation Method Surinder Kumar and Prem Lata Gautam* Department of Statistics, School for Physical and Decision Sciences, Babasaheb Bhimrao Ambedkar University, Lucknow, India E-mail: surinderntls@gmail.com; premgautm61@gmail.com *Corresponding Author Received 15 March 2021; Accepted 15 July 2021; Publication 23 August 2021 ## Abstract For the family of lifetime distributions proposed by Chaturvedi and Singh (2008) [6], the problems of estimating $R(t)=P(X>t)$, defined as the probability that a system survives until time t, and $R=P(Y>X)$, which represents the stress-strength model, are revisited. The transformation method is used to obtain the maximum likelihood estimators (MLEs), uniformly minimum variance unbiased estimators (UMVUEs), interval estimators and Bayes estimators for the considered model. Keywords: Family of lifetime distributions, uniformly minimum variance unbiased estimator, maximum likelihood estimator, confidence interval, Bayes estimator. ## 1 Introduction The reliability of an item or system can be defined as a function of time t, i.e. $R(t)=P(X>t)$, the probability of failure-free operation of the item/component until time t. Another important measure of reliability, under the stress-strength model, is $R=Pr(Y>X)$, which represents the reliability of an item or system with random strength Y subjected to random stress X. A lot of work has been done in the literature on the point estimation of R. For a brief review of the literature one may refer to Pugh (1963) [12], Basu (1964) [3], Church and Harris (1970) [8], Enis and Geisser (1971) [10], Downton (1973) [9], Tong (1974) [19], Kelly et al. (1976) [11], Sinha and Kale (1980) [15], Sathe and Shah (1981) [14], Chao (1982) [4], Awad and Gharraf (1986) [2], Chaturvedi and Surinder (1999) [7], Rezaei et al.
(2010) [13], Chaturvedi and Pathak (2012) [5], Surinder and Mayank (2014) [18], Surinder and Mukesh (2015) [16] and Surinder and Mukesh (2016) [17].

## 2 The Family of Lifetime Distributions

Chaturvedi and Singh (2008) [6] derived a family of lifetime distributions with the help of the Weibull distribution. Let the random variable X follow this family of lifetime distributions; then the pdf is

$f(x;a,\lambda,\bar{\theta})=\frac{G'(x;a,\bar{\theta})}{\lambda}\exp\left(-\frac{G(x;a,\bar{\theta})}{\lambda}\right);\quad x>a\ge 0,\ \lambda>0$ (1)

Here, $G(x;a,\bar{\theta})$ is a function of $x$ and may also depend on the parameters $a$ and $\bar{\theta}$; $\bar{\theta}$ may be vector valued. $G'(x;a,\bar{\theta})$ represents the derivative of $G(x;a,\bar{\theta})$ with respect to $x$. The model (1) covers the following lifetime distributions as specific cases:

1. For $G(x;a,\bar{\theta})=x$ and $a=0$, we get the one-parameter exponential distribution.
2. For $G(x;a,\bar{\theta})=x^{p}$ $(p>0)$ and $a=0$, we get the Weibull distribution.
3. For $G(x;a,\bar{\theta})=x^{2}$ and $a=0$, we get the Rayleigh distribution.
4. For $G(x;a,\bar{\theta})=\log(1+x^{b})$, $b>0$, and $a=0$, we get the Burr distribution.
5. For $G(x;a,\bar{\theta})=\log(x/a)$, we get the Pareto distribution.
6. For $G(x;a,\bar{\theta})=\log(1+x/\nu)$, $\nu>0$, and $a=0$, we get the Lomax distribution.
7. For $G(x;a,\bar{\theta})=\log(1+x^{b}/\nu)$, $b>0$, $\nu>0$, and $a=0$, we get the Burr distribution with scale parameter $\nu(>0)$.
8. For $G(x;a,\bar{\theta})=x^{\gamma}\exp(\nu x)$, $\gamma>0$, $\nu>0$, and $a=0$, we get the modified Weibull distribution.
9. For $G(x;a,\bar{\theta})=(x-a)+\frac{\nu}{\lambda}\log\left(\frac{x+\nu}{a+\lambda}\right)$, $\nu>0$, $\lambda>0$, we get the generalised Pareto distribution.
10. For $G(x;a,\bar{\theta})=bx+\frac{\theta}{2}x^{2}$, $\theta>0$, $b>0$, and $a=0$, we get the linear exponential distribution.
11. For $G(x;a,\bar{\theta})=(1+x^{b})^{\theta}-1$, $\theta>0$, $b>0$, and $a=0$, we get the generalised power Weibull distribution.
12. For $G(x;a,\bar{\theta})=\frac{\beta}{b}(e^{bx}-1)$, $\beta>0$, $b>0$, and $a=0$, we get the Gompertz distribution.
13. For $G(x;a,\bar{\theta})=e^{x^{b}}-1$, $b>0$, and $a=0$, we get the Chen distribution.
14. For $G(x;a,\bar{\theta})=(x-a)$, we get the two-parameter exponential distribution.
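The unifying idea of the family is that $G(X)$ is exponential with mean $\lambda$. As a quick numerical illustration (a sketch I added, not from the paper), take the Weibull member (case 2, $G(x)=x^p$) and check that $T=G(X)=X^p$ has mean $\lambda$:

```ruby
# Check that G(X) = X^p is exponential with mean lambda when X follows
# the Weibull member of the family, using inverse-CDF sampling from (1).

def weibull_sample(p, lambda_, rng)
  # F(x) = 1 - exp(-x^p / lambda)  =>  x = (-lambda * log(1 - U))^(1/p)
  (-lambda_ * Math.log(1.0 - rng.rand))**(1.0 / p)
end

rng = Random.new(7)
p_shape, lambda_ = 2.0, 3.0
t = Array.new(100_000) { weibull_sample(p_shape, lambda_, rng)**p_shape }
t_mean = t.sum / t.size # should be close to lambda = 3
```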
## 3 MLE of $R=Pr(Y>X)$

In the following theorem, the MLE of R is derived through the transformation method.

Theorem 1: The MLE of $R$ is

$\ddot{R}=\frac{\bar{T}(y)}{\bar{T}(y)+\bar{T}(x)}$ (2)

where $\bar{T}(y)=\frac{1}{n_2}\sum_{j=1}^{n_2}H(y_j;a_2,\bar{\theta}_2)$ and $\bar{T}(x)=\frac{1}{n_1}\sum_{i=1}^{n_1}G(x_i;a_1,\bar{\theta}_1)$.

Proof: Let the random variable X follow the family of lifetime distributions with pdf

$f(x;a_1,\lambda_1,\bar{\theta}_1)=\frac{G'(x;a_1,\bar{\theta}_1)}{\lambda_1}\exp\left(-\frac{G(x;a_1,\bar{\theta}_1)}{\lambda_1}\right);\quad x>a_1\ge 0,\ \lambda_1>0$ (3)

For (3), consider the transformation $G(x;a_1,\bar{\theta}_1)=t$. The distribution becomes

$f(t;\alpha)=\frac{1}{\alpha}\exp\left(-\frac{t}{\alpha}\right)$ (4)

where $\alpha=\lambda_1$. Now let Y be a random variable with pdf

$f(y;a_2,\lambda_2,\bar{\theta}_2)=\frac{H'(y;a_2,\bar{\theta}_2)}{\lambda_2}\exp\left(-\frac{H(y;a_2,\bar{\theta}_2)}{\lambda_2}\right);\quad y>a_2\ge 0,\ \lambda_2>0$ (5)

Similarly, taking the transformation $z=H(y;a_2,\bar{\theta}_2)$ and $\beta=\lambda_2$, we get

$f(z;\beta)=\frac{1}{\beta}\exp\left(-\frac{z}{\beta}\right)$ (6)

Let t and z be two independent random variables following the exponential distributions (4) and (6) with parameters $\alpha$ and $\beta$, respectively. The reliability model is

$R=Pr(z>t)=\int_{z=0}^{\infty}\int_{t=0}^{z}f(t;\alpha)f(z;\beta)\,dt\,dz=\int_{z=0}^{\infty}\left[1-\exp\left(-\frac{z}{\alpha}\right)\right]\frac{1}{\beta}\exp\left(-\frac{z}{\beta}\right)dz$

After solving, we get

$R=\frac{\beta}{\beta+\alpha}$ (7)

Replacing $\alpha$ and $\beta$ by their MLEs, $\ddot{\alpha}=\bar{t}$ and $\ddot{\beta}=\bar{z}$, the MLE of $R=Pr(z>t)$ is $\frac{\bar{z}}{\bar{z}+\bar{t}}$, where $\bar{t}=\frac{1}{n_1}\sum_{i=1}^{n_1}t_i$ and $\bar{z}=\frac{1}{n_2}\sum_{j=1}^{n_2}z_j$. Finally, the MLE of $R$ is

$\ddot{R}=\frac{\bar{T}(y)}{\bar{T}(y)+\bar{T}(x)}$

where $\bar{T}(y)=\frac{1}{n_2}\sum_{j=1}^{n_2}H(y_j;a_2,\bar{\theta}_2)$ and $\bar{T}(x)=\frac{1}{n_1}\sum_{i=1}^{n_1}G(x_i;a_1,\bar{\theta}_1)$. Hence, the theorem follows.

1.
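The exponential reduction in the proof of Theorem 1 can be checked numerically. The sketch below (my addition, not from the paper) simulates exponential t and z and compares the MLE $\bar{z}/(\bar{z}+\bar{t})$ and an empirical estimate of $Pr(z>t)$ with $\beta/(\beta+\alpha)$:

```ruby
# Monte Carlo check of R = Pr(z > t) = beta / (beta + alpha) and its MLE.

def exp_sample(mean, rng)
  # inverse-CDF sample from an exponential distribution with the given mean
  -mean * Math.log(1.0 - rng.rand)
end

rng = Random.new(42)
alpha, beta, n = 2.0, 6.0, 200_000
t = Array.new(n) { exp_sample(alpha, rng) }
z = Array.new(n) { exp_sample(beta, rng) }

r_true      = beta / (beta + alpha)                    # 0.75 for these values
r_mle       = (z.sum / n) / (z.sum / n + t.sum / n)    # z_bar / (z_bar + t_bar)
r_empirical = t.zip(z).count { |ti, zi| zi > ti } / n.to_f
```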
Implication Here, we consider the different cases for the distributions to obtain the MLE of $R=Pr(Y>X)$ given in (2) Values of parameters for The MLE of $R=Pr(Y>X)$ Distributions Values of Parameter The one-parameter exponential distribution $T¯⁢(y)=1n2⁢∑j=1n2yj$ and $T¯⁢(x)=1n1⁢∑i=1n1xi$ Weibull distribution $T¯⁢(y)=1n2⁢∑j=1n2yjp$ and $T¯⁢(x)=1n1⁢∑i=1n1xip$ for $p>0$ Rayleigh distribution $T¯⁢(y)=1n2⁢∑j=1n2yj2$ and $T¯⁢(x)=1n1⁢∑i=1n1xi2$ Burr distribution $T¯⁢(y)=1n2⁢∑j=1n2l⁢o⁢g⁢(1+yjb)$ and $T¯⁢(x)=1n1⁢∑i=1n1l⁢o⁢g⁢(1+xib)$ for $b>0$ Pareto distribution $T¯⁢(y)=1n2⁢∑j=1n2l⁢o⁢g⁢(yja2)$ and $T¯⁢(x)=1n1⁢∑i=1n1l⁢o⁢g⁢(xia1)$ Lomax distribution $T¯⁢(y)=1n2⁢∑j=1n2l⁢o⁢g⁢(1+yjν)$ and $T¯⁢(x)=1n1⁢∑i=1n1l⁢o⁢g⁢(1+xiν)$ for $ν>0$ Burr distribution with scale parameter $ν(>0)$ $T¯⁢(y)=1n2⁢∑j=1n2l⁢o⁢g⁢(1+yjbν)$ and $T¯⁢(x)=1n1⁢∑i=1n1l⁢o⁢g⁢(1+xibν)$ for $b>0,ν>0$ The modified Weibull distribution $T¯⁢(y)=1n2⁢∑j=1n2yjγ⁢e⁢x⁢p⁢(ν⁢yj)$ and $T¯⁢(x)=1n1⁢∑i=1n1xiγ⁢e⁢x⁢p⁢(ν⁢xi)$ for $γ>0,ν>0$ The generalised $T¯⁢(y)=1n2⁢∑j=1n2$ $[(yj-a2)+νλ2⁢l⁢o⁢g⁢(yj+νa2+λ2)]$ Pareto distribution $T¯⁢(x)=1n1⁢∑i=1n1$ $[(xi-a1)+νλ1⁢l⁢o⁢g⁢(xi+νa1+λ1)]$ for $λ1,λ2>0$,  $ν>0$ The linear exponential distribution $T¯⁢(y)=1n2⁢∑j=1n2[b⁢yj+θ22⁢yj2]$ $T¯⁢(x)=1n1⁢∑i=1n1[b⁢xi+θ12⁢xi2]$ for $θ1,θ2>0$ and $b>0$ The generalised power Weibull distribution $T¯⁢(y)=1n2⁢∑j=1n2[(1+yjb)θ2]-1$ and $T¯⁢(x)=1n1⁢∑i=1n1[(1+xib)θ1]-1$ $θ1,θ2>0$ and $b>0$ The Gompertz distribution $T¯⁢(y)=1n2⁢βb⁢(eb⁢Πj=1n2⁢yj-1)$ and $T¯⁢(y)=1n1⁢βb⁢(eb⁢Πi=1n1⁢xi-1)$ $β,b>0$ Chen distribution $T¯⁢(y)=1n2⁢∑j=1n2(eyjb-1)$ and $T¯⁢(x)=1n1⁢∑i=1n1(exib-1)$ $b>0$ The two-parameter exponential distribution $T¯⁢(y)=1n2⁢∑j=1n2(yj-a2)$ and $T¯⁢(x)=1n1⁢∑i=1n1(xi-a1)$ ## 4 UMVUE of $R=Pr(Y>X)$ In the following theorem, UMVUE of R is derived through the transformation method Theorem 2: The UMVUE of $R$ is $R´={∑i=0n2-1(-1)i⁢Γ⁢(n1)⁢Γ⁢(n2)Γ⁢(n2-i)⁢Γ⁢(n1+i)⁢(T⁢(x)T⁢(y))i;T⁢(x) (8) where, $T⁢(y)=∑i=1n2H⁢(yj;a2,θ2)$ and $T⁢(x)=∑i=1n1G⁢(xi;a1,θ1)$. 
Proof: Considering the transformations $G(x;a_1,\bar{\theta}_1)=t$ and $z=H(y;a_2,\bar{\theta}_2)$, we have the transformed equations (4) and (6). To estimate the reliability measure $Pr(z>t)$, we require the UMVUEs of $f(t;\alpha)$ and $f(z;\beta)$, say $\acute{f}(t;\alpha)$ and $\acute{f}(z;\beta)$, respectively, which are given by

$\acute{f}(t;\alpha)=\frac{(n_1-1)\,G'(t;a_1,\bar{\theta}_1)}{n_1\bar{t}}\left[1-\frac{G(t;a_1,\bar{\theta}_1)}{n_1\bar{t}}\right]^{n_1-2};\quad G(t;a_1,\bar{\theta}_1)<n_1\bar{t}$ (9)

and

$\acute{f}(z;\beta)=\frac{(n_2-1)\,H'(z;a_2,\bar{\theta}_2)}{n_2\bar{z}}\left[1-\frac{H(z;a_2,\bar{\theta}_2)}{n_2\bar{z}}\right]^{n_2-2};\quad H(z;a_2,\bar{\theta}_2)<n_2\bar{z}$ (10)

Now, to obtain the UMVUE of R we have

$\acute{R}=\int_{t=0}^{\infty}\int_{z=t}^{\infty}\acute{f}(t;\alpha)\,\acute{f}(z;\beta)\,dz\,dt$

Using (9) and (10) and substituting $w=1-\frac{H(z;a_2,\bar{\theta}_2)}{n_2\bar{z}}$ in the inner integral,

$\acute{R}=\int_{t}\frac{(n_1-1)\,G'(t;a_1,\bar{\theta}_1)}{n_1\bar{t}}\left[1-\frac{G(t;a_1,\bar{\theta}_1)}{n_1\bar{t}}\right]^{n_1-2}\left[\frac{w^{n_2-1}}{n_2-1}\right]_{0}^{1-\frac{H(t;a_2,\bar{\theta}_2)}{n_2\bar{z}}}(n_2-1)\,dt$

$=\int_{t}\frac{(n_1-1)\,G'(t;a_1,\bar{\theta}_1)}{n_1\bar{t}}\left[1-\frac{G(t;a_1,\bar{\theta}_1)}{n_1\bar{t}}\right]^{n_1-2}\left[1-\frac{H(t;a_2,\bar{\theta}_2)}{n_2\bar{z}}\right]^{n_2-1}dt$

$=\int_{t}\frac{(n_1-1)\,G'(t;a_1,\bar{\theta}_1)}{n_1\bar{t}}\left[1-\frac{G(t;a_1,\bar{\theta}_1)}{n_1\bar{t}}\right]^{n_1-2}\sum_{i=0}^{n_2-1}(-1)^i\binom{n_2-1}{i}\left[\frac{H(t;a_2,\bar{\theta}_2)}{n_2\bar{z}}\right]^{i}dt$

where the integral over t runs over the region where both UMVUE densities are positive, i.e. up to $\min(n_1\bar{t},n_2\bar{z})$ in terms of the transformed variable. Now consider the case $n_1\bar{t}<n_2\bar{z}$. Let $1-\frac{G(t;a_1,\bar{\theta}_1)}{n_1\bar{t}}=u$; to solve the integral, assume $G(t;a_1,\bar{\theta}_1)=H(t;a_2,\bar{\theta}_2)$, i.e. $a_1=a_2$ and $\bar{\theta}_1=\bar{\theta}_2$. Then

$\acute{R}=\int_{0}^{1}(n_1-1)\sum_{i=0}^{n_2-1}(-1)^i\binom{n_2-1}{i}\left[\frac{n_1\bar{t}(1-u)}{n_2\bar{z}}\right]^{i}u^{n_1-2}\,du=\sum_{i=0}^{n_2-1}(-1)^i\frac{\Gamma(n_1)\Gamma(n_2)}{\Gamma(n_2-i)\Gamma(n_1+i)}\left(\frac{n_1\bar{t}}{n_2\bar{z}}\right)^{i}$

In the same manner we tackle the case $n_1\bar{t}>n_2\bar{z}$:

$\acute{R}=\sum_{i=0}^{n_1-2}(-1)^i\frac{\Gamma(n_1)\Gamma(n_2)}{\Gamma(n_2+i+1)\Gamma(n_1-i-1)}\left(\frac{n_2\bar{z}}{n_1\bar{t}}\right)^{i+1}$

The UMVUE of $R=Pr(Y>X)$ is obtained by substituting $n_2\bar{z}=T(y)=\sum_{j=1}^{n_2}H(y_j;a_2,\bar{\theta}_2)$ and $n_1\bar{t}=T(x)=\sum_{i=1}^{n_1}G(x_i;a_1,\bar{\theta}_1)$. Hence, the theorem follows.

2.
Implication Here, we consider the different cases for the distributions to obtain the UMVUE of $R=Pr(Y>X)$ given in (4) Values of parameters for The UMVUE of $R=Pr(Y>X)$ Distributions Values of Parameter The one-parameter exponential distribution $T⁢(y)=∑j=1n2yj$ and $T⁢(x)=∑i=1n1xi$ Weibull distribution $T⁢(y)=∑j=1n2yjp$ and $T⁢(x)=∑i=1n1xip$ for $p>0$ Rayleigh distribution $T⁢(y)=∑j=1n2yj2$ and $T⁢(x)=∑i=1n1xi2$ Burr distribution $T⁢(y)=∑j=1n2l⁢o⁢g⁢(1+yjb)$ and $T⁢(x)=∑i=1n1l⁢o⁢g⁢(1+xib)$ for $b>0$ Pareto distribution $T⁢(y)=∑j=1n2l⁢o⁢g⁢(yja2)$ and $T⁢(x)=∑i=1n1l⁢o⁢g⁢(xia1)$ Lomax distribution $T⁢(y)=∑j=1n2l⁢o⁢g⁢(1+yjν)$ and $T⁢(x)=∑i=1n1l⁢o⁢g⁢(1+xiν)$ for $ν>0$ Burr distribution with scale parameter $ν(>0)$ $T⁢(y)=∑j=1n2l⁢o⁢g⁢(1+yjbν)$ and $T⁢(x)=∑i=1n1l⁢o⁢g⁢(1+xibν)$ for $b>0,ν>0$ The modified Weibull distribution $T⁢(y)=∑j=1n2yjγ⁢e⁢x⁢p⁢(ν⁢yj)$ and $T⁢(x)=∑i=1n1xiγ⁢e⁢x⁢p⁢(ν⁢xi)$ for $γ>0,ν>0$ The generalised Pareto distribution $T⁢(y)=∑j=1n2[(yj-a2)+νλ2⁢l⁢o⁢g⁢(yj+νa2+λ2)]$ $T⁢(x)=∑i=1n1[(xi-a1)+νλ1⁢l⁢o⁢g⁢(xi+νa1+λ1)]$ for $λ1,λ2>0$,  $ν>0$ The linear exponential distribution $T⁢(y)=∑j=1n2[b⁢yj+θ22⁢yj2]$ $T⁢(x)=∑i=1n1[b⁢xi+θ12⁢xi2]$ for $θ1,θ2>0$ and $b>0$ The generalised power $T⁢(y)=∑j=1n2[(1+yjb)θ2]-1$ and $T⁢(x)=∑i=1n1[(1+xib)θ1]-1$ Weibull distribution $θ1,θ2>0$ and $b>0$ The Gompertz distribution $T⁢(y)=βb⁢(eb⁢Πj=1n2⁢yj-1)$ and $T⁢(x)=βb⁢(eb⁢Πi=1n1⁢xi-1)$ $β,b>0$ Chen distribution $T⁢(y)=∑j=1n2(eyjb-1)$ and $T⁢(x)=∑i=1n1(exib-1)$ $b>0$ The two-parameter exponential distribution $T⁢(y)=∑j=1n2(yj-a2)$ and $T⁢(x)=∑i=1n1(xi-a1)$ ## 5 Confidence Interval of $R=Pr(Y>X)$ In the following theorem, confidence interval of R is derived through the transformation method Theorem 3: The confidence interval of $R=Pr(Y>X)$ is $P(n2⁢R~⁢cn1⁢(1-R~)⁢(1-c)+n2⁢R~⁢c (11) where, $R¨=z¯z¯+t¯$ and $0. Proof: From the Theorem 1, the MLE of R is $ββ+α$ or $z¯z¯+t¯$. As we know $n1⁢t¯$ and $n2⁢z¯$ follows Gamma distribution with parameters $(α,n1)$ and $(β,n2)$, respectively. 
For a confidence interval for R, we must obtain the exact distribution of the variable

$\delta=\frac{n_1\bar{t}/\alpha}{n_1\bar{t}/\alpha+n_2\bar{z}/\beta}$ (12)

Let $\rho=n_1\bar{t}/\alpha$ and $\varrho=n_2\bar{z}/\beta$, and observe that $\rho$ and $\varrho$ have gamma distributions with parameters $(1,n_1)$ and $(1,n_2)$, respectively. The new variable is $\delta=\frac{\rho}{\rho+\varrho}$. Taking $\psi=\varrho$ and expressing the old variables in terms of the new ones, $\rho=\frac{\delta\psi}{1-\delta}$. The Jacobian of the transformation is $J=(1-\delta)^{-2}\psi$. The joint pdf of $\delta$ and $\psi$ is

$Pr(\delta,\psi)=\frac{e^{-\frac{\psi}{1-\delta}}\,\psi^{n_1+n_2-1}\,\delta^{n_1-1}}{\Gamma(n_1)\Gamma(n_2)\,(1-\delta)^{n_1+1}}$ (13)

Integrating out $\psi$, we have the marginal distribution of $\delta$:

$Pr(\delta)=[B(n_1,n_2)]^{-1}\,\delta^{n_1-1}(1-\delta)^{n_2-1};\quad 0<\delta<1$

Here $\delta$ has a beta distribution with the known parameters $n_1$ and $n_2$. So we have, for any $0<c<d<1$,

$Pr(c<\delta<d)=I_d(n_1,n_2)-I_c(n_1,n_2)$ (14)

where $I_x(n_1,n_2)=[B(n_1,n_2)]^{-1}\int_0^x z^{n_1-1}(1-z)^{n_2-1}\,dz$ is the incomplete beta function. Working out the connection between $\delta$ and $\ddot{R}$, we have the pivotal quantity

$\delta=\left[1+\frac{n_2\ddot{R}(1-R)}{n_1 R(1-\ddot{R})}\right]^{-1}$

where $R=\frac{\beta}{\beta+\alpha}$ and $\ddot{R}=\frac{\bar{z}}{\bar{z}+\bar{t}}$. If c and d in (14) are such that, for a given $\sigma$,

$I_d(n_1,n_2)-I_c(n_1,n_2)=1-\sigma$

then

$P\left(c<\left[1+\frac{n_2\ddot{R}(1-R)}{n_1 R(1-\ddot{R})}\right]^{-1}<d\right)=1-\sigma$ (15)

Solving equation (15) for R,

$P\left(\frac{n_2\ddot{R}c}{n_1(1-\ddot{R})(1-c)+n_2\ddot{R}c}<R<\frac{n_2\ddot{R}d}{n_1(1-\ddot{R})(1-d)+n_2\ddot{R}d}\right)=1-\sigma$

The above interval is valid for any values of $n_1$ and $n_2$, large or small. Hence the theorem follows.

3.
Implication Here, we consider the different cases for the distributions to obtain the Confidence Interval of $R=Pr(Y>X)$ given in (11) Values of parameters for The Confidence Integral of $R=Pr(Y>X)$ Distributions Values of Parameter The one-parameter exponential distribution $R~=T¯⁢(y)T¯⁢(y)+T¯⁢(x)$ $∀ T¯⁢(y)=1n2⁢∑j=1n2yj$ and $T¯⁢(x)=1n1⁢∑i=1n1xi$ Weibull distribution $R~=T¯⁢(y)T¯⁢(y)+T¯⁢(x)$ $∀ T¯⁢(y)=1n2⁢∑j=1n2yjp$ and $T¯⁢(x)=1n1⁢∑i=1n1xip,$ $p>0$ Rayleigh distribution $R~=T¯⁢(y)T¯⁢(y)+T¯⁢(x)$ $∀ T¯⁢(y)=1n2⁢∑j=1n2yj2$ and $T¯⁢(x)=1n1⁢∑i=1n1xi2$ Burr distribution $R~=T¯⁢(y)T¯⁢(y)+T¯⁢(x)$ $∀ T¯⁢(y)=1n2⁢∑j=1n2l⁢o⁢g⁢(1+yjb)$ and $T¯⁢(x)=1n1⁢∑i=1n1l⁢o⁢g⁢(1+xib),b>0$ Pareto distribution $R~=T¯⁢(y)T¯⁢(y)+T¯⁢(x)$ $∀ T¯⁢(y)=1n2⁢∑j=1n2l⁢o⁢g⁢(yja2)$ and $T¯⁢(x)=1n1⁢∑i=1n1l⁢o⁢g⁢(xia1),b>0$ Lomax distribution $R~=T¯⁢(y)T¯⁢(y)+T¯⁢(x)$ $∀ T¯⁢(y)=1n2⁢∑j=1n2l⁢o⁢g⁢(1+yjν)$ and $T¯⁢(x)=1n1⁢∑i=1n1l⁢o⁢g⁢(1+xiν)$, for $ν>0$ Burr distribution with scale parameter $ν(>0)$ $R~=T¯⁢(y)T¯⁢(y)+T¯⁢(x)$ $∀ T¯⁢(y)=1n2⁢∑j=1n2l⁢o⁢g⁢(1+yjbν)$ and $T¯⁢(x)=1n1⁢∑i=1n1l⁢o⁢g⁢(1+xibν),ν>0$ and $b>0$ The modified Weibull distribution $R~=T¯⁢(y)T¯⁢(y)+T¯⁢(x)$ $∀ T¯⁢(y)=1n2⁢∑j=1n2yjγ⁢e⁢x⁢p⁢(ν⁢yj)$ and $T¯⁢(x)=1n1⁢∑i=1n1xiγ⁢e⁢x⁢p⁢(ν⁢xi),$  $ν>0$ and $γ>0$ The generalised Pareto distribution $R~=T¯⁢(y)T¯⁢(y)+T¯⁢(y)$ $∀ T¯⁢(y)=1n2⁢∑j=1n2$ $[(yj-a2)+γλ2⁢l⁢o⁢g⁢(yj+νa2+λ2)]$ and $T¯⁢(x)=1n1⁢∑i=1n1$ $[(xi-a1)+γλ1⁢l⁢o⁢g⁢(xi+νa1+λ1)],ν>0$ and $γ>0$ The linear exponential distribution $R~=T¯⁢(y)T¯⁢(y)+T¯⁢(x)$ $∀ T¯⁢(y)=1n2⁢∑j=1n2[b⁢yj+θ22⁢yj2]$ and $T¯⁢(x)=1n1⁢∑i=1n1[b⁢xi+θ12⁢xi2],θ1,θ2>0$ and $b>0$ The generalised power $R~=T¯⁢(y)T¯⁢(y)+T¯⁢(x)$ $∀ T¯⁢(y)=1n2⁢∑j=1n2[(1+yjb)θ2-1]$ and Weibull distribution $T¯⁢(x)=1n1⁢∑i=1n1[(1+xib)θ1-1],θ1,θ2>0$ and $b>0$ The Gompertz distribution $R~=T¯⁢(y)T¯⁢(y)+T¯⁢(x)$ $∀ T¯⁢(y)=1n2⁢∑j=1n2[βb⁢(eb⁢yj-1)]$ and $T¯⁢(x)=1n1⁢∑i=1n1[βb⁢(eb⁢xi-1)],β>0$ and $b>0$ Chen distribution $R~=T¯⁢(y)T¯⁢(y)+T¯⁢(x)$ $∀ T¯⁢(y)=1n2⁢∑j=1n2(eyjb-1)$ and $T¯⁢(x)=1n1⁢∑i=1n1(exib-1),b>0$ The 
two-parameter exponential distribution: $\tilde{R}=\frac{\bar{T}(y)}{\bar{T}(y)+\bar{T}(x)}$ $\forall\ \bar{T}(y)=\frac{1}{n_2}\sum_{j=1}^{n_2}(y_j-a_2)$ and $\bar{T}(x)=\frac{1}{n_1}\sum_{i=1}^{n_1}(x_i-a_1)$

## 6 Bayes Estimator of $R=Pr(Y>X)$

In the following theorem, the Bayes estimator of R is derived through the transformation method.

Theorem 4: The Bayes estimator of R is

$\check{R}=\begin{cases}\frac{\mu^*}{\xi^*+\mu^*}\left(\frac{\eta^*}{\omega^*}\right)^{-\mu^*}{}_2F_1(\mu^*+\xi^*,\mu^*+1;\mu^*+\xi^*+1;B), & \text{for } B<1\\ \frac{\mu^*}{\xi^*+\mu^*}\left(\frac{\omega^*}{\eta^*}\right)^{-\xi^*}{}_2F_1\left(\mu^*+\xi^*,\xi^*;\mu^*+\xi^*+1;\frac{B}{1-B}\right), & \text{for } B<-1\end{cases}$ (16)

where ${}_2F_1(a,b;c;z)$ is the hypergeometric series and $B=\frac{\omega^*-\eta^*}{\omega^*}<1$.

Proof: Let $\bar{t}$ and $\bar{z}$ be based on independent samples from the pdfs (4) and (6). Consider conjugate inverse gamma priors for $\alpha$ and $\beta$ with parameters $(\mu,\eta)$ and $(\xi,\omega)$, respectively:

$\pi(\alpha,\beta)\propto\alpha^{-\mu-1}e^{-\eta/\alpha}\,\beta^{-\xi-1}e^{-\omega/\beta};\quad \mu,\eta,\xi,\omega>0$ (17)

The likelihood is

$L(\alpha,\beta\mid\bar{t},\bar{z})=\alpha^{-n_1}\beta^{-n_2}\exp\left[-\left(\frac{\sum_{i=1}^{n_1}t_i}{\alpha}+\frac{\sum_{j=1}^{n_2}z_j}{\beta}\right)\right]$ (18)

Applying the Bayes formula with (17) and (18), the posterior density of $(\alpha,\beta)$ is

$\pi(\alpha,\beta\mid\bar{t},\bar{z})\propto\alpha^{-\mu-n_1-1}e^{-\frac{\eta+n_1\bar{t}}{\alpha}}\,\beta^{-\xi-n_2-1}e^{-\frac{\omega+n_2\bar{z}}{\beta}}$ (19)

Evidently the posterior is again a product of gamma-type pdfs with the updated parameters

$\mu^*=-(n_1+\mu),\quad \eta^*=\eta+n_1\bar{t},\quad \xi^*=-(\xi+n_2),\quad \omega^*=\omega+n_2\bar{z}$

where $\bar{t}$ and $\bar{z}$ are the sample means. For the posterior pdf of R, consider the one-to-one transformation $F:\ R=\frac{\beta}{\beta+\alpha},\ \vartheta_R=\alpha+\beta$ with inverse $Q:\ \alpha=R\vartheta_R,\ \beta=(1-R)\vartheta_R$. The Jacobian of the transformation is $\vartheta_R$. The joint posterior density of R and $\vartheta_R$ becomes

$\pi^*(R,\vartheta_R\mid\bar{t},\bar{z})\propto R^{\mu^*-1}(1-R)^{\xi^*-1}\,\vartheta_R^{\mu^*+\xi^*-1}\,e^{-\vartheta_R\,\omega^*(1-BR)};\quad 0<R<1,\ \vartheta_R>0$ (20)

where $B=\frac{\omega^*-\eta^*}{\omega^*}<1$. Integrating (20) over $\vartheta_R$,

$\pi_R(R\mid\bar{t},\bar{z})=C_R\,R^{\mu^*-1}(1-R)^{\xi^*-1}(1-BR)^{-(\mu^*+\xi^*)};\quad 0<R<1$ (21)

where $C_R$ is the normalizing coefficient. For the Bayes estimator we have

$\check{R}=\int R\,\pi_R(R\mid\bar{t},\bar{z})\,dR$ (22)

Using (21) and solving (22), we obtain the Bayes estimator of R as stated in (16), where

${}_2F_1(a,b;c;z)=\sum_{j=0}^{\infty}\frac{a(a+1)\cdots(a+j-1)\;b(b+1)\cdots(b+j-1)}{c(c+1)\cdots(c+j-1)}\,\frac{z^j}{j!}$

(with the $j=0$ term equal to 1) is the hypergeometric series.
For the Bayes estimator $R¨$, replacing the parameters as $μ*=-(n1+μ),η*=η+n1⁢T¯⁢(x),ξ*=-(ξ+n2),ω*=ω+n2⁢T¯⁢(y)$ Hence, the theorem follows. 4. Implication Here, we consider the different cases for the distributions to obtain the Bayes estimators of $R=Pr(Y>X)$ given in (16) Values of parameters for The Bayes estimators of $R=Pr(Y>X)$ Distributions Values of Parameter The one-parameter exponential $μ*=-(n1+μ),η*=η+n1⁢x¯,$ distribution $ξ*=-(ξ+n2),ω*=ω+n2⁢y¯$ Weibull distribution $μ*=-(n1+μ),η*=η+∑i=1n1xip,$ $ξ*=-(ξ+n2),ω*=ω+∑i=1n2yjp,$  $p>0$ Rayleigh distribution $μ*=-(n1+μ),η*=η+∑i=1n1xi2,$ $ξ*=-(ξ+n2),ω*=ω+∑i=1n2yj2,p>0$ Burr distribution $μ*=-(n1+μ),η*=η+∑i=1n1l⁢o⁢g⁢(1+xib),$ $ξ*=-(ξ+n2),ω*=ω+∑i=1n2l⁢o⁢g⁢(1+yjb),b>0$ Pareto distribution $μ*=-(n1+μ),η*=η+∑i=1n1l⁢o⁢g⁢(xia1),$ $ξ*=-(ξ+n2),ω*=ω+∑i=1n2l⁢o⁢g⁢(yja2),a1,a2>0$ Lomax distribution $μ*=-(n1+μ),η*=η+∑i=1n1l⁢o⁢g⁢(1+xiν)$ , $ξ*=-(ξ+n2),ω*=ω+∑i=1n2l⁢o⁢g⁢(1+yjν),ν,b>0$ Burr distribution with scale parameter $ν(>0)$ $μ*=-(n1+μ),η*=η+∑i=1n1l⁢o⁢g⁢(1+xibν)$ , $ξ*=-(ξ+n2),ω*=ω+∑i=1n2l⁢o⁢g⁢(1+yjbν),ν,b>0$ The modified Weibull distribution $μ*=-(n1+μ),η*=η+∑i=1n1xiγ⁢e⁢x⁢p⁢(ν⁢xi)$ , $ξ*=-(ξ+n2),ω*=ω+∑i=1n2yjγ⁢e⁢x⁢p⁢(ν⁢yj),γ,ν>0$ The generalised Pareto distribution $μ*=-(n1+μ),η*=η+∑i=1n1[(xi-a1)+νλ1⁢l⁢o⁢g⁢(xi+νa1+λ1)],$ $ξ*=-(ξ+n2),ω*=ω+∑i=1n2[(yj-a2)+νλ2⁢l⁢o⁢g⁢(yj+νa2+λ2)],$ $γ,ν>0$ The linear exponential distribution $μ*=-(n1+μ),η*=η+∑i=1n1[b⁢xi+θ12⁢xi2],$ $ξ*=-(ξ+n2),ω*=ω+∑i=1n2[b⁢yj+θ22⁢yj2],b>0$ The generalised power $μ*=-(n1+μ),η*=η+∑i=1n1[(1+xib)θ1-1],$ Weibull distribution $ξ*=-(ξ+n2),ω*=ω+∑i=1n2[(1+yjb)θ2-1],b>0$ The Gompertz distribution $μ*=-(n1+μ),η*=η+∑i=1n1[βb⁢(eb⁢xi-1)]$, $ξ*=-(ξ+n2),ω*=ω+∑i=1n2[βb⁢(eb⁢yj-1)],b>0$ Chen distribution $μ*=-(n1+μ),η*=η+∑i=1n1[exib-1],$ $ξ*=-(ξ+n2),ω*=ω+∑i=1n2[eyjb-1],b>0$ The two-parameter exponential distribution $μ*=-(n1+μ),η*=η+∑i=1n1(xi-a1),$ $ξ*=-(ξ+n2),ω*=ω+∑i=1n2(yj-a2),a1,a2>0$ ## 7 Discussion The Family of lifetime distribution is used in order to obtained the MLES, 
UMVUEs, confidence intervals and Bayes estimators of R for the various distributions. Initially, the generalized expressions for the MLEs, UMVUEs, confidence intervals and Bayes estimators of R are obtained; the estimators for the corresponding distributions then follow simply by substituting the respective parameters. Consider the following examples.

Example 1 – Consider the Weibull distribution. Let $X_1,X_2,\ldots,X_n$ be a random sample from WE($\alpha,\lambda_1$) and $Y_1,Y_2,\ldots,Y_m$ be a random sample from WE($\alpha,\lambda_2$). Amiri et al. (2013) [1] obtained the MLE and UMVUE of R for the Weibull distribution, which are given as

$\ddot{R}=\frac{m/\sum_{j=1}^{m}y_j^{\alpha}}{n/\sum_{i=1}^{n}x_i^{\alpha}+m/\sum_{j=1}^{m}y_j^{\alpha}}$

and the UMVUE of $R$ is

$\acute{R}=1-\sum_{i=0}^{m-1}(-1)^i\frac{\Gamma(n)\Gamma(m)}{\Gamma(n+i)\Gamma(m-i)}\left(\frac{t_1}{t_2}\right)^i;\quad t_1<t_2$

where $t_1=\sum_{i=1}^{n}x_i^{\alpha}$ and $t_2=\sum_{j=1}^{m}y_j^{\alpha}$ are the sufficient statistics for $\lambda_1$ and $\lambda_2$.

Example 2 – Consider the Burr distribution. Let $X$ be a Burr random variable with parameters (p, b) and $Y$ another Burr random variable with parameters (a, b). Awad and Gharraf (1986) [2] obtained the MLE and UMVUE of R for the Burr distribution, which are given as

$\ddot{R}=\frac{1}{1+\frac{n}{m}\frac{\sum_{j=1}^{m}\log(1+y_j^b)}{\sum_{j=1}^{n}\log(1+x_j^b)}}$

and the UMVUE of $R$ is

$\acute{R}=\begin{cases}\sum_{j=0}^{m-1}(-1)^j\frac{(m-1)!\,(n-1)!}{(m-1+j)!\,(n-1-j)!}\left(\frac{\sum_{i=1}^{m}v_i}{\sum_{i=1}^{n}w_i}\right)^j; & \sum_{i=1}^{m}v_i\le\sum_{i=1}^{n}w_i\\ 1-\sum_{j=0}^{m-1}(-1)^j\frac{(m-1)!\,(n-1)!}{(m-1-j)!\,(n-1+j)!}\left(\frac{\sum_{i=1}^{n}w_i}{\sum_{i=1}^{m}v_i}\right)^j; & \sum_{i=1}^{m}v_i>\sum_{i=1}^{n}w_i\end{cases}$

where $\sum_{i=1}^{n}w_i=\sum_{j=1}^{n}\log(1+x_j^b)$ and $\sum_{i=1}^{m}v_i=\sum_{j=1}^{m}\log(1+y_j^b)$.

Example 3 – Consider the generalized Pareto distribution. Suppose $X_1,X_2,\ldots,X_n$ is a random sample from GP($\alpha,\lambda$) and $Y_1,Y_2,\ldots,Y_m$ is a random sample from GP($\beta,\lambda$). Rezaei et al.
(2010) [13] obtained the MLE and UMVUE of R for the generalized Pareto distribution, which are given as

$\ddot{R}=\frac{m/\sum_{j=1}^{m}\ln(1+\lambda y_j)}{n/\sum_{i=1}^{n}\ln(1+\lambda x_i)+m/\sum_{j=1}^{m}\ln(1+\lambda y_j)}$

and the UMVUE of $R$ is

$\acute{R}=\begin{cases}1-\sum_{i=0}^{m-1}(-1)^i\frac{(m-1)!\,(n-1)!}{(m-i-1)!\,(n+i-1)!}\left(\frac{T_1}{T_2}\right)^i; & T_1\le T_2\\ \sum_{i=0}^{n-1}(-1)^i\frac{(m-1)!\,(n-1)!}{(m+i-1)!\,(n-i-1)!}\left(\frac{T_2}{T_1}\right)^i; & T_2\le T_1\end{cases}$

where $T_1=\sum_{i=1}^{n}\ln(1+X_i)$ and $T_2=\sum_{i=1}^{m}\ln(1+Y_i)$.

Remarks: Examples 1–3 above are all specific cases of our generalized expressions. Thus, in this study we have presented a very simple method, the transformation method, for obtaining the MLEs, UMVUEs, confidence intervals and Bayes estimators of R for the different distributions.

## References

[1] Amiri, N., Azimi, R., Yaghmaei, F. and Babanezhad, M. 2013: Estimation of stress-strength parameter for two-parameter Weibull distribution. Int. J. of Advanced Stat. and Prob., 1(1):4–8.

[2] Awad, A. M. and Gharraf, M. K. 1986: Estimation of $P(Y<X)$ in the Burr case: A comparative study. Commun. Statist. – Simul., 15(2):389–403.

[3] Basu, D. 1964: Estimates of reliability for some distributions useful in life testing. Technometrics, 6:215–219.

[4] Chao, A. 1982: On comparing estimators of $P(X>Y)$ in the exponential case. IEEE Transactions on Reliability, 31:389–392.

[5] Chaturvedi, A. and Pathak, A. 2012: Estimation of the reliability functions for exponentiated Weibull distribution. J. Stat. Appl., 7:1–8.

[6] Chaturvedi, A. and Singh, K. G. 2008: A family of lifetime distributions and related estimation and testing procedures for the reliability function. J. Appl. Stat. Sci., 16(2):35–50.

[7] Chaturvedi, A. and Surinder, K. 1999: Further remarks on estimating the reliability function of exponential distribution under Type-I and Type-II censorings. Brazilian Journal of Probability and Statistics, 13:29–39.

[8] Church, J. D. and Harris, B. 1970: The estimation of reliability from stress-strength relationships. Technometrics, 12:49–54.

[9] Downton, F. 1973: The estimation of $Pr(Y<X)$ in the normal case.
Technometrics, 15:551–558.

[10] Enis, P. and Geisser, S. 1971: Estimation of the probability that $Y>X$. J. Amer. Statist. Asso., 66:162–168.

[11] Kelly, G. D., Kelly, J. A. and Schucany, W. R. 1976: Efficient estimation of $P(Y<X)$ in the exponential case. Technometrics, 18:359–360.

[12] Pugh, E. L. 1963: The best estimate of reliability in the exponential case. Operations Research, 11:57–61.

[13] Rezaei, S., Tahmasbi, R. and Mahmoodi, M. 2010: Estimation of $P(Y<X)$ for generalized Pareto distribution. J. Stat. Plan Inference, 140:480–494.

[14] Sathe, Y. S. and Shah, S. P. 1981: On estimating $P(X<Y)$ for the exponential distribution. Commun. Statist. Theor. Meth., A10:39–47.

[15] Sinha, S. K. and Kale, B. K. 1980: Life Testing and Reliability Estimation. Wiley Eastern Ltd., New Delhi.

[16] Surinder, K. and Kumar, M. 2015: Study of the stress-strength reliability among the parameters of generalized inverse Weibull distribution. Intern. Journal of Science, Technology and Management, 4:751–757.

[17] Surinder, K. and Kumar, M. 2016: Point and interval estimation of $R=P(Y>X)$ for generalized inverse Weibull distribution by transformation method. J. Stat. Appl. Pro. Lett., 3:1–6.

[18] Surinder, K. and Mayank, V. 2014: On the estimation of $R=P(Y>X)$ for a class of lifetime distributions by transformation method. J. Stat. Appl. Pro., 3(3):369–378.

[19] Tong, H. 1974: A note on the estimation of $P(Y<X)$ in the exponential case. Technometrics, 16:625.

## Biographies

Surinder Kumar is Head of the Department of Statistics, BBAU (A Central University), Lucknow, India. He has 26 years of research experience in various fields of statistics such as sequential analysis, reliability theory, business statistics and Bayesian inference. Prof. Kumar has published more than 60 research papers in various journals of national and international repute.

Prem Lata Gautam, Department of Statistics, BBAU (A Central University), Lucknow, India.
She has six years of research experience, has published six research articles in reputed journals in the fields of sequential analysis, Bayesian estimation and reliability theory, and has a working knowledge of software and languages such as R, Mathematica and Fortran.