| url | text | date | metadata |
|---|---|---|---|
https://web2.0calc.com/questions/what-is-the-reciprocal-of-tanx
|
# what is the reciprocal of tanX?
Guest Aug 3, 2017
#1
The reciprocal of tangent is cotangent.
$$\tan=\frac{\sin}{\cos} \\~\\ \cot=\frac{\cos}{\sin}$$
hectictar Aug 3, 2017
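A quick numeric check of the identity (a minimal Python sketch; the angle 0.7 rad is an arbitrary choice where sine and cosine are non-zero):

import math

x = 0.7                            # any angle with tan(x) != 0
print(1 / math.tan(x))             # 1.1872...
print(math.cos(x) / math.sin(x))   # 1.1872..., i.e. cot(x) = 1/tan(x)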
|
2018-05-28 03:30:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9998897314071655, "perplexity": 7446.426556892056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794870771.86/warc/CC-MAIN-20180528024807-20180528044807-00361.warc.gz"}
|
http://archives.gentoo.org/gentoo-pms/msg_ba083658d912e7bc435abd5a7571a5d2.xml
|
List Archive: gentoo-pms
On Fri, 13 May 2011 08:22:23 +0200 Ulrich Mueller wrote:
> >>>>> On Fri, 13 May 2011, Michał Górny wrote:
> > > +be disabled by~user too, using a~PM-specific mechanism.
> > > +\item \t{src\_test} (except if \t{RESTRICT=test} or~disabled
> > by~user)
> Could you please use the ~ more sparingly? I don't see a reason why
> line breaks at these points should be suppressed.

The rules of typography state that prepositions should not appear at the end of a line. But sure, I can if you can tell me when to use it.

--
Best regards,
Michał Górny
|
2015-01-28 20:17:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4224243760108948, "perplexity": 5225.558088953019}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422119446463.10/warc/CC-MAIN-20150124171046-00158-ip-10-180-212-252.ec2.internal.warc.gz"}
|
https://www.emjreviews.com/respiratory/article/electronic-cigarette-use-among-emerging-and-young-west-indian-adults/
|
# Electronic Cigarette Use Among Emerging and Young West Indian Adults
Authors:
Rayshell Dhandoolal,1 Shivanni De Gannes,1 Andrew Dhanoolal,1 Matthew Desaine,1 Dania Dukhoo,1 Stephen Duncombe,1 Dylan Dupraj,1 Tai Dorsett,1 Isaac Dialsingh,2 Sateesh Sakhamuri,1 *Lexley M. Pinto Pereira1
Disclosure:
The authors have declared no conflicts of interest.
Received:
27.03.17
Accepted:
15.09.17
Citation:
EMJ Respir. 2017;5[1]:108-115.
## INTRODUCTION
Electronic cigarettes (e-cigarettes), also known as electronic nicotine delivery systems (ENDS), allow users to inhale an aerosol (vapour) containing flavoured agents, additives, and typically nicotine (though not always), by heating a solution of propylene glycol and/or glycerine. The battery-powered device becomes activated on inhalation, vapourising the liquid to form an aerosol that is then ‘vaped’ into the lungs via a mouthpiece. Vaping provides the same nicotine experience as tobacco cigarettes, while the heated vapour mimics the ‘throat hit’ that occurs in tobacco smoking and is regarded as a vital experience for smokers. These devices are now the most common type of alternative nicotine delivery system used in several countries.1
E-cigarettes are being used increasingly and rapidly among the youth population and young adults.2 Reid et al.3 reported a high use of e-cigarettes in Canadian youth populations and young adults, and McMillen et al.4 found that young adults were at the highest risk of using alternative tobacco products. Traditional media channels that were once used to encourage tobacco cigarette use now use the same aggressive marketing techniques to target young people, advertising e-cigarettes as smoking cessation aids5 that provide a safe, tobacco-free, alternative smoking experience. There is limited evidence regarding the health effects of e-cigarettes. Observational data examining the long-term effects of e-cigarette use are not available and the levels of toxic constituents can vary between products. Though the majority of harmful substances found in tobacco smoke are absent in e-cigarette aerosols, evidence of decreased harm with long-term use is not available6 and short-term e-cigarette use has been associated with adverse events, ranging from a cough, sore throat, shortness of breath, and vomiting, to serious reports of pneumonia, hypotension, and seizures.7 The current research on e-cigarettes is complex and confounded by the wide variation in e-cigarette product composition. Despite being introduced as an aid to smoking cessation, Barrington-Trimis et al.8 have presented data that suggested e-cigarettes may act as gateway agents to the development of a nicotine addiction.
Nevertheless, it is estimated that the e-cigarette market, fuelled by the perception of a healthier alternative to smoking, will display a compound annual growth rate of 35% over the period of 2016–2021 and reach a total market size of $10.687 billion by the end of 2021.9 Cognizant of the lack of progress made in decreasing tobacco use in adolescents and young adults, the U.S. Food and Drug Administration (FDA) banned e-cigarette use in minors (<18 years) and began regulating the manufacture, import, packaging, labelling, advertising, promotion, sales, and distribution of ENDS in August 2016.10
When ENDS were first used in the Anglophone Caribbean between 2010 and 2011, they had already been banned in Australia, Brazil, Canada, Israel, Mexico, Panama, and Singapore. The devices were not welcomed by the medical community in Barbados11 and Jamaica.12 In Trinidad and Tobago, e-cigarettes were first introduced in 2010, ironically when the Tobacco Control Act took effect, banning smoking in public places. At present, they remain untouched by regulatory controls. A 2014 report in a popular daily newspaper in the country stated that e-cigarettes were economically attractive, costing $30–90 (1 USA $ = 6.7 Trinidad and Tobago $), and stated: “The bottles of liquids are selling like wildfire.”13
There is no information available on the prevalence and characteristics of e-cigarette users in Caribbean youth and young adults. Therefore, we examined the prevalence and associated factors of ENDS use, knowledge, and perceptions among young adults (18–25 years) in Trinidad. The findings from this study will aid the understanding of the characteristics of young adults who use e-cigarettes and will encourage the development of regulatory controls in Trinidad.
## METHODS
This cross-sectional study was undertaken from May–June 2016 by convenience sampling of consenting young adults aged between 18 and 40 years. Participants were recruited from popular locations frequented by young adults across the island, including the Gulf City Mall in the south, Movie Towne at Port-of-Spain in the west, Trincity Mall in the east, and the University of the West Indies campuses and Faculty of Medical Sciences in the north-central part of the island. No validated questionnaire on e-cigarette use is available; the questionnaire we used was therefore pilot-tested and administered by trained interviewers. Data were analysed in SPSS version 24 to produce descriptive statistics, and a logistic regression model was used to identify correlates of e-cigarette use. Pearson’s chi-squared test and the two-sample t-test were used for the univariate analyses, depending on whether the variable being studied was categorical or continuous, respectively. Variables were considered significant if the p-value was <0.05.
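(The analysis was run in SPSS, but the same tests can be sketched in a few lines of Python; the column names below are illustrative placeholders, not the study's actual coding.)

import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

# hypothetical respondent-level data: binary outcome and two predictors
df = pd.DataFrame({
    "used_ecig": np.random.binomial(1, 0.25, 777),
    "male": np.random.binomial(1, 0.49, 777),
    "age": np.random.randint(18, 41, 777),
})

# Pearson chi-squared for a categorical predictor vs the binary outcome
chi2, p, dof, expected = stats.chi2_contingency(pd.crosstab(df.male, df.used_ecig))

# two-sample t-test for a continuous predictor
t, p_t = stats.ttest_ind(df.age[df.used_ecig == 1], df.age[df.used_ecig == 0])

# logistic regression; exponentiated coefficients are odds ratios
fit = sm.Logit(df.used_ecig, sm.add_constant(df[["male", "age"]])).fit(disp=0)
print(np.exp(fit.params))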
The following definitions were used:
• Smoking status was recorded based on self-reporting. A ‘current smoker’ was defined as someone who answered ‘Yes’ to the question: ‘Have you smoked ≥100 cigarettes in your entire life?’ and had smoked a cigarette in the past 28 days.
• A ‘quitter’, or ‘former smoker’, was defined as someone who had smoked >100 cigarettes in their lifetime but had not smoked in the last 28 days.
• A ‘never smoker’ was defined as someone who had not smoked >100 cigarettes in their lifetime and was not a smoker at the time of study.
## RESULTS
The dependent variable was binary in nature. The two categories of participants involved were those who had used e-cigarettes and those who had never used e-cigarettes. Table 1 shows the breakdown of the demographic variables and the dependent variable. Of the 911 participants approached, 777 completed the interview (response rate: 85.3%). Non-responders did not participate in the questionnaire largely because of time constraints, and few participants refused to take part. A total of 24.6% (191) of subjects had used an e-cigarette before. Most participants (70.1%) were aged 18–25 years, 49.9% were of East Indian descent, and the female and male proportions were 50.6% and 49.4%, respectively. The African population was the least represented cohort in the sample (21.2%). The majority of participants (74.9%) had either completed, or were in, current tertiary education. Among the demographic variables, the univariate analyses showed that sex (odds ratio [OR]: 2.60; 95% confidence interval [CI]: 1.85–3.68; p<0.001) and ethnicity (p=0.002) were both significant. In addition, the odds of males having used an e-cigarette before were 2.60 (95% CI: 1.85–3.68) times those of females. Subjects aged 36–40 years were significantly less likely (OR: 0.37; 95% CI: 0.14–0.81) to use e-cigarettes compared with those in the 18–25 year age group (p=0.023). Additionally, East Indians (OR: 2.38; 95% CI: 1.48–3.98; p=0.001) and those belonging to other ethnicities (OR: 2.26; 95% CI: 1.34–3.90; p=0.003) were more likely to have tried/used e-cigarettes.
Table 1: Study population profile.
*: p≤0.05; **: p≤0.01.
CI: confidence interval; OR: odds ratio.
Dual use of tobacco cigarettes and e-cigarettes was observed in nearly 42% of subjects. Table 2 shows the univariate analyses among the non-demographic variables that might influence the dependent variable. The variables that were significant predictors of the use of e-cigarettes were binary variables that measured the habit of smoking tobacco, specifically whether someone smoked tobacco cigarettes at the time of study (OR: 9.34; 95% CI: 6.14–14.39; p<0.001), had quit smoking tobacco cigarettes (OR: 8.65; 95% CI: 5.21–14.46; p<0.001), whether the person thought it was dangerous to his/her health (OR: 0.61; 95% CI: 0.44–0.85; p=0.004), and whether they felt that e-cigarettes were safer than tobacco cigarettes (OR: 2.59; 95% CI: 1.86–3.62; p<0.001). Respondents who smoked tobacco cigarettes at the time of study and those who had quit smoking tobacco cigarettes were significantly more likely to have used e-cigarettes in the past. A total of 16.95% of participants who had never smoked a tobacco cigarette previously had used e-cigarettes before.
Table 2: Univariate analysis of base characteristics to assess predictors of e-cigarette use (unadjusted odds ratios).
+: p-values result from Welch’s, two-sample independent t-test; *: p≤0.1; **: p≤0.05; ***: p≤0.01.
CI: confidence interval; df: degrees of freedom; OR: odds ratio.
Safety variables also play a role in determining an individual’s predisposition to use/try e-cigarettes. Those who agreed that e-cigarettes were dangerous to health were less likely (OR: 0.61; 95% CI: 0.44–0.85) to have tried/used the devices, while those who agreed that e-cigarettes were safer than regular tobacco cigarettes were more than twice as likely (OR: 2.59; 95% CI: 1.86–3.62) to have used/tried e-cigarettes. Respondents’ knowledge of the toxic content of e-cigarettes was not a significant predictor of e-cigarette use or trial. However, those who knew that e-cigarettes contain nicotine were almost twice as likely to have used an e-cigarette before (OR: 1.88; 95% CI: 1.35–2.63) compared to those who did not.
Two summative scales were constructed that measured knowledge and perception. The summative scale for knowledge consisted of seven questions. The questions included topics such as whether e-cigarettes are cheaper than regular tobacco cigarettes, if e-cigarettes contain nicotine or harmful substances, if users are less likely to develop a habit with e-cigarettes than regular cigarettes, and whether users of e-cigarettes are less likely to develop cancer, heart disease, or lung disease. The summative scale for perception consisted of 10 questions. These included whether e-cigarettes are perceived as dangerous to the health of users, safer than regular tobacco cigarettes, if e-cigarette use is acceptable in public, if e-cigarettes are safe to be used by pregnant women or near children, and if they should be sold to children. Other questions included whether respondents believed regulations should be instigated for e-cigarette use in public, whether a minimum age limit should be enforced for their use, and whether e-cigarettes should be openly advertised.
The reliability of these scales was examined using Cronbach’s alpha.14 A Cronbach’s alpha >0.7 is a good indicator of a reliable scale. The perception scale was more reliable (Cronbach’s alpha: 0.736) than the knowledge scale (Cronbach’s alpha: 0.367). There were statistical differences in the mean knowledge (t=2.59; degrees of freedom: 277.89; p=0.010) and mean perception (t=8.64; degrees of freedom: 268.11; p<0.001) between those who had and those who had never used an e-cigarette.
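(Cronbach's alpha is simple to compute directly; a minimal sketch, where items is a hypothetical respondents-by-questions score matrix.)

import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, k_items) array of item scores
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)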
A multivariable logistic regression with adjusted odds ratios (AOR) (Table 3) identified the variables that were significant predictors while accounting for the presence of the other variables. The demographic variables of ethnicity (p=0.030), education (p=0.012), and age group (p=0.007) were all significant. In addition, an individual who had quit smoking tobacco cigarettes was almost eight times more likely to have used e-cigarettes (OR: 7.98; 95% CI: 4.21–15.45). Those who said e-cigarettes contained nicotine were almost three times more likely to have used them before (OR: 2.70; 95% CI: 1.53–4.86). Of the perception and knowledge scales, only the perception scale proved to be significant (OR: 0.78; 95% CI: 0.70–0.86; p<0.001): for each unit increase in the perception scale, the odds of trying/using e-cigarettes were multiplied by 0.78 (a 22% decrease).
Table 3: Multivariable logistic regression output (adjusted odds ratios).
Results from two-sample independent t-tests.
*: p≤0.1; **: p≤0.05; ***: p≤0.01.
CI: confidence interval; OR: odds ratio.
## DISCUSSION
We have explored the pattern of e-cigarette use among young West Indian adults; this was the first such study to our knowledge. ENDS were introduced to Trinidad and Tobago in 2010. Six years later, close to 25% of young adults aged 18–40 years have used an e-cigarette and, of these, 41.9% (n=80) are tobacco cigarette smokers.
Young adults aged 18–25 years included in this study were more likely to use ENDS. For a country with a population of 1.3 million, this equates to a sizeable proportion of young adults using e-cigarettes, only 6 years after their introduction, and is worrying in the context of international reports. Gravely et al.15 investigated 10 countries with a mix of economic levels in the International Tobacco Control Surveys for self-reported awareness, current usage, and trial of e-cigarettes, and found that e-cigarette use had rapidly increased between 2009 and 2013. In a 2014 CDC National Center for Health Statistics report, 7 years after ENDS were introduced, 20% of people aged 18–24 years had used an e-cigarette at least once,16 also demonstrating a high progression in e-cigarette use. The number of adults who used ENDS in the European Union (EU) increased from 7.2% to 11.6% during the period of 2012–2014.17 In the UK, the number of young people aged 11–18 years who had ever smoked an e-cigarette rose significantly, from 4.6% in 2013, to 8.2% in 2014.18 Adult smokers perceive e-cigarettes as less harmful than tobacco cigarettes, and as an aid to help cut down on or quit smoking, without being trapped by smoke-free policies or emitting second-hand smoke; however, for young people, the market presents e-cigarettes as novel smoking devices with appealing flavours.19
In Trinidad, it is a concern that the prevalence of e-cigarette use has considerably increased in the last 6 years since their introduction, and could rapidly rise to a figure as high as in the UK and the USA. A further concern is that e-cigarettes may become a gateway to smoking in young adults. A longitudinal 2-year study in a cohort of American adolescents and young adults showed that e-cigarette use at baseline progressed to tobacco smoking,20 and, in another report, e-cigarette adolescent users had >6 times the odds of starting cigarette smoking than those who had never used an e-cigarette before.8 More recently, Spindle et al.21 reported that American college students who had used an e-cigarette before were likely to progress to tobacco cigarette smoking after a year, and men were more likely to try these devices. Future research is necessary to monitor the patterns of use in this group to determine whether e-cigarettes provide a trajectory to regular tobacco use. Africans were less likely to try or regularly use ENDS, which may reflect their poorer representation in the study. Further research should explore the sociodemographic differences in e-cigarette use, and the possibility that this habit may be a social/cultural phenomenon favoured by East Indians. Studying the knowledge, beliefs, and health risk perceptions among Trinidad’s multi-ethnic population allows an understanding of e-cigarette use and will inform future practices.
|
2021-12-09 13:25:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2971905469894409, "perplexity": 7596.680367722266}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964364169.99/warc/CC-MAIN-20211209122503-20211209152503-00319.warc.gz"}
|
https://worldbuilding.stackexchange.com/questions/124828/could-a-planets-karman-line-hypothetically-occur-under-a-liquid-surface
|
# Could a planet's Karman line hypothetically occur under a liquid surface?
The Karman Line is one of the most commonly-used definitions of the "edge of space". As an airplane flies higher in the atmosphere, the air gets thinner and thus the lift decreases. This can be compensated by flying at a faster speed. The Karman line is the altitude at which you would need as much speed as the orbital velocity. You are no longer flying; you are in orbit.
Earth has a gaseous atmosphere, and a Karman line that calculates to about 100 km.
This Space.SE question examines the Karman line of a planet without an atmosphere (i.e. a solid surface). The general consensus of the answers is that the solid surface itself is the "edge of space". The moon is such a body.
So we have...
• the edge of space with a solid surface (moon).
• the edge of space with a gaseous atmosphere (Earth).
What about a planet (or moon) with a liquid surface -- namely, could there be any contrived, theoretical scenario where the Karman line occurs below sea level?
The oceans do not necessarily have to be water (e.g. ammonia, mercury, or hydrocarbons are fine). You may adjust temperature, pressure, and gravity to any plausible values that support liquid oceans. Presumably, to keep the oceans from boiling away, there would need to be a solid crust above the ocean, or some atmosphere inadequate for flight (your choice).
Interestingly, such a possibility would mean that no creature or vehicle could "swim" to the surface of their ocean.
Obviously, Earth itself proves you can have a Karman line above a liquid sea level.
This question asks for hard science. All answers to this question should be backed up by equations, empirical evidence, scientific papers, other citations, etc. Answers that do not satisfy this requirement might be removed. See the tag description for more information.
• Don't accept right off the bat! It is good practice to leave questions open for at least 24 hours to see if other answers appear. – kingledion Sep 11 '18 at 20:02
• Just an FYI: if you want the hard-science notice to go along with the tag, flag it for mod attention and they'll add the post notice. Just flagged accordingly (further info on that can be found in the hard-science tag wiki.) – FoxElemental Sep 11 '18 at 21:06
• Water will sublimate in a vacuum. I followed this reddit link and found this wikipedia page about Ionic Liquids. They are salts in liquid form and do not evaporate/sublimate in a vacuum like water does. – John Locke Sep 11 '18 at 21:44
• @JohnLocke: Water ice will sublimate in air at normal pressure, not only in a vacuum. In vacuum liquid water will boil . . . – AlexP Sep 12 '18 at 0:21
• @AlexP The karman line is where the medium is so thin that orbital mechanics is holding you up more than aero/hydrodynamics. Therefore, the karman line is at least as tall as the highest point where matter is found in a relevant density, meaning if the planet has an atmosphere, the karman line will be at the top of the atmosphere, not, as the OP asked, under water, nor anywhere near the surface, it will end way above water. So if you have no atmosphere, you need something that will not sublimate. – John Locke Sep 12 '18 at 0:45
# No
As long as you are in a liquid, the density will be high enough that an airfoil shape will be able to give you lift. In that case, you can always get to the surface of the liquid, with a sub-orbital velocity. Therefore, the Karman line can't be below the surface of the liquid.
### Definition of the Karman line
The Karman line's mathematical definition is
$$\frac{1}{2}\rho v_0^2SC_L = mg$$
where $v_0$ is the orbital velocity; $m$ and $S$ are mass and wing area; and $C_L$ is the coefficient of lift. The wing loading of an airplane is $m/S$ and is around 600 kg/m$^2$ for a commercial airplane. Via Aviation.SE we can get lift coefficients. Lift varies with angle of attack, but $C_L=1$ is a good enough approximation. We can plug this into the equation to get:
$$\frac{1}{1200}\rho v_0^2 = g.$$
So now we have a relationship between orbital velocity, gravity, and fluid density. Given a fluid density of water at 1000 kg/m$^3$, the surface gravity must be 0.83 times the square of the orbital velocity ($g = 0.83 v_0^2$, in SI units) for the Karman line to sit at or below the liquid surface.
### Relationship between escape velocity and surface gravity
Now, escape velocity is not the same as orbital velocity, but it can give us an approximation of what orbital velocity is. LEO on Earth is ~7 km/s while escape velocity is 11.2 km/s. This will be a close enough approximation, as we will see.
Escape velocity can be expressed as a product of surface gravity by
$$v_0 = \sqrt{2gr}.$$ We will use escape velocity as a stand-in for orbital velocity. Combining this with $g = 0.83 v_0^2$ eliminates the velocity entirely and fixes the radius: $1 = 1.66\,r$, so $r \approx 0.6$ m.
In other words, the two conditions can only be met simultaneously by a body about 0.6 meters in radius, whatever its speed. If the 'planet' is to have a surface gravity of 9.8 m/s$^2$, its escape velocity is a mere 3.4 m/s.
### Calculation of required mass
So here you can see the impossibility forming. Escape velocity is
$$v_e = \sqrt{\frac{2GM}{r}}.$$ If we plug in 3.4 m/s and a radius of 0.6 meters, we get a mass of $5.3\times10^{10}$ kg; this is a density of $5.8\times10^{10}$ kg/m$^3$, which is electron-degenerate matter.
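A quick numeric check of this chain of relations (a sketch; the water density, the 600 kg/m$^2$ wing loading, $C_L = 1$, and the Earth-like surface gravity are the assumptions used above):

import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
rho = 1000.0           # density of water, kg/m^3
wing_loading = 600.0   # m/S, kg/m^2
C_L = 1.0

# g = rho*v0^2*C_L/(2*wing_loading) combined with v0^2 = 2*g*r
# eliminates v0 and fixes the radius:
r = wing_loading / (rho * C_L)               # 0.6 m
g = 9.8                                      # choose Earth-like surface gravity
v_esc = math.sqrt(2 * g * r)                 # ~3.4 m/s
M = v_esc**2 * r / (2 * G)                   # ~5.3e10 kg
density = M / (4.0 / 3.0 * math.pi * r**3)   # ~5.8e10 kg/m^3
print(r, v_esc, M, density)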
# Conclusion
The only way to make this happen is to put a liquid ocean over a small asteroid's worth of electron degenerate matter. So, no, this cannot happen.
• And I don't dare imagining what happens upon reaching orbital velocity into a liquid... – L.Dutch Sep 11 '18 at 19:27
• It would be brief, but exciting. – Keith Morrison Sep 11 '18 at 19:33
• @DrSheldon, if you are in a liquid, you are, by definition, in a material of less density than if that same liquid was a gas. Thus, you don't even need an airfoil to reach the surface. You can theoretically use that same liquid in gaseous form to float to the surface, ie needing a horizontal speed of 0. – Keith Morrison Sep 11 '18 at 19:40
• @KeithMorrison A rocket doesn't need a horizontal speed to get to the top of the atmosphere either, and it isn't in a liquid. – John Locke Sep 12 '18 at 0:50
• @JohnLocke Yes but it also will have a vertical speed of close to zero as well, and rockets can't do that. – fyrepenguin Sep 12 '18 at 2:27
|
2019-07-16 21:01:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6657426357269287, "perplexity": 750.0231135700939}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524879.8/warc/CC-MAIN-20190716201412-20190716223412-00073.warc.gz"}
|
https://socratic.org/questions/a-triangle-has-sides-a-b-and-c-sides-a-and-b-have-lengths-of-7-and-5-respectivel-6
|
# A triangle has sides A, B, and C. Sides A and B have lengths of 7 and 5, respectively. The angle between A and C is (5pi)/24 and the angle between B and C is (pi)/8. What is the area of the triangle?
Jan 26, 2017
The area of the triangle is $15.16$ sq. units (2 d.p.)
#### Explanation:
The angle between sides A and C is $\angle b = \frac{5\pi}{24} = \frac{5 \cdot 180^\circ}{24} = 37.5^\circ$
The angle between sides B and C is $\angle a = \frac{\pi}{8} = \frac{180^\circ}{8} = 22.5^\circ$
The angle between sides A and B is $\angle c = 180^\circ - (37.5^\circ + 22.5^\circ) = 120^\circ$
So we have sides $A=7$, $B=5$, and their included angle $\angle c = 120^\circ$
The area of the triangle is ${A}_{t} = \frac{A \cdot B \cdot \sin c}{2} = \frac{7 \cdot 5 \cdot \sin 120^\circ}{2} = 15.16$ sq. units (2 d.p.) [Ans]
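A quick numerical check (a minimal Python sketch of the same computation):

import math

A, B = 7, 5
angle_b = 5 * math.pi / 24            # between sides A and C
angle_a = math.pi / 8                 # between sides B and C
angle_c = math.pi - angle_a - angle_b # included angle between A and B
print(math.degrees(angle_c))          # 120.0
print(round(A * B * math.sin(angle_c) / 2, 2))  # 15.16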
|
2021-09-17 07:46:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8486056327819824, "perplexity": 232.173990826794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055601.25/warc/CC-MAIN-20210917055515-20210917085515-00511.warc.gz"}
|
http://fourier.fhsu.edu/index.php/LaTeX
|
# LaTeX
Latex is a program for typesetting papers. It is very good at typesetting math, so it is commonly used in fields that require equations to be displayed in papers.
## Installation
To start writing Latex documents, you need two things. A Latex compiler (this is a set of programs and files that allow you to generate pdfs from a Latex file), and a Latex editor (you can use a plain text editor, but an editor designed for Latex will have many more handy features). There are a few different Latex compilers and editors. The instructions below are to install the texlive compiler and texmaker editor.
Windows
1. install a Latex compiler such as TeX Live, from http://www.tug.org/texlive/
2. install texmaker, which is cross platform, from here http://www.xm1math.net/texmaker/
3. configure texmaker: go to the toolbar, choose Options/Quick Build/User, and add the following text
latex -interaction=nonstopmode %.tex|bibtex %.aux|latex -interaction=nonstopmode %.tex|latex -interaction=nonstopmode %.tex|dvips -t letter %.dvi -o %.ps|ps2pdf -dPDFSETTINGS=/prepress %.ps
Mac
1. Follow the instructions here http://www.tug.org/mactex/2011/
2. install texmaker, which is cross platform, from here http://www.xm1math.net/texmaker/
Ubuntu Linux
Use Synaptic to install texmaker (which will automatically install texlive as well), or from a terminal:
> sudo apt-get install texmaker
You may also want to install the REVTeX package (https://authors.aps.org/revtex4/). Many APS journals use a style included in this package. However, it is not necessary to do this before starting to use Latex. For more information on installing custom document and bibliography styles, see http://www.math.uiuc.edu/~hildebr/tex/tips-customstyles.html
See section below for additional common formats.
Here is a template that can be used for Advanced Lab: File:Advanced lab.tar (Use 7-Zip, http://www.7-zip.org/, on Windows to extract this .tar file.)
## Using LaTeX
### External References
There are many, many different websites full of tutorials and references for Latex. Here are a couple of good ones.
LaTeX Wikibook: http://en.wikibooks.org/wiki/LaTeX
LaTeX Cookbook: http://www.personal.ceu.hu/tex/cookbook.html
Latex is basically a language for "marking up" plain text to indicate how it should be formatted. The Latex compiler reads the marked-up text and generates the output that is indicated. In Latex terminology, the user uses commands to tell the Latex compiler what to do. Latex commands start with a backslash '\'. For example, the Latex command to create the Greek letter alpha (in math mode) is \alpha. Some commands take arguments. These arguments are given to the command inside of curly brackets {}. For example, the Latex command to boldface some text is \textbf, and this command takes one argument, the text to boldface: \textbf{make this bold}. Some commands take optional arguments. These arguments are given in square brackets [].
A bare Latex document must declare the document class and have a document environment. These are specified with the \documentclass command and the \begin{document} and \end{document} commands.
\documentclass{article}
\begin{document}
This is the simplest document I could think of.
\end{document}
By default, Latex has a lot of useful features, but you will quickly find yourself needing to do something that is not possible with plain Latex. Latex allows you to use packages that have extra functionality. These packages are similar to libraries used to program in other languages. To use a package, you give the package name to the \usepackage command. A package that contains a lot of useful features for writing equations is the amsmath package.
\documentclass{article}
\usepackage{amsmath} % include some useful tools for writing equations.
\begin{document}
This is the simplest document I could think of.
\end{document}
Notice that we used a Latex comment in this case. Anything after a % is ignored and can be used to provide extra information that will not get put into the formatted document. In this case, we indicate why the amsmath package is being included.
#### Inserting Math
There are several different ways to display equations in a Latex document. Latex uses what it calls "math mode" to display equations. It does this because equations are formatted differently than normal text, so it is necessary to explicitly indicate equations. So, the many different ways of displaying equations are just different ways of entering math mode. There are many commands that only work in math mode (for example, the commands for Greek letters). Luckily, it is very simple to get into math mode. Here is a short list of the most common ways of displaying equations.
single dollar sign
any text inside of a pair of dollar signs is formatted in math mode.
double dollar sign
text inside a pair of double dollar signs ($$ ... $$) is formatted in math mode, but the equation is displayed centered on its own line.
equation environment
the equation environment will display an equation centered on its own line, like $$ ... $$, but it will also number it.
align environment
the align environment can be used to format multi-line equations so that they align (for example, you may want the = sign in all of them to align). You must use a & to indicate where equations should be aligned, and a \\ to indicate the end of an equation line. If you don't want all lines to get numbered (which will happen by default), you can insert the \nonumber command at the end of a line.
Both the equation and align environments have "starred" versions that will format equations without numbers. This is sometimes useful to display small, simple equations that will not be referenced later in the paper.
One of the most powerful features of Latex is the ability to automatically number and reference equations. Rather than manually numbering each equation in your paper and then referring to those numbers directly in the text, Latex allows you to label an equation and then refer to it by that label. This has the advantage that you can reorder your equations if needed, and the equation numbers will all be updated automatically.
Here is a simple example document demonstrating the different methods for entering math mode.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
To insert equations, or math symbols, inline with the text, just use the
dollar sign like this. Greek letter alpha is $\alpha$. This can be used in
equations; $y = e^{\alpha t}$.

The double dollar signs will get you an equation on its own line
$$y = mx + b$$
even if it is inline with the text.

To number equations use the equation environment
\begin{equation}
\label{eq:quadradic}
y = ax^2 + bx + c
\end{equation}
The equation above can now be referenced as Equation \ref{eq:quadradic}.

Sometimes, we want multi-line equations. The align environment allows you
to align your lines. Just use a & to indicate where they should be aligned,
and a \\ to indicate new lines. The \nonumber command will cause a specific
line to not be numbered.
\begin{align}
Q(t) &= CV \nonumber \\
&= C \left( \mathcal{E} - V_c \right) \\
\frac{d Q}{dt} &= \frac{d }{dt} \left(C\left( \mathcal{E} - V_c \right)\right) \\
&= C\left( \frac{d \mathcal{E} }{dt} - \frac{d V_c}{dt} \right)
\end{align}
\end{document}
##### Symbols
There are hundreds (probably thousands) of commands for writing equations. The Latex wikibook has a page on mathematics here. A very dense list of math symbols can be found here. A downloadable pdf containing a dense list of symbols can be found here
### Formatting Papers, Reports, and Presentations
There are numerous packages that assist in the formatting of journal articles to conform to specific requirements of professional societies and conferences. Commonly of interest to those in Physics include:
texlive-revtex and texlive-revtex4 -- Styles for various Physics Journals
texlive-biblatex-phys -- Styles for biblatex AIP and APS bibliographies
texlive-spie -- Styles for formatting SPIE Proceedings manuscripts
texlive-technics -- Styles for formatting technical documents
texlive-IEEEtrans -- Styles for IEEE Transactions journals
texlive-units, texlive-SIunits -- Styles for typesetting units within documents
texlive-preprint -- A bundle of useful stuff, notably the Author Affiliations Block
texlive-authoraftertitle -- Make Author Information available after maketitle command
texlive-talk and texlive-beamer -- Presentation formats
texlive-lecturer and texlive-powerdot -- More presentation formats
#### Posters
The beamer package assists in creating research posters. An FHSU-style format for research posters is available at http://fermi.fhsu.edu:81/QPhysics/FHSUPoster.git.
|
2022-07-02 03:01:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.957219660282135, "perplexity": 2241.462741919132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103983398.56/warc/CC-MAIN-20220702010252-20220702040252-00555.warc.gz"}
|
http://physics.aps.org/story/v3/st11
|
# Focus: Electrons Catch a Wave
Published February 23, 1999 | Phys. Rev. Focus 3, 11 (1999) | DOI: 10.1103/PhysRevFocus.3.11
#### A Laser-Accelerator Injector Based on Laser Ionization and Ponderomotive Acceleration of Electrons
C. I. Moore, A. Ting, S. J. McNaught, J. Qiu, H. R. Burris, and P. Sprangle
Published February 22, 1999
In the next generation of accelerators, electrons may “ride a wave” of electromagnetic fields in a plasma, just as a surfer rides a wave of water. But as any surfer knows, riding the wave is only part of the challenge: You have to get on it first. In the 22 February PRL, a group of physicists proposes a new way to produce the tightly focused, exquisitely timed electron beams that would be needed to make the scheme work. They showed that blasting a sample of krypton gas with extremely intense laser pulses produces a beam of electrons with the right properties.
Christopher Moore of the Naval Research Laboratory in Washington, DC, along with his colleagues, zapped a vacuum chamber filled with krypton gas with an ultra-short, intense burst of infrared laser light. At $3×{10}^{18}{\text{W/cm}}^{2}$, the intensity of the burst was equivalent to momentarily focusing the entire U.S. power output onto a region the size of a human cell. When it struck the krypton gas, the laser pulse stripped off up to 18 electrons from each atom, removing them not only from the outer orbital but the inner orbitals as well–in effect, ripping off the atom’s shirt at the same time as its coat. “The weakly-bound electrons fly out of the laser pulse before they can gain much energy,” says Moore, but the inner ones absorb the full force of the laser beam. By the time they get away, they have enough energy to move at four-fifths of the speed of light.
“We knew that the electrons would be ejected,” says Moore. “The surprise was the high degree of directionality.” Theory predicted that the electrons should emerge from the krypton atoms at all angles perpendicular to the laser light, with a slight preference for the laser’s direction of polarization. But in fact, they came out in two oppositely-directed beams, moving only along the polarization direction. While the discrepancy is up to the theorists to explain, it makes Moore’s experiment an attractive way to generate electron beams for accelerators.
Over the last decade, physicists have demonstrated an idea called wakefield acceleration, in which a laser pulse creates a traveling electric field in a plasma. If an electron is caught in the wave, the electric field pulls it along and causes it to accelerate–in the same way that gravity pulls a surfer down the front of a wave. Physicists believe the idea may enable them to build “tabletop accelerators” that will replace the mile-long behemoths of today. But such accelerators would need a source of electrons that are traveling at just the right speed and at the right time. Other methods have been proposed, in which the electrons are jolted out of the plasma itself, but Moore’s is the first scheme that would inject them from outside of the plasma.
“It’s like an action movie where the hero jumps onto the moving car,” says Howard Milchberg, a laser physicist at the University of Maryland. He emphasizes that Moore and his colleagues have not demonstrated a working model for an injector yet, but “They have shown they can get high brightness with femtosecond timing.”
–Dana Mackenzie
Dana Mackenzie is a freelance science writer.
|
2014-11-24 08:17:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3954654037952423, "perplexity": 1527.5282445858916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400380464.40/warc/CC-MAIN-20141119123300-00224-ip-10-235-23-156.ec2.internal.warc.gz"}
|
https://mathspace.co/textbooks/syllabuses/Syllabus-409/topics/Topic-7257/subtopics/Subtopic-96973/?activeTab=theory
|
NZ Level 6 (NZC) Level 1 (NCEA)
Factorising algebraic fractions (mult/div)
Lesson
We've already had a look at how to multiply and divide algebraic fractions earlier on, but now we can use factorisation to help us solve problems with complex fractions.
## Some Quick Revision
Before we go on, let's quickly review how to multiply and divide algebraic fractions. Doing so is exactly the same as with regular fractions that only involve numbers: we simplify the problem by cancelling out common factors. To avoid getting confused algebraically, look at the common factors of each variable separately.
For example, let's work out what $\frac{6xy}{15y^2}\times\frac{20y}{12x^3}$ is.
I'll start by simplifying the first fraction $\frac{6xy}{15y^2}$, comparing the numerator and denominator.
I can see that the HCFs of the coefficients and of the $y$ terms are $3$ and $y$, respectively (there is no common factor for the $x$ term). Thus it simplifies to $\frac{2x}{5y}$.
Similarly, the second fraction $\frac{20y}{12x^3}$ can be simplified to $\frac{5y}{3x^3}$.
So now our problem is $\frac{2x}{5y}\times\frac{5y}{3x^3}$, and we can see that HCFs can also be cancelled out diagonally.
$2x$ and $3x^3$ have an HCF of $x$, while $5y$ and $5y$ cancel out completely.
So our resulting problem is $\frac{2}{1}\times\frac{1}{3x^2}=\frac{2}{3x^2}$.
## Using Factorisation
Sometimes the fractions involved in these multiplication and division problems are too complicated to see their factors straight away, and that's where factorisation comes in. We can use the various factorisation techniques we have learnt previously.
#### Examples
##### Question 1
Factorise and simplify the following: $\frac{9x^2}{3xy-6x}\div\frac{3y+9}{y^2+y-6}$
Think about how division is just multiplication with the second fraction inverted
Do: So our problem can be rewritten as:
$\frac{9x^2}{3xy-6x}\times\frac{y^2+y-6}{3y+9}$
The denominator of the first fraction can be factorised using HCFs:
$\frac{9x^2}{3xy-6x} = \frac{9x^2}{3x\left(y-2\right)} = \frac{3x}{y-2}$, by cancelling out $3x$ from top and bottom.
The second fraction can be factorised using the cross method on top and HCFs on the bottom:
$\frac{y^2+y-6}{3y+9} = \frac{\left(y-2\right)\left(y+3\right)}{3\left(y+3\right)} = \frac{y-2}{3}$, by cancelling out $y+3$ on top and bottom.
So our problem is now:
$\frac{3x}{y-2}\times\frac{y-2}{3} = \frac{3x}{1}\times\frac{1}{3}$ by diagonally cancelling out $y-2$,
$= \frac{x}{1}\times\frac{1}{1}$ by diagonally cancelling out $3$,
$= x$.
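These cancellations can also be checked mechanically; here is a short sketch using SymPy (assuming the library is available):

from sympy import symbols, simplify

x, y = symbols("x y")
expr = (9*x**2 / (3*x*y - 6*x)) / ((3*y + 9) / (y**2 + y - 6))
print(simplify(expr))   # x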
##### Question 2
Factorise and simplify
$\frac{5q}{50pq^2-8p}\times\frac{4pq+24p^2}{q^2+12pq+36p^2}$
Think about how some quadratics don't need to be factorised using the cross method
Do
$\frac{5q}{50pq^2-8p}\times\frac{4pq+24p^2}{q^2+12pq+36p^2}$
$= \frac{5q}{2p\left(25q^2-4\right)}\times\frac{4p\left(q+6p\right)}{q^2+12pq+36p^2}$ using HCF factorisation
$= \frac{5q}{2p\left(5q+2\right)\left(5q-2\right)}\times\frac{4p\left(q+6p\right)}{q^2+12pq+36p^2}$ using the difference of two squares
$= \frac{5q}{2p\left(5q+2\right)\left(5q-2\right)}\times\frac{4p\left(q+6p\right)}{\left(q+6p\right)^2}$ since the denominator is a perfect square ($q^2$ and $36p^2$ are both squares and $12pq = 2\times q\times 6p$)
$= \frac{5q}{2p\left(5q+2\right)\left(5q-2\right)}\times\frac{4p}{q+6p}$
$= \frac{5q}{\left(5q+2\right)\left(5q-2\right)}\times\frac{2}{q+6p}$
$= \frac{10q}{\left(5q+2\right)\left(5q-2\right)\left(q+6p\right)}$
##### Question 3
Simplify the following: $\frac{5x+8}{8xy^2}\times\frac{9xy}{25x+40}$
##### Question 4
Simplify the following expression:
$\frac{p+7}{5}\times\frac{5p-2}{p^2+14p+49}$
##### Question 5
Simplify the following expression:
$\frac{a^2-16}{a\left(a+4\right)}\times\frac{7a+28}{28\left(a-4\right)}$
### Outcomes
#### NA6-5
Form and solve linear equations and inequations, quadratic and simple exponential equations, and simultaneous equations with two unknowns
#### NA6-6
Generalise the properties of operations with rational numbers, including the properties of exponents
#### 91027
Apply algebraic procedures in solving problems
|
2021-09-25 19:21:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7638505101203918, "perplexity": 3593.970332354154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057733.53/warc/CC-MAIN-20210925172649-20210925202649-00228.warc.gz"}
|
https://anhngq.wordpress.com/2011/04/22/on-costa-hardy-rellich-inequalities/
|
# Ngô Quốc Anh
## April 22, 2011
### On Costa-Hardy-Rellich inequalities
This note concerns a recent result by David G. Costa [here]. Here is the statement.
Theorem 1.1. For all $a,b\in \mathbb R$ and $u \in C^\infty_0(\mathbb R^N\backslash\{0\})$ one has
$\displaystyle\left| {\frac{{N - 2 - \gamma }}{2}\int_{\mathbb R^N} {\frac{{|\nabla u{|^2}}}{{|x{|^\gamma }}}dx} + \gamma \int_{\mathbb R^N} {\frac{{{{(x \cdot \nabla u)}^2}}}{{|x{|^{\gamma + 2}}}}dx} } \right| \leqslant {\left( {\int_{\mathbb R^N} {\frac{{|\Delta u{|^2}}}{{|x{|^{2b}}}}dx} } \right)^{\frac{1}{2}}}{\left( {\int_{\mathbb R^N} {\frac{{|\nabla u{|^2}}}{{|x{|^{2a}}}}dx} } \right)^{\frac{1}{2}}}$
where $\gamma=a+b+1$. In addition, if $\gamma \leqslant N-2$, then
$\displaystyle\widehat C\int_{\mathbb R^N} {\frac{{{{(x \cdot \nabla u)}^2}}}{{|x{|^{\gamma + 2}}}}dx} \leqslant {\left( {\int_{\mathbb R^N} {\frac{{|\Delta u{|^2}}}{{|x{|^{2b}}}}dx} } \right)^{\frac{1}{2}}}{\left( {\int_{\mathbb R^N} {\frac{{|\nabla u{|^2}}}{{|x{|^{2a}}}}dx} } \right)^{\frac{1}{2}}}$
where the constant $\widehat C=|\frac{N+a+b-1}{2}|$ is sharp.
Here’s the proof.
Proof. For all $a,b\in \mathbb R$ and $u \in C^\infty_0(\mathbb R^N\backslash\{0\})$ one has, for all $t$, the following
$\displaystyle\int_{{\mathbb{R}^N}} {{{\left| {\frac{{\nabla u}}{{|x{|^a}}} + t\frac{x}{{|x{|^{b + 1}}}}\Delta u} \right|}^2}dx} \geqslant 0.$
Expanding the above yields
$\displaystyle\int_{{\mathbb{R}^N}} {\frac{{|\nabla u{|^2}}}{{|x{|^{2a}}}}dx} + {t^2}\int_{{\mathbb{R}^N}} {\frac{{|\Delta u{|^2}}}{{|x{|^{2b}}}}dx} + 2t\int_{{\mathbb{R}^N}} {\frac{{\Delta u}}{{|x{|^{a + b + 1}}}}(x \cdot \nabla u)dx} \geqslant 0.$
If we denote the last integral by $B$ and write it as
$\displaystyle B = \int_{{\mathbb{R}^N}} {\big({\rm div}(\nabla u)\big)\left( {\frac{x}{{|x{|^\gamma}}} \cdot \nabla u} \right)dx}$
an integration by parts gives
$\displaystyle B = - \frac{1}{2}\int_{{\mathbb{R}^N}} {\frac{x}{{|x{|^\gamma}}} \cdot \nabla (|\nabla u{|^2})dx} - \int_{{\mathbb{R}^N}} {\frac{{|\nabla u{|^2}}}{{|x{|^\gamma }}}dx + \gamma } \int_{{\mathbb{R}^N}} {\frac{{{{(x \cdot \nabla u)}^2}}}{{|x{|^{\gamma + 2}}}}dx} .$
Keep in mind that
$\displaystyle\frac{\partial }{{\partial {x_i}}}\left( {\frac{{{x_i}}}{{|x{|^\gamma }}}} \right) = \frac{1}{{|x{|^\gamma }}} - \gamma \frac{{x_i^2|x{|^{\gamma - 2}}}}{{|x{|^{2\gamma }}}}.$
A second integration by parts on the first integral above yields
$\displaystyle - \frac{1}{2}\int_{{\mathbb{R}^N}} {\frac{x}{{|x{|^\gamma }}} \cdot \nabla (|\nabla u{|^2})dx} = \frac{{N - \gamma }}{2}\int_{{\mathbb{R}^N}} {\frac{{|\nabla u{|^2}}}{{|x{|^\gamma }}}dx}$
so that $B$ becomes
$\displaystyle B = \left( {\frac{{N - 2 - \gamma }}{2}} \right)\int_{{\mathbb{R}^N}} {\frac{{|\nabla u{|^2}}}{{|x{|^\gamma }}}dx} + \gamma \int_{{\mathbb{R}^N}} {\frac{{{{(x \cdot \nabla u)}^2}}}{{|x{|^{\gamma + 2}}}}dx} .$
Therefore, we have
$At^2+2Bt+C \geqslant 0$
for all $t \in \mathbb R$, where $B$ is given above and
$\displaystyle A = \int_{{\mathbb{R}^N}} {\frac{{|\Delta u{|^2}}}{{|x{|^{2b}}}}dx} , \quad C = \int_{{\mathbb{R}^N}} {\frac{{|\nabla u{|^2}}}{{|x{|^{2a}}}}dx} .$
This is equivalent to $B^2 - AC \leqslant 0$, i.e.
$\displaystyle {\left( {\frac{{N - 2 - \gamma }}{2}\int_{{\mathbb{R}^N}} {\frac{{|\nabla u{|^2}}}{{|x{|^\gamma }}}dx} + \gamma \int_{{\mathbb{R}^N}} {\frac{{{{(x \cdot \nabla u)}^2}}}{{|x{|^{\gamma + 2}}}}dx} } \right)^2} \leqslant \left( {\int_{{\mathbb{R}^N}} {\frac{{|\Delta u{|^2}}}{{|x{|^{2b}}}}dx} } \right)\left( {\int_{{\mathbb{R}^N}} {\frac{{|\nabla u{|^2}}}{{|x{|^{2a}}}}dx} } \right).$
This completes the first inequality. On the other hand, since
$\displaystyle 0 \leqslant \frac{{{{(x \cdot \nabla u)}^2}}}{{|x{|^2}}} \leqslant |\nabla u{|^2}$
we know that
$\displaystyle\frac{{N + \gamma - 2}}{2}\int_{{\mathbb{R}^N}} {\frac{{{{(x \cdot \nabla u)}^2}}}{{|x{|^{\gamma + 2}}}}dx} \leqslant \frac{{N - 2 - \gamma }}{2}\int_{{\mathbb{R}^N}} {\frac{{|\nabla u{|^2}}}{{|x{|^\gamma }}}dx} + \gamma \int_{{\mathbb{R}^N}} {\frac{{{{(x \cdot \nabla u)}^2}}}{{|x{|^{\gamma + 2}}}}dx}$
provided $\gamma \leqslant N-2$. This proves the second inequality.
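To make the last step fully explicit, note that since $\gamma = a+b+1$,

$\displaystyle \frac{N+\gamma-2}{2} = \frac{N+a+b-1}{2} = \widehat C \quad \text{and} \quad \frac{N+\gamma-2}{2} = \frac{N-2-\gamma}{2} + \gamma.$

The condition $\gamma \leqslant N-2$ makes the coefficient $\frac{N-2-\gamma}{2}$ non-negative, so the pointwise bound above may be applied to the first term; the right-hand side of the display is then exactly $B$, which the first inequality bounds by $\sqrt{AC}$.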
|
2017-02-26 08:00:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 28, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9480361938476562, "perplexity": 431.72812808637366}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00091-ip-10-171-10-108.ec2.internal.warc.gz"}
|
http://vulgairedev.fr/blog/article/eightQueensPuzzle
|
Eight Queens Puzzle
Date: 31 December 2016 | Category: Algorithmics | by VulgaireDev
The eight queens puzzle is a classic algorithmic problem.
Problem
You have an 8*8 chessboard and 8 queens. Your goal is to place the 8 queens on the board without any of them threatening another one. A queen is threatened if she is on the same row, the same column, or the same diagonal as another queen (as in the rules of chess).
Solutions
A brute-force approach may take too much time here, because there are "64 choose 8", about 4.4 billion, ways to place the 8 queens on the 64 squares. The interesting thing about this problem is that you have different approaches to solve it (check on Wikipedia): brute force, dynamic programming, genetic algorithms...
An interesting (and simple) approach is 'iterative repair'. The idea is to place a queen on each row, at a random position in that row. Then we find the most threatened queen and change her location on the same row: this is the repair operation. Here, I chose a new random position on the row, but selecting the position with the fewest conflicts would be a very efficient heuristic (I kept it simple at first).
This way of solving the problem is greedy, so it can get stuck at a local extremum without finding a global one (i.e., a solution). A fix is to keep a counter that you decrement each time you "repair" the current configuration. If the counter runs out without the current configuration becoming a solution, we consider that we are stuck at a local extremum and will not find a solution from here, so we generate another random configuration.
This function represents all possible configurations. When we reach a minimum, it is a solution. There are configurations when we stay in the local minimum, repairing after repairing we move a little on the curve but we stay here !
Another thing with this algorithm is that it will find a solution, not necessarily all the solutions (or you must repeat the algorithm but there is clever things to do if you want to find all solutions). But if you want to find a solution quickly, even on a big chessboard, this solution is good. In fact it works on way bigger chessboard : with 1 000 000 queens, the algorithm with the optimization of selecting the best next place when "repairing" takes 50 steps in average.
# coding: utf-8
import time
from random import randint

PROBLEM_SIZE = 8

# the 0 means no queen on this location, 1 that there is one.
game_board = [[0 for i in range(PROBLEM_SIZE)] for j in range(PROBLEM_SIZE)]

def init_board(game_board):
    # we clear the board, then place a queen on each row at a random column
    for i in range(len(game_board)):
        for j in range(len(game_board[i])):
            game_board[i][j] = 0
        game_board[i][randint(0, PROBLEM_SIZE-1)] = 1

def count_conflict(game_board, x, y):
    """give the number of conflicts for the position x,y"""
    count = 0
    # we check the top
    for k in range(x-1, -1, -1):
        if game_board[k][y] == 1:
            count += 1
            # we break because the current queen is not affected by
            # other queens behind the one we just found
            break
    # we check the left
    for l in range(y-1, -1, -1):
        if game_board[x][l] == 1:
            count += 1
            break
    # we check the bottom
    for k in range(x+1, len(game_board)):
        if game_board[k][y] == 1:
            count += 1
            break
    # we check the right
    for l in range(y+1, len(game_board[x])):
        if game_board[x][l] == 1:
            count += 1
            break
    # we check the upper left diagonal:
    if x-1 >= 0 and y-1 >= 0:
        next_location = (x-1, y-1)
        while next_location[0] >= 0 and next_location[1] >= 0:
            if game_board[next_location[0]][next_location[1]] == 1:
                count += 1
                break
            next_location = (next_location[0]-1, next_location[1]-1)
    # we check the bottom left diagonal:
    if x+1 < len(game_board) and y-1 >= 0:
        next_location = (x+1, y-1)
        while next_location[0] < len(game_board) and next_location[1] >= 0:
            if game_board[next_location[0]][next_location[1]] == 1:
                count += 1
                break
            next_location = (next_location[0]+1, next_location[1]-1)
    # we check the bottom right diagonal:
    if x+1 < len(game_board) and y+1 < len(game_board[x]):
        next_location = (x+1, y+1)
        while next_location[0] < len(game_board) and next_location[1] < len(game_board[x]):
            if game_board[next_location[0]][next_location[1]] == 1:
                count += 1
                break
            next_location = (next_location[0]+1, next_location[1]+1)
    # we check the upper right diagonal:
    if x-1 >= 0 and y+1 < len(game_board[x]):
        next_location = (x-1, y+1)
        while next_location[0] >= 0 and next_location[1] < len(game_board[x]):
            if game_board[next_location[0]][next_location[1]] == 1:
                count += 1
                break
            next_location = (next_location[0]-1, next_location[1]+1)
    return count

def check_configuration(game_board):
    """This function tests if we are in a good configuration.
    If yes, it returns True; if not, it returns the location of the most conflicting queen"""
    # key is the location of the queen, value is its number of conflicts
    conflict_count = {}
    for i in range(len(game_board)):
        for j in range(len(game_board[i])):
            if game_board[i][j] == 1:
                conflict_count[(i, j)] = count_conflict(game_board, i, j)
    more_conflicting = max(conflict_count, key=conflict_count.get)
    if conflict_count[(more_conflicting[0], more_conflicting[1])] == 0:
        return True
    else:
        return more_conflicting

def print_game_board(game_board):
    for i in range(len(game_board)):
        print(game_board[i])

def main(game_board):
    init_board(game_board)
    more_conflicting = check_configuration(game_board)
    # we let our algorithm try 100 repairs from an initial configuration.
    # If we can't find a solution, we try again from another random generation
    count = 100
    while more_conflicting is not True:
        # we change the location of the most conflicting queen
        game_board[more_conflicting[0]][more_conflicting[1]] = 0
        game_board[more_conflicting[0]][randint(0, PROBLEM_SIZE-1)] = 1
        more_conflicting = check_configuration(game_board)
        if count == 0:
            init_board(game_board)
            more_conflicting = check_configuration(game_board)
            count = 100
        count -= 1
    #print_game_board(game_board)

main(game_board)
If we want to find all solutions, we can write a brute-force algorithm with a little help: we restrict the search to configurations with one queen on each row ($$8^8 = 16,777,216$$ possibilities). If we do this with a DFS, row by row, we can eliminate many candidates early in the search tree (it is the same idea as a branch-and-bound approach: you cut branches that you know cannot lead to a solution).
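To make that DFS idea concrete, here is a minimal backtracking sketch (a plain illustration, separate from the iterative-repair code above; the function name solve_all is just my label):

def solve_all(n):
    solutions = []

    def safe(cols, col):
        row = len(cols)
        for r, c in enumerate(cols):
            # conflict: same column, or same diagonal
            # (|row difference| == |column difference|)
            if c == col or abs(row - r) == abs(col - c):
                return False
        return True

    def dfs(cols):
        # cols[i] is the column of the queen on row i
        if len(cols) == n:
            solutions.append(tuple(cols))
            return
        for col in range(n):
            if safe(cols, col):   # otherwise, the whole branch is cut
                cols.append(col)
                dfs(cols)
                cols.pop()

    dfs([])
    return solutions

print(len(solve_all(8)))   # prints 92, the classic count for the 8*8 board

Because a conflict in a partial placement dooms every completion of it, each pruned node removes a whole subtree at once, which is exactly the branch-and-bound effect described above.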
Now, the smarter version of the iterative repair is to choose the square with the fewest conflicts in the same row. You just need to change a little bit of code at the end:
def main(game_board):
    init_board(game_board)
    more_conflicting = check_configuration(game_board)
    # we let our algorithm try 100 repairs from an initial configuration.
    # If we can't find a solution, we try again from another random generation
    count = 100
    while more_conflicting is not True:
        # we change the location of the most conflicting queen
        game_board[more_conflicting[0]][more_conflicting[1]] = 0
        best_position = (-1, -1)
        score_best_position = float('inf')
        # we find the best place on the same row
        for j in range(PROBLEM_SIZE):
            buff = count_conflict(game_board, more_conflicting[0], j)
            if buff < score_best_position:
                score_best_position = buff
                best_position = (more_conflicting[0], j)
        game_board[best_position[0]][best_position[1]] = 1
        more_conflicting = check_configuration(game_board)
        if count == 0:
            init_board(game_board)
            more_conflicting = check_configuration(game_board)
            count = 100
        count -= 1
Results
So now, let's see the difference between the solutions: random, naive iterative repair (a random square on the same row), and smarter iterative repair (we place the queen on the square of the row where the number of conflicts is lowest):
As we can see, on small problems the naive repair works even better than the smarter approach. On bigger problems, though, we see the huge difference between them.
|
2021-06-14 23:25:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37327247858047485, "perplexity": 2441.9799506210607}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487614006.8/warc/CC-MAIN-20210614232115-20210615022115-00212.warc.gz"}
|
http://christinabergey.com/blog/tag/website/
|
# Should I Buy a Slow Loris? (.com)
In honor of the slow loris that purportedly bit Lady Gaga, I registered a domain in the wee hours of the morning last night and built a website:
# Should I Buy a Slow Loris (.com)
Hopefully it provides an answer to that question. If it does generate any money from the Donate button, I’ll pass all of it right along to slow loris conservation and research efforts, primatologist’s honor.
Thanks to @mdo for the Bootstrap template, Mark Dumont for the Flickr photo (CC BY 2.0), Schulze & Meier 1995 for the Slender loris sound (since slow loris vocalizations are hard to find), and Raptorize for the jQuery animation fun.
# New Lab Website Launched
We recently redesigned our lab’s website using weebly, which I highly recommend for academics looking to create a group or personal home page.
I wrote before of how I exported our lists of publications to HTML for insertion on the site. Here’s how I redirected all the pages on the old site using an .htaccess 301 redirect. For background info, see this tutorial. On the old site, in the directory containing pages that I wanted to be redirected, I placed a file named .htaccess that contained the following:
RedirectMatch 301 ^/.*$ http://nyu-anthro-lab.weebly.com

It tells search engines and the browser that any URL matching ^/.*$ (the_current_directory/whatever) should redirect to http://nyu-anthro-lab.weebly.com. Super simple, and good SEO!
With the power out for about a week in Lower Manhattan, I took the time to attend to some loose ends. One was the redesign of the lab website. The last major hurdle was the addition of a list of publications by lab members. I had previously used LaTeX and BibTeX to add my own publications from Mendeley to my CV, and I figured it’d be a snap to add the lab’s publications to our weebly website. It turned out to be a bit tricky, so I’ll detail how I figured it out in this post.
The lab site created using the free service weebly was shaping up. The only gaping hole was the list of publications. We had a Mendeley group for our lab to share papers relating to our current research, and a folder within that contained a bunch of references by lab members.
I selected all the publications and chose File > Export.
This exported a BibTeX file of all our publications in entries that look something like this:
@article{Jolly2011,
abstract = {The ranges of small kinda (Papio kindae) and [...]},
author = {Jolly, Clifford J. and Burrell, Andrew S. and Phillips-Conroy, Jane E.
and Bergey, Christina M. and Rogers, Jeffrey},
doi = {10.1002/ajp.20896},
file = {:path/to/Jolly2011.pdf:pdf},
issn = {1098-2345},
journal = {American Journal of Primatology},
keywords = {Animal,Animals,DNA,Female,Genes,Genetic,[...]},
month = mar,
number = {3},
pages = {291--303},
pmid = {21274900},
title = {{Kinda baboons (Papio kindae) and grayfoot chacma baboons (P. ursinus
griseipes) hybridize in the Kafue river valley, Zambia.}},
url = {http://www.ncbi.nlm.nih.gov/pubmed/21274900},
volume = {73},
year = {2011}
}
OK, so I’d gotten the info out of Mendeley, but now I needed to get it into a nice HTML format. I poked around online and found people were recommending a Java reference manager called JabRef. I downloaded it and opened up my BibTeX file.
Looks good! Yeah, there are a few errors and inconsistencies, but this is just a first pass. I first tried the default export to HTML. Selecting File > Export, though, showed several different HTML options: HTML, HTML list, HTML table, HTML table (with Abstract & something else I can't read), and Simple HTML.
I tried them all, but none was really what I wanted. The HTML list was neat, but weebly doesn’t let you muck with custom scripts or CSS, so that one was out. With a bit of Googling, I found that you can customize the export format by writing your own Export Filter. Essentially what you do is write a layout file which tells JabRef how to spit out the HTML. Whenever I’m in unfamiliar coding territory, I like to start out with a working example and then try to modify it to do my bidding without breaking everything. I found a simple HTML export filter at this site by Mark Schenk. I chose as my starting point his List of References Export Filter, which you can download here. Since I was going to be inserting the HTML in an existing page, I ignored the files that defined the start and end of the exported HTML, listrefs.begin.layout and listrefs.end.layout. I only downloaded listrefs.layout, which I renamed bergey.layout on my harddrive.
I decided to try it without modification first, just to see what happened. I first had to add this custom Export Filter to JabRef. To do that I clicked Options > Manage custom exports. I then clicked Add new and entered this info:
…with a real path to bergey.layout, of course. That done, I then went to File > Export. I gave my exported file a name, in this case “test” and chose bergey_lab_site (*.html) as my File Format.
It worked, but when I opened up test.html in my web browser, there was a bunch of extra stuff I didn’t want. The full abstract, the raw BibTeX, and links that should have toggled the abstract and BibTeX (which didn’t work because the JavaScript in the header wasn’t output because I hadn’t used listrefs.begin.layout.) There was also a link to the PDF, which pointed to a location on my local computer.
It was time to start cutting bits out, so I opened bergey.layout in a text editor. I then deleted the highlighted parts shown below:
Saving, re-exporting from JabRef, and refreshing the page in the browser showed that doing so got rid of the abstract and BibTex text, but the unwanted links remained. To fix that, I changed this line in bergey.layout:
<p class="infolinks">
\begin{abstract}[<a href="javascript:toggleInfo('\format{\bibtexkey}',
'abstract')">Abstract</a>]\end{abstract}
\begin{review} [<a href="javascript:toggleInfo('\format{\bibtexkey}',
'review')">Review</a>]\end{review}
[<a href="javascript:toggleInfo('\format{\bibtexkey}','bibtex')">
BibTeX</a>]
\begin{doi} [<a href="\format[DOICheck]{\doi}" target="_blank">
DOI</a>]\end{doi}
\begin{url} [<a href="\format{\url}" target="_blank">URL</a>]\end{url}
\begin{file} [<a href="\format{\file}" target="_blank">PDF</a>]\end{file}</p>
to this (without line breaks):
<p class="infolinks">\begin{doi} [<a href="\format[DOICheck]{\doi}"
target="_blank">DOI</a>]\end{doi}\begin{url} [<a href="\format{\url}"
target="_blank">URL</a>]\end{url}</p>
Essentially getting rid of the abstract, review, BibTeX, and file links, but leaving those for DOI and URL. I cleaned things up a bit more, deleting some extraneous punctuation and line breaks, and changing the formatting. After all the tweaking, my bergey.layout file looked like this:
<tr id="\format{\bibtexkey}" class="entry">
\begin{doi} [<a href="\format[DOICheck]{\doi}" target="_blank">DOI</a>]\end{doi}\begin{url} [<a href="\format{\url}" target="_blank">URL</a>]\end{url}</p>
</p></td>
</tr>
which output HTML that looked like this:
Not bad! A few errors remain, but mostly from needing to correct info in Mendeley. However, I want things sorted by year, not last name, even though that's better for us alphabetically blessed first authors. Having JabRef sort by year on export rather than by author last name is accomplished using the preferences menu, found at Options > Preferences and then selecting the File window. There you can choose to "Export in current table sort order" (click to embiggen):
I sort by year by clicking above the year column. The exported HTML is in the same order as in my JabRef screen. Nice! Now what if I could add a header for each year? That is pretty simple: I just add a grouped output command to the top of my .layout file. I'll put the year above each group as an <h3> element:
\begingroup{year}<h3>\format[HTMLChars]{\year}</h3>
\endgroup{year}
Finally! All done! My bergey.layout file is this:
\begingroup{year}<h3>\format[HTMLChars]{\year}</h3>
\endgroup{year}
<tr id="\format{\bibtexkey}" class="entry">
|
2018-03-22 12:02:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27179664373397827, "perplexity": 4651.9683631565085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647883.57/warc/CC-MAIN-20180322112241-20180322132241-00436.warc.gz"}
|
https://forum.dopdf.com/general-f1/open-the-pdf-by-any-page-t5447.html
|
## Open the pdf by any page
Contro
Posts: 2
Joined: Tue Dec 29, 2015 6:11 pm
### Open the pdf by any page
Can I use a shortcut or a .bat file to open a PDF at a predefined, variable page number, with an option from the command line?
Best Regards
Claudiu (Softland)
Posts: 1509
Joined: Thu May 23, 2013 7:19 am
### Re: Open the pdf by any page
doPDF is a PDF creator; it does not open PDF files.
Contro
Posts: 2
Joined: Tue Dec 29, 2015 6:11 pm
Thanks
Best Regards
|
2021-12-07 05:11:59
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9577891230583191, "perplexity": 9118.866338042157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363336.93/warc/CC-MAIN-20211207045002-20211207075002-00375.warc.gz"}
|
https://www.physicsforums.com/threads/finding-the-length-of-an-intersection-of-2-planes.620227/
|
# Finding the length of an intersection Of 2 planes
1. Jul 11, 2012
### Plutonium88
1. The problem statement, all variables and given/known data
The line of intersection between P1: x+y+z=7 and P2: 2x-3y-z=-8 crosses the XZ plane at point A and crosses the YZ plane at point B.
Find the length of AB.
Okay, so first of all I'm having trouble understanding what crossing the XZ plane or the YZ plane means.
Does this mean that for A, x = z = 0, so A(0, something, 0), and B(something, 0, 0)?
What I'm thinking to do with this problem is to find the points A and B, then find the vector AB and its magnitude, which should be the length.
So the first thing I did was get the equation of the line...
1 x+y+z=7
2 2x-3y-z=-8
subtract eq1-eq2
-x+4y+0 = 15
x= 4y-15 (3)
sub (3) in 1
4y-15 +y + z =7
z= 22-4y
y=t
x=-15 +4t
y=t
z=22-4t
r=(-15,0,22) + t(4,1,-5) (equation of line of intersection)
so that means that the direction vector is paralell to the line segment AB
but now I'm clueless as to how to solve for A or for B... can someone give me a hint?
2. Jul 11, 2012
### HallsofIvy
Staff Emeritus
No, the "xz plane" has points with all values for x and z but y = 0. And the "yz plane" has x = 0.
Yep, good plan!
No, z- (-z)= 2z not 0. To eliminate z, add the two equations.
Again, once you have found the correct equation for the line of intersection, to find A, set y = 0 to solve for the value of the parameter and then find x and z. To find B, set x = 0, solve for the parameter and use that to find y and z.
Last edited: Jul 12, 2012
3. Jul 11, 2012
### Plutonium88
AH THAT'S GENIUS!!!! Thank you kindly sir, all of your help is much appreciated, and it really means a lot to me that there are people like you in the world who are willing to spread the knowledge of math. It's quite beautiful, actually.
4. Jul 11, 2012
### Plutonium88
Okay so
EQ1 + EQ2
3x-2y + 0 =-1
(3) x=(2y-1)/3
plug 3 in eq1
(2Y-1)/3 + Y + Z =7
z=(22-5y)/3
let y=t
x=(2t-1)/3
y=t
z=(22-5t)/3
r= (-1/3,0,22/3) + t(2/3,1,-5/3)
so therefore for point A
y=0
therefore t=0
x=-1/3
z=22/3
A(-1/3,0,22/3)
for point B
0=(2t-1)/3
t=1/2
x=0
y=1/2
z=13/2
So B(0,1/2,13/2)
so AB=(1/3,1/2,-5/6)
Magnitude of AB = √((1/3)² + (1/2)² + (-5/6)²) = √38 / 6
How does this look? Personally, the only thing I'm concerned about is that I had an ugly parametric equation with the fractions... but everything else seems correct to me, from how you described to do it.
* I made an error earlier, but fixed it.
Last edited: Jul 11, 2012
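For anyone checking the numbers, a small sketch (assuming numpy is available; not part of the original working) confirms both points lie on the two planes and that the length is √38/6:

import numpy as np

A = np.array([-1/3, 0, 22/3])
B = np.array([0, 1/2, 13/2])

# both points satisfy x + y + z = 7 and 2x - 3y - z = -8
print(A.sum(), 2*A[0] - 3*A[1] - A[2])   # 7.0 -8.0
print(B.sum(), 2*B[0] - 3*B[1] - B[2])   # 7.0 -8.0

print(np.linalg.norm(B - A))             # 1.0274...
print(np.sqrt(38) / 6)                   # the same value, sqrt(38)/6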
5. Jul 12, 2012
### HallsofIvy
Staff Emeritus
|
2017-08-23 13:09:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6844320297241211, "perplexity": 2126.8759997346797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886120194.50/warc/CC-MAIN-20170823113414-20170823133414-00499.warc.gz"}
|
https://en.wikipedia.org/wiki/Linear-quadratic_regulator
|
The theory of optimal control is concerned with operating a dynamic system at minimum cost. The case where the system dynamics are described by a set of linear differential equations and the cost is described by a quadratic function is called the LQ problem. One of the main results in the theory is that the solution is provided by the linear–quadratic regulator (LQR), a feedback controller whose equations are given below. The LQR is an important part of the solution to the LQG (linear–quadratic–Gaussian) problem. Like the LQR problem itself, the LQG problem is one of the most fundamental problems in control theory.
## General description
The settings of a (regulating) controller governing either a machine or process (like an airplane or chemical reactor) are found by using a mathematical algorithm that minimizes a cost function with weighting factors supplied by a human (engineer). The cost function is often defined as a sum of the deviations of key measurements, like altitude or process temperature, from their desired values. The algorithm thus finds those controller settings that minimize undesired deviations. The magnitude of the control action itself may also be included in the cost function.
The LQR algorithm reduces the amount of work done by the control systems engineer to optimize the controller. However, the engineer still needs to specify the cost function parameters, and compare the results with the specified design goals. Often this means that controller construction will be an iterative process in which the engineer judges the "optimal" controllers produced through simulation and then adjusts the parameters to produce a controller more consistent with design goals.
The LQR algorithm is essentially an automated way of finding an appropriate state-feedback controller. As such, it is not uncommon for control engineers to prefer alternative methods, like full state feedback, also known as pole placement, in which there is a clearer relationship between controller parameters and controller behavior. Difficulty in finding the right weighting factors limits the application of the LQR based controller synthesis.
## Finite-horizon, continuous-time LQR
For a continuous-time linear system, defined on ${\displaystyle t\in [t_{0},t_{1}]}$, described by
${\displaystyle {\dot {x}}=Ax+Bu}$
with a quadratic cost function defined as
${\displaystyle J=x^{T}(t_{1})F(t_{1})x(t_{1})+\int \limits _{t_{0}}^{t_{1}}\left(x^{T}Qx+u^{T}Ru+2x^{T}Nu\right)dt}$
the feedback control law that minimizes the value of the cost is
${\displaystyle u=-Kx\,}$
where ${\displaystyle K}$ is given by
${\displaystyle K=R^{-1}(B^{T}P(t)+N^{T})\,}$
and ${\displaystyle P}$ is found by solving the continuous time Riccati differential equation:
${\displaystyle A^{T}P(t)+P(t)A-(P(t)B+N)R^{-1}(B^{T}P(t)+N^{T})+Q=-{\dot {P}}(t)\,}$
with the boundary condition
${\displaystyle P(t_{1})=F(t_{1}).}$
The first-order conditions for minimizing J are
(i) State equation
${\displaystyle {\dot {x}}=Ax+Bu}$
(ii) Co-state equation
${\displaystyle -{\dot {\lambda }}=Qx+Nu+A^{T}\lambda }$
(iii) Stationary equation
${\displaystyle 0=Ru+N^{T}x+B^{T}\lambda }$
(iv) Boundary conditions
${\displaystyle x(t_{0})=x_{0}}$
and ${\displaystyle \lambda (t_{1})=F(t_{1})x(t_{1})}$
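As a numerical illustration, the finite-horizon gain can be obtained by integrating the Riccati differential equation backwards from the boundary condition. A minimal sketch (assuming NumPy and SciPy; the matrices and horizon below are made-up example values, with N = 0):

import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # a double integrator, for example
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
F = np.eye(2)                             # terminal cost weight
t0, t1 = 0.0, 5.0

def riccati_rhs(t, p_flat):
    P = p_flat.reshape(2, 2)
    # -dP/dt = A'P + PA - P B R^{-1} B' P + Q   (N = 0 here)
    dP = -(A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T) @ P + Q)
    return dP.flatten()

# integrate backwards in time from the boundary condition P(t1) = F
sol = solve_ivp(riccati_rhs, (t1, t0), F.flatten())
P0 = sol.y[:, -1].reshape(2, 2)
K0 = np.linalg.solve(R, B.T @ P0)         # feedback gain at t = t0
print(K0)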
## Infinite-horizon, continuous-time LQR
For a continuous-time linear system described by
${\displaystyle {\dot {x}}=Ax+Bu}$
with a cost functional defined as
${\displaystyle J=\int _{0}^{\infty }\left(x^{T}Qx+u^{T}Ru+2x^{T}Nu\right)dt}$
the feedback control law that minimizes the value of the cost is
${\displaystyle u=-Kx\,}$
where ${\displaystyle K}$ is given by
${\displaystyle K=R^{-1}(B^{T}P+N^{T})\,}$
and ${\displaystyle P}$ is found by solving the continuous time algebraic Riccati equation
${\displaystyle A^{T}P+PA-(PB+N)R^{-1}(B^{T}P+N^{T})+Q=0\,}$
This can be also written as
${\displaystyle {\mathcal {A}}^{T}P+P{\mathcal {A}}-PBR^{-1}B^{T}P+{\mathcal {Q}}=0\,}$
with
${\displaystyle {\mathcal {A}}=A-BR^{-1}N^{T}\qquad {\mathcal {Q}}=Q-NR^{-1}N^{T}\,}$
## Finite-horizon, discrete-time LQR
For a discrete-time linear system described by [1]
${\displaystyle x_{k+1}=Ax_{k}+Bu_{k}\,}$
with a performance index defined as
${\displaystyle J=x_{N}^{T}Qx_{N}+\sum \limits _{k=0}^{N-1}\left(x_{k}^{T}Qx_{k}+u_{k}^{T}Ru_{k}+2x_{k}^{T}Nu_{k}\right)}$
the optimal control sequence minimizing the performance index is given by
${\displaystyle u_{k}=-F_{k}x_{k}\,}$
where
${\displaystyle F_{k}=(R+B^{T}P_{k+1}B)^{-1}(B^{T}P_{k+1}A+N^{T})\,}$
and ${\displaystyle P_{k}}$ is found iteratively backwards in time by the dynamic Riccati equation
${\displaystyle P_{k-1}=A^{T}P_{k}A-(A^{T}P_{k}B+N)\left(R+B^{T}P_{k}B\right)^{-1}(B^{T}P_{k}A+N^{T})+Q}$
from terminal condition ${\displaystyle P_{N}=Q}$. Note that ${\displaystyle u_{N}}$ is not defined, since ${\displaystyle x}$ is driven to its final state ${\displaystyle x_{N}}$ by ${\displaystyle Ax_{N-1}+Bu_{N-1}}$.
## Infinite-horizon, discrete-time LQR
For a discrete-time linear system described by
${\displaystyle x_{k+1}=Ax_{k}+Bu_{k}\,}$
with a performance index defined as
${\displaystyle J=\sum \limits _{k=0}^{\infty }\left(x_{k}^{T}Qx_{k}+u_{k}^{T}Ru_{k}+2x_{k}^{T}Nu_{k}\right)}$
the optimal control sequence minimizing the performance index is given by
${\displaystyle u_{k}=-Fx_{k}\,}$
where
${\displaystyle F=(R+B^{T}PB)^{-1}(B^{T}PA+N^{T})\,}$
and ${\displaystyle P}$ is the unique positive definite solution to the discrete time algebraic Riccati equation (DARE)
${\displaystyle P=A^{T}PA-(A^{T}PB+N)\left(R+B^{T}PB\right)^{-1}(B^{T}PA+N^{T})+Q}$.
This can be also written as
${\displaystyle P={\mathcal {A}}^{T}P{\mathcal {A}}-{\mathcal {A}}^{T}PB\left(R+B^{T}PB\right)^{-1}B^{T}P{\mathcal {A}}+{\mathcal {Q}}}$
with
${\displaystyle {\mathcal {A}}=A-BR^{-1}N^{T}\qquad {\mathcal {Q}}=Q-NR^{-1}N^{T}}$.
Note that one way to solve the algebraic Riccati equation is by iterating the dynamic Riccati equation of the finite-horizon case until it converges.
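A minimal sketch of exactly that iteration (assuming NumPy; the matrices below are made-up example values, again with N = 0):

import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = Q.copy()    # start from the terminal condition P_N = Q
for _ in range(10000):
    # one backward step of the dynamic Riccati equation
    Pn = A.T @ P @ A - (A.T @ P @ B) @ np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A) + Q
    if np.max(np.abs(Pn - P)) < 1e-12:    # stop once P has converged
        P = Pn
        break
    P = Pn

F = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # steady-state gain
print(F)    # the optimal control is u_k = -F x_k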
## References
1. ^ Chow, Gregory C. (1986). Analysis and Control of Dynamic Economic Systems. Krieger Publ. Co. ISBN 0-89874-969-7.
• Kwakernaak, Huibert & Sivan, Raphael (1972). Linear Optimal Control Systems. First Edition. Wiley-Interscience. ISBN 0-471-51110-2.
|
2018-12-10 07:30:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 42, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8872938752174377, "perplexity": 556.3880532950543}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823318.33/warc/CC-MAIN-20181210055518-20181210081018-00314.warc.gz"}
|
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=128&t=58842
|
## Isothermal Irreversible
isochoric/isometric: $\Delta V = 0$
isothermal: $\Delta T = 0$
isobaric: $\Delta P = 0$
Renee Grange 1I
Posts: 56
Joined: Fri Aug 30, 2019 12:16 am
Been upvoted: 1 time
### Isothermal Irreversible
Is it possible for isothermal reactions to be irreversible? If so, how?
Ashley Wang 4G
Posts: 103
Joined: Wed Sep 11, 2019 12:16 am
### Re: Isothermal Irreversible
I suppose if the pressure inside the vessel is different from the external pressure, then there will still exist a definite direction of spontaneous change, which would make the reaction irreversible.
That said, the reaction will somehow have to be kept at constant temperature in order to fulfill your original condition, since it would otherwise cool as the gas expands.
Please correct me if I'm wrong!
Juliet Stephenson 4E
Posts: 100
Joined: Wed Sep 18, 2019 12:21 am
### Re: Isothermal Irreversible
I agree! I believe isothermal expansions are irreversible because they progress towards a uniform state, which is an irreversible change.
Sukanya Mohapatra 2G
Posts: 100
Joined: Sat Aug 17, 2019 12:18 am
Been upvoted: 1 time
### Re: Isothermal Irreversible
Yes, it is possible for isothermal reactions to be irreversible.
|
2021-03-01 10:59:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5642609596252441, "perplexity": 6198.00968226014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178362481.49/warc/CC-MAIN-20210301090526-20210301120526-00523.warc.gz"}
|
https://yulab-smu.top/treedata-book/chapter11.html
|
# 11 Other ggtree Extensions
The ggtree package is a general package for visualizing tree structures and associated data. If you have some special requirements that are not directly provided by ggtree, you may need to use one of the extension packages built on top of ggtree: for example, the RevGadgets package for visualizing the output of RevBayes, the sitePath package for visualizing fixation events on phylogenetic pathways, and the enrichplot package for visualizing the hierarchical structure of enriched pathways.
rp <- BiocManager::repositories()
db <- utils::available.packages(repo=rp)
x <- tools::package_dependencies('ggtree', db=db,
which = c("Depends", "Imports"),
reverse=TRUE)
print(x)
## $ggtree
## [1] "enrichplot" "ggtreeExtra"
## [3] "LymphoSeq" "miaViz"
## [5] "microbiomeMarker" "MicrobiotaProcess"
## [7] "philr" "singleCellTK"
## [9] "sitePath" "systemPipeTools"
## [11] "tanggle" "treekoR"
There are 12 packages in CRAN or Bioconductor that depend on or import ggtree and several packages on GitHub that extend ggtree. Here we briefly introduce some extension packages, including MicrobiotaProcess and tanggle.
## 11.1 Taxonomy Annotation Using MicrobiotaProcess
The MicrobiotaProcess package provides a LEfSe-like algorithm to discover microbiome biomarkers by comparing taxon abundance between different classes. It provides several methods to visualize the analysis result. The ggdiffclade() function is developed based on ggtree. In addition to the diff_analysis() result, it also supports a data frame that contains a hierarchical relationship (e.g., taxonomy annotation or KEGG annotation), together with another data frame that contains taxa and factor information and/or p-values. The following example demonstrates how to use data frames (i.e., analysis results) to visualize the differential taxonomy tree. More details can be found in the vignette of the MicrobiotaProcess package.
library(MicrobiotaProcess)
library(ggplot2)
library(TDbook)
# load df_difftax and df_difftax_info from TDbook
taxa <- df_alltax_info
dt <- df_difftax
# the plotting call (ggdiffclade, as described above):
ggdiffclade(obj=taxa,
            nodedf=dt,
factorName="DIAGNOSIS",
skpointsize=0.6,
linewd=0.2,
taxlevel=3,
# This argument is to remove the branch of unknown taxonomy.
reduce=TRUE) +
scale_fill_manual(values=c("#00AED7", "#009E73"))+
guides(color = guide_legend(keywidth = 0.1, keyheight = 0.6,
order = 3,ncol=1)) +
theme(panel.background=element_rect(fill=NA),
legend.position="right",
plot.margin=margin(0,0,0,0),
legend.spacing.y=unit(0.02, "cm"),
legend.title=element_text(size=7.5),
legend.text=element_text(size=5.5),
legend.box.spacing=unit(0.02,"cm")
)
The data frame of this example is from the analysis result of diff_analysis() using public datasets. The colors represent the features enriched in the relevant class groups. The size of the circle points represents the -log10(p-value), i.e., a larger point indicates a greater significance. In Figure 11.1, we can see that Fusobacterium sequences were enriched in carcinomas, while Firmicutes, Bacteroides, and Clostridiales were greatly reduced in tumors. These results are consistent with the original article. The species of Campylobacter has been shown to be associated with colorectal cancer, and we can see in Figure 11.1 that Campylobacter was enriched in tumors, while its relative abundance is lower than that of Fusobacterium.
## 11.2 Visualizing Phylogenetic Network Using Tanggle
The tanggle package provides functions to display a split network. It extends the ggtree package to allow the visualization of phylogenetic networks (Figure 11.2).
library(ggplot2)
library(ggtree)
library(tanggle)
file <- system.file("extdata/trees/woodmouse.nxs", package = "phangorn")
# read the split network with phangorn and plot it with ggsplitnet,
# as in the tanggle vignette
Nnet <- phangorn::read.nexus.networx(file)
ggsplitnet(Nnet) + ggexpand(.1) + ggexpand(.1, direction=-1)
|
2022-05-21 15:35:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.370368629693985, "perplexity": 6872.0331870334885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662539131.21/warc/CC-MAIN-20220521143241-20220521173241-00614.warc.gz"}
|
https://solvedlib.com/n/tutorial-exercisocompute-if-possible-the-simllarlty-dlmenslon,20738740
|
Tutorial Exercise: Compute, if possible, the similarity dimension of the fractal; round to the nearest thousandth. The cube fractal, Stage 0 and Stage 1.
Question:
Part 1 of 3: Recall that if the generator of the fractal consists of N replicas of the initiator (in other words, if the figure for Stage 1 appears N times in the figure for Stage 2), then the replacement ratio of the fractal is N. Note that the replicas may not have the same orientation as the original. Identify the replacement ratio N.
Part 2 of 3: Recall that if the initiator of the fractal has linear dimensions that are r times the corresponding linear dimensions of the replicas in its generator (in other words, if the figure for Stage 1 is r times larger than the replicas that appear in Stage 2), then the scaling ratio is r. Identify the scaling ratio.
Submit / Skip (you cannot come back)
Similar Solved Questions
Q2. An insurance policy is purchased to cover a random loss subject to a deductible of 1000 TL. The cumulative distribution function of the loss amount X is F(x) = 1 - e^(-x/2000). [2x10 pts] Find the expected loss for the insured. What is the loss elimination ratio for the insurer (company)?
Find the fundamental matrix for the two-dimensional system defined by x1' = x1 + t*x2, x2' = x2, and determine the solution for which x1(0) = C1, x2(0) = C2.
A high-speed train is traveling at 150 m/s north in San Diego (located at a latitude of 37.2 degrees as measured from the equator). Find the angle of a plumb line suspended from the ceiling of the train compared to a plumb line suspended from the train station. (Give the angle and direction.)
Please do all parts. Problem 2 - Lateral Reluctance Actuator [20 pts] The actuator shown below has a core and movable section of steel. The moveable section is supported by bearings with permeability u0, with ur. All parts of the actuator extend a depth "a" into the page. The rest of the par...
Silences are never empty of information.
Find the value of $y$ if a linear function goes through the following points and has the following slope: $(10, y),(25,100), m=-5$
Fml al K 6 Q Jvch M Kisrx M-l1 K KJ"gSulur .
The function $f(x)=\sin x \cos^{2} x$ has extremum at (a) $x=\pi/2$ (b) $x=\cos^{-1}(\sqrt{3})$ (c) $x=\cos^{-1}(-\sqrt{23})$ (d) $x=\cos^{-1}(-\sqrt{2/3})$
When a day-care facility houses more than one age group, explain which age group standards should be complied with.
9:06Done6 of 6JCuaco 605 inttut [ Dom JEZRE{ Dal [D EZE EBe cbrloa>iEu eraed Eu bat Wo_fdco @mtroncuninatha entee Eue ELEGEEEEtil TOacanJinalpartDucttion 1Onocnompk0Teminedo 'Fonndtly Ieobr Deol Theinteuing Rrorc nbrutederuintte Cernntdete JG [l Mena CoRlAie5 connitteEHFD[eoicion [email protected]
Styles Natural Gas Producers Inc. (NGPI) has an issue of 10-year, 6% annual coupon bonds outstanding. The bonds, which were originally issued 20 years ago, have a face value (PV) of $1,000, a yield-to-maturity (YTM) of 6%, and are noncallable. What is the current market price of NGPI's bonds? $8...
The overall cost of capital for Julius Seven is 11%. The firm is financed with 40% debt that offers a promised return of 7% and has an expected return of 6%. Julius Seven is in the 35% marginal tax bracket. What is Julius Seven's weighted average cost of capital? 9.6% / 11.0% / 6.5% / 10.2%
When evaluating alternatives, what type of costs should be considered? A. Relevant costs B. Sunk costs C. Prevention costs D. Fixed costs
2. Solve the following IVP for the given initial data, and express the solution in the form y(x) = ∫ K(x - t) f(t) dt for some function K(s) of a single variable: y'' - 2y' + y = f(x), y(0) = y'(0) = 0.
Question 7 (1 point) Pancreatic beta cells secrete the protein hormone insulin in response to various signals. Which of the following features would be typical of these cells? a) They would have a high concentration of insulin stored in membrane-bound vesicles adjacent to the plasma membrane. b) They would contain larger than normal amounts of rough ER. c) They participate in regulated secretion. d) a and c e) a, b and c
Using LWB (Lineweaver-Burk) calculate Km and Vmax; confirm your answers. [S] in Molar: 20 30 40 50 80 100 200 400. Velocity in units per second: 20 10 34 53 65 73 80 91 96 107 113.
ADDITIONAL TOPICS IN TRIGONOMETRY. Converting an equation written in polar form to rectangular form. Convert each polar equation to rectangular form: (a) r = 3. Rectangular form: (b) θ = 2π/3. Rectangular form:
14 Question (a point) 1st attempt. The backward-bending labor supply curve has its shape because, at... Choose one: A. all wages, the income effect dominates the substitution effect B. low wages, the substitution effect dominates the income effect, but the reverse occurs at high wages. C. all wages, th...
How many of each type of atom does the compound Ca3(PO4)2 contain? a) 3 calcium, 2 phosphorus and 8 oxygen b) 3 calcium, 2 phosphorus and 6 oxygen c) 3 calcium, 1 phosphorus and 6 oxygen d) 3 calcium, 1 phosphorus and 4 oxygen
3. Find the best-fitting curve of each type below, using (-5,120), (-2,15), (1,0), and (5,-160) for data points. [1 point] Parabolas passing through the origin: y = ax^2 + bx. [1 point] Parabolas symmetric about the y-axis: y = ax^2 + c.
d) Draw structures for peaks at 59 and 73. (12 points)
5. A certain reaction quadruples in rate when the temperature is increased from 25 °C to 35 °C. What is the activation energy for this reaction in kJ/mole?
"The combination is the seventh, ninth, and eleventh numbers in a sequence that begins 1, 1, 2, 3, 5, 8, 13, and so forth." What is the combination? I have a problem with this, can you help?
A parallel-plate capacitor has square plates that are each ... on a side and 3.0 mm apart. The space between the plates is completely filled with two square slabs of dielectric, each 6.00 cm on a side and ... thick. One slab is Pyrex glass and the other polystyrene. If the potential difference between the plates is 82.0 V, how much electrical energy is stored in the capacitor? Express your answer with the appropriate units. U = (Value) (Units)
What is the magnitude of the electric field at a point midway between a -8.0 μC and a +8.5 μC charge 9.0 cm apart? Assume no other charges are nearby.
Consider the following reaction: I2(aq) + 2 S2O3^2-(aq) → S4O6^2-(aq) + 2 I-(aq). Which species contains the element undergoing oxidation, I2 or S2O3^2-? How do you know? Choose: It contains the element that had an increase in oxidation state. / It contains the element that had a decrease in oxidation state.
Cate Problem Wentworth Medical Centertdmt~tunes Ytk amcatlsutedth denntedMeslieal Crnter In uplalr Usvograthlc larcatin Rrtn] hcjhth. MtLia resldenta 0 New Fdeni Fun Fochalth Ldivtd North Cinin; Jeutcr TThcEIl "depresston Thc' Indicate Ixher IvretemplcdMntlranmnDAlAahdnaDedenshnbrlwcrn Kcogtaphie second pat ofthe sLudy enression Indlvlduals 65 AKe OF older who had 4 AGCCOT Fhroml he;h nmuiton arthritis hvpertension, anl/or heart Samnie 60 Individuals whth such condition identified: u
Two charges, +q and -q, are located in the x-y plane at points (0,+d/2) and (0,-d/2), respectively. Calculate the magnitude of the electric field at point P using the superposition principle. Data: q = 33.0 nC, d = 2.00 mm, and P is at x = 40.0 mm.
Find the difference. $9.6-6.5$
Suppose the following information is known about a (3 x 3) matrix A: 4 =6 E]' A[J-[4: [4-[J. Then the matrix A has the following eigenvalues. Select one alternative: λ1 = 6, λ2 = 3, λ3 = 0 / none of them / λ1 = 6, λ2 = 3, λ3 = 3 / λ1 = 6, λ2 = 3 / λ1 = 6, λ2 = 3 and λ3 is impossible to find
How do you simplify 7/8 + 5/6?
|
2022-07-05 09:50:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4743601381778717, "perplexity": 8444.210511959867}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104542759.82/warc/CC-MAIN-20220705083545-20220705113545-00733.warc.gz"}
|
http://kea-monad.blogspot.com/2009_06_01_archive.html
|
occasional meanderings in physics' brave new world
Name:
Location: New Zealand
Marni D. Sheppeard
## Monday, June 29, 2009
### M Theory Lesson 282
Recall that the vertices of associahedra are described by all rooted binary trees of a certain height, such that degeneracies in the levels of the nodes are permitted. For example, for trees with three vertices (including a root) there must be four leaves, and we obtain the five vertices of a pentagon. The edges of the pentagon are labelled by trees with only two nodes, which are the contractions of the trees on the boundary vertices. And the face of the pentagon itself is labelled by the single vertex four-leaved tree. The associahedron for two leaves is a point and the associahedron for three leaves is a single edge. For each real dimension, there is an associahedron.
What about ternary trees? First observe that the real dimension must increase by two at each step, because ternary vertices increase the number of leaves by two at each branching. The first two ternary polytopes are described by the following trees. The second case has three points on a surface, with no marked edges, just like a Riemann sphere. The next case naturally lives in dimension four, so we only draw the seven leaved trees marking the $12$ points:
### Magic Matrix
Philip Gibbs has now provided a webpage with his solution to the problem of showing that any $3 \times 3$ unitary matrix can be turned into a magic matrix by multiplication of its rows and columns by phase factors.
### Mixing History
Carl just showed me these interesting papers, which should have been discussed earlier:
(1) Neutrino Mixing with Delta(27)
(2) A4 Symmetry and Neutrinos
both by Ernest Ma.
## Friday, June 26, 2009
### A Preprint
I think I'll leave it to Carl Brannen to put our four page preprint on mixing matrices on his website. We eagerly await referee reports.
### M Theory Lesson 281
Laurent Manivel, of the CNRS, has a paper that discusses how the hyperdeterminant arises as the restriction of a quartic form for the Weyl group $W(E_7)$. Here $E_7$ means the root system of that name.
It turns out that we should be interested in a product of seven copies of $A_2$ with the automorphism group of the Fano plane. The latter group is the nice group $PSL(2, F_7)$. The hyperdeterminant is a quartic form for an eight dimensional space $A_{ijk}$ that appears in a $56$ dimensional representation of $E_7$ made of seven copies of the form $A_{ijk}$ for different $i$, $j$ and $k$. The entanglement of qudits really does have a lot of wonderful geometry associated to it.
### M Theory Lesson 280
According to Oeding, the Lagrangian Grassmannian of an even dimensional symplectic space $V$ is the image of a map $f$ that takes a symmetric matrix and gives a vector of minors. There is a projection from the Grassmannian onto the variety of principal minors of all $n \times n$ matrices.
This is interesting because minors are a natural way to describe pure states in quantum mechanics. Consider a three qubit state with $8$ amplitudes. Forgetting about $a_{000}$, which we can set to $1$ projectively speaking, and letting $a_{111}$ be related somehow to the determinant of a matrix, it turns out that the other six amplitudes should be expressed as the principal minors of the matrix whose full determinant is given by the entanglement measure (Cayley's hyperdeterminant).
## Wednesday, June 24, 2009
### The String Wars
Gil Kalai has written a book, entitled "Gina Says" - Adventures in the Blogosphere's String War, based on his experiences as a poster named Gina. It is full of gems, including many admirable quotes, such as:
"Gina's comments are blocked on my blog because she was posting a large number of comments there, while most of the time clearly not understanding what she was writing about", P. Woit, Backreaction 5:59, December 27, 2006
"Gina, you and quite a few others seem confused about the meaning of higher dimensions." Thomas Love, September 28th, 2006 at 2:16 pm
This could be a bestseller!
## Monday, June 22, 2009
### Ambitwistor Holography
One of the interesting twistor ideas that I have been hearing about lately is the Ambitwistor Lagrangian of Mason and Skinner. They give an integral for (an $N = 3$) supertwistor space, over an $8$ dimensional form, that is defined in terms of a Chern-Simons piece along with supersymmetric twistor forms.
Note that the $8$ dimensions comes from a light like component of the $10$ dimensional ambitwistor component of the $12$ dimensional twistor space for $(Z,W)$. The fermionic coordinates satisfy $(\psi \cdot \eta)^4 = 0$ (just think of the quantum Fourier transform), which is responsible for the condition $(Z \cdot W)^4 = 0$, associated to Yang-Mills solutions.
Although Lagrangians cannot possibly be fundamental in a nonlocal theory, this is pretty interesting when one thinks about three copies of it. Recall that the $24$ dimensions (and $24 = 3 \times 8$) of the CFT for the $26$ dimensional bosonic string theory is associated with the Leech lattice and the Monster group and other moonshine maths!
## Sunday, June 21, 2009
### Emerging Holography
Last week's amazing twistor workshop ended Friday with an outstanding physics colloquium by Nima Arkani-Hamed, called Holography and the S matrix, but secretly about computing scattering amplitudes using twistor spaces.
He went to some effort to try to convince a large audience of theoretical physicists that there was a mysterious new, mind blowing holographic theory behind these magical simplifications in scattering amplitudes for both Yang Mills and gravity. However, unlike serious fans of thermodynamic gravities (for instance, Padmanabhan) he didn't seem in favour of a microscopic theory of gravity that was wildly different from string theory.
Some time was spent criticising the Standard Model emphasis on manifest locality, when locality should be an emergent property. In the fantastic results so far, twistor space is clearly doing holography for us, but there is a long way to go before emergent locality is properly understood. After all, if we can remove spacetime from particle physics, why not its boundaries too?
## Wednesday, June 17, 2009
### Quote of the Week
We believe that [this formula] encapsulates the complete n-particle tree level S-matrix of YM theory (for any gauge group) ... [we] highlight a crucial fact about the formula: namely, that it is not really an integral at all.
Roiban et al
## Tuesday, June 16, 2009
### Jordan M Theory
Baez decided to learn M Theory, and asked for some hints on how the exceptional Jordan algebra might appear in an $11$ dimensional theory, prompting some helpful advice from two people named Lubos and Kea. The whole conversation was of course quickly deleted, although I don't recall it containing any direct personal insults. Anyway, here is a fresh link to kneemo's blog, who I am quite sure knows far more about this question than anybody else.
## Monday, June 15, 2009
### A New Home
In good news from down under, after a successful program of pest control on Raoul Island, many of my kakariki friends have decided to make their homes there again, after 150 years.
### Twistor Time
It is very difficult to keep up with arxiv preprints these days, but since kneemo hasn't mentioned it yet, in this new paper Arkani-Hamed et al study the twistor diagrams of Hodges. As the abstract states:
Our twistor transformation is inspired by Witten's, but differs in treating twistor and dual twistor variables more equally. In these variables the three and four-point amplitudes are amazingly simple.
They refer in particular to this new paper by Mason and Skinner.
## Friday, June 12, 2009
### A Question
Today at lunch I was asked one of the questions that nonsense theorists are often asked: so what does this have to do with the real world? Of course, one could always launch into a (now fashionable) tirade about protocols for quantum information, or two dimensional systems and topological quantum field theories. However, since the conversation was set more in the context of quantum gravity, and the asker was mostly looking for a very simple, one line answer (after having already suffered a five minute introduction to category theory), I was at a loss to find the right words.
So here is the challenge: can you summarize categorical quantum gravity in 20 catchy words or less? We assume that our readers will not be captivated by statements along the lines of Everything is made of Strings or, more pertinently, the speed of light varies (although that is, of course, true). Rather, the phrase should capture the potential of quantum gravity to describe aspects of the world completely outside the domain of established physical theory.
## Thursday, June 11, 2009
### A Pi Groupoid
Recall that the cardinality of a groupoid involves the inverse of the cardinalities of its groups. At PI, Jeff Morton told me about a very nice example involving, for instance, the cyclic groups $C_{n} \times C_{n}$, which each have cardinality $n^2$, so the groupoid collecting all of them has cardinality $\sum_{k} 1/k^2 = \pi^{2}/6$. That is, six copies give a cardinality of $\pi^2$, because
$\pi^{2} = 6 \sum_{k} \frac{1}{k^2}$.
Recall that this infinite sum is the number $\zeta (2)$ for the Riemann zeta function, first evaluated by Euler in 1735. Since $e$ is also a groupoid cardinality, namely for the groupoid of finite sets and bijections, it seems that transcendentals naturally appear in the context of infinite groupoids.
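For anyone who likes to watch the convergence, here is a minimal Python sketch (just an illustration of the partial sums, nothing deeper):

```python
import math

# Partial sums of 6 * sum(1/k^2) approach pi^2, since zeta(2) = pi^2/6.
total = sum(1.0 / k**2 for k in range(1, 100001))
print(6 * total)      # 9.86954... (short of pi^2 by roughly 6/100000)
print(math.pi ** 2)   # 9.86960...
```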
### Cool Cats
Since the cool cats conference in Canada, I have been catching a few more Oxford seminars. Yesterday, Andrew Dancer spoke about Frenkel's loop group version of the Langlands Correspondence. He noted the category theorists in the audience, promising to discuss some category theory, but of course there was very little category theory and I only obtained the usual minuscule improvement in my understanding of this subject.
Meanwhile, I'm off to a lovely old college for lunch and it's a (relatively unusually) beautiful day here!
## Monday, June 08, 2009
### Back in Oxford
The PI conference is over and I have returned to Britannia. There were some excellent talks, but more importantly, there was time for discussion in the afternoons. Jeff Morton has started blogging the talks. I would like to say more, but am currently consumed by other tasks, such as fixing my computer account (which was automatically deleted on a previously valid expiry date).
## Tuesday, June 02, 2009
### CQC Monday
Day One of CQC at PI consisted of talks until lunch, a leisurely two hour break and then an afternoon question time, which quickly degenerated into a discussion on the difference between instrumental approaches (ie. the background independent point of view in the context of diagrammatic QM, roughly speaking) and foundational approaches to physics, the latter applying to proper theories (the definition of this unfortunately relying on very few previously known examples, such as GR). Anyway, hopefully the amount of discussion sets the tone for a pleasant week.
The weather, on the other hand, does not look like improving.
## Monday, June 01, 2009
### Rejecta Mathematica
Good News! The old short paper on Koide masses and the quantum Fourier transform has been accepted by Rejecta Mathematica, which only publishes works that have been rejected by respectable journals. Of course, given the ridiculous time frame for physics publishing, this paper is now hopelessly outdated by the enormous progress since made by Carl Brannen, who will no doubt have many papers published soon.
http://clay6.com/qa/39473/two-wires-a-and-b-of-equal-cross-sectional-area-are-in-series-stretches-equ
# Two wires A and B of equal cross-sectional area are connected in series and stretch equally. If $Y_A=3\times 10^{11}$ and $Y_B=1.8\times 10^{11}$, what is the ratio of their lengths?
$\begin{array}{1 1}(A)\;1 : 1\\(B)\;3 : 1.8\\(C)\;1.8 : 3\\(D)\;3.24 : 9\end{array}$
Since the wires are in series they carry the same tension, so for equal stretch and equal cross-section, $l \propto Y$.
So $\large\frac{l_A}{l_B}=\frac{Y_A}{Y_B}=\frac{3}{1.8}$
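To spell out the step hidden in $l \propto Y$ (a short derivation, where $F$ is the common tension in the series wires and $A$ the shared cross-sectional area):

$$\Delta l_A = \Delta l_B \;\Rightarrow\; \frac{F\,l_A}{A\,Y_A} = \frac{F\,l_B}{A\,Y_B} \;\Rightarrow\; \frac{l_A}{l_B} = \frac{Y_A}{Y_B} = \frac{3}{1.8} = \frac{5}{3},$$

which is option (B).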
https://www.snapxam.com/problems/99855753/integral-of-x-x-2-1-0-dx?method=1
# Step-by-step Solution
## Integral of $\frac{x}{x^2-1}$ with respect to x
$\frac{1}{2}\ln\left|x+1\right|+\frac{1}{2}\ln\left|x-1\right|+C_0$
## Step-by-step explanation
Problem to solve:
$\int\frac{x}{x^2-1}dx$
1. Factor the difference of squares $x^2-1$ as the product of two conjugate binomials:

$\int\frac{x}{\left(x+1\right)\left(x-1\right)}dx$

2. Rewrite the fraction $\frac{x}{\left(x+1\right)\left(x-1\right)}$ as two simpler fractions using partial fraction decomposition:

$\frac{x}{\left(x+1\right)\left(x-1\right)}=\frac{A}{x+1}+\frac{B}{x-1}$

3. Find the values of the unknown coefficients $A$ and $B$. First, multiply both sides of the equation from the previous step by $\left(x+1\right)\left(x-1\right)$:

$x=\left(x+1\right)\left(x-1\right)\left(\frac{A}{x+1}+\frac{B}{x-1}\right)$

4. Multiply the polynomials:

$x=\frac{A\left(x+1\right)\left(x-1\right)}{x+1}+\frac{B\left(x+1\right)\left(x-1\right)}{x-1}$

5. Simplify:

$x=A\left(x-1\right)+B\left(x+1\right)$

6. Expand the polynomial:

$x=Ax-A+Bx+B$

7. Assigning values to $x$, we obtain the following system of equations:

$\begin{matrix}-1=-2A&\:\:\:\:\:\:\:(x=-1) \\ 1=2B&\:\:\:\:\:\:\:(x=1)\end{matrix}$

8. Proceed to solve the system of linear equations:

$\begin{matrix} -2A & + & 0B & =-1 \\ 0A & + & 2B & =1\end{matrix}$

9. Rewrite as a coefficient matrix:

$\left(\begin{matrix}-2 & 0 & -1 \\ 0 & 2 & 1\end{matrix}\right)$

10. Reduce the original matrix to an identity matrix using Gaussian elimination:

$\left(\begin{matrix}1 & 0 & \frac{1}{2} \\ 0 & 1 & \frac{1}{2}\end{matrix}\right)$

11. The integral of $\frac{x}{\left(x+1\right)\left(x-1\right)}$ in decomposed fractions equals:

$\int\left(\frac{\frac{1}{2}}{x+1}+\frac{\frac{1}{2}}{x-1}\right)dx$

12. The integral of the sum of two or more functions is equal to the sum of their integrals:

$\int\frac{\frac{1}{2}}{x+1}dx+\int\frac{\frac{1}{2}}{x-1}dx$

13. We can solve the integral $\int\frac{\frac{1}{2}}{x+1}dx$ by applying integration by substitution (also called u-substitution). First, identify a section within the integral to set equal to a new variable $u$, chosen so that the integral becomes easier; $x+1$ is a good candidate:

$u=x+1$

14. Now, in order to rewrite $dx$ in terms of $du$, differentiate the equation above:

$du=dx$

15. Substitute $u$ and $du$ into the integral and simplify:

$\int\frac{\frac{1}{2}}{u}du+\int\frac{\frac{1}{2}}{x-1}dx$

16. The integral $\int\frac{\frac{1}{2}}{u}du$ results in $\frac{1}{2}\ln\left|x+1\right|$ (after substituting $u=x+1$ back).

17. The integral $\int\frac{\frac{1}{2}}{x-1}dx$ results in $\frac{1}{2}\ln\left|x-1\right|$ (by the same substitution, with $u=x-1$).

18. Gather the results of all integrals:

$\frac{1}{2}\ln\left|x+1\right|+\frac{1}{2}\ln\left|x-1\right|$

19. Since this is an indefinite integral, add the constant of integration $C$:

$\frac{1}{2}\ln\left|x+1\right|+\frac{1}{2}\ln\left|x-1\right|+C_0$
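As a quick sanity check, SymPy reproduces the same antiderivative (assuming SymPy is available; it may print an algebraically equivalent form):

```python
from sympy import symbols, integrate, log, simplify

x = symbols('x')
result = integrate(x / (x**2 - 1), x)
print(result)  # typically log(x - 1)/2 + log(x + 1)/2

# The difference from the answer above should simplify to 0 (up to a constant)
proposed = log(x + 1) / 2 + log(x - 1) / 2
print(simplify(result - proposed))
```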
### Main topic: Integrals by partial fraction expansion
https://nakamoto.com/merkle-trees/
In our last lesson, we looked at cryptographic hash functions and attacks against them. Cryptographic hash functions are used in almost every protocol that guarantees integrity and authentication. But in a blockchain setting, one of their most important applications is in Merkle trees.
Merkle trees were invented by Ralph Merkle, one of the forefathers of modern cryptography. Though he patented the Merkle tree in 1982, the patent on them has long expired. Merkle trees are used widely in many applications, including Git, BitTorrent, ZFS, the Certificate Transparency framework, and of course, pretty much every cryptocurrency.
Let's walk through a simple example of how we might use a Merkle tree in an application.
### Building up a Merkle tree
Say we're designing a file sharing protocol. We'll assume the files we're sharing are large—say, Linux distros.
In this protocol, once a user finishes downloading a file, they need to somehow verify it wasn't corrupted in transit. TCP itself can correct most random errors, but even so, corruption errors are common when dealing with files of this size. How can we ensure integrity at the application layer?
Here's an idea: let's ship a cryptographic hash alongside the Linux ISO. That way, after the user is done downloading the ISO, they can hash it and check if the digests match. Hashing is pretty fast—even on a single core you can hash hundreds of megabytes per second.
But what if the hashes don't match? How do you know where the error in the file was?
Actually, you have no way of knowing where the error was. You have to throw out all of the 2GB and restart the download, hoping this time the corruption doesn't happen. This seems like a lousy solution.
Here's an idea: what if we break up the data into blocks and ship hashes for each block?
This is nice! Now if we have a random corruption in our data, instead of downloading all 2GB over again, we can check which of the 250MB blocks came out corrupted and then only re-download that block. The downside is that we need to ship all 8 hashes alongside the download:
Block 1 digest: 743bacf85062c45f533060e3abb3dc1f02683269
Block 2 digest: 4db3539ecf02e69ffacec45751129e38bdfa469e
Block 3 digest: 0f34d88b134aece384b3d079996d08ee7eadecfd
Block 4 digest: 5bbc8a852490d2f9d9e0067f408950539af0bda9
Block 5 digest: 2ba7cb929bc68c6706b8be03946add92e941999d
Block 6 digest: ee389b56b4a6ff4407df5ffd5f675c2fa73edd22
Block 7 digest: d446b49425506732cfd707865908a99a1ab15ba7
Block 8 digest: 1f706fc900b2c4543525ea1434a201d169004a3d
For 250MB blocks this is probably fine, but if we want smaller blocks to minimize the impact of corruptions, then we need more than 8 hashes. If we wanted 128KB blocks, we'd need 15,000 hashes, and if we wanted 8KB blocks, we'd need 250,000 hashes. This becomes unwieldy pretty fast.
Here's where Merkle trees come in. Merkle trees are a kind of cryptographic accumulator. Cryptographic accumulators allow you to compact arbitrarily many pieces of data into a constant amount of space. In other words, a Merkle tree lets us represent arbitrarily many blocks while only transmitting a single hash across the wire.
Merkle trees are also known as hash trees, because they hash data upwards in a tree. It's easy to explain in code—here's how you can create a hash tree with only two elements.
```python
from hashlib import sha1

def h(s): return sha1(s.encode()).hexdigest()  # hashing helper function

block1 = "Block 1"
block2 = "Block 2"

digest1 = h(block1)
digest2 = h(block2)

root = h(digest1 + digest2)
print(root)
# d1c6d4f28135f428927a1248d71984a937ee543e
```
(This diagram uses the notation h(1, 2) for legibility, but it's actually h(h(1) + h(2)).)
By concatenating the two digests and taking their hash, the root of the hash tree commits to both digests. Think about it: if there were some other way to produce this same root, then that would imply the existence of a hash collision. Hash collisions should be impossible for a strong cryptographic hash function. Thus, the root of this hash tree, known as the Merkle root, must be a unique identifier of this exact tree.
(If you don't follow this argument, play around with another example in code! This intuition is really important, and we'll continue to build on it.)
The Merkle root is therefore an accumulator over all of the original data that was hashed to produce this tree. It also commits to that data in order, since we used string concatenation on the underlying blocks to combine their values. (If you had used a commutative operation like addition or XOR instead of concatenation, then technically you could've switched the order of some blocks and gotten the same root. This is undesirable, so don't make that mistake.)
How does this scale up to many blocks though? Pretty simple. We repeat this same operation across the data in layers until we get a single root.
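To make the layering concrete, here is a minimal Python sketch in the spirit of the snippet above. The odd-level handling is an assumption of this sketch; different designs pad, duplicate, or promote the last digest, and the resulting root depends on that choice:

```python
from hashlib import sha1

def h(s):
    return sha1(s.encode()).hexdigest()

def merkle_root(blocks):
    level = [h(b) for b in blocks]           # hash the leaves first
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])          # assumption: duplicate the last digest
        # combine each adjacent pair by hashing the concatenated digests, in order
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

print(merkle_root(["Block %d" % i for i in range(1, 9)]))
```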
So the root of this tree is 6c2d5a56f541df426366aebb4db927113016387a. Notice that if you modified any element of the tree, even by 1 bit, then the avalanche effect of the hash would cause every hash upstream to change, all the way up to the root.
Now, say we downloaded the Linux distro along with its Merkle root (a single hash). We recompute the Merkle tree over the Linux distro on our side, and we find that our root doesn't match the one we were provided. This means our file is corrupted.
How can we quickly diagnose which of the blocks we downloaded was faulty? See if you can figure this out for yourself.
Here's the answer: we have to request the two hashes below the root in the canonical Merkle tree, and figure out which hash doesn't match up with our client-side tree. Once we've figured it out which subtree is faulty, we can repeat this for the two children of that subtree, and so on until we reach the base. Assuming there's a single faulty block, this will let you pinpoint that block with only $O(\log{n})$ comparisons (where $n$ is the number of underlying data blocks).
### Inclusion proofs
We've seen how powerful Merkle trees are for verifying file integrity. But the real power of cryptographic accumulators comes not just in accumulating data, but in then being able to efficiently prove claims about the data.
Imagine an accumulator as an opaque box full of items. You can't directly look inside this box, but with the magic of cryptography, you can query it in specific ways.
One of the operations you can perform with a cryptographic accumulator is an inclusion proof. This is a small proof that decisively attests that an item exists in the accumulator.
If you know the Merkle root of an e-book, how can I efficiently prove to you that a certain quotation comes from that e-book? I can do this without providing you the entire e-book or even the entire Merkle tree.
Take a moment and see if you can sketch out how to do this without reading on.
Have an idea? The animation below demonstrates the answer for a simple 4-word e-book.
We only need to provide the data we're proving exists, the Merkle root, and sibling hashes along the path from the leaf up to the root. This should require only $O(\log{n})$ hashes to transmit over the wire. If you redo all of the hashing and the roots match, you will know with certainty that quotation was indeed part of the e-book. This kind of proof is known as a Merkle proof.
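For concreteness, one plausible shape for the verifier (a sketch only; the `(sibling, side)` encoding of the proof path is an assumption here, not a standard wire format):

```python
from hashlib import sha1

def h(s):
    return sha1(s.encode()).hexdigest()

def verify_inclusion(leaf, proof, root):
    # proof: list of (sibling_digest, side) pairs from the leaf level up,
    # where side says whether the sibling sits to the 'left' or 'right' of us
    digest = h(leaf)
    for sibling, side in proof:
        digest = h(sibling + digest) if side == 'left' else h(digest + sibling)
    return digest == root
```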
You should be asking: why is this sufficient for an inclusion proof? What if someone just makes up the sibling hashes to make the roots match? How do we know they came from the real Merkle tree?
I'll leave this to you to think through for yourself.
### Merkle trees in Bitcoin
Inclusion proofs are a powerful primitive enabled by Merkle trees. They are used extensively in Bitcoin light clients, also known as SPV (Simple Payment Verification) nodes. We'll do a quick sneak preview of how this works.
In Bitcoin, all the transactions that took place in the last ~10 minutes are bundled together into a block and transmitted to everyone in the network. These blocks can be quite large, since they potentially contain thousands of transactions.
To save on bandwidth, Bitcoin pulls off a clever trick: instead of transmitting all of the transactions, the transmitted block only includes a Merkle root of that block's transactions. (In practice, this transmitted data is known as a block header, while the transactions themselves are transmitted separately on request. We'll learn more about this later.)
Since a Merkle root is a cryptographic accumulator over all the underlying ordered data, every block header includes a commitment to all the transactions in that block. Because of this optimization, lightweight clients only need to keep track of block headers and can selectively verify Merkle proofs that a certain transaction was included in a given block. This optimization is essential for mobile phones or web wallets to be able to use the Bitcoin network without having to download everything.
Don't worry if that's confusing; there's a lot of structure to Bitcoin that we'll explain in upcoming lessons. But by now you have gotten a glimpse of how useful Merkle trees can be.
There are many more innovations on Merkle trees that are worth exploring, including proofs of non-inclusion, online updates, and n-ary Merkle trees. We'll provide resources in the additional reading if you want to see how the state of the art has evolved. We will also provide reading on a subtle second preimage attack against a naive implementation of a Merkle tree (though it's not of practical use in a blockchain setting).
### Assignment
In our next assignment, you'll be writing your own implementation of a Merkle tree. You'll then be writing code to verify Merkle proofs.
Save your code from this, as you may find it useful for your cryptocurrency project if you choose to implement a Merkle tree.
Once you've completed the coding assignment, you're ready to move on.
http://mathematica.stackexchange.com/questions/37130/how-to-update-package
# How to update package?
When I use << PhysicalConstants` I get a message like: General::obspkg: PhysicalConstants is outdated... see the Compatibility Guide for more information... The question is, how do I update it? My version is Mathematica 9.0.
For those who would still prefer to work with the old style of units, but with many added features, I still sell the ExtendUnits package from my web site. It still has the old PhysicalConstants built in. – David Park Nov 16 '13 at 4:43
It can't be updated. The Physical Constants package is obsolete as of 9.0 and is no longer updated. You can ignore that warning message and still use it if you want, but mostly the same functionality is now provided by the Units framework (new in 9.0).
In addition to the warning message you saw, there's a note at the top of the Physical Constants package guide page to this effect. That note is suffixed with an unfortunately barely visible link to the Units framework guide page.
http://forum.allaboutcircuits.com/threads/nodal-kvl.30931/
# nodal KVL
Discussion in 'Homework Help' started by stupid, Dec 1, 2009.
1. ### stupid Thread Starter Active Member
I1 = V1/6
6 - 10I1 - 6(I1 - I2) = 0
6(I2 - I1) + 2 = 0
did i miss anything?
thanks
2. ### ELECTRONERD Senior Member
Using source transformations, I get an output voltage of 3.75V based on KVL. I'm not very experienced in this type of thing but I think that's the answer. Do you have an answer to verify my assumption?
Austin
3. ### ELECTRONERD Senior Member
On second thought it could actually be 6.5V, I'm not sure.
Austin
4. ### thakid87 Active Member
What is that component in the second parallel branch supposed to be?
It is labeled 1/4 V1.
5. ### ELECTRONERD Senior Member
I think it's 1.5V, or an actual quarter of 6V (V1). That's what I used at least.
Austin
6. ### The Electrician AAC Fanatic!
There are two things labeled V1; the 6 volt battery, and the voltage across the 6Ω resistor R2.
I think it would make more sense for the 1/4 V1 controlled source to be referring to the voltage across R2.
7. ### stupid Thread Starter Active Member
hi, a clarification:
The Electrician was right.
(1/4)V1 is with respect to the 6Ω resistor, not the 6 V voltage source.
Is I2 = 2 A?
8. ### The Electrician AAC Fanatic!
Denote the node at the top of R2 as node A.
There are 4 branches connected to A, and we can sum the currents in those 4 branches to zero, using the convention that a current leaving a node is taken as positive. Each branch current can be given as a simple application of Ohm's law:
(V1-6)/10 + V1/6 - V1/4 + 2 = 0
Rearranging, we have:
V1*(1/6 + 1/10 - 1/4) = 6/10 - 2
The solution is V1 = -84 volts.
From this, we can determine that I1 = 9 amps in the direction shown.
I2 = 84/4 + 2 = 23 amps in the direction shown.
The current from the V1/4 dependent source is in the direction opposite to that shown by the arrow in the source symbol, because V1 is negative.
9. ### stupid Thread Starter Active Member
Thank you, The Electrician.
Is there any supermesh element in the circuit?
If so, could the V1/4 current source be the constraint?
Please see attached.
the formula by KVL,
mesh 1 in clockwise direction
6 - 10I1 - 6(I1 + V1/4 - 2) = 0 (eq. 1)
mesh 2 in clockwise direction
6(2 - V1/4 + 2 - I1 - V1/4 + 2) = 0 (eq. 2)
regards,
stupid
10. ### The Electrician AAC Fanatic!
I don't think there is a supermesh.
I merged the 2 amp source into the dependent source, so that the dependent source becomes V1/4-2.
Then the equation for the left hand loop is:
-6 + I1*(10+6) - I2*(6) = 0
The second equation is a constraint equation:
I2 = -(V1/4 - 2)
since V1 = 6*(I1 - I2), this becomes:
I2 + 6*(I1 - I2)/4 - 2 = 0
Solving these two equations gives:
I1 = 9
I2 = 23
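A quick numeric check of those two equations (a small NumPy sketch; it assumes the mesh currents are labeled I1 and I2 as above):

```python
import numpy as np

# 16*I1 - 6*I2 = 6        (left-hand loop)
# 1.5*I1 - 0.5*I2 = 2     (constraint, from I2 + 6*(I1 - I2)/4 - 2 = 0)
A = np.array([[16.0, -6.0],
              [1.5, -0.5]])
b = np.array([6.0, 2.0])
I1, I2 = np.linalg.solve(A, b)
print(I1, I2)         # 9.0 23.0
print(6 * (I1 - I2))  # V1 = -84 volts, matching the nodal solution
```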
11. ### stupid Thread Starter Active Member
The Electrician,
I remember a thread posted here dealing with a supermesh.
See attached. The working:
Combine meshes 2 & 3 by KVL,
5(I2 - I3) + 15I3 = 0
It is quite similar to the one discussed here.
Could you kindly tell me what constitutes a supermesh?
regards,
stupid
12. ### The Electrician AAC Fanatic!
Imagine that you have a 1 amp and a 2 amp current source in series, feeding a node of some network. What is the current being fed into that node? Is it 1 amp, or is it 2 amps? The current is indeterminate; it can't be solved if the current sources are ideal. But, if there is a resistance in parallel with both sources, or even one of them, then the current can be solved.
On the other hand, if you have two current sources in parallel, then the current from the combination is just the sum of the two individual sources. No problem in this case.
Similarly, if you have two ideal voltage sources in parallel feeding a network and one source is 10 volts and the other is 15 volts, what is the voltage feeding the network? It can't be determined if the sources are ideal. But, if there is a resistance in series with one or both, then the voltage can be determined. (In the real world, the resistance of the wires serves to make the voltage determinate. The currents may be very large, perhaps large enough to cause damage.)
Looking at the network in post #1 of this thread, if you go around the right hand loop you have two current sources in series and the loop current is indeterminate. But, if you combine them as two current sources in parallel, the problem goes away. I wouldn't try to treat those two current sources as a supermesh.
In the circuit of post #11, you don't have a situation where going around a loop puts two current sources in series; there's a resistor in there. Then the supermesh concept doesn't lead to an indeterminate current.
13. ### stupid Thread Starter Active Member
My course material states: "When a current source is contained in 2 meshes or is not connected in parallel with a resistance, a supermesh is created by excluding the current source & any element connected in series with it."
Comparing the circuit in post #11 against the above statement: aren't those 2 resistors in parallel with the current source, and thus not a supermesh?
regards,
stupid
14. ### The Electrician AAC Fanatic!
I said "In the circuit of post #11, you don't have a situation where going around a loop puts two current sources in series; there's a resistor in there. Then the supermesh concept doesn't lead to an indeterminate current."
I am saying, in effect, that the supermesh concept works in this case (the circuit of post #11 case).
That is, the circuit of post #11 does contain a supermesh. Did you think I was saying that it does not contain a supermesh?
15. ### stupid Thread Starter Active Member
hi The Electrician,
I have taken note of your post #12 and agree with it.
However, I am also looking at the particular statement quoted from my course material in my last post.
I wonder whether that statement is true, or whether I am misinterpreting it?
According to that statement, the circuit in post #11 seems not to agree with it.
regards,
stupid
16. ### The Electrician AAC Fanatic!
In what way does it not agree? The statement says, in part, "...When a current source is contained in 2 meshes...a supermesh is created".
In the circuit of #11, the current source is "contained in 2 meshes", isn't it?
17. ### stupid Thread Starter Active Member
How about the part "...or is not connected in parallel with a resistance, a supermesh is created..."?
The dependent current source is in parallel with R2 & R3, isn't it?
18. ### The Electrician AAC Fanatic!
It says "or is not connected in parallel with a resistance, a supermesh is created...", not "and is not connected in parallel with a resistance, a supermesh is created..."
The first part of the description is satisfied, so we have a supermesh.
https://www.vcalc.com/wiki/KurtHeckman/Nails+for+Wall+Sheathing
# Nails for Wall Sheathing
The Nails Needed for Sheathing on a Wall calculator computes the approximate number of nails needed to nail 4x8 sheets to wall studs, based on the dimensions of the wall, the spacing of nails on the edges and in the field of each 4x8 sheet, and whether blocking or clips are used.
INSTRUCTION: Choose units and enter the following:
• (L) The length of the wall
• (H) The height of the wall
• (eS) The edge spacing of nails.
• (fS) The field spacing of nails.
• (BC) Choose Blocking or Clips
Number of Nails (N): The calculator returns the number of nails needed. A typical box of nails may have 5,000 nails.
### 4x8 sheets on a wall
Using the length and height, the calculator computes the square footage of the wall. It then computes the number of 4x8 sheets required to cover the wall. Modern construction techniques take advantage of the time saving achieved through using pneumatic nail guns. Nail guns like the RHF9021NS Air Framing Nailer from BN Products are used to rapidly nail sheeting to wall studs saving time and money.
### Blocking or Clips
The number of nails that will be needed is determined by the pattern of nails. There are two different patterns, based on whether there is blocking that allows continuous edges or clips are used. The diagram shows the two patterns, and a rough estimate is sketched below.
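For illustration only, here is a rough Python sketch of this kind of estimate; the 16-inch stud spacing, the per-sheet nailing pattern, and the `nails_for_wall` function itself are assumptions of the sketch rather than the actual vCalc formula:

```python
import math

def nails_for_wall(length_ft, height_ft, edge_spacing_in=6, field_spacing_in=12):
    """Rough estimate; assumes 4x8 sheets, edge nails around each sheet's
    perimeter, and field nails on two interior studs at 16" on center.
    The real calculator's blocking-vs-clips patterns will differ."""
    sheets = math.ceil((length_ft * height_ft) / 32)  # one 4x8 sheet covers 32 sq ft
    perimeter_in = 2 * (4 + 8) * 12                   # sheet perimeter in inches
    edge_nails = math.ceil(perimeter_in / edge_spacing_in)
    field_nails = 2 * math.ceil(8 * 12 / field_spacing_in)  # 2 interior studs, 8 ft tall
    return sheets * (edge_nails + field_nails)

print(nails_for_wall(40, 8))  # e.g. a 40 ft x 8 ft wall -> 640 nails with these defaults
```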
https://se.mathworks.com/help/optim/ug/fit-ode-problem-based-least-squares.html
# Fit ODE, Problem-Based
This example shows how to find parameters that optimize an ordinary differential equation (ODE) in the least-squares sense, using the problem-based approach.
### Problem
The problem is a multistep reaction model involving several substances, some of which react with each other to produce different substances.
For this problem, the true reaction rates are unknown. So, you need to observe the reactions and infer the rates. Assume that you can measure the substances for a set of times $t$. From these observations, fit the best set of reaction rates to the measurements.
### Model
The model has six substances, $C_1$ through $C_6$, that react as follows:
• One $C_1$ and one $C_2$ react to form one $C_3$ at rate $r_1$
• One $C_3$ and one $C_4$ react to form one $C_5$ at rate $r_2$
• One $C_3$ and one $C_4$ react to form one $C_6$ at rate $r_3$
The reaction rate is proportional to the product of the quantities of the required substances. So, if $y_i$ represents the quantity of substance $C_i$, then the reaction rate to produce $C_3$ is $r_1 y_1 y_2$. Similarly, the reaction rate to produce $C_5$ is $r_2 y_3 y_4$, and the reaction rate to produce $C_6$ is $r_3 y_3 y_4$.
In other words, the differential equation controlling the evolution of the system is
$$\frac{dy}{dt}=\begin{bmatrix} -r_1 y_1 y_2 \\ -r_1 y_1 y_2 \\ r_1 y_1 y_2 - r_2 y_3 y_4 - r_3 y_3 y_4 \\ -r_2 y_3 y_4 - r_3 y_3 y_4 \\ r_2 y_3 y_4 \\ r_3 y_3 y_4 \end{bmatrix}.$$
Start the differential equation at time 0 at the point $y(0)=[1,1,0,1,0,0]$. These initial values ensure that all of the substances react completely, causing $C_1$ through $C_4$ to approach zero as time increases.
### Express Model in MATLAB
The `diffun` function implements the differential equations in a form ready for solution by `ode45`.
`type diffun`
```function dydt = diffun(~,y,r) dydt = zeros(6,1); s12 = y(1)*y(2); s34 = y(3)*y(4); dydt(1) = -r(1)*s12; dydt(2) = -r(1)*s12; dydt(3) = -r(2)*s34 + r(1)*s12 - r(3)*s34; dydt(4) = -r(2)*s34 - r(3)*s34; dydt(5) = r(2)*s34; dydt(6) = r(3)*s34; end ```
The true reaction rates are $r_1 = 2.5$, $r_2 = 1.2$, and $r_3 = 0.45$. Compute the evolution of the system for times zero through five by calling `ode45`.
```rtrue = [2.5 1.2 0.45]; y0 = [1 1 0 1 0 0]; tspan = linspace(0,5); soltrue = ode45(@(t,y)diffun(t,y,rtrue),tspan,y0); yvalstrue = deval(soltrue,tspan); for i = 1:6 subplot(3,2,i) plot(tspan,yvalstrue(i,:)) title(['y(',num2str(i),')']) end```
### Optimization Problem
To prepare the problem for solution in the problem-based approach, create a three-element optimization variable `r` that has a lower bound of `0.1` and an upper bound of `10`.
`r = optimvar('r',3,"LowerBound",0.1,"UpperBound",10);`
The objective function for this problem is the sum of squares of the differences between the ODE solution with parameters `r` and the solution with the true parameters `yvals`. To express this objective function, first write a MATLAB function that computes the ODE solution using parameters `r`. This function is the `RtoODE` function.
`type RtoODE`
```function solpts = RtoODE(r,tspan,y0) sol = ode45(@(t,y)diffun(t,y,r),tspan,y0); solpts = deval(sol,tspan); end ```
To use `RtoODE` in an objective function, convert the function to an optimization expression by using `fcn2optimexpr`. See Convert Nonlinear Function to Optimization Expression.
`myfcn = fcn2optimexpr(@RtoODE,r,tspan,y0);`
Express the objective function as the sum of squared differences between the ODE solution and the solution with true parameters.
`obj = sum(sum((myfcn - yvalstrue).^2));`
Create an optimization problem with the objective function `obj`.
`prob = optimproblem("Objective",obj);`
View the problem by calling `show`.
`show(prob)`
```
OptimizationProblem :

    Solve for:
        r

    minimize :
        sum(sum((RtoODE(r, extraParams{1}, extraParams{2}) - extraParams{3}).^2, 1))

    extraParams{1}: 1x100 vector of time points from 0 to 5
        0    0.0505    0.1010    ...    4.9495    5.0000

    extraParams{2}:
        1    1    0    1    0    0

    extraParams{3}: 6x100 matrix of target values yvalstrue
        1.0000    0.8879    0.7984    ...    0.0748    0.0741
        ...
        0         0.0013    0.0046    ...    0.2225    0.2229

    variable bounds:
        0.1 <= r(1) <= 10
        0.1 <= r(2) <= 10
        0.1 <= r(3) <= 10
```
### Solve Problem
To find the best-fitting parameters `r`, give an initial guess `r0` for the solver and call `solve`.
```r0.r = [1 1 1]; [rsol,sumsq] = solve(prob,r0)```
```Solving problem using lsqnonlin. Local minimum found. Optimization completed because the size of the gradient is less than the value of the optimality tolerance. ```
```rsol = struct with fields: r: [3x1 double] ```
```sumsq = 3.8660e-15 ```
The sum of squared differences is essentially zero, meaning the solver found parameters that cause the ODE solution to match the solution with true parameters. So, as expected, the solution contains the true parameters.
`disp(rsol.r)`
``` 2.5000 1.2000 0.4500 ```
`disp(rtrue)`
``` 2.5000 1.2000 0.4500 ```
### Limited Observations
Suppose that you cannot observe all the components of `y`, but only the final outputs `y(5)` and `y(6)`. Can you obtain the values of all the reaction rates based on this limited information?
To find out, modify the function `RtoODE` to return only the fifth and sixth ODE outputs. The modified ODE solver is in `RtoODE2`.
`type RtoODE2`
```function solpts = RtoODE2(r,tspan,y0) solpts = RtoODE(r,tspan,y0); solpts = solpts([5,6],:); % Just y(5) and y(6) end ```
The `RtoODE2` function simply calls `RtoODE` and then takes the final two rows of the output.
Create a new optimization expression from `RtoODE2` and the optimization variable `r`, the time span data `tspan`, and the initial point `y0`.
`myfcn2 = fcn2optimexpr(@RtoODE2,r,tspan,y0);`
Modify the comparison data to include outputs 5 and 6 only.
`yvals2 = yvalstrue([5,6],:);`
Create a new objective and new optimization problem from the optimization expression `myfcn2` and the comparison data `yvals2`.
```obj2 = sum(sum((myfcn2 - yvals2).^2)); prob2 = optimproblem("Objective",obj2);```
Solve the problem based on this limited set of observations.
`[rsol2,sumsq2] = solve(prob2,r0)`
```Solving problem using lsqnonlin. Local minimum possible. lsqnonlin stopped because the final change in the sum of squares relative to its initial value is less than the value of the function tolerance. ```
```rsol2 = struct with fields: r: [3x1 double] ```
```sumsq2 = 2.1616e-05 ```
Once again, the returned sum of squares is essentially zero. Does this mean that the solver found the correct reaction rates?
`disp(rsol2.r)`
``` 1.7811 1.5730 0.5899 ```
`disp(rtrue)`
``` 2.5000 1.2000 0.4500 ```
No; in this case, the new rates are quite different from the true rates. However, a plot of the new ODE solution compared to the true values shows that `y(5)` and `y(6)` match the true values.
```figure plot(tspan,yvals2(1,:),'b-') hold on ss2 = RtoODE2(rsol2.r,tspan,y0); plot(tspan,ss2(1,:),'r--') plot(tspan,yvals2(2,:),'c-') plot(tspan,ss2(2,:),'m--') legend('True y(5)','New y(5)','True y(6)','New y(6)','Location','northwest') hold off```
To identify the correct reaction rates for this problem, you must have data for more observations than `y(5)` and `y(6)`.
Plot all the components of the solution with the new parameters, and plot the solution with the true parameters.
```figure yvals2 = RtoODE(rsol2.r,tspan,y0); for i = 1:6 subplot(3,2,i) plot(tspan,yvalstrue(i,:),'b-',tspan,yvals2(i,:),'r--') legend('True','New','Location','best') title(['y(',num2str(i),')']) end```
With the new parameters, substances $C_1$ and $C_2$ drain more slowly, and substance $C_3$ does not accumulate as much. But substances $C_4$, $C_5$, and $C_6$ have exactly the same evolution with both the new parameters and the true parameters.
https://bitbucket.org/floledermann/openresources/overview
# OpenResources
OpenResources is a flexible, tag-based database application for Django. It follows a similar approach to OpenStreetMap for creating a collaborative schema-less database based on tags (key-value pairs). OpenResources has been originally developed for Vivir Bien, a mapping platform for solidarity economy resources.
OpenResources comes with "batteries included", which means that you don't only get a Django app but also a set of templates and static files that should give you a starting point and are designed with easy customization in mind.
OpenResources is released under the GNU Affero General Public License (AGPL), which means you can use it for free if you make all (modified) source code available under the AGPL through a link on your site. For details see the file LICENSE.txt .
## Dependencies
All dependencies on other (non-standard) Django applications are optional. At the moment OpenResources is prepared to work with the following 3rd party Django applications:
## Running OpenResources
The enclosed test project allows you to run OpenResources in a local test setup without further installation. Inside the testproject directory, run:
manage.py syncdb
(only the first time, creates database and superuser), then:
manage.py runserver
to run a pre-configured server. Point your browser to http://localhost:8000/ - et voilà!
## Installing OpenResources
Adding OpenResources to a Django setup should be pretty straightforward. The only setting that is required is currently:
AUTH_PROFILE_MODULE = 'openresources.UserProfile'
(We are working on removing this need).
The included templates are expecting the OpenResources media files to be served at {{MEDIA_URL}}openresources/ , so if you want to use (or customize) these you should copy or symlink them accordingly.
## Internationalization
OpenResources uses Transifex for translating user interface elements. If you want to contribute a translation, you are more than welcome!
For translating model fields, Transmeta is used. While in the medium term this should be replaced with something that does not interfere with the database schema of the application (see Issue #1), for now we provide an alternative set of migrations to be used when South is used in combination with Transmeta. To use these migrations, add the following to your settings.py file:
SOUTH_MIGRATION_MODULES = {
'openresources': 'openresources.migrations_transmeta',
}
## Credits / Contributors
The source code of OpenResources is released under the GNU Affero General Public License (AGPL), copyright by the following contributors:
OpenResources incorporates parts of other open source projects:
https://en.wikipedia.org/wiki/Commensurator
# Commensurator
In group theory, a branch of abstract algebra, the commensurator of a subgroup H of a group G is a specific subgroup of G.
## Definition
The commensurator of a subgroup H of a group G, denoted commG(H) or, by some authors, comm(H),[1] is the set of all elements g of G that conjugate H and leave the result commensurable with H. In other words,
${\displaystyle \mathrm {comm} _{G}(H)=\{g\in G:gHg^{-1}\cap H{\text{ has finite index in both }}H{\text{ and }}gHg^{-1}\}.}$[2]
## Properties
• commG(H) is a subgroup of G.
• commG(H) = G for any compact open subgroup H of a topological group G.
https://webmasters.stackexchange.com/questions/88884/can-rewriterules-work-without-carets-or-dollar-signs/88887
# Can rewriterules work without carets or dollar signs?
Currently when I remap my friendly to non-friendly URLs on my website, I normally use lines like these in my .htaccess:
```
RewriteEngine On
RewriteCond %{ENV:REDIRECT_STATUS} !^$
RewriteRule .* - [L]

RewriteRule ^afolder/subfolder/(.*)$ /internal.php?Q1=$1 [L]
RewriteRule ^afolder2/subfolder2/(.*)$ /internal.php?Q2=$1 [L]
....
RewriteRule ^something/else/(.*)$ /internal.php?Qn=$1 [L]
```

As you can see, I'm remapping http://example.com/whatever/whatever/value to http://example.com/internal.php?Q(something)=value. The point I'm making is that with every RewriteRule statement, I'm used to starting the search with a caret and ending it with a dollar sign, even for the simplest rules. For example, to map http://example.com/anything to http://example.com/something.php, I use:

```
RewriteRule ^anything$ /something.php [L]
```
My question is, when could I get away with NOT using carets or dollar signs to remap my URLs?
I'm asking because I want to boost the overall speed of my website, and every millisecond I shave off of processing time makes clients happier, (P.S. I need to shave off about 12ms loading time from the other end of North America then I'll meet the needs of google) and if I can remove those carets and/or dollar signs, then I feel I'll get a boost since the mod_rewrite engine will have a simpler string to process.
• The carets and dollar signs are anchors. Without them, the match can happen anywhere along the value to be matched against. If you want a very specific match, one or more anchor should be used. If you want to match anywhere, then neither anchor should be used. There is little or no gain in speed in using anchors or not. It is better to use an anchor for safety. Anchors do not require recursion during matches so they are technically faster, though you are shaving extremely small amounts of time. Go for safety first, convenience second, speed last. – closetnoc Jan 17 '16 at 18:43
• So technically removing an anchor is asking for trouble. Thanks. – Mike -- No longer here Jan 17 '16 at 18:47
• Not always. It just means that a recursive match anywhere within the string can happen. If there is only one case, then it will match only once. But if you go temporarily senile or insane and forget a case, you can get more than one match. That is why I stress safety first. It can be more risky to not include at least one anchor (often the caret) unless you intend to match anywhere. – closetnoc Jan 17 '16 at 18:52
You are using the ^ and $ (anchors in regex speak) because you are matching the whole URL, which is what most people want to do, so this is the most common example you see. If you omit the ^ and/or $ anchors then you are only going to be matching part of the URL. E.g. anything$ is going to match "anything" at the end of the URL - this could match too many URLs (depending on your URL structure) and possibly allow invalid URLs to be matched. However, if you need to match "anything" at the end of the URL then this is the correct regex to use. You basically need to use the appropriate regex for what you are trying to match. Speed/efficiency, for the most part, doesn't really come into it.

RewriteRule ^afolder/subfolder/(.*)$ /internal.php?Q1=$1 [L]

If you omitted the ^ anchor from this pattern then a user could potentially request /foo/afolder/subfolder/... and it will be caught by this rule. Incidentally, the $ (end of string anchor) is superfluous here (because of the .* pattern) and can be omitted.
Removing the anchors is not going to make any difference with speed. In fact, removing the ^ (start) anchor could even make it less efficient, since the regex parser now needs to search the entire URL. With a ^ anchor it can fail on the first character mismatch, which is more efficient.
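To make the difference concrete, a small sketch (the paths here are invented for illustration, not taken from the question):

# Anchored: matches only URLs that begin with afolder/subfolder/
RewriteRule ^afolder/subfolder/(.*) /internal.php?Q1=$1 [L]

# Unanchored: also matches /foo/afolder/subfolder/... because the
# match may start anywhere in the URL-path
RewriteRule afolder/subfolder/(.*) /internal.php?Q1=$1 [L]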
|
2021-04-22 14:55:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5820870399475098, "perplexity": 1507.5156232077525}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039610090.97/warc/CC-MAIN-20210422130245-20210422160245-00165.warc.gz"}
|
http://www.phy.ntnu.edu.tw/ntnujava/msg.php?id=6540
|
[code]
The effect is - Upper ball moved, moving ball bounced back
[/code]
Yes. There is something wrong with your code.
If the incoming ball just collides with the upper one, it should move in the direction to collide with the second ball. (The incoming ball should not bounce back???)
In real life,usually, the incoming ball will hit one of the ball first then collide with the second one.
If you want to simulate the incomng ball collide with the other two balls at the same time:
Assume the incoming ball has initial velocity $v_0$.
Since the sizes of the balls are the same, and they all have the same mass,
if the incoming ball hits the other two balls at the same time, the interaction force between the incoming ball and either of the other two balls is along the direction connecting the two balls' centers.
So the direction is 45 degrees or -45 degrees. This means the force has the same magnitude in the x direction and in the y direction, so the momentum changes in the x and y directions are the same.
So the velocity for the other two balls just after the collision are
$\vec{v_a}=v \hat{x} + v\hat{y}$ and $\vec{v_b}=v \hat{x} - v\hat{y}$
and assume the velocity for the incoming ball after the collision is $v'$
Conservation of momentum: $v_0= 2 v+v'$
Conservation of energy: $\tfrac{1}{2}mv_0^2= \tfrac{1}{2}m v_a^2+\tfrac{1}{2}m v_b^2+\tfrac{1}{2}m v'^2=2mv^2+\tfrac{1}{2}m v'^2$
which imply $v_0^2=4 v^2+v'^2$
So the result are: $v'=0, v_0=2v$
which mean that the incoming ball with initial velocity $v_0$ will be stopped ($v'=0$) just after the collision, and the other two balls will move with velocity
$\vec{v_a}=\tfrac{1}{2}v_0 \hat{x} + \tfrac{1}{2} v_0\hat{y}$
and
$\vec{v_b}=\tfrac{1}{2}v_0 \hat{x} - \tfrac{1}{2}v_0 \hat{y}$
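A quick numeric check (my addition, not in the original post): take $v_0=2$, so $v=1$ and $v'=0$. Momentum in x: $v_0=v+v+v' \Rightarrow 2=1+1+0$. Momentum in y: $0=v-v$. Energy: $\tfrac{1}{2}v_0^2=2v^2+\tfrac{1}{2}v'^2 \Rightarrow 2=2+0$. Everything balances.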
However, this kind of situation normally will not happen in real life.
|
2017-10-18 12:57:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 12, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7006880044937134, "perplexity": 437.9820618421872}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822966.64/warc/CC-MAIN-20171018123747-20171018143747-00361.warc.gz"}
|
http://corochann.com/basic-image-processing-tutorial-1220.html
|
# Basic image processing tutorial
Basic image processing for deep learning. Refer github for the source code.
The sample image is obtained from PEXELS.
If you are not familiar with image processing, you can read this article before going to convolutional neural network.
OpenCV is image processing library which supports
• converting image color format (RGB, YUV, Gray scale etc)
• resize
and other useful image processing functionality.
To install opencv, execute
\$conda install -c https://conda.binstar.org/menpo -y opencv3
• cv2.imread for loading image.
• cv2.imwrite for save image.
• plt.imshow for plotting, and plt.savefig for save plot image.
The OpenCV image format is usually a 3-dimensional array (or a 2-dimensional array if the image is gray scale).
1st dimension is for height, 2nd dimension is for width, 3rd dimension is for channel (RGB, YUV etc).
To convert color format cv2.cvtColor can be used. Details are written in next section.
image.shape (Height, Width, Channel) = (380, 512, 3)
out_plt.jpg
## Change color format
• cv2.cvtColor for converting color format.
Note that OpenCV version 3 reads the image color in the order B, G, R. However, matplotlib deals with the image color in the order R, G, B. So you need to convert the color order; refer to the readRGBImage function.
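The readRGBImage function itself is not shown on this page; the following is a minimal sketch of what it presumably does (the file name is hypothetical):

```python
import cv2
import matplotlib.pyplot as plt

def readRGBImage(imagepath):
    image = cv2.imread(imagepath)                   # OpenCV loads in B, G, R order
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # reorder to R, G, B for matplotlib
    return image

image = readRGBImage('sample.jpeg')  # hypothetical file name
plt.imshow(image)
plt.savefig('out_plt.jpg')
```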
If the image is gray scale, the image is 2 dimensional array
1st dimension is for height, 2nd dimension is for width.
gray_image.shape (Height, Width) = (380, 512)
out_gray.jpg
## Resize
• cv2.resize for resizing.
Note that size should be specified in the order width, height.
image.shape (Height, Width, Channel) = (380, 512, 3)
half_image.shape (Height, Width, Channel) = (190, 256, 3)
resized128_image.shape (Height, Width, Channel) = (95, 128, 3)
out_resized128.jpg
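A minimal sketch of resize calls that would produce the shapes above (file name hypothetical; the image is assumed already loaded):

```python
import cv2

image = cv2.imread('sample.jpeg')      # hypothetical file, shape (380, 512, 3)
height, width = image.shape[:2]

# cv2.resize expects the target size as (width, height)
half_image = cv2.resize(image, (width // 2, height // 2))          # -> (190, 256, 3)
resized128_image = cv2.resize(image, (128, height * 128 // width))  # -> (95, 128, 3)
cv2.imwrite('out_resized128.jpg', resized128_image)
```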
## Crop
• numpy slicing can be used for cropping image
cropped_image.shape (Height, Width, Channel) = (190, 190, 3)
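A sketch of the slicing that yields that shape (the exact crop region is my guess):

```python
import cv2

image = cv2.imread('sample.jpeg')    # hypothetical file, shape (380, 512, 3)
# numpy slicing: image[y_start:y_end, x_start:x_end]
cropped_image = image[0:190, 0:190]  # top-left 190x190 region -> (190, 190, 3)
```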
## Image processing with channels
RGB channel manipulation.
Understanding the meaning of “channel” is important in deep learning. Below code provides some insight that what each channel represents.
RGB_gray.jpg
RGB_color.jpg Each R,G,B channel is shown in R, G, B color respectively.
|
2017-08-19 01:47:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3476005494594574, "perplexity": 10901.61640509163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105291.88/warc/CC-MAIN-20170819012514-20170819032514-00074.warc.gz"}
|
https://codereview.stackexchange.com/questions/112707/fizzbuzz-in-t-sql/113195
|
# FizzBuzz in T-SQL
I've written a simple FizzBuzz in TSQL using some IF loops.
DECLARE @i int = 1
DECLARE @str varchar(8) = '';
WHILE @i <= 30 BEGIN
SET @str = ''
IF @i % 3 = 0
BEGIN
SET @str = 'FIZZ'
END
IF @i % 5 = 0
BEGIN
SET @str = @str + 'BUZZ'
END
PRINT str(@i) + ': ' + @str
SET @i = @i + 1
END
I wasn't sure the best practice with the IF loops.
• IF is not a loop – Gentian Kasa Dec 3 '15 at 9:07
In SQL (any flavor) you usually want to avoid loops, if at all possible, in favor of set-based operations. However, in your case, there's not really a set, so-to-speak, other than a series of ints 1 to 30. Also, it's quite unusual in any SQL to use the console for anything besides routine messages like how many rows were affected by a query.
Still, just for the sake of learning, let's go ahead and make a (temporary) data set, @FizzBuzzNumbers, and put the numbers in it.
DECLARE @FizzBuzzNumbers TABLE (number INT);
DECLARE @i INT = 1, @max INT = 30;
WHILE @i <= @max
BEGIN
INSERT INTO @FizzBuzzNumbers (number) VALUES (@i);
SET @i = @i+1;
END
Then we can select from that set and apply the FizzBuzz using case. (note that converting the number to a string is needed, since a column/field can only have one type)
There is no need to print the results either, since when you select them they will be shown in output automatically.
DECLARE @Fizz INT = 3, @Buzz INT = 5;
SELECT
CASE
WHEN (number % @Fizz = 0) and (number % @Buzz = 0) THEN 'FizzBuzz'
WHEN (number % @Fizz = 0) THEN 'Fizz'
WHEN (number % @Buzz = 0) THEN 'Buzz'
ELSE CONVERT(VARCHAR(8), number)
END AS [FizzBuzz Results]
FROM @FizzBuzzNumbers;
Demo on SEDE
Now if you insisted on keeping the loop, which is more familiar to traditional programmers (but not idiomatic SQL, or "SQLic") you should still go with the faster case instead of if, as it is faster but also reads a lot easier.
case statements in SQL are much more flexible in the conditions you can make them try to match than what you would expect in most traditional languages. (on the other hand, the logic on the other side of the case, as in, what to do when a case is matched, is extremely simplistic, so you often need if/else type logic for more complex things).
• You have some valid points in here, but you miss the general approach which allows for extending this as you aren't joining the Fizz and the Buzz and only calculating them once. – holroy Dec 7 '15 at 20:38
Here's a pretty succinct way to do it. I use a recursive common-table expression to fill in a table of integers from 1 to 100.
with tbl (idx)
as
(
select 1
union all
select idx + 1 from tbl where idx < 100
)
select
case
when idx % 15 = 0 then 'fizzbuzz'
when idx % 3 = 0 then 'fizz'
when idx % 5 = 0 then 'buzz'
else cast(idx as varchar(10))
end
from tbl
Here's an updated version that only runs the two modulus calculations once and uses string concatenation:
with tbl (idx) as
(
select 1
union all
select idx + 1 from tbl where idx < 100
),
tbl2 (idx, isFizz, isBuzz) as
(
select idx, iif(idx % 3 = 0, 'Fizz', ''), iif(idx % 5 = 0, 'Buzz', '')
from tbl
)
select
iif(
len(isFizz) > 0 or len(isBuzz) > 0,
isFizz + isBuzz,
cast(idx as varchar(10))
) as result
from tbl2
The first table uses a recursive CTE to generate a table with 100 rows of integers in it. The second table adds two columns that calculate the Fizz and Buzz values. The final select puts everything together using the lengths of the Fizz and Buzz columns as a guide.
Here's I think a better solution than my second solution. It uses a table filled with 3s and a table filled with 5s which together obviate the need for modulus calculations. I think that since, as the other answerer mentioned, we should be using set-based operations, this is a better solution. In the final select, I use a call to coalesce with a string concatenation to put everything together.
with t3 (idx, word) as
(
select 3, 'Fizz'
union all
select idx + 3, word from t3 where idx < 100
),
t5 (idx, word) as
(
select 5, 'Buzz'
union all
select idx + 5, word from t5 where idx < 100
),
t0 (idx) as
(
select 1
union all
select idx + 1 from t0 where idx < 100
)
select coalesce(t3.word + t5.word, t3.word, t5.word, cast(t0.idx as varchar(10)))
from t0
left outer join t3 on t3.idx = t0.idx
left outer join t5 on t5.idx = t0.idx
order by t0.idx
The further out I abstract some of this, the uglier my code gets. I still prefer my first solution.
• I miss the general approach which allows for extending this as you aren't joining the Fizz and the Buzz and only calculating them once. – holroy Dec 7 '15 at 20:52
• @holroy, I updated my answer to provide a solution that I think addresses your concerns. I think you're missing the point however. The FizzBuzz question is not meant to judge a candidate's coding design abilities. Answers are not meant to follow SOLID principles, or DRY or YAGNI or whatever. It's meant to make sure the candidate can write code. That's it. – user2023861 Dec 7 '15 at 22:16
• re "It's meant to make sure the candidate can write code" : Our main use of FizzBuzz is to test candidates about how they code (and we mainly wait a unit test at end), so I would not be so categorical about not used to test design/coder practices . (btw, interesting approaches) – Tensibai Dec 8 '15 at 14:35
• @Tensibai, would you only be happy if a solution looked like this gist.github.com/stuhacking/1259421 ? I 100% disagree. In my interviewing experience, I've found that the simple FizzBuzz question does actually weed out candidates. Why? Maybe they can't code, or they aren't prepared, or they jump to conclusions. FizzBuzz shines a light on these kinds of candidates. The problem with using it to judge a candidate's design skills is that FizzBuzz is too well-known. It's better to ask something that forces the candidate to think. – user2023861 Dec 8 '15 at 14:47
• @user2023861 I don't get where you go with this link, but no it's not what I expect. I just meant I would not be categorical about its use. Of course it can't be the only point to judge someone coding practices exactly because it is well known. We use it in interview, not as pre-selection case, along others things to have an idea on how the candidate think about a problem and start coding. – Tensibai Dec 8 '15 at 15:02
|
2020-02-24 21:32:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4192711412906647, "perplexity": 3504.321560333961}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145981.35/warc/CC-MAIN-20200224193815-20200224223815-00076.warc.gz"}
|
https://chem.libretexts.org/Courses/University_of_West_Georgia/CCHEM_1152K%3A_Survey_of_Chemistry_II/06%3A_Carboxylic_Acids_and_Esters
|
# 6: Carboxylic Acids and Esters
• 6.1: Prelude to Organic Acids and Bases and Some of Their Derivatives
Organic acids have been known for ages. Prehistoric people likely made acetic acid when their fermentation reactions went awry and produced vinegar instead of wine. The Sumerians (2900–1800 BCE) used vinegar as a condiment, a preservative, an antibiotic, and a detergent.
• 6.2: Carboxylic Acids - Structures and Names
Simple carboxylic acids are best known by common names based on Latin and Greek words that describe their source (e.g., formic acid, Latin formica, meaning “ant”). Greek letters, not numbers, designate the position of substituted acids in the common naming convention. IUPAC names are derived from the LCC of the parent hydrocarbon with the -e ending of the parent alkane replaced by the suffix -oic and the word acid.
• 6.3: The Formation of Carboxylic Acids
Whether in the laboratory or in the body, the oxidation of aldehydes or primary alcohols forms carboxylic acids.
• 6.4: Physical Properties of Carboxylic Acids
Carboxylic acids have high boiling points compared to other substances of comparable molar mass. Boiling points increase with molar mass. Carboxylic acids having one to four carbon atoms are completely miscible with water. Solubility decreases with molar mass.
• 6.5: Chemical Properties of Carboxylic Acids- Ionization and Neutralization
Soluble carboxylic acids are weak acids in aqueous solutions. Carboxylic acids neutralize bases to form salts.
• 6.6: Esters - Structures and Names
An ester has an OR group attached to the carbon atom of a carbonyl group.
• 6.7: Physical Properties of Esters
Esters have polar bonds but do not engage in hydrogen bonding and are therefore intermediate in boiling points between the nonpolar alkanes and the alcohols, which engage in hydrogen bonding. Ester molecules can engage in hydrogen bonding with water, so esters of low molar mass are therefore somewhat soluble in water.
• 6.8: Preparation of Esters
Esters are made by the reaction of a carboxylic acid with an alcohol, a process that is called esterification.
• 6.9: Hydrolysis of Esters
Hydrolysis is a most important reaction of esters. Acidic hydrolysis of an ester gives a carboxylic acid and an alcohol. Basic hydrolysis of an ester gives a carboxylate salt and an alcohol.
• 6.10: Esters of Phosphoric Acid
Inorganic acids such as $$H_3PO_4$$ form esters. The esters of phosphoric acid are especially important in biochemistry.
|
2021-10-22 13:25:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5807086825370789, "perplexity": 11611.808112098688}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585507.26/warc/CC-MAIN-20211022114748-20211022144748-00347.warc.gz"}
|
https://courses.cs.cornell.edu/cs100j/2004sp/Projects/P6/p6sp04b.html
|
### CS100J Spring 2004 Project 6 Part B Due Thursday 5/6 at 3pm
#### 0. Objective
Completing all tasks in this MATLAB assignment will help you learn about:
• vectors and vectorized code
• loops
• plots
First skim, and then carefully read the entire assignment before starting any tasks!
#### 1. Eeeeeeeee!
Functions often can be approximated by infinite series. As more terms are added in the sequence, the approximation becomes better (usually). The exponential function ex can be approximated by the series

$e^x = \sum_{n=0}^{\infty}\frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$

The notation n! represents the factorial of the number n, n!=1*2*3...*n, 0!=1. The MATLAB function factorial performs this computation.
We will use the MATLAB function exp to calculate the "true" value of ex for a given value of x. The difference between the true and approximated values is the approximation error. The tolerance is the amount of error that we are willing to accept. Usually, we are willing to accept this error due to practical limitations on computing time or memory. The smaller the tolerance we choose, the more accurate the approximation becomes.
Part (a): Write a program (`eee.m`) that uses the series shown above to approximate e0.5. The program should start by approximating e0.5 with just the first term of the series and add the additional terms one by one until a tolerance of 0.001 is satisfied. Use a loop! The program should show one line of output for each additional term used. This line of output should display the number of terms used, the approximated value of e0.5, and the approximation error. Below is an example of what the first few lines of output may look like:
```
No. of Terms    Approximation    Error
1               1.000000         0.648721
2               1.500000         0.148721
```
The above is an example of the output format, not the actual solution. Your output should have the same components but does not need to have exactly the same format.
Part (b):Now that you have a program to approximate ex, let's experiment with the tolerance! Use six values of tolerance: 0.1, 0.01, 0.001, 0.0001, 0.00001, 0.000001. Do not submit M-files for Part (b). Instead, type a table (in plain text format) that shows how many terms of the series are needed for each tolerance value. Put this table at end of the program from Part (a) as a comment block.
#### 2. Loop or vectorized code?
Write two programs to evaluate the equation
$y(x) = x^2 - 3x + 2$
for all values of x between 0.1 and 3, in steps of 0.1. Save the values of y in a vector.
• Program 1: (`loop.m`) Use a for loop without using vectorized code. In this program, each iteration of the loop calculates one value of y for one value of x.
• Program 2: (`vectorized.m`) Write vectorized code. No loops! Draw a plot of the function values for $0.1 \le x \le 3$. The code should label the axes and give the plot a title. Read Section 10 in MATLAB Essentials (4/29 handout, p.4), and/or type `help plot` in the command window, to learn about plotting. The code given in the 4/29 handout shows an example of using MATLAB function `plot`. A sketch of both approaches follows this list.
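A hedged sketch of the two programs (variable names are my own, not an official solution):

```matlab
% loop.m -- one y value per loop iteration (sketch)
x = 0.1:0.1:3;
y = zeros(size(x));
for k = 1:length(x)
    y(k) = x(k)^2 - 3*x(k) + 2;
end

% vectorized.m -- no loops: elementwise operations on the whole vector (sketch)
x = 0.1:0.1:3;
y = x.^2 - 3*x + 2;
plot(x, y)
xlabel('x')
ylabel('y(x)')
title('y(x) = x^2 - 3x + 2')
```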
#### 3. Submitting Your Work
Submit your files `eee.m`, `loop.m`, and `vectorized.m` on-line using CMS (Course Management System) before the project deadline. Make sure you are submitting the correct, up to date files. We will not accept any files after the deadline for any reason (except for documented medical reasons). See the CMS link on the web page for instructions on using CMS.
|
2017-12-15 13:44:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6156721711158752, "perplexity": 989.5952297083932}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948572676.65/warc/CC-MAIN-20171215133912-20171215155912-00158.warc.gz"}
|
https://socratic.org/questions/how-do-you-simplify-square-root-125-square-root-1-5-square-root-49-5
|
# How do you simplify square root 125 + square root 1/5 - square root 49/5?
##### 1 Answer
Oct 7, 2015
$\sqrt{125} + \sqrt{\frac{1}{5}} - \sqrt{\frac{49}{5}} = \frac{19}{\sqrt{5}}$
#### Explanation:
$\sqrt{125} = \sqrt{25 \cdot 5} = \sqrt{25 \cdot \frac{25}{5}} = \sqrt{{25}^{2} / 5} = \frac{\sqrt{{25}^{2}}}{\sqrt{5}} = \frac{25}{\sqrt{5}}$
$\sqrt{\frac{1}{5}} = \frac{\sqrt{1}}{\sqrt{5}} = \frac{1}{\sqrt{5}}$
$\sqrt{\frac{49}{5}} = \sqrt{{7}^{2} / 5} = \frac{\sqrt{{7}^{2}}}{\sqrt{5}} = \frac{7}{\sqrt{5}}$
$\rightarrow \sqrt{125} + \sqrt{\frac{1}{5}} - \sqrt{\frac{49}{5}} = \frac{25}{\sqrt{5}} + \frac{1}{\sqrt{5}} - \frac{7}{\sqrt{5}} = \frac{25 + 1 - 7}{\sqrt{5}} = \frac{19}{\sqrt{5}}$
|
2022-09-30 12:52:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9889252781867981, "perplexity": 7521.584682621816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00303.warc.gz"}
|
https://socratic.org/questions/how-do-you-find-the-domain-and-range-of-1-x-7
|
# How do you find the domain and range of 1/(x-7)?
Apr 12, 2017
$x \in \mathbb{R} , x \ne 7$
$y \in \mathbb{R} , y \ne 0$
#### Explanation:
$\text{let } y = \frac{1}{x - 7}$
The denominator of y cannot equal zero as this would make y undefined. Equating the denominator to zero and solving gives the value that x cannot be.
$\text{solve " x-7=0rArrx=7larrcolor(red)" excluded value}$
$\Rightarrow \text{ domain is } x \in \mathbb{R} , x \ne 7$
$\text{Rearrange the function to make x the subject}$
$\Rightarrow y \left(x - 7\right) = 1$
$\Rightarrow x y - 7 y = 1$
$\Rightarrow x y = 1 + 7 y$
$\Rightarrow x = \frac{1 + 7 y}{y}$
$\Rightarrow y = 0 \leftarrow \textcolor{red}{\text{ is the excluded value}}$
$\Rightarrow \text{range is } y \in \mathbb{R} , y \ne 0$
|
2020-03-31 15:59:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 12, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9740054607391357, "perplexity": 1255.523605709495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370502513.35/warc/CC-MAIN-20200331150854-20200331180854-00191.warc.gz"}
|
http://mathhelpforum.com/geometry/134664-finding-length-segment.html
|
# Math Help - Finding the length of segment
1. ## Finding the length of segment
This one has been eating at me for a long time and I just can't figure it out:
In the figure below, ACDF is a parallelogram. Segment AE is perpendicular to line CD, and segment DB is perpendicular to line AC. The length of DB is 6 meters, segment FA is 9 meters and segment AC is 15 meters. How many meters long is segment AE?
How do I show/prove that it's 10 meters?
2. Originally Posted by harold
This one has been eating at me for a long time and I just can't figure it out:
In the figure below, ACDF is a parallelogram. Segment AE is perpendicular to line CD, and segment DB is perpendicular to line AC. The length of DB is 6 meters, segment FA is 9 meters and segment AC is 15 meters. How many meters long is segment AE?
How do I show/prove that it's 10 meters?
1. Calculate the sine of the angle at C in the right triangle BCD:
$\sin(C)=\frac69$
2. Calculate the sine of the angle at C in the right triangle ACE:
$\sin(C)=\frac69=\frac{|\overline{AE}|}{|\overline{AC}|}$
3. Plug in all values you know: AC = 15, sin(C) to calculate
$|\overline{AE}| = \frac69 \cdot 15 = 10$
3. Thanks so much earboth--it was driving me crazy!!
4. Alternatively, you could recognize the similar triangles, so therefore
(9/15) = (6/x), thus x=10
|
2014-07-12 09:17:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7416011691093445, "perplexity": 760.5493067783651}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776432860.32/warc/CC-MAIN-20140707234032-00094-ip-10-180-212-248.ec2.internal.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/calculus/calculus-10th-edition-anton/chapter-9-infinite-series-9-1-sequences-exercises-set-9-1-page-606/23
|
## Calculus, 10th Edition (Anton)
General term: $$\bigg\{ \frac{2n-1}{2n} \bigg\}_{n=1}^{+\infty}$$ The sequence converges to $1$.
As can be seen from the question, the numerators are odd numbers whereas the denominators are all even numbers. So, the $n^{th}$ odd number is given by $(2n-1)$ and the $n^{th}$ even number is given by $(2n)$. Thus, the general term is $$\frac{2n-1}{2n}$$ The limit $$\lim_{n\to\infty}{ \frac{2n-1}{2n}}=1$$ because the degree of n in both numerator and denominator is the same (i.e., $1$) and the leading coefficients are equal, giving $2/2=1$.
|
2019-12-09 07:19:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9947110414505005, "perplexity": 161.38484153481397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540518337.65/warc/CC-MAIN-20191209065626-20191209093626-00227.warc.gz"}
|
https://pytorch.org/docs/master/generated/torch.atanh.html
|
# torch.atanh¶
torch.atanh(input, *, out=None) → Tensor
Returns a new tensor with the inverse hyperbolic tangent of the elements of input.
Note
The domain of the inverse hyperbolic tangent is (-1, 1) and values outside this range will be mapped to NaN, except for the values 1 and -1 for which the output is mapped to +/-INF respectively.
$\text{out}_{i} = \tanh^{-1}(\text{input}_{i})$
Parameters
input (Tensor) – the input tensor.
Keyword Arguments
out (Tensor, optional) – the output tensor.
Example:
>>> a = torch.randn(4).uniform_(-1, 1)
>>> a
tensor([ -0.9385, 0.2968, -0.8591, -0.1871 ])
>>> torch.atanh(a)
tensor([ -1.7253, 0.3060, -1.2899, -0.1893 ])
|
2020-12-01 09:03:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4951537251472473, "perplexity": 4037.562847087553}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141672314.55/warc/CC-MAIN-20201201074047-20201201104047-00050.warc.gz"}
|
https://www.physicsforums.com/threads/projectile-motion-with-air-resistance-and-ball-of-mass.201491/
|
# Homework Help: Projectile motion with air resistance and ball of mass
1. Nov 29, 2007
### rfg
1. The problem statement, all variables and given/known data
A ball of mass "m" is thrown vertically upward with a velocity of "vi." It experiences a force of air resistance given by F=-kv, where "k" is a positive constant. The positive direction for all vector quantities is upward. Does it take longer for the ball to rise to its maximum height or to fall from its maximum height back to the height from which it was thrown?
2. Relevant equations
I calculated the velocity as a function of time as $$\left(\frac{mg}{k}+v_i\right)e^{-kt/m} = \frac{mg}{k} + v$$
3. The attempt at a solution
I believe that the projectile would take longer to fall to its initial position, because as it rises, the force of air friction and gravity work against the velocity. While falling, only air friction opposes the velocity.
Last edited: Nov 29, 2007
2. Nov 29, 2007
### Erwin Kreyszig
For this question you have to think about the deceleration due to air resistance. Would that affect both the up and down directions? Then consider gravity as a deceleration on the way up; but is it not an acceleration on the way down?
Try using the equation $$V_{final} = V_{initial} + at$$
rearrange for t, then you should see an obvious result. (initial velocity is Vi on the way up, and obviously 0 on the way down)
3. Nov 29, 2007
### rfg
I apologize if I seem incompetent, but I don't entirely follow. I understand that the drag force will cause the magnitude of the acceleration to decrease throughout the motion. The initial acceleration will have a magnitude of g+kvi/m. As velocity diminishes, this value will reach g when v=0 (at the top of the trajectory). During the fall, acceleration still has a magnitude of g+kv/m, however velocity is now negative, thus as velocity increases, acceleration decreases. But why doesn't gravity work as an acceleration during the fall? Isn't it working in the same direction of the velocity?
4. Nov 29, 2007
### PhanthomJay
You are pretty much correct in your thinking, except that during the downward fall, the acceleration is g -kv/m downward, (gravity acts down , the air resistance acts up). During the upward journey, the acceleration is g +kv/m downward, since both gravity and the air resistance forces act dowm. Bottom line is that downward journey takes longer, as you had initially noted.
5. Nov 29, 2007
### sephirothrr
Maybe I'm way off, but shouldn't the times be even?
Wouldn't the initial force imparted upon the ball change things?
On the way up, you have your force being counter-acted by gravity and air resistance.
On the way down, you have just gravity vs. air resistance.
This is just my logic here.
6. Nov 29, 2007
### PhanthomJay
There is an apparent error in the problem statement, as I see it; the ball has an initial velocity vi, not an initial force vi. For sure, there must be an initial force imparted to the ball by the motion of the thrower's hand, but the start point of this problem is at the point of release, where only gravity and air resistance act in both directions; no other forces act during the upward or downward flight. The ball decelerates non-uniformly and rapidly (at more than g) during the upward path, then accelerates downward non-uniformly at less than g (possibly reaching a = 0 if it reaches terminal velocity) during the downward journey.
7. Nov 29, 2007
### sephirothrr
But the problem says:
It is a force here.
8. Nov 29, 2007
### rfg
I believe I have the problem figured out, thank you for the help all those who posted.
Btw, vi is an initial velocity, not a force. This was a typo, sorry.
My conclusion is that the object does indeed take longer to fall. The times cannot be equal because that is not an answer choice. My reasoning is as follows:
While on the rise, the drag force works in the same direction as gravity, thus the resulting deceleration of the object is greater than the ideal 9.8m/s/s. Thus, the velocity diminishes faster and reaches zero (the top of the trajectory) faster than it would without friction. While falling the drag acts opposite gravity, thus the downward acceleration is less than the ideal 9.8m/s/s. Hence, the fall takes longer than it would without friction.
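For reference (my addition, derived from the velocity equation quoted at the top of the thread): setting $v=0$ in $$\left(\frac{mg}{k}+v_i\right)e^{-kt/m}=\frac{mg}{k}+v$$ gives the time to reach maximum height, $$t_{up}=\frac{m}{k}\ln\left(1+\frac{kv_i}{mg}\right)$$ which reduces to the familiar $t_{up}=v_i/g$ as $k\to 0$.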
|
2018-12-14 18:58:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7437331080436707, "perplexity": 842.0710230022381}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826306.47/warc/CC-MAIN-20181214184754-20181214210754-00359.warc.gz"}
|
https://first.wpi.edu/FRC/roborio/release/docs/cpp/classfrc_1_1Translation2d.html
|
WPILibC++ 2020.3.2-60-g3011ebe
frc::Translation2d Class Reference
Represents a translation in 2d space. More...
#include <Translation2d.h>
## Public Member Functions
constexpr Translation2d ()=default
Constructs a Translation2d with X and Y components equal to zero.
Translation2d (units::meter_t x, units::meter_t y)
Constructs a Translation2d with the X and Y components equal to the provided values. More...
units::meter_t Distance (const Translation2d &other) const
Calculates the distance between two translations in 2d space. More...
units::meter_t X () const
Returns the X component of the translation. More...
units::meter_t Y () const
Returns the Y component of the translation. More...
units::meter_t Norm () const
Returns the norm, or distance from the origin to the translation. More...
Translation2d RotateBy (const Rotation2d &other) const
Applies a rotation to the translation in 2d space. More...
Translation2d operator+ (const Translation2d &other) const
Adds two translations in 2d space and returns the sum. More...
Translation2d & operator+= (const Translation2d &other)
Adds the new translation to the current translation. More...
Translation2d operator- (const Translation2d &other) const
Subtracts the other translation from the current translation and returns the difference. More...
Translation2d & operator-= (const Translation2d &other)
Subtracts the new translation from the current translation. More...
Translation2d operator- () const
Returns the inverse of the current translation. More...
Translation2d operator* (double scalar) const
Multiplies the translation by a scalar and returns the new translation. More...
Translation2d & operator*= (double scalar)
Multiplies the current translation by a scalar. More...
Translation2d operator/ (double scalar) const
Divides the translation by a scalar and returns the new translation. More...
bool operator== (const Translation2d &other) const
Checks equality between this Translation2d and another object. More...
bool operator!= (const Translation2d &other) const
Checks inequality between this Translation2d and another object. More...
Translation2d & operator/= (double scalar)
## Detailed Description
Represents a translation in 2d space.
This object can be used to represent a point or a vector.
This assumes that you are using conventional mathematical axes. When the robot is placed on the origin, facing toward the X direction, moving forward increases the X, whereas moving to the left increases the Y.
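A minimal usage sketch (my addition, not part of the generated documentation; the include paths follow the usual WPILib layout for this release):

```cpp
#include <frc/geometry/Rotation2d.h>
#include <frc/geometry/Translation2d.h>
#include <units/units.h>

int main() {
  frc::Translation2d a{units::meter_t{2.0}, units::meter_t{0.0}};
  frc::Translation2d b{units::meter_t{1.0}, units::meter_t{2.0}};

  frc::Translation2d sum = a + b;        // Translation2d{3.0 m, 2.0 m}
  units::meter_t dist = a.Distance(b);   // sqrt((1-2)^2 + (2-0)^2) ~= 2.236 m
  frc::Translation2d rotated =
      a.RotateBy(frc::Rotation2d(units::degree_t{90.0}));  // ~Translation2d{0 m, 2 m}

  (void)sum; (void)dist; (void)rotated;  // silence unused-variable warnings
}
```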
## ◆ Translation2d()
frc::Translation2d::Translation2d ( units::meter_t x, units::meter_t y )
Constructs a Translation2d with the X and Y components equal to the provided values.
Parameters
x The x component of the translation.
y The y component of the translation.
## ◆ Distance()
units::meter_t frc::Translation2d::Distance ( const Translation2d & other ) const
Calculates the distance between two translations in 2d space.
This function uses the Pythagorean theorem to calculate the distance. distance = std::sqrt((x2 - x1)^2 + (y2 - y1)^2)
Parameters
other The translation to compute the distance to.
Returns
The distance between the two translations.
## ◆ Norm()
units::meter_t frc::Translation2d::Norm ( ) const
Returns the norm, or distance from the origin to the translation.
Returns
The norm of the translation.
## ◆ operator!=()
bool frc::Translation2d::operator!= ( const Translation2d & other ) const
Checks inequality between this Translation2d and another object.
Parameters
other The other object.
Returns
Whether the two objects are not equal.
## ◆ operator*()
Translation2d frc::Translation2d::operator* ( double scalar ) const
Multiplies the translation by a scalar and returns the new translation.
For example, Translation2d{2.0, 2.5} * 2 = Translation2d{4.0, 5.0}
Parameters
scalar The scalar to multiply by.
Returns
The scaled translation.
## ◆ operator*=()
Translation2d& frc::Translation2d::operator*= ( double scalar )
Multiplies the current translation by a scalar.
This is similar to the * operator, except that current object is mutated.
Parameters
scalar The scalar to multiply by.
Returns
The reference to the new mutated object.
## ◆ operator+()
Translation2d frc::Translation2d::operator+ ( const Translation2d & other ) const
Adds two translations in 2d space and returns the sum.
This is similar to vector addition.
For example, Translation2d{1.0, 2.5} + Translation2d{2.0, 5.5} = Translation2d{3.0, 8.0}
Parameters
other The translation to add.
Returns
The sum of the translations.
## ◆ operator+=()
Translation2d& frc::Translation2d::operator+= ( const Translation2d & other )
Adds the new translation to the current translation.
This is similar to the + operator, except that the current object is mutated.
Parameters
other The translation to add.
Returns
The reference to the new mutated object.
## ◆ operator-() [1/2]
Translation2d frc::Translation2d::operator- ( ) const
Returns the inverse of the current translation.
This is equivalent to rotating by 180 degrees, flipping the point over both axes, or simply negating both components of the translation.
Returns
The inverse of the current translation.
## ◆ operator-() [2/2]
Translation2d frc::Translation2d::operator- ( const Translation2d & other ) const
Subtracts the other translation from the current translation and returns the difference.
For example, Translation2d{5.0, 4.0} - Translation2d{1.0, 2.0} = Translation2d{4.0, 2.0}
Parameters
other The translation to subtract.
Returns
The difference between the two translations.
## ◆ operator-=()
Translation2d& frc::Translation2d::operator-= ( const Translation2d & other )
Subtracts the new translation from the current translation.
This is similar to the - operator, except that the current object is mutated.
Parameters
other The translation to subtract.
Returns
The reference to the new mutated object.
## ◆ operator/()
Translation2d frc::Translation2d::operator/ ( double scalar ) const
Divides the translation by a scalar and returns the new translation.
For example, Translation2d{2.0, 2.5} / 2 = Translation2d{1.0, 1.25}
Parameters
scalar The scalar to divide by.
Returns
The scaled translation.
## ◆ operator==()
bool frc::Translation2d::operator== ( const Translation2d & other ) const
Checks equality between this Translation2d and another object.
Parameters
other The other object.
Returns
Whether the two objects are equal.
## ◆ RotateBy()
Translation2d frc::Translation2d::RotateBy ( const Rotation2d & other ) const
Applies a rotation to the translation in 2d space.
This multiplies the translation vector by a counterclockwise rotation matrix of the given angle.
[x_new]   [other.cos, -other.sin] [x]
[y_new] = [other.sin,  other.cos] [y]
For example, rotating a Translation2d of {2, 0} by 90 degrees will return a Translation2d of {0, 2}.
Parameters
other The rotation to rotate the translation by.
Returns
The new rotated translation.
## ◆ X()
units::meter_t frc::Translation2d::X ( ) const
inline
Returns the X component of the translation.
Returns
The x component of the translation.
## ◆ Y()
units::meter_t frc::Translation2d::Y ( ) const
inline
Returns the Y component of the translation.
Returns
The y component of the translation.
The documentation for this class was generated from the following file:
|
2021-09-23 17:18:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4143925905227661, "perplexity": 6500.923592362472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057427.71/warc/CC-MAIN-20210923165408-20210923195408-00435.warc.gz"}
|
http://www.randygaul.net/2010/11/
|
# Memory Allocation and Linked Lists
Memory Allocation
Memory allocation allows the programmer to allocate a block of memory of a desired size, and then store data into the block, along with initializing a pointer pointing to this block of memory. In order to do so, I'll explain the use of malloc(), realloc(), free(), and calloc().
malloc() will allocate a block of memory, although it will not initialize it, and returns a pointer to the block of memory allocated. The prototype of malloc() looks like: void *malloc(size_t size); — size_t is an integer type defined in the C library, which is an unsigned integer. So, size is just an integer and represents the amount of bytes to be allocated. Since malloc() will return a pointer to a block of allocated memory, you need a pointer in order to make use of a call to malloc(), like so:
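A minimal sketch of such a call (the original snippet is not shown here, so the size of 100 chars is taken from the next sentence):

```c
#include <stdlib.h>

char *p;          /* pointer that will receive the address of the block */
p = malloc(100);  /* allocate 100 bytes (100 chars); contents uninitialized */
if (p == NULL) {
    /* allocation failed -- handle the error */
}
```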
Since a character in C is one byte, the numerical argument for malloc() ends up just being a simple integer. However, if you were allocating memory for any other type of data you would need to use the sizeof operator to determine how much memory to allocate.
Just as a reminder, if you ever need to figure out how many elements an array has, use the following idiom: num_Elements = (sizeof(array) / sizeof(array[0])). That will take the size of all the elements divided by the size of a single element, giving you an integer resultant.
realloc() will change the size of a previously allocated block of memory. The prototype for realloc() is: void *realloc(void *ptr, size_t size); — ptr must point to a block of memory that was previously used in a call to malloc() or realloc(). The size parameter can be either larger or smaller than the original block.
calloc() is similar to malloc() in all ways except in that it initializes the bytes to 0 within the block.
free() is very easy to use; simply pass a pointer that points to a memory block we no longer need, and it will be available for reuse later on.
Linked Lists
A linked list is a chain of structures (called nodes), where each node contains a pointer to the next node in the chain. The last node in the list would contain a null pointer. The advantages of using a linked list over an array are that you can add in nodes anywhere in the list you want, and you can delete nodes anywhere you want. You can also create many types of data structures like graphs and trees using linked lists.
Here are the barebones of a basic node:
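A sketch of such a node (the member names are my own choice):

```c
struct node {
    int value;           /* the data stored in this node */
    struct node *next;   /* pointer to the next node; NULL marks the end */
};
```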
To access a member of a structure through a pointer to a structure there is a "shortcut" operator called the right arrow selection operator ->. The right arrow selection operator allows you to access a member of the value pointed to by a pointer. The following are equal:
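For example (assuming the node structure sketched above):

```c
struct node n;
struct node *p = &n;

(*p).value = 10;   /* dereference the pointer, then select the member */
p->value = 10;     /* right arrow selection operator: same effect */
```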
In order to add a node to the beginning of the linked list, all you need to do is create a temporary variable to hold your new node, assign the value of your new node to the variable, and modify your new node to point to the old first node. Since adding a new node to the beginning of a list is such a common task, I’ll show a sample function on how to do so. In order for a function to directly add a new node to a list, you need to be able to pass a pointer to the function and make it point elsewhere. Since arguments are passed by value, you can’t simply pass a pointer to a function, since the function will then only be able to modify a copy of the pointer passed to it. Instead, you would need to pass a pointer to a pointer, then, you can make the pointer point elsewhere.
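A hedged sketch of such a function (the names add_to_list, list, and n are illustrative):

```c
#include <stdlib.h>

void add_to_list(struct node **list, int n)
{
    struct node *new_node = malloc(sizeof(struct node));
    if (new_node == NULL)
        return;                /* allocation failed; leave the list unchanged */

    new_node->value = n;
    new_node->next = *list;    /* new node points at the old first node */
    *list = new_node;          /* the list pointer now points at the new node */
}
```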
In the above example, list points at the caller's list pointer, which in turn points at the first node of the linked list. Since we pass a pointer to a pointer (when we call this function we would use &first as the first argument — the address of the pointer variable first), we are able to update that pointer to point to our newly added node.
In order to search through a list, from beginning to end, you would usually use a for loop. The idiom is actually very simple. You scan the value and make your comparison on the first node, then make your iteration point to the next node:
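A sketch of the idiom (assuming first points at the head of the list and n is the value being searched for):

```c
struct node *p;

for (p = first; p != NULL; p = p->next) {
    if (p->value == n)
        break;    /* found: p now points at the matching node */
}
```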
Remember how I mentioned that the last node in the list contains a null pointer? The NULL macro defined in stdlib.h has the value of a NULL pointer, and as such the loop will stop on the last node of the list.
The idea behind deleting a node is to search for your desired deletion and make the previous node point to the node directly ahead of your node to be deleted. One method of doing so, is to keep a “trailing pointer”. You scan through each node and keep a pointer to the node previously scanned, so that you can delete the current node and make the previous node point the next one in the list. Here is a sample function that searches for a node with a value inside of it (this value could be the node’s ID, or whatever you are searching for) and deletes the node using free().
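A hedged reconstruction of such a function, matching the description in the next paragraph:

```c
#include <stdlib.h>

struct node *delete_from_list(struct node *list, int n)
{
    struct node *cur, *prev;

    /* all actions happen in the loop header; the body is a null statement */
    for (cur = list, prev = NULL;
         cur != NULL && cur->value != n;
         prev = cur, cur = cur->next)
        ;

    if (cur == NULL)
        return list;              /* n was not found; list unchanged */
    if (prev == NULL)
        list = list->next;        /* n is in the first node */
    else
        prev->next = cur->next;   /* n is in some other node */
    free(cur);
    return list;
}
```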
The for loop in this function actually just has a null statement, because all the actions are done in the iteration portion of the for loop; the previous node is set to the currently being scanned, and the currently being scanned is set to the next node. The for loop stops once the value you are searching for is found, or if the end of the list is found. The two if and the else statements catch all the possible outcomes of the for loop. If the first node in the list is going to be deleted (if previous equals NULL), you need to make your list pointer point to the second node.
# Structures Unions and Enumerations
Structures
Out of the three data types structures are the most important, so I'll start by going over them. Structures are similar to arrays in that they hold data. However, a structure's data is referenced by name rather than numerical index. The member names inside a structure have their own scope; each new structure provides a new name space, so its member names won't clash with names used elsewhere. Here is an example of how to declare a structure with a couple variables of that structure type:
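A sketch (the part-record members are illustrative; LEN stands for a constant defined elsewhere, as discussed below):

```c
#define LEN 25   /* illustrative; the post assumes LEN is defined by the coder */

struct {
    int number;
    char name[LEN + 1];   /* +1 for the terminating null character */
    int on_hand;
} part1, part2;
```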
part1 and part2 are now both the exact same data-type, and each have the same members as each other. A structure can also be initialized while it is declared, like so:
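For instance (the values are illustrative):

```c
struct {
    int number;
    char name[LEN + 1];
    int on_hand;
} part1 = {528, "Disk drive", 10};
```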
Alternatively you can use the designator operator “.” to assign values to structure members, like so:
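A sketch using designators (same illustrative members):

```c
struct {
    int number;
    char name[LEN + 1];
    int on_hand;
} part1 = {.on_hand = 10, .name = "Disk drive", .number = 528};
```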
It doesn’t matter where in the initializer that a member is given a value if it is given a value by a designator. However, is there is no designator, then the value will be given to the corresponding member in the order from top to bottom of the structure, and from left to right of the initializer. LEN is just a constant defined in a preprocessor directive, and could be whatever the coder chose. You must add one to compensate for an end of line null character.
Structures of the same type can be copied with the = operator. However, you cannot use the == or != operators on structures, whether or not they are of the same type. part1 and part2 from the above example are both the exact same structure type. However, in the following example, part1 and part2 are not the exact same, and cannot be used with the = operator:
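A sketch of two declarations that look identical but create distinct anonymous types:

```c
struct {
    int number;
    char name[LEN + 1];
    int on_hand;
} part1;

struct {
    int number;
    char name[LEN + 1];
    int on_hand;
} part2;   /* a different anonymous type: part1 = part2 will not compile */
```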
Since you can use the assignment operator with structures it is a slight surprise that arrays within structures will be copied from one to another. This can be useful for creating “dummy” structures that are used just for the purpose of copying one array to another.
So far I’ve shown two examples of declaring structures without using a structure tag. Once you’ve created a tag for a structure, you can treat a structure as a data type, like so:
It is also possible for a function to use a structure as an argument. However, using a structure as an argument can cause a lot of overhead. It’s actually usually better to pass a pointer to a structure to a function, and then use the pointer to modify the members as needed. Functions can also return structures. Similarly, you might have a function return a pointer to a structure instead of an actual structure.
A structure can also be nested within another structure, as a data member. This is useful to create “categories” of members within a structure. Suppose you have a structure that holds data about computer hardware, and there are a total of four different brands of hardware. You could have each member of the structure represent a type of hardware, and within each structure you could hold information about the hardware brand.
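One way to sketch that idea (all names here are made up for illustration):

struct brand {
    char name[LEN + 1];
};

struct computer {
    struct brand cpu;    /* each member is a type of hardware...     */
    struct brand gpu;    /* ...and each nested structure holds       */
    struct brand ram;    /* information about that component's brand */
    struct brand disk;
};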
Unions
A union is similar to a structure in all ways, except in that the compiler will allocate enough space for only the largest of all the union members. This means that all members of a union will all share the same space in memory. Altering one member of a union will overwrite the data of all others. This means that only one member of a union can hold data at any given time. Unions are usually used as a means of saving space. In my last computer hardware example, a union could have been used in place of a structure for holding the names of the brand, as the hardware usually wouldn’t be made by two different companies at once (as long as the brand name doesn’t go over the LEN limit, which is just a constant that can be defined as any amount you want to specify).
Arrays of unions can also be useful. Suppose you need an array that can hold either integers or floats. You can’t simply create an array that can hold either, since an array must be universally one type. You can however create an array of unions rather easily. Consider this example:
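A sketch (the typedef name Number is made up for illustration):

typedef union {
    int i;
    float f;
} Number;

Number a[1000];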
Now the array a can hold in each element either a float or integer type. Here is how one could assign either a float or integer into the array:
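For instance:

a[0].i = 5;       /* element 0 now holds an int   */
a[1].f = 8.25f;   /* element 1 now holds a float  */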
The biggest problem with using unions is that there is no way to tell which data member was last altered, and thus which member actually holds a value. Oftentimes programmers will nest a union within a structure that has one other data member. This other member acts as a "tag field" so that the programmer can keep track of which union member holds a value.
Enumerations
Enumerations are good for creating new definitions of data types that have only a few different possible values. An enumeration would be good for creating a boolean variable, or perhaps a variable to represent suit in a deck of cards. The benefits of using an enumeration over preprocessor directives for defining such things is that anyone reading your code can easily see all the possible variants of your variable, and see that each one is of the same type. Enumerations can increase readability and code cleanliness.
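For example, a suit enumeration might be declared like this:

enum suit {CLUBS, DIAMONDS, HEARTS, SPADES};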
The above example shows how to set up an enumeration for the different suits of a deck. This is much better than using #define directives, in that it obeys C's scope rules; an enumeration declared within a function won't affect the rest of the program. The members of this enumeration can be used just the same as #define directives.
The members of an enumeration are actually integers. CLUBS is equivalent to the integer 0, and SPADES to 3. One could use the suits defined in the above enumeration just as if DIAMONDS were the integer 1, and so on. You can also specify exactly the integer value that a member will equal, like so:
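A reconstruction consistent with the description that follows (here HEARTS is deliberately placed before DIAMONDS and set to 7, so that DIAMONDS defaults to one more than the previous member):

enum suit {CLUBS, HEARTS = 7, DIAMONDS, SPADES};

enum suit s;
s = CLUBS;   /* s equals 0 */
s++;         /* s now equals 1 */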
CLUBS would default to 0, although DIAMONDS would default to 8, which is one more than the previous member (HEARTS, explicitly set to 7). s was assigned the value of CLUBS (zero), then in the next line was incremented to 1.
Enumerations are perfect creating “tag fields” for unions to determine which of the members of a union were last modified. Here is an example:
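A sketch of such a tagged union (the names are illustrative):

typedef struct {
    enum {INT_KIND, FLOAT_KIND} kind;   /* tag field */
    union {
        int i;
        float f;
    } u;
} Number;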
The above struct can be used in our original union example where an array of unions was created. This structure has the advantage that the programmer can tell whether each array element holds an integer or a float by checking the "tag field" kind.
Sources:
C Programming: A Modern Approach 2nd Ed. (particularly chapter 16)
# Pointers: Basics
A pointer in C is a data type that holds the address of a specific block of memory within your computer's memory. The address in the pointer can be used to modify the contents of that block, or to cycle through adjacent addresses in memory. It is also possible to use pointer arithmetic (addition and subtraction), though you cannot multiply or divide pointers.
Suppose, for example, that P is a pointer that has been declared and initialized with the address 1884. P points to the block of memory between the blocks at addresses 1883 and 1885. C requires that every pointer point to only a specific type. There are no restrictions on what type of data a pointer may reference, though; pointers can even point to other pointers.
int *p; double *q; char *r;
The above shows three ways of declaring a pointer as three different types of data. The only difference between declaring a pointer and a variable, is that a pointer’s identifier must be preceded by the asterisk symbol.
There are two different operators that are very commonly used with pointers. The first, which you've seen in the form of multiplication and in pointer declarations, is the asterisk *, called the dereference operator (also known as the indirection operator). You can translate the * literally into "the value pointed by". So, if we take P from our example above, then *P means "the value pointed by P": *P would equal whatever value is within the block of memory at address 1884. In fact, *P is another alias for the value within the address 1884, because by modifying *P we directly modify the value within that address. Suppose *P is an int value:
*p = 76;
This line of code would change the value within the address 1884 into the integer 76.
The second operator is the Address operator &. & can be translated literally into “the address of”. This operator is particularly useful for assigning a value to a pointer, like so:
int val = 7; int *p; p = &val; //You could also do:
int *p = &val;
Usually you wouldn't know exactly what the address of val is before assigning it to a pointer, as it could be anywhere within memory while your program is running. What is important is that you can assign the address to a pointer.
Uses of Pointers
Imagine you need a function that modifies a variable. You cannot simply pass the variable to the function, since C passes arguments by value: the function receives only a copy of your variable. You could, however, pass a pointer to the function, and then use the dereference operator to directly modify the value pointed to by the address. Consider the following:
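A minimal sketch of the idea (the function name is made up for illustration):

void add_ten(int *p)
{
    *p += 10;   /* modifies the caller's variable through the pointer */
}

/* in the caller: */
int val = 7;
add_ten(&val);   /* val is now 17 */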
You can also have a function return a pointer, like the following:
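For instance:

int *max(int *a, int *b)
{
    if (*a > *b)
        return a;
    else
        return b;
}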
This function, when given pointers to two integers, will return a pointer to whichever integer is larger.
Pointers and arrays are used together all the time. Say we initialize pointer P and make it point to the first element of a[4]:
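In code (using a lowercase p for the pointer):

int a[4];
int *p = &a[0];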
P now points to the first element of array a[]. Suppose we do the following:
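Presumably an assignment through the pointer, such as:

*p = 7;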
a[0] now equals 7: assigning through *p stored the value directly into the array element.
So far this whole process doesn't seem too useful, but where things start getting really useful is when you use pointer arithmetic to cycle through each element of the array using a pointer. C allows the following combinations of pointer arithmetic, and only these combinations:
Adding an integer to a pointer
Subtracting an integer from a pointer
Subtracting one pointer from another pointer
Adding the integer i to the pointer P yields a pointer that points i elements ahead of where P originally pointed. In particular, if P points to a[x], then P + i points to a[x + i] (assuming a[x + i] even exists).
If P points to element a[x], then P - i points to a[x - i].
When one pointer is subtracted from another, the result is the distance in array elements from the two pointers.
It is also valid to compare pointers with the comparisons ==, !=, <=, and >=. However, in order for these comparisons to actually have meaning the two pointers being compared would need to point within the same array.
Pointers are also good for processing arrays, since you can apply addition and subtraction upon pointers. Though one could just as easily use array subscripting for such a task, pointers can be faster and less resource intensive (depending on the compiler; some compilers have no efficiency discrepancy between array subscripting and array processing via pointers).
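A sketch of such a summing loop (VAL and the array contents are illustrative):

#define VAL 10

int a[VAL] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
int sum = 0;
int *p;

for (p = &a[0]; p < &a[VAL]; p++)
    sum += *p;   /* sum is 55 afterwards */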
The above code fragment shows how to sum all elements of an array with p. Note that the loop body will not execute once p equals &a[VAL]; computing the address one past the end of an array is legal in C, and since that element is never actually dereferenced, the loop is well defined.
A very important thing to note is that the name of an array can be used as a pointer to the first element of the array.
In general, a + i is the same as &a[i], and *(a + i) is equivalent to a[i]. The fact that an array name can serve as a pointer also makes it easier to process arrays with for loops.
When passing an array to a function, the compiler passes a pointer to the first element in the array to the function. This is important to know.
Using all that I've explained so far, you can write loops to process both rows and columns of 2D arrays using pointers, like so (processing a row):
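A sketch (NUM_ROWS, NUM_COLS, and the row index i are assumed to be defined elsewhere; the loop clears row i):

int a[NUM_ROWS][NUM_COLS];
int *p;

for (p = &a[i][0]; p < &a[i][NUM_COLS]; p++)
    *p = 0;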
Processing a column isn't as simple, since a 2D array is really an array of arrays, that is, an array of rows.
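A sketch of the column loop described next (again NUM_ROWS, NUM_COLS, and the column index i are assumed; the loop clears column i):

int a[NUM_ROWS][NUM_COLS];
int (*p)[NUM_COLS];   /* p is a pointer to an array of NUM_COLS ints */

for (p = a; p < a + NUM_ROWS; p++)
    (*p)[i] = 0;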
I have declared p to be a pointer to an array of integers, which is used as a row in the loop. The parentheses around *p are necessary; otherwise p would be an array of pointers rather than a pointer to an array. In the expression p = a, a is equal to the address of a[0]. We know this from recalling the earlier quote:
In general, a + i is the same as &a[i]. Also, *(a + i) is equivalent to a[i].
Sources for this post:
C Programming: A Modern Approach, 2nd Ed. (particularly chapter 12)
# 2D Array Practice
In the book C Programming: A Modern Approach, Second Edition there is a programming project at the end of chapter 8 that asks you to write a program that creates a randomized walk across a 10×10 field, where each step is shown as a letter from the alphabet, and each blank space is shown as a period. The end result should look something like this:
a . . . . . . . . .
b c . . . . . . . .
e d . . . p q r . .
f . . . . o . s . .
g h i j . n u t . .
. . . k l m v . . .
. . . . . . w . . .
. . . . z y x . . .
. . . . . . . . . .
. . . . . . . . . .
In order to do this, you initialize a 2D array, fill it with periods, randomly choose a number between 0-3 to represent a direction to move, detect if the move was a valid move (and re-randomize the move if it wasn’t), then make the move chosen. Each time a move is chosen, use a new letter from the alphabet.
Overall, this was extremely simple. If I were writing this program without such specific rules as the programming book gave me, I would have made the array 12×12 instead of 10×10. This would allow me to line the edges with a value other than '.', and I could use that edge value to detect collisions. This is a lot simpler than detecting whether a coordinate is at the edge of the array, since an expression like board[y][x + 1] != '.' would read outside the array when x is already at the edge.
I'll post the source code of the most interesting part of the program:
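A reconstruction consistent with the description that follows (board, row, and column are assumed to be declared elsewhere in the program):

for (row = 0; row < 10; row++) {
    for (column = 0; column < 10; column++)
        printf("%c ", board[row][column]);
    printf("\n");
}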
This cycles through the first row and prints out all of its contents. It does this by adding one to column during each iteration and printing out the piece of the board at the coordinates (row, column). The same thing then happens for each successive value of the row token, so the next row is printed, and the next. Just before these two for loops I modify the current field and place a letter at the current coordinates, so that the loops print out the move that just occurred.
Overall I didn't learn anything new, but I still needed to write this program in order to get used to the syntax of C. Here is the source and .exe for the full program.
# PRNGs (Pseudo-Random Number Generator)
Computers cannot truly generate random numbers, as computers are simply mechanisms that react to actions that are enacted upon them. In order to compensate for this, to generate a random number computers use a PRNG (pseudo random number generator). PRNGs can generate seemingly random numbers rather effectively.
One way to generate a random number in C is to use the rand() function, which lies within the standard library of C. Here is an example:
#include <stdio.h>   /* included for printf */
#include <stdlib.h>  /* included for rand */

int main(void)
{
    int i;

    for (i = 0; i < 10; i++)
        printf("%i\n", rand());

    return 0;
}
Depending on your compiler, this program will output ten numbers. However, it will always output the same ten numbers in the same order. I've learned that the GCC compiler has rand() return values with an upper bound of 2,147,483,647 (2^31 - 1), compared to an upper bound of 32,767 (2^15 - 1) in many other compilers.
In order to have this program output different numbers each time it is run, you need to do what is called seeding the PRNG. Seeding will affect the sequence in which the rand() function begins outputting numbers. In order to seed the rand() function, you use srand(). Here is an example:
#include <stdio.h>   /* included for printf */
#include <stdlib.h>  /* included for rand and srand */

int main(void)
{
    int i;

    srand(1);
    for (i = 0; i < 10; i++)
        printf("%i\n", rand());

    return 0;
}
This seeded the PRNG with the integer 1, and the program will now output a different set of ten numbers than the last program. However, this still doesn't solve our problem: how do we randomly seed the PRNG? In order to do so, you can use the time() function, which returns the number of seconds elapsed since January 1st, 1970. This is the method of random seeding that DigiPen has shown its incoming freshman students. Example:
#include <stdio.h>   /* included for printf */
#include <stdlib.h>  /* included for rand and srand */
#include <time.h>    /* included for time */

int main(void)
{
    int i;

    srand(time(0));
    for (i = 0; i < 10; i++)
        printf("%i\n", rand());

    return 0;
}
Now this program will produce a different set of numbers every time it is run. But what if you wanted to produce a random number within a specific range? You could use the modulo operator, like so:
#include <stdio.h>   /* included for printf */
#include <stdlib.h>  /* included for rand and srand */
#include <time.h>    /* included for time */

int main(void)
{
    int i;

    srand(time(0));
    for (i = 0; i < 10; i++)
        printf("%i\n", rand() % 10 + 1);   /* random int from 1-10 */

    return 0;
}
However, I've been told that using the modulo operator directly is error-prone and tedious, and that it is much preferred to create your own wrapper function around rand():
int randomInt(int low, int high)
{
    return (rand() % (high - low + 1) + low);
}
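A quick usage sketch of the wrapper:

srand(time(0));               /* seed once, at program start */
int roll = randomInt(1, 6);   /* a random integer from 1 to 6 */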
|
2015-01-27 08:19:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3236408531665802, "perplexity": 584.9058778737902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115862141.23/warc/CC-MAIN-20150124161102-00106-ip-10-180-212-252.ec2.internal.warc.gz"}
|
http://unionclean.com/s5pacz/da6dca-rational-numbers-definition-with-example
|
## Rational Numbers: Definition with Examples
Every integer is a rational number: for example, 5 = 5/1. The set of all rational numbers is often referred to as "the rationals", the field of rationals, or the field of rational numbers. A rational number is a number that is equal to the quotient of two integers p and q (with q not equal to zero); in other words, a rational number can be expressed as a fraction whose numerator and denominator are both integers. For example, the integers -5 and -7 can be written as -5/1 and -7/1, so they are rational numbers. Likewise, 2/3 and 5/2 are rational numbers, but they are not integers: if we divide 1 by 2 we get the rational number 1/2, if we divide 2 by 3 we get 2/3, and 3/8, 4/6, and 9/21 are further examples. It is not necessary that each rational number be a whole number: 3/2 is a rational number, and it simply means that the integer 3 is divided by the integer 2. Each fraction is a rational number, and rational numbers can be positive, negative, or zero. Terminating decimals are rational as well: 0.5 can be written as 1/2, 5/10, or 10/20, and the number 9 can be written as 9/1, where 9 and 1 are both integers. To multiply two rational numbers, multiply their numerators and denominators separately; to divide one rational number by another non-zero rational number, multiply the first by the reciprocal of the second.
|
2021-04-21 20:04:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8352980613708496, "perplexity": 466.948392457172}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039550330.88/warc/CC-MAIN-20210421191857-20210421221857-00331.warc.gz"}
|
http://www.chegg.com/homework-help/questions-and-answers/disk-radius-total-charge-uniformly-distributed-surface-thedisk-negligible-thickness-lies-x-q197187
|
A disk of radius R has a total charge Q uniformly distributed over its surface. The disk has negligible thickness and lies in the xy plane. Throughout this problem, you may use the variable k in place of 1/(4πε₀).
Part A
What is the electric potential on the z axis as a function of z, for z > 0?
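For reference, a standard route to the answer (assuming, as is usual for this problem, that the disk has radius R and total charge Q, that k stands for 1/(4πε₀), and that the surface charge density is σ = Q/(πR²)) is to integrate the potential of concentric rings of radius r:

$$V(z)=\int_0^R \frac{k\,\sigma\,2\pi r}{\sqrt{z^2+r^2}}\,dr=\frac{2kQ}{R^2}\left(\sqrt{z^2+R^2}-z\right),\qquad z>0$$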
|
2014-12-20 04:12:51
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8771988153457642, "perplexity": 366.6122397382732}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802769373.55/warc/CC-MAIN-20141217075249-00032-ip-10-231-17-201.ec2.internal.warc.gz"}
|
https://www.teknomotor.com/faq/
|
# FAQ
Here you can find basic technical information useful for the correct selection and use of our electrospindles.
### How to fix the tool on the electrospindle with collet type ER?
Correct mounting of the tool is necessary to guarantee a long life for the spindle bearings and to obtain a good surface finish.
• Before fixing the tool on the electrospindle, carefully blow out the inside taper, the collet locking nut, the collet, and the tool with compressed air.
• Clean them with a thinner/oil mixture (92% + 8%) to remove machining residue, using soft paper if necessary.
• Fit the collet into the nut and check that it can turn freely.
• Insert them into the inside taper of the electrospindle and screw on the nut by hand.
• Insert the tool and check that it can move freely along its axis.
• Tighten the nut to the recommended torque using the appropriate wrench.
• Check the run-out of the tool. If the cutters prevent you from checking the run-out of the tool directly, you can use a straight ground bar of the same diameter as the tool; checking the run-out of the bar will give you information about the condition of the collet and of the spindle cone. At this point you must be sure that the tool is straight! (In our experience this is not always true; the tool can be damaged.)
We remind you that the life of the collet is not unlimited. You must check the collet's condition after many working hours.
Do not use inappropriate tools (e.g. tools with a Seeger ring, etc.).
If the tool length is greater than 80-100 mm, use ultra-precision collets. (Contact the Teknomotor technical office for more information.)
[email protected]
### In what environments can the motors work?
Unless otherwise specified, the motors can work in environments where no water or refrigerant jets are used during machining. The motors cannot work in a misty environment.
Air-sealed motors are available for environments where water or refrigerant jets are present.
### What types of shaft are available in the catalogue?
The following shafts, in accordance with DIN 6499/B collets, are available for manual tool-change electrospindles and HF motors:
• P-ER16 = Designed for collet ER16; available diameter: from 1 to 10 mm.
• P-ER20 = Designed for collet ER20; available diameter: from 2 to 13 mm.
• P-ER25 = Designed for collet ER25; available diameter: from 2 to 16 mm.
• P-ER32 = Designed for collet ER32; available diameter: from 2 to 20 mm.
• P-ER40 = Designed for collet ER40; available diameter: from 3 to 30 mm.
Other kinds of shafts are also available for HF motors (see the catalogue). For custom-made shafts, contact the Teknomotor Technical Office.
### What are the motors dimensions?
Every model is marked by a letter that denotes the frame dimension: "A" marks the shortest motor and "D" the longest.
Motor model Power (kW) Dimension
C24/31 0,22-0,27 C24/31.pdf
C35 0,22-0,75 C35.pdf
NC35 0,22-0,73 NC35.pdf
C31/40 0,22-0,73 C31/40.pdf
C55 0,22-0,73 C55.pdf
C41/47 0,75-2,0 C41/47.pdf
C64 0,75-2,0 C64.pdf
C51/60 2,2-3,7 C51/60.pdf
C60/67 1,9-7,0 C60/67.pdf
C71/80 1,5-5,5 C71/80.pdf
C85/90 5,5-11,0 C85/90.pdf
### How to program the inverter?
When the inverter is connected to the motor, remember to modify some inverter parameters so that the motor works properly and is not damaged.
Warning:
• feeding the motor with a wrong feed curve can irreparably damage the motor in a few seconds.
• the factory setting of every inverter must be modified to allow it to work with an HF motor/electrospindle.
Most important parameters:
• Base Frequency (point A): the frequency that corresponds to the maximum voltage acceptable by the motor (base voltage). The factory setting of this parameter is usually 50 Hz; it must be set equal to the base frequency of the motor (usually 100 Hz, 200 Hz, 300 Hz, or 400 Hz, depending on the motor type). The base frequency of your motor is written on the nameplate or in the instruction sheet.
• Base Input Voltage: the maximum input voltage at which the motor can work. Generally this value is 220 V or 380 V, depending on the motor wiring.
• Max Frequency (point B): the maximum frequency at which the motor can work. It can coincide with the base frequency or be higher, depending on the bearing type and on the balancing grade.
• Auto-tuning functions: to avoid damaging the motor, we suggest not using the auto-tuning functions of your inverter; instead, manually set up the inverter parameters with a linear [V; F] curve.
Warning: please refer to the inverter manufacturer manual to correctly install the inverter.
### Plugs
The HF motors and the electrospindle can be supplied with different types of plugs.
The power-supply connection of the motor (220 V or 380 V) must be indicated by the customer in the order.
Two models are available:
• Plugs with die-cast aluminum cover
Plugs with screw terminals; no special tools are required.
• Plugs with plastic cover
The pins are crimped with a special tool. This kind of termination is faster than screw terminals.
### Half key and full key balancing
When the order is placed it is fundamental to ask for the correct type of balancing to avoid any excessive vibration when the motor is coupled with the tool.
An incorrect match between tool and motor shaft causes vibrations which can compromise the finishing grade of the part as well as considerably reduce the motor life.
Half key balancing (HK):
this balancing method is usually associated with a one-slot tool. In this case, we have two asymmetrical and unbalanced rotors which will compensate each other when assembled together making a balanced system.
Full key balancing (FK):
this balancing method is usually associated with a two-slots tool. In this case, the tool is symmetrical and balanced and the motor shaft is balanced to compensate for the keyway protrusion. The matching of the two rotors will make a balanced system.
### Difference between a hf motor and an electrospindle
The main difference is the type of load the motor can be subjected to: radial load for the HF motor; mixed load or pure axial load for the electrospindle.
The electrospindle is moreover balanced to a finer grade (lower vibration values) than the HF motor, because it undergoes a dynamic balancing process.
Finally, the electrospindle allows higher rotational speeds thanks to the better performance of angular-contact ball bearings compared with deep-groove bearings.
| Characteristics | HF Motor | Electrospindle | Rectangular Motor | Rect. Motor Heavy Load |
| --- | --- | --- | --- | --- |
| Axial load | Minimum | Permitted | Minimum | Permitted |
| rpm min/max* | 3000/18000 | 3000/30000 | 1000/6000 | 1000/9000 |
* the rpm values are indicative as they depend on the model.
### Choice of the thread direction on the BT models with blade flanges
Warning:
• Choosing the correct correlation between the direction of rotation of the motor and the direction of the shaft thread is mandatory to guarantee the safety of the people involved with the job.
• A wrong correlation between the direction of rotation of the motor and the direction of the thread can cause the locking nut to loosen. This can have serious or fatal consequences for the operator.
• Respecting the correct correlation between the direction of rotation of the motor and the direction of the thread does not exempt you from observing all the other safety norms protecting the operators involved in machine operation or maintenance.
• If the motor rotates clockwise, the thread must be counterclockwise (left-hand).
• If the motor rotates counterclockwise, the thread must be clockwise (right-hand).
### Duty cycles for electric motors (S1-S6)
The following table summarizes briefly the meaning of the duty-cycle codes S1 and S6 (IEC 60034-1), allowing the customer to quickly choose the duty cycle needed and to correctly select it when filling in the offer.
The codes S2 and S3 are reported just for completeness.
For more information, we advise consulting the technical standard IEC 60034-1.
Code Meaning Description Application examples Notation examples
S1 Continuous duty The motor is subjected to a continuous constant load until it reaches thermal equilibrium (steady-state conditions), so in theory it can operate continuously until failure due to wear of the bearings or other moving parts. Hydraulic pumps, industrial fans, etc. S1
S2 Limited duty The motor is subjected to a continuous constant load for only a short time, which does not allow the motor to reach thermal equilibrium. Before starting the motor a second time, it is necessary to wait until the temperature of the motor equals the room temperature (reset to initial conditions). Hairdryers, blenders, etc. S2 30min
S3 Intermittent periodic duty The motor is subjected to a cycle of loads made of constant-load periods and periods with neither load nor electrical feed. The starting current does not significantly increase the motor temperature. Motors for lifting loads, etc. S3 25%*
S6 Continuous periodic duty The motor is subjected to a cycle of loads made of constant-load periods and no-load periods. The motor is always electrically fed, even without load. Machines for woodworking, hydraulic pumps, etc. S6 40%*
* Unless otherwise specified, the total duration of the cycle (S3 and S6) is 10 minutes and the intermittence ratio (time under load / total cycle time) must equal one of the following values: 15%, 25%, 40%, 60%.
Obviously, continuous duty S1 is the most burdensome because, unlike the other three, it does not provide a rest period.
Under duty cycles S2, S3, and S6, greater loads can be applied than those permitted in continuous duty.
### How to choose power and speed of a motor
This section describes how to relate the power and the torque of a motor to the actual operating speed.
Reference is made to the figure below, relative to a linear [V; F] curve control.
The available data for the calculation can be found in the catalog; for each motor the nominal power and nominal speed (point A) are declared.
POWER OF THE MOTOR
From low speed until the nominal speed (point A), the power increases linearly as represented by the inclined blue line.
In this stretch the power can be calculated with this formula:

$$Power\ @\ Desired\ speed\ [kW] = {Nominal\ Power\ [kW]\ \over Nominal\ Speed\ [RPM]}\ x\ Desired\ speed\ [RPM]$$

From point A to point B the power is approximately constant until the maximum speed (point B) is reached. In this stretch the power is obtained as follows:
$$Power\ [kW] =Nominal\ Power\ [kW]$$
TORQUE OF THE MOTOR
From low speed until the nominal speed (A) the motor torque is equal to the nominal torque. From the point A to the point B the motor torque decreases as represented by the red curve in the figure below. The Nominal torque is calculated as follows:
$$Nominal\ torque\ [Nm]\ =\ 9549\ x\ {Nominal\ power\ [kW]\ \over Nominal\ speed\ [RPM]}$$
Caution regarding the minimum operating speed: for high-frequency motors there is a minimum operating speed that ensures proper ventilation of the motor in continuous operation; this speed is usually 6000 rpm. To operate at lower speeds it is necessary to mount a fan of increased diameter or an electric fan (in this case contact the technical office).
Figure: Power [W] and Torque [Nm] as a function of speed [rpm].
EXAMPLE OF ELECTROSPINDLE: ATC71 – A – ISO30 – SN

| Type | Power S1 [kW] | Power S6 [kW] | Voltage [V] | Frequency [Hz] | Speed [rpm] | Max speed [rpm] | Absorption [A] | Weight [kg] |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ATC71 – A – ISO30 – SN | 3.8 | 4.6 | 400 | 200 | 12000 | 24000 | 8.3/10.0 | 21.0 |
With this electrospindle we have a maximum power (S1) of 3.8 [kW] and a nominal speed of 12000 [rpm]. Below the nominal speed the motor generates less than the nominal power, as can be seen in the example.
DATA:
Example of desired speed: 7800 [rpm];
Nominal power (S1): 3.8 [kW];
Nominal speed: 12000 [rpm].
$$Power\ @\ 7800\ RPM = {3.8\ [kW]\ \over 12000\ [RPM]}\ x\ 7800\ [RPM]\ = 2.47\ [kW]$$
$$Torque\ @\ 7800\ RPM\ = 9549\ x\ {3.8\ [kW] \over 12000\ [RPM]}\ =\ 3.0\ [Nm]$$
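The same arithmetic expressed as a small C sketch (the function names are made up for illustration; the formulas apply for speeds at or below the nominal speed under linear V/F control):

#include <stdio.h>

/* power [kW] available at a given speed, below the nominal speed */
double power_at_speed(double nominal_kw, double nominal_rpm, double rpm)
{
    return nominal_kw / nominal_rpm * rpm;
}

/* nominal torque [Nm] from nominal power [kW] and nominal speed [rpm] */
double nominal_torque(double nominal_kw, double nominal_rpm)
{
    return 9549.0 * nominal_kw / nominal_rpm;
}

int main(void)
{
    printf("Power  @ 7800 rpm: %.2f kW\n", power_at_speed(3.8, 12000.0, 7800.0));  /* 2.47 */
    printf("Torque @ 7800 rpm: %.1f Nm\n", nominal_torque(3.8, 12000.0));          /* 3.0  */
    return 0;
}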
Example: in the figure, the green dots mark the working point, i.e. the power [kW] and the torque [Nm] at the operating speed.
RELATION BETWEEN SPEED AND FREQUENCY
Another useful formula to find speed [rpm] from frequency [Hz] and the number of pole pairs pp:
$$Speed\ [rpm] = {60\ \times\ frequency\ [Hz] \over pp}$$
|
2023-02-06 05:49:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3379722237586975, "perplexity": 2432.465473161964}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500304.90/warc/CC-MAIN-20230206051215-20230206081215-00676.warc.gz"}
|
https://www.semanticscholar.org/paper/ON-A-REDUCEDNESS-CONJECTURE-FOR-SPHERICAL-SCHUBERT-Kamnitzer-Muthiah/295ef0effafd5d07f3feb06db96335f3c99c4b04
|
# ON A REDUCEDNESS CONJECTURE FOR SPHERICAL SCHUBERT VARIETIES AND SLICES IN THE AFFINE GRASSMANNIAN
@article{Kamnitzer2016ONAR,
title={ON A REDUCEDNESS CONJECTURE FOR SPHERICAL SCHUBERT VARIETIES AND SLICES IN THE AFFINE GRASSMANNIAN},
author={Joel Kamnitzer and Dinakar Muthiah and Alex Weekes},
journal={Transformation Groups},
year={2016},
volume={23},
pages={707-722}
}
• Published 31 March 2016
• Mathematics
• Transformation Groups
We study spherical Schubert varieties in the affine Grassmannian. These Schubert varieties have a natural conjectural modular description due to Finkelberg-Mirković. This modular description is easily seen to be set-theoretically correct, but it is not obviously scheme-theoretically correct. We prove that this modular description is correct in many cases. We also link this modular description to the reducedness conjecture from Kamnitzer-Webster-Weekes-Yacobi for transverse slices in the affine…
8 Citations
Reducedness of affine Grassmannian slices in type A
• Mathematics
• 2016
We prove in type A a conjecture which describes the ideal of transversal slices to spherical Schubert varieties in the affine Grassmannian. As a corollary, we prove a modular description (due to…
On a conjecture of Pappas and Rapoport about the standard local model for GL_ d
• Mathematics
• 2019
Abstract In their study of local models of Shimura varieties for totally ramified extensions, Pappas and Rapoport posed a conjecture about the reducedness of a certain subscheme of n × n…
A relation between Mirkovic-Vilonen cycles and modules over preprojective algebra of Dynkin quiver of type ADE
The irreducible components of the variety of all modules over the preprojective algebra and MV cycles both index bases of the universal enveloping algebra of the positive part of a semisimple Lie…
BFN Springer Theory
• Mathematics, Physics
• 2020
Given a representation N of a reductive group G, Braverman-Finkelberg-Nakajima have defined a remarkable Poisson variety called the Coulomb branch. Their construction of this space was motivated by…
The Equations Defining Affine Grassmannians in Type A and a Conjecture of Kreiman, Lakshmibai, Magyar, and Weyman
• Mathematics
• International Mathematics Research Notices
• 2020
The affine Grassmannian of $SL_n$ admits an embedding into the Sato Grassmannian, which further admits a Plücker embedding into the projectivization of Fermion Fock space. Kreiman, Lakshmibai,…
Symplectic leaves for generalized affine Grassmannian slices
• Mathematics, Physics
• 2019
The generalized affine Grassmannian slices $\overline{\mathcal{W}}_\mu^\lambda$ are algebraic varieties introduced by Braverman, Finkelberg, and Nakajima in their study of Coulomb branches of $3d$…
Hamiltonian reduction for affine Grassmannian slices and truncated shifted Yangians
• Mathematics
• 2020
Generalized affine Grassmannian slices provide geometric realizations for weight spaces of representations of semisimple Lie algebras. They are also Coulomb branches, symplectic dual to Nakajima…
#### References
Showing 1-10 of 16 references
Yangians and quantizations of slices in the affine Grassmannian
• Mathematics
• 2014
We study quantizations of transverse slices to Schubert varieties in the affine Grassmannian. The quantization is constructed using quantum groups called shifted Yangians, which are subalgebras of…
Affine Demazure modules and T-fixed point subschemes in the affine Grassmannian
Let G be a simple algebraic group defined over C and T be a maximal torus of G. For a dominant coweight λ of G, the T-fixed point subscheme of the Schubert variety in the affine…
Algebraic loop groups and moduli spaces of bundles
Abstract. We study algebraic loop groups and affine Grassmannians in positive characteristic. The main results are normality of Schubert varieties, the construction of line bundles on the affine…
Semi-infinite flags. I. Case of global curve P^1, Differential topology, infinite-dimensional Lie algebras, and applications
1.1. We learnt of the Semiinfinite Flag Space from B. Feigin and E. Frenkel in the late 80s. Since then we tried to understand this remarkable object. It appears that it was essentially constructed…
Tensor Product Structure of Affine Demazure Modules and Limit Constructions
• Mathematics
• Nagoya Mathematical Journal
• 2006
Abstract Let g be a simple complex Lie algebra, we denote by ĝ the affine Kac-Moody algebra associated to the extended Dynkin diagram of g. Let Λ0 be the fundamental weight of ĝ corresponding to the…
Some schemes related to the commuting variety
The _commuting variety_ is the set of pairs of N×N matrices (X,Y) such that XY = YX. We introduce the _diagonal commutator scheme_, {(X,Y) : XY-YX is diagonal}, which we prove to be a reduced complete…
THE PARTIAL ORDER OF DOMINANT WEIGHTS
Abstract The weight lattice of a crystallographic root system is partially ordered by the rule that λ > μ if λ − μ is a nonnegative integer linear combination of positive roots. In this paper, we…
Richardson Varieties Have Kawamata Log Terminal Singularities
• Mathematics
• 2014
Let $X^v_w$ be a Richardson variety in the full flag variety $X$ associated to a symmetrizable Kac-Moody group $G$. Recall that $X^v_w$ is the intersection of the finite dimensional Schubert variety…
Kac-Moody Groups, their Flag Varieties and Representation Theory
Introduction * Kac--Moody Algebras -- Basic Theory * Representation Theory of Kac--Moody Algebras * Lie Algebra Homology and Cohomology * An Introduction to ind-Varieties and pro-Groups * Tits…
Introduction to commutative algebra
• Mathematics, Computer Science
• 1969
* Introduction * Rings and Ideals * Modules * Rings and Modules of Fractions * Primary Decomposition * Integral Dependence and Valuations * Chain Conditions * Noetherian Rings * Artin Rings * …
|
2021-12-05 18:28:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7645872831344604, "perplexity": 1478.6402451796068}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363215.8/warc/CC-MAIN-20211205160950-20211205190950-00004.warc.gz"}
|
https://answerbun.com/mathematics/discretization-formula-for-a-system-of-two-differential-equations-solution-to-one-of-these-is-the-initial-condition-of-the-other-in-which-sense/
|
# Discretization formula for a system of two differential equations. "Solution to one of these is the initial condition of the other". In which sense?
Mathematics Asked by Strictly_increasing on July 27, 2020
Consider the following stochastic differential equation
$$dy=\left(A-\left(A+B\right)y\right)dt+C\sqrt{y\left(1-y\right)}\,dW\tag{1}$$
where $$A$$, $$B$$ and $$C$$ are parameters and $$dW$$ is a Wiener increment.
Equation $$(1)$$ will be our point of reference in what follows.
Now, first let us consider a "method" for equation $$(1)$$ which can be described by the following one-step discretization scheme:
$$y_{n+1}=y_n+\left(A-\left(A+B\right)y_n\right)\Delta t +C\sqrt{y_n\left(1-y_n\right)}\,\Delta W_n + D\left(y_n\right)\left(y_n-y_{n+1}\right)\tag{2}$$
where $$\Delta t$$ is the length of the time discretization interval, $$\Delta W_n$$ is a Wiener increment and $$D(y_n)$$ is the system of control functions and takes the form

$$D(y_n)=d^0(y_n)\,\Delta t + d^1\left(y_n\right)|\Delta W_n|$$
with
$$d^1(y)= \begin{cases} C\sqrt{\frac{1-\varepsilon}{\varepsilon}} & \text{if } y<\varepsilon \text{ or } y>1-\varepsilon \\ 0 & \text{if } \varepsilon\le y\le 1-\varepsilon \end{cases}$$
At this point, let us consider a "method" which decomposes $$(1)$$ into two equations. Specifically, the first equation is a stochastic one, consisting of the diffusion term of $$(1)$$ only (see equation $$(3)$$), while the second one is an ordinary differential equation (see equation $$(4)$$) that consists of the drift part of $$(1)$$. We have:
$$dy_1=C\sqrt{y_1\left(1-y_1\right)}\,dW\tag{3}$$

$$dy_2=\left(A-\left(A+B\right)y_2\right)dt\tag{4}$$
This last method approximates the solution to $$(3)$$ at each time step using $$(2)$$ (and the numerical solution to $$(3)$$ is used as the initial condition in $$(4)$$), while $$(4)$$ can be solved using the Euler method. Thus, such a method can be described by the following one-step discretization formula:
$$y_{n+1}=y_n+\left(A-\left(A+B\right)y_n\right)\Delta t + \frac{C\sqrt{y_n\left(1-y_n\right)}\,\Delta W_n}{1+d^1\left(y_n\right)|\Delta W_n|}\left(1-\left(A+B\right)\Delta t\right)\tag{5}$$
My doubts:
1. I cannot understand in which way the last method approximates the solution to $$(3)$$ at each time step using $$(2)$$. Could you please make such an approximation explicit? How is it obtained by means of $$(2)$$?
2. In which sense is the numerical solution to $$(3)$$ used as the initial condition in $$(4)$$? What is such an initial condition?
3. Could you please make explicit the way in which the solution to $$(3)$$ and the solution to $$(4)$$ are combined so as to obtain discretization formula $$(5)$$?
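For what it is worth, a short calculation (a sketch under one possible reading, not necessarily the intended construction) shows how $$(5)$$ can arise from composing the two sub-steps. Applying a balanced step of type $$(2)$$ to $$(3)$$, which has no drift term, with control $$d^1(y_n)|\Delta W_n|$$, and solving for the intermediate value gives

$$y^{*}=y_n+\frac{C\sqrt{y_n\left(1-y_n\right)}\,\Delta W_n}{1+d^1\left(y_n\right)|\Delta W_n|}$$

Using $$y^{*}$$ as the initial condition of one explicit Euler step for $$(4)$$,

$$y_{n+1}=y^{*}+\left(A-\left(A+B\right)y^{*}\right)\Delta t$$

and substituting the first expression into the second reproduces $$(5)$$ exactly.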
|
2022-06-30 07:21:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 34, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6496989130973816, "perplexity": 572.7759078224484}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103669266.42/warc/CC-MAIN-20220630062154-20220630092154-00420.warc.gz"}
|
http://bfluhr.com/notes/interleavings-1d/lower-bounds.html
|
Lower Bounds
While it may be quite hard to compute the two distances $$M$$ and $$\mu$$ defined above, we may try to compute lower bounds. Topological data analysis is a vast field and several solutions exist in this direction. Here we focus on two of them, namely the interleaving distance of join trees introduced by Morozov, Beketayev, and Weber (2013) and the interleaving distance of Reeb graphs by de Silva, Munch, and Patel (2016).
Join trees and Reeb graphs themselves are very similar. The join tree associated to a function $$f$$ can be seen as a real-valued function itself whose fibers encode the connected components of the corresponding sublevel sets of $$f$$, and the Reeb graph can be seen as a function whose fibers encode the connected components of the level sets. Join trees are much easier to work with, however. While Morozov, Beketayev, and Weber (2013) defined their interleaving distance in an ad hoc manner, de Silva, Munch, and Patel (2016) introduce set-valued (pre)cosheaves first and then define their interleaving distance. (Actually de Silva, Munch, and Patel (2016) introduce a second interleaving distance, which could be seen as a more ad hoc approach to the problem, but in the author's opinion[1] their first notion of an interleaving distance using the theory of precosheaves is more convenient for our considerations.)
Here is the main motivation behind this document. For a continuous function $$f$$ Morozov, Beketayev, and Weber (2013) define the join tree of $$f$$ as the Reeb graph of the epigraph associated to $$f$$. Later de Silva, Munch, and Patel (2016) introduced the interleaving distance of Reeb graphs and therefore there are now two different notions of an interleaving distance on join trees. For two continuous functions $$f \colon X \rightarrow {\mathbb{R}}$$ and $$g \colon Y \rightarrow {\mathbb{R}}$$ we have the interleaving distance of join trees associated to $$f$$ and $$g$$ as defined by Morozov, Beketayev, and Weber (2013) and we have the interleaving distance of the Reeb graphs associated to the epigraph of $$f$$ respectively $$g$$ as defined by de Silva, Munch, and Patel (2016). We aim to show that the two distances are the same when $$X$$ and $$Y$$ are compact smooth manifolds.
It is not really essential, and more of a personal preference, that from this point onward we work with functions with values in the extended real line $$\overline{{\mathbb{R}}} := [-\infty, \infty]$$ and consider real-valued functions a subclass.
When working with different notions of an interleaving distance and in particular when comparing them, the use of some basic category theory seems very natural and simplifies several of our arguments. More specifically we will use functors and natural transformations on several occasions. Moreover we augment the class of $$\overline{{\mathbb{R}}}$$-valued continuous functions with the structure of a category, the category of $$\overline{{\mathbb{R}}}$$-spaces for short.
• Definition. For two continuous functions $$f \colon X \rightarrow \overline{{\mathbb{R}}}$$ and $$g \colon Y \rightarrow \overline{{\mathbb{R}}}$$, a homomorphism $$\varphi$$ from $$f$$ to $$g$$, also denoted by $$\varphi \colon f \rightarrow g$$, is a continuous map $$\varphi \colon X \rightarrow Y$$ such that the diagram $\xymatrix{ X \ar@/^/[rr]^{\varphi} \ar[dr]_{f} & & Y \ar[dl]^{g} \\ & \overline{{\mathbb{R}}} }$ commutes.
We define the composition of homomorphisms in the category of $$\overline{{\mathbb{R}}}$$-spaces by the composition of maps.
The category of $${\mathbb{R}}$$-spaces is the full subcategory of all real-valued continuous functions.
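For a concrete instance, take $$X = {\mathbb{R}}$$ with $$f(x) = |x|$$ and $$Y = [0, \infty)$$ with $$g$$ the inclusion of $$[0, \infty)$$ into $$\overline{{\mathbb{R}}}$$; then $$\varphi(x) := |x|$$ is a homomorphism from $$f$$ to $$g$$, as $$\varphi$$ is continuous and $$g \circ \varphi = f$$.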
• Remark. With these definitions in place, a reformulation of question 1 is whether $$f$$ and $$g$$ are isomorphic as objects of the category of $$\overline{{\mathbb{R}}}$$-spaces.
[1] the author of this document
|
2017-09-21 08:50:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8002120852470398, "perplexity": 256.8309281017943}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687711.44/warc/CC-MAIN-20170921082205-20170921102205-00656.warc.gz"}
|
https://swmath.org/?term=task%20dependency%20graph
|
• DAGuE
• Referenced in 11 articles [sw20709]
• Direct Acyclic Graph of tasks with labeled edges designating data dependencies. DAGs are represented...
• subgraph2vec
• Referenced in 4 articles [sw36496]
• Deep Learning and Graph Kernels. These latent representations encode semantic substructure dependencies in a continuous ... easily exploited by statistical models for tasks such as graph classification, clustering, link prediction...
• struc2vec
• Referenced in 14 articles [sw36495]
• different scales, and constructs a multilayer graph to encode structural similarities and generate structural context ... struc2vec exhibits much superior performance in this task, as it overcomes limitations of prior approaches ... that struc2vec improves performance on classification tasks that depend more on structural identity...
• azove
• Referenced in 13 articles [sw04634]
• present a novel approach for these tasks which consists of an output-sensitive algorithm ... graph. The size of a BDD is the number of nodes of its graph ... heavily depends on the chosen variable ordering. Finding the optimal variable ordering...
• Shimba
• Referenced in 6 articles [sw01242]
• slice the static dependency graphs produced by Rigi. In turn, Rigi graphs are used ... views enables goal-driven reverse engineering tasks and aids the overall understanding of the target...
• persona2vec
• Referenced in 1 article [sw33469]
• performance in many graph mining tasks. Most existing embedding algorithms assign a single vector ... different roles depending on the contexts. Here, we propose persona2vec, a graph embedding framework that ... performance in many graph mining tasks. Most existing embedding algorithms assign a single vector ... different roles depending on the contexts. Here, we propose persona2vec, a graph embedding framework that...
• HTGviz
• Referenced in 1 article [sw23529]
• Graph (HTG) program representation where task parallelism is represented by precedence relations (arcs) among task ... information about data/control dependences and task precedences. It allows to tune task partitioning and parallelism ... OpenMP directives into the code based on graph manipulation facilities. HTGviz also guides the user...
• GraphState
• Referenced in 2 articles [sw21498]
• form that depends only on its combinatorial type. The uniqueness of graph representation gives ... finding, searching for subgraphs and other graph manipulation tasks. Though offered libraries were originally designed...
• CloneDifferentiator
• Referenced in 1 article [sw26893]
• correctly interpret cloning information and perform maintenance tasks on clones. Manual analysis of semantic differences ... clone detector by differentiating Program Dependence Graphs (PDGs) of clones. CloneDifferentiator is able to provide ... effective means of analyzing clones in a task oriented manner...
• COLIN
• Referenced in 6 articles [sw27480]
• linear change, and the handling of duration-dependent effects in combination with duration inequalities, both ... extension of the temporal relaxed planning graph heuristic of CRIKEY3, to support reasoning directly with ... continuous change. We extend the range of task variables considered to be suitable candidates...
• CIRFE
• Referenced in 3 articles [sw27640]
• capability sensors and where each sensor is tasked with estimating some local components ... given by a time-varying possibly sparse graph. Under minimal conditions, on the interagent communication ... dependencies of the component wise asymptotic covariance in terms of the number of agents tasked...
• Skip RNN
• Referenced in 1 article [sw36445]
• show outstanding performance in sequence modeling tasks. However, training RNNs on long sequences often face ... gradients and difficulty in capturing long term dependencies. In backpropagation through time settings, these issues ... tightly coupled with the large, sequential computational graph resulting from unfolding the RNN in time ... shortens the effective size of the computational graph. This model can also be encouraged...
• DeepTMA
• Referenced in 2 articles [sw33597]
• Effective Contention Models for Network Calculus using Graph Neural Networks. Network calculus computes ... complex dependency structures and finding the tightest delay bounds becomes a resource intensive task ... this paper is a novel framework combining graph-based deep learning and Network Calculus...
• gSkeletonClu
• Referenced in 3 articles [sw30861]
• Agglomeration. Community detection is an important task for mining the structure and function of complex ... overcome this difficulty. However, it depends on a sensitive parameter: minimum similarity threshold ... density-based network clustering algorithm, called gSkeletonClu (graph-skeleton based clustering). By projecting a network...
• Parsl
• Referenced in 3 articles [sw28682]
• allow Parsl to construct a dynamic dependency graph of components that it can then execute ... nodes, and process upward of 1200 tasks per second. Other Parsl features simplify the construction...
• ODEA
• Referenced in 3 articles [sw07671]
• introduce a new modelling technique called arc dependences. It generalizes the traditional notion of disjointness ... specially constructed abstractions of the input graph, comprising significantly less data without losing the essence ... this model lies on paths in the graph, we also investigate an alternative featuring ... strengthening the latter. Behind all these tasks, the combinatorial core engine of the solver...
• CONN
• Referenced in 3 articles [sw26510]
• windowing of the residual blood oxygen level-dependent (BOLD) contrast signal, first-level estimation ... analysis for resting state as well as task-related data. Compared to methods that rely ... bivariate/multivariate regression analysis for multiple ROI sources, graph theoretical analysis, and novel voxel-to-voxel...
• Sub2vec
• Referenced in 1 article [sw41562]
• such as community detection which are intuitively dependent on subgraphs. Here, we formulate subgraph embedding ... network mining tasks, like community detection and graph classification. We show that Sub2Vec gets significant...
• Noodle
• Referenced in 1 article [sw26944]
• dependence constraints between tasks, their arbitrary sizes, and bounded resources available for execution, optimal task ... assigns task priorities. We conduct an extensive experimental to validate Noodle for task graphs taken...
• PLplot
• Referenced in 2 articles [sw07488]
• creating scientific plots. To help accomplish that task it is organized as a core ... plots, bar charts and pie charts. Multiple graphs (of the same or different sizes ... writing a small number of device dependent routines. PLplot is free software primarily licensed under...
|
2022-08-14 21:53:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.222432941198349, "perplexity": 5121.855473685516}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572077.62/warc/CC-MAIN-20220814204141-20220814234141-00197.warc.gz"}
|
https://nrich.maths.org/6123/solution
|
# Which Spinners?
##### Age 14 to 18, Challenge Level
Well done to Navjot from Sherborne Qatar, who sent in a full solution:
I will firstly divide the given bar charts into 2 groups. Graphs A, B, C, and D are made by the sum of the values on the spinners and graphs E, F, G, and H are made by the difference of the values of the spinners.
We know this because of the fact that the x-values of the first 4 graphs start with 2 while the last 4 graphs start with a 0.
Now, I shall determine the spinners used to make these frequency graphs.
Each graph seems to be a sum or a difference of two even or two odd numbers but not an even and an odd number since the final x-value is an even number.
I assume that the spinners are fair (as we will see, the probabilities of the sums shown on the graph are extremely close to the calculated probabilities, which is bound to happen after 5000 spins; if there were still an imbalance in the probability of getting certain values then the spinners would definitely be biased).
As shown in the images, in the case of graph A, the only possible pairs we could use to get a maximum sum of 8 are 1 & 7, 2 & 6, 3 & 5, or 4 & 4. Through the table, I have shown the chance of getting a certain sum. The numbers shown on each spinner are shown in blue at the left and bottom of the table, and the sums are shown in black inside the table.
There are 16 black numbers in the table, so 16 possible pairs of numbers on the spinners. Of the sums, 2 only appears once (because to get a sum of 2, you would have to get 1, 1). So the probability of getting 2 is $\frac1{16}$. However, 3 appears twice (1+2 and 2+1) and 4 appears 3 times (1+3 and 2+2 and 3+1), so the probabilities of getting 3 and 4 are $\frac2{16}$ and $\frac3{16}$ respectively.
Through trying the combinations, we would notice that as the difference between the maximum values of the two spinners decrease, the probabilities of getting certain sums start to decrease at a different rate to others (which would create that peak in the graph rather than the plateau). If we were to graph the results, the graph A would most resemble the results of the spinners with the maximum values 4 and 4.
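These counts are easy to verify by brute-force enumeration. The following short Python sketch (not part of Navjot's solution) computes the exact distribution for any pair of fair spinners:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

def distribution(max_a, max_b, combine):
    # Exact distribution of combine(a, b) over all equally likely spins
    # of fair 1..max_a and 1..max_b spinners.
    counts = Counter(combine(a, b)
                     for a, b in product(range(1, max_a + 1), range(1, max_b + 1)))
    total = max_a * max_b
    return {value: Fraction(n, total) for value, n in sorted(counts.items())}

print(distribution(4, 4, lambda a, b: a + b))        # sums for graph A
print(distribution(5, 5, lambda a, b: abs(a - b)))   # differences for graph E
```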
Using that logic (that the smaller the difference between the values, the more the graph would represent the normal distribution curve) [that is, that when the spinners both have the same numbers, the graph goes smoothly up and then smoothly down, peaking in the middle] I deduced that graph B was made by using the spinners with the highest values 7 and 7.
However, in graph C and D, we notice a plateau. This suggests that the top values of the two spinners were not equal. For graph C, I shall assume that the probabilities to get any sum between 4 and 10 are equal (since the variation is negligible), and for D, I shall assume the probabilities to get anything between 7 and 11 are equal.
Applying the same method as I used to find the values used in graph A and B, I find out that the spinners used in graph C went up to the values 3 and 9.
| + | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
| 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
The probabilities ($\text P$) of getting the sums were as follows:
$\text P(2) =\text P(12) = \frac1{27}$
$\text P(3) =\text P(11) = \frac2{27}$
$\text P(4) =\text P(5) =\text P(6) =\text P(7) =\text P(8) =\text P(9) =\text P(10) = \frac3{27} = \frac1{9}$
To produce graph D, spinners of values 6 and 10 would be used.
| + | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|----|
| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
| 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
| 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 |
| 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 |
| 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
| 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |
The probabilities ($\text P$) of getting the sums were as follows:
$\text P(2) =\text P(16) =\frac1{60}$
$\text P(3) =\text P(15) = \frac2{60} = \frac1{30}$
$\text P(4) =\text P(14) = \frac3{60} = \frac1{20}$
$\text P(5) =\text P(13) = \frac4{60} = \frac1{15}$
$\text P(6) =\text P(12) = \frac5{60} = \frac1{12}$
$\text P(7) =\text P(8) =\text P(9) =\text P(10) =\text P(11) = \frac6{60} = \frac1{10}$
The final four graphs would require the same procedure of producing a table with the possible scores on the sides of the tables, but instead of plugging in the sums, we plug in the differences.
Graph E would be made by using spinners with the maximum values of 5 & 5
| − | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| 1 | 0 | 1 | 2 | 3 | 4 |
| 2 | 1 | 0 | 1 | 2 | 3 |
| 3 | 2 | 1 | 0 | 1 | 2 |
| 4 | 3 | 2 | 1 | 0 | 1 |
| 5 | 4 | 3 | 2 | 1 | 0 |
The probabilities ($\text P$) of getting the differences were as follows:
$\text P(0) = \frac5{25} = \frac15$
$\text P(1) = \frac8{25}$
$\text P(2) = \frac{6}{25}$
$\text P(3) = \frac4{25}$
$\text P(4) = \frac2{25}$
Since $\text P(5) = 0$ the results here match the graph E.
Graph F would be made by using spinners with the maximum values of 9 & 9
| − | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| 2 | 1 | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| 3 | 2 | 1 | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
| 4 | 3 | 2 | 1 | 0 | 1 | 2 | 3 | 4 | 5 |
| 5 | 4 | 3 | 2 | 1 | 0 | 1 | 2 | 3 | 4 |
| 6 | 5 | 4 | 3 | 2 | 1 | 0 | 1 | 2 | 3 |
| 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 | 1 | 2 |
| 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 | 1 |
| 9 | 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 |
The probabilities ($\text P$) of getting the differences were as follows:
$\text P(0) = \frac9{81} = \frac19$
$\text P(1) = \frac{16}{81}$
$\text P(2) = \frac{14}{81}$
$\text P(3) = \frac{12}{81} = \frac4{27}$
$\text P(4) = \frac{10}{81}$
$\text P(5) = \frac8{81}$
$\text P(6) = \frac6{81} = \frac2{27}$
$\text P(7) = \frac4{81}$
$\text P(8) = \frac2{81}$
Since $\text P(9) = 0$ the results here match the graph F.
Graph G would be made by using spinners with the maximum values of 3 & 7
| − | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| 1 | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
| 2 | 1 | 0 | 1 | 2 | 3 | 4 | 5 |
| 3 | 2 | 1 | 0 | 1 | 2 | 3 | 4 |
The probabilities ($\text P$) of getting the differences were as follows:
$\text P(0) = \frac3{21} = \frac17$
$\text P(1) = \frac5{21}$
$\text P(2) = \frac4{21}$
$\text P(3) =\text P(4) = \frac3{21}$
$\text P(5) = \frac2{21}$
$\text P(6) = \frac1{21}$
Since $\text P(7) = 0$ the results here match the graph G.
Graph H would be made by using spinners with the maximum values of 10 & 4
| − | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|----|
| 1 | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| 2 | 1 | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| 3 | 2 | 1 | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| 4 | 3 | 2 | 1 | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
The probabilities ($\text P$) of getting the differences were as follows:
$\text P(0) = \frac4{40} = \frac1{10}$
$\text P(1) = \frac7{40}$
$\text P(2) = \frac6{40} = \frac3{20}$
$\text P(3) = \frac5{40} = \frac18$
$\text P(4) =\text P(5) =\text P(6) = \frac4{40} = \frac1{10}$
$\text P(7) = \frac3{40}$
$\text P(8) = \frac2{40} = \frac1{20}$
$\text P(9) = \frac1{40}$
Since $\text P(10) = 0$, the results here match graph H.
In conclusion, these are the spinners used for each graph:
A: 4 & 4
B: 7 & 7
C: 3 & 9
D: 6 & 10
E: 5 & 5
F: 9 & 9
G: 3 & 7
H: 10 & 4
Final challenge
This is Navjot's solution to the final challenge. Navjot's descriptions of the shapes of the graphs are good, but one of the numbers is not quite right.
The sum of two 1-30 spinners: The bar chart would be similar to graph A or B, as it peaks at 31, ((the maximum x-value)$\div$2)$+$1.
The difference between two 1-20 spinners: The bar charts would peak in a similar way to graph E and F. It would peak at 1 and then keep decreasing up to 9, ((the maximum x-value)$\div$2)$-$1. In fact, they would decrease up to 19, because the largest difference you could get would be between 20 and 1: a difference of 19.
The sum of a 1-20 and a 1-30 spinner: The bar chart would be similar to graphs C and D. The probability would rise up to 21, then stay constant till 31, and then decrease (the plateau between 21 and 31 was deduced from the fact that the plateau seems to run between the sums that are 1 more than the maximum value of each spinner; when the maximum values of the spinners are 3 and 9, the graph plateaus between 4 and 10).
The difference between a 1-20 and a 1-30 spinner: Similar to graphs G and H, this bar chart would peak at 1 and then keep decreasing up to 29, which would be the largest possible difference.
|
2022-11-26 13:18:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6281510591506958, "perplexity": 192.6349160872378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446706291.88/warc/CC-MAIN-20221126112341-20221126142341-00106.warc.gz"}
|
https://pub.uni-bielefeld.de/publication/2913541
|
# Maximal function on generalized Lebesgue spaces $L^{p(\cdot)}$
Diening L (2004)
Mathematical Inequalities & Applications 7(2): 245-253.
Journal Article | Published | English
doi:10.7153/mia-07-27
|
2018-06-19 20:20:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6110849380493164, "perplexity": 10617.017951263684}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863119.34/warc/CC-MAIN-20180619193031-20180619213031-00406.warc.gz"}
|
https://plainmath.net/9938/making-identical-baskets-basket-contains-apples-oranges-baskets-determine
|
Question (Equations and inequalities)
Pete is making 8 identical fruit baskets as gifts. Each basket contains some apples and 12 oranges. Pete uses a total of 168 pieces of fruit to make the baskets. Determine the number of apples that are in each basket.
2021-01-28
Let a be the number of apples in each basket, so each basket contains a+12 pieces of fruit. Since there are 8 baskets holding a total of 168 pieces of fruit, we can write the equation: 8(a+12)=168
Divide both sides by 8: a+12=21
Subtract 12 from both sides: a=9
So, there are 9 apples in each basket.
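The same equation can be checked symbolically; this short sympy snippet is an illustration added here, not part of the original answer:

```python
from sympy import Eq, solve, symbols

a = symbols('a')
print(solve(Eq(8 * (a + 12), 168), a))  # -> [9], i.e. 9 apples per basket
```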
2021-05-08 13:55:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5455208420753479, "perplexity": 697.4207590613291}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988882.7/warc/CC-MAIN-20210508121446-20210508151446-00460.warc.gz"}
|
http://www.solutioninn.com/suppose-that-a-sample-of-size-100-is-to-be
|
# Question
Suppose that a sample of size 100 is to be drawn from a population with standard deviation 10.
a. What is the probability that the sample mean will be within 1 of the value of µ?
b. For this example (n = 100, σ = 10), complete each of the following statements by computing the appropriate value:
i. Approximately 95% of the time, x will be within _____ of µ.
ii. Approximately 0.3% of the time, x will be farther than _____ from µ.
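The blanks are left open in the question, but the underlying quantities are standard normal-distribution calculations for the sample mean, whose standard error is $\sigma/\sqrt{n} = 10/\sqrt{100} = 1$. As an added illustration (not part of the original question), they can be computed with scipy:

```python
from math import sqrt
from scipy.stats import norm

sigma, n = 10, 100
se = sigma / sqrt(n)                         # standard error of the sample mean = 1

print(norm.cdf(1 / se) - norm.cdf(-1 / se))  # (a) P(|xbar - mu| <= 1) ~ 0.6827
print(norm.ppf(0.975) * se)                  # (b)(i)  within ~1.96 of mu, 95% of the time
print(norm.ppf(1 - 0.003 / 2) * se)          # (b)(ii) farther than ~2.97 from mu, 0.3% of the time
```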
|
2016-10-24 10:47:15
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8780184984207153, "perplexity": 450.3581725543003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719564.4/warc/CC-MAIN-20161020183839-00515-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://physics.stackexchange.com/questions/165704/creating-vacuum-in-a-glass-by-acceleration
|
# Creating vacuum in a glass by acceleration
Suppose we have a cylindrical glass at atmospheric pressure. The glass is put in a horizontal position such that the bottom of it (the closed end) is at the right side and the open side is on the left.
Is it possible to create vacuum in the glass just by accelerating it to the right fast enough? And if it is, how can one calculate an approximate value for it?
• I highly recommend XKCD's what-if.xkcd.com/6 . While XKCD is not exactly what I would call an "academically credible source," it is well researched, and far more funny than any other source out there. – Cort Ammon Feb 22 '15 at 14:10
$$\Delta P = - \rho g \Delta h$$
So for a $1m$ tube, filled with ambient air ($\rho = 1.2754\ kg/m^3$) and a $1g$ acceleration, you'd get $\Delta P = 12.5 Pa$.
To get a rough vacuum (so $\Delta P = 10^5\ \mathrm{Pa}$) with a $10\,\mathrm{m}$ tube, you'd need $a = 10^5 / (1.2754 \times 10) \approx 7840\ \mathrm{m\,s^{-2}}$, or about $800g$.
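As a quick numerical check of that estimate (a sketch added here, using the numbers above):

```python
rho = 1.2754   # kg/m^3, density of ambient air
dP = 1.0e5     # Pa, pressure difference for a rough vacuum
h = 10.0       # m, tube length

a = dP / (rho * h)   # required acceleration, from dP = rho * a * h
print(a)             # ~7841 m/s^2
print(a / 9.81)      # ~800 g
```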
Once you've evacuated the tube, it will not refill as long as it maintains a velocity greater than the ambient velocity of air particles (about $500\ \mathrm{m\,s^{-1}}$). However, that's the average velocity. The actual velocity of an individual particle is Boltzmann-distributed, so a few might be going fast enough to leap aboard. Then there's turbulence and the Venturi Effect and other aerodynamic complications.
|
2020-09-30 00:45:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7346954345703125, "perplexity": 742.5041779737832}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402093104.90/warc/CC-MAIN-20200929221433-20200930011433-00719.warc.gz"}
|
https://deepspeech.readthedocs.io/en/v0.7.4/TRAINING.html
|
## Getting the training code
Install Git Large File Storage either manually or through a package-manager if available on your system. Then clone the DeepSpeech repository normally:
git clone https://github.com/mozilla/DeepSpeech
## Creating a virtual environment

In creating a virtual environment you will create a directory containing a python3 binary and everything needed to run deepspeech. You can use whatever directory you want. For the purpose of the documentation, we will rely on $HOME/tmp/deepspeech-train-venv. You can create it using this command:

$ python3 -m venv $HOME/tmp/deepspeech-train-venv/

Once this command completes successfully, the environment will be ready to be activated.

## Activating the environment

Each time you need to work with DeepSpeech, you have to activate this virtual environment. This is done with this simple command:

$ source $HOME/tmp/deepspeech-train-venv/bin/activate

## Installing DeepSpeech Training Code and its dependencies

Install the required dependencies using pip3:

cd DeepSpeech
pip3 install --upgrade pip==20.0.2 wheel==0.34.2 setuptools==46.1.3
pip3 install --upgrade -e .

Remember to re-run the last pip3 install command above when you update the training code (for example by pulling new changes), in order to update any dependencies.

The webrtcvad Python package might require you to ensure you have proper tooling to build Python modules:

sudo apt-get install python3-dev

## Recommendations

If you have a capable (NVIDIA, at least 8GB of VRAM) GPU, it is highly recommended to install TensorFlow with GPU support. Training will be significantly faster than using the CPU. To enable GPU support, you can do:

pip3 uninstall tensorflow
pip3 install 'tensorflow-gpu==1.15.2'

Please ensure you have the required CUDA dependency.

It has been reported for some people failure at training:

tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[{{node tower_0/conv1d/Conv2D}}]]

Setting the TF_FORCE_GPU_ALLOW_GROWTH environment variable to true seems to help in such cases. This could also be due to an incorrect version of libcudnn. Double check your versions with the TensorFlow 1.15 documentation.

## Basic Dockerfile for training

We provide Dockerfile.train to automatically set up a basic training environment in Docker. You need to generate the Dockerfile from the template using:

make Dockerfile.train

This should ensure that you'll re-use the upstream Python 3 TensorFlow GPU-enabled Docker image.

If you want to specify a different DeepSpeech repository / branch, you can pass DEEPSPEECH_REPO or DEEPSPEECH_SHA parameters:

make Dockerfile.train DEEPSPEECH_REPO=git://your/fork DEEPSPEECH_SHA=origin/your-branch

## Common Voice training data

The Common Voice corpus consists of voice samples that were donated through Mozilla's Common Voice Initiative. You can download individual CommonVoice v2.0 language data sets from here. After extraction of such a data set, you'll find the following contents:

• the *.tsv files output by CorporaCreator for the downloaded language
• the mp3 audio files they reference in a clips sub-directory.

For bringing this data into a form that DeepSpeech understands, you have to run the CommonVoice v2.0 importer (bin/import_cv2.py):

bin/import_cv2.py --filter_alphabet path/to/some/alphabet.txt /path/to/extracted/language/archive

Providing a filter alphabet is optional. It will exclude all samples whose transcripts contain characters not in the specified alphabet. Running the importer with -h will show you some additional options. Once the import is done, the clips sub-directory will contain for each required .mp3 an additional .wav file. It will also add the following .csv files:

• clips/train.csv
• clips/dev.csv
• clips/test.csv

All entries in these CSV files refer to their samples by absolute paths. So moving this sub-directory would require another import or tweaking the CSV files accordingly.

To use Common Voice data during training, validation and testing, you pass (comma separated combinations of) their filenames into --train_files, --dev_files, --test_files parameters of DeepSpeech.py. If, for example, Common Voice language en was extracted to ../data/CV/en/, DeepSpeech.py could be called like this:

python3 DeepSpeech.py --train_files ../data/CV/en/clips/train.csv --dev_files ../data/CV/en/clips/dev.csv --test_files ../data/CV/en/clips/test.csv
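If you want to sanity-check an importer-generated CSV before training, something like the following works. The three-column layout (wav_filename, wav_filesize, transcript) is the one the DeepSpeech importers emit; the path below is just the example location used above:

```python
import csv

with open("../data/CV/en/clips/train.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Each row points at one .wav sample and its transcript.
        print(row["wav_filename"], row["wav_filesize"], row["transcript"])
        break  # peek at the first entry only
```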
## Training a model

The central (Python) script is DeepSpeech.py in the project's root directory. For its list of command line options, you can call:

python3 DeepSpeech.py --helpfull

To get the output of this in a slightly better-formatted way, you can also look at the flag definitions in Command-line flags for the training scripts.

For executing pre-configured training scenarios, there is a collection of convenience scripts in the bin folder. Most of them are named after the corpora they are configured for. Keep in mind that most speech corpora are very large, on the order of tens of gigabytes, and some aren't free. Downloading and preprocessing them can take a very long time, and training on them without a fast GPU (GTX 10 series or newer recommended) takes even longer.

If you experience GPU OOM errors while training, try reducing the batch size with the --train_batch_size, --dev_batch_size and --test_batch_size parameters.

As a simple first example you can open a terminal, change to the directory of the DeepSpeech checkout, activate the virtualenv created above, and run:

./bin/run-ldc93s1.sh

This script will train on a small sample dataset composed of just a single audio file, the sample file for the TIMIT Acoustic-Phonetic Continuous Speech Corpus, which can be overfitted on a GPU in a few minutes for demonstration purposes. From here, you can alter any variables with regards to what dataset is used, how many training iterations are run and the default values of the network parameters. Feel also free to pass additional (or overriding) DeepSpeech.py parameters to these scripts. Then, just run the script to train the modified network.

Each dataset has a corresponding importer script in bin/ that can be used to download (if it's freely available) and preprocess the dataset. See bin/import_librivox.py for an example of how to import and preprocess a large dataset for training with DeepSpeech.

Some importers might require additional code to properly handle your locale-specific requirements. Such handling is dealt with by the --validate_label_locale flag, which allows you to source an out-of-tree Python script that defines a validate_label function. Please refer to util/importers.py for an implementation example of that function. If you don't provide this argument, the default validate_label function will be used. This one is only intended for the English language, so you might have consistency issues in your data for other languages.
For example, in order to use a custom validation function that disallows any sample with "a" in its transcript, and lower cases everything else, you could put the following code in a file called my_validation.py and then use --validate_label_locale my_validation.py:

    def validate_label(label):
        if 'a' in label:  # disallow labels with 'a'
            return None
        return label.lower()  # lower case valid labels

If you've run the old importers (in util/importers/), they could have removed source files that are needed for the new importers to run. In that case, simply remove the extracted folders and let the importer extract and process the dataset from scratch, and things should work.

## Training with automatic mixed precision

Automatic Mixed Precision (AMP) training on GPU for TensorFlow has been recently introduced (https://medium.com/tensorflow/automatic-mixed-precision-in-tensorflow-for-faster-ai-training-on-nvidia-gpus-6033234b2540). Mixed precision training makes use of both FP32 and FP16 precisions where appropriate. FP16 operations can leverage the Tensor cores on NVIDIA GPUs (Volta, Turing or newer architectures) for improved throughput. Mixed precision training also often allows larger batch sizes. Automatic mixed precision training can be enabled by including the flag --automatic_mixed_precision at training time:

python3 DeepSpeech.py --train_files ./train.csv --dev_files ./dev.csv --test_files ./test.csv --automatic_mixed_precision

On a Volta generation V100 GPU, automatic mixed precision speeds up DeepSpeech training and evaluation by ~30%-40%.

## Checkpointing

During training of a model so-called checkpoints will get stored on disk. This takes place at a configurable time interval. The purpose of checkpoints is to allow interruption (also in the case of some unexpected failure) and later continuation of training without losing hours of training time. Resuming from checkpoints happens automatically by just (re)starting training with the same --checkpoint_dir of the former run. Alternatively, you can specify more fine grained options with --load_checkpoint_dir and --save_checkpoint_dir, which specify separate locations to use for loading and saving checkpoints respectively. If not specified these flags use the same value as --checkpoint_dir, i.e. load from and save to the same directory.

Be aware however that checkpoints are only valid for the same model geometry they had been generated from. In other words: if there are error messages of certain Tensors having incompatible dimensions, this is most likely due to an incompatible model change. One usual way out would be to wipe all checkpoint files in the checkpoint directory or changing it before starting the training.

## Exporting a model for inference

If the --export_dir parameter is provided, a model will have been exported to this directory during training. Refer to the usage instructions for information on running a client that can use the exported model.

## Exporting a model for TFLite

If you want to experiment with the TF Lite engine, you need to export a model that is compatible with it, then use the --export_tflite flags. If you already have a trained model, you can re-export it for TFLite by running DeepSpeech.py again and specifying the same checkpoint_dir that you used for training, as well as passing --export_tflite --export_dir /model/export/destination.

## Making a mmap-able model for inference

The output_graph.pb model file generated in the above step will be loaded in memory to be dealt with when running inference.
This will result in extra loading time and memory consumption. One way to avoid this is to read the data directly from disk. TensorFlow has tooling to achieve this: it requires building the target //tensorflow/contrib/util:convert_graphdef_memmapped_format (binaries are produced by our TaskCluster for some systems, including Linux/amd64 and macOS/amd64); use the util/taskcluster.py tool to download it:

$ python3 util/taskcluster.py --source tensorflow --artifact convert_graphdef_memmapped_format --branch r1.15 --target .
Producing a mmap-able model is as simple as:
\$ convert_graphdef_memmapped_format --in_graph=output_graph.pb --out_graph=output_graph.pbmm
Upon a successful run, it should report the conversion of a non-zero number of nodes. If it reports converting 0 nodes, something is wrong: make sure your model is a frozen one, and that you have not applied any incompatible changes (this includes quantize_weights).
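As a quick illustration of consuming the exported mmap-able model from Python, here is a minimal sketch using the deepspeech pip package. Treat it as an assumption-laden example rather than the official client: the Model constructor and stt signatures have changed between releases, and all file names are placeholders; refer to the usage instructions for the authoritative examples.

import wave
import numpy as np
from deepspeech import Model

# Recent releases take only the model path; older ones also required a beam width.
ds = Model("output_graph.pbmm")
with wave.open("audio.wav", "rb") as w:  # expects 16-bit, 16 kHz, mono PCM
    audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
print(ds.stt(audio))  # older releases also required the sample rate here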
### Continuing training from a release model¶
There are currently two supported approaches to make use of a pre-trained DeepSpeech model: fine-tuning or transfer-learning. Choosing which one to use is a simple decision that depends on your target dataset. Does your data use the same alphabet as the release model? If “Yes”: fine-tune. If “No”: use transfer-learning.
If your own data uses the exact same alphabet as the English release model (i.e. a-z plus the apostrophe), then the release model’s output layer will match your data, and you can just fine-tune the existing parameters. However, if you want to use a new alphabet (e.g. Cyrillic а, б, д), the output layer of a release DeepSpeech model will not match your data. In this case, you should use transfer-learning (i.e. remove the trained model’s output layer and reinitialize a new output layer that matches your target character set).
N.B. - If you have access to a pre-trained model which uses UTF-8 bytes at the output layer you can always fine-tune, because any alphabet should be encodable as UTF-8.
## Fine-Tuning (same alphabet)¶
If you’d like to use one of the pre-trained models released by Mozilla to bootstrap your training process (fine tuning), you can do so by using the --checkpoint_dir flag in DeepSpeech.py. Specify the path where you downloaded the checkpoint from the release, and training will resume from the pre-trained model.
For example, if you want to fine-tune the entire graph using your own data in my-train.csv, my-dev.csv and my-test.csv, for three epochs, you can run something like the following, tuning the hyperparameters as needed:
mkdir fine_tuning_checkpoints
python3 DeepSpeech.py --n_hidden 2048 --checkpoint_dir path/to/checkpoint/folder --epochs 3 --train_files my-train.csv --dev_files my-dev.csv --test_files my-test.csv --learning_rate 0.0001
Notes about the release checkpoints: the release models were trained with --n_hidden 2048, so you need to use that same value when initializing from them. Since v0.6.0, the release models have also been trained with --train_cudnn, so you’ll need to specify that as well. If you don’t have a CUDA-compatible GPU, you can work around this by using the --load_cudnn flag. Use --helpfull to get more information on how the flags work.
You also cannot use --automatic_mixed_precision when loading release checkpoints, as they do not use automatic mixed precision training.
If you try to load a release model without following these steps, you’ll get an error similar to this:
E Tried to load a CuDNN RNN checkpoint but there were more missing variables than just the Adam moment tensors.
## Transfer-Learning (new alphabet)¶
If you want to continue training an alphabet-based DeepSpeech model (i.e. not a UTF-8 model) on a new language, or if you just want to add new characters to your custom alphabet, you will probably want to use transfer-learning instead of fine-tuning. If you’re starting with a pre-trained UTF-8 model – even if your data comes from a different language or uses a different alphabet – the model will be able to predict your new transcripts, and you should use fine-tuning instead.
In a nutshell, DeepSpeech’s transfer-learning allows you to remove certain layers from a pre-trained model, initialize new layers for your target data, stitch together the old and new layers, and update all layers via gradient descent. You will remove the pre-trained output layer (and optionally more layers) and reinitialize parameters to fit your target alphabet. The simplest case of transfer-learning is when you remove just the output layer.
In DeepSpeech’s implementation of transfer-learning, all removed layers will be contiguous, starting from the output layer. The key flag you will want to experiment with is --drop_source_layers. This flag accepts an integer from 1 to 5 and allows you to specify how many layers you want to remove from the pre-trained model. For example, if you supplied --drop_source_layers 3, you would drop the last three layers of the pre-trained model: the output layer, the penultimate layer, and the LSTM layer. All dropped layers will be reinitialized, and (crucially) the output layer will be defined to match your supplied target alphabet.
You need to specify the location of the pre-trained model with --load_checkpoint_dir and define where your new model checkpoints will be saved with --save_checkpoint_dir. You need to specify how many layers to remove (aka “drop”) from the pre-trained model: --drop_source_layers. You also need to supply your new alphabet file using the standard --alphabet_config_path (remember, using a new alphabet is the whole reason you want to use transfer-learning).
python3 DeepSpeech.py \
--drop_source_layers 1 \
--alphabet_config_path my-new-language-alphabet.txt \
--load_checkpoint_dir path/to/pre-trained-checkpoint/folder \
--save_checkpoint_dir path/to/output-checkpoint/folder \
--train_files my-new-language-train.csv \
--dev_files my-new-language-dev.csv \
--test_files my-new-language-test.csv
## UTF-8 mode¶
DeepSpeech includes a UTF-8 operating mode which can be useful to model languages with very large alphabets, such as Chinese Mandarin. For details on how it works and how to use it, see CTC beam search decoder.
## Augmentation¶
Augmentation is a useful technique for improving the generalization of machine learning models. Thus, a pre-processing pipeline with various augmentation techniques on raw PCM and spectrogram data has been implemented and can be used while training the model. The following augmentation techniques can be enabled at training time by using the corresponding flags on the command line.
Each sample of the training data will get treated by every specified augmentation in their given order. However, whether an augmentation will actually get applied to a sample is decided by chance, based on the augmentation’s probability value. For example, a value of p=0.1 would apply the corresponding augmentation to just 10% of all samples. This also means that augmentations are not mutually exclusive on a per-sample basis.
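To make this mechanism concrete, here is an illustrative sketch of per-sample, per-augmentation application (this is not DeepSpeech’s internal code):

import random

def augment_sample(sample, augmentations):
    # augmentations: list of (transform, p) pairs in command-line order;
    # each one fires independently, so several can hit the same sample
    for transform, p in augmentations:
        if random.random() < p:
            sample = transform(sample)
    return sample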
The --augment flag uses a common syntax for all augmentation types:
--augment augmentation_type1[param1=value1,param2=value2,...] --augment augmentation_type2[param1=value1,param2=value2,...] ...
For example, for the overlay augmentation:
python3 DeepSpeech.py --augment overlay[p=0.1,source=/path/to/audio.sdb,snr=20.0] ...
In the documentation below, whenever a value is specified as <float-range> or <int-range>, it supports one of the following formats:
• <value>: A constant (int or float) value.
• <value>~<r>: A center value with a randomization radius around it. E.g. 1.2~0.4 will result in picking a uniformly random value between 0.8 and 1.6 on each sample augmentation (see the sketch after this list).
• <start>:<end>: The value will range from <start> at the beginning of the training to <end> at the end of the training. E.g. -0.2:1.2 (float) or 2000:4000 (int)
• <start>:<end>~<r>: Combination of the two previous cases with a ranging center value. E.g. 4:6~2 would pick values between 2 and 6 at the beginning of the training and between 4 and 8 at the end of the training.
Ranges specified with integer limits will only assume integer (rounded) values.
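An illustrative sketch of how such a spec behaves (not DeepSpeech’s actual parsing code; clock stands for the training progress from 0.0 at the start to 1.0 at the end, matching the --clock flag of bin/play.py shown further below):

import random

def pick(start, end, r, clock):
    center = start + clock * (end - start)         # <start>:<end> interpolation
    return random.uniform(center - r, center + r)  # ~<r> randomization radius

pick(4, 6, 2, clock=0.0)  # "4:6~2" at training start: uniform in [2, 6]
pick(4, 6, 2, clock=1.0)  # at training end: uniform in [4, 8]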
Warning
When feature caching is enabled, by default the cache has no expiration limit and will be used for the entire training run. This will cause these augmentations to only be performed once, during the first epoch, with the result reused for all subsequent epochs. This would not only prevent value ranges from reaching their intended final values, but could also lead to unintended over-fitting. In this case the flag --cache_for_epochs N (with N > 1) should be used to periodically invalidate the cache after every N epochs and thus allow samples to be re-augmented in new ways and with current range values.
Every augmentation targets a certain representation of the sample - in this documentation these representations are referred to as domains. Augmentations are applied in the following order:
1. sample domain: The sample has just been loaded and its waveform is represented as a NumPy array. For implementation reasons, these augmentations are the only ones that can be “simulated” through bin/play.py.
2. signal domain: The sample waveform is represented as a tensor.
3. spectrogram domain: The sample spectrogram is represented as a tensor.
4. features domain: The sample’s mel spectrogram features are represented as a tensor.
Within a single domain, augmentations are applied in the same order as they appear in the command-line.
### Sample domain augmentations¶
Overlay augmentation --augment overlay[p=<float>,source=<str>,snr=<float-range>,layers=<int-range>]
Layers another audio source (multiple times) onto augmented samples.
• p: probability between 0.0 (never) and 1.0 (always) that a given sample gets augmented by this method
• source: path to the sample collection to use for augmenting (*.sdb or *.csv file). It will be repeated if there are not enough samples left.
• snr: signal-to-noise ratio in dB - positive values lower the volume of the overlay relative to the sample
• layers: number of layers added onto the sample (e.g. 10 layers of speech to get “cocktail-party effect”). A layer is just a sample of the same duration as the sample to augment. It gets stitched together from as many source samples as required.
Reverb augmentation --augment reverb[p=<float>,delay=<float-range>,decay=<float-range>]
Adds simplified (no all-pass filters) Schroeder reverberation to the augmented samples.
• p: probability between 0.0 (never) and 1.0 (always) that a given sample gets augmented by this method
• delay: time delay in ms for the first signal reflection - higher values widen the perceived “room”
• decay: sound decay in dB per reflection - higher values result in a less reflective perceived “room”
Resample augmentation --augment resample[p=<float>,rate=<int-range>]
Resamples augmented samples to another sample rate and then resamples back to the original sample rate.
• p: probability between 0.0 (never) and 1.0 (always) that a given sample gets augmented by this method
• rate: sample-rate to re-sample to
Codec augmentation --augment codec[p=<float>,bitrate=<int-range>]
Compresses and then decompresses augmented samples using the lossy Opus audio codec.
• p: probability between 0.0 (never) and 1.0 (always) that a given sample gets augmented by this method
• bitrate: bitrate used during compression
Volume augmentation --augment volume[p=<float>,dbfs=<float-range>]
Measures and levels augmented samples to a target dBFS value.
• p: probability between 0.0 (never) and 1.0 (always) that a given sample gets augmented by this method
• dbfs: target volume in dBFS (default value of 3.0103 will normalize min and max amplitudes to -1.0/1.0)
### Spectrogram domain augmentations¶
Pitch augmentation --augment pitch[p=<float>,pitch=<float-range>]
Scales spectrogram on frequency axis and thus changes pitch.
• p: probability between 0.0 (never) and 1.0 (always) that a given sample gets augmented by this method
• pitch: pitch factor by which the frequency axis is scaled (e.g. a value of 2.0 will raise the audio frequency by one octave)
Tempo augmentation --augment tempo[p=<float>,factor=<float-range>]
Scales spectrogram on time axis and thus changes playback tempo.
• p: probability between 0.0 (never) and 1.0 (always) that a given sample gets augmented by this method
• factor: speed factor by which the time axis is stretched or shrunk (e.g. a value of 2.0 will double the playback tempo)
Frequency mask augmentation --augment frequency_mask[p=<float>,n=<int-range>,size=<int-range>]
Sets frequency-intervals within the augmented samples to zero (silence) at random frequencies. See the SpecAugment paper for more details - https://arxiv.org/abs/1904.08779
• p: probability between 0.0 (never) and 1.0 (always) that a given sample gets augmented by this method
• n: number of intervals to mask
• size: number of frequency bands to mask per interval
### Multi domain augmentations¶
Time mask augmentation --augment time_mask[p=<float>,n=<int-range>,size=<float-range>,domain=<domain>]
Sets time-intervals within the augmented samples to zero (silence) at random positions.
• p: probability between 0.0 (never) and 1.0 (always) that a given sample gets augmented by this method
• n: number of intervals to set to zero
• size: duration of intervals in ms
• domain: data representation to apply augmentation to - “signal”, “features” or “spectrogram” (default)
Dropout augmentation --augment dropout[p=<float>,rate=<float-range>,domain=<domain>]
Zeros random data points of the targeted data representation.
• p: probability between 0.0 (never) and 1.0 (always) that a given sample gets augmented by this method
• rate: dropout rate ranging from 0.0 for no dropout to 1.0 for 100% dropout
• domain: data representation to apply augmentation to - “signal”, “features” or “spectrogram” (default)
Add augmentation --augment add[p=<float>,stddev=<float-range>,domain=<domain>]
Adds random values picked from a normal distribution (with a mean of 0.0) to all data points of the targeted data representation.
• p: probability between 0.0 (never) and 1.0 (always) that a given sample gets augmented by this method
• stddev: standard deviation of the normal distribution to pick values from
• domain: data representation to apply augmentation to - “signal”, “features” (default) or “spectrogram”
Multiply augmentation --augment multiply[p=<float>,stddev=<float-range>,domain=<domain>]
Multiplies all data points of the targeted data representation with random values picked from a normal distribution (with a mean of 1.0).
• p: probability between 0.0 (never) and 1.0 (always) that a given sample gets augmented by this method
• stddev: standard deviation of the normal distribution to pick values from
• domain: data representation to apply augmentation to - “signal”, “features” (default) or “spectrogram”
Example training command combining several of these augmentations:
python -u DeepSpeech.py \
--train_files "train.sdb" \
--feature_cache ./feature.cache \
--cache_for_epochs 10 \
--epochs 100 \
--augment overlay[p=0.5,source=noise.sdb,layers=1,snr=50:20~10] \
--augment reverb[p=0.1,delay=50.0~30.0,decay=10.0:2.0~1.0] \
--augment resample[p=0.1,rate=12000:8000~4000] \
--augment codec[p=0.1,bitrate=48000:16000] \
--augment volume[p=0.1,dbfs=-10:-40] \
--augment pitch[p=0.1,pitch=1~0.2] \
--augment tempo[p=0.1,factor=1~0.5] \
--augment dropout[p=0.1,rate=0.05] \
--augment multiply[p=0.1,domain=features,stddev=0~0.5] \
[...]
The bin/play.py tool also supports --augment parameters (for sample domain augmentations) and can be used for experimenting with different configurations.
Example of playing all samples with reverberation and maximized volume:
bin/play.py --augment reverb[p=0.1,delay=50.0,decay=2.0] --augment volume --random test.sdb
Example simulation of the codec augmentation of a wav-file first at the beginning and then at the end of an epoch:
bin/play.py --augment codec[p=0.1,bitrate=48000:16000] --clock 0.0 test.wav
bin/play.py --augment codec[p=0.1,bitrate=48000:16000] --clock 1.0 test.wav
|
2022-05-22 09:55:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32546308636665344, "perplexity": 4011.4446017305268}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545326.51/warc/CC-MAIN-20220522094818-20220522124818-00126.warc.gz"}
|
https://fhi-aims-club.gitlab.io/tutorials/rpa-and-gw-for-molecules-and-solids/Part-1/CounterpoiseCorrection/
|
# The Counterpoise Correction
The convergence of observables with the number of basis functions has been demonstrated to be very slow for electronic structure methods that also depend on the unoccupied states (cf. the chapter on GW). This has an immediate consequence when studying bond breaking or formation between two or more reactants: the basis functions localized on one reactant can act as additional functions for the other reactant, and vice versa. This error is called the basis set superposition error (BSSE). For finite basis-set sizes, this can introduce a significant error in the bonding curves – the smaller the basis set, the larger the BSSE.
The brute-force way to account for the BSSE is to add more and more basis functions to the system. In practice, however, this is not always possible, as the computational cost of such calculations can explode rapidly. Alternatively, it is possible to calculate the counterpoise correction, which approximates the bias in the observables of each reactant due to the additional basis functions present in the system with both reactants included.
Here, we will study the BSSE and the corresponding counterpoise correction for the sodium dimer for different basis sets. We will use the "RPA + Exact Exchange" total energy method (that is, the correlation energy in the random-phase approximation plus the exact exchange energy) to demonstrate this effect. The physical observable of interest in this part is the binding energy of the sodium dimer:
$E_\text{bind} = E_\text{Na-Na} - 2*E_\text{Na}$
where $$E_\text{Na-Na}$$ refers to the total energy of the Na dimer and $$E_\text{Na}$$ to the total energy of the single Na atom.
## How to perform counterpoise correction
We will explain the counterpoise correction for the case of two reactants. The calculation of the counterpoise correction requires three steps:
1. Calculate the total energy with both reactants, including all basis functions and nuclei. This total energy is referred to as $$E_{12}$$.
2. Perform a calculation for each reactant, using the same geometry as in the complex for each of the parts. This will give the energies $$E_{1}$$ and $$E_{2}$$.
3. Repeat the calculations of step 2, this time including the basis functions of the other reactant at their original positions from step 1, but without including the charge of the nuclei. This will give the energies $$E_{1}^\ast$$ and $$E_{2}^\ast$$. In FHI-aims, step 3 can be realized by using the empty keyword instead of the atom keyword in the geometry.in file.
The counterpoise correction is the difference of the total energies of the single reactants with and without the additional basis functions included:
$\Delta E_\text{cpc} = (E_{1}^\ast - E_{1}) + (E_{2}^\ast - E_{2})$
The final counterpoise correction for the dimer reduces to (since both reactants are identical): $$\Delta E_\text{cpc} = 2 * (E_\text{atom}^\ast - E_\text{atom})$$ where $$E_\text{atom}$$ is the total energy for the single atom.
Let us get more specific. We want to discuss the effect of the counterpoise correction for the Na dimer. In case of a dimer, the counterpoise corrected binding energy is:
$E_\text{bind, cpc} = E_\text{Na-Na} - 2*E_\text{Na} - \Delta E_\text{cpc} = E_\text{Na-Na} - 2*E_\text{Na}^\ast$
### The geometry.in files
To calculate the counterpoise correction we need to set up three different geometries:
For step 1, we need the following geometry.in file:
atom 0.0 0.0 0.0 Na
atom 0.0 0.0 <z> Na
Here, <z> is the bonding distance of the Na atoms in the dimer, which will be varied over a certain range.
For step 2:
atom 0.0 0.0 0.0 Na
initial_moment 1.000000
The Na atom naturally has one unpaired electron, so we initialize its spin state with an initial moment of 1.
And finally for step 3:
empty 0.0 0.0 0.0 Na
atom 0.0 0.0 <z> Na
initial_moment 1.000000
Here, we replace atom with the empty keyword for the first Na atom. This means that no nuclear charge will be considered, but all of the basis functions will be included at this coordinate.
### The control.in files
The corresponding control.in file for step 1 is:
xc pbe
relativistic atomic_zora scalar
total_energy_method rpa
This triggers a non-spin-polarized PBE calculation (xc pbe) with a subsequent RPA calculation (total_energy_method rpa). Beforehand, we tested different spin configurations for the Na dimer (up-up, up-down), but it turned out that for the various distances the total spin moment comes out to zero.
For the steps 2 and 3 (spin collinear calculations) the control.in file is:
xc pbe
spin collinear
relativistic atomic_zora scalar
total_energy_method rpa
Very large basis sets and nearly singular overlap matrix
For very large basis sets it might be necessary to include the keyword override_illconditioning .true. in the control.in file, which allows you to override a built-in stop in FHI-aims and run with a nearly singular overlap matrix. If included, you should take special care that the results appear reasonable (e.g. by cross-checking against smaller basis sets).
## The binding curve of the Na dimer
To demonstrate the drastic impact of the BSSE, we will perform single-point calculations for the following distances of the Na dimer (in Å):
2.0, 2.2, 2.4, 2.6, 2.8, 3.0, 3.2, 3.4, 3.6
For each distance, you should create a separate folder. For every distance, you will need to perform two calculations: raw and counterpoise, where raw refers to step 1 (the dimer geometry with all basis functions and nuclear charges included) and counterpoise refers to step 3 (the dimer geometry, but with one atom as an empty site, i.e. without its nuclear charge).
Overall, you should have a folder structure that looks as follows:
light
├── atom
├── counterpoise
│ ├── 2.0
│ ├── 2.2
│ ├── 2.4
│ ├── 2.6
│ ├── 2.8
│ ├── 3.0
│ ├── 3.2
│ ├── 3.4
│ └── 3.6
└── raw
├── 2.0
├── 2.2
├── 2.4
├── 2.6
├── 2.8
├── 3.0
├── 3.2
├── 3.4
└── 3.6
As a first test we will use the light species defaults. Attach the light species defaults to all control.in files, and for all folders create the corresponding geometry.in files (with the correct dimer distance) and control.in files.
Since we want to calculate the counterpoise correction for different basis sets, it is advisable to create a script for generating the folder structure; a sketch follows below. In total, you need to run 19 calculations for a single basis set.
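A minimal Python sketch for generating the geometry.in files of the tree above (the control.in files, species defaults and job submission are left out; this is a convenience helper, not part of FHI-aims):

import os

distances = [2.0, 2.2, 2.4, 2.6, 2.8, 3.0, 3.2, 3.4, 3.6]

def write(path, lines):
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

# step 2: the free atom (spin-polarized)
write("light/atom/geometry.in",
      ["atom 0.0 0.0 0.0 Na", "initial_moment 1.000000"])

for d in distances:
    # step 1: the full dimer ("raw")
    write(f"light/raw/{d}/geometry.in",
          ["atom 0.0 0.0 0.0 Na", f"atom 0.0 0.0 {d} Na"])
    # step 3: one empty site carrying only basis functions ("counterpoise")
    write(f"light/counterpoise/{d}/geometry.in",
          ["empty 0.0 0.0 0.0 Na", f"atom 0.0 0.0 {d} Na", "initial_moment 1.000000"])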
Repeat the above 19 calculations, for the following species defaults:
• light: defaults_2020/light
• tight: defaults_2020/tight
• cc-pV5Z (Dunning correlation-consistent Gaussian basis sets): non-standard/gaussian_tight_770/cc-pV5Z
• NAO-VCC-5Z (Dunning-like valence-correlation-consistent NAO basis sets): NAO-VCC-nZ/NAO-VCC-5Z
Warning
For the basis sets cc-pV5Z and NAO-VCC-5Z you have to set override_illconditioning .true. in all of the corresponding control.in files.
Plot the raw (uncorrected) and counterpoise-corrected binding energies as a function of the dimer distance. You should be able to reproduce a plot similar to the one shown below.
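A plotting sketch is given below; it assumes you have already parsed the total energies (in eV) from the FHI-aims output files into Python lists, as the parsing itself is not shown:

import matplotlib.pyplot as plt

def plot_binding(d, E_dimer, E_ghost, E_atom):
    # d: dimer distances in Angstrom; E_dimer: step-1 dimer energies;
    # E_ghost: step-3 atom-plus-ghost-basis energies; E_atom: step-2 free atom
    raw = [e - 2 * E_atom for e in E_dimer]
    cpc = [e - 2 * g for e, g in zip(E_dimer, E_ghost)]
    plt.plot(d, raw, "--", label="raw")
    plt.plot(d, cpc, "-", label="counterpoise-corrected")
    plt.xlabel("Na-Na distance (Å)")
    plt.ylabel("binding energy (eV)")
    plt.legend()
    plt.show()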
For the discussion of this plot, let us first focus on the light results (blue curves). The dashed lines show the raw (uncorrected) binding curve. Comparing with the counterpoise-corrected binding curve, we find that at the minimum of the binding curve the binding energy increases by ~0.2 eV and the dimer distance moves ~0.2 Å to the right due to the counterpoise correction.
Now look at the tight result. The effect of the BSSE is enormous: the minimum is off by 0.7 Å and 1.1 eV too low compared to the corrected curve. At first glance, this may appear inconsistent with the light results, since we promised at the beginning that the BSSE decreases with increasing basis set size. However, the FHI-aims NAO species defaults not only affect the number of basis functions, they also affect the real-space integration grid, which for tight settings is also tightened (the cut-off radius and the number of grid points are increased). Overall, the NAO FHI-aims basis sets are optimized for DFT calculations and do not guarantee systematic convergence for beyond-DFT methods.
Let's look at the non-standard NAO and Gaussian basis sets NAO-VCC-5Z and cc-pV5Z. We find that, despite a difference in the uncorrected data, the results for the counterpoise-corrected binding energies agree very well. In these cases the BSSE is still significant, although we used the largest available basis sets.
|
2021-12-06 10:51:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6791398525238037, "perplexity": 2010.1203432797195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363292.82/warc/CC-MAIN-20211206103243-20211206133243-00328.warc.gz"}
|
http://opnc.carrosseriesayoduval.fr/python-total-variation.html
|
# Python Total Variation
This handout is designed to explain the STATA readout you get when doing regression. total_variation(images)). The imagej-ops project gives algorithm developers a framework to implement, organize and test various approaches to deconvolution. For instance, the KS distance between two distinct $\delta$-measures is always 1, their total variation distance is 2, whereas the transportation distance between them is equal to the distance between the corresponding points, so that it correctly reflects their similarity. •Total Variation (TV) smoothing preserves sharp transitions in signal, and this is not bad •Note that how TV reconstruction does a better job of preserving the sharp transitions in the signal while removing the noise. 1D Log Gabor/ 2D Gabor/ DCT/ SIFT/ SURF/LBP: 0. 1: Procedural Abstraction must know the details of how operating systems work, how network protocols are configured, and how to code various scripts that control function. Journal of Applied Mathematics is a peer-reviewed, Open Access journal devoted to the publication of original research papers and review articles in all areas. You can also explore the functions inside lm object by pressing lm. 075 Loss in iteration 25 of 500: 2. This calls for image inpainting in wavelet domains. This package provides an implementation of the current state-of-art algorithm using the concept of augmented Lagrangian [1], which can be considered as a variation of the popularly known Alternating Direction Methods of Multipliers (ADMM). The algorithm is based on a linear DTS model and total variation regularization. " So if it is 100%, the two variables are perfectly correlated, i. Total Variation denoising¶. TV is L1 norm of gradient of an image. Gauge R&R study checks the suitability of your measurement system. PCA is typically employed prior to implementing a machine learning algorithm because it minimizes the number of variables used to explain the maximum amount of variance for a given data set. This is an excerpt from the Python Data Science Handbook by Jake VanderPlas; Jupyter notebooks are available on GitHub. However TV regularization does not require learning (only one parameter to tune), is very fast, can handle large images at once, and will produce the same result no matter the initialization. How are each of these terms computed? The total variation loss $$T(x)$$ is the simplest one to understand: It measures the average sum of squared differences among adjacent pixel values and encourages the result $$x$$ to be a smooth image. 77MB 所需: 3 积分/C币 立即下载 最低0. total variation denoising, a well-studied problem that carries a vast literature spanning the elds of statistics, computer science, electrical engineering, and others (for example, see [26]). These functions are stored in the database and are available for any user with sufficient privileges to run them. stdev() function exists in Standard statistics Library of Python Programming Language. Cabin column are almost filled with missing values with variation in occurrence, and Embarked column has few missing values in the beginning part. Image Restoration Using Total Variation Regularized Deep Image Prior. FASTA (Fast Adaptive Shrinkage/Thresholding Algorithm) is an efficient, easy-to-use implementation of the Forward-Backward Splitting (FBS) method (also known as the proximal gradient method) for regularized optimization problems. Here is a quick python script which calculates average, variance and standard deviation. Use total variation filter denoising to accomplish this. 
Compact mutable sequences of bits (vectors of 0s and 1s) supporting various boolean operations, and a “binned” variation which stores long runs of identical bits compactly. Both the server and the client program for Eve Online are developed using Stackless Python, a variation of the Python programming language. 图像风格迁移实战(附Python实战)。作者 | 小韩 编辑 | 安可 在今天的文章中,我们会建立一个很棒的风格迁移网络。加载预训练的卷积神经网络(VGG16)。4# 输入可视化 4# 风格图像可视化 定义了CNN模型后,还需要定义一个内容损失函数。. This can be used as a loss-function during optimization so as to suppress noise in images. Run this all through a Makefile. In the proposed approach the points are ordered by their distance to the closest center minus the distance to the farthest cluster. The algorithm can now compare the result and select the best variance out of it K-Means Algorithm 1stIteration 2ndIteration 3rdIteration. axis int or None, optional. We’ve gone ahead and written sample Python code that will help you get started with a simple Ducksboard integration. A discrete linearized complementarity system is solved using projective alternate quadrant in terlocking factorization (PAQIF) algorithm. The only item. Then frame-1, frame-2 and frame-3 are used to denoise frame-2. IEEE Access February 1, 2019 A novel total variation (TV) framework is conceived for joint detection and dynamic state estimation (JDSE) for wireless transmission from the measurement devices to the control center in a smart grid. corrcoef¶ numpy. This is a Python implementation of Total Variation Denoising method proposed by Guy Gilboa. Knoll F, Holler M, Koesters T, Bredies K, Sodickson D: Simultaneous PET-MRI reconstruction with vectorial second order total generalized variation. 1 while loop Motivation Using our current set of tools, repeating a simple statement many times is tedious. The exploration and processing of images is a vital aspect of the scientific workflows of many X-ray imaging modalities. py files) can be viewed if the user knows either the filename of a file in the same directory as this script or the full path of a file somewhere on the host computer. The % total variation is always equal to 100%. 因为它们都有相似的概率意义,比如说pinsker's theorem保证了KL-divergence是total variation metric的一个tight bound. The module is not intended to be a competitor to third-party libraries such as NumPy, SciPy, or proprietary full-featured statistics packages aimed at professional statisticians such as Minitab, SAS and Matlab. 【Python】 主双対アルゴリズムを用いた Total Variation L1 正則化. ANOVA tests whether the average amount of variation between groups is greater than the average amount of variation within groups. This calculator will generate a complete one-way analysis of variance (ANOVA) table for up to 10 groups, including sums of squares, degrees of freedom, mean squares, and F and p-values, given the mean, standard deviation, and number of subjects in each group. In: Bruhn A. Let imgToDenoiseIndex = 2 and temporalWindowSize = 3. Here, the 'total variation' is the sum of the variance of each of the random variables (that is, the trace of the covariance matrix, i. How to find a coefficient of variation in Excel. At times, reality is not what we see or perceive. Magnetic Variation is due to the differing positions of the Geographic North Pole and the Magnetic North Pole. •Repeatability refers to the measurement variation obtained when one person repeatedly measures the same item with the same gage. As the underlying library uses FORTRAN-style matrices (column-order),. 
Numerical experiments show the more excellent visual quality of the proposed model compared with the second-order total bounded variation model which is proposed by Liu and Huang (2010). This package provides the MATLAB codes for the spectral total variation (STV) denoising algorithm [1], which is a new denoising algorithm for hyperspectral images that estimates different noise levels across the spectral axis from observed data. median_filter(). A variancePartition analysis gives a genome-wide summary of the drivers of variation, but also produces gene-level results to identify genes that deviate from the genome-wide trend. •Total Variation (TV) smoothing preserves sharp transitions in signal, and this is not bad •Note that how TV reconstruction does a better job of preserving the sharp transitions in the signal while removing the noise. But the statistical measurements of Cp, Cpk, Pp, and Ppk may provide more insight into the process. In Python expressions, use _argn (with a leading underscore). Processing X-ray tomography images with Python¶. It runs thru python fine off my desktop, idles while it watches for a trigger, i was attempting to run it through a. 524 Loss in iteration 100 of 500: 2. THE COLT PYTHON "I" FRAME. Since scientific computing with Python encompasses a. Scatter trace is a graph object in the figure's data list with any of the named arguments or attributes listed below. This variation is a measure of how much the parts vary and should be representative of what occurs in production if you are using the measurement system to control the process. Parameters a array_like. In python 2. Third is the temporalWindowSize which specifies the number of nearby frames to be used for denoising. A discrete linearized complementarity system is solved using projective alternate quadrant in terlocking factorization (PAQIF) algorithm. 2 Chapter 3: Total variation distance between measures total variation distance has properties that will be familiar to students of the Neyman-Pearson approach to hypothesis testing. An algorithm for total variation regularization in high-dimensional linear problems Michel Defrise1, Christian Vanhove1 and Xuan Liu2 1 Department of Nuclear Medicine, Vrije Universiteit Brussel, Laarbeeklaan 101, B-1090 Brussels, Belgium 2 Skyscan, Kartuizersweg 3B, 2550 Kontich, Belgium. An inverse relation between the optimal regularization parameter and the peak signal-to-noise ratio of an image is shown. Without loss of generality the factors are distributed according to a Gaussian with zero mean and unit covariance. Coefficient of variation. An Augmented Lagrangian Method for Total Variation Video Restoration Stanley H. Each principal component represents a percentage of total variation captured from the data. Data scientists are no less than. the sum of its eigenvalues). I'd like to create a function with two arguments (a, axis=0) that computes the coefficient of variation of each column or row (2-dimensional array) and returns the index of the column or row with the. A r-squared value of 100% means the model explains all the variation of the target variable. Therefore, if you want to write a somewhat longer program, you are better off using a text editor to prepare the input for the interpreter and running it with that file as input instead. Total Variation Denoising (An MM Algorithm) Total variation denoising (TVD) is an approach for noise reduction developed so as to preserve sharp edges in the underlying signal. 
Standard deviation is the square root of sample variation. What R-Squared tells us is the proportion of variation in the dependent (response) variable that has been explained by this model. If you find this content useful, please consider supporting the work by buying the book!. Q represents the heat added, c is the specific heat capacity of the substance you’re heating, and m is the mass of the substance you’re heating. Not long after the post, a group of scientists from Facebook and Courant introduced Wasserstein GAN, which uses Wasserstein distance, or the Earth Mover (EM) distance, instead of Jensen-Shannon (JS) divergence as the final…. Jared likes to make things. It is possible to change the degree of posterization by controlling the tradeoff between denoising and faithfulness to the original image. In most industrial process es the part variation is large compared to the gage variation and so the assumption that the observed standard deviation is approximately equal to the total population standard deviation holds good. 717 Loss in iteration 250 of 500: 1. The purpose of this function is to calculate the standard deviation of given continuous numeric data. How to find a coefficient of variation in Excel. As displayed in the first row of Figure 93. This section addresses basic image manipulation and processing using the core scientific modules NumPy and SciPy. Preloaded as noisy_image. Denoising a picture¶ In this example, we denoise a noisy version of a picture using the total variation, bilateral, and wavelet denoising filters. [email protected] This paper presents a general framework for multiclass total variation clustering that does not rely on recursion. Linear inverse problems, Tikhonov and Total-Variation regularization. Python source code: plot_1d_total_variation. However TV regularization does not require learning (only one parameter to tune), is very fast, can handle large images at once, and will produce the same result no matter the initialization. The for loop in Python is used to iterate over a sequence (list, tuple, string) or other iterable objects. We will see there are good reasons to start from 0 in Python. Arnaud Ogier, Pierre Hellier and Christian Barillot January 31, 2006 Abstract The multiplicity of sensors used in medical imaging leads to different noises. The given data will always be in the form of sequence or iterator. A predictor-corrector scheme to the dual variable is used in our algorithms and convergence of the method is proved. Applying filters on an image. A Python decorator is a specific change to the Python syntax that allows us to more conveniently alter functions and methods (and possibly classes in a future version). In signal processing, total variation denoising, also known as total variation regularization, is a process, most often used in digital image processing, that has applications in noise removal. Total variation (TV) denoising is a nonparametric smoothing method that has good properties for preserving sharp edges and contours in objects with spatial structures like natural images. Nearest-neighbor, bilinear and bicubic interpolation. Python statistics | mean () function. Description. この記事では,Total Variation 正則化の最小化に関する実装を行い,ノイズを含む画像がどのように再構成されるのか,確かめてみます.. PCA is a very common method for exploration and reduction of high-dimensional data. This quantity indicates the total variation of the observed values in relation to the mean. Calculating using Python (i. They are extracted from open source Python projects. 
How do you calculate variance in Excel? FACEBOOK TWITTER LINKEDIN By Daniel Jassy, CFA. 3 Nov 2014 • Álvaro Barbero •. Total variation and bilateral algorithms typically produce "posterized" images with flat domains separated by sharp edges. -위 설명을 분산의 공식으로 이해하면 되겠다 (Variation 값이 클수록 샘플들의 분포가 넓게 퍼져있다는 의미다). Nearest-neighbor, bilinear and bicubic interpolation. settings, which make this method in fact di erent from total variation regularization (that is the Rudin-Osher-Fatmi model [16]) and the second order variation model [17] regularization, respectively. The obtained new models can be easily solved inpractice,forimagedenoising,imagedecomposition, and texture discrimination. 6+ is fully integrated with the WordPress REST API. • For any mean parameter mwhere q(m) is the corresponding natural parameter • the log-partition function has this variational representation • this supremum is achieved at the moment-matching value of m. To register a nondeterministic Python function, users need to first build a nondeterministic user-defined function for the Python function and then register it as a SQL function. Structure Extraction from Texture via Relative Total Variation 论文,代码,测试图像和ppt 图像结构提取 2017-02-10 上传 大小: 79. ぼけ除去のサンプルに加えて,Bilateral Total Variation(BTV)による正則化が追加されており,ノイズにロバストになっています. ここでは,この式の計算を繰り返し処理により行います.(L1ノルム最小化の場合 ). From the link, the definition of total variation for a differentiable function uses L2-norm. • Or the simultaneous variation in growth acceleration curves and the parents’ adult stature. This calls for image inpainting in wavelet domains. Another definition is "(total variance explained by model) / total variance. graph_objects. Ordinary least squares (OLS) regression is a statistical method of analysis that estimates the relationship between one or more independent variables and a dependent variable; the method estimates the relationship by minimizing the sum of the squares in the difference between the observed and. The key take away is that whether or not a variable is categorical depends on its application. Implementation of Richardson Lucy with Total Variation Regularization, Vector Acceleration and Non-Circulant Edge handling. Total variation denoising (TVD) is an approach for noise reduction developed so as to preserve sharp edges in the underlying signal. ANOVA df SS MS F Significance F. Matlab and Python implementations of algorithms for noise removal from 1D piecewise constant signals, such as total variation and robust total variation denoising, bilateral filtering, K-means, mean shift and soft versions of the same, jump penalization, and iterated medians. SPORCO is a Python package for solving optimisation problems with sparsity-inducing regularisation. provides the breakdown of the total variation of the dependent variable in this case home prices) in to the explained and unexplained portions. We then observe some random and corrupted measurements from that signal and then try to recover that signal using L1 and 1D total variation (TV1D) penalties. This section addresses basic image manipulation and processing using the core scientific modules NumPy and SciPy. NET, Python, VB or similar language - MS Word and Excel. How do you calculate variance in Excel? FACEBOOK TWITTER LINKEDIN By Daniel Jassy, CFA. SQL server data recovery software easily repairs the corrupt, damaged files of SQL server version 2000 and 2005. mean() function can be used to calculate mean/average of a given list of numbers. Python statistics | mean () function. 
The total number of observations is $$N$$ (the sum of the $$n_i$$). And instead of having exactly n items in 2 rows (for n/2 pairs total), we have n + 1 items in 2 rows (for (n + 1)/2 pairs total). Which of these is not a tool to describe variation in product units?. It is archived as an oral paper in Imaging and Applied Optics Congress in Heidelberg, Germany. Unlike a conventional low-pass filter, TV denoising is defined in terms of an optimization problem. The only expection is the function tvgen that solves generalized Total Variation problems, recommended only to advanced users. The CV expresses the variation as a percentage of the mean, and is calculated as follows: CV% = (SD/Xbar)100. Implement the split Bregman method for total variation denoising These files implement the split Bregman method for total variation denoising. 24% of the variation is explained by this first eigenvalue. However, the Python phenomenon developed from the original television series into something much greater, in scope and impact: it spawned touring stage shows, four films, numerous albums, several books and a spin-off stage musical—as well as launching the members on to individual stardom. In [1]: import Quadratic Variation and Total Variation of Brownian Motion. In statistics, explained variation measures the proportion to which a mathematical model accounts for the variation of a given data set. Modular proximal optimization for multidimensional total-variation regularization. Cremers§and T. Disclaimer nih. This paper presents a general framework for multiclass total variation clustering that does not rely on recursion. Around UK coasts variation is around 4 ° West to 7 ° West. Denoising by Sobolev and Total Variation Regularization. py import app as application Python isn't going to like that. So it preserves the edges since pixels at edges will have large intensity variation. This paper presents a general framework for multiclass total variation clustering that does not rely on recursion. SSW is one component of total sum of squares (the other is between sum of squares). Non infor-mative noise can damage the image interpretation process and the performance of automatic. The class of L1-regularized optimization problems has received much attention recently because of the introduction of "compressed sensing," which allows images and signals to be reconstructed from. Trade-off curves. 6 and installs the packages listed in the requirements. The observations are assumed to be caused by a linear transformation of lower dimensional latent factors and added Gaussian noise. These method noises can also be computed but their inter-. to the variation due to different operators using the same gage measuring the same item. Types that comprise smaller pieces are called compound data types. The aggregation makes it possible to use the same database for years while the filesize stays constant and the amount of information just keeps growing. Strings are qualitatively different from the other four because they are made up of smaller pieces — characters. How to find a coefficient of variation in Excel. 816 Loss in iteration 225 of 500: 1. 0 Introduction. TV denoising. In a continuous representation, this is In a continuous representation, this is Equation 1-1. Parameters a array_like. This quantity indicates the total variation of the observed values in relation to the mean. Some of the operations covered by this tutorial may be useful for other kinds of multidimensional array processing than image processing. 
7, 1461–1491 (article link) Porous medium equation to hele-shaw flow with general initial density. Caselles †, M. Unlike a conventional low-pass filter, TV denoising is defined in terms of an optimization problem. The objective function L(x) is convex. arg1)", SUM([Profit])) The next example returns True for store IDs in Washington state, and False otherwise. Textures and fine-scale details are also removed. The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. These consist primarily of sparse coding and dictionary learning problems, including convolutional sparse coding and dictionary learning, but there is also support for other problems such as Total Variation regularisation and Robust PCA. NET, Python, VB or similar language - MS Word and Excel. tick: a Python Library for Statistical Learning Model Proximal operator Solver Linear regression SLOPE Gradient Descent Logistic regression L1 (Lasso) Stochastic Variance Reduced Gradient Poisson regression Total Variation Stochastic Gradient Descent Cox regression Group L1 Accelerated Gradient Descent. You can vote up the examples you like or vote down the ones you don't like. Most typically then, the top of the head is unmarked or with a faint thin stripe from the internasals to the nape of the neck. Linear inverse problems, Tikhonov and Total-Variation regularization. Poster at ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, United Kingdom. I tried to calculate it in python. (Python) more hot questions. variation (a, axis=0, nan_policy='propagate') [source] ¶ Compute the coefficient of variation, the ratio of the biased standard deviation to the mean. Closing Thoughts about Adjusted R-squared and Predicted R-squared. Let imgToDenoiseIndex = 2 and temporalWindowSize = 3. Understanding the Data. It is an example of a statistical distance metric, and is sometimes called the statistical distance or variational distance. The hierarchical linear model is a type of regression analysis for multilevel data where the dependent variable is at the lowest level. TV is L1 norm of gradient of an image. This is an important formula for many reasons, but it is especially important because it is the foundation for statistical significance testing in multiple regression. Some features such as Complete Electrode Model (CEM) and Total Variation (TV) regularization are missing in pyEIT. Python may be OK for small systems, but for large systems (a) Java’s static typing, (b) Java’s superior performance, and (c) the superior powers of Java IDEs make Java the only rational choice. Convex Generalizations of Total Variation Based on the Structure Tensor 49 A widely used choice for the regularizer is the Total Variation (TV) [1], which is applied on grayscale images u(M=1) and is defined as:. This is a Python implementation of Total Variation Denoising method proposed by Guy Gilboa. Each clustering algorithm comes in two variants: a class, that implements the fit method to learn the clusters on train data, and a function, that, given train data, returns an array of integer labels corresponding to the different clusters. signal namespace, there is a convenience function to obtain these windows by name: get_window (window, Nx[, fftbins]) Return a window of a given length and type. Cremers§and T. Coefficient of correlation. Adjusted R-Squared. The following are code examples for showing how to use scipy. Adaptive Total Variation Image denoising. 
Often we additionally assume: The errors are normally distributed, ε i iid∼ N (0,σ2). 524 Loss in iteration 100 of 500: 2. Cabin column are almost filled with missing values with variation in occurrence, and Embarked column has few missing values in the beginning part. "An iterative regularization method for total variation-based image restoration. 1 if an older Mac OS X version), but many Python users may need to update Python in Mac OS to a newer version like Python 3. X-ray tomography is an imaging technique that produces 3-D images of a scanned object. The beauty of art lies in the message it conveys. Because the binned implementation avoids a lot of memory allocation and access when working with either small subregions of the total interval or setting / testing spans larger than the bin size, it can be much faster. This quantity indicates the variation of the estimated response values of the model in relation to the mean, that is, the variation explained by the model. If you quit from the Python interpreter and enter it again, the definitions you have made (functions and variables) are lost. It was developed with a focus on enabling fast experimentation. Higher R-squared value, better the model. Published on December 11, 2017. total_variation(images)). MIT OpenCourseWare is a free & open publication of material from thousands of MIT courses, covering the entire MIT curriculum. 14) Examples from the book chapter Interior-point methods for large-scale cone programming Python 2. ABSTRACT: Loss of information in a wavelet domain can occur during storage or transmission when the images are formatted and stored in terms of wavelet coefficients. 2015 IEEE Nuclear Science Symposium and Medical Imaging Conference (in press). Textures and fine-scale details are also removed. With more parameters, the range function can be used to generate a much wider variety of sequences. For fixed x the response Y is normally distributed with Y ∼ N(a+bx,σ2). The total number of observations is $$N$$ (the sum of the $$n_i$$). The main advantage is the flexibility to properly reconstruct hot regions with dimensions down to fifteen cm, which represents a resolution gain of up to six times when compared with the DTS spatial resolution of one m. An Algorithm for Total Variation Minimization and Applications. Dragomiretskiy, A. Python Forensics provides many never-before-published proven forensic modules, libraries, and solutions that can be used right out of the box. Unfortunately, R-squared doesn’t respect this natural ceiling. 2010 Youzuo Lin , Brendt Wohlberg, “ Application of the UPRE Method to Optimal Parameter Selection for Large Scale Regularization Problems ”, 2008 IEEE Southwest Symposium on Image. 5 R-Squared in Python; The coefficient of determination is the portion of the total variation in the dependent variable that is explained by variation in. (eds) Efficient Algorithms for Global Optimization Methods in Computer Vision. Youzuo Lin, Brendt Wohlberg, Hongbin Guo, “UPRE Method for Total Variation Parameter Selection”, Signal Processing, vol. Analyzing the three-dimensional (3D) refractive index distribution of a single cell makes it possible to describe and characterize its inner structure in a marker-free manner. Use total variation filter denoising to accomplish this. Dragomiretskiy, A. Fitting a Linear Model. In this lesson, explore how degrees of freedom can be used in statistics. 
__count__/__total__ YouTube Premium Loading Get YouTube without the ads issues of multicollinearity in a linear regression model and then quantify it using the variance infalanatory factor. Generalized N-dimensional Anisotropic Total Variation. Modular Proximal Optimization for Multidimensional Total-Variation Regularization. accumulate function was added; it provides a fast way to build an accumulated list and can be used for efficiently approaching this problem. The last major source of variation is the total variation - which is a measure of the variation in all the results. They are extracted from open source Python projects. The classes of interest can often be relatively broad, such as chair. Maintainability Index Variation Among PHP, Java, and Python Open Source N1 = Total number of occurrences of operators Python is designed to be a highly. Jared likes to make things. Then frame-1, frame-2 and frame-3 are used to denoise frame-2. Next, square the deviation for each value. A predictor-corrector scheme to the dual variable is used in our algorithms and convergence of the method is proved. Noisy Cameraman. We first find the mean vector Xm and the "variation of the data" (corresponds to the variance) We subtract the mean from the data values. These functions are stored in the database and are available for any user with sufficient privileges to run them. Clearly, it is nothing but an extension of Simple linear regression. We’ve gone ahead and written sample Python code that will help you get started with a simple Ducksboard integration. It should be odd. The Swiss Machine Learning Day aims at bringing together Swiss researchers working on topics related to machine learning. , Pandas, Jupyter. But, pyEIT is written in Python and extensible. These method noises can also be computed but their inter-. Denoising: this is done applying a total variation approach which consists in reducing as much as possible the integral of the absolute gradient of the image, where the gradient of an image can simply be interpreted as a directional change in the intensity or color in the image itself. A nice example of this type of comment came from Daniel: Comparing Python to Java is like comparing a bicyle to a car. Balaji4, Sachin Kumar S5, M. The CV expresses the variation as a percentage of the mean, and is calculated as follows: CV% = (SD/Xbar)100. The first step in analyzing multivariate data is computing the mean vector and the variance-covariance matrix. We introduce a spatially adaptive total variation regularization model. SPORCO is a Python package for solving optimisation problems with sparsity-inducing regularisation. You can vote up the examples you like or vote down the ones you don't like. wiener), etc. Six Sigma process performance is reported in terms of Sigma. IEEE Access February 1, 2019 A novel total variation (TV) framework is conceived for joint detection and dynamic state estimation (JDSE) for wireless transmission from the measurement devices to the control center in a smart grid. Python was created out of the slime and mud left after the great flood. 2) where σ2 Total is total variation, σ 2 part the product variation, σ 2 gauge the variability of measurement process or gauge. The CV expresses the variation as a percentage of the mean, and is calculated as follows: CV% = (SD/Xbar)100. Total variation and bilateral algorithms typically produce "posterized" images with flat domains separated by sharp edges. 
The proportion of variation explained by each eigenvalue is given in the second column.
MOSES was developed in cooperation with one of the world's leading vehicle manufacturers and is oriented around the workflow of testing engineers and.
x or compatible with the ecosystem of packages we need (yet).
This example could be the definition for a calculated field titled IsStoreInWA.
There are quite a few explanations of the principal component analysis (PCA) on the internet, some of them quite insightful.
In that case, a total of temporalWindowSize frames are used, where the central frame is the frame to be denoised.
The first step in analyzing multivariate data is computing the mean vector and the variance-covariance matrix.
Use total variation filter denoising to accomplish this.
Two versions are available, one implemented in Matlab, and the other in.
import cvxpy as cp; U = cp.
New Multiscale Transforms, Minimum Total Variation Synthesis: Applications to Edge-Preserving Image Reconstruction. Emmanuel J.
Nguyen, Fellow, IEEE.
This is not the case for this Titanic dataset, but especially in time series data, we need to know if the occurrences of missing values are sparsely located or located as a big chunk.
Variational Theorem for EF.
Current work is focused on: 1.
Very large snakes may require 2 adult mice per feed or even the introduction of larger prey items such as rats, Guinea Pigs and small rabbits.
The aggregation makes it possible to use the same database for years while the filesize stays constant and the amount of information just keeps growing.
Here, the 'total variation' is the sum of the variances of each of the random variables (that is, the trace of the covariance matrix).
In a continuous representation, this is Equation 1-1.
Here is the way to read a text file one line at a time using a "while" statement and Python's readline function.
wiener), etc.
Six Sigma process performance is reported in terms of Sigma.
IEEE Access, February 1, 2019: A novel total variation (TV) framework is conceived for joint detection and dynamic state estimation (JDSE) for wireless transmission from the measurement devices to the control center in a smart grid.
Python was created out of the slime and mud left after the great flood.
$$\sigma^2_{\text{Total}} = \sigma^2_{\text{part}} + \sigma^2_{\text{gauge}}$$ (2), where $$\sigma^2_{\text{Total}}$$ is the total variation, $$\sigma^2_{\text{part}}$$ the product variation, and $$\sigma^2_{\text{gauge}}$$ the variability of the measurement process or gauge.
The CV expresses the variation as a percentage of the mean, and is calculated as follows: CV% = (SD/Xbar)100.
An Algorithm for Total Variation Minimization and Applications.
Dragomiretskiy, A.
Python Forensics provides many never-before-published proven forensic modules, libraries, and solutions that can be used right out of the box.
But, pyEIT is written in Python and extensible.
Youzuo Lin, Brendt Wohlberg, Hongbin Guo, "UPRE Method for Total Variation Parameter Selection", Signal Processing, vol.
2010 Youzuo Lin, Brendt Wohlberg, "Application of the UPRE Method to Optimal Parameter Selection for Large Scale Regularization Problems", 2008 IEEE Southwest Symposium on Image.
Total variation and bilateral algorithms typically produce "posterized" images with flat domains separated by sharp edges.
Some facts about R squared that you need to keep in mind.
Restoration of 3D medical images with total variation scheme on wavelet domains (TVW).
Users require tools that combine interactivity, versatility, and performance.
So we've to find the gradient of the image (which is still a matrix, right?).
I just spotted what appears to be the following line of code: :::python from KoC AR.
Denoising a picture: in this example, we denoise a noisy version of a picture using the total variation, bilateral, and wavelet denoising filters.
This measures how much noise is in the images.
Total variation based filtering was introduced by Rudin, Osher, and Fatemi [8].
How are each of these terms computed? The total variation loss $$T(x)$$ is the simplest one to understand: it measures the average sum of squared differences among adjacent pixel values and encourages the result $$x$$ to be a smooth image.
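The total variation loss $$T(x)$$ described in the last excerpt above can be sketched in a few lines of NumPy. This is the squared (anisotropic) variant implied by "sum of squared differences among adjacent pixel values"; other formulations use absolute differences instead, and the function below is my own illustration rather than code from any of the excerpted sources:

import numpy as np

def total_variation_loss(x):
    # Sum of squared differences between horizontally and vertically
    # adjacent pixels of a 2-D image array x.
    dh = x[:, 1:] - x[:, :-1]   # horizontal neighbour differences
    dv = x[1:, :] - x[:-1, :]   # vertical neighbour differences
    return np.sum(dh ** 2) + np.sum(dv ** 2)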
Total variation de-mosaicing.
study below).
Highlights: follows the scikit-learn API conventions; supports natively both dense and sparse data representations.
The library provides efficient solvers for the following Total Variation proximity problems: Standard (l1) Total Variation on a 1-dimensional signal.
SPORCO is a Python package for solving optimisation problems with sparsity-inducing regularisation.
Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano.
edu. Laboratory for Information and Decision Systems, Massachusetts Institute of Technology (MIT), Cambridge, MA. Abstract.
In [1]: import Quadratic Variation and Total Variation of Brownian Motion.
The pstdv() function is the same as numpy.
Figure 2 shows the 2d fused lasso applied to a toy example.
It is archived as an oral paper in the Imaging and Applied Optics Congress in Heidelberg, Germany.
Sparse Optimization Methods, Stephen Wright, University of Wisconsin-Madison, Toulouse, Feb 2009. Stephen Wright (UW-Madison), Sparse Optimization Methods, Toulouse, February 2009, 1 / 58.
Variation in the measurement process can directly contribute to our overall process variability.
A free online data analysis calculator to find the standard error of sample means for the given data.
For instance, the KS distance between two distinct $\delta$-measures is always 1, their total variation distance is 2, whereas the transportation distance between them is equal to the distance between the corresponding points, so that it correctly reflects their similarity.
Remember that a Gage R&R study is a study of variation.
|
2019-11-20 17:52:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4319026470184326, "perplexity": 1236.749840163112}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670597.74/warc/CC-MAIN-20191120162215-20191120190215-00274.warc.gz"}
|
http://lalrpop.github.io/lalrpop/generate_in_source.html
|
Up to version 0.15, LALRPOP generated its files in the same directory as the input files. Since 0.16, files are generated in Cargo's output directory.
If you want to keep the previous behaviour, you can use generate_in_source_tree in your configuration:
extern crate lalrpop;

fn main() {
    lalrpop::Configuration::new()
        .generate_in_source_tree()
        .process()
        .unwrap();
}
For each foo.lalrpop file you can simply have mod foo; in your source tree. The lalrpop_mod macro is not useful in this mode.
|
2019-06-18 15:45:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5746169686317444, "perplexity": 5041.623910198499}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998755.95/warc/CC-MAIN-20190618143417-20190618165417-00350.warc.gz"}
|
https://marwahaha.github.io/2015-07-09-berkeley/intermediate-python/02-modularization-documentation.html
|
# Intermediate Python Lesson 2: Modularization and Documentation
Now that we've covered some of the basic syntax and libraries in Python we can start to tackle our data analysis problem. We are interested in understanding the relationship between the weather and the number of mosquitos so that we can plan mosquito control measures. Since we want to apply these mosquito control measures at a number of different sites we need to understand how the relationship varies across sites. Remember that we have a series of CSV files with each file containing the data for a single location.
## Learning Objectives
• Write code for people, not computers
• Break a program into chunks
• Write and use functions in Python
• Write useful documentation
## Starting small
When approaching computational tasks like this one it is typically best to start small, check each piece of code as you go, and make incremental changes. This helps avoid marathon debugging sessions because it's much easier to debug one small piece of the code at a time than to write 100 lines of code and then try to figure out all of the different bugs in it.
Let's start by reading in the data from a single file and conducting a simple regression analysis on it. In fact, I would actually start by just importing the data and making sure that everything is coming in OK.
In [2]:
import pandas as pd
d = pd.read_csv('data/A2_mosquito_data.csv')
d
Out[2]:
year temperature rainfall mosquitos
0 1960 82 200 180
1 1961 70 227 194
2 1962 89 231 207
3 1963 74 114 121
4 1964 78 147 140
5 1965 85 151 148
6 1966 86 172 162
7 1967 75 106 112
8 1968 70 276 230
9 1969 86 165 162
10 1970 83 222 198
11 1971 78 297 247
12 1972 87 288 248
13 1973 76 286 239
14 1974 86 231 202
15 1975 90 284 243
16 1976 76 190 175
17 1977 87 257 225
18 1978 88 128 133
19 1979 87 218 199
20 1980 81 206 184
21 1981 74 175 160
22 1982 85 202 187
23 1983 71 130 126
24 1984 80 225 200
25 1985 72 196 173
26 1986 76 261 222
27 1987 85 111 121
28 1988 83 247 210
29 1989 86 137 142
30 1990 82 159 152
31 1991 77 172 160
32 1992 74 280 231
33 1993 70 291 238
34 1994 77 126 125
35 1995 89 191 178
36 1996 83 298 248
37 1997 80 282 237
38 1998 86 219 195
39 1999 72 143 134
40 2000 79 262 221
41 2001 85 189 175
42 2002 86 205 186
43 2003 72 195 173
44 2004 78 148 146
45 2005 71 262 219
46 2006 88 255 226
47 2007 79 262 221
48 2008 73 198 176
49 2009 86 215 187
50 2010 87 127 129
The import seems to be working properly, so that's good news, but does anyone have anyone see anything about the code that they don't like?
That's right. The variable name I've chosen for the data doesn't really communicate any information to anyone about what it's holding, which means that when I come back to my code next month to change something I'm going to have a more difficult time understanding what the code is actually doing. This brings us to one of our first major lessons for the morning, which is that in order to understand what our code is doing so that we can quickly make changes in the future, we need to write code for people, not computers, and an important first step is to use meaningful varible names.
In [4]:
import pandas as pd
data = pd.read_csv('data/A2_mosquito_data.csv')
data.head()
Out[4]:
year temperature rainfall mosquitos
0 1960 82 200 180
1 1961 70 227 194
2 1962 89 231 207
3 1963 74 114 121
4 1964 78 147 140
The .head() method lets us just look at the first few rows of the data. A method is a function attached to an object that operates on that object. So in this case we can think of it as being equivalent to head(data).
Everything looks good, but either global warming has gotten really out of control or the temperatures are in degrees Fahrenheit. Let's convert them to Celsius before we get started.
We don't need to reimport the data in our new cell because all of the executed cells in IPython Notebook share the same workspace. However, it's worth noting that if we close the notebook and then open it again it is necessary to rerun all of the individual blocks of code that a code block relies on before continuing. To rerun all of the cells in a notebook you can select Cell -> Run All from the menu.
In [5]:
data['temperature'] = (data['temperature'] - 32) * 5 / 9.0
data.head()
Out[5]:
year temperature rainfall mosquitos
0 1960 27.777778 200 180
1 1961 21.111111 227 194
2 1962 31.666667 231 207
3 1963 23.333333 114 121
4 1964 25.555556 147 140
That's better. Now let's go ahead and conduct a regression on the data. We'll use the statsmodels library to conduct the regression.
In [7]:
import statsmodels.api as sm
regr_results = sm.OLS.from_formula('mosquitos ~ temperature + rainfall', data).fit()
regr_results.summary()
Out[7]:
Dep. Variable:      mosquitos          R-squared:            0.997
Model:              OLS                Adj. R-squared:       0.997
Method:             Least Squares      F-statistic:          7889.
Date:               Wed, 13 May 2015   Prob (F-statistic):   3.68e-61
Time:               16:47:30           Log-Likelihood:       -111.54
No. Observations:   51                 AIC:                  229.1
Df Residuals:       48                 BIC:                  234.9
Df Model:           2
Covariance Type:    nonrobust

                coef    std err        t    P>|t|   [95.0% Conf. Int.]
Intercept     17.5457     2.767    6.341    0.000    11.983   23.109
temperature    0.8719     0.092    9.457    0.000     0.687    1.057
rainfall       0.6967     0.006  125.385    0.000     0.686    0.708

Omnibus:          1.651   Durbin-Watson:      1.872
Prob(Omnibus):    0.438   Jarque-Bera (JB):   0.906
Skew:            -0.278   Prob(JB):           0.636
Kurtosis:         3.343   Cond. No.           1920
As you can see statsmodels lets us use the names of the columns in our dataframe to clearly specify the form of the statistical model we want to fit. This also makes the code more readable since the model we are fitting is written in a nice, human readable, manner. The summary method gives us a visual representation of the results. This summary is nice to look at, but it isn't really useful for doing more computation, so we can look up particular values related to the regression using the regr_results attributes. These are variables that are attached to regr_results.
In [11]:
regr_results.params
Out[11]:
Intercept 17.545739
temperature 0.871943
rainfall 0.696717
dtype: float64
In [12]:
regr_results.rsquared
Out[12]:
0.99696687369130499
If we want to hold onto these values for later we can assign them to variables:
In [13]:
parameters = regr_results.params
rsquared = regr_results.rsquared
And then we can plot the observed data against the values predicted by our regression to visualize the results. First, remember to tell the notebook that we want our plots to appear in the notebook itself.
In [14]:
%matplotlib inline
In [15]:
import matplotlib.pyplot as plt
predicted = parameters[0] + parameters[1] * data['temperature'] + parameters[2] * data['rainfall']
plt.plot(predicted, data['mosquitos'], 'ro')
min_mosquitos, max_mosquitos = min(data['mosquitos']), max(data['mosquitos'])
plt.plot([min_mosquitos, max_mosquitos], [min_mosquitos, max_mosquitos], 'k-')
Out[15]:
[<matplotlib.lines.Line2D at 0x7fe6acffd748>]
OK, great. So putting this all together we now have a piece of code that imports the modules we need, loads the data into memory, fits a regression to the data, and stores the parameters and fit of data.
In [17]:
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
data = pd.read_csv('data/A2_mosquito_data.csv')
data['temperature'] = (data['temperature'] - 32) * 5 / 9.0
regr_results = sm.OLS.from_formula('mosquitos ~ temperature + rainfall', data).fit()
parameters = regr_results.params
rsquared = regr_results.rsquared
predicted = parameters[0] + parameters[1] * data['temperature'] + parameters[2] * data['rainfall']
plt.plot(predicted, data['mosquitos'], 'ro')
min_mosquitos, max_mosquitos = min(data['mosquitos']), max(data['mosquitos'])
plt.plot([min_mosquitos, max_mosquitos], [min_mosquitos, max_mosquitos], 'k-')
print(parameters)
print("R^2 = ", rsquared)
Intercept 17.545739
temperature 0.871943
rainfall 0.696717
dtype: float64
R^2 = 0.996966873691
## Functions
The next thing we need to do is loop over all of the possible data files, but in order to do that we're going to need to grow our code some more. Since our brain can only easily hold 5-7 pieces of information at once, and our code already has more than that many pieces, we need to start breaking our code into manageable sized chunks. This will let us read and understand the code more easily and make it easier to reuse pieces of our code. We'll do this using functions.
Functions in Python take the general form
def function_name(inputs):
    do stuff
    return output
So, if we want to write a function that returns the value of a number squared we could use:
In [18]:
def square(x):
    x_squared = x ** 2
    return x_squared
print("Four squared is", square(4))
print("Five squared is", square(5))
Four squared is 16
Five squared is 25
We can also just return the desired value directly.
In [19]:
def square(x):
    return x ** 2
square(3)
Out[19]:
9
And remember, if we want to use the result of the function later we need to store it somewhere.
In [20]:
two_squared = square(2)
two_squared
Out[20]:
4
## Challenges
1. Write a function that converts temperature from Fahrenheit to Celsius and use it to replace this line of code:
data['temperature'] = (data['temperature'] - 32) * 5 / 9.0
2. Write a function called analyze() that takes data as an input, performs the regression, makes the observed-predicted plot, and returns parameters.
*Walk through someone's result. When discussing talk about different names. E.g., fahr_to_celsius is better than temp_to_celsius since it is explicit both the input and the output. Talk about the fact that even though this doesn't save us any lines of code it's still easier to read.*
In [ ]:
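One possible solution to the first challenge is sketched below; treat it as a sample rather than the official answer, since the lesson develops the same function again later on:

def fahr_to_celsius(tempF):
    """Convert fahrenheit to celsius"""
    tempC = (tempF - 32) * 5 / 9.0
    return tempC

data['temperature'] = fahr_to_celsius(data['temperature'])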
## The call stack
Let's take a closer look at what happens when we call a function. To make things clearer, we'll start by putting the initial value 32 in a variable and store the final result in one as well:
In [1]:
# Don't worry if this fails
In [4]:
%%tutor --lang python3
# Uncomment ^ that line if the previous cell ran OK
def celsius_to_kelvin(tempC):
    tempK = tempC + 273.15
    return tempK
original = 32.0
final = celsius_to_kelvin(original)
#### Call Stack (Initial State)
When the first three lines of this function are executed the function is created, but nothing happens. The function is like a recipe, it contains the information about how to do something, but it doesn't do so until you explicitly ask it to. We then create the variable original and assign the value 32.0 to it. The values tempC and tempK don't currently exist.
#### Call Stack Immediately After Function Call
When we call celsius_to_kelvin, Python creates another stack frame to hold the function's variables. Upon creation this stack frame only includes the inputs being passed to the function, so in our case tempC. As the function is executed, variables created by the function are stored in the function's stack frame, so tempK is created in the celsius_to_kelvin stack frame.
#### Call Stack At End Of Function Call
When the call to celsius_to_kelvin returns a value, Python throws away celsius_to_kelvin's stack frame, including all of the variables it contains, and creates a new variable in the original stack frame to hold the temperature in Celsius.
#### Call Stack After End
This global stack frame is always there; it holds the variables we defined outside the functions in our code. What it doesn't hold is the variables that were in the other stack frames. If we try to get the value of tempC or tempK after our functions have finished running, Python tells us that there's no such thing:
In [5]:
print(tempK)
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-5-3e96de45404b> in <module>()
----> 1 print(tempK)
NameError: name 'tempK' is not defined
The reason for this is encapsulation, and it's one of the keys to writing correct, comprehensible programs. A function's job is to turn several operations into one so that we can think about a single function call instead of a dozen or a hundred statements each time we want to do something. That only works if functions don't interfere with each other by potentially changing the same variables; if they do, we have to pay attention to the details once again, which quickly overloads our short-term memory.
## Testing Functions
Once we start putting things into functions so that we can re-use them, we need to start testing that those functions are working correctly. The most basic thing we can do is some informal testing to make sure the function is doing what it is supposed to do. To see how to do this, let's write a function to center the values in a dataset prior to conducting statistical analysis. Centering means setting the mean of each variable to be the same value, typically zero.
In [22]:
def center(data):
    return data - data.mean()
We could test this on our actual data, but since we don't know what the values ought to be, it will be hard to tell if the result was correct. Instead, let's create a made up data frame where we know what the result should look like.
In [23]:
import pandas as pd
test_data = pd.DataFrame([[1, 1], [1, 2]])
test_data
Out[23]:
0 1
0 1 1
1 1 2
Now that we've made some test data we need to figure out what we think the result should be and we need to do this before we run the test. This is important because we are biased to believe that any result we get back is correct, and we want to avoid that bias. This also helps make sure that we are confident in what we want the code to do. So, what should the result of running center(data) be?
OK, let's go ahead and run the function.
In [24]:
center(test_data)
Out[24]:
0 1
0 0 -0.5
1 0 0.5
That looks right, so let's try center on our real data:
In [25]:
data = pd.read_csv('data/A2_mosquito_data.csv')
center(data)
Out[25]:
year temperature rainfall mosquitos
0 -25 1.607843 -7.039216 -5.235294
1 -24 -10.392157 19.960784 8.764706
2 -23 8.607843 23.960784 21.764706
3 -22 -6.392157 -93.039216 -64.235294
4 -21 -2.392157 -60.039216 -45.235294
5 -20 4.607843 -56.039216 -37.235294
6 -19 5.607843 -35.039216 -23.235294
7 -18 -5.392157 -101.039216 -73.235294
8 -17 -10.392157 68.960784 44.764706
9 -16 5.607843 -42.039216 -23.235294
10 -15 2.607843 14.960784 12.764706
11 -14 -2.392157 89.960784 61.764706
12 -13 6.607843 80.960784 62.764706
13 -12 -4.392157 78.960784 53.764706
14 -11 5.607843 23.960784 16.764706
15 -10 9.607843 76.960784 57.764706
16 -9 -4.392157 -17.039216 -10.235294
17 -8 6.607843 49.960784 39.764706
18 -7 7.607843 -79.039216 -52.235294
19 -6 6.607843 10.960784 13.764706
20 -5 0.607843 -1.039216 -1.235294
21 -4 -6.392157 -32.039216 -25.235294
22 -3 4.607843 -5.039216 1.764706
23 -2 -9.392157 -77.039216 -59.235294
24 -1 -0.392157 17.960784 14.764706
25 0 -8.392157 -11.039216 -12.235294
26 1 -4.392157 53.960784 36.764706
27 2 4.607843 -96.039216 -64.235294
28 3 2.607843 39.960784 24.764706
29 4 5.607843 -70.039216 -43.235294
30 5 1.607843 -48.039216 -33.235294
31 6 -3.392157 -35.039216 -25.235294
32 7 -6.392157 72.960784 45.764706
33 8 -10.392157 83.960784 52.764706
34 9 -3.392157 -81.039216 -60.235294
35 10 8.607843 -16.039216 -7.235294
36 11 2.607843 90.960784 62.764706
37 12 -0.392157 74.960784 51.764706
38 13 5.607843 11.960784 9.764706
39 14 -8.392157 -64.039216 -51.235294
40 15 -1.392157 54.960784 35.764706
41 16 4.607843 -18.039216 -10.235294
42 17 5.607843 -2.039216 0.764706
43 18 -8.392157 -12.039216 -12.235294
44 19 -2.392157 -59.039216 -39.235294
45 20 -9.392157 54.960784 33.764706
46 21 7.607843 47.960784 40.764706
47 22 -1.392157 54.960784 35.764706
48 23 -7.392157 -9.039216 -9.235294
49 24 5.607843 7.960784 1.764706
50 25 6.607843 -80.039216 -56.235294
It's hard to tell from the default output whether the result is correct, but there are a few simple tests that will reassure us:
In [26]:
print('original mean:')
print(data.mean())
centered = center(data)
print()
print('mean of centered data:')
print(centered.mean())
original mean:
year 1985.000000
temperature 80.392157
rainfall 207.039216
mosquitos 185.235294
dtype: float64
mean of centered data:
year 0.000000e+00
temperature 1.393221e-15
rainfall 6.687461e-15
mosquitos -1.337492e-14
dtype: float64
The mean of the centered data is very close to zero; it's not quite zero because of floating point precision issues. We can even go further and check that the standard deviation hasn't changed (which it shouldn't if we've just centered the data):
In [27]:
print('std dev before and after:')
print(data.std())
print()
print(centered.std())
std dev before and after:
year 14.866069
temperature 6.135400
rainfall 56.560396
mosquitos 39.531551
dtype: float64
year 14.866069
temperature 6.135400
rainfall 56.560396
mosquitos 39.531551
dtype: float64
The standard deviations look the same. It's still possible that our function is wrong, but it seems unlikely enough that we're probably in good shape for now.
Testing is really important when writing scientific code. If you haven't checked that your code works properly, you can't be confident in your results. We'll talk more about testing tomorrow.
## Documentation
OK, the center function seems to be working fine. Does anyone else see anything that's missing before we move on?
Yes, we should write some documentation to remind ourselves later what it's for and how to use it. This function may be fairly straightforward, but in most cases it won't be so easy to remember exactly what a function is doing in a few months. Just imagine looking at our analyze function a few months in the future and trying to remember exactly what it was doing just based on the code.
In [28]:
# center(data): return a new DataFrame containing the original data centered around zero.
def center(data):
    return data - data.mean()
There's a better way to do this in Python. If the first thing in a function is a string that isn't assigned to a variable, that string is attached to the function as its documentation:
In [29]:
def center(data):
    """Return a new DataFrame containing the original data centered around zero."""
    return data - data.mean()
This is better because we can now ask Python's built-in help system to show us the documentation for the function.
In [30]:
help(center)
Help on function center in module __main__:
center(data)
Return a new DataFrame containing the original data centered around zero.
A string like this is called a docstring and there are also automatic documentation generators that use these docstrings to produce documentation for users. We use triple quotes because it allows us to include multiple lines of text and because it is considered good Python style.
In [31]:
def center(data):
    """Return a new array containing the original data centered on zero

    Example:
    >>> import pandas
    >>> data = pandas.DataFrame([[0, 1], [0, 2]])
    >>> center(data)
       0    1
    0  0 -0.5
    1  0  0.5
    """
    return data - data.mean()
help(center)
Help on function center in module __main__:
center(data)
Return a new array containing the original data centered on zero
Example:
>>> import pandas
>>> data = pandas.DataFrame([[0, 1], [0, 2]])
>>> center(data)
0 1
0 0 -0.5
1 0 0.5
### Challenge
1. Test your temperature conversion function to make sure it's working (think about some temperatures that you easily know the conversion for).
2. Add documentation to both the temperature conversation function and the analysis function.
In [ ]:
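For the first part of the challenge, a couple of quick informal checks using conversions we already know by heart might look like this (a sketch, assuming the fahr_to_celsius function from the earlier challenge):

print(fahr_to_celsius(32))    # freezing point of water: expect 0.0
print(fahr_to_celsius(212))   # boiling point of water: expect 100.0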
## Looping over files
So now our code looks something like this:
In [34]:
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
def fahr_to_celsius(tempF):
    """Convert fahrenheit to celsius"""
    tempC = (tempF - 32) * 5 / 9.0
    return tempC

def analyze(data):
    """Perform regression analysis on mosquito data

    Takes a dataframe as input that includes columns named 'temperature',
    'rainfall', and 'mosquitos'.

    Performs a multiple regression to predict the number of mosquitos.
    Creates an observed-predicted plot of the result and
    returns the parameters of the regression.

    """
    regr_results = sm.OLS.from_formula('mosquitos ~ temperature + rainfall', data).fit()
    parameters = regr_results.params
    predicted = parameters[0] + parameters[1] * data['temperature'] + parameters[2] * data['rainfall']
    plt.figure()
    plt.plot(predicted, data['mosquitos'], 'ro')
    min_mosquitos, max_mosquitos = min(data['mosquitos']), max(data['mosquitos'])
    plt.plot([min_mosquitos, max_mosquitos], [min_mosquitos, max_mosquitos], 'k-')
    return parameters

data = pd.read_csv('data/A2_mosquito_data.csv')
data['temperature'] = fahr_to_celsius(data['temperature'])
regr_results = analyze(data)
print(regr_results)
Intercept 17.545739
temperature 0.871943
rainfall 0.696717
dtype: float64
Now we want to loop over all of the possible data files, and to do that we need to know their names. If we only had a dozen files we could write them all down, but if we have hundreds of files or the filenames change then that won't really work. Fortunately Python has a built in library to help us find the files we want to work with called glob.
In [36]:
import glob
filenames = glob.glob('data/*.csv')
filenames
Out[36]:
['data/B2_mosquito_data.csv',
'data/A2_mosquito_data.csv',
'data/B1_mosquito_data.csv',
'data/A3_mosquito_data.csv',
'data/A1_mosquito_data.csv']
The object returned by glob is a list of strings. A list is a Python data type that holds a group of potentially heterogeneous values. That means it can hold pretty much anything, including functions.
In [37]:
mylist = [1, 'a', center]
mylist
Out[37]:
[1, 'a', <function __main__.center>]
In this case all of the values are strings that contain the names of all of the files that match the expression given to glob, so in this case all of the files with the .csv extension.
Let's loop over the filenames and print them out one at a time, to make sure we are getting exactly the files we want and no data we don't.
In [38]:
filenames = glob.glob('data/*.csv')
for filename in filenames:
    print(filename)
data/B2_mosquito_data.csv
data/A2_mosquito_data.csv
data/B1_mosquito_data.csv
data/A3_mosquito_data.csv
data/A1_mosquito_data.csv
### Challenge
Modify your code to loop over all of the files in your directory, making an observed-predicted plot for each file and printing the parameters.
In [ ]:
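One possible solution is sketched below, reusing the fahr_to_celsius and analyze functions defined above:

import glob
import pandas as pd

for filename in glob.glob('data/*.csv'):
    print(filename)
    data = pd.read_csv(filename)
    data['temperature'] = fahr_to_celsius(data['temperature'])
    parameters = analyze(data)
    print(parameters)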
|
2022-01-25 02:00:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21509712934494019, "perplexity": 1346.5311824396208}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304749.63/warc/CC-MAIN-20220125005757-20220125035757-00487.warc.gz"}
|
https://includestdio.com/7186.html
|
# python – SQLAlchemy: What’s the difference between flush() and commit()?
## The Question :
460 people think this question is useful
What is the difference between flush() and commit() in SQLAlchemy?
I’ve read the docs, but am none the wiser – they seem to assume a pre-understanding that I don’t have.
I’m particularly interested in their impact on memory usage. I’m loading some data into a database from a series of files (around 5 million rows in total) and my session is occasionally falling over – it’s a large database and a machine with not much memory.
I’m wondering if I’m using too many commit() and not enough flush() calls – but without really understanding what the difference is, it’s hard to tell!
## The Answer 1
593 people think this answer is useful
A Session object is basically an ongoing transaction of changes to a database (update, insert, delete). These operations aren’t persisted to the database until they are committed (if your program aborts for some reason in mid-session transaction, any uncommitted changes within are lost).
The session object registers transaction operations with session.add(), but doesn’t yet communicate them to the database until session.flush() is called.
session.flush() communicates a series of operations to the database (insert, update, delete). The database maintains them as pending operations in a transaction. The changes aren’t persisted permanently to disk, or visible to other transactions until the database receives a COMMIT for the current transaction (which is what session.commit() does).
session.commit() commits (persists) those changes to the database.
flush() is always called as part of a call to commit() (1).
When you use a Session object to query the database, the query will return results both from the database and from the flushed parts of the uncommitted transaction it holds. By default, Session objects autoflush their operations, but this can be disabled.
Hopefully this example will make this clearer:
#---
s = Session()
s.add(Foo('A')) # The Foo('A') object has been added to the session.
# It has not been committed to the database yet,
# but is returned as part of a query.
print 1, s.query(Foo).all()
s.commit()
#---
s2 = Session()
s2.autoflush = False
s2.add(Foo('B'))
print 2, s2.query(Foo).all() # The Foo('B') object is *not* returned
# as part of this query because it hasn't
# been flushed yet.
s2.flush() # Now, Foo('B') is in the same state as
# Foo('A') was above.
print 3, s2.query(Foo).all()
s2.rollback() # Foo('B') has not been committed, and rolling
# back the session's transaction removes it
# from the session.
print 4, s2.query(Foo).all()
#---
Output:
1 [<Foo('A')>]
2 [<Foo('A')>]
3 [<Foo('A')>, <Foo('B')>]
4 [<Foo('A')>]
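For completeness, a minimal setup that would make the example above runnable might look like the following. The Foo model and Session factory are not shown in the original answer, so this is an assumed sketch using SQLAlchemy's declarative ORM:

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)
    name = Column(String)

    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return "<Foo('%s')>" % self.name

engine = create_engine('sqlite:///:memory:')   # in-memory database for the demo
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)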
## The Answer 2
27 people think this answer is useful
As @snapshoe says
flush() sends your SQL statements to the database
commit() commits the transaction.
When session.autocommit == False:
commit() will call flush() if you set autoflush == True.
When session.autocommit == True:
You can’t call commit() if you haven’t started a transaction (which you probably haven’t since you would probably only use this mode to avoid manually managing transactions).
In this mode, you must call flush() to save your ORM changes. The flush effectively also commits your data.
## The Answer 3
12 people think this answer is useful
Why flush if you can commit?
As someone new to working with databases and sqlalchemy, the previous answers – that flush() sends SQL statements to the DB and commit() persists them – were not clear to me. The definitions make sense but it isn’t immediately clear from the definitions why you would use a flush instead of just committing.
Since a commit always flushes (https://docs.sqlalchemy.org/en/13/orm/session_basics.html#committing) these sound really similar. I think the big issue to highlight is that a flush is not permanent and can be undone, whereas a commit is permanent, in the sense that you can’t ask the database to undo the last commit (I think)
@snapshoe highlights that if you want to query the database and get results that include newly added objects, you need to have flushed first (or committed, which will flush for you). Perhaps this is useful for some people although I’m not sure why you would want to flush rather than commit (other than the trivial answer that it can be undone).
In another example I was syncing documents between a local DB and a remote server, and if the user decided to cancel, all adds/updates/deletes should be undone (i.e. no partial sync, only a full sync). When updating a single document I’ve decided to simply delete the old row and add the updated version from the remote server. It turns out that due to the way sqlalchemy is written, order of operations when committing is not guaranteed. This resulted in adding a duplicate version (before attempting to delete the old one), which resulted in the DB failing a unique constraint. To get around this I used flush() so that order was maintained, but I could still undo if later the sync process failed.
See my post on this at: Is there any order for add versus delete when committing in sqlalchemy
Similarly, someone wanted to know whether add order is maintained when committing, i.e. if I add object1 then add object2, does object1 get added to the database before object2? See: Does SQLAlchemy save order when adding objects to session?
Again, here presumably the use of a flush() would ensure the desired behavior. So in summary, one use for flush is to provide order guarantees (I think), again while still allowing yourself an “undo” option that commit does not provide.
Autoflush and Autocommit
Note, autoflush can be used to ensure queries act on an updated database as sqlalchemy will flush before executing the query. https://docs.sqlalchemy.org/en/13/orm/session_api.html#sqlalchemy.orm.session.Session.params.autoflush
Autocommit is something else that I don’t completely understand but it sounds like its use is discouraged: https://docs.sqlalchemy.org/en/13/orm/session_api.html#sqlalchemy.orm.session.Session.params.autocommit
Memory Usage
Now the original question actually wanted to know about the impact of flush vs. commit for memory purposes. As the ability to persist or not is something the database offers (I think), simply flushing should be sufficient to offload to the database – although committing shouldn’t hurt (actually probably helps – see below) if you don’t care about undoing.
sqlalchemy uses weak referencing for objects that have been flushed: https://docs.sqlalchemy.org/en/13/orm/session_state_management.html#session-referencing-behavior
This means if you don’t have an object explicitly held onto somewhere, like in a list or dict, sqlalchemy won’t keep it in memory.
However, then you have the database side of things to worry about. Presumably flushing without committing comes with some memory penalty to maintain the transaction. Again, I’m new to this but here’s a link that seems to suggest exactly this: https://stackoverflow.com/a/15305650/764365
In other words, commits should reduce memory usage, although presumably there is a trade-off between memory and performance here. In other words, you probably don’t want to commit every single database change, one at a time (for performance reasons), but waiting too long will increase memory usage.
## The Answer 4
8 people think this answer is useful
This does not strictly answer the original question but some people have mentioned that with session.autoflush = True you don’t have to use session.flush()… And this is not always true.
If you want to use the id of a newly created object in the middle of a transaction, you must call session.flush().
# Given a model with at least this id
class AModel(Base):
    id = Column(Integer, primary_key=True)  # autoincrement by default on integer primary key
session.autoflush = True
a = AModel()
session.add(a)
a.id # None
session.flush()
a.id # autoincremented integer
This is because autoflush does NOT auto fill the id (although a query of the object will, which sometimes can cause confusion as in “why this works here but not there?” But snapshoe already covered this part).
One related aspect that seems pretty important to me and wasn’t really mentioned:
Why would you not commit all the time? – The answer is atomicity.
A fancy word meaning: an ensemble of operations must all be executed successfully, or none of them will take effect.
For example, if you want to create/update/delete some object (A) and then create/update/delete another (B), but if (B) fails you want to revert (A). This means those 2 operations are atomic.
Therefore, if (B) needs a result of (A), you want to call flush after (A) and commit after (B).
Also, if session.autoflush is True, except for the case that I mentioned above or others in Jimbo‘s answer, you will not need to call flush manually.
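A sketch of the flush-then-commit pattern described above, where (B) needs a result of (A) and the two must succeed or fail together. Note that ModelA, ModelB and session here are hypothetical stand-ins, not names from the answer:

# ModelA, ModelB and session are hypothetical stand-ins for illustration.
try:
    a = ModelA()
    session.add(a)
    session.flush()           # sends the INSERT; a.id is populated, nothing is permanent yet
    b = ModelB(a_id=a.id)     # (B) uses the result of (A)
    session.add(b)
    session.commit()          # both changes persist together
except Exception:
    session.rollback()        # neither (A) nor (B) takes effect
    raise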
## The Answer 5
2 people think this answer is useful
Use flush when you need to write, for example to get a primary key ID from an autoincrementing counter.
john = Person(name='John Smith', parent=None)
session.add(john)
session.flush()
son = Person(name='Bill Smith', parent=john.id)
Without flushing, john would never get an ID from the DB and so couldn’t represent the parent/child relationship in code.
Like others have said, without commit() none of this will be permanently persisted to DB.
|
2021-01-16 23:38:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33365052938461304, "perplexity": 2533.913119441381}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703507971.27/warc/CC-MAIN-20210116225820-20210117015820-00630.warc.gz"}
|
http://www.lastfm.se/user/deadhc/library/music/Foo+Fighters/_/I+Should+Have+Known?from=1306390216&rangetype=year&setlang=sv
|
# Library
Music » Foo Fighters »
## I Should Have Known
43 plays
From Thursday 26 May 2011 to Saturday 26 May 2012
Tracks (43)
Track Album Length Date
I Should Have Known 4:14 27 Feb 2012, 21:26
I Should Have Known 4:14 27 Feb 2012, 01:57
I Should Have Known 4:14 16 Feb 2012, 23:25
I Should Have Known 4:14 16 Feb 2012, 20:15
I Should Have Known 4:14 15 Feb 2012, 20:04
I Should Have Known 4:14 12 Feb 2012, 14:40
I Should Have Known 4:14 30 Jan 2012, 00:00
I Should Have Known 4:14 10 Jan 2012, 22:06
I Should Have Known 4:14 8 Jan 2012, 16:21
I Should Have Known 4:14 31 Dec 2011, 19:20
I Should Have Known 4:14 13 Dec 2011, 23:39
I Should Have Known 4:14 13 Dec 2011, 17:09
I Should Have Known 4:14 12 Dec 2011, 21:55
I Should Have Known 4:14 11 Dec 2011, 17:24
I Should Have Known 4:14 10 Dec 2011, 18:33
I Should Have Known 4:14 6 Dec 2011, 20:47
I Should Have Known 4:14 5 Dec 2011, 22:28
I Should Have Known 4:14 4 Dec 2011, 22:48
I Should Have Known 4:14 4 Dec 2011, 18:22
I Should Have Known 4:14 4 Dec 2011, 15:39
I Should Have Known 4:14 4 Dec 2011, 14:49
I Should Have Known 4:14 30 Nov 2011, 00:22
I Should Have Known 4:14 29 Nov 2011, 23:33
I Should Have Known 4:14 29 Nov 2011, 22:18
I Should Have Known 4:14 25 Nov 2011, 00:35
I Should Have Known 4:14 22 Nov 2011, 23:31
I Should Have Known 4:14 20 Nov 2011, 22:43
I Should Have Known 4:14 20 Nov 2011, 01:18
I Should Have Known 4:14 20 Nov 2011, 00:30
I Should Have Known 4:14 2 Oct 2011, 23:45
I Should Have Known 4:14 28 Sep 2011, 22:53
I Should Have Known 4:14 25 Sep 2011, 03:06
I Should Have Known 4:14 24 Sep 2011, 22:15
I Should Have Known 4:14 24 Sep 2011, 20:18
I Should Have Known 4:14 21 Sep 2011, 01:10
I Should Have Known 4:14 20 Sep 2011, 18:07
I Should Have Known 4:14 17 Sep 2011, 23:34
I Should Have Known 4:14 1 Sep 2011, 22:03
I Should Have Known 4:14 25 Jun 2011, 15:55
I Should Have Known 4:14 22 Jun 2011, 01:20
I Should Have Known 4:14 19 Jun 2011, 02:36
I Should Have Known 4:14 18 Jun 2011, 20:05
I Should Have Known 4:14 26 May 2011, 23:09
|
2014-07-11 20:04:11
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9891344308853149, "perplexity": 13895.562123292098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776428772.70/warc/CC-MAIN-20140707234028-00014-ip-10-180-212-248.ec2.internal.warc.gz"}
|
https://mathoverflow.net/questions/377097/integer-partitions-with-same-divisors
|
# Integer partitions with same divisors
Definitions: For $$\alpha < \beta \in (0,1)$$, let $$P(n,\alpha,\beta)$$ be the set of unordered integer partitions of $$n$$ where each part of the partition has size between $$n^\alpha$$ and $$n^\beta$$.
Given a set of partitions $$P$$, we define $$f(P)$$ to be the smallest integer $$k$$ such that for any distinct partitions $$x,y \in P$$, there exists $$1\le d \le k$$ such that the number of parts in $$x$$ divisible by $$d$$ differs from the number of parts in $$y$$ divisible by $$d$$. (note that as $$d$$ may equal $$1$$, we automatically handle cases where $$x,y$$ have different numbers of parts)
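(As an illustration of this definition, an example not from the original post: take the partitions $$x = 4+2$$ and $$y = 3+3$$ of $$6$$. With $$d = 1$$ both have two parts, but with $$d = 2$$ the partition $$x$$ has two parts divisible by $$2$$ while $$y$$ has none, so $$d = 2$$ already distinguishes this pair.)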
Questions: For $$\alpha,\beta$$, what can we say about the magnitude of $$g(n):= f(P(n,\alpha,\beta))$$? It is obvious that $$g(n) \le n^\beta$$. Can $$g$$ be subpolynomial?
For $$\alpha < \beta \in (0,1)$$, defining $$Q(n,\alpha,\beta)$$ to be the set of partitions with at most $$n^{1-\beta}$$ parts, each having size at least $$n^\alpha$$, can we get an upper bound of $$h(n):= f(Q(n,\alpha,\beta))$$?
Alternate Statement: It might make sense to instead define $$F(P)$$ to be the minimum integer $$k$$ such that for every pair of partitions $$x,y \in P$$, where $$x$$ and $$y$$ have zero parts of equal length, that there exists $$1\le d\le k$$ such that the number of parts in $$x$$ divisible by $$d$$ differs from the number of parts in $$y$$ divisible by $$d$$. We may then define $$G(n) = \max_{1\le m\le n}\{F(P(m,\alpha,\beta))\}, H(n) = \max_{1\le m\le n} \{F(Q(m,\alpha,\beta))\} .$$At least in my mind, this makes the asymptotics clearer and may be easier to handle as $$x,y$$ cannot be as similar as in the definition in $$f$$. While care is needed to convert bounds of $$G$$ and $$H$$ into bounds for $$g$$ and $$h$$, I would be quite interested in seeing answers to this case if it's easier.
Partial Progress: According to this post, the number of $$n^\alpha$$-smooth numbers less than $$n^\beta$$ is $$(1+o(1))\rho(\beta/\alpha)n^\beta$$. Thus, there exists a subset $$S \subset \{m \in \Bbb{N} : n^\alpha < m < n^\beta\}$$ of positive natural density such that if there exist $$n+1$$ distinct partitions of $$n$$ only using parts which are elements of $$S$$, it follows that $$g(n) \ge n^\alpha$$.
I know that for a fixed subset of the naturals $$S$$ with positive natural density $$d>0$$, the number of partitions of $$n$$ only using parts which are elements of $$S$$ is asymptotically $$e^{(1+o(1))C\sqrt{dn}}$$ where $$C = \pi\sqrt{2/3}$$. I do not know how to control the asymptotics as $$n$$ and $$S$$ vary, but hopefully someone can manage to use this to show that $$g(n)\ge n^\alpha$$.
|
2020-11-24 07:15:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 60, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9238170981407166, "perplexity": 72.32306790079159}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141171126.6/warc/CC-MAIN-20201124053841-20201124083841-00414.warc.gz"}
|
https://chemistry.stackexchange.com/questions/114281/what-is-the-least-dense-liquid-under-normal-conditions
|
# What is the least dense liquid under normal conditions?
What is the least dense liquid under normal conditions, room temperature, one atmosphere of pressure, doesn't combust upon contact with air, also wouldn't kill a human just for being in the same room as it?
## 1 Answer
Isopentane $$\ce{C5H12}$$ has a density of $$0.6201~\mathrm{g\,cm^{-3}}$$ at $$20~\mathrm{^\circ C}$$ [1, p. 3-330].
### References
1. Haynes, W. M.; Lide, D. R.; Bruno, T. J. CRC Handbook of Chemistry and Physics: A Ready-Reference Book of Chemical and Physical Data.; CRC Press, 2017; Vol. 97. ISBN 978-1-4987-5429-3.
• That is... surprisingly dense. Water is known to be 'heavy' in common speech, but there aren't really any liquids which are a lot lighter apparently. – orlp Apr 25 '19 at 8:11
• @orlp That's due to the constraint imposed by OP: the compound must be liquid at NTP conditions. Once the constraint removed, there are more interesting things such as solution of lithium metal in liquid ammonia or good old liquid hydrogen with the density of $0.0709~\mathrm{g\,cm^{-3}}$, but that's not what the question is about. – andselisk Apr 25 '19 at 8:16
|
2020-01-19 14:36:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6524339318275452, "perplexity": 1384.7184885791771}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594603.8/warc/CC-MAIN-20200119122744-20200119150744-00075.warc.gz"}
|
http://forum.wilmott.com/viewtopic.php?f=3&t=7580&p=65361
|
gatarek
Topic Author
Posts: 345
Joined: July 14th, 2002, 3:00 am
Strange kind of philanthropy
Last edited by gatarek on February 24th, 2004, 11:00 pm, edited 1 time in total.
Nonius
Posts: 5115
Joined: January 22nd, 2003, 6:48 am
Strange kind of philanthropy
Quote (originally posted by gatarek): "kapital, I would be happy with rules similar to scientific ones, i.e. public domain with some respected rules, including hierarchy. I don't mean censoring stoch vol BGM and wouldn't like stoch vol contributor to censor it. On the other hand - I'm really afraid that one day all models will be patented and that will be the end of derivatives research. It suffices to patent one simple model - you will have domino effect after. Creating a mechanism preventing something like that would be of particular value. Background of my observation is (obviously!) personal. We offered implementation of BGM model to one of big European banks and our competitor was chosen. OK, I will have many other projects, but in my view a hierarchy was broken."

don't worry Gatarek, the patents don't really work.....
gatarek
Topic Author
Posts: 345
Joined: July 14th, 2002, 3:00 am
Strange kind of philanthropy
Last edited by gatarek on February 24th, 2004, 11:00 pm, edited 1 time in total.
kr
Posts: 1885
Joined: September 27th, 2002, 1:19 pm
Strange kind of philanthropy
Sometimes one should think good and hard about what certain words mean:

a _businessperson_ takes ideas and tries to produce some kind of economics
a _researcher_ takes a dozen cups of coffee and turns them into ideas

what is protected in general is usually: research for business purposes. I guess this is because research by itself operates on some kind of socialist economics. Probably there was a good reason why things evolved the way they did. I don't think anybody will ever take the JPEG thing seriously because I am not getting rich by fiddling with my wilmott icon (though maybe there is a case for that other mega source of internet traffic, where they must be making lots of money).

To say that you're subsidizing somebody else... maybe that's a bad word choice, but the reason we have businesspeople is that somebody needs to generate the economics. That they used gold from your mineshaft is mostly irrelevant, because on its own, gold does not sell itself. That money does not appear without the gold is also important, but one needs to control the gold. Ideas can only be controlled to the extent that you control their raw dissemination. And an idea that is not disseminated may as well not be an idea - otherwise it's just a tree falling in the forest.

Anyway, history tells you that research is more likely to die when it is not widely disseminated, rather than the opposite as you suggest. Otherwise, we would be publishing research article after research article on nuclear fuel processing to get the Iranians and N Koreans out of the game. This just doesn't make any sense.

I have a bit of an issue with people who try to make quantfinance a purely theoretical subject. To me it seems a bit artificial to develop a field that originally came from practical problems and then push it in the direction of pure theory. If you want to do stochastic research, call it that, not quantfinance. That said, I don't see much use for quantfinance unless a businessperson is involved. One could take this even one step further and say that if you really are contributing something major, you'd know about it (and apparently this is the case). Nonius said most of this already... the sweet spot in many ways is to be a cigar-chomping guy who knows his way around the academic literature.
kapital
Posts: 236
Joined: July 15th, 2002, 8:16 pm
Strange kind of philanthropy
Quote: "I don't think anybody will ever take the JPEG thing seriously ..."

maybe not the JPEG thing, but IP is a really hot and serious topic right now. you must have read about the sco claims. for the last 2 years, microsoft has been trying to scare companies wanting to switch to linux with the "watch out for IP litigation" line. out comes SCO with a claim that 80 f*cking lines of code have made it into the linux kernel (out of millions) and that these lines represent the key IP that keeps the Unix-like OS alive. At first, people are pissed, but no one really takes the suit seriously. Guess what? along comes Microsoft, who "licenses" the code in question for an undisclosed amount. think anyone is laughing about SCO's attempt to bring linux down now? think the boys in legal&compliance have started to care about linux in the enterprise? yup. maybe the litigation won't amount to much, but a lot of damage is being done in the mean time.

also a lot of smaller-level IP-related scams going on. check this out: youmaybenext.com. there is a company going around, waving some IP, threatening small (but solvent) companies that they either pay up or go to court. being scared, poorly advised, whatever - they pay up. then they use that bankroll to do the same to slightly larger companies.
kapital
Posts: 236
Joined: July 15th, 2002, 8:16 pm
Strange kind of philanthropy
Gatarek,

Quote: "OK, I will have many other projects, but in my view a hierarchy was broken."

probably not broken as badly as you thought. imagine how many extra lap-dances, gladhanders, promises to hire the MD's junkie cousin to do documentation, etc. it took for the other company to out-compete "the other company who has that guy that invented the model"? talking about massive externalities, you get a major positive one by having your work in the public. having it peer reviewed and pointed to as 1st-rate academic material, then cited in who knows how many extending papers or empirical analyses must be worth a fortune to your company in advertising alone.

net-net, i bet you benefit a ton (intangibly perhaps) from having other people adopt and use your model. case in point of an intangible benefit: look how foolish i looked above. my point was lost and all anyone thought was - what an asshole. and all it took was: "hey, man, -> Gatarek as in BGM". think that would have happened if you were the creator of the GammaPi method*? i sure as hell doubt it.

*incidentally, what happened to those posts? i tried to find one to link to it, but they don't seem to be around anymore
Last edited by kapital on June 11th, 2003, 10:00 pm, edited 1 time in total.
gatarek
Topic Author
Posts: 345
Joined: July 14th, 2002, 3:00 am
Strange kind of philanthropy
Last edited by gatarek on February 24th, 2004, 11:00 pm, edited 1 time in total.
sdw
Posts: 61
Joined: January 16th, 2003, 12:29 pm
Strange kind of philanthropy
I think it's true what has been said earlier in this thread, that if all academics had patented their ideas then that would significantly stifle the furthering of research. Fortunately, it has only been very recently (in modern history) that mathematics has taken on such a corporate role, i.e. in quant finance. Correct me if I'm wrong, but is this not the first time math has been so directly applicable in a business sense, and so ideal for the ultimate end of generating revenue? The old adage about why should I learn algebra when I'm never even going to go there seems far less relevant these days.

It's the first time such an intellectual subject has had to contend with idea protection and patent laws: for instance, why would Gauss worry about censoring his Fundamental Theorem of Algebra when its dissemination would have no foreseeable financial penalty, and in fact would be only likely to benefit him in the long run by stimulating other great minds?

I think we're on untrodden ground here, and it will be interesting to see the direction things take.
Nonius
Posts: 5115
Joined: January 22nd, 2003, 6:48 am
Strange kind of philanthropy
QuoteOriginally posted by: gatarek
kr,
I am no longer a university professor, so my bitter post was probably a typical reaction of a biznesman after losing a tender. Then I noticed that my winning competitor sold my product (to some extent) and found it unjust. Maybe I am wrong, but a financial model is not an abstract and remote background for a practical application - a pricing model is such an application itself. On the other hand I don't expect anything good from patenting of financial models (provided it was legal) - neither for the patent owner, nor for the industry. I wonder what would be the just and effective solution - the current "wrestling with no rules" is not.

Gatarek, what is your favorite beer? tell me what it is, and the next time you stop in Paris I'll hook you up with some beers, chicks, and we'll BS about martingales, manifolds, and geometry under a Parisian full moon on my terrace.....you game for that?
gatarek
Topic Author
Posts: 345
Joined: July 14th, 2002, 3:00 am
Strange kind of philanthropy
Last edited by gatarek on February 24th, 2004, 11:00 pm, edited 1 time in total.
kr
Posts: 1885
Joined: September 27th, 2002, 1:19 pm
Quote
a financial model is not an abstract and remote background for a practical application - a pricing model is such an application itself

this is where I disagree with you... the model does not generate revenue all on its own - you have to milk it, and that requires farmhands

anyhow, just because the drug companies cannot indefinitely patent these new wonderdrugs they are developing does not mean that new drugs are not being developed

it even suggests a kind of 'speed' in the information mining process.. you don't dig all the gold out of the hill at once and just dump it on the market. Anybody who has upgraded their M$ software knows how this stupid game works (and it does indeed work). I have a small angle on this as well because some of our CLO technology has recently gone through the US patent office... It was not my doing and I have always thought that this was a dumb idea. The point is that you can twiddle with small details and it would be difficult to say whether the new implementation is any different from the patented one. At the end of the day this just generates work for lawyers and costs us all money, because they are defending a very vague boundary (to which your contribution is at the margin). Maybe it's just my own personal philosophy, but I think if you're gonna make money, you should do it in a way that is not really contestable in this fashion. If you want to play legal games, become a lawyer and rip us all off.

gatarek
Topic Author
Posts: 345
Joined: July 14th, 2002, 3:00 am

Strange kind of philanthropy

Last edited by gatarek on February 24th, 2004, 11:00 pm, edited 1 time in total.

RowdyRoddyPiper
Posts: 529
Joined: November 5th, 2001, 7:25 pm

Strange kind of philanthropy

Quote
You've just given strong arguments why trucks, airplanes and other tools should be free. A truck does not generate revenue all on its own - you have to milk it, and that requires farmhands

Your use of a truck, airplane or other tool excludes others from using that truck, airplane or other tool; the same argument does not apply to financial models. Someone using your model (or a variant of it) does not prevent you from using your model. A competitor using your model to do business with a bank that you felt you had a relationship with, or were competing to have a relationship with, does exclude you from doing business with said bank, imposing a cost on you - unfairly, I might add. I'm sure that they have foregone the opportunity to be introduced to your refinements of the model, insights into other areas and your charming personality, much to their regret. The reason (and this is just based on the facts that I have in front of me) that they did not choose you is that they had no way of verifying that the model your competitor was hawking was yours with perhaps some minor modifications. The need for a hierarchy (such as there is in the scientific community) is one way to approach this signaling problem. The pitfall is that the people who are making the purchasing decisions almost always do not have the knowledge of the person providing the solution (otherwise why purchase), and they are often in a tangential field (making recognition of the contributor/contribution difficult).

I would propose something along the lines of an industry group for model (idea??) certification. The idea being as follows: The applicant for certification pays a fee ($5,000, $50,000, sliding scale???) for a review of their idea by a panel to ensure that it is indeed a novel solution to the problem. The applicant draws up which parts of the implementation they believe to be new and the panel checks them off as either a true or false claim. Once the idea is certified as a new solution for a problem (problem type), the idea holder is at least armed with the good housekeeping seal of approval, so to speak. Whether or not this carries any weight when you go to pitch a solution will vary from client to client. I for one would view it as a large plus: not only do you have an idea that hopefully no one has been implementing, but you also have the information that whoever is coming to you can think creatively about problems and really develop new solutions. There are plenty of pitfalls (no one cares about the cert, inability to get competent people to certify, cost prohibitive to scholars, students, entrepreneurs, etc.)

Patenting an idea will be ineffective as a means of signaling originality. I don't think that the patent office is capable of understanding what the differences are between different models. Not that they are incompetent, it's just too specialized of an area. Also, collecting rents from the patent will be difficult because people will either use your model covertly, or use something else that they don't have to worry about ponying up for. Also the logistics of negotiating agreements and enforcing compliance are very tough for something that is not a physical good.

In structured finance the cases of "models" are much more trivial; they are mostly cash flow engines that allow for situational analysis. The levers you can pull on and where they are located are the main differentiators between models, along with methods for speeding up the generating and distributing of cashflows. I've had a few cases where people have showed me their model and it bore an uncanny resemblance to one that I completed and distributed with some deal info a few deals back. The first time it happened I got kind of upset, but thereafter I usually just focused that energy on trying to remember what I did wrong in that particular implementation; it makes up for the lack of recognition of the contribution, that's for sure.
N
Posts: 2808
Joined: May 9th, 2003, 8:26 pm
Strange kind of philanthropy
Gatarek,

If you could rewind history, what would you do differently? Patent the model?? Keep it confidential?? Use it yourself in trading?? Write a paper or two??

I see the Sobol sequence patent as ironic. Most everyone seems to see value in a technique which lowers the Kolmogorov complexity of generated series. When they try to calibrate models, no information in yields no information out.

The math involved with obtaining the actual time-dependent 'covariance' IR surface is extremely hard. The usual string theory approaches are way too primitive. Do you think that might suggest a confidentiality approach, since a patent becomes public domain??

Regards,
N
trc
Posts: 95
Joined: April 4th, 2002, 2:28 pm
Strange kind of philanthropy
Gatarek,

I want to thank you for the Libor market model because it is so beautiful. I believe that your desire for control of its use is ill-advised. Instead it is gratifying to see that its use is spreading. Obviously your competitors like it too, and so will the users. You will be remembered for the scientific contribution, and nobody cares which company bought which implementation from whom. Only the seller cares -- briefly.

Money is secondary. Do you know who was the third richest man even only two years ago? I don't and I don't care. But I still like the proof that there are infinitely many prime numbers from 2000 years ago.
|
2020-09-23 16:01:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3696601688861847, "perplexity": 1987.546122345637}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400211096.40/warc/CC-MAIN-20200923144247-20200923174247-00398.warc.gz"}
|
http://www.ruor.uottawa.ca/handle/10393/4253
|
### Molecular cloning and characterization of the ftsEX genes of Neisseria gonorrhoeae CH811 encoding a putative ABC transporter and identification of their flanking genes.
##### Description
Title: Molecular cloning and characterization of the ftsEX genes of Neisseria gonorrhoeae CH811 encoding a putative ABC transporter and identification of their flanking genes. Authors: Bernatchez, Stéphane. Date: 1998 Abstract: Cell division is an essential process in any living cell. In Escherichia coli, the majority of cell division genes are clustered at two loci on the chromosome. The ftsY, ftsE and ftsX genes constitute one of these clusters. The hypothesis of this Ph.D. research project was that the ftsY, ftsE and ftsX genes are present and clustered in N. gonorrhoeae, and that their gene products are involved in an uncharacterized aspect of cell division. The plasmid pSB19 was isolated while screening a genomic library of N. gonorrhoeae strain CH811. The analysis of the DNA sequence of the insert of pSB19 showed a complete open-reading frame (ORF) showing sequence similarity with the ftsX gene of E. coli. Two other partial ORFs, respectively encoding a protein similar to 3-phosphoglycerate kinase and a protein similar to E. coli FtsE, were identified downstream and upstream of ftsX. The partial ftsE homologue overlapped ftsX by 4 base pairs (bp). To fully characterize the gonococcal ftsE and ftsX genes, the complete ftsE gene was required. A DNA fragment containing the 5′-section of ftsE with 2.3 kilobases of sequence upstream of ftsE was amplified by inverse PCR, cloned and sequenced. The gonococcal ftsE gene comprised 651 bp and encoded a polypeptide of 216 amino acid (aa) residues. The gonococcal FtsE protein shared 60 to 71% similarity and 32 to 49% identity with other known bacterial FtsE homologues, and shared significant aa sequence similarities with numerous other ATP-binding domains of ABC transporters. The gonococcal ftsX gene included 918 bp encoding a protein of 305 aa residues that shared 47 to 55% similarity and 19 to 29% identity with its bacterial homologues, and did not share significant similarity to other protein sequences included in the public databases. Sequence analyses performed on N. gonorrhoeae FtsE and FtsX and on the five other known bacterial homologues of FtsE and FtsX predicted that FtsE did not contain transmembrane segments while FtsX was predicted to contain four of them. The size of FtsX varied between bacterial species and this variation appeared attributable to the amino-terminal section of the protein. N. gonorrhoeae FtsX was predicted to adopt a membrane topology that would locate both its amino- and carboxy-terminal ends in the cytoplasm. Identical topologies were predicted for the other known FtsX homologues. The presence of ftsE and ftsX was verified in ten other Neisseria species by Southern hybridization. ftsE and ftsX probes hybridized restriction fragments in each species, suggesting that ftsE and ftsX were present in each species tested. The 4 bp overlap observed between ftsE and ftsX suggested that these genes were co-transcribed. The co-transcription of ftsE and ftsX was confirmed by in vitro transcription/translation experiments. The results obtained suggest that FtsE and FtsX respectively constitute the ATP-binding and membrane domains of a putative ABC transporter that transports an unidentified substrate from the cell. It cannot be excluded that this transporter participates in the cell division process, but it appears not to be essential, since a mutant in which ftsX had been disrupted was viable while other cell division genes were shown to be essential. (Abstract shortened by UMI.)
URL: http://hdl.handle.net/10393/4253 Collection Thèses, 1910 - 2010 // Theses, 1910 - 2010
|
2016-10-27 10:44:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5947138667106628, "perplexity": 12005.113155988372}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721268.95/warc/CC-MAIN-20161020183841-00248-ip-10-171-6-4.ec2.internal.warc.gz"}
|
http://zbmath.org/?q=an:0831.65102
|
# zbMATH — the first resource for mathematics
Continuous numerical solutions and error bounds for time dependent systems of partial differential equations: Mixed problems. (English) Zbl 0831.65102
For the mixed problems described by the equation $u_t(x,t) - D(t)\,u_{xx}(x,t) = 0$, $0 < x < p$, $t > 0$, in the bounded domain $\Omega(t_0,t_1) = [0,p] \times [t_0,t_1]$, subject to the initial condition $u(x,0) = F(x)$ and the boundary conditions $u(0,t) = u(p,t) = 0$, the authors construct continuous numerical solutions with prefixed accuracy. They assume that $u(x,t)$ and $F(x)$ are $r$-component vectors and that $D(t)$ is a $\mathbb{C}^{r \times r}$-valued, two-times continuously differentiable function, so that $D(t_1)D(t_2) = D(t_2)D(t_1)$ for $t_2 \ge t_1 > 0$, and that there exists a positive number $\delta$ such that every eigenvalue $z$ of the matrix $(D(t) + D^H(t))/2$ with $t > 0$ is bigger than $\delta$.
Such coupled partial differential equations appear in many different problems, for example in magnetohydrodynamic flows, in the study of temperature distribution within a composite heat conductor, mechanics, diffusion problems, nerve conduction problems, biochemistry, armament models etc.
##### MSC:
65M70 Spectral, collocation and related methods (IVP of PDE) 65M15 Error bounds (IVP of PDE) 35K15 Second order parabolic equations, initial value problems
|
2014-04-23 09:20:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 19, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7677152156829834, "perplexity": 14628.584492088052}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
|
http://physics.stackexchange.com/tags/notation/hot?filter=year
|
# Tag Info
34
There is a consistent definition, but it involves a couple of arbitrary thresholds, so I doubt you'd consider it rigorous. The construction $X \gg Y$ means that the ratio $\frac{Y}{X}$ is small enough that subleading terms in the series expansion for $f\bigl(\frac{Y}{X}\bigr) - f(0)$ can be neglected, where $f$ is some relevant function involved in the ...
18
The statement "$A$ happens given that $B$" is equivalent to "If $B$, then $A$", which is symbolically represented as an implication $$B \Rightarrow A$$ or, if you want to preserve the order of $A$ and $B$ in the original statement $$A \Leftarrow B$$
14
IMHO, the notation $\int_a^b\mathrm{d}x\,f(x)$ is much cleaner than $\int_a^b f(x)\,\mathrm{d}x$, because the integration variable ($x$) and its associated integral range ($\int_a^b$) are kept together. This is particularly important in lengthy and multi-dimensional integrals. Consider $$\Upsilon_{pq}(k)= \int_0^\infty\mathrm{d}x ...$$
12
It's not just QFT literature. Physicists, especially adult research physicists, find this notation sensible and popular – even though it may be more popular among particle physicists than elsewhere. Formally, $\mathrm{d}x\,f(x)$ is a product of two factors and $\int$ is a form of a sum. Because a product is commutative, it doesn't hurt when the order is interchanged. ...
10
What exactly does $\Delta_{r_e}$ mean? Your wave function isn't a field in space, it is a field on configuration space, i.e. it assigns complex numbers to a configuration. If your electron is at $(x_e,y_e,z_e)$ and your proton is at $(x_p,y_p,z_p)$ then the configuration is the point $(x_e,y_e,z_e,x_p,y_p,z_p)$ in a 6d space. A point in that 6d space ...
9
$\mathfrak{R}e$: real part. $A^*$: complex conjugate of probability amplitude $A$.
9
If you want to make statements over (discrete) time, linear temporal logic may be worth looking at. For instance, $\Box (A \implies B)$ means that whenever $A$ holds, $B$ has to hold at the same time. $\Box (A \implies \Diamond B)$ means that whenever $A$ holds, $B$ will hold at some point in the future. Or yet ...
8
Besides the reasons listed in Lubos Motl's answer, here is another reason for the $\int \!dx ~f(x)$ notation: by writing the integral sign $\int_a^b$ and $dx$ next to each other in multiple nested integrations, it becomes easier to trace which limits belong to which integration. This becomes particularly handy when changing the orders of integration. ...
7
Those Greek letters are indices indexing the components of $g$. Generally, if one expresses a rank-2 tensor like $g$ as a matrix, the first index indexes the rows, the second the columns. In your example, we have $g_{rr} \equiv g_{11} = 1$, $g_{\theta\theta} \equiv g_{22} = r^2$, $g_{r\theta} \equiv g_{12} = 0$, etc. As you can see, we sometimes use numbers ...
7
$(Q\cdot Q)_{ij}=Q_{im}Q_{mj}$ and $(Q^T\cdot Q)_{ij}=(Q^T)_{im}Q_{mj}=Q_{mi}Q_{mj}$, where we use that $(Q^T)_{im}=Q_{mi}$.
6
Given the way that you've presented your table, I would personally put a "-" rather than a 1 in the units column. This to me would signify that units such as "g, km, s, A" etc. do not apply here. In terms of your symbols, in many branches of physics it is common to use a "hat", "tilde" or "star" notation above a symbol to indicate that it is a ...
6
Force is indeed a vector. Technically you should write $|\overrightarrow{F}| = 30\,\mathrm{N}$; however, there is usually context given that lets you omit this. If you are working in one dimension, then the vector-like direction is all encapsulated in the sign once you've defined your coordinate system (e.g. $-30\,\mathrm{N}$ is $30\,\mathrm{N}$ downwards). Beyond that, it is typically just a ...
6
The answer is already on page 2 of your link above: "Among the large number of radionuclides of medical interest, Sc-44 is promising for PET imaging. Either the ground-state Sc-44g or the metastable-state Sc-44m can be used for such applications, depending on the molecule used as vector." So the metastable state Sc-44m decays to the ground state Sc-44g.
6
Dirac notation is ill-suited for non-self-adjoint operators. Here's why: let $(-,-)$ be the inner product on our Hilbert space. The expectation value of $AB$ is then $$\langle AB \rangle_\psi = (\psi,AB\psi)$$ by definition, and Dirac notation writes $\langle \psi \vert AB \vert \psi \rangle$ for this. But, in this notation, it is no longer clear to which ...
5
These states represent intermediate coupling schemes that are halfway between the usual LS coupling and the more extreme jj coupling that happens in heavier atoms, where relativistic effects mean that the spin-orbit coupling for each individual electron can match or exceed the orbit-orbit coupling between different electrons. The intermediate coupling ...
5
I think the answer is no. It generally precedes some approximation method with a bounded error, but there are so many approximation methods in physics -- some rigorous, some nonrigorous -- that it's way too presumptuous to give it a rigorous definition. Generally, it means one of several things: If $a\ll b$, expanding in powers of $\frac{a}{b}$ is ...
5
Comments to the question (v5): In this quantum case the overline/bar notation $\bar{A}=\langle A\rangle$ is borrowed from statistics and it denotes a quantum expectation value of a quantity $A$. See also the Ehrenfest theorem. The problem from Ref. 1 considers a harmonic oscillator with Hamiltonian operator $$\tag{A} H~=~\frac{p^2}{2m} ...$$
5
Use: $$u_{\alpha}= g_{\alpha\beta}u^{\beta}$$ where $g_{\alpha\beta}$ is the metric tensor.
5
Two conventions. First - use a capital M - make sure you make it big and pointy, so it cannot be confused with lower case: when it is right next to the lower case 'm', the difference should stand out clearly. Second - some people use the "computer shorthand" E6: 1.7E6 m. This is generally understood to mean (but is quicker to write than) $1.7\cdot ...$
5
Either. It's context dependent. Chemists generally mean the whole atom, nuclear physicists usually mean the nucleus, and people not in those categories could mean either. And there are exceptions to all those rules of thumb. And the distinction is important when people start throwing masses around, because the mass of an electron is almost on the same ...
5
You're getting tripped up by summation notation. Whenever you have a repeated index, this means that that index is to be summed from 1 to 3: $$\delta_{ij} \delta_{ik} \equiv \sum_{i=1}^3 \delta_{ij} \delta_{ik}.$$ You're right that there are two terms in this sum where $i \neq j$, and so the contribution to the sum from these terms is zero. But the ...
5
The index of the Laplacian tells you which of the coordinates it acts on; that is, if you write $r = (r_x,r_y,r_z)^T$ and $R = (R_x,R_y,R_z)^T$ as Cartesian coordinates, then \begin{align} \Delta_r & := \frac{\partial^2}{\partial {r_x}^2} + \frac{\partial^2}{\partial {r_y}^2} + \frac{\partial^2}{\partial {r_z}^2} \\ \Delta_R & := ... \end{align}
5
Conventionally we use $1$ for dimensionless quantities, although it may cause some confusion. In addition, The International Committee for Weights and Measures contemplated defining the unit of 1 as the 'uno', but the idea was dropped. --https://en.wikipedia.org/wiki/Dimensionless_quantity
4
This is a covariant derivative along a world line (if you would not consider a world line, the proper time $\tau$ would not make any sense). So you consider a curve in spacetime parametrized in dependence of the proper time, $x^\mu(\tau)$. Then you have: $$\frac{DA^\mu}{d\tau} = \frac{\partial A^{\mu}\big(x(\tau)\big)}{\partial \tau} + ...$$
4
The equation you phrase as $$|l,m\rangle=\int_\text{all space}\psi_{lm}(r,\theta,\phi)\,\left|r,\theta,\phi\right\rangle r^2\,\mathrm dr\,\mathrm d\Omega$$ is, and must be, wrong. The reason is that $|l,m\rangle$ inhabits the orbital part of Hilbert space, $\mathcal H_\Omega$, and the right-hand side is a vector in the full Hilbert space $\mathcal H$, which is the ...
4
The LHS is an inner product while the RHS is the evaluation of a function from an $L^2$ space at the point $x$. To somehow link the two you need to be able to write the RHS as an integral, so you need a "function" $\delta_x$ such that $$\langle x|\psi\rangle = \int\overline{\delta_x(s)}\psi(s)\,\mathrm{d}s = \psi(x).$$ There is no such function, but the map ...
4
As Qmechanic pointed out in the comments, you're mixing Einstein and abstract index notation a bit. To make things absolutely clear, we will use early Latin indices for abstract indices ($abc$) and Greek indices for component indices ($\mu\nu\rho$) and will always indicate Einstein summation explicitly. First and foremost, an abstract index is nothing more ...
4
Just a coincidence. There are too many quantities and not enough letters. It probably does make a difference that the fields in which these two equations exist (material science and electromagnetism) are well enough separated that you typically won't see them both in the same papers or textbooks; if that weren't the case, people would start using different ...
4
It is a symbol and an idea used in mathematics too. But the important part is just that $B$ is 'ignorable' relative to $A$. This depends on the level of precision that is being used experimentally. If you're working to a precision of 1 part in 100, then $B$ should not affect the answer to that level of precision. If you're working to 1 part in a million, ...
4
It is common to write $$\partial_i = \frac{\partial}{\partial x^i}$$ for the derivative with respect to the $i$-th coordinate. Since time is customarily written as the $0$-th coordinate, $\partial_0$ is the time derivative.
|
2016-02-06 13:27:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9321392774581909, "perplexity": 547.9675811981103}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701146550.16/warc/CC-MAIN-20160205193906-00321-ip-10-236-182-209.ec2.internal.warc.gz"}
|
https://www.studyadda.com/ncert-solution/comparing-quantities_q22/555/47517
|
# 22) Find the amount to be paid at the end of 3 years in each case: (a) Principal = ₹1,200 at 12% p.a. (b) Principal = ₹7,500 at 5% p.a.
(a) P = ₹1,200, R = 12% p.a., T = 3 years.
$I=\frac{PRT}{100}=\frac{1200\times 12\times 3}{100}=432$
A = P + I $\Rightarrow$ A = 1,200 + 432 = ₹1,632.
Hence, the amount to be paid at the end of 3 years is ₹1,632.
(b) P = ₹7,500, R = 5% p.a., T = 3 years.
$I=\frac{PRT}{100}=\frac{7500\times 5\times 3}{100}=1125$
A = P + I $\Rightarrow$ A = 7,500 + 1,125 = ₹8,625.
Hence, the amount to be paid at the end of 3 years is ₹8,625.
|
2020-08-05 13:38:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7475730776786804, "perplexity": 1425.4036853072796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735958.84/warc/CC-MAIN-20200805124104-20200805154104-00182.warc.gz"}
|
http://en.wikisource.org/wiki/1911_Encyclop%C3%A6dia_Britannica/Cardioid
|
# 1911 Encyclopædia Britannica/Cardioid
CARDIOID, a curve so named by G. F. M. M. Castillon (1708-1791), on account of its heart-like form (Gr. Καρδία, heart). It was mathematically treated by Louis Carré in 1705 and Koersma in 1741. It is a particular form of the limaçon (q.v.) and is generated in the same way. It may be regarded as an epicycloid in which the rolling and fixed circles are equal in diameter, as the inverse of a parabola for its focus, or as the caustic produced by the reflection at a spherical surface of rays emanating from a point on the circumference. The polar equation to the cardioid is $r=a(1+\cos\theta)$. There is symmetry about the initial line and a cusp at the origin. The area is $\tfrac{3}{2}\pi a^2$, i.e. $1\tfrac{1}{2}$ times the area of the generating circle; the length of the curve is $8a$. (For a figure see Limaçon.)
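The quoted area follows from the polar area formula; as a short check (not part of the original article):

$$\frac{1}{2}\int_0^{2\pi} a^2(1+\cos\theta)^2\,d\theta = \frac{a^2}{2}\int_0^{2\pi}\left(\frac{3}{2} + 2\cos\theta + \frac{\cos 2\theta}{2}\right)d\theta = \frac{a^2}{2}\cdot 3\pi = \frac{3}{2}\pi a^2.$$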
|
2013-12-07 23:00:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9119029641151428, "perplexity": 418.5169276382145}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163055862/warc/CC-MAIN-20131204131735-00072-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://republicofsouthossetia.org/question/which-describes-how-to-graph-h-root-3-using-transformations-of-the-parent-function-reflect-over-16354881-94/
|
Which describes how to graph $h(x) = -\sqrt[3]{x}$ using transformations of the parent function? Reflect over the horizontal axis, and then tra
Question
Which describes how to graph $h(x) = -\sqrt[3]{x}$ using transformations of the parent function?
Reflect over the horizontal axis, and then translate the graph right 3 units.
Reflect over the horizontal axis, and then translate the graph up 3 units.
Reflect over the vertical axis, and then translate right 3 units.
Reflect over the vertical axis, and then translate up 3 units.
|
2021-09-17 23:15:12
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8391414880752563, "perplexity": 1146.9156317554932}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055808.78/warc/CC-MAIN-20210917212307-20210918002307-00601.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/prealgebra/prealgebra-7th-edition/chapter-4-section-4-8-solving-equations-containing-fractions-exercise-set-page-312/68
|
## Prealgebra (7th Edition)
Published by Pearson
# Chapter 4 - Section 4.8 - Solving Equations Containing Fractions - Exercise Set: 68
#### Answer
The solution is -3
#### Work Step by Step
$\frac{a}{6} = \frac{a}{3} + \frac{1}{2}$
Multiply both sides by the LCD, 6:
$6\cdot\frac{a}{6} = 6\cdot\frac{a}{3} + 6\cdot\frac{1}{2}$
$\frac{6\times a}{6} = \frac{6\times a}{3} + \frac{6\times 1}{2}$
$a = 2a + 3$
$a - 2a = 2a - 2a + 3$
$-a = 3$
$a = -3$
The solution is -3.
Check: replace $a$ with $-3$:
$\frac{-3}{6} = \frac{-3}{3} + \frac{1}{2}$
Multiply by the LCD, 6:
$6\cdot\frac{-3}{6} = 6\cdot\frac{-3}{3} + 6\cdot\frac{1}{2}$
$-3 = -6 + 3$
$-3 = -3$
|
2018-07-20 17:04:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6746114492416382, "perplexity": 5268.968551903802}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591718.31/warc/CC-MAIN-20180720154756-20180720174756-00449.warc.gz"}
|
https://www.physicsforums.com/threads/connection-between-dysons-equation-and-heisenberg-equation-of-motion.208909/
|
# Connection between Dyson's equation and Heisenberg equation of motion
1. Jan 15, 2008
### Tanja
Might there be a similarity between Dyson's equation and the Heisenberg equation? (It's just a feeling, nothing based on arguments.) Both describe how a system (density matrix or Green's function) behaves in time. Both require knowledge of the initial system at time t=0 and the potential acting on the system.
The Dyson equation:
$$G = G_0 + G_0 V G$$ usually solved with the iteration steps $$G_{j+1} = G_0 + G_0 V G_{j}$$
The Heisenberg equation of motion (with the density as the operator):
$$\rho = U^{\dagger} \rho_0 U$$ with $$U = e^{-\frac{i}{\hbar} H t}$$ (in the case of a time-independent Hamiltonian).
There must be a bridge, but I can't find a mathematical transition or common ground. My knowledge of Green's functions is just too limited.
Does anybody have an idea what could lead in the right direction?
2. Jan 15, 2008
### strangerep
Are you just talking about how the propagator for an interacting theory is obtained by
perturbative expansion around the free propagator?
If so, this is standard fare in many QFT textbooks, e.g: Greiner & Reinhardt's
"Quantum Electrodynamics" presents this sort of thing at a pedestrian pace.
Or did I misunderstand what you were asking?
3. Jan 16, 2008
### Tanja
And my question is: Are these two equations connected in any way?
I know that the propagator U is a matrix element of the Green's function: $G = \langle x|U|x'\rangle$. But I've never seen a derivation.
Do you know some online resources treating this topic?
4. Jan 16, 2008
### Gokul43201
Staff Emeritus
There is no direct mapping between the two equations - why should there be one?
You will find a derivation of Dyson's equation in Fetter & Walecka, or if you have a strong stomach, in Abrikosov, Gorkov & Dzyaloshinskii.
5. Jan 16, 2008
### strangerep
The Dyson recursion equation is what you get when you try to solve the
Heisenberg equation by splitting the Hamiltonian H into $H_0 + V$,
and treating $V$ as a small perturbation.
Actually, $U = exp(iHt)$ is the time evolution operator. The "propagator"
and the "Green's function" are the same thing by different names.
Sorry,... I normally just consult a textbook when I want to check this sort
of thing. I had a quick look in Srednicki's online QFT book, but he
doesn't seem to cover your question explicitly.
I vaguely recall requests on this forum about online QFT books, so maybe
if you search back through other threads you'll find something.
6. Jan 18, 2008
### reilly
Dyson's work is done in the Interaction rep. For all practical purposes, that means the time dependence due to the unperturbed, free, Hamiltonian is factored out. This is very similar to the way we solve first order differential equations of the form $dW/dt = ivW + F$. That is, introduce a new dependent variable $S$, such that $W = e^{ivt}S$. This idea was a key to the Nobel Prizes of Feynman, Schwinger and Tomonaga -- Dyson should have been included.
The Heisenberg equations of motion are directly formed from the role of the Hamiltonian as the generator of displacements in time -- commutators and all that.
This is all explained in any book on QFT, and in many on ordinary QM. Very basic. And of course the two approaches are intimately connected -- they are describing the same thing. Good basic exercise to show the connection.
Regards,
Reilly Atkinson
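A minimal sketch of that exercise (with $\hbar = 1$): split the Hamiltonian as $H = H_0 + V$ and define the interaction-picture evolution operator

$$U_I(t) = e^{iH_0 t}\,e^{-iHt}, \qquad V_I(t) = e^{iH_0 t}\,V\,e^{-iH_0 t}.$$

Differentiating gives

$$i\,\frac{dU_I(t)}{dt} = V_I(t)\,U_I(t) \quad\Longrightarrow\quad U_I(t) = 1 - i\int_0^t V_I(t')\,U_I(t')\,dt',$$

and iterating this integral equation has exactly the recursive structure of $G = G_0 + G_0 V G$ above: each step inserts one more factor of the perturbation between free evolutions.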
7. Jan 19, 2008
### Tanja
Thanks strangerep. I found Srednicki's book and it really seems to be good. Anyway it will take me some time to go through it.
Reilly, thank you for the deep insight. I guess, I will be going to the library next week to find a book on QFT.
8. Nov 8, 2009
### wsttiger
I think that the book you would be looking for is Fetter and Walecka. Also Mahan's book.
Both of these books derive the Dyson equation starting from perturbation theory using the interaction picture.
|
2018-05-26 14:31:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6288484334945679, "perplexity": 1039.8925171197188}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867417.75/warc/CC-MAIN-20180526131802-20180526151802-00421.warc.gz"}
|
https://whatsecurity.nl/making-havedane-net.html
|
Use STARTTLS and DANE for your mail servers. The advice from RFC 7672 [1] is easy enough to give, but harder to implement. For incoming connections, online verifiers are available (also here) that show whether you have properly configured your server and associated TLSA records. Outgoing connections are another matter. How do you scan an organisation's outgoing mail connections? The answer: you ask people to send emails to a specifically prepared mail server and check which messages arrive.
This is what HaveDane.net does. You can build your own HaveDane.net - this post explains how to do it.
I assume you have a regularly hardened Debian server on which you have root, and a domain name (havedane.net) which you use for only this purpose.
For my ease of writing, I will assume everywhere that you are the domain name owner of havedane.net. Obviously, you are not, as I own that domain. Change the instructions (and scripts!) to reflect the domain name you choose to use.
## Broad overview of the system
The system consists of two main components:
• a web application that creates email aliases on the fly whenever someone visits the site. It keeps the user up-to-date about the delivery of email messages to these aliases by regularly polling the server.
• a mail transfer agent (MTA) that receives email for three domains: do.havedane.net, which does have proper DANE records, dont.havedane.net, which doesn’t, and wrong.havedane.net, which has DANE records that are invalid. It passes the messages on to scripts that process these, depending on the alias to which they were sent.
These two components are tied together by a database. In the database, the web application generates an alias for each visitor. In this row, the mail transfer agent records which emails have been delivered. The web application consults the row to update the user on the delivery of the emails he has sent.
## Install software
Install the required software:
sudo apt-get install nginx php5-nginx php5-fpm sqlite3 php5-sqlite easy-rsa
## Create database
We will use an SQLite3-database. If you expect many visitors, you may want to use a separate database server. The database will contain the aliases that the web application generates and for which the mail transfer agent will process email.
Create a directory /var/www/db. In this directory, create a database file: sqlite3 havedane.net.sqlite3. Make sure the database is writable to both the user that runs the web server and the user that runs the mail transfer agent. For example, make it world-writable.
In the database, create a table ‘tests’:
create table tests (alias TEXT, firstreceived DATETIME, do BOOLEAN, dont BOOLEAN, wrong BOOLEAN);
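To illustrate the hand-off between the two components (the alias value below is hypothetical): the delivery scripts flip a flag when a message arrives, and the web application polls the same row:

```sql
UPDATE tests SET do = 1 WHERE alias = 'k3v9q2';            -- mail side
SELECT do, dont, wrong FROM tests WHERE alias = 'k3v9q2';  -- web side
```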
## Web application
Configure the web server to serve the contents of /var/www/html, while passing .php files to PHP-FPM.
Import the code from the repository [2] and install it in /var/www/html.
Generate a secret string and set it as the value of the variable ‘secret’ in config.php. You can use pwgen -s 42 1 to generate a random string. This secret value will be used to generate random email addresses. If an attacker knows this value, he can predict the email addresses other people will be served. He can then mess with their test results.
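The idea behind this, sketched in Python rather than the PHP actually used (names and details here are illustrative; the real code is in the repository): derive each alias from the secret so it is unpredictable without it.

```python
# Derive an unguessable per-visitor alias from the shared secret.
import hmac, hashlib, time

SECRET = b'change-me'                  # the 'secret' value from config.php

def make_alias(visitor_id: str) -> str:
    msg = '{}:{}'.format(visitor_id, time.time()).encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:12]

# The alias is stored in the 'tests' table and the visitor is asked to
# mail alias@do., alias@dont. and alias@wrong.havedane.net.
print(make_alias('203.0.113.7'))
```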
Reload the web server: sudo service nginx reload.
## Mail transfer agent: Postfix
First, create the certificates to which the DANE records will point.
Copy the easy-rsa directory to your /etc directory:
sudo cp -R /usr/share/easy-rsa/ /etc/
Edit /etc/easy-rsa/vars to set the lifetime of the certificate to some very large value, such as 36500 (= about one hundred years). Set other values as you like them.
Source the vars script, run the cleanup script and then build the internal CA and the certificate:
sudo -i
cd /etc/easy-rsa
source ./vars
./clean-all
./build-ca
./build-key-server do.havedane.net
Download the keys/do.havedane.net.crt and keys/ca.crt files to your workstation. We will use these to generate the DANE records later.
Postfix only accepts full x509 certificate chains, so concatenate the CA certificate and the server certificate:
sudo -i
cat keys/do.havedane.net.crt keys/ca.crt > keys/do.havedane.net.fullchain.crt
Now, configure Postfix. In /etc/postfix/main.cf, change or add the following settings:
smtpd_tls_cert_file=/etc/easy-rsa/keys/do.havedane.net.fullchain.crt
smtpd_tls_key_file=/etc/easy-rsa/keys/do.havedane.net.key
smtpd_use_tls=yes
myhostname = havedane.net
mydestination = havedane.net, localhost.net, localhost
virtual_alias_domains = do.havedane.net, dont.havedane.net, wrong.havedane.net
virtual_alias_maps = hash:/etc/postfix/virtual
export_environment = TZ MAIL_CONFIG LANG PYTHONIOENCODING=UTF-8
The last line is to make sure diacritical characters do not crash the script. By default, Postfix on Debian passes the received emails in ANSI_X3.4-1968 encoding. This fixes that. The part ‘TZ MAIL_CONFIG LANG’ should be based on the output of postconf -d | grep export_environment - this is the default for Debian. Thanks to Jeroen for pointing out this bug.
Create /etc/postfix/virtual, containing catchall addresses for the three domains on which we will receive email:
@do.havedane.net dohavedanenet
@dont.havedane.net donthavedanenet
@wrong.havedane.net wronghavedanenet
Run postmap /etc/postfix/virtual to process the virtual alias maps file.
Edit /etc/aliases to redirect the messages to the appropriate scripts, by adding:
dohavedanenet: "|/root/bin/do-havedane-net.py 2>&1 > /tmp/do-havedane-net.log"
donthavedanenet: "|/root/bin/dont-havedane-net.py 2>&1 > /tmp/dont-havedane-net.log"
wronghavedanenet: "|/root/bin/wrong-havedane-net.py 2>&1 > /tmp/wrong-havedane-net.log"
Run newaliases to process the aliases file.
Put the three Python scripts {do,dont,wrong}-havedane-net.py [2] in the /root/bin directory and make them world-executable.
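For illustration, a delivery script along these lines could look as follows (a sketch only, not the actual scripts from the repository):

```python
#!/usr/bin/env python3
# Read the message Postfix pipes in on stdin, take the alias from the
# recipient address, and record the delivery in the shared database.
import sys
import sqlite3
from email.parser import Parser
from email.utils import parseaddr

msg = Parser().parse(sys.stdin)
alias = parseaddr(msg.get('To', ''))[1].split('@')[0]

db = sqlite3.connect('/var/www/db/havedane.net.sqlite3')
db.execute("UPDATE tests SET do = 1 WHERE alias = ?", (alias,))
db.commit()
```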
Reload the mail transfer agent: sudo service postfix reload.
## Domain name and DNS
On your workstation, install the tool hash-slinger. It may also be available through your package manager. As an alternative, you can use an online generator such as this one.
First, generate legitimate TLSA records for do.havedane.net:
tlsa --create --port 25 --usage 2 --selector 1 --certificate ca.crt do.havedane.net
tlsa --create --port 25 --usage 3 --selector 1 --certificate do.havedane.net.crt do.havedane.net
On your DNS server (or at your DNS provider), set these as the TLSA record for TCP port 25 on do.havedane.net.
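For reference, the resulting records look roughly like this in a zone file (the hashes are shortened placeholders; your values will differ):

```
_25._tcp.do.havedane.net. IN TLSA 2 1 1 6a8d2f...   ; usage 2 (CA), selector 1 (SPKI), SHA-256
_25._tcp.do.havedane.net. IN TLSA 3 1 1 3b1c9e...   ; usage 3 (end entity), selector 1, SHA-256
```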
Next, make small modifications in the hash values that hash-slinger just computed, and set these as the TLSA records for wrong.havedane.net. Now, DANE verification of the certificate at wrong.havedane.net should fail. You can test this at the sys4 DANE checker.
For dont.havedane.net you don’t have to set any TLSA records, as the point of this domain is that it does not have TLSA records.
Set A and AAAA records for havedane.net, do.havedane.net, dont.havedane.net and wrong.havedane.net. They should all point to your server. No separate MX records should be necessary, as mail servers use A and AAAA records in the absence of one.
Your DNS server or DNS provider must support DNSSEC. You can check whether this is the case at internet.nl.
## I think that’s it!
Obviously, I have written these instructions after I finished building HaveDane.net. Therefore, stuff will probably be missing. Contact me if you try these instructions but they do not work.
One thing that I think should be included (but that I have not built yet) is automatic deletion of old table rows in the database. This shouldn’t be too difficult (compare the timestamp with the current date and delete anything that’s older than 24 hours, run this in a script as a cron job), but I haven’t figured it out yet.
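One possible shape for that cleanup (an untested sketch; note that rows which never received any mail have no timestamp, so they would need a separate 'created' column or similar):

```python
#!/usr/bin/env python3
# Intended to run from cron (e.g. hourly): delete test rows whose
# first delivery is more than 24 hours old.
import sqlite3

db = sqlite3.connect('/var/www/db/havedane.net.sqlite3')
db.execute("DELETE FROM tests WHERE firstreceived IS NOT NULL "
           "AND firstreceived < datetime('now', '-1 day')")
db.commit()
```

This could then be installed with a crontab entry such as `0 * * * * /root/bin/cleanup-tests.py` (filename hypothetical).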
#### Footnotes
1. RFC 7672: SMTP Security via Opportunistic DNS-Based Authentication of Named Entities (DANE) Transport Layer Security (TLS)
2. The code for this project is available on GitHub under an MIT license.
|
2020-01-23 08:33:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21045377850532532, "perplexity": 6941.081416641107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250609478.50/warc/CC-MAIN-20200123071220-20200123100220-00162.warc.gz"}
|
https://online.stat.psu.edu/stat505/book/export/html/697
|
# 12.6 - Final Notes about the Principal Component Method
Unlike the competing methods, the estimated factor loadings under the principal component method do not change as the number of factors is increased. This is not true of the remaining methods (e.g., maximum likelihood). However, the communalities and the specific variances will depend on the number of factors in the model. In general, as you increase the number of factors, the communalities increase towards one and the specific variances will decrease towards zero.
The diagonal elements of the variance-covariance matrix $$\mathbf{S}$$ (or $$\mathbf{R}$$) are equal to the diagonal elements of the model:
$$\mathbf{\hat{L}\hat{L}' + \hat{\Psi}}$$
The off-diagonal elements are not exactly reproduced. This is in part due to variability in the data - just random chance. Therefore, we want to select the number of factors to make the off-diagonal elements of the residual matrix small:
$$\mathbf{S - (\hat{L}\hat{L}' + \hat{\Psi})}$$
Here, we have a trade-off between two conflicting desires. For a parsimonious model, we wish to select the number of factors m to be as small as possible, but for such a model, the residuals could be large. Conversely, by selecting m to be large, we may reduce the sizes of the residuals but at the cost of producing a more complex and less interpretable model (there are more factors to interpret).
Another result to note is that the sum of the squared elements of the residual matrix is bounded above by the sum of the squared eigenvalues left out of the approximation:
$$\sum\limits_{j=m+1}^{p}\hat{\lambda}^2_j$$
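A minimal numpy sketch of the principal component extraction on a toy correlation matrix (not from the course materials), checking the bound above:

```python
# Principal component factor method: loadings from the leading
# eigenpairs, specific variances from the leftover diagonal.
import numpy as np

def pc_factor(R, m):
    vals, vecs = np.linalg.eigh(R)          # ascending eigenvalues
    order = np.argsort(vals)[::-1]          # sort descending
    vals, vecs = vals[order], vecs[:, order]
    L = vecs[:, :m] * np.sqrt(vals[:m])     # p x m loading matrix
    Psi = np.diag(np.diag(R - L @ L.T))     # specific variances
    return L, Psi, vals

R = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])
L, Psi, vals = pc_factor(R, m=1)
residual = R - (L @ L.T + Psi)              # zero diagonal by construction
print(np.sum(residual**2), np.sum(vals[1:]**2))  # first value <= second
```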
### General Methods used in determining the number of Factors
Below are three common techniques used to determine the number of factors to extract:
1. Cumulative proportion of at least 0.80 (or 80% explained variance)
2. Eigenvalues of at least one
3. Scree plot based on the "elbow" of the plot; that is, where the plot turns and begins to flatten out
|
2021-07-30 17:27:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7689543962478638, "perplexity": 357.55316899874055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153971.20/warc/CC-MAIN-20210730154005-20210730184005-00306.warc.gz"}
|
http://onsnetwork.org/chemblaics/category/wikipathways/
|
# EW8: replacing CAS registry numbers with free ChEBI identifiers
Hypothesis: CAS registry numbers can be replaced with free ChEBI identifiers.
Start date: 2014-12-07 End date: YYYY-MM-DD
Description: CAS registry numbers are non-free identifiers for chemical substances. BridgeDb does not have identifier mappings for them, as it is legally not allowed to create such (large) mapping databases (without explicit, non-transferable approval). Because of the lack of these databases, PathVisio cannot map experimental data to <gpml:DataNode>s in WikiPathways with such identifiers.
The goal of this experiment is to replace CAS registry numbers with ChEBI identifiers for which many more mappings are available in BridgeDb-provided identifier mapping files.
Methods
• generate a list of <gpml:DataNode>s with CAS registry data sources on WikiPathways (a sketch of this step follows after this list)
• for each, verify the chemical identity with the CAS reference database common-chemistry.org
• based on this chemical identity look up the matching ChEBI entry
• replace the identifier in the WikiPathways (e.g. via GPML editing)
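A minimal Python sketch of the first step (the original work would likely use the WikiPathways/Bioclipse tooling; the folder name and GPML namespace URI are assumptions):

```python
import xml.etree.ElementTree as ET
from pathlib import Path

NS = "{http://pathvisio.org/GPML/2013a}"   # assumed GPML namespace

# List DataNodes whose Xref uses the CAS data source, assuming a folder
# of downloaded GPML files.
for gpml in Path("pathways").glob("*.gpml"):
    root = ET.parse(gpml).getroot()
    for node in root.findall(f"{NS}DataNode"):
        xref = node.find(f"{NS}Xref")
        if xref is not None and xref.get("Database") == "CAS":
            print(gpml.name, node.get("TextLabel"), xref.get("ID"))
```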
Report
$DETAILS OF ACTUALLY PERFORMING THINGS

Conclusion: $CONCLUSIONS
# EW7: converting metabolite Labels into DataNodes in WikiPathways GPML
Hypothesis: The GPML format has sufficient information to convert a metabolite encoded as a Label into a DataNode with identifier
Start date: 2014-09-04 End date: 2014-09-06
Description:
The GPML format is used by WikiPathways to internally store pathways. The format is human-readable, allowing for adding missing information. In particular, it can be used to convert metabolites encoded as <Label> elements into <DataNode> elements. Lists of potential <Label> elements to be converted are outlined in other experiments, such as EW6.
For example:
<Label TextLabel="Acetyl-CoA" GraphId="c7c">
<Graphics CenterX="150.0" CenterY="640.0" Width="90.33333333333333" Height="19.0" ZOrder="28672" FillColor="ffffff" FontWeight="Bold" FontSize="12" Valign="Middle" />
</Label>
This can be converted into:
<DataNode TextLabel="Acetyl-CoA" GraphId="c7c" Type="Metabolite">
<Graphics CenterX="150.0" CenterY="640.0" Width="90.33333333333333" Height="19.0" ZOrder="28672" FillColor="ffffff" FontWeight="Bold" FontSize="12" Valign="Middle" />
<Xref Database="ChEBI" ID="CHEBI:15351" />
</DataNode>
Methods
• Open a WikiPathways page in the MediaWiki edit mode
• Remove one or more <Label> elements to convert
• Convert the start and end tag from Label to DataNode
• Add the Type=”Metabolite” attribute (with value)
• Add a <Xref> child element, preferable with identifier for that metabolite
• Place the new <DataNode> elements just above the first <Interaction> element
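For those who prefer scripting the edit over hand-editing the XML, a minimal Python sketch of the same transformation (the GPML namespace URI and file names are assumptions; note it does not reposition the element above the first <Interaction>):

```python
import xml.etree.ElementTree as ET

NS = "http://pathvisio.org/GPML/2013a"     # assumed GPML namespace
ET.register_namespace("", NS)              # keep GPML as the default namespace
tree = ET.parse("pathway.gpml")
root = tree.getroot()

for label in root.findall(f"{{{NS}}}Label"):
    if label.get("TextLabel") == "Acetyl-CoA":
        label.tag = f"{{{NS}}}DataNode"    # rename Label -> DataNode
        label.set("Type", "Metabolite")
        xref = ET.SubElement(label, f"{{{NS}}}Xref")
        xref.set("Database", "ChEBI")
        xref.set("ID", "CHEBI:15351")

tree.write("pathway-edited.gpml", xml_declaration=True, encoding="UTF-8")
```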
Report
Many pathways have been updated using this approach in the past, but I had not previously written up the method I used. In the past few days, these are example pathways updated this way:
When there are many <Labels> to be converted, I commonly use a plain text editor and “replace” functionality.
It should be noted that graph identifiers do not get changed, so that links between elements in the GPML are preserved.
Conclusion: This method requires experience with manually editing XML files; the risk is that you break the GPML file, though the WikiPathways interface does validate the file before saving against the GPML XML Schema.
# EW6: Finding nodes in Rattus norvegicus pathways with IUPAC names
Hypothesis: Rattus norvegicus pathways in WikiPathways have DataNode’s with labels containing IUPAC names which can be tagged as type Metabolite.
Start date: 2014-09-05 End date: 2014-09-05
Description:
WikiPathways entries in GPML have DataNode objects and Label objects. It was found before [here, here] that metabolites can be encoded in pathways as Label objects and are therefore not machine-readable as Metabolite-type DataNodes, nor able to have a database identifier. As such, these metabolites are unusable for pathway analysis of metabolomics data.
By processing these GPML files (they are XML-based) and iterating over all Labels, we can attempt to convert each label into a chemical structure with OPSIN. This goes under the assumption that if OPSIN can parse the label into a structure, it is one. The label will be recorded along with the pathway identifier for manual inspection. For each structure it will also look up a ChemSpider identifier.
Methods
Unchanged protocol.
• Get a working Bioclipse development version (hard) with the OPSIN, InChI, and ChemSpider extensions
• A Groovy script to iterate over the GPML, find <Label> elements
• Each <Label> is parsed with OPSIN and, if successful, an InChI is generated
• Use the InChIs to find ChemSpider identifiers
• Output all as a text file and open metabolites in a Structure table
Report
Similar to the experiments for Anopheles gambiae and Homo sapiens, only curated pathways were analyzed, 143 in total, downloaded from WikiPathways.org on August 24. The Groovy script used is the one detailed in the EW5 experiment.
The script found 47 Labels that are possibly metabolites in 8 different rat pathways. The full list was uploaded to Gist.
Conclusion: Rat pathways also include metabolites encoded in GPML <Label> elements.
# EW5: Finding nodes in Homo sapiens pathways with IUPAC names
Hypothesis: Homo sapiens pathways in WikiPathways have DataNode’s with labels containing IUPAC names which can be tagged as type Metabolite.
Start date: 2014-09-01 End date: 2014-09-01
Description: WikiPathways entries in GPML have DataNode objects and Label objects. It was found before [here] that metabolites can be encoded in pathways as Label objects and are therefore not machine-readable as Metabolite-type DataNodes, nor able to have a database identifier. As such, these metabolites are unusable for pathway analysis of metabolomics data.
By processing these GPML files (they are XML-based) and iterating over all Labels, we can attempt to convert each label into a chemical structure with OPSIN. This goes under the assumption that if OPSIN can parse the label into a structure, it is one. The label will be recorded along with the pathway identifier for manual inspection. For each structure it will also look up a ChemSpider identifier.
Methods
• Get a working Bioclipse development version (hard) with the OPSIN, InChI, and ChemSpider extensions
• A Groovy script to iterate over the GPML, find <Label> elements
• Each <Label> is parsed with OPSIN and if successful, generate an InChI
• Use the InChIs to find ChemSpider identifiers
• Output all as a text file and open metabolites in a Structure table
Report
Similar to the experiment for Anopheles gambiae only curated pathways were analyzed, some 266 in total, downloaded from WikiPathways.org on August 24. The previous Groovy script was updated to point to the human pathways, but also to output the results in a file, rather than STDOUT. The new script was uploaded to myExperiment.org.
The script found 42 Labels that are possibly metabolites. The full list was uploaded to Gist. Again, labels were found which could not be linked to a single ChemSpider ID. For example, “5b-Pregnane-3,20-dione” which will results in these ChemSpider search hits: 21427590, 389575, 21232692, 21239075, 21237402. The result file also shows a few labels with new lines.
One metabolite was manually confirmed in WP1449: Imidazoquinolin. Interestingly, the Label was visually “connected” with “(anti-viral compounds)” which have a ChEBI identifier and could be converted to a DataNode of type Metabolite too:
Most work, however, needs to be done in the Tryptophan metabolism pathway (WP465); many metabolites are not properly made machine readable.
Conclusion:
Human pathways also include metabolites encoded in GPML <Label> elements, even in the curated subset.
# EW4: Finding nodes in Anopheles gambiae pathways with IUPAC names
Hypothesis: Anopheles gambiae pathways in WikiPathways have DataNode’s with labels containing IUPAC names which can be tagged as type Metabolite.
Start date: 2014-08-24 End date: 2014-08-24
Description: WikiPathways entries in GPML have DataNode objects and Label objects. It was found before [not published] that metabolites can be encoded in pathways as Label objects and are therefore not machine-readable as Metabolite-type DataNodes, nor able to have a database identifier. As such, these metabolites are unusable for pathway analysis of metabolomics data.
By processing these GPML files (they are XML-based) and iterating over all Labels, we can attempt to convert each label into a chemical structure with OPSIN. This goes under the assumption that if OPSIN can parse the label into a structure, it is one. The label will be recorded along with the pathway identifier for manual inspection. For each structure it will also look up a ChemSpider identifier.
Methods
• Get a working Bioclipse development version (hard) with the OPSIN, InChI, and ChemSpider extensions
• A Groovy script to iterate over the GPML, find <Label> elements
• Each <Label> is parsed with OPSIN and if successful, generate an InChI
• Use the InChIs to find ChemSpider identifiers
• Output all as a text file and open metabolites in a Structure table
Report
Twelve WikiPathways pathways for Anopheles gambiae were downloaded as part of the analysis collection. In the future, uncurated pathways can also be included, anticipating more metabolites not annotated as Metabolite-type nodes. A custom Groovy script for Bioclipse was used, based on a previous similar script available from myExperiment.org. The updated script has been made available on myExperiment.org too. The results of running this script are visible in the above screenshot.
Key calls to Bioclipse managers used in this script, in addition to using the Groovy XMLParser, are:
• cdk.createMoleculeList()
• opsin.parseIUPACName(name)
• inchi.generate(molecule)
• chemspider.resolve(inchiKey)
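A rough Python analogue of that pipeline, for readers without a Bioclipse setup. The three helpers are hypothetical stand-ins for the manager calls above, not real APIs, and must be wired up to actual tooling (OPSIN, an InChI library, ChemSpider's web API); the GPML namespace URI is an assumption:

```python
import xml.etree.ElementTree as ET

NS = "{http://pathvisio.org/GPML/2013a}"      # assumed GPML namespace

def opsin_parse(name):          # ~ opsin.parseIUPACName(name)
    return None                 # placeholder: return a structure or None

def to_inchi(structure):        # ~ inchi.generate(molecule)
    return ""                   # placeholder

def chemspider_resolve(key):    # ~ chemspider.resolve(inchiKey)
    return []                   # placeholder

def scan_labels(gpml_file):
    root = ET.parse(gpml_file).getroot()
    for label in root.findall(f"{NS}Label"):
        name = label.get("TextLabel")
        structure = opsin_parse(name)
        if structure is None:
            continue            # OPSIN could not parse it: likely not a IUPAC name
        inchi_key = to_inchi(structure)
        print(gpml_file, label.get("GraphId"), name, inchi_key,
              chemspider_resolve(inchi_key))
```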
Four metabolites were found, in one pathway (WP1230):
Ag_One_Carbon_Metabolism_WP1230_68447.gpml: node b93 -> Serine -> MTCFGRXMJLQNBG-UHFFFAOYSA-N -> CSID: [597]
Ag_One_Carbon_Metabolism_WP1230_68447.gpml: node ff7 -> Glycine -> DHMQDGOQFOQNFH-UHFFFAOYSA-N -> CSID: [730]
Ag_One_Carbon_Metabolism_WP1230_68447.gpml: node c8c -> Deoxythymidine monophosphate -> WVNRRNJFRREKAR-UHFFFAOYSA-N -> CSID: [315142]
Ag_One_Carbon_Metabolism_WP1230_68447.gpml: node a47 -> Deoxyuridine monophosphate -> JSRLJPSBLDHEIO-UHFFFAOYSA-N -> CSID: [21537275, 668, 21230588]
Three metabolites have a single ChemSpider identifier, whereas one has three ChemSpider identifiers.
Visual inspection of WP1230 (revision 68447) confirms our hypothesis:
Conclusion: Anopheles gambiae pathways indeed also include metabolites encoded in GPML <Label> elements.
|
2019-09-20 23:04:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3691750466823578, "perplexity": 11742.464301298553}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574084.88/warc/CC-MAIN-20190920221241-20190921003241-00039.warc.gz"}
|
http://www.fightfinance.com/?t=premium_par_and_discount_bonds
|
# Fight Finance
Bonds X and Y are issued by the same US company. Both bonds yield 10% pa, and they have the same face value ($100), maturity, seniority, and payment frequency. The only difference is that bond X and Y's coupon rates are 8 and 12% pa respectively. Which of the following statements is true?

Bonds A and B are issued by the same company. They have the same face value, maturity, seniority and coupon payment frequency. The only difference is that bond A has a 5% coupon rate, while bond B has a 10% coupon rate. The yield curve is flat, which means that yields are expected to stay the same. Which bond would have the higher current price?

The theory of fixed interest bond pricing is an application of the theory of Net Present Value (NPV). Also, a 'fairly priced' asset is not over- or under-priced. Buying or selling a fairly priced asset has an NPV of zero. Considering this, which of the following statements is NOT correct?

Which of the following statements about risk free government bonds is NOT correct? Hint: Total return can be broken into income and capital returns as follows:

$$\begin{aligned} r_\text{total} &= \frac{c_1}{p_0} + \frac{p_1-p_0}{p_0} \\ &= r_\text{income} + r_\text{capital} \end{aligned}$$

The capital return is the growth rate of the price. The income return is the periodic cash flow. For a bond this is the coupon payment.

Bonds A and B are issued by the same Australian company. Both bonds yield 7% pa, and they have the same face value ($100), maturity, seniority, and payment frequency.
The only difference is that bond A pays coupons of 10% pa and bond B pays coupons of 5% pa. Which of the following statements is true about the bonds' prices?
Bonds X and Y are issued by different companies, but they both pay a semi-annual coupon of 10% pa and they have the same face value ($100) and maturity (3 years). The only difference is that bond X and Y's yields are 8 and 12% pa respectively. Which of the following statements is true?

Bonds X and Y are issued by different companies, but they both pay a semi-annual coupon of 10% pa and they have the same face value ($100), maturity (3 years) and yield (10%) as each other.
Which of the following statements is true?
Which one of the following bonds is trading at a discount?
Which one of the following bonds is trading at par?
The coupon rate of a fixed annual-coupon bond is constant (always the same).
What can you say about the income return ($r_\text{income}$) of a fixed annual coupon bond? Remember that:
$$r_\text{total} = r_\text{income} + r_\text{capital}$$
$$r_\text{total, 0 to 1} = \frac{c_1}{p_0} + \frac{p_1-p_0}{p_0}$$
Assume that there is no change in the bond's total annual yield to maturity from when it is issued to when it matures.
Select the most correct statement.
From its date of issue until maturity, the income return of a fixed annual coupon:
Bonds X and Y are issued by the same company. Both bonds yield 10% pa, and they have the same face value ($100), maturity, seniority, and payment frequency. The only difference is that bond X pays coupons of 6% pa and bond Y pays coupons of 8% pa. Which of the following statements is true?

Bonds X and Y are issued by the same US company. Both bonds yield 6% pa, and they have the same face value ($100), maturity, seniority, and payment frequency.
The only difference is that bond X pays coupons of 8% pa and bond Y pays coupons of 12% pa. Which of the following statements is true?
Below are some statements about loans and bonds. The first descriptive sentence is correct. But one of the second sentences about the loans' or bonds' prices is not correct. Which statement is NOT correct? Assume that interest rates are positive.
Note that coupons or interest payments are the periodic payments made throughout a bond or loan's life. The face or par value of a bond or loan is the amount paid at the end when the debt matures.
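A minimal sketch (not part of the question bank) that prices a fixed-coupon bond and classifies it as premium, par, or discount:

```python
def bond_price(face: float, coupon_rate: float, yield_rate: float,
               years: int, freq: int = 2) -> float:
    c = face * coupon_rate / freq                 # periodic coupon
    r = yield_rate / freq                         # periodic yield
    n = years * freq                              # number of periods
    annuity = c * (1 - (1 + r) ** -n) / r         # present value of coupons
    return annuity + face / (1 + r) ** n          # plus present value of face

p = bond_price(face=100, coupon_rate=0.10, yield_rate=0.08, years=3)
print(round(p, 2))   # about 105.24 > 100, so the bond trades at a premium
# coupon rate > yield -> premium; == yield -> par; < yield -> discount
```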
|
2019-01-19 21:35:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3306608200073242, "perplexity": 1869.898992751518}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583681597.51/warc/CC-MAIN-20190119201117-20190119223117-00446.warc.gz"}
|
https://gianlubaio.blogspot.com/2017/05/the-swingers.html
|
## Tuesday, 30 May 2017
### The swingers
Kaleb has left a comment on a previous post, asking what constituencies my model predicted to change hands, with respect to the 2015 election. This is not too difficult to do, given the wealth of results and quantities that can be computed, once the posterior distributions are estimated.
Basically, what I have done is to compute, based on the "possible futures" simulated by the model, the probability that the parties win each of the 632 seats in England, Wales and Scotland. Many of them seem to be very safe seats $-$ I think this is consistent with current political knowledge, although in an election like this possibly more can change...
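For concreteness, a small sketch (with synthetic data, not the model's actual output) of how per-seat win probabilities and swing seats can be computed from simulated futures:

```python
import numpy as np

# sims: (n_sims x n_seats) array of simulated winning-party labels
rng = np.random.default_rng(1)
parties = np.array(["Con", "Lab", "LD", "SNP"])
sims = rng.choice(parties, size=(1000, 632), p=[0.5, 0.3, 0.1, 0.1])
winners_2015 = rng.choice(parties, size=632, p=[0.5, 0.35, 0.05, 0.1])

win_prob = np.stack([(sims == p).mean(axis=0) for p in parties])  # (4, 632)
predicted = parties[win_prob.argmax(axis=0)]                      # modal winner per seat
swings = np.flatnonzero(predicted != winners_2015)                # seats changing hands
print(len(swings))
```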
Anyway, using the very latest analysis (as of today, 30th May and based on all polls published so far, but discounting older ones), there are 39 seats that are predicted to change hands. The following graph shows the predicted distribution of the probability of winning each of those seats, together with an indication of who won in 2015.
Of course, Labour are the big losers (many of the 39 constituencies were Labour in 2015, but are predicted to swing to some other party in 9 days time). Conversely, the Tories are the big winners and, where they are predicted to gain a seat, they most often do so with a very large probability. There aren't very many real 50:50s $-$ a couple, I'd say, where the results are predicted to be rather uncertain.
Incidentally, as of today, this is the distribution of seats predicted by the model.
mean sd 2.5% median 97.5%
Conservative 359.467 5.4492757 351 358 371
Labour 209.276 5.3613961 198 211 218
UKIP 0.000 0.0000000 0 0 0
Lib Dem 14.699 2.1621920 10 15 19
SNP 48.055 2.7271620 42 48 52
Green 0.000 0.0000000 0 0 0
PCY 0.503 0.8286602 0 0 3
Other 0.000 0.0000000 0 0 0
Labour are continuing to close the gap on the Tories, but are still a long way out. I'm curious to see what last night's not-a-debate did to the polls...
|
2017-07-21 12:30:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5641494393348694, "perplexity": 8128.917534185122}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423774.37/warc/CC-MAIN-20170721122327-20170721142327-00095.warc.gz"}
|
https://blog.1a23.com/2020/03/21/sync-tweets-to-a-telegram-channel-using-account-activity-api/
|
# Sync Tweets to a Telegram Channel Using Account Activity API
Yet another post that has something to do with Telegram. Yeah, I know, but there's no such thing as too much when it comes to blog articles.
A lot of people around my Telegram circle have been maintaining their own channels, and a lot of them have a few hundred or even thousands of subscribers. I thought I might make one too, but I also don't want to give up my Twitter account, which is more accessible to search engines. So why not sync my tweets to the channel? Given the openness of both Telegram and Twitter, this shouldn't be much of an issue.
The first option I turned to was, of course, IFTTT. It is one of the most famous solutions for casual automation, and I have also used it in another script. The problem with IFTTT is that the options it provides are too simple: there is no text transformation, no conditional logic, and none of the other features I would need for an ideal forwarder. It even drops line breaks when quoting the tweet content.
Soon after I got time, I replaced the IFTTT bot with my own one for a better presentation and granular control over it.
## The bot
Setting up a Telegram bot for this purpose isn't hard, especially since it doesn't require any input. In fact you can even reuse a bot that you already have (which is what I was doing).
Add the bot to a channel as admin, get the ID/username of the channel, and you are good to go.
The hard part is creating a Twitter app to use the Account Activity API. You have to fill out a quite lengthy survey telling them why you want to make an app and how you are going to use the API. Only after they have approved your request can you proceed to the next step.
Note that Twitter has been pretty strict on the quota of the Account Activity API ever since they moved away from the Streaming API (a move which has broken most third-party clients). You can't share your API key and secret with more than 15 accounts if you are on the Sandbox (free) plan, or you may be asked to pay the bill.
Also, since the new Account Activity API is webhook-like, that means you have to figure out, in one way or another, how to expose your bot as an HTTP entry point. I used my existing Nginx web server to forward requests to the bot. Other methods should also work.
I am using a Python library called Twitivity which provides a simple Flask web server and some helper functions, PickleDB for simple file-based key-value storage, and Python Telegram Bot for the bot.
### Setup environment
Once you have set up your Twitter App, you can go to the Dev Environments page to create an environment for your Account Activity API. The environment label (e.g. env_name) will be used later in the code.
If you are running the bot on your own account, go to your app page and choose “Keys and tokens”. From there you can get your API key/token and access key/token in one go. For users other than yourself, you need to set up authentication manually to get the tokens.
Once the webhook is registered, you can copy over the config to the actual bot file, and start it up. Since it runs flask in the background, you can actually use all the fancy uWSGI and Gunicorn stuff to maintain the bot, but a simple flask dev server should suffice if you don’t tweet 100 times per second.
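As an illustration of what the webhook endpoint has to do (independent of Twitivity, which wraps this for you), here is a minimal Flask sketch handling Twitter's CRC challenge and incoming events; the route path and environment variable name are assumptions:

```python
import base64
import hashlib
import hmac
import os

from flask import Flask, jsonify, request

app = Flask(__name__)
CONSUMER_SECRET = os.environ["TWITTER_CONSUMER_SECRET"]  # assumed env var

@app.route("/webhook/twitter", methods=["GET"])
def crc_check():
    # Twitter's Challenge-Response Check: sign crc_token with the app secret.
    crc_token = request.args["crc_token"]
    digest = hmac.new(CONSUMER_SECRET.encode(), crc_token.encode(),
                      hashlib.sha256).digest()
    return jsonify({"response_token": "sha256=" + base64.b64encode(digest).decode()})

@app.route("/webhook/twitter", methods=["POST"])
def receive_event():
    payload = request.get_json(force=True)
    # Account Activity payloads carry event-type keys, e.g. "tweet_create_events".
    for tweet in payload.get("tweet_create_events", []):
        handle_tweet(tweet)  # handler sketched further below
    return "", 200
```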
As you might have seen above, the bot itself has quite complicated logic: it identifies the nature of each tweet and treats each kind differently (a sketch follows the list below).
• For plain tweets, the bot would expand all shortened links back as the length limit here isn’t that strict. Link preview will only be enabled if a link is found in the tweet.
• For tweets with media, the bot will send them as picture/video (or media group) messages. Thanks to the fact that the Telegram Bot API accepts external media URLs, we don't need to download and re-upload them.
• For retweets with comments, only the comment is copied over.
• For likes and plain comments, the original tweet is only shown as link preview.
• Every message sent has links to the original tweet, and to the source tweet too if it's a like or retweet. (The links are on the emoji at the end of the messages.)
• If the tweet is a reply to something that we already have, it will reply to the previous message in the channel.
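A condensed sketch of how such a handler might branch on tweet type and post to the channel with python-telegram-bot; the field names follow the standard v1.1 tweet object, and the token, channel name, and simplified rules are placeholders rather than the actual bot's code:

```python
import telegram  # python-telegram-bot

bot = telegram.Bot(token="BOT_TOKEN")   # placeholder token
CHANNEL = "@my_channel"                 # placeholder channel username

def handle_tweet(tweet: dict) -> None:
    # Plain retweets: show the source only as a link preview elsewhere.
    if tweet.get("retweeted_status") and not tweet.get("is_quote_status"):
        return
    text = tweet.get("extended_tweet", {}).get("full_text", tweet.get("text", ""))
    media = tweet.get("extended_entities", {}).get("media", [])
    link = f"https://twitter.com/i/status/{tweet['id_str']}"
    if media and media[0]["type"] == "photo":
        # Telegram accepts external media URLs, so no re-upload is needed.
        bot.send_photo(chat_id=CHANNEL, photo=media[0]["media_url_https"],
                       caption=f"{text}\n{link}")
    else:
        bot.send_message(chat_id=CHANNEL, text=f"{text}\n{link}")
```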
Some screenshots here:
|
2023-02-08 13:26:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26130685210227966, "perplexity": 1524.5213832973718}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500813.58/warc/CC-MAIN-20230208123621-20230208153621-00801.warc.gz"}
|
http://sites.millersville.edu/bikenaga/courses/345-s18/homework/ps7/ps7-solutions.html
|
# Solutions to Problem Set 7
Math 345-504
2-9-2018
1. Find the greatest common divisor of 831 and 240.
Then write as a linear combination of 831 and 240 with integer coefficients.
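The worked solution did not survive extraction; the computation is the standard Euclidean algorithm, reconstructed here:

$$\begin{aligned} 831 &= 3 \cdot 240 + 111 \\ 240 &= 2 \cdot 111 + 18 \\ 111 &= 6 \cdot 18 + 3 \\ 18 &= 6 \cdot 3 + 0 \end{aligned}$$

Hence $(831, 240) = 3$, and back-substituting,

$$3 = 111 - 6 \cdot 18 = 13 \cdot 111 - 6 \cdot 240 = 13 \cdot 831 - 45 \cdot 240.$$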
2. (a) Find and write it as a linear combination with integer coefficients of 103 and 83.
(b) Using the result of (a) and without using trial and error, find specific integers x and y such that
(a)
(b) Multiplying the equation in (a) by 55, I obtain
Thus, , is a solution to the equation.
In fact, there are infinitely many solutions of the form:
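The specifics here were likewise lost; a reconstruction, assuming (from the "multiplying by 55" step) that the missing equation in (b) is $103x + 83y = 55$:

(a) The Euclidean algorithm gives $103 = 1 \cdot 83 + 20$, $83 = 4 \cdot 20 + 3$, $20 = 6 \cdot 3 + 2$, $3 = 1 \cdot 2 + 1$, so $(103, 83) = 1$, and back-substitution yields $1 = 36 \cdot 83 - 29 \cdot 103$.

(b) Multiplying by 55: $55 = 103 \cdot (-1595) + 83 \cdot 1980$, so $x = -1595$, $y = 1980$ is a solution, and the infinitely many solutions have the form $x = -1595 + 83n$, $y = 1980 - 103n$ for $n \in \mathbb{Z}$.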
3. Prove that if n is an integer, then and are relatively prime.
4. (a) Let . Prove that if and and , then .
(b) Give a counterexample to show that if and and , it does not necessarily follow that . (Note the difference in assumptions between (a) and (b)!)
(a) If and , then and for some .
If , there are integers a and b such that
Multiply by x and substitute:
Hence, .
(b) and , but .
5. Let G be a group and let . Suppose that and
Prove that .
Since , I have
Then
It is now, and in this world, that we must live. - André Gide
Contact information
|
2018-03-19 14:57:36
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9165913462638855, "perplexity": 611.6692412041241}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646952.38/warc/CC-MAIN-20180319140246-20180319160246-00152.warc.gz"}
|
https://mtosmt.org/issues/mto.17.23.2/mto.17.23.2.spicer.php
|
# Fragile, Emergent, and Absent Tonics in Pop and Rock Songs *
## Mark Spicer
KEYWORDS: tonality, popular song, rock harmony, fragile tonics, emergent tonics, absent tonics, soul dominant, Sisyphus effect
ABSTRACT: This article explores the sometimes tricky question of tonality in pop and rock songs by positing three tonal scenarios: 1) songs with a fragile tonic, in which the tonic chord is present but its hierarchical status is weakened, either by relegating the tonic to a more unstable chord in first or second inversion or by positioning the tonic mid-phrase rather than at structural points of departure or arrival; 2) songs with an emergent tonic, in which the tonic chord is initially absent yet deliberately saved for a triumphant arrival later in the song, usually at the onset of the chorus; and 3) songs with an absent tonic, an extreme case in which the promised tonic chord never actually materializes. In each of these scenarios, the composer’s toying with tonality and listeners’ expectations may be considered hermeneutically as a means of enriching the song’s overall message. Close analyses of songs with fragile, emergent, and absent tonics are offered, drawing representative examples from a wide range of styles and genres across the past fifty years of popular music, including 1960s Motown, 1970s soul, 1980s synthpop, 1990s alternative rock, and recent U.S. and U.K. #1 hits.
Volume 23, Number 2, June 2017
[1] Skilled nineteenth-century song composers such as Robert Schumann and Johannes Brahms often exploited tonality and its expectations for symbolic or expressive purposes. About Brahms’s 1873 song “Regenlied” (op. 59 no. 3), for example, Heather Platt observes that “[t]hroughout most of this song the tonic is absent . . . [and] the resulting wandering harmonies . . . create a dream world in which time seems to be suspended” (1999, 248). Broaching the issue of harmony and tonal design in their analysis of German lieder, Deborah Stein and Robert Spillman use the term implicit tonality to refer to “a section of music where a key is suggested (i.e., implied) but not fully (i.e., explicitly) presented” (1996, 135).(1) This essay argues that popular-song composers since the 1960s have often toyed with tonality in much the same way, and that many striking examples of such ephemeral tonal designs can be found in the vast repertoire of pop and rock songs recorded over the past several decades. Before proceeding, however, I will first anticipate my conclusions, positing three tonal scenarios to be explained more fully over the course of the music analyses that follow: 1) songs with a fragile tonic, in which the tonic chord is present but its hierarchical status is weakened, either by relegating the tonic to a more unstable chord in first or second inversion or by positioning the tonic mid-phrase rather than at structural points of departure or arrival; 2) songs with an emergent tonic, in which the tonic chord is initially absent yet deliberately saved for a triumphant arrival later in the song, usually at the onset of the chorus; and 3) songs with an absent tonic, an extreme case in which the promised tonic chord never actually materializes.
Example 1. Daryl Hall and John Oates, “She’s Gone” (1973)
(click to enlarge and listen)
Audio Example 1
Audio Example 2
Audio Example 3
[2] Example 1 provides a formal synopsis of Daryl Hall and John Oates’ 1973 song “She’s Gone” alongside a summary of the harmonic content in each of the song’s respective sections.(2) Like many pop and rock songs, “She’s Gone” opens with an extended introduction built upon an oscillating two-chord vamp—in this case, close position A major and B major triads alternating over a B pedal. Well over a minute into the track, the vocals enter with the first verse, the initial three lines of which are built upon this same oscillating vamp, yielding fleetingly to G♯ minor-seventh and C♯ minor-seventh chords in the fourth line before returning to the vamp for the song’s second verse. Audio Example 1 contains the end of the introduction through the first and on into the second verse.
[3] How are we to make sense of the tonal information that has been presented to us so far? All of the chords conform to a key signature of four sharps, suggesting E major, and yet the tonic chord is notably absent from the verse’s chord progression. In fact, the entire introduction and first two verses seem to be all about prolonging the dominant, with particular emphasis placed on the A major over a B bass “slash” chord that begins and ends each verse. This distinctive keyboard sonority which I have christened the “soul dominant”—best thought of here as a close position IV chord over $\stackrel{ˆ}{5}$ in the bass, conflating subdominant and dominant functions—is common to many pop and rock styles, but especially prevalent within the lush, extended harmonic language of 1970s soul music, hence its nickname.(3) The resulting effect felt throughout the introduction and first two verses of “She’s Gone” is one of constant tension, setting the stage for the E major tonic chord to emerge triumphantly in the song’s chorus. When the tonic chord ultimately does emerge, however, it is quite fragile, relegated to a passing harmony in first inversion (Audio Example 2).
[4] The ending of “She’s Gone” is also special and merits further commentary. Rather than fading out in E major, the third chorus instead gives way to a short but remarkable instrumental break—fully loaded with horns, strings, and a wailing lead guitar—that once again shines a spotlight on the soul dominant and consists of no fewer than three consecutive “truck driver’s modulations” up by semitone.(4) Out of all this the final chorus re-emerges a minor third higher in the chromatic mediant key of G major, above which Daryl Hall’s voice soars with passion and anguish as he hammers home the song’s title lyric and bemoans the loss of his lover for one last round. With nowhere else left to go, the chorus simply repeats and fades away (Audio Example 3).
Example 2. The Four Tops, “Reach Out I’ll Be There” (1966)
(click to enlarge and listen)
Audio Example 4
[5] As Walter Everett confirms in his book The Foundations of Rock, non-tonic openings like the one in “She’s Gone” have appeared frequently in pop and rock music since at least the 1960s. Surveying a variety of such songs by the Beatles and others, Everett aptly notes that “these songs all find their tonics eventually, with a rush of familiarity that often seems like the dissipation of clouds” (2009, 215)—a phenomenon that I call an emergent tonic. Yet sometimes this aural game of “hunt the tonic” is not quite so simple, a case in point being the Four Tops’ 1966 #1 Motown single “Reach Out I’ll Be There” (Example 2). The eight-bar introduction to “Reach Out,” which features the song’s signature flute riff, establishes E minor with chords alternating between i and V every other bar.(5) This sense of E minor as tonic is immediately thwarted at the onset of the verse, however, as the chord progression is taken over suddenly by an oscillating vamp of A minor seventh to D major that repeats five times; these chords get locked into a seemingly ever-repeating loop. As I have shown in my harmonic analysis below the staff, this two-chord vamp may be interpreted as a repeating ii7–V progression searching for its tonic in the relative major key of G, a goal made clear only at the onset of the prechorus (when the title lyric appears for the first time).(6) Almost as soon as G major has been established, the sense of tonic is thwarted yet again as first-inversion and root-position G chords yield to a first-inversion B major chord (these three chords being anchored by James Jamerson’s bass line in falling major thirds), followed by a fully-diminished leading tone seventh chord that leads us back into E minor for the song’s chorus. But even here the tonic is rendered fragile on its re-emergence, as Jamerson stubbornly remains on a B bass note for the first two bars of the chorus, placing the re-emergent E tonic chord initially in second inversion before “correcting” itself to root position in the third measure. To top this off, while Levi Stubbs’s lead vocal emphasizes the pitches F♯ to E, $\stackrel{ˆ}{2}$ to $\stackrel{ˆ}{1}$, on successive downbeats in the first two bars of the chorus, one of the guitarists (presumably Joe Messina) clearly plays an E major triad above Jamerson’s B bass note in the second measure.(7) I invite the reader now to listen to this remarkable series of tonal events in Audio Example 4.
[6] Offering his own interpretation of the fragile tonal design of “Reach Out” and how it might contribute to the song’s overall impact, Everett claims that “the confusion, illusion, fear, cold, and drifting that trouble the singer’s love object in the verse turn to raw paranoia in the chorus, and the withholding of harmonic support and clarity are the chief expressive factors” (2008, 116). While Everett limits his survey in The Foundations of Rock to songs composed and recorded during rock’s formative period (through 1969), many other songs with similarly fragile tonal designs can be found among pop and rock songs composed in later decades.
Example 3. Elton John, “Someone Saved My Life Tonight” (1975)
(click to enlarge and listen)
Audio Example 5
Audio Example 6
[7] Elton John’s top-five pop hit “Someone Saved My Life Tonight,” from his 1975 concept album Captain Fantastic and the Brown Dirt Cowboy, offers another profound example of a song with a fragile tonic. With lyrics written by Elton’s lifelong songwriting partner Bernie Taupin (the “Brown Dirt Cowboy” to Elton’s “Captain Fantastic”), each song on the album provides an autobiographical glimpse into their struggles as young songwriters trying to make it in the London scene in the late 1960s. “Someone Saved My Life Tonight,” based on events from 1969, springs from a time when Elton evidently contemplated suicide until his friend and bandmate Long John Baldry (the “Sugarbear” of the song’s chorus) “saved his life” by convincing him not to risk ruining his burgeoning musical career by marrying his girlfriend and getting trapped in an unhappy relationship. Example 3a shows the song’s signature introductory piano riff, which reappears at crucial junctures throughout the song and also serves as the repeating loop for the song’s fade-away coda. While “Someone Saved My Life Tonight” is unmistakably in A major and the sense of A as tonic becomes clearer as the melody unfolds, the initial tonic chord of the piano riff is in second inversion and hence sounds particularly unstable and fragile, especially as it alternates with a root-position D major chord. The bar-to-bar alternation between G and G, as part of the octave-doubled ascending and descending scalar flourishes in the left hand, complicates the issue of discerning the tonal center of the piano riff even further. It might be tempting to say that the tonal center here is trying to shift to the subdominant D, but rather than a shift of key center, I prefer to think of this as an instance of what David Temperley (2011) calls “scalar shift”—in this case, one shift in the flatwards direction, replacing $\stackrel{ˆ}{7}$ with $\stackrel{ˆ}{7}$. In other words, this represents a back-and-forth alternation between Ionian and Mixolydian modes of the same key.
[8] Example 3b provides a harmonic reduction of the verse and chorus. The verse begins on the same fragile second-inversion A tonic chord, which again does not behave as a six-four chord would do in Classical tonality, resolving to a root-position IV chord that quickly moves by way of a passing G chord toward an odd-sounding cadence on its own fragile second-inversion chord (a D major triad over an A bass). The next phrase begins with oscillating G–A chords that again sound to me like a temporary scalar shift to the Mixolydian mode, hence ♭VII–I. The root-position A chords that occur at this point during the verse progression provide the only instances of root-position tonic chords in the whole song, yet they hardly sound like stable points of arrival. In fact, after one repetition of the oscillating ♭VII–I chords in half notes, the verse progression continues with D major (IV) and the borrowed D minor (iv) for one measure each, followed by a B dominant seventh that does not resolve immediately as a V7 of V but instead swerves back to the IV chord. The bass then creeps chromatically upwards toward $\stackrel{ˆ}{5}$, with the verse culminating on a big half cadence decorated with a cadential six-four that this time does resolve conventionally. Audio Example 5 presents the opening minute of the song, from the introductory piano riff through the half cadence ending the first verse.
[9] Following this half cadence, the ensuing chorus once again defies the expectations of Classical tonality by beginning not on the tonic but on the root-position IV chord, with the title lyric “Someone Saved My Life Tonight” set to the same IV–passing I6–ii progression that we heard in the chorus to “She’s Gone” (also reminiscent of the refrain from another famous song with suicidal tendencies, Brian Wilson’s 1966 masterpiece “God Only Knows” from the Beach Boys’ Pet Sounds).(8) With the harmonic design of the chorus even more unstable than the verse, here again the bass ultimately creeps upwards chromatically toward $\stackrel{ˆ}{5}$, at first overshooting it to $\stackrel{ˆ}{6}$ before doubling back to $\stackrel{ˆ}{4}$ and climbing again chromatically to $\stackrel{ˆ}{5}$ as Elton sings the melodic climax of the chorus—“. . . fly away, high away!”—over another painfully fragile second-inversion tonic.(9) Yet immediately after this climactic moment, the second-inversion A chord yields again to the IV chord (D major), this time slyly by way of a deceptive resolution of V7 of vi, musically portraying the idea of a butterfly “flying away.” The three-bar tag that closes the chorus overshoots A and comes to rest momentarily on the ♭VII chord (G major), which functions in context like a retransitional dominant (with ♭VII substituting for V, as it often does in modal rock harmony) leading back to the second-inversion A chord of the opening piano riff. Audio Example 6 picks up from the onset of the chorus.
[10] The fragile tonal design of “Someone Saved My Life Tonight,” with what seems to be a deliberate avoidance of tonic chords in root position for most of the song, mirrors perfectly the fragile emotional state of the song’s protagonist (which, in this case, we can assume to be Elton John himself). I do not profess to have undertaken a systematic corpus study of post-1950s pop and rock songs (such as de Clercq and Temperley [2013] and others have done) and am therefore relying on the large database of pop and rock songs that I carry around in my head, but I think it’s pretty safe to say that most if not all songs with fragile tonics that similarly feature an abundance of second-inversion chords (and with a bass that snakes up and down mostly stepwise) were composed at the piano and not the guitar, since guitar-driven pop and rock songs tend to favor root-position “power” chords (octave-and-fifth doubled, with the third often omitted) and as a result lack the degree of voice-leading independence more typical of keyboard-driven songs.
Example 4. Prince, “Little Red Corvette” (1982)
(click to enlarge and listen)
Audio Example 7
Example 5. Possible harmonic-functional interpretations of some oscillating two-chord vamps
(click to enlarge)
[11] In “She’s Gone,” we saw that the tonic chord was absent during the verses and saved for the song’s chorus. I would like to look briefly now at another example of a song with an emergent tonic, Prince’s 1982 hit “Little Red Corvette” (Example 4). In her 2010 Music Theory Spectrum article, Nicole Biamonte provides a useful catalogue of the most common triadic modal and pentatonic patterns used in rock, where she discusses VI–VII–i as the prime exemplar of Aeolian harmony (the so-called “Aeolian cadence”). While the verse progression of “Little Red Corvette” could be interpreted on the local level as VI–VII–i–VI7 in B minor, I cannot help but hear this progression as a series of deceptive moves—that is, a repeating IV–V–vi–IV7 in D major. As in “She’s Gone,” the tonic chord seems to be deliberately withheld by Prince for its climactic arrival in the chorus (“[IV] Little [V] Red Cor- [I] -vette”), where it serves as a metaphor for the release of the sexual tension built up in the preceding verse. As I have shown in the notated example, the melodic design of the verse composes out a linear progression of $\stackrel{ˆ}{6}$$\stackrel{ˆ}{7}$$\stackrel{ˆ}{1}$. The same scale degrees are moved through twice as fast at the onset of the chorus as the tonic chord emerges. Audio Example 7 contains the opening of the track, from the intro through the first verse and breakout chorus.(10)
[12] In certain extreme cases, however, the promised tonic chord never actually materializes. But before considering some examples of truly absent tonics, let us ponder more closely this phenomenon of oscillating two-chord vamps. Example 5 shows four such vamps that are commonly found in pop and rock songs, with the Roman numerals below indicating their possible harmonic-functional interpretations (for ease of comparison, all four are transposed to begin with a chord on G). The first vamp alternates a major chord with a minor chord whose root lies a whole step above. Depending on such variable factors as durational accent, the presence or absence of a bass pedal tone, precisely which pitches are emphasized in the vocal melody and/or riffs sounding above these chords, and which of the two chords is sounded first, it is possible in the context of an actual song for the ear to latch on to one or the other of these chords as the tonic—that is, as either a repeating I–ii in major (as in, for example, the Beatles’ “Don’t Let Me Down” [1969], where the ii chord sounds first), or a ♭VII–i in minor (as in, for example, Frankie Goes to Hollywood’s “Relax” [1983], where the i and ♭VII chords oscillate over a tonic pedal).(11) Similarly, the second vamp may be interpreted either as a repeating I–vi in major (as in the opening of the Beatles’ “From Me to You” [1963]), or a repeating ♭III–i in minor (as in the verses of the Police’s “Invisible Sun” [1981]). The third vamp consists of a minor chord alternating with a major chord whose root lies a fifth below. While this vamp, as we saw in “Reach Out,” can sometimes function as a repeating ii–V in major searching for its tonic, in modal rock songs the first chord will more often than not assume the status of the tonic itself, resulting in a Dorian i–IV (as in the Zombies’ “She’s Not There” [1964], for example, as well as several of the songs on Pink Floyd’s The Dark Side of the Moon [1973] and many songs by Santana—so much so, in fact, that Frank Zappa and his band in the late 1970s jokingly nicknamed G minor to C major as the “Carlos Santana Secret Chord Progression”).(12) This brings us to the fourth vamp, the most problematic of all, which consists of two alternating major chords whose roots lie a whole step apart and therefore might be interpreted in context as a Lydian I–II. As Everett explains in The Foundations of Rock, “the Lydian scale is marked by the most dissonant of intervals, the augmented fourth, involving the tonic scale degree and the distance from $\stackrel{ˆ}{1}$ to $\sharp\stackrel{ˆ}{4}$” (Everett 2009, 256). He then cites several examples of 1960s pop and rock songs featuring an oscillating I–II vamp, which he calls a “Lydian fingerprint,” such as the opening of the verse of the Turtles’ 1967 hit “You Know What I Mean.” In this particular song, however, the I–II oscillating vamp is quickly relieved—corrected, if you will—after only two iterations by the ensuing IV chord, which replaces the raised 4th scale degree with a plain $\stackrel{ˆ}{4}$. Much more problematic for discerning a tonal center are cases in which an oscillating vamp of two whole-step related major chords serves as the harmonic foundation for an entire section, or even—in extreme cases—an entire song.
Example 6. Two songs built on a shuttle of two major chords with roots lying a whole step apart
(click to enlarge and listen)
Audio Example 8
Audio Example 9
Audio Example 10
Example 7. The Human League, “Human” (1986), main progression (the “Sisyphus effect”)
(click to enlarge)
[13] Let us consider two such examples. Example 6a shows a transcription of the opening of Jane’s Addiction’s 1988 alternative rock classic “Jane Says,” a track built entirely on a repeating two-chord guitar riff of G major–A major. Above Dave Navarro’s incessant guitar riff, Perry Farrell’s vocal melody insistently outlines the tonic triad of D major, and seems to be at odds with the oscillating chords below; indeed, this is an excellent example of what David Temperley (2007) and more recently Drew Nobile (2015), following Allan Moore (1995), have termed the “melodic-harmonic divorce” in rock. To my ears, this divorce between the melody and harmony causes the whole song to sound like an ever-repeating IV–V that is searching for its tonic but never resolves (Audio Example 8).(13)
[14] Another of my favorite examples of a song that promises a tonic which never fully materializes is the Spinners’ 1972 #1 soul hit “I’ll Be Around” (Example 6b), a simple verse–chorus form built entirely upon a circular groove involving alternating E♭ major-seventh and F major added-sixth chords.(14) The opening of the song is provided in Audio Example 9. Again, one might be tempted to hear this looping progression as some sort of Lydian I–II, but everything else in the track—from Bobbie Smith’s vocal melody, which persistently outlines a G-minor triad and repeatedly comes to rest locally on the pitch G, to the signature string/horn riff during the instrumental break, which marches down what is clearly a G Aeolian scale with D, $\stackrel{ˆ}{5}$, as its head note—points toward hearing this progression against the backdrop of G minor: hence, an Aeolian ♭VI–♭VII with “absent” i. The song’s chorus and instrumental break are provided in Audio Example 10. This ever-repeating ♭VI–♭VII, which always drops back to “resolve” on the ♭VI chord and avoids cadencing on the promised tonic, is an example of what I have dubbed “the Sisyphus effect”—that is, a sequence of chords with their bass notes ever moving stepwise up a hill, only to fall back down to the bottom either just before (as in “I’ll Be Around”) or as soon as the goal tonic is reached.
[15] In a further illustration of this effect, Example 7 shows the repeated chord progression undergirding the verses and choruses of the Human League’s 1986 hit “Human” (a song composed by R&B producers Jimmy Jam and Terry Lewis).(15) The song begins with a catchy two-bar drum machine loop over an extended pedal drone on D that repeats four times, creating an eight-bar introduction. The extended pedal might lead us to expect D as the tonic, yet the progression that emerges confirms A as the tonic. With $\stackrel{ˆ}{5}$ held as a common tone in the upmost voice, the bass marches up the hill from $\stackrel{ˆ}{4}$ to $\stackrel{ˆ}{1}$; as soon as the tonic is reached, however, the progression slips back down to the IV chord. Ultimately the song will fade out on the same pedal D as it began, but I prefer to think of this song as being in a fragile A major, mirroring the fragile emotional condition of the cheating couple who keep plaintively reiterating in the chorus, “I’m only human, of flesh and blood I’m made.” It seems only appropriate that every time the fidelity of the tonic chord is reached, the progression falls back down the hill and the cycle repeats.
[16] All of the songs discussed so far exploit two stylistic features central to the tonal design of many if not most pop and rock songs: 1) the use of short, goalless circular progressions comprised of two, three, or four chords, which Philip Tagg in Everyday Tonality (2014) prefers to call shuttles in the case of back-and-forth motion between just two chords and loops in the case of circular progressions of three chords or more; and 2) the use of a fadeout to end the song, or rather, to cause the song to evaporate without providing full closure, with the fadeout most often set over a repeating chord shuttle or loop. These two features more than any other set the tonal design of pop and rock songs apart from their art-music counterparts and all but invite us to hear songs or sections of songs as fragmentary tonal structures with absent tonics. Yet for me, perhaps even more important in my perception of a particular song as having an absent tonic is the way in which I situate the track within the universe of other pop and rock tracks that I know—“musical worlding,” as John Covach (1994) once called this practice—and compare it to other songs making similar harmonic moves.(16)
Example 8. Two songs with potentially viable Lydian openings
(click to enlarge and listen)
Audio Example 11
Audio Example 12
[17] In his 2013 article “Modal Tonicization in Rock: The Special Case of the Lydian Scale,” Brett Clement offers a persuasive counter-argument in favor of interpreting shuttles of two major chords whose roots lie a whole step apart as the prime exemplar of Lydian tonality, I–II. While respecting Clement’s case, I do not hear most of his examples as Lydian, and instead would interpret the shuttles as either an Ionian IV–V or Aeolian ♭VI–♭VII promising an absent tonic chord that never emerges (as we heard in “Jane Says” and “I’ll Be Around”). But before moving on, I shall play devil’s advocate by looking at two examples from songs that I think do have potentially viable Lydian openings.(17) The first is the opening piano/guitar riff that serves as the introduction to Fleetwood Mac’s 1979 hit “Sara” (Example 8a). Audio Example 11 contains the first 55 seconds of the track, in which this introductory loop first sounds by itself and then as the accompaniment for Stevie Nicks’ vocal melody. Notice however that when the drums enter and the main groove of the song sets in, B♮ is replaced by B♭ and the chords shift to the familiar “doo-wop” pattern of I–vi–IV–V. Stevie Nicks’ opening vocal melody hovers around the pitch C, and yet the omnipresent pedal tone F in the piano/guitar riff, above which the chords move F–G–Am–G and back to F in an ever-repeating loop, is strong enough for the ear to latch on to F as the tonal center, causing this opening section of the song to sound as if suspended in a kind of “Lydian zone.” When the B♮s are replaced by B♭s at the onset of the main groove, the tonal center remains on F but the mode shifts from Lydian to Ionian, not unlike the type of modal fluctuation often encountered, say, in Claude Debussy’s music.(18)
[18] Example 8b provides a formal synopsis of R.E.M.’s early-1990s ode to 1970s kitsch, “Man on the Moon,” showing the chord progressions used in the verse, prechorus, and chorus. At the song’s opening, we hear a guitar and bass groove whose chords move from C major to D major (one bar each) and then back to settle on C major for two bars. Michael Stipe’s vocal melody enters above this repeating groove, tumbling down the scale from G to C. Along with this strong melodic descent into C, the fact that this four-bar rotation settles on C for its last two measures makes its harmonic rhythm sound fundamentally quite different from a shuttle that alternates between two whole-step related major triads every bar. In fact, if the entire song were built on this four-bar loop, it would be entirely possible for the ear to latch on to C as the tonal center, with the D major triads serving as neighbor chords and the F♯s simply as part of the scale rather than functioning as leading tones—hence hovering again in a “Lydian zone.”(19) As the song moves into the prechorus, however, the underlying Lydian harmony gives way to a shuttle of alternating A minor and G major chords and the vocal melody begins a quick series of descents into G, shifting the focus toward G as the tonal center. The prechorus culminates on a D chord that this time sounds unmistakably like a big transitional dominant leading into the G major tonic chord at the onset of the chorus. “Man on the Moon” therefore offers another example of a song with an emergent tonic, but the difference here is that the promised tonic is perhaps not quite so apparent from the song’s opening. I invite you to decide for yourself as you listen to the first pass through the introduction, verse, prechorus, and into the chorus (Audio Example 12).(20)
Example 9. Two songs by the Psychedelic Furs
(click to enlarge and listen)
Audio Example 13
Audio Example 14
[19] I now consider two songs composed entirely around a series of two-chord shuttles, both by the 1980s U.K. synthpop group the Psychedelic Furs. With respect to the notion of musical worlding, even though “The Ghost in You” was released about two years after “Love My Way,” in my mind I will always associate these songs closely with one another as part of the soundtrack to my senior year of high school and first year of college. My nostalgia notwithstanding, the two songs are also very similar both in their formal structure and texture. In “The Ghost in You” (Example 9a), the verse is built upon a shuttle of B to E chords alternating every other bar (with stark open fifths, quite typical of the minimalist harmonic language of 1980s synthpop), above which sounds a signature synthesizer riff whose opening descending four-note motive—E–D♯–B–F♯—runs pervasively throughout the various sections of the song. (Compare, for example, the opening notes of the synth riff to Richard Butler’s sung “Angels fall like rain” melody at the prechorus.) The prechorus is built upon another shuttle, E major to D♯ minor, while the chorus is built upon yet another shuttle, this time involving F♯ and E chords. I have deliberately not supplied any Roman numerals to analyze the harmonic functions of these various chords, but such an exercise would be quite straightforward in this case. Most would agree that B major is established as the tonic in the verse, hence an oscillating I–IV, and within this tonal context we can interpret the prechorus as an oscillating IV–iii and the chorus, in turn, as an oscillating V–IV. In sum, the tonic chord establishes itself in the verse but is absent entirely from the prechorus and chorus, the opposite of an emergent tonic. Audio Example 13 contains one cycle through the verse, prechorus, and chorus of “The Ghost in You.”
[20] In the earlier Furs track, “Love My Way” (Example 9b), it should be obvious from the transcription that this song stylistically closely resembles “The Ghost in You” in that, once again, we have an entire song built on two repeating two-chord shuttles, the chords of which alternate every other bar. “Love My Way” has a texture even more starkly minimalist than “The Ghost in You,” with its signature four-bar xylophone riff sounding incessantly throughout both the verses and choruses. Making a Roman numeral analysis of its harmonic design would prove problematic, however, since in this song, unlike “The Ghost in You,” neither of the two chords in the repeating C to B shuttle that undergirds the intro and verses feels like the tonic. (It should be noted that the chords are initially thirdless, but with a major third implied strongly in both chords and confirmed later in the song by the synth riff in parallel thirds layered into the texture, as shown in parentheses.) The xylophone melody outlines an E minor triad and strongly projects E minor as the tonic, making the underlying harmonies of the verse sound like a repeating ♭VI–V. Yet the sense of E minor as tonic is thwarted at the chorus, as the repeating C–B progression is replaced by C–D (hence negating the D♯ leading tone), and, as we heard earlier in “Man on the Moon,” the vocal melody (set appropriately to the lyric “Love my way, it’s a new road . . .”) strongly projects the new tonic G major with its clear descent from $\hat{4}$ down to $\hat{1}$. Like “Jane Says,” the effect of the chorus hinges on an ever-repeating IV–V that never resolves (Audio Example 14). “Love My Way” structurally resembles “Reach Out,” of which we recall that the verses and choruses are cast in competing relative major and minor keys (which Harald Krebs [1981] and others have referred to in nineteenth-century art song as a “double-tonic complex”), but unlike “Reach Out,” in “Love My Way” the tonic chord of neither key ever actually materializes.(21)
Example 10. Three recent #1 hits built on a repeating four-bar groove
Audio Example 15
Audio Example 16
[21] To show that my musical world is not always stuck in the 1980s, I will look now at three songs released during the last few years, all of which made their way to #1 on Billboard’s Hot 100 and also peaked at #1 on the U.K. singles chart. We have been concentrating so far on songs built on shorter loops of one or two bars, but these huge recent pop hits all feature a four-bar loop that serves as the primary groove for the majority—and, in the case of “Get Lucky,” the entirety—of the song.(22) Example 10a shows the four-bar string loop that underpins most of Coldplay’s 2008 Grammy-winning song “Viva la Vida.” The introduction sets up a repeating pattern of four chords—D♭ major (with missing third), E♭ dominant-seventh (with a decidedly “Classical-sounding” suspended fourth), A♭ major, and F minor—above which Chris Martin’s vocal enters with a melody tracing a clear path from $\hat{3}$ down to $\hat{1}$ (Audio Example 15).(23) Even though the loop does not begin on the I chord, A♭ serves definitively as the tonic; the loop represents a rotation of the classic I–vi–IV–V doo-wop pattern, flipped to become IV–V–I–vi.(24)
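Spelled out in A♭ major, the rotation works as follows (a worked confirmation, using only the chords and Roman numerals already listed above):

$$\underbrace{\text{I–vi–IV–V}}_{\text{A}\flat\text{–Fm–D}\flat\text{–E}\flat} \;\longrightarrow\; \underbrace{\text{IV–V–I–vi}}_{\text{D}\flat\text{–E}\flat\text{–A}\flat\text{–Fm}}$$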
[22] Example 10b shows the four-bar string loop undergirding the chorus to Carly Rae Jepsen’s massive 2012 hit “Call Me Maybe.” The chorus melody persistently outlines the tonic triad G major, but the groove below essentially shuttles back and forth between C and D major chords. The G major and E minor chords (shown in parentheses) are merely touched upon as pickup chords to the main harmonies (because of their metric de-emphasis on the last eighth note of their respective measures), so while on a local level we have IV–I–V–vi, the overall effect of the loop is that of an ever-repeating IV–V, as we encountered in “Jane Says.” In his 2015 Music Theory Spectrum article, Drew Nobile discusses both “Jane Says” and the chorus of “Call Me Maybe” as examples of what he calls a “loop divorce,” where the melody remains essentially independent from the rotating chords in the groove below.(25)
[23] Example 10c shows the four-bar loop that sounds for the entirety of Daft Punk’s 2013 Grammy-winning song “Get Lucky” (featuring Pharrell Williams on vocals), the song that duked it out with Robin Thicke’s “Blurred Lines” for the title of that summer’s biggest hit. My notated example simplifies the groove to show only the right-hand piano chords and the essential bass rhythm, a four-chord pattern of B minor-seventh–D major–F♯ minor-seventh–E major. I should point out that on the track itself, bassist Nathan East constantly improvises around this basic pattern with octave leaps and fills and varies the bass pattern with each repetition. Identifying the tonic in “Get Lucky” is not so simple. I posed this challenge to a group of my theory and musicology graduate students, asking them: “If you had to assign functional Roman numerals to this four-chord pattern, what would you do?” So, I am going to ask readers of this essay the same question now as you listen to the first pass through the introduction, verse, prechorus, and chorus (Audio Example 16).
[24] None of my students advocated for D major or E major (i.e., the hypermetrically weaker second or fourth chords of the loop) as a Lydian or Mixolydian tonic respectively, but they did come up with three possible harmonic-functional interpretations for the four-chord loop:
1. The melody of the verse is composed of a series of two-bar fragments that ever gravitate toward F♯—so, perhaps F♯ minor as tonic?(26)
2. The topmost notes of the piano chords trace a stepwise descent from D down to B. In addition, the prechorus melody follows the bass in parallel tenths and therefore seems wedded to the underlying chord loop rather than being divorced from it—so, the first chord of the loop, B minor, as tonic?
3. The melody of the chorus is composed of a series of one-bar fragments that in the fourth measure seems to “arrive” on A (this fourth measure of the chorus melody is then repeated over and over again in the manner of a refrain). If we were to sing this melody in isolation and assign solfège syllables to it, then our most likely instinct would be to interpret this melody in A major, ending as it does with a $\hat{3}$–$\hat{2}$–$\hat{1}$ descent. Following this logic, we would interpret the four-chord loop as ii7–IV–vi7–V with an absent tonic. (These three hearings are laid side by side in the comparison below.)
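To lay these three hearings side by side, the comparison below aligns each candidate tonic with the four chords of the loop. The Roman numerals in the B Dorian and A major rows are the ones given in this essay; those in the F♯ minor row are my own extrapolation from the same chords (following the same flat-side conventions) and are offered only as a sketch.

$$\begin{array}{l|cccc} & \text{Bm}^7 & \text{D} & \text{F}\sharp\text{m}^7 & \text{E} \\ \hline \text{F}\sharp\ \text{minor as tonic} & \text{iv}^7 & \flat\text{VI} & \text{i}^7 & \flat\text{VII} \\ \text{B Dorian as tonic} & \text{i}^7 & \flat\text{III} & \text{v}^7 & \text{IV} \\ \text{A major as tonic} & \text{ii}^7 & \text{IV} & \text{vi}^7 & \text{V} \end{array}$$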
Audio Example 17
[25] In the end, much as it would be gratifying to posit “Get Lucky” as another perfect example of a song with an absent tonic (especially since the Roman numerals fit so nicely in A major), the Dorian loop—i7–♭III–v7–IV, hence an expansion of the Dorian shuttle i–IV—trumps all other possible hearings. For one thing, I have to “squint my ears” very hard to hear A as the tonic in the chorus melody as it unfolds over the repeating chord loop, especially considering the weak metric position of A on the final eighth note of the measure. This positioning undermines any sense of arrival, and instead the A seems to wrap back around repeatedly as a neighbor tone to B, as the one-bar melodic hook itself loops at the end of the chorus over the rotating four-chord cycle below.(27) More importantly from a rhythmic and metric standpoint, the groove underpinning “Get Lucky” gives us a textbook example of what Tim Hughes (2008, 242) calls an autotelic groove, with Nathan East’s anacrustic fills in the bass at the end of every fourth bar pushing forward into the next downbeat, causing the first measure of each four-bar pattern to sound simultaneously like a beginning and an ending—that is, as both a point of departure and a goal—and thus further strengthening the sense of B minor as tonic.(28) Yet for me it is mainly a question of style: for this song, Daft Punk collaborated with Nile Rodgers, the mastermind behind the late-1970s disco group Chic, and the groove and harmonic language of “Get Lucky” amount to a deliberate throwback or homage to this earlier style. Every time I hear this song (which, needless to say, has happened frequently since 2013), I situate it among the other late-1970s funk and disco tracks that I know and have been playing live myself for years, many of which are built squarely on Dorian loops and shuttles. Audio Example 17 contains the first 25 seconds of one widely known example from Nile Rodgers himself, Chic’s “Good Times” (U.S. #1, U.K. #5 [1979]), which is set entirely over an ever-repeating minor i to major IV.
Example 11. Michael Jackson, “Rock With You” (1979)
Audio Example 18
Audio Example 19
[26] For our final and most harmonically complex example, I will remain around the turn of the 1980s and consider a song recorded by the late great King of Pop, Michael Jackson. “Rock With You” was the second of Jackson’s two #1 singles from his 1979 breakthrough solo album Off the Wall, and Example 11 provides a summary of its formal and harmonic content. Again, with the exception of one chord in the bridge, I have deliberately refrained from offering a Roman numeral harmonic analysis, but I invite you now to do so yourself as I discuss the various sections of the song. Audio Example 18 contains the opening minute or so of the track, which represents the first pass through the introduction, verse, prechorus, and chorus.
[27] I have known this song for over thirty-five years, but only began looking closely at the harmonic design of “Rock With You” a few years ago, when the former lead singer of my eclectic cover band wanted us to learn the song to add to our repertoire. Our bass player at the time had played the song years before in another cover band, and so he proceeded to try to teach the rest of us the chords. The first thing he said was that the song is in E♭, which might seem logical considering that the introduction, verse, prechorus, and chorus all begin with an extended E♭ minor-ninth sonority. In fact, the published sheet music arrangement of “Rock With You” adopts a key signature of six flats, indicating E♭ minor. But as I listened closely to the harmonies and felt my way through the song’s various chord progressions, I was struck by the fact that every one of the song’s sections culminated with a prolonged chord on A♭, most often a close position G♭ major triad over an A♭ bass—the classic soul dominant discussed above as the central sonority in Hall & Oates’ “She’s Gone.” Plain major triads, of course, need not have dominant function, as I suggested in my interpretation of the E major chord at the end of the four-chord “Get Lucky” loop in context as a Dorian IV rather than V of an absent A major. Conversely, the soul dominant is a loaded sonority whose harmonic function as dominant is usually very clear, especially when such chords are positioned at the ends of sections, as in “Rock With You.”(29) I therefore hear these chords with A♭ in the bass as retransitional dominants, promising resolution to a D♭ major tonic, and yet, except for the onset of the bridge, each time the soul dominant slips back Sisyphus-like into the E♭ minor sonority at the onset of the next section. Crucially, I see the tonal design of “Rock With You” as being in D♭ major but with an absent tonic. The vocal melody provides definitive confirmation, as it recurrently outlines D♭ major and centers on the pitches D♭ and A♭, or $\hat{1}$ and $\hat{5}$ (for example, at the prechorus, for which I have written out the lead vocal and bass line in full, Michael sings, in moveable do solfège syllables: do–re–mi, sol–sol–fa–fa–mi–re, do–re–mi–fa–sol). In fact, until the song’s final truck driver’s modulation—which, like the finale of “She’s Gone,” shines a spotlight on the soul dominant in bumping the tonal center up by semitone—the entire melody of “Rock With You” conforms to a key signature of five flats, and the few chromatic alterations that do occur limit themselves to the harmonic accompaniment (for example, the modal ♭VII major-seventh chords built on C♭ found in both the verses and bridge).(30) Audio Example 19 contains an excerpt from near the end of the track, starting with the eight-bar bridge or “middle eight” (which begins, as many bridges do, with a swerve to the minor submediant, and then touches on a fragile first-inversion D♭ major tonic in the sixth bar, the only instance of the tonic chord in the whole song), through the instrumental break and truck driver’s modulation into the final chorus.(31)
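As a quick worked spelling (my own, following the description of the soul dominant in n. 3), the pitch content of this chord in the D♭ major context assumed here is:

$$\text{G}\flat/\text{A}\flat \;=\; \{\underbrace{\text{A}\flat}_{\hat{5}},\ \underbrace{\text{G}\flat}_{\hat{4}},\ \underbrace{\text{B}\flat}_{\hat{6}},\ \underbrace{\text{D}\flat}_{\hat{1}}\} \;=\; \text{V}^{11}\ \text{in}\ \text{D}\flat\ \text{major}$$

The tonic D♭ sits amidst the upper voices, and the leading tone C is nowhere in the chord, exactly the two defining features of the soul dominant noted in n. 3.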
Example 12. Stevie Wonder, “Ribbon in the Sky” (1982)
[28] Once again, one of the main reasons to hear “Rock With You” in D♭ major is because of its revealing resemblances to another hit song from around that same time period making some uncannily similar harmonic moves, Stevie Wonder’s 1982 ballad “Ribbon in the Sky” (Example 12). Like “Rock With You,” “Ribbon in the Sky” opens with an extended E♭-minor sonority, and the verse is built around a three-chord loop that repeats three times. In the manner of an antecedent–consequent period, the song’s refrain—the harmonic design of which bears a strong resemblance to the prechorus of “Rock With You,” as both are built on a bass walkup from $\hat{2}$ to $\hat{5}$—cadences on the soul dominant the first time around, supporting $\hat{2}$ in the melody and effecting a half cadence, but the second time around does succeed in finding the D♭ tonic chord that remains so elusive in the earlier track.
[29] To summarize: in order to make a claim for a song as having an absent tonic, enough aural information in the song’s chord progressions and melodies must be available for us to discern the tonic when heard against the backdrop of a “default” major or minor (i.e., Ionian or Aeolian) system in most instances, or, in rare cases, another tonal system (such as the major triad-doubled minor-pentatonic system unique to rock; see n. 17). Conversely, for the other diatonic modal systems used in pop and rock songs—Dorian, Mixolydian, Phrygian, and even Lydian—the tonic chord must always be present for that mode to assert itself.(32) The tonic chord may well initially be absent only to emerge later in the song, as in “She’s Gone,” “Little Red Corvette,” and “Man on the Moon,” or else it may never materialize, as in “Jane Says,” “I’ll Be Around,” and “Love My Way.” And though not always the case, it is no surprise that in many instances these songs tell the story of a romantic relationship gone bad, in which case the absent tonic serves as a powerful musical metaphor for lost love.
[30] The question of tonality in pop and rock music is by no means simple and has no single answer, yet by positing a continuum of varying states of tonal stability—from fragile to emergent to absent tonics—I hope to have offered a conceptual framework as an aid to understanding how tonality has been deployed for expressive purposes in popular songs across a wide range of styles and genres during the last fifty years.(33) Of course, a fadeout would probably be the most elegant and appropriate way to end this essay, but I shall have to leave that to the reader’s imagination.
Mark Spicer
Department of Music
Hunter College and the Graduate Center
City University of New York
695 Park Avenue
New York, NY 10065
mark.spicer@hunter.cuny.edu
### Works Cited
Attas, Robin. 2015. “Form as Process: The Buildup Introduction in Popular Music.” Music Theory Spectrum 37 (2): 275–96.
Biamonte, Nicole. 2010. “Triadic Modal and Pentatonic Patterns in Rock Music.” Music Theory Spectrum 32 (2): 95–110.
Capuzzo, Guy. 2009. “Sectional Tonality and Sectional Centricity in Rock Music.” Music Theory Spectrum 31 (1): 157–74.
Clement, Brett. 2013. “Modal Tonicization in Rock: The Special Case of the Lydian Scale.” Gamut 6 (1): 95–142. http://trace.tennessee.edu/cgi/viewcontent.cgi?article=1081&context=gamut
Covach, John. 1994. “Deconstructing Cartesian Dualism in Musical Analysis.” Music Theory Online 0.11.
—————. 2005. “Form in Rock Music: A Primer.” In Engaging Music: Essays in Musical Analysis, ed. Deborah Stein, 65–76. Oxford University Press.
De Clercq, Trevor and David Temperley. 2013. “Statistical Analysis of Harmony and Melody in Rock Music.” Journal of New Music Research 42 (3): 187–204.
Doll, Christopher. 2011. “Rockin’ Out: Expressive Modulation in Verse–Chorus Form.” Music Theory Online 17.3.
—————. 2017. Hearing Harmony: Toward a Tonal Theory for the Rock Era. University of Michigan Press.
Everett, Walter. 1997. “Swallowed by a Song: Paul Simon’s Crisis of Chromaticism.” In Understanding Rock: Essays in Musical Analysis, ed. John Covach and Graeme M. Boone, 133–53. Oxford University Press.
—————. 2001. “A True Story: The Expression of Troubling Societal Values in the Music of Postmodern Rock.” Genre 34: 205–18.
—————. 2004. “Making Sense of Rock’s Tonal Systems.” Music Theory Online 10.4.
—————. 2008. “Pitch Down the Middle.” In Expression in Pop-Rock Music: Critical and Analytical Essays, 2nd edition, ed. Walter Everett, 111–74. Routledge.
—————. 2009. The Foundations of Rock: From “Blue Suede Shoes” to “Suite: Judy Blue Eyes.” Oxford University Press.
Fink, Robert. 2011. “Goal-Directed Soul? Analyzing Rhythmic Teleology in African American Popular Music.” Journal of the American Musicological Society 64 (1): 179–238.
Flory, Andrew. 2017. I Hear a Symphony: Motown and Crossover R&B. University of Michigan Press.
Griffiths, Dai. 2015. “Elevating Form and Elevating Modulation.” Popular Music 34 (1): 22–44.
Harrison, Daniel. 1997. “After Sundown: The Beach Boys’ Experimental Music.” In Understanding Rock: Essays in Musical Analysis, ed. John Covach and Graeme M. Boone, 33–58. Oxford University Press.
Hirsh, Marc. 2008. “Striking a Chord.” The Boston Globe (December 31).
Hughes, Tim. 2008. “Trapped within the Wheels: Flow and Repetition, Modernism and Tradition in Stevie Wonder’s ‘Living for the City.’” In Expression in Pop-Rock Music: Critical and Analytical Essays, 2nd edition, ed. Walter Everett, 239–65. Routledge.
Krebs, Harald. 1981. “Alternatives to Monotonality in Early Nineteenth-Century Music.” Journal of Music Theory 25 (1): 1–16.
Moore, Allan F. 1995. “The So-Called ‘Flattened Seventh’ in Rock.” Popular Music 14 (2): 185–201.
Nobile, Drew. 2015. “Counterpoint in Rock Music: Unpacking the ‘Melodic-Harmonic Divorce.’” Music Theory Spectrum 37 (2): 189–203.
Osborn, Brad. 2016. Everything in Its Right Place: Analyzing Radiohead. Oxford University Press.
Pallett, Owen. 2014a. “Skin Tight Jeans and Syncopation: Explaining the Genius of Katy Perry’s ‘Teenage Dream’—Using Music Theory.” Slate (March 25).
—————. 2014b. “Ecstatic Melodic Copulation: Explaining the Genius of Daft Punk’s ‘Get Lucky’—Using Music Theory.” Slate (March 28).
Peres, Asaf. 2016. “(Dys)Functional Harmony: How Sound Production in Twenty-First Century Pop Music Liberates Harmony from its Functional Role.” Paper presented at the annual meeting of the Society for Music Theory, Vancouver.
Platt, Heather. 1999. “8 Lieder und Gesänge, Opus 59.” In The Compleat Brahms: A Guide to the Musical Works of Johannes Brahms, ed. Leon Botstein, 247–51. Norton.
Richards, Mark. Forthcoming. “Tonal Ambiguity in Popular Music’s Axis Progression.” Music Theory Online 23.3.
Robison, Brian. 2013. “‘A Prayer From Your Secret God’: The Sensitive Female Chord Progression as a Veiled Symbol of Religiosity.” In Music: Function and Value. Proceedings from the 11th International Conference on Musical Signification, ed. Teresa Malecka and Malgorzata Pawłowska, 656–66. Akademia Muzyczna w Krakowie and Musica Iagellonica.
Satyendra, Ramon. 1997. “Liszt’s Open Structures and the Romantic Fragment.” Music Theory Spectrum 19 (2): 184–205.
Schmalfeldt, Janet. 1992. “Cadential Processes: The Evaded Cadence and the ‘One More Time’ Technique.” Journal of Musicological Research 12: 1–52.
Smith, Jeremy. 2016. “‘I Know It’s Over’: Melodically-Established Keys and Tonal (Non-)Closure in Contemporary Pop Music.” Paper presented at the annual meeting of the Society for Music Theory, Vancouver.
Spicer, Mark. 2004. “(Ac)cumulative Form in Pop-Rock Music.” Twentieth-Century Music 1 (1): 29–64.
Stein, Deborah and Robert Spillman. 1996. Poetry into Song: Performance and Analysis of Lieder. Oxford University Press.
Stephenson, Ken. 2002. What to Listen For in Rock: A Stylistic Analysis. Yale University Press.
Tagg, Philip. 2014. Everyday Tonality: Towards a Tonal Theory of What Most People Hear. Expanded and improved edition. The Mass Media Music Scholars’ Press.
Temperley, David. 2007. “The Melodic-Harmonic ‘Divorce’ in Rock.” Popular Music 26 (2): 323–42.
—————. 2011. “Scalar Shift in Popular Music.” Music Theory Online 17.4.
### Footnotes
* My ideas in this article have been gestating for a number of years, appearing first in a paper presented at the 2009 annual meeting of the Society for Music Theory in Montréal (with the title “Absent Tonics in Pop and Rock Songs”), and subsequently as one-hour lectures given at the University of Cincinnati College-Conservatory of Music in 2014, and at the University of Rochester Institute for Popular Music/Eastman School of Music and the University of North Texas in 2016 (with the title “The Question of Tonality in Pop and Rock Songs”). I am grateful to those who have offered their valuable feedback on earlier versions, most notably John Covach, Walter Everett, Drew Nobile, Brad Osborn, and MTO editor Nicole Biamonte. I am especially grateful to my CUNY graduate students who served as a sounding board for these analyses in the context of stimulating class discussion. Finally, I wish to thank Christopher Segall for expertly setting the musical examples.
1. Spillman and Stein go on to discuss a well-known example of implicit tonality, the first song of Schumann’s 1840 Dichterliebe cycle, “Im wunderschönen Monat Mai,” in which “both the song’s recurring unresolved V7s throughout and ending on V7 convey the poet’s unfulfilled longing and desire” (1996, 135). This is not unlike what Ramon Satyendra, drawing upon the aesthetics of the Romantic fragment, calls “open structures” in the music of Franz Liszt, which he defines as “Liszt works that are based on a dissonant [i.e., non-tonic] chord, yet rooted in diatonicism” (1997, 205).
2. Hall & Oates typified so-called “blue-eyed soul” in the 1970s, being one of the few white acts to emerge from Philadelphia’s burgeoning soul music scene early that decade.
3. The chord most likely has varied origins, both in gospel piano playing and in American popular song from the first half of the twentieth century. The soul dominant is most commonly voiced as a IV triad over $\hat{5}$, but can also appear as a close position ii7 or IV7 chord in the right-hand part, hence adding a fifth note to the basic four-note sonority. An important feature to remember about the soul dominant is that, despite its dominant function, the chord contains the tonic note amidst its upper voices and never includes the leading tone. The four-note soul dominant is usually labeled in harmonic analyses with the functional Roman numeral V11, understood as shorthand for a V chord with intervals of a seventh, ninth, and eleventh above the bass (see, e.g., Temperley 2011, [3.10], and Tagg 2014, 228). While the soul dominant is most commonly found in songs in major keys (or, if you prefer, the Ionian mode), as in “She’s Gone,” it can be found in minor or Aeolian songs as well, in which case the chord typically is not built upon $\hat{5}$ in the bass but rather ♭$\hat{7}$, with a close position iv7, ♭VI, or ♭VI7 in the right-hand part above (in other words, the same pitches as the soul dominant from its relative major), hence serving as a substitute for the plain ♭VII chord that often serves the functional role of dominant in modal rock harmony; for example, the main groove from A Taste of Honey’s 1978 #1 disco hit “Boogie Oogie Oogie” is built around oscillating D minor-seventh and B♭/C chords, or i7–♭VII11.
4. Walter Everett defines a truck driver’s modulation as “a sudden shift from one tonal center to another—usually a half step above—that is not functionally related to the first” (1997, 118). Dai Griffiths prefers the term “elevating modulation” and presents a useful four-part schema to explain the four most common harmonic situations that typically occur in popular songs at the formal junctures of such modulations. Griffiths cites the final series of three consecutive T1 modulations in “She’s Gone” as a “breathtaking” example of his third type of elevating modulation, the “dominant handover” (2015, 38).
5. One might interpret this opening vamp not as i–V in E minor but rather as iv–I in B major (i.e., with a borrowed minor iv chord), although this is not the way I hear it. Circular chord progressions in pop and rock songs are often ambiguous with regard to their tonal center, especially in instances where such short progressions could just as easily be interpreted in a major key or its relative minor (which is not the case in the “Reach Out” introduction, but we shall see that it is so in Prince’s “Little Red Corvette” and other examples later in this essay). The opening pitch in the flute melody, A, behaving in context more like $\hat{4}$ in minor (as part of a 4–3 figure above $\hat{1}$ in the bass) than a flatted seventh degree in major, tips the scale for me in the direction of E minor. For an extensive discussion of the notion of ambiguity in rock harmony, see Doll 2017, Chapter 6, “Ambiguous Effects”; see also Richards (forthcoming).
6. As I have shown in parentheses above the staff, the guitars play quick G chords at the end of both measures within each two-bar cycle, but the bass guitar does not comply and instead carves out an active bass line around the chord roots of the oscillating ii–V progression. (An organ—buried deep in the mix—also simply moves back and forth between A and D chords without any intervening G chords.) I am therefore considering these metrically weak G chords as embellishing (neighboring) sonorities in the upper voices and not full-fledged chord changes. Flory (2017, 63) provides a transcription of the complete texture of this oscillating verse vamp.
7. I had long thought that this E major chord sounding in the upper voices was played on a piano, but Motown expert Andrew Flory has confirmed for me that there is actually no piano on this track. Motown’s primary house keyboardist at the time, Earl Van Dyke, was contracted for the July 8, 1966 session when “Reach Out” was recorded (along with two other songs) and is therefore most likely the one playing the organ part.
8. For an insightful analysis of “God Only Knows,” similarly focusing on the expressive effects of the unstable six-four chords and the song’s utter lack of resolution to a root-position tonic, see Harrison 1997, 39–40.
9. This passage during the chorus of “Someone Saved My Life Tonight”—in which a promised cadence is evaded, and instead the music “backs up” to repeat the cadential idea—reminds me of a similar technique often utilized by late-eighteenth century composers (especially Mozart) to extend phrases, which Janet Schmalfeldt (1992) has called the “one more time” technique.
10. Christopher Doll (2011) uses the term “breakout chorus” to refer to any song whose chorus represents a marked increase in intensity over the preceding verse (a feature common to many if not most pop and rock songs), but especially those choruses involving a shift of tonal center from a minor key to its relative major. While I do not hear the verse of “Little Red Corvette” as fully “in” the key of B minor, so that the chorus does not “modulate” to D major, the expressive or “breakout” effect in this song is essentially the same at the moment the emergent tonic chord replaces vi. For a detailed exploration of this thorny issue of monotonality vs. “sectional tonality” and “sectional centricity” in rock music, see Capuzzo 2009. There is no easy answer to this question, but in my view, the deciding factor as to whether or not I interpret a song as sectionally tonal or monotonal with an emergent or absent tonic usually has to do with the expressive potential that comes from withholding a tonal center, as I hope to demonstrate further in the analyses that follow. Though such situations are rare, it is entirely possible for a song to have two or more competing yet equally important tonal centers but for the tonic chord to be absent in each of its sections, as we shall see with the Psychedelic Furs song “Love My Way” (Example 9b).
11. It is also possible, of course, for major and minor triads related by whole step to function locally as V–vi in major, yet I can think of no example from the pop-rock repertoire in which we hear a repeating V–vi progression used as an extended vamp (i.e., V–vi with an “absent” I). After polling some experts on rock harmony, the closest example we could come up with is Bob Dylan’s “I Shall Be Free No. 10” from Another Side of Bob Dylan (1964)—specifically, the funny little repeated tag at the end of each strophe, V–vi–V–vi–V–I, where on its final iteration Dylan breaks the fourth wall by saying, “what’s probably got you baffled more is what this thing here is for.” I am grateful to Walter Everett for bringing this example to my attention.
12. “Variations on the Carlos Santana Secret Chord Progression” is the title of a track from Zappa’s 1981 album Shut Up ’N Play Yer Guitar Some More, and represents the guitar solo section culled from a 1978 live performance of the Zappa song “City of Tiny Lites.”
13. Interestingly, when playing “Jane Says” live (several 1990s performances of which can be found on YouTube), Jane’s Addiction would often append an instrumental introduction by jamming around the D major tonic chord before launching into the repeating IV–V riff, which suggests to me that Perry Farrell and company were fully aware of their song’s absent tonic. Nobile offers an analysis of “Jane Says” that shows how the vocal melody goes on to articulate a full-fledged AABA form, with the melody in the bridge (or B section) distinguishing itself by outlining scale degrees of the dominant harmony, “all by virtue of its melodic-harmonic divorce” (2015, 196).
14. John Covach (2005) uses the term simple verse–chorus form to describe songs in which the verse and chorus are set over the same chord changes (as opposed to contrasting verse–chorus form which uses a different set of chord changes for the verse and chorus, examples of which we heard in “She’s Gone,” “Someone Saved My Life Tonight,” and “Little Red Corvette”).
15. U.K. synthpop group the Human League are probably best known for their massive 1981 single “Don’t You Want Me,” which reached #1 on both sides of the Atlantic. “Human” represented something of a stylistic departure for the Human League, yet garnered the group another #1 hit in the U.S., where R&B was at the time more popular on the mainstream singles charts than it was in their native U.K. (where the song peaked only at #8). Jimmy Jam and Terry Lewis were brought in to produce the Human League’s 1986 album Crash in the immediate wake of the duo’s success as producers and main composers for Janet Jackson’s breakthrough album Control, released in February of that year.
16. As Covach aptly puts it, “the question is not ‘How does tonality work?’ but rather ‘How does tonality work with regard to these pieces?’” (1994, [18]).
17. In my earlier discussion of two-chord vamps, I must concede that I bypassed talking about ♭III–IV (listed in Example 5) as yet another possible harmonic-functional interpretation for two whole-step related major chords. While ♭III–IV shuttles are quite rare, the introduction to Tears for Fears’ hit “Head Over Heels” (U.S. #5, U.K. #12) from their 1985 album Songs from the Big Chair—which Clement (2013, 130, Figure 17) analyzes as an exemplar of Lydian harmony—is one such instance where I hear the shuttle of C major to D major chords (or, more accurately, oscillating open fifths on C and D, with two intertwining keyboard and guitar melodies sounding above) not as a Lydian I–II but rather as ♭III–IV promising an A major tonic that emerges at the onset of the verse (which itself is built on another two-chord shuttle, I–♭III). To support this reading, I am hearing these chords against the backdrop of Type 5 in Walter Everett’s classification of tonal systems for rock music, i.e., “[major] triad-doubled or power-chord minor-pentatonic systems unique to rock styles: I–♭III–IV–V–♭VII” (2004, Table 1; see also Biamonte 2010, 104–5, which explains how this minor-pentatonic system can be rotated—e.g., “Pentatonic 4”: I–♭III–IV–♭VI–♭VII). My interpretation here may be controversial to some readers, but the main reason why I cannot help but hear the opening of “Head Over Heels” this way is because I associate it closely with another Tears for Fears hit from the same album, “Shout” (U.S. #1, U.K. #4), in which the verses are also built upon a ♭III–IV shuttle (B♭–C) whose function has in this case been made clear in context of the I–♭VI–IV–I loop (G–E♭–C–G) of the chorus that opens the song. The only difference in “Head Over Heels” is that we hear the ♭III–IV shuttle first.
18. See, for example, the opening to Debussy’s piano prelude “Des pas sur la neige” (1910), in which D remains omnipresent as a tonic pedal as the mode fluctuates from D Aeolian to D Dorian when the B♭s yield to B♮s. Stevie Nicks seems to be especially fond of evoking Lydian harmony in her songs, as in Fleetwood Mac’s 1977 megahit “Dreams” (U.S. #1, U.K. #24), another Nicks song built almost entirely upon an F major–G major shuttle. In this case, however, I agree with Ken Stephenson’s interpretation of the song’s ambiguous major or minor (but not Lydian) tonality: “The one instance of a harmony other than F and G, an A minor chord, favors A as tonic. Otherwise, the chords point more strongly to C than A. . . . The melody also ultimately favors C by ending each section on C. We need not decide ultimately whether the tonic is C or A: it may float between these two relative keys . . . , an appropriate situation in light of the subject matter of ‘Dreams’ and the sometimes surreal grammar of its lyrics. The point here is that an F Lydian hypothesis suggested by the harmonies must be abandoned once the melody begins” (2002, 42).
19. While I have shown only its first phrase in Example 8b, the verse melody of “Man on the Moon” is comprised of a series of six 4-bar phrases in the pattern aababa (truncated to aaba in subsequent verses), with the third and fifth phrases following essentially the same melodic contour as the first, but instead of the pitches G | F♯–E–D | C–D–C we hear C | B–A–A | E–F♯–E. The leap up to C4 at the onset of the third phrase anticipates the same C4 that initiates the melodic descent into G at the onset of the prechorus, but to my ears the tonal center does not yet shift to G major since the main melodic pitches—C | A | E—remain wedded to the underlying Lydian I–II–I.
20. In his 2001 article on “the expression of troubling societal values in postmodern [i.e., mainly 1990s] rock,” Walter Everett also hears C as the tonal center during the verses of “Man on the Moon,” where he interprets the use of Lydian tonality as a reflection of the “self-deprecating meaninglessness” conveyed by the lyrics. As Everett says, “we ultimately learn to accept the Lydian melody as aimless, as are the arbitrary, apathetic, and inscrutable lyrics it carries, all the perfect representation of the empty, detached world of Andy Kaufman’s anti-ideological parodies” (2001, 212).
21. A similar double-tonic complex occurs in Radiohead’s song “Reckoner” from In Rainbows (2007), in which the G major verses are built on a repeating IV–V–vi and the E minor bridge on a half-cadential iv–♭VI–V (with requisite raised leading tone), but no tonic chord appears in either key; see Osborn 2016, 149–51.
22. I am not the only scholar to have noticed this phenomenon, as evinced by two papers presented at the 2016 annual meeting of the Society for Music Theory in Vancouver that also addressed the subject of recent pop hits composed entirely over a single repeating chord loop. In his “(Dys)Functional Harmony: How Sound Production in Twenty-First Century Pop Music Liberates Harmony from its Functional Role,” Asaf Peres argues that formal sections in these EDM-influenced harmonically repetitive songs are articulated primarily by differences in texture and sound production, where “[m]anipulations of sonic density and gestures such as filter sweeps and drum intensification have taken a lead role in delineating form and creating tension and release” (AMS-SMT 2016 Abstracts, 225). And in “‘I Know It’s Over’: Melodically-Established Keys and Tonal (Non-)Closure in Contemporary Popular Music,” Jeremy Smith draws upon my notion of absent tonics, analyzing several examples of recent hits in which “the tonic is only absent in its harmonic form, while it is very much present in its melodic form” (AMS-SMT 2016 Abstracts, 249).
23. “Viva la Vida” is a fine example of what I have called “cumulative form” in pop and rock music (see Spicer 2004), with Will Champion’s wordless countermelody (“oh-oh-oh-oh-ohh-oh”) being deliberately saved for the song’s climactic final chorus.
24. Four-chord loops featuring the I, vi, IV, and V chords in some order have become ubiquitous in pop and rock, including the now infamous pattern of I–V–vi–IV which the Australian comedy group Axis of Awesome immortalized in their clever 2009 song called “Four Chords” (with accompanying YouTube video at https://www.youtube.com/watch?v=5pidokakU4I), demonstrating how this loop has been used in dozens if not hundreds of pop and rock songs in recent decades. (I am deliberately avoiding here the rather sexist nickname given to the flipped version of this chord loop, vi–IV–I–V [or is it i–♭VI–♭III–♭VII?], by Boston Globe columnist Marc Hirsh [2008], which became the subject of a heated discussion on SMT-talk in April 2014; see also Robison 2013, and Richards [forthcoming].)
25. In the earlier version of his paper presented at the 2013 Society for Music Theory annual meeting in Charlotte, Nobile demonstrated this in the case of “Call Me Maybe” by substituting various other four-chord loops in his guitar accompaniment while singing the chorus melody, all of which sounded plausible and “correct.” Several huge pop hits in recent years, in fact, have been built entirely upon the harmonic platform of a IV–V loop with absent I, including Katy Perry’s “Teenage Dream” (U.S. #1, U.K. #2 [2010]) and Justin Bieber’s “Sorry” (U.S. and U.K. #1 [2015]). Canadian composer, singer, and multi-instrumentalist Owen Pallett (2014a), writing for the mainstream online publication Slate, uses music theory to help explain the “genius” of “Teenage Dream” to a general readership: “This song is all about suspension—not in the voice-leading 4–3 sense, but in the emotional sense, which listeners often associate with ‘exhilaration,’ being on the road, being on a roller coaster, travel. This sense of suspension is created simply, by denying the listener any I chords. There is not a single I chord in the song.”
26. Only one of my students heard the “Get Lucky” loop in F♯ minor, but in doing so agreed with Owen Pallett (2014b).
27. With his analysis depending mainly on the apparent $\hat{3}$–$\hat{2}$–$\hat{1}$ descent in A major at the end of the chorus melody, Smith (2016) argues in favor of an absent-tonic reading of the “Get Lucky” loop, but we must be careful not to let any Schenkerian bias seduce us into thinking that all stepwise descending melodic figures at the ends of tunes necessarily involve scale degrees 3, 2, and 1. Nonetheless, in support of my B Dorian reading, I hear the chorus melody as composing out a larger $\hat{3}$–$\hat{2}$–$\hat{1}$ descent from D to B, in tandem with the topmost notes of the piano chords.
28. Robin Attas (2015) adapts Christopher Hasty’s metric theory to better explain how we as listeners perceive autotelic grooves. See, for example, her analysis of the buildup introduction to Marvin Gaye’s 1968 version of “I Heard It Through the Grapevine” (279–81); see also Spicer 2004 and Fink 2011.
29. I should issue here a caveat about the soul dominant, since by the late 1970s this chord sometimes operated as a coloristic sonority for its own sake, irrespective of its dominant function (not unlike, for example, the way in which Debussy often composed passages that utilized harmonic planing of non-functional dominant-seventh chords). As we can see from the chord symbols in Example 11, the intro and chorus progression touches on A♭/B♭ and C♭/D♭ chords before arriving on the “true” soul dominant, G♭/A♭, at the end of the phrase. “Rock With You” was composed by Rod Temperton, who before collaborating with Jackson had first made a name for himself as keyboardist and primary songwriter for the 1970s U.K.-based disco group Heatwave. The opening two-chord shuttle from Heatwave’s debut single “Boogie Nights” (U.S. and U.K. #2 [1977]) also features planed chords (Em9–Dm9). Such harmonic planing would quickly become a hallmark of Temperton’s chordal vocabulary and can be heard in his other top-ten hits written for Jackson; compare, for example, the introductions to “Off the Wall” (U.S. #10, U.K. #7 [1980]) and “Thriller” (U.S. #4, U.K. #10 [1983/84]).
30. “Get Lucky” and “Rock With You” each tell similar tales of a long night of revelry: in the songs’ respective choruses, Pharrell sings “I’m up all night to get lucky” while Michael sings “I want to rock with you all night.” In “Get Lucky,” the autotelic groove invites infinite repetition and keeps the party going, but in “Rock With You” it is the never-ending search for an absent tonic.
31. The truck-driver’s modulation in “Rock With You” is actually far more complicated than in “She’s Gone,” since it does not involve a simple dominant handover (see n. 4 above). As shown by the chord symbols in Example 11, the instrumental break ends with a series of planed non-functional soul-dominant sonorities ascending stepwise, A♭/B♭–B♭/C–C♭/D♭, with the third of these chords bumping up by semitone to C/D, which in turn resolves deceptively into the Em9 chord that begins the final chorus (or, to think of it another way, the C/D chord functions in context as an applied soul dominant—♭VII11—to the ii9 chord that begins the final chord loop; see n. 3). The song then fades out on the chorus loop, searching for its new absent tonic of D major.
32. The Phrygian triadic modal system—relatively rare in pop and rock songs outside of the heavy metal genre—is not represented among the two-chord vamps in Example 5 but is certainly possible, as in Montell Jordan’s 1995 #1 hit “This Is How We Do It,” a song built entirely around a Phrygian shuttle, i–♭II. A “Locrian” system (with a ♭V chord) is theoretically possible, but only if each degree of the scale is doubled with a triad or, more typically, a power chord with third omitted—as in Led Zeppelin’s “Immigrant Song” (1970), the chord pattern of which is best described not as Locrian, but as a chromatically inflected version of rock’s fifth-doubled minor-pentatonic system, 1⁵–♭3⁵–4⁵–5⁵–♭7⁵; see Biamonte 2010, 108.
33. In his book Everything in Its Right Place: Analyzing Radiohead, Brad Osborn uses my suggested continuum of fragile, emergent, and absent tonics to shed light on the ephemeral tonal designs of several Radiohead songs (2016, 147–49).
Spillman and Stein go on to discuss a well-known example of implicit tonality, the first song of Schumann’s 1840 Dichterliebe cycle, “Im wunderschönen Monat Mai,” in which “both the song’s recurring unresolved V7s throughout and ending on V7 convey the poet’s unfulfilled longing and desire” (1996, 135). This is not unlike what Ramon Satyendra, drawing upon the aesthetics of the Romantic fragment, calls “open structures” in the music of Franz Liszt, which he defines as “Liszt works that are based on a dissonant [i.e., non-tonic] chord, yet rooted in diatonicism” (1997, 205).
Hall & Oates typified so-called “blue-eyed soul” in the 1970s, being one of the few white acts to emerge from Philadelphia’s burgeoning soul music scene early that decade.
The chord most likely has varied origins, both in gospel piano playing and in American popular song from the first half of the twentieth century. The soul dominant is most commonly voiced as a IV triad over $\stackrel{ˆ}{5}$, but can also appear as a close position ii7 or IV7 chord in the right-hand part, hence adding a fifth note to the basic four-note sonority. An important feature to remember about the soul dominant is that, despite its dominant function, the chord contains the tonic note amidst its upper voices and never includes the leading tone. The four-note soul dominant is usually labeled in harmonic analyses with the functional Roman numeral V11, understood as shorthand for a V chord with intervals of a seventh, ninth, and eleventh above the bass (see, e.g., Temperley 2011, [3.10], and Tagg 2014, 228). While the soul dominant is most commonly found in songs in major keys (or, if you prefer, the Ionian mode), as in “She’s Gone,” it can be found in minor or Aeolian songs as well, in which case the chord typically is not built upon $\stackrel{ˆ}{5}$ in the bass but rather $\stackrel{ˆ}{7}$, with a close position iv7, VI, or VI7 in the right-hand part above (in other words, the same pitches as the soul dominant from its relative major), hence serving as a substitute for the plain VII chord that often serves the functional role of dominant in modal rock harmony; for example, the main groove from A Taste of Honey’s 1978 #1 disco hit “Boogie Oogie Oogie” is built around oscillating D minor-seventh and B/C chords, or i7VII11.
Walter Everett defines a truck driver’s modulation as “a sudden shift from one tonal center to another—usually a half step above—that is not functionally related to the first” (1997, 118). Dai Griffiths prefers the term “elevating modulation” and presents a useful four-part schema to explain the four most common harmonic situations that typically occur in popular songs at the formal junctures of such modulations. Griffiths cites the final series of three consecutive T1 modulations in “She’s Gone” as a “breathtaking” example of his third type of elevating modulation, the “dominant handover” (2015, 38).
One might interpret this opening vamp not as i–V in E minor but rather as iv–I in B major (i.e., with a borrowed minor iv chord), although this is not the way I hear it. Circular chord progressions in pop and rock songs are often ambiguous with regard to their tonal center, especially in instances where such short progressions could just as easily be interpreted in a major key or its relative minor (which is not the case in the “Reach Out” introduction, but we shall see that it is so in Prince’s “Little Red Corvette” and other examples later in this essay). The opening pitch in the flute melody, A, behaving in context more like $\stackrel{ˆ}{4}$ in minor (as part of a 4–3 figure above $\stackrel{ˆ}{1}$ in the bass) than a flatted seventh degree in major, tips the scale for me in the direction of E minor. For an extensive discussion of the notion of ambiguity in rock harmony, see Doll 2017, Chapter 6, “Ambiguous Effects”; see also Richards (forthcoming).
As I have shown in parentheses above the staff, the guitars play quick G chords at the end of both measures within each two-bar cycle, but the bass guitar does not comply and instead carves out an active bass line around the chord roots of the oscillating ii–V progression. (An organ—buried deep in the mix—also simply moves back and forth between A and D chords without any intervening G chords.) I am therefore considering these metrically weak G chords as embellishing (neighboring) sonorities in the upper voices and not full-fledged chord changes. Flory (2017, 63) provides a transcription of the complete texture of this oscillating verse vamp.
I had long thought that this E major chord sounding in the upper voices was played on a piano, but Motown expert Andrew Flory has confirmed for me that there is actually no piano on this track. Motown’s primary house keyboardist at the time, Earl Van Dyke, was contracted for the July 8, 1966 session when “Reach Out” was recorded (along with two other songs) and is therefore most likely the one playing the organ part.
For an insightful analysis of “God Only Knows,” similarly focusing on the expressive effects of the unstable six-four chords and the song’s utter lack of resolution to a root-position tonic, see Harrison 1997, 39–40.
This passage during the chorus of “Someone Saved My Life Tonight”—in which a promised cadence is evaded, and instead the music “backs up” to repeat the cadential idea—reminds me of a similar technique often utilized by late-eighteenth century composers (especially Mozart) to extend phrases, which Janet Schmalfeldt (1992) has called the “one more time” technique.
Christopher Doll (2011) uses the term “breakout chorus” to refer to any song whose chorus represents a marked increase in intensity over the preceding verse (a feature common to many if not most pop and rock songs), but especially those choruses involving a shift of tonal center from a minor key to its relative major. While I do not hear the verse of “Little Red Corvette” as fully “in” the key of B minor, so that the chorus does not “modulate” to D major, the expressive or “breakout” effect in this song is essentially the same at the moment the emergent tonic chord replaces vi. For a detailed exploration of this thorny issue of monotonality vs. “sectional tonality” and “sectional centricity” in rock music, see Capuzzo 2009. There is no easy answer to this question, but in my view, the deciding factor as to whether or not I interpret a song as sectionally tonal or monotonal with an emergent or absent tonic usually has to do with the expressive potential that comes from withholding a tonal center, as I hope to demonstrate further in the analyses that follow. Though such situations are rare, it is entirely possible for a song to have two or more competing yet equally important tonal centers but for the tonic chord to be absent in each of its sections, as we shall see with the Psychedelic Furs song “Love My Way” (Example 9b).
It is also possible, of course, for major and minor triads related by whole step to function locally as V–vi in major, yet I can think of no example from the pop-rock repertoire in which we hear a repeating V–vi progression used as an extended vamp (i.e., V–vi with an “absent” I). After polling some experts on rock harmony, the closest example we could come up with is Bob Dylan’s “I Shall Be Free No. 10” from Another Side of Bob Dylan (1964)—specifically, the funny little repeated tag at the end of each strophe, V–vi–V–vi–V–I, where on its final iteration Dylan breaks the fourth wall by saying, “what’s probably got you baffled more is what this thing here is for.” I am grateful to Walter Everett for bringing this example to my attention.
“Variations on the Carlos Santana Secret Chord Progression” is the title of a track from Zappa’s 1981 album Shut Up ’N Play Yer Guitar Some More, and represents the guitar solo section culled from a 1978 live performance of the Zappa song “City of Tiny Lites.”
Interestingly, when playing “Jane Says” live (several 1990s performances of which can be found on YouTube), Jane’s Addiction would often append an instrumental introduction by jamming around the D major tonic chord before launching into the repeating IV–V riff, which suggests to me that Perry Farrell and company were fully aware of their song’s absent tonic. Nobile offers an analysis of “Jane Says” that shows how the vocal melody goes on to articulate a full-fledged AABA form, with the melody in the bridge (or B section) distinguishing itself by outlining scale degrees of the dominant harmony, “all by virtue of its melodic-harmonic divorce” (2015, 196).
John Covach (2005) uses the term simple verse–chorus form to describe songs in which the verse and chorus are set over the same chord changes (as opposed to contrasting verse–chorus form which uses a different set of chord changes for the verse and chorus, examples of which we heard in “She’s Gone,” “Someone Saved My Life Tonight,” and “Little Red Corvette”).
U.K. synthpop group the Human League are probably best known for their massive 1981 single “Don’t You Want Me,” which reached #1 on both sides of the Atlantic. “Human” represented something of a stylistic departure for the Human League, yet garnered the group another #1 hit in the U.S., where R&B was at the time more popular on the mainstream singles charts than it was in their native U.K. (where the song peaked only at #8). Jimmy Jam and Terry Lewis were brought in to produce the Human League’s 1986 album Crash in the immediate wake of the duo’s success as producers and main composers for Janet Jackson’s breakthrough album Control, released in February of that year.
As Covach aptly puts it, “the question is not ‘How does tonality work?’ but rather ‘How does tonality work with regard to these pieces?’” (1994, [18]).
In my earlier discussion of two-chord vamps, I must concede that I bypassed talking about ♭III–IV (listed in Example 5) as yet another possible harmonic-functional interpretation for two whole-step related major chords. While ♭III–IV shuttles are quite rare, the introduction to Tears for Fears' hit "Head Over Heels" (U.S. #5, U.K. #12) from their 1985 album Songs from the Big Chair—which Clement (2013, 130, Figure 17) analyzes as an exemplar of Lydian harmony—is one such instance where I hear the shuttle of C major to D major chords (or, more accurately, oscillating open fifths on C and D, with two intertwining keyboard and guitar melodies sounding above) not as a Lydian I–II but rather as ♭III–IV promising an A major tonic that emerges at the onset of the verse (which itself is built on another two-chord shuttle, I–♭III). To support this reading, I am hearing these chords against the backdrop of Type 5 in Walter Everett's classification of tonal systems for rock music, i.e., "[major] triad-doubled or power-chord minor-pentatonic systems unique to rock styles: I–♭III–IV–V–♭VII" (2004, Table 1; see also Biamonte 2010, 104–5, which explains how this minor-pentatonic system can be rotated—e.g., "Pentatonic 4": I–♭III–IV–♭VI–♭VII). My interpretation here may be controversial to some readers, but the main reason why I cannot help but hear the opening of "Head Over Heels" this way is because I associate it closely with another Tears for Fears hit from the same album, "Shout" (U.S. #1, U.K. #4), in which the verses are also built upon a ♭III–IV shuttle (B♭–C) whose function has in this case been made clear in context of the I–♭VI–IV–I loop (G–E♭–C–G) of the chorus that opens the song. The only difference in "Head Over Heels" is that we hear the ♭III–IV shuttle first.
See, for example, the opening to Debussy's piano prelude "Des pas sur la neige" (1910), in which D remains omnipresent as a tonic pedal as the mode fluctuates from D Aeolian to D Dorian when the B♭s yield to B♮s. Stevie Nicks seems to be especially fond of evoking Lydian harmony in her songs, as in Fleetwood Mac's 1977 megahit "Dreams" (U.S. #1, U.K. #24), another Nicks song built almost entirely upon an F major–G major shuttle. In this case, however, I agree with Ken Stephenson's interpretation of the song's ambiguous major or minor (but not Lydian) tonality: "The one instance of a harmony other than F and G, an A minor chord, favors A as tonic. Otherwise, the chords point more strongly to C than A. . . . The melody also ultimately favors C by ending each section on C. We need not decide ultimately whether the tonic is C or A: it may float between these two relative keys . . . , an appropriate situation in light of the subject matter of 'Dreams' and the sometimes surreal grammar of its lyrics. The point here is that an F Lydian hypothesis suggested by the harmonies must be abandoned once the melody begins" (2002, 42).
While I have shown only its first phrase in Example 8b, the verse melody of "Man on the Moon" is comprised of a series of six 4-bar phrases in the pattern aababa (truncated to aaba in subsequent verses), with the third and fifth phrases following essentially the same melodic contour as the first, but instead of the pitches G | F♯–E–D | C–D–C we hear C | B–A–A | E–F♯–E. The leap up to C4 at the onset of the third phrase anticipates the same C4 that initiates the melodic descent into G at the onset of the prechorus, but to my ears the tonal center does not yet shift to G major since the main melodic pitches—C | A | E—remain wedded to the underlying Lydian I–II–I.
In his 2001 article on “the expression of troubling societal values in postmodern [i.e., mainly 1990s] rock,” Walter Everett also hears C as the tonal center during the verses of “Man on the Moon,” where he interprets the use of Lydian tonality as a reflection of the “self-deprecating meaninglessness” conveyed by the lyrics. As Everett says, “we ultimately learn to accept the Lydian melody as aimless, as are the arbitrary, apathetic, and inscrutable lyrics it carries, all the perfect representation of the empty, detached world of Andy Kaufman’s anti-ideological parodies” (2001, 212).
A similar double-tonic complex occurs in Radiohead’s song “Reckoner” from In Rainbows (2007), in which the G major verses are built on a repeating IV–V–vi and the E minor bridge on a half-cadential iv–VI–V (with requisite raised leading tone), but no tonic chord appears in either key; see Osborn 2016, 149–51.
I am not the only scholar to have noticed this phenomenon, as evinced by two papers presented at the 2016 annual meeting of Society for Music Theory in Vancouver that also addressed the subject of recent pop hits composed entirely over a single repeating chord loop. In his “(Dys)Functional Harmony: How Sound Production in Twenty-First Century Pop Music Liberates Harmony from its Functional Role,” Asaf Peres argues that formal sections in these EDM-influenced harmonically repetitive songs are articulated primarily by differences in texture and sound production, where “[m]anipulations of sonic density and gestures such as filter sweeps and drum intensification have taken a lead role in delineating form and creating tension and release” (AMS-SMT 2016 Abstracts, 225). And in “‘I Know It’s Over’: Melodically-Established Keys and Tonal (Non-)Closure in Contemporary Popular Music,” Jeremy Smith draws upon my notion of absent tonics, analyzing several examples of recent hits in which “the tonic is only absent in its harmonic form, while it is very much present in its melodic form” (AMS-SMT 2016 Abstracts, 249).
“Viva la Vida” is a fine example of what I have called “cumulative form” in pop and rock music (see Spicer 2004), with Will Champion’s wordless countermelody (“oh-oh-oh-oh-ohh-oh”) being deliberately saved for the song’s climactic final chorus.
Four-chord loops featuring the I, vi, IV, and V chords in some order have become ubiquitous in pop and rock, including the now infamous pattern of I–V–vi–IV which the Australian comedy group Axis of Awesome immortalized in their clever 2009 song called “Four Chords” (with accompanying YouTube video at https://www.youtube.com/watch?v=5pidokakU4I), demonstrating how this loop has been used in dozens if not hundreds of pop and rock songs in recent decades. (I am deliberately avoiding here the rather sexist nickname given to the flipped version of this chord loop, vi–IV–I–V [or is it i–VI–III–VII?], by Boston Globe columnist Marc Hirsh [2008], which became the subject of a heated discussion on SMT-talk in April 2014; see also Robison 2013, and Richards [forthcoming].)
In the earlier version of his paper presented at the 2013 Society for Music Theory annual meeting in Charlotte, Nobile demonstrated this in the case of “Call Me Maybe” by substituting various other four-chord loops in his guitar accompaniment while singing the chorus melody, all of which sounded plausible and “correct.” Several huge pop hits in recent years, in fact, have been built entirely upon the harmonic platform of a IV–V loop with absent I, including Katy Perry’s “Teenage Dream” (U.S. #1, U.K. #2 [2010]) and Justin Bieber’s “Sorry” (U.S. and U.K. #1 [2015]). Canadian composer, singer, and multi-instrumentalist Owen Pallett (2014a), writing for the mainstream online publication Slate, uses music theory to help explain the “genius” of “Teenage Dream” to a general readership: “This song is all about suspension—not in the voice-leading 4–3 sense, but in the emotional sense, which listeners often associate with ‘exhilaration,’ being on the road, being on a roller coaster, travel. This sense of suspension is created simply, by denying the listener any I chords. There is not a single I chord in the song.”
Only one of my students heard the "Get Lucky" loop in F♯ minor, but in doing so agreed with Owen Pallett (2014b).
With his analysis depending mainly on the apparent $\hat{3}$–$\hat{2}$–$\hat{1}$ descent in A major at the end of the chorus melody, Smith (2016) argues in favor of an absent-tonic reading of the "Get Lucky" loop, but we must be careful not to let any Schenkerian bias seduce us into thinking that all stepwise descending melodic figures at the ends of tunes necessarily involve scale degrees 3, 2, and 1. Nonetheless, in support of my B Dorian reading, I hear the chorus melody as composing out a larger $\hat{3}$–$\hat{2}$–$\hat{1}$ descent from D to B, in tandem with the topmost notes of the piano chords.
Robin Attas (2015) adapts Christopher Hasty’s metric theory to better explain how we as listeners perceive autotelic grooves. See, for example, her analysis of the buildup introduction to Marvin Gaye’s 1968 version of “I Heard It Through the Grapevine” (279–81); see also Spicer 2004 and Fink 2011.
I should issue here a caveat about the soul dominant, since by the late 1970s this chord sometimes operated as a coloristic sonority for its own sake, irrespective of its dominant function (not unlike, for example, the way in which Debussy often composed passages that utilized harmonic planing of non-functional dominant-seventh chords). As we can see from the chord symbols in Example 11, the intro and chorus progression touches on A♭/B♭ and C♭/D♭ chords before arriving on the "true" soul dominant, G♭/A♭, at the end of the phrase. "Rock With You" was composed by Rod Temperton, who before collaborating with Jackson had first made a name for himself as keyboardist and primary songwriter for the 1970s U.K.-based disco group Heatwave. The opening two-chord shuttle from Heatwave's debut single "Boogie Nights" (U.S. and U.K. #2 [1977]) also features planed chords (Em9–Dm9). Such harmonic planing would quickly become a hallmark of Temperton's chordal vocabulary and can be heard in his other top-ten hits written for Jackson; compare, for example, the introductions to "Off the Wall" (U.S. #10, U.K. #7 [1980]) and "Thriller" (U.S. #4, U.K. #10 [1983/84]).
“Get Lucky” and “Rock With You” each tell similar tales of a long night of revelry: in the songs’ respective choruses, Pharrell sings “I’m up all night to get lucky” while Michael sings “I want to rock with you all night.” In “Get Lucky,” the autotelic groove invites infinite repetition and keeps the party going, but in “Rock With You” it is the never-ending search for an absent tonic.
The truck-driver's modulation in "Rock With You" is actually far more complicated than in "She's Gone," since it does not involve a simple dominant handover (see n. 4 above). As shown by the chord symbols in Example 11, the instrumental break ends with a series of planed non-functional soul-dominant sonorities ascending stepwise, A♭/B♭–B♭/C♭–C♭/D♭, with the third of these chords bumping up by semitone to C/D, which in turn resolves deceptively into the Em9 chord that begins the final chorus (or, to think of it another way, the C/D chord functions in context as an applied soul dominant—♭VII11—to the ii9 chord that begins the final chord loop; see n. 3). The song then fades out on the chorus loop, searching for its new absent tonic of D major.
The Phrygian triadic modal system—relatively rare in pop and rock songs outside of the heavy metal genre—is not represented among the two-chord vamps in Example 5 but is certainly possible, as in Montell Jordan's 1995 #1 hit "This Is How We Do It," a song built entirely around a Phrygian shuttle, i–♭II. A "Locrian" system (with a ♭V chord) is theoretically possible, but only if each degree of the scale is doubled with a triad or, more typically, a power chord with third omitted—as in Led Zeppelin's "Immigrant Song" (1970), the chord pattern of which is best described not as Locrian, but a chromatically inflected version of rock's fifth-doubled minor-pentatonic system, 1⁵–♭3⁵–4⁵–5⁵–♭7⁵; see Biamonte 2010, 108.
In his book Everything in Its Right Place: Analyzing Radiohead, Brad Osborn uses my suggested continuum of fragile, emergent, and absent tonics to shed light on the ephemeral tonal designs of several Radiohead songs (2016, 147–49).
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-mathematics-for-calculus-7th-edition/chapter-1-section-1-6-complex-numbers-1-6-exercises-page-64/53
Precalculus: Mathematics for Calculus, 7th Edition
$0+7i$
Rewrite $\sqrt{-49}$ as $(\sqrt{-1})(\sqrt{49})$. Since $\sqrt{49}=7$, this gives $7\sqrt{-1}$, and since $\sqrt{-1}=i$, the expression becomes $7i$. In $a+bi$ form: $0+7i$.
https://kb.osu.edu/dspace/handle/1811/16781
# THE $b\,^{4}\Sigma^{-} \rightarrow a\,^{4}\Pi_{i}$ TRANSITION OF THE NO MOLECULE
Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/16781
Title: THE $b\,^{4}\Sigma^{-} \rightarrow a\,^{4}\Pi_{i}$ TRANSITION OF THE NO MOLECULE
Creators: Huber, K. P.
Issue Date: 1984
Publisher: Ohio State University
Abstract: Two sequences of the quartet system of NO¹ at 7900 Å ($\Delta \nu = 4$) and at 8700 Å ($\Delta \nu = 3$) have been rephotographed at high resolution. The spectra were obtained from a dc discharge in a supersonically expanding jet² of He + NO. The low rotational temperature achieved in the jet ($\sim 50\,^{\circ}$K) results in a striking simplification of the previously very congested band structures, making it possible to proceed with a detailed rotational analysis.
Description:
¹ P.A. Freedman and P.L. Radloff, J. Mol. Spectrosc. 88, 225 (1981)
² A.T. Droege and P.C. Engelking, Chem. Phys. Lett. 96, 316 (1983)
Author Institution: Herzberg Institute of Astrophysics, National Research Council of Canada
URI: http://hdl.handle.net/1811/16781
Other Identifiers: 1984-TC-1
https://physics.stackexchange.com/questions/187735/would-hatq-hath-correspond-to-an-observable/187736
# Would $[\hat{Q},\hat{H}]$ correspond to an observable? [closed]
Would $[\hat{Q},\hat{H}]$ correspond to an observable? Where $\hat{Q}$ is an observable and $\hat{H}$ is the Hamiltonian.
Surely that would just mean that $\hat{Q}$ and $\hat{H}$ commute, i.e. $[\hat{Q},\hat{H}] = 0$?
$[\hat{Q},\hat{H}]\phi_{n} = \hat{Q}\hat{H}\phi_{n} - \hat{H}\hat{Q}\phi_{n} = \hat{Q}E_{n}\phi_{n} - \hat{H}q_{n}\phi_{n} = q_{n}E_{n}\phi_{n} - E_{n}q_{n}\phi_{n} = 0$? Hence the commutator does NOT correspond to an observable?
• An observable is just a self-adjoint operator. Why do you suppose it commutes with $H$, or rather, why do you assume $\hat{Q}\phi_n = q_n \phi_n$? What's the actual question here? – ACuriousMind Jun 4 '15 at 20:18
• Not all observables are conserved/commute with the Hamiltonian – innisfree Jun 4 '15 at 20:18
• And furthermore, why should 0 not be an observable? (Although a boring one) – Sebastian Riese Jun 4 '15 at 20:19
• You've a hidden assumption that they can be diagonalized simultaneously, which is equivalent to assuming they commute. – Omry Jun 4 '15 at 20:42
$[A, B]$ for two observables $A$ and $B$ is an observable if, and only if, $A$ and $B$ commute.
Proof: $$[A, B]^\dagger = (AB)^\dagger - (BA)^\dagger = B^\dagger A^\dagger - A^\dagger B^\dagger = BA - AB = -[A, B].$$
Note: An observable is any Hermitian operator. The commutator of two Hermitian operators is anti-Hermitian, as the proof shows. $0$ is an observable, but a "boring" one. (Actually any real number multiplied by the identity is an observable).
• I like this +1, but in what sense is $0$ an observable? – innisfree Jun 4 '15 at 21:34
• It is a corner case, admittedly. But it is a Hermitian operator, in all states we can give the probability ($p = 1$) of measuring the eigenvalue 0, all states are eigenstates with the eigenvalue 0. – Sebastian Riese Jun 4 '15 at 21:55
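As a quick numerical sanity check of the anti-Hermiticity identity in the proof above (a sketch using randomly generated Hermitian matrices; numpy is assumed, and the matrices are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n):
    # M + M^dagger is Hermitian for any square complex M
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return m + m.conj().T

A, B = random_hermitian(4), random_hermitian(4)
C = A @ B - B @ A  # the commutator [A, B]

print(np.allclose(C.conj().T, -C))             # True: [A, B] is anti-Hermitian
print(np.allclose((1j * C).conj().T, 1j * C))  # True: i[A, B] is Hermitian
```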
As noted by Sebastian, $[Q,H]$ will be anti-Hermitian and therefore generally not an observable (except in the trivial case).
However $i[Q,H]$ is an important observable. This corresponds to the classical Poisson bracket which can be see in the following formula,
$$\frac{d \langle Q\rangle}{d t} = \frac{i}{\hbar} \langle [H,Q]\rangle + \langle \frac{\partial Q}{\partial t} \rangle.$$
http://surfingwithstyle.com/notes/data-science/data-analysis/
# Data Analysis
Markdown Cheatsheet
A Review and Comparison of Methods for Detecting Outliers in Univariate Data Sets
Understanding Q-Q Plots
Kolmogorov–Smirnov test
Generalized Linear Models
Sampling Distribution of the Mean
What is a p-value and how do you calculate it?
An Introduction to Statistical Learning
Markdown Syntax
Hide the input cells from your IPython slides
Conda Cheatsheet
Conda: Myths and Misconceptions
Conda docs
magic keywords
pdb — The Python Debugger
# convert .ipynb to .html
jupyter nbconvert --to html notebook.ipynb
# convert .ipynb to slideshow
jupyter nbconvert notebook.ipynb --to slides
# convert .ipynb to slideshow and display it
jupyter nbconvert notebook.ipynb --to slides --post serve
Step 2: Wrangle data
• Gather
• Assess
• Dirty data, also known as low quality data. Low quality data has content issues.
• Messy data, also known as untidy data. Untidy data has structural issues.
1. Each variable forms a column.
2. Each observation forms a row.
3. Each type of observational unit forms a table.
• Clean
• Always make copies of the original pieces of data before cleaning
• Reassess and Iterate
• Store (Optional)
Step 3: Perform EDA (Exploratory Data Analysis)
• Explore: building intuition, finding patterns, visualizing relationships
• Augment: removing outliers, feature engineering
Step 4: Draw conclusions (or make predictions)
• machine learning
• inferential statistics
• Data visualization
Debugging Data Problems
1. Identify surprising data points
2. Print out one or a few surprising points
3. Fix any problems you find
Data Types
• Quantitative
• Continuous (Height, Age, Income)
• Discrete (Pages in a Book, Trees in Yard, Dogs at a Coffee Shop)
• Categorical
• Ordinal (Letter Grade, Survey Rating)
• Nominal (Gender, Marital Status, Breakfast Items)
Four Aspects for Quantitative Data
1. Measures of Center
2. Measures of Spread
3. The Shape of the data
4. Outliers
Measures of Center
1. Mean
2. Median
3. Mode (most frequently occurrring value)
The median of a set with an odd number of values is the value in the middle. The median of a set with an even number of values is the mean of the 2 values in the middle.
A random variable is a column. An observed value is a scalar.
Measures of Spread
1. Range
2. Interquartile Range (IQR)
3. Standard Deviation
4. Variance
$$\sigma^2 = \frac{1}{n} \sum_{i=1}^n{(x_i-\bar{x})^2} \quad or \quad \sigma^2 = \frac{1}{n-1} \sum_{i=1}^n{(x_i-\bar{x})^2}$$ $$\sigma = \sqrt{\sigma^2} = \sqrt{\frac{1}{n} \sum_{i=1}^n{(x_i-\bar{x})^2}} \quad or \quad \sigma = \sqrt{\frac{1}{n-1} \sum_{i=1}^n{(x_i-\bar{x})^2}}$$
The first form (dividing by $n$) is the population variance; the second (dividing by $n-1$) is the unbiased sample variance.
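A minimal numpy illustration of the two formulas (made-up data); `ddof=0` divides by n (population form) and `ddof=1` divides by n − 1 (sample form):

```python
import numpy as np

x = np.array([2, 4, 4, 4, 5, 5, 7, 9])

print(np.var(x, ddof=0), np.std(x, ddof=0))  # 4.0 2.0   (divide by n)
print(np.var(x, ddof=1), np.std(x, ddof=1))  # ~4.571 ~2.138  (divide by n-1)
```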
Calculating the 5 Number Summary
1. Minimum: The smallest number in the dataset.
2. Q₁: The value such that 25% of the data fall below.
3. Q₂: The value such that 50% of the data fall below.
4. Q₃: The value such that 75% of the data fall below.
5. Maximum: The largest value in the dataset.
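A quick sketch of computing the five values and the IQR with numpy (made-up data containing an outlier):

```python
import numpy as np

x = np.array([1, 3, 5, 7, 9, 11, 13, 15, 100])  # 100 is an outlier

minimum, q1, q2, q3, maximum = np.percentile(x, [0, 25, 50, 75, 100])
print(minimum, q1, q2, q3, maximum)  # 1.0 5.0 9.0 13.0 100.0
print("IQR:", q3 - q1)               # 8.0 -- unaffected by the outlier
```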
When outliers are present we should consider the following points.
1. Noting they exist and the impact on summary statistics.
2. If typo - remove or fix
3. Understanding why they exist, and the impact on questions we are trying to answer about our data.
4. Reporting the 5 number summary values is often a better indication than measures like the mean and standard deviation when we have outliers.
5. Be careful in reporting. Know how to ask the right questions.
Descriptive statistics is about describing our collected data. Inferential statistics is about using our collected data to draw conclusions about a larger population.
Parameter - a numeric summary about a population
Statistic - a numeric summary about a sample
Probability of the opposite event: 1 − P
Probability of n independent events all occurring: P × P × … × P = Pⁿ
$$\binom{n}{k} = \frac{n!}{k!(n-k)!}$$
Binomial outcomes are events that have 2 outcomes. The outcome probabilities follow a binomial distribution:
$$\frac{n!}{k!(n-k)!} \times p^k(1-p)^{(n-k)}$$
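A small sketch of the binomial formula in plain Python (math.comb requires Python 3.8+; the numbers are made up):

```python
from math import comb

def binom_pmf(n, k, p):
    # P(exactly k successes in n trials, each succeeding with probability p)
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(binom_pmf(10, 3, 0.5))  # exactly 3 heads in 10 fair flips: ~0.117
```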
Conditional Probability:
$$P(A|B) = \frac{P(A \cap B)}{P(B)}$$
Sensitivity: probability a test is positive if the condition is true (true positive rate)
Specificity: probability a test is negative if the condition is false (true negative rate)
Bayes’ Rule:
$$P(A|B) = \frac{P(B|A)\cdot P(A)}{P(B)}$$
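A worked Bayes' rule sketch with made-up numbers (1% prevalence, 90% sensitivity, 5% false-positive rate):

```python
p_d = 0.01        # P(disease)
p_pos_d = 0.90    # P(positive | disease): sensitivity
p_pos_nd = 0.05   # P(positive | no disease): false-positive rate

p_pos = p_pos_d * p_d + p_pos_nd * (1 - p_d)  # total probability of a positive
p_d_pos = p_pos_d * p_d / p_pos               # Bayes' rule
print(round(p_d_pos, 3))                      # ~0.154: most positives are false
```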
Normal Distribution:
$$f(x\mid \mu ,\sigma ^{2})={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}}$$
A sampling distribution is the distribution of a statistic.
The sampling distribution is centered on the original parameter value. The sampling distribution decreases its variance depending on the sample size used.
| Parameter | Statistic | Description |
| --- | --- | --- |
| μ | x̄ | The mean of a dataset |
| π | p | The mean of a dataset with only 0 and 1 values - a proportion |
| μ₁−μ₂ | x̄₁−x̄₂ | The difference in means |
| π₁−π₂ | p₁−p₂ | The difference in proportions |
| β | b | A regression coefficient - frequently used with subscripts |
| σ | s | The standard deviation |
| σ² | s² | The variance |
| ρ | r | The correlation coefficient |
The Law of Large Numbers says that as our sample size increases, the sample mean gets closer to the population mean.
The Central Limit Theorem states that with a large enough sample size the sampling distribution of the mean will be normally distributed.
The Central Limit Theorem actually applies for these well known statistics:
• Sample means (x̄)
• Sample proportions (p)
• Difference in sample means (x̄₁−x̄₂)
• Difference in sample proportions (p₁−p₂)
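A short simulation sketch of the Central Limit Theorem for sample means (made-up, clearly non-normal population):

```python
import numpy as np

rng = np.random.default_rng(42)
population = rng.exponential(scale=2.0, size=100_000)  # skewed population

# Distribution of the mean of many samples of size 50
sample_means = np.array([rng.choice(population, size=50).mean()
                         for _ in range(10_000)])

print(population.mean(), sample_means.mean())              # both ~2.0
print(population.std() / np.sqrt(50), sample_means.std())  # both ~sigma/sqrt(n)
```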
Bootstrapping is sampling with replacement.
A 95% confidence interval means we are 95% confident that the population mean falls between the bounds that we find.
Bootstrapping can be used in place of traditional confidence interval methods (T-Test, 2 Sample T-Test, Paired T-Test, Z-Test, Chi-squared Test, F-Test).
The null hypothesis is what we believe to be true before collecting any data. “Innocent until proven guilty.”
Type I errors - choosing H₁ when H₀ is true:
1. You should set up your null and alternative hypotheses so that the worse of your errors is the type I error.
2. They are denoted by the symbol α.
3. The definition of a type I error is: Deciding the alternative (H₁) is true, when actually (H₀) is true.
4. Type I errors are often called false positives.

Type II errors - choosing H₀ when H₁ is true:
1. They are denoted by the symbol β.
2. The definition of a type II error is: Deciding the null (H₀) is true, when actually (H₁) is true.
3. Type II errors are often called false negatives.
Power = 1 − β, where β is the probability of a Type II error.
Common hypothesis tests include:
1. Testing a population mean (One sample t-test).
2. Testing the difference in means (Two sample t-test)
3. Testing the difference before and after some treatment on the same individual (Paired t-test)
4. Testing a population proportion (One sample z-test)
5. Testing the difference between population proportions (Two sample z-test)
You can use a t-table or z-table to support one of the above approaches.
In hypothesis testing, we first simulate from the closest value to the alternative that is still in the null space.
The standard deviation of the sampling distribution of the mean (the standard error) is equal to σ/√n.
With a sample size of 150, the mean should follow a normal distribution by the central limit theorem.
# Simulating from the Null: bootstrap the difference in click-through rates
import numpy as np
# df is assumed to be a DataFrame with 'group', 'action', and 'id' columns
diffs = np.zeros(10000)
for i in range(len(diffs)):
    b_samp = df.sample(df.shape[0], replace=True)
    control_df = b_samp.query('group == "control"')
    experiment_df = b_samp.query('group == "experiment"')
    control_ctr = control_df.query('action == "click"').id.nunique() / control_df.query('action == "view"').id.nunique()
    experiment_ctr = experiment_df.query('action == "click"').id.nunique() / experiment_df.query('action == "view"').id.nunique()
    diffs[i] = experiment_ctr - control_ctr
# Find lower and upper bound of confidence interval using bootstrapping
import numpy as np
# coffee_red is assumed to be a DataFrame with 'drinks_coffee' and 'height' columns
boot_means = []
for _ in range(10000):
    bootsample = coffee_red.sample(200, replace=True)
    boot_means.append(bootsample[bootsample['drinks_coffee'] == True]['height'].mean())
np.percentile(boot_means, 2.5), np.percentile(boot_means, 97.5)
# Find lower and upper bound of a confidence interval (here, the difference of means) using statsmodels
import statsmodels.stats.api as sms
# X1 and X2 are assumed to be arrays of observations for the two groups
cm = sms.CompareMeans(sms.DescrStatsW(X1), sms.DescrStatsW(X2))
print(cm.tconfint_diff(usevar='unequal'))
The p-value is the probability of getting our statistic or a more extreme value if the null is true. Therefore, small p-values suggest our null is not true. Rather, our statistic is likely to have come from a different distribution than the null. When the p-value is large, we have evidence that our statistic was likely to come from the null hypothesis. Therefore, we do not have evidence to reject the null. By comparing our p-value to our type I error threshold (α), we can make our decision about which hypothesis we will choose.
p ≤ α ⇒ Reject H₀
p > α ⇒ Fail to Reject H₀
# Find a two-sided p-value with numpy: is the control group's size consistent with 50/50 assignment?
import numpy as np
# data is assumed to be a DataFrame with a 'condition' column
n_obs = data.shape[0]
n_control = data.groupby('condition').size()[0]
p = 0.5
n_trials = 200_000
samples = np.random.binomial(n_obs, p, n_trials)
np.logical_or(samples <= n_control, samples >= (n_obs - n_control)).mean()
For regression, statsmodels provides ordinary least squares via sm.OLS() (import statsmodels.api as sm).
Type I error compounds when you take multiple hypothesis tests. The Bonferroni correction says that if you have m tests, use α/m. Other methods include the Holm–Bonferroni method and false discovery rate control (e.g., the Benjamini–Hochberg procedure).
A two-sided hypothesis test (that is, a test involving a ≠ in the alternative) is the same in terms of the conclusions made as a confidence interval as long as the confidence level equals 1 − α.
Change Aversion: Existing users may give an unfair advantage to the old version, simply because they are unhappy with change, even if it's ultimately for the better.
Novelty Effect: Existing users may give an unfair advantage to the new version, because they're excited or drawn to the change, even if it isn't any better in the long run.
A/B Test:
• Compute the observed difference between the metric, average reading duration, for the control and experiment group.
• Simulate the sampling distribution for the difference in means (or average reading durations).
• Use this sampling distribution to simulate the distribution under the null hypothesis, by creating a random normal distribution centered at 0 with the same spread and size.
• Compute the p-value by finding the proportion of values in the null distribution that were greater than our observed difference.
• Use this p-value to determine the statistical significance of our observed difference.
Difficulties in A/B Testing
• Novelty effect and change aversion when existing users first experience a change
• Sufficient traffic and conversions to have significant and repeatable results
• Best metric choice for making the ultimate decision (eg. measuring revenue vs. clicks)
• Long enough run time for the experiment to account for changes in behavior based on time of day/week or seasonal events.
• Practical significance of a conversion rate (the cost of launching a new feature vs. the gain from the increase in conversion)
• Consistency among test subjects in the control and experiment group (imbalance in the population represented in each group can lead to situations like Simpson's Paradox)
The response variable is what you want to predict, while the explanatory variable is the variable you use to predict the response.
Pearson's correlation coefficient provides the strength and direction of a linear relationship.
Spearman's Correlation Coefficient does not measure linear relationships specifically, and it might be more appropriate for certain cases of associating two variables.
Linear regression is notated as: ŷ = b₀ + b₁x₁
The least squares algorithm finds the line that minimizes:
$$\sum\limits_{i=1}^n(y_i - \hat{y}_i)^2$$
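A minimal least-squares sketch with numpy on made-up data around y = 3 + 2x:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 3 + 2 * x + rng.normal(scale=1.0, size=x.size)

X = np.column_stack([np.ones_like(x), x])         # design matrix [1, x]
(b0, b1), *_ = np.linalg.lstsq(X, y, rcond=None)  # minimizes sum of squared residuals
print(b0, b1)                                     # close to 3 and 2
```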
In simple linear regression, the R-squared value is the square of the correlation coefficient.
Problems in multiple linear regression:
• Non-linearity of the response-predictor relationships
• Correlation of error terms
• Non-constant Variance and Normally Distributed Errors
• Outliers/ High leverage points
• Collinearity
One of the most common ways to identify if you have correlated errors is based on the domain from which the data were collected. If you are unsure, there is a test known as a Durbin-Watson test that is commonly used to assess whether correlation of the errors is an issue. Then ARIMA or ARMA models are commonly implemented to use this correlation to make better predictions.
Commonly, a log (or some other transformation of the response variable is done) in order to “get rid” of the non-constant variance. In order to choose the transformation, a Box-Cox is commonly used.
Two different ways of identifying multicollinearity:
• We can look at scatter plots.
• We can look at VIFs for each variable (see the sketch below).
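A sketch of the VIF check with statsmodels (made-up predictors; a common rule of thumb flags VIF above 10):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
df = pd.DataFrame({'intercept': np.ones(100),
                   'x1': rng.random(100),
                   'x2': rng.random(100)})
df['x3'] = df['x1'] + 0.01 * rng.random(100)  # nearly collinear with x1

vifs = {col: variance_inflation_factor(df.values, i)
        for i, col in enumerate(df.columns)}
print(vifs)  # x1 and x3 will show very large VIFs
```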
Higher order terms:
• Add a squared term if the response curve is roughly parabolic
• Add a cubed term if the response curve is roughly S-shaped
• Add an interaction term x₁x₂ if x₁ and x₂ are both linear, but with a different slope
• When adding a higher order term, also add the corresponding lower order term(s)
To interpret logistic regression coefficients, use np.exp() on the coefficient values. If a coefficient is negative, it is often more meaningful to report 1/np.exp(coef).
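For example (made-up coefficient values):

```python
import numpy as np

coef = 0.7
print(np.exp(coef))          # ~2.01: each unit increase multiplies the odds by ~2

coef_neg = -0.7
print(1 / np.exp(coef_neg))  # ~2.01: odds are ~2x higher for each unit decrease
```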
Confusion matrices:
Recall: True Positive / (True Positive + False Negative)
Precision: True Positive / (True Positive + False Positive)
Mnemonic: Precision ⇔ Positives (its denominator counts all predicted positives).
⚠️ Warning: The tables are usually formatted with actuals on the rows and predicted values on the columns, but not always.
⚠️ Warning: scikit-learn outputs binary tables with true positives at (1, 1) in the lower right, whereas handwritten tables often have true positives in the upper left.
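A small numpy sketch computing recall and precision from made-up labels:

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

tp = np.sum((y_true == 1) & (y_pred == 1))  # 3
fp = np.sum((y_true == 0) & (y_pred == 1))  # 1
fn = np.sum((y_true == 1) & (y_pred == 0))  # 1

print("recall:   ", tp / (tp + fn))  # 0.75
print("precision:", tp / (tp + fp))  # 0.75
```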
Data Quality Dimensions
• Completeness: do we have all of the records that we should? Do we have missing records or not? Are there specific rows, columns, or cells missing?
• Validity: we have the records, but they're not valid, i.e., they don't conform to a defined schema. A schema is a defined set of rules for data. These rules can be real-world constraints (e.g. negative height is impossible) and table-specific constraints (e.g. unique key constraints in tables).
• Accuracy: inaccurate data is wrong data that is valid. It adheres to the defined schema, but it is still incorrect. Example: a patient's weight that is 5 lbs too heavy because the scale was faulty.
• Consistency: inconsistent data is both valid and accurate, but there are multiple correct ways of referring to the same thing. Consistency, i.e., a standard format, in columns that represent the same data across tables and/or within tables is desired.
There are two main reasons for creating visuals using data:
1. Exploratory Analysis is done when you are searching for insights. These visualizations don't need to be perfect. You are using plots to find insights, but they don't need to be aesthetically appealing. You are the consumer, and you need to be able to find the answer to your question from these plots.
2. Explanatory Analysis is done when you are providing your results for others. These visualizations need to provide you the emphasis you need to convey your message. They should be accurate, insightful, and visually appealing.
The five steps of the data analysis process:
1. Extract - Obtain the data from a spreadsheet, SQL, the web, etc.
2. Clean - Here we could use exploratory visuals.
3. Explore - Here we use exploratory visuals.
4. Analyze - Here we might use either exploratory or explanatory visuals.
5. Share - Here is where explanatory visuals live.
The Four Levels of Measurement
• Qualitative or categorical types (non-numeric types)
1. Nominal data: pure labels without inherent order
2. Ordinal data: labels with an intrinsic order or ranking (comparison operations can be made between values)
• Quantitative or numeric types
3. Interval data: numeric values where absolute differences are meaningful (addition and subtraction operations can be made)
4. Ratio data: numeric values where relative differences are meaningful (multiplication and division operations can be made)

All quantitative-type variables also come in one of two varieties: discrete and continuous.
• Discrete quantitative variables can only take on a specific set values at some maximum level of precision.
• Continuous quantitative variables can (hypothetically) take on values to any level of precision.
Chart junk
• Heavy grid lines
• Unnecessary text
• Pictures surrounding the visual
• Ornamented chart axes
Limiting chart junk increases the data-ink ratio.
Color can both help and hurt a data visualization. Three tips for using color effectively.
2. When using color, use less intense colors - not all the colors of the rainbow, which is the default in many software applications.
3. Color for communication. Use color to highlight your message and separate groups of interest. Don't add color just to have color in your visualization.
To be sensitive to those with colorblindness, you should use color palettes that do not move from red to green. Instead, use colors on a blue-to-orange palette.
Univariate plot types:
• Bar charts (qualitative)
• Pie charts
• Histograms (quantitative)
Bivariate plot types:
• Scatter plots (quantitative vs. quantitative)
• Violin plots (quantitative vs. qualitative)
• Box plots
• Clustered bar charts (qualitative vs. qualitative)
Multivariate visualizations:
• color, size, shape
• Faceting
https://studydaddy.com/question/if-the-amount-of-mercury-in-a-polluted-lake-is-0-4-micrograms-hg-ml-what-is-the
QUESTION
# If the amount of mercury in a polluted lake is 0.4 micrograms Hg/mL, what is the total mass in kilograms of mercury in the lake? (The lake has a surface area of 100 mi² and an average depth of 20 ft.)
You have approximately $6 \times 10^{5}$ kg of mercury in the lake.

Now, before getting started, notice that you have a wide array of units to work with, so I suggest deciding which units would be most useful to convert to.

Since the concentration was given in micrograms per mL, I'll calculate the volume of the lake in mL by going from cubic miles to cubic meters, and finally to mL.

The volume of the lake can be calculated by multiplying the surface area by the average depth. Go from feet to miles first:

$20 \text{ ft} \times \frac{0.0001893939 \text{ mi}}{1 \text{ ft}} = 0.0037878 \text{ mi}$

The volume in cubic miles will be

$V = \text{area} \times \text{average depth} = 100 \text{ mi}^2 \times 0.0037878 \text{ mi} = 0.37878 \text{ mi}^3$

Now use two conversion factors to get to mL:

$0.37878 \text{ mi}^3 \times \frac{4.16818183 \times 10^{9} \text{ m}^3}{1 \text{ mi}^3} \times \frac{10^{6} \text{ mL}}{1 \text{ m}^3} = 1.5788 \times 10^{15} \text{ mL}$

The mass of mercury in micrograms will be

$\rho = \frac{m}{V} \Rightarrow m = \rho \times V = \frac{0.4\ \mu\text{g}}{\text{mL}} \times 1.5788 \times 10^{15} \text{ mL} = 6.3152 \times 10^{14}\ \mu\text{g}$

Now convert to kilograms to get the final answer:

$6.3152 \times 10^{14}\ \mu\text{g} \times \frac{1 \text{ kg}}{10^{9}\ \mu\text{g}} = 6.3152 \times 10^{5} \text{ kg}$

Rounded to one significant figure (the number of sig figs in 100 square miles and in 20 ft), the answer is

$m_{\text{mercury}} = 6 \times 10^{5} \text{ kg}$
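A quick Python check of the same conversion chain (values from the problem; only standard conversion constants are assumed):

```python
MI_TO_M = 1609.344   # meters per mile
FT_TO_M = 0.3048     # meters per foot

area_m2 = 100 * MI_TO_M**2            # 100 mi^2 in m^2
depth_m = 20 * FT_TO_M                # 20 ft in m
volume_ml = area_m2 * depth_m * 1e6   # 1 m^3 = 10^6 mL

mass_kg = 0.4 * volume_ml / 1e9       # 0.4 ug/mL; 10^9 ug per kg
print(f"{mass_kg:.2e} kg")            # ~6.32e+05 kg
```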
https://www.esaral.com/category/information/iit-jee/page/3/
Ampere's Circuital Law and its Applications || Magnetic Effects of Current Class 12
Do you know about Ampere's circuital law? It is a law that lets us calculate the magnetic field produced by a given current distribution. Biot-Savart's law does the same, but Ampere's law exploits cases of high symmetry. We will first understand Ampere's circuital law, its definition, formula, and applications in detail.
## Ampere’s Circuital Law
Ampere's circuital law states that the line integral of magnetic field induction $\overrightarrow{\mathrm{B}}$ around any closed path in a vacuum is equal to $\mu_{0}$ times the total current threading the closed path, i.e.,

$\oint \overrightarrow{\mathrm{B}} \cdot \overrightarrow{\mathrm{d} \ell}=\mu_{0} \sum \mathrm{I}$
This result is independent of the size and shape of the closed curve enclosing a current.
This is known as Ampere’s circuital law.
Ampere’s law gives another method to calculate the magnetic field due to a given current distribution.
Ampere's law may be derived from the Biot-Savart law, and the Biot-Savart law may be derived from Ampere's law.
Ampere’s law is more useful under certain symmetrical conditions.
The Biot-Savart law is based on experimental results, whereas Ampere's law is a mathematical statement.
## Applications of Ampere’s Law
### (a) Magnetic induction due to a long current-carrying wire.
Consider a long straight conductor Z-Z' along the z-axis, with current I flowing in the direction shown in the figure. A magnetic field is produced around the conductor, whose lines of force are concentric circles in the XY plane, as shown by dotted lines. Let the magnitude of the magnetic field induction produced at a point P at distance r from the conductor be B.
Consider a close circular loop as shown in the figure.
According to Ampere’s law $\oint \overrightarrow{\mathrm{B}} . \overrightarrow{\mathrm{d}} \ell=\mu_{0} \sum \mathrm{I}$
The direction of $\overrightarrow{\mathrm{B}}$ at every point is along the tangent to the circle.
Consider a small element $\overrightarrow{\mathrm{d} \ell}$ of the circle of radius r at P. The direction of $\overrightarrow{\mathrm{B}}$ and $\overrightarrow{\mathrm{d} \ell}$ the same. Therefore, angle between them is zero.
Line integral of $\overrightarrow{\mathrm{B}}$ around the complete circular path of radius $\mathrm{r}$ is given by
$\oint \overrightarrow{\mathrm{B}} \cdot \overrightarrow{\mathrm{d}} \ell=\oint \mathrm{B} \mathrm{d} \ell \cos 0^{\circ}$ $=\quad \mathrm{B} \oint \mathrm{d} \ell=\mathrm{B} \times 2 \pi \mathrm{r}$ $(\oint \mathrm{d} \ell=2 \pi \mathrm{r}=$ circumference of the circle.) and $\quad \sum I=I$
So we get $\mathrm{B} \times 2 \pi \mathrm{r}=\mu_{0} \mathrm{I}$, which gives $\mathrm{B}=\frac{\mu_{0} \mathrm{I}}{2 \pi \mathrm{r}}$
### (b) Magnetic field created by a long current carrying conducting cylinder
A long straight wire of radius R carries a steady current I that is uniformly distributed through the cross-section of the wire.
For finding the behavior of magnetic field due to this wire, let us divide the whole region into two parts.
(a) $\mathrm{r} \geq \mathrm{R}$ and
(b) $\mathrm{r}<\mathrm{R}$
$r=$ distance from the centre of the wire.
For $\mathrm{r} \geq \mathrm{R}$: For the closed circular path denoted by (1), by symmetry $\overrightarrow{\mathrm{B}}$ must be constant in magnitude and parallel to $\overrightarrow{\mathrm{d} \ell}$ at every point on this circle. Because the total current passing through the plane of the circle is I, Ampere's law gives $\mathrm{B}(2 \pi \mathrm{r})=\mu_{0} \mathrm{I}$, so $\mathrm{B}=\frac{\mu_{0} \mathrm{I}}{2 \pi \mathrm{r}} \quad \ldots$ (a)
For $\mathrm{r}<\mathrm{R}:$ The current $\mathrm{I}$ passing through the plane of circle 2 is less than the total current I. Because the current is uniform over the cross-section of the wire.
Current through unit area $=\frac{\mathrm{I}}{\pi \mathrm{R}^{2}}$
So current through area enclosed by circle 2 is $\mathrm{I}^{\prime}=\frac{\mathrm{I} \pi \mathrm{r}^{2}}{\pi \mathrm{R}^{2}}$
Now we apply Ampere's law for circle 2: $\mathrm{B}(2 \pi \mathrm{r})=\mu_{0} \mathrm{I}^{\prime}$, which gives $\mathrm{B}=\frac{\mu_{0} \mathrm{I} \mathrm{r}}{2 \pi \mathrm{R}^{2}} \quad \ldots$ (b)
The magnitude of the magnetic field versus $r$ for this configuration is plotted in figure. Note that inside the wire $\mathrm{B} \rightarrow 0$ as $\mathrm{r} \rightarrow 0 .$ Note also that eqn. (a) and eqn (b) give the same value of the magnetic field at $r=R,$ demonstrating that the magnetic field is continuous at the surface of the wire.
### (c) Magnetic field due to a conducting current carrying hollow cylinder
Consider a conducting hollow cylinder with inner radius $r_{1}$ and outer radius $r_{2} .$ And current $\mathrm{I}$ is flowing through it.
(I) $\quad$ For $r<r_{1}$
$\sum \mathrm{I}=0$ and hence $\quad B=0$
(II) $\quad$ For $r_{1}<r<r_{2}$
Now current I is flowing through area $\left[\pi r_{2}^{2}-\pi r_{1}^{2}\right]$
So, current per unit area $=\frac{I}{\pi\left(r_{2}^{2}-r_{1}^{2}\right)}$
$\therefore$ current flowing through the area in between $r_{1}<r<r_{2}$ is $\mathrm{I}^{\prime}=\frac{\mathrm{I}}{\pi\left(\mathrm{r}_{2}^{2}-\mathrm{r}_{1}^{2}\right)} \times\left(\pi \mathrm{r}^{2}-\pi \mathrm{r}_{1}^{2}\right)$
by using ampere’s law for circle of radius $\mathrm{r} \oint \overrightarrow{\mathrm{B}} . \overrightarrow{\mathrm{d}} \vec{\ell}=\mu_{0} \sum \mathrm{I}$
or $\quad \oint B d \ell \cos 0^{\circ}=\mu_{0}\left[\frac{I\left(r^{2}-r_{1}^{2}\right)}{r_{2}^{2}-r_{1}^{2}}\right]$
or $\quad \mathrm{B} \oint \mathrm{d} \ell=\mu_{0} \mathrm{I}\left[\frac{\mathrm{r}^{2}-\mathrm{r}_{1}^{2}}{\mathrm{r}_{2}^{2}-\mathrm{r}_{1}^{2}}\right]$
or $\quad B=\frac{\mu_{0} I}{2 \pi r}\left[\frac{r^{2}-r_{1}^{2}}{r_{2}^{2}-r_{1}^{2}}\right]$
$[\because \oint \mathrm{d} \ell=2 \pi \mathrm{r}]$
(a) For $r=r_{2}$
$B=\frac{\mu_{0} I}{2 \pi r_{2}}$
(b) For $r>r_{2}$
$B=\frac{\mu_{0} I}{2 \pi r}$
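The piecewise result can be summarized in a short Python sketch (SI units; the function name and test values are my own illustration):

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7  # T*m/A

def b_hollow_cylinder(r, i, r1, r2):
    """Field of a long hollow cylinder carrying current i between radii r1 and r2."""
    if r < r1:                     # inside the cavity: no enclosed current
        return 0.0
    if r < r2:                     # within the conductor: partial current enclosed
        return MU0 * i * (r**2 - r1**2) / (2 * np.pi * r * (r2**2 - r1**2))
    return MU0 * i / (2 * np.pi * r)  # outside: full current enclosed

# 10 A through a shell with r1 = 1 mm, r2 = 2 mm, evaluated at r = 3 mm
print(b_hollow_cylinder(3e-3, 10.0, 1e-3, 2e-3))  # ~6.7e-4 T
```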
Magnetic Field at the Centre of a Circular Coil | Circular Current loop as a Magnetic Dipole
Here we will study the magnetic field at the centre of a circular coil, considering a number of cases.
## Magnetic Field at the Center of a Circular Current-Carrying Coil
Consider a circular coil of radius r through which current I is flowing. Let AB be an infinitesimally small element of length $\mathrm{d}\ell$. According to Biot-Savart's law, the magnetic field dB at the center P of the loop due to this small element $\mathrm{d}\ell$ is
$\mathrm{dB}=\frac{\mu_{0}}{4 \pi} \frac{\mathrm{Id} \ell \sin \theta}{r^{2}}$ where $\theta$ is the angle between $\overrightarrow{\mathrm{d} \ell}$ and $\overrightarrow{\mathrm{r}}$.
$\left.\therefore \quad \mathrm{dB}=\frac{\mu_{0}}{4 \pi} \frac{\mathrm{I} d \ell \sin 90^{\circ}}{r^{2}}=\frac{\mu_{0}}{4 \pi} \frac{\mathrm{I} d \ell}{\mathrm{r}^{2}} \quad \text { (for circular loop, } \theta=90^{\circ}\right)$
The loop can be supposed to consists of a number of small elements placed side by side. The magnetic field due to all the elements will be in the same direction. So, the net magnetic field at P is given by
$\mathrm{B}=\sum \mathrm{dB}=\sum \frac{\mu_{0}}{4 \pi} \frac{\mathrm{I} \mathrm{d} \ell}{\mathrm{r}^{2}}=\frac{\mu_{0} \mathrm{I}}{4 \pi \mathrm{r}^{2}} \sum \mathrm{d} \ell$

$\therefore \quad \mathrm{B}=\frac{\mu_{0} \mathrm{I}}{4 \pi r^{2}} \times 2 \pi r=\frac{\mu_{0} \mathrm{I}}{2 r}$

$\left(\because \sum \mathrm{d} \ell=\text{circumference of the circle}=2 \pi \mathrm{r}\right)$
### Magnetic Field due to part of the current-carrying circular conductor (Arc) :
$B=\frac{\mu_{0} I}{4 \pi r^{2}} \sum d \ell\left(\because \frac{\sum d \ell}{r}=\alpha\right)$
$B=\frac{\mu_{0} I}{4 \pi r} \alpha$
### Magnetic Field on the Axis of a Circular Coil
Consider a circular loop of radius a through which current I is flowing as shown in fig. The point $P$ lies on the axis of the circular current loop i.e., along the line perpendicular to the plane of the loop and passing through its center.
Let $x$ be the distance of the observation point $P$ from the centre O of the loop, and let the radius of the loop be $a$. Consider an infinitesimally small element $\mathrm{AB}$ of length $\mathrm{d}\ell$. According to Biot-Savart's law, the magnetic field at P due to this small element is
\begin{aligned} \overrightarrow{\mathrm{dB}} &=\frac{\mu_{0} \mathrm{I}}{4 \pi r^{3}}[\overrightarrow{\mathrm{d}} \ell \times \overrightarrow{\mathrm{r}}] \\ \mathrm{dB} &=\frac{\mu_{0} \mathrm{I} \mathrm{d} \ell \sin \theta}{4 \pi \mathrm{r}^{2}} \end{aligned}
or $\quad \mathrm{dB}=\frac{\mu_{0} \mathrm{I} \mathrm{d} \ell}{4 \pi \mathrm{r}^{2}}\left(\theta=90^{\circ}\right)$
The direction of $\overrightarrow{\mathrm{dB}}$ is perpendicular to the plane of the current element $\overrightarrow{\mathrm{d} \ell}$ and $\overrightarrow{\mathrm{r}}(\mathrm{CP})$ as shown in fig. by $\overrightarrow{\mathrm{PM}}$
Similarly if we consider another small element just diametrically opposite to this element then
magnetic field due to this at point $P$ is $\overrightarrow{\mathrm{dB}^{\prime}},$ denoted by PN and of the same magnitude. $\mathrm{d} \mathrm{B}=\mathrm{dB}^{\prime}$
Both $\overrightarrow{\mathrm{dB}}$ and $\overrightarrow{\mathrm{dB}^{\prime}}$ can be resolved into two mutually perpendicular components along $\mathrm{PX}$ and $\mathrm{zz}$ :
The components along ZZ’ [dB $\cos \alpha$ and $\left.d B^{\prime} \cos \alpha\right]$ cancel each other as they are equal and opposite in direction.
The same will hold for such other pairs of current elements. over the entire circumference of the loop.
Therefore, due to the various current elements, only the components of the magnetic field along PX will contribute to the magnetic field due to the whole loop at point P. Summing these components over the loop gives the on-axis field for a coil of N turns: $\mathrm{B}=\frac{\mu_{0} \mathrm{NIa}^{2}}{2\left(\mathrm{a}^{2}+\mathrm{x}^{2}\right)^{3 / 2}}$
### The magnetic dipole moment of the current loop
The current loop can be regarded as a magnetic dipole that produces its own magnetic field, and the magnetic dipole moment of the current loop is equal to the product of ampere-turns and the area of the current loop, so we can write $\mathrm{M}=\mathrm{NIA}$.
Case I: At the centre of the coil ($x=0$), the on-axis formula reduces to $\mathrm{B}=\frac{\mu_{0} \mathrm{NI}}{2 \mathrm{a}}$. Case II: If the observation point is far away from the coil, then $a \ll x$, so $a^{2}$ can be neglected in comparison to $x^{2}$.
$\therefore \quad B=\frac{\mu_{0} N I a^{2}}{2 x^{3}}$
In terms of the magnetic dipole moment, $\mathrm{B}=\frac{\mu_{0}}{4 \pi} \frac{2 \mathrm{M}}{\mathrm{x}^{3}} \quad\left[\mathrm{B}=\frac{\mu_{0}}{2 \pi} \frac{\mathrm{NIA}}{\mathrm{x}^{3}}=\frac{\mu_{0}}{4 \pi} \frac{2 \mathrm{NIA}}{\mathrm{x}^{3}}\right]$
Helmholtz Coils | Magnetic Field between two Coils | eSaral
The Helmholtz coil is named after the German physicist Hermann von Helmholtz. It comprises two identical magnetic coils positioned parallel to each other, with their centers aligned along the same axis. The two coils are separated by a distance equal to the radius, like a mirror image, as shown in Figure 1. When current passes through the two coils in the same direction, it generates a uniform magnetic field in a three-dimensional region of space within the coils. Helmholtz coils are normally used for scientific experiments, magnetic calibration, cancelling the background (earth's) magnetic field, and electronic equipment magnetic-field susceptibility testing.
## Helmholtz Coils
The two coaxial coils of equal radii placed at a distance equal to the radius of either of the coils and in which the same current in the same direction is flowing are known as Helmholtz coils.
For these coils $x=\frac{a}{2}, I_{1}=I_{2}=I, a_{1}=a_{2}=a$
The two coils are placed mutually parallel to each other and are used to produce a uniform magnetic field. At the middle point between the two coils, along the axis, the rate of change of the magnetic field is constant: as the distance from one coil increases, its field decreases, but the distance from the other coil decreases by the same amount, so its field increases; hence the resultant magnetic field produced in the region between the two coils remains uniform.
or $\mathrm{B}=\left(\frac{4}{5}\right)^{3/2} \frac{\mu_{0} n \mathrm{I}}{a} \approx 0.716\, \frac{\mu_{0} n \mathrm{I}}{a} \quad$ or $\mathrm{B} \approx 1.43\, \mathrm{B}_{\mathrm{C}}$
($\mathrm{B}_{\mathrm{C}}$ is the magnetic field at the centre of a single coil.)
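To see the uniformity numerically, here is a small Python sketch (illustrative only; the coil parameters are made up). It superposes the on-axis fields of two coils placed at $z=\pm a / 2$ and checks both the midpoint value $\left(\frac{4}{5}\right)^{3 / 2} \frac{\mu_{0} N I}{a}$ and the flatness of the field near the middle:

```python
import math

MU0 = 4 * math.pi * 1e-7

def b_axis(I, a, x, N=1):
    # on-axis field of a single N-turn coil of radius a (from the previous section)
    return MU0 * N * I * a**2 / (2 * (a**2 + x**2) ** 1.5)

def b_helmholtz(I, a, z, N=1):
    # two identical coaxial coils at z = -a/2 and z = +a/2, currents in the same sense
    return b_axis(I, a, z - a / 2, N) + b_axis(I, a, z + a / 2, N)

I, a, N = 1.0, 0.1, 100                        # made-up coil parameters
centre = b_helmholtz(I, a, 0.0, N)
print(centre)                                  # ≈ (4/5)^(3/2) * mu0*N*I/a
print((4 / 5) ** 1.5 * MU0 * N * I / a)        # analytic midpoint value
print(b_helmholtz(I, a, 0.1 * a, N) / centre)  # ≈ 1.000: nearly uniform near the middle
```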
Right Hand Palm Rule – Magnetic Effect of Current Class 12, JEE & NEET
So far we have described the magnitude of the magnetic force on a moving electric charge, but not its direction. The magnetic field is a vector field, so the force it applies is oriented in a particular direction. There is a clever way to determine this direction using nothing more than your right hand. The direction of the magnetic force $F$ is perpendicular to the plane formed by $v$ and $B$, as determined by the right-hand palm rule, which is illustrated in the figure. The rule states that, to determine the direction of the magnetic force on a positive moving charge, point the thumb of the right hand in the direction of $v$ and the fingers in the direction of $B$; the perpendicular to the palm then points in the direction of $F$.
## Right Hand Palm Rule
If we hold the thumb of the right hand mutually perpendicular to the grip of the fingers, such that the curvature of the fingers represents the direction of the current in the wire loop, then the thumb of the right hand points in the direction of the magnetic field near the centre of the current loop.
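The direction predicted by the palm rule can be verified with a cross product. The following Python sketch (not from the original notes; the charge, speed and field values are arbitrary) computes $\vec{F}=q\,\vec{v} \times \vec{B}$:

```python
import numpy as np

q = 1.6e-19                     # a positive charge (C)
v = np.array([1e6, 0.0, 0.0])   # velocity along +x (m/s)
B = np.array([0.0, 0.5, 0.0])   # magnetic field along +y (T)

F = q * np.cross(v, B)          # F = q v x B, perpendicular to both v and B
print(F)                        # points along +z, exactly as the right-hand rule predicts
```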
### Graph of B v/s X
As $x$ increases, the magnetic field $B$ decreases; the dependence of $B$ on $x$ is shown in the figure.
The rate of change of $B$ with respect to $x$ differs at different values of $x$:
for $|x|<\frac{a}{2}$ the curve is convex, $\quad$ and for $|x|>\frac{a}{2}$ the curve is concave.
At $x=\pm \frac{a}{2}$ we get $\frac{d B}{d x}=$ const. and $\frac{d^{2} B}{d x^{2}}=0$,
so at $x=+\frac{a}{2}$ and $x=-\frac{a}{2}$, $B$ varies linearly with $x$.
These points are called points of inflexion.
The distance between these two points is equal to the radius of the coil.
$B=\frac{B_{C}}{\left(1+\frac{x^{2}}{a^{2}}\right)^{3 / 2}}$
$\because$ Magnetic field at the centre of coil $\mathrm{B}_{\mathrm{C}}=\frac{\mu_{0} \mathrm{NI}}{2 \mathrm{a}}$
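The points of inflexion can be verified symbolically. This Python/SymPy sketch (an illustration, not part of the original notes) differentiates $B(x)=B_{C} /\left(1+x^{2} / a^{2}\right)^{3 / 2}$ twice and confirms that the second derivative vanishes at $x=a / 2$:

```python
import sympy as sp

x, a, Bc = sp.symbols("x a B_c", positive=True)
B = Bc / (1 + x**2 / a**2) ** sp.Rational(3, 2)

d2B = sp.diff(B, x, 2)
print(sp.simplify(d2B.subs(x, a / 2)))  # 0, so x = a/2 is a point of inflexion
print(sp.solve(sp.Eq(d2B, 0), x))       # [a/2] for x > 0; by symmetry, x = -a/2 as well
```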
Magnetic Field due to Infinite Straight Conductor | Magnetic Effects of Current Class 12
Hey, do you want to learn about the magnetic field due to an infinite straight conductor? If yes, then keep reading.
## Magnetic field due to long straight conductor
Here we will discuss all the cases of the magnetic field due to a straight conductor, such as the magnetic field due to an infinite straight conductor, and more, as discussed below:
Consider a long straight conductor XY through which current $I$ is flowing from $X$ to $Y$. Let $P$ be the observation point at a distance $r$ from the conductor XY. Let us consider an infinitesimally small current element $\mathrm{CD}$ of length $d\ell$. Let $s$ be the distance of $P$ from the mid-point $O$ of the current element, and let $\theta$ be the angle that $OP$ makes with the direction of the current. The magnetic field at $P$ due to the current element $CD$ is
$\mathrm{dB}=\frac{\mu_{0}}{4 \pi} \frac{\mathrm{I}\,\mathrm{d} \ell \sin \theta}{\mathrm{s}^{2}}$ [Biot-Savart's law]. Integrating over all such elements, the magnetic field at $P$ due to the whole of the conductor XY is $\mathrm{B}=\frac{\mu_{0} \mathrm{I}}{4 \pi \mathrm{r}}\left(\sin \theta_{1}+\sin \theta_{2}\right)$, where $\theta_{1}$ and $\theta_{2}$ are the angles subtended at $P$ by the two ends of the conductor, measured from the perpendicular from $P$ to the wire.
## Case I :
If the conductor is infinitely long, then $\theta_{1}=90^{\circ}$ and $\theta_{2}=90^{\circ}$
$\mathrm{B}=\frac{\mu_{0} \mathrm{I}}{4 \pi}\left[\sin \frac{\pi}{2}+\sin \left(\frac{\pi}{2}\right)\right]=\frac{\mu_{0} \mathrm{I}}{4 \pi \mathrm{r}}[1+1]=\frac{\mu_{0}}{4 \pi} \frac{2 \mathrm{I}}{\mathrm{r}}$ Or
### Case II :
If a conductor is of infinite length but one end is in front of point $P$, i.e., one end of the conductor starts from point $N$, then $\theta_{1}=0^{\circ}$ and $\theta_{2}=90^{\circ}$, so $\mathrm{B}=\frac{\mu_{0} \mathrm{I}}{4 \pi \mathrm{r}}$, half the field of an infinite conductor.
### Case III :
The conductor has finite length and point $P$ is just in front of the middle of the conductor; by symmetry $\theta_{1}=\theta_{2}=\theta$, so $\mathrm{B}=\frac{\mu_{0} \mathrm{I}}{4 \pi \mathrm{r}}\,(2\sin\theta)=\frac{\mu_{0} \mathrm{I}}{2 \pi \mathrm{r}}\sin\theta$. A numerical comparison of these cases is sketched below.
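The sketch below (Python, illustrative only; the current and distance are made-up values) evaluates the general result $B=\frac{\mu_{0} I}{4 \pi r}\left(\sin \theta_{1}+\sin \theta_{2}\right)$ for the infinite, semi-infinite and symmetric finite cases:

```python
import math

MU0 = 4 * math.pi * 1e-7

def b_straight(I, r, theta1_deg, theta2_deg):
    """B at perpendicular distance r from a straight wire; theta1 and theta2 are the
    angles subtended at the point by the two ends of the wire (degrees)."""
    t1, t2 = math.radians(theta1_deg), math.radians(theta2_deg)
    return MU0 * I / (4 * math.pi * r) * (math.sin(t1) + math.sin(t2))

I, r = 5.0, 0.02                  # made-up values: 5 A, point 2 cm away
print(b_straight(I, r, 90, 90))   # Case I : infinite wire, mu0*I/(2*pi*r)
print(b_straight(I, r, 0, 90))    # Case II: semi-infinite wire, half of Case I
print(b_straight(I, r, 30, 30))   # Case III-type: finite wire seen symmetrically
```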
### Case IV :
Right hand Thumb Rule in Physics Class 12 | Magnetic Effects of Current
The right-hand thumb rule in physics gives the direction of the magnetic field produced by a current: grasp the conductor with the right hand so that the thumb points in the direction of the current; the direction in which the fingers curl then gives the direction of the magnetic field lines.
## Right-Hand Thumb Rule:
If we grasp the conductor in the palm of the right hand so that the thumb points in the direction of the flow of current, then the direction in which the fingers curl, gives the direction of magnetic field lines. For the current flowing through the conductor in the direction shown in fig. (a) or (b), both the rules predict that magnetic field lines will be in an anticlockwise direction when seen from above.
The magnetic field produced by a current-carrying straight conductor is of circular symmetry. The magnetic lines of force are concentric circles with the current-carrying conductor passing through their common center. The plane of the magnetic lines of force is perpendicular to the length of the conductor.
Magnetic Effect of Electric Current Class 12 Notes | Introduction
Electricity and magnetism are linked to each other, as is shown by the fact that an electric current passing through a copper wire produces a magnetic effect. The electromagnetic effect was first noticed by Hans Christian Oersted, who discovered a magnetic field around a conductor carrying an electric current. The magnetic field is a quantity that has both magnitude and direction; the direction of a magnetic field is usually taken to be the direction in which the north pole of a compass needle moves inside it. Here you will get complete Magnetic Effect of Electric Current Class 12 notes to prepare for Boards as well as for the JEE & NEET exams. Other related facts are as follows:
(a) A magnet at rest produces a magnetic field around it while an electric charge at rest produces an electric field around it.
(b) A current-carrying conductor has a magnetic field and not an electric field around it. On the other hand, a charge moving with a uniform velocity has an electric as well as a magnetic field around it.
(c) An electric field cannot be produced without a charge whereas a magnetic field can be produced without a magnet.
(d) No poles are produced in a coil carrying current but such a coil shows north and south polarities.
(e) An oscillating or accelerated charge produces electromagnetic waves in addition to electric and magnetic fields.
### Unit of Magnetic field
Unit of $\overrightarrow{\mathrm{B}}$: MKS: weber/metre$^{2}$; SI: tesla; CGS: maxwell/cm$^{2}$ or gauss.
One tesla $=$ one weber/m$^{2}$ $=10^{4}$ maxwell/cm$^{2}$ $=10^{4}$ gauss
## Biot-Savart’s Law
With the help of experimental results, Biot and Savart arrived at a mathematical expression that gives the magnetic field at some point in space in terms of the current that produces the field. That expression is based on the following experimental observations for the magnetic field $\overrightarrow{\mathrm{d} B}$ at a point $P$ associated with a length element $\overrightarrow{\mathrm{d} \ell}$ of a wire carrying a steady current I.
$\mu_{0}$ is called the permeability of free space; $\frac{\mu_{0}}{4 \pi}=10^{-7}$ henry/metre.
$1(\mathrm{H} / \mathrm{m})=1 \frac{\mathrm{T}\, \mathrm{m}}{\mathrm{A}}=1 \frac{\mathrm{Wb}}{\mathrm{A}\,\mathrm{m}}=1 \frac{\mathrm{N}}{\mathrm{A}^{2}}=1 \frac{\mathrm{N}\,\mathrm{s}^{2}}{\mathrm{C}^{2}}$
Dimensions of $\mu_{0}=\left[\mathrm{M}^{1} \mathrm{L}^{1} \mathrm{T}^{-2} \mathrm{A}^{-2}\right]$
For vacuum: $\sqrt{\frac{1}{\mu_{0} \varepsilon_{0}}}=\mathrm{c}=3 \times 10^{8} \mathrm{m} / \mathrm{s}$
### Biot-Savart law in Vector form
In vector form, $\overrightarrow{\mathrm{dB}}=\frac{\mu_{0}}{4 \pi} \frac{\mathrm{I}\left(\overrightarrow{\mathrm{d} \ell} \times \overrightarrow{\mathrm{r}}\right)}{\mathrm{r}^{3}}$
[Note: A static charge is a source of an electric field but not of a magnetic field, whereas a moving charge is a source of an electric field as well as a magnetic field.]
The direction of $\overrightarrow{\mathrm{dB}}$ is perpendicular to the plane determined by $\overrightarrow{\mathrm{d} \ell}$ and $\overrightarrow{\mathrm{r}}$ (i.e., if $\overrightarrow{\mathrm{d} \ell}$ and $\overrightarrow{\mathrm{r}}$ lie in the plane of the paper, then $\overrightarrow{\mathrm{dB}}$ is perpendicular to the plane of the paper). In the figure, the direction of $\overrightarrow{\mathrm{dB}}$ is into the page (use the right-hand screw rule).
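As a numerical cross-check of the Biot-Savart law, the following Python sketch (not part of the original notes; the discretisation and parameter values are made up for illustration) sums the contributions $\frac{\mu_{0} I}{4 \pi} \frac{\overrightarrow{d \ell} \times \vec{r}}{r^{3}}$ over a discretised circular loop and compares the result at the centre with the analytic value $\mu_{0} I / 2a$:

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7

def biot_savart(midpoints, dls, I, point):
    """Sum (mu0*I/4pi) * dl x r / |r|^3 over a discretised wire."""
    r = point - midpoints
    norm = np.linalg.norm(r, axis=1, keepdims=True)
    dB = MU0 * I / (4 * np.pi) * np.cross(dls, r) / norm**3
    return dB.sum(axis=0)

# discretise a circular loop of radius a lying in the xy-plane
a, I, n = 0.1, 2.0, 2000
phi = np.linspace(0, 2 * np.pi, n, endpoint=False)
mid = np.column_stack([a * np.cos(phi), a * np.sin(phi), np.zeros(n)])
dl = np.column_stack([-a * np.sin(phi), a * np.cos(phi), np.zeros(n)]) * (2 * np.pi / n)

print(biot_savart(mid, dl, I, np.zeros(3))[2])  # numerical z-component at the centre
print(MU0 * I / (2 * a))                        # analytic value mu0*I/(2a): they match
```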
Difference Between Potentiometer and Voltmeter – Current Electricity || Class 12, JEE & NEET
The potentiometer and the voltmeter are both voltage-measuring devices. The main difference between a potentiometer and a voltmeter is that the potentiometer measures the emf of a cell, whereas the voltmeter measures the terminal voltage of the circuit. The other differences between the potentiometer and the voltmeter are explained below.
## Standardization of Potentiometer
The process of determining the potential gradient of the potentiometer wire is known as standardisation of the potentiometer. A standard cell is one whose emf remains constant. A cadmium cell with emf 1.0186 V at $20^{\circ}$C is used as a standard cell. In the laboratory, a Daniell cell with emf 1.08 V is usually used as a standard cell.
If $\ell_{0}$ is the balancing length for standard emf $E_{0}$ then potential gradient $x=\frac{E_{0}}{l_{0}}$
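As a small worked illustration of standardisation (a sketch, not part of the original notes; the balancing lengths used are hypothetical), the potential gradient is first fixed with the standard cell and then used to read off an unknown emf:

```python
E0, l0 = 1.0186, 509.3   # standard cadmium cell emf (V) and its balancing length (cm) -- example length
x = E0 / l0              # potential gradient of the potentiometer wire (V/cm)
print(x)

l_unknown = 374.0        # balancing length observed for an unknown cell (cm), hypothetical
print(x * l_unknown)     # emf of the unknown cell, E = x * l
```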
### Key Differences:
The following are the key differences between Potentiometer and voltmeter.
1. The Potentiometer is an instrument used for measuring the emf, whereas the voltmeter is a type of meter which measures the terminal voltage of the circuit.
2. The Potentiometer accurately measures the potential difference because of zero internal resistance. Whereas, the voltmeter has a high internal resistance which causes the error in measurement. Thus the voltmeter approximately measures the voltage.
3. The sensitivity of the Potentiometer is very high, i.e. it can measure small potential differences between the two points. The voltmeter has low sensitivity.
4. The Potentiometer uses the null deflection type instrument whereas the voltmeter uses the deflection type instrument.
5. The potentiometer has effectively infinite internal resistance (it draws no current at balance), whereas the voltmeter has a high but finite resistance.
Watch the video: Applications of Potentiometer & its Construction by Saransh Sir.
### Conclusion
The potentiometer and the voltmeter both measure voltage in volts. The potentiometer is used in a circuit where an accurate value of the voltage is required; for approximate measurement, the voltmeter is used.
JEE Main 2020: NTA reopens Application Process | Apply before 24th May
JEE Main 2020 Application Form – NTA has again released the JEE Main application form 2020. The forms are available online on the JEE Main official website. Fresh registrations are also open for eligible 10+2 appeared/passed candidates, until May 24, 2020. The process to submit the application form of JEE Main 2020 is the same as before.
Candidates need to click on the apply link below and register by entering their name, e-mail id and other information. Apart from these details, candidates also have to upload images and pay the application fee. The amount of the JEE Main 2020 application fee varies as per the category and number of papers.
JEE Main application form correction reopens from May 25 to 31. Know more details on the JEE Main 2020 application form below.
#### DIRECT LINK TO REGISTER IN JEE MAIN 2020
The application process of JEE Main 2020 includes registration, filling the application form, uploading scanned images, payment of application fee and taking the printout of the filled-in form. For JEE Main 2020 registration, candidates must check the eligibility criteria as prescribed by the NTA. Candidates who have missed the JEE Main 2020 January exam can appear for the JEE Main 2020 2nd session.
#### About JEE Main 2020 Exam:
The Joint Entrance Examination is organized by the National Testing Agency to offer admission into courses like B.Tech/B.E., B.Arch and B.Plan. JEE Main is a computer-based test that takes place at the national level. The duration of the exam is 3 hours. It consists of multiple-choice questions and numerical value type questions; there is negative marking in the case of multiple-choice questions. After qualifying this exam, candidates can take admission in IIITs/CFTIs/NITs or can sit for the JEE Advanced exam.
#### IMPORTANT DATES AND EVENTS TO REMEMBER
• The JEE Main Exam 2020 will be held across the country from July 18 to July 23 this year.
• The application form process for JEE-Mains 2020 will be available on or before May 24, 2020.
• The forms can be completed and submitted only up to 5 pm on May 24, whereas the submission of the fees can be done until 11.50 pm on May 24, 2020.
• Candidates have been asked to pay the application fee online via debit card, net banking, UPI, or Paytm.
• Students qualifying in the JEE-Mains exams will appear for the JEE Advanced exam, which will be held later.
#### List of Documents Required for Filling Application Form
Candidate needs some documents to fill and submit the application form. Check the list of required documents here:
• scanned image of photograph
• scanned image of signature
• a valid e-mail id
• valid and active mobile number
• past academic mark sheets and certificates
The candidates have to upload a photo and signature in the application form of JEE Main 2020, as per the given specifications:
### JEE Main 2020 Exam Pattern
A few changes in the pattern of the question paper(s) and the number of questions for B.E./B.Tech have been approved by the JEE Apex Board (JAB) for the conduct of the JEE (Main) 2020 examination.
| Paper | Subjects (number of questions) | Type of questions | Timing of the examination (IST) |
|---|---|---|---|
| B.Tech | Mathematics – 25 (20+5), Physics – 25 (20+5), Chemistry – 25 (20+5) | 20 objective-type multiple choice questions (MCQs) and 5 questions with a numerical value as the answer, with equal weightage to Mathematics, Physics & Chemistry | 1st shift: 09:30 a.m. to 12:30 p.m.; 2nd shift: 02:30 p.m. to 05:30 p.m. |
### JEE Main 2020 Dates
| Event | Date |
|---|---|
| Application form availability | February 7, 2020 |
| Last date to apply | May 24, 2020 (reopened) |
| Last date to upload images and pay application fee | May 24, 2020 |
| Correction in particulars (last date) | May 31, 2020 |
| Exam date | July 18 – 23, 2020 |
| Release of answer key and recorded responses | To be notified later |
| Declaration of result | To be notified later |
| Announcement of NTA score | To be notified later |
### Eligibility Criteria
There is no age limit for candidates to appear in the JEE Main 2020 examination. Candidates who passed their class 12 examination in 2018 or 2019, or are appearing in 2020, are eligible for the JEE Main 2020 examination.
Candidates who cleared the class 12 exam in 2017 or earlier are not eligible to appear for the JEE Main 2020 exam.
#### Educational Qualification
Candidates must have at least 5 subjects in the class 12 (or equivalent) exam, with Mathematics, Physics and Chemistry as the essential subjects.
### JEE Main 2020 Detailed Syllabus
FREE Revision Series For JEE 2020 | Quick Revision Videos
👉Physics Revision Series
👉Chemistry Revision Series
👉Mathematics Revision Series
Mind Map For Hydrocarbons: Alkenes | Class 11, JEE & NEET – Download from here
Get to learn all the important points and reactions of Hydrocarbons – Alkenes through these mind maps. Download them and share them with your friends too.
Mind Maps for Rotational Motion: Torque Revision Class XI, JEE, NEET
Rotational Motion in Class 11 comprises a variety of cases with important formulae and key points. So here is the mind map to help you keep all the key formulae and important concepts at your fingertips.
JEE Main 2020 New Dates Announced | Check Here for Details
The exam dates for NTA JEE Main 2020 were announced by the Union Minister of Human Resource Development, Ramesh Pokhriyal Nishank, on Tuesday (5 May 2020). The JEE (Main) 2020 exam was earlier re-scheduled by the ministry due to the nationwide lockdown in the wake of the coronavirus outbreak. The new dates for the JEE (Main) 2020 exams were since then awaited.
According to the announcement, Union Minister of Human Resource Development Ramesh Pokhriyal Nishank said the examination process for the JEE (Main) 2020 will begin from July this year, while the session will begin by August. The JEE Main exam will be conducted from July 18 to July 23, 2020, while the JEE Advanced in August, Ramesh Pokhriyal Nishank said. Through JEE Main entrance exam, students will be able to get admission to BE/B.tech, B.Plan, and B.Arch degree courses at the various IITs (Indian Institute of Information Technology), NITs (National Institute of Technology) and other Centrally Funded Technical Institutions (CFTIs) across India.
### JEE Main 2020 Exam Pattern
A few changes in the pattern of the question paper(s) and the number of questions for B.E./B.Tech have been approved by the JEE Apex Board (JAB) for the conduct of the JEE (Main) 2020 examination.
| Paper | Subjects (number of questions) | Type of questions | Timing of the examination (IST) |
|---|---|---|---|
| B.Tech | Mathematics – 25 (20+5), Physics – 25 (20+5), Chemistry – 25 (20+5) | 20 objective-type multiple choice questions (MCQs) and 5 questions with a numerical value as the answer, with equal weightage to Mathematics, Physics & Chemistry | 1st shift: 09:30 a.m. to 12:30 p.m.; 2nd shift: 02:30 p.m. to 05:30 p.m. |
There is no age limit for the candidates to appear in JEE Main 2020 Examination. The candidates who have passed their 12th Examination in 2018, 2019 and appearing in 2020 are eligible to JEE Main Examination 2020. Those candidates who cleared the class 12 exam in 2017 or before 2017 are not eligible to appear for JEE Main 2020 exam.
### Quick Revision Videos
#### 👉Physics Revision Series by Saransh Gupta Sir (AIR-41)
JEE MAIN 2020 Question Paper PDF Download | All Shifts (7th, 8th & 9th January)
NTA has successfully conducted the JEE Main January 2020 exam in all the test centers, and we understand that students are waiting for the JEE Main 2020 question papers in PDF format. Based on the reviews by the students, it is concluded that the exam was comparatively easier than the previous year; furthermore, the questions asked in the exam were more concept-based. Question papers of all shifts are available here to download. JEE Main 2020 Question Paper – Candidates can download the JEE Main question papers for January 07, 08 and 09, 2020, for Shift 1 and Shift 2, from this page.
### JEE Main 2020 (January) – All Shifts Question Papers
The candidates can challenge the answer keys online. The window to raise objections will be available for a week. In case the NTA finds the objections raised to be incorrect, it will publish the result of JEE Main. The tentative result declaration date is January 31.
### Candidates may check their JEE Main question papers from Here!!!
JEE Main 2020 Exam | January Attempt | Students’ Reactions and Reviews
## JEE MAIN 2020 Students Reactions | 7th January (SHIFT-1)
JEE MAIN, one of the biggest examinations in the country, was scheduled today. The difficulty level and selection ratio are well-known aspects of this exam, but the number of candidates that appear every year is also a factor that makes it so special. Every year more than ten lakh aspirants apply and compete for only about 30-35 thousand seats of the National Institutes of Technology and IIITs. Today, 7th January 2020, the JEE Main 2020 exam was held in the morning shift from 9:30-12:30 across India. Many of you have also appeared, or are going to appear in the near future, and want to know the overall level of today's exam. So, dedicated to students and in order to help them, eSaral is again here, providing you the details about today's exam. In this video, our team members have gathered information about the questions, paper pattern and the difficulty level of the exam. We went to some of the centers in Kota city, yes!! The coaching city. Get to know the students' reviews and reactions regarding the JEE Main 2020 exam:
So, watch the video to know how they feel about the exam. We asked them about the easy-to-tough subjects based on the questions and topics, the marking scheme of the numerical type questions, and the ratio of class 11 and class 12 topics in the exam. If you have these kinds of doubts or want to know the details, watch the video; based on the conversation with the aspirants, we developed a PDF too. You can download it to know more. Here is the complete analysis of the JEE Main 2020, 7th Jan Shift-1 exam by Saransh Gupta Sir with students who appeared in the exam.
## JEE MAIN 2020 Students Reactions | 7th January (SHIFT-2)
So, here are the students' reviews of the second shift of the 7th January JEE Main exam. From the students' reviews it can be concluded that both shifts of the day comprised easy to moderate level questions. Watch the video and comment with your review of the JEE Main exam.
## JEE MAIN 2020 Students Reactions | 8th January (SHIFT-2)
JEE Main 2020 Paper Analysis | Discussion with Students
## JEE MAIN 2020 Paper Analysis for 7th January SHIFT-1
Today, on 7th January, the national-level competitive exam JEE MAIN was held across the nation, and with the completion of the exam there are many queries in the minds of aspirants. It is natural to feel a certain kind of anxiety after the exam: how did others feel, what was the difficulty level of the exam, and what are the expected cut-offs? Here at eSaral we are providing the first detailed analysis that is available to all students. Watch the video to know the level of the questions (according to eSaral students) and the topic-wise distribution of questions from the class 11 and class 12 syllabi. The physics faculty at eSaral, Saransh Gupta Sir, and the chemistry faculty, Prateek Gupta Sir, give a detailed analysis of the JEE MAIN question paper as per the reviews by the students of eSaral. In the video, the questions from each subject and the topics from which they were asked are explained using the prepometer, a tool designed by eSaral. In the discussion session, the students of eSaral talked about the questions, the difficulty level, and the questions they faced in the examination. The JEE Main 2020 paper analysis for the January 07th exam is updated here. Students are sharing their reviews of the JEE Main exam here. Watch the complete video till the end to know the details!
#### Watch the Reactions and Reviews of Students outside JEE Main Exam Center
In physics, it was found that many of the questions were direct and formula-based, relying on the memory of the candidate. There were approximately 10 questions from the class 11 syllabus out of 25 questions. Watch the video if you want the same kind of detailed analysis for chemistry and mathematics. You will also get some idea about specific questions and their solutions that were discussed in the analysis video. From our analysis, we found the overall exam to be of easy to moderate level: only a few questions fell in the difficult to very difficult category, and the questions were not very confusing. We have developed a PDF with the detailed analysis; download the file to know about the exam. If you think that this time you could not give your best attempt, then don't worry, we are going to start our BOUNCE BACK CRASH COURSE for the April JEE MAIN exam. Enroll and ace the exam. The above PDF contains the weightage of the number of questions per chapter.
## JEE MAIN 2020 Paper Analysis for 9th January SHIFT-2
JEE Main question papers for January 07, 08 and 09, 2020, for Shift 1 and Shift 2 (including the 9th Jan Shift-2 PDF download), are given below. Download or view them from here!
### 👉 Click to Join Free Physics Revision Series by Saransh Gupta Sir (AIR-41, IIT-Bombay)
Mind Maps for Modern Physics: Nuclei Revision – Class XII, JEE, NEET
Nuclear Physics in Class 12 comprises a variety of cases with important formulae and key points. So here is the mind map to help you keep all the formulae and important key concepts at your fingertips.
Mind Maps for Atomic Structure Revision – Class XII, JEE, NEET
Atomic Structure in Class 12 comprises a variety of cases with important formulae and key points. So here is the mind map to help you keep all the formulae and important key concepts at your fingertips.
Wave Optics – JEE Advanced Previous Year Questions with Solutions
JEE Advanced previous year questions of Physics with solutions are available at eSaral. Practicing JEE Advanced previous year paper questions of Physics will help JEE aspirants in getting familiar with the question pattern as well as in analyzing their weak and strong areas. Get detailed Class 11th & 12th Physics notes to prepare for Boards as well as competitive exams like IIT JEE, NEET, etc. eSaral helps the students in clearing and understanding each topic in a better way, and provides complete chapter-wise notes of Class 11th and 12th for all subjects. Click Here for JEE Main previous year topic-wise questions of Physics with solutions. Download the eSaral app for free study material and video tutorials.
Previous Years' JEE Advanced Questions
Q. Column I shows four situations of standard Young’s double slit arrangement with the screen placed far away from the slits $S_{1}$ and $S_{2}$. In each of these cases $S_{1} P_{0} = S_{2} P_{0}$, $S_{1} P_{1} - S_{2} P_{1} = \lambda / 4$ and $S_{1} P_{2} - S_{2} P_{2}=\lambda / 3$, where $\lambda$ is the wavelength of the light used. In the cases B, C and D, a transparent sheet of refractive index $\mu$ and thickness t is pasted on slit $S_{2}$. The thicknesses of the sheets are different in different cases. The phase difference between the light waves reaching a point P on the screen from the two slits is denoted by $\delta$ (P) and the intensity by I(P). Match each situation given in Column I with the statement(s) in Column II valid for that situation. [IIT-JEE-2009]
Sol. ((A) $p, s ;(B) q ;(C) t ;(D) r, s, t$)
(A) $\Delta \mathrm{x}=\mathrm{S}_{2} \mathrm{P}_{0}-\mathrm{S}_{1} \mathrm{P}_{0}=0$, so $\delta\left(\mathrm{P}_{0}\right)=\frac{2 \pi}{\lambda} \Delta \mathrm{x}=0$.
$\Delta \mathrm{x}=\mathrm{S}_{1} \mathrm{P}_{1}-\mathrm{S}_{2} \mathrm{P}_{1}=\frac{\lambda}{4}$, so $\delta\left(\mathrm{P}_{1}\right)=\frac{2 \pi}{\lambda} \times \frac{\lambda}{4}=\frac{\pi}{2}$.
Using $\mathrm{I}=\mathrm{I}_{\max } \cos ^{2}\left(\frac{\Delta \phi}{2}\right)$: $\mathrm{I}\left(\mathrm{P}_{1}\right)=\mathrm{I}_{\max } \cos ^{2} \frac{\delta\left(\mathrm{P}_{1}\right)}{2}=\frac{\mathrm{I}_{\max }}{2}$.
$\delta\left(\mathrm{P}_{2}\right)=\frac{2 \pi}{\lambda} \times \frac{\lambda}{3}=\frac{2 \pi}{3}$, so $\mathrm{I}\left(\mathrm{P}_{2}\right)=\mathrm{I}_{\max } \cos ^{2} \frac{\pi}{3}=\frac{\mathrm{I}_{\max }}{4}$, and $\mathrm{I}\left(\mathrm{P}_{0}\right)>\mathrm{I}\left(\mathrm{P}_{1}\right)$.
(B) $\Delta \mathrm{x}=\mathrm{S}_{1} \mathrm{P}-\left[\mathrm{S}_{2} \mathrm{P}+(\mu-1) \mathrm{t}\right]$, here with $(\mu-1) \mathrm{t}=\frac{\lambda}{4}$.
At $\mathrm{P}_{1}$: $\Delta \mathrm{x}_{1}=\mathrm{S}_{1} \mathrm{P}_{1}-\mathrm{S}_{2} \mathrm{P}_{1}-(\mu-1) \mathrm{t}=\frac{\lambda}{4}-\frac{\lambda}{4}=0$, so $\delta\left(\mathrm{P}_{1}\right)=0$ and $\mathrm{I}\left(\mathrm{P}_{1}\right)=\mathrm{I}_{\max }$.
At $\mathrm{P}_{0}$: $\delta\left(\mathrm{P}_{0}\right)=\frac{\pi}{2} \neq 0$, so $\mathrm{I}\left(\mathrm{P}_{0}\right)=\mathrm{I}_{\max } / 2$.
At $\mathrm{P}_{2}$: $\Delta \mathrm{x}=\mathrm{S}_{1} \mathrm{P}_{2}-\mathrm{S}_{2} \mathrm{P}_{2}-(\mu-1) \mathrm{t}=\frac{\lambda}{3}-\frac{\lambda}{4}=\frac{\lambda}{12}$, so $\delta\left(\mathrm{P}_{2}\right)=\frac{2 \pi}{\lambda} \times \frac{\lambda}{12}=\frac{\pi}{6}$ and $\mathrm{I}\left(\mathrm{P}_{2}\right)=\mathrm{I}_{\max } \cos ^{2}\left(\frac{\pi}{12}\right)$.
Q. Young’s double slit experiment is carried out by using green, red and blue light, one color at a time. The fringe widths recorded are $\beta_{G}, \beta_{R}$ and $\beta_{B},$ respectively. Then (A) $\beta_{G}>\beta_{B}>\beta_{R}$ (B) $\beta_{B}>\beta_{G}>\beta_{R}$ (C) $\beta_{R}>\beta_{B}>\beta_{G}$ (D) $\beta_{R}>\beta_{G}>\beta_{B}$ [IIT-JEE-2012]
Sol. (D) $\beta=\frac{\mathrm{D} \lambda}{\mathrm{d}}$ and $\lambda_{\mathrm{R}}>\lambda_{\mathrm{G}}>\lambda_{\mathrm{B}}$, so $\beta_{R}>\beta_{G}>\beta_{B}$.
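A short Python sketch (illustrative; the slit separation, screen distance and wavelengths are assumed values) makes the ordering concrete by evaluating $\beta=D \lambda / d$ for typical blue, green and red wavelengths:

```python
D, d = 2.0, 0.5e-3   # screen distance and slit separation (m), made-up values
wavelengths = {"blue": 450e-9, "green": 550e-9, "red": 650e-9}

for colour, lam in wavelengths.items():
    print(colour, D * lam / d)  # fringe width beta = D*lambda/d
# beta grows with wavelength, hence beta_R > beta_G > beta_B
```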
Q. In the Young’s double slit experiment using a monochromatic light of wavelength $\lambda$, the path difference (in terms of an integer n) corresponding to any point having half the peak intensity is :- (A) $(2 n+1) \frac{\lambda}{2}$ (B) $(2 n+1) \frac{\lambda}{4}$ (C) $(2 n+1) \frac{\lambda}{8}$ $(D)(2 n+1) \frac{\lambda}{16}$ [JEE Advanced 2013]
Sol. (B) $\frac{\mathrm{I}_{\max }}{2}=\mathrm{I}_{\max } \cos ^{2}\left(\frac{\pi}{\lambda} \Delta \mathrm{x}\right)$ $\cos ^{2}\left(\frac{\pi}{\lambda} \Delta \mathrm{x}\right)=\frac{1}{2}$ $\cos \left(\frac{\pi}{\lambda} \Delta \mathrm{x}\right)=\pm \frac{1}{\sqrt{2}}$ $\frac{\pi}{\lambda} \Delta \mathrm{x}=\mathrm{n} \pi \pm \frac{\pi}{4}$ $\Delta \mathrm{x}=\left(\mathrm{n} \pm \frac{1}{4}\right) \lambda$
Q. A light source, which emits two wavelengths $\lambda_{1}=400 \mathrm{nm}$ and $\lambda_{2}=600 \mathrm{nm},$ is used in a Young’s double slit experiment. If recorded fringe widths for $\lambda_{1}$ and $\lambda_{2}$ are $\beta_{1}$ and $\beta_{2}$ and the number of fringes for them within a distance y on one side of the central maximum are $\mathrm{m}_{1}$ and $\mathrm{m}_{2},$ respectively, then :- (A) $\beta_{2}>\beta_{1}$ (B) $\mathrm{m}_{1}>\mathrm{m}_{2}$ (C) From the central maximum, $3^{\mathrm{rd}}$ maximum of $\lambda_{2}$ overlaps with $5^{\text {th }}$ minimum of $\lambda_{1}$ (D) The angular separation of fringes of $\lambda_{1}$ is greater than $\lambda_{2}$ [JEE Advanced 2014]
Sol. (A,B,C) $\beta=\frac{\mathrm{D} \lambda}{\mathrm{d}}$, so $\beta_{2}>\beta_{1}$. $\mathrm{y}=\mathrm{m}_{1} \frac{\mathrm{D} \lambda_{1}}{\mathrm{d}}=\mathrm{m}_{2} \frac{\mathrm{D} \lambda_{2}}{\mathrm{d}}$, so $\mathrm{m}_{1}>\mathrm{m}_{2}$. For overlap: $\frac{\mathrm{n} \mathrm{D} \lambda_{2}}{\mathrm{d}}=\left(\mathrm{n}^{\prime}+\frac{1}{2}\right) \frac{\mathrm{D} \lambda_{1}}{\mathrm{d}} \Rightarrow 600\, \mathrm{n}=\left(\mathrm{n}^{\prime}+\frac{1}{2}\right) \times 400$, satisfied by $\mathrm{n}=3$, $\mathrm{n}^{\prime}=4$: the $3^{\text{rd}}$ maximum of $\lambda_{2}$ overlaps with the $5^{\text{th}}$ minimum of $\lambda_{1}$.
Q. A Young’s double slit interference arrangement with slits $S_{1}$ and $S_{2}$ is immersed in water (refractive index $=4 / 3$ ) as shown in the figure. The positions of maxima on the surface of water are given by $x^{2}=p^{2} m^{2} \lambda^{2}-d^{2},$ where $\lambda$ is the wavelength of light in air (refractive index $=1$), $2 d$ is the separation between the slits and $m$ is an integer. The value of p is. [JEE Advanced 2015]
Sol. 3
Q. While conducting the Young’s double slit experiment, a student replaced the two slits with a large opaque plate in the x-y plane containing two small holes that act as two coherent point sources $\left(\mathrm{S}_{1}, \mathrm{S}_{2}\right)$ emitting light of wavelength 600 nm. The student mistakenly placed the screen parallel to the x-z plane (for z > 0) at a distance D = 3m from the mid-point of $\mathrm{S}_{1} \mathrm{S}_{2}$, as shown schematically in the figure. The distance between the sources d = 0.6003 mm. The origin O is at the intersection of the screen and the line joining $\mathrm{S}_{1} \mathrm{S}_{2}$. Which of the following is (are) true of the intensity pattern on the screen ? (A) Hyperbolic bright and dark bands with foci symmetrically placed about O in the x-direction (B) Semi circular bright and dark bands centered at point O (C) The region very close to the point O will be dark (D) Straight bright and dark bands parallel to the x-axis [JEE-Mains 2016]
Sol. (B,C) Path difference at point O $=\mathrm{d}=0.6003\ \mathrm{mm}=600300\ \mathrm{nm}$ $=\frac{2001}{2}(600 \mathrm{nm})=1000 \lambda+\frac{\lambda}{2}$ $\Rightarrow$ a minimum forms at point $\mathrm{O}$. The line $S_{1} S_{2}$ and the screen are $\perp$ to each other, so the fringe pattern is circular (semi-circular, because only half of the screen is available).
Q. Two coherent monochromatic point sources $\mathrm{S}_{1}$ and $\mathrm{S}_{2}$ of wavelength $\lambda$ = 600 nm are placed symmetrically on either side of the center of the circle as shown. The sources are separated by a distance d = 1.8mm. This arrangement produces interference fringes visible as alternate bright and dark spots on the circumference of the circle. The angular separation between two consecutive bright spots is $\Delta \theta$. Which of the following options is/are correct ? (A) A dark spot will be formed at the point $\mathrm{P}_{2}$ (B) The angular separation between two consecutive bright spots decreases as we move from $\mathrm{P}_{1}$ to $\mathrm{P}_{2}$ along the first quadrant (C) At $\mathrm{P}_{2}$ the order of the fringe will be maximum (D) The total number of fringes produced between $P_{1}$ and $\mathrm{P}_{2}$ in the first quadrant is close to 3000 [JEE Advanced 2017]
Sol. (C,D)
Liquid Solution – JEE Main Previous Year Question of with Solutions
JEE Main previous year questions of Chemistry with solutions are available here. Practicing JEE Main previous year paper questions of Chemistry will help all JEE aspirants in getting familiar with the question pattern as well as in analyzing their weak and strong areas. Get detailed Class 11th & 12th Chemistry notes to prepare for Boards as well as competitive exams like IIT JEE, NEET, etc. eSaral helps the students in clearing and understanding each topic in a better way, and provides complete chapter-wise notes of Class 11th and 12th for all subjects. Besides this, eSaral also offers NCERT solutions, previous year questions for JEE Main and Advanced, practice questions, test series for JEE Main, JEE Advanced and NEET, important questions of Physics, Chemistry, Math and Biology, and much more. Download the eSaral app for free study material and video tutorials.
Previous Years' AIEEE/JEE Mains Questions
Q. A binary liquid solution is prepared by mixing n-heptane and ethanol. Which one of the following statements is correct regarding the behaviour of the solution? (1) The solution is non-ideal, showing –ve deviation from Raoult’s law (2) n-heptane shows +ve deviation while ethanol shows –ve deviation from Raoult’s law (3) The solution formed is an ideal solution (4) The solution is non-ideal, showing +ve deviation from Raoult’s law [AIEEE-2009]
Sol. (4) (A) n-heptane: non-polar; (B) ethanol: polar. $\mathrm{F}_{\mathrm{A}-\mathrm{B}}<\mathrm{F}_{\mathrm{A}-\mathrm{A}}, \mathrm{F}_{\mathrm{B}-\mathrm{B}} \Rightarrow$ +ve deviation
Q. Two liquids X and Y form an ideal solution. At 300K, vapour pressure of the solution containing 1 mol of X and 3 mol of Y is 550 mm Hg. At the same temperature, if 1 mol of Y is further added to this solution, vapour pressure of the solution increases by 10 mm Hg. Vapour pressure (in mmHg) of X and Y in their pure states will be, respectively :- (1) 400 and 600 (2) 500 and 600 (3) 200 and 300 (4) 300 and 400 [AIEEE-2009]
Sol. (1) $550=\mathrm{P}_{\mathrm{A}}^{\circ} \times \frac{1}{4}+\mathrm{P}_{\mathrm{B}}^{\circ} \times \frac{3}{4}$ $560=\mathrm{P}_{\mathrm{A}}^{\circ} \times \frac{1}{5}+\mathrm{P}_{\mathrm{B}}^{\circ} \times \frac{4}{5}$ $\mathrm{P}_{\mathrm{A}}^{\circ}=400 \quad \mathrm{P}_{\mathrm{B}}^{\circ}=600$ torr
Q. On mixing, heptane and octane form an ideal solution. At 373 K, the vapour pressures of the two liquid components (heptane and octane) are 105 kPa and 45 kPa respectively. The vapour pressure of the solution obtained by mixing 25.0 g of heptane and 35 g of octane will be (molar mass of heptane = 100 g $\mathrm{mol}^{-1}$ and of octane = 114 g $\mathrm{mol}^{-1}$) :- (1) 144.5 kPa (2) 72.0 kPa (3) 36.1 kPa (4) 96.2 kPa [AIEEE-2010]
Sol. (2) $\mathrm{P}_{\mathrm{A}}=\mathrm{P}_{\mathrm{A}}^{\circ} \mathrm{X}_{\mathrm{A}}=105 \times \frac{0.25}{0.25+0.307}=105 \times 0.449=47.13\ \mathrm{kPa}$; $\mathrm{P}_{\mathrm{B}}=\mathrm{P}_{\mathrm{B}}^{\circ} \mathrm{X}_{\mathrm{B}}=45 \times 0.551=24.80\ \mathrm{kPa}$; $\mathrm{P}_{\mathrm{T}}=\mathrm{P}_{\mathrm{A}}+\mathrm{P}_{\mathrm{B}} \approx 72.0\ \mathrm{kPa}$
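The same Raoult's-law computation can be packaged as a small helper. This Python sketch (an illustration, not part of the original solution) reproduces the ≈72 kPa answer from the masses and molar masses given in the question:

```python
def raoult_total(p_pure, grams, molar_mass):
    """Total vapour pressure of an ideal solution, given masses of the components."""
    moles = [g / M for g, M in zip(grams, molar_mass)]
    n_total = sum(moles)
    return sum(p * n / n_total for p, n in zip(p_pure, moles))

# the heptane/octane mixture from the question above
print(raoult_total([105, 45], [25.0, 35.0], [100, 114]))  # ≈ 71.9 kPa, i.e. option (2)
```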
Q. If sodium sulphate is considered to be completely dissociated into cations and anions in aqueous solution, the change in freezing point of water $\left(\Delta \mathrm{T}_{\mathrm{f}}\right)$, when 0.01 mol of sodium sulphate is dissolved in 1 kg of water, is $\left(\mathrm{K}_{\mathrm{f}}=1.86 \mathrm{K} \mathrm{kg} \mathrm{mol}^{-1}\right)$ :- (1) 0.0186 K (2) 0.0372 K (3) 0.0558 K (4) 0.0744 K [AIEEE-2010]
Sol. (3) $\Delta \mathrm{T}_{\mathrm{f}}=\mathrm{i} \mathrm{k}_{\mathrm{f}} \cdot \mathrm{m}$ $=3 \times 1.86 \times 0.01 / 1$ $=0.0558 \mathrm{K}$
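The calculation generalises directly. A minimal Python sketch (illustrative only) of $\Delta T_{f}=i\, K_{f}\, m$ reproduces the 0.0558 K result for fully dissociated $\mathrm{Na}_{2}\mathrm{SO}_{4}$ ($i=3$):

```python
def delta_tf(i, kf, moles_solute, kg_solvent):
    """Freezing-point depression with van't Hoff factor i: dTf = i * Kf * m."""
    return i * kf * moles_solute / kg_solvent

# Na2SO4 -> 2 Na+ + SO4^2- gives i = 3 on complete dissociation
print(delta_tf(3, 1.86, 0.01, 1.0))  # 0.0558 K, matching option (3)
```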
Q. The molality of a urea solution in which 0.0100 g of urea, $\left[\left(\mathrm{NH}_{2}\right)_{2} \mathrm{CO}\right]$, is added to 0.3000 $\mathrm{dm}^{3}$ of water at STP is :- (1) 0.555 m (2) $5.55 \times 10^{-4} \mathrm{m}$ (3) 33.3 m (4) $3.33 \times 10^{-2} \mathrm{m}$ [AIEEE-2011]
Sol. (2) $\mathrm{m}=\frac{\mathrm{n}}{\mathrm{W}(\mathrm{kg})}=\frac{0.01 / 60}{0.3 \mathrm{kg}}=5.55 \times 10^{-4} \mathrm{mol} / \mathrm{kg}$
Q. A 5% solution of cane sugar (molar mass 342) is isotonic with a 1% solution of an unknown solute. The molar mass of the unknown solute in g/mol is :- (1) 136.2 (2) 171.2 (3) 68.4 (4) 34.2 [AIEEE-2011]
Sol. (3) $\pi_{\mathrm{c.s}}=\pi_{\mathrm{Unk}}$ $\left(\frac{\mathrm{n}}{\mathrm{V}}\right)_{\mathrm{c.s.}} \mathrm{RT}=\left(\frac{\mathrm{n}}{\mathrm{V}}\right)_{\mathrm{unk} .} \mathrm{RT}$ $\frac{5 \times 10}{342}=\frac{1 \times 10}{\mathrm{M}}$ M = 68.4 gm/mol
Q. Ethylene glycol is used as an antifreeze in a cold climate. Mass of ethylene glycol which should be added to 4 kg of water to prevent it from freezing at – $6^{\circ} \mathrm{C}$ will be : $\left(\mathrm{K}_{\mathrm{f}} \text { for water }=1.86 \mathrm{K} \mathrm{kgmol}^{-1}, \text { and molar mass of ethylene glycol }=62 \mathrm{gmol}^{-1}\right)$ (1) 400.00 g (2) 304.60 g (3) 804.32 g (4) 204.30 g [AIEEE-2011]
Sol. (3) $6=1.86 \times \frac{\mathrm{w} / 62}{4} \Rightarrow \mathrm{w}=800 \mathrm{gm}$
Q. The degree of dissociation ($\alpha$) of a weak electrolyte, $\mathrm{A}_{x} \mathrm{B}_{y}$, is related to the van’t Hoff factor (i) by the expression :- (1) $\alpha=\frac{\mathrm{x}+\mathrm{y}-1}{\mathrm{i}-1}$ (2) $\alpha=\frac{\mathrm{x}+\mathrm{y}+1}{\mathrm{i}-1}$ (3) $\alpha=\frac{\mathrm{i}-1}{(\mathrm{x}+\mathrm{y}-1)}$ (4) $\alpha=\frac{\mathrm{i}-1}{\mathrm{x}+\mathrm{y}+1}$ [AIEEE-2011]
Sol. (3)
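The chosen relation $\alpha=\frac{\mathrm{i}-1}{x+y-1}$ is easy to apply numerically. A small Python sketch (illustrative; the van't Hoff factor used is a made-up example for a 1:2 electrolyte such as $\mathrm{ZnCl}_{2}$, with $x=1$, $y=2$):

```python
def alpha_from_i(i, x, y):
    """Degree of dissociation of AxBy: alpha = (i - 1) / (x + y - 1)."""
    return (i - 1) / (x + y - 1)

print(alpha_from_i(2.478, 1, 2))  # hypothetical i for a 1:2 electrolyte -> alpha ≈ 0.74
```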
Q. $\mathrm{K}_{\mathrm{f}}$ for water is 1.86 K kg $\mathrm{mol}^{-1}$. If your automobile radiator holds 1.0 kg of water, how many grams of ethylene glycol $\left(\mathrm{C}_{2} \mathrm{H}_{6} \mathrm{O}_{2}\right)$ must you add to get the freezing point of the solution lowered to $-2.8^{\circ} \mathrm{C}$? (1) 27 g (2) 72 g (3) 93 g (4) 39 g [AIEEE-2012]
Sol. (3) $2.8=1.86 \times \frac{\mathrm{w} / 62}{1} \Rightarrow \mathrm{w}=93.33 \mathrm{gm}$
Q. A solution containing 0.85 g of $\mathrm{ZnCl}_{2}$ in 125.0 g of water freezes at $-0.23^{\circ} \mathrm{C}$. The apparent degree of dissociation of the salt is: ($\mathrm{K}_{f}$ for water = 1.86 K kg $\mathrm{mol}^{-1}$; atomic masses: Zn = 65.3 and Cl = 35.5) (1) 1.36% (2) 2.47% (3) 73.5% (4) 7.35% [Jee (Main)-2012 online]
Sol. (3) $0.23=(1+2 \alpha) \times 1.86 \times \frac{0.85 / 136.3}{0.125}$, which gives $\alpha \approx 0.735=73.5 \%$
Q. Liquids A and B form an ideal solution. At $30^{\circ}$C, the total vapour pressure of a solution containing 1 mol of A and 2 moles of B is 250 mm Hg. The total vapour pressure becomes 300 mm Hg when 1 more mol of A is added to the first solution. The vapour pressures of pure A and B at the same temperature are (1) 450, 150 mm Hg (2) 250, 300 mm Hg (3) 125, 150 mm Hg (4) 150, 450 mm Hg [Jee (Main)-2012 online]
Sol. (1) $250=\mathrm{P}_{\mathrm{A}}^{0} \times \frac{1}{3}+\mathrm{P}_{\mathrm{B}}^{0} \times \frac{2}{3}$ $300=\mathrm{P}_{\mathrm{A}}^{0} \times \frac{1}{2}+\mathrm{P}_{\mathrm{B}}^{0} \times \frac{1}{2}$ $\mathrm{P}_{\mathrm{A}}^{0}=450 \mathrm{mm}$ $\mathrm{P}_{\mathrm{B}}^{0}=150 \mathrm{mm}$
Q. The freezing point of a 1.00 m aqueous solution of HF is found to be $-1.91^{\circ} \mathrm{C}$. The freezing point constant of water, $\mathrm{K}_{\mathrm{f}}$, is 1.86 K kg $\mathrm{mol}^{-1}$. The percentage dissociation of HF at this concentration is (1) 2.7% (2) 30% (3) 10% (4) 5.2% [Jee (Main)-2012 online]
Sol. (1) $\Delta \mathrm{T}_{\mathrm{f}}=\mathrm{i} \times \mathrm{K}_{\mathrm{f}} \times \mathrm{m}$ $1.91=(1+\alpha) \times 1.86 \times 1$ $\alpha=0.027$
Q. How many grams of methyl alcohol should be added to a 10 litre tank of water to prevent it from freezing at 268 K ? $\left(\mathrm{K}_{f} \text { for water is } 1.86 \mathrm{K} \mathrm{kg} \mathrm{mol}^{-1}\right)$ (1) 899.04 g (2) 886.02 g (3) 868.06 g (4) 880.07 g [Jee (Main)-2013 online]
Sol. (2) $\Delta \mathrm{T}_{\mathrm{f}}=\mathrm{T}_{\mathrm{f}}^{0}-\mathrm{T}_{\mathrm{f}}=\mathrm{K}_{\mathrm{f}} \times \mathrm{m}$ $273.15-268=1.86 \times \frac{\mathrm{w} / 32}{10}$ $\mathrm{w}=886.02 \mathrm{g}$
Q. Vapour pressure of pure benzene is 119 torr and that of toluene is 37.0 torr at the same temperature. Mole fraction of toluene in vapour phase which is in equilibrium with a solution of benzene and toluene having a mole fraction of toluene 0.50, will be : (1) 0.137 (2) 0.205 (3) 0.237 (4) 0.435 [Jee (Main)-2013 online]
Sol. (3) Benzene $\rightarrow$ A, Toluene $\rightarrow$ B. $\mathrm{y}_{B}=\frac{P_{B}^{0} X_{B}}{P_{B}^{0} X_{B}+P_{A}^{0} X_{A}}=\frac{37 \times 0.5}{37 \times 0.5+119 \times 0.5}=0.237$
Q. A molecule M associates in a given solvent according to the equation $\mathrm{M} \rightleftharpoons(\mathrm{M})_{\mathrm{n}}$. For a certain concentration of M, the van’t Hoff factor was found to be 0.9 and the fraction of associated molecules was 0.2. The value of n is : (1) 2 (2) 4 (3) 5 (4) 3 [Jee (Main)-2013 online]
Sol. (1) $\mathrm{M} \rightleftharpoons(\mathrm{M})_{\mathrm{n}}$: the remaining fraction of M is $1-0.2$ and the associated species amount to $0.2 / \mathrm{n}$, so $\mathrm{i}=\frac{1-0.2+0.2 / \mathrm{n}}{1}=0.9 \Rightarrow 0.9=0.8+\frac{0.2}{\mathrm{n}} \Rightarrow 0.1=\frac{0.2}{\mathrm{n}} \Rightarrow \mathrm{n}=2$
Q. 12 g of a non-volatile solute dissolved in 108 g of water produces a relative lowering of vapour pressure of 0.1. The molecular mass of the solute is : (1) 60 (2) 80 (3) 40 (4) 20 [Jee (Main)-2013 online]
Sol. (4) $\frac{\Delta \mathrm{P}}{\mathrm{P}^{0}}=0.1=\frac{12 / \mathrm{m}}{108 / 18} \Rightarrow \mathrm{m}=20$
Q. The molarity of a solution obtained by mixing 750 mL of 0.5(M)HCl with 250 mL of 2(M)HCl will be :- (1) 0.875 M (2) 1.00 M (3) 1.75 M (4) 0.975 M [Jee (Main)-2013]
Sol. (1) $\mathrm{M}_{\mathrm{f}}=\frac{\mathrm{M}_{1} \mathrm{V}_{1}+\mathrm{M}_{2} \mathrm{V}_{2}}{\mathrm{V}_{1}+\mathrm{V}_{2}}=0.875 \mathrm{M}$
Q. The observed osmotic pressure for a 0.10 M solution of Fe$\left(\mathrm{NH}_{4}\right)_{2}\left(\mathrm{SO}_{4}\right)_{2}$ at $25^{\circ} \mathrm{C}$ is 10.8 atm. The expected and experimental (observed) values of the van’t Hoff factor (i) will be, respectively : $\left(\mathrm{R}=0.082\ \mathrm{L}\ \mathrm{atm}\ \mathrm{K}^{-1}\ \mathrm{mol}^{-1}\right)$ (1) 3 and 5.42 (2) 5 and 3.42 (3) 4 and 4.00 (4) 5 and 4.42 [Jee (Main)-2014 online]
Sol. (4) $\pi_{\mathrm{ob}}=\mathrm{i} \frac{\mathrm{n}}{\mathrm{V}} \mathrm{RT}$ $10.8=\mathrm{i} \times 0.1 \times 0.082 \times 298$ $\mathrm{i}=4.42$
Q. For an ideal solution of two components A and B, which of the following is true ? (1) $\Delta \mathrm{H}_{\text {mixing }}<0$ (2) $\mathrm{A}-\mathrm{A}, \mathrm{B}-\mathrm{B}$ and $\mathrm{A}-\mathrm{B}$ interactions are identical (3) $\mathrm{A}-\mathrm{B}$ interaction is stronger than $\mathrm{A}-\mathrm{A}$ and $\mathrm{B}-\mathrm{B}$ interactions (4) $\Delta \mathrm{H}_{\text {mixing }}>0$ [Jee(Main)-2014 online]
Sol. (2) $\Delta \mathrm{H}_{\operatorname{mix}}=0$
Q. Consider separate solutions of $0.500\ \mathrm{M}\ \mathrm{C}_{2} \mathrm{H}_{5} \mathrm{OH}(\mathrm{aq})$, $0.100\ \mathrm{M}\ \mathrm{Mg}_{3}\left(\mathrm{PO}_{4}\right)_{2}(\mathrm{aq})$, $0.250\ \mathrm{M}\ \mathrm{KBr}(\mathrm{aq})$ and $0.125\ \mathrm{M}\ \mathrm{Na}_{3} \mathrm{PO}_{4}(\mathrm{aq})$ at $25^{\circ} \mathrm{C}$. Which statement is true about these solutions, assuming all salts to be strong electrolytes ? (1) 0.125 M $\mathrm{Na}_{3} \mathrm{PO}_{4}$ (aq) has the highest osmotic pressure. (2) 0.500 M $\mathrm{C}_{2} \mathrm{H}_{5} \mathrm{OH}$ (aq) has the highest osmotic pressure. (3) They all have the same osmotic pressure. (4) 0.100 M $\mathrm{Mg}_{3}\left(\mathrm{PO}_{4}\right)_{2}$ (aq) has the highest osmotic pressure. [Jee (Main)-2014]
Sol. (3)
Q. Determination of the molar mass of acetic acid in benzene using freezing point depression is affected by : (1) association (2) dissociation (3) complex formation (4) partial ionization [Jee (Main)-2015 online]
Sol. (1) Acetic acid in non polar solvent (benzene) associates.
Q. A solution at $20^{\circ} \mathrm{C}$ is composed of 1.5 mol of benzene and 3.5 mol of toluene. If the vapour pressure of pure benzene and pure toluene at this temperature are 74.7 torr and 22.3 torr, respectively, then the total vapour pressure of the solution and the benzene mole fraction in equilibrium with it will be, respectively : (1) 38.0 torr and 0.589 (2) 30.5 torr and 0.389 (3) 35.8 torr and 0.280 (4) 35.0 torr and 0.480 [Jee (Main)-2015 online]
Sol. (1) $\mathrm{P}_{\mathrm{T}}=\mathrm{P}_{\mathrm{A}}^{0} \mathrm{X}_{\mathrm{A}}+\mathrm{P}_{\mathrm{B}}^{0} \mathrm{X}_{\mathrm{B}}=74.7 \times \frac{1.5}{5}+22.3 \times \frac{3.5}{5}=38\ \mathrm{torr}$, and the benzene mole fraction in the vapour is $\mathrm{y}_{\text{benzene}}=\frac{74.7 \times 0.3}{38.0} \approx 0.589$
Q. The vapour pressure of acetone at $20^{\circ}$C is 185 torr. When 1.2 g of a non-volatile substance was dissolved in 100 g of acetone at $20^{\circ}$C, its vapour pressure was 183 torr. The molar mass $\left(\mathrm{g} \mathrm{mol}^{-1}\right)$ of the substance is : (1) 128 (2) 488 (3) 32 (4) 64 [Jee (Main)-2015]
Sol. (4) $\frac{185-183}{185}=\frac{1.2 / \mathrm{m}}{100 / 58} \Rightarrow \mathrm{m}=64\ \mathrm{g} / \mathrm{mol}$
Q. For 1 molal aqueous solution of the following compounds, which one will show the highest freezing point ? (1) $\left[\mathrm{Co}\left(\mathrm{H}_{2} \mathrm{O}\right)_{5} \mathrm{Cl}\right] \mathrm{Cl}_{2} \cdot \mathrm{H}_{2} \mathrm{O}$ (2) $\left[\mathrm{Co}\left(\mathrm{H}_{2} \mathrm{O}\right)_{4} \mathrm{Cl}_{2}\right] \mathrm{Cl} .2 \mathrm{H}_{2} \mathrm{O}$ (3) $\left[\mathrm{Co}\left(\mathrm{H}_{2} \mathrm{O}\right)_{3} \mathrm{Cl}_{3}\right] \cdot 3 \mathrm{H}_{2} \mathrm{O}$ (4) $\left[\mathrm{Co}\left(\mathrm{H}_{2} \mathrm{O}\right)_{6}\right] \mathrm{Cl}_{3}$ [Jee (Main)-2018]
Sol. (3)
JEE MAIN 2020 Admit Card for January Attempt | Download Hall Ticket for JEE Main 2020 Exam
To download the JEE Main 2020 admit card, candidates have to log in with their:
• Application number.
• Password or date of birth.
JEE Main 2020 Admit Card will look like the image given below:
### What to Carry in Examination Hall
On the exam day, candidates have to carry the following documents to the exam hall:
1. JEE Main 2020 hall ticket – Print it on an A4 sheet and make sure all the information should be clear and correct.
2. One passport size photograph – Along with the JEE Main admit card, candidates also have to carry one passport size photograph. This should be the same photo as the one uploaded in the form.
3. Valid Original ID Proof – As per the information brochure, candidates have to carry one id proof. It has to be carried in original and should contain a photograph of the candidate.
It is better that candidates carry the same id proof, the details of which were entered in the application form. List of valid id proofs for JEE Main 2020 is as follows:
• PAN card.
• Voter id card.
• Ration card.
• Passport.
• 12th Class admit card with photograph.
• Bank passbook with photograph.
4. PwD Certificate – Candidates who applied for scribe facility have to carry PwD Certificate with the admit card.
### What Not to Carry in the Examination Hall
Following items are not allowed inside the exam hall:
• Text material.
• Calculator.
• Docu pen.
• Log tables.
• Slide rules.
• Electronic watches.
• Mobile phone.
• Paper.
• Metallic objects etc.
Important Instructions – Candidates should note that if they are planning to carry metallic objects such as Kara and Kirpan, etc. should report to the center at least 1 hour 30 minutes before closing of the gate. NTA might ask the candidates to not take it inside the exam hall.
### Exam Time Schedule (Tentative)
| Particulars | Shift 1 | Shift 2 |
|---|---|---|
| Entry in the exam hall | 7.30 am – 9.00 am | 1.00 pm – 2.00 pm |
| Instructions by invigilators | 9.00 am – 9.20 am | 2.00 pm – 2.20 pm |
| Login and read the instructions | 9.20 am | 2.20 pm |
| Exam starts at | 9.30 am | 2.30 pm |
| B.E. / B.Tech exam | 9.30 am – 12.30 pm | 2.30 pm – 5.30 pm |
| B.Arch exam | 9.30 am – 12.30 pm | 2.30 pm – 5.30 pm |
| B.Planning exam | – | 2.30 pm – 5.30 pm |
| B.Arch & B.Planning exam (both) | – | 2.30 pm – 6.00 pm |
JEE Main 2020 Sample Questions released by NTA are available here:
### MATHEMATICS Sample Questions based on Numerical value by NTA
Stay tuned with eSaral for more Updates.
https://leanprover-community.github.io/mathlib4_docs/Lean/Data/Name.html
Lean.Data.Name
def Lean.Name.quickLt (n₁ : Lean.Name) (n₂ : Lean.Name) : Bool
The frontend does not allow user declarations to start with _ in any of its parts. We use name parts starting with _ internally to create auxiliary names (e.g., _private).
Checks whether the name is an implementation-detail hypothesis name.
def Lean.Name.anyS (n : Lean.Name) (f : String → Bool) : Bool
Return true if n contains a string part s that satisfies f.
Examples:
#eval (`foo.bla).anyS (·.startsWith "fo") -- true
#eval (`foo.bla).anyS (·.startsWith "boo") -- false
https://codeforces.com/blog/entry/60338
|
Hello Codeforces!
I'm glad to announce that we will be having Round 22 of MathMash today at 17:00 UTC. The round has been prepared by NicolasCassia, DrSwad and Snpushpita.
The round will consist of 10 problems, to be solved within 2 hours.
The points distribution for the problems is: 500, 500, 750, 1250, 1300, 1500, 2000, 2500, 3000, 3500.
We're inviting you all to join us in the contest. Hopefully we'll have a fun and successful round.
A few details about the contest:
• In order to participate in the contest, you need to register first. Kindly follow this link to do so. Registration deadline has been removed from the contests. So now, you can register at any time before the contest ends.
• I know this announcement is being placed on a coding website, but unfortunately using computer programs for MathMash contests is not allowed. We are trying to reward users' math skills with high ratings, as there are not many sites that do this yet. We also try our best to set up the problems in such a way that any trivial brute-force approach isn't supposed to work (for the mediocre/difficult problems).
• Time penalty is applicable for the contest: each problem is given an appropriate number of points, which will be visible when the round starts. Participants will gain that many points minus the time penalty after successfully submitting to a particular problem. The later the problem submission occurs, the more points are deducted due to the time penalty.
• The round is rated; which means that if you participate in the contest, your rating will be updated at the end of the round based on your rank in it. But you won't be considered as participating in the round if you don't submit any answer at all, even if you're registered for the contest.
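For illustration only: the scoring rule amounts to points minus time penalty per solved problem. A minimal Python sketch, assuming a simple linear per-minute penalty (the announcement does not specify the exact penalty function):

def problem_score(points: int, minutes_elapsed: int, penalty_per_minute: float = 1.0) -> float:
    # Later submissions lose more points to the time penalty.
    return points - penalty_per_minute * minutes_elapsed

print(problem_score(500, 30))  # 470.0 for a 500-point problem solved at minute 30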
» Clashes with July Easy on HackerEarth!! DrSwad, can something be done to avoid this?
» A bit of advice: the problems are sorted by difficulty, but this is very subjective, especially as the difficulty increases, so read as much as you can!
|
2020-09-30 22:04:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25587037205696106, "perplexity": 1022.3951992289909}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402128649.98/warc/CC-MAIN-20200930204041-20200930234041-00427.warc.gz"}
|
https://bocthaler.splet.arnes.si/publications/
|
# Publications
#### Research papers:
1. Wandering domains arising from Lavaurs map with Siegel disk, Analysis & PDE, (2021), (with M. Astorg and H. Peters)
2. On the geometry of simply connected wandering domains, The Bulletin of the London Mathematical Society, (2021)
3. A transcendental Hénon map with an oscillating wandering Short $\mathbb{C}^2$, Mathematische Zeitschrift, (2021), (with L. Arosio and H. Peters)
4. Automorphisms of $\mathbb{C}^m$ with bounded wandering domains, Annali di Matematica Pura ed Applicata, (2021) (Remark: the assumption of being simply connected is not needed, see https://arxiv.org/abs/2004.05420 for a general statement)
5. Entire functions with prescribed singular values, International Journal of Mathematics, (2020)
6. Reduced dynamical systems, Ergodic Theory & Dynamical Systems, (2020), (with U. Kuzman)
7. Automorphisms of $\mathbb{C}^2$ with Parabolic Cylinders, Journal of Geometric Analysis, (2020), (with F. Bracci and H. Peters)
8. A Long $\mathbb{C}^2$ without holomorphic functions, Analysis & PDE, (2016), (with F. Forstnerič)
9. A reconstruction theorem for complex polynomials, International Journal of Mathematics, (2015)
10. Fatou components with punctured limit sets, Ergodic Theory & Dynamical Systems, (2015), (with H. Peters and J.E. Fornaess)
Preprint:
1. Dynamics of skew-products tangent to the identity, (2022), (with M. Astorg)
#### Textbook:
1. Izbrana poglavja iz osnov matematike, Založništvo PeF (2019), (with E. Horvat)
|
2022-08-14 13:14:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8922606110572815, "perplexity": 7265.037727878392}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572033.91/warc/CC-MAIN-20220814113403-20220814143403-00030.warc.gz"}
|
https://python-programs.com/python-program-to-read-two-numbers-and-print-their-quotient-and-remainder/
|
# Python Program to Read Two Numbers and Print Their Quotient and Remainder
Don’t stop learning now. Get hold of all the important Java fundamentals with the Simple java program example guide and practice well.
Given two numbers, the task is to print their quotient and remainder in Python.
Examples:
i) Floating Division
Example1:
Input:
first number =45
second number =17
Output:
The value of quotient after dividing 45 / 17 = 2.6470588235294117
The value of remainder after dividing 45 / 17 = 11
ii) Integer Division
Input:
first number =45
second number =17
Output:
The value of quotient after dividing 45 / 17 = 2
The value of remainder after dividing 45 / 17 = 11
## Program to Read Two Numbers and Print Their Quotient and Remainder in Python
Below are the ways to print the quotient and remainder:
### 1) Using the / and % operators in Python (User Input separated by new line, Float Division)
Approach:
• Scan the given two numbers using int(input()) and store them in two separate variables.
• Calculate the quotient by using the syntax first number /second number and store it in a variable.
• Calculate the remainder by using the syntax first number %second number and store it in a variable.
• Print the above two variables which are the result of the program
• Exit of Program.
Below is the implementation:
# scanning the given two numbers using int(input()) function
# first number
numb1 = int(input("Enter some random number = "))
# second number
numb2 = int(input("Enter some random number = "))
# Calculate the quotient by using the syntax first number /second number
# and store it in a variable.
quotie = numb1/numb2
# Calculate the remainder by using the syntax first number %second number
# and store it in a variable.
remain = numb1 % numb2
# Print the above two variables which are the result of the program
print("The value of quotient after dividing", numb1, "/", numb2, " = ", quotie)
print("The value of remainder after dividing",
numb1, "/", numb2, " = ", remain)
Output:
Enter some random number = 86
Enter some random number = 12
The value of quotient after dividing 86 / 12 = 7.166666666666667
The value of remainder after dividing 86 / 12 = 2
### 2) Using the / and % operators in Python (User Input separated by spaces, Float Division)
Approach:
• Scan the given two numbers using map and split() functions to store them in two separate variables.
• Calculate the quotient by using the syntax first number /second number and store it in a variable.
• Calculate the remainder by using the syntax first number %second number and store it in a variable.
• Print the above two variables which are the result of the program
• Exit of Program.
Below is the implementation:
# Scan the given two numbers using map and split() functions
# to store them in two separate variables.
numb1, numb2 = map(int, input("Enter two random numbers separated by spaces = ").split())
# Calculate the quotient by using the syntax first number /second number
# and store it in a variable.
quotie = numb1/numb2
# Calculate the remainder by using the syntax first number %second number
# and store it in a variable.
remain = numb1 % numb2
# Print the above two variables which are the result of the program
print("The value of quotient after dividing", numb1, "/", numb2, " = ", quotie)
print("The value of remainder after dividing",
numb1, "/", numb2, " = ", remain)
Output:
Enter two random numbers separated by spaces = 45 17
The value of quotient after dividing 45 / 17 = 2.6470588235294117
The value of remainder after dividing 45 / 17 = 11
### 3) Using the // and % operators in Python (User Input separated by spaces, Integer Division)
Approach:
• Scan the given two numbers using map and split() functions to store them in two separate variables.
• Calculate the integer quotient by using the syntax first number //second number and store it in a variable.
• Calculate the remainder by using the syntax first number %second number and store it in a variable.
• Print the above two variables which are the result of the program
• Exit of Program.
Below is the implementation:
# Scan the given two numbers using map and split() functions
# to store them in two separate variables.
numb1, numb2 = map(int, input("Enter two random numbers separated by spaces = ").split())
# Calculate the integer quotient by using the syntax first number // second number
# and store it in a variable.
quotie = numb1//numb2
# Calculate the remainder by using the syntax first number %second number
# and store it in a variable.
remain = numb1 % numb2
# Print the above two variables which are the result of the program
print("The value of quotient after dividing", numb1, "/", numb2, " = ", quotie)
print("The value of remainder after dividing",
numb1, "/", numb2, " = ", remain)
Output:
Enter two random numbers separated by spaces = 45 17
The value of quotient after dividing 45 / 17 = 2
The value of remainder after dividing 45 / 17 = 11
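Python also provides the built-in divmod() function, which returns the quotient and remainder as a pair in a single call; a minimal equivalent of the integer-division method above:

# divmod(a, b) returns the tuple (a // b, a % b)
numb1, numb2 = 45, 17
quotie, remain = divmod(numb1, numb2)
print("The value of quotient after dividing", numb1, "/", numb2, " = ", quotie)
print("The value of remainder after dividing", numb1, "/", numb2, " = ", remain)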
Related Programs:
|
2022-05-23 23:11:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18449187278747559, "perplexity": 2837.0247640865186}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662562106.58/warc/CC-MAIN-20220523224456-20220524014456-00322.warc.gz"}
|
http://forum.allaboutcircuits.com/threads/puzzled-by-ad623.95355/
|
Discussion in 'General Electronics Chat' started by tommydyhr, Mar 7, 2014.
1. ### tommydyhr Thread Starter Active Member
Today I built a circuit which consists of the following blocks:
Voltage regulator, Wheatstone bridge with strain gauge, and an instrumentation amplifier. A schematic can be seen below (the potmeter R2 is our "strain gauge", which has an $R_0=120\Omega$).
I've never worked with either a strain gauge or an instrumentation amplifier, but I can't for the life of me figure out what's wrong with the circuit. My issue is that, no matter what I do, I can't zero the output of the op-amp. Even when $V_{in-}>V_{in+}$, it never went below ~10 mV.
Can anyone tell me what could potentially be wrong in the circuit? Thanks.
2. ### Kermit2 AAC Fanatic!
pin 6 is your output pin.
pin 5 is an output reference and should be tied to your ground. It can also host an offset voltage to move your output voltage to ground potential if your output load does not share a common ground with your input circuit.
Edit: previous pin number refs. were wrong.
3. ### BillB3857 Senior Member
I would start by lowering the value of R7 to 100 ohms and increasing your Zeroing pot, R1, to 30 to 40 ohms, or whatever you can find in a standard value around that figure. You may have just run out of adjustment range. My guess is that you have R1 set to put as much resistance as possible, but just can't quite reach zero. Lowering of R1 resistance will cause a greater degree of unbalance. Am I right?
4. ### tommydyhr Thread Starter Active Member
That's a bit confusing. In the datasheet (and in the above schematic), pin 8 is for the feedback resistor, and pin 5 is the reference.
5. ### tommydyhr Thread Starter Active Member
Unfortunately I already tried replacing R7, just to make sure. At one point, the inverting terminal was at 3 V whilst non-inverting input had 2.9V. At that time, the output should be at 0.000 V, right?
6. ### Kermit2 AAC Fanatic!
I was wrong, the post has been changed to show the correct pins. My point was to tell you about the use of the ref pin to correct output voltage.
7. ### crutschow Expert
Nothing is wrong. If you look at the OUTPUT section of the AD623 data sheet SPECIFICATIONS table you will see that the minimum output voltage with a single supply is 0.01V (10mV) as you measured.
If you want the output to be at some other voltage at null and allow positive or negative excursions from there, then you can connect a voltage to pin 5. For example if you use two equal value resistive dividers to generate 1/2 the supply voltage and apply that to pin 5, then the output null point will also be at 1/2 the supply voltage and the bridge going off null will cause the output to go either plus or minus from that voltage.
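For a concrete number: with two equal resistors $R$ from supply to ground, the divider output is $V_{ref} = V_s \cdot R/(R+R) = V_s/2$, so on a 5 V single supply the output null sits at 2.5 V and the bridge going off null can swing it either way from there.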
8. ### tommydyhr Thread Starter Active Member
Thank you, I had totally missed that line. I guess I took the "rail-to-rail" term too literally. Back to the drawing board I go!
Thanks everyone
|
2017-01-19 11:07:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 2, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4282078444957733, "perplexity": 1988.1409278762594}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280587.1/warc/CC-MAIN-20170116095120-00087-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://kb.osu.edu/dspace/handle/1811/20826
|
# Knowledge Bank
## University Libraries and the Office of the Chief Information Officer
# INFRARED CAVITY RINGDOWN SPECTROSCOPY OF JET-COOLED PAHS: A COMPARISON WITH MATRIX SPECTRA
Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/20826
Title: INFRARED CAVITY RINGDOWN SPECTROSCOPY OF JET-COOLED PAHS: A COMPARISON WITH MATRIX SPECTRA
Creators: Huneycutt, A. J.; Casaes, R.; McCall, Benjamin J.; Saykally, R. J.; Chung, Chao-Yu; Lee, Yuan-Pern
Issue Date: 2003
Abstract: Infrared absorption spectra of the CH stretching region were observed for naphthalene, anthracene, phenanthrene, pyrene, and perylene using a heated supersonic slit source and cavity ringdown spectroscopy. Results are compared closely with 10 K Ar matrix spectra to determine general matrix perturbation effects for this class of molecules. Fundamental transitions in the matrix spectra were subject to spectral shifts of up to $3.0\ \mathrm{cm}^{-1}$ and band widths were generally broader than in the jet-cooled spectra by up to 80%. Weak features not predicted by theory were observed in both Ar matrix and gas-phase spectra with similar relative intensities, which suggests assignment to overtones and combination bands.
URI: http://hdl.handle.net/1811/20826
Other Identifiers: 2003-MI-03
|
2014-04-19 12:55:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.474549263715744, "perplexity": 14820.309531610372}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://indico.nikhef.nl/event/2690/
|
Theory
# Theory seminar: Hólmfríður (Hofie) Hannesdóttir
Thursday, 8 October 2020 (Europe/Amsterdam)
at Nikhef
Description: Sequential Discontinuities of Scattering Amplitudes

Scattering amplitudes are essential ingredients in theoretical predictions for collider experiments. In some cases, symmetries and other constraints can fix amplitudes completely, and hence conventional Feynman diagram computations are circumvented. The traditional cutting rules relate discontinuities across branch cuts of amplitudes to cuts through the corresponding Feynman diagrams. Here we probe the analytic structure further by generalizing the cutting rules, relating sequential discontinuities (discontinuities of discontinuities) to multiple cuts. As a corollary, we present a new proof in perturbation theory of the Steinmann relations, which forbid sequential discontinuities in partially overlapping momentum channels, in the case where all external particles are massive. These types of formulas are crucial in determining amplitudes to high loop order using modern bootstrapping methods, suggesting that our new relations could provide useful constraints for such programs.
|
2021-01-27 00:48:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6559403538703918, "perplexity": 2344.8876427996915}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704804187.81/warc/CC-MAIN-20210126233034-20210127023034-00763.warc.gz"}
|
https://answerbun.com/artificial-intelligence/how-large-should-the-corpus-be-to-optimally-retrain-the-gpt-2-model/
|
# How large should the corpus be to optimally retrain the GPT-2 model?
Artificial Intelligence Asked by Andreas Toresäter on September 20, 2020
I just started working with the GPT-2 models and want to retrain one on a pretty narrow topic, so I am having trouble finding training material.
How large should the corpus be to optimally retrain the GPT-2 model? And what is the bare minimum size? Should it simply be as large as possible, or can it flip over and make the model worse in some way?
I am also not certain for how many steps you should let the retraining run. I have been using 6000 steps when testing, and it seems not much happens after that; the loss only moved from 0.2 to 0.18 over the last 1000 steps.
|
2022-08-15 03:01:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19572612643241882, "perplexity": 2468.2328107274166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572127.33/warc/CC-MAIN-20220815024523-20220815054523-00679.warc.gz"}
|
https://brilliant.org/problems/a-cool-easy-problem-2/
|
A cool easy problem 2
Electricity and Magnetism Level 2
A piece of wire of resistance 4 ohm is bent through 180° at its midpoint and the two halves are twisted together; the resistance is then 'x'. Find 'x'.
|
2016-10-28 19:52:10
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8899180889129639, "perplexity": 7019.912722979779}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988725470.56/warc/CC-MAIN-20161020183845-00454-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://gateoverflow.in/2466/gate1994-1-23
|
Consider the following two functions:
$g_1(n) = \begin{cases} n^3 \text{ for } 0 \leq n \leq 10,000 \\ n^2 \text{ for } n \geq 10,000 \end{cases}$
$g_2(n) = \begin{cases} n \text{ for } 0 \leq n \leq 100 \\ n^3 \text{ for } n > 100 \end{cases}$
Which of the following is true?
1. $g_1(n) \text{ is } O(g_2(n))$
2. $g_1(n) \text{ is } O(n^3)$
3. $g_2(n) \text{ is } O(g_1(n))$
4. $g_2(n) \text{ is } O(n)$
For asymptotic complexity, we assume sufficiently large $n$. So, $g_1(n) = n^2$ and $g_2(n) = n^3$. Growth rate of $g_1$ is less than that of $g_2.$ i.e., $g_1(n) = O(g_2(n)).$
Options A and B are true here.
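For a quick sanity check, both functions can be evaluated past the crossover; a minimal Python sketch:

def g1(n): return n**3 if n <= 10_000 else n**2
def g2(n): return n if n <= 100 else n**3

# For sufficiently large n, g1(n) = n^2 grows slower than g2(n) = n^3,
# so g1(n) <= 1 * g2(n) holds beyond the crossover.
for n in [10_001, 10**6]:
    print(n, g1(n) <= g2(n))  # True, True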
I think only A is correct. In the second option they might be mentioning the time complexity of $g_1$ itself, which is not $O(n^3)$; it is $O(n^2)$ for $n \geq 10000$, i.e., for high values of $n$.
@harit: $g_1$ is $n^2$ for $n \geq 10000$, so we can say $g_1$ is asymptotically $O(n^3)$.
Why not $O(n^2)$? It would be asymptotically tighter than $O(n^3)$.
$O(n^2)$ is also correct.
Yes. Both (a) and (b) are correct. $n^{2}$ is $O(n^{3})$.
I thought so, but when the paper says to select the correct (only one) choice, it creates doubt!
| Index | Condition | $g_{1}(n)$ | $g_{2}(n)$ | Time Complexity (B) | Time Complexity (A) |
| --- | --- | --- | --- | --- | --- |
| 1 | $0 \leq n \leq 100$ | $n^{3}$ | $n$ | $O(n^{3})$ | $O(g_{2}(n))$ -- fails |
| 2 | $101 \leq n \leq 10000$ | $n^{3}$ | $n^{3}$ | $O(n^{3})$ | $O(g_{2}(n))$ |
| 3 | $n \geq 10001$ | $n^{2}$ | $n^{3}$ | $O(n^{3})$ | $O(g_{2}(n))$ |
Thus the right option should be B
This is wrong; big-O cares for only large $n$.
|
2018-02-24 16:05:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9529169201850891, "perplexity": 2292.46061285939}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815843.84/warc/CC-MAIN-20180224152306-20180224172306-00360.warc.gz"}
|
https://stats.stackexchange.com/questions/514450/taylor-expansion-of-gaussian-process-function-with-input-noise
|
# Taylor expansion of Gaussian process function with input noise
I am reading "Gaussian Process Training with Input Noise" by Andrew McHutchon and Carl Edward Rasmussen, where it is assumed that the inputs $$x$$ are noisy measurements of the actual latent input $$\tilde{x}$$ , i.e $$x = \tilde{x}+\epsilon_x$$ with $$\epsilon_x\sim N(0,\Sigma_x)$$. The observation is: $$y=f(\tilde{x}+\epsilon_x)+\epsilon_y$$ under a GP model $$f$$.
The authors consider a Taylor expansion of the above expression up to first-order terms as follows: $$y = f(x) + \epsilon_x^T\delta_\bar{f}+\epsilon_y$$, where $$\delta_\bar{f}$$ represents the derivative of the mean of the GP function. I'd like to know why this first-order expansion would be an accurate-enough approximation for the input error propagation.
If anyone could help, I would be very grateful.
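For intuition, the first-order term is the classic delta method: input noise is propagated through the local slope of the mean, predicting an added output variance of roughly $$(\partial_x \bar{f})^2 \Sigma_x$$ in 1-D. A minimal numeric sketch with a stand-in mean function (all names here are illustrative, not from the paper):

import numpy as np

rng = np.random.default_rng(0)
f, df = np.sin, np.cos          # stand-in for the GP posterior mean and its derivative
x_tilde, sigma_x = 1.0, 0.1     # latent input and input-noise standard deviation

# Monte Carlo: push noisy inputs through f and measure the output spread
samples = f(x_tilde + sigma_x * rng.standard_normal(100_000))
print(samples.var())                 # empirical output variance
print((df(x_tilde) * sigma_x) ** 2)  # first-order (delta-method) prediction

The two agree closely here because $$\sigma_x$$ is small relative to the scale on which $$f$$ curves, which is exactly the regime in which a first-order expansion is adequate.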
|
2021-10-28 21:21:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9446982741355896, "perplexity": 413.7880697280949}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588526.57/warc/CC-MAIN-20211028193601-20211028223601-00425.warc.gz"}
|
https://physics.stackexchange.com/tags/hamiltonian-formalism/hot
|
# Tag Info
### "Constrain then quantise" vs. "quantise then constrain"
The phrase "quantization commutes with constraints" usually refers to the Guillemin-Sternberg conjecture. It has only been proved for a limited class of (gauge) theories. It should be stressed that ...
### Derivation of Hamilton-Jacobi (HJ) Equation
Hamilton's principal function $S\equiv F$ is the sole unknown variable in the HJ equation. E.g., in contrast, the Hamiltonian $H$ is assumed to be known. That's how separation of variables (SOV) works. ...
### What are good books/chapters of books or articles to study canonical transformations in quantum mechanics at a graduate level?
I am unaware of many detailed resources in the area targeted towards typical physics graduates. However, there is a CRM monograph dealing with the Function Theory on Symplectic Manifolds which is ...
|
2023-02-08 16:57:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6998540759086609, "perplexity": 2640.5670636649197}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500837.65/warc/CC-MAIN-20230208155417-20230208185417-00321.warc.gz"}
|
https://github.com/syreal17/Cardinal
|
Cardinal
Similarity Analysis to Defeat Malware Compiler Variations
Overview
CPC Aggregation by Reversing and Dumping in Arrays Lightweight (CARDINAL) is a tool that can find similarities between binaries compiled with different optimization flags, or even completely different compilers. CARDINAL accurately finds the number of arguments at each callsite, also known as the callsite parameter cardinalities (CPCs), and creates an easily comparable signature by aggregating them per function and dumping the result into a Bloom filter. Bloom filters are compared via the Jaccard index, from which a similarity score is calculated. CARDINAL is proven to tolerate differences between binaries produced using the same source but different compiler configurations, from using different optimization levels to using completely different compilers. We hope that CARDINAL paves the way for future static analyses that similarly tolerate radical code transformations like a dynamic analysis, yet still retain the benefits of static analysis. For more information, see the paper.
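For intuition, the comparison step boils down to a Jaccard index over the sets of per-function CPC chains; a minimal sketch with plain Python sets (the real pipeline uses pybloom Bloom filters, and the chain strings below are made-up example data):

def jaccard(a: set, b: set) -> float:
    # Jaccard index: |A intersect B| / |A union B|
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Per-function CPC chains extracted from two binaries (illustrative)
cpcs_bin1 = {"2,1,3", "0", "1,1", "4,2"}
cpcs_bin2 = {"2,1,3", "0", "1,2", "4,2"}
print(jaccard(cpcs_bin1, cpcs_bin2))  # 0.6 -> similarity score in [0, 1]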
Requirements
• IDA Pro
• CMake
• Python 2.7 with pip
• Cygwin (Required if running CARDINAL on Windows, which is recommended)
• LLVM (Optional: for ground truth calculation tool, cpc-tool)
Getting Started
2. Install Python modules: pip install capstone pyelftools pybloom editdistance
3. Download the repo: git clone https://github.com/syreal17/Cardinal.git
Notes
• make sure you can connect to the IDA license server
• Add "Cardinal/dev/feature_extraction" to PythonPath environment variable
• make sure that "C:\Python27\python.exe" exists and is the Python install that you add the above packages to
Usage
Using CARDINAL directly
1. idaw64.exe -A -S"Cardinal\dev\feature_extraction\cpc\ida_cpc_extract.py -l" name_of_test_binary1.elf This extracts the CPC features from the binary in the form of newline delimited chains of CPC's. Each line or chain represents all the CPC's in one function.
2. python Cardinal\dev\similarity\bloom-jaccard\to_bloom.py name_of_test_binary1.elf.cpc.feature This enters all of the CPC chains into a Bloom filter for quick comparison.
3. python Cardinal\dev\similarity\bloom-jaccard\bloom_jaccard_ind.py name_of_test_binary1.elf.cpc.feature.bloom name_of_test_binary2.elf.cpc.feature.bloom This yields a number between 0 and 1 inclusive. 0 means none of the CPC chains matched in the bloom filters and 1 means all of the CPC chains matched in the bloom filters.
Using the test harness
The test harness automates the above steps for a large number of binaries. We run the tests en masse by executing a find command and running the above steps on all matching files. The test harness is designed to perform the isocompiler modulation, different compiler, and different source tests, and as such, the harness only handles files that conform to the naming scheme adopted for the aforementioned tests. The scheme is as follows: [name_of_test_binary].simple.lin.[name_of_compiler].[optimization_flag].elf "Name of compiler" can be "clang" or "gcc" and "optimization flag" can be "o0," "o1," "o2," or "o3".
To run the isocompiler modulation, different compiler, and different source tests on a group of binaries simply do:
cd Cardinal/tests/windows
./test_bins.sh [name_of_test_binary1] [name_of_test_binary2] ... [name_of_test_binaryN]
For example ./test_bin.sh treecc vis burg if using our corpora. This creates multi_sample.report with all the data from running isocompiler modulation, different compiler and different source tests. The data can be put into an R consumable form by running the following:
./create_isocomp_dats.sh [name_of_test_binary1] [name_of_test_binary2] ... [name_of_test_binaryN]
./create_diffcomp_dats.sh [name_of_test_binary1] [name_of_test_binary2] ... [name_of_test_binaryN]
./create_diffbin_dats.sh [name_of_test_binary1] [name_of_test_binary2] ... [name_of_test_binaryN]
The create dats scripts read "multi_sample.report" and put the data into a better format. The "graph" variations of these scripts put the data into two columns, test and similarity score, whereas the original create dat scripts put the data into N columns, one for each binary. More details for the testing framework are covered in Cardinal/tests/windows/README.md.
|
2023-03-21 18:41:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3503480851650238, "perplexity": 7653.456826818738}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943704.21/warc/CC-MAIN-20230321162614-20230321192614-00636.warc.gz"}
|
https://math.stackexchange.com/questions/1128705/when-change-making-problem-has-an-optimal-greedy-solution
|
# When change making problem has an optimal greedy solution?
A well-known Change-making problem, which asks
how can a given amount of money be made with the least number of coins of given denominations
for some sets of coins (50c, 25c, 10c, 5c, 1c) will yield an optimal solution by using a greedy algorithm (grab the highest value coin). For some other sets one has to use dynamic programming.
Is there any way to prove whether, for a given set of coins, a greedy solution will always yield an optimal solution? Coin denominations can be any natural numbers (not only smaller than 100) and there can be any number of different coin denominations.
Suppose the set of coin denominations is $\{a_1,\ldots,a_n\}$ where $a_1>\ldots>a_n=1$. Let $S$ be the given amount of money. Define $m_t=\lceil a_{t-1}/a_t\rceil$ and $S_t=m_ta_t$. Let $Opt(S)$ (respectively $G(S)$) denote the number of coins used in an optimal (respectively, the greedy) solution. Then we have the following theorem:
If $S_t < a_{t-2}$ for all $t\in\{3,\ldots,n\}$, then
$Opt(S)=G(S) \quad$ iff $\quad G(S_t)\leq m_t \quad$ (for all $t\in\{2,\ldots,n\}$)
There is a simpler version which states only the sufficient condition:
$Opt(S)=G(S) \quad$ if $\quad G(S_t-a_{t-1})\leq m_t-1 \quad$ (for all $t\in\{2,\ldots,n\}$)
Let $C = \langle c_1, \ldots, c_n \rangle$ be the set of coin denominations in a sorted order (that is, $i \le j \to c_i \le c_j$). Let $V = \sum_{i=1}^{n-1} c_i$ (that is, the sum of everything except for the largest coin). Now, for any integer $k > V \cdot c_n$, I claim that in the optimal set of coins that sums exactly to $k$, there exists at least one coin of value $c_n$ (the max coin). To see this, note that by the pigeonhole principle there exists at least one coin $c_i$ that appears at least $c_n$ times in this set (otherwise their sum cannot exceed $V \cdot c_n$). Replace these $c_n$ copies of $c_i$ with $c_i$ coins of value $c_n$ (same total value, strictly fewer coins), and we arrive at a contradiction.
Thus, we only need to check for all integers between $1$ and $V \cdot c_n$, inclusively, whether the greedy solution is optimal to sum to them, since we know that for all $k > V \cdot c_n$, an optimal solution will consist of multiple $c_n$ coins until the remaining value drops under or equal to $V \cdot c_n$.
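This bound makes a direct test straightforward; a minimal brute-force sketch in Python, assuming the denominations include 1 so that every amount is representable:

def greedy_count(k, coins):
    # Number of coins the greedy algorithm uses for amount k
    count = 0
    for c in sorted(coins, reverse=True):
        count += k // c
        k %= c
    return count

def opt_count(k, coins):
    # Optimal number of coins via dynamic programming
    dp = [0] + [float("inf")] * k
    for amount in range(1, k + 1):
        dp[amount] = 1 + min(dp[amount - c] for c in coins if c <= amount)
    return dp[k]

def greedy_is_optimal(coins):
    bound = sum(sorted(coins)[:-1]) * max(coins)  # V * c_n from the argument above
    return all(greedy_count(k, coins) == opt_count(k, coins) for k in range(1, bound + 1))

print(greedy_is_optimal([1, 5, 10, 25, 50]))  # True
print(greedy_is_optimal([1, 3, 4]))           # False: 6 = 3 + 3 beats 4 + 1 + 1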
The paper by Pearson, "A Polynomial-Time Algorithm for the Change-Making Problem", Operation Research Letters 33:3 (may 2005), pp. 231-234 (an earlier technical report is here) gives the most efficient algorithm to check this to date.
|
2021-04-16 21:25:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9233354330062866, "perplexity": 118.86492969872313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038089289.45/warc/CC-MAIN-20210416191341-20210416221341-00178.warc.gz"}
|
https://jarrettmeyer.com/2016/11/14/statistics-done-wrong
|
I recently started reading Statistics Done Wrong: The Woefully Complete Guide by Alex Reinhart. Right from the get-go, the book is a fascinating look at methodology and statistical illiteracy. Thousands of academic papers are published every year by authors who, while they may be experts in their specific fields, have very little statistical training.
In his first chapter, Reinhart demonstrates this problem when discussing p-values. Let’s assume you have a true/false test, and a student has 9 correct answers out of 12. Does the student actually know the material, or did the student just guess? The p-value can guide us in answering this question.
Before continuing, let’s define a p-value. A p-value is the probability of finding the observed result, given that the null hypothesis, $$H_0$$, is true. In context of our question, this should make sense. If we have a 12 question true/false test and have students guess at responses, then we expect the average student to score a 50%, or 6 out of 12. However, because of the nature of randomness, if we collect enough data, we would expect some students to get 0, 1, 2, 3, 4, 5, 7, 8, 9, 10, 11, or 12 questions correct as well.
This ought to be our first interesting conclusion. Statistics tell us a great deal about populations, but they tell us very little about individuals. That said, predictive modeling is a subfield of statistics, where you do try to make conclusive statements about individuals. That, however, is beyond the scope of this post.
Continuing with the problem at hand, what is the probability that you could randomly guess 12 true/false questions and get 9 correct? We call this the probability density function, or PDF, and we can model this problem with a binomial distribution.
\begin{align}
P(X=k) &= \binom{12}{k}\left(0.5\right)^{k}\left(0.5\right)^{12-k} \\
P(X=9) &= \binom{12}{9}\left(0.5\right)^{9}\left(0.5\right)^{3} \\
P(X=9) &= 220 \times 0.00195 \times 0.125 \\
P(X=9) &= 0.0537
\end{align}
In R, we can get this value much more easily with the dbinom function.
> dbinom(9, 12, 0.5)
## [1] 0.05371094
If we gave this test to 100 students, we would expect about 5 of them to score 9 out of 12.
To compute this p-value, to determine if this score is really different from chance, we simply compute the upper-tail cumulative density of the binomial distribution.
\begin{align} P(X \ge k) &= \sum_{i=k}^{12} \binom{12}{i}\left(0.5\right)^{i}\left(0.5\right)^{12-i} \end{align}
Computing this high tail distribution is done with the pbinom function. It is a little more complicated to express, as it is an infinite series beta function, but the idea is the same. We add up all of the values to the right of the blue line.
> pbinom(8, 12, 0.5, lower.tail = FALSE)
## [1] 0.07299805
This is our p-value. Graphically, this is the cumulative values of the dots to the right of the blue line in the following image.
### Reframing the Question
This is not where the author’s story ends. He then asks, what if we presented the data differently? What if the student was allowed to answer questions until he or she got three incorrect responses, and in this scenario, the student got his or her third incorrect response on the 12th question.
Already, we know more information. First of all, we know the 12th response was incorrect. Second, we know there are more than 12 possible questions. Some students may only be given 3 questions, get all 3 wrong, and their test will be over. It is also possible that some students may be given dozens, hundreds, or even thousands of questions before they get 3 incorrect responses. This time, we ask a slightly different question: what is the probability that a student gets his or her third question wrong on the 12th question, given random guessing?
Instead of a binomial distribution, this data set would now follow a negative binomial distribution.
> pnbinom(8, 3, 0.5, lower.tail = FALSE)
## [1] 0.03271484
Note that in our R functions, we use 8 instead of 9 in both of our examples. This is because the binomial and negative binomial distributions are discrete probability functions. They measure to the left of, and including, the test value. If we want the value plus the values to the right - lower.tail = FALSE - we need to use 8 instead of 9.
Also, the R function is written in a slightly odd way. pnbinom(k, r, p) means answering the question, "What is the probability of getting k successes before getting r failures?" Since we are using lower.tail = FALSE, we want to know the answer to, "What is the probability of getting 9 or more correct answers before getting 3 failures, when just random guesses are used?"
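For readers cross-checking in Python, the same tail probabilities are available through scipy.stats, where sf is the survival function P(X > k); a quick sketch, assuming scipy is installed:

from scipy.stats import binom, nbinom

# sf(k) = P(X > k), matching R's lower.tail = FALSE convention
print(binom.sf(8, 12, 0.5))   # ~0.0730, same as pbinom(8, 12, 0.5, lower.tail = FALSE)
print(nbinom.sf(8, 3, 0.5))   # ~0.0327, same as pnbinom(8, 3, 0.5, lower.tail = FALSE)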
What makes this result so interesting? We have the exact same data: a student’s score of 9 out of 12. In this scenario, we’re saying that there is only a 3.3% chance that the result could be this extreme.
If we are using $$\alpha = 0.05$$, then our chosen model makes all the difference if we should accept these results as random chance or not. This is why statistical literacy is so vital. How you create your statistical design and how you build your model can make a difference in the result significance and the subsequent interpretation of the study. Not knowing enough about statistics can lead you to proclaim facts that aren’t there or ignore facts that are.
|
2021-05-16 02:12:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9074527025222778, "perplexity": 465.4855923223514}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991659.54/warc/CC-MAIN-20210516013713-20210516043713-00559.warc.gz"}
|