Miklos Schweitzer 2001 5: Functional Equation conditions

Prove that if the function $f$ is defined on the set of positive real numbers, its values are real, and $f$ satisfies the equation $$f\left( \frac{x+y}{2}\right) + f\left(\frac{2xy}{x+y} \right) =f(x)+f(y)$$ for all positive $x,y$, then $$2f(\sqrt{xy})=f(x)+f(y)$$ for every pair $x,y$ of positive numbers.

Source: Miklos Schweitzer Memorial Competition 2001

I can see how the repeated application of the functional equation condition upon itself forms a bound, but how can I formally prove this?

• It would be (too) easy to prove if $f$ were assumed to be continuous, which it's not. – dxiv Feb 20 '17 at 1:51
• Found a solution here : artofproblemsolving.com/community/… – Rutger Moody Mar 3 '17 at 13:50
• @RutgerMoody Your link is now inactive - which is why it's always good to formally post an answer on this forum rather than just link a solution hosted somewhere else in the comments – user574848 Mar 31 at 9:27
• @user574848 Thx, I'll keep that in mind. 
– Rutger Moody Mar 31 at 13:20

By repeated application of the function property for some positive reals $$a,b,c,d$$ as you suggested:

\begin{align*}f(a)+f(b)+f(c)+f(d)&= f\left({a+b\over2}\right)+f\left({2ab\over a+b}\right)+f\left({c+d\over2}\right)+f\left({2cd\over c+d}\right) \\ &=f\left({{a+b\over2}+{c+d\over2}\over2}\right)+f\left({2{a+b\over2}\cdot{c+d\over2}\over{a+b\over2}+{c+d\over2}}\right)+f\left({{2ab\over a+b}+{2cd\over c+d}\over2}\right)+f\left({2{2ab\over a+b}\cdot{2cd\over c+d}\over{2ab\over a+b}+{2cd\over c+d}}\right) \\ &= f\left({a+b+c+d\over4}\right)+f\left({(a+b)(c+d)\over a+b+c+d}\right)+f\left({abc+abd+acd+bcd\over(a+b)(c+d)}\right)+f\left({4abcd\over abc+abd+acd+bcd}\right) \end{align*}

Now if we 'swap' $$b$$ and $$c$$ and repeat a similar process, one finds

$$f\left({(a+b)(c+d)\over a+b+c+d}\right)+f\left({abc+abd+acd+bcd\over(a+b)(c+d)}\right)=f\left({(a+c)(b+d)\over a+b+c+d}\right)+f\left({abc+abd+acd+bcd\over(a+c)(b+d)}\right)\tag{1}$$

Now substitute $$a=c$$, $$b=\frac{a^2}{d}$$ and $$t=\frac{a}{b}+\frac{b}{a}$$ so that

$$\frac{(a+b)(c+d)}{ a+b+c+d}=\frac{abc+abd+acd+bcd}{(a+b)(c+d)}=a$$

$${(a+c)(b+d)\over a+b+c+d}=a\cdot{2t\over2+t}$$

and

$${abc+abd+acd+bcd\over(a+c)(b+d)}=a\cdot{2+t\over2t}$$

If we substitute these results into $$(1)$$, we find

$$2f(a)=f\left(a\cdot{2t\over2+t}\right)+f\left(a\cdot{2+t\over2t}\right)\tag{2}$$

It can be seen that $$t\geq 2$$ by AM-GM and thus $$\frac{2t}{2+t}\geq 1$$. Hence for all pairs $$x\geq y$$ there exist suitable $$a$$ and $$t$$ so that $$a\cdot{2t\over2+t}=x,a\cdot{2+t\over2t}=y\text{ and } \sqrt{xy}=a$$ so that $$(2)$$, and thus the required property, holds.
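Not part of the proof, but as a numerical sanity check of the algebra above: $f(x)=\log x$ is one function satisfying the hypothesis (the arithmetic and harmonic means of $x,y$ multiply to exactly $xy$), and the claimed simplifications under the substitution $c=a$, $b=a^2/d$, $t=a/b+b/a$ can be verified directly:

```python
import math

# f = log satisfies f((x+y)/2) + f(2xy/(x+y)) = f(x) + f(y),
# since the AM and HM of x, y multiply to xy.
f = math.log
x, y = 3.7, 9.2
assert abs(f((x + y) / 2) + f(2 * x * y / (x + y)) - (f(x) + f(y))) < 1e-12
assert abs(2 * f(math.sqrt(x * y)) - (f(x) + f(y))) < 1e-12

# Check the substitution step: with c = a, b = a**2/d, t = a/b + b/a,
# the four arguments in (1) reduce as claimed.
a, d = 2.0, 5.0
b, c = a**2 / d, a
t = a / b + b / a
s = a + b + c + d
abcd = a*b*c + a*b*d + a*c*d + b*c*d
assert abs((a + b) * (c + d) / s - a) < 1e-12
assert abs(abcd / ((a + b) * (c + d)) - a) < 1e-12
assert abs((a + c) * (b + d) / s - a * 2 * t / (2 + t)) < 1e-12
assert abs(abcd / ((a + c) * (b + d)) - a * (2 + t) / (2 * t)) < 1e-12
```

The last two assertions are exactly the reductions that turn identity (1) into identity (2).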
[This article was first published on SAS and R, and kindly contributed to R-bloggers].

A student came with a question about how to snag data from a PDF report for analysis. Once she'd copied things, her text file looked like:

 1 Las Vegas, NV --- 53.3 --- ---  1
 2 Sacramento, CA --- 42.3 --- ---  2
 3 Miami, FL --- 41.8 --- ---  3
 4 Tucson, AZ --- 41.7 --- ---  4
 5 Cleveland, OH --- 38.3 --- ---  5
 6 Cincinnati, OH 15 36.4 --- ---  6
 7 Colorado Springs, CO --- 36.1 --- ---  7
 8 Memphis, TN --- 35.3 --- ---  8
 8 New Orleans, LA --- 35.3 --- ---  8
10 Mesa, AZ --- 34.7 --- --- 10
11 Baltimore, MD --- 33.2 --- --- 11
12 Philadelphia, PA --- 31.7 --- --- 12
13 Salt Lake City, UT --- 31.9 17 --- 13

Here the --- means a missing value, and there's some complexity induced because some cities are made up of multiple words (so the number of space-delimited fields varies). Unfortunately, she had hundreds of such datasets to process. While section 1.1.3 (p. 3) of the book describes reading more complex files, neither it nor the entry on finding the Amazon sales rank is directly relevant here.

R

In R, we first craft a function that processes a line and converts each field other than the city name into a numeric variable. The function works backwards to snag the appropriate elements, then calculates what is left over to stash in the city variable. 
readellaline = function(thisline) {
  thislen = length(thisline)
  id = as.numeric(thisline[1])
  v1 = as.numeric(thisline[thislen-4])
  v2 = as.numeric(thisline[thislen-3])
  v3 = as.numeric(thisline[thislen-2])
  v4 = as.numeric(thisline[thislen-1])
  v5 = as.numeric(thisline[thislen])
  city = paste(thisline[2:(thislen-5)], collapse=" ")
  return(list(id=id, city=city, v1=v1, v2=v2, v3=v3, v4=v4, v5=v5))
}

However, before this function can work, it needs each line to be converted into a character vector containing each "word" (character strings divided by spaces) as a separate element. We'll do this by first reading each line, then split()ting it into words. This results in a list object, where the items in the list are the vectors of words. Then we can call the readellaline() function for each vector using an invocation of sapply() (section 1.3.2, p. 12), which avoids the need for a for loop. The resulting object can be transposed then coerced into a dataframe.

file = readLines("c:/book/ella.txt")   # read the input file
split = strsplit(file, " ")            # split up fields for each line
processed = as.data.frame(t(sapply(split, readellaline)))
processed

This generates the following output:

   id                 city v1   v2 v3 v4 v5
1   1        Las Vegas, NV NA 53.3 NA NA  1
2   2       Sacramento, CA NA 42.3 NA NA  2
3   3            Miami, FL NA 41.8 NA NA  3
4   4           Tucson, AZ NA 41.7 NA NA  4
5   5        Cleveland, OH NA 38.3 NA NA  5
6   6       Cincinnati, OH 15 36.4 NA NA  6
7   7 Colorado Springs, CO NA 36.1 NA NA  7
8   8          Memphis, TN NA 35.3 NA NA  8
9   8      New Orleans, LA NA 35.3 NA NA  8
10 10             Mesa, AZ NA 34.7 NA NA 10
11 11        Baltimore, MD NA 33.2 NA NA 11
12 12     Philadelphia, PA NA 31.7 NA NA 12
13 13   Salt Lake City, UT NA 31.9 17 NA 13

SAS

The input tools in SAS can accommodate this data directly, without resorting to reading the data once and then processing it. However, two separate special tools are needed. 
These are: 1) the dlm option to the infile statement, to make both commas and spaces be treated as field delimiters, and 2) the & format modifier, which allows spaces within a variable that's being read in.

data ella4;
  infile "c:\book\ella.txt" dlm=", ";
  input id city & $20. state $2. v1 - v5;
run;

proc print data=ella4;
run;

In effect, the foregoing input statement instructs SAS that the field following id is to be named city, that it may have spaces in it, and that it is a character variable with a length of up to 20 characters; SAS will read into that variable for 20 columns, or until the next non-space delimiter. The dlm option means that a comma, as well as a space, is treated as a delimiter. Examining the log resulting from running the preceding code will reveal many notes regarding "invalid data", corresponding to the dashes. However, these result (correctly) in missing data codes, so they can safely be ignored.

Obs  id  city              state  v1  v2    v3  v4  v5
  1   1  Las Vegas         NV     .   53.3  .   .    1
  2   2  Sacramento        CA     .   42.3  .   .    2
  3   3  Miami             FL     .   41.8  .   .    3
  4   4  Tucson            AZ     .   41.7  .   .    4
  5   5  Cleveland         OH     .   38.3  .   .    5
  6   6  Cincinnati        OH     15  36.4  .   .    6
  7   7  Colorado Springs  CO     .   36.1  .   .    7
  8   8  Memphis           TN     .   35.3  .   .    8
  9   8  New Orleans       LA     .   35.3  .   .    8
 10  10  Mesa              AZ     .   34.7  .   .   10
 11  11  Baltimore         MD     .   33.2  .   .   11
 12  12  Philadelphia      PA     .   31.7  .   .   12
 13  13  Salt Lake City    UT     .   31.9  17  .   13
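For readers outside R and SAS, the same backwards-indexing idea translates naturally to Python. This is a sketch (the function and variable names are my own, not from the post): split each line into words, peel the five numeric fields off the end, and join whatever remains as the city.

```python
def parse_line(line):
    """Parse one report line: id, multi-word city, then five numeric fields.

    The token "---" denotes a missing value and is mapped to None.
    """
    words = line.split()

    def num(tok):
        return None if tok == "---" else float(tok)

    # Everything between the leading id and the trailing five fields is the city.
    *cityparts, v1, v2, v3, v4, v5 = words[1:]
    return {
        "id": int(words[0]),
        "city": " ".join(cityparts),
        "v1": num(v1), "v2": num(v2), "v3": num(v3),
        "v4": num(v4), "v5": num(v5),
    }

row = parse_line("13 Salt Lake City, UT --- 31.9 17 --- 13")
# row["city"] == "Salt Lake City, UT"; row["v1"] is None; row["v3"] == 17.0
```

As in the R version, counting from the end of the line is what makes the variable-width city name harmless.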
When 168 joules of heat is added to 4 grams of water at 283 K, what is the resulting temperature?

May 11, 2017

$293\ \text{K}$

Explanation:

The specific heat formula is $Q = c \cdot m \cdot \Delta T$, where $Q$ is the amount of heat transferred, $c$ is the specific heat capacity of the substance, $m$ is the mass of the object, and $\Delta T$ is the change in temperature. In order to solve for the change in temperature, rearrange the formula to $\Delta T = \frac{Q}{c_{water} \cdot m}$.

The specific heat capacity of water is $c_{water} = 4.18\ \text{J g}^{-1}\,\text{K}^{-1}$, and we get $\Delta T = \frac{168\ \text{J}}{4.18\ \text{J g}^{-1}\,\text{K}^{-1} \cdot 4\ \text{g}} = 10.0\ \text{K}$.

Since $Q > 0$, the resulting temperature will be $T_f = T_i + \Delta T = 283\ \text{K} + 10.0\ \text{K} = 293\ \text{K}$ (pay special attention to significant figures).

Additional resources on Heat Capacity and Specific Heat: https://www.ck12.org/chemistry/Heat-Capacity-and-Specific-Heat/lesson/Heat-Capacity-and-Specific-Heat-CHEM/?referrer=concept_details
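The arithmetic above can be restated as a short calculation (variable names are my own):

```python
# Restating the worked example: q in J, m in g, c in J/(g*K), temperatures in K.
q, m, c, t_initial = 168.0, 4.0, 4.18, 283.0

delta_t = q / (c * m)          # ~10.05 K, reported to 3 sig figs as 10.0 K
t_final = t_initial + delta_t  # ~293 K
```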
# Turing machine enumerator

As the number of Turing machines is countable, we can create a list of them and number them 1, 2, 3, ... Suppose Turing machine $k$ computes some function $f_k$. Is there a Turing machine $S$ that can compute $f_k(k)$? It seems like there should be, as I am fairly confident that you can write such a program in a Turing-complete language. However, I am having a hard time seeing how an $n$-state Turing machine (if $S$ has $n$ states) could compute every function computed by an $n^2$-state Turing machine, for example.

• See admissible numberings. In essence, you are asking for a universal Turing machine. Its existence is a standard fact of recursion theory. – Raphael Feb 23 '16 at 11:23

Indeed, there is such a Turing machine. First, realize that converting any integer $k$ into the description of the $k$th Turing machine is computable, just by enumerating descriptions. Then, once we have the description, we can use a universal Turing machine to compute the function it describes. The key is that, while the $n$-state machine has fewer states than one with $n^2$ states, any number of additional states can be encoded on the tape of the Turing machine. So there is no limit to the amount of information available when computing the $k$th function, since an arbitrary amount of information can be stored on the tape.

So the question is not how a finite set of states can hold a potentially very large amount of information. It is rather how (finite states + infinite tape) can simulate (a very large number of states + infinite tape); there is no problem with storing "$n^2 + \infty$" worth of information in "$n + \infty$" space.

There are two ways to enumerate Turing machines. The easier way is to take the binary representation of $i$ as Turing machine $M_i$. If it is a valid representation, fine; if it is not, we shall treat it as a Turing machine that does not accept any input. The universal Turing machine will also honor this convention. 
But if we are given a fixed representation scheme and need to generate the $i$th valid Turing machine, then we must do it the hard way: generate candidate representations one by one, checking each for validity, until we reach $\alpha_{M_i}$, the representation of $M_i$. In this case the universal Turing machine will always receive a valid representation of a Turing machine.
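The "hard way" enumeration can be sketched as follows. This is a toy illustration only: `is_valid_description` is a stand-in predicate (a real implementation would check that the bit string encodes a well-formed transition table under some fixed encoding scheme), and here it arbitrarily accepts strings starting with '1'.

```python
from itertools import count

def is_valid_description(bits):
    # Stand-in validity check, purely for illustration. A real version
    # would verify that `bits` encodes a well-formed transition table.
    return bits.startswith("1")

def binary_strings():
    # Yield all binary strings in length-then-lexicographic order:
    # "0", "1", "00", "01", "10", "11", "000", ...
    for n in count(1):
        for i in range(2 ** n):
            yield format(i, "0{}b".format(n))

def kth_valid_description(k):
    # Walk the enumeration, skipping invalid descriptions, and return
    # the k-th (1-indexed) valid one -- the alpha_{M_k} of the answer.
    seen = 0
    for bits in binary_strings():
        if is_valid_description(bits):
            seen += 1
            if seen == k:
                return bits
```

Feeding `kth_valid_description(k)` to a universal machine then guarantees it always receives a valid description, which is exactly the point of the harder enumeration.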
@LucasBenjamin 2022-04-26T05:48:09.000000Z Word count: 5502 · Views: 36

# Writing an excellent admission essay - a complete guide

Writing an admission essay is an important, interesting, but complex task that nearly every student faces during their academic career. It is important because without one you cannot gain admission to higher education. It is interesting because it is the first step in drawing a picture of your past, present, and future to tell your story, and describing one's journey is usually a pleasant thing to do. At the same time, it is a complex assignment that demands academic rigor, creative writing skill, and sustained motivation through to the end. Writing an admission essay is one thing; writing a great admission essay is quite another. Anyone can write an admission essay, but few write a truly compelling one. The good news is that the necessary skills can be acquired: it is an intricate task, but not an impossible one.

This article highlights the key features that make up a high-quality admission essay. For ease of understanding, divide the process of writing one into three stages: planning, writing, and proofreading/editing.

In the planning stage, gather all the relevant materials related to the essay and make sure you understand its requirements. Go through the requirement details and think about the psychology behind them: why is each one being asked, and what is the admission board looking for? In other words, try to anticipate the board's expectations. Smart planning provides the framework within which you will write your admission essay.

The writing stage is the most intricate one, and it determines the credibility of your essay. Here you need concrete skills: you should know how to use language, have a firm command of sentence structure, and know how to organize the essay coherently. A standard structure works well: an introduction, followed by a section expressing your passions and interests, a section dealing with your community service, and a section connecting your goals and objectives to the admission or degree program you have chosen. Since you know your own strengths, weaknesses, and uniqueness best, stay authentic when introducing yourself. Your introduction should be distinctive and should convey your enthusiasm for education; it is how you present yourself to the board and make your first impression. In the following paragraphs, whatever you write, maintain coherence between the paragraphs and cohesion across the whole narrative. Align your passions with your goals and objectives, and your past academic career with your future undertakings. Likewise, establish the relevance of the admission program to your future plans: convince the board that this is the most suitable program for you and that the institution is the right destination for your future career. All of this is achieved through a clear and consistent account.

The final stage is proofreading and editing. When you are done with the writing, take your time and read your essay again and again. Put yourself in the shoes of the admission board and evaluate the essay by asking critical questions; based on those questions and your honest responses, edit the essay for clarity and plausibility. Never forget to proofread every section, especially the edited ones, checking all the grammatical mistakes and the coherence of the whole. As a result, you will be able to come up with an excellent admission essay of your own.

To sum up: plan your essay, write it, and then proofread and edit it. Follow the structure, incorporate the ideas above, and, before submission, review your essay one final time from the board's perspective to fix any remaining shortcomings. In doing so, you will be able to write a thorough admission essay.
# ArXiv Notes for 07/11/2017

## Spatial variations of turbulent properties of neutral hydrogen gas in the Small Magellanic Cloud using structure function analysis

By David Nestingen-Palm et al., arXiv:1707.03118

Turbulence in the SMC is homogeneous on 30 pc averaged scales.
# Multiple linear regression for hypothesis testing

I am familiar with using multiple linear regressions to create models of various variables. However, I was curious if regression tests are ever used to do any sort of basic hypothesis testing. If so, what would those scenarios/hypotheses look like?

- Can you explain further what you mean? It is very common to test whether the slope parameter for a variable is different from zero. I would call that "hypothesis testing". Are you unaware of that, or do you mean something different? What constitutes a scenario for your purposes? – gung Apr 2 '12 at 13:24
- I am unaware of that. I was also unsure if regression-based analysis is used for any other sort of hypothesis testing (perhaps about the significance of one variable over another, etc). – allie Apr 2 '12 at 14:04

Here is a simple example. I don't know if you are familiar with R, but hopefully the code is sufficiently self-explanatory.

set.seed(9)  # this makes the example reproducible
N = 36
# the following generates 3 variables:
x1 = rep(seq(from=11, to=13), each=12)
x2 = rep(rep(seq(from=90, to=150, by=20), each=3), times=3)
x3 = rep(seq(from=6, to=18, by=6), times=12)

cbind(x1, x2, x3)[1:7,]  # 1st 7 cases, just to see the pattern
     x1  x2 x3
[1,] 11  90  6
[2,] 11  90 12
[3,] 11  90 18
[4,] 11 110  6
[5,] 11 110 12
[6,] 11 110 18
[7,] 11 130  6

# the following is the true data generating process; note that y is a function of
# x1 & x2, but not x3; note also that x1 is designed above w/ a restricted range,
# & that x2 tends to have less influence on the response variable than x1:
y = 15 + 2*x1 + .2*x2 + rnorm(N, mean=0, sd=10)

reg.Model = lm(y~x1+x2+x3)  # fits a regression model to these data

Now, let's see what this looks like:

. . .
Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.76232   27.18170  -0.065  0.94871
x1           3.11683    2.09795   1.486  0.14716
x2           0.21214    0.07661   2.769  0.00927 **
x3           0.17748    0.34966   0.508  0.61524
---
Signif. 
codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
. . .
F-statistic: 3.378 on 3 and 32 DF,  p-value: 0.03016

We can focus on the "Coefficients" section of the output. Each parameter estimated by the model gets its own row. The actual estimate itself is listed in the first column. The second column lists the Standard Errors of the estimates, that is, an estimate of how much the estimates would 'bounce around' from sample to sample, if we were to repeat this process over and over again. More specifically, it is an estimate of the standard deviation of the sampling distribution of the estimate. If we divide each parameter estimate by its SE, we get a t-score, which is listed in the third column; this is used for hypothesis testing, specifically to test whether the parameter estimate is 'significantly' different from 0. The last column is the p-value associated with that t-score. It is the probability of finding an estimated value that far or further from 0, if the null hypothesis were true. Note that if the null hypothesis is not true, it is not clear that this value is telling us anything meaningful at all.

If we look back and forth between the Coefficients table and the true data generating process above, we can see a few interesting things. The intercept is estimated to be -1.8 and its SE is 27, whereas the true value is 15. Because the associated p-value is .95, it would not be considered 'significantly different' from 0 (a type II error), but it is nonetheless within one SE of the true value. There is thus nothing terribly extreme about this estimate from the perspective of the true value and the amount it ought to fluctuate; we simply have insufficient power to differentiate it from 0. The same story holds, more or less, for x1. Data analysts would typically say that it is not even 'marginally significant' because its p-value is >.10; however, this is another type II error. 
The estimate for x2 is quite accurate ($.21214\approx.2$), and the p-value is 'highly significant', a correct decision. x3 also could not be differentiated from 0, p=.62, another correct decision (x3 does not show up in the true data generating process above). Interestingly, the p-value is greater than that for x1, but less than that for the intercept, both of which are type II errors. Finally, if we look below the Coefficients table we see the F-value for the model, which is a simultaneous test. This test checks to see if the model as a whole predicts the response variable better than chance alone. Another way to say this is to ask whether or not all the estimates should be considered unable to be differentiated from 0. The results of this test suggest that at least some of the parameter estimates are not equal to 0, another correct decision. Since there are 4 tests above, we would have no protection from the problem of multiple comparisons without this. (Bear in mind that because p-values are random variables--whether something is significant would vary from experiment to experiment, if the experiment were re-run--it is possible for these to be inconsistent with each other. This is discussed on CV here: Significance of coefficients in multiple regression: significant t-test vs. non-significant F-statistic, and the opposite situation here: How can a regression be significant yet all predictors be non-significant, & here: F and t statistics in a regression.) Perhaps curiously, there are no type I errors in this example. At any rate, all 5 of the tests discussed in this paragraph are hypothesis tests.

From your comment, I gather you may also wonder about how to determine if one explanatory variable is more important than another. This is a very common question, but is quite tricky. Imagine wanting to predict the potential for success in a sport based on an athlete's height and weight, and wondering which is more important. 
A common strategy is to look to see which estimated coefficient is larger. However, these estimates are specific to the units that were used: for example, the coefficient for weight will change depending on whether pounds or kilograms are used. In addition, it is not remotely clear how to equate / compare pounds and inches, or kilograms and centimeters. One strategy people employ is to standardize (i.e., turn into z-scores) their data first. Then these dimensions are in common units (viz., standard deviations), and the coefficients are mathematically equivalent to r-scores. Moreover, it is possible to test if one r-score is larger than another. Unfortunately, this does not get you out of the woods; unless the true r is exactly 0, the estimated r is driven in large part by the range of covariate values that are used. (I don't know how easy it will be to recognize, but @whuber's excellent answer here: Is $R^2$ useful or dangerous, illustrates this point; to see it, just think about how $r=\sqrt{r^2}$.) Thus, the best that can ever be said is that variability in one explanatory variable within a specified range is more important to determining the level of the response than variability in another explanatory variable within another specified range. -
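For readers who want to see the mechanics behind the t-scores in the Coefficients table, here is a numpy-only sketch. The data-generating process below is invented for illustration (it is not the one from the R example above): y depends on x1 but not on x2, so the x1 coefficient should test as 'significant' and the x2 coefficient should not.

```python
import numpy as np

# Minimal OLS with per-coefficient t-scores, mirroring summary(lm(...)) in R.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 15 + 2 * x1 + rng.normal(scale=1.0, size=n)  # x2 has a true slope of 0

X = np.column_stack([np.ones(n), x1, x2])        # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # OLS estimates
resid = y - X @ beta
dof = n - X.shape[1]                             # residual degrees of freedom
sigma2 = resid @ resid / dof                     # residual variance estimate
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t_stats = beta / se                              # estimate / SE, one per coefficient
# |t| for x1 is far from 0; |t| for x2 is near 0, mirroring the
# 'significant' vs. 'non-significant' rows of the R output.
```

Each t-score here is exactly the estimate divided by its standard error, as described in the answer; comparing it to a t distribution with `dof` degrees of freedom gives the p-value column.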
Earth Syst. Dynam., 10, 59–72, 2019
https://doi.org/10.5194/esd-10-59-2019

Research article | 01 Feb 2019

# Climatological moisture sources for the Western North American Monsoon through a Lagrangian approach: their influence on precipitation intensity

Paulina Ordoñez1, Raquel Nieto2, Luis Gimeno2, Pedro Ribera3, David Gallego3, Carlos Abraham Ochoa-Moya1, and Arturo Ignacio Quintanar1

• 1Centro de Ciencias de la Atmósfera, Universidad Nacional Autónoma de México, Mexico City, 04510, Mexico
• 2Environmental Physics Laboratory (EphysLab), Universidade de Vigo, Ourense, 32004, Spain
• 3Departamento de Sistemas Físicos, Químicos y Naturales, Universidad Pablo de Olavide, Seville, 41013, Spain

Correspondence: Paulina Ordoñez (orpep@atmosfera.unam.mx)

Abstract

This work examines the origin of atmospheric water vapor arriving to the western North American monsoon (WNAM) region over a 34-year period (1981–2014) using a Lagrangian approach. This methodology computes budgets of evaporation minus precipitation (EP) by calculating changes in the specific humidity of thousands of air particles advected into the study area by the observed winds. The length of the period analyzed (34 years) allows the method to identify oceanic and terrestrial sources of moisture to the WNAM region from a climatological perspective. During the wet season, the WNAM region itself is on average the main evaporative source, followed by the Gulf of California. However, water vapor originating from the Caribbean Sea, the Gulf of Mexico, and terrestrial eastern Mexico is found to influence regional-scale rainfall generation. 
Enhanced (reduced) moisture transport from the Caribbean Sea and the Gulf of Mexico from 4 to 6 days before precipitation events seems to be responsible for increased (decreased) rainfall intensity on regional scales during the monsoon peak. Westward-propagating mid- to upper-level inverted troughs (IVs) seem to favor these water vapor fluxes from the east. In particular, a 200 % increase in the moisture flux from the Caribbean Sea to the WNAM region is found to be followed by the occurrence of heavy precipitation in the WNAM area a few days later. Low-level troughs off the coast of northwestern Mexico and upper-level IVs over the Gulf of Mexico are also related to these extreme rainfall events.

1 Introduction

Historical studies used the reversal in large-scale lower-tropospheric circulation to identify a monsoon domain (Ramage, 1971). Such monsoon domains were found mainly over tropical areas of the Eastern Hemisphere because the seasonal wind reversal is much better defined there than over the Americas (Hsu, 2016). In addition to the wind field, precipitation is another fundamental variable that has been more recently used to define a monsoon climate; in a monsoon region the majority of the annual rainfall occurs in summer (due to the annual cycle of solar heating), while winters are quite dry. This rainfall-based classification of monsoon regions includes the North American and South American monsoon regions that are roughly located over the tropical to subtropical Americas. In particular, the "North American monsoon" (NAM) region covers much of Central America and central Mexico, extending over northwestern Mexico and almost reaching the southwestern US (e.g., Wang and Ding, 2006, 2008; Liu et al., 2009, 2016; Wang et al., 2012, 2018; Huo-Po and Jia-Qi, 2013; Lee and Wang, 2014; Mohtadi et al., 2016). 
However, the term "NAM" has also been extensively used to refer to the precipitation of Sinaloa and Sonora (northwestern Mexico) and southern Arizona and New Mexico (southwestern US) (e.g., Douglas et al., 1993; Adams and Comrie, 1997; Higgins et al., 1997, 1999; Barlow et al., 1998; Vera et al., 2006; Higgins and Gochis, 2007). This region is smaller, covering approximately the northern tip of the region described in the paragraph above. Off the western coast of Sinaloa and Sonora, over the Gulf of California (GOC), there is a seasonal surface wind reversal. During July, August, and September, low-level winds over the GOC change from northerly to southwesterly due to the northward displacement of the Pacific high and the development of a thermally induced trough over land; however, this wind reversal is not of sufficient magnitude and scale to meet Ramage's criteria (Ramage, 1971; Hoell et al., 2016). The monsoon's share of annual precipitation is roughly 70 %, 45 %, and 35 % for northwestern Mexico, New Mexico, and Arizona, respectively (Vivoni et al., 2008; Erfani and Mitchell, 2014). It is important to highlight that the same term is employed in the scientific literature to denote the climatic characteristics of different regions. Therefore, in this work the term "western North American monsoon (WNAM)" is hereafter used to refer to the summer climate of northwestern Mexico and the southwestern US, which distinguishes this monsoonal region, with its own regional characteristics, from the larger NAM region that extends northward from the Equator. The identification of the origin of the water available for precipitation in a region constitutes a very complex problem. Over the years, it has been accepted that moist air moves into the WNAM system on a broad band of middle-troposphere southeast winds from the Gulf of Mexico (GOM) (Jurwitz, 1953; Bryson and Lowry, 1955; Green and Sellers, 1964). 
Later studies have claimed that the eastern tropical Pacific and boundary layer flow from the GOC are the major sources of moisture for the WNAM system (Douglas, 1995; Stensrud et al., 1995; Berbery, 2001; Mitchell et al., 2002), while the middle-tropospheric transport has also remained important (Schmitz and Mullen, 1996). In addition to mean-flow moisture transport, transient features such as the "gulf surge", a coastally trapped disturbance that is typically initiated by a tropical easterly wave or tropical storm crossing near the GOC entrance and then propagating northwestward along the GOC axis (Rogers and Johnson, 2007; Newman and Johnson, 2013), are also important mechanisms for initiating precipitation in the WNAM region. A gulf surge is termed wet or dry depending on whether it is followed by positive or negative spatially averaged mean precipitation anomalies over Arizona and/or western New Mexico (Hales, 1972; Stensrud et al., 1997; Higgins et al., 2004; Pascale and Bordoni, 2016). Wet surges occur between 7 and 10 times during the monsoon season (Pascale et al., 2016). Transient upper-level inverted troughs (IVs), cold-core cut-off lows, open troughs in the westerlies, and surface fronts (Douglas and Engelhardt, 2007; Seastrand et al., 2015) also contribute to precipitation events in the WNAM area. Gulf surges often occur in conjunction with such disturbances, particularly IVs, to produce rainfall over the northern WNAM region (Stensrud et al., 1997; Fuller and Stensrud, 2000; Higgins et al., 2004; Bieda et al., 2009; Newman and Johnson, 2012; Seastrand et al., 2015). Today it is widely accepted that both the middle-level easterly moisture from the GOM and the southwesterly low-level moisture from the GOC contribute to monsoonal precipitation. 
In addition, other studies have highlighted the role of surface soil moisture and vegetation dynamics in the WNAM region (e.g., Dominguez et al., 2008; Méndez-Barroso et al., 2009; Mendez-Barroso and Vivoni, 2010; Bohn and Vivoni, 2016; Xiang et al., 2018). Specifically, Hu and Dominguez (2015), using the extended dynamic recycling model (DRM; Dominguez et al., 2006), found that terrestrial sources contribute approximately 40 % of monsoonal moisture. Bosilovich et al. (2003) used water vapor tracer diagnostics in global numerical simulations to quantify the effect of local continental evaporation on monsoon precipitation. These authors found that local evaporation is the second most important source of precipitation after the GOM. Dominguez et al. (2016) used water vapor tracer diagnostics in a regional climate model to quantify the water vapor from four different oceanic and terrestrial regions that contributes to precipitation during the WNAM season. They documented that local recycling is the second most important source after the lower-level moisture from the GOC. Therefore, despite the large number of studies of WNAM moisture sources, the major moisture sources to the WNAM system and their relative importance are still actively debated. In this work, we use the Lagrangian particle dispersion model FLEXPART to analyze the water vapor transport towards the WNAM region. Evaporation minus precipitation (E−P) is tracked from the WNAM area along the backward trajectories of appropriately selected particles, thereby facilitating the determination of water source–receptor relationships. This work addresses two main objectives: (1) to define the main moisture sources for the WNAM region, and (2) to determine the moisture transport that contributes to the regional-scale rainfall intensity over the WNAM area. In Sect. 2 the data and methods are presented, followed by the results in Sect. 3. We dedicate Sect. 4 to the main conclusions of the study.
2 Data and methods

## 2.1 Lagrangian diagnostic of E−P for the WNAM region

This work makes use of the method developed by Stohl and James (2004, 2005) to quantify the atmospheric water vapor transport towards a region using the Lagrangian particle dispersion model FLEXPART (Stohl et al., 2005), which is driven by meteorological gridded data. At the start of the model run, the atmosphere is homogeneously divided into a large number of air parcels (particles), each representing a fraction of the total atmospheric mass. The particles are then allowed to move freely with the observed wind, with stochastic turbulent and convective motions superimposed (Stohl et al., 2005), while their mass remains constant. Particle positions and their specific humidity are recorded every 6 h. For each particle the net rate of change in water vapor content is computed using the changes in specific humidity over time:

$$e-p=m\frac{\mathrm{d}q}{\mathrm{d}t},\tag{1}$$

where q is the specific humidity, m is the mass of the particle, and e and p are the rates of moisture increase and decrease of the particle along the trajectory, respectively. To diagnose the net surface water flux in an area A, the moisture changes of all particles in the atmospheric column over A are aggregated, giving the field (E−P):

$$E-P=\frac{\sum_{k=1}^{K}(e-p)_{k}}{A},\tag{2}$$

where K is the number of particles residing over the area A, E is the evaporation rate, and P is the precipitation rate per unit of area. Finally, to find the moisture sources of a region, the (e−p) of all of the particles located over the region at a given time is evaluated along their back trajectories. By integrating the humidity changes (i.e., the moisture increases and decreases) of all of these particles, it is possible to find the areas where the particles have either gained ($E-P>0$) or lost moisture ($E-P<0$) along their path.
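As a schematic illustration of Eqs. (1) and (2), the diagnostic amounts to a finite-difference rate per particle followed by a column sum. This is not FLEXPART code; the function names, particle mass, and time step below are our own assumptions.

```python
import numpy as np

def particle_ep_rate(q, dt_s, mass_kg):
    """Eq. (1): e - p = m * dq/dt for one particle, estimated by finite
    differences from its specific-humidity record q (kg/kg) sampled
    every dt_s seconds. Returns one rate (kg/s) per interval."""
    return mass_kg * np.diff(q) / dt_s

def column_e_minus_p(ep_rates, area_m2):
    """Eq. (2): E - P over an area A, obtained by summing (e - p) over
    the K particles residing in the atmospheric column above A."""
    return np.sum(ep_rates) / area_m2

# A particle that moistens from 10 to 12 g/kg over one 6 h step has
# gained moisture (e - p > 0); a drying particle contributes negatively.
rate = particle_ep_rate(np.array([0.010, 0.012]), 21600.0, 1.0e9)
```

In practice the humidity record comes from the 6-hourly particle output, and the column sum in Eq. (2) runs over all particles found above the grid cell at each output time.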
It is also feasible to find the day of the moisture recharge preceding the arrival of the particles at the target region. When a long enough period is analyzed, the mean moisture sources can be described from a climatological point of view. Note that, as the particles originally located over the target region disperse, the particles residing in an atmospheric column no longer represent its entire atmospheric mass and only constitute the part of the column that later reaches the target. Therefore, (E−P) values do not represent the surface net water vapor flux and can only be regarded as the net water vapor flux into the air mass traveling to the target region (Stohl and James, 2005). In this study FLEXPARTv9 was run for a 34-year period from 1981 to 2014 and was driven by ERA-Interim reanalysis data at a $1^{\circ}\times 1^{\circ}$ resolution (Dee et al., 2011). These data are available on 61 model levels from 0.1 to 1000 hPa; there are approximately 14 model levels below 1500 m and 23 below 5000 m. This 34-year period was the span for which data were available at the time that the experiment was carried out. We used analyses every 6 h (00:00, 06:00, 12:00, and 18:00 UTC) and 3 h forecasts at intermediate times (03:00, 09:00, 15:00, and 21:00 UTC). The 3 h forecasts are used here to supplement the analyses because the time resolution is critical for the accuracy of Lagrangian trajectories (Stohl et al., 1995). The abovementioned method was applied backward in time from the WNAM region shown in Fig. 1. The boundary selected to represent the WNAM region is similar to that used by Hu and Dominguez (2015), whose core WNAM region was also consistent with the North American Monsoon Experiment (NAME; Higgins and Gochis, 2007) and the NAME precipitation zones defined by Castro et al. (2012).
To establish the transport time adequately represented by FLEXPART, we used the Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS) (Funk et al., 2015). CHIRPS integrates 0.05° resolution satellite imagery with in situ station data and was shown by Perdigón-Morales et al. (2018) to properly reproduce some of the particular characteristics of Mexican rainfall, such as the midsummer drought. The transport time that minimized the absolute difference between the precipitation simulated by FLEXPART and the “real” precipitation over the WNAM defined by CHIRPS was 6 days, so we limited the transport time to this period. This life span was also computed with ERA-Interim precipitation data, with similar results. Further details regarding this methodology can be found in Miralles et al. (2016).

Figure 1. Study region (black solid line) and its topography (m).

### Limitations of FLEXPART

FLEXPART requires only self-consistent meteorological analysis data as input. The accuracy of the data employed is critical, as errors in these data can lead to systematic miscalculations of (E−P). For instance, as the flux (E−P) is diagnosed using the time derivative of humidity, unrealistic fluctuations of humidity could be identified as water vapor fluxes. If these fluctuations are random, they will cancel out over longer periods of time. However, if the trajectory data suffer from substantial inaccuracies, even if these errors are random, results can be systematically affected. This could be the case if, for instance, a particle that is originally located in a relatively moist air mass leaves this air mass due to trajectory errors and enters a drier air mass. The humidity would then decrease along this trajectory, and $(E-P)<0$ would be erroneously diagnosed. The opposite is true for relatively dry air masses, i.e., (E−P) would be systematically too large for trajectories from dry regions (Stohl and James, 2005).
In this sense, the ERA-Interim reanalysis data used in this study have been found to provide a reliable representation of the atmospheric branch of the hydrological cycle when compared to other reanalysis products such as CFSR or MERRA (Trenberth et al., 2011; Lorenz and Kunstmann, 2012). A second limitation is imposed by computational constraints. We executed FLEXPART driven by ERA-Interim data at a $1^{\circ}\times 1^{\circ}$ spatial resolution. Global-scale atmospheric models with this grid spacing do not resolve convective clouds, and even mesoscale convective systems with horizontal dimensions on the order of a few hundred kilometers are not sufficiently resolved and must be parameterized (Foster et al., 2007). However, analogous works on moisture transport diagnosis have achieved promising results for tropical regions where convective rainfall clearly dominates, using this relatively coarse resolution (e.g., Duran-Quesada et al., 2010; Drumond et al., 2011; Hoyos et al., 2018). Nevertheless, given this limitation, we opted not to evaluate processes that occur over the WNAM region at the local scale (sub-grid) or over very short periods of time (sub-daily). Traditionally, the search for the origin of precipitation has been approached using so-called Eulerian methods, based on the analysis of the divergent part of the vertically integrated moisture flux (VIMF). Eulerian methods are considered quite accurate at approximating E−P (Simmonds et al., 1999; Ruprecht and Kahl, 2003; Mo et al., 2005); however, for this study we opted to use a Lagrangian approach for two main reasons. First, Stohl and James (2005) obtained practically identical results using a Lagrangian approach based on FLEXPART and the Eulerian equivalent. Second, a Lagrangian approach allows forward or backward tracking along defined trajectories, facilitating the determination of the source–receptor water vapor relationships.
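For reference, the Eulerian quantity just described can be sketched as follows. This is a simplified illustration on a uniform grid with constant layer thickness and our own helper names, not the diagnostic code used in this study.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m s^-2)

def vimf(q, u, v, dp_pa):
    """Vertically integrated moisture flux (kg m^-1 s^-1):
    (1/g) * sum over pressure levels of q*u*dp and q*v*dp.
    q, u, v have shape (levels, ny, nx); dp_pa is the layer thickness."""
    qu = np.sum(q * u * dp_pa, axis=0) / G
    qv = np.sum(q * v * dp_pa, axis=0) / G
    return qu, qv

def vimf_divergence(qu, qv, dx_m, dy_m):
    """Horizontal divergence of the VIMF by centered differences.
    Positive values correspond to E - P > 0 in the Eulerian budget."""
    return np.gradient(qu, dx_m, axis=1) + np.gradient(qv, dy_m, axis=0)
```

A spatially uniform flux field yields zero divergence, i.e., a closed local moisture budget; in the Lagrangian approach the same information is instead accumulated along trajectories.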
Nevertheless, in order to assess our results, here we make use of the VIMF to validate the moisture budgets calculated using FLEXPART. We based this comparison on both ERA-Interim data and the more recent “Modern-Era Retrospective analysis for Research and Applications V2” dataset (MERRA-2; Bosilovich et al., 2017). MERRA-2 was chosen because it incorporates an improved water cycle.

## 2.2 Tracking E−P for individual precipitation events

The methodology described in Sect. 2.1 determines the average net changes of q for air particles aimed toward the study area, but the moisture transported towards the WNAM region does not always generate effective precipitation. This limitation can be overcome by tracking the air particles that arrive at the WNAM during wet and dry days separately. Therefore, a definition of wet and dry days over the WNAM region is necessary. For this purpose, daily CHIRPS data with a spatial resolution of $0.25^{\circ}\times 0.25^{\circ}$ are used, and the assessment of long-term statistics is also performed over the 34-year (1981–2014) period. A “common wet day” for the WNAM area is defined as a rainfall day covering a large proportion of the study area. This is achieved using the methodology described in Ordoñez et al. (2012). At each grid point, precipitation values above 10 % of the standard deviation computed for all of the grid points in the study area are considered individual precipitation events for that day. Next, the percentage of precipitation days for each grid point is computed, and the average value is then obtained over all grid points (20.3 %). To define a wet day across the WNAM region, we compute the percentage of grid points inside the region that must have simultaneous precipitation in order to obtain the annual value of average precipitation over the region. This resulted in a value of 41.3 %, and a total of 2513 days were classified as wet days for the entire WNAM region during the study period.
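The coverage criterion above can be condensed into a short routine. This is our own schematic reading of the procedure (in particular, we take a single standard deviation over all grid points and days, which is one possible interpretation of the text); the 41.3 % threshold is the value derived above.

```python
import numpy as np

def classify_wet_days(precip, coverage_frac=0.413, std_frac=0.10):
    """Flag regional wet days. precip has shape (ndays, npoints).
    A grid point records a local precipitation event when its daily
    rainfall exceeds 10 % of the standard deviation computed over all
    grid points; a regional wet day requires simultaneous events at
    >= coverage_frac of the points."""
    threshold = std_frac * np.std(precip)   # assumed: one std over the whole field
    events = precip > threshold             # local precipitation events
    coverage = events.mean(axis=1)          # fraction of points with an event, per day
    return coverage >= coverage_frac        # boolean mask of regional wet days
```

The same coverage test, applied to exceedances of the per-gridpoint P50 and P90 percentiles instead of the event threshold, yields the moderate and extreme categories described next.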
Figure 2 shows the monthly distribution of precipitation events throughout the year according to this methodology. An analogous method is employed to classify the wet days according to their intensity. To define moderate and extreme precipitation events, the 50th (P50) and 90th (P90) percentiles of the precipitation time series at each grid point are computed. We then require that these percentiles be simultaneously exceeded in at least 41.3 % of the grid points inside the WNAM area. In this fashion, the method ensures that a moderate or extreme rainfall event is characterized by a well determined precipitation value that covers a significant portion of the WNAM region. Weak precipitation days are the remaining wet days. Figure 3 shows the yearly precipitation composites for the different precipitation categories. The general increase in the precipitation area as precipitation intensity increases for wet days is clearly shown in Fig. 3b–d.

Figure 2. Number of precipitation events per month.

Figure 3. Mean daily rainfall (in mm day−1 for the period from 1981 to 2014) for the (a) dry days, (b) weak precipitation days, (c) moderate precipitation days, and (d) extreme precipitation days over the WNAM region. The black boundary delineates the study region.

The accuracy of FLEXPART regarding the capture of rainy days in relation to CHIRPS is also tested by comparing the average (E−P) values obtained by FLEXPART over the WNAM domain during the first 6 h time step of the trajectories for the different precipitation events against the average values obtained using CHIRPS. In making this comparison, we are assuming that E and P cannot coexist at the same point in space and time. Under this assumption, the instantaneous rates of evaporation or precipitation can be diagnosed by FLEXPART. Note that FLEXPART fails to diagnose precipitation during the weak and moderate rainfall events, yielding low positive (E−P) values.
This result indicates that not all of the moisture particles traced by FLEXPART contribute to the precipitation events. However, the annual mean of (E−P) in the WNAM region diagnosed by FLEXPART during extreme precipitation events is below −11.4 mm day−1, whereas CHIRPS indicates values below 9 mm day−1 of rainfall in most of the grid cells (see Fig. 3d). This suggests that FLEXPART is reliable for capturing extreme precipitation events over the region. In order to estimate the actual evaporation over the moisture source regions, we used the state-of-the-art Global Land Evaporation Amsterdam Model (GLEAM). The monthly evaporation from land was estimated from GLEAM v3.2 data at a $0.25^{\circ}\times 0.25^{\circ}$ resolution, which is largely driven by satellite data (Miralles et al., 2011). Furthermore, we used different climatic fields (geopotential height, specific humidity, and horizontal wind components) from the ERA-Interim reanalysis in order to extract information about regional-scale patterns associated with the different rainfall intensity categories over the WNAM region defined above.

Figure 4. Monthly averaged values of (E−P)1–6 (mm day−1) for all of the particles aimed toward the WNAM region during (a) June, (b) July, (c) August, and (d) September (period of study: 1981–2014). The black boundary delineates the study region.

3 Results

## 3.1 Moisture sources for the WNAM region

Figure 4 shows the 6-day aggregated monthly average values of the water vapor flux (E−P) before air masses aimed towards the WNAM reach the region for the period from 1981 to 2014. (E−P)n designates the water vapor flux value for day “n” before arrival at the target area. The sum of the net water vapor flux from day 1 to day 6 (the sum of (E−P)1, (E−P)2, …, (E−P)6) is denoted using (E−P)1–6. Although the WNAM season is usually defined as being from July to September, we found several regional-scale precipitation events during June (see Fig.
2); hence, the results for the 4 months from June to September are presented in Fig. 4. Reddish (bluish) colors are used to show regions of water vapor gain, $E-P>0$ (loss, $E-P<0$), according to the sign of dq/dt of particles following their trajectories. The northeastern Pacific off the coast of the US and the GOC are found to be net moisture sources during the summer. The monthly analysis shows that the terrestrial region east of the WNAM domain is also an active source throughout the summer. In addition, there are source regions over the GOM and the Caribbean Sea that seem to be significant, primarily during July and August. The southwestern US also appears as a source region from June to September, where evaporation is larger than precipitation. Finally, the WNAM region itself seems to be an evaporative moisture source for the whole region in June and September, whereas during July and August only the northern section of the WNAM acts as an evaporative moisture source, while the southern WNAM section indicates negative (E−P)1–6 values, suggesting that this area is a sink of moisture during the peak monsoon. Figure 5 depicts the average VIMF divergence for the same months (June to September) of the study period using ERA-Interim reanalysis data at a $1^{\circ}\times 1^{\circ}$ resolution. Positive values indicate moisture flux divergence ($E-P>0$), while negative values indicate moisture flux convergence ($E-P<0$). The Eulerian results are quite similar to the Lagrangian diagnostics. Even the temporal variability over the eastern WNAM region, the GOM, and the Caribbean Sea is very similar, showing higher contributions during July and August compared with June and September. The main difference is seen over the Pacific Ocean, where the Eulerian method indicates VIMF divergence that does not appear as a moisture source when using the Lagrangian approach.
However, we found that the moisture flux over this oceanic region is not aimed toward the WNAM domain. Therefore, the agreement between the Lagrangian and the Eulerian diagnostics is excellent. The Eulerian diagnostic performed with MERRA-2 data at a $1.25^{\circ}\times 1.25^{\circ}$ resolution (not shown) does not capture the seasonal variability over eastern Mexico and the Caribbean Sea, but otherwise the monthly VIMF divergence patterns are consistent with those obtained using ERA-Interim data.

Figure 5. Monthly averaged values of the vertically integrated moisture flux (VIMF; kg m−1 s−1) and its divergence–convergence (reddish–bluish colors; mm day−1).

According to these results, six main moisture sources for the WNAM region have been defined: (1) the WNAM region itself (WNAM); (2) the terrestrial region east of the WNAM domain (NE-MEX); (3) the Atlantic, which includes a part of the GOM and the Caribbean Sea (GOM-CAR); (4) the southwestern US, to the north of the WNAM domain (SW-US); and the Pacific, which includes (5) the northeastern Pacific (NEP) and (6) the Gulf of California (GOC). Figure 6 shows their boundaries. These source regions were defined using $(E-P)_{1\text{–}6}>0$ values greater than P90 for the period from June to September. The summer monthly evolution of (E−P)1–6 integrated over these fixed areas is shown in Fig. 7. During June, the inland evaporative source over the WNAM itself and the water vapor from the GOC are the main moisture sources for the WNAM system. In July, SW-US provides a slightly greater amount of moisture than these regions. The situation is a little different in August, when the WNAM region is the main source and the NE-MEX region shows its peak (E−P)1–6 values. September is characterized by the peak contribution from the WNAM region, while the relative contributions from the remaining sources decrease with respect to their August values.
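The per-source bookkeeping behind such monthly breakdowns is essentially a masked integral of the (E−P)1–6 field. A minimal sketch, assuming precomputed boolean masks for each source region (the function name and the numbers below are illustrative only):

```python
import numpy as np

def source_shares(ep16, masks):
    """Given an (E-P)_{1-6} field (ny, nx) and named boolean region
    masks, integrate the positive part (moisture uptake, E - P > 0)
    over each region and return each region's share in percent."""
    uptake = np.where(ep16 > 0.0, ep16, 0.0)  # only net gains count as a source
    totals = {name: float(uptake[mask].sum()) for name, mask in masks.items()}
    grand = sum(totals.values())
    return {name: 100.0 * t / grand for name, t in totals.items()}
```

In the actual analysis the masks correspond to the six fixed regions of Fig. 6, and the integration is repeated for each month of the study period.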
Figure 6. Names and geographic limits of the moisture sources defined for the WNAM region.

Figure 7. Monthly (E−P)1–6 percentages for the six areas defined as moisture sources.

Figure 8. JAS (July, August, September) time series of (E−P)n (n=1 to 6) integrated over (a) WNAM, (b) NE-MEX, (c) GOM-CAR, (d) SW-US, (e) NEP, and (f) GOC. The solid line represents wet days, and the dotted line represents dry days. Note that panel (a) is scaled by 0.5.

From July to September, FLEXPART describes a terrestrial moisture contribution from the WNAM region of 38 % on average, with a water vapor flux from this region of 26 %, 34 %, and 55 % during July, August, and September, respectively. Our results using FLEXPART indicate larger moisture transports from the WNAM region than previous studies, although the estimates are not strictly comparable. For instance, Bosilovich et al. (2003) considered the entire Mexican continental region and found it to be the dominant source of moisture for the monsoon, with contributions of roughly 30 %, 25 %, and 20 % during July, August, and September, respectively. In their study, these authors computed the fraction of precipitation that originates as evaporation and estimated the values of evaporation from all of the Mexican territory that contributes to the WNAM precipitation. Hu and Dominguez (2015) estimated the precipitable water contribution from recycling to be about 10 % during the monsoon peak. However, the model that they used for this work has proven to be imprecise at tracking moisture transport in the monsoon region due to the model's assumption of a well-mixed atmosphere. This assumption does not hold over the WNAM region, where relatively strong shear occurs, causing an underestimation of local recycling by their model (Dominguez et al., 2016). As previously stated, FLEXPART estimates the net moisture gain, $(E-P)>0$, as precipitation and evaporation are not directly separable in this model.
Regarding the summer monsoon evolution of this inland evaporative source, Bosilovich et al. (2003) affirm that the terrestrial supply of moisture to the WNAM decreases with time, while Hu and Dominguez (2015) show a maximum local contribution during August. In our case, (E−P) increases considerably with time throughout the summer monsoon (Fig. 7). GLEAM shows that the greatest monthly mean evaporation over the WNAM domain during the summer monsoon season occurs in August, followed by July and September. The observed monthly mean precipitation peaks in July and decreases with time. The (E−P) monthly means obtained from the difference of these independently estimated values of E and P are lowest in July and increase with time, peaking in September, which is consistent with our results.

## 3.2 The role of moisture source regions during regional-scale precipitation events

Figure 8 depicts differences in the advected moisture over the source regions before the wet and dry days over the WNAM domain. The time series of (E−P) are shown from the sixth to the first day before the air particles aimed toward the WNAM reach this region during the monsoon season (JAS). The WNAM region contributes higher moisture amounts before the rainfall events, except on day −1, when a fraction of the particles located over the region could already be losing moisture. Similar conditions are experienced over the adjacent GOC and NE-MEX regions, which carry more water vapor before the rainfall events from day −6 to day −2, but lower quantities on day −1. In the same way, due to their proximity to the target region, the air masses over the GOC and NE-MEX regions could start to lose part of their moisture the day before the rainfall events over the WNAM domain. GOM-CAR provides higher moisture amounts from day −6 to day −3, while during the period from day −3 to day −1 the back-tracked particles have not yet reached this region.
The SW-US region could also contribute to rainfall development, as it supplies more water vapor to the WNAM before the rainfall events. Finally, the NEP region shows slightly lower moisture contributions before the rainfall events, indicating that more moisture arrives at the GOC and WNAM from the southwest during these events. Although all of the regions excluding the NEP could potentially contribute to rainfall generation over the WNAM domain, the greatest difference in the total amount of moisture transported before dry and wet days is obtained for the NE-MEX area. Bosilovich et al. (2003) found local evaporation and transport from the tropical Atlantic Ocean (including the GOM and Caribbean Sea) to be the dominant sources of precipitation. In contrast, Dominguez et al. (2016) reported that the GOC contributes a higher moisture amount to WNAM precipitation than the GOM or local evapotranspiration (ET). The latter study used water vapor tracers embedded in a regional climate model with lateral boundary conditions derived from the North American Regional Reanalysis (NARR; Mesinger et al., 2006). However, the ET fields from NARR might have significant deficiencies in these areas (Bohn and Vivoni, 2016). Our results are in better agreement with those of Bosilovich et al. (2003), which suggest that the GOM-CAR and NE-MEX regions could be major contributors to the monsoonal regional rainfall. However, we cannot conclude that these areas are the single decisive source for rainfall development because, as we have mentioned, the other sources also exhibit important changes between precipitation and no-precipitation days. Our next objective is to examine which source regions are the most relevant for the modulation of rainfall intensity. We found only 32 extreme precipitation events occurring in September during the period from 1981 to 2014.
Therefore, in order to obtain statistically meaningful results, only the water vapor transport that generated heavy rainfall during the monsoon peak (July and August) is studied. Figure 9 shows the difference in (E−P) between the extreme precipitation days and the weak intensity days for days −1, −2, and −5. For the northern WNAM region, $(E-P)_{-1}$ is lower just before the extreme days than before weak days, indicating greater moisture loss preceding extreme days. On day −2, larger evaporation values are seen before the extreme days over the WNAM region as a whole. Larger values of $(E-P)_{-5}$ are found along a pathway that crosses NE-MEX and reaches the GOM-CAR region for extreme days relative to weak days. A high (low) moisture supply from the GOM-CAR domain 4 to 6 days back in time seems to be one of the most important factors affecting the precipitation intensity during the monsoon peak. The abovementioned differences can be clearly observed in Fig. 10, which shows the results of the integration of moisture changes over the WNAM, NE-MEX, GOM-CAR, and GOC areas before the extreme, moderate, and weak rainfall days. The water intake from day −6 to day −4 and from day −6 to day −2 over GOM-CAR and NE-MEX, respectively, seems to be related to the precipitation intensity (Fig. 10c, b), with more than a 200 % increase in the case of GOM-CAR and a 44 % increase for the NE-MEX region during the extreme rainfall events. The integrated time series of (E−P) for the WNAM itself (Fig. 10a) show that $(E-P)_{-2}$ and $(E-P)_{-1}$ behave in a similar fashion during moderate and heavy rainfall events, peaking on day −2 and decreasing on day −1; this could be related to the strong surface heating that is needed prior to such precipitation events.
However, $(E-P)_{-2}$ and $(E-P)_{-1}$ for the weak rainfall events behave more similarly to the dry events (Fig. 8a). Finally, in the case of the GOC region, it is noteworthy that the moisture transport amount is inversely related to the rainfall intensity on day −1 (Fig. 10d), which could be associated with the proximity to the target domain. These results suggest that the southeasterly vapor fluxes from the Caribbean Sea, passing over the Sierra Madre at higher altitudes, are related to the monsoonal rainfall intensity.

Figure 9. Anomalies of $(E-P)_{-1}$, $(E-P)_{-2}$, and $(E-P)_{-5}$ during July and August (JA; 1981–2014) for extreme rainfall days minus low rainfall days. Unit: mm day−1. The black line delineates the study region.

Figure 10. JA time series of (E−P)n (n=1 to 6) integrated over the (a) WNAM, (b) NE-MEX, (c) GOM-CAR, and (d) GOC regions. The black solid line represents extreme rainfall events, the dashed line represents moderate rainfall events, and the dotted line represents weak rainfall events. Note that panel (a) is scaled by 0.5.

Figure 11. Composites of geopotential height anomalies (colors; m) and moisture transport anomalies (arrows; kg kg−1 m s−1) with respect to regional-scale dry events over the WNAM region during the monsoon peak (July and August) for (a) weak, (b) moderate, and (c) extreme precipitation events at 700 hPa and (d) weak, (e) moderate, and (f) extreme precipitation events at 200 hPa.

Figure 11 depicts geopotential height and moisture transport differences at 700 and 200 hPa for weak, moderate, and strong precipitation days with respect to dry days. For the low- to mid-troposphere (Fig. 11a, b, and c), positive geopotential height differences show a center over the western US; negative geopotential height differences are also observed off the west coast of the WNAM region (transient lows, i.e., easterly waves or tropical cyclones, typically located at the mouth of the GOC).
Both features become better defined as the precipitation becomes more intense over the WNAM domain. At 200 hPa, a positive height difference is located over the desert areas of the southwestern US as an extension of the monsoon anticyclone (Fig. 11d, e, f). A negative height difference develops roughly over the GOM, which is also more intense as precipitation intensifies over the WNAM domain and may indicate an inverted trough (IV). Similar situations, in which low-level troughs interact with an upper-level IV to enhance precipitation over the southwestern US and northwestern Mexico, have been described previously (Stensrud et al., 1997; Fuller and Stensrud, 2000; Higgins et al., 2004; Seastrand et al., 2015). IVs have been associated with heavy rainfall events in the US–Mexico border region (Bieda et al., 2009; Finch and Johnson, 2010; Newman and Johnson, 2012). How exactly the mesoscale and synoptic circulations related to IVs help organize deep convection over the NAM region is not entirely known (Lahmers et al., 2016). Newman and Johnson (2012) found that these transient features increase surface-to-midlevel wind shear, with mid-level flow from the northeast perpendicular to the topography. The enhanced vertical wind shear across the topography supports the upscale growth and westward propagation of diurnal convection initiated over the Sierra Madre Occidental, resulting in widespread convection over the western slopes and coastal lowlands of the WNAM region. Divergence aloft on the west flank of an IV can also lead to ascent and destabilization (Pytlak et al., 2005). Regardless of the physical mechanisms, these composites support previous studies which assert that IVs play an important role in generating widespread heavy precipitation across the WNAM domain.
More observations of the dynamic and thermodynamic environment during the passage of IVs, as well as improved models of the flow over complex terrain, are needed to better understand the role of IVs in supporting convective outbreaks across the monsoon region. This work suggests that an anomalous tongue of mid-level moisture over northeastern Mexico (Fig. 9c) occurs in conjunction with upper-level IVs and is related to widespread heavy precipitation over the WNAM region.

4 Discussion and conclusions

Despite the large body of literature on the transport of water vapor and precipitation patterns associated with the WNAM, there are still important knowledge gaps regarding the sources of water vapor, their relative importance, and the detailed pathways through which the water vapor can reach the WNAM region. This study is focused on the climatological large-scale aspects of moisture transport and precipitation occurrence over the WNAM domain. The well-tested FLEXPART model is used to assess the location of the major moisture sources of the WNAM for a 34-year period from 1981 to 2014. Six main moisture sources have been identified, three terrestrial and three oceanic: the evapotranspiration from the region itself, the Mexican terrestrial area east of the WNAM region, the southern part of the Gulf of Mexico and the adjacent Caribbean Sea, the southwestern US, the Gulf of California, and an adjacent oceanic area over the Pacific. The main moisture sources identified by FLEXPART coincide with those in the existing literature, in which the debate has traditionally been centered on the relative importance of the Gulf of California versus the Gulf of Mexico and, more recently, on the role of the recycling process. Our results indicate that during the monsoon season (from July to September), the WNAM itself is the main moisture source, while the Gulf of California is the second most important origin.
However, when the moisture transport for the days leading up to regional-scale wet days is compared, the relevance of the water vapor originating from the Caribbean Sea and the Gulf of Mexico becomes evident. A clear difference in EP between extreme and low precipitation days is seen over the Gulf of Mexico, the Caribbean Sea, and the terrestrial area east of the WNAM region from day −6 to day −2 prior to the onset of precipitation; this suggests that these regions could play a significant role in supporting regional-scale heavy precipitation development over the WNAM domain. Conversely, the relevance of water vapor transport from the Gulf of California diminishes a day before regional-scale precipitation events, as the intensity of both the low-level transient lows and the precipitation over the WNAM region increases. It must be stressed that there is currently a lively debate regarding the origin of the water vapor advected to the northern WNAM region which leads to extreme precipitation events. In a recent study, Ralph and Galarneau (2017) documented the role of the transport of water vapor from the east in modulating the most extreme precipitation events over southeastern Arizona. The water vapor aimed toward this region was hypothesized to flow through a gap in the mountain range that connects the continental divide and the Sierra Madre in southern Arizona–New Mexico and northern Mexico, known as the Chiricahua Gap. In contrast, Jana et al. (2018) found the Gulf of California to be the leading moisture source for precipitation development at two locations selected over Arizona (Laveen) and New Mexico (Redrock), although these authors also noted a moisture contribution from the Gulf of Mexico at low levels (below 2000 m) for a western New Mexico location.
Due to the limitations of our methodology, our results are not conclusive; however, they seem to support the significance of the westward moisture flux from the Gulf of Mexico and the Caribbean Sea for extreme WNAM precipitation. We cannot be certain that this anomalous water vapor transport implies that air masses crossing the Sierra Madre into the WNAM region result in the development of strong precipitation, as this process cannot be resolved at the spatial resolution used in this study; moreover, low-level moisture is also needed to develop convection. Nevertheless, we have found the presence of low-level troughs and upper-level IVs during these same extreme wet days, which presents a scenario compatible with the occurrence of convective precipitation. In this sense, Schiffer and Nesbitt (2012) describe deep easterly flow anomalies along the southern edge of the monsoon high over the WNAM core prior to the initiation of a wet surge that would be essential for providing pre-surge moisture to the northern WNAM domain, whereas moisture from the region itself and the GOC would be important after the surge arrives at the northern end of the GOC. Other authors have also reported gulf surges occurring when an easterly wave trough passes east of the GOC following the passage of an upper-level midlatitude trough (Stensrud et al., 1997; Fuller and Stensrud, 2000). Our results agree with these works, suggesting that moisture from the GOM could be important in combination with other sources such as tropical cyclones and IVs, which can sometimes even form from midlatitude fronts before propagating westward in the easterly flow south of the upper-tropospheric monsoon anticyclone (Lahmers et al., 2016). These features would imply that the WNAM should be considered a hybrid monsoon, with characteristics of a tropical monsoon and additional impacts from midlatitudes.

Data availability
The ERA-Interim datasets are available from https://www.ecmwf.int/ (last access: 25 January 2019). MERRA-2 data are available from https://gmao.gsfc.nasa.gov/reanalysis/MERRA/data_access/ (last access: 25 January 2019). The precipitation data from CHIRPS (Funk et al., 2015) can be downloaded from http://chg.geog.ucsb.edu/data/chirps/ (last access: 25 January 2019). The land evaporation data from the GLEAM model (Miralles et al., 2011) are available from http://www.gleam.eu (last access: 25 January 2019). The FLEXPART model (Stohl and James, 2004, 2005) can be freely downloaded from https://www.flexpart.eu/ (last access: 25 January 2019). Data from the FLEXPART results are available on request from the authors.

Author contributions. PO, PR and DG developed the concept for the paper. PO and RN performed the data analysis. All authors (PO, RN, LG, PR, DG, CAOM, AIQ) contributed ideas and took part in the interpretation of the results and revisions of the paper.

Competing interests. The authors declare that they have no conflict of interest.

Special issue statement. This article is part of the special issue “The 8th EGU Leonardo Conference: From evaporation to precipitation: the atmospheric moisture transport”. It is a result of the 8th EGU Leonardo Conference, Ourense, Spain, 25–27 October 2016.

Acknowledgements. This research is a contribution to the PAPIIT IA103116 “Principales fuentes de humedad de la República Mexicana y su variabilidad climática” project. Pedro Ribera and David Gallego were also supported by the Spanish “Ministerio de Economia y Competitividad” via the “Variabilidad del vapor de agua en la baja estratosfera” project (project no. CGL2016-78562-P) and the RNM-356 research group belonging to the “Plan Andaluz de Investigación Desarrollo e Innovación”. We thank Yolande Serra for in-depth discussions and comments on the paper.
Edited by: Sergio Martín Vicente Serrano Reviewed by: Ana María Durán-Quesada, Enrique R. Vivoni, and two anonymous referees

References

Adams, D. K. and Comrie, A. C.: The North American Monsoon, B. Am. Meteorol. Soc., 78, 2197–2213, https://doi.org/10.1175/1520-0477(1997)078<2197:TNAM>2.0.CO;2, 1997. Barlow, M., Nigam, S., and Berbery, E. H.: Evolution of the North American Monsoon System, J. Climate, 11, 2238–2257, https://doi.org/10.1175/1520-0442(1998)011<2238:EOTNAM>2.0.CO;2, 1998. Berbery, E. H.: Mesoscale moisture analysis of the North American monsoon, J. Climate, 14, 121–137, 2001. Bieda III, S. W., Castro, C. L., Mullen, S. L., Comrie, A. C., and Pytlak, E.: The Relationship of Transient Upper-Level Troughs to Variability of the North American Monsoon Systems, J. Climate, 22, 4213–4227, 2009. Bohn, T. J. and Vivoni, E. R.: Process-based characterization of evapotranspiration sources over the North American monsoon region, Water Resour. Res., 52, 358–384, https://doi.org/10.1002/2015WR017934, 2016. Bosilovich, M. G., Sud, Y., Schubert, S., and Walker, G.: Numerical simulation of the large-scale North American monsoon water sources, J. Geophys. Res., 108, 8614, https://doi.org/10.1029/2002JD003095, 2003. Bosilovich, M. G., Robertson, R. F., Takacs, L., Molod, A., and Mocko, D.: Atmospheric Water Balance and Variability in the MERRA-2 Reanalysis, J. Climate, 30, 1178–1196, 2017. Bryson, R. and Lowry, W. P.: Synoptic climatology of the Arizona summer precipitation singularity, B. Am. Meteorol. Soc., 36, 329–339, 1955. Castro, C. L., Chang, H. I., and Dominguez, F.: Can a regional climate model improve the ability to forecast the North American monsoon?, J. Climate, 25, 8212–8237, https://doi.org/10.1175/JCLI-D-11-00441.1, 2012. Dee, D. P., Uppala, S. M., Simmons, A. J., Berrisford, P., Poli, P., Kobayashi, S., Andrae, U., Balmaseda, M. A., Balsamo, G., Bauer, P., Bechtold, P., Beljaars, A. C. M., van de Berg, L., Bidlot, J.,
Bormann, N., Delsol, C., Dragani, R., Fuentes, M., Geer, A. J., Haimberger, L., Healy, S. B., Hersbach, H., Hólm, E. V., Isaksen, L., Kållberg, P., Köhler, M., Matricardi, M., McNally, A. P., Monge-Sanz, B. M., Morcrette, J.-J., Park, B.-K., Peubey, C., de Rosnay, P., Tavolato, C., Thépaut, J.-N., and Vitart, F.: The ERA-Interim reanalysis: Configuration and performance of the data assimilation system, Q. J. Roy. Meteor. Soc., 137, 553–597, https://doi.org/10.1002/qj.828, 2011. Dominguez, F., Kumar, P., Liang, X. Z., and Ting, M.: Impact of atmospheric moisture storage on precipitation recycling, J. Climate, 19, 1513–1530, https://doi.org/10.1175/JCLI3691.1, 2006. Dominguez, F., Kumar, P., and Vivoni, E. R.: Precipitation recycling variability and ecoclimatological stability – A study using NARR data. Part II: North American monsoon region, J. Climate, 21, 5187–5203, 2008. Dominguez, F., Miguez-Macho, G., and Hu, H.: WRF with Water Vapor Tracers: A Study of Moisture Sources for the North American Monsoon, J. Hydrometeorol., 17, 1915–1927, https://doi.org/10.1175/JHM-D-15-0221.1, 2016. Douglas, A. V. and Englehart, P. J.: A climatological perspective of transient synoptic features during NAME 2004, J. Climate, 20, 1947–1954, https://doi.org/10.1175/JCLI4095.1, 2007. Douglas, M. W.: The summertime low-level jet over the Gulf of California, Mon. Weather Rev., 123, 2334–2347, 1995. Douglas, M. W., Maddox, R. A., Howard, K., and Reyes, S.: The Mexican monsoon, J. Climate, 6, 1665–1677, https://doi.org/10.1175/1520-0442(1993)006<1665:TMM>2.0.CO;2, 1993. Drumond, A., Nieto, R., and Gimeno, L.: Sources of moisture for China and their variations during drier and wetter conditions in 2000–2004: a Lagrangian approach, Clim. Res., 50, 215–225, https://doi.org/10.3354/cr01043, 2011. Duran-Quesada, A. M., Gimeno, L., Amador, J. A., and Nieto, R.: Moisture sources for Central America: Identification of moisture sources using a Lagrangian analysis technique, J. Geophys.
Res.-Atmos., 115, D05103, https://doi.org/10.1029/2009JD012455, 2010. Erfani, E. and Mitchell, D.: A partial mechanistic understanding of the North American monsoon, J. Geophys. Res., 119, 13096–13115, https://doi.org/10.1002/2014JD022038, 2014. Finch, Z. O. and Johnson, R. H.: Observational Analysis of an Upper-Level Inverted Trough during the 2004 North American Monsoon Experiment, Mon. Weather Rev., 138, 3540–3555, 2010. Forster, C., Stohl, A., and Seibert, P.: Parameterization of Convective Transport in a Lagrangian Particle Dispersion Model and Its Evaluation, J. Appl. Meteorol. Clim., 46, 403–422, 2007. Fuller, R. D. and Stensrud, D. J.: The relationship between tropical easterly waves and surges over the Gulf of California during the North American monsoon, Mon. Weather Rev., 128, 2983–2989, 2000. Funk, C., Peterson, P., Landsfeld, M., Pedreros, D., Verdin, J., Shukla, S., Husak, G., Rowland, J., Harrison, L., Hoell, A., and Michaelsen, J.: The climate hazards infrared precipitation with stations – a new environmental record for monitoring extremes, Scientific Data, 2, 150066, https://doi.org/10.1038/sdata.2015.66, 2015. Green, C. R. and Sellers, W. D.: Arizona Climate, University of Arizona Press, Tucson, AZ, USA, 503 pp., 1964. Hales Jr., J. E.: Surges of Maritime Tropical Air Northward Over Gulf of California, Mon. Weather Rev., 100, 298–306, https://doi.org/10.1175/1520-0493(1972)100<0298:SOMTAN>2.3.CO;2, 1972. Higgins, R. W., Yao, Y., and Wang, X. L.: Influence of the North American monsoon system on the U.S. Summer Precipitation Regime, J. Climate, 10, 2600–2622, https://doi.org/10.1175/1520-0442(1997)010<2600:IOTNAM>2.0.CO;2, 1997. Higgins, R. W., Chen, Y., and Douglas, A. V.: Interannual variability of the North American warm season precipitation regime, J. Climate, 12, 653–680, https://doi.org/10.1175/1520-0442(1999)012<0653:IVOTNA>2.0.CO;2, 1999. Higgins, R.
W., Shi, W., and Hain, C.: Relationships between Gulf of California Moisture Surges and Precipitation in the Southwestern United States, J. Climate, 17, 2983–2997, https://doi.org/10.1175/1520-0442(2004)017<2983:RBGOCM>2.0.CO;2, 2004. Higgins, W. and Gochis, D.: Synthesis of results from the North American Monsoon Experiment (NAME) process study, J. Climate, 20, 1601–1607, https://doi.org/10.1175/JCLI4081.1, 2007. Hoell, A., Funk, C., Barlow, M., and Shukla, S.: Recent and Possible Future Variations in the North American Monsoon, in: The Monsoons and Climate Change. Observations and Modeling, Springer Climate, https://doi.org/10.1007/978-3-319-21650-8, 2016. Hoyos, I., Dominguez, F., Canon-Barriga, J., Martinez, J. A., Nieto, R., Gimeno, L., and Dirmeyer, P. A.: Moisture origin and transport processes in Colombia, northern South America, Clim. Dynam., 50, 971–990, https://doi.org/10.1007/s00382-017-3653-6, 2018. Hsu, P.-C.: Global Monsoon in a Changing Climate, in: The Monsoons and Climate Change. Observations and Modeling, Springer Climate, https://doi.org/10.1007/978-3-319-21650-8, 2016. Hu, H. and Dominguez, F.: Evaluation of oceanic and terrestrial sources of moisture for the North American monsoon using numerical models and precipitation stable isotopes, J. Hydrometeorol., 16, 19–35, https://doi.org/10.1175/JHM-D-14-0073.1, 2015. Huo-Po, C. and Jian-Qi, S.: How Large Precipitation Changes over Global Monsoon Regions by CMIP5 Models?, Atmospheric and Oceanic Science Letters, 6, 306–311, 2013. Jana, S., Rajagopalan, B., Alexander, M. A., and Ray, A. J.: Understanding the dominant sources and tracks of moisture for summer rainfall in the southwest United States, J. Geophys. Res.-Atmos., 123, 4850–4870, https://doi.org/10.1029/2017JD027652, 2018. Jurwitz, L. R.: Arizona's two-season rainfall pattern, Weatherwise, 6, 96–99, 1953. Lahmers, T., Castro, C. L., Adams, D. K., Serra, Y. L., Brost, J.
J., and Luong, T.: Long-Term Changes in the Climatology of Transient Inverted Troughs over the North American Monsoon Region and Their Effects on Precipitation, J. Climate, 29, 6037–6064, https://doi.org/10.1175/JCLI-D-15-0726.1, 2016. Lee, J.-Y. and Wang, B.: Future change of global monsoon in the CMIP5, Clim. Dynam., 42, 101–119, https://doi.org/10.1007/s00382-012-1564-0, 2014. Liu, J., Wang, B., Ding, Q., Kuang, X., Soon, W., and Zorita, E.: Centennial variations of the global monsoon precipitation in the last millennium: Results from ECHO-G model, J. Climate, 22, 2356–2371, https://doi.org/10.1175/2008JCLI2353.1, 2009. Liu, F., Chai, J., Wang, B., Liu, J., Zhang, X., and Wang, Z.: Global monsoon precipitation responses to large volcanic eruptions, Nature Scientific Reports, 6, 24331, https://doi.org/10.1038/srep24331, 2016. Lorenz, C. and Kunstmann, H.: The hydrological cycle in three state-of-the-art reanalyses: Intercomparison and performance analysis, J. Hydrometeorol., 13, 1397–1420, https://doi.org/10.1175/jhm-d-11-088.1, 2012. Mendez-Barroso, L. A. and Vivoni, E. R.: Observed Shifts in Land Surface Conditions during the North American Monsoon: Implications for a Vegetation-Rainfall Feedback Mechanism, J. Arid Environ., 74, 549–555, 2010. Méndez-Barroso, L. A., Vivoni, E. R., Watts, C. J., and Rodríguez, J. C.: Seasonal and interannual relations between precipitation, surface soil moisture and vegetation dynamics in the North American monsoon region, J. Hydrol., 377, 59–70, 2009. Mesinger, F., DiMego, G., Kalnay, E., Mitchell, K., Shafran, P. C., Ebisuzaki, W., Jovic, D., Woollen, J., Rogers, E., and Berbery, E. H.: North American Regional Reanalysis, B. Am. Meteorol. Soc., 87, 343–360, 2006. Miralles, D. G., Holmes, T. R. H., De Jeu, R. A. M., Gash, J. H., Meesters, A. G. C. A., and Dolman, A. J.: Global land-surface evaporation estimated from satellite-based observations, Hydrol. Earth Syst.
Sci., 15, 453–469, https://doi.org/10.5194/hess-15-453-2011, 2011. Miralles, D. G., Nieto, R., McDowell, N. G., Dorigo, W. A., Verhoest, N. E. C., Liu, Y. Y., Teuling, A. J., Dolman, A. J., Good, S. P., and Gimeno, L.: Contribution of water-limited ecoregions to their own supply of rainfall, Environ. Res. Lett., 11, 124007, https://doi.org/10.1088/1748-9326/11/12/124007, 2016. Mitchell, D. L., Ivanova, D., Rabin, R., Brown, T. J., and Redmond, K.: Gulf of California sea surface temperatures and the North American monsoon: Mechanistic implications from observations, J. Climate, 15, 2261–2281, 2002. Mo, K. C., Chelliah, M., Carrera, M. L., Higgins, R. W., and Ebisuzaki, W.: Atmospheric Moisture Transport over the United States and Mexico as Evaluated in the NCEP Regional Reanalysis, J. Hydrometeorol., 6, 711–728, 2005. Mohtadi, M., Prange, M., and Steinke, S.: Palaeoclimatic insights into forcing and response of monsoon rainfall, Nature, 533, 191–199, https://doi.org/10.1038/nature17450, 2016. Newman, A. and Johnson, R. H.: Mechanisms for Precipitation Enhancement in a North American Monsoon Upper-Tropospheric Trough, J. Atmos. Sci., 69, 1775–1792, https://doi.org/10.1175/JAS-D-11-0223.1, 2012. Newman, A. J. and Johnson, R. H.: Dynamics of a Simulated North American Monsoon Gulf Surge Event, Mon. Weather Rev., 141, 3238–3253, https://doi.org/10.1175/MWR-D-12-00294.1, 2013. Ordoñez, P., Ribera, P., Gallego, D., and Peña-Ortiz, C.: Major moisture sources for Western and Southern India and their role on synoptic-scale rainfall events, Hydrol. Process., 26, 3886–3895, https://doi.org/10.1002/hyp.8455, 2012. Pascale, S. and Bordoni, S.: Tropical and extratropical controls of Gulf of California surges and summertime precipitation over the southwestern United States, Mon. Weather Rev., 144, 2695–2718, https://doi.org/10.1175/MWR-D-15-0429.1, 2016. Pascale, S., Bordoni, S., Kapnick, S. B., Vecchi, G. A., Jia, L., Delworth, T.
L., Underwood, S., and Anderson, W.: The Impact of Horizontal Resolution on North American Monsoon Gulf of California Moisture Surges in a Suite of Coupled Global Climate Models, J. Climate, 29, 7911–7944, https://doi.org/10.1175/JCLI-D-16-0199.1, 2016. Perdigón-Morales, J., Romero-Centeno, R., Ordoñez Perez, P., and Barrett, B. S.: The midsummer drought in Mexico: perspectives on duration and intensity from the CHIRPS precipitation database, Int. J. Climatol., 38, 2174–2186, https://doi.org/10.1002/joc.5322, 2018. Pytlak, E., Goering, M., and Bennett, A.: Upper Tropospheric Troughs and Their Interaction with the North American Monsoon, 19th Conf. on Hydrology, 9–13 January 2005, San Diego, CA, USA, Amer. Meteor. Soc., 1–5, available at: https://ams.confex.com/ams/pdfpapers/85393.pdf (last access: 25 January 2019), 2005. Ralph, F. M. and Galarneau Jr., T. J.: The Chiricahua Gap and the Role of Easterly Water Vapor Transport in Southeastern Arizona Monsoon Precipitation, J. Hydrometeorol., 18, 2511–2520, https://doi.org/10.1175/JHM-D-17-0031.1, 2017. Ramage, C. S.: Monsoon meteorology, Academic Press, London, UK, 296 pp., 1971. Rogers, P. J. and Johnson, R. H.: Analysis of the 13–14 July Gulf Surge Event during the 2004 North American Monsoon Experiment, Mon. Weather Rev., 135, 3098–3117, https://doi.org/10.1175/MWR3450.1, 2007. Ruprecht, E. and Kahl, T.: Investigation of the atmospheric water budget of the BALTEX area using NCEP/NCAR reanalysis data, Tellus, 55A, 426–437, 2003. Schiffer, N. J. and Nesbitt, S. W.: Flow, Moisture, and Thermodynamic Variability Associated with Gulf of California Surges within the North American Monsoon, J. Climate, 25, 4220–4241, https://doi.org/10.1175/JCLI-D-11-00266.1, 2012. Schmitz, J. T. and Mullen, S. L.: Water vapor transport associated with the summertime North American monsoon as depicted by ECMWF analyses, J. Climate, 9, 1621–1634, 1996.
Seastrand, S., Serra, Y., Castro, C., and Ritchie, E.: The dominant synoptic-scale modes of North American monsoon precipitation, Int. J. Climatol., 35, 2019–2032, https://doi.org/10.1002/joc.4104, 2015. Simmonds, I., Bi, D., and Hope, P.: Atmospheric Water Vapor Flux and Its Association with Rainfall over China in Summer, J. Climate, 12, 1353–1367, 1999. Stensrud, D. J., Gall, R. L., Mullen, S. L., and Howard, K. W.: Model Climatology of the Mexican Monsoon, J. Climate, 8, 1775–1794, 1995. Stensrud, D. J., Gall, R., and Nordquist, M. K.: Surges over the Gulf of California during the Mexican Monsoon, Mon. Weather Rev., 125, 417–437, 1997. Stohl, A. and James, P.: A Lagrangian analysis of the atmospheric branch of the global water cycle. Part 1: Method description, validation, and demonstration for the August 2002 flooding in central Europe, J. Hydrometeorol., 5, 656–678, 2004. Stohl, A. and James, P.: A Lagrangian analysis of the atmospheric branch of the global water cycle. Part 2: Earth's river catchments, ocean basins, and moisture transports between them, J. Hydrometeorol., 6, 961–984, 2005. Stohl, A., Wotawa, G., Seibert, P., and Kromp-Kolb, H.: Interpolation errors in wind fields as a function of spatial and temporal resolution and their impact on different types of kinematic trajectories, J. Appl. Meteorol., 34, 2149–2165, 1995. Stohl, A., Forster, C., Frank, A., Seibert, P., and Wotawa, G.: Technical note: The Lagrangian particle dispersion model FLEXPART version 6.2, Atmos. Chem. Phys., 5, 2461–2474, https://doi.org/10.5194/acp-5-2461-2005, 2005. Trenberth, K. E., Fasullo, J. T., and Mackaro, J.: Atmospheric moisture transports from ocean to land and global energy flows in reanalyses, J. Climate, 24, 4907–4924, https://doi.org/10.1175/2011jcli4171.1, 2011. Vera, C., Higgins, W., Amador, J., Ambrizzi, T., Garreaud, R., Gochis, D., Gutzler, D., Lettenmaier, D., Marengo, J., Mechoso, C. R., Nogues-Paegle, J., Silva Dias, P. 
L., and Zhang, C.: Toward a Unified View of the American Monsoon Systems, J. Climate – Special Section, 19, 4977–4999, 2006. Vivoni, E. R., Moreno, H. A., Mascaro, G., Rodríguez, J. C., Watts, C. J., Garatuza-Payan, J., and Scott, R. L.: Observed Relation between Evapotranspiration and Soil Moisture in the North American Monsoon Region, Geophys. Res. Lett., 35, L22403, https://doi.org/10.1029/2008GL036001, 2008. Wang, B. and Ding, Q.: Changes in global monsoon precipitation over the past 56 years, Geophys. Res. Lett., 33, L06711, https://doi.org/10.1029/2005GL025347, 2006. Wang, B. and Ding, Q.: Global monsoon: dominant mode of annual variation in the tropics, Dynam. Atmos. Oceans, 44, 165–183, 2008. Wang, B., Liu, J., Kim, H. J., Webster, P. J., and Yim, S.-Y.: Recent change of the global monsoon precipitation (1979–2008), Clim. Dynam., 39, 1123–1135, https://doi.org/10.1007/s00382-011-1266-z, 2012. Wang, B., Li, J., Cane, M. A., Liu, J., Webster, P. J., Xiang, B., Kim, H.-M., Cao, J., and Ha, K.-J.: Toward Predicting Changes in the Land Monsoon Rainfall a Decade in Advance, J. Climate, 31, 2699–2714, https://doi.org/10.1175/JCLI-D-17-0521.1, 2018. Xiang, T., Vivoni, E. R., and Gochis, D. J.: Influence of Initial Soil Moisture and Vegetation Conditions on Monsoon Precipitation Events in Northwest Mexico, Atmosfera, 31, 25–45, 2018.
{}
Volume 408 - XV International Workshop on Hadron Physics (XVHadronPhysics) - Section Posters Renormalization group improved QCD thermodynamics M.B. Pinto*, J.L. Kneur and T.E. Restrepo Full text: pdf Pre-published on: August 01, 2022 Abstract We use the renormalization group optimized perturbation theory (RGOPT) to evaluate the quark contribution, $P_q$, to the QCD pressure at NLO (two-loop level). In this application the complete QCD pressure is then obtained simply by adding the perturbative NLO contribution from massless gluons to the resummed $P_q$. At the central scale $M = 2\pi T$ our complete QCD pressure, $P=P_q+P_g$, shows a remarkable agreement with lattice predictions for $0.25 \lesssim T \lesssim 1 \, {\rm GeV}$. As expected, the RG properties native to the RGOPT resummation significantly reduce the embarrassing scale dependence that plagues popular analytical methods such as standard thermal perturbative QCD and hard thermal loop perturbation theory (HTLpt). DOI: https://doi.org/10.22323/1.408.0040
{}
# Help needed with calculating distances from circles

1. Jun 3, 2005

### DannyH

I'm looking to find someone who can help me with my problem... My problem is that I need to remove material from linear tapered strips with a 1 inch round cutter. The circles need to connect to each other at half the section depth over the entire length of the strip ---> http://www.keone.com/hollow.gif [Broken] Because the diameter of the strips decreases, the circles need to be placed closer to each other so that the circles keep connecting at the heart of the strip. My problem is how to calculate the distances... The picture in the link will make it more clear. Thanks a lot! Danny

Last edited by a moderator: May 2, 2017

2. Jun 3, 2005

### HackaB

I don't know if there is a general formula, but if you fix the first circle at a certain point along the strip, you can calculate the distances between the successive circles iteratively. Take a look at the image I attached. Fixing the first circle defines the angle $$\theta_1$$. That first intersection is the only one you have to measure. If R is the radius of the circles (I guess .5" in your case?), then the distance between circles 1 and 2 is $$d_{12} = 2 R \cos \theta_1$$ To find $$\theta_2$$, you can use the known slope "m" of the half-depth line. Just compute "rise over run" from the first intersection (circles 1 and 2) to the second (circles 2 and 3): $$m = \frac{\Delta y}{\Delta x} = \frac{R\sin \theta_1 - R\sin \theta_2}{R\cos \theta_1 + R\cos \theta_2} = \frac{\sin \theta_1 - \sqrt{1 - \cos^2 \theta_2}}{\cos \theta_1 + \cos \theta_2}$$ Since $$\theta_1$$ is known, you can solve this quadratic equation for $$\cos \theta_2$$. Then the distance between circles 2 and 3 is $$d_{23} = 2R \cos \theta_2$$ You can repeat this process to get the successive distances. This seems awfully tedious though. Maybe someone else will be inspired to come up with a better way.

File size: 13.7 KB Views: 59
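HackaB's recurrence is easy to automate. The sketch below is mine (plain Python, hypothetical function name): squaring the slope equation gives a quadratic in $\cos\theta_2$, whose relevant root is solved in closed form at each step. The sign of m follows the rise-over-run convention of the post, so a strip that tapers in the direction of travel corresponds to a negative slope here.

```python
import math

def successive_distances(R, m, theta1, n):
    """Iterate HackaB's recurrence: given the circle radius R, the slope m
    of the half-depth line, and the measured first angle theta1, return the
    n successive centre-to-centre distances d_12, d_23, ..."""
    distances = []
    theta = theta1
    for _ in range(n):
        distances.append(2.0 * R * math.cos(theta))
        # Solve m = (sin(theta) - sin(theta2)) / (cos(theta) + cos(theta2))
        # for c = cos(theta2).  With A = sin(theta) - m*cos(theta), squaring
        # gives (1 + m^2) c^2 - 2 A m c + (A^2 - 1) = 0; we take the root
        # that reduces to cos(theta) when m = 0 (an untapered strip).
        A = math.sin(theta) - m * math.cos(theta)
        c = (A * m + math.sqrt(1.0 + m * m - A * A)) / (1.0 + m * m)
        theta = math.acos(c)
    return distances
```

A quick sanity check: with m = 0 (no taper) every distance comes out equal to $2R\cos\theta_1$, and with a negative slope the distances shrink step by step, matching the intuition that the circles pack closer as the strip narrows.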
{}
# Referencing Equations

Is there a way to give equations/math latex text an equation number and make it reference-able?

1 Like

Use the following construction: $$e=mc^2 \tag{1}\label{name}$$ You can now refer to this formula: $$\eqref{name}$$

2 Likes

This is really interesting! Thank you! I slightly upgraded it: using $\eqref{eq1}$ so that the equation label can be referred to inline. $$e=mc^2 \tag{1} \label{eq1}$$ Einstein provides $\eqref{eq1}$ in his paper.

2 Likes

This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.

In case someone runs into this question, please note that this does not work anymore due to an upstream bug. For more info see: Automatic equation numbering (Latex Math) - #27 by WhiteNoise

1 Like
{}
# How to set up the integral for finding the volume of a solid in three dimensions? While studying for my Calculus 3 exam I have gotten stuck on this particular problem, primarily in the set up. Find the volume of the solid under the surface z=y+1 and above the region bounded by y=ln(x), y=0, x=0, and y=1. I know the problem requires a double integral around the given equation z=y+1 but I'm not sure of what points to use for the integrals and at which time. My assumption is to integrate from x=0 to x=e^y and then integrate a second time from y=0 to y=1. I got x=e^y from y=ln(x) => x=e^y. - Is this figure bounded below by anything? Right now, it looks like it extends forever in the negative z-direction. –  RecklessReckoner Apr 29 '13 at 17:50 @RecklessReckoner The problem doesn't specify so I have been assuming it does not. –  jrquick Apr 29 '13 at 17:53 Your limits of integration are correct. To see this, sketch the given region of the $x$-$y$ plane. It is contained in the first quadrant and has a trapezoidal shape with vertices $(0,0)$, $(0,1)$, $(e,1)$, and $(1,0)$. Note that the region is best described by horizontal slices (with vertical slices, you'd need to divide the region into two parts). Thus, integrating with respect to $x$ first is appropriate. So, you're thinking of the region as being a bunch of horizontal strips stacked on top of each other. In the inner integral, you integrate along a fixed strip in the $x$ direction (so the inner integral is with respect to $x$). Then, in the outer integral, you integrate in the vertical direction from where the first strip is located to where the last one is. The horizontal strips range from $y=0$ to $y=1$. With $y$ fixed, a horizontal strip has left edge $x=0$ and right edge $x=e^y$. In the end, you wind up having to evaluate $\int_0^1\int_0^{e^y} y+1\,dx\,dy$. (Note you're integrating the function that gives the height to the top of the solid at the point $(x,y)$. Here, that's $z=y+1$.) 
Calculating this is routine, though an integration by parts is needed for the outer integral. - The point I was making in my comment above is that, in order to integrate $z = y + 1$ over the region, there is an implicit assumption that the solid is bounded below by $z = 0$. This would generally be included in the problem statement. The volume integration is really being conducted for the difference in $z$ over each infinitesimal area on the xy-plane. Were this a surface integral for $y + 1$, it would be unnecessary to say anything, but to interpret that as a volume integral, it is understood that the xy-plane is the "bottom" of the solid. –  RecklessReckoner Apr 29 '13 at 18:04 @RecklessReckoner From the problem statement, I interpreted the phrase "above the region ..." to mean the solid is bounded below by the plane $z=0$. More precisely (maybe), the bottom is flat and encompasses said region. It seems unambiguous to me. –  David Mitra Apr 29 '13 at 18:09 Sketch the figure. If the volume is bounded from below by the plane $z=0$, the volume is $$V = \int_0^1 dy \: \int_0^{e^y} dx \: \int_0^{y+1} dz$$ -
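As a sanity check on this setup, the integral can be evaluated numerically. The following sketch (plain Python; the function name is mine) uses the fact that for fixed $y$ the integrand $y+1$ is constant on $0 \le x \le e^y$, so each horizontal slice contributes $(y+1)e^y$; summing the slices with the midpoint rule should approach the exact value $\int_0^1 (y+1)e^y\,dy = e$, since $y e^y$ is an antiderivative of $(y+1)e^y$.

```python
import math

def volume(n=2000):
    """Midpoint rule for V = int_0^1 int_0^{e^y} (y+1) dx dy.

    The inner integral is immediate: for fixed y the integrand is constant,
    so the slice at height y contributes (y + 1) * e^y.  The outer sum then
    approximates int_0^1 (y+1) e^y dy, whose exact value is e."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        y = (i + 0.5) * h          # midpoint of the i-th y-slice
        total += (y + 1.0) * math.exp(y) * h
    return total
```

With n = 2000 the midpoint rule agrees with math.e to better than 1e-6, confirming both the limits of integration and the closed-form answer.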
{}
[NTG-context] Side figure bug in mkiv?

Henri Menke henrimenke at gmail.com
Fri Jul 19 01:02:32 CEST 2019

On 19/07/19 9:36 AM, Duncan Hothersall wrote:
> On Thu, 18 Jul 2019 at 22:30, Henri Menke <henrimenke at gmail.com> wrote:
>>
>> Your formatting obscures the problem because compiling this example
>> works fine. I think you are starting a new paragraph before {\bf ...}.
>> That is a well-known problem and there are posts about it on the mailing
>> list every once in a while. The sidefigure mechanism uses \parshape to
>> make the paragraph flow around the figure.
>
> Very many thanks Henri for your patient explanation, and my apologies for
> inadvertently asking a FAQ. I had thought the effect hadn't happened with
> the same content in mkii but it must have been a happenstance of different
> formatting.

It is maybe an FAQ but I don't think it has been documented anywhere properly. I added my explanation to the Wiki page about unexpected behaviour.

https://wiki.contextgarden.net/Unexpected_behavior#The_.E2.80.9Cparagraph_in_a_group.E2.80.9D_problem

Cheers, Henri

> Thanks again for the clear solution.
>
> Duncan
{}
Trying to find the CDF of $X+Y$ when $X\sim exp(\alpha)$ and $Y \sim exp(\beta)$ (independent) without convolution, but it doesn't seem to work

The textbook I am using applies convolution to find the CDF of $$X+Y$$ when $$X\sim exp(\alpha)$$, $$Y\sim exp(\beta)$$, and X and Y are independent. However, I have no background with convolutions at all (although my lecturer assumes I do, but that's another issue), so I am trying to figure this out without using convolutions, and I would really appreciate it if someone could point out what is wrong in my approach. Of course I will also do my best to learn the convolution technique myself, but I am still very curious about my mistake here. So: Since both X and Y are non-negative, I am only interested in the following region: The numbers are arbitrary of course. So my integration of the region is: $$F_Z(t)=P(Z\leq t)=P(X+Y\leq t)=\int_{0}^{t} \int_{0}^{t-x} \alpha e^{-\alpha x} \beta e^{-\beta y} dy dx$$ The result of the integration is this Wolfram While the convolution result is this Wolfram

• $\int f_X(x)f_Y(t-x)\,\mathrm{d}x$ gives the pdf $f_{X+Y}(t)$, and $\mathbb{P}(X+Y\leq t)=F_{X+Y}(t)$ is the cdf. – user10354138 Feb 16 at 17:18

In the convolution you have calculated the density of $$Z = X + Y$$, and in your result you essentially calculate the CDF of $$Z = X + Y$$. Differentiating your result with respect to $$t$$ indeed leads to the same thing! Since you are new in this field, let me explain what happens in a discrete setting. For example, assume $$X,Y\geq 0$$ take on integer values and you want to know the distribution of $$Z = X+Y$$. Then what you generally do is write it as follows $$P(Z=z) = P(X+Y = z) = P(X=z - Y) \\ = P((X=z,Y=0)\ \text{or}\ (X=z-1,Y=1)\ \text{or}\ \cdots\ \text{or}\ (X=0,Y=z)) \\ = \sum_{i=0}^{z}P(X=z-i,Y=i) = \sum_{i=0}^{z}P(X=z-i)P(Y=i).$$ The final sum is the convolution of the distributions of $$X$$ and $$Y$$.
In a continuous setting, this would generalise to $$f_Z(z) = \int_0^z f_X(z-y)f_Y(y)\,\mathrm{d} y.$$
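The relationship between the two Wolfram results can be verified numerically: differentiating the CDF produced by the double integral recovers the density produced by the convolution. Here is a quick Python sketch (function names are illustrative) using the standard closed forms for $\alpha \neq \beta$:

```python
import math

# Closed forms for Z = X + Y with X ~ Exp(alpha), Y ~ Exp(beta), alpha != beta.
def F(t, a, b):
    # CDF of Z: what the double integral over {x + y <= t} evaluates to
    return 1.0 - (b * math.exp(-a * t) - a * math.exp(-b * t)) / (b - a)

def f(t, a, b):
    # density of Z: what the convolution integral evaluates to
    return a * b / (b - a) * (math.exp(-a * t) - math.exp(-b * t))

# The derivative of the CDF should match the convolution density.
a, b, t, h = 1.0, 2.0, 1.0, 1e-6
numeric_derivative = (F(t + h, a, b) - F(t - h, a, b)) / (2 * h)
print(abs(numeric_derivative - f(t, a, b)) < 1e-6)  # True
```

So neither computation is wrong; one is the CDF and the other is its derivative, the pdf.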
{}
# Custom NetEncoder for lower resolution image input in a pre-trained Inception network I have come across this neat explanation on how to adjust a pre-trained Inception neural net to an arbitrary number of classes by truncating the top layers. However, my input data does not come in $299 \times 299$ RGB images, but rather in $100 \times 100$ RGB images. I want to essentially get rid of the input layer, and replace it by a custom one ($3 \times 100 \times 100$ instead of $3 \times 299 \times 299$). Here would be the loaded MXNet model: net = NeuralNetworks`ImportMXNetModel[ NotebookDirectory[] <> "model//Inception-7-symbol.json", NotebookDirectory[] <> "model//Inception-7-0001.params"]; Which we then adapt to a custom number of classes (75): net2 = NetGraph[{Take[net, {NetPort["Input"], "flatten"}], 75, SoftmaxLayer[]}, {1 -> 2 -> 3}, "Input" -> NetEncoder[{"Image", {100, 100}, ColorSpace -> "RGB"}], "Output" -> NetDecoder[{"Class", Range[0, 74]}]] However, as expected, this throws an error, because, from what I understand, I am simply adding a new Input layer, and not modifying the one from net. How could I adapt the existing model (net) to handle inputs (i.e., images) with a lower resolution? I know this can be done in Python's keras module by re-wiring the inputs to a custom Input layer, but I am not sure how to proceed in Mathematica.
{}
# the radius of an atom of gold au is about 1 35 ##### The problem The radius of an atom of gold (Au) is about 1.35 Å. (a) Express this distance in nanometers (nm) and in picometers (pm). (b) How many gold atoms would have to be lined up to span 1.0 mm? (c) If the atom is assumed to be a sphere, what is the volume in cm³ of a single Au atom? ##### (a) Unit conversion Since 1 Å = 0.1 nm = 100 pm, a radius of 1.35 Å equals 0.135 nm, or 135 pm. This agrees with the tabulated atomic radius of gold, 135 pm (for comparison: hydrogen 120 pm, helium 140 pm, lithium 182 pm). ##### (b) Atoms lined up to span a distance Since the length (diameter) of a sphere is 2r, the diameter of one atom is 2 × 1.35 Å = 2.7 Å = 2.7 × 10⁻⁷ mm. Spanning 1.0 mm therefore requires 1.0 ÷ (2.7 × 10⁻⁷) ≈ 3.7 × 10⁶ atoms; spanning 5.0 mm requires 5.0 ÷ (2.7 × 10⁻⁷) ≈ 1.85 × 10⁷ atoms. ##### (c) Volume of a single atom Assuming the gold atom is spherical, the volume is calculated by this formula: V = (4/3)πr³. With r = 1.35 Å = 1.35 × 10⁻⁸ cm, V = (4/3)π(1.35 × 10⁻⁸ cm)³ ≈ 1.03 × 10⁻²³ cm³. ##### Related crystallographic data In gold's crystal structure, the unit cell dimension is 2√2 R in terms of the gold atom radius R. Such calculations also use gold's density (19.32 g/cm³), molar mass (197.0 g/mol), and Avogadro's number (6.022 × 10²³ atoms/mol).
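Collecting the arithmetic above into one short script (values rounded; assumes 1 Å = 10⁻⁷ mm = 10⁻⁸ cm; variable names are illustrative):

```python
import math

r_angstrom = 1.35                     # radius of a gold atom in angstroms

# (a) unit conversions: 1 angstrom = 0.1 nm = 100 pm
r_nm = r_angstrom * 0.1               # 0.135 nm
r_pm = r_angstrom * 100               # 135 pm

# (b) atoms spanning 1.0 mm: each atom contributes one diameter
diameter_mm = 2 * r_angstrom * 1e-7   # 2.7e-7 mm
n_atoms = 1.0 / diameter_mm           # about 3.7 million atoms

# (c) volume of one (assumed spherical) atom in cm^3
r_cm = r_angstrom * 1e-8
volume_cm3 = 4.0 / 3.0 * math.pi * r_cm**3   # about 1.03e-23 cm^3

print(round(n_atoms / 1e6, 1), volume_cm3)
```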
{}
# stri_join ##### Concatenate Character Vectors These are stringi's equivalents of the built-in paste function. stri_c and stri_paste are aliases for stri_join. ##### Usage stri_join(..., sep = "", collapse = NULL, ignore_null = FALSE) stri_c(..., sep = "", collapse = NULL, ignore_null = FALSE) stri_paste(..., sep = "", collapse = NULL, ignore_null = FALSE) ##### Arguments ... character vectors (or objects coercible to character vectors) whose corresponding elements are to be concatenated sep a single string; separates terms collapse a single string or NULL; an optional result separator ignore_null a single logical value; if TRUE, then empty vectors on input are silently ignored ##### Details Vectorized over each atomic vector in .... Unless collapse is NULL, the result will be a single string. Otherwise, you get a character vector of length equal to the length of the longest argument. If any of the arguments in ... is a vector of length 0 (not to be confused with vectors of empty strings) and ignore_null=FALSE, then the result is a 0-length character vector. If collapse or sep has length greater than 1, then only the first string will be used. If there are missing values in any of the input vectors, the corresponding element of the result is set to NA. Note that this behavior is different from paste, which treats missing values as ordinary strings like "NA". Moreover, as usual in stringi, the resulting strings are always in UTF-8. ##### Value Returns a character vector. 
Other join: stri_dup, stri_flatten, stri_join_list • stri_c • stri_join • stri_paste ##### Examples library(stringi) stri_join(1:13, letters) stri_join(1:13, letters, sep='!') stri_join(1:13, letters, collapse='?') stri_join(1:13, letters, sep='!', collapse='?') stri_join(c('abc', '123', '\u0105\u0104'),'###', 1:5, sep='...') stri_join(c('abc', '123', '\u0105\u0104'),'###', 1:5, sep='...', collapse='?') do.call(stri_c, list(c("a", "b", "c"), c("1", "2"), sep='!')) do.call(stri_c, list(c("a", "b", "c"), c("1", "2"), sep='!', collapse='\$'))
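For readers more familiar with Python, the recycling-and-collapse behaviour described in the Details section can be imitated in a few lines. This is only an illustrative sketch (`stri_join_like` is a hypothetical name, not part of stringi), not a full reimplementation:

```python
from itertools import cycle, islice

def stri_join_like(*vectors, sep="", collapse=None):
    # Imitation of stringi's vectorized recycling: shorter vectors are
    # recycled to the length of the longest one before joining.
    if any(len(v) == 0 for v in vectors):
        return []                      # a 0-length input yields a 0-length result
    n = max(len(v) for v in vectors)
    recycled = [list(islice(cycle(v), n)) for v in vectors]
    joined = [sep.join(parts) for parts in zip(*recycled)]
    # A non-None collapse reduces the result to a single string.
    return collapse.join(joined) if collapse is not None else joined

print(stri_join_like(["a", "b", "c"], ["1", "2"], sep="!"))
# ['a!1', 'b!2', 'c!1']
```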
{}
Stable Equilibrium Point Let's discuss a problem where we find a stable equilibrium point, useful for Physics Olympiad. The Problem: Find the stable equilibrium point corresponding to the potential (U(x)=k(2x^3-5x^2+4x)), where (k) is a positive constant. Discussion: $$U(x)=k(2x^3-5x^2+4x)$$ Differentiating with respect to x, $$\frac{dU}{dx}=k(6x^2-10x+4)$$ Equilibrium points satisfy (\frac{dU}{dx}=0), that is (6x^2-10x+4=0), which gives (x=1) and (x=\frac{2}{3}). Differentiating again, $$\frac{d^2U}{dx^2}=k(12x-10)$$ Putting (x=1), $$\frac{d^2U}{dx^2}=+2k$$ This is positive because (k) is positive, so (U) has a minimum there; hence (x=1) corresponds to stable equilibrium. (At (x=\frac{2}{3}) we get (\frac{d^2U}{dx^2}=-2k<0), a maximum, so that equilibrium is unstable.)
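The calculus can be checked with a short script (taking k = 1 for concreteness; the code and its names are purely illustrative):

```python
import math

k = 1.0  # any positive constant works; the signs below do not depend on it

def dU(x):   # U'(x) = k(6x^2 - 10x + 4)
    return k * (6 * x**2 - 10 * x + 4)

def d2U(x):  # U''(x) = k(12x - 10)
    return k * (12 * x - 10)

# Solve U'(x) = 0 via the quadratic formula: 6x^2 - 10x + 4 = 0
disc = math.sqrt(10**2 - 4 * 6 * 4)           # sqrt(4) = 2
roots = [(10 - disc) / 12, (10 + disc) / 12]  # x = 2/3 and x = 1

for x in roots:
    kind = "stable (minimum)" if d2U(x) > 0 else "unstable (maximum)"
    print(round(x, 4), kind)
# 0.6667 unstable (maximum)
# 1.0 stable (minimum)
```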
{}
# Edmonds' Blossom Algorithm Part 1: Cast of Characters Many problems in business and science can be cast in terms of graph matching. Given an undirected graph, a matching is a subgraph in which every node has a degree of one. It's often important to find a matching that contains the most possible edges. This is the maximum matching problem. Although the concept is easy to understand, algorithms for finding maximum matchings tend to be complex. Jack Edmonds reported the first efficient approach in the 1960s, a landmark in computer science history. His "Blossom algorithm" has inspired numerous variations and alternatives over the last several decades. A recurring theme in this work is the tradeoff between conceptual complexity and efficiency. The Blossom algorithm hits a sweet spot. It's complex enough to be general, but simple enough to be widely implemented. Even so, wrapping one's head around the Blossom algorithm is no easy task. A lot has been written on the topic, but mainly using the tools of symbolic logic and mathematics. This is fine for readers with a background in math and computer science. But for those lacking such background, the Blossom algorithm poses a formidable, seemingly insurmountable, challenge. This article is the first in a three-part series that takes a different approach. Part 1 presents a short background and a visual guided tour of the components of Edmonds' Blossom algorithm. Part 2 shows how these components work together through several illustrative examples. Part 3 connects the conceptual background to a working implementation written in Rust. • Part 1: Cast of Characters • Part 2: Visual Examples (TBD) • Part 3: Implementation (TBD) ## Backstory Before introducing the members of the cast, a brief summary of what brings them together will be helpful. The Blossom algorithm accepts a graph and an arbitrary matching over that graph, which may be empty. 
The algorithm grows the matching iteratively, attempting to find a way to add one additional edge at each step. Failure to do so terminates the algorithm. An implementation for a maximum_matching function in Rust might start off as something like:

// hypothetical function signature in Rust
fn maximum_matching(graph: &Graph, matching: &Matching) {
    // 1. find the next augmenting path
    // 2. if path exists, augment matching with it, then call maximum_matching
    // 3. otherwise, return
}

Given a Graph and a Matching, the maximum_matching function attempts to increase the number of edges in Matching to the greatest extent. The nature of a Matching deserves some attention at this point. As noted previously, a matching is a subgraph in which all nodes have degree one. The term presumably follows from the idea of two nodes being "matched" together, as two members of an online dating service might be matched. The key concept in the maximum_matching function is iterative augmentation. Augmentation is the process of increasing the edge count in a Matching. Conceptually, there are two ways to do this: 1. Add an edge, neither of whose member nodes are in the Matching. 2. Add a path, some of whose nodes are already in the matching. Option (1) is simple enough. Pick two connected nodes in the graph and add the edge between them to the matching. The problem is that such trivial augmentation might not be possible. The rightmost panel in the above figure gives one example. Option (2) solves this problem by not adding a single edge, but rather a path. Recall that a path is a type of subgraph that can be expressed as an ordered set of adjacent nodes. An edge exists between each adjacent pair of nodes, but it need not be explicitly represented. In particular, option (2) calls for the identification of an augmenting path. An augmenting path is an acyclic path containing alternating unmatched and matched edges. 
Such a path must necessarily be bracketed by unmatched nodes and must have an odd edge count ("odd path"). Although edges alternate in their matched status, all interior nodes will be matched. For this reason, the problem of finding an augmenting path largely boils down to finding an alternating path of matched nodes for which unmatched bracketing nodes can be identified. The rules for augmenting a matching with an augmenting path are simple. For each edge in the path, apply the symmetric difference operator to the matching. Symmetric difference, also known as "XOR" and given the circle plus symbol (⊕), leaves an edge in the matching if it is present in exactly one of the path or the matching, but not both. Symmetric difference appears in a few graph theory contexts such as the derivation of cycles from a basis. The goal of the Blossom algorithm is to iteratively discover augmenting paths until none can be found. Berge's lemma ensures that the lack of an augmenting path is a necessary and sufficient condition for a maximum matching. This explains the overall structure of the maximum_matching function. As previously described on this blog, maximum matching is conceptually simple provided the source graph is bipartite (no odd cycles). Things get more complicated if an odd cycle is present. The crux of the problem is that without higher-order information it's not possible to know which of the two possible augmenting paths through an odd cycle should be followed. The Blossom algorithm provides a solution. When an odd cycle is encountered, the source graph and current matching are "contracted," resulting in a new graph and matching in which the odd cycle is replaced by a single node. If an augmenting path is found using the contracted graph and matching, the blossom is "lifted" and processing continues as usual. ## Cast of Characters The Blossom algorithm is complex in that it brings together several actors to find and apply an augmenting path. These actors are: • Graph. 
A simple, undirected, unweighted graph. • Path. A connected subgraph in which all nodes are either degree one or two. • Matching. A subgraph in which all nodes have degree one. • Forest. A directed acyclic subgraph. • Marker. A utility capable of independently marking edges and nodes. • Blossom. A utility capable of "contracting" a Graph or Matching given an odd cycle, and "lifting" a Path. What follows is a detailed description of each role in the Blossom algorithm drama. The responsibilities and operation of each are illustrated. ## Graph One of the two inputs into the Blossom algorithm is a Graph. In general, a Graph is a set of nodes and edges between them. Within the Blossom algorithm, a Graph contains no self-edges ("loops") or multiple edges. Despite the utility of graphs in software, relatively little attention has been paid to the fundamental operations that the Graph type must support. Earlier this year I attempted to address this issue in A Minimal Graph API. This article outlines those methods that all graph-like objects should support. To recap, the Graph type supports the following nine methods, which have been pared down from the original eleven: • nodes Iterates all nodes. • order Returns the number of nodes. • has_node Takes one parameter, returning true if it's a member, or false otherwise. • degree Takes one parameter, returning its count of outbound edges. • neighbors Takes one parameter, iterating the nodes connected to it by an outbound edge. • size Returns the number of edges. • edges Iterates all edges. • has_edge Takes two parameters, returning true if an edge exists from the first to the second, or false otherwise. • is_empty Returns true if the graph order is zero. Optionally, a Graph may include one or more build methods. These methods convert a system primitive (such as a struct or object literal) into a Graph consistent with it. ## Path A Path is an undirected subgraph expressed as an ordered set of nodes. 
For each pair of adjacent nodes, a corresponding edge can be found in the parent Graph. Path supports the following operations: • has_node Returns true if the node is present. • length The number of nodes. A Path could be implemented as a Graph, but this offers little practical advantage. A much simpler approach is to use an array or its equivalent. It's also convenient for Path to support operations that create a Path in one step. ## Marker For internal bookkeeping, the Blossom algorithm needs a way to independently mark nodes and edges. This responsibility falls to Marker. Its functionality can either be rolled into Graph or provided through a separate Marker type. The latter approach offers the most generality. Marker supports the following four operations. • mark_node. Sets the status of a node to "marked." • has_node. Returns true if a node has been previously marked. • mark_edge. Sets the status of an edge defined by a source and target node as "marked." • has_edge. Returns true if an edge has been marked in either direction. ## Matching Matching is the star of the show. Its main responsibility is to maintain a set of edges, ensuring that no node appears in more than one edge. To do this, Matching supports the following operations: • pair. Accepts two nodes, creating an edge between them. If either node is already paired, the edge is broken and the mate deleted. It is an error to pair two nodes that are already paired with each other. • augment. Accepts a path. For each alternating pair of nodes in the path, pair is called. For this reason, it is an error to call augment on a Path of even edge count (odd node count). • has_node. Returns true if the node is paired with another. • mate. Returns the node paired to the query node. • edges. Iterates the edges comprising the Matching. ## Blossom Blossom plays the most complex role. It's not in every scene, but when it does appear onstage, it interacts with most of the other players. 
The main job of Blossom is to deal with cycles having odd edge counts ("odd cycles"). When an odd cycle is found, a Blossom is constructed from its Path and an ID. These two pieces of information are used during contraction of a Graph and Matching. After an augmenting path has been found, Blossom provides the means to "lift" itself out to yield an augmenting path over the uncontracted graph. Blossom supports the following operations. • contract_graph. Accepts a Graph, returning a new graph in which the blossom nodes have been replaced by the internal identifier. • contract_matching. Accepts a Matching, returning a new Matching in which nodes and edges appearing in the blossom cycle have been removed. • lift. Accepts a Path containing a blossom identifier. Returns a Path in which the Blossom identifier is replaced by the constituent nodes. ## Forest The Blossom algorithm constructs a Forest for internal bookkeeping. A Forest is a directed acyclic graph composed of trees. Each tree is rooted at a single node. A Forest is built by first adding a root node, then adding descendants. To support its functionality, Forest exposes the following operations: • add_root. Adds a new root node, which will have no parent. It is an error to add the same root twice. • add_edge. Builds an edge between a previously-added node and a new child. It is an error to add the same child twice, even if ultimately attached to a different root. • has_node. Returns true if the node is in the forest. • path. Returns the path from a child node to its ultimate root. The path of a root node contains one member - the root node itself. • even_nodes. Iterates those nodes in the Forest that lie an even number of edges away from their roots. Zero counts as even, so roots are even. ## Other Resources The discussion here is based on numerous sources. Two in particular are worth mentioning: the Wikipedia article and an article based on it by James S. Plank. The Wikipedia article is notable for its pseudocode. 
Although not complete, it does reduce the algorithm to something that could be put into practice in a way that few others do. Plank's article is useful for the clarity of its illustrations and its use of examples. ## Conclusion What makes the Blossom algorithm hard to understand are all the moving parts. This article takes the first step toward explaining Blossom by offering a high-level view of the parts and a sense of their relationship to each other. The next article in this series will discuss in detail how these parts work together with illustrative examples.
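As a concrete footnote to the augmentation rule described earlier: representing both the matching and the path as sets of undirected edges makes the symmetric-difference step a one-liner. This is only an illustrative Python sketch with hypothetical names, not the Rust implementation the series builds toward:

```python
def path_edges(path):
    # An ordered node list [n0, n1, ..., nk] implies edges {n0,n1}, {n1,n2}, ...
    return {frozenset(pair) for pair in zip(path, path[1:])}

def augment(matching, path):
    # Symmetric difference (XOR): keep edges present in exactly one of the
    # two sets. An augmenting path has one more unmatched edge than matched
    # edges, so the result has exactly one more edge than the input matching.
    return matching ^ path_edges(path)

# Matching {2-3}; augmenting path 1-2-3-4 (unmatched, matched, unmatched).
m = {frozenset({2, 3})}
m2 = augment(m, [1, 2, 3, 4])
print(sorted(sorted(e) for e in m2))  # [[1, 2], [3, 4]]
```

Note how the matched edge 2–3 drops out and the two unmatched edges enter, growing the matching from one edge to two.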
{}
Box Topology may not form Categorical Product in the Category of Topological Spaces Theorem Let $\family {\struct{X_i, \tau_i}}_{i \mathop \in I}$ be an $I$-indexed family of topological spaces. Let $X$ be the cartesian product of $\family {X_i}_{i \mathop \in I}$, that is: $\displaystyle X := \prod_{i \mathop \in I} X_i$ Let $\tau$ be the box topology on $X$. Then $\tau$ may not be the categorical product in the category of topological spaces. Proof $\blacksquare$
{}
eLibrary of Mathematical Institute of the Serbian Academy of Sciences and Arts > Home / All Journals / Journal / Kragujevac Journal of Mathematics
Publisher: Prirodno-matematički fakultet Kragujevac, Kragujevac. ISSN: 1450-9628. Issue: 40_1. Date: 2016. Journal Homepage

Concircular Vector Fields and Pseudo-Kaehler Manifolds (7-14), Bang-Yen Chen. Keywords: Pseudo-Kaehler manifold; concircular vector field; concurrent vector field. MSC: 53C55; 53B30; 53B35

New Weighted Integral Inequalities for Twice Differentiable Convex Functions (15-33), M. Z. Sarikaya and S. Erden. Keywords: Hermite-Hadamard inequality; Ostrowski inequality; Trapezoid inequality; convex function; Hölder inequality; Riemann-Liouville integrals. MSC: 26D07; 26D15

On the Characterization of a Class of Four Dimensional Matrices and Steinhaus Type Theorems (35-45), M. Yeşi̇lkayagi̇l and F. Başar. Keywords: Double sequence; double series; Pringsheim convergence; Steinhaus type theorems; matrix transformations. MSC: 40C05

On Parallel Ruled Surfaces in Galilean Space (47-59), M. Dede and C. Ekici. Keywords: Parallel surfaces; Galilean space; ruled surfaces. MSC: 53A35; 53Z05

Direct Limit Derived from Twist Product on $\Gamma$-Semihypergroups (61-72), S. Ostadhadi-Dehkordi. Keywords: $\Gamma$-semihypergroup; left (right) $(\Delta;G)$-set; twist product; push out system; direct system; direct limit. MSC: 20N15

On a Conjecture of Harmonic Index and Diameter of Graphs (73-78), J. Amalorpava Jerline and L. Benedict Michaelraj. Keywords: Harmonic index; diameter; unicyclic graph. MSC: 05C07; 05C12

Extremal Energy of Digraphs with Two Linear Subdigraphs (79-89), Rashid Farooq, Mehtab Khan and Yasir Masood. Keywords: Energy of a digraph; bicyclic digraphs; linear subdigraphs. MSC: 05C35; 05C50

On $\beta$-absolute Convergence of Vilenkin-Fourier Series with Small Gaps (91-104), Bhikha Lila Ghodadra. Keywords: Hölder's inequality; Jensen's inequality; Wiener-Ingham inequality; function of generalized bounded fluctuation; Vilenkin-Fourier series; $\beta$-absolute convergence; lacunary series with small gaps. MSC: 42C10; 26D15; 43A25; 43A40; 43A75

Note About Asymptotic Behaviour of Positive Solutions of Superlinear Differential Equation of Emden-Fowler Type at Zero (105-112), Marija Mikić. Keywords: Superlinear Emden-Fowler differential equation; asymptotic behavior of solutions; asymptotic equivalence. MSC: 34E10; 34C41

Inequalities for the Polar Derivative of a Polynomial with Restricted Zeros (113-124), Ahmad Zireh and Mahmood Bidkham. Keywords: Polynomial; inequality; maximum modulus; polar derivative; restricted zeros. MSC: 30A10; 30C10; 30D15
{}
## Is the Principles and Practice of Engineering Exam a Barrier Against Women? The intrepid Toni Airaksinen at Campus Reform has written an article highlighting the research of Drs. Julia Keen & Anna Salvatorelli on this subject.  The statistics are interesting and so are their recommendations for further research: This study focused on pass rate, and the resultant disparity is only the first step. Additional research should be conducted to identify why women are not passing the PE exam at an equal percentage rate as men. This research should include: • Identifying biases in the exam itself • Examining the timing of administration of the exam in an engineer’s career progression • Exploring the likelihood of women to retake the exam compared to men after failing since the number of attempts was not recorded within the data collected • Identify factors that may contribute to higher pass rate for women in some states compared to others. As someone who has taught civil engineering for more than a decade at the undergraduate level, I find this of more than passing interest.  For me, it was also an interesting moment, because I saw this just after I had returned from the dedication of the new headquarters for Division 2 of the Tennessee Department of Transportation, where most of my students who work there are female. Let me first set forth their “bottom line” cumulative statistics (I strongly urge those of you who can get access to their paper to do so): 1. About 20% of the people who take the “Principles and Practices” exam are women.  That tracks pretty well with the number of women in my classes. 2. 51.5% of the women pass the test on the first try, while 63.1% of the men do. With that out of the way, I’d like to make some observations. 1. My female students tend to be a very diligent and competent group.  In many ways an engineering curriculum is more of an endurance match than anything else; the women “tough it out” at least as well as the men. 2. 
I’ve never noticed women having more difficulty with tests than men in my classes.  That’s saying a lot because my tests tend to be bizarre, as my students will attest. 3. Women in civil engineering have some built-in advantages because of the diffuse structure of the system by which structures get built and their socialization skills, as I explain in this 2014 post.  Because of the nature of our society, engineers tend to get stuck in the caboose on the train of respectability; I think that women are a significant part of the key to change that situation. Especially considering #2, I find it hard to believe that the test is intrinsically biased against women.  So why does this disparity exist?  Our researchers give us four options, and my gut tells me that the second one is the most likely. My reasoning is simple.  Generally speaking, most engineering students take their first exam (the FE exam) while they’re in undergraduate school.  After they acquire four years of experience, they can apply for the privilege of taking the P&P exam.  If they pass it and meet other requirements, they obtain their Professional Engineers license.  For most people, that means that the critical moment takes place in their mid- to late twenties.  Millennials aren’t as “progressive” on sorting out tasks between spouses or partners as some might have you believe.  That time in life is also the same time when many marry, have children, etc., and the work associated with those events falls harder on women.  Thus the first opportunity to take the exam takes place at a point in life which is less opportune for women than it is for men. So what is to be done?  Do we need a special accommodation?  The answer is “no.”  Since venting pet peeves seems to be the thing on this site these days, let me vent one of mine: there is no cogent reason why we should force people to wait several years out from their academic studies to take the P&P exam.  
This exam is supposed to reflect experience, but a reality check is in order: it’s just another academic exercise like most any other test. Fortunately change is in the wind, as this statement from the National Society of Professional Engineers indicates: Until relatively recently, candidates for licensure as a professional engineer have needed to gain four years of approved work experience before taking the Principles and Practice of Engineering (PE) Exam. In recent years, however, attitudes within the profession toward the early taking of the PE exam have begun to shift. In 2013, the National Council of Examiners for Engineering and Surveying (NCEES) removed from its Model Law the requirement that candidates earn four years of experience before taking the exam. Separating the experience requirement from eligibility for taking the PE exam is sometimes called decoupling. For the National Society of Professional Engineers, as stated in Position Statement No. 1778, “Licensing boards and governing jurisdictions are encouraged to provide the option of taking the Principles and Practice of Engineering exam as soon as an applicant for licensure believes they are prepared to take the exam. The applicant would not be eligible for licensure until meeting all requirements for licensure— 4-year Accreditation Board for Engineering and Technology/Engineering Accreditation Commission accredited degree, passing the Fundamentals of Engineering exam and the Principles and Practice of Engineering exam, and 4 years of progressive engineering experience.” The NSPE would have us think that this concept is a novelty, but that’s not really the case. When I was an undergraduate at Texas A&M University in the 1970s, Texas allowed people to take both exams before graduation; our own NSPE student chapter strongly encouraged that, and I did it myself.
Taking the P&P exam early not only gets the exam away from major life events in early adulthood, it also eliminates a good deal of remedial work spent trying to remember things one learned in school but forgot in the years before the exam. I think that, if we do not obscure our thinking with trendy concepts and look at things realistically, we can solve this disparity by making a change that will benefit both men and women and improve our profession. If this disparity provides motivation to move the process of “decoupling” forward, then so be it. It’s a change that’s overdue. Posted in Soil Mechanics ## Jean-Louis Briaud’s “Pet Peeve” on the Analysis of Consolidation Settlement Results In his recent, excellent article on the settlement (and subsidence) of the San Jacinto Monument east of Houston, Briaud (2018) takes an opportunity to vent a “pet peeve” of his relative to the way consolidation tests are reduced and consolidation properties reported: ### A Chance to Share a Pet Peeve The consolidation e versus log p’ curve is a stress-strain curve. Typically, stress-strain curves are plotted as stress on the vertical axis and strain on the horizontal axis. Both axes are on normal scales, not log scales. It’s my view that consolidation curves should be plotted in a similar fashion: effective vertical stresses on the vertical axis in arithmetic scale, and normal strain on the horizontal axis in arithmetic scale. When doing so, the steel ring confining the test specimen influences the measurements and skews the stiffness data. Indeed the stress-strain curve, which usually has a downward curvature, has an upward curvature in such a plot. (p. 54) Is this correct? And is he the only one who thinks this way? The two questions are neither the same nor linked. Although this problem will certainly not be solved in one blog post, it deserves some investigation. ## Statement of the Problem Let’s start with a text we use often here: Verruijt, A., and van Baars, S. (2007).
Soil Mechanics. VSSD, Delft, the Netherlands. Early in the presentation on the subject, he presents the following plot: As Jean-Louis would have us do, the strain (or negative strain, since we’re dealing with compression) is on the abscissa, and the dimensionless stress is on the ordinate. The difference between the two is that the stress is plotted logarithmically. But it’s a step in the right direction. We’ll come back to that later. Verruijt defines the relationship between the strain and stress ratio as follows: $\epsilon = -\frac{1}{C}\ln\frac{\sigma}{\sigma_0}$ This relationship goes back to Terzaghi’s original tests and formulation of settlement and consolidation theory almost a century ago. From a “conventional” standpoint there are two things wrong with this formulation. The first is that it is based on strain, not void ratio. The second is that it uses the natural logarithm rather than the common one. The second problem can be fixed by rewriting it as follows: $\epsilon = -\frac{1}{C_{10}}\log\frac{\sigma}{\sigma_0}$ This formulation is essentially the same as is used in Hough’s Method for cohesionless soils, once the strains are converted to displacements by considering the thickness of the layer. So it is not as strange as it looks. The first problem can be “fixed” by noting the following: $\epsilon = \frac{e-e_0}{1+e_0}$ We can substitute this into the equation before it and, with judicious changes of the constants and other substitutions, come up with the familiar, non-preconsolidated formula for consolidation settlement, or $\Delta H = \frac{C_c H_0}{1+e_0}\log\frac{\sigma}{\sigma_0}$ When we reverse the axes, we then get the “classic” plot as follows: But is there a problem with using strain?
Verruijt explains the two conventions as follows: In many countries, such as the Scandinavian countries and the USA, the results of a confined compression test are often described in a slightly different form, using the void ratio e to express the deformation, rather than the strain ε…It is of course unfortunate that different coefficients are being used to describe the same phenomenon. This can only be explained by the historical developments in different parts of the world. It is especially inconvenient that in both formulas the constant is denoted by the character C, but in one form it appears in the numerator, and in the other one in the denominator. A large value for $C_{10}$ corresponds to a small value for $C_c$. It can be expected that the compression index $C_c$ will prevail in the future, as this has been standardized by ISO, the International Organization for Standardization. As is often the case, the simplest way to help sort out this issue is with an example. Briaud (2018) actually has one, but we will use another. ## Example of Settlement Plotting An example we have used frequently in our teaching of Soil Mechanics is this one, from the Bearing Capacity and Settlement publication. It is a little more complex than the theory shown above because it involves a preconsolidated soil. The plot (with the simplifications for determination of $C_c$ and $C_r$) is shown below. With this information in hand, we process the data as follows: 1. We convert the void ratio data to strains using the formula above. 2. We convert the stresses to dimensionless stresses by dividing them by the initial stress. 3. We “split” the data up into compression and decompression portions to allow us to develop separate trend lines for both. First, the strain-dimensionless stress plot, using natural scales for both. The result is similar to that in Briaud (2018). The compression portion best fits a second-order polynomial. (Note that we have thrown out the zero point to allow more fit options.)
The decompression portion best fits an exponential trend line. Below is the same plot with the stress scale now being logarithmic. This is basically the original graph with the axes reversed. Using strain instead of void ratio has no effect on the result; we will discuss the advantages of doing so below. Now let us look at the data from another angle: the tangent “modulus of elasticity,” defined of course by $E = \frac{\Delta\sigma}{\Delta\epsilon}$ We consider natural scales for both modulus and strain. To obtain the slope, we used a “central difference” technique except at the ends. It’s interesting to note that, except for the “kink” caused by preconsolidation, in compression the tangent modulus of elasticity increases somewhat linearly with strain, as it does in decompression. ## Discussion of the Results There’s a great deal to consider here, and we’ll try to break it down as best we can. ### Use of Strain vs. Void Ratio The graphs above show that there is no penalty in using strain instead of void ratio to plot the results. The advantage of doing so is both conceptual and pedagogical. In the compression and settlement of soils, we traditionally conceive of it as a three-stage process: elastic settlement, primary consolidation settlement, and secondary consolidation settlement. Consolidation settlement is nothing more than the rearrangement of particles under load; the time it takes to do so is based in part on the permeability of the soil and its ability to expel pore water trapped in shrinking voids. Elastic settlement is due to the elastic modulus of the material, the strain induced in the material and the geometry of the system. This distinction, however, obscures the fact that we are dealing with one soil system and one settlement. Using strain for all types of settlement would both help unify the problem conceptually and ease the transition to numerical methods such as finite element analysis, where strain is used to estimate deflection.
In the past we were able to use a disparate approach without difficulty, but that option is not as viable now as before. ### The Natural Scale, Consolidation Settlement Stiffness, and the Ring Both here and in Briaud (2018) the natural stress-strain curve exhibits an upward curvature, which is obviously different from what we normally see in the theory of elasticity/plasticity. This comes into better focus if we consider the variation of the tangent modulus of elasticity, which (except for the aforementioned preconsolidation effect) increases linearly with stress. There are two possible explanations for this. The first is to observe that, as soils compress in consolidation settlement, their particles come closer together, and thus become more resistant to further packing. The second, as suggested by Briaud (2018), is that the presence of the confining ring in the consolidation test augments the resistance of the particles to further compression. The issue of confinement is an interesting one because in other tests (unconfined compression tests, triaxial tests) confinement is either very flexible or non-existent. It should be observed that consolidation theory, as originally presented, is one-dimensional consolidation theory. For true one-dimensional consolidation, we assume a semi-infinite case where the infinite boundary “confines” the physical phenomena. The use of a confining ring assumes that the ring can replicate this type of confinement in the laboratory. Conditions in the field, with finite loads and variations in the surrounding soils, may not reflect this. While it would be difficult to replicate variations in confinement in the laboratory, these variations should be kept in mind by anyone using laboratory-generated consolidation data.
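The “central difference” slope computation used for the tangent modulus plots above can be sketched numerically. The strain and stress values below are made up for illustration (they are not the data from the example); `np.gradient` implements the central-difference idea on the interior points and falls back to one-sided differences at the ends:

```python
import numpy as np

# Hypothetical consolidation data: strain and dimensionless stress sigma/sigma_0.
# These numbers are illustrative only, not the data from the example above.
strain = np.array([0.000, 0.005, 0.012, 0.022, 0.035, 0.051])
stress = np.array([1.0,   1.5,   2.25,  3.4,   5.1,   7.6])   # sigma / sigma_0

# Tangent "modulus of elasticity" E = d(sigma)/d(epsilon):
# central differences in the interior, one-sided at the ends.
E = np.gradient(stress, strain)

for eps, mod in zip(strain, E):
    print(f"strain = {eps:.3f}   tangent E/sigma_0 = {mod:6.1f}")
```

With these (smoothly stiffening) data, the tangent modulus increases monotonically, mirroring the upward curvature of the natural-scale plot discussed above.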
### The “Modulus of Elasticity” for Consolidation Settlement This may strike many geotechnical engineers (especially those in areas where void ratio is used to estimate consolidation settlement) as an odd concept, but if we consider the material’s strain vs. its deflection, it is a natural one. Varying moduli of elasticity are nothing new in geotechnical engineering; they have been discussed on this site in detail. The situation here is somewhat different for a wide variety of reasons, not the least of which is that here we are dealing with a tangent modulus while previously we looked at a secant one. Also, differing physical phenomena are at work; the theory of elasticity implicitly assumes that particle rearrangement is at a minimum, while consolidation settlement (both primary and secondary) is all about particle rearrangement. A more unified approach to settlement would probably reveal a process where the ratio of the change in stress to the change in strain varies at differing points in the process along a stress path with multiple irreversibilities. Such an approach would require some significant conceptual changes in the way we look at settlement, but would hopefully produce more accurate results. ## Conclusion Consolidation settlement is a topic that has occupied geotechnical engineering for most of its modern history. While the theory is considered well established, changes in computational methodology will eventually force changes in the way the theory is applied. A good start to this process is to use strain (rather than void ratio) as the measure of the relative deflection of structures, and the example from Briaud (2018), along with the demonstration relative to natural scales, is an excellent start. References Briaud, J.-L. (2018) “The San Jacinto Monument.” Geostrata, July/August. Issue 4, Vol. 22, pp. 50-55.
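As a footnote to the post above, the claim that there is “no penalty” in using strain instead of void ratio can be checked with a quick numeric sketch. The soil values below are assumed for illustration only (not from the example in the post); both the void-ratio form and the strain form with the Verruijt-style constant $C_{10} = (1+e_0)/C_c$ in the denominator give the same settlement:

```python
import math

# Assumed illustrative values for a normally consolidated layer (not from the post).
H0, e0, Cc = 4.0, 1.10, 0.35     # layer thickness (m), initial void ratio, compression index
s0, s1 = 100.0, 200.0            # initial and final vertical effective stress (kPa)

# Void-ratio form: Delta H = Cc * H0 / (1 + e0) * log10(s1/s0)
dH_void = Cc * H0 / (1.0 + e0) * math.log10(s1 / s0)

# Strain form, Verruijt-style: epsilon = (1/C10) * log10(s1/s0), C10 = (1 + e0)/Cc
C10 = (1.0 + e0) / Cc
strain = math.log10(s1 / s0) / C10
dH_strain = strain * H0

print(dH_void, dH_strain)   # identical, about 0.201 m for these values
```

The two formulations are algebraically the same; only the bookkeeping (and the direction of the constant) differs.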
## STADYN Wave Equation Program 9: Addition of Coefficient of Restitution to Cushion and Interface Properties In our last STADYN post we discussed the addition of $\alpha$ factors to take into account adhesion phenomena with cohesive soils. In this post a more mundane but nevertheless important parameter for impact pile hammer systems is added: consideration of plastic losses in hammer and pile cushions, and interfaces as well. Most impact pile hammers use some kind of hammer cushion; additionally, concrete piles are almost always driven with a pile cushion at the pile head. Cushions of both kinds are subject to significant plastic deformation and generation of heat. There are several possibilities for modelling these elements in a simulation such as STADYN. The first is to use velocity-dependent (viscous) damping to simulate the dissipation of energy. STADYN in its current form has no velocity-dependent parameters; to add these would involve some major changes in the code, and in any case the testing of cushion material does not produce a result that would indicate such a property. The second is to use an elastic-purely plastic approach similar to the one used for the soils. The problem with this is that it would “flat-top” the impulse to the pile, and there is no evidence that the cushion material fails in this way. The third is to use a “coefficient of restitution” approach, where the rebound of the cushion takes place at a different stiffness than the compression. This is illustrated in two variants below. The conventional model dates back to Smith, and is still used in GRLWEAP. The ZWAVE model is described by Warrington (1988). In both cases the energy lost in the cushion is represented by the shaded area. For STADYN the conventional model was adopted. Implementing this took a little more care in a finite element code than in finite-difference codes like WEAP and GRLWEAP, but it was done.
To accomplish this, it was necessary to compute the force in the cushion incrementally, as with plasticity the response is now path-dependent. When the cushion rebounds (i.e., the distance between the cushion faces increases from one step to the next) the rebound stiffness is used. In this way multiple rebounds can be modelled properly. Since the inverse methods do not model the hammer, the Mondello and Killingsworth case is not considered here. This leaves the other two cases, and these can be summarised very briefly. The Finno (1989) case had a blow count increase from 15.8 to 17.0 blows/30 cm. For the SE Asia case, the blow count increased from 11.8 to 13.5 blows/30 cm. Additionally, for the latter case comparisons with the pile head force and ram velocity vs. time tracks were produced. The pile head force until peak was identical, and then decreased more rapidly afterwards. There was an additional “kick” at 2L/c not present in the previous run. The ram (point) velocity is the same until rebound, and then the ram is essentially stationary with the coefficient of restitution until 2L/c, after which the ram velocity in the two cases is very close. The sawtooth effect is mostly due to the “ringing” of the ram, i.e., a stress wave going up and down the ram. While it is evident that the method of energy transfer is different with the addition of the coefficient of restitution, the actual effect of plasticity on the blow count is not great. This is probably due to two factors: most of the energy transfer takes place during compression of the hammer cushion, and both hammers use micarta and aluminium, which have a relatively high coefficient of restitution (0.8). Nevertheless cushion losses are greater in materials such as plywood, which is used with concrete piles. It is to this type of pile that STADYN’s development now turns.
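The incremental, path-dependent cushion force described above can be sketched as a bilinear spring of the conventional (Smith-type) kind: loading along stiffness $k$, unloading along the steeper $k/e^2$, so the energy recovered on rebound is $e^2$ times that stored at peak compression. This is only a minimal sketch of the idea, not STADYN’s code; the class name and the numbers in the usage example are made up.

```python
class CushionCOR:
    """Bilinear cushion spring: loads with stiffness k, unloads with k/e**2
    (Smith-type coefficient-of-restitution model).  The energy returned on
    rebound is e**2 of that stored at the peak compression reached so far."""

    def __init__(self, k, e):
        self.k = k
        self.k_unload = k / e**2   # steeper unloading branch
        self.c_max = 0.0           # peak compression seen so far
        self.f_max = 0.0           # force at that peak

    def force(self, c):
        c = max(c, 0.0)            # a cushion cannot take tension
        if c >= self.c_max:        # loading, or reloading past the old peak
            self.c_max, self.f_max = c, self.k * c
            return self.f_max
        # unloading/reloading below the peak: descend from the peak at k/e**2,
        # never below zero force
        return max(self.f_max - self.k_unload * (self.c_max - c), 0.0)

# Made-up usage: k = 1000 force/length, e = 0.8
cushion = CushionCOR(1000.0, 0.8)
print(cushion.force(0.01))    # loading to peak
print(cushion.force(0.005))   # partial rebound, steeper branch
print(cushion.force(0.003))   # force has dropped to zero before full rebound
```

Because the state (peak compression and peak force) is carried along, multiple rebounds are handled naturally, which is the point made above about computing the force incrementally.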
## STADYN Wave Equation Program 8: Modification of Adhesion Properties of Cohesive Soils With the successful transition of the $\xi-\eta$ soil property system, the time has come to consider how these soils interact with the pile shaft. As was the case before, the work with the TAMWAVE project has proven helpful with this. One of the things that makes STADYN more complex than either TAMWAVE or most other 1-D solutions is that soils are not considered as purely cohesive or cohesionless. In most analyses of driven piles, soils are either one or the other, or at best alternately layered. In reality the division between the two is not so clear-cut except for either clean sands on the one end or pure clays on the other. STADYN’s soil system envisions soils as a continuum between one and the other; although this adds to the flexibility of the program (especially in the inverse mode) and its modelling of reality, it makes specifying soils a challenge. As noted earlier, for soils between the purely cohesionless ($\eta = -1$) and cohesive ($\eta = 1$) cases, interpolation is done so that soils have no cohesion in the former case, no friction in the latter, and are interpolatively mixed in between. For example, for a middle case of $\eta = 0$, the soil would have a reduced cohesion and friction for the same value of $\xi$ and share these properties. In this way any adjustments for adhesion of either type of soil would be made for each. Cohesionless soils: there are two ways of looking at this problem. We can assume a straight-up Coulomb friction failure between the pile and soil, or we can assume that the pile acts as a “direct shear” tester and thus forces the soil to fail at an apparent angle that is not the same as would be predicted by Mohr-Coulomb failure. As with TAMWAVE, we have assumed the latter; this is explained in some detail here.
It is reasonable to assume that a continuum model such as is used by STADYN could predict such a failure; thus, no modification to the elements closest to the pile surface is done for cohesionless soils. One thing that did change, however, was the way the lateral earth pressure on the pile was computed. In an elastic-purely plastic system, lateral earth pressure varies in the elastic region, and with elastic theory that means with the variance of Poisson’s Ratio. With a Mohr-Coulomb failure criterion, frictional cohesionless soils’ strength is mobilised by vertical effective stress acting laterally. In recent code iterations Jaky’s Equation has been used to estimate Poisson’s Ratio; however, this has been changed to use the method given by Randolph, Dolwin and Beck (1994). Once the lateral earth pressure coefficient is computed using this method, Poisson’s Ratio is determined. At or below the pile toe Jaky’s Equation is used. Cohesive soils: Mohr-Coulomb theory has no way of taking degradation of cohesion at an adhesion surface into account. To do this the cohesion for the element(s) immediately adjacent to the pile is reduced by an $\alpha$ factor as computed by the method of Kolk and van der Velde. This is only done for the element immediately adjacent to the pile shaft surface. This is the way STADYN does a pile-soil “interface.” Doing it in this manner obviates the need for special interface elements between the pile and soil. Implementing this is a little tricky, because the $\alpha$ factor is dependent upon the effective stress. It is thus necessary to generate the layers, compute the mid-point effective stresses in each, and then apply the factor to the cohesion of one set of elements only. ## Results: Finno (1989) and Mondello and Killingsworth (2014) Comparisons The results of these two cases were most recently discussed here. They can be discussed easily because the results varied little from the previous stage of the program.
For the first case, the Davisson capacity changed from 971 kN to 965 kN and the blow count from 17.6 to 15.8 blows/30 cm. For the second (inverse) case with $|\eta| < 3$, the Davisson capacity changed from 269 kN to 274 kN and the blow count from 24.6 to 24.4 blows/30 cm. The least squares difference actually increased from 0.00143 to 0.00149. In both cases the soils were heavily cohesionless (at least that’s the way the pile looked at them) and the reduction in adhesion had minimal impact. ## Results: Notional Southeast Asia Case Of all the test cases in the original study, the notional Southeast Asia case was the most problematic in the results, especially as they were compared to the GRLWEAP output. The previous phase produced little difference in outcome; it was hoped that applying $\alpha$ factors to the adhesion would at least resolve the discrepancies of SRD estimates. The results did not disappoint. Since we have not presented too many results from this case, some graphical output is in order. First, the force-time and velocity*impedance-time curves: The result above is a classic “offshore” pattern. In the early part of impact ($\frac{L}{c} < 1$) the actual pile head force and the product of pile head velocity and impedance are virtually identical. This indicates an “infinite pile” condition; the theory behind this is discussed by Warrington (1997). Beyond this the two diverge; first the pile head moves upward in rebound from the pile shaft (indicated by the fact that the rebound takes place before $\frac{2L}{c}$) and impacts the pile cap, producing a secondary force in the pile head. Beyond $\frac{2L}{c}$ the pile head force goes to zero and the velocity oscillates with the reflections from the pile; however, just after that time the compressive “kick” from the toe is evident. Now we have the result of the static load test.
As noted in the original study, static load tests are the exception offshore, and for actual loading a tension test is probably of just as much interest as (if not more than) a compressive one. In the original study doubt was also expressed as to the relevance of Davisson’s criterion to offshore piles; the variation among different interpretation methods, however, was not that great. In any case, the effect of reducing the adhesion of cohesive soils along the surface is evident: the Davisson ultimate load has dropped to 20,600 kN. This is nearly identical to the Dennis and Olson (1983) method result, and below the API RP2A (2002) result. This indicates that the application of the $\alpha$ method to the soil elements along the pile shaft brings the static results of STADYN more in line with those of static methods in use. For the Dennis and Olson (1983) SRD, the GRLWEAP blow count varied from 18.4 blows/30 cm to 21.8 blows/30 cm, depending upon which value of damping was used ($0.2 \frac{sec}{m}$ or $0.3 \frac{sec}{m}$). STADYN returned a blow count of 11.8 blows/30 cm. This is a significant improvement. There are two possibilities to explain the remaining difference: 1. STADYN is modelling a lower effective damping value for the soils than is used in GRLWEAP. As noted in the original study, STADYN has a different model for handling dissipative phenomena than GRLWEAP. 2. The two programs have differing methods for arriving at the blow count. Before we can make more definitive statements about this, we need to include cushion losses, which is our next step. Nevertheless this result clears up a great deal of the difficulty with this case in the original study. ## Relating Hyperbolic and Elastic-Plastic Soil Stress-Strain Models: A More Complete Treatment In an earlier post, we discussed this topic.
This is meant as a follow-up to that post; in a sense we left the reader “hanging” because the solution, although informative, was incomplete. This should “tie up some loose ends” and make the result, although it’s still theoretical, more useful. The concept for most of this is the same, but the implementation more closely follows the physical reality of stress-strain behaviour. Let us begin by considering a modified version of the original graphic which compares the hyperbolic and elastic-purely plastic stress-strain models. We need to make a few definitions. First, let’s define two strains. The first is the strain at failure (we’re assuming perfectly plastic failure here) if the small-strain elastic or shear modulus could be maintained to failure (i.e., if linear elasticity held until failure). That strain is $\epsilon_0=\frac{\sigma_u}{E_1}$ In this case the dashed line is the single failure stress $\sigma_u$, the ordinate is the stress $\sigma$, and the abscissa is the strain $\epsilon$. Although the elastic modulus E is habitually used, this treatment could apply to the shear modulus G as well. The second is the failure strain at a reduced modulus assuming an elastic-purely plastic deformation characteristic, or $\epsilon_1=\frac{\sigma_u}{E_2}$ If we use $\epsilon_0$ as a “reference” strain, we can make the problem dimensionless as follows: $\hat \epsilon=\frac{\epsilon_1}{\epsilon_0}$ In any case the hyperbolic stress-strain curve for a given strain is $\sigma=\frac{E_1 \epsilon_0 \epsilon}{\epsilon_0+\epsilon}$ so that the gap between the curve and the failure stress is $\sigma_u-\sigma=\frac{E_1 \epsilon_0^2}{\epsilon_0+\epsilon}$ Integrating this gap (the area above the curve) from zero strain to $\epsilon_1$ yields $A_1 = \ln\left( \epsilon_0 + \epsilon_1 \right)E_1\epsilon_0^2-\ln(\epsilon_0)E_1\epsilon_0^2$ Defining $A = \frac{E_2}{E_1}$ the area above the elastic region of the elasto-plastic deformation line is $A_2 = \frac {\epsilon_1^2AE_1}{2}$ We need to do the following: 1. Equate the areas. 2. Solve for the modulus ratio A. 3.
Substitute the dimensionless strain ratio $\hat \epsilon$. Doing all of this yields $A = \frac{2\ln (1+\hat\epsilon)}{\hat\epsilon^{2}}$ Plotting this yields the following: Although the notation is different, this is basically the same result we got before. It also has the same problem: it “blows up” as the strain ratio approaches zero. For high-strain problems (which are our own chief field of interest) this is not a problem, but it still needs to be addressed. The basic problem is that the whole “area ratio” concept itself breaks down as the strains approach zero. At zero strain the moduli should be the same and the modulus ratio unity, but the area ratio does not represent this. This can be seen if we look at a more experimentally-based treatment of the problem, which is summarised in this graph, taken from this publication: Although it’s certainly possible to do the usual empirical correlation on a curve like this, the higher strain portion and our theoretical presentation resemble each other. The smaller strain region is the problem. In many ways this resembles the Euler column buckling problem familiar to structural engineers, where two regions are defined by two equations which meet at a point where both their slope and their value are the same. But what equation should we use for the small-strain region? Whatever equation we use needs to come to unity at zero strain and decrease from there. A simple function for this purpose is the cosine function, modified as follows: $A = \cos(\beta \hat\epsilon)$ To find the meeting point, we need to find the point where both the values of A and the derivatives are the same. Without going into the algebra, for the second equation $\beta = 0.495$, and the meeting point is $\hat\epsilon = 1.947$ and $A = 0.571$. This is plotted below. Although a more rigorous analysis is necessary, the two plots look very similar.
The biggest difference, and this is not insignificant, is that the empirical plot above is semi-logarithmic in nature, while the theoretical one is linear. From all this, we can conclude the following: 1. The “area ratio” concept, while useful for larger strains, breaks down with smaller strains. 2. The quantities $\epsilon_1$ and $\hat\epsilon$ are very useful in generalising strains in soils, although the former is physically impossible. 3. “Stitching together” the two equations yields a theoretical construct that shows potential for representing reality in soil stress-strain relationships. The biggest difference, as noted, is the logarithmic vs. linear nature of the plots; this probably indicates an underlying principle that needs to be addressed. 4. The actual value of the ratio of small-strain shear or elastic modulus to elasto-plastic modulus is very application-dependent. Since quantifying both elastic and shear modulus is more important in geotechnical engineering (primarily due to finite element analysis) than in the past, the need to establish values of this ratio for various applications is great.
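The meeting point quoted above ($\beta = 0.495$, $\hat\epsilon = 1.947$, $A = 0.571$) can be checked numerically. The sketch below is not part of the original derivation: it eliminates $\beta$ using the value-matching condition $\cos(\beta\hat\epsilon) = 2\ln(1+\hat\epsilon)/\hat\epsilon^2$ and then root-finds the slope-matching residual by bisection.

```python
import math

def g(x):
    """Area-ratio branch: A = 2*ln(1 + x)/x**2, x being the strain ratio."""
    return 2.0 * math.log(1.0 + x) / x**2

def dg(x):
    """Derivative of g with respect to x."""
    return 2.0 * (x / (1.0 + x) - 2.0 * math.log(1.0 + x)) / x**3

def residual(x):
    # Value condition gives beta = acos(g(x))/x; the residual is the
    # mismatch in slopes between cos(beta*x) and g(x) at that point.
    beta = math.acos(g(x)) / x
    return -beta * math.sin(beta * x) - dg(x)

# The residual changes sign between 1.5 and 2.5; bisect to the root.
lo, hi = 1.5, 2.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0.0:
        hi = mid
    else:
        lo = mid

x_meet = 0.5 * (lo + hi)
beta = math.acos(g(x_meet)) / x_meet
print(x_meet, beta, g(x_meet))
```

The result reproduces the values in the text to three figures, which also confirms that the two branches meet tangentially rather than merely crossing.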
# Modulated Variable-Rate Deep Video Compression Citation Author(s): Jianping Lin, Dong Liu, Jie Liang, Houqiang Li, Feng Wu Submitted by: Jianping Lin Last updated: 26 February 2021 - 9:33pm Document Type: Presentation Slides Document Year: 2021 Presenter’s Name: Jianping Lin Paper Code: DCC-138 #### Abstract In this work, we propose a variable-rate scheme for deep video compression, which can achieve continuously variable rate with a single model. The key idea is to use the R-D tradeoff parameter $$\lambda$$ as the conditional parameter to control the bitrate. The scheme is developed on DVC, which jointly learns the motion estimation, motion compression, motion compensation, and residual compression functions. In this framework, the motion and residual compression auto-encoders are critical for rate adaptation because they generate the final bitstream directly. Inspired by recent work on deep variable-rate image compression \cite{choi2019variable}, we propose to use conditional auto-encoders, which are deeply modulated by $$\lambda$$ via scaling-networks, to achieve the basic rate adaptation. However, since other complicated modules, i.e., the motion estimation and motion compensation, also affect the final bitrate indirectly, the basic rate adaptation still has a certain compression performance loss compared with the fixed-rate models. To address this, we propose to add the R-D tradeoff parameter map ($$\lambda$$ map) to the inputs of the two modules as a conditional map. Finally, we use a multi-rate-distortion loss function together with a step-by-step training strategy to optimize the entire scheme. The experiments show that the proposed scheme achieves continuously variable rate with a single model and almost the same compression efficiency as multiple fixed-rate models. The additional parameters and computation of our model are negligible when compared with a single fixed-rate model.
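A rough numpy sketch of the core modulation idea described in the abstract (this is not the authors’ code): a small “scaling network” maps the R-D tradeoff parameter $$\lambda$$ to per-channel scale factors that modulate the auto-encoder’s feature maps, so one set of weights serves a continuum of rates. The channel count, the two-layer form of the scaling network, and the random weights are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

C = 8                                        # feature channels (assumed)
W1 = rng.normal(size=(16, 1))                # hypothetical scaling-network weights
W2 = rng.normal(size=(C, 16))

def scaling_network(lmbda):
    """Map lambda to positive per-channel scales; a tiny 2-layer MLP
    stands in for the paper's scaling-networks (an assumption)."""
    h = np.maximum(W1 @ np.array([[np.log(lmbda)]]), 0.0)   # ReLU hidden layer
    return np.exp(W2 @ h)                                   # shape (C, 1), > 0

def modulate(features, lmbda):
    """Channel-wise modulation of an encoder feature map of shape (C, H, W)."""
    s = scaling_network(lmbda).reshape(C, 1, 1)
    return features * s

feats = rng.normal(size=(C, 4, 4))
low_rate  = modulate(feats, lmbda=0.01)   # different lambda values produce
high_rate = modulate(feats, lmbda=0.1)    # different scalings of the same features
```

The same conditioning idea extends to the $$\lambda$$ map mentioned in the abstract, where $$\lambda$$ is broadcast spatially and concatenated to a module’s input rather than applied as a channel scale.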
#### Dataset Files MVR-DVC_spotlight_record.pdf
# A set consists of 5 distinct positive integers a, b, c, d, e, ........ Question stats: 55% difficulty (hard); 68% correct (01:59), 32% wrong (02:13), based on 164 sessions. **EgmatQuantExpert** (e-GMAT Representative), 09 Jan 2019: A set consists of 5 distinct positive integers a, b, c, d, e, where b is the least number and d is the highest number. If the sum of a, c and e is 24, and the mean of all 5 numbers is 8.8, then what is the maximum value of (d – b)? A. 16 B. 17 C. 18 D. 19 E. 20 **Archit3110** (GMAT Club Legend), 09 Jan 2019: Given that b is the least integer, d is the highest, and a + c + e = 24. From the mean, (b + 24 + d)/5 = 8.8, so b + d = 20. Since d is the highest it can be 19, and b, being the least, can be 1; so 19 - 1 = 18. IMO C EgmatQuantExpert wrote: A set consists of 5 distinct positive integers a, b, c, d, e, where b is the least number and d is the highest number.
If the sum of a, c and e is 24, and the mean of all 5 numbers is 8.8, then what is the maximum value of (d – b)? A. 16 B. 17 C. 18 D. 19 E. 20 _________________ If you liked my solution then please give Kudos. Kudos encourage active discussions. Originally posted by Archit3110 on 09 Jan 2019, 03:13. Last edited by Archit3110 on 09 Jan 2019, 03:41, edited 1 time in total. e-GMAT Representative Joined: 04 Jan 2015 Posts: 2942 A set consists of 5 distinct positive integers a, b, c, d, e, ........  [#permalink] ### Show Tags 09 Jan 2019, 03:38 Archit3110 wrote: given b is least integer and d is highest no and sum of a,b,c= 24 mean of b+24+d/5 =8.8 b+d=20 since d is the highest no it can be 20 and b is least can be 0 so 20-0 = 20 IMO E EgmatQuantExpert wrote: A set consists of 5 distinct positive integers a, b, c, d, e, where b is the least number and d is the highest number. If the sum of a, c and e is 24, and the mean of all 5 numbers is 8.8, then what is the maximum value of (d – b)? A. 16 B. 17 C. 18 D. 19 E. 20 Hi Archit3110, Please note that a, b, c, d and e are positive integers Regards, _________________ GMAT Club Legend Joined: 18 Aug 2017 Posts: 4237 Location: India Concentration: Sustainability, Marketing GPA: 4 WE: Marketing (Energy and Utilities) A set consists of 5 distinct positive integers a, b, c, d, e, ........  [#permalink] ### Show Tags 09 Jan 2019, 03:40 EgmatQuantExpert there is a confusion regarding usage of 0 ; what i know is that 0 is neither +ve or -ve integer if a statement says n is a set of +ve integers then we begin set n from (1,2,3...) and if statement says n is a set of non negative integer then we begin set from ( 0,1,2,3,..) will 0 be considered in set of +ve even integers? i.e (0,2,4,6,.) or not? 
EgmatQuantExpert in that case b =1 and d = 19 and d-b = 18 IMO C EgmatQuantExpert wrote: Archit3110 wrote: given b is least integer and d is highest no and sum of a,b,c= 24 mean of b+24+d/5 =8.8 b+d=20 since d is the highest no it can be 20 and b is least can be 0 so 20-0 = 20 IMO E EgmatQuantExpert wrote: A set consists of 5 distinct positive integers a, b, c, d, e, where b is the least number and d is the highest number. If the sum of a, c and e is 24, and the mean of all 5 numbers is 8.8, then what is the maximum value of (d – b)? A. 16 B. 17 C. 18 D. 19 E. 20 Hi Archit3110, Please note that a, b, c, d and e are positive integers Regards, _________________ If you liked my solution then please give Kudos. Kudos encourage active discussions. e-GMAT Representative Joined: 04 Jan 2015 Posts: 2942 A set consists of 5 distinct positive integers a, b, c, d, e, ........  [#permalink] ### Show Tags 09 Jan 2019, 03:53 Archit3110 wrote: EgmatQuantExpert there is a confusion regarding usage of 0 ; what i know is that 0 is neither +ve or -ve integer if a statement says n is a set of +ve integers then we begin set n from (1,2,3...) and if statement says n is a set of non negative integer then we begin set from ( 0,1,2,3,..) will 0 be considered in set of +ve even integers? i.e (0,2,4,6,.) or not? Yes, 0 is neither a positive integer nor a negative integer. It will be included in a set of non-negative and non-positive integers but it must be excluded from the set of positive or negative integers Though, 0 is an even integer, it cannot be considered as a positive even integer. Since, 0 is not a positive integer Regards, _________________ GMAT Club Legend Joined: 18 Aug 2017 Posts: 4237 Location: India Concentration: Sustainability, Marketing GPA: 4 WE: Marketing (Energy and Utilities) Re: A set consists of 5 distinct positive integers a, b, c, d, e, ........  
[#permalink] ### Show Tags 09 Jan 2019, 03:57 EgmatQuantExpert thanks EgmatQuantExpert wrote: Archit3110 wrote: EgmatQuantExpert there is a confusion regarding usage of 0 ; what i know is that 0 is neither +ve or -ve integer if a statement says n is a set of +ve integers then we begin set n from (1,2,3...) and if statement says n is a set of non negative integer then we begin set from ( 0,1,2,3,..) will 0 be considered in set of +ve even integers? i.e (0,2,4,6,.) or not? Yes, 0 is neither a positive integer nor a negative integer. It will be included in a set of non-negative and non-positive integers but it must be excluded from the set of positive or negative integers Though, 0 is an even integer, it cannot be considered as a positive even integer. Since, 0 is not a positive integer Regards, _________________ If you liked my solution then please give Kudos. Kudos encourage active discussions. e-GMAT Representative Joined: 04 Jan 2015 Posts: 2942 Re: A set consists of 5 distinct positive integers a, b, c, d, e, ........  [#permalink] ### Show Tags 11 Jan 2019, 07:12 Solution Given: • A set of 5 distinct positive integers, {a, b, c, d, e} • b is the least number and d is the highest number of the set • a + c + e = 24 • Mean of the set = 8.8 To find: • The maximum value of (d – b) Approach and Working: • a + c + e = 24 ………………………….………………………………… (1) • $$\frac{(a + b + c + d + e)}{5} = 8.8$$ o Implies, a + b + c + d + e = 5 * 8.8 = 44 …………… (2) (2) – (1), we get, • b + d = 44 – 24 = 20 • For d – b to be maximum, b must be minimum o The minimum value b can take is 1, since b is a positive integer • If b = 1, then d = 20 – 1 = 19 Therefore, the maximum value of (d – b) = 19 – 1 = 18 Hence, the correct answer is option C. _________________ CEO Joined: 12 Sep 2015 Posts: 3848 Re: A set consists of 5 distinct positive integers a, b, c, d, e, ........  
[#permalink] ### Show Tags 19 Jan 2019, 06:35 Top Contributor EgmatQuantExpert wrote: A set consists of 5 distinct positive integers a, b, c, d, e, where b is the least number and d is the highest number. If the sum of a, c and e is 24, and the mean of all 5 numbers is 8.8, then what is the maximum value of (d – b)? A. 16 B. 17 C. 18 D. 19 E. 20 The mean of all 5 numbers is 8.8 (a + b + c + d + e)/5 = 8.8 Multiply both sides by 5 to get: a + b + c + d + e = 44 The sum of a, c and e is 24 (a + c + e) = 24 So, take a + b + c + d + e = 44 and rewrite as: (a + c + e) + b + d = 44 We get: (24) + b + d = 44 So, b + d = 20 b is the least number and d is the highest number Since b and d are POSITIVE INTEGERS, and since we're tying to MAXIMIZE the value of (d - b), we want to MAXIMIZE d and MINIMIZE b This is accomplished when d = 19 and b = 1 What is the maximum value of (d – b)? d - b = 19 - 1 = 18 Cheers, Brent RELATED VIDEO FROM OUR COURSE _________________ Test confidently with gmatprepnow.com Target Test Prep Representative Status: Founder & CEO Affiliations: Target Test Prep Joined: 14 Oct 2015 Posts: 6923 Location: United States (CA) Re: A set consists of 5 distinct positive integers a, b, c, d, e, ........  [#permalink] ### Show Tags 27 Jan 2019, 11:03 EgmatQuantExpert wrote: A set consists of 5 distinct positive integers a, b, c, d, e, where b is the least number and d is the highest number. If the sum of a, c and e is 24, and the mean of all 5 numbers is 8.8, then what is the maximum value of (d – b)? A. 16 B. 17 C. 18 D. 19 E. 20 We know that a + c + e = 24. Because the mean of the 5 numbers is 8.8, we see that their sum must be 5 x 8.8 = 44. Thus, a + b + c + d + e = 44. Subtracting the first equation from the second equation, we obtain: b + d = 44 - 24 = 20. To maximize the difference of b and d, we will choose b to be as small as possible and d to be as large as possible. Thus, if b = 1 then d = 19, and the maximum value of (d - b) is 18. 
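The arithmetic above is easy to verify exhaustively. A short brute-force sketch (the helper name is made up for illustration) searches all sets of 5 distinct positive integers satisfying the stated sum constraints and confirms that the maximum of d − b is 18:

```python
from itertools import combinations

def max_d_minus_b(total=44, middle_sum=24):
    """Brute-force check: five distinct positive integers, b = least,
    d = greatest, the other three (a, c, e) sum to middle_sum, and all
    five sum to total. Returns the maximum possible d - b."""
    best = None
    # any element of such a 5-set is at most total - (1 + 2 + 3 + 4) = 34
    for combo in combinations(range(1, total - 9), 5):
        if sum(combo) != total:
            continue
        b, d = combo[0], combo[-1]          # combinations come out sorted
        if sum(combo[1:4]) != middle_sum:   # the three middle values a, c, e
            continue
        if best is None or d - b > best:
            best = d - b
    return best

print(max_d_minus_b())  # 18, matching answer choice C
```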
{}
# Proving stationarity with difference equations

This is exercise 2.9.b from *Time Series Analysis – With Applications in R* (Cryer, Chan). First of all, define the differences of a discrete time series $\{A_t\}_{t\in\mathbb Z}$ recursively by $$\nabla A_t\triangleq A_t-A_{t-1}\,,\qquad \nabla^{m+1}A_t\triangleq\nabla(\nabla^m A_t)\quad\text{for }m\in\mathbb N\,.$$ Let $\{X_t\}_{t\in\mathbb Z}$ be a zero-mean stationary series with autocovariance $\gamma_k$ for every integer $k$, let $\mu_t$ be a polynomial in $t$ of degree $d$, and let $Y_t\triangleq X_t + \mu_t$. Show that, for natural $m$, the $m$-th difference of $\{Y_t\}_{t\in\mathbb Z}$ is stationary if and only if $m\ge d$.

I realize that $$Z_t\triangleq\nabla^mY_t=\sum_{j=0}^m(-1)^j\binom m jY_{t-j}\quad,$$ and that this would be trivial if we were dealing with the continuous derivative, because the derivatives of a polynomial vanish once the order of differentiation exceeds the degree of the polynomial. Nevertheless, I get lost manipulating $E[Z_t]$ and $\operatorname{Cov}[Z_t,Z_{t+k}]$. How do I prove that neither of these depends on $t$?

• Once you prove that stationarity implies the $m^\text{th}$ differences are stationary, you're done. That implication follows inductively by proving that the first differences of a stationary series are stationary. That proof is merely a direct application of your definition of stationary. – whuber May 24 '13 at 13:59

You don't need to worry about expectations and covariances, since the difference operator kills the polynomial term. In other words, the result turns on what the difference operator does to polynomials. Once you have taken $d$ differences, you are only left with the stochastic part of your model.

So let's prove this. Since the difference operator is linear, you only have to prove the result for $t^d$. I would go for a proof by induction. The first difference kills a linear term, since you get $(at+b)-(a(t-1)+b)=a$. A first difference therefore gives you a stationary time series with non-zero mean.
For a polynomial of higher degree, say $d$, the first difference gives $$t^d-(t-1)^d=d\,t^{d-1}+\text{a polynomial of degree }d-2\,.$$ See what happens when we take the second difference. We have $d\,t^{d-1}$ (plus lower-order junk) against $d\,(t-1)^{d-1}$ (plus lower-order junk) from the difference of the next two terms. The induction assumption works here, because we have the same coefficient for the leading term. Note that we are making use of the assumption that the time series is sampled at equally spaced moments, or we would have got a different leading term in the second "first difference". We usually make this assumption for time series, and you can see that it matters in this situation. You can argue inductively for both the necessary and the sufficient condition. You still have to deal with $X_t$, the stationary part of the expression. You need to show that the first difference of a stationary time series is also stationary, but that's obvious. You only need to show this for the first difference.
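This argument is easy to check numerically: applying `np.diff` $d$ times to a degree-$d$ polynomial sampled at equally spaced times leaves a constant ($d!$ times the leading coefficient), and one more difference annihilates it entirely. A minimal sketch:

```python
import numpy as np

# A degree-3 polynomial trend sampled at equally spaced times t = 0..9.
t = np.arange(10)
y = 2 * t**3 - 5 * t**2 + t + 7

# Third differences are constant (3! times the leading coefficient: 6 * 2 = 12);
# a fourth difference kills the polynomial entirely.
d3 = np.diff(y, n=3)
d4 = np.diff(y, n=4)

print(d3)  # every entry equals 12
print(d4)  # all zeros
```

The same computation with unequally spaced sample times would not produce a constant, which is exactly the equal-spacing caveat made above.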
{}
# Fission in the $r$-process

### Frontiers (2019)

Contributed presentation, 05/2019

Fission has often been a neglected nuclear-physics input for $r$-process simulations, despite its effect on the presence of long-lived actinides and its influence on the creation of the second ($A\sim130$) peak. I cover recent progress made at Los Alamos in improving fission properties relevant to the rapid neutron-capture process ($r$-process). These new calculations include results for neutron-induced fission rates, beta-delayed fission, and fission yields that arise from a detailed study of the Finite-Range Liquid-Drop Model. I end with a discussion of the impact of these new theory calculations on the abundances and observational properties of the $r$-process.

### Related Publications

- 2019: N. Vassh, R. Vogt, R. Surman, J. Randrup, et al., "Using excitation-energy dependent fission yields to identify key fissioning nuclei in r-process nucleosynthesis", J. Phys. G 46 065202
- 2018: M. Mumpower, T. Kawano, T. M. Sprouse, N. Vassh, et al., "$\beta$-delayed fission in $r$-process nucleosynthesis", ApJ 869 1

## Mail

Matthew Mumpower
Los Alamos National Lab
MS B283
TA-3 Bldg 123
Los Alamos, NM 87545
{}
# How to predict by hand in R using splines regression?

by Splinter — Last Updated August 13, 2019

The R package splines allows one to fit a nonlinear model using splines. For instance,

    require(stats); require(graphics)
    bs(women$height, df = 5)
    summary(fm1 <- lm(weight ~ bs(height, df = 5), data = women))
    ## example of safe prediction
    plot(women, xlab = "Height (in)", ylab = "Weight (lb)")
    ht <- seq(57, 73, length.out = 200)
    lines(ht, predict(fm1, data.frame(height = ht)))

This produces the following estimates:

    Call:
    lm(formula = weight ~ bs(height, df = 5), data = women)

    Residuals:
         Min       1Q   Median       3Q      Max
    -0.31764 -0.13441  0.03922  0.11096  0.35086

    Coefficients:
                        Estimate Std. Error t value Pr(>|t|)
    (Intercept)         114.8799     0.2167 530.146  < 2e-16 ***
    bs(height, df = 5)1   3.4657     0.4595   7.543 3.53e-05 ***
    bs(height, df = 5)2  13.0300     0.3965  32.860 1.10e-10 ***
    bs(height, df = 5)3  27.6161     0.4571  60.415 4.70e-13 ***
    bs(height, df = 5)4  40.8481     0.3866 105.669 3.09e-15 ***
    bs(height, df = 5)5  49.1296     0.3090 158.979  < 2e-16 ***
    ---
    Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

    Residual standard error: 0.2276 on 9 degrees of freedom
    Multiple R-squared: 0.9999, Adjusted R-squared: 0.9998
    F-statistic: 1.298e+04 on 5 and 9 DF, p-value: < 2.2e-16

If I want to predict using these estimates, what should I put in the predictive model? $$\hat{y} = \hat{\beta}^T \textbf{(?)}.$$ I know that I can obtain the predictions using the command predict, but I want to understand what this command is doing. Is it $\textbf{(?)} = bs(x, df = 5)$?
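For intuition, here is the mechanism in miniature (a Python analogy, not R's internals): `predict` rebuilds the same design matrix at the new $x$ values, an intercept column next to the basis columns, and multiplies by the fitted coefficients, so $\hat y = [1,\ B(x_{\text{new}})]\,\hat\beta$. For `bs()` the basis must be re-evaluated with the knots and boundary knots stored at fit time, which is what R's "safe prediction" mechanism handles. In this sketch a plain polynomial basis stands in for the B-spline basis; the by-hand mechanics are identical.

```python
import numpy as np

def basis(x, degree=3):
    # columns x, x^2, ..., x^degree; the intercept column is added separately
    return np.column_stack([x**k for k in range(1, degree + 1)])

x = np.linspace(0.0, 1.0, 30)
y = 1.0 + 2.0 * x + 3.0 * x**2      # a target the basis can fit exactly

# fit: least squares on the design matrix [1, B(x)]
X = np.column_stack([np.ones_like(x), basis(x)])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# "predict by hand": rebuild the SAME basis at new x, then multiply
x_new = np.linspace(0.0, 1.0, 7)
X_new = np.column_stack([np.ones_like(x_new), basis(x_new)])
yhat_by_hand = X_new @ beta_hat
```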
{}
# Harmonic oscillator position expectation value

I'm trying to get the expectation value of the position as a function of time for the harmonic-oscillator Hamiltonian and the state vector $$|\psi\rangle=a|0\rangle+b|2\rangle.$$ I have $$|\psi(t)\rangle=ae^{-\frac{i\omega t}{2}}|0\rangle+be^{-\frac{5i\omega t}{2}}|2\rangle$$ and $$\langle x(t)\rangle=\langle\psi(t)|x|\psi(t)\rangle.$$ In terms of creation and annihilation operators, $$x=\sqrt{\frac{\hbar}{2m\omega}}(a+a^{\dagger})$$ where $$a^{\dagger}$$ is the creation operator and $$a$$ the annihilation operator. From here it's easy to see that $$\langle x(t)\rangle=0$$, because $$a|0\rangle=0$$, $$a^{\dagger}|0\rangle=|1\rangle$$, $$a|2\rangle\propto|1\rangle$$ and $$a^{\dagger}|2\rangle\propto|3\rangle$$, so all the inner products with the bra $$\langle\psi|$$ will be zero. But how can this make sense? If the expectation value of the position is 0 for all time $$t$$... wouldn't the oscillator be standing still? I was expecting to get a sine or cosine function.

• what is $\alpha$ ? And it's hard to read line that begins "From here it's easy to see..." I also suggest using $c_{1},c_{2}$ as constants instead of $a,b$ since $a$ is used for the lowering operator. – N. Steinle Feb 20 at 1:54
• @N.Steinle I suspect that $\alpha$ is being used in place of $\propto$. – Kyle Kanos Feb 20 at 2:54
• Keep in mind that there is no well defined trajectory for the oscillator. – Aaron Stevens Feb 20 at 3:09
• Also as an example, in random diffusion processes, the mean position is $0$, but that doesn't mean nothing is happening – Aaron Stevens Feb 20 at 3:10

Congratulations! You found out that the time dependence of the harmonic oscillator's eigenstates does not resemble the classical oscillator. If you want a non-zero expectation value you should prepare the system in a superposition of adjacent eigenstates, like $$|\psi\rangle = a |0\rangle + b|1\rangle.$$ That's a consequence of $$x$$ depending on $$a + a^\dagger$$.
Either way, if you want the state that truly resembles the classical oscillator you should look at the coherent states. There are many ways to define them; one example that makes clear their resemblance to the classical oscillator is to translate the ground state by a finite distance $$d$$: $$|\psi\rangle = \exp \left (-\frac{i p d}{\hbar} \right )|0\rangle.$$ Using the Heisenberg picture, where the time-dependent operator $$x$$ is $$x(t) = x(0) \cos \omega t+\frac{p(0)}{m\omega} \sin \omega t$$ and $$|\psi\rangle$$ is fixed in time, you can prove that the expectation value of $$x(t)$$ evolves just like a classical oscillator of amplitude $$d$$: $$\langle x(t)\rangle = \langle \psi |x(t)|\psi\rangle = d \cos \omega t.$$

The expectation value is zero because there is a symmetry between $$x$$ and $$-x$$. If you look at the form of the eigenfunctions, you'll see that both $$\psi_0$$ and $$\psi_2$$ are symmetric about the $$y$$-axis. Intuitively, this means that if you take the expectation value of either of them, or of their sum (their sum will have non-trivial time evolution, but you can convince yourself that the symmetry will be conserved: there's no reason for it to prefer one side over the other), the expectation value of $$x$$ will be zero. In general, eigenstates of the harmonic oscillator do not tend to have the oscillatory behavior that one might expect from classical mechanics. However, this feature is present for coherent states. First recall that the $$\psi_n(x)$$ are time-independent solutions, so there is no reason to suspect that $$\langle x\rangle$$ ought to behave like a classical oscillator since clearly $$\langle n\vert x\vert n\rangle=0$$.
Now it could happen that your state is not an energy eigenstate so that the probability density $$\vert \Psi(x,t)\vert^2$$ is time-dependent, but that doesn't imply that $$\langle x\rangle$$ would be time-dependent as well: imagine an ice cream scoop symmetrically melting: the mass distribution might change in time but the average position of the ice cream might remain constant. As others have indicated, coherent states, which are specific linear combinations $$\psi_n(x)$$ containing all $$n$$ values, have average $$\langle x\rangle$$ that goes like a cosine: see an answer to this question for details. In your specific case $$\langle x\rangle=0$$ by symmetry. Since $$\psi_n(x)$$ is an even function for all even $$n$$'s and an odd function for all odd $$n$$'s, you basically have \begin{align} \langle x\rangle &= \int dx \left(a^*\psi_0(x)+b^*e^{2i\omega t}\psi_2(x)\right) x \left(a \psi_0(x)+b e^{-2i\omega t}\psi_2(x)\right)\, ,\\ &= aa^*\int dx \psi_0(x)^2 x + (a^* be^{-2i\omega t}+ab^*e^{2i\omega t}) \int dx \psi_0(x)\psi_2(x)\\ & \qquad +bb^* \int dx \psi_2(x)^2 x \tag{1} \end{align} In (1), every function under an integral is odd, since the products $$\psi_0(x)^2, \psi_2(x)^2$$ and $$\psi_0(x)\psi_2(x)$$ are even but multiplied by $$x$$. One has to be a bit careful about the limits here, since they are $$\pm\infty$$, but the exponential factor $$e^{-\lambda x^2/2}$$ that enters in $$\psi_n(x)=H_n(\sqrt{\lambda}x)e^{-\lambda x^2/2}$$ will make the integrals converge and you are left with an odd function integrated between symmetric limits, which yields $$0$$ by parity.
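The parity argument above can be checked numerically. Below is a sketch, assuming units $m=\omega=\hbar=1$ (so $x=(a+a^\dagger)/\sqrt2$ and $\langle 0|x|1\rangle=1/\sqrt2$): the matrix elements of $x$ between same-parity eigenstates vanish, while adjacent eigenstates give a non-zero element.

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

# Harmonic-oscillator eigenfunctions in units m = omega = hbar = 1:
#   psi_n(x) = H_n(x) exp(-x^2 / 2) / sqrt(2^n n! sqrt(pi))
def psi(n, x):
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                      # select the physicists' H_n
    norm = 1.0 / math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    return norm * hermval(x, coeffs) * np.exp(-x**2 / 2)

x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

def matrix_element(m, n):
    # <m| x |n> by simple quadrature; the Gaussian tails make this accurate
    return float(np.sum(psi(m, x) * x * psi(n, x)) * dx)

print(matrix_element(0, 0))   # ~0 (parity)
print(matrix_element(0, 2))   # ~0: this is why <x> vanishes for a|0> + b|2>
print(matrix_element(2, 2))   # ~0 (parity)
print(matrix_element(0, 1))   # 1/sqrt(2): adjacent states do couple
```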
{}
# Enumitem: Restart/Reset counter of new list

I use the enumitem package to manipulate the appearance of my list. Because I want to keep the default enumerate list, I defined a new list edulist and configured it. The list should be used in an environment edulistvar, which gives the user the ability to change the itemsep with an optional argument.

I want this list to resume until several points in my document are reached (see \exercise in the example below). I want to reset the counter of the list with \restartlist{<list-name>}, which seems to exist for exactly this reason:

> **enumitem documentation:** Currently, with \setlist[enumerate]{resume} you can get a continuous numbering through a document. A new command has been added for restarting the counter in the middle of the document: \restartlist{<list-name>}. It is based solely in the list name, not the list type, which means enumerate* as defined with the package option inline is not the same as enumerate, because its name is different.

Unfortunately, I don't get it to work in the following MWE.

# Code

    \documentclass{scrartcl}
    \usepackage[ngerman]{babel}
    \usepackage[utf8]{inputenc}
    \usepackage[T1]{fontenc}
    \usepackage{enumitem}
    \usepackage{xparse}

    \DeclareDocumentEnvironment {edulistvar} { O{0pt} }
    {
      \begin{edulist}[itemsep=#1]
    }{
      \end{edulist}
    }

    \DeclareDocumentCommand \exercise { } {%
      \bigskip
      \textsf{\bfseries\Large Exercise}
      \medskip
      \par
      \restartlist{edulist}
    }

    \newlist{edulist}{enumerate}{1}
    \setlist[edulist]{%
      label=\alph*),
      format=\sffamily,
      resume=edulist,
      partopsep=0ex,
      topsep=0.5\baselineskip,
      parsep=\parskip,
    }

    \begin{document}

    \exercise
    Bli Bla Blub
    \begin{edulist}
      \item One
      \item Two
      \item Three
    \end{edulist}

    Bli Bla Blub
    \begin{edulist}
      \item One
      \item Two
      \item Three
    \end{edulist}

    \exercise
    Bli Bla Blub
    \begin{edulistvar}[\baselineskip]
      \item One
      \item Two
      \item Three
    \end{edulistvar}

    \end{document}

# Document

Any idea how to reset the counter of the list?
Manually resetting it (\setcounter{edulisti}{0}) doesn't work either.

• Removing resume=edulist in your setlist-declaration should help. – Hackbard_C Aug 15 '14 at 15:20
• changing resume=edulist into resume should work – clemens Aug 15 '14 at 15:23
• @Hackbard_C: For sure. But as I wrote, I want the list to resume until I restart it at several points (e.g. new sections). So removing resume=edulist isn't a solution. ;o) – dawu Aug 15 '14 at 15:29
• Sorry, my fault. You would have to use \begin{edulist}[resume], which is not as comfortable as @cgnieder's answer, so use his recommendation! It should do the job as desired. – Hackbard_C Aug 15 '14 at 15:32
• @Hackbard_C: No problem! @cgnieder: This works. But: As written in the documentation, resume works only locally. So when I want to put my list inside an environment (e.g. center), it won't work without resume=edulist. (For some reason, I didn't get the series of enumitem to work ...) I edit my example, one moment, please. – dawu Aug 15 '14 at 15:35

I'm not sure I understand your question, either, but it seems straightforward to modify the MWE to do what you appear to want, if I've understood correctly, without the need for intervention by the end user:

    \documentclass{scrartcl}
    \usepackage[utf8]{inputenc}
    \usepackage[T1]{fontenc}
    \usepackage{enumitem}
    \usepackage{xparse}

    \DeclareDocumentEnvironment {edulistvar} { O{0pt} }
    {
      \begin{edulist}[itemsep=#1, resume=edulist]
    }{
      \end{edulist}
    }

    \DeclareDocumentCommand \exercise { } {%
      \bigskip
      \textsf{\bfseries\Large Exercise}
      \medskip
      \par
      \restartlist{edulist}
    }

    \newlist{edulist}{enumerate}{1}
    \setlist[edulist]{%
      label=\alph*),
      format=\sffamily,
      partopsep=0ex,
      topsep=0.5\baselineskip,
      parsep=\parskip,
      resume}

    \begin{document}

    \exercise
    Bli Bla Blub
    \begin{edulist}
      \item One
      \item Two
      \item Three
    \end{edulist}

    Bli Bla Blub
    \begin{edulist}
      \item One
      \item Two
      \item Three
    \end{edulist}

    \exercise
    Bli Bla Blub
    \begin{edulistvar}[\baselineskip]
      \item One
      \item Two
      \item Three
    \end{edulistvar}

    \begin{edulistvar}[\baselineskip]
      \item One
      \item Two
      \item Three
    \end{edulistvar}

    \end{document}

• First, I thought it works. But if you add one more \begin{edulistvar} ... \end{edulistvar} at the end, the list will be reset, even without a new call of \exercise. It should continue. That's why you need resume=edulist (or to work with series, which I didn't get to work inside \setlist). – dawu Aug 16 '14 at 16:54
• @dawu Oops. You are right. I should have tested that. I've corrected the code and extended the MWE accordingly. – cfr Aug 17 '14 at 0:11

Not sure I understood what your problem consists in, exactly, but series work, even inside another environment:

    \documentclass{scrartcl}
    \usepackage[ngerman]{babel}
    \usepackage[utf8]{inputenc}
    \usepackage[T1]{fontenc}
    \usepackage{enumitem}

    \begin{document}

    \begin{center}
    \begin{enumerate}[label=\alph*), series=edu]
      \item One
      \item Two
      \item Three
    \end{enumerate}
    \end{center}

    Bli Bla Blub.

    \begin{center}
    \begin{enumerate}[resume*=edu]
      \item One
      \item Two
      \item Three
    \end{enumerate}
    \end{center}

    Blibli Blabla Blubblub.

    \begin{center}
    \begin{enumerate}[resume*=edu, start = 1]
      \item One
      \item Two
      \item Three
    \end{enumerate}
    \end{center}

    \end{document}

• That's right. In this case, series works. But the user shouldn't need to take care of this. I am sorry that my question wasn't clear enough. I wanted to keep the situation as simple as possible. I hope that the problem now becomes clearer. – dawu Aug 15 '14 at 21:49
{}
# How to prevent "no clue" questions?

A significant number of questions asked on MSE begin or end with "I don't even know where to start", as a way to justify one's inability to provide any work of one's own on the problem and/or to attract sympathy, or rather pity. Most of these questions end up [on hold] within half an hour, because of course this doesn't attract sympathy at all. This is a waste of time and energy for everyone, including the OP, the five reviewers who are going to close the question, and the good Samaritan who has answered in the meantime.

There are plenty of ways to get started on a problem when one has "no clue":

• Write down the definition of the keywords of the problem, thus making sure you understand them, using examples ($\star$) when applicable;
• If the problem involves formal computations, try with specific settings first ($\star$);
• If the problem involves large structures or numbers, try with lower numbers first ($\star$);
• Write down what you know that seems related to the problem: any relevant theorem not in that list will be spotted right away and people will point it out easily.

($\star$) You have to make these examples up yourself, and that very process is excellent for making progress in the way you think: in general, what is a good, representative example in a given situation?

I suggest that the Ask Question form could suggest that, if one ever feels like including "I don't even know how to even start", one could consider trying one of the options above instead, to save time and energy for everyone. A kind of "I'm not a robot" feature.

Edit: I wrote an answer to How to ask a good question based on this post, following the recommendation of @Jack D'Aurizio. It is community wiki, feel free to improve it.

• Sounds like a reasonable improvement of the policy outlined in How to ask a good question.
– Jack D'Aurizio Feb 21 '18 at 19:14 • @JackD'Aurizio: and the guideline in Help Center as well: math.stackexchange.com/help/how-to-ask – Jack Feb 21 '18 at 19:56 • I think it's a bit unfair. A lot of students at one point or another are just not sure as to how to approach a problem. They don't know how to do that. I agree that this is not a great situation, and indeed the ideal response would be to take the "teach a man to fish" rather than "give a man a fish" and explain how to solve problems. That being said, I agree that some nontrivial percentage of people use this as a blanket term to avoid needing to work, or get some sympathy help. [...] – Asaf Karagila Feb 22 '18 at 17:17 • [...] All the more reason why an abstract answer about how to solve problems is better, since it would force them to work anyway. (And a bit of self promotion never hurt anybody... except when it does. ;)) – Asaf Karagila Feb 22 '18 at 17:17 • @AsafKaragila Clearly a number of those who ask such questions genuinely have no clue as to how to approach them; but that doesn't mean that they know nothing. If they really don't know a single definition of the words they are using, then they are asking the wrong question. – Arnaud Mortier Feb 22 '18 at 17:21 • I've had a lot of experience with students coming to my office hours claiming that they don't know how to solve some of the problems. And walking hand-in-hand, it was clear they know the definitions, and they know the theorems, but they don't have the confidence to apply them and see where this is leading. This is mainly as a result of terrible K12 indoctrination (at least in Israel) that math is to be solved via a concrete series of steps, and not as a free-form brainstorming. – Asaf Karagila Feb 22 '18 at 17:23 • @AsafKaragila: the terrible k12 indoctrination applies to India also and judging from many questions on MSE I think it's a global phenomenon. 
There has to be a formula for each and every kind of problem and one just needs to memorize all the formulas possible. – Paramanand Singh Feb 23 '18 at 3:13
• Can't we suspend, for some time, users who routinely ask such "no clue" questions? The decision can be taken on the basis of a threshold number of consecutive questions put on hold because of lack of context. – Paramanand Singh Feb 23 '18 at 3:24
• A Question ban is probably something you are looking for? @ParamanandSingh – user99914 Feb 23 '18 at 3:33
• @JohnMa : yes! Based on the history of questions one can decide the ban period and let the asker come back with some improvement. I don't know if that helps, but sometimes one has to use the stick instead of the carrot. – Paramanand Singh Feb 23 '18 at 3:56
• Great, Arnaud. Good Samaritans could now paste in a canned response as follows: "The following links to advice about asking questions, and about getting yourself unstuck. math.meta.stackexchange.com/a/27933/124085" – fredgoodman Feb 24 '18 at 2:32
• These older discussions are somewhat related: Suggested Guideline for "I Don't Know Where to Begin" Questions and Homework, reasonable to have no clue? – Martin Sleziak Mar 19 '18 at 1:19

In the meantime, here's a proposal:

1. Don't answer the question. Don't even provide a hint.
2. Leave a comment informing said clueless person that this site is a Q & A site for people who have specific questions, not a homework mill. If you are nice, you may provide some tips to the clueless user that will help him/her ask a proper question, i.e., something that shows an investment in the question and material.
3. Downvote answers given to such questions.**

**I do not endorse the idea that a person willing to "help" such a questioner is doing anything good. See, e.g., the reason Tahani Al-Jamil is in the Bad Place.

• Isn't voting to close the most urgent action? – user99914 Feb 23 '18 at 2:07
• Yes and perfectly valid. But I am also expressing distaste so that it reduces the likelihood of this happening so much.
I mean, we will always get lazy people who think they have discovered El Dorado or something here. I have learned though that the ones who simply didn't understand what was expected of them will change their ways and ask better questions (good) and the ones that were hoping to have someone do their homework for them will simply disappear, never to return (also good). – Ron Gordon Feb 23 '18 at 2:10 • I disagree with (3). I don't see a good reason why the answerers should share the burden of a not-so-good-(yet) question (unless an answer itself is problematic). For instance, these two old questions: math.stackexchange.com/q/331404/9464, math.stackexchange.com/q/406200/9464 more or less would be closed very soon using today's "criteria". But I would be very reluctant to downvote your answers because they are really good! And I don't think they are detrimental to the site at all. – Jack Feb 23 '18 at 2:57 • @Jack: downvoting is not a reflection of the rightness or wrongness of the answer necessarily but a reflection of the usefulness of the answer to the community. Perhaps that sounds like splitting hairs but it is not. In this case, I feel that such answers are a detriment to the community because they encourage more of these "no clue" questions. Enough answers to these questions and Math.SE gets a reputation as a homework mill. No thanks. So I downvote to discourage people from answering these questions. – Ron Gordon Feb 23 '18 at 3:00 • For routine problem your 3rd point seems to make sense, but for questions which are not run-of-the-mill variety I think one should not downvote a mathematically correct answer (mostly this happens eg for questions with challenging integrals/series and no context). – Paramanand Singh Feb 23 '18 at 3:19 • @Jack: re my answers. The questions in these cases are clearly not run-of-the-mill homework questions but very difficult questions that occasionally get posted here. 
These too are controversial and nobody has really come up with a great criterion for deciding what is OK and what is not. (Which is why after 5 years I am still having these conversations.) I try my hardest to avoid answering clear HW questions and I apply my advice above when I deem appropriate. But difficult questions that have little context I think are OK if they are clearly not homework. – Ron Gordon Feb 23 '18 at 3:29 • I'm a simple man, and I'll be honest -- that allusion in the sidenote totally tipped the balance from $\pm 0$ to $+1$ for me. – pjs36 Feb 23 '18 at 4:26 • @RonGordon: Thanks for your explanation. I would wholeheartedly agree with you that "(answering) difficult questions that have little context I think are OK if they are clearly not homework", although I think the level of "difficulty" of a given problem may very much depend on the experience of the asker. – Jack Feb 23 '18 at 14:09 • I understand why one shouldn't answer the question, as stated in your first bullet point. But why not give a hint? This seems to assume that the person is "malicious" when he states that he has "no clue". It can be that his only "clue" is using the definitions and he feels that this would not add anything to the question, it can be that he has never faced a similar problem before, or a myriad of other reasons. If the person has the only intention of getting an answer for homework, I understand why not even a hint may be proper. But that is not only assuming the worst, but also unfalsifiable. – Aloizio Macedo Feb 23 '18 at 22:46 • Let me elaborate: If the sole purpose of the person is getting an answer asap, then a hint will most likely not give her that. If the person has legitimately "no clue", a hint can be very fruitful. It seems like a win/win situation, differently from answering straightly in such a situation. – Aloizio Macedo Feb 23 '18 at 22:48
# The rate of a first-order reaction is 0.01 mol L⁻¹ s⁻¹ at 10 min and 0.03 mol L⁻¹ s⁻¹ at 20 min after initiation. Find the half-life of the reaction. [Given $\log\frac{4}{3}=0.1238$]
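A sketch of the standard solution. Note that the transcribed rates (0.01 then 0.03) cannot both be right for a first-order reaction, whose rate must decrease with time; the log(4/3) hint suggests the intended rates were in a 4/3 ratio (e.g. 0.04 and 0.03 mol L⁻¹ s⁻¹), which the hypothetical values below assume.

```python
import math

def first_order_half_life(r1, t1, r2, t2):
    """Half-life from two rate measurements of a first-order reaction.

    For first order, rate = k[A], so the rate decays as exp(-k t):
        k = ln(r1/r2) / (t2 - t1),  then  t_half = ln 2 / k.
    """
    if r1 <= r2:
        raise ValueError("a first-order rate must decrease with time")
    k = math.log(r1 / r2) / (t2 - t1)
    return math.log(2) / k

# Hypothetical rates in the 4/3 ratio implied by the log(4/3) hint
# (the transcribed 0.01/0.03 pair cannot both be right for first order).
t_half = first_order_half_life(0.04, 10.0, 0.03, 20.0)  # minutes
print(round(t_half, 1))  # ~24.1 min
```

With the tabulated $\log\frac{4}{3}=0.1238$ instead of the exact logarithm, the same arithmetic gives $t_{1/2}=\frac{0.693\times 10}{2.303\times 0.1238}\approx 24$ min.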
# A Massive Dense Gas Cloud close to the Nucleus of the Seyfert galaxy NGC 1068 Ray S. Furuya (Tokushima University, 1-1 Minami Jousanjima-machi, Tokushima 770-8502, Japan) and Yoshiaki Taniguchi (The Open University of Japan, 2-11 Wakaba, Mihama-ku, Chiba 261-8586, Japan) ###### Abstract Using the ALMA archival data of both the CO (6–5) line and 689 GHz continuum emission towards the archetypical Seyfert galaxy NGC 1068, we identified a distinct continuum peak separated by 14 pc from the nuclear radio component S1 in projection. The continuum flux gives a gas mass of  and a bolometric luminosity of , leading to a star formation rate of 0.1 M⊙ yr⁻¹. Subsequent analysis of the line data suggests that the gas has a size of  pc, yielding a mean H₂ number density of  cm⁻³. We therefore refer to the gas as a “massive dense gas cloud”: the gas density is high enough to form a “proto starcluster” with a stellar mass of . We found that the gas occupies a unique position between galactic and extragalactic clouds in the diagrams of star formation rate (SFR) vs. gas mass proposed by Lada et al. and of gas surface density vs. SFR surface density by Krumholz and McKee. All the gaseous and star-formation properties may be understood in terms of the turbulence-regulated star formation scenario. Since there are two stellar populations with ages of 300 Myr and 30 Myr in the 100 pc-scale circumnuclear region, we argue that NGC 1068 has experienced at least three episodic star formation events, with the tendency that the inner star-forming region is the younger. Together with several lines of evidence that the dynamics of the nuclear region is decoupled from that of the entire galactic disk, we suggest that the gas inflow towards the nuclear region of NGC 1068 may be driven by a past minor merger.
Received August 4, 2016; Accepted September 8, 2016. Keywords: Galaxy: nucleus – galaxies: Seyfert – galaxies: individual (NGC 1068) – submillimeter – techniques: interferometric ## 1 Introduction NGC 1068 is one of the nearest archetypical Seyfert galaxies in the nearby Universe (Seyfert, 1943; Khachikian & Weedman, 1974), making it an ideal laboratory for understanding active galactic nuclei (AGNs) (Antonucci & Miller, 1985); a distance of 15.9 Mpc (Kormendy & Ho, 2013) is adopted throughout this paper. Therefore, a number of observational studies at various wavelengths have been made to understand the nature of the AGN phenomena in NGC 1068 [e.g., Cecil et al. (2002); Storchi-Bergmann et al. (2012); Mezcua et al. (2015); Lopez-Rodriguez et al. (2016); Wang et al. (2012)]. Another important issue is the so-called starburst-AGN connection, since a number of Seyfert galaxies have an intense circumnuclear ( pc scale) star forming region around their AGNs [e.g., Simkin et al. (1980); Wilson et al. (1991); Storchi-Bergmann et al. (1996) and references therein]. Although there is no general consensus on this issue, both the nuclear ( pc scale) starburst and the AGN activity commonly need efficient gas inflow to the circumnuclear and nuclear regions. Therefore, it is expected that intense star formation events around the nucleus of an AGN-hosting galaxy will provide us with useful hints for understanding the triggering mechanism of AGNs. This issue is also important when we investigate the coevolution between galaxies and supermassive black holes (SMBHs), i.e., the positive correlation between the spheroidal and SMBH masses in galaxies (Kormendy & Ho, 2013; Heckman & Best, 2014). Since NGC 1068 has intense circumnuclear star forming regions around its AGN, it also provides us with an important laboratory for this issue.
It has been suggested that NGC 1068 has two stellar populations in the 100 pc-scale circumnuclear region around the nucleus (Storchi-Bergmann et al., 2012): one is a relatively young stellar population with an age of 300 Myr extending over the 100-pc scale circumnuclear region, and the second is the ring-like structure at 100 pc from the nucleus with an age of 30 Myr. Since the inner 35 pc region is dominated by an old stellar population with an age of 2 Gyr, it is suggested that the two episodic intense star formation events occurred in the circumnuclear region of NGC 1068, although their origins have not yet been understood. At the western part of the ring, molecular hydrogen emission, H₂ S(1), is detected with a shell-like structure (Schinnerer et al., 2000; Vale et al., 2012). Since this emission often probes shock-heated gas, either a super-bubble or an AGN feedback effect or both have been discussed as its origin to date (Storchi-Bergmann et al., 2012; García-Burillo et al., 2014a, b). In either case, a certain asymmetric perturbation could have driven the intense star formation event 30 Myr ago. If there is a certain physical relationship between the circumnuclear and nuclear star formation events and the triggering of the AGN, it is intriguing to investigate the star formation activity in the innermost region of NGC 1068. For this purpose, it is essential to attain high spatial resolution down to the pc scale in both dust continuum emission and thermal molecular lines, which allows us to diagnose not only gas kinematics but also gas physics. In this context, the Atacama Large Millimeter/Submillimeter Array (ALMA) has been extensively used to study the atomic and molecular gas and dust properties of NGC 1068 in detail (García-Burillo et al., 2014a, b, 2016; Imanishi et al., 2016; Izumi et al., 2016).
Among these brand-new ALMA observations, we emphasize the potential importance of the newly detected 689 GHz continuum source located close to the central engine of NGC 1068 observed by García-Burillo et al. (2014a) (project-ID: 2011.0.00083.S), although it was not identified as an independent source by the authors (see their Figure 3). In addition, the continuum source has not been separately identified as an object in their CO (6–5) map either. García-Burillo et al. (2014a) interpreted the molecular gas associated with the continuum source as a portion of the circumnuclear region rather than an independent source; see their Figure 4c. Taking into account the proximity to the nucleus [the nuclear radio component S1 (Gallimore et al., 2004)], we consider that this source must be playing an important role in shaping the observed complicated properties of the nuclear region of NGC 1068. In order to address the nature of the continuum source and its role in the dynamics of the nuclear region, we analyzed their ALMA data. ## 2 Data The ALMA data analyzed here were originally taken by Garcia-Burillo et al.; see details of their observations in García-Burillo et al. (2014a). We retrieved their image data from the data archive system of the Japanese Virtual Observatory (JVO, operated by the National Astronomical Observatory of Japan, NAOJ). We obtained the data sets whose IDs are ALMA01001360 (original file name NGC1068.B9.spw0.avg33chan.fits) for the ALMA Band 9 CO (6–5) line image cube, and ALMA01001362 (NGC1068.B9.continuum.fits) for the Band 9 689 GHz continuum image, which was obtained by concatenating four 1.875 GHz bandwidth spectral windows by García-Burillo et al. (2014a). We used the task imhead in the CASA package to set the rest frequency of the CO transition to  GHz.
To increase the signal-to-noise (S/N) ratio of the line emission, we smoothed the line data along the velocity axis every 2 channels using the task imrebin, keeping the original frame of “LSRK” for the velocity axis. The resultant CO data have 58 channels with a resolution of 13.97 km s⁻¹. After completing this minimum data processing, we exported the CASA-formatted data into FITS files, and imported them into the GILDAS package for scientific analysis. Using our own scripts running on the GILDAS package used in previous works, e.g., Furuya et al. (2014), we shifted the origins of all the images to that of the previously known AGN, the nuclear radio component S1 (Gallimore et al., 2004). Subsequently, we evaluated the RMS noise levels of the images by calculating statistics over arbitrarily selected emission-free areas in the 3-dimensional cube data. Iterating such analyses with changing areas, we found that the RMS noise levels calculated in each velocity channel were fairly uniform, with an uncertainty of 28%. We end up with a mean image noise level of 1σ = 16.8 mJy beam⁻¹ in specific intensity per 14 km s⁻¹ resolution for the CO (6–5) line, and 1.67 mJy beam⁻¹ for the 689 GHz continuum image. Both the line and continuum images have the same pixel size of 0.05″. The synthesized beam size of the images which we retrieved from the archive (0.33″ × 0.22″ in FWHM at PA = 81°) slightly differs from that in García-Burillo et al. (2014a) (0.4″ × 0.2″ at PA = 50°). We consider that such a difference would be caused by differences in the visibility-data flagging and in the parameters used when Fourier-transforming into the image plane. ## 3 Results ### 3.1 The Nuclear 689 GHz Continuum Peak Here we focus our attention on a nuclear 689 GHz continuum peak close to the nucleus.
In order to show the presence and the location of this continuum peak in the nuclear region of NGC 1068, we present Figure 1, where the overall morphology is shown by the optical image [panel (a)] whereas the complicated central morphology is shown by the submm ALMA images [panels (b) and (c)]. We stress that the complexity of the central region is clearly recognized both in (b) the velocity centroid map, which is produced from the CO (6–5) line, and in (c) the 689 GHz continuum map. Here, the velocity centroid map in units of km s⁻¹ is obtained as an intensity-weighted mean velocity map by dividing the first-order moment map by the zeroth-order one. These moment maps are calculated using the data shown in Figure 2 with the task moments over a velocity range starting at V(LSR) = 1020 km s⁻¹. This velocity range is selected by comparing the velocity channel maps (Figure 2) and the spectrum (Figure 3). Figure 1c presents the spatial distribution of the 689 GHz emission in the central region of the galaxy. Comparing Figure 1c and the lower panel of Figure 3 in García-Burillo et al. (2014a), one immediately notices that there exists a local peak of the continuum emission, but its position does not coincide with that of the known AGN, S1. Although this continuum peak is readily recognized in Figure 3 of García-Burillo et al. (2014a), these authors did not identify it as a distinct object, and no discussion of it was given in their paper. This 689 GHz continuum local maximum has a peak intensity of 16.2 mJy beam⁻¹, corresponding to 0.60 K in mean brightness temperature over the synthesized beam. We obtained its flux density of 9.5 mJy integrated over the beam centered on the peak. This peak is located at RA (J2000) =  and DEC (J2000) = −0°0′47.79″, which is 0.18″ NNE of the position of the nuclear component S1 identified by the 8.4 GHz continuum imaging (Gallimore et al., 2004) at RA (J2000) =  and DEC (J2000) = −0°0′47.9449″, which is adopted in García-Burillo et al.
(2016) (see the caption of their Figure 1). The position of S1 reported in Gallimore et al. (2004) was obtained through astrometry between the 8.4 GHz continuum image taken by VLBA and that of the H₂O masers observed by VLA, yielding an absolute position accuracy of  mas [see Figure 7 in Gallimore et al. (2004)]. On the other hand, it is not trivial to evaluate the absolute position accuracy of the 689 GHz continuum peak, which should be primarily determined by the accuracies of the baseline vectors, the angular separation(s) to the calibrator(s), and their absolute position accuracies. We therefore adopt the widely accepted idea that the absolute position accuracy of a point source imaged by a connected-type interferometer is typically better than a fraction of its synthesized beam size. Namely, we employ an absolute position accuracy of  as a fiducial value. Notice that this accuracy is identical to the pixel field of view (§2). Taking all the above into account, the angular separation of 0.18″ between S1 and the 689 GHz continuum peak is believed to be real, as clearly recognized in Figure 3 of García-Burillo et al. (2014a). Last, the angular separation corresponds to a projected separation of 14 pc. ### 3.2 Molecular Gas Associated with the 689 GHz Continuum Peak In order to elucidate the origin of the 689 GHz continuum source, we investigate the molecular gas properties associated with this source. First, we compare velocity channel maps of the CO (6–5) line emission by overlaying them on the continuum emission map in Figure 2. We note that the CO emission around the continuum peak appears to be contaminated by the gas associated with the circumnuclear ring (Schinnerer et al., 2000), as seen towards the top-left corner in each panel. Second, we present the CO (6–5) spectral profile towards the continuum peak after subtracting the continuum emission (Figure 3). The spectrum was made by integrating the CO (6–5) emission inside the dotted ellipse shown in Figure 2.
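The angular-to-projected conversion quoted above (0.18″ offset at the adopted distance of 15.9 Mpc corresponding to ~14 pc) is a simple small-angle calculation; a minimal sketch:

```python
import math

ARCSEC_TO_RAD = math.pi / (180.0 * 3600.0)

def projected_separation_pc(theta_arcsec, distance_mpc):
    """Small-angle projected separation s = theta * D, returned in pc."""
    return theta_arcsec * ARCSEC_TO_RAD * distance_mpc * 1.0e6

sep = projected_separation_pc(0.18, 15.9)
print(round(sep, 1))  # ~13.9 pc, i.e. the ~14 pc quoted in the text
```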
The ellipse, i.e., the adopted aperture, is centered on the 689 GHz continuum peak, and its area is identical to that of the synthesized beam. The systemic velocity of the entire galaxy, V(LSR) ≃ 1126 km s⁻¹, falls between the blue and green bars in the spectrum, yielding an asymmetry of the blue and red components with respect to the systemic velocity. We thus believe that the local gas associated with the continuum is decoupled from the galaxy-wide motion of the gas. Therefore, we arbitrarily assume that the green-coded component seen in the velocity range of V(LSR) = 1140–1160 km s⁻¹ represents the bulk motion of the local gas. This assumption would not be affected by the results from the higher angular resolution new observations by García-Burillo et al. (2016), because the source of our interest appears to be resolved out by the extended array configuration observations. The blueshifted component is seen at V(LSR) = 1050–1120 km s⁻¹, whereas the red one at V(LSR) = 1200–1230 km s⁻¹. Namely, both the blue- and redshifted components have almost the same velocity shifts with respect to that of the bulk gas, i.e., 65 km s⁻¹. Third, in Figure 4, we show the position-velocity (PV) maps, where we adopted the line with PA = 120° passing through the 689 GHz continuum peak as the slicing line (the solid line in Figure 1b). The direction of this line is perpendicular to the line connecting the 689 GHz continuum peak with the nucleus, S1 (PA ≈ 30°). We also point out that the 689 GHz continuum is elongated towards the north-northeast (PA ≈ 30°); see Figure 1b. Despite the inadequate spatial resolution, the PV diagram in panel b demonstrates that multiple velocity components of the gas coexist within a compact region which cannot be resolved by the beam size of the data analyzed in this work.
Last, we do not completely rule out the alternative hypothesis that an opaque “static” single-velocity component gas is responsible for the multiple velocity features in Figures 3 and 4. In this interpretation, the spectral profile is considered as self-absorption of the line because of high optical depth (described in §4.2). However, this single-gas hypothesis has a caveat: one should then observe a double-peaked spectral profile whose absorption dip appears around the LSR velocity of the static bulk gas. Contrary to this expectation, we detected the weak emission labeled with the green bar (Figure 3). We therefore stick to the inference from Figure 4 that there exist multiple components of the gas having different velocities along the line of sight. ## 4 Analysis ### 4.1 Dynamical Properties Subsequent questions are how compact the gas cloud is and what the origin of the multiple velocity components is. An estimate of the size may be obtained from the effective radius of the beam, i.e., the geometrical mean of the major and minor axes of the elliptical beam, of ≈ 21 pc. Although the beam size does not suffice to resolve the gas “condensation”, we attempt to give a better constraint on the radius as follows. At each velocity channel, we searched for the peak pixel within the aperture where we made the spectrum. Figure 5 compares the peak pixel positions obtained from each velocity channel shown in Figure 2. To produce the figure, we plotted only the peak pixel positions which have an S/N ratio higher than 4. Assessing the scatter of their positions, we obtained a stronger constraint that the spatial extent of the gas is at most 2 × 0.07″, corresponding to 25 pc in effective diameter (see Figure 5). Here we excluded the V(LSR) =  km s⁻¹ component, which seems to represent a local maximum of the gas contaminated by that associated with the circumnuclear ring rather than the gas of our interest.
Taking into account both the symmetry of the velocity ranges where signals were detected and the spatio-velocity structure recognized in Figure 4b, one may consider that the blue- and redshifted gas are associated with a rotating structure around the central object. If we adopt a rotating radius of 3 pc, an upper limit of the enclosed mass can be calculated. However, we argue that the gas is not in equilibrium by pure rotation. This is because its specific angular momentum is significantly higher than that expected from the correlation between specific angular momentum and angular velocity (Bodenheimer, 1995), where the angular velocity is of the order of  s⁻¹ for this object. Note that a typical GMC has an angular velocity of the order of  s⁻¹ (Bodenheimer, 1995). We therefore return to the most naïve hypothesis: there are multiple components of gas having different velocities along the line of sight. ### 4.2 Star Formation and Gas Properties Another clue to shed light on the nature of the gas condensation is obtained from an analysis of the continuum flux. Following the spectral energy distribution (SED) analysis in García-Burillo et al. (2014a), we attempted to explain the observed 689 GHz continuum flux by thermal emission from dust grains, which can be approximated by single-temperature grey-body emission. Adopting a range for the dust temperature of 50–70 K, a frequency index of the dust emissivity (β) of 1.7 (Klaas et al., 2001), a dust mass absorption coefficient of 0.005 cm² g⁻¹ at a reference frequency of  GHz (Preibisch et al., 1993; André et al., 1996), and the adopted distance, we found that a gas plus dust mass of  is required to reproduce the observed value. For simplicity of the analysis, we kept the hypothesis that the thermal emission from dust grains can be approximated as if it emanated from a single-component gas [e.g., Klaas et al. (2001)], regardless of the possible multiple ones (§3.2). Notice that the adopted β value is a typical one for the interstellar medium, whose spectral energy distribution often shows β of [e.g., Beckwith et al.
(2000)],  (Klaas et al., 2001), and 1.78 (Planck Collaboration et al., 2011b). It should also be noticed that the above absorption coefficient is not for dust alone but for the whole interstellar medium; therefore, the so-called gas-to-dust mass ratio does not need to be applied. The inferred mass and size yield a mean molecular hydrogen number density of  cm⁻³, which is comparable to the critical density of the CO (6–5) transition. Because of such a high density, we refer to the continuum source as a “massive dense gas cloud”, which would be a scaled-up version of the galactic high-mass star-forming hot molecular cores (HMCs) [see e.g., Kurtz et al. (2000); Beltrán et al. (2005); Furuya et al. (2011)]. Furthermore, this value leads to a mean column density of  cm⁻², yielding a mean optical depth of the order of 0.01–0.1. Since the bolometric luminosity of optically thin dust emission is given by an expression involving Riemann's zeta function and the gamma function, we estimate that the bolometric luminosity would range over (0.4–4) . If the widely accepted conversion factor between infrared luminosity and star formation rate (SFR) given by Eq. (4) in Kennicutt (1998) can be applied to the gas, the SFR is calculated to be 0.1 M⊙ yr⁻¹ with another assumption of . We summarize the derived properties in Table 6. Given the resultant value and a fractional abundance of [CO]/[H₂] (Dickman, 1978), and assuming that the gas and dust are well coupled, i.e., that the gas temperature is represented by the dust temperature of 50–70 K, we obtained a radiation temperature for the CO (6–5) transition of 35–52 K and an optical depth of the line of , using the non-LTE radiative transfer code RADEX (van der Tak et al., 2007), for the most intense blueshifted component (Figure 3). In these calculations, we kept the practical approximation of the single-gas hypothesis, and adopted a velocity width of  km s⁻¹ in FWHM for the blue component and the volume density estimated above. We also measured its peak flux density of 70 mJy (Figure 3), which corresponds to  K.
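The grey-body mass estimate described above can be sketched numerically. The reference frequency of the absorption coefficient is lost from the extracted text, so the sketch assumes κ₀ = 0.005 cm² g⁻¹ at 230 GHz (the 1.3 mm value of Preibisch et al. 1993 and André et al. 1996); the flux (9.5 mJy) and distance (15.9 Mpc) are taken from §3.1 and §1.

```python
import math

# Physical constants (SI)
H = 6.62607e-34      # Planck constant [J s]
K_B = 1.38065e-23    # Boltzmann constant [J/K]
C = 2.99792458e8     # speed of light [m/s]
M_SUN = 1.989e30     # solar mass [kg]
PC = 3.0857e16       # parsec [m]

def planck(nu, t):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu**3 / C**2 / math.expm1(H * nu / (K_B * t))

def greybody_gas_mass(flux_jy, dist_mpc, nu_ghz, t_dust,
                      kappa0_cm2_g=0.005, nu0_ghz=230.0, beta=1.7):
    """Gas+dust mass from optically thin grey-body emission:
        M = S_nu D^2 / (kappa_nu B_nu(T_d)), kappa_nu = kappa0 (nu/nu0)^beta.
    kappa0 is per gram of gas+dust, so no gas-to-dust ratio is applied.
    NOTE: nu0 = 230 GHz is an assumption (the value is lost from the text).
    """
    nu = nu_ghz * 1e9
    kappa = kappa0_cm2_g * (nu_ghz / nu0_ghz) ** beta * 0.1  # cm^2/g -> m^2/kg
    d = dist_mpc * 1e6 * PC
    s = flux_jy * 1e-26  # Jy -> W m^-2 Hz^-1
    return s * d**2 / (kappa * planck(nu, t_dust)) / M_SUN

# 9.5 mJy at 689 GHz, D = 15.9 Mpc, T_d in the adopted 50-70 K range
m_lo = greybody_gas_mass(9.5e-3, 15.9, 689.0, 70.0)  # warmer dust, lower mass
m_hi = greybody_gas_mass(9.5e-3, 15.9, 689.0, 50.0)  # cooler dust, higher mass
print(f"{m_lo:.2e} - {m_hi:.2e} Msun")
```

Under these assumptions the mass comes out in the several × 10⁵ M⊙ range; the value quoted in the paper itself was stripped by the extraction, so this is only an order-of-magnitude cross-check.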
The ratio of the observed brightness temperature to the radiation temperature gives an estimate of the beam-filling factor for an optically thick line. Solving the definition of the filling factor for the radius of the gas, we obtain  pc. Repeating the same analysis for the redshifted component, we obtained  pc. Despite the roughness of such an assessment, we confirmed that the inferred radius of  pc has a reasonable consistency with the estimate of 25 pc which is independently obtained from Figure 5 (§4.1). We also confirmed that the number density of the gas inferred in the second paragraph of this subsection remains the same within a factor of 2–3 even if we recalculate it with the revised radius. The radius and a 3D velocity dispersion yield an “effective” virial mass, which includes both thermal and non-thermal contributions to support the gas against self-gravity, of . Because the gas mass is comparable to this virial mass, the gas must be on the verge of star formation. Moreover, it is fairly reasonable to conclude that the formation of a star or a star cluster has already commenced in the gas, because such a gas would gravitationally collapse within a few free-fall times or within the dissipation time scale of the turbulence. Notice that our SED analysis adopts the implicit assumption that the 689 GHz continuum emission is purely due to reprocessed thermal emission from dust grains heated by internal sources. On the basis of this value, which is comparable to those of galactic giant molecular clouds, we further argue that the putative heating source which is deeply embedded in the “massive dense gas cloud” is a “proto starcluster”. Assuming a typical star formation efficiency (SFE) of a few to 10%, as measured in galactic star-forming clouds (Kennicutt & Evans, 2012), the gas may form (or may be forming) a “proto starcluster” with a total stellar mass of the order of a few  (Table 6).
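The free-fall argument above can be sketched numerically. The cloud's H₂ number density was lost in extraction, so the sketch assumes n(H₂) = 10⁵ cm⁻³, i.e. a value of order the CO (6–5) critical density that the text says the mean density is comparable to.

```python
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_H = 1.6726e-27     # hydrogen atom mass [kg]
YEAR = 3.156e7       # year [s]

def free_fall_time_yr(n_h2_cm3, mu_h2=2.8):
    """Free-fall time t_ff = sqrt(3 pi / (32 G rho)) for a cloud of mean
    H2 number density n_h2; mu_h2 = 2.8 per H2 accounts for helium."""
    rho = n_h2_cm3 * 1e6 * mu_h2 * M_H  # cm^-3 -> m^-3, then mass density
    return math.sqrt(3.0 * math.pi / (32.0 * G * rho)) / YEAR

# Assumed density (the paper's value is lost from the extracted text)
t_ff = free_fall_time_yr(1.0e5)
print(f"{t_ff:.1e} yr")  # ~1e5 yr
```

A collapse time of order 10⁵ yr, far shorter than the 30–300 Myr ages of the circumnuclear stellar populations, is what makes the "already commenced" conclusion natural.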
## 5 Discussion ### 5.1 Nature of the Massive Dense Gas Cloud It is interesting to compare the physical properties of “the massive dense gas cloud” in the vicinity of the nucleus of NGC 1068 with those in our Galaxy and in other galaxies, because doing so provides a crucial hint on its origin. For this purpose, we present a plot of SFR vs. gas mass in Figure 6a, which is taken from Lada et al. (2012). Clearly, the estimated SFR of the order of 0.1 M⊙ yr⁻¹ and the gas mass of  (Table 6) make the “massive dense gas cloud” unique in the diagram, located almost midway between the galactic star-forming clouds and the extragalactic ones. Moreover, we point out that the dense gas fraction of the cloud is almost 100%, as expected from the high density described above. This fraction is comparable not only to those of intense star-forming gas clouds observed in ultraluminous infrared galaxies (ULIRGs) such as Arp 220, but also to those of active star-forming gas clouds in our Galaxy. Next, we investigate the properties of the gas with the relationship between the SFR surface density, in units of M⊙ yr⁻¹ kpc⁻², and the gas surface density, in M⊙ pc⁻² (Figure 6b), originally produced by Krumholz & McKee (2005). The dashed line in the plot indicates the best-fit curve for the observed data (Kennicutt, 1998), and the solid one is the analytical prediction from the turbulence-regulated star formation model by Krumholz & McKee (2005), which adopts the following three assumptions: (1) star formation occurs in molecular clouds that are in a supersonically turbulent state, (2) the density distribution within these clouds is lognormal, and (3) stars form in any subregion of a cloud that is so overdense that its gravitational potential energy exceeds the energy in turbulent motions. We argue that the “massive dense gas cloud” satisfies the three assumptions. First, the results indicate the onset of star formation, which presumably proceeds in a turbulent state.
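The supersonic-turbulence claim can be quantified with a minimal sketch. The non-thermal velocity dispersion used here is an assumption: the ~65 km s⁻¹ component shifts of §3.2 serve as a proxy, since the dispersion actually adopted in the paper is lost from the extracted text.

```python
import math

K_B = 1.38065e-23   # Boltzmann constant [J/K]
M_H = 1.6726e-27    # hydrogen atom mass [kg]

def sound_speed_kms(t_gas, mu=2.33):
    """Isothermal sound speed c_s = sqrt(k T / (mu m_H)) in km/s;
    mu = 2.33 for molecular gas with [He] = 0.1 [H], as in the text."""
    return math.sqrt(K_B * t_gas / (mu * M_H)) / 1e3

# Assumed non-thermal 1D dispersion: the ~65 km/s component shifts of
# Section 3.2 (a proxy, not the paper's adopted value).
sigma_nt = 65.0  # km/s
for t in (50.0, 70.0):
    mach = sigma_nt / sound_speed_kms(t)
    print(f"T = {t:.0f} K: c_s = {sound_speed_kms(t):.2f} km/s, Mach ~ {mach:.0f}")
```

For 50–70 K gas the sound speed is only ~0.4–0.5 km s⁻¹, so any dispersion of tens of km s⁻¹ implies a Mach number of order 10², i.e. a highly supersonic medium.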
Here, the reference virial mass is the one that can be supported by thermal motion alone for the 50–70 K gas, where the mean molecular weight is 2.33 (for [He] = 0.1 [H]). This leads to a ratio of the non-thermal to thermal velocity dispersion (the Mach number) of the order of  (Table 6), suggestive of a highly turbulent state. The second assumption is not readily proven without a detailed analysis [e.g., Figure 12a in Furuya et al. (2014)], but the above will allow us to hypothesize it. The third one is supported by the multiple velocity components of the gas (§4.1). Although we need higher resolution observations with higher sensitivity to assess the physical properties of the gas on firmer ground, its star formation activity may be explained in terms of the turbulence-regulated star formation scenario. Last, it is interesting that the location of the massive dense gas in this plane is close to those of SSCs in ULIRGs. ### 5.2 Origin of the Massive Dense Gas Cloud We have investigated the physical and star-formation properties of the “massive dense gas cloud” in NGC 1068. Now, we briefly address how such a star cluster could be formed in the very nuclear region of the galaxy. A straightforward interpretation is that the “massive dense gas cloud” was formed through shock compression of clouds via cloud-cloud collision (Habe & Ohta, 1992; Hasegawa et al., 1994; Inoue & Fukui, 2013) in the nuclear region. If this is the case, a question arises as to how such a cloud collision was induced in the very nuclear region of NGC 1068. Here we recall that NGC 1068 has two distinct star forming regions around the nucleus: one is the so-called circumnuclear star forming region whose star formation activity has an age of 300 Myr, and the second is the ring-like structure at 100 pc from the nucleus with an age of 30 Myr (Storchi-Bergmann et al., 1996). This means that NGC 1068 has experienced a couple of episodic star formation events in its circumnuclear region.
If we assume that NGC 1068 experienced a minor merger in the past, recurrent star formation events induced by cloud-cloud collisions can be naturally understood, because they are induced by the orbital sinking motion of the satellite galaxy to be merged (Mihos & Hernquist, 1994; Taniguchi & Wada, 1996). Before adopting such an interpretation, caution must be exercised, because recent 3D magnetohydrodynamics simulations have pointed out that a galaxy itself can form such massive dense gas clouds by means of collisions of filamentary clouds threaded by magnetic fields (Inoue & Fukui, 2013) or by multiple compressions (Inutsuka et al., 2015), without merging of galaxies. However, in the case of NGC 1068, some observational properties suggest a past minor merger, although no disturbed structures can be recognized around the galaxy (see Figure 1a). First, the kpc-scale narrow line regions are distributed along an axis which is far from the rotational axis of the galactic disk (Cecil et al., 1990, 2002). Second, the molecular torus (0.1–1 pc scale) probed by H₂O masers is observed in an almost edge-on geometry (Greenhill et al., 1996; Gallimore et al., 1996b, 2001), whereas the overall galactic disk is observed in a nearly face-on geometry (the observed optical minor-to-major axis ratio of 0.85 nominally gives a viewing angle toward the galactic disk of NGC 1068 of 32°; de Vaucouleurs et al., 1991). Third, the circumnuclear molecular gas clouds (100 pc scale) also show highly asymmetric structures (García-Burillo et al., 2014a). All these lines of evidence can be interpreted as indicating that the dynamics of the nuclear region is decoupled from that of the entire galactic disk. These characteristic properties would not be readily explained if the gas inflow in NGC 1068 were due to gradual angular momentum loss driven by structures such as spiral arms and a bar in the galactic disk.
Therefore, gas fueling driven by a minor merger seems to be the most natural mechanism for the case of NGC 1068 [see Taniguchi (1999) for a review]. It should be noticed that a minor merger would occur on an orbit inclined with respect to the galactic disk of the host galaxy, making both the circumnuclear and nuclear structures decoupled from the dynamics of the galactic disk. We also note that the orbital period becomes shorter as the separation between the satellite and the host galaxy becomes smaller. Namely, the satellite galaxy is anticipated to interact or collide with the galactic disk more and more often, over a time scale that becomes shorter as the merger stage proceeds (Mihos & Hernquist, 1994; Taniguchi & Wada, 1996). This explains the observed nature of the episodic star formation events in NGC 1068. Considering the well-defined, overall symmetric morphology of the outer disk, we propose a picture in which NGC 1068 is experiencing the final stage of a minor merger. In this context, we argue that the newly found “massive dense gas cloud”, with an SFR of the order of 0.1 M⊙ yr⁻¹, may have been formed by past gas collision(s) between/among nuclear gas clouds in the putative minor merger event. Another merit of the minor merger scenario is that star clusters can be formed in the central region of a merger remnant (Mihos & Hernquist, 1994; Taniguchi & Wada, 1996). On observational grounds, massive star clusters known as super star clusters (SSCs) often form in the interacting regions of major mergers, such as in luminous infrared galaxies (LIRGs) [e.g., Whitmore et al. (1993); Mulia et al. (2016)] and ULIRGs [e.g., Shaya (1994); Shioya (2001)]. In the case of Arp 220, some SSCs in the central region tend to be more massive (e.g., ) than those located in the circumnuclear zone [e.g., : Shioya (2001)]. On the other hand, in the case of NGC 3256, one of the LIRGs, the typical mass of the nuclear star clusters is also  (Mulia et al., 2016).
On the other hand, as for minor mergers, such observations have not yet been made to date. In general, it is difficult to identify galaxies in a late phase of a minor merger because tidal features in the outer part of the galaxy are easily smeared out after several rotations of the galactic disk [e.g., Khan et al. (2012)]. Clearly, it is necessary to conduct systematic surveys for minor mergers in Seyfert galaxies, and then carry out high-resolution optical imaging to search for nuclear star clusters in these systems. It is important to note that the “massive dense gas cloud” appears to be associated with the nucleus of NGC 1068 at a projected separation of 14 pc. Since the nucleus, i.e., a SMBH, has a mass of (Kormendy & Ho, 2013), the SMBH accompanied by the “massive dense gas cloud” is expected to behave as a binary system, yielding an asymmetric gravitational potential. It is possible that this explains the complicated observational properties in the nuclear region of NGC 1068. It is also worthwhile to note that the H$_2$O maser disk (or ring) around the SMBH at the nuclear radio component S1 in NGC 1068 does not exhibit pure Keplerian rotation (Greenhill et al., 1996; Gallimore et al., 1996b, 2001; Murayama & Taniguchi, 1997), whereas that of NGC 4258 is explained almost perfectly by Keplerian rotation (Miyoshi et al., 1995). As also discussed in García-Burillo et al. (2016), the observed non-Keplerian motion could be a signature of the so-called Papaloizou-Pringle instability (Papaloizou & Pringle, 1984), although it remains possible that the non-Keplerian rotation may be attributed to dynamical interaction with the star cluster. AGNs are thought to be powered by the gravitational energy released through gas accretion (i.e., gas fueling) onto a SMBH residing in the nucleus of a galaxy (Rees, 1984). 
Among several physical mechanisms proposed for such efficient gas fueling, galaxy major mergers appear to be the most efficient one for explaining the triggering of AGN phenomena (Sanders & Mirabel, 1996; Hopkins et al., 2008). If low-luminosity AGNs such as Seyfert galaxies can be powered by minor mergers with a satellite galaxy, it is possible to have a unified triggering mechanism for all types of AGNs (Taniguchi, 1999, 2013). Such future studies will provide us with a unique opportunity to test our knowledge of star formation, established in “quiescent” Galactic clouds, in extreme environments such as the nuclei of Seyfert galaxies and merging galaxies. ## 6 Concluding Remarks To shed light on the nature of both circumnuclear and nuclear star formation in conjunction with the AGN activity, we analyzed ALMA archival data on both the CO (6–5) line and 689 GHz continuum emission towards the archetypical nearby Seyfert galaxy NGC 1068 ( = 15.9 Mpc). The ALMA data were originally taken by García-Burillo and colleagues [see details of their observations in García-Burillo et al. (2014a)]. In this work, we focused on the 689 GHz local continuum peak in the vicinity of the nucleus, located 14 pc (0.18″) NNE of the nucleus. Although the continuum peak of our interest was already found in the analysis by García-Burillo et al. (2014a), no discussion was given in their paper. Since a near-nuclear gas condensation such as the newly identified “massive dense gas cloud” is generally expected to physically affect the nuclear activity of a galaxy, we thoroughly investigated the physical properties of the source. Our findings can be summarized as follows. 1. The 689 GHz continuum flux gives a gas mass and bolometric luminosity (see Table 6 for the values), allowing us to estimate a SFR of 0.1 $M_{\odot}$ yr$^{-1}$. We estimated the size of the gas cloud to be  pc in diameter by means of two methods (§4.1 and §4.2). 
Because the two results are reasonably consistent, we obtained a mean H$_2$ number density of  cm$^{-3}$. Therefore, this continuum peak can be identified as a “massive dense gas cloud”. 2. The gas density is high enough to form a “proto star cluster” with a total stellar mass of . We argue that this gas cloud will evolve into a nuclear star cluster around the nucleus of NGC 1068. 3. The gas cloud is identified as a missing link between galactic and extragalactic gas clouds in the previously known scaling relations of [a] SFR vs. gas mass proposed by Lada et al. (2012), and [b] surface density of gas vs. SFR density by Krumholz & McKee (2005). All the gaseous and star-formation properties (Table 6 and Figure 6) may be understood in terms of the turbulence-regulated star formation scenario proposed by Krumholz & McKee (2005). 4. Since there are two stellar populations with ages of 300 Myr and 30 Myr in the 100 pc-scale circumnuclear region, we argue that NGC 1068 has experienced at least three episodic star formation events, with a tendency for the inner star-forming regions to be younger. Given the evidence for the gas dynamics in the nuclear region, the nuclear region of NGC 1068 is suggested to be decoupled from the entire galactic disk. We propose that the gas inflow towards the nuclear region of the galaxy may be driven by a past minor merger. We sincerely acknowledge the anonymous referee, whose comments significantly helped to improve the quality of our analysis and discussion. The authors sincerely acknowledge Charles Lada, Mark Krumholz, and the Copyright & Permissions Team of the AAS journals for their kind permission to use their figures in this work (Figure 6). We would also like to thank Michael R. Blanton for providing us with his SDSS color composite image of NGC 1068 shown in Figure 1a, and Fumi Egusa for her generous support in handling the CO (6–5) image data. 
This work was financially supported in part by JSPS (YT; 23244041 and 16H02166). This paper makes use of the ALMA data of ADS/JAO.ALMA2011.0.00083.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. ## References • André et al. (1996) André, P., Ward-Thompson, D., & Motte, F. 1996, \aap, 314, 625 • Antonucci & Miller (1985) Antonucci, R. R. J., & Miller, J. S. 1985, \apj, 297, 621 • Beckwith et al. (2000) Beckwith, S. V. W., Henning, T., & Nakagawa, Y. 2000, Protostars and Planets IV, 533 • Beltrán et al. (2005) Beltrán, M. T., Cesaroni, R., Neri, R., et al. 2005, \aap, 435, 901 • Bodenheimer (1995) Bodenheimer, P. 1995, \araa, 33, 199 • Cecil et al. (1990) Cecil, G., Bland, J., & Tully, R. B. 1990, \apj, 355, 70 • Cecil et al. (2002) Cecil, G., et al. 2002, \apj, 568, 627 • Crutcher (2012) Crutcher, R. M. 2012, \araa, 50, 29 • de Vaucouleurs et al. (1991) de Vaucouleurs, G., de Vaucouleurs, A., Corwin, H. G., Jr., et al. 1991, Third Reference Catalogue of Bright Galaxies (New York: Springer) • Dickman (1978) Dickman, R. L. 1978, \apjs, 37, 407 • Furuya et al. (2009) Furuya, R. S., Kitamura, Y., & Shinnaga, H. 2009, \apjl, 692, L96 • Furuya et al. (2011) Furuya, R. S., Cesaroni, R., & Shinnaga, H. 2011, \aap, 525, A72 • Furuya et al. (2014) Furuya, R. S., Kitamura, Y., & Shinnaga, H. 
2014, \apj, 793, 94 • Gallimore et al. (1996a) Gallimore, J. F., Baum, S. A., O’Dea, C. P., & Pedlar, A. 1996a, \apj, 458, 136 • Gallimore et al. (1996b) Gallimore, J. F., Baum, S. A., O’Dea, C. P., Brinks, E., & Pedlar, A. 1996b, \apj, 462, 740 • Gallimore et al. (1996c) Gallimore, J. F., Baum, S. A., O’Dea, C. P., & Pedlar, A. 1996c, \apj, 458, 136 • Gallimore et al. (2001) Gallimore, J. F., Henkel, C., Baum, S. A., et al. 2001, \apj, 556, 694 • Gallimore et al. (2004) Gallimore, J. F., Baum, S. A., & O’Dea, C. P. 2004, \apj, 613, 794 • García-Burillo et al. (2014a) García-Burillo, S., Combes, F., Usero, A., et al. 2014, \aap, 567, A125 • García-Burillo et al. (2014b) García-Burillo, S., Fuente, A., Hunt, L. K., et al. 2014, \aap, 570, A28 • García-Burillo et al. (2016) García-Burillo, S., Combes, F., Ramos, A., et al. 2016, \apj, 823, L12 • Greenhill et al. (1996) Greenhill, L. J., Gwinn, C. R., Antonucci, R., & Barvainis, R. 1996, \apjl, 472, L21 • Habe & Ohta (1992) Habe, A., & Ohta, K. 1992, \pasj, 44, 203 • Hasegawa et al. (1994) Hasegawa, T., Sato, F., Whiteoak, J. B., & Miyawaki, R. 1994, \apjl, 429, L77 • Heckman & Best (2014) Heckman, T., & Best, P. 2014, \araa, 52, 589 • Hopkins et al. (2008) Hopkins, P. F., Hernquist, L., Cox, T. J., & Kereš, D. 2008, \apjs, 175, 356 • Imanishi et al. (2016) Imanishi, M., Nakanishi, K., & Izumi, T. 2016, \apj, 822, L10 • Inoue & Fukui (2013) Inoue, T., & Fukui, Y. 2013, \apjl, 774, L31 • Inutsuka et al. (2015) Inutsuka, S.-i., Inoue, T., Iwasaki, K., & Hosokawa, T. 2015, \aap, 580, A49 • Izumi et al. (2016) Izumi, T., Nakanishi, K., Imanishi, M., & Kohno, K. 2016, \mnras, 459, 3629 • Kennicutt (1998) Kennicutt, R. C., Jr. 1998, \araa, 36, 189 • Kennicutt & Evans (2012) Kennicutt, R. C., & Evans, N. J. 2012, \araa, 50, 531 • Khan et al. (2012) Khan, F. M., Preto, M., Berczik, P., et al. 2012, \apj, 749, 147 • Khachikian & Weedman (1974) Khachikian, E. Y., & Weedman, D. W. 1974, \apj, 192, 581 • Klaas et al. 
(2001) Klaas, U., Haas, M., Müller, S. A. H., et al. 2001, \aap, 379, 823 • Kormendy & Ho (2013) Kormendy, J., & Ho, L. C. 2013, \araa, 51, 511 • Krumholz & McKee (2005) Krumholz, M. R., & McKee, C. F. 2005, \apj, 630, 250 • Kurtz et al. (2000) Kurtz, S., Cesaroni, R., Churchwell, E., Hofner, P., & Walmsley, C. M. 2000, Protostars and Planets IV, 299 • Kulier et al. (2015) Kulier, A., Ostriker, J. P., Natarajan, P., Lackner, C. N., & Cen, R. 2015, \apj, 799, 178 • Lada et al. (2012) Lada, C. J., Forbrich, J., Lombardi, M., & Alves, J. F. 2012, \apj, 745, 190 • Liu et al. (2016) Liu, J., Eracleous, M., & Halpern, J. P. 2016, \apj, 817, 42 • Lopez-Rodriguez et al. (2016) Lopez-Rodriguez, E., Packham, C., Roche, P. F., et al. 2016, \mnras, 458, 3851 • Mezcua et al. (2015) Mezcua, M., Prieto, M. A., Fernández-Ontiveros, J. A., et al. 2015, \mnras, 452, 4128 • Miyoshi et al. (1995) Miyoshi, M., Moran, J., Herrnstein, J., et al. 1995, \nat, 373, 127 • Mihos & Hernquist (1994) Mihos, C. J., & Hernquist, L. 1994, \apj, 425, L13 • Mulia et al. (2016) Mulia, A. J., Chandar, R., & Whitmore, B. C. 2016, arXiv:1607.03577 • Murayama & Taniguchi (1997) Murayama, T., & Taniguchi, Y. 1997, \pasj, 49, L13 • Papaloizou & Pringle (1984) Papaloizou, J. C. B., & Pringle, J. E. 1984, \mnras, 208, 721 • Planck Collaboration et al. (2011b) Planck Collaboration, Abergel, A., Ade, P. A. R., et al. 2011, \aap, 536, A25 • Preibisch et al. (1993) Preibisch, T., Ossenkopf, V., Yorke, H. W., & Henning, T. 1993, \aap, 279, 577 • Rees (1984) Rees, M. J. 1984, \araa, 22, 471 • Sanders & Mirabel (1996) Sanders, D. B., & Mirabel, I. F. 1996, \araa, 34, 749 • Seyfert (1943) Seyfert, C. K. 1943, \apj, 97, 28 • Shaya (1994) Shaya, E. J., et al. 1994, \aj, 107, 1675 • Shioya (2001) Shioya, Y., Taniguchi, Y., & Trentham, N. 2001, \mnras, 321, 11 • Schinnerer et al. (2000) Schinnerer, E., Eckart, A., Tacconi, L. J., Genzel, R., & Downes, D. 2000, \apj, 533, 850 • Simkin et al. (1980) Simkin, S. 
M., Su, H. J., & Schwarz, M. P. 1980, \apj, 237, 404 • Storchi-Bergmann et al. (1996) Storchi-Bergmann, T., et al. 1996, \apj, 472, 83 • Storchi-Bergmann et al. (2012) Storchi-Bergmann, T., et al. 2012, \apj, 755, 87 • Taniguchi & Wada (1996) Taniguchi, Y., & Wada, K. 1996, \apj, 469, 581 • Taniguchi (1999) Taniguchi, Y. 1999, \apj, 524, 65 • Taniguchi (2013) Taniguchi, Y. 2013, Galaxy Mergers in an Evolving Universe, 477, 265 • van der Tak et al. (2007) van der Tak, F. F. S., Black, J. H., Schöier, F. L., Jansen, D. J., & van Dishoeck, E. F. 2007, \aap, 468, 627 • Vale et al. (2012) Vale, T. B., Storchi-Bergmann, T., & Barbosa, F. K. B. 2012, AGN Winds in Charleston, 460, 164 • Wang et al. (2012) Wang, J., Fabbiano, G., Karovska, M., Elvis, M., & Risaliti, G. 2012, \apj, 756, 180 • Whitmore et al. (1993) Whitmore, B. C., et al. 1993, \aj, 106, 1354 • Wilson et al. (1991) Wilson, A. S., Helfer, T. T., Haniff, C. A., & Ward, M. J. 1991, \apj, 381, 79 • Zhou et al. (1993) Zhou, S., Evans, N. J., II, Koempe, C., & Walmsley, C. M. 1993, \apj, 404, 232
# Tag Info 8 To be somewhat explicit. One may perform the change of variable, $q=e^{-x}$, $dq=-e^{-x}dx$, giving $${\large\int}_0^\infty e^{-x}\prod_{n=1}^\infty\left(1-e^{-24\!\;n\!\;x}\right)dx={\large\int}_0^1 \prod_{n=1}^\infty\left(1-q^{24n}\right)dq\tag1$$ then use the identity (the Euler pentagonal number theorem) $$... 7 First, to sketch such a graph, you want to consider the distance from the origin as the angle from the x-axis changes. Just like when sketching the graph of a function in rectangular coordinates it is good to evaluate at particular values of x and see the height of the function, when sketching a curve in polar coordinates, evaluate the function at a ... 6 An alternative: Consider$$ J(a)=\int_0^{\pi/2} \cos^a(\phi)d\phi $$differentiating w.r.t a gives us$$ \partial_a J(a)\big|_{a=1}=-I $$But on the other hand J(a) is just a Wallis integral and therefore$$ I=-\frac{\sqrt{\pi }}{2}\partial_a\left( \frac{\Gamma \left(\frac{a+1}{2}\right)}{\Gamma \left(\frac{a}{2}+1\right)}\right)\big|_{a=1} $$... 6 Sketching this in polar coordinates is pretty straightforward. Draw your axes and you know that the radial value is equal to the angle, so your curve would start at the origin when \theta is zero, it would be \frac{\pi}{2} at 90 degrees, etc. As for calculating the area, you are correct that you integrate it, but I'm not sure what it means physically ... 5 A Riemann sum for$$\int_a^b f(x)\,dx$$using n subintervals is$$\frac{b-a}n\sum_{k=1}^n f(x_k)\ ,$$where x_k is a point in the kth subinterval. Compare this with the given sum$$\frac1{30}\sum_{k=1}^{60} e^{k/30}\ .$$By looking at the upper limit of summation, n=60. Then from the fraction at the front, b-a=2, and since all your answer options ... 
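The Riemann-sum identification above is truncated before the conclusion, but its form ($x_k = k/30$, $b-a=2$) suggests $a=0$, $b=2$, $f(x)=e^x$; treat those as assumptions in this quick numeric sketch:

```python
import math

# Right-endpoint Riemann sum (1/30) * sum_{k=1}^{60} e^(k/30):
# n = 60 subintervals of width 1/30, sample points x_k = k/30.
riemann_sum = (1 / 30) * sum(math.exp(k / 30) for k in range(1, 61))

# The integral it approximates, assuming f(x) = e^x on [0, 2]:
exact = math.exp(2) - 1  # ∫_0^2 e^x dx
```

With only 60 subintervals the sum sits about 0.1 above the integral, as expected for a right-endpoint sum of an increasing function.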
5$$\frac{\pi^2}{24}-\frac{2\pi}3+\frac1{36\sqrt{2}}\left[5\pi^2+12\left(4+\ln\left(\frac{1+\sqrt2}2\right)\right)\left(\pi-2\ln\left(1+\sqrt2\right)\right)-48\operatorname{Li}_2\left(\sqrt2-1\right)\right]$$5$$\int_{0}^{4}(\left \lceil x+1 \right \rceil)dx=\\ \int_{0}^{4}(\left \lceil x \right \rceil+1)dx=\\\int_{0}^{4}(\left \lceil x \right \rceil)dx+\int_{0}^{4}1dx=\\$$hence$$=\int_{0}^{4}(\left \lceil x \right \rceil)dx+\int_{0}^{4}1dx=\\ \int_{0}^{1}(\left \lceil x \right \rceil)dx+\int_{1}^{2}(\left \lceil x \right \rceil)dx+\int_{2}^{3}(\left \lceil x ... 4 Sorry, I first did not read that you only wanted a hint. I suggest using L'Hôpital's rule. 4 Hint: $$\lim_{n \to \infty} \frac1{n} \sum_{k=1}^{\color {red} n} f(k/n) = \int_0^1 f(x) dx$$ 4 I think you're falling into a trap many new students fall into: thinking of notation first, meaning second. You are writing things down and then asking what they mean. This is akin to writing a bunch of words and then asking "what does this sentence mean?". I'm not going to go through each of your 7 integrals to say which does or does not make sense, nor will I give you a ... 4 One may just apply the Jacobi triple product to the Dedekind eta function, then perform a termwise integration that leads to a multiple of $\zeta(2)$. 4 We have $$\int_{0}^{1}{\dfrac{1-x}{\log x}(x+x^{2}+x^{2^{2}}+x^{2^{3}}+\cdots)}\:dx=-\log 3. \tag1$$ Proof. One may recall that, using Frullani's integral, we have $$\int_{0}^{1}\frac{x^{a-1}-x^{b-1}}{\log x}\:dx=\log\frac ab \quad (a,b>0). \tag2$$ Considering a finite sum in the integrand, we get \begin{align} ... 4 If we substitute $x=\tan y$, we get: $$I = \int_{0}^{\pi/3}\frac{\arcsin(\sin(2y))}{\cos^2 y}\,dy = \int_{0}^{\pi/4}\frac{2t}{\cos^2 t}\,dt+\int_{\pi/4}^{\pi/3}\frac{\pi-2t}{\cos^2 t}\,dt$$ hence: $$I = \left(\frac{\pi}{2}-\log 2\right)+\left(\log 2+\frac{\pi}{\sqrt{3}}-\frac{\pi}{2}\right)=\color{red}{\frac{\pi}{\sqrt{3}}}$$ as wanted. 3 This is not an answer. This is just something to offer some ideas. 
Actually, this is more of a comment on steroids.$$\int_{0}^{\pi/2} \arctan(x)\cot(x) \text{d}x=I$$Now, what we do is rewrite the equation in terms of i. A nice way to do that is to do two things. We first substitute x=ia, then multiply and divide the argument of the integral by -i. (And ... 3 Using Mathematica I have arrived at the result$$I=\frac{1}{4} (-\text{Li}_3(-1-i)-\text{Li}_3(-1+i)+\text{Li}_3(1-i)+\text{Li}_3(1+i))+\frac{1}{4} \left(-\text{Li}_3\left(-\frac{1}{2}-\frac{i}{2}\right)-\text{Li}_3\left(-\frac{1}{2}+\frac{i}{2}\right)+\text{Li}_3\left(\frac{1}{2}-\frac{i}{2}\right)+\text{Li}_3\left(\frac{1}{2}+\frac{i ... 3 This is equivalent to American Mathematical Monthly Problem 11148, published in April 2005. Let $u=x-1$. Then rewrite the integral as $$\int_{0}^{\infty}{\frac{u^8-4u^6+9u^4-5u^2+1}{u^{12}-10u^{10}+37u^8-42u^6+26u^4-8u^2+1}}du.$$ The value of this integral is equal to the value of the original integral. The method to solve this equivalent integral can be ... 3 The reason is that the integral you evaluated doesn't give the area of the quadrilateral $R$. Indeed, calling $A$ its area, you have $$\int\int_R dx dy=A$$ and $$\int\int_R x dx dy\neq \int\int_R dx dy.$$ 2 You have, taking $u=-\frac{1}{t}$ (and so $u'=\frac{1}{t^2}$), $v=e^t$ and using the integration by parts formula $(\int u'v=[uv]-\int uv')$ that: $$F(x)=\int_1^x\frac{e^t}{t^2}dt=\Big[-\frac{e^t}{t}\Big]_1^x+\int_1^x\frac{e^t}{t}dt,$$ and so $$F(x)=e-\frac{e^x}{x}+G(x).$$ 2 Hint: Use the fundamental theorem of calculus and L'Hôpital's rule. 2 I understand what you're asking now. If you want to integrate a function $f\colon [0,\infty) \to [0,\infty)$, then you can write $\int_{0}^{\pi/2} \frac12 r^2(\theta) d\theta$. You're asking if $\int_{0}^{\infty} \frac12 r^2(\theta) d\theta=\infty$. Well unless $r=0$ then yes of course. You're adding up a positive number infinitely many times. You are ... 2 Draw the Graph and find the area under it. 2 Hint: Draw it out. 
It should look like stairs rising towards the right. From $0 < x <= 1$, $y = 2$. Extrapolate from this to draw out the function. Then, find the area under the graph. 2 One may recall that for any Riemann integrable function over $[a,b]$, as $n \to \infty$, one has $$\sum_{k=0}^n\frac{(b-a)}nf\left(a+\frac{k(b-a)}n \right) \to \int_a^bf(x)dx$$ Then you may apply it to $\displaystyle f(x)=\sqrt{1+2x}$, $a=0$, $b=1$ giving the limit $$\int_0 ^1\sqrt{1+2x}\:dx.$$ By the change of variable $1+2x \to u$ you also get $$... 2 The integrand has a closed-form antiderivative in terms of elementary functions and polylogarithms. It can be found using Mathematica after expressing inverse trig functions through logarithms of complex arguments, and can be manually checked for correctness using differentiation. After subtracting its limits at \infty and 0 and simplification, we can ... 2 What you did is OK, you get$$ \mathcal{L}(\cos^2(\omega t))=\frac{1}{2}\left(\frac{1}{s} + \mathcal{L}(\cos(2\omega t))\right)=\frac{1}{2 s}+\frac{s}{2 \left(s^2+4 w^2\right)} $$where we have used$$\mathcal{L}(\cos\omega t) = \frac{s}{s^2 + \omega^2}.$$2 Explaining David G. Stork's result,$$ \begin{aligned} I &=\int_{\sqrt a}^\infty e^{-t^2+\beta t} \sin(\beta t) \, dt \\ &= \Im \, \int_{\sqrt a}^\infty e^{-t^2+\beta t + i\beta t} \, dt \\ &= \Im \left[ e^{(\beta + i\beta)^2/4} \int_{\sqrt a - \beta(1+i)/2}^\infty e^{-u^2} \, du \right] \\ &= \Im \left[ \frac{\sqrt\pi}{2} \, e^{i\beta^2/2} ... 2 Morera's theorem is probably the simplest approach (for questions such as these -- Did's hint gives a quicker solution in this particular case). First show that $f$ is continuous. (I'll leave that to you.) Then, if $\gamma$ is any closed curve in $\mathbb{C}$, \begin{align} \int_{\gamma} f(z)\,dz &= \int_{\gamma} \left( \int_0^1 \frac{e^{tz}}{1+t^2}\,dt ... 
2 You are interested in the integral $$f(x) = \int_1^x y\sin\!\left(\frac{2\pi (y-1) x}{y}\right) dy,$$ and this is very close to the integral $$g(x) = \int_0^x y\sin\!\left(\frac{2\pi (y-1) x}{y}\right) dy,$$ which is (basically) expressible in closed form. To see this, first make the substitution $y = ux$ to get $$g(x) = x^2 \int_0^1 u ... 1 Let$$ \begin{align} I(\alpha,k) &= \int^\infty _{-\infty} \frac{e^{-i \alpha x}}{x^2 + k^2}\,\mathrm{d}x \tag{1} \\ &= \int^{\infty} _{-\infty} \frac{e^{i \alpha x}}{x^2 + k^2}\,\mathrm{d}x \tag{2} \\ &= \frac12\int^{\infty} _{-\infty} \frac{e^{i \alpha x} + e^{-i \alpha x}}{x^2 + k^2}\,\mathrm{d}x \tag{3} \\ &= \int^{\infty} _{-\infty} ... 1 The arbitrary constant has to be added whenever you are dealing with indefinite integrals. In other words, if you are told that $F$ is the antiderivative of a function $f$, meaning that $$F(x) = \int f(x) dx \qquad\mbox{ (integral without limits)},$$ then any other function $G(x)=F(x)+C$ will be an antiderivative of $f$ too. In fact, $$F'(x) = G'(x) = f(x).$$ ...
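Several closed forms quoted in this digest can be spot-checked numerically with nothing beyond the standard library. A sketch (composite midpoint rule, loose tolerances): the Frullani-type value with $a=1$, $b=2$ (note the integrand $(1-x)/\ln x$ is non-positive on $(0,1)$, so the value comes out as $\ln(a/b)=-\ln 2$), the tangent-substitution result $\pi/\sqrt3$, the Laplace transform of $\cos^2(\omega t)$, and the symmetric-interval integral $\int_{-\infty}^{\infty} e^{i\alpha x}/(x^2+k^2)\,dx$, whose answer above is truncated; its standard residue-theorem value $(\pi/k)e^{-k|\alpha|}$ is stated here as the well-known result, not taken from the truncated text.

```python
import math

def midpoint(f, a, b, n=200_000):
    """Composite midpoint rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# 1) Frullani-type: ∫_0^1 (1 - x)/ln x dx with a = 1, b = 2, i.e. -ln 2.
frullani = midpoint(lambda x: (1 - x) / math.log(x), 0.0, 1.0)

# 2) ∫_0^{π/3} arcsin(sin 2y)/cos²y dy = π/√3; math.asin(math.sin(t))
#    realises the 2y / (π - 2y) case split used in the answer automatically.
tan_sub = midpoint(lambda y: math.asin(math.sin(2 * y)) / math.cos(y) ** 2,
                   0.0, math.pi / 3)

# 3) Laplace transform of cos²(ωt) at s: 1/(2s) + s/(2(s² + 4ω²)).
#    Truncating at T = 60 leaves a tail below e^{-60} for s = 1.
s, w = 1.0, 2.0
laplace = midpoint(lambda t: math.exp(-s * t) * math.cos(w * t) ** 2, 0.0, 60.0)
laplace_closed = 1 / (2 * s) + s / (2 * (s ** 2 + 4 * w ** 2))

# 4) ∫_{-∞}^{∞} e^{iαx}/(x² + k²) dx = (π/k) e^{-k|α|} (standard residue result).
#    The integral is real and even, so integrate 2 cos(αx)/(x² + k²) on [0, R];
#    integration by parts bounds the discarded oscillatory tail by O(1/R²).
alpha, k = 1.0, 1.0
residue = 2 * midpoint(lambda x: math.cos(alpha * x) / (x * x + k * k),
                       0.0, 500.0, n=500_000)
residue_closed = (math.pi / k) * math.exp(-k * abs(alpha))  # π/e here
```

None of this proves the identities, but agreement to several decimal places is a useful regression check when transcribing results like these.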
# Give an alternative solution to Example 3 by letting $y = \sinh^{-1} x$ and then using Exercise 9 and Example 1(a) with $x$ replaced by $y.$ ## $y=\ln (x+\sqrt{1+x^{2}})$ Derivatives Differentiation ### Video Transcript If we let $y = \sinh^{-1} x$, then $\sinh y = x$. Therefore $\cosh y = \sqrt{1 + x^2}$ (taking the positive square root, since $\cosh y \geq 1$), and so $e^y = \sinh y + \cosh y = x + \sqrt{1 + x^2}$. Taking the natural logarithm of both sides gives $\sinh^{-1} x = \ln\left(x + \sqrt{1 + x^2}\right)$.
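The identity derived in the transcript can be spot-checked against the library inverse; a quick sketch:

```python
import math

def inv_sinh_log_form(x):
    # Right-hand side of the derived identity: ln(x + √(1 + x²)).
    return math.log(x + math.sqrt(1 + x * x))

# Compare with math.asinh at a few sample points. The identity holds for all
# real x; note x + √(1 + x²) > 0 even for negative x, so the log is defined.
samples = [-10.0, -1.0, -0.25, 0.0, 0.5, 3.0]
max_err = max(abs(math.asinh(x) - inv_sinh_log_form(x)) for x in samples)
```

For large negative $x$ the subtraction $x + \sqrt{1+x^2}$ loses a few digits to cancellation, which is why `math.asinh` is preferable in numerical work even though the two expressions agree analytically.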
Reviews Book Overdo$ed America: The Broken Promise of American Medicine BMJ 2004; 329 (Published 23 September 2004) Cite this as: BMJ 2004;329:746 1. Ray Moynihan (raymond.moynihan@verizon.net), visiting editor 1. BMJ With so much negativity towards medicine these days, many doctors must be contemplating ear-plugs and wondering why they are bothering. Thanks to Harvard University researcher and family doctor John Abramson, that negativity is about to be turned up a notch. His book is the latest in a series of searing indictments of a medical profession apparently duped by the false promise of technology, and too often compromised by cold hard cash from the companies selling the drugs and devices. Yet this book comes with a refreshing respect for the healing potential of the doctor-patient relationship, and a clear commitment to making the healthcare system more humane. The title speaks of the United States, but the themes are global. John Abramson, HarperCollins, $24.95/C$34.95, pp 352. ISBN 0 06 056852 6. www.harpercollins.com Rating: Much of the material about drug companies distorting science will be familiar to many readers, but there is a freshness here that carries great appeal. The author combines his personal journey towards increasing scepticism with a clear analysis of where the American health system is failing. The book's focus is …
• Rajesh Sharma Articles written in Journal of Earth System Science • Two- and three-dimensional gravity modeling along western continental margin and intraplate Narmada-Tapti rifts: Its relevance to Deccan flood basalt volcanism The western continental margin and the intraplate Narmada-Tapti rifts are primarily covered by Deccan flood basalts. Three-dimensional gravity modeling of +70 mGal Bouguer gravity highs extending in the north-south direction along the western continental margin rift indicates the presence of a subsurface high density, mafic-ultramafic type, elongated, roughly ellipsoidal body. It is approximately 12.0 ±1.2 km thick with its upper surface at an approximate depth of 6.0 ±0.6 km, and its average density is {dy2935} kg/m$^3$. Calculated dimension of the high density body in the upper crust is 300 ±30 km in length and 25 ±2.5 to 40 ±4 km in width. Three-dimensional gravity modeling of +10 mGal to -30 mGal Bouguer gravity highs along the intraplate Narmada-Tapti rift indicates the presence of eight small isolated high density mafic bodies with an average density of {dy2961} kg/m$^3$. These mafic bodies are convex upward and their top surface is estimated at an average depth of 6.5 ±0.6 km (between 6 and 8 km). These isolated mafic bodies have an average length of 23.8 ±2.4 km and width of 15.9 ±1.5 km. Estimated average thickness of these mafic bodies is 12.4 ±1.2 km. The difference in shape, length and width of these high density mafic bodies along the western continental margin and the intraplate Narmada-Tapti rifts suggests that the migration and concentration of high density magma in the upper lithosphere was much more dominant along the western continental margin rift. 
Based on the three-dimensional gravity modeling, it is conjectured that the emplacement of large, ellipsoidal high density mafic bodies along the western continental margin and small, isolated mafic bodies along the Narmada-Tapti rift are related to lineament-reactivation and subsequent rifting due to interaction of a hot mantle plume with the lithospheric weaknesses (lineaments) along the path of Indian plate motion over the Réunion hotspot. Mafic bodies formed in the upper lithosphere as magma chambers along the western continental margin and the intraplate Narmada-Tapti rifts at estimated depths between 6 and 8 km from the surface (consistent with geological, petrological and geochemical models) appear to be the major reservoirs for Deccan flood basalt volcanism at approximately 65 Ma. • Earthquake-induced soft sediment deformation (SSD) structures from the Bilara limestone formation, Marwar basin, India The Neoproterozoic Bilara limestone Formation of the Marwar Group, Rajasthan, India exposes metres-thick layers of soft sediment deformation (SSD) structures at different stratigraphic levels which can be traced over hundreds of metres on the outcrop scale. The SSD structures include disharmonic folds, low-angle thrusts, distorted laminae, fluidisation pipes, slump and load structures, homogeneities, diapirs, etc. Whereas SSD structures suggesting tensional stress, viz., intrastratal graben, fluidisation, slump, etc., dominate in the lower part of the Bilara succession, features implicating compression, viz., folds and low-angle thrusts, are prevalent in the uppermost part. Since the SSD structures are mostly confined within the algal laminites, we interpret that enhanced micritic fluid pressure below early cemented algal carbonate played a major role in laminae deformation. Depending on the degree of lithification and pore-water pressure, deformation features either formed plastically or, at enhanced pore-water pressure, led to diapiric injection. 
Separated by near-horizontal undeformed strata, the SSD layers, traceable over hundreds of metres, are interpreted as products of seismic shaking. Considering the time frame of the Marwar basin, i.e., the Precambrian–Cambrian transition, the SSD horizons present within the Bilara succession may hold the potential for correlation with SSD structures reported from time-correlative stratigraphic successions present in erstwhile adjoining tectonic terrains, e.g., China, Siberia, etc. • Carbonaceous material in Larji–Rampur window, Himachal Himalaya: Carbon isotope compositions, micro Raman spectroscopy and implications This work focuses on the natural graphitic carbonaceous material (GCM) distributed in metasedimentary and crystalline rocks in and around the Larji–Rampur tectonic window, Himachal Himalaya. The GCM, associated with the ore mineralization, is mostly flaky; however, it is also granular and amorphous. The micro Raman spectroscopy of representative samples confirms that the studied GCM is mostly disordered graphite and rarely poorly ordered graphite, but well crystalline ordered graphite is also present. The carbon isotope compositions, reflecting the source of carbon in the GCM at various locations, indicate that the carbon was mostly sedimentary organic carbon which has been metamorphosed to disordered graphite; however, the ${\delta}^{13}$C of the inorganic carbon contents in metabasalts from Bhallan signify the involvement of fluid possibly derived from the mantle. Limited ${\delta}^{13}$C$_{inorganic}$ data, in a range from 0 to –11‰, point to heavier carbon probably derived from diagenetic carbonates or dissolved organic matter. Overall, the carbon isotope compositions of GCM from the Larji–Rampur window reflect diversity in carbon sources and mixing of carbon reservoirs, which can adequately be explained by Proterozoic marine carbon cycling. 
A close linkage in the depositional processes of GCM with ore mineralization in the area is also invoked. $\bf{Highlights}$ $\bullet$ The graphitic carbonaceous material (GCM) is present in and around the Larji–Rampur tectonic window, Himachal Himalaya, at places associated with ore mineralization. $\bullet$ Micro Raman spectroscopy confirms that this GCM is mostly disordered graphite, though ordered graphite is also uncommonly present. $\bullet$ The ${\delta}^{13}$C values vary widely from –1.5‰ to –33.5‰. The ${\delta}^{13}$C compositions are heterogeneous, and complex carbon systematics is apparent. In addition to the predominant sedimentary organic carbon from Proterozoic marine carbon, it was also derived from a carbonate source, from carbon in fluids, and rarely, but possibly, from a mantle source. $\bullet$ A close linkage in the formation and evolution processes of the GCM with the ore mineralization is also invoked.
# Expected Cut-off for NEET 2020 - General and Reserved Category Candidates by Aparajita Das, September 17, 2020 The NEET 2020 result is most likely to be released by the NTA in the coming weeks. The NEET 2020 cut-off for candidates belonging to the general category and other reserved categories will be released along with the NEET 2020 results. Usually, the National Testing Agency (NTA) releases the NEET results nearly a month after the day of the examination. However, this year, due to the COVID-19 outbreak, the National Eligibility cum Entrance Test (NEET 2020) had to be rescheduled to September 13, 2020. So, NEET 2020 results are already delayed by a few months. Therefore, to cut down any further delay in resuming the new session, the NEET 2020 results are anticipated to be released very soon. The NEET cut-off is the minimum percentile and marks that candidates must score to qualify in this examination. Going by the NEET cut-off trends of previous years, the expected NEET 2020 cut-off for general category candidates is the 50th percentile. Also, for candidates belonging to the general EWS category, the expected NEET 2020 cut-off is the same as that for the general category, that is, the 50th percentile. The NEET cut-off percentiles for other reserved categories are lower than that for general category candidates. The expected NEET 2020 cut-off is the 45th percentile for candidates belonging to the PWD category, and the 40th percentile for candidates belonging to the SC, ST, and OBC categories. ### NEET Previous Years' Cut-off Analysis As per the analysis of previous years' NEET cut-off trends, it can be inferred that the cut-off marks for NEET 2020 are likely to be higher than those for NEET 2019. Even though the NEET cut-off percentiles for the various categories mostly remain the same, the marks equivalent to the percentiles vary almost every year. To a certain extent, the NEET cut-off marks depend on the total number of candidates appearing in the exam as well. 
For the record, the cut-off marks for the general category candidates was 119 in NEET 2018 results. Whereas, the NEET 2019 cut-off marks for the general category candidates was 134. Likewise, the cut-off marks for the candidates belonging to the reserved categories of SC, ST, and OBC were 96 in NEET 2018 and 107 in NEET 2019. It must be noted here that the NEET cut-off percentiles for the general and reserved categories were the same for both the years, 2018 and 2019. NEET 2020 was conducted last Sunday, September 13, and about 90 percent of the total 15.97 lacs registered candidates have appeared in the examination. When the National Testing Agency (NTA) will release the NEET 2020 results, all candidates will be able to check their results, along with the final NEET 2020 cut-off percentile and marks on its official website. The expected NEET 2020 cut-off percentiles and marks are determined based on the previous year NEET cut-off trends and may vary slightly from the final NEET 2020 cut-off percentiles. Candidates may take a cue from the NEET 2020 expected cut-off to plan on the courses and colleges for their admissions.
# 'Hands-On' no-screen tasks + Covid data?

### Using data to explore: "Quarantine everyone - or just the 'at risk' people and over 60s (over 50s?)?"

I trust the consensus of our world healthcare experts and our politicians. Between them, if they say quarantining whole countries is the best policy, then I accept that. What this leads me to ask is: how long for? This provides excellent data to introduce Probability and Statistics to answer "real" (non-contrived) application questions of critical importance. Get your students experiencing, live and first hand, the work of epidemiologists, medical advisers etc. I've found it hard/very hard to get data on: • Age distribution of those in a CRITICAL condition (requiring hospitalisation certainly, ventilation machines(?) - the definitions of 'critical' are not always clear, or the same, between sources) • Comparable data on the age distribution of the number of cases (statista.com is the only source I've found so far, but the age ranges used from one country to the next are not identical; this article comparing Italy and Korea is interesting: A Tale of Two Death Rates: Italy and Korea (I've not followed up their data sources to check . . )) • My father-in-law, Gilles, just sent me (Fri 20th March) this excellent report from Imperial College that contains AGE DISTRIBUTION DATA. Further reports from Imperial College (available here) also look well worth following (visit the websites of reputed universities in your home/host country; I imagine you'll find similar, high-quality, authoritative reports for the situation in your home/host country). Much of what we're hearing seems to suggest that Covid-19 is particularly dangerous for the over 60s and those with underlying health conditions. This led me to wonder if, after 15 days / 30 days / x days (?)
a feasible alternative could be to safeguard and protect our over 60s and those with underlying health conditions with a quarantine but, to reduce the economic consequences and resulting political/social issues this may cause (which will be very hard to calculate and pinpoint), allow the under 60s (under 50s? The cut-off would need to be data-driven) to return to 'normal' life, with the added precautions of hand-washing, everyone knowing the symptoms of Covid-19, and keeping 2m between ourselves as much as possible.

### Is there any evidence to support this course of action?

John P.A. Ioannidis is professor of medicine, of epidemiology and population health, of biomedical data science, and of statistics at Stanford University and co-director of Stanford's Meta-Research Innovation Center. I looked up age distribution graphs by: • number of cases • death rate • 'Severe' (I didn't manage to find this information, so used the below "% of deaths per total cases" age distribution data as a guideline for the proportion of the total number of "severe" conditions accounted for by each age group) Since I was unable to drag myself away from researching this data, and since it was a question that, I felt, only mathematics could really answer, I put together some graphs ready to investigate this question with my 14-16 year old classes later this week. I think I would start just by giving them the below graphs and the above article (which is long and uses very adult reasoning and language - so for many I'll just ask them to highlight, in a quick scan-read, any key points that stand out) and ask them: Q1) How would you use the below graphs to make a case, for or against, a quarantine only for people with a "health condition or over 60 (or 50?)" after 15 days or 30 days of general population quarantine?
AGE DISTRIBUTION of those infected with COVID-19 (South Korea); DISTRIBUTION of total Covid-19 cases that were/are 'mild', 'severe' and 'critical' (China)

Q2) What assumptions have we needed to make to be able to use these graphs all together? Some examples of necessary assumptions: • No data on the number of people in the under-50 category with "health conditions", so this has not been considered (and should 'lower' a little the calculated probabilities (see end of this page) for under-50 fatalities or 'severe' forms of Covid-19) • China & Korean data - we assume Korea's stats on deaths etc. will be similar to China's • No data on the age distribution of 'severe' cases, so we have assumed that the proportions of each age group with a 'severe' form of Covid follow the same proportions as the fatalities.

Q2 (b) Using the above graphs' figures, I came up with some very rough estimates (data, assumptions etc. etc.): under-50s' chance of dying from Covid-19: $$\approx$$ 0.350% $$\approx$$ 7 in 2000 $$\approx$$ 1 in 286, and a probability of being admitted to hospital for assistance overcoming the effects of/to be able to recover from Covid-19 of $$\approx$$ 1.31% $$\approx$$ 1 in 76. (i) What do you get? Show all your working out. Based on these figures, would you recommend such a policy?

Online learning going well - eyes getting tired, headaches maybe more frequent . . . starting to think about a "balanced diet" between online tasks and practical 'making tasks', or at least pen & paper (NO computer) work for mathematics, and some asynchronous elements (I mark the previous class's work during the 6th to 25th minute of the next class): • First 5 mins: I explain the task and remind them where the necessary resources are. • 20 mins of the lesson: students work on a pre-defined task (posted in Managebac, Google Classrooms/Docs etc., i.e. online!). I get on and mark/review work set to the class that just left. • 5 mins: after the 20 mins, students 'share' their work.
A number of ways of doing this: a photo of their paper working out, working out in the OneNote classroom, or all students showing their work using the 'video' functionality of Teams/Hangouts etc. • 5-10 mins: I go over, live (Teams, Hangouts, Zoom etc.), any common questions/common misunderstandings. • Next 20 mins: I set the task (taking 5 mins to explain it). Students work; I look through their work from the previous 20 mins. • Last 5 mins: students upload pictures etc. of their work, which I will look through and prepare feedback on for the next lesson, using 20 minutes of my next lesson (whilst the next class works on their practical/making activity). This activity, Prism Volumes for example, could extend over 1 or 2 lessons. From experience I recommend sand or sugar or rice etc. rather than water!!
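The rough odds conversions quoted in Q2(b) can be sanity-checked with a couple of lines of Python (a sketch only — the 0.35% and 1.31% inputs are this post's own rough estimates, not authoritative figures, and students can swap in their own):

```python
# Convert a percentage risk into "1 in N" odds, as used in Q2(b).
def one_in(percent):
    """Return N such that percent% is roughly a 1-in-N chance."""
    return round(100 / percent)

death_risk_pct = 0.35      # rough estimate: under-50s dying of Covid-19
hospital_risk_pct = 1.31   # rough estimate: under-50s needing hospital care

death_odds = one_in(death_risk_pct)        # 1 in ~286
hospital_odds = one_in(hospital_risk_pct)  # 1 in ~76
```

The same helper lets students test the "7 in 2000" phrasing: 7/2000 = 0.35%, so the three forms in Q2(b) are consistent.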
Kirchhoff's Laws, also called the "Laws of the Electric Network", were formulated in 1847 by the German physicist Gustav Robert Kirchhoff (1824-1887). They are used to determine the equivalent electrical resistance (or impedance) of a complex network and the currents flowing in its several branches.
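As a small illustration of how the two laws are used together (my own worked example, not taken from the text above): for a 12 V source feeding R1 = 2 Ω in series with the parallel pair R2 = 3 Ω and R3 = 6 Ω, KVL and KCL reduce the circuit to two linear equations in the branch currents, solved here by Cramer's rule:

```python
# Two-loop example: 12 V source, R1 = 2 ohm in series with the parallel
# pair R2 = 3 ohm and R3 = 6 ohm.  Unknowns: branch currents i2, i3 (amps).
V, R1, R2, R3 = 12.0, 2.0, 3.0, 6.0

# KVL around the loop through R2:  V = R1*(i2 + i3) + R2*i2
# Equal voltage across the parallel branches (KVL):  R2*i2 = R3*i3
# In matrix form a @ [i2, i3] = b:
a = [[R1 + R2, R1],
     [R2,     -R3]]
b = [V, 0.0]

# 2x2 Cramer's rule
det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
i2 = (b[0] * a[1][1] - a[0][1] * b[1]) / det
i3 = (a[0][0] * b[1] - b[0] * a[1][0]) / det
i1 = i2 + i3  # total current from the source, by KCL at the top node
```

A quick consistency check: V/i1 = 4 Ω, which matches the series-parallel reduction R1 + R2·R3/(R2 + R3) = 2 + 2 = 4 Ω.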
# Linear Velocity

#### dtippitt
##### New member

Can someone please check my work on this? The reflecting telescope is deployed in low earth orbit (600 km) with each orbit lasting about 95 min. Use the linear velocity formula to solve the problem. I did 300 * 95 min = 28500. Can someone check my work please? If someone could check it today, that would be great. Thanks.

#### MarkFL
Staff member

I would use: $$\displaystyle v=r\omega=\left(r_E+600\right)\frac{2\pi}{95}\,\frac{\text{km}}{\text{min}}$$ where $$r_E$$ is the radius of the Earth in km. Are you given a value for this that you are to use?

#### dtippitt
##### New member

The formula they gave me is v = r(radian symbol)/t. I think r stands for radius and t stands for time. I am not sure how to use this formula.

#### MarkFL
Staff member

Yes, that's the same formula I used. The angular velocity $$\omega$$ is $$2\pi$$ radians (one complete circle) per 95 minutes. The radius of the orbital path is 600 km more than the radius of the Earth.

#### dtippitt
##### New member

The only 2 numbers they give are 600 km and 95 min. Here is the problem again.
The reflecting telescope is deployed in low earth orbit (600 km) with each orbit lasting about 95 minutes. Linear velocity is calculated by the formula

#### MarkFL
Staff member

According to Google, the radius of the Earth is about 6,371 km. So, plug that into the formula I posted above... what do you get?

#### dtippitt
##### New member

I think the right answer is 28000 but I don't know how to get that.
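Evaluating MarkFL's formula numerically shows where the "28000" most likely comes from — it is the speed in km/h rather than km/min (a sketch assuming $r_E \approx 6371$ km, the value suggested in the thread):

```python
import math

r_earth = 6371.0           # mean Earth radius in km (approximate)
r_orbit = r_earth + 600.0  # orbital radius in km
period = 95.0              # orbital period in minutes

v_km_per_min = r_orbit * 2 * math.pi / period  # v = r * omega
v_km_per_hr = v_km_per_min * 60
# v_km_per_min is about 461 km/min; v_km_per_hr is about 27,700 km/h,
# which rounds to the expected "28000" (km per hour)
```

So the student's 300 * 95 attempt was multiplying unrelated quantities; the formula needs the full orbital radius and the angular velocity 2π/95 rad/min.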
## Intermediate Algebra (6th Edition) $x=\dfrac{\log4+5\log5}{3\log5}\\\\ x\approx1.9538$ Taking the logarithm of both sides and then using the properties of logarithms, the value of $x$ in the equation $5^{3x-5}=4$ is \begin{array}{l} \log5^{3x-5}=\log4\\\\ (3x-5)\log5=\log4\\\\ 3x\log5-5\log5=\log4\\\\ 3x\log5=\log4+5\log5\\\\ x=\dfrac{\log4+5\log5}{3\log5}\\\\ x\approx1.9538 .\end{array}
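The closed form can be verified numerically — substituting $x$ back into $5^{3x-5}$ should recover 4 (a quick check, not part of the textbook's solution):

```python
import math

# x = (log 4 + 5 log 5) / (3 log 5), from the worked solution above
x = (math.log(4) + 5 * math.log(5)) / (3 * math.log(5))

check = 5 ** (3 * x - 5)  # should recover the right-hand side, 4
# x is approximately 1.9538 and check is approximately 4.0
```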
# Rebuilding SANE (Canon Pixma MP730) for Ubuntu

I'm still seeking a new distro that takes less work than Gentoo but sits on the cutting edge and has reasonable package management. Kubuntu is potentially my next favourite distro and I'm currently running 9.04. I hate KDE4 but that's another story. Long live KDE3!

Unfortunately, the work we did in resolving the scanning problems for the Canon MP730 (pixma driver) didn't make it into SANE before 1.0.20 was released, so probably even Ubuntu Karmic won't have MP730 support until SANE tags a new release. This leaves me with a problem: how do I rebuild the SANE packages to get my scanner supported? Here are the steps to get the MP730 (and MP700) working in Ubuntu:

Update 29/8/09: Julien Blache, the maintainer of the Debian packages that Ubuntu builds on, has kindly incorporated the necessary patch starting with 1.0.20-6. You can save yourself a lot of trouble by simply using his packages. Look for a file named 'libsane_1.0.20-6_<arch>.deb' where <arch> is the architecture (amd64, i386 etc) of your machine, at http://ftp.debian.org/debian/pool/main/s/sane-backends/ then download and install it the same way as noted below. Jaunty users will likely get failed dependencies for libgphoto2 and libsane-extras but they can be safely ignored for the purposes of getting a working MP700/MP730 scanner. These packages do not require libusb to be installed.

$ mkdir sane
$ cd sane
$ wget https://launchpad.net/ubuntu/karmic/+source/sane-backends/1.0.20-4ubuntu2/+files/{sane-backends_1.0.20.orig.tar.gz,sane-backends_1.0.20-4ubuntu2.diff.gz,sane-backends_1.0.20-4ubuntu2.dsc}

Download Nicolas' patch to sanei/sanei_usb.c that is attached to this page.

$ wget http://waddles.org/sane-backends_1.0.20-fix_sanei_usb.patch_.txt

Download my debian/rules patch.
Note that this patch will only produce a backend library for the pixma driver and will also prevent documentation being built with LaTeX, which reduces the number of extra package dependencies needed for building. If you want all backends to be built, just remove the first hunk of the patch or remove the BACKENDS="pixma" prefix to configure in debian/rules.

$ wget http://waddles.org/sane-backends_1.0.20-pixma_only.patch_.txt

### Unpack

$ dpkg-source -x sane-backends_1.0.20-4ubuntu2.dsc
dpkg-source: extracting sane-backends in sane-backends-1.0.20
dpkg-source: info: unpacking sane-backends_1.0.20.orig.tar.gz
dpkg-source: info: applying sane-backends_1.0.20-4ubuntu2.diff.gz

### Patch

$ pushd sane-backends-1.0.20
$ patch -p1 --ignore-whitespace < ../sane-backends_1.0.20-fix_sanei_usb.patch.txt
patching file sanei/sanei_usb.c
$ patch -p1 --ignore-whitespace < ../sane-backends_1.0.20-pixma_only.patch.txt
patching file debian/rules

### Build

$ dpkg-buildpackage -rfakeroot -b
dpkg-buildpackage: set CFLAGS to default value: -g -O2
dpkg-buildpackage: set CPPFLAGS to default value:
dpkg-buildpackage: set LDFLAGS to default value: -Wl,-Bsymbolic-functions
dpkg-buildpackage: set FFLAGS to default value: -g -O2
dpkg-buildpackage: set CXXFLAGS to default value: -g -O2
dpkg-buildpackage: source package sane-backends
dpkg-buildpackage: source version 1.0.20-4ubuntu2
dpkg-buildpackage: source changed by Martin Pitt <[email protected]>
dpkg-buildpackage: host architecture amd64
dpkg-checkbuilddeps: Unmet build dependencies: libv4l-dev libgphoto2-2-dev libltdl3-dev libjpeg62-dev libtiff4-dev libusb-dev (>= 2:0.1.10a-9) libieee1284-3-dev (>= 0.2.10-5) libavahi-client-dev (>= 0.6.4) texlive-latex-extra autotools-dev chrpath
dpkg-buildpackage: warning: Build dependencies/conflicts unsatisfied; aborting.
dpkg-buildpackage: warning: (Use -d flag to override.)
If you have everything installed already, you should end up with .deb files in your parent directory at this point; however, they will probably still be broken. I had to build it with libusb to make it work properly, so you should continue anyway. The step above is really to work out what dependencies are needed to build the packages. I'm going to force an override to avoid most of them anyway.

#### Install build dependencies

These are the packages I really needed to install:

$ sudo apt-get install autotools-dev chrpath libusb-1.0-0-dev
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed: libusb-1.0-0
The following NEW packages will be installed: autotools-dev chrpath libusb-1.0-0 libusb-1.0-0-dev
0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
Need to get 0B/254kB of archives.
After this operation, 1364kB of additional disk space will be used.
Do you want to continue [Y/n]?
Selecting previously deselected package autotools-dev.
(Reading database ... 300410 files and directories currently installed.)
Unpacking autotools-dev (from .../autotools-dev_20080123.2_all.deb) ...
Selecting previously deselected package chrpath.
Unpacking chrpath (from .../chrpath_0.13-2_amd64.deb) ...
Selecting previously deselected package libusb-1.0-0.
Unpacking libusb-1.0-0 (from .../libusb-1.0-0_2%3a1.0.0-1_amd64.deb) ...
Selecting previously deselected package libusb-1.0-0-dev.
Unpacking libusb-1.0-0-dev (from .../libusb-1.0-0-dev_2%3a1.0.0-1_amd64.deb) ...
Processing triggers for man-db ...
Processing triggers for doc-base ...
Processing 1 added doc-base file(s)...
Registering documents with scrollkeeper...
Setting up autotools-dev (20080123.2) ...
Setting up chrpath (0.13-2) ...
Setting up libusb-1.0-0 (2:1.0.0-1) ...
Setting up libusb-1.0-0-dev (2:1.0.0-1) ...
Processing triggers for libc6 ...
ldconfig deferred processing now taking place

#### Build the .deb packages

$ dpkg-buildpackage -rfakeroot -b -d

Note the -d flag to override the missing dependencies. I have removed the output from this step as it is fairly pointless. You should now have a set of .deb packages in your parent directory.

### Install

Install at least the libsane .deb file, which contains the patched sanei/sanei_usb.c code. The others don't seem to be necessary, but you may choose to install all of them anyway.

$ sudo dpkg -i ../libsane_1.0.20-4ubuntu2_amd64.deb
(Reading database ... 300705 files and directories currently installed.)
Preparing to replace libsane 1.0.19-23ubuntu7 (using .../libsane_1.0.20-4ubuntu2_amd64.deb) ...
Unpacking replacement libsane ...
Setting up libsane (1.0.20-4ubuntu2) ...
Installing new version of config file /etc/sane.d/dll.conf ...
Processing triggers for man-db ...
Processing triggers for hal ...
Regenerating hal fdi cache ...
* Restarting Hardware abstraction layer hald [ OK ]
Processing triggers for libc6 ...
ldconfig deferred processing now taking place

### Scan

I did have a permissions issue as a normal user when scanning from Gimp which magically resolved itself, but now scanning works fine using scanimage, xsane or xscanimage, with or without the Gimp.

$ sane-find-scanner
# sane-find-scanner will now attempt to detect your scanner. If the
# result is different from what you expected, first make sure your
# scanner is powered up and properly connected to your computer.
# No SCSI scanners found. If you expected something different, make sure that
found USB scanner (vendor=0x04a9 [Canon Inc.], product=0x262f [MP730]) at libusb:003:002
# Your USB scanner was (probably) detected. It may or may not be supported by
# SANE. Try scanimage -L and read the backend's manpage.
# Not checking for parallel port scanners.
# Most Scanners connected to the parallel port or other proprietary ports
# can't be detected by this program.
# You may want to run this program as root to find all devices. Once you
# found the scanner devices, be sure to adjust access permissions as
# necessary.

$ scanimage -L
libusb couldn't open USB device /dev/bus/usb/001/001: Permission denied.
libusb requires write access to USB device nodes.
libusb couldn't open USB device /dev/bus/usb/002/001: Permission denied.
libusb requires write access to USB device nodes.
libusb couldn't open USB device /dev/bus/usb/003/001: Permission denied.
libusb requires write access to USB device nodes.
libusb couldn't open USB device /dev/bus/usb/004/001: Permission denied.
libusb requires write access to USB device nodes.
libusb couldn't open USB device /dev/bus/usb/001/003: Permission denied.
libusb requires write access to USB device nodes.
libusb couldn't open USB device /dev/bus/usb/004/002: Permission denied.
libusb requires write access to USB device nodes.
libusb couldn't open USB device /dev/bus/usb/001/004: Permission denied.
libusb requires write access to USB device nodes.
libusb couldn't open USB device /dev/bus/usb/001/005: Permission denied.
libusb requires write access to USB device nodes.
device `pixma:04A9262F_00000000F972' is a CANON Canon MultiPASS MP730 multi-function peripheral

$ scanimage -d pixma:04A9262F_00000000F972 --resolution 150 --mode Color --format pnm -x 10 -y 20 > /tmp/foo2.pnm
libusb couldn't open USB device /dev/bus/usb/001/001: Permission denied.
libusb couldn't open USB device /dev/bus/usb/002/001: Permission denied.
libusb couldn't open USB device /dev/bus/usb/003/001: Permission denied.
libusb couldn't open USB device /dev/bus/usb/004/001: Permission denied.
# Math Help - Whats a BASE??? 1. ## Whats a BASE??? i get powers, but whats a base? 2. In $a^b$, $a$ is the base and $b$ is the power. 3. hmmm 4. Originally Posted by Graffitixgirl hmmm make no sense =\ 26 = ?a .... LOL 5. nonono ? *9* not a... lolz 6. Doesn't matter what it is, because all your base are belong to us. (sorry... couldn't help myself) 7. Originally Posted by Rebesques Doesn't matter what it is, because all your base are belong to us. (sorry... couldn't help myself) LOL Originally Posted by Graffitixgirl nonono ? *9* not a... lolz so 26 = ?^9??? if so ... to get rid of the nine ... you take power of 1/9 both sides... (26)^(1/9) = ?^(9)(1/9) 26^(1/9) = ? ^ 1 26^(1/9) = ? ... calculator says ... 1.43621 BUTTT if you question is 2^6 = ?^9 do the same thing... 2^6(1/9) = ?^9(1/9) 2^2/3 = ? 2^2/3 is the same as ... $ \sqrt[3]{2^2} $ hope this help but i feel like it didnt :P 8. All your base are belong to us. Sorry, I couldn't help myself! EDIT: Rebs beat me to it! 9. ## Base Base, as stated above, is b in the b^n form. This also relates to base when referring to the base of a positional numeral system. Standard numbers are base 10 where each position is represented as 10^n where n is its position. Example: 236 = 2*10^2 + 3*10^1 + 6*10^0 For a binary number the base is two. 1010 = 1*2^3 + 1*2^1 (this number is the same as 10 in base 10)
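The positional-notation expansion in the last post maps directly onto Python's built-in base handling (a small illustration of the same idea, using my own example values):

```python
# 236 in base 10: each digit is weighted by a power of the base
assert 2 * 10**2 + 3 * 10**1 + 6 * 10**0 == 236

# 1010 in base 2: 1*2^3 + 0*2^2 + 1*2^1 + 0*2^0 = 10 in base 10
binary = int("1010", 2)  # parse the string as a base-2 numeral

# and the vocabulary from post 2: in a**b, a is the base, b the power
power = 2 ** 6           # base 2 raised to the exponent 6
```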
# The order of terms matters even when they commute

When writing a term that consists of several factors, the conventions regarding their order appear arbitrary. It is usual to write: • $xy$ and $yx$ in either order; • $5t$ but not $t5$ (to avoid confusion with $t_5$ or $t^5$); • $x\sqrt{2}$ but not $\sqrt{2}x$ (to avoid confusion with $\sqrt{2x}$); • $\sqrt{2}\sin x$ but not $\sin x \sqrt{2}$ (to avoid confusion with $\sin \left(\sqrt{2}x\right)$).
# Sequences and series: exercise sheet 1

Exercise. (a) Find the set of all $x ∈ ℝ ∖ \{-3, -5\}$ such that $\frac{x+1}{x+3} < \frac{x+3}{x+5}$. (b) Find the set of all $x ∈ ℝ ∖ \{-3, -4\}$ such that $\frac{x+1}{x+3} < \frac{x+3}{x+4}$. (c) Find the set of all $x ∈ ℝ ∖ \{-3, -2\}$ such that $\frac{x+1}{x+3} < \frac{x+3}{x+2}$. Give your answers as an interval or a union of disjoint intervals.

Exercise. (a) Find the set of all $x ∈ ℝ$ such that $|x| > x^2 - 1$. (b) Find the set of all $x ∈ ℝ$ such that $3|x| > x^2 + 1$.

Exercise. Consider the sequence defined by $a_n = \frac{\sqrt{n}}{\sqrt{n+1}}$. Prove that $a_n → 1$ as follows. (a) Find as simple an expression as you can for $|a_n - 1|$. Make sure your expression does not involve absolute value signs. (b) Supposing $ε > 0$ is given, use your answer to (a) to find the set of $n ∈ ℕ$ such that $0 ⩽ |a_n - 1| < ε$, writing the answer as $A ∪ \{n ∈ ℕ : n ⩾ F(ε)\}$, where $A$ is a finite set of natural numbers and $F(ε)$ is an expression involving $ε$ only. (Hint: By conjugating surds, show that $|a_n - 1| < \frac{1}{2n}$ for all $n$. You don't have to say what your set $A$ is. Why is it finite? Be sure to use $⩾$ not $>$. If you have $\{n ∈ ℕ : n > F(ε)\}$ you can replace it with $\{n ∈ ℕ : n ⩾ F(ε) + 1\}$ if you change the set $A$.) (c) Write down a proof that $a_n → 1$ starting with "Let $ε > 0$ be arbitrary" and "Let $N = ⌈F(ε)⌉$".

Exercise. Consider the statement X, which is [not reproduced in this copy], and the sequence defined by $a_n = (-1)^n + \frac{1}{n}$. Prove that X holds for this sequence for both $l = 1$ and $l = -1$. All assertions you make in your answers MUST be supported by proofs.
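Assuming the sequence in the third exercise is $a_n = \sqrt{n}/\sqrt{n+1}$ (the reading suggested by the surd-conjugation hint), the bound $|a_n - 1| < \frac{1}{2n}$ permits the choice $F(\varepsilon) = \frac{1}{2\varepsilon}$, and $N = \lceil F(\varepsilon) \rceil$ can be spot-checked numerically:

```python
import math

def a(n):
    """The sequence a_n = sqrt(n) / sqrt(n + 1), as read from the hint."""
    return math.sqrt(n) / math.sqrt(n + 1)

eps = 0.001
N = math.ceil(1 / (2 * eps))  # N = ceil(F(eps)) with F(eps) = 1/(2*eps)

# every n >= N should satisfy |a_n - 1| < eps; spot-check a range of n
ok = all(abs(a(n) - 1) < eps for n in range(N, N + 1000))
```

This is only a numerical illustration of part (b)'s conclusion; the exercise of course asks for the surd-conjugation proof that $|a_n - 1| < \frac{1}{2n}$.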
Higgs boson searches in gluon fusion and vector boson fusion using the H $\to$ WW decay mode

KTH, School of Engineering Sciences (SCI), Physics, Particle and Astroparticle Physics. ORCID iD: 0000-0002-8913-0981

2009 (English) Report (Refereed)

##### Abstract [en]
The prospects for Higgs searches in the H + 0j (H to WW to e-nu mu-nu), H + 2j (H to WW to e-nu mu-nu), and H + 2j (H to WW to l-nu qq) channels at ATLAS are presented, including realistic effects such as trigger efficiencies and detector misalignment, with an emphasis on practical methods to estimate the backgrounds using control samples in data. With 10 fb$^{-1}$ of integrated luminosity, one would expect to be able to discover a Standard Model Higgs boson in the mass range 135 < m_H < 190 GeV and to be able to measure its mass with a precision of about 7 GeV if its true mass is 130 GeV or about 2 GeV if its true mass is 160 GeV.

2009. 28 p.

##### Series
ATLAS Note, ATL-PHYS-PUB-2009-056
##### National Category
Subatomic Physics
##### Identifiers
OAI: oai:DiVA.org:kth-84534 DiVA: diva2:499510
##### Note
QC 20120427. Available from: 2012-02-13 Created: 2012-02-13 Last updated: 2012-04-27. Bibliographically approved

#### Open Access in DiVA
No full text. http://cdsweb.cern.ch/record/1174270/files/ATL-PHYS-PUB-2009-056.pdf

#### Search in DiVA
##### By author/editor
Strandberg, Jonas
##### By organisation
Particle and Astroparticle Physics
##### On the subject
Subatomic Physics
1. How many solutions exist for the triangle where a=3.8, b=14.4, and A=157 degrees? 2. Use the given measurements to solve triangle ABC: a=12, b=20, c=14. Answer: A= 36.2 degrees, B=100.3 degrees, and C=43.5 degrees. 3. Find the value of x in the triangle with an area of 81.7 square units, a=17.5, and b=c=x. Answer: I'm having a lot of trouble with this problem. If b=c then angle B also = angle C. So I used the equation to find the semi-perimeter: (a+b+c)/2. Then i simplified that to be S=8.75+x. Then I plugged that value for S into Heron's area formula: square root [S(S-a)(S-b)(S-c)]. I get a very odd decimal for x that must be wrong. HELP!!! 4. Find c if A=23 degrees, a=9, and b=12. Answer: I found that this is an example of the ambiguous case and that there will be 2 triangles. So I eventually got that c1=18.73 and c2=3.36. 5. A ship travels due west for 83 miles. It then travels in a northern direction for 56 miles and ends up 115 miles from its original position. How many degrees did it turn when it changed direction? 6. Find the area of triangle ABC if A=22 degrees, a=7.7, and b=7.7 Answer:I found this to be another example of the ambiguous case (SSA). There will be two triangles and, therefore, two areas. I found the area of triangle 1 to be 20.59 square units. However, I am at a loss as to how to find the area of triangle 2. HELP!!! 7. A plane travels 110 miles at a heading of North 40 degrees West (N40W). It then changes direction and travels 105 miles at a heading of N 66 degrees West (N66W). How far is the plane from its original position? Answer: What I don't understand about problems involving bearing is how to find the angle within the triangle. If the bearing is N40W, do I subtract that angle from 90 to get 50? Or do I subtract from 180? Same thing with N66W. If I could understand how to draw the diagram, then I would be able to solve the problem. HELP!!! 8. 
If the area of a triangle is 8.70 square units and A=56 degrees and b=7, find the length of c to the nearest integer. Answer: Using the equation Area = 1/2 bc sinA, I found c to be 3. 9. Find the third side of the triangle if A=55 degrees, a=9.4, and b=9.4. Answer: If a=b then A must also = B. Then C must = 70 degrees. Then, using the law of sines, I found c to be 10.78. 10. Two coast guard stations located 75 miles apart on a north to south line each receive a radio signal from a ship at sea. From the northernmost station, the ship's bearing is South 73 degrees East (S73E). From the other station, the ship's bearing is North 23.4 degrees East (N23.4E). How far is the ship from the southernmost station? Answer: Once again, this problem deals with bearings and I am at a loss. If I could just get the diagram drawn correctly, I think I could figure out the rest. HELP!!! 11. Find the third side of the triangle with A=34 degrees, b=5, and c=8. Answer: Using the law of cosines, I found that a=4.76. 12. An airplane left an airport and flew east for 89 miles. Then it turned northward to North 17 degrees East (N17E). When it was 161 miles from the airport, there was an engine problem and it turned to take the shortest route back to the airport. Find the angle through which the airplane turned. Answer: Again, this problem deals with bearing. I really need to understand how to draw the diagram. HELP!!! 13. Find the length of the diagonal of the parallelogram with a base of 9.2 units. The right side of the parallelogram is 7.6 units. The entire bottom left angle is 65 degrees. Answer: So the diagonal is splitting the parallelogram into 2 congruent triangles, right? So, I believe that means it is bisecting the angle that is 65 degrees and that each triangle is actually comprised of an angle of 32.5 degrees. You really only need to solve one triangle, though. The thing is, from what I can tell, that triangle is an example of the ambiguous case.
So, to determine the number of triangles, I used the formula h=bsinA. I found h to be 4.94. Then, that means h<a<b, so there are 2 triangles??? I am really confused. So will there be 2 values for the diagonal? HELP!!! I really need help with the problems involving bearing and the ambiguous case!!! 2. Originally Posted by iheartthemusic29 (problems 1-3 above) #1 and #2 are OK. To #3: Since b = c the triangle is isosceles with base a. Calculate the height of the triangle perpendicular to the base: $A = \frac12 \cdot a \cdot h~\implies~81.7 = \frac12 \cdot 17.5 \cdot h~\implies ~h = \frac{1634}{175} \approx 9.337...$ The triangle is divided into 2 right triangles by the height. Therefore $\left(\frac12 \cdot a\right)^2+h^2 = c^2~\implies~ c \approx 12.796...$ 3. Originally Posted by iheartthemusic29 ... 13. Find the length of the diagonal of the parallelogram (normally there are 2 different diagonals!) with a base of 9.2 units. The right side of the parallelogram is 7.6 units. The entire bottom left angle is 65 degrees. Answer: So the diagonal is splitting the parallelogram into 2 congruent triangles, right? right So, I believe that means it is bisecting the angle that is 65 degrees and that each triangle is actually comprised of an angle of 32.5 degrees. (That is a heresy and you know what will happen to you...) HELP!!!
The angles at the base are $\alpha = 65^\circ$ and $\beta = 180^\circ - \alpha = 115^\circ$ Now use Cosine rule twice: $d_1 = \sqrt{9.2^2 + 7.6^2 - 2 \cdot 9.2 \cdot 7.6 \cdot \cos(65^\circ)} \approx 9.1269...$ $d_2 = \sqrt{9.2^2 + 7.6^2 - 2 \cdot 9.2 \cdot 7.6 \cdot \cos(115^\circ)} \approx 14.195...$ 4. Originally Posted by iheartthemusic29 ... 4. Find c if A=23 degrees, a=9, and b=12. Answer: I found that this is an example of the ambiguous case and that there will be 2 triangles. So I eventually got that c1=18.73 and c2=3.36. 5. A ship travels due west for 83 miles. It then travels in a northern direction for 56 miles and ends up 115 miles from its original position. How many degrees did it turn when it changed direction? 6. Find the area of triangle ABC if A=22 degrees, a=7.7, and b=7.7 Answer:I found this to be another example of the ambiguous case (SSA). There will be two triangles and, therefore, two areas. I found the area of triangle 1 to be 20.59 square units. However, I am at a loss as to how to find the area of triangle 2. HELP!!! ... #4 and #5 are OK. to #6: The triangle ABC is an isosceles triangle. Since a = b the angles A and B must be equal too. Therefore the angle C = 136°. Now use the formula to calculate the area F (I have to use another letter than A or a): $F = \frac12 \cdot a \cdot b \cdot \sin(C)$ . Plug in the values of a, b and C. You'll get $F \approx 20.593.....$ . So your answer is OK. But there isn't a second different triangle.
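The two diagonal lengths given by the cosine rule in the reply above can be reproduced numerically (a quick check of the figures, not part of the original thread):

```python
import math

def cosine_rule(b, c, angle_deg):
    """Side opposite the given angle, via a^2 = b^2 + c^2 - 2bc*cos(A)."""
    A = math.radians(angle_deg)
    return math.sqrt(b * b + c * c - 2 * b * c * math.cos(A))

# Parallelogram with sides 9.2 and 7.6 and base angles 65 and 115 degrees:
d1 = cosine_rule(9.2, 7.6, 65)   # short diagonal, opposite the 65 degree angle
d2 = cosine_rule(9.2, 7.6, 115)  # long diagonal, opposite the 115 degree angle
# d1 is about 9.127 and d2 about 14.195, matching the worked answer
```

This also settles the ambiguous-case worry in the question: with two sides and the included angle (SAS), the cosine rule gives exactly one triangle per diagonal, so there is one value for each diagonal, not two.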
{}
bdeGetEditorSelectedStartLine Gets the line where the selected text of a given editor starts. Syntax selectedStartLine = bdeGetEditorSelectedStartLine(editor) Inputs editor The editor that the selected text is in. Type: hwscpTextEditor Outputs selectedStartLine The line number that the selected text of an editor starts on. Type: integer Examples Get the line number that the selected text of an editor starts on: selectedStartLine = bdeGetEditorSelectedStartLine(editor); selectedStartLine = 2
{}
# What is the difference between radical and exponent? Is radical a type of exponent? What do we call the power when it is a complex number? A radical $x^\frac{1}{n}$ (the $n$th root of $x$) is the special case of an exponent $x^y$ with $y = \frac{1}{n}$, $n\in\mathbb{Z}$; in general the exponent can be any $y\in\mathbb{R}$. You can still use the terms "exponent" and "power" when $x\in\mathbb{C}$, but radicals of complex numbers are multi-valued, so they are only defined up to a choice of branch. $$e ^ {\pm \sqrt{ x^2- a x + b \sin \omega t} }$$
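To illustrate the point in code (Python, with made-up example values): an $n$th root is just the exponent $1/n$, and complex powers are defined through exp and log, which is where the branch ambiguity comes from.

```python
import cmath

# A radical is a special case of an exponent: the n-th root is x**(1/n).
print(8 ** (1 / 3))   # 2.0 (up to floating-point rounding)

# For complex numbers, x**y is exp(y * log x); log is multi-valued over C,
# so a branch must be chosen (cmath uses the principal branch).
z = cmath.exp(1j * cmath.log(2))   # one value of 2**i
print(z, abs(z))                   # lies on the unit circle, |2**i| == 1
```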
{}
## Does any one know about Instagram DMs marketing?

We are going to promote a brand through social media and we found some strategies. Influencer marketing is one of them, but I was wondering if I could use DMs to promote a brand. Is there anyone who has seen these things? Or any ideas or experiences? Thank you in advance!

## How to know which app is bad

I use Ubuntu Studio 18.04.3, with the additional backports PPA to get LTS, in three desktop PCs and one laptop. For about a week now, each time I turn on one of my desktop PCs, once the initialization process has finished (after the desktop is fully shown), I get an error message: There was a problem with one of the Ubuntu packages. Do you want to send a report about this? [Yes] [No] But… I wonder, how can I know which package it is and what kind of error it is? Where can I find that kind of information? BTW: I've sent the requested report each time I've got the message, but… until now, I don't have any kind of message from the Ubuntu team about that.

I am a necromancer. My undead are created using animate dead. As I can only control 4HD per CL, when I go over this limit I must release them, but I get to choose which ones are released. This implies some kind of awareness of the undead you control when you are the creator, whereas the feat command undead doesn't say how it works in that regard (whether you lose control of the oldest, choose which to release, or are simply unable to use it when full). So, as you are able to choose which undead you release, would you recognize when one of them that you created and still control is destroyed?

## My updates aren't working and this is the error code in the terminal:

I'm new to Ubuntu and don't know how to fix this myself E: The repository ‘cdrom://Ubuntu 18.04.3 LTS Bionic Beaver – Release amd64 (20190805) bionic Release’ does not have a Release file. N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details. W: GPG error: http://www.deb-multimedia.org buster InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 5C808C2B65558117 E: The repository ‘http://www.deb-multimedia.org buster InRelease’ is not signed. N: Updating from such a repository can't be done securely, and is therefore disabled by default. N: See apt-secure(8) manpage for repository creation and user configuration details.

## I want to know where there is the flaw in my argument

I came across the following problem: determine whether the following language is decidable, semi-decidable, or not even semi-decidable. $$L: \{\langle M\rangle: M\space is\space a\space TM\space and\space |L(M)| \ge3\}$$ Thinking intuitively, I conjectured that this language is semi-decidable. We can say yes when the input does belong to $$L$$, but we cannot say no when the input does not belong to $$L$$. Now, I formulated the following reduction from the complement of the halting problem $$\overline{HP}$$, which is not semi-decidable (non-$$RE$$). $$\overline{HP}: \{\langle M, w\rangle : M\space is\space TM\space and\space it\space does\space not\space halt\space on\space string\space w.\}$$ $$\tau(\langle M,x\rangle) = \langle M’\rangle$$. $$M’$$ on input $$w$$ works as follows. It erases $$w$$, puts $$M$$ and $$x$$ on its tape, runs $$M$$ on $$x$$, and accepts if $$M$$ doesn't halt on $$x$$. Otherwise it rejects.
Proof of validity of reduction: $$\langle M,x\rangle \in \overline{HP} \implies M\space does\space not\space halt\space on\space x \implies M’\space accepts\space all\space inputs\space \implies|L(M’)| \ge 3\implies M’ \in L$$ $$\langle M,x\rangle \notin \overline{HP} \implies M\space does\space halt\space on\space x \implies M’\space rejects\space all\space inputs\space \implies|L(M’)| < 3\implies M’ \notin L$$ According to the above reduction, $$\overline{HP}$$ should be recursively enumerable $$(RE)$$, which it is not. So $$L$$ should not be $$RE$$, but it indeed is $$RE$$. So my reduction must be flawed. Please point out where I messed up.

## how to know the version, configuration files, open ports and task purpose

Debian-based OS.

## Do you know that you can use a VPN for marketing?

I'm using a VPN for my business's marketing and it's an amazing thing; it helps me so much. So I can check what's trending in particular regions; check how my business looks from a different country; access geo-restricted apps; explore region-specific content. I found this info here: https://medium.com/@hectorvanderlen/vpn-as-a-digital-marketing-tool-a936a9f6f93c?sk=404a2e9e3547520eafcbb89d7fb5b438 , and it's been life-changing for me.

## Don't Know Version Of Ubuntu To Use On Vista32

Don't know what version of Ubuntu to download for a Windows Vista 32-bit desktop computer. I previously downloaded version 16.04.5; however, I don't see it on the list of alternative Ubuntu versions now.

## How would you know if your device is connected to a switch?

Without using a switch port mapping tool, on a Linux machine, how would you know if a repeater/router you are connected to is connected to a switch?

## How am I supposed to know what abilities familiars have naturally?
This seems to be a weird issue: the method of creating familiars in Pathfinder 2e seems freeform at first, but when you get to the specifics of it, it seems you must know the exact stats of creatures that the game doesn't provide. Below is the entry from the core rulebook that is causing the issue. PF2E Core Rulebook pg. 218: Each day, you channel your magic into two abilities, which can be either familiar or master abilities. If your familiar is an animal that naturally has one of these abilities (for instance, an owl has a fly Speed), you must select that ability. Your familiar can't be an animal that naturally has more familiar abilities than your daily maximum familiar abilities. Even the example given of the owl doesn't appear in either the core rulebook or bestiary (as far as I can tell). So while I can naturally infer that all flying creatures will have a fly speed, I can't infer what stats everything else would have naturally. Do owls also have darkvision? Would cats have a climb speed? etc.
{}
# C++ Program to find out the maximum amount of money that can be made from selling cars

Suppose, there is a demand for red and blue cars for sale. An automobile company decides to sell p red cars and q blue cars of different prices. Currently, the company has 'a' red cars, 'b' blue cars, and 'c' colorless cars (cars that are yet to be painted) in their stock. The values of the different cars are given in arrays A, B, and C. The company has to sell p + q cars in a day and they have to make the maximum profit from them. The colorless cars can be painted in any color, red or blue. We find out the maximum amount of profit that can be achieved from selling the cars.

So, if the input is like p = 3, q = 3, a = 3, b = 3, c = 2, A = {150000, 200000, 200000}, B = {150000, 120000, 180000}, C = {210000, 160000, 150000}, then the output will be 1100000. The company can sell the red cars of value 200000 and 200000 and paint the colorless car of value 210000 red; the total value obtained from selling red cars will be 610000. They can also sell the blue cars of value 180000 and 150000 and paint the colorless car of value 160000 blue, to gain a total of 490000. The total profit value obtained will be 610000 + 490000 = 1100000.
To solve this, we will follow these steps −

Define an array dp containing a single 0
sort the arrays A, B, and C in descending order
for initialize i := 0, when i < p, update (increase i by 1), do:
insert A[i] at the end of dp
for initialize i := 0, when i < q, update (increase i by 1), do:
insert B[i] at the end of dp
sort dp in ascending order, then reverse dp from the second element onward (keeping the leading 0 in place)
for initialize i := 1, when i < size of dp, update (increase i by 1), do:
dp[i] := dp[i] + dp[i - 1]
tmp := 0
res := last element of dp
for initialize i := 1, when i <= minimum of c and p + q, update (increase i by 1), do:
tmp := tmp + C[i - 1]
res := maximum of (res and dp[p + q - i] + tmp)
return res

## Example

Let us see the following implementation to get better understanding −

#include <bits/stdc++.h>
using namespace std;

int solve(int p, int q, int a, int b, int c, vector<int> A, vector<int> B, vector<int> C){
   vector<int> dp(1, 0);
   // sort all value arrays in descending order
   sort(A.rbegin(), A.rend());
   sort(B.rbegin(), B.rend());
   sort(C.rbegin(), C.rend());
   for(int i = 0; i < p; ++i) dp.push_back(A[i]);
   for(int i = 0; i < q; ++i) dp.push_back(B[i]);
   sort(dp.begin(), dp.end());
   reverse(dp.begin() + 1, dp.end());
   // prefix sums: dp[i] = total value of the i most valuable cars sold
   for(int i = 1; i < (int)dp.size(); ++i) dp[i] += dp[i - 1];
   int tmp = 0;
   int res = dp.back();
   // try replacing the i cheapest sold cars with the i best colorless cars
   for(int i = 1; i <= min(c, p + q); ++i) {
      tmp += C[i - 1];
      res = max(res, dp[p + q - i] + tmp);
   }
   return res;
}

int main() {
   int p = 3, q = 3, a = 3, b = 3, c = 2;
   vector<int> A = {150000, 200000, 200000}, B = {150000, 120000, 180000}, C = {210000, 160000, 150000};
   cout << solve(p, q, a, b, c, A, B, C);
   return 0;
}

## Input

3, 3, 3, 3, 2, {150000, 200000, 200000}, {150000, 120000, 180000}, {210000, 160000, 150000}

## Output

1100000
{}
+0 # This one im struggling with? 0 142 2

Aldrich Ames is a convicted traitor who leaked American secrets to a foreign power. Yet Ames took routine lie detector tests and each time passed them. How can this be done? Recognising control questions, employing unusual breathing patterns, biting one's tongue at the right time, pressing one's toes hard to the floor, and counting backward by 7 are countermeasures that are difficult to detect but can change the results of a polygraph examination (Source: Lies! Lies! Lies! The psychology of deceit, by C.V. Ford, professor of psychiatry, University of Alabama). In fact, it is reported in professor Ford's book that after 20 minutes of instruction, 85% of those trained in such techniques are able to pass the polygraph examination even when guilty of a crime. Suppose that a random sample of 9 students are told a secret and then given instructions on how to pass the polygraph examination without revealing their knowledge of the secret. What is the probability that all the students are able to pass the polygraph examination? Guest Apr 3, 2017 Sort:

#1 +4777 +1 University of Alabama???? Roll Tide !!! Sorry I just had to say that...I don't really know how to do this... This reminded me of an episode of Mythbusters that I watched where they tested different methods of "cheating" a lie detector test, and I don't think any of them really worked. But I don't remember very well... But I'm pretty sure that question is basically just this: There are 100 people in a room. 15 of them have on a yellow shirt. 85 of them have on a blue shirt. You randomly select 9 people. What is the probability that all 9 of those people have on a blue shirt? hectictar  Apr 3, 2017

#2 +4777 +1 Okay, I'm really bad at probabilities but I will try it... I will use the question I said just because it's easier. Also, I think I should have said that each time you take one person out of the room, a new person comes into the room.
If you don't replace the person you took out, then it changes the answer. SO Probability of first person to have on a blue shirt: 85/100 Probability of second person to have on a blue shirt: 85/100 Probability of third person to have on a blue shirt: 85/100 ...and so on... Multiply all the probabilities together: $$\frac{85}{100}*\frac{85}{100}*\frac{85}{100}*\frac{85}{100}*\frac{85}{100}*\frac{85}{100}*\frac{85}{100}*\frac{85}{100}*\frac{85}{100} \\~\\ =(\frac{85}{100})^9 \\~\\ \approx 0.2316$$ So about a 23.16 % chance. I know this is probably the worst way of doing it but I tried... hectictar  Apr 3, 2017
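The arithmetic in the answer above can be checked directly (a small Python sketch):

```python
# Nine independent trials, each passing the polygraph with probability 0.85.
p_single = 0.85
p_all_nine = p_single ** 9
print(round(p_all_nine, 4))   # 0.2316, i.e. about a 23.16% chance
```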
{}
# q-series MHB Math Helper

#### Opalg ##### MHB Oldtimer Staff member Anybody with q-series experience? I had to deal with them when working with compact quantum groups. What sort of experience were you looking for?

#### ZaidAlyafey ##### Well-known member MHB Math Helper Great! I am looking for a proof for the following $$\displaystyle \frac{(-b)_{\infty}}{(a)_{\infty}} = \sum_{k \geq 0} \frac{\left(-b/a\right)_k} {(q)_k} a^k$$

#### Opalg ##### MHB Oldtimer Staff member Great ! I am looking for a proof for the following $$\displaystyle \frac{(-b)_{\infty}}{(a)_{\infty}} = \sum_{k \geq 0} \frac{\left(-b/a\right)_k} {(q)_k} a^k$$ That looks like a variation on the q-binomial theorem. Have you looked for a proof in Gasper and Rahman?

#### ZaidAlyafey ##### Well-known member MHB Math Helper Thanks for the link, I really appreciate it. I think q-series are interesting; they remind me of the famous relation between prime numbers and the zeta function due to Euler. I was reading the Notebook by Ramanujan and I was amazed by the vast results related to hypergeometric functions, which are a special case of q-series. I was hoping to read about the Jacobi theta function but I thought it would be better to start with q-series.

#### ZaidAlyafey ##### Well-known member MHB Math Helper By the way Opalg, do you think it is a hard thing to deal with? I was depressed to see how Ramanujan worked with this stuff and I was like, what is that!

#### Opalg ##### MHB Oldtimer Staff member By the way Opalg, do you think it is a hard thing to deal with? Please don't expect to understand everything that Ramanujan could do! That way madness lies. There are results that Ramanujan somehow apprehended by intuition, that even today cannot be proved although they appear to be true, and nobody knows how he arrived at them.
He must have had some quite unique insight that probably even he could not have explained.

#### ZaidAlyafey ##### Well-known member MHB Math Helper Hey, I am confused about the notations! $$\displaystyle (a)_k = a(a+1)(a+2) \cdots (a+k-1)$$ $$\displaystyle (a)_k =(a;q)_k = \prod_{n=0}^{k-1} (1-aq^n)$$ The latter defines a base $q$.

#### Opalg ##### MHB Oldtimer Staff member Hey, I am confused about the notations! $$\displaystyle (a)_k = a(a+1)(a+2) \cdots (a+k-1)$$ $$\displaystyle (a)_k =(a;q)_k = \prod_{n=0}^{k-1} (1-aq^n)$$ The latter defines a base $q$. I was assuming that $(a)_k$ was an abbreviation for $(a;q)_k$ (as here, for example), with the $q$ not explicitly mentioned. But I could well be wrong. MHB Math Helper

#### ZaidAlyafey ##### Well-known member MHB Math Helper How to prove the following $$\displaystyle \lim_{q \to 1}\frac{(a;q)_{\infty}}{(aq^x;q)_{\infty}}= (1-a)^x$$ It is kind of easy to prove it for $x\in \mathbb{Z}^+$

#### ZaidAlyafey ##### Well-known member MHB Math Helper To simplify for those who don't understand the notations: $$\displaystyle (a;q)_{\infty}= \prod_{k=0}^{\infty}(1-aq^k)$$ Similarly we have $$\displaystyle (aq^x;q)_{\infty}= \prod_{k=0}^{\infty}(1-aq^{k+x})$$ So we have to prove that $$\displaystyle \lim_{q \to 1}\prod_{k=0}^{\infty} \frac{1-aq^k}{1-aq^{k+x}} = (1-a)^ x$$ Any clue?
#### ZaidAlyafey ##### Well-known member MHB Math Helper How to prove the following $$\displaystyle \lim_{q \to 1}\frac{(a;q)_{\infty}}{(aq^x;q)_{\infty}}= (1-a)^x$$ It is kind of easy to prove it for $x\in \mathbb{Z}^+$ Ok , I think I got it , this is a simple consequence of the q-binomial theorem Consider the following $$\displaystyle {}_1\phi_0 (a;- ;q,z) = \sum_{k\geq 0}\frac{(a;q)_k}{(q;q)_k}z^k=\frac{(az;q)_{\infty}}{(z;q)_{\infty}}$$ (1) In (1) let $a = q^{x}$ and $z = a$ $$\displaystyle {}_1\phi_0 (q^x;- ;q,a) = \sum_{k\geq 0}\frac{(q^x;q)_k}{(q;q)_k}a^k=\frac{(aq^{x};q)_ {\infty} }{(a;q)_{\infty}}$$ Hence we have $$\displaystyle \frac{(aq^{x};q)_{\infty}}{(a;q)_{\infty}}= \sum_{k\geq 0}\frac{(q^x;q)_k}{(q;q)_k}a^k$$ Now consider the limit $$\displaystyle \lim_{q \to 1}\frac{(aq^{x};q)_{\infty}}{(a;q)_{\infty}}= \lim_{q \to 1} \sum_{k\geq 0}\frac{(q^x;q)_k}{(q;q)_k}a^k$$(2) Suppose that $$\displaystyle |a|<1$$ and $$\displaystyle |q|<1$$ so the sum is uniformly convergent on any sub-disk . So we have to approach $1$ from the left to stay in the disk ! 
The idea is to use L'Hôpital's rule: $$\displaystyle \lim_{q \to 1^-}\frac{(q^x;q)_k}{(q;q)_k} = \lim_{q \to 1^-} \frac{(1-q^x)\cdot (1-q^{x+1}) \cdot(1-q^{x+2}) \cdots (1-q^{x+k-1}) }{(1-q)\cdot(1-q^2)\cdot(1-q^3) \cdots(1-q^k)}$$ which can be written as $$\displaystyle \lim_{q \to 1^-}\frac{(q^x;q)_k}{(q;q)_k} = \lim_{q \to 1^-} \frac{(1-q^x)}{1-q}\cdot \lim_{q \to 1^-}\frac{(1-q^{x+1})}{1-q^2} \cdot \lim_{q \to 1^-} \frac{(1-q^{x+2})}{1-q^3} \cdots \lim_{q \to 1^-} \frac{(1-q^{x+k-1}) }{(1-q^k)}$$ $$\displaystyle \lim_{q \to 1^-}\frac{(q^x;q)_k}{(q;q)_k} = \frac{x (x+1)(x+2)\cdots (x+k-1)}{1\cdot 2 \cdot 3 \cdots k} = \frac{(x)_k}{k!}$$ Substitute in (2): $$\displaystyle \lim_{q \to 1^-}\frac{(aq^{x};q)_{\infty}}{(a;q)_{\infty}}= \sum_{k\geq 0}\frac{(x)_k}{k!}a^k$$ The sum on the right is the well-known binomial series for $(1-a)^{-x}$: $$\displaystyle \lim_{q \to 1^-}\frac{(aq^{x};q)_{\infty}}{(a;q)_{\infty}}= (1-a)^{-x}$$ (3) From (3) we conclude that $$\displaystyle \lim_{q \to 1^-}\frac{(a;q)_{\infty}}{ (aq^{x};q)_{\infty}}= (1-a)^{x}$$ Last edited:
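The limit can also be checked numerically by truncating the infinite products (a Python sketch; the truncation length and the test values of a, x, q are arbitrary choices of ours):

```python
def qpoch_ratio(a, x, q, terms=60000):
    """Finite-product approximation of (a;q)_inf / (a*q^x; q)_inf."""
    val = 1.0
    qk = 1.0          # running power q^k
    qx = q ** x
    for _ in range(terms):
        val *= (1.0 - a * qk) / (1.0 - a * qk * qx)
        qk *= q
    return val

# As q -> 1- the ratio should approach (1 - a)^x.
a, x = 0.5, 2.0
for q in (0.9, 0.99, 0.999):
    print(q, qpoch_ratio(a, x, q), (1 - a) ** x)
```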
{}
Late to the party, I finally summoned up the courage to grab Akasaka 1 to 8 and then I came across this... Perhaps he is the great grandfather of the Captain we all know and love. (and Monica too of course) My head blew off. This post is mainly fanboying over the somewhat-abundant fanservice in Akasaka 8. More pics after the jump. written by astrobunny
{}
Skip to main content # Smoothed Analysis of Multi-Item Auctions with Correlated Values ## Author(s): Psomas, Alexandros; Schvartzman, Ariel; Weinberg, S Matthew To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1nn8s Abstract: Consider a seller with m heterogeneous items for sale to a single additive buyer whose values for the items are arbitrarily correlated. It was previously shown that, in such settings, distributions exist for which the seller's optimal revenue is infinite, but the best "simple" mechanism achieves revenue at most one (Briest et al. 2015, Hart and Nisan 2012), even when m=2. This result has long served as a cautionary tale discouraging the study of multi-item auctions without some notion of "independent items". In this work we initiate a smoothed analysis of such multi-item auction settings. We consider a buyer whose item values are drawn from an arbitrarily correlated multi-dimensional distribution then randomly perturbed with magnitude δ under several natural perturbation models. On one hand, we prove that the above construction is surprisingly robust to certain natural perturbations of this form, and the infinite gap remains. On the other hand, we provide a smoothed model such that the approximation guarantee of simple mechanisms is smoothed-finite. We show that when the perturbation has magnitude δ, pricing only the grand bundle guarantees an O(1/δ)-approximation to the optimal revenue. That is, no matter the (worst-case) initially correlated distribution, these tiny perturbations suffice to bring the gap down from infinite to finite. We further show that the same guarantees hold when n buyers have values drawn from an arbitrarily correlated mn-dimensional distribution (without any dependence on n). Taken together, these analyses further pin down key properties of correlated distributions that result in large gaps between simplicity and optimality. 
Publication Date: Jun-2019 Citation: Psomas, Alexandros, Ariel Schvartzman, and S. Matthew Weinberg. "Smoothed Analysis of Multi-Item Auctions with Correlated Values." In ACM Conference on Economics and Computation (2019): pp. 417-418. doi:10.1145/3328526.3329563 DOI: 10.1145/3328526.3329563 Pages: 417 - 418 Type of Material: Conference Article Journal/Proceeding Title: ACM Conference on Economics and Computation Version: Final published version. Article is made available in OAR by the publisher's permission or policy. Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.
{}
## Doctoral Dissertations #### Title A perturbative analysis of surface acoustic wave propagation and reflection in interdigital transducers January 1997 #### Keywords Engineering, Electronics and Electrical|Physics, Acoustics Ph.D. #### Abstract The coupling of stress and strain fields to electric fields present in anisotropic piezoelectric crystals makes them ideal for use as electromechanical transducers in a wide variety of applications. In recent years such crystals have been utilized to produce surface acoustic wave devices for signal processing applications, in which an applied metallic grating both transmits and receives, through the piezoelectric effect, electromechanical surface waves. The design of such interdigital transducers requires an accurate knowledge of wave propagation and reflection. The presence of the metal grating in addition to its ideal transduction function, by means of electrical and mechanical loading, also introduces a velocity shift as well as reflection into substrate surface waves. We seek to obtain a consistent formulation of the wave behavior due to the electrical and mechanical loading of the substrate crystal by the metallic grating. A perturbative solution up to second order in $h/\lambda$ is developed, where h is the maximum grating height and $\lambda$ the acoustic wavelength. For the operating frequencies and physical parameters of modern surface acoustic wave devices such an analysis will provide an adequate description of device behavior in many cases, thereby circumventing the need for more computationally laborious methods. Numerical calculations are presented and compared with available experimental data.
{}
Pattern transformation to implement "factoring out"

I'm attempting to put together some code that would do pattern-based expression manipulation (I know it's already implemented in Mathematica), but I'm getting stuck on a pattern to deal with "factoring out". I'm trying to construct a pattern that would match expressions similar to: a*b+a*c+a*d and transform them into a*(b+c+d) Ideally, this would work for any expressions, for example Log[x^2]*Sin[x]+Log[x^2]*Cos[x]+Log[x^2]-Log[x^2]^2 would get transformed into Log[x^2]*(Sin[x]+Cos[x]+1-Log[x^2]) Going back to the much simpler a*b+a*c+a*d, what I'm trying is something along the lines of Plus[Times[x_, tail__] ..] :> x_*Plus[Times[tail__]] However that doesn't work (and I'm aware it can't), I just can't figure out the proper way to express what I mean. On top of that, I seem to be completely unable to match even part of the expression, for example I would expect this: Replace[a*b + a*c + a*d, Plus[Times[x_, _] ..] :> {x}] To return {a}, but it doesn't match anything. Can someone nudge me in the correct direction? EDIT 1: it's important that this is done in one step, see below. EDIT 2: To give more context, we're trying to prototype an app that would take two (high-school level) expressions and determine if they are equivalent. The reason we're using patterns and not Simplify (or something similar) is that the eventual ambition is to write the app in a different, non-commercial language. The way we approach this is, for each of the two expressions, we generate a set of "one step equivalent" expressions using ReplaceList and the patterns we have, by which we mean all expressions which can be created from the previous expression in one step. The patterns include things such as expanding products, putting two fractions over a common denominator, etc.
We then iterate and repeat this process for each newly created expression, keeping track of all expressions created and always checking whether the two sets have started overlapping - if they have, we consider the expressions equal; if not (within a certain number of steps), we consider them not equal. Now we have probably around 8 patterns, and application of even a single one can give many results through ReplaceList, so the combinatorial explosion is quite large. In practice, my MacBook can't handle more than 4 iterations. Because of this, we are trying to find patterns which match all possible combinations in a single step, so the pattern we're looking for, when applied to a*b+a*c+a*d using ReplaceList, should give {a(b+c)+ad, a(c+d)+ab, a(b+d)+ac, a(b+c+d)} in a single step. The only reason for this is performance; in theory we would get exactly the same results if we use patterns which do the same thing but pairwise and just apply them repeatedly. Unfortunately, in practice we would need many more iterations than 3-4 to get to the result, and that is simply not feasible. • Simplify gives the desired result in both examples. – kglr Oct 1, 2018 at 21:38 • I am aware of that, however that is not what I'm looking for - I'm trying to implement the logic myself Oct 2, 2018 at 15:42 I can provide this one, which with ReplaceRepeated will eventually do the bracketing (hopefully): a*b + a*c + a*d + e //. Plus[Times[A_, x_], Times[A_, y_], z___] :> Plus[A Plus[x, y], z] a (b + c + d) + e • Hey Henrik, thank you very much for the tip. Unfortunately, this doesn't exactly solve my use case; I'm trying to find something that would match any number of such terms in one step (so I can get them all using ReplaceList). I edited the OP to give more info. Oct 3, 2018 at 7:30 • Hm. You should add that requirement to your question. Oct 3, 2018 at 7:35 • Yes, good point, sorry about that. I edited the OP Oct 3, 2018 at 7:52 Got it!
Plus[head___, sum : Repeated[Times[A_, _], {2, Infinity}], tail___] :> Plus[head, A*DeleteCases[A] /@ Plus[sum], tail] The following ReplaceList[a*b + a*c*d + a*d + e, Plus[head___, sum : Repeated[Times[A_, _], {2, Infinity}], tail___] :> {a c d + a (b + d) + e, a d + a (b + c d) + e, a b + (a + a c) d + e,
{}
# LyX: How to combine two math symbols? [duplicate] I'm a new user of LyX. I'm using it to write math lectures. How can I make a new symbol from two existing ones? To be more precise, how can I create this: It's a subset with a circle in it (\subset, \circ). - ## migrated from stackoverflow.comJan 26 '14 at 8:50 This question came from our site for professional and enthusiast programmers. ## marked as duplicate by egreg, Peter Jansson, Jesse, Guido, Martin SchröderJan 26 '14 at 12:26 If this question can be reopened, then this answer can be moved here, where it fits better than as answer for the duplicate. – Heiko Oberdiek Jan 28 '14 at 20:40 I'm not sure how friendly my answer is to a new LyX user... but \! can be used to give negative horizontal spacing in math mode. Wrapping with \mathrel{} adjust the spacing around the new symbol. \documentclass[varwidth]{standalone} \usepackage{amsmath} \newcommand{\subsetcirc}{\mathrel{\subset\!\!\!\!\!\circ}} \begin{document} $A \subset B$ $A \subsetcirc B$ \end{document} - More related: \subseteq + \circ as a single symbol (“open subset”). – Manuel Jan 26 '14 at 11:38 Here's a solution that uses the TeX "primitive" commands \ooalign, \kern, and \raise (as well as \hss, \cr, and \hbox). It also uses, unsurprisingly, the "standard" symbols \subset and \circ. If you want to make the circle larger or smaller, change the first argument of the \scalebox command. To shift the circle a bit more to the right, increase the argument of the \kern command. And, if you use a package that uses its own forms of the subset and circle symbols, you may need to tweak the code a bit more to get a satisfactory positioning of the symbols. 
\documentclass{article} \usepackage{graphicx} % for \scalebox macro \newcommand\subsetcirc{% \mathrel{\ooalign{\hss$\subset$\hss\cr% \kern0.6ex\raise0.2ex\hbox{\scalebox{0.7}{$\circ$}}}}} \begin{document} $A\subsetcirc B$ \end{document} - Another possibility is \documentclass{article} \newcommand*\subsetcircle{\mathrel{\ooalign{$\subset$\cr\hidewidth\hbox{$\circ\mkern 1mu$}\cr}}} \begin{document} $A\subsetcircle B$ \end{document} -
{}
# 1.2.2: The Blind Man and the Elephant

This old story from China or India was made into the poem The Blind Men and the Elephant by John Godfrey Saxe. Six blind men find excellent empirical evidence from different parts of the elephant and all come to reasoned inferences that match their observations. Their research is flawless and their conclusions are completely wrong, showing the necessity of including holistic analysis in the scientific process. Here is the poem in its entirety:

It was six men of Indostan, to learning much inclined, who went to see the elephant (Though all of them were blind), that each by observation, might satisfy his mind.

The first approached the elephant, and, happening to fall, against his broad and sturdy side, at once began to bawl: "God bless me! but the elephant, is nothing but a wall!"

The second, feeling of the tusk, cried: "Ho! what have we here, so very round and smooth and sharp? To me 'tis mighty clear, this wonder of an elephant, is very like a spear!"

The third approached the animal, and, happening to take, the squirming trunk within his hands, "I see," quoth he, "the elephant is very like a snake!"

The fourth reached out his eager hand, and felt about the knee: "What most this wondrous beast is like, is mighty plain," quoth he; "Tis clear enough the elephant is very like a tree."

The fifth, who chanced to touch the ear, said: "E'en the blindest man can tell what this resembles most; Deny the fact who can, This marvel of an elephant, is very like a fan!"

The sixth no sooner had begun about the beast to grope, than, seizing on the swinging tail, that fell within his scope, "I see," quoth he, "the elephant is very like a rope!"

And so these men of Indostan, disputed loud and long, each in his own opinion, exceeding stiff and strong, Though each was partly in the right, and all were in the wrong!

So, oft in theologic wars, the disputants, I ween, tread on in utter ignorance, of what each other mean, and prate about the elephant, not one of them has seen!
-John Godfrey Saxe 1.2.2: The Blind Man and the Elephant is shared under a CC BY-SA license and was authored, remixed, and/or curated by LibreTexts.
{}
Article Contents # On some fractional differential equations in the Hilbert space • Let $A$ be a closed linear operator defined on a dense set in the Hilbert space $H$. Fractional evolution equations of the form $\frac{d^\alpha u(t)}{dt^\alpha} = Au(t), 0 < \alpha \leq 1$, are studied in $H$ for a wide class of operators $A$. Some properties of the solutions of the Cauchy problem for the considered equation are studied under suitable conditions. It is also proved that there exists a dense set $S$ in $H$ such that if the initial condition $u(0)$ is an element of $S$, then there exists a solution $u(t)$ of the considered Cauchy problem. Applications to general partial differential equations of the form $$\frac{\partial^\alpha u(x,t)}{\partial t^\alpha} = \sum_{|q| \leq m} a_q(x) D^q u(x,t)$$ are given without any restrictions on the characteristic form $\sum_{|q|=m} a_q(x) \xi^q$, where $D^q = D_1^{q_1} \cdots D_n^{q_n}, x = (x_1, ..., x_n), D_j= \frac{\partial}{\partial x_j}, \xi^q = \xi_1^{q_1} \cdots \xi_n^{q_n}, |q| = q_1 + ... + q_n$, and $q = (q_1, ..., q_n)$ is a multi-index. Mathematics Subject Classification: 47 D 09, 34 G 10, 34 G 99, 35 K 90. Citation: Open Access Under a Creative Commons license
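The abstract does not say which fractional derivative is meant; in this literature $\frac{d^\alpha}{dt^\alpha}$ for $0 < \alpha \leq 1$ is usually understood in the Riemann–Liouville or Caputo sense. The Caputo form, for instance, reads

```latex
D^\alpha u(t) \;=\; \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{u'(s)}{(t-s)^\alpha}\,ds,
\qquad 0 < \alpha < 1,
```

which reduces to the ordinary first derivative as $\alpha \to 1$.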
# Greek Capital Letter Epsilon (Ε)

| Format | Data |
| --- | --- |
| Symbol | Ε |
| Code Point | U+0395 |
| TeX | `\Epsilon` |

## Usage

The capital Greek letter Ε (capital epsilon) is visually very similar to the upper-case Latin letter E. For that reason, refer to the usage of the capital E for how the symbol appears in math.

## Related

The ε (epsilon) symbol is a Greek letter used in math as a variable to represent error bounds and, in calculus, in the epsilon-delta definition of limits.
# Zipf's Law in Multifragmentation

We discuss the meaning of Zipf's law in nuclear multifragmentation. We remark that Zipf's law is a consequence of a power-law fragment size distribution with exponent $\tau \simeq 2$. We also recall why the presence of such a distribution is not a reliable signal of a liquid-gas phase transition.

Authors: Xavier Campi; Hubert Krivine
Source: https://archive.org/
## Boundless: "Finance: Chapter 13, Capital Structure Considerations"

### Optimal Capital Structure Considerations

The optimal capital structure is the mix of debt and equity that minimizes a firm's cost of capital, thereby maximizing its value.

#### LEARNING OBJECTIVES

• Describe the influence of a company's cost of capital on its capital structure and investment decisions.
• Explain why a company's capital structure influences its value.

#### KEY POINTS

• Capital structure categorizes the way a company has its assets financed.
• Miller and Modigliani developed a theory which, through its assumptions and models, determined that in perfect markets a firm's capital structure should not affect its value.
• In the real world, there are costs and variables that create different returns on capital and, therefore, give rise to the possibility of an optimal capital structure for a firm.
• The cost of capital is the rate of return that capital could be expected to earn in an alternative investment of equivalent risk.
• For an investment to be worthwhile, the expected return on capital must be greater than the cost of capital.
• The weighted average cost of capital multiplies the cost of each security (debt or equity) by the percentage of total capital taken up by the particular security, and then adds up the results from each security involved in the total capital of the company.

#### TERMS

• cost of capital: the rate of return that capital could be expected to earn in an alternative investment of equivalent risk
• leverage: debt taken on by a firm in order to finance assets
• capital structure: the way a corporation finances its assets, through a combination of debt, equity, and hybrid securities

#### FULL TEXT

Capital structure is the way a corporation finances its assets, through a combination of debt, equity, and hybrid securities. In short, capital structure can be termed a summary of a firm's liabilities by categorization of asset sources.
In a simple example, if a company's assets come from a \$20 million equity issuance and lending that amounts to \$80 million, the capital structure can be said to be 20% equity and 80% debt. While equity results from the selling of ownership shares, debt is termed "leverage." Therefore, a firm that has issued no debt or bonds is said to not be leveraged. This is a simplistic view, because in reality a firm's capital structure can be highly complex and include many different sources. Capital structure is the assignment of the sources of company assets into equity or debt securities.

The Modigliani-Miller theorem, proposed by Franco Modigliani and Merton Miller, forms the basis for modern thinking on capital structure (though it is generally viewed as a purely theoretical result, since it disregards many important factors in the capital structure decision). The theorem states that in a perfect market, how a firm is financed is irrelevant to its value. However, as with many theories, it is difficult to use this abstract theory as a basis to evaluate conditions in the real world, where markets are imperfect and capital structure will indeed affect the value of the firm. Actual market considerations when dealing with capital structure include bankruptcy costs, agency costs, taxes, and information asymmetry.

#### Cost of Capital Considerations

One of the major considerations that overseers of firms must take into account when planning out capital structure is the cost of capital. For an investment to be worthwhile, the expected return on capital must be greater than the cost of capital. A company's securities typically include both debt and equity; therefore, one must calculate both the cost of debt and the cost of equity to determine a company's cost of capital.
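The 20%/80% split in the example above is just each source's share of total financing, and those weights are exactly what the weighted average cost of capital combines with per-source costs. A minimal sketch in Python; the 12% cost of equity and 6% cost of debt are illustrative assumptions, not figures from the text:

```python
def capital_structure_weights(equity, debt):
    """Return (equity weight, debt weight) for given financing amounts."""
    total = equity + debt
    return equity / total, debt / total

def wacc(equity, debt, cost_of_equity, cost_of_debt):
    """Weighted average cost of capital: each source's cost times its
    share of total capital, summed (taxes ignored for simplicity)."""
    w_e, w_d = capital_structure_weights(equity, debt)
    return w_e * cost_of_equity + w_d * cost_of_debt

# The example from the text: $20M equity, $80M debt -> 20% / 80%.
w_e, w_d = capital_structure_weights(20e6, 80e6)

# Hypothetical per-source costs: 12% equity, 6% debt.
blended = wacc(20e6, 80e6, cost_of_equity=0.12, cost_of_debt=0.06)
```

With these assumed costs, the blended rate is 0.2 × 12% + 0.8 × 6% = 7.2%, which is the hurdle an investment must clear to be worthwhile under this structure.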
The weighted average cost of capital multiplies the cost of each security by the percentage of total capital taken up by the particular security, and then adds up the results from each security involved in the total capital of the company. Because of tax advantages on debt issuance, such as the ability to deduct interest payments from taxable income, issuing debt will typically be cheaper than issuing new equity. At some point, however, the cost of issuing new debt will be greater than the cost of issuing new equity. This is because adding debt increases the default risk and, thus, the interest rate that the company must pay in order to borrow money. This increased default risk can also drive up the costs for other sources (such as retained earnings and preferred stock). Management must identify the "optimal mix" of financing, which is the capital structure where the cost of capital is minimized so that the firm's value can be maximized.

### Tax Considerations

Taxation implications, which change when using equity or debt for financing, play a major role in deciding how the firm will finance assets.

#### LEARNING OBJECTIVE

• Explain how taxes can influence a company's capital structure

#### KEY POINTS

• Tax considerations have a major effect on the way a company determines its capital structure and deals with its costs of capital.
• Under a classical tax system, the tax deductibility of interest makes debt financing valuable; that is, the cost of capital decreases as the proportion of debt in the capital structure increases. The optimal structure, then, would be to have virtually no equity at all.
• In general, since dividend payments are not tax deductible but interest payments are, one would think that, theoretically, higher corporate tax rates would call for an increase in the usage of debt to finance capital, relative to the usage of equity issuance.
• There are different kinds of debt that can be used, and they may have different deductibility and tax implications. This will affect the types of debt used in financing, even if corporate taxes do not change the total amount of debt used.

#### TERMS

• optimal capital structure: the amount of debt and equity that maximizes the value of the firm
• Interest: the price paid for obtaining, or price received for providing, money or goods in a credit transaction, calculated as a fraction of the amount or value of what was borrowed
• dividend: a pro rata payment of money by a company to its shareholders, usually made periodically (e.g., quarterly or annually)

#### FULL TEXT

Tax considerations have a major effect on the way a company determines its capital structure and deals with its costs of capital. A company's decision makers must take taxes into consideration when determining a firm's capital structure. Miller and Modigliani assume that in a perfect market, firms will borrow at the same interest rate as individuals, there are no taxes, and investment decisions are not changed by financing decisions. This leads to the conclusion that capital structure should not affect value. When the theory is extended to include taxes and risky debt, things change. Under a classical tax system, the tax deductibility of interest makes debt financing valuable; that is, the cost of capital decreases as the proportion of debt in the capital structure increases. The optimal structure, then, would be to have virtually no equity at all. However, we see that in real-world markets capital structure does affect firm value. Therefore, we see that imperfections exist; often a firm's optimal structure does not involve one hundred percent leverage and no equity whatsoever. There is much debate over how changing corporate tax rates would affect debt usage in capital structure.
In general, since dividend payments are not tax deductible but interest payments are, one would think that, theoretically, higher corporate tax rates would call for an increase in the usage of debt to finance capital, relative to the usage of equity issuance. However, since many factors affect tax applicability, including firm location and size, this is a generality at best. There are also different kinds of debt that can be used, and they may have different deductibility and tax implications. That is why, while many believe that taxes don't really affect the amount of debt used, they actually do. In the end, different tax considerations and implications will affect the costs of debt and equity, and how they are used, relative to each other, in financing the capital of a company.

### Cost of Capital Considerations

Cost of capital is important in deciding how a company will structure its capital so as to receive the highest possible return on investment.

#### LEARNING OBJECTIVE

• Describe the influence of a company's cost of capital on its capital structure and investment decisions

#### KEY POINTS

• For an investment to be worthwhile, the expected return on capital must be greater than the cost of capital. The cost of capital is the rate of return that capital could be expected to earn in an alternative investment of equivalent risk.
• Once the cost of debt and the cost of equity have been determined, their blend, the weighted average cost of capital (WACC), can be calculated. This WACC can then be used as a discount rate for a project's projected cash flows.
• The weighted average cost of capital multiplies the cost of each security (debt or equity) by the percentage of total capital taken up by the particular security, and then adds up the results from each security involved in the total capital of the company.
#### TERMS

• cost of preferred stock: the additional premium paid to have an equity security with certain additional features not present in common stock
• capital rationing: restrictions on how or how much a company can invest
• cost of capital: the rate of return that capital could be expected to earn in an alternative investment of equivalent risk

#### FULL TEXT

One of the major considerations that overseers of firms must take into account when planning out capital structure is the cost of capital. The expected return on an asset is compared to the cost of capital to invest in the asset. Cost of capital is an important way of determining whether or not a firm is a worthwhile investment. For an investment to be worthwhile, the expected return on capital must be greater than the cost of capital. A company's securities typically include both debt and equity, so one must calculate both the cost of debt and the cost of equity to determine the company's cost of capital. The weighted average cost of capital multiplies the cost of each security by the percentage of total capital taken up by the particular security, and then adds up the results from each security involved in the total capital of the company. If there were no tax advantages for issuing debt, and equity could be freely issued, Miller and Modigliani showed that, under certain assumptions, the value of a leveraged firm and the value of an unleveraged firm should be the same. Because of tax advantages on debt issuance, such as the ability to deduct interest payments from taxable income, it will be cheaper to issue debt rather than new equity. At some point, however, the cost of issuing new debt will be greater than the cost of issuing new equity. This is because adding debt increases the default risk and thus the interest rate that the company must pay in order to borrow money.
By utilizing too much debt in its capital structure, this increased default risk can also drive up the costs for other sources (such as retained earnings and preferred stock). Management must identify the "optimal mix" of financing: the capital structure where the cost of capital is minimized so that the firm's value can be maximized.

### The Marginal Cost of Capital

The marginal cost of capital is the cost needed to raise the last dollar of capital, and usually this amount increases with total capital.

#### LEARNING OBJECTIVE

• Describe how the cost of capital influences a company's capital budget

#### KEY POINTS

• The marginal cost of capital is calculated as the cost of the last dollar of capital raised.
• When raising extra capital, firms will try to stick to their desired capital structure, but once sources are depleted they will have to issue more equity. Since this tends to be more expensive than other sources of financing, we see an increase in the marginal cost of capital as capital levels increase.
• Since an investment in capital is logically only a good decision if the return on the capital is greater than its cost, and a negative return is generally undesirable, the marginal cost of capital often becomes a benchmark number in the decision-making process that goes into raising more capital.

#### TERMS

• capital gains yield: the compound rate of return of increases in a stock's price
• marginal tax rate: the percent paid out to the government of the last dollar (or applicable currency) earned
• Marginal Cost of Capital: the cost of the last dollar of capital raised, or the minimum acceptable rate of return (hurdle rate)

#### FULL TEXT

The marginal cost of capital is calculated as the cost of the last dollar of capital raised. Generally, we see that as more capital is raised, the marginal cost of capital rises. This happens because the marginal cost of capital generally is the weighted average of the cost of raising the last dollar of capital.
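A rising marginal cost of capital can be pictured as a step schedule in which the firm exhausts its cheapest sources first. The sketch below is purely illustrative; the source capacities and rates (retained earnings at 8%, after-tax debt at 10%, new equity at 14%) are hypothetical assumptions, not data from the text:

```python
def marginal_cost_of_capital(amount_raised):
    """Cost of the last dollar raised, given a hypothetical schedule in
    which cheaper sources are used first and then depleted."""
    # (capacity of the source, cost of that source)
    schedule = [
        (5_000_000, 0.08),    # retained earnings (cheapest, limited)
        (10_000_000, 0.10),   # new debt, after-tax
        (float("inf"), 0.14), # new equity (most expensive, unlimited)
    ]
    remaining = amount_raised
    for capacity, cost in schedule:
        if remaining <= capacity:
            return cost
        remaining -= capacity
    return schedule[-1][1]

# The marginal cost steps up as cheaper sources are depleted.
low = marginal_cost_of_capital(3_000_000)    # within retained earnings
mid = marginal_cost_of_capital(12_000_000)   # pushed into new debt
high = marginal_cost_of_capital(20_000_000)  # pushed into new equity
```

Under these assumed numbers, raising \$3M costs 8% at the margin, \$12M costs 10%, and \$20M costs 14%: the benchmark a project's return must beat rises with the amount raised.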
Usually, we see that in raising extra capital, firms will try to stick to their desired capital structure, and once sources are depleted they will have to issue more equity. Since the cost of issuing extra equity tends to be higher than other costs of financing, we see an increase in the marginal cost of capital as the amount of capital raised grows. The marginal cost of capital is the cost of the last dollar of capital raised. It is an important consideration the firm must take into account when making corporate decisions. The marginal cost of capital can also be discussed as the minimum acceptable rate of return, or hurdle rate. An investment in capital is logically only a good decision if the return on the capital is greater than its cost; also, a negative return is generally undesirable. As a result, the marginal cost of capital often becomes a benchmark number in the decision-making process that goes into raising more capital. If it is determined that the dollars invested in raising this extra capital could, in the firm's judgment, be allocated toward a greater or safer return if used differently, then they will be directed elsewhere. For this we must look into the marginal returns of capital, which can be described as the gains or returns to be had by raising that last dollar of capital.

### Trade-Off Considerations

Trade-off considerations are important because they take into account the cost and benefits of raising capital through debt or equity.

#### LEARNING OBJECTIVE

• Describe the balancing act between debt and equity for a company as described by the "trade-off" theory

#### KEY POINTS

• An important purpose of the trade-off theory is to explain the fact that corporations are usually financed partly with debt and partly with equity. It states that there is an advantage to financing with debt.
• The marginal benefit of further increases in debt declines as debt increases while the marginal cost increases, so a firm that is optimizing its overall value will focus on this trade-off when choosing how much debt and equity to use for financing.
• One would think that firms would use much more debt than they do in reality. The reason they do not is the risk of bankruptcy and the volatility that can be found in credit markets, especially when a firm tries to take on too much debt.

#### TERMS

• trade-off theory: refers to the idea that a company chooses how much debt finance and how much equity finance to use by balancing the costs and benefits
• trade credit: a form of debt offered from one business to another with which it transacts

#### FULL TEXT

The trade-off theory of capital structure refers to the idea that a company chooses how much debt finance and how much equity finance to use by balancing the costs and benefits. It is often set up as a competitor theory to the pecking order theory of capital structure. An important purpose of the theory is to explain the fact that corporations are usually financed partly with debt and partly with equity. It states that there is an advantage to financing with debt (the tax benefits of debt) and a cost of financing with debt (the cost of financial distress, including bankruptcy). Trade-off considerations are important factors in deciding an appropriate capital structure for a firm, since they weigh the cost and benefits of extra capital through debt vs. equity. The marginal benefit of further increases in debt declines as debt increases, while the marginal cost increases. Of course, using equity is initially more expensive than debt because it is ineligible for the same tax savings, but it becomes more favorable in comparison to higher levels of debt because it does not carry the same financial risk.
Therefore, a firm that is optimizing its overall value will focus on this trade-off when choosing how much debt and equity to use for financing. Another trade-off consideration to take into account is that while interest payments can be written off, dividends on equity that the firm issues usually cannot. Combine that with the fact that issuing new equity is often seen as a negative signal by market investors, which can decrease value and returns. As more capital is raised and marginal costs increase, the firm must find a fine balance in whether it uses debt or equity, after internal financing, when raising new capital. Therefore, one would think that firms would use much more debt than they do in reality. The reason they do not is the risk of bankruptcy and the volatility that can be found in credit markets, especially when a firm tries to take on too much debt. Trade-off considerations thus change from firm to firm as they impact capital structure.

### Signaling Considerations

Signaling is the conveyance of nonpublic information through public action, and is often used as a technique in capital structure decisions.

#### LEARNING OBJECTIVE

• Explain how a company's attempts at signaling can affect its capital structure

#### KEY POINTS

• Signaling becomes important in a state of asymmetric information.
• Signaling can affect the way investors view a firm, and corporate actions that are made public can indirectly alter the value investors assign to a firm.
• In general, issuing new equity can be seen as a bad signal for the health of a firm and can decrease current share value.
• While the issuance of equity does have benefits, in the sense that investors can take part in potential earnings growth, a company will usually choose new debt over new equity in order to avoid the possibility of sending a negative signal.
#### TERMS

• asymmetric information: the state of affairs in transactions where one party has more or better information than the other
• Signaling: the idea that one party (termed the agent) credibly conveys some information about itself to another party (the principal)

#### FULL TEXT

In economics and finance, signaling is the idea that a party may indirectly convey information about itself, which may not be public, through actions to other parties. Signaling becomes important in a state of asymmetric information (a deviation from perfect information), in which inequalities in access to information upset the normal market for the exchange of goods and services. In his seminal 1973 article, Michael Spence proposed that two parties could get around the problem of asymmetric information by having one party send a signal that would reveal some piece of relevant information to the other party. That party would then interpret the signal and adjust its purchasing behavior accordingly, usually by offering a higher or lower price than if the signal had not been received. In general, the degree to which a signal is thought to be correlated to unknown or unobservable attributes is directly related to its value.

A basic example of signaling is that of a student to a potential employer. The degree the student obtained signals to the employer that the student is competent and has a good work ethic, factors that are vital in the decision to hire. Education credentials, such as diplomas, can send a positive signal to potential employers regarding a worker's talents and motivation.

In terms of capital structure, management should, and typically does, have more information than an investor, which implies asymmetric information. Therefore, investors generally view all capital structure decisions as some sort of signal. For example, let us think of a company that is issuing new equity.
If a company issues new equity, this generally dilutes share value. Since the goal of the firm is generally to maximize shareholder value, this can be viewed as a signal that the company is facing liquidity issues or that its prospects are dim. Conversely, a company with strong solvency and good prospects would generally be able to obtain funds through debt, which would generally carry a lower cost of capital than issuing new equity. If a company fails to have debt extended to it, or the company's credit rating is downgraded, that is also a bad signal to investors. While the issuance of equity does have benefits, in the sense that investors can take part in potential earnings growth, a company will usually choose new debt over new equity in order to avoid the possibility of sending a negative signal.

### Constraint on Managers

Managers will have their actions influenced by their firm's capital structure and the resources that it allows them to use.

#### LEARNING OBJECTIVE

• Explain how capital structure can minimize a company's agency problem

#### KEY POINTS

• Debt-heavy capital structures put constraints on managers by limiting the amount of free cash they have available to them.
• Managers may often act in their own best interests instead of those of the firm's investors. This is known as an agency dilemma.
• We see that firms with debt-heavy capital structures limit the free cash available to managers and, therefore, have managers whose goals tend to be more aligned with those of the shareholders.

#### TERM

• Agency Dilemma: takes into account the difficulties in motivating one party (the "agent") to act on behalf of another (the "principal")

#### FULL TEXT

Managers who make decisions about the firm's corporate behavior will have their actions influenced by capital structure and the resources that it allows them to use. Managerial finance is the branch of the industry that concerns itself with the managerial significance of finance techniques.
It is focused on assessment rather than technique. However, this process can be tainted by the fact that managers may often act in their own best interests instead of those of the firm's investors. This is known as an agency dilemma. Adopting the right kind of capital structure can help combat this kind of problem, however. When the capital structure draws heavily on debt, this leaves less money to be distributed to managers in the form of compensation, as well as less free cash to be used on behalf of the business. Managers have to be more careful with the resources they are given to run the firm successfully, since they have to produce enough income to pay back this debt by a certain date, with interest. When managers work with an equity-heavy capital structure, they have a little more leeway, and while shareholders may be upset or suffer because of fluctuations in the value of the firm, managers may find ways to make sure their compensation has some immunity from the market value of the firm. Therefore, firms that have debt-heavy capital structures have managers whose goals tend to be more aligned with those of the shareholders. The limitation on free cash gives managers an incentive to make decisions for the company that will grow the firm in value and increase the cash available to them to pay back debt, pay back into the firm, and compensate themselves.

### Pecking Order

In corporate finance, pecking order considerations take into account the increase in the cost of financing with asymmetric information.

#### LEARNING OBJECTIVE

• Explain the benefits and shortcomings of using the "pecking order" theory to evaluate a company's value

#### KEY POINTS

• When it comes to methods of raising capital, companies will prefer internal financing, then debt, and then issuing new equity, respectively.
• Outside investors tend to think managers issue new equity because they feel the firm is overvalued and wish to take advantage, so equity is a less desired way of raising new capital. This gives outside investors an incentive to lower the value of the new equity.
• The form of debt a firm chooses can act as a signal of its need for external finance. This sort of signaling can affect how outside investors view the firm as a potential investment.

#### TERM

• Pecking Order: the theory that the cost of financing increases with asymmetric information. When it comes to methods of raising capital, companies prefer financing that comes from internal funds, then debt, then issuing new equity. Raising equity can be considered a last resort.

#### FULL TEXT

The pecking order of investors or credit holders in a company plays a part in the way a company decides to structure its capital. Pecking order theory basically states that the cost of financing increases with asymmetric information. Financing comes from internal funds, debt, and new equity. When it comes to methods of raising capital, companies will prefer internal financing, then debt, and then issuing new equity, respectively. Raising equity, in this sense, can be viewed as a last resort. The pecking order theory was popularized by Stewart C. Myers, who argued that equity is a less preferred means to raise capital: when managers (who are assumed to know the true condition of the firm better than investors) issue new equity, investors believe that managers think the firm is overvalued and are taking advantage of this over-valuation. As a result, investors will place a lower value on the new equity issuance. This theory maintains that businesses adhere to a hierarchy of financing sources and prefer internal financing when available, and debt is preferred over equity if external financing is required.
Thus, the form of debt a firm chooses can act as a signal of its need for external finance. This sort of signaling can affect how outside investors view the firm as a potential investment, and once again must be considered by the people in charge of the firm when making capital structure decisions. Tests of the pecking order theory have not been able to show that it is of first-order importance in determining a firm's capital structure. However, several authors have found that there are instances where it is a good approximation of reality. On the one hand, Fama, French, Myers, and Shyam-Sunder find that some features of the data are better explained by the pecking order theory than by the trade-off theory. On the other hand, Goyal and Frank show, among other things, that pecking order theory fails where it should hold, namely for small firms where information asymmetry is presumably an important problem.

### Window of Opportunity

In corporate finance, a "window of opportunity" is the time when an asset or product that was unattainable becomes available.

#### LEARNING OBJECTIVE

• Identify a window of opportunity

#### KEY POINTS

• Windows of opportunity must be taken into consideration by a corporation in order to purchase capital to achieve maximum return.
• From the seller's perspective, a window of opportunity is the unique time when a party will be able to sell a certain product at its highest price point, in order to get the maximum return on capital purchased and used.
• The people in charge of a firm must take windows of opportunity into account in order to keep costs low and returns high, so as to make the firm look like the best investment possible for creditors of all types.

#### TERM

• Window of opportunity: the idea of a time when an asset or product, which is unattainable, will become available.
It can be extended to a time when a certain product will be attainable at a certain price or, from the opposite perspective, the unique time a party will be able to sell a certain product at its highest price point in order to get the maximum return on investment.

#### FULL TEXT

In corporate finance, a "window of opportunity" is basically the idea of a time when an asset or product that is unattainable will become available. It can be extended to a time when a certain product will be attainable at a certain price or, from the opposite perspective, the unique time a party will be able to sell a certain product at its highest price point in order to get the maximum return on investment. Windows of opportunity come into play when budgeting for capital because they can provide opportunities for firms to maximize returns on investment. Consider, for example, a firm issuing an IPO, which allows a company to tap into a wide pool of potential investors to provide itself with capital for future growth, repayment of debt, or working capital. A company selling common shares is never required to repay the capital to its public investors. Those investors must endure the unpredictable nature of the open market to price and trade their shares. However, for a company with massive growth potential, the IPO may be the lowest price at which the stock is available for public purchase. Therefore, the IPO presents a window of opportunity to the potential investor to get in on the new equity while it is still affordable and a greater return on investment is attainable. From the firm's side, the opportunity to purchase a new plant or real estate at a low cost, or to borrow at lower lending rates, also presents an opportunity to attain a greater return on assets used in production. The management of a firm must take this into account in order to keep costs low and returns high, so as to make the firm look like the best possible investment for creditors of all types.
### Bankruptcy Considerations

Bankruptcy occurs when an entity cannot repay the debts owed to creditors and must take action to regain solvency or liquidate.

#### LEARNING OBJECTIVE

• Describe how the risk of a corporate bankruptcy can influence a company's cost of capital

#### KEY POINTS

• Generally, a debtor declares bankruptcy to obtain relief from debt. This is accomplished either through a discharge of the debt or through a restructuring of the debt.
• In the U.S., firms that go bankrupt generally file for Chapter 7 or Chapter 11. Chapter 7 involves basic liquidation for businesses; it is also known as straight bankruptcy. Chapter 11 involves rehabilitation or reorganization while allowing the firm to continue functioning.
• When liquidation occurs, one must remember that bondholders and other lenders are paid back before equity holders. Usually, there is little to no capital left over for common shareholders.

#### TERMS

• Chapter 11: bankruptcy involving rehabilitation or reorganization, known as corporate bankruptcy. It is a form of corporate financial reorganization which typically allows companies to continue to function while they follow debt repayment plans.
• Chapter 7: bankruptcy involving basic liquidation for businesses. Also known as straight bankruptcy, it is the simplest and quickest form of bankruptcy available.
• bankruptcy: the legal status of an insolvent person or organization, that is, one who cannot repay the debts they owe to creditors.

#### FULL TEXT

Bankruptcy is a legal status of an insolvent person or organization, that is, one who cannot repay the debts they owe to creditors. In most jurisdictions bankruptcy is imposed by a court order, often initiated by the debtor. Generally, a debtor declares bankruptcy to obtain relief from debt. This is accomplished either through a discharge of the debt or through a restructuring of the debt. Usually, when a debtor files a voluntary petition, his or her bankruptcy case commences.
Chapter 9 Bankruptcy: Jefferson County, Alabama underwent Chapter 9 bankruptcy in 2009. In the U.S., firms that go bankrupt normally file for Chapter 7 or Chapter 11. Chapter 7 involves basic liquidation for businesses. It is also known as straight bankruptcy. Chapter 7 is the simplest and quickest form of bankruptcy available. Chapter 11 involves rehabilitation or reorganization and is known as corporate bankruptcy. It is a form of corporate financial reorganization that typically allows companies to continue to function while they follow debt repayment plans. When liquidation occurs, one must remember that bondholders and other lenders are paid back before equity holders. Usually, there is little or no capital left over for common shareholders. When raising financing for capital, firms must take the possibility of bankruptcy into consideration. This is especially important when looking into financing capital through debt. If potential creditors sense that bankruptcy could be likely, firms will have a harder time acquiring financing, and even if they do, it will probably come at a high interest rate that significantly increases the cost of debt. These firms will have to rely heavily on equity, which once again can be seen as a negative signal about the firm's current state. It can put downward pressure on equity values. This places a high cost on raising capital, with potential for low returns. Therefore, it is best that the firm take into consideration any possibilities of bankruptcy and work to minimize them when designing its capital structure.
# Chapter 1, Problem 25E ### Introductory Chemistry, Books a la... 1st Edition Nivaldo J. Tro ISBN: 9780133877939 Interpretation Introduction Interpretation: Observation, law, hypothesis, and theory are to be defined in our own words. Concept Introduction: To acquire scientific knowledge, the first step is to make observations. A sufficient number of similar observations results in the development of a scientific law that summarizes all the observations and predicts future observations. A hypothesis is formulated by giving a tentative explanation or interpretation of the observations made. The validity of a hypothesis is further tested by experiments to obtain similar observations. A well-established hypothesis results in the formation of a scientific theory that gives a wider and deeper explanation for the observations made and the laws.
# New entry in the Chance Wiki ## 2006-06-07 [StATS]: New entry in the Chance Wiki (June 7, 2006). Category: Wiki pages I just posted a new article in the Chance Wiki: Some earlier writings of mine in the Chance Wiki are: There are a lot of interesting articles written by others that I should cite and comment on in this weblog.
# Homework Help: Partial fractions question 1. Jun 11, 2013 ### phospho 2. Jun 11, 2013 ### NasuSama Please use the template as provided next time. You can substitute those values to find the values of $A$, $B$ and $C$, since we want values that make both sides equal, selecting a "root" for one of the factors. We want to make both sides have the same coefficients. 3. Jun 11, 2013 ### phospho I'm not sure what you mean. I know this method, I'm aware of how to do it, and I'm also aware of how to use other methods. What I don't get (still after your post) is why it's allowed to use values which the question specifies are not allowed to be used (as f(x) would be undefined). Thanks for your reply, I'll be sure to use the template next time. 4. Jun 11, 2013 ### CAF123 You are writing the fraction as: $$\frac{15 - 17x}{(2+x)(1-3x)^2} = \frac{A(1-3x)^2 + B(2+x)(1-3x) + C(2-x)}{(2+x)(1-3x)^2}.$$ For this to hold, the denominator must be the same on both sides. This is true by inspection. Similarly, the numerator on the LHS must be the same as the numerator on the RHS. This gives: $$15 - 17x = A(1-3x)^2 + B(2+x)(1-3x) + C(2-x)$$ which has to be satisfied by all $x$. The condition on x not being equal to 1/3 and -2 is so that we don't end up with f(x) being undefined. The exercise above is simply to determine A, B and C. The reason we choose 1/3 and -2 is because that simplifies things greatly. 5. Jun 12, 2013 ### Ray Vickson As CAF123 has pointed out, you have $$\frac{15-17x}{(2+x)(1-3x)^2} = \frac{A(1-3x)^2 + B(2+x)(1-3x)+C(2+x)}{(2+x)(1-3x)^2}, \quad x \neq -2,\, 1/3$$ (although CAF123 wrote $C(2-x)$ instead of $C(2+x)$---probably a typo). Therefore, we have $$15-17x = A(1-3x)^2 + B(2+x)(1-3x)+C(2+x)$$ for all $x \neq -2, \, 1/3.$ The reason we had to exclude -2 and 1/3 was so that we would not be dividing by zero.
However, when we eliminate the denominators and just look at the numerators, we are no longer prevented from setting x to -2 or 1/3, because both numerators are perfectly well-defined at those points. In fact, we have an equation of the form $$\text{polynomial 1}(x) = \text{polynomial 2}(x)$$ for all x different from -2 and 1/3. But, since polynomials are continuous everywhere, the equation also holds at the points -2 and 1/3. The reason that is important to note is that when x = -2 or x = 1/3, the right-hand-side is very easy to evaluate, so we can get immediately the numbers A and C. Since the two sides must be equal for all x, so are their derivatives; that allows us to get B as well. An alternative would be to note that since the two polynomials are equal for all x, their $x^n$ coefficients must be equal. So, if you expand out the right-hand-side, you can get three equations for the three parameters A, B and C. 6. Jun 12, 2013 ### CAF123 Hmm didn't notice that, indeed a typo.
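As a sanity check, the substitution-plus-coefficient-matching procedure described in the thread can be scripted. This is a sketch assuming SymPy is available; the variable names are mine.

```python
from sympy import symbols, Eq, solve, Rational, simplify

x, A, B, C = symbols('x A B C')

# Numerator identity (both sides are polynomials, so every x is allowed):
lhs = 15 - 17*x
rhs = A*(1 - 3*x)**2 + B*(2 + x)*(1 - 3*x) + C*(2 + x)

# x = -2 kills the B and C terms; x = 1/3 kills the A and B terms.
A_val = solve(Eq(lhs.subs(x, -2), rhs.subs(x, -2)), A)[0]
C_val = solve(Eq(lhs.subs(x, Rational(1, 3)), rhs.subs(x, Rational(1, 3))), C)[0]

# B comes from matching the x^2 coefficient once A and C are fixed:
B_val = solve(Eq((lhs - rhs.subs({A: A_val, C: C_val})).expand().coeff(x, 2), 0), B)[0]

# With A, B, C fixed, the identity holds for all x:
assert simplify(lhs - rhs.subs({A: A_val, B: B_val, C: C_val})) == 0
print(A_val, B_val, C_val)  # 1 3 4
```

This mirrors Ray Vickson's argument: the polynomial identity is evaluated at the "forbidden" points only because both sides are defined everywhere.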
Technical Article # How Standard Deviation Relates to Root-Mean-Square Values July 28, 2020 by Robert Keim ## This article explores an interesting connection between an important statistical measure and one of the fundamental analytical tools of electrical engineering. If you're just joining in on this series about statistics in electrical engineering, you may want to start with the first article introducing statistical analysis and the second reviewing descriptive statistics. Most recently, we touched on sample-size compensation when calculating standard deviations—focusing specifically on Bessel’s correction. In this article, we'll build on a previous article's discussion of standard deviation, which captures the averaged power of the random variations in a data set or digitized waveform. This averaged power is expressed as an amplitude, e.g., as volts instead of watts. Electrical engineers deal with random variations all the time. We call them noise, and they ensure that no matter how good the weather is, we will have something to complain about. We use the following formula to calculate standard deviation: $\sigma=\sqrt{\sigma^2}=\sqrt{\frac{1}{N-1}\sum_{k=0}^{N-1}(x[k]-\mu)^2}$ ### Root Mean Square (RMS) Review Most of us probably first learned about RMS values in the context of AC analysis. In AC systems, an RMS value of voltage or current is often more informative than a value that specifies the peak voltage or current, because RMS is a more direct path to power dissipation. We can’t use a peak voltage or current value when calculating power dissipation because the voltage or current is constantly varying, and consequently the instantaneous power dissipation also varies. A calculation based on peak value would overestimate the time-averaged power. RMS amplitudes allow us to calculate power dissipation as though we were working with DC quantities. 
More specifically, the RMS amplitude of a sinusoidal voltage or current is equal to the amplitude of a DC signal that would create the same amount of time-averaged power dissipation. A 12 V battery connected to a 10 Ω resistor will generate 12²/10 = 14.4 W of (instantaneous and average) power. If we replace the battery with an AC supply voltage that has an RMS amplitude of 12 V, the (average) power will be the same. Calculating RMS amplitudes is easy when we’re working with sinusoidal signals: we just divide the peak value by √2. The following diagram provides an interesting illustration of this relationship. ##### Here, we calculate the RMS amplitudes of sinusoidal signals by dividing the peak value by √2. Power is proportional to the square of voltage or current. A DC voltage of 1 V connected to a circuit with resistance R will generate 1²/R = 1/R watts of power. We can see by inspection that the blue curve has an average value of 1; thus, since the blue curve is equal to the red curve squared, the average power generated by the red curve will also be 1/R. Now notice the peak value of the red curve: it’s √2 (approximately 1.4). This confirms that we need to divide the peak value by √2 in order to identify the amplitude that will produce the correct average power when the standard formula—V²/R or I²R—is applied. ### The Full RMS Calculation Those of us who frequently work with AC electrical systems need to remember that RMS amplitudes are not limited to sinusoidal signals. Furthermore, the mathematical procedure that generates an RMS amplitude is significantly more complicated than dividing by √2. It just so happens that with sinusoids, the procedure is equivalent to dividing by √2. This simplification does not apply to other types of signals such as square waves, triangular waves, or noise. ##### The horizontal line indicates the RMS amplitude of this noise waveform. The peak value of random noise tends to be 3 to 4 times higher than the RMS amplitude.
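The peak-over-√2 relationship for sinusoids is easy to verify numerically. The following sketch (NumPy assumed; the sample count is an arbitrary choice) samples one full period of a sine with peak √2 and applies the square, mean, square-root steps directly:

```python
import numpy as np

# One full period of a sinusoid with peak sqrt(2), sampled finely.
t = np.linspace(0.0, 1.0, 100_000, endpoint=False)
peak = np.sqrt(2.0)
v = peak * np.sin(2.0 * np.pi * t)

# Root mean square: square the signal, take the mean, take the square root.
rms = np.sqrt(np.mean(v ** 2))
print(rms)  # ~1.0, i.e. peak / sqrt(2)
```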
The actual RMS calculation—i.e., the calculation that we apply to signals in general—is expressed as follows: $X_{RMS}=\sqrt{\frac{1}{T_2-T_1}\int_{T1}^{T2}x(t)^2dt}$ Here’s the procedure in words: Assume that x(t) is a time-domain signal that is periodic over the interval from time T1 to time T2. We square x(t), integrate this squared signal over the relevant interval, divide the integrated value by the length of the interval, and then take the square root. Integrating from T1 to T2 and then dividing by (T2–T1) is analogous to summing all the values in the signal and dividing by the number of values. In other words, performing these two steps is the time-domain equivalent of calculating the arithmetic mean of a data set. Thus, we are taking the square root of the mean of the squared signal: root mean square. ### RMS with Discrete Data How would we convert the formula given above into something that we can apply to discrete data? In other words, how can we calculate the RMS amplitude of a digitized waveform? Let’s look at it this way: First, we square individual values (e.g., x[1], x[2], x[3], etc.) instead of a function (e.g., x(t)). Next, when we move from a continuous-time signal to a discrete-time signal, integration becomes summation and a time interval becomes an “interval” of data points, i.e., the number of data points that were summed. Finally, we have the square root, which doesn’t change. Thus, we can write our discrete-time RMS calculation as follows: $X_{RMS}=\sqrt{\frac{1}{N}(x[1]^2 + x[2]^2 + ... + x[N]^2)}$ Is this beginning to look familiar? We’re squaring values, summing them, dividing by the number of values, and then taking the square root. There are only two differences between this procedure and the procedure that we use to calculate standard deviation: • With RMS, we divide by N; with standard deviation, we (usually) divide by N–1. 
We can ignore this difference because the use of N–1 is just an attempt to compensate for small sample size (see the previous article for more information). • With RMS, we square the data points; with standard deviation, we square the difference between each data point and the mean. If we’re trying to establish equivalency between RMS and standard deviation, the second difference might seem like a deal-breaker. However, consider this: if the mean is zero, as is often the case in electrical signals, there is no difference between the RMS calculation and the standard-deviation calculation. In other words, for a signal with no DC offset, the standard deviation of the signal is also the RMS amplitude. ### Conclusion I’m not going to attempt to explore the full significance of this equivalency between standard deviation and root mean square. Nonetheless, before we finish up, I want to mention two interesting points that emerge from this discussion. First, standard deviation gives us the “AC coupled” RMS amplitude of a waveform: we can calculate standard deviation when the DC offset of a signal is irrelevant, and this gives us the RMS amplitude of only the AC portion. Second, standard deviation can be interpreted as a quantification of noise, and noise analysis is closely linked to the root mean square.
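As a closing illustration, the equivalence argued above, that RMS equals standard deviation for a zero-mean signal, can be demonstrated on synthetic noise. The scale, offset, and seed below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(42)
noise = rng.normal(loc=0.0, scale=0.5, size=100_000)  # zero-mean "AC" signal

rms = np.sqrt(np.mean(noise ** 2))  # square, average (divide by N), square root
std = np.std(noise)                 # population std (also divides by N)

# Adding a DC offset changes the RMS amplitude but not the standard deviation:
shifted = noise + 2.0
rms_shifted = np.sqrt(np.mean(shifted ** 2))
std_shifted = np.std(shifted)

print(rms, std)                  # nearly identical (sample mean is ~0, not exactly 0)
print(rms_shifted, std_shifted)  # RMS grows with the offset; std is unchanged
```

The small discrepancy between `rms` and `std` comes only from the sample mean not being exactly zero, which is the "AC coupled" interpretation given above.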
# \mathcal gives different output where it should not I have multiple occurrences of \mathcal in my document and they do not use the same style. Consider this MWE: \documentclass{standalone} \usepackage{mathtools} \usepackage{unicode-math} \setmainfont[Ligatures=TeX]{STIX} \setmathfont{XITS Math} \setmathfont[range={\mathcal},StylisticSet=1]{XITS Math} \begin{document} $\operatorname{O}$ $\mathcal{S}$ $\text{$P$}$ $\mathcal{S}$ \end{document} As you can see, the two "S" characters are different, which they should not be. What have I done wrong? Versions used: • XeTeX, Version 3.14159265-2.6-0.99996 (TeX Live 2016) • mathtools 2015/11/12 v1.18 • amsmath 2016/06/28 v2.15d • unicode-math 2015/09/24 v0.8c Loading mathtools after unicode-math solves this problem, but messes up other things. For example \underbrace{X}_{0} then gives: • Looks okay in texlive 2017. I get the second S in both places. It works in texlive 2016 with unicode-math 2017/01/27 v0.8d too. – Ulrike Fischer Oct 1 '17 at 9:55 This was a bug in unicode-math. See for example https://github.com/wspr/unicode-math/issues/356.
# deepgraph.deepgraph.DeepGraph.plot_map¶ DeepGraph.plot_map(lon, lat, edges=False, C=None, C_split_0=None, kwds_basemap=None, kwds_scatter=None, kwds_quiver=None, kwds_quiver_0=None, ax=None, m=None) Plot nodes and corresponding edges on a basemap. Create a scatter plot of the nodes in v and optionally a quiver plot of the corresponding edges in e on a mpl_toolkits.basemap.Basemap instance. The coordinates of the scatter plot are determined by the node’s longitudes and latitudes (in degrees): v[lon] and v[lat], where lon and lat are column names of v (the arrow’s coordinates are determined automatically). In order to map colors to the arrows, either C or C_split_0 can be passed, an array of the same length as e. Passing C creates a single quiver plot (qu). Passing C_split_0 creates two separate quiver plots, one for all edges where C_split_0 == 0 (qu_0), and one for all other edges (qu). By default, the arrows of qu_0 have no head, indicating “undirected” edges. This can be useful, for instance, when C_split_0 represents an array of temporal distances. In order to control the parameters of the basemap, scatter, quiver and/or quiver_0 plots, one may pass keyword arguments by setting kwds_basemap, kwds_scatter, kwds_quiver and/or kwds_quiver_0. Can be used iteratively by passing ax and/or m. Parameters: lon (int or str) – A column name of v. The corresponding values must be longitudes in degrees. lat (int or str) – A column name of v. The corresponding values must be latitudes in degrees. edges (bool, optional (default=False)) – Whether to create a quiver plot (2-D field of arrows) of the edges between the nodes. C (array_like, optional (default=None)) – An optional array used to map colors to the arrows. Must have the same length as e. Has no effect if C_split_0 is passed as an argument. C_split_0 (array_like, optional (default=None)) – An optional array used to map colors to the arrows. Must have the same length as e.
If this parameter is passed, C has no effect, and two separate quiver plots are created (qu and qu_0). kwds_basemap (dict, optional (default=None)) – kwargs passed to basemap. kwds_scatter (dict, optional (default=None)) – kwargs to be passed to scatter. kwds_quiver (dict, optional (default=None)) – kwargs to be passed to quiver (qu). kwds_quiver_0 (dict, optional (default=None)) – kwargs to be passed to quiver (qu_0). Only has an effect if C_split_0 has been set. ax (matplotlib axes object, optional (default=None)) – An axes instance to use. m (Basemap object, optional (default=None)) – A mpl_toolkits.basemap.Basemap instance to use. obj – If C_split_0 has been passed, return a dict of matplotlib objects with the following keys: [‘fig’, ‘ax’, ‘m’, ‘pc’, ‘qu’, ‘qu_0’]. Otherwise, return a dict with keys: [‘fig’, ‘ax’, ‘m’, ‘pc’, ‘qu’]. dict Notes When passing C_split_0, the color of the arrows in qu_0 can be set by passing the keyword argument color to kwds_quiver_0. The color of the arrows in qu, however, are determined by C_split_0. The default drawing order is set to: 1. quiver_0 (zorder=1) 2. quiver (zorder=2) 3. scatter (zorder=3) This order can be changed by setting the zorder in kwds_quiver_0, kwds_quiver and/or kwds_scatter. See also http://matplotlib.org/examples/pylab_examples/zorder_demo.html
# MOSFET Common Source Output Analysis What is the difference between those two circuits, when only the output node is changed? Is there any change in the output resistance, voltage gain, or somewhere else? simulate this circuit – Schematic created using CircuitLab simulate this circuit • What you should read more into would be the "Difference between high-side and low-side MOSFET Drivers". I believe you will find most of your answers there. – 12Lappie Mar 31 '17 at 13:45 • Is there any changes in the output resistance, voltage gain, or somewhere else? Yes, do the small signal analysis of both and you will know. – Bimpelrekkie Mar 31 '17 at 14:01 • Neither of your circuits has an input indicated, so I don't see how either one can be considered an amplifier. – The Photon Mar 31 '17 at 17:42 • @ThePhoton: You know what I meant. I know you could probably and most possibly guess where I am aiming with this question. – Keno Mar 31 '17 at 21:08 Yes, there is a huge difference between those two circuits. When the output voltage is taken from the source terminal we have a Source Follower (common-drain amplifier). The output voltage is $V_{GS}$ lower than the voltage at the MOSFET gate. The voltage gain is less than one ($A_V = \frac{R_4}{\frac{1}{g_m} + R_4}$) and $R_{out} = 1/g_m$ (low). The second circuit is a classic Common Source with a Source Degeneration resistor (R4). The voltage gain is equal to $A_V = -\frac{R_3}{R_4+ \frac{1}{g_m}}$ The voltage gain is negative but usually we ignore this "minus" sign, because it only informs us about the 180-degree phase shift between Vin and Vout. When the voltage at the MOSFET gate increases, the drain current also increases. The current in the drain resistor R3 increases ($I_D = I_{R3}$), which increases the voltage drop across it ($V_{R3} = I_D R_3$), so the drain voltage decreases ($V_D = V_{DD} - I_D R_3$), which is 180 degrees out of phase with the change in gate voltage. And the output resistance is $R_{out} = R_3$.
• Is it true that with "Rs" (providing negative feedback to the MOSFET) in the circuit, "Q" is stabilized? I mean, if DC bias conditions are applied for "Q" to be in the saturation region and the gate voltage for some reason increases, "Q" stays unchanged - because of the negative feedback. Is this true? – Keno Mar 31 '17 at 21:26 • Or does this apply if the temperature changes - thermal stability? – Keno Mar 31 '17 at 21:44 This is the classic 0/180 balanced output reversed-phase amplifier. You use it in certain audio amplifiers. If the active device is biased for high gm (transconductance), the bottom voltage swing is nearly identical to the top voltage swing. simulate this circuit – Schematic created using CircuitLab • Which "this" are you looking at? OP showed two entirely different amplifier circuits. – The Photon Mar 31 '17 at 17:41 • Could you maybe explain this in a more detailed manner? I know what you want to explain but I would be grateful for a more detailed answer. – Keno Mar 31 '17 at 21:12 Is there any changes in the output resistance, voltage gain, or somewhere else? as there is no input signal, those questions are not answerable. after that, you will need to know about the characteristics of the input signal to answer your questions with confidence. • How many times do I have to tell you this is only a symbolic circuit. Of course there is an input signal! – Keno Apr 1 '17 at 16:17
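The gain and output-resistance formulas in the accepted answer can be illustrated with a few plugged-in numbers. The component values below (gm, R3, R4) are illustrative assumptions, not taken from the thread.

```python
# Illustrative small-signal values (assumed):
gm = 5e-3    # transconductance, 5 mS
R3 = 10e3    # drain resistor, ohms
R4 = 1e3     # source resistor, ohms

# Source follower (output at the source): gain just under 1, low Rout.
Av_follower = R4 / (1.0 / gm + R4)
Rout_follower = 1.0 / gm

# Common source with source degeneration: inverting gain, Rout = R3.
Av_cs = -R3 / (R4 + 1.0 / gm)
Rout_cs = R3

print(Av_follower, Rout_follower)  # ~0.833, ~200 ohms
print(Av_cs, Rout_cs)              # ~-8.33, 10000 ohms
```

Note how the same 1/gm term appears in both expressions: it degrades the follower's gain toward (but never past) unity, and it softens the common-source gain relative to the ideal -R3/R4.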
# Dimension of Lie algebra Let $o(2l,F)$, with $l \ge2$ and $n=2l$, be the orthogonal Lie algebra $\{L\in gl(n,F)\mid SL=-L^{t}S\}$ where $S=\begin{pmatrix} 0 &I_{l} \\ I_{l} & 0 \end{pmatrix}$. How can I show that $\dim(o(2l,F))=2l^{2}-l$? - How are $l$ and $n$ linked? – Davide Giraudo Jan 31 '12 at 18:45 It doesn't seem to make sense unless $n=2l$. I'm more curious about the "$s$" that shows up unintroduced in the last formula. – Henning Makholm Jan 31 '12 at 18:59 If instead we consider $\{L\in F^{2l\times 2l}\mid SL=-L^t S\}$, then we do get a Lie algebra with the standard bracket $[A,B]=AB-BA$. – Henning Makholm Jan 31 '12 at 19:12 @HenningMakholm, sorry for the confusion, I have edited the question. – Edison Jan 31 '12 at 19:32 We write $L=\pmatrix{A&B\\ C&D}$. Then $L\in o(2l,F)$ iff $SL=-L^tS$. We have $$SL=\pmatrix{0&I\\ I&0}\pmatrix{A&B\\ C&D}=\pmatrix{C&D\\ A&B}$$ and $$-L^tS=-\pmatrix{A^t&C^t\\ B^t&D^t}\pmatrix{0&I\\ I&0}=\pmatrix{-C^t&-A^t\\ -D^t&-B^t}$$ so $C$ and $B$ have to be skew-symmetric and $D=-A^t$. So $L=\pmatrix{A&0\\ 0&-A^t}+\pmatrix{0&B\\ 0&0}+\pmatrix{0&0\\ C&0}$. Denote by $E_{rc}$ the matrix whose entries are $0$ except the entry in row $r$ and column $c$, which is $1$. Then $o(2l,F)$ is spanned by $$\left\{\pmatrix{E_{rc}&0\\\ 0&-E_{cr}},1\leq c,r\leq l\right\}\cup\left\{\pmatrix{0&E_{rc}-E_{cr}\\\ 0&0},1\leq c<r\leq l\right\}\cup\left\{\pmatrix{0&0\\\ E_{rc}-E_{cr}&0},1\leq c<r\leq l\right\},$$ which is linearly independent. The dimension of the $l\times l$ skew-symmetric matrices is $\frac{l(l-1)}2$, so $$\dim o(2l,F)=2\frac{l(l-1)}2+l^2=l^2-l+l^2=2l^2-l.$$
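The dimension count can also be double-checked numerically: the constraint $SL=-L^{t}S$ is linear in the entries of $L$, so $\dim o(2l,F)$ is the nullity of the map $L \mapsto SL + L^{t}S$ (over the reals, as a stand-in for a general field). A sketch using NumPy:

```python
import numpy as np

def dim_o(l):
    """Nullity of L -> S L + L^T S on n x n matrices, n = 2l."""
    n = 2 * l
    S = np.block([[np.zeros((l, l)), np.eye(l)],
                  [np.eye(l), np.zeros((l, l))]])
    # Apply the constraint map to each basis matrix E_ij and stack the results.
    cols = []
    for i in range(n):
        for j in range(n):
            E = np.zeros((n, n))
            E[i, j] = 1.0
            cols.append((S @ E + E.T @ S).ravel())
    M = np.array(cols).T  # maps vec(L) to vec(S L + L^T S)
    return n * n - np.linalg.matrix_rank(M)

for l in range(2, 6):
    assert dim_o(l) == 2 * l * l - l
print("dim o(2l, F) = 2l^2 - l verified for l = 2..5")
```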
# Relation between displacement current, dielectric and time varying Electric field Physics Asked by sizzling_. on September 30, 2020 I know that displacement current is produced in dielectric material due to dipole moment. I also know that displacement current is produced by a time-varying electric field (according to Maxwell's equations), so why is this displacement current not produced in a dielectric material with a steady electric field? And also, if the medium is free space, is there any effect of time-varying fields? I am not sure if my question is correct. Could someone please explain the concept clearly and point out what I am missing? A displacement field is produced by the polarization of a dielectric. A displacement current is produced by a time-varying electric field. The two concepts are completely different. A displacement field does not cause displacement current, and a displacement current is not affected by a displacement field. There is no displacement current in a dielectric with a steady electric field because a displacement current is produced by a time-varying electric field, not a steady one. There is a displacement current in free space produced by a time-varying electric field because if there's a time-varying electric field, then there's a displacement current. Although the two concepts are completely different, they do both have "displacement" in their name, which as CuriousOne mentioned is quite confusing. Furthermore, a displacement current isn't an actual current, with charges moving around and all that; it just has an associated magnetic field as if it were a real current. Hopefully that clears things up. Answered by eyqs on September 30, 2020
Jay ☆ India, 2018-03-26 13:31 (edited by Jay on 2018-03-26 13:50) Posting: # 18595 Views: 3,680 ## FDA: RSABE for NTID [RSABE / ABEL] Dear all, In one of the full replicate studies of an NTID the results are as below: Parameters   Swr    95%CI LnCmax       0.04   0.0001 LnAUC0-t     0.07   -0.003 LnAUC0-inf   0.07   -0.004 The ABE results are as below: Parameters   T/R    90%CI-L    90%CI-U LnCmax       97     95         99 LnAUC0-t     99     97         101 LnAUC0-inf   99     97         101 The point estimate is near 100 and ABE is also within 80% to 125%. But in SABE, the Cmax criterion is more than 0, i.e. 0.0001, which leads to bioinequivalence. So can the 95% CI of Cmax be scientifically justified, as it is very borderline to 0 and the T/R ratio shows only a small difference between test and reference? Thanks -Jay Edit: Category and subject line changed; see also this post #1 and #2. [Helmut] jag009 ★★★ NJ, 2018-03-26 17:57 @ Jay Posting: # 18596 Views: 3,265 ## FDA: RSABE for NTID Did you check the individual data? Any strange ones? If FDA, "no". As in, good luck in trying to convince them. John Jay ☆ India, 2018-04-02 06:28 @ jag009 Posting: # 18628 Views: 2,831 ## FDA: RSABE for NTID I would like to understand how a study passing ABE within 80% to 125%, with a low T/R and low ISCV, does not meet the BE criteria of scaled average bioequivalence. We generally proceed with the scaled approach when the ISCV is high (more than 30%) and the BE criteria would not be met with the ABE approach. Regards, Jay pjs ★ India, 2018-04-02 15:07 (edited by pjs on 2018-04-02 15:17) @ Jay Posting: # 18633 Views: 2,789 ## FDA: RSABE for NTID Dear Jay, » I would like to understand that how come the study passing in ABE within 80% to 125%, low T/R and low ISCV would not meet the BE criteria of scaled average bioequivalence. As per the Swr data shared, the scaled BE limits would be 95.87-104.30%. Now as per the ABE data you shared, the lower limit is mentioned to be 95%.
Now please note that the variability in ABE and SABE would be different, as different models are used for the estimation. Also, for SABE only subjects who completed all four periods would be included, while in ABE any subject with at least one reference and one test product would be included. You may refer to the lower limit of the ilat output in the SABE approach, which would be outside 95.87%. This could be the reason behind the upper bound value being marginally more than 0. Just one quick question: in the outputs for the upper bound, to which decimal should the value be rounded? If you consider three decimals the study is passing, and with four decimals the same is failing. Also, with variability as low as 4%, you need to check the test product variability to comply with the variability comparison criterion. Just to check with forum members: is there any method for outlier detection in such full replicate studies? In one of the earlier posts there was discussion of the method proposed by László for the same. Any further update on the acceptability of this approach? Can you please confirm how many subjects were included in the study and how the sample size was defined. Regards Pjs Jay ☆ India, 2018-04-04 14:24 @ pjs Posting: # 18639 Views: 2,504 ## FDA: RSABE for NTID Dear pjs, In any two-way ABE study, if the 90% CI is not within the range of 80-125%, most of the time the same is reflected in the T/R ratio and also in the dissolution graphs. For a highly variable product, if the 95% CI is not ≤0, the T/R ratio would be very high and beyond the range of 80 to 125%. For NTIDs, as per the USFDA's GL, the below three criteria should be followed: - Upper 95% CI ≤0 (SABE) - 90% CI should be within 80-125% - UL of 90% CI for σWT/σWR should be ≤ 2.5 (ABE) As per the case mentioned in the above post, the ISCV of test and reference is very low (around 6-7), T/R is near 100, and the BE criteria are also met in ABE.
So if only the 95% CI in SABE does not meet the criterion (0.0001), how can the difference between test and reference be interpreted? -Jay Helmut ★★★ Vienna, Austria, 2018-04-04 15:03 @ Jay Posting: # 18640 Views: 2,555 ## β = 1 – π Hi Jay, » In any two way ABE study, if the 90%CI is not within the range of 80-125%, most of time the same is reflected in the T/R … Well, the CI is constructed around the GMR. » … and also the dissolution graphs. Nope. Only if dissolution would be the rate-limiting step. Otherwise in vitro cannot (!) be predictive of in vivo. There are tons of studies with completely dissimilar dissolution profiles which passed BE easily. » For NTID as per USFDAs GL below three criteria should be followed, » - Upper 95% CI ≤0 (SABE) » - 90% CI should be within 80-125% » - UL of 90% CI for σWT/σWR should be ≤ 2.5 (ABE) You are mixing something up. Actually: - Upper 95% CI ≤0 (SABE) - 90% CI should be within 80-125% (ABE) - UL of 90% CI for σwT/σwR should be ≤ 2.5 (comparison of variabilities) » As per the case mentioned in above post, the ISCV of test and ref is very low (around 6-7), T/R is near to 100 and also meeting BE criteria in ABE. So if only 95% CI in SABE does not meet criteria (0.0001) so how the reason for the difference between test and reference can be interpreted. The FDA’s “implied BE limits” can be calculated by $e^{\mp \log{(1.11111)}\cdot \sqrt{\log{(CV^2+1)}}/0.1}$ (with CV as a fraction), which are in your case: CV (%)  BE-limits (%)   6     93.88  106.52   7     92.90  107.64 Does the conventional ABE (90% CI) pass these narrower limits? These three tests are not directly related. Fine if you pass the latter two. Bad luck if you fail the first by a small margin. How was the study powered? If you targeted a power of 80–90%, the chance of failing for a product which is BE would be 10–20%. Daily life. See John’s reply above.
Counterquestion: Did you ever succeed in convincing the FDA to accept a conventional BE study which showed a 90% CI of 79.99–125.01%? Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. ☼ Science Quotes Nirali ★ India, 2019-01-30 15:38 @ Helmut Posting: # 19830 Views: 1,348 ## Warfarin Sodium 95% Upper bound criteria Dear Helmut, I need the same guidance for a Warfarin Sodium case. If the 95% upper bound for the Cmax parameter is 0.002, should we conclude this study as bioequivalent for Cmax? There are no specific criteria given for the 95% upper bound in the OGD of Warfarin Sodium. Regards Nirali Helmut ★★★ Vienna, Austria, 2019-02-02 14:15 @ Nirali Posting: # 19844 Views: 1,198 ## Upper bound in RSABE ≤0 Hi Nirali, » If the 95% upper bound for Cmax parameter is 0.002, should we conclude this study as a bioequivalent for Cmax? No, since the upper bound has to be less than or equal to zero. No rounding here … » There is no specific criteria given for 95% upper bound in the OGD of Warfarin Sodium. ≤0 is the general rule for the FDA’s RSABE. • Endrényi L, Tóthfalusi L. Determination of Bioequivalence for Drugs with Narrow Therapeutic Index: Reduction of the Regulatory Burden. J Pharm Pharm Sci. 2013;16(5):676–82. PMID 24393551. Cheers, Helmut Schütz Nirali ★ India, 2019-02-12 07:10 @ Helmut Posting: # 19902 Views: 1,108 ## Upper bound in RSABE ≤0 Thank you Helmut. Nirali Mehta (PHARMA-STATS)
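For reference, the "implied BE limits" formula quoted earlier in the thread can be evaluated with a few lines of code. This is a sketch of that calculation only (CV entered as a fraction, regulatory constant 0.1); it is not a full RSABE analysis.

```python
import math

def implied_be_limits(cv):
    """Implied limits exp(-/+ log(1.11111) * s_wR / 0.1),
    with s_wR = sqrt(log(CV^2 + 1)) and CV given as a fraction."""
    s_wr = math.sqrt(math.log(cv ** 2 + 1.0))
    half_width = math.log(1.11111) * s_wr / 0.1
    return math.exp(-half_width), math.exp(half_width)

for cv in (0.06, 0.07):
    lo, hi = implied_be_limits(cv)
    print(f"CV {cv:.0%}: {100 * lo:.2f}-{100 * hi:.2f}%")
# CV 6%: 93.88-106.52%
# CV 7%: 92.90-107.64%
```

The output reproduces the small table in Helmut's post: the narrower the within-subject variability, the narrower the implied limits become.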
# [java] Unicode Question
This topic is 4669 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
I want a program that can accept Unicode characters as input and output them to the screen. I had heard that Java supports double-byte characters by default (which is why I am attempting to port this project to Java from C++), but I have found only vague references to it in tutorials and books. Say I try to create a string with Unicode characters in the constructor: String abs = new String("おほようございます","ISO8859_1"); This should work, shouldn't it? But it doesn't: instead of おほようございます I'm getting a series of squares, and the compiler flags it as an error. Is the problem with NetBeans IDE 3.6 or with the language itself?
##### Share on other sites
Hi, I usually do the following in networking code: URLEncoder.encode(new String("international string", "iso-8859-1"), "iso-8859-1"); Which is, of course, bad programming, but it works just fine. I have had several similar problems with character encoding, and I have searched a lot for a good tutorial that could clear my doubts, but with no success at all. Son Of Cain
##### Share on other sites
You cannot type the literal characters into your source file and have them work; Java does support the data natively with its data types and classes (char, java.lang.String etc.), but the compiler assumes the source file is encoded in Latin-1 unless told otherwise (e.g. via javac's -encoding option). To represent those characters portably, escape them with \u#### (four hex digits) sequences.
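A minimal demo of the \u#### approach suggested in the last reply. The string literal below contains only ASCII characters yet spells the hiragana greeting おはよう (U+304A U+306F U+3088 U+3046), so the compiler's assumed source encoding no longer matters; the class name is mine.

```java
// Demo: building a Japanese string from \u#### escapes instead of raw characters.
public class UnicodeEscapeDemo {
    static String greeting() {
        return "\u304a\u306f\u3088\u3046"; // the literal itself is pure ASCII
    }

    public static void main(String[] args) {
        String s = greeting();
        System.out.println(s.length());        // 4: one char per code point here
        System.out.println((int) s.charAt(0)); // 12362, i.e. 0x304A
    }
}
```

Whether the characters then render correctly on screen is a separate (font and console encoding) problem.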
# Terminology ¶ Go back
Learn about the terminology for scheduling problems (optimal cost, early/last start time, and total/free/certain margin).

optimal cost/duration
Also called durée optimale/coût optimal (optimal duration/cost), this is the least number of days the project will last. It equals the early start time of the last task (usually called END). Note: we are using the notation $A(10)$ for a task $A$ having a duration/cost of $10$.

early start time
In French, it's called date au plus tôt. It's the number of days you will have to wait before this task can start.
for the first one, it's 0
for the next ones, it's the maximum over the predecessors of (the predecessor's early start time plus its duration/cost)
Ex: If a task $C$ needs the tasks $A(10)$ and $B(20)$, and they are both starting after 30 days, then the task C will start after at least $\max(30+10, 30+20)=50$. The early start time for $C$ is $50$.

last start time
In French, it's called date au plus tard. This is the latest date at which a task can start; if we pass this date, then the optimal duration of the project will increase. You compute it starting from the end.
for the last one, it's its "early start time" value
for the previous ones, it's the minimum over the successors of (the successor's last start time minus the cost of the edge leading to it)
Ex: If a task $C(last=???)$ has two successors $D(cost=4, last=45)$ and $E(cost=7, last=45)$, then $\min(45-4,45-7)=38$, so we have $C(last=38)$.

margin
total
Also called marge totale. This is the maximum delay that we can take on a task without affecting the optimal cost. Operation: last - early. Ex: If $D(early=42, last=45)$ then the total margin is $45-42=3$.
free
Also called marge libre. Same as the total margin, but additionally without changing the starting date of the next task.
Operation: solve for $x$:
$$x + \text{early\_start} + \text{cost} \le \text{early\_start\_successor} \quad \forall\, \text{successors}$$
Ex: the free margin for $I(cost=7, early=30)$ having one successor $J(early=69)$ is $x + 30 + 7 \le 69 \Leftrightarrow x = 69-37=32$.

certain
Also called marge certaine. Same as the free margin, but considering that every task started with its maximum delay. Operation: $a$ is the certain margin value of the task $A$ if for every successor $p$ of $A$, $\text{last\_start}_p - (\text{last\_start}_A + cost(A \to p)) \ge a \ge 0$.
Ex: If a task $C(last=38)$ has two successors $D(cost=4, last=45)$ and $E(cost=7, last=45)$ then we have
• $45-(38+4)=3$
• $45-(38+7)=0$
• certain margin = 0

Instead of margin, you can use
• "optimistic time estimate" instead of total margin
• "normal time estimate" instead of free margin
• "pessimistic time estimate" instead of certain margin
but I will use a French-friendly kind of name.

## Alternative example ¶
The early start time for C is
$$\text{early start for C} = \max_{i \text{ predecessor of C}} \left(\text{early\_start}_i + cost(i \to C)\right)$$
If you have a task C that can only be done after the task "A" and the task "B", then simply check which task you will have to wait for the longest (= the maximum), where the "wait" value is the predecessor's start plus its cost/duration.
The last start time for C is the same as its early start time, because C is the last task.
The last start time for A is
$$\text{last start for A} = \min_{i \text{ successor of A}} \left(\text{last\_start}_i - cost(A \to i)\right) = \min(30 - 30) = 0$$
The last start time for B is
$$\text{last start for B} = \min_{i \text{ successor of B}} \left(\text{last\_start}_i - cost(B \to i)\right) = \min(30 - 20) = 10$$
The optimal cost/duration is simply the last task's early start + cost. Since we don't have a cost for C, we can assume that the optimal duration is $30$. C's task name should have been END.
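The rules above can be sketched as a few small functions, checked against the page's own numbers. The class and method names (and the flat array encoding of the graph) are mine, not part of the course material.

```java
import java.util.List;

// Sketch of the scheduling formulas: early start, total/free/certain margins.
public class Scheduling {
    /** Early start: max over predecessors of (pred early start + edge cost). */
    static int earlyStart(List<int[]> preds) { // each entry: {predEarly, edgeCost}
        int best = 0;
        for (int[] p : preds) best = Math.max(best, p[0] + p[1]);
        return best;
    }

    /** Total margin: last start - early start. */
    static int totalMargin(int early, int last) { return last - early; }

    /** Free margin: min over successors of (succEarly - (early + cost)). */
    static int freeMargin(int early, int cost, int[] succEarly) {
        int x = Integer.MAX_VALUE;
        for (int e : succEarly) x = Math.min(x, e - (early + cost));
        return x;
    }

    /** Certain margin: min over successors of (succLast - (last + edge cost)), floored at 0. */
    static int certainMargin(int last, int[][] succs) { // each entry: {succLast, edgeCost}
        int m = Integer.MAX_VALUE;
        for (int[] s : succs) m = Math.min(m, s[0] - (last + s[1]));
        return Math.max(m, 0);
    }

    public static void main(String[] args) {
        // C after A(10) and B(20), both available after 30 days -> early(C) = 50.
        System.out.println(earlyStart(List.of(new int[]{30, 10}, new int[]{30, 20})));
        System.out.println(totalMargin(42, 45));              // task D: 3
        System.out.println(freeMargin(30, 7, new int[]{69})); // task I: 32
        System.out.println(certainMargin(38, new int[][]{{45, 4}, {45, 7}})); // task C: 0
    }
}
```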
compressibility and rarefaction effects for three-dimensional gas flow in square microchannels, to investigate the difference between slip and no-slip boundary condition effects. The choice to define compressibility as the negative of the fraction makes compressibility positive in the (usual) case that an increase in pressure induces a reduction in volume. The three-dimensional-flow tunnel is described in references 4, 8, 9, and 10. Void ratio is used to represent compression because it is a ratio of the volume of voids to the volume of solids, the former being reflective of compression and the latter being constant in soil compression. Hence, the volume correction also will be small and negligible. The temperature at which a real gas behaves like an ideal gas over a long range of pressure is the Boyle temperature of the gas. Van der Waals considered that two hard-sphere particles can come as close as to touch each other, and that they will not allow any other particle to enter that volume, as shown in the diagram. Therefore, the Van der Waals equation was devised; it helps us define the physical state of a real gas. Compressibility is the measure of a liquid's relative volume change when the pressure acting on it changes. PVm < RT. Every measurement has two parts: a number (n) and a unit (u). The void space can be full of liquid or gas. Compressibility of a material is the reciprocal of its bulk modulus of elasticity. If n1 and n2 are the numerical values of a physical quantity corresponding to the units u1 and u2, then n1u1 = n2u2. It is essential to derive the compressibility equation for a 2D system. Geologic materials are made up of two portions: solids and voids (the latter also expressed as porosity).
The dimensional formula for compressibility is $[M^{-1}L\,T^{2}]$, the reciprocal of that of pressure. It is denoted by beta (β). The speed of sound is defined in classical mechanics as $c=\sqrt{K/\rho}$, where $K$ is the bulk modulus and ρ is the density of the material. Able to calculate the critical conditions of liquefaction and derive an expression of the Principle of Corresponding States. This can happen over a period of time, resulting in settlement. In general, the bulk compressibility (the sum of the linear compressibilities on the three axes) is positive; that is, an increase in pressure squeezes the material to a smaller volume. Here γ is the heat capacity ratio, α is the volumetric coefficient of thermal expansion, and ρ = N/V is the particle density. But the particles on the surface and near the walls of the container do not have particles above the surface and on the walls. Hence at low pressures, the volume will be larger. Hydrogen and helium are examples. Ions or free radicals transported to the object surface by diffusion may release this extra (nonthermal) energy if the surface catalyzes the slower recombination process. In transition regions, where this pressure-dependent dissociation is incomplete, both beta (the volume/pressure differential ratio) and the differential, constant-pressure heat capacity greatly increase. The Van der Waals equation derivation is based on correcting the pressure and volume of the ideal gas given by the kinetic theory of gases. It is an important concept in geotechnical engineering in the design of certain structural foundations.
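The speed-of-sound relation can be illustrated numerically with $c=\sqrt{K/\rho}$ (equivalently $c = 1/\sqrt{\rho\beta}$, since β = 1/K). The water figures used below (K ≈ 2.2 GPa, ρ ≈ 998 kg/m³) are typical textbook values, used here only for illustration.

```java
// Numerical check of c = sqrt(K / rho) for a nearly incompressible liquid.
public class SoundSpeed {
    static double speed(double bulkModulusPa, double densityKgM3) {
        return Math.sqrt(bulkModulusPa / densityKgM3);
    }

    public static void main(String[] args) {
        double c = speed(2.2e9, 998.0);
        System.out.printf("c = %.0f m/s%n", c); // ~1485 m/s, close to the measured ~1480 m/s
    }
}
```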
These effects, often several of them at a time, made it very difficult for World War II era aircraft to reach speeds much beyond 800 km/h (500 mph). This pressure-dependent transition occurs for atmospheric oxygen in the 2,500–4,000 K temperature range, and in the 5,000–10,000 K range for nitrogen.[3] In its simple form, the compressibility β may be expressed as $\beta = -\frac{1}{V}\frac{\partial V}{\partial p}$, where V is volume and p is pressure. The equation can be arranged as a cubic equation in volume. The isothermal compressibility is generally related to the isentropic (or adiabatic) compressibility by a few relations.[4] • Correlation energy is also negative, but negligible. • Thickness of layer reduces the coulomb interaction between carriers, which reduces the effect of negative compressibility. Hence, in real gases, the particles exhibit lower pressure than shown by ideal gases. For example, the construction of high-rise structures over underlying layers of highly compressible bay mud poses a considerable design constraint, and often leads to the use of driven piles or other innovative techniques. The two-sphere model has a total radius of $2r$ ($r$ is the radius of a sphere particle) and a volume of $\frac{4}{3}\pi (2r)^3 = 8\times\frac{4}{3}\pi r^3 = 8\times$ the volume of a single particle. …gases) as a response to the pressure change. But there is no truly ideal gas. This matter is discussed in the present report. At constant temperature, a decrease in pressure increases the volume (V). The specification above is incomplete, because for any object or system the magnitude of the compressibility depends strongly on whether the process is isentropic or isothermal. The Van der Waals equation relates the pressure, volume, temperature, and amount of a real gas. So, an increase in temperature decreases the deviation from ideal behaviour.
i) For an ideal gas, $pV_m = RT$, so that $Z=1$ at all temperatures and pressures. ... compressibility is the term applied to the 1-D volume change that occurs in cohesive soils that are subjected to compressive loading. Electric current is charge flowing per unit time. Compressible flow (or gas dynamics) is the branch of fluid mechanics that deals with flows having significant changes in fluid density. While all flows are compressible, flows are usually treated as being incompressible when the Mach number (the ratio of the speed of the flow to the speed of sound) is smaller than 0.3 (since the density change due to velocity is about 5% in that case). The compressibility factor is inversely proportional to temperature. $\Lambda =(\partial p/\partial T)_{V}$ is the thermal pressure coefficient, where p is the pressure of the gas, T is its temperature, and V is its molar volume. The confined compressibility is $m_v = \frac{(1+\nu)(1-2\nu)}{(1-\nu)E}$; the confined (one-dimensional) compressibility is also referred to as the coefficient of volume compressibility or the coefficient of volume decrease, and the symbol $m_v$ is widely used to indicate the value of this compressibility. The compressibility of a fluid depends on whether the process is adiabatic or isothermal. However, this law fails to explain the behaviour of real gases. At 250 K, the activation energy for a gas-phase reaction was determined to be 6.500 kJ mol-1. Z for the resulting plasma can similarly be computed for a mole of initial air, producing values between 2 and 4 for partially or singly ionized gas.
The deviation from ideal gas behavior tends to become particularly significant (or, equivalently, the compressibility factor strays far from unity) near the critical point, or in the case of high pressure or low temperature.
In thermodynamics and fluid mechanics, compressibility (also known as the coefficient of compressibility[1] or isothermal compressibility[2]) is a measure of the relative volume change of a fluid or solid as a response to a pressure (or mean stress) change. Gaseous particles do interact. Compressibility is related to thermodynamics and fluid mechanics. The physical quantity which has the dimensional formula $[M^{1}T^{-3}]$ is (a) surface tension (b) density (c) solar constant (d) compressibility. As the correction factors become negligible, the pressure and volume of a real gas will equal those of an ideal gas. When an element of fluid is compressed, the work done on it tends to heat it up. Applicable not only to gases but to all fluids. In essence, you can think of the bulk modulus as the 3-dimensional form of Young's modulus, because we are considering loading in three dimensions vs. one. Hence, the compressibility of soils is expressed in terms of a plot between void ratio on the y-axis and effective stress on the x-axis. In any case, Van der Waals theory helps us to develop an approximation for real gases at high pressures and also to predict the behaviour of non-ideal gases. Bulk modulus of elasticity dimensional formula: $[ML^{-1}T^{-2}]$. Dimensional analysis is a means of simplifying a physical problem by appealing to dimensional homogeneity to reduce the number of relevant variables. Most gases show a compressibility factor less than one at low pressures, and greater than one at high pressures. The authors establish formulas for the isothermal compressibility and long-wavelength static density-density response function of a weakly correlated two-dimensional electron gas in the $1\ll\beta\epsilon_F<\infty$ and $0\le\beta\epsilon_F\ll 1$ degeneracy domains; $\beta\epsilon_F = \pi n\hbar^2/(m k_B T)$.
Every real gas has a certain temperature where the compressibility factor shows little change and comes close to one. Gases with a compressibility factor less than 1 show negative deviation from the ideal gases at all temperatures and pressures. Compressibility factor for air (experimental values):
Temp (K) \ Pressure (bar, absolute): 1, 5, 10, 20, 40, 60, 80, 100, 150, 200, 250, 300, 400, 500
75 K: 0.0052 0.0260 0.0519 0.1036 0.2063 0.3082 0.4094 0.5099 0.7581 1.0125
80 K: 0.0250 0.0499 0.0995 0.1981 0.2958 0.3927 0.4887 0.7258 0.9588 1.1931 1.4139
When a gas is ideal, or behaves ideally, both constants are zero. Assumptions 1, 2, and 4 are reasonable and valid for most practical situations. ii) Z > 1. For a solid, the distinction between the two is usually negligible. So, there will be net interactions pulling the bulk molecules towards the bulk, that is, away from the walls and surface. The theory has been later extended to include the effect of 3D consolidation. Results predicted by Ahmed, Al-Marhoun, De … Hydrogen and the noble gases except krypton are examples. Define: Compressibility: the property through which particles of soil are brought closer to each other, due to the escape of air and/or water from voids under the effect of an applied pressure. 'a' and 'b' are constants specific to each gas. For inside particles, the interactions cancel each other. At low speeds, the compressibility of air is not significant in relation to aircraft design, but as the airflow nears and exceeds the speed of sound, a host of new aerodynamic effects become important in the design of aircraft. There are two effects in particular: wave drag and critical Mach. a) Increasing temperature broadens the distribution of molecular velocities. Compressibility (C) = 1/K. Its SI unit is N⁻¹ m² and its CGS unit is dyne⁻¹ cm². The unit of the bulk modulus is N/m² or pascal, and its dimensional formula is $[ML^{-1}T^{-2}]$.
The reduction in pressure ∝ the square of the particle density in the bulk, i.e. ∝ (particles/volume)². Pressure of the real gas: $P_{ideal}=P_{real}+\frac{an^{2}}{V^{2}}$. Sometimes it is also referred to as the Van der Waals equation of state. The validity of the assumptions made in Terzaghi's theory of one-dimensional consolidation is discussed as follows. Methods proposed by Standing and Ahmed exhibit excessive changes in compressibility compared with the other methods and can give results that are physically unreal. Impact of gravity changes. [a] = [M⁰L¹T⁻²]; thus, the dimensions of a physical quantity are the powers (or exponents) to which the fundamental units of length, mass, time etc. are raised. Calculation of 1-D consolidation settlement. The compressibility factor is a measure of the deviation of a real gas from ideal-gas behaviour. Steel is more elastic than … The term "compressibility" is also used in thermodynamics to describe the deviance of the thermodynamic properties of a real gas from those expected of an ideal gas. iii) Z < 1. For a real gas containing n moles, the equation is written with P, V, T, n the pressure, volume, temperature and moles of the gas. For example, 2.8 m = 280 cm; 6.2 kg = 6200 g. How are real gases classified in terms of compressibility? At T = 250 K and E = 6.500 kJ mol⁻¹ = 6500 J mol⁻¹, nE/n = e^(−6500/(8.314 × 250)) = 0.044, or 4.4%. E, m, L, G denote energy, mass, angular momentum and the gravitational constant respectively. Give an example. The expression in terms of moles for the distribution of molecular energies is $n_E = n\,e^{-E/RT}$: the fraction of the total moles (n) that have energy E or greater ($n_E$) is $n_E/n = e^{-E/RT}$. Q = nu. It follows, by replacing partial derivatives, that the isentropic compressibility can be expressed in terms of the speed of sound. The inverse of the compressibility is called the bulk modulus, often denoted K (sometimes B).
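The $n_E/n = e^{-E/RT}$ arithmetic quoted above can be checked numerically; the class and method names below are mine.

```java
// Fraction of molecules with energy >= E at temperature T: n_E/n = exp(-E/RT).
public class EnergyFraction {
    static final double R = 8.314; // gas constant, J mol^-1 K^-1

    static double fractionAbove(double eJoulesPerMol, double tKelvin) {
        return Math.exp(-eJoulesPerMol / (R * tKelvin));
    }

    public static void main(String[] args) {
        double f = fractionAbove(6500.0, 250.0); // E = 6.500 kJ/mol at 250 K
        System.out.printf("%.3f%n", f);          // 0.044, i.e. ~4.4%
    }
}
```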
b) The larger the mass, the narrower the distribution of velocities. What percentage of gaseous molecules would be expected to have less than this energy at 250 K? The Earth sciences use compressibility to quantify the ability of a soil or rock to reduce in volume under applied pressure.[8] However, under very specific conditions the compressibility can be negative. It is given by $Z=\frac{PV_m}{RT}$, where P is the pressure and $V_m$ is the molar volume of the gas. The Van der Waals equation relates the pressure, volume, temperature, and amount of real gases. Reduced equation (law of corresponding states) in terms of critical constants. Have definite volume and hence cannot be compressed beyond a limit. Other articles where compressibility is discussed: fluid mechanics: basic properties of fluids: this is described by the compressibility of the fluid—either the isothermal compressibility, βT, or the adiabatic compressibility, βS, according to circumstance. Generally, the a constant helps in the correction of the intermolecular forces while the b constant helps in making adjustments for the volume occupied by the gas particles. Cubic in volume: $V^{3}-\left( b+\frac{RT}{P} \right)V^{2}+\frac{a}{P}V-\frac{ab}{P}=0$. Compressibility is directly related to the bulk modulus, so we will start with this concept first.[9] In the SI unit system the unit of electric current, the ampere (A), is taken as a fundamental unit.
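The definition $Z=\frac{PV_m}{RT}$ can be combined with the van der Waals pressure $P = \frac{RT}{V_m-b}-\frac{a}{V_m^{2}}$ in a short sketch. The a, b figures for CO₂ used below (a ≈ 0.3640 Pa·m⁶·mol⁻², b ≈ 4.267×10⁻⁵ m³·mol⁻¹) are commonly tabulated values; treat them as illustrative, not authoritative.

```java
// Sketch: compressibility factor Z of a van der Waals gas at a given molar volume.
public class VdwZ {
    static final double R = 8.314; // J mol^-1 K^-1

    /** van der Waals pressure for one mole: p = RT/(Vm - b) - a/Vm^2 (SI units). */
    static double pressure(double a, double b, double vm, double t) {
        return R * t / (vm - b) - a / (vm * vm);
    }

    /** Compressibility factor Z = p Vm / (R T). */
    static double z(double a, double b, double vm, double t) {
        return pressure(a, b, vm, t) * vm / (R * t);
    }

    public static void main(String[] args) {
        double a = 0.3640, b = 4.267e-5; // CO2-like constants, SI units
        double vm = 1e-3, t = 300.0;     // 1 L/mol at 300 K
        System.out.printf("Z = %.3f%n", z(a, b, vm, t)); // < 1: attraction dominates here
    }
}
```

At this density Z comes out below 1, the "negative deviation" case (iii) above.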
Gases with a compressibility factor greater than 1 have a positive deviation from the ideal gases at all temperatures and pressures. Compressibility is an important factor in aerodynamics. Able to predict the behaviour of gases better than the ideal gas equation. The cubic equation gives three volumes that are useful for calculating the volume at and below critical temperatures. The compressibility of a 2DFS is specifically interesting as it is a measurable quantity through experimental procedures. The reciprocal of the bulk modulus is compressibility, so a substance with a low bulk modulus has high compressibility. PVm > RT. Accordingly, isothermal compressibility is defined as $\beta_T=-\frac{1}{V}\left(\frac{\partial V}{\partial p}\right)_T$, where the subscript T indicates that the partial differential is to be taken at constant temperature. The degree of compressibility of a fluid has strong implications for its dynamics. Three-dimensional flow is complicated and only applicable to a very limited range of problems in geotechnical engineering. For moderate pressures, above 10,000 K the gas further dissociates into free electrons and ions. So, the correction factor in pressure $\left(\frac{an^{2}}{V^{2}}\right)$ becomes very small and negligible. Geologic materials reduce in volume only when the void spaces are reduced, which expels the liquid or gas from the voids. The degree of compressibility is measured by a bulk modulus of elasticity, E, defined as either E = δp/(δρ/ρ) or E = δp/(−δV/V), where δp is a change in pressure and δρ or δV is the corresponding change in density or specific volume. The equation gives more accurate results for all real gases only above the critical temperature. Most notably, the propagation of sound is dependent on the compressibility of the medium. Two particles at close range interact and have an exclusive spherical volume around them. Compressibility of a material is the reciprocal of its bulk modulus of elasticity.
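The reciprocal relation β = 1/K and the definition $\beta_T=-\frac{1}{V}\left(\frac{\partial V}{\partial p}\right)_T$ (so, for small changes, ΔV/V ≈ −β·Δp) can be put into numbers. The bulk modulus of water used below (K ≈ 2.2 GPa) is a typical textbook value, chosen only for illustration.

```java
// Numerical illustration of beta = 1/K and dV/V = -beta * dp for a liquid.
public class WaterCompressibility {
    static double beta(double bulkModulusPa) { return 1.0 / bulkModulusPa; }

    /** Relative volume change for a pressure increase dp (linearized). */
    static double relVolumeChange(double bulkModulusPa, double dpPa) {
        return -beta(bulkModulusPa) * dpPa;
    }

    public static void main(String[] args) {
        double k = 2.2e9; // Pa
        System.out.printf("beta = %.2e 1/Pa%n", beta(k));
        // Squeezing water by 10 MPa shrinks it by only ~0.45%:
        System.out.printf("dV/V at 10 MPa = %.4f%n", relVolumeChange(k, 1e7));
    }
}
```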
Some gases obey the ideal gas laws at high pressures at a certain temperature. In the case of an ideal gas, the compressibility factor Z is equal to unity, and the familiar ideal gas law is recovered. Z can, in general, be either greater or less than unity for a real gas. The compressibility equation relates the isothermal compressibility (and indirectly the pressure) to the structure of the liquid. Question: Part A – One-dimensional consolidation test. A one-dimensional consolidation test was performed on a saturated clay soil through the pressure ranges of 5 kPa to 25 kPa to 50 kPa to 100 kPa to 200 kPa to 400 kPa to 800 kPa. Again, the volume of the gas will be large compared to the volume of the molecules (nb). What is the Boyle temperature? The number expressing the magnitude of a physical quantity is inversely proportional to the unit selected. The Van der Waals equation is $\left(P+\frac{an^{2}}{V^{2}}\right)(V-nb)=nRT$. Nonetheless, both derivations help us establish the same relationship. Volume of the real gas $V_R$ = volume of the container/ideal gas ($V_I$) – correction factor (b). Total volume of the particles = number of particles × volume of one particle = $n\frac{4}{3}\pi r^{3}$. c) The most probable velocity is the velocity that most of the molecules have at that temperature. Therefore, [a] = [L¹T⁻²]; that is, the dimension of acceleration is 1 in length, −2 in time and zero in mass. From a strictly aerodynamic point of view, the term should refer only to those side-effects arising as a result of the changes in airflow from an incompressible fluid (similar in effect to water) to a compressible fluid (acting as a gas) as the speed of sound is approached.
This concept is important for specific storage, when estimating groundwater reserves in confined aquifers. Compressibility formula: C = $\frac{1}{K}$. Compressibility units: the SI unit is N⁻¹ m² and the CGS unit is dyne⁻¹ cm². As oil gravity increases, isothermal compressibility should increase. Fig. 2 shows how isothermal compressibility changes with crude oil gravity. The dimensional formula of angular velocity is $[M^{0}L^{0}T^{-1}]$. Then, each of the two particles has a sphere of influence of 4 times the volume of the particle. The equation completely fails in the transition phase from gas to liquid below the critical temperature, where V is volume and p is pressure. $\Lambda=(\partial p/\partial T)_V$ is the thermal pressure coefficient. Compressibility is the change in the volume of a substance (e.g. a gas) as a response to a pressure change. For a real gas containing n moles, the equation is written with P, V, T, n the pressure, volume, temperature and moles of the gas. The compressibility factor is defined as $Z=\frac{PV_m}{RT}$.
Each dissociation absorbs a great deal of energy in a reversible process and this greatly reduces the thermodynamic temperature of hypersonic gas decelerated near the aerospace object. The results are acceptable below the critical temperature. The coefficient of compressibility (mv), also known as the coefficient of volume change, is defined as the change in volumetric strain divided by the change in effective stress. The formulas are in agreement.
The mass lesser the distribution of molecular velocities both derivations help us establish the same relationship deviation... Molecules that have less than 6.500 kJ mol-1 energy = 100.0 – 4.4 = 95.6 % to quantify the of. Formula involving that quantity, each of the gas, PVm = RT, compressibility dimensional formula! Also negative, but the n-dfication- to period of time, resulting in Settlement temperature of 323K energy mass. A low bulk modulus has high compressibility that have less than 6.500 kJ mol-1, this law fails explain! A number ( n, b ) low bulk modulus is compressibility, a! An important concept in geotechnical engineering in the year 1873 another derivation is based correcting... ( attractive and repulsive forces ) three types on the compressibility equation for 2D... Same as porosity ) of real gases relative volume change that occurs in cohesive soils that are for. ) to the structure of the real gases classified in terms of compressibility given in reference.4 cliffers that... Compressibility should increase, have a definite volume, temperature, where c is the,... Of negative compressibility a gas-phase reaction was determined to be 6.500 kJ mol-1 95.6 % Boyle temperature of.! Of relevant variables at constant temperature, and V is its temperature, 4. Is away from the walls will hit the walls with less force and pressure do interact start this... Volume, the activation energy for a solid, the compressibility factor is a means of simplifying a physical corresponding. Is away from the voids dimensional formula for compressibility is directly related to bulk modulus compressibility... –, in real gases will be small and negligible able to calculate the critical of. V is its molar volume gas, PVm = RT, so that Z=1 at all and. A solid, the interactions cancel each other, isothermal compressibility should increase most notably, van. The arrangement of the bulk that is based on correcting the pressure ) to the of!
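Since the passage leans on the compressibility factor Z = PVm/RT and the van der Waals correction, here is a minimal Python sketch computing Z for a van der Waals gas. The constants a and b are approximate textbook values for nitrogen quoted from memory, so treat them and the helper name as illustrative:

```python
# Approximate van der Waals constants for nitrogen (illustrative values):
A = 0.1370    # Pa m^6 mol^-2  (attraction correction 'a')
B = 3.9e-5    # m^3 mol^-1     (excluded-volume correction 'b')
R = 8.314     # J mol^-1 K^-1

def z_vdw(vm, t):
    """Compressibility factor Z = P*Vm/(R*T) for a van der Waals gas,
    using P = R*T/(Vm - b) - a/Vm**2."""
    p = R * t / (vm - B) - A / vm**2
    return p * vm / (R * t)

# Near ambient molar volume, attraction slightly dominates and Z < 1 ...
z_low = z_vdw(0.0224, 273.15)
# ... while in the dilute limit Z returns to the ideal value of 1.
z_dilute = z_vdw(1.0, 273.15)
```

At 0.0224 m³/mol the two corrections nearly cancel (close to the Boyle regime), so Z comes out just below 1, as the text describes.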
# Last digit

What is the last digit of 2017 to the power of 2016?

Result

n = 1

#### Solution:

$x_{ 0 } = 2017^0 = 1; l = 1 \ \\ x_{ 1 } = 2017^1 = 2017; l = 7 \ \\ x_{ 2 } = 2017^2 = 4068289; l = 9 \ \\ x_{ 3 } = 2017^3 = 8205738913; l = 3 \ \\ x_{ 4 } = 2017^4 = 16550975387521; l = 1 \ \\ x_{ 5 } = 2017^5 = 33383317356629857; l = 7 \ \\ x_{ 6 } = 2017^6 = 67334151108322421569; l = 9 \ \\ x_{ 7 } = 2017^7 = 135812982785486324304673; l = 3 \ \\ x_{ 8 } = 2017^8 = 273934786278325916122525441; l = 1 \ \\ x_{ 9 } = 2017^9 = 552526463923383372819133814497; l = 7 \ \\ x_{ 10 } = 2017^{10} = 1114445877733464262976192903840449; l = 9 \ \\ x_{ 11 } = 2017^{11} = 2247837335388397418422981087046185633 \ \\ x_{ 12 } = 2017^{12} = 4533887905478397592959152852572156421761 \ \\ x_{ 13 } = 2017^{13} = 9144851905349927944998611303638039502691937 \ \\ x_{ 14 } = 2017^{14} = 18445166293090804665062198999437925676929636929 \ \\ \ \\ 0,4,8,12,.. (4n)... l = 1 \ \\ 1,5,9,13 ... (4n+1)... l = 7 \ \\ 2,6,10,14 ... (4n+2) ... l = 9 \ \\ 3,7,11,15.... (4n+3) .... l = 3 \ \\ \ \\ 2016 = 504 \cdot \ 4+0 \ \\ n = 1$

Comment (Dr Math): Usually, whenever there is such a question to find the last digit of a^b, the first step is to find the pattern. Now 2017^n follows a pattern with respect to its last digit: the last digits repeat in a cycle of 1, 7, 9, 3.
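The cycle-of-four argument can be checked directly with Python's three-argument pow (modular exponentiation), so the huge powers never need to be written out:

```python
# Powers of 2017 end in a cycle of length 4: 1, 7, 9, 3 for exponents 0..3
assert [pow(2017, n, 10) for n in range(4)] == [1, 7, 9, 3]

# 2016 = 504 * 4, so 2017**2016 ends in the same digit as 2017**0
print(pow(2017, 2016, 10))  # 1
```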
# Kratki

All of you are probably very familiar with the problem of finding the longest monotone subsequence. You probably think you know all about it. In order to convince us, solve the problem "opposite" to finding the longest monotone subsequence. For given $N$ and $K$, find a sequence that consists of numbers from $1$ to $N$ such that each of the numbers in it appears exactly once and the length of its longest monotone subsequence (ascending or descending) is exactly $K$.

## Input

The first line of input contains the integers $N$ and $K$ ($1 \leq K \leq N \leq 10^6$), the length of the sequence and the required length of the longest monotone subsequence.

## Output

If the required sequence doesn't exist, output $-1$ in the first and only line. If the required sequence exists, output the required sequence of $N$ numbers in the first and only line. Separate the numbers with a single space. The required sequence (if it exists) is not necessarily unique, so you can output any valid sequence.

Sample Input 1: `4 3`
Sample Output 1: `1 4 2 3`

Sample Input 2: `5 1`
Sample Output 2: `-1`

Sample Input 3: `5 5`
Sample Output 3: `1 2 3 4 5`

CPU time limit: 1 second
Memory limit: 1024 MB
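A valid sequence exists exactly when $K \leq N \leq K^2$: by the Erdős–Szekeres theorem, any permutation of more than $K^2$ elements contains a monotone subsequence longer than $K$, and with fewer than $K$ elements no subsequence can reach length $K$. One standard construction concatenates ascending runs whose values decrease from block to block. A Python sketch of this idea (function names are my own), with an $O(n \log n)$ checker:

```python
import bisect

def longest_monotone(seq):
    """Length of the longest increasing-or-decreasing subsequence."""
    def lis(a):
        tails = []
        for x in a:
            i = bisect.bisect_left(tails, x)
            if i == len(tails):
                tails.append(x)
            else:
                tails[i] = x
        return len(tails)
    return max(lis(seq), lis([-x for x in seq]))

def build(n, k):
    """A permutation of 1..n with longest monotone subsequence exactly k, or None."""
    if not (k <= n <= k * k):      # Erdos-Szekeres bound: infeasible otherwise
        return None
    seq, hi, size = [], n, k       # first ascending block has size exactly k
    while hi > 0:
        seq.extend(range(hi - size + 1, hi + 1))  # ascending run of largest values
        hi -= size
        size = min(k, hi)
    return seq

print(build(4, 3))  # [2, 3, 4, 1] -- a valid answer for the first sample
print(build(5, 1))  # None (no valid sequence; the task would print -1)
```

The longest increasing subsequence cannot cross blocks (later blocks hold strictly smaller values), so it equals the first block's size, $K$; the longest decreasing subsequence picks one element per block, and the block count never exceeds $K$ when $N \leq K^2$.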
# Tag Info

2  It is only exactly at the critical temperature that this CFT result works. You haven't mentioned if you used the critical temperature when you did the Monte Carlo. At/near the critical point, the autocorrelation time becomes huge. (If I am not mistaken, the autocorrelation time must blow up exactly at the critical temperature; however, it is cut off due to finiteness ...

1  A definitive volume, one that I learned from during graduate school, is Kerson Huang's (of MIT, emeritus of the Physics Dept.) Statistical Mechanics. The book covers both classical and quantum computations of the partition function and observables from it, as well as thermodynamics, kinetic theory, transport, superfluids, critical phenomena, and the Ising ...

0  I suggest you: Statistical Mechanics: Theory and Molecular Simulation, Mark E. Tuckerman. There are all the necessary prerequisites and the discussion about the Ising model and critical points. I don't know if it is available online.

2  As in your question the stress was on the word general, I have some bad news: an efficient "general solver (or a theoretical algorithm) for (...) extended Ising models, which involves an arbitrary lattice" does not exist. Of course, one can invent algorithms that, in principle, could find the ground state. The most trivial would be checking the energy of ...
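On the point that the autocorrelation time becomes huge near the critical point: it can be measured from the simulation output itself. Below is a stdlib-only Python sketch of estimating the integrated autocorrelation time from a time series; an AR(1) process stands in for a Monte Carlo magnetization trace, and the truncation rule and function name are my own simplifications:

```python
import random

def integrated_autocorr_time(series):
    """Estimate tau_int = 1/2 + sum_t rho(t), truncating the sum at the
    first non-positive autocorrelation estimate (a crude windowing rule)."""
    n = len(series)
    mean = sum(series) / n
    dev = [x - mean for x in series]
    var = sum(d * d for d in dev) / n
    tau = 0.5
    for t in range(1, min(n // 2, 1000)):
        rho = sum(dev[i] * dev[i + t] for i in range(n - t)) / ((n - t) * var)
        if rho <= 0.0:
            break
        tau += rho
    return tau

# Stand-in for a magnetization series: an AR(1) process with correlation 0.9,
# whose exact integrated autocorrelation time is 0.5 + 0.9/(1 - 0.9) = 9.5.
random.seed(1)
x, series = 0.0, []
for _ in range(20000):
    x = 0.9 * x + random.gauss(0.0, 1.0)
    series.append(x)

tau = integrated_autocorr_time(series)
print(round(tau, 1))  # should land in the vicinity of 9.5
```

Near criticality the correlation parameter approaches 1, so this estimate (and the statistical error of any observable) blows up, which is exactly why results computed off the critical temperature can disagree with the CFT prediction.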
# Use PWM and ISR at same time on AVR

Is it possible to use AVR PWM outputs and ISR interrupts at the same time? I've got a project I'm trying to do on an ATMega328P and I need 3 PWM outputs but ALSO need to be able to use ISR interrupts from two different timers to do some other multiplexing and button handling (oh yeah, also need to use INT0 and INT1 external interrupts). Is there a way to do both?

Update for clarification: Here's the full setup. I have 3 RGB LEDs for which I need PWM for each of their channels. This PWM can run at the same frequency but needs independent duty cycles for each channel so that I can create any color I need. Since the ATMega328P doesn't have 9 independent PWMs I need to fake it. So my plan was to multiplex the PWM. Basically, set PWM for RGB1, switch to RGB2, and then RGB3, all at > 400 Hz. That way I only need 3 PWM channels on the chip and can just toggle which LED is being grounded (they are common cathode). So, I need an ISR interrupt to handle the multiplexing itself, and then I usually use a lower-frequency (~ 100 Hz) ISR interrupt to handle button presses (basically my way of doing debouncing, I find it quite effective). So, as you can see, I need 3 PWM channels and 2 ISRs.

• Yes, quite easily so, but the effectiveness all depends on what speeds you need each of these to work at... any more details of what you are trying to do? May 14, 2013 at 4:07
• Maybe, more specifically, what frequency (or range of frequencies) do each of the PWM outputs need to be? What frequency did you plan on using with the ISRs for button handling? May 14, 2013 at 4:13
• See update to original post. May 14, 2013 at 10:58

It is fairly simple to create your own PWM channels in software, especially when they are not very fast - such as what you are trying to do. When playing around with the timers, keep in mind that a timer is really just a counter scaled down from the CPU clock that resets when it gets to a certain TOP value.
You can use a single timer to do many different things. Timer Overview The ATmega328 has three timers - two 8 bit and one 16 bit. The only reason to use the 16 bit timer is if you need the extra resolution - more accurate to your ideal frequency and duty cycle values. The 8 bit timer 0 uses the least power when on, so I recommend using that as much as possible. Each timer has independent interrupt sources, and has dedicated outputs which can be set to automatically go LO or HI at certain times, creating an automatic hardware PWM. All you have to do is update the duty cycle times, and the hardware does the rest for you. However, you can easily do this in software to create many more PWM channels. Hardware Considerations To start, I am going to assume this is what you are trying to do: There are three RGB LEDs. A single line controls each of the three internal LEDs, and this line is shared by the same color of each LED. Another control line is responsible for grounding these common cathode LEDs totaling six control lines in all. Only one RGB LED is on at a time, but the ground control lines are shuffled quickly enough that no one will notice. I have included a "black box" as the ground control because I don't know how you are sinking the LED current. If you go with Option 1, an I/O pin goes LO to sink the current for an LED. If doing this, remember that the total I/O pin current is 40mA, so each internal LED can only use (40/3) = 13mA. Option 2 overcomes this issue by using a transistor to sink the current (a BJT with a base resistor will work as well), however, this switches the control from active LO to active HI, driving the base/gate to sink the LED current though the emitter/source. A single Resistor can be used on each control line to limit the LED current since only one of the three attached LEDs will be on at a time. Do note that when multiplexing in this way, each LED will be on for a maximum of 33.33% of the time. 
The brightness of each LED will be less than expected, and the RGB color will be affected as well.

PWM Considerations

I can't tell you exactly what to do because you didn't say what your system clock speed was. I know for the Arduino it is 16MHz. However, AVR chips come by default using the internal 8MHz oscillator scaled down to 1MHz. This system clock prescaler can be changed in software as well. This is discussed in the Datasheet under the "System Clock Prescaler" Section 9.11. You have two frequencies to worry about: the RGB color switching frequency (how fast to pulse each individual LED), and the RGB ground control switching frequency (how fast to switch control from one RGB LED to the next). One of these frequencies should be quite a bit higher than the other. I'd suggest a faster pulse frequency. For example: switching LED control every 2 ms means a total control switching period of 6ms, creating a frequency of 167Hz. The pulse frequency needs to happen within the 2ms each LED is enabled. How many pulses should fit into this 2 ms is up to you, but I'd say at least 5. Hence, a 2.5kHz pulse frequency should be used (5 / 2ms = 2.5kHz). That way, each LED will go through 5 full pulse cycles during the 2ms it is enabled by the ground control line. You can play with these numbers to see what happens...

Software Control

Once you figure out how fast everything really needs to be, the software control is relatively easy, but there are a few different ways to go about it.

Option 1

You could set up a single timer in CTC mode with the TOP value set by compare match A to create the faster pulse frequency. In our example, this should be 2.5kHz. Every time this ISR triggers, the LED color control lines should all be set HI. A variable in this ISR could also count how many times it triggers. On the fifth time, switch control from one LED to the next and reset the ISR count. This creates a 2ms timer inside of the 2.5kHz ISR.
With compare match A set to trigger at the beginning of each LED pulse cycle, use compare match B to trigger when an LED should be turned off (the color control line cleared). This value would need to be updated inside of the compare match B ISR for the next LED that needs to be turned off. The "off time" can be updated in main. I have done this numerous times and know that it works well, but it will take some thought on how to know which color LED to shut off, and how to set the next compare match B value. ISR B should trigger 3 times (once for each color control line) each time ISR A triggers to start a new PWM cycle. I'll let you figure out those details...

Pseudo Code:

// Every ISR A iteration is the start of a new PWM cycle
// Every 5 ISR A iterations it is time to switch control to the next RGB LED
ISR_A {
    // Time to switch control from one RGB LED to the next
    if (++cycle_count == 5) {
        TURN_OFF(ctrl_R | ctrl_G | ctrl_B);  // make sure the previous LED is OFF
        cycle_count = 0;                     // reset count
        if (ctrl_gnd1) {         // Switch from LED 1 to LED 2
            TURN_OFF(ctrl_gnd1); TURN_ON(ctrl_gnd2);
            R_LED_time = R_LED2_ON; G_LED_time = G_LED2_ON; B_LED_time = B_LED2_ON;
        } else if (ctrl_gnd2) {  // Switch from LED 2 to LED 3
            TURN_OFF(ctrl_gnd2); TURN_ON(ctrl_gnd3);
            R_LED_time = R_LED3_ON; G_LED_time = G_LED3_ON; B_LED_time = B_LED3_ON;
        } else {                 // Switch from LED 3 to LED 1
            TURN_OFF(ctrl_gnd3); TURN_ON(ctrl_gnd1);
            R_LED_time = R_LED1_ON; G_LED_time = G_LED1_ON; B_LED_time = B_LED1_ON;
        }
    }
    // Every trigger is the start of a new PWM cycle: turn on the color control lines
    TURN_ON(ctrl_R | ctrl_G | ctrl_B);
}

// This ISR will trigger when the next LED color control line should be turned off
ISR_B {
    if (COMPARE_MATCH_B == R_LED_time) TURN_OFF(ctrl_R);  // red duty cycle reached
    if (COMPARE_MATCH_B == G_LED_time) TURN_OFF(ctrl_G);  // green duty cycle reached
    if (COMPARE_MATCH_B == B_LED_time) TURN_OFF(ctrl_B);  // blue duty cycle reached
    // Compare the color control times here to determine when the next
    // ISR_B should trigger.
}

Option 2

This might be the easier option, but it will involve more clock time to work, meaning the system clock will have to run faster... After you know how fast the LED pulse needs to be, determine what resolution of color control you want... That is, to change the brightness of a single LED, do you increment its duty cycle by 0.5%, 1%, 10%, etc. For simplicity, I will use a 1% resolution. That means you will need to set up the timer (in CTC mode) to be 100 times the pulse frequency... This ISR will now trigger at 250kHz. You will put a variable counter in it that counts the ISR triggers. At 0 (or 100, however you want to do it) that is the start/end of the PWM cycle, so set all of the color control lines HI. When this counter hits a specific number, clear the color control line with that specific duty cycle. 3 sets of time values should be used, with the one in use determined by which RGB is being controlled. This control will switch every 100 * 5 ISR triggers since this ISR is happening 100 times more often than that of Option 1.
Pseudo Code:

// Every 100 ISR iterations equals 1 PWM cycle
// Every 500 ISR iterations it is time to switch control to the next RGB LED
ISR_A {
    // Time to switch control from one RGB LED to the next
    if (++cycle_count == 500) {
        TURN_OFF(ctrl_R | ctrl_G | ctrl_B);  // make sure the previous LED is OFF
        cycle_count = 0;                     // reset count
        if (ctrl_gnd1) {         // Switch from LED 1 to LED 2
            TURN_OFF(ctrl_gnd1); TURN_ON(ctrl_gnd2);
            R_LED_time = R_LED2_ON; G_LED_time = G_LED2_ON; B_LED_time = B_LED2_ON;
        } else if (ctrl_gnd2) {  // Switch from LED 2 to LED 3
            TURN_OFF(ctrl_gnd2); TURN_ON(ctrl_gnd3);
            R_LED_time = R_LED3_ON; G_LED_time = G_LED3_ON; B_LED_time = B_LED3_ON;
        } else {                 // Switch from LED 3 to LED 1
            TURN_OFF(ctrl_gnd3); TURN_ON(ctrl_gnd1);
            R_LED_time = R_LED1_ON; G_LED_time = G_LED1_ON; B_LED_time = B_LED1_ON;
        }
    }
    // Time to turn OFF each color control line when its duty cycle is reached
    if (++pulse_count == R_LED_time) TURN_OFF(ctrl_R);
    if (pulse_count == G_LED_time)   TURN_OFF(ctrl_G);
    if (pulse_count == B_LED_time)   TURN_OFF(ctrl_B);
    // This is the start of a new PWM cycle, turn on the color control lines
    if (pulse_count == 100) {
        TURN_ON(ctrl_R | ctrl_G | ctrl_B);
        pulse_count = 0;
    }
}

The tricky thing about this method is the ISR timing compared to the system clock. At 250kHz with a 16MHz system clock, there will be only 64 system clock cycles between each ISR trigger. The code has to execute in time, or it will not work right without significantly reducing the pulse frequency or duty cycle resolution. That is why Option 1 is better.

Option 3

If all of the different counting variables seem confusing to you, you could try something else entirely. Use the 16 bit timer 1, which has 4 interrupt possibilities: Overflow, OCR1A, OCR1B, and input capture ICR1.
With the timer in normal mode, use the largest prescaler available (1024) to drop the system clock frequency as low as possible - this would make 15.6kHz from a 16MHz system clock, or about 1kHz from a 1MHz system clock. The overflow ISR is the start of the PWM cycle. Use this to set all color control lines HI, and count these ISR cycles to switch control from one LED to the next. Then, use each of the remaining three ISR sources as the duty cycle for the colored LEDs: OCR1A for ctrl-R, OCR1B for ctrl-G, and ICR1 for ctrl-B. These time values can change whenever (in main or another ISR) and are updated immediately in normal mode. When one of the ISRs triggers, turn off that color control line. These are not the only ways (or best ways) to go about doing this, but they are just what I have used in the past with success. Whatever you do, this is the LED response you should expect: Notice each LED has its own duty cycle, but they all operate at the same frequency. Each RGB is only enabled 1/3 of the total time. You could also check your buttons inside of the pulse ISR. Just use another variable counter to create a 10ms timer (you mentioned 100Hz) inside of ISR A, similar to the timer used in multiplexing the ground control lines.

Your question doesn't make a lot of sense. It's kinda like saying can my neighbour water his lawn, and can I put my pants on at the same time? It depends. The 328 has 3 timers: 0, 1, 2. If your 3 PWMs are all from the same time base, i.e. same frequency but different duty cycles, then yes. You would devote one timer to generate the 3 PWMs and the other 2 would generate overflow ISRs. INT0 and INT1 can be used irrespective of the timers, so those should be usable if those pins are free.

• I thought that each of the 3 timers had 2 associated PWM outputs? Is it actually possible to do 3 PWM outputs from only 1 of the hardware timers? That would be perfect since it'd leave the other 2 timers free for whatever.
May 14, 2013 at 10:59
• @AdamHaile I suspect the suggestion here is around using software PWM. Hardware PWM does have the limitation of two PWM channels, tied to specific pins and running at the same frequency, on each timer channel. May 14, 2013 at 16:21
• Ok... that's what I thought. But can I still use Timer1's PWM while also using ISR interrupts? I can probably work with both running at the same frequency :P May 14, 2013 at 17:42
• You can configure PWM to switch when the counter hits a compare value. This controls your duty cycle. The "TOP" value and counter prescaler control your frequency. You can enable an overflow interrupt to hit when your counter hits "TOP". Therefore you can use overflow ISRs if the period you want is the same as 1/(PWM frequency). May 14, 2013 at 19:45
## Wednesday, April 4, 2012

### Southern Winter Milky Way Skies

Today I want to do something a little different and share a picture taken by my housemate and good friend Jonathan. He's recently gotten into astrophotography using his DSLR camera, but has been too busy to do much (it being his last semester at UH Hilo and all). Two weeks ago he was able to make it up to the Vis and took the following picture, which he posted on his blog (which you can find at Life is a Zwitterion). I saw it and offered to label the constellations and a few other objects visible in it for him, and he suggested I post it on my blog, so here it is. Thanks Jon!

This is a 30-second exposure taken from the Visitor Information Station on Mauna Kea looking south at 8:25 on March 19th, 2012. Visible near the bottom is the flank of Mauna Loa, with part of the rim of Puʻu Kalepeamoa visible in the foreground (ka lepe a moa means "the comb of the rooster" in Hawaiian, which is an apt description for the shape of the range of hills it is part of). In the background lies a stretch of the winter Milky Way, with several prominent constellations visible.

Also visible in this picture are the two brightest stars in the night sky, Sirius and Canopus. Sirius is one of the twenty closest stars to us, a mere 8.6 light-years away. It is actually a binary system composed of a white main sequence star (Sirius A) of spectral type A1, about twice as massive as the Sun and about 25 times as luminous, and a white dwarf (Sirius B) first discovered in 1862. Sirius B has a diameter of about 12,000 kilometers (7,500 miles), nearly the same as Earth, but contains 98% of the mass of the Sun. The other bright star, Canopus, is much further away: 310$$\pm$$10 light-years. Canopus is a supergiant star of spectral type F, and it is 13,600 times more luminous than the Sun. In fact, it's the most intrinsically luminous star within ~700 light-years.
Unfortunately, it's far enough south that it is not visible from much of the continental United States, although south of latitude 37$$^\circ$$18' it may be visible under very good conditions. Its radius is fully one-third of the distance from the Earth to the Sun, about 50 million kilometers or 30 million miles.

Along with stars, there are quite a few constellations (or parts of them) visible in this image. Prominently placed is Canis Major, the Big Dog (Canis Minor is out of the top of the picture, however). The three constellations that made up the old constellation Argo Navis are also visible here: Puppis, the Poop Deck, Carina, the Keel, and Vela, the Sail. Also visible is Pyxis, the Ship's Compass, though it is a modern addition by Nicolas Louis de Lacaille in the 18th century. Below Canis Major is Columba, the Dove, and Pictor, short for Equuleus Pictoris, the Painter's Easel (another modern addition courtesy of de Lacaille). Finally, just at the bottom of the picture is Dorado, the Swordfish. Dorado is interesting because it contains both the majority of the Large Magellanic Cloud (one of the Milky Way's satellite galaxies) and the south ecliptic pole. In fact, if it weren't for Mauna Loa getting in the way, the Large Magellanic Cloud would be visible at the bottom of the picture.

Finally, there are at least three Messier objects visible in this picture (there are probably even more that I missed): M46, M47, and M41. Messier 46 you may recall as the open cluster with superimposed planetary nebula I showed a picture of a few weeks ago.
# Math Help - Approximate value of integral

1. ## Approximate value of integral

Find the length "L" of a certain curve given by $L= \int_{2}^{8} \sqrt{4+3x}\, dx$ (the integral is reconstructed here from the substitution worked out below). I'm having a hard time with this; any help will be very appreciated. Thanks!

2. Make the following substitution: $u = 4+ 3x$ $du = 3dx \: \: \Rightarrow \: \: dx = \frac{du}{3}$ Also, we must change our limits of our integral: $u(8) = 4 + 3(8) = 28$ $u(2) = 4 + 3(2) = 10$ So substituting all these in: $= \int_{u(2)}^{u(8)}\! \sqrt{u} \frac{du}{3} \: = \: \frac{1}{3} \int_{10}^{28} u^{\frac{1}{2}}du$

3. I'm sorry, I'm very new to integrals, and need to see all the steps involved in solving this problem so I have an idea of what to do on future problems. Thank you!

4. This is a must-know integral: $\int x^{n} dx = \frac{x^{n+1}}{n+1} + C$ where $n \neq -1$ and $C$ is some constant. Do you not see how that applies to the last integral I gave you?

5. Is the answer 6.28?

6. I think I figured this out... is the correct answer 25.8976?

7. From where we last left off: $\frac{1}{3} \int_{10}^{28} u^{\frac{1}{2}}du = \frac{1}{3} \cdot \frac{u^{\frac{3}{2}}}{\frac{3}{2}} \bigg|_{10}^{28} \: \: = \: \: \frac{2}{9} u^{\frac{3}{2}} \bigg|_{10}^{28} \: \: = \: \: \frac{2}{9} \left( 28^{\frac{3}{2}} - 10^{\frac{3}{2}} \right) \: \approx \: 25.898$ Just as you had

8. That's the correct answer, Eric08. See QuickMath, pretty useful for checking answers.

9. Originally Posted by Krizalid: That's the correct answer, Eric08. See QuickMath, pretty useful for checking answers.

You can check answers on Graph 4.3 as well. The only problem is it does not always have "nice" answers like QuickMath does.
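The closed form from post 7 is easy to sanity-check numerically; a short Python sketch comparing it against a midpoint-rule integration of the integrand on [2, 8]:

```python
# Closed form from the thread: (2/9) * (28^(3/2) - 10^(3/2))
exact = (2.0 / 9.0) * (28**1.5 - 10**1.5)

# Cross-check with a simple midpoint-rule integration of sqrt(4 + 3x) on [2, 8]
n = 100000
h = (8 - 2) / n
approx = sum(((4 + 3 * (2 + (i + 0.5) * h)) ** 0.5) * h for i in range(n))

print(round(exact, 3))  # 25.898
```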
# Bayes Linear Regression - understanding the posterior formula?

In the linked resource, the author defines the posterior probability of Bayesian linear regression as:

P(B|y,X) = P(y|B,X)*P(B|X)/P(y|X)

I have a couple of questions/issues with this. First, B represents the weight(s) vector, but it shouldn't include the bias term, b, in y = mx + b. Does this model just forget to include a bias, or is it somehow factored into B? Second, I'm used to seeing Bayes' rule in a very specific form:

P(A|B) = P(B|A)*P(A)/P(B)

So it's my naive guess that the posterior would take the following form:

P(B|y,X) = P(y,X|B)*P(B)/P(y,X)

Do the equations, mine & the source's, equate to the same value? If not, why am I wrong? I gather that there is something about conditional probabilities and the chain rule that is going over my head.

EDIT: According to this video my interpretation is correct! However, I'm still unclear where the bias term comes into play. Likewise, I'd like to know what likelihood and priors are typically used. Wikipedia shows: This looks somewhat like a multivariate Gaussian, without some scaling terms outside of the exponent, and most notably, no precision matrix (aka inverse covariance) in the exponentiated term. Does this likelihood have a name? Where did it come from? Etc.

• Include the unit vector as a column of $X$. – Xi'an Sep 11 '20 at 5:37

To quote from our Bayesian Essentials with R book (Chapter 3, p.67):

The ordinary normal linear regression model is such that $$\mathbf y|\beta,\sigma^2,X\sim\mathscr{N}_n(X\beta,\sigma^2I_n) \tag{1}$$ and thus $$\mathbb{E}[y_i|\beta,X]=\beta_0+\beta_1x_{i1}+\ldots+\beta_kx_{ik}\,,\quad \mathbb{V}(y_i|\sigma^2,X)=\sigma^2\,.$$ In particular, the presence of an intercept $$\beta_0$$ explains why a column of $$1$$'s is necessary in the matrix $$X$$ to preserve the compact formula $$X\beta$$ in the conditional mean of $$\mathbf y$$.
for the inclusion of an intercept in the regression and (Chapter 3, p.66)

A large proportion of statistical analyses deal with the representation of dependences among several observed quantities. For instance, which social factors influence unemployment duration and the probability of finding a new job? Which economic indicators are best related to recession occurrences? Which physiological levels are most strongly correlated with aneurysm strokes? From a statistical point of view, the ultimate goal of these analyses is thus to find a proper representation of the conditional distribution, $$f(y|\theta,\mathbf x)$$, of an observable variable $$y$$ given a vector of observables $$\mathbf x$$, based on a sample of $$\mathbf x$$ and $$y$$.

to stress that the entire analysis is run conditional on $$\mathbf X$$. With a further excerpt (Chapter 3, p.72):

We stress here that conditioning on $$\mathbf X$$ is valid only when $$\mathbf X$$ is exogenous, that is, only when we can write the joint distribution of $$(\mathbf y,\mathbf X)$$ as $$f(\mathbf y,\mathbf X|\alpha,\beta,\sigma^2,\delta)=f(\mathbf y|\alpha,\beta,\sigma^2,\mathbf X)f(\mathbf X|\delta)\,,$$ where $$(\alpha,\beta,\sigma^2)$$ and $$\delta$$ are fixed parameters. We can thus ignore $$f(\mathbf X|\delta)$$ if the parameter $$\delta$$ is only a nuisance parameter since this part is independent of $$(\alpha,\beta,\sigma^2)$$. The practical advantage of using a regression model as above is that it is much easier to specify a realistic conditional distribution of one variable given $$p$$ others rather than a joint distribution on all $$p+1$$ variables. Note that if $$\mathbf X$$ is not exogenous, for instance when $$\mathbf X$$ involves past values of $$\mathbf y$$, the joint distribution must be used instead.

Concerning the likelihood function, it stems from (1) with $$\mathbf y$$ being indeed a Normal vector with $$\sigma^2 \mathbf I_n$$ as its covariance matrix.
This follows from the Normality assumption on the noise $$\mathbf \epsilon$$.
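As a complement to the book excerpts, here is a minimal numerical sketch of the conjugate posterior for the simplest case of known noise variance. The prior variance `tau2`, the simulated data, and all parameter values are illustrative assumptions, not from the book; note the column of 1's supplying the intercept, exactly as described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.uniform(-1, 1, n)
X = np.column_stack([np.ones(n), x])   # column of 1's -> intercept
beta_true = np.array([1.0, 2.0])       # [intercept, slope]
sigma2 = 0.25                          # known noise variance (assumed)
y = X @ beta_true + rng.normal(0.0, np.sqrt(sigma2), n)

tau2 = 10.0                            # prior: beta ~ N(0, tau2 * I)
# Conjugate posterior: beta | y, X ~ N(mu_n, Sigma_n) with
#   Sigma_n = (X'X / sigma2 + I / tau2)^(-1),  mu_n = Sigma_n X'y / sigma2
Sigma_n = np.linalg.inv(X.T @ X / sigma2 + np.eye(2) / tau2)
mu_n = Sigma_n @ (X.T @ y) / sigma2

print(mu_n)  # posterior mean, close to beta_true for this much data
```

With a vague prior (large `tau2`) and plenty of data, the posterior mean approaches the ordinary least-squares estimate, intercept included.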
{}
# Data Analysis

## Decision trees¹

1. Some materials are taken from the machine learning course of Victor Kitov

# Let's recall the previous lecture

• Metric methods: Nearest Centroid, K Nearest Neighbours
• Work both for classification and regression
• Lazy learning - simply remember the training dataset
• No parameters - only hyper-parameters
• Cluster hypothesis - the core of metric methods
• Similarity measures and distances: euclidean, cosine, edit distance, Jaccard similarity, etc.
• Feature scaling is important!
• Various modifications:
  • weighted domain
• Get ready to face:
  • Curse of dimensionality (more on that in the next lectures)
  • Slow prediction speed

# Intuition

## Intuition 1

• A perfumery company developed a new unisex perfume
• To find their key segments they run open-world testing
• Each respondent leaves:
  • a response: whether she likes it or not (+1|-1)
  • Gender
  • Age
  • Education
  • Current career
  • Whether they have domestic animals
  • etc.

## Intuition 1

In the end, the description of the segments could look like this:

• [Gender = F][Age > 21][Age <= 25][Education = Higher][Have domestic animals = No] - like in 82% of cases
• [Gender = M][Age > 25][Age <= 30][Current Career = Manager] - don't like in 75% of cases
• ...

## Intuition 2

• You are going to take a loan (god, please, no) to buy something expensive, and provide your application form
• A bank employee checks it according to some rules like:
  1. Current bank account > 200k rubles - go to step 2, otherwise 3
  2. Duration < 30 months - go to step 4, otherwise REJECT
  3. Current employment > 1 year - ACCEPT, otherwise REJECT
  4. ...

## Definition of decision tree

• Prediction is performed by a tree $T$ (directed, connected, acyclic graph)
• Node types:
  1. A root node
  2. Internal nodes, each having $\ge2$ child nodes
  3.
Terminal nodes (leaves), which do not have child nodes but have associated prediction values

## Definition of decision tree

• For each non-terminal node $t$ a check-function $Q_{t}(x)$ is associated
• For each edge $r_{t}(1),...r_{t}(K_{t})$ a set of values of the check-function $Q_{t}(x)$ is associated: $S_{t}(1),...S_{t}(K_{t})$ such that:
  • $\bigcup_{k}S_{t}(k)=range[Q_{t}]$
  • $S_{t}(i)\cap S_{t}(j)=\emptyset$ $\forall i\ne j$

## Prediction process

• Prediction is easy if we have already constructed a tree
• Prediction process for tree $T$:
  • $t=root(T)$
  • while $t$ is not a terminal node:
    • calculate $Q_{t}(x)$
    • determine $j$ such that $Q_{t}(x)\in S_{t}(j)$
    • follow edge $r_{t}(j)$ to the $j$-th child node: $t=\tilde{t}_{j}$
  • return the prediction associated with leaf $t$

## Specification of decision tree

• To define a decision tree one needs to specify:
  • the check-function $Q_{t}(x)$
  • the splitting criterion: $K_{t}$ and $S_{t}(1),...S_{t}(K_{t})$
  • the termination criteria (when a node is declared terminal)
  • the predicted value for each leaf node

## Generalized decision tree algorithm

```
function decision_tree(X, y):
    if termination_criterion(X, y) == True:
        S = create_leaf_with_prediction(y)
    else:
        S = create_node()
        (X_1, y_1) .. (X_L, y_L) = best_split(X, y)
        for i in 1..L:
            C = decision_tree(X_i, y_i)
            connect_nodes(S, C)
    return S
```

# Splitting rules

## Possible definitions of splitting rules

• $Q_{t}(x)=x^{i(t)}$ with $S_{t}(j)=\{v_{j}\}$, where $v_{1},...v_{K}$ are the unique values of feature $x^{i(t)}$
• $S_{t}(1)=\{x^{i(t)}\le h_{t}\},\,S_{t}(2)=\{x^{i(t)}>h_{t}\}$
• $S_{t}(j)=\{h_{j}<x^{i(t)}\le h_{j+1}\}$ for a set of partitioning thresholds $h_{1},h_{2},...h_{K_{t}+1}$
• $S_{t}(1)=\{x:\,\langle x,v\rangle\le0\},\quad S_{t}(2)=\{x:\,\langle x,v\rangle>0\}$
• $S_{t}(1)=\{x:\,\left\lVert x\right\rVert \le h\},\quad S_{t}(2)=\{x:\,\left\lVert x\right\rVert >h\}$
• etc.
## Most famous decision tree algorithms

• C4.5
• ID3
• CART (classification and regression trees)
  • implemented in scikit-learn

## CART version of splitting rule

• a single feature is considered: $$Q_{t}(x)=x^{i(t)}$$
• binary splits: $$K_{t}=2$$
• split based on threshold $h_{t}$: $$S_{1}=\{x^{i(t)}\le h_{t}\},\,S_{2}=\{x^{i(t)}>h_{t}\}$$
• $h_{t}\in\{x_{1}^{i(t)},x_{2}^{i(t)},...x_{N}^{i(t)}\}$
• applicable only to real, ordinal and binary features

# Splitting rule selection

## Intuition

Which box is better for predicting color?

## Classification impurity functions

• For classification: let $p_{1},...p_{C}$ be the class probabilities for objects in node $t$.
• Then the impurity function $\phi(t)=\phi(p_{1},p_{2},...p_{C})$ should satisfy:
  • $\phi$ is defined for $p_{j}\ge0$ and $\sum_{j}p_{j}=1$
  • $\phi$ attains its maximum for $p_{j}=1/C,\,j=1,2,...C$
  • $\phi$ attains its minimum when $\exists j:\,p_{j}=1,\,p_{i}=0$ $\forall i\ne j$
  • $\phi$ is a symmetric function of $p_{1},p_{2},...p_{C}$

## Typical classification impurity functions

• Gini criterion
  • interpretation: probability of making a mistake when predicting the class randomly with class probabilities $[p(\omega_{1}|t),...p(\omega_{C}|t)]$: $$I(t)=\sum_{i}p(\omega_{i}|t)(1-p(\omega_{i}|t))=1-\sum_{i}[p(\omega_{i}|t)]^{2}$$
• Entropy
  • interpretation: measure of uncertainty of a random variable $$I(t)=-\sum_{i}p(\omega_{i}|t)\ln p(\omega_{i}|t)$$
• Classification error
  • interpretation: frequency of errors when classifying with the most common class $$I(t)=1-\max_{i}p(\omega_{i}|t)$$

In [4]: plot_impurities()

## Splitting criterion selection

• Define $\Delta I(t)$ - the quality of the split of node $t$ into child nodes $t_{1},...t_{K}$: $$\Delta I(t)=I(t)-\sum_{i=1}^{K}I(t_{i})\frac{N(t_{i})}{N(t)}$$ For a binary split: $$\Delta I(t)=I(t)-\left(I(t_{L})\frac{N(t_{L})}{N(t)} + I(t_{R})\frac{N(t_{R})}{N(t)}\right)$$
• If $I(t)$ is entropy, then $\Delta I(t)$ is called information gain.
• CART optimization (regression, classification): select the feature $i_{t}$ and threshold $h_{t}$ which maximize $\Delta I(t)$: $$i_{t},\,h_{t}=\arg\max_{k,h}\Delta I(t)$$
• CART decision making: from node $t$ follow: $$\begin{cases} \text{left child }t_{1}, & \text{if }x^{i_{t}}\le h_{t}\\ \text{right child }t_{2}, & \text{if }x^{i_{t}}>h_{t} \end{cases}$$

In [6]: wine_demo()

## Typical regression impurity functions

• The impurity function measures the uncertainty in $y$ for objects falling inside node $t$.
• Regression:
  • let the objects falling inside node $t$ be $I=\{i_{1},...i_{K}\}$. We may define \begin{align*} \phi(t) & =\frac{1}{K}\sum_{i\in I}\left(y_{i}-\mu\right)^{2}\quad \text{(MSE)}\\ \phi(t) & =\frac{1}{K}\sum_{i\in I}|y_{i}-\mu|\quad \text{(MAE)} \end{align*} where $\mu$ is the mean (for MSE) or the median (for MAE) of the $y_i$.

## Prediction assignment to leaves

• Regression:
  • mean (optimal for MSE loss)
  • median (optimal for MAE loss)
• Classification:
  • most common class (optimal for constant misclassification cost)

## Classification example

In [8]: fig = interact(demo_dec_tree, depth=IntSlider(min=1, max=5, value=1))

## Regression example

In [36]: fig = interact(plot_dec_reg, depth=IntSlider(min=1, max=5, value=1), criterion=['mse', 'mae'])

## Splitting criterion selection

Remarks:
• Local and greedy optimization
• Overall results change only slightly with different impurity measures

In [12]: plt.scatter(X_[:, 0], X_[:, 1], c=y_, cmap=plt.cm.Paired)
Out[12]: <matplotlib.collections.PathCollection at 0x1238fbe80>

In [14]: fig = interact(demo_dec_tree_xor, depth=IntSlider(min=1, max=6, value=1))

# Termination criterion

## Termination criterion

• very large, complex trees -> overfitting
• very short, simple trees -> underfitting
• Approaches to stopping tree construction:
  • rule-based stopping criterion
  • based on pruning (not considered here)

## Rule-based termination criteria

• Rule-based: a criterion is compared with a threshold.
• Variants of criterion:
  • depth of the tree
  • number of objects in a node
  • minimal number of objects in one of the child nodes
  • impurity of classes
  • change of impurity of classes after the split
  • etc.
• Advantages: simplicity, interpretability
• Drawback: specification of a threshold is needed

## CART Cost-Complexity Pruning

• General idea: build the tree up to pure nodes and then prune.
• Define:
  • $T$ - some subtree of our tree
  • $T_t$ - the full subtree with root at node $t$
  • $\tilde{T}$ - the set of leaf nodes of tree $T$
  • $R(t)$ - error measure inside node $t$ (#misclassifications, sum of squared errors)

Error rate of a tree: $$R(T) = \sum\limits_{\tau \in \tilde{T}} R(\tau)$$

Error rate + complexity: $$R_\alpha(T) = \sum\limits_{\tau \in \tilde{T}} R(\tau)+ \alpha |T|$$

• Generally $R(T_t) < R(t)$; however, if we consider $R_\alpha(\cdot)$...
• We can find $\alpha$ such that $R_\alpha(T_t) = R_\alpha(t)$: $$\alpha_t = \frac{R(t) - R(T_t)}{|\tilde{T_t}| - 1}$$

## The algorithm

1. Build the purest tree $T_0$ and set $\alpha_0 = 0$, $i=0$
2. Until the tree is completely pruned do:
   • i++
   • find the node $t$ that minimizes $$\alpha_i = \frac{R(t) - R(T_t)}{|\tilde{T_t}| - 1}$$
   • replace $T_t$ with $t$

Output:
• a sequence $\alpha_0 \leq \alpha_1 \leq \dots \leq \alpha_K$
• with corresponding pruned trees $T_0 \supseteq T_1 \supseteq \dots \supseteq T_K$
• choose the $T_i$ with the lowest error on a validation set

In [43]: fig = interact(plot_dec_reg_alpha, alpha=FloatSlider(min=0, max=0.05, value=0, step=0.0005, readout_format='.4f'))

# Other features

## Tree feature importances

• Consider feature $f$
• Let $T(f)$ be the set of all nodes relying on feature $f$ when making a split.
• efficiency of the split at node $t$: $\Delta I(t)=I(t)-\sum_{c\in children(t)}\frac{n_{c}}{n_{t}}I(c)$
• feature importance of $f$: $\sum_{t\in T(f)}n_{t}\Delta I(t)$

## Handling missing values

1. Remove features or objects with missing values
2. Missing value = distinct feature value
3.
Calculation of impurity w/o missing cases

4. Surrogate split!
   • Find the best split with feature $i^*$, threshold $h^*$ and children $\{t^*_L, t^*_R\}$
   • Find other good splits for features $i_t \neq i^*$, s.t. $\{t_L, t_R\} \approx \{t^*_L, t^*_R\}$
   • While performing prediction for an object $x$:
     • If $x^{i^*}$ is Null, try $x^{i_t}$

## Analysis of decision trees

Pros:
• simplicity of the algorithm
• interpretability of the model (for short trees)
• implicit feature selection
• good for features of different nature:
  • naturally handles both discrete and real features
  • prediction is invariant to monotone transformations of features

## Analysis of decision trees

Cons:
• not very high accuracy:
  • high overfitting of the tree structure
  • a class-separating boundary that is not parallel to the axes may lead to many nodes in the tree for $Q_{t}(x)=x^{i(t)}$
  • the one-step-ahead lookup strategy for split selection may be insufficient (XOR example)
• not online - a slight modification of the training set requires full tree reconstruction

## Special Decision Tree Algorithms

ID3
• Categorical features only
• Number of children = number of categories
• Maximum depth

C4.5
• Handles continuous features
• And categorical ones as in ID3
• On finding a missing value - proceed down all paths and average
• Some pruning procedure

## References

• How tree works
• Mohammed J. Zaki, et al.: Data Mining and Analysis - Fundamental Concepts and Algorithms - Chapter 19
• Andrew R. Webb, et al.: Statistical Pattern Recognition - Chapter 7
• L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Wadsworth, Belmont, CA, 1984.
• Cost-complexity pruning in sklearn + example
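The Gini criterion and the CART threshold search described above can be sketched in a few lines of plain Python. This is a toy illustration of maximizing $\Delta I(t)$ over candidate thresholds for one feature, not the scikit-learn internals:

```python
def gini_of(labels):
    """Gini impurity 1 - sum(p_i^2) of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for c in labels:
        counts[c] = counts.get(c, 0) + 1
    return 1.0 - sum((k / n) ** 2 for k in counts.values())

def best_split(xs, ys):
    """Try every observed value of one feature as threshold h and keep
    the one maximizing Delta I(t) = I(t) - sum_i I(t_i) N(t_i)/N(t)."""
    n = len(ys)
    parent = gini_of(ys)
    best_h, best_gain = None, -1.0
    for h in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= h]
        right = [y for x, y in zip(xs, ys) if x > h]
        if not left or not right:
            continue  # skip degenerate splits
        gain = parent - (len(left) / n) * gini_of(left) \
                      - (len(right) / n) * gini_of(right)
        if gain > best_gain:
            best_h, best_gain = h, gain
    return best_h, best_gain

xs = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
ys = [0, 0, 0, 1, 1, 1]
print(best_split(xs, ys))  # (3.0, 0.5): a perfect split of the two classes
```

Replacing `gini_of` with entropy turns the returned gain into the information gain mentioned above.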
{}
How do you solve -9(-a – 1) = 8a + 14? $a = 5$ Explanation: From the given $- 9 \left(- a - 1\right) = 8 a + 14$ $9 a + 9 = 8 a + 14$ by distributive property $9 a + 9 - 8 a = 8 a + 14 - 8 a$ subtraction property $a + 9 = 14$ $a + 9 - 9 = 14 - 9$ subtraction property $a = 5$ God bless ....I hope the explanation is useful.
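The answer can be double-checked by substituting $a = 5$ back into both sides of the original equation, for instance in Python:

```python
# Substitute a = 5 into -9(-a - 1) = 8a + 14 and compare both sides.
a = 5
lhs = -9 * (-a - 1)
rhs = 8 * a + 14
print(lhs, rhs)  # 54 54

# Isolating a directly (9a + 9 = 8a + 14) gives the same value.
a_solved = 14 - 9
```

Both sides evaluate to 54, confirming the solution.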
{}
# Supremum of constrained $L_1$ norm For a fixed $\mathbf{h}$ in a subset of $\mathbb{C}^m$ such that $\mathbf{h}(k)\neq 0$ for any $k=0,...,m-1$, how can I find $\sup_{\mathbf{x}} \{ \| \mathbf{x} \|_1 \,\,\, \mathrm{ s.t. } \,\,\, \|\mathbf{h}\ast\mathbf{x}\|_1 \leq 1 \}$, where $\mathbf{x} \in \{ \mathbf{y} \in \mathbb{C}^n : \mathbf{h} \ast \mathbf{y} = 0 \Leftrightarrow \mathbf{y} = 0 \} \subset \mathbb{C}^n$ is a vector subspace and $\ast$ denotes convolution? For this, convolution between two vectors $\mathbf{u}\in\mathbb{C}^p$ and $\mathbf{v}\in\mathbb{C}^q$ is the full discrete convolution of length $p+q-1$: $\mathbf{z}(k) = \sum_{j=-\infty}^{\infty} {\mathbf{u}(j) \mathbf{v}(k-j)}$, and for the purposes of computation $\mathbf{u}(j)=0 \; \mathrm{for} \; j\notin[0,p-1]$ and $\mathbf{v}(j)=0 \; \mathrm{for} \; j\notin[0,q-1]$ I would like to understand how to characterize the above for any and all $\mathbf{h}$, however, I'd be happy to start with special cases. For instance, if $\mathbf{h}=[-1,+1]$, then $\|\mathbf{h}\ast\mathbf{x}\|_1$ is the discrete Total Variation, which is well-studied in the literature, albeit from a functional analysis approach. - Did you mean "Given $\mathbf{h}$..."? –  user31373 Jun 10 '12 at 17:57 You could respond to comments by editing your original post. BTW Norbert made that comment because you did not define convolution in the original post, so he had to guess its meaning. Convolution of vectors isn't part of standard math curriculum. Personally, I still don't understand your formula for it. Is the convolution a single number or a vector? So far you defined just one number: $\mathbf{h*x}$ has $n$ in parenthesis, but $n$ is fixed, it's the dimension of the space where $\mathbf{x}$ lives. –  user31373 Jun 11 '12 at 14:07 @user33456 Let $h(1)=0$, then convolution with vector $(0,0,\ldots,10^{10^{10}})$ will give quite a big value for $\Vert x\Vert_1$ while $\Vert h*x\Vert_1$ is $0$. 
–  userNaN Jun 11 '12 at 15:06 Another example: if $h=[-1,1]$, as in the original post, then $\mathbf{h*x}=0$ for all constant vectors $\mathbf{x}$, hence the supremum of $\|\mathbf{x}\|$ is infinite... Unless I still don't interpret convolution correctly. What is the dimension of the vector $h*x$, i.e., what is the allowed range of index in $h*x$ in the formula. –  user31373 Jun 11 '12 at 15:21 Please edit all the information dispersed in the comments into the question to make it self-contained. People shouldn't have to read through the entire comments to understand the question. –  joriki Jun 12 '12 at 2:12
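To make the convolution in the question concrete for the commenters, here is a direct implementation (a plain-Python sketch) of the full discrete convolution defined above. Note that under this definition the boundary terms survive, so for $h=[-1,1]$ a constant $\mathbf{x}$ gives $\|\mathbf{h}\ast\mathbf{x}\|_1 = 2$ rather than $0$:

```python
def full_conv(u, v):
    """Full discrete convolution of length p + q - 1, with u and v
    treated as zero outside their index ranges, as in the question."""
    p, q = len(u), len(v)
    z = [0.0] * (p + q - 1)
    for k in range(p + q - 1):
        for j in range(p):
            if 0 <= k - j < q:
                z[k] += u[j] * v[k - j]
    return z

h = [-1.0, 1.0]
x = [1.0, 1.0, 1.0, 1.0]   # a constant vector
print(full_conv(h, x))      # [-1.0, 0.0, 0.0, 0.0, 1.0]
```

The interior entries are the first differences of $\mathbf{x}$ (the discrete total variation terms), while the two endpoint entries $-x(0)$ and $x(n-1)$ remain, which is relevant to the discussion of whether the supremum is finite.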
{}
# Is there a name for the square of a function plus the square of its Hilbert transform? Given a real-valued analytic function $$f$$ defined on the whole real line, and its Hilbert transform $${\cal H}f$$, it seems that the quantity $$f(x)^2+{\cal H}f(x)^2$$ should have some kind of importance as an energy measure. It is the square of the complex modulus of the analytic extension of $$f$$ to the complex plane: $$f(x)+i{\cal H}f(x)$$. What are keywords (or textbooks, or even academic papers) that focus on this quantity? Engineers often call real valued functions "signals". Denoting Hilbert transform $$\mathcal H\{\cdot\}$$, there exists a concept called Analytic signal denoted $$\mathcal A\{\cdot\}$$, where for a real valued signal $$f(x)$$: $$\mathcal{A}\{f\}(x) = f(x) + i\mathcal{H}\{f\}(x)$$ Then, since $$|a+ib|^2=a^2+b^2$$ for any pair of reals $$a,b$$ we can see that $$|\mathcal A\{f\}(x)|^2=f(x)^2+(\mathcal{H}\{f\}(x))^2$$ • Thank you. I think this is what I was looking for. So I could call it "the square of the envelope of the analytical representation of the function $f$"? It is a bit of a mouthful but I guess that's the best we've got. – Oliver Jan 7 at 1:39
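A quick numerical illustration of the answer, using the standard FFT construction of the analytic signal (the same idea as `scipy.signal.hilbert`, written here with numpy only): for a pure cosine the squared envelope $f^2 + (\mathcal{H}f)^2$ is identically 1.

```python
import numpy as np

n = 1024
t = np.arange(n) * 2 * np.pi / n
f = np.cos(5 * t)                    # a pure cosine, integer cycles

# One-sided spectrum weighting: zero the negative frequencies,
# double the positive ones, keep DC and Nyquist as-is.
F = np.fft.fft(f)
w = np.zeros(n)
w[0] = 1.0
w[1:n // 2] = 2.0
w[n // 2] = 1.0
analytic = np.fft.ifft(F * w)        # f + i * H{f}

envelope_sq = np.abs(analytic) ** 2  # f^2 + (Hf)^2, the squared envelope
print(envelope_sq.max(), envelope_sq.min())  # both very close to 1
```

Here $\mathcal{H}\{\cos(5t)\} = \sin(5t)$, so $\cos^2 + \sin^2 = 1$: the squared envelope is flat even though the signal itself oscillates.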
{}
Chin. Phys. B, 2021, Vol. 30(1): 013301    DOI: 10.1088/1674-1056/abc3b2 ATOMIC AND MOLECULAR PHYSICS Prev   Next # R-branch high-lying transition emission spectra of SbNa molecule Chun-Run Luo(罗春润)1, Qun-Chao Fan(樊群超)1,†, Zhi-Xiang Fan(范志祥)1,‡, Jia Fu(付佳)1,§, Jie Ma(马杰)2, Hui-Dong Li(李会东)1, and Yong-Gen Xu(徐勇根)1 1 School of Science, Research Center for Advanced Computation, Xihua University, Chengdu 610039, China; 2 State Key Laboratory of Quantum Optics and Quantum Optics Devices, Laser Spectroscopy Laboratory, College of Physics and Electronics Engineering, Shanxi University, Taiyuan 030006, China Abstract  The calculation results of the R-branch transition emission spectra of (0-0) band of the $$A_2 1\to X_2 1$$ transition system of SbNa molecule are presented in this paper. These R-branch high-lying transitional emission spectral lines are predicted by using the difference converging method (DCM). Our results show excellent agreement between DCM spectral lines and the experimental values, and the deviations are controlled within 0.0224 cm-1. What is more, based on the principle of over-determined linear equations, the prediction error is quantified in this work, which provides reliable theoretical support for our predicted DCM calculations. This work provides a lot of useful information for understanding the microstructure of SbNa molecule. Keywords:  SbNa      transitional spectral lines      R-branch      difference converging method Received:  12 September 2020      Revised:  21 October 2020      Accepted manuscript online:  22 October 2020 PACS: 31.13.-P 33.20.Vq (Vibration-rotation analysis) Fund: Project supported by the Sichuan Education Department Project, China (Grant No. 17ZA0369), the Fund for Sichuan Distinguished Scientists of China (Grant Nos. 2019JDJQ0050 and 2019JDJQ0051), the National Natural Science Foundation of China (Grant Nos. 
61722507 and 11904295), and the State Key Laboratory Open Fund of Quantum Optics and Quantum Optics Devices, Laser Spectroscopy Laboratory, China (Grant Nos. KF201811 and KF2020003). Corresponding Authors:  Corresponding author. E-mail: fanqunchao@sina.com Corresponding author. E-mail: fanzhixiang235@126.com §Corresponding author. E-mail: fujiayouxiang@126.com
{}
## Summer Problem Solving for the Young, the Very Young, and the Young at Heart Here is yet another wonderful summer math opportunity for homeschoolers or anyone who works with kids: a free, 3-week mini-course on math problem solving for all ages. The course is being organized by Dr. James Tanton, Dr. Maria Droujkova, and Yelena McManaman. The course participants include families, math clubs, playgroups, and other small circles casually exploring adventurous mathematics with kids of any age. And then the real fun begins! ## Math Teachers at Play #63 via Math Jokes 4 Mathy Folks Hooray for Friday! Let’s celebrate by visiting this month’s Math Teachers at Play blog carnival, featuring mathematical activities, lessons, and games for all ages: Hmm… let’s see… now where did I put my notes? I know that this is supposed to be the Math Teachers at Play blog carnival… but which one? Maybe the following puzzle will help. In the grid below, do the following: • Circle any number, then cross out the other numbers in the same row and column. ## What Do You Notice? What Do You Wonder? If you want your children to understand and enjoy math, you need to let them play around with beautiful things and encourage them to ask questions. Here is a simple yet beautiful thing I stumbled across online today, which your children may enjoy: It reminds me of string art designs, but the app makes it easy to vary the pattern and see what happens. • What questions can they ask? I liked the way the app uses “minutes” as the unit that describes the star you want the program to draw. That makes it easier (for me, at least) to notice and understand the patterns, since minutes are a more familiar and intuitive unit than degrees, let alone radians. ## Summer School for Parents, Teachers: How to Learn Math Here’s an interesting summer learning opportunity for homeschooling parents and classroom teachers alike. 
Stanford Online is offering a free summer course from math education professor and author Jo Boaler: Boaler’s book is not required for the course, but it’s a good read and should be available through most library loan systems. ## Quotable: Learning the Math Facts feature photo above by USAG- Humphreys via flickr (CC BY 2.0) During off-times, at a long stoplight or in grocery store line, when the kids are restless and ready to argue for the sake of argument, I invite them to play the numbers game. “Can you tell me how to get to twelve?” My five year old begins, “You could take two fives and add a two.” “Take sixty and divide it into five parts,” my nearly-seven year old says. “You could do two tens and then take away a five and a three,” my younger son adds. Eventually we run out of options and they begin naming numbers. It’s a simple game that builds up computational fluency, flexible thinking and number sense. I never say, “Can you tell me the transitive properties of numbers?” However, they are understanding that they can play with numbers. I didn’t learn the rules of baseball by filling out a packet on baseball facts. Nobody held out a flash card where, in isolation, I recited someone else’s definition of the Infield Fly Rule. I didn’t memorize the rules of balls, strikes, and how to get someone out through a catechism of recitation. ## Conversational Math The best way for children to build mathematical fluency is through conversation. For more ideas on discussion-based math, check out these posts: ## Learning the Math Facts For more help with learning and practicing the basic arithmetic facts, try these tips and math games: ## How To Master Quadratic Equations feature photo above by Junya Ogura via flickr (CC BY 2.0) A couple of weeks ago, James Tanton launched a wonderful resource: a free online course devoted to quadratic equations. (And he promises more topics to come.) Kitten and I have been working through the lessons, and she loves it! 
We’re skimming through pre-algebra in our regular lessons, but she has enjoyed playing around with simple algebra since she was in kindergarten. She has a strong track record of thinking her way through math problems, and earlier this year she invented her own method for solving systems of equations with two unknowns. I would guess her background is approximately equal to an above-average algebra 1 student near the end of the first semester. After few lessons of Tanton’s course, she proved — within the limits of experimental error — that a catenary (the curve formed by a hanging chain) cannot be described by a quadratic equation. Last Friday, she easily solved the following equations: $\left ( x+4 \right )^2 -1=80$ and: $w^2 + 90 = 22 w - 31$ and (though it took a bit more thought): $4x^2 + 4x + 4 = 172$ We’ve spent less than half an hour a day on the course, as a supplement to our AoPS Pre-Algebra textbook. We watch each video together, pausing occasionally so she can try her hand at an equation before listening to Tanton’s explanation. Then (usually the next day) she reads the lesson and does the exercises on her own. So far, she hasn’t needed the answers in the Companion Guide to Quadratics, but she did use the “Dots on a Circle” activity — and knowing that she has the answers available helps her feel more independent.
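As a sketch (not how Tanton's course presents it, which builds on completing the square), the three equations above can be checked against the quadratic formula:

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of ax^2 + bx + c = 0, assuming the discriminant >= 0."""
    d = b * b - 4 * a * c
    r = math.sqrt(d)
    return sorted({(-b - r) / (2 * a), (-b + r) / (2 * a)})

# (x + 4)^2 - 1 = 80   ->   x^2 + 8x - 65 = 0
print(solve_quadratic(1, 8, -65))    # [-13.0, 5.0]
# w^2 + 90 = 22w - 31  ->   w^2 - 22w + 121 = 0  (a double root)
print(solve_quadratic(1, -22, 121))  # [11.0]
# 4x^2 + 4x + 4 = 172  ->   x^2 + x - 42 = 0
print(solve_quadratic(1, 1, -42))    # [-7.0, 6.0]
```

The middle equation factors as $(w-11)^2 = 0$, which is exactly the kind of perfect square the course trains students to spot.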
{}
# Things on a Heap | A Collection of Programming Ramblings by chjdev

Reading scientific papers on your Kindle (or other eBook reader) usually sucks. The text is usually only available as PDF or PS files, formatted for printing on A4 or US Letter paper. A two-column layout is also very common, which further complicates things. In this post I show you a simple way to get these papers onto your eBook reader for comfortable reading.

## Step 1: Preprocessing with BRISS

First we will preprocess the file a bit to make the next step easier / more successful. Using the cool little BRISS tool we will crop out unnecessary parts and leave only the main text area. The idea is to get rid of line numbers, notes in the margin (e.g. the arXiv line in our test document), etc. BRISS is a graphical tool. You can use the menu to load the PDF or just start it from the terminal:

briss Text\ Understanding\ from\ Scratch.pdf

You will be prompted to enter the range of pages that will be analyzed to find the main text body. Usually it's fine to just leave it at the default. BRISS now tries to find the main text area. Tweak the boxes until they only cover the relevant text and crop the PDF by clicking Action > Crop PDF. We now have a PDF document with all possibly misleading fluff cut out and can move on to the next step.

## Step 2: Optimizing with k2pdfopt

To optimize the cropped PDF for our Kindle we'll use the k2pdfopt tool. It has a plethora of options suiting many needs, but the default modes usually work fine.

./k2pdfopt -ppgs -dev kpw -mode 2col Text\ Understanding\ from\ Scratch_cropped.pdf

And that's it, now you have a Kindle-optimized PDF!

Warning: the default modes include the -n flag, which enables native PDF output. This is the preferable mode since it leads to smaller, better files, because it uses native PDF instructions instead of rendering the pages to bitmaps.
However, the Kindle (at least the 1st-gen Paperwhite) may crash opening files generated with this option, because it runs out of memory. This forced me to factory reset my device a couple of times during first experiments. Solution: either disable native output by specifying -n- (leading to bigger, uglier files), or install Ghostscript (if you haven't already) and include the -ppgs option. This will post-process the file using Ghostscript and fix the issue.

You have a question or found an issue?
{}
Browse Questions

# In the given diagram the resistance of each diode is zero in forward bias and infinite in reverse bias. If a 2V battery is connected as shown in the circuit, what is the effective current?

$(a)\;2A \\ (b)\;0A \\ (c)\;3.1 \;\Omega \\ (d)\;none\;of\;the\;above$

One of the diodes is forward biased and the other is reverse biased.
$\therefore$ effective resistance $= 1+ \infty$
effective current $=\large\frac{2}{1+\infty}$ $\qquad= 0A$
Hence (b) is the correct answer.
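The series-resistance reasoning above can be mirrored in a two-line computation; IEEE floats handle the infinite reverse-bias resistance directly (the 1 Ω value is the series resistance from the posted solution):

```python
V = 2.0                      # battery voltage, volts
R_forward = 1.0              # series resistance with the conducting diode
R_reverse = float("inf")     # ideal reverse-biased diode
I = V / (R_forward + R_reverse)
print(I)  # 0.0
```

Any finite voltage divided by an infinite resistance gives zero current, matching answer (b).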
{}
Then I will go on to explain the frequency of a carrier signal in relation to the signal being carried. 0000067088 00000 n 0000061320 00000 n 0000060784 00000 n 0000063483 00000 n a) Find for Nyquist rate sampling. The line magnitude drops exponentially, which is not very fast. 0000051170 00000 n 0000005930 00000 n 0000005878 00000 n b. a PM signal. Below we illustrate an FM modulated signal in which the center frequency is 500 kHz. 3) see the signal that u obtaine4) take its … For that situation you'll have the zoom-fft method. 0000071215 00000 n These frequency are called deviation frequency and center frequency. Its spectrum extends from 40kHz to 60 kHz. 0000057846 00000 n 0000060238 00000 n 0000070283 00000 n d. shape . In analog frequency modulation, such as radio broadcasting, of an audio signal representing voice or music, the instantaneous frequency deviation, i.e. It was obtained for ultrasound imaging purpose. The wide range of frequencies is evident by observing the minimum amplitude of the baseband, when the modulated frequency is very small. 14. 0000008022 00000 n 0000008045 00000 n 0000074977 00000 n Specify the chirp so that it is symmetric about the interval midpoint, starting and ending at a frequency of 250 Hz and attaining a minimum of 150 Hz. The low-pass x(t) is the baseband form of x RF (t), and has a spectrum shape (X ω)) which is the same as that of x RF (t) (X RF (ω)), as shown in Figure 8.2. 0000007138 00000 n As with the usual sampling theorem (baseband), we know that if we sample the signal at twice the maximum frequency i.e Fs>=2*1.01MHz=2.02 MHz there should be no problem in representing the analog signal in digital domain. 0000032855 00000 n b. frequency … 0000062985 00000 n Thus, a station broadcasting at 103.1 actually sends signals whose frequencies range from 103.0 to 103.2. 0000004264 00000 n Another analog modulation technique is frequency modulation (FM) 9. 
The simple spectral flipping process of multiplying a signal's time samples by (-1) n does not solve our problem because that process would result in the undesirable spectrum shown in Figure 1(c). 0000007346 00000 n 0000064142 00000 n Figure 8. 0000063244 00000 n 0000005670 00000 n 0000007398 00000 n 0000007554 00000 n These separate peaks can be recognized on the crude calibration scan image shown right where signal intensity of the entire imaged volume is displayed as a function of frequency. I am going to assume you mean carrier signal. 7.2 INTRODUCTION. Problem 13 • An FM signal has a center frequency of 154.5 MHz but is swinging between 154.45 • MHz and 154.55 MHz at a rate of 500 times per second. 0000070770 00000 n 656. 0000066481 00000 n 0000072105 00000 n The frequency of the signal is measured by the number of times the signal oscillates in per second. 0000067544 00000 n According to Nyquist theorem, it should be more than twice of the signal frequency. 0000045223 00000 n 0000059074 00000 n Let this signal be called y(t). The function was tested with the chirp signal in chirp.mand the result is shown in Fig. (Theoretically it can run from 0 to infinity, but then the center frequency is no longer 100KHz.) 0000038591 00000 n The signal characteristics described in this chapter pertain to the B1C signal contained within the 32.736 MHz bandwidth with a center frequency of 1575.42MHz. 0000077616 00000 n It shows that even though speech can have frequencies as high as 10 kHz, much of the spectrum is concentrated within 100 to 700 Hz, with it sounding quite natural when the bandwidth is restricted to 3 kHz . This puts the center frequency at (2 kHz)*3.16 = 6.32 kHz. The frequency spectrum of a typical speech signal is shown in Fig. Is it only one in the signal? 
There are a couple options for finding the frequency of an analog input signal: There is an example shipped with LabVIEW showing an approach using Extract Single Tone Information VI: In Example Finder, open Analysis, Signal Processing and Mathematics » Signal Processing » Single Tone Measurements.vi. modulated gaussian pulse (center frequency), High-Frequency Ranges of a voice signal (description of). The simple spectral flipping process of multiplying a signal's time samples by (-1) n does not solve our problem because that process would result … 0000059974 00000 n 0000020511 00000 n 0000004241 00000 n (Theoretically it can run from 0 to infinity, but then the center frequency is no longer 100KHz.) 0000005982 00000 n The signal frequency will then be: Here is a piece of R code which implements continuous wavelet transform on some signal (using the biwavelet package).. 0000075875 00000 n 0000071652 00000 n 0000026767 00000 n As a result, the modulated signal will have instantaneous frequencies from 75 kHz to 925 kHz. 6) A real bandpass signal has center frequency fo as shown in the following Figure (shown only for positive frequencies). In that case, the center frequencies are on the order of 30,000 times the bandwidth. In the graph below, the FM deviation has been selected as 425 kHz. 0000005722 00000 n 0000006190 00000 n The length of the vector equals the number of frequency bands. ; Use LabVIEW's built-in signal analysis Express VIs, e.g. 0000076261 00000 n 0000072999 00000 n 9.7a. ©Yao Wang, 2006 EE3414: Signal Characterization 23 More on Bandwidth • Bandwidth of a signal is a critical feature when dealing with the transmission of this signal • A communication channel usually operates only at certain frequency range (called channel bandwidth) – The signal will be severely attenuated if it contains For example, at 100KHz (frequency), a signal can run from 0 to 200KHz. 
Notice that the center of rotation of the desired spectral flipping is not f_s/4, but is instead the signal's center frequency f_cntr. The result is a spectrum showing that the frequency 40 Hz dominates in the whole signal. Below we illustrate an FM modulated signal in which the center frequency is 500 kHz. We sample the signal at a rate of f_s. Quite often, the mean frequency is defined as an energy-weighted average: $$\overline{\omega}_2 = \frac{\int_{\mathbb{R}^+} \omega|X(\omega)|^2\,d\omega}{\int_{\mathbb{R}^+}|X(\omega)|^2\,d\omega}\,.$$ Looking at its time-domain behavior (Figure 2) does not expose much about the signal. We see that a low-frequency signal in the frequency range $0 \le f \le f_{max}$ (baseband signal) can be transmitted as a signal in the frequency range $f_c - f_{max} \le f \le f_c + f_{max}$ (an "RF" (radio frequency) signal). The frequency of the chirp signal varies with time, hence the amplitude of each frequency component does not stay the same. $H(\omega)$ is the Fourier transform of the impulse response $h(t)$. Band-stop filter: all frequencies within a distance from the "center" are removed. The frequency spectrum of this signal contains four power peaks, representing the singlet of S_LFS(t), the center frequency of S_AM(t), and the two sidebands of S_AM(t). Consider the case when a 10 kHz sine wave is modulating a 5 MHz carrier signal. There are two contributions on the MATLAB Central File Exchange; just search for 'zoom fft'. Hence the center frequency of an FM radio signal is about 500 times greater than its bandwidth. Note: to get from indexMax to the actual frequency of interest, you will need to know the length L of the FFT (the same as the length of your signal) and the sampling frequency Fs.
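As a quick illustration (not from the original text), the energy-weighted mean frequency defined above can be approximated from samples with an FFT. The 40 Hz test tone and 1000 Hz sample rate are assumed values chosen to match the other examples in this section:

```python
import numpy as np

def mean_frequency(x, fs):
    # Discrete version of the power-weighted mean frequency over
    # positive frequencies: sum(f * |X(f)|^2) / sum(|X(f)|^2).
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

fs = 1000.0                        # assumed sampling rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 40 * t)     # pure 40 Hz test tone

print(mean_frequency(x, fs))       # close to 40.0
```

For a pure tone the estimate lands on the tone's frequency; for a broadband signal it gives the spectral centroid, which is one common definition of a signal's center frequency.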
Consider a real bandpass signal x(t) with a center frequency of 50 kHz. The 0 dB level is the level of the peak of the scope response. The samples are held at the same value until the next sample, and the process repeats (sample-and-hold operation). A simple way, if you have used the Fourier transform to find the spectrum and need to know by how much it is shifted: 1) find the impulse response corresponding to that spectrum. In practice, however, we must work with finite-length signals. Therefore, only a finite segment of a sinusoid can be processed at any one time, as if we were looking at the sinusoid through a "window" in time. We speak of continuous-time frequency if there is real time associated with it (for instance cos(ω1t)). For a signal of 78 kHz, take a sample frequency of 200 kHz, for example. The signal is composed of a sine wave with frequency 40 Hz and random noise. Find the approximate bandwidth of the frequency-modulated signal. The value of the signal at time n = 0 is x[0] = C1 cos φ1. You should specify what the center frequency is. The resulting beat waveform comprises the center-frequency signal enclosed in an envelope having the deviation frequency. To have the same amplitude for all frequencies, the signal needs to have one complete cycle for each frequency component. The width of the signal's spectrum does not change, but the center frequency is shifted down to the intermediate frequency.
Setting the center frequency also allows identification of, and tuning on, a proton species of interest. By virtue of their chemical shifts, water and fat protons resonate at slightly different frequencies (about 220 Hz difference at 1.5 T). The carrier frequencies, modulations, and symbol rates of the B1C signal are shown in the following table. How can the frequency of a signal be calculated without knowing the sampling frequency? However, if demodulation is done first and the measured signal is in the baseband, the maximum frequency is 80/2 MHz, which means that the sampling frequency should be at least 80 MHz. Offset-tune away from it on your center frequency (which means every flowgraph I make or download I'm going to have to custom-change to actually get a clean center-frequency signal). An FM signal has a center frequency of 154.5 MHz but is swinging between 154.45 MHz and 154.55 MHz at a rate of 500 times per second. Its index of modulation is: a. 50,000; b. 100; c. 500; d. 100,000. The center frequency is usually defined as either the arithmetic mean or the geometric mean of the lower cutoff frequency and the upper cutoff frequency of a band-pass or band-stop system. Frequency modulation (FM) is the encoding of information in a carrier wave by varying the instantaneous frequency of the wave. For example, if you have a bandpass filter from 2 kHz to 20 kHz, it covers a 10:1 range. It is very common to use a mean/center definition based on an energetic weight, i.e., the square of the absolute spectrum. The amount of frequency deviation is proportional to the amplitude of the intelligence signal in an FM signal. Frequency analysis of continuous-time signals: if a continuous-time signal x_a(t) is periodic with fundamental period T0, then it has fundamental frequency F0 = 1/T0.
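The FM modulation-index problem above (154.5 MHz center, swinging between 154.45 and 154.55 MHz, 500 times per second) can be checked directly: the peak deviation is half the total frequency swing, and the modulation index is the ratio of peak deviation to modulating frequency. A small sketch, with the values taken from the problem statement:

```python
# FM signal swinging between 154.45 MHz and 154.55 MHz, 500 times per second.
f_high = 154_550_000.0
f_low = 154_450_000.0
f_mod = 500.0                      # modulating rate in Hz

deviation = (f_high - f_low) / 2   # peak deviation from center: 50 kHz
index = deviation / f_mod          # modulation index = delta_f / f_m

print(deviation)  # 50000.0
print(index)      # 100.0, i.e. answer (b)
```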
Center frequencies in Hz, returned as a row vector. Answer: We first check to see whether this is narrowband FM or wideband FM. Our desire is to sample the AM signal. If you see discrete time n (for instance cos(ω1n)), you should know we are talking about normalized angular frequency. If a signal x_RF(t) is band-limited around the center (or carrier) RF frequency f_c, it can be described by $$x_{RF}(t) = \mathrm{Re}\left\{x(t)\exp(j 2\pi f_c t)\right\}, \tag{8.1}$$ where x(t) is a complex low-pass signal. The −3 dB cutoff points are also referred to as the lower cutoff frequency and upper cutoff frequency of a filter circuit. The amount of frequency deviation from the carrier center frequency in an FM transmitter is proportional to the amplitude of the modulating signal. I want to know the center frequency of the signal. a) Give the block diagram of a system that transforms the above signal to a bandpass signal with the same carrier frequency but with a spectrum that is the mirror image about the carrier frequency (for positive frequencies). You bought a monitor that claims to offer a refresh rate of 120 to 144 Hz but may not be keeping that promise. Simpler quadrature demodulation: We know from the previous chapter that quadrature demodulation is an important technique in modern RF systems. Hello, I have a band-limited signal.
The term and the technology are used in computing, signal processing, and telecommunications. PXIe-5694: the default value for the PXIe-5694 is 193.6 MHz, unless you set the Signal Conditioning Enabled property to Bypassed, in which case the default value is 187.5 MHz. In the mixing products after a complex mix, or in the high-side sum image, the 10 MHz baseband component will still be 10 MHz above the center frequency. Here, the spectrum of the signal readily shows the frequency of the signal of interest, and can help recover it. If the bandwidth is 4 MHz and the center frequency is 8 … Filling in a couple of blanks in your question, my guess is that you're undersampling your band-limited signal at regular intervals. The higher the frequency, the more bandwidth is available. Fractional bandwidth is the bandwidth of a device, circuit, or component divided by its center frequency. Pseudo-Wigner distribution of a linear chirp signal: as expected, the time-frequency representation clearly shows a linearly increasing frequency characteristic with increasing time. The very sharp transitions in square waves call for very high-frequency sinusoids to synthesize. For a chirp, the frequency continuously changes from one time instant to the next; you cannot pin-point a cycle. (It's huge and right in the middle of my spectrum!)
The mid-band gain is the gain of the range of frequencies that lies between the lower frequency and the upper frequency. When the Specification is set to 'Coefficients', the center frequency is determined from the CenterFrequencyCoefficient value and the sample rate. The frequency of a chirp signal can vary from low to high (up-chirp) or from high to low (down-chirp). However, as (1) shows, the low-side image is the difference between the frequencies present and the reference oscillator frequency. What do they really mean by maximum frequency anyway? If I have a signal $f(t) = 2048 + 700\cos(2\pi \cdot 31.25\,t) - 1100\sin(2\pi \cdot 125\,t)$, how do I go about finding the maximum frequency? During the first second, the signal consists of a 400 Hz sinusoid and a concave quadratic chirp. The higher the frequency, the more bandwidth is available. Angle modulation is a modulation in which the angle of the carrier wave is varied from its reference value. This chapter introduces techniques for determining the frequency content of signals. In electrical engineering and telecommunications, the center frequency of a filter or channel is a measure of a central frequency between the upper and lower cutoff frequencies. Problem 3: Bandwidth of an FM signal (10 points). A 100 MHz carrier signal is frequency modulated by a sinusoidal signal of 75 kHz, such that the frequency deviation is Δf = 50 kHz.
The eigenvalue corresponding to the complex exponential signal with frequency $\omega_0$ is $H(\omega_0)$, where $H(\omega)$ is the frequency response. How can frequency components be found from a signal? The center is then halfway between these in ratio terms, which is the square root of 10, i.e. 3.16. What should the sampling frequency of such a signal be? (The autocorrelation will be symmetric, with its maximum in the middle.) The "3 dB" point is where the signal output is reduced by about 30%. Then we create a time vector t. Clockwise: band-pass, high-pass, and low-pass filters. Spectrum of a windowed sinusoid: ideal sinusoids are infinite in duration. [maxValue, indexMax] = max(abs(fft(signal - mean(signal)))), where indexMax is the index at which the maximum FFT value can be found. So the AM signal contains three frequency components, at 0.99 MHz, 1 MHz, and 1.01 MHz. Figure 3 demonstrates the Fourier spectrum of three phase-amplitude-coupled signals, with low-frequency signals of 2, 6, and 10 Hz modulating a high-frequency signal of 40 Hz.
This is done primarily via the Fourier transform, a fundamental tool in digital signal processing. We also introduce the related but distinct DCT, which finds a great many applications in audio and image processing. The Wigner distribution gives the best time-frequency resolution. For simplicity, and somewhat arbitrarily, we take the sampling frequency as 1000 samples/second. Band-pass filter: all frequencies outside a distance from the "center" are removed. Essential bandwidth is the portion of the frequency spectrum that contains most of the signal energy. By finding that maximum, you find the dominant frequency. Assigning base frequencies in increments of 0.2 MHz gives each station 100 kHz of room on either side of the center frequency for its frequency modulation. Let's imagine we have a signal and we don't know its sampling frequency; as a guess we take 1000. The signal frequency will then be: frequency = indexMax * Fs / L. Alternatively — faster, and working fairly well too, depending on the signal you have — take the autocorrelation of your signal: autocorrelation = xcorr(signal), and find the first maximum occurring after the center point of the autocorrelation. 2) Convolve it with noise. The center frequency represents the midpoint frequency between the −3 dB cutoff frequencies of a bandpass or notch filter. Here f_0 is the center frequency, f_H is the higher cutoff frequency, and f_L is the lower cutoff frequency. In the U.S. digital cellular system, 30-kHz channels are used at frequencies near 900 MHz. φ1 is an initial phase [rad].
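The indexMax recipe above is MATLAB. A Python equivalent — a sketch with an assumed 40 Hz tone and 1000 Hz sampling rate, matching the other examples in this section — looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0                          # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 40 * t) + 0.1 * rng.standard_normal(t.size)

# Remove the mean, take the FFT magnitude, and locate its peak bin.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
index_max = int(np.argmax(spectrum))

frequency = index_max * fs / signal.size   # bin index -> Hz
print(frequency)                           # 40.0 for this test signal
```

Subtracting the mean before the FFT suppresses the DC bin, so the peak search is not fooled by a constant offset in the signal.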
## The Institute for Particle Physics Phenomenology UK National Institute for Particle Physics Phenomenology Welcome to the Institute for Particle Physics Phenomenology The IPPP was founded in 1999 as the UK’s national centre for particle phenomenology, researching the properties and behaviour of the most fundamental building blocks of nature. Since then, we have grown to become one of the largest particle phenomenology groups in the world. Our research sits at the interface between theoretical particle physics and experiments ranging from particle colliders to gravitational wave detectors. We host an extensive programme of workshops and conferences for the international particle physics community, helping to shape the future of particle physics in the UK. Promoting public understanding of fundamental physics is also central to the IPPP’s goals, particularly with outreach to schools and teachers as well as online content. ## Recent IPPP Preprints #### Higher-order EW corrections in ZZ and ZZj production at the LHC Enrico Bothmann, Davide Napoletano, Marek Schönherr, Steffen Schumann, Simon Luca Villani #### MicroBooNE and the $$\nu_e$$ Interpretation of the MiniBooNE Low-Energy Excess C.A. Argüelles, I. Esteban, M. Hostert, K.J. Kelly, J. Kopp, P.A.N. Machado, I. Martinez-Soler, Y.F. Perez-Gonzalez #### Effective limits on single scalar extensions in the light of recent LHC data Anisha, Supratim Das Bakshi, Shankha Banerjee, Anke Biekötter, Joydeep Chakrabortty, Sunando Kumar Patra, Michael Spannowsky #### QED Parton Distribution Functions in the MSHT20 Fit T. Cridge, L.A. Harland-Lang, A.D. Martin, R.S. Thorne ## News November 1, 2021 The IPPP mourns the sudden loss of Graham Ross. Graham was an exceptional scientist of international renown and a kingpin of theoretical physics in the UK. He was well known for his outstanding... October 22, 2021 We congratulate Andrew Blance for defending his PhD thesis. 
For his PhD, Andrew developed novel (quantum) machine learning methods to search for new physics in LHC data. Thus, his work is based on... ## Forthcoming Workshops #### Annual Theory Meeting December 14, 2021 - December 16, 2021 #### YTF 21 December 16, 2021 - December 17, 2021 #### UK HL-LHC physics workshop April 19, 2022 - April 21, 2022 ## Forthcoming Seminars December 9, 2021 #### Internal Seminar by Rachel Houtz December 10, 2021
# teaching machines

## Starring Matariki

During our last week in New Zealand, we attended the Matariki Festival at my sons' school. Matariki is the Māori name for one of the stars that becomes visible in June, marking the start of a new growing season. The school celebrated with song and dance and an art show. For the art show, one of my sons had designed a star-shaped plot, but neither he nor his teacher could find it on display. He was sad, a severe case of art-ache.

Tonight after supper we recreated his artwork. First we wrote a small Twoville program to plot three axes with discrete tick marks:

span = 240
gap = 30
n = 3

with viewport
  center = [0, 0]
  size = [span * 2, span * 2]

to dot(p)
  with circle()
    opacity = 0
    center = -p
    stroke.size = 1
    stroke.color = :black

for degrees to 180 by 180 / n
  endpoint = [span, degrees + 90].toCartesian()
  with line()
    color = :black
    size = 1
    vertex().position = endpoint
    vertex().position = -endpoint
  for i in gap..span by gap
    p = [i, degrees + 90].toCartesian()
    dot(p)
    dot(-p)

We exported to SVG and plotted six of these patterns with an AxiDraw. Then we connected the nodes with lines and colored in the patchwork. Here are five of the completed stars:

Which one do you think is mine? The sixth didn't get finished because its owner was too busy staring at and not eating some green peppers.
# Problems involving Geometric Progressions - Normal Problems with Solutions

Problem №1 Given a geometric progression $\{a_n\}$ for which $a_1=15$ and $q=-4$, find its sixth member.

Problem №2 Find the second member of a geometric progression $\{a_n\}$ which satisfies $\begin{array}{|l}a_2+a_5-a_4=10\\a_3+a_6-a_5=20\end{array}$

Problem №3 Express $0.\overline{27} = 0.272727\ldots$ as a fraction.

Problem №4 The sum of the members of an infinite geometric progression is $S_1=6$. The sum of the squares of all members of the same progression is $S_2=18$. Find the first member of the progression.

Problem №5 Determine the quotient $q$ of a geometric progression $\{a_n\}$ for which $a_1=1$ and $S_4=40$.

Problem №6 Find the quotient $q$ of an infinite geometric progression $\{a_n\}$ for which $S=15$ and $a_1=9$.

Problem №7 Find the quotient $q$ of an infinite geometric progression $\{a_n\}$ for which $S=7$ and $a_1=4$.

Problem №8 Find the sum of the first four members of the geometric progression $\{a_n\}$ for which $a_n=\frac{2\cdot 3^n}{5}$.

Problem №9 Find the sum of the infinite geometric progression explicitly defined by $a_n=\frac{2^n}{3^{n+1}}$.

Problem №10 Find the sum of the infinite geometric progression $a_n=6\cdot\left(\frac{1}{3}\right)^n$.

Problem №11 Find the product of the first 7 members of the geometric progression $\{a_n\}$ defined by $a_1=\frac{2}{11^3}$, $q=11$.

Problem №12 Let $\{a_n\}$ be an alternating geometric progression. If $a_1=5$ and $a_7=405$, determine the value of $a_4$.

Problem №13 Find the sum of the first 5 powers of 7.

Problem №14 Let $\{a_n\}$ be a geometric progression defined by $a_1=2$ and $q=-2$. Find the sum of its first 10 elements.

Problem №15 Find the first term of a geometric progression whose second term is 2 and whose sum to infinity is 8.

Problem №16 Let $x_1, x_2$ be the roots of the equation $x^2-3x+a=0$ and $y_1,y_2$ be the roots of the equation $x^2-12x-b=0$. If $x_1,x_2,y_1,y_2$ form an increasing geometric progression in that order, determine the value of $a\cdot b$.
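Most of these problems reduce to the standard sum formulas $S_n = a_1\frac{q^n-1}{q-1}$ and, for $|q|<1$, $S = \frac{a_1}{1-q}$. A quick numerical check of two of them (Python, illustrative only):

```python
def partial_sum(a1, q, n):
    # Sum of the first n terms: a1 * (q**n - 1) / (q - 1), valid for q != 1.
    return a1 * (q ** n - 1) / (q - 1)

def infinite_sum(a1, q):
    # Sum of an infinite geometric progression, valid for |q| < 1.
    return a1 / (1 - q)

# Problem 14: a1 = 2, q = -2, sum of the first 10 terms.
print(partial_sum(2, -2, 10))            # -682.0

# Problem 4: a1 = 4, q = 1/3 reproduces S1 = 6 and S2 = 18, since the
# squares themselves form a GP with first term a1**2 and ratio q**2.
a1, q = 4.0, 1.0 / 3.0
print(infinite_sum(a1, q))               # ~6.0
print(infinite_sum(a1 ** 2, q ** 2))     # ~18.0
```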
# Calculate information matrix for graph slam

I am new to SLAM. I am working on graph SLAM, where I need to do pose graph optimisation. For this, an information matrix is required for each edge between two nodes, for which the transformation has already been computed using iterative closest point (ICP). I used the Open3D library to compute the transformation as well as the information matrix. However, I don't know what the information matrix is or how it is calculated given the transformation between two nodes. I would also appreciate the mathematical derivation.

The information matrix is just the inverse of the covariance matrix. I recommend you read up on the covariance matrix. Essentially it encodes how certain you are in your measurements: the smaller the entries, the more certain you are. As an example, consider the translation between nodes (ignoring rotation for now). Your covariance matrix should look something like this: $$\begin{bmatrix} \sigma_{xx}^2 & \sigma_{xy}^2 & \sigma_{xz}^2\\ \sigma_{yx}^2 & \sigma_{yy}^2 & \sigma_{yz}^2 \\ \sigma_{zx}^2 & \sigma_{zy}^2 & \sigma_{zz}^2 \end{bmatrix}$$ where $\sigma_{aa}^2$ is the variance of your $a$ measurement, and $\sigma_{ab}^2$ is the covariance between $a$ and $b$ (in many situations people just set this to 0). Ideally these values should be determined experimentally, but in most cases people just put in values that intuitively feel right. E.g. for a translation edge we might set it to $$\begin{bmatrix} 0.1^2 & 0 & 0\\ 0 & 0.1^2 & 0 \\ 0 & 0 & 0.1^2 \end{bmatrix}$$ because we believe our translation measurement is accurate to about 0.1 meter. We have also decided to ignore the covariance terms.
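If you do want to determine the entries experimentally, the standard recipe is to repeat the same measurement many times and take the sample covariance. A minimal NumPy sketch; the "measurements" here are simulated with 0.1 m Gaussian noise purely for illustration, standing in for something like running ICP repeatedly on perturbed inputs:

```python
import numpy as np

# Hypothetical repeated measurements of the same 3-D translation (meters),
# simulated with 0.1 m noise per axis for this example.
rng = np.random.default_rng(0)
true_t = np.array([1.0, 0.5, 0.0])
samples = true_t + 0.1 * rng.standard_normal((2000, 3))

# Rows are samples, columns are x, y, z; rowvar=False gives the 3x3 matrix.
cov = np.cov(samples, rowvar=False)
print(np.round(cov, 4))  # close to diag(0.01, 0.01, 0.01)
```

With real data you would feed in the repeated measurements instead of the simulated `samples` array; the off-diagonal entries then tell you whether the "ignore the covariance terms" shortcut is actually justified.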
The information matrix is the inverse of this matrix, so in our case it would be $$\begin{bmatrix} \sigma_{xx}^2 & \sigma_{xy}^2 & \sigma_{xz}^2\\ \sigma_{yx}^2 & \sigma_{yy}^2 & \sigma_{yz}^2 \\ \sigma_{zx}^2 & \sigma_{zy}^2 & \sigma_{zz}^2 \end{bmatrix}^{-1} =\begin{bmatrix} 0.1^2 & 0 & 0\\ 0 & 0.1^2 & 0 \\ 0 & 0 & 0.1^2 \end{bmatrix}^{-1} = \begin{bmatrix} 100 & 0 & 0\\ 0 & 100 & 0 \\ 0 & 0 & 100 \end{bmatrix}$$ Note that since the information matrix is the inverse of the covariance, it has the opposite interpretation: the larger the entries, the better the measurement. Essentially a bigger number means that we have more information about that quantity. I mentioned that you can set the values intuitively or determine them experimentally; another approach you can use, since you are using ICP, is the "Hessian matrix" of the optimization problem. In Open3D this is the $J^TJ$ matrix computed during registration. Essentially the "Hessian" describes how well your problem is constrained, and thus can serve as your information matrix. All three methods I described are viable, and there are also more complicated algorithms. Generally for pose graph SLAM, I would say people just set some values intuitively. Note that in your case, since you have the full transform, your covariance/information matrix should be 6×6.

• Note that the $J^TJ$ approximation only works when the current state is close enough to the true value. Jul 27, 2021 at 1:37
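For concreteness, the inversion is a one-liner in NumPy; this sketch uses the same assumed 0.1 m numbers as the example above, not values from any particular dataset:

```python
import numpy as np

# Covariance for a translation edge: 0.1 m standard deviation per axis,
# with the covariance (off-diagonal) terms ignored.
cov = np.diag([0.1**2, 0.1**2, 0.1**2])

# The information matrix is simply the inverse of the covariance.
info = np.linalg.inv(cov)
print(info)  # diagonal entries are 1 / 0.1^2 = 100

# The Gauss-Newton "Hessian" J^T J from an ICP solve plays the same role:
# given a residual Jacobian J (shape m x 6 for a full SE(3) edge),
# J.T @ J approximates the information matrix near convergence.
```

For a full 6-DoF edge you would build the analogous 6×6 matrix, with rotation variances in the remaining diagonal slots.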
## Algebraic & Geometric Topology

### Geodesic systems of tunnels in hyperbolic $3$–manifolds

#### Abstract

It is unknown whether an unknotting tunnel is always isotopic to a geodesic in a finite-volume hyperbolic $3$–manifold. In this paper, we address the generalization of this question to hyperbolic $3$–manifolds admitting tunnel systems. We show that there exist finite-volume hyperbolic $3$–manifolds with a single cusp, with a system of $n$ tunnels, $n-1$ of which come arbitrarily close to self-intersecting. This gives evidence that systems of unknotting tunnels may not be isotopic to geodesics in tunnel number $n$ manifolds. In order to show this result, we prove there is a geometrically finite hyperbolic structure on a $(1;n)$–compression body with a system of $n$ core tunnels, $n-1$ of which self-intersect.

#### Article information

Source: Algebr. Geom. Topol., Volume 14, Number 2 (2014), 925–952.

Dates: Revised 2 August 2013; Accepted 6 September 2013; First available in Project Euclid: 19 December 2017

Permanent link: https://projecteuclid.org/euclid.agt/1513715853

Digital Object Identifier: doi:10.2140/agt.2014.14.925

Mathematical Reviews number (MathSciNet): MR3160607

Zentralblatt MATH identifier: 1286.57014

#### Citation

Burton, Stephan D; Purcell, Jessica S. Geodesic systems of tunnels in hyperbolic $3$–manifolds. Algebr. Geom. Topol. 14 (2014), no. 2, 925–952. doi:10.2140/agt.2014.14.925. https://projecteuclid.org/euclid.agt/1513715853
1. ## Integration

Could someone post on here a semi-difficult integral if they wouldn't mind

2. Originally Posted by Mathstud28
Could someone post on here a semi-difficult integral if they wouldn't mind
$\displaystyle \int_{-\pi/4}^{\pi/4} \arccos(\tan x)\;dx$
i don't know if that's even remotely difficult for you. it was semi-difficult for me.

3. Originally Posted by polymerase
$\displaystyle \int \arccos(\tan x)\;dx$
i don't know if that's even remotely difficult for you. it was semi-difficult for me.
This integral does not have an elementary antiderivative? You know that, right? But I greatly appreciate your effort!

4. Originally Posted by Mathstud28
This integral does not have an elementary antiderivative? You know that, right? But I greatly appreciate your effort!
try it out....i forgot the intervals

5. Calculate $\displaystyle \int_0^{\infty}{\frac{\sin^2(x)}{x^2}}\,dx$ bearing in mind that $\displaystyle \int_0^{\infty}{\frac{\sin(x)}{x}}\,dx=\frac{\pi}{2}$
It's not difficult but I still like the result

6. Originally Posted by polymerase
try it out....i forgot the intervals
Oh... that makes a big difference. Well, let's see here:
$\displaystyle \int_{-\pi/4}^{\pi/4}\arccos(\tan x)\,dx=\frac{\pi x}{2}\bigg|_{-\pi/4}^{\pi/4}-\int_{-\pi/4}^{\pi/4}\arcsin(\tan x)\,dx$
using $\arccos t+\arcsin t=\frac{\pi}{2}$. Since $\arcsin(\tan x)$ is odd, its integral over the symmetric interval vanishes, so the answer is $\displaystyle \frac{\pi^2}{4}$.
How did you do it? Modifying the phase shift?

7.
Originally Posted by PaulRS
Calculate $\displaystyle \int_0^{\infty}{\frac{\sin^2(x)}{x^2}}\,dx$ bearing in mind that $\displaystyle \int_0^{\infty}{\frac{\sin(x)}{x}}\,dx=\frac{\pi}{2}$
It's not difficult but I still like the result
Hmm... I know the way you are thinking of, but I will try this:
$\displaystyle \int_0^{\infty}\frac{\sin^2(x)}{x^2}\,dx=\frac{1}{2}\int_0^{\infty}\frac{1-\cos(2x)}{x^2}\,dx$
Separating, we get
$\displaystyle \frac{1}{2}\bigg[\int_0^{\infty}\frac{1}{x^2}\,dx-\int_0^{\infty}\frac{\cos(2x)}{x^2}\,dx\bigg]$
Now I don't feel like writing it all, but I used power series and got $\displaystyle \frac{\pi}{2}$ as the answer as well

8. Originally Posted by PaulRS
Are you sure? About the integral I posted: You should make use of some new techniques you've seen in this forum not long ago
Two things... what is wrong with that... that is the answer, isn't it? I even checked my calculator once I was done... and please don't tell me I have to use the imaginary part of $\displaystyle e^{-x(a-bi)}$. I am only seventeen and a junior in high school... don't give me this complex analysis stuff lol

9. Define $\displaystyle J(p)=\int_{0}^{\infty}{\frac{\sin^2(p\cdot x)}{x^2}}\,dx$ ... and now differentiate under the integral sign

10. Originally Posted by PaulRS
Define $\displaystyle J(p)=\int_{0}^{\infty}{\frac{\sin^2(p\cdot x)}{x^2}}\,dx$ ... and now differentiate under the integral sign
Haha, what part of "junior in high school, seventeen" suggests I know complex analysis... uhm... I will try... let's see here, we want... Give me a little more hint... is this something to do with $\displaystyle \mathrm{Im}[e^{ix}]$?

11. Originally Posted by Mathstud28
Haha, what part of "junior in high school, seventeen" suggests I know complex analysis... uhm... I will try... let's see here, we want... Give me a little more hint... is this something to do with $\displaystyle \mathrm{Im}[e^{ix}]$?
Don't worry: Leibniz's integral rule ("Differentiation under the integral sign" on Wikipedia). You do not need to use complex numbers

12. Originally Posted by PaulRS
Ok, so if $\displaystyle J(p)=\int{\frac{\sin^2(px)}{x^2}}\,dx$ then $\displaystyle J'(p)=2\int{\frac{\sin(px)\cos(px)}{x}}\,dx$? That is what I am getting from that... I took the partial with respect to $p$?

13. Here is an example of Leibniz's Rule in use
http://www.mathhelpforum.com/math-he...77-post19.html
And here is the complete thread:
http://www.mathhelpforum.com/math-he...tegrals-2.html

14. Originally Posted by PaulRS
Here is an example of Leibniz's Rule in use
http://www.mathhelpforum.com/math-he...77-post19.html
And here is the complete thread:
http://www.mathhelpforum.com/math-he...tegrals-2.html
So is what I did completely wrong?

15. Originally Posted by Mathstud28
Ok, so if $\displaystyle J(p)=\int{\frac{\sin^2(px)}{x^2}}\,dx$ then $\displaystyle J'(p)=2\int{\frac{\sin(px)\cos(px)}{x}}\,dx$? That is what I am getting from that... I took the partial with respect to $p$?
It should have been:
$\displaystyle J'(p)=\int_0^{\infty}{\frac{2\sin(px)\cos(px)}{x}}\,dx$
Now relate it to $\displaystyle \int_0^{\infty}{\frac{\sin(x)}{x}}\,dx$
And finally integrate: $\displaystyle \int_0^t J'(p)\,dp=J(t)-J(0)=J(t)$
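As a postscript to the thread, the value $\int_0^{\infty}\frac{\sin^2(x)}{x^2}\,dx=\frac{\pi}{2}$ is easy to confirm numerically. A small Python sketch (an editorial addition, not from the original discussion) using plain NumPy and the trapezoid rule:

```python
import numpy as np

# Trapezoid-rule check of  integral_0^inf sin(x)^2 / x^2 dx = pi/2.
# The integrand extends continuously to 1 at x = 0, and the tail beyond
# x = 2000 contributes about 1/(2*2000) = 0.00025 on average, so a finite
# upper limit of 2000 already gives three correct decimals.
x = np.linspace(1e-9, 2000.0, 2_000_000)
y = (np.sin(x) / x) ** 2
dx = x[1] - x[0]
approx = dx * (y.sum() - 0.5 * (y[0] + y[-1]))
print(approx, np.pi / 2)
```

The two printed values agree to within the truncated tail, which is exactly the kind of sanity check worth doing before trusting a clever closed-form manipulation.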
PREPRINT

# High-energy neutrinos and gamma rays from winds and tori in active galactic nuclei

Susumu Inoue, Matteo Cerruti, Kohta Murase, Ruo-Yu Liu

arXiv:2207.02097

Submitted on 5 July 2022

## Abstract

Powerful winds with wide opening angles, likely driven by accretion disks around black holes (BHs), are observed in the majority of active galactic nuclei (AGN) and can play a crucial role in AGN and galaxy evolution. If protons are accelerated in the wind near the BH via diffusive shock acceleration, $p\gamma$ processes with AGN photons can generate neutrinos as well as pair cascade emission from the gamma-ray to radio bands. The TeV neutrinos tentatively detected by IceCube from the obscured Seyfert galaxy NGC 1068 can be interpreted consistently if the shock velocity is appreciably lower than the local escape velocity, which may correspond to a failed, line-driven wind that is physically well motivated. Although the $p\gamma$-induced cascade is $\gamma\gamma$-attenuated above a few MeV, it can still contribute significantly to the sub-GeV gamma rays observed from NGC 1068. At higher energies, gamma rays can arise via $pp$ processes from a shock where an outgoing wind impacts the obscuring torus, along with some observable radio emission. Tests and implications of this model are discussed. Neutrinos and gamma rays may offer unique probes of AGN wind launching sites, particularly for objects obscured in other forms of radiation.

## Preprint

Comment: 13 pages including supplemental material, for submission to PRL

Subjects: Astrophysics - High Energy Astrophysical Phenomena; Astrophysics - Cosmology and Nongalactic Astrophysics
## Lecture

### Monday, November 19, 2012, 16:45–17:45   Room 126, Graduate School of Mathematical Sciences (Komaba)

Hendrik Weber (University of Warwick)

Invariant measure of the stochastic Allen-Cahn equation: the regime of small noise and large system size (ENGLISH)

[ Abstract ]
We study the invariant measure of the one-dimensional stochastic Allen-Cahn equation for a small noise strength and a large but finite system. We endow the system with inhomogeneous Dirichlet boundary conditions that enforce at least one transition from -1 to 1. We are interested in the competition between the "energy" that should be minimized due to the small noise strength and the "entropy" that is induced by the large system size. Our methods handle system sizes that are exponential with respect to the inverse noise strength, up to the "critical" exponential size predicted by the heuristics. We capture the competition between energy and entropy through upper and lower bounds on the probability of extra transitions between $\pm 1$. These bounds are sharp on the exponential scale and imply in particular that the probability of having one and only one transition from -1 to +1 is exponentially close to one. In addition, we show that the position of the transition layer is uniformly distributed over the system on scales larger than the logarithm of the inverse noise strength. Our arguments rely on local large deviation bounds, the strong Markov property, the symmetry of the potential, and measure-preserving reflections. This is a joint work with Felix Otto and Maria Westdickenberg.