# Lesson 12: Inference for Two Means: Paired Data

## 1 Lesson Outcomes

By the end of this lesson, you should be able to:

- Confidence intervals for the mean of differences with dependent samples:
  - Calculate and interpret a confidence interval for the mean of differences given a confidence level.
  - Identify a point estimate and margin of error for the confidence interval.
  - Show the appropriate connections between the numerical and graphical summaries that support the confidence interval.
  - Check the requirements for the confidence interval.
- Hypothesis testing for the mean of differences with dependent samples:
  - State the null and alternative hypotheses.
  - Calculate the test statistic, degrees of freedom, and $P$-value of the hypothesis test.
  - Assess the statistical significance by comparing the $P$-value to the $\alpha$-level.
  - Check the requirements for the hypothesis test.
  - Show the appropriate connections between the numerical and graphical summaries that support the hypothesis test.
  - Draw a correct conclusion for the hypothesis test.

## 2 Example of Paired Data: Pre- and Post-test Scores

In education, it is very common for researchers to conduct studies in which they administer a pre-test, provide some instruction, and then give a post-test. The difference between the post- and pre-test scores is a measure of the student's progress. In this case, it would not make much sense to only look at the mean score on the pre-test and compare it to the mean score on the post-test. This is called a matched-pairs design, or we say we have dependent samples. Matched-pairs (or paired-data) designs typically involve only one population, and a pair of observations is drawn on the individuals selected for the sample. In the context of the educational study, the two observations are the student's scores on (1) the pre-test and (2) the post-test. If a student is selected to participate in the pre-test (i.e., they are selected to be part of group 1), they are automatically selected to participate in the post-test (i.e., they are chosen to be in group 2 automatically).

There is a lot of merit in subtracting the individual scores and looking at the mean gain. The researchers are not really interested in the students' knowledge before the instruction. This is used as a baseline to measure how much was gained during the instruction. There is great value in looking at the difference. This removes the effect of the individual students' ability, and it measures their learning during the unit.

To analyze the data, the researchers first find the difference in the post- and pre-test scores. At that point, the data have been reduced to a list of numbers (representing the increase in scores). Now, the researchers can conduct inference on the mean of these values. In other words, they can do a hypothesis test for the mean of the difference in the post- and pre-test scores. A hypothesis test for two means with paired data (dependent samples) is conducted in the same way as a hypothesis test for a single mean with $\sigma$ unknown. The only exception is that the pairs of data must be subtracted before you start any computations. From a practical perspective, after you subtract, you apply the one-sample procedures you have already learned. So, there is nothing new that you need to learn to compute a confidence interval for two means with paired data...except how to subtract in Excel.
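To make the connection to one-sample procedures concrete, here is a minimal sketch in R (the lesson itself uses Excel; the scores below are made-up values for illustration): a paired-data test is just a one-sample $t$-test applied to the column of differences.

```r
# Hypothetical pre- and post-test scores for five students
pre  <- c(70, 65, 80, 75, 68)
post <- c(78, 70, 85, 86, 71)

# Subtract the pairs BEFORE doing any computations
d <- post - pre

# One-sample t-test on the differences (H0: mu_d = 0)
t.test(d, mu = 0)

# Equivalent shortcut: t.test(post, pre, paired = TRUE)
```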
We will first explore an application of pre- and post-testing in a weight loss study.

## 3 Hypothesis Tests

### 3.1 Mahon's Weight Loss Study

Summarize the relevant background information

Annie Mahon and other researchers in Wayne Campbell's nutrition lab studied the weight loss of $n=27$ middle-aged women who consumed a prescribed low-calorie diet. The women's weights were recorded (in kilograms) at the beginning of the study and after the nine-week diet period. The data are given in the file Mahon. An excerpt of the data is given below.

| Subject | Pre | Post |
|---------|------|------|
| 1 | 62.5 | 56.1 |
| 2 | 88.8 | 80.2 |
| 3 | 74.7 | 70.8 |
| $\vdots$ | $\vdots$ | $\vdots$ |
| 26 | 76.3 | 73.8 |
| 27 | 82.1 | 77.9 |

Notice the structure of the data. The weight of each subject was measured before the study and at the conclusion of the study. Each person provided a pre-study weight and a post-study weight. Stated differently, the pre-study weights and the post-study weights are paired. For each row of data, both of these numbers came from the same person. When we collect two observations of the same measurement on each subject, we call it paired data. Sometimes paired data are called dependent samples.

1. The researchers measured the initial weights of the women prior to the study, even though they were not particularly interested in this value. What was the purpose of measuring the pre-study weights?

The goal of the study is to determine how much the women's weights change as a result of the study. The researchers must measure the women's weights at the beginning of the study, so they can subtract the initial (pre-study) weight of each woman from her final (post-study) weight.

Computing New Variables in Excel

Annie Mahon and her research team are interested in the difference of the weights after the study compared with before:
$$\text{Difference} = \text{Post} - \text{Pre}$$

Appending the column of differences to the table above, we have:

| Subject | Post | Pre | Difference |
|---------|------|------|------------|
| 1 | 56.1 | 62.5 | 56.1 $-$ 62.5 = $-6.4$ |
| 2 | 80.2 | 88.8 | 80.2 $-$ 88.8 = $-8.6$ |
| 3 | 70.8 | 74.7 | 70.8 $-$ 74.7 = $-3.9$ |
| $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ |
| 26 | 73.8 | 76.3 | 73.8 $-$ 76.3 = $-2.5$ |
| 27 | 77.9 | 82.1 | 77.9 $-$ 82.1 = $-4.2$ |

Excel Instructions

Here is how you can subtract two columns of data in Excel:

- If the data are in columns, you might want to give a label to the new column, such as "Differences".
- Within Excel, click on the cell where you want the difference to be calculated. Typically, this will be adjacent to the two values you want to subtract.
- Type an equal sign (=).
- Click on the cell containing the first number to be subtracted.
- Then type the subtraction sign (-).
- Now, click on the cell containing the second value to be subtracted. The following image shows the subtraction of the pre-study weights from the post-study weights of Mahon's volunteers. (The post-study weights will not always be on the left, so pay attention to how you subtract: post - pre.)
- When you click elsewhere, the difference will be computed.
- If the data are in columns, you can easily compute the difference for the remaining data values.
  - Select the cell containing the difference you just computed.
  - Copy the value in the cell ([Ctrl]-C is the keyboard shortcut for PCs).
  - Then simultaneously select all the cells in which you want the data to be pasted.
  - Finally, paste the formula into these cells ([Ctrl]-V is the keyboard shortcut for PCs).

You have now computed the column of differences.
Your file should look like this when you are finished:

If you want to remove the formulas to make it easier to paste the differences into the QuantitativeInferentialProcedures.xls file, do the following:

- Select the column of differences.
- Copy the column ([Ctrl]-C is the keyboard shortcut for PCs).
- Click on the bottom half of the "Paste" button in the "File" ribbon of Excel.
- Choose Paste Values, as illustrated below:

Now, the cells contain the subtracted differences, rather than the equation for these differences. You are now ready to perform the calculations for a hypothesis test. The hypothesis test will be conducted using the file QuantitativeInferentialProcedures.xls.

- Copy the first 2 columns of data that you generated above.
- Open the file QuantitativeInferentialProcedures.xls.
- Click on the tab "Paired Sample t-test", which is located at the bottom of the Excel window.
- Paste the data in columns A, B, and C, with the first data value in row 5.

Your file should look like this:

The researchers are not interested in the weights of the women themselves; they are more interested in the change in the women's weights. This will give them a measure of the effectiveness of the low-calorie diet. Notice that in this weight loss study, the change in the weights is negative. This indicates that the final weight was lower than the initial weight.

2. Following the directions above, compute the difference in the women's weights by subtracting the pre-study weights from the post-study weights using software. Call this new column Difference.

3. What is the mean of the values in the Difference column?

$-6.80 \text{ kg}$

4. Interpret the value you calculated in Question 3.

The mean weight change experienced by the women in the study was $-6.80$ kg. In other words, the mean weight loss was $6.80$ kg.

Relationship to a One-Sample t-test

After you have subtracted the pre-study weights from the post-study weights, you are left with a column of differences. We will denote the pre-study weights by $x_1$ and the post-study weights by $x_2$. Then, the differences can be denoted as $d = x_2 - x_1$. The difference, $d$, is defined as the change in the volunteer's weight during the study.

After computing the differences, we do not use the data for the individual groups at all. The researchers are not interested in the values of the women's weights at the beginning of the study or at the end of the study. They are mostly interested in the difference in the weights after the participants complete the study.

After we subtract, we can conduct a hypothesis test to determine if the mean of the differences is less than zero. We use the symbol $\mu_d$ to represent the true mean difference in the weights of the women who follow the diet prescribed in this study. The null hypothesis is that the true mean difference is zero ($\mu_d = 0$). The alternative hypothesis is that there is a decrease in the weights--in other words, that the true mean difference is less than zero ($\mu_d < 0$).

Notice that this is essentially a one-sample t-test where the data are the differences in the women's weights. We have one column of data, the differences. We are testing whether the true mean difference is less than zero. After subtracting, a test for a difference of two means with paired data is just like a test for one mean with $\sigma$ unknown.

In the hypothesis test, we will refer to the variable representing the differences as $d$. We will use this notation throughout the hypothesis test. For example, the true population mean will be labeled $\mu_d$ and the sample mean will be labeled $\bar d$. The sample standard deviation of the differences is denoted $s_d$.
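Before walking through the formal hypothesis test, here is the same computation sketched in R (using only the excerpted rows shown above, so the results would differ from the full data set):

```r
# Excerpted pre- and post-study weights (kg) from the Mahon data
pre  <- c(62.5, 88.8, 74.7, 76.3, 82.1)
post <- c(56.1, 80.2, 70.8, 73.8, 77.9)

d <- post - pre   # -6.4 -8.6 -3.9 -2.5 -4.2, matching the table above

# One-sided test of H0: mu_d = 0 against Ha: mu_d < 0
t.test(d, mu = 0, alternative = "less")
```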
Hypothesis Test for Mahon's Weight Loss Data

Summarize the relevant background information

Twenty-seven women participated in a nine-week weight loss study. During the study period, the participants were provided a reduced-calorie diet. Their weights were recorded at the beginning of the study and nine weeks later. The difference of the weights is defined as the post-study weights minus the pre-study weights. The researchers expected that the mean difference in the weights would be negative--in other words, that the women would tend to lose weight.

State the null and alternative hypotheses and the level of significance

\begin{align} H_0: &~~ \mu_d=0 \\ H_a: &~~ \mu_d < 0 \end{align}

We will use the $\alpha = 0.05$ level of significance.

Describe the data collection procedures

The women's weights were recorded at the beginning of the study. The women were provided a reduced-calorie diet for nine weeks. Then, their weights were measured again at the end of the study. A calibrated scale was used to provide an accurate weight.

Give the relevant summary statistics

From the Excel output illustrated above, we get the following:

\begin{align} \bar d &= -6.80 \\ s_d &= 3.17 \\ n &= 27 \end{align}

The mean and standard deviation are rounded to one decimal place more than the original data.

Make an appropriate graph (histogram) to illustrate the data

This histogram was created in Excel with seven bins:

Verify the requirements have been met

Like the one-sample t-test, this procedure is robust, meaning that it is not very sensitive to violations of the requirements. If they are violated, it will probably still give reasonably good results. The requirements for this procedure are the same as the requirements for a one-sample t-test:

- the data represent a simple random sample from the population
- the mean of the differences follows a normal distribution

The subjects were recruited via advertisements for a research study. The participants volunteered to participate. It is not a simple random sample of all middle-aged women, but there is nothing about the selection of the sample that would invalidate the results. From a practical perspective, it is impossible to get a simple random sample of people in the general population. When research trials are conducted, people must volunteer to participate. This can lead to a selection bias, but it is usually negligible.

The requirement of normality is satisfied for Mahon's data. The differences appear to follow a normal distribution, so $\bar d$ will be approximately normal. The sample size ($n=27$) is fairly large. The histogram shows a mound shape. Here is a Q-Q plot of the differences:

With this Q-Q plot, we could conclude that the data follow a normal distribution. Even if we had had a small sample size, we could still conduct this test.

Give the test statistic and its value

The test statistic for a test involving paired data when $\sigma$ is unknown is a $t$. For this situation, the value is:
$$t= \frac{-6.80 - 0}{3.17/\sqrt{27}} =-11.145$$

State the degrees of freedom

$$df = 26$$

Mark the test statistic and $P$-value on a graph of the sampling distribution

The test statistic, $t$, is labeled on the horizontal axis. The $P$-value is the area to the left of $t$ under the curve. This area is so small, it is not illustrated on this plot.

Please note that this image was taken from the normal probability applet. The test statistic for two means with paired data is a $t$, and this image was designed for a $z$-curve, so it is not exactly the right image for this procedure. For this reason, we labeled the image "For illustrative purposes only". It is important to note that only the left tail is shaded, even though we cannot see it in this illustration.
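Before finding the $P$-value, it may help to see how little computation this step actually involves. Here is a sketch in R that reproduces the test statistic from the summary statistics reported above and finds the left-tail area:

```r
# Summary statistics from the Mahon output above
d_bar <- -6.80   # sample mean of the differences (kg)
s_d   <- 3.17    # sample standard deviation of the differences
n     <- 27      # number of paired observations

t_stat <- (d_bar - 0) / (s_d / sqrt(n))   # about -11.15
df     <- n - 1                           # 26

# Left-tail area, because the alternative is mu_d < 0
p_value <- pt(t_stat, df)
c(t = t_stat, p = p_value)
```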
Find the $P$-value and compare it to the level of significance

$$P\text{-value} = 1.06 \times 10^{-11} < 0.05 = \alpha$$

Since the $P$-value is less than the level of significance, we reject the null hypothesis.

Present your conclusion in an English sentence, relating the result to the context of the problem

There is sufficient evidence to suggest that the reduced-calorie diet used in this study results in weight loss for middle-aged women.

### 3.2 Nosocomial Infections

Summarize the relevant background information

Matched-pairs designs are not just used in pre- and post-test situations. They are often used in situations where it is not possible to randomly assign subjects to groups (for example, by a coin toss). Nosocomial (pronounced: NO-suh-KOH-MEE-uhl) infections are infections that occur in hospitals but are not a result of the original condition. An example of a nosocomial infection is when a heart attack patient develops a staph infection at the site of an IV injection. The infection was not caused by the heart attack, but it was acquired in the hospital. Nosocomial infections are very dangerous and may result in longer recovery times or increased death rates.

This AP photo of a chest x-ray shows pneumonia of the left lower lobe of the lung. Pneumonia is an example of a possible nosocomial infection. (Photo credit: Dr. Thomas Hooten, CDC)

Health care providers suspect that nosocomial infections increase the amount of time required to recover from an illness or injury. In controlled experiments, subjects (e.g., patients) are randomly assigned to treatments. However, it is not ethical to give patients a nosocomial infection in order to determine if it increases the duration of their hospital stay! At best, we can collect information on the duration of hospital stays for patients who acquire nosocomial infections and compare them to the duration of the stays for patients who do not.

There are many factors that affect the amount of time that a patient will need to stay in the hospital, including: nature of illness, types of procedures conducted, overall health, gender, age, etc. How can health care practitioners assess the effect of a nosocomial infection in the presence of so many other variables? One way is to match a patient who develops a nosocomial infection with another one who has similar characteristics (illness, procedures, health, gender, age group, etc.) but does not develop a nosocomial infection. Now, the patients are matched into pairs with similar characteristics, where the principal difference between the members of each pair is whether or not they acquired a nosocomial infection. By pairing the patients according to specific characteristics, the researchers can now subtract to observe a difference in their recovery times. In this way, it is possible to assess if nosocomial infections increase the mean duration of a hospital stay.

Some researchers conducted such a study in which 52 pairs of patients were matched based on clinical characteristics.
A patient with a nosocomial infection was matched as closely as possible to a similar case where there was no nosocomial infection. Patients who died were excluded from the study. The lengths of the hospital stays (in days) for these patients are given in the file NosocomialInfections.

The difference, $d$, is defined as the duration of the hospital stay of the individual in the pair with the nosocomial infection minus the duration of the stay for the individual who did not get a nosocomial infection:
$$\text{Difference} = \text{Infected} - \text{NotInfected}$$

After computing the differences, we do not use the data for the individual groups at all. In fact, after we subtract, the hypothesis test is conducted (essentially) like a one-sample test for a single mean with $\sigma$ unknown.

5. State the null and alternative hypotheses and the level of significance

\begin{align} H_0: &~~ \mu_d = 0 \\ H_a: &~~ \mu_d > 0 \\ \end{align}

The level of significance was not specified in the problem. You can choose any value you wish. The most common choices are 0.05, 0.01, and 0.1. We will illustrate this example with $\alpha = 0.05$.

6. Describe the data collection procedures

Data were collected by matching hospital records of individuals who were admitted to the hospital. Patient records were matched based on their overall health and the reason they were admitted to the hospital. In each pair, one patient developed a nosocomial infection and one did not. Since the characteristics of the patients in the first group determined which patients would be paired with them in the second group, the data represent dependent samples.

7. Give the relevant summary statistics

\begin{align} \bar d &= 11.38 \\ s_d &= 13.83 \\ n &= 52 \end{align}

8. Make an appropriate graph to illustrate the data

- Present a graph showing the differences.

9. Verify the requirements have been met

The data represent a random sample of patients, who have been matched based on their overall health and their current ailment. The sample size is large, so the mean of the differences, $\bar d$, will be approximately normally distributed.

10. Give the test statistic and its value

The test statistic for a test for two means with paired data is a $t$.
$$t = 5.935$$

11. State the degrees of freedom

$df = 51$

12. Mark the test statistic and $P$-value on a graph of the sampling distribution

Your sketch should show the value of $t=5.935$ on the horizontal axis, with only the tiny area to the right of 5.935 shaded.

13. Find the $P$-value and compare it to the level of significance

$$P\textrm{-value}=\frac{\textrm{Sig. (2-tailed)}}{2}=\frac{2.592\times 10^{-7}}{2}=1.296 \times 10^{-7} = 0.0000001296 < 0.05 = \alpha$$

Since the $P$-value is less than the level of significance, we reject the null hypothesis.

15. Present your conclusion in an English sentence, relating the result to the context of the problem

There is sufficient evidence to suggest that the mean duration of hospital stays is increased when a patient develops a nosocomial infection.
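Statistical software typically reports a two-tailed significance value, so the halving step in Question 13 is worth a quick check. A sketch in R, using the values reported above:

```r
# Two-tailed significance reported by the software
p_two_tailed <- 2.592e-7

# One-tailed P-value for Ha: mu_d > 0 (test statistic is positive)
p_one_tailed <- p_two_tailed / 2           # 1.296e-7

# The same value straight from the t distribution:
pt(5.935, df = 51, lower.tail = FALSE)     # right-tail area, about 1.3e-7
```

Note that dividing by two is only valid because the sign of the test statistic agrees with the direction of the alternative hypothesis.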
#### 3.3.1 Effect of Stressful Classical Music on Your Metabolism

Summarize the relevant background information

Obesity is a growing problem worldwide. Many scientists are seeking creative solutions to trim down this epidemic. Reduced energy expenditure is a potential cause of obesity. Resting Energy Expenditure (REE) is defined as the amount of energy a person would use if resting for 24 hours. In essence, this is the amount of energy that a person's body will consume if they do not do any physical activity. REE is measured in terms of kilojoules per day (kJ/d). REE accounts for approximately 70 to 80% of all energy that a person will expend in a day. If researchers can find simple, enjoyable activities that will increase REE, it may be possible to minimize the spread of obesity around the world.

Ebba Carlsson and other researchers in Sweden investigated whether listening to stressful classical music increases a person's REE. Each subject's REE was measured during silence and again while listening to stressful classical music. Data representing their results are given in the file REE-ClassicalMusic. Notice that this is not a pre- and post-test, but it is still a test involving paired data. Two REE measurements were made for each subject: (1) in silence ($REE_1$) and (2) while listening to stressful classical music ($REE_2$).

State the null and alternative hypotheses and the level of significance

Since we are testing for an increase in the mean REE, we let $d = REE_2 - REE_1$. Our alternative hypothesis will be that $\mu_d > 0$. The null and alternative hypotheses are:

\begin{align} H_0: &~~ \mu_d = 0 \\ H_a: &~~ \mu_d > 0 \end{align}

We will use the $\alpha = 0.1$ level of significance.

In order to get the correct $P$-value, we need to indicate the proper alternative hypothesis in Excel. In the cell next to "Type of Test", choose "Greater Than" in the drop-down menu in the file QuantitativeInferentialProcedures.xls.

Describe the data collection procedures

The REE was measured by a technique called "indirect calorimetry" using a Deltatrac II Metabolic Monitor. The REE was measured twice for each person: while the person was (1) resting in silence or (2) resting while listening to stressful classical music. These trials were conducted in random order. Some of the subjects had the "silence" treatment first, and others had the "stressful" treatment first.

16. We will define the difference in REE by subtracting the REE in silence from the REE while listening to stressful classical music. If listening to stressful classical music actually increases the mean REE, would you expect the value of the difference to be typically positive or negative?

If the REE is higher while listening to classical music than while resting in silence, we would expect the value of the difference to be positive. In other words, the following difference would tend to be positive:
$$\text{Difference} = \text{Stressful} - \text{Silence}$$

17. Compute the difference in REE for each person. What is the value of the difference for the first person listed in the data file?

50 kJ/d

Here is an illustration of an excerpt of the data in Excel:

Give the relevant summary statistics

18. Report the number of subjects ($n$), the mean difference ($\bar d$), and the standard deviation of the differences ($s_d$).

The following image illustrates the Excel file used to get the summary statistics.

\begin{align} n&=40\\ \bar d &= 20~\text{kJ/d}\\ s_d &= 160~\text{kJ/d} \end{align}

19. Make an appropriate graph to illustrate the data

Verify the requirements have been met

We can consider the sample representative of the population. The "difference" data appear to follow a normal distribution. This is illustrated in the following Q-Q plot:

The requirements for this test appear to have been satisfied.

20. Give the test statistic and its value

The test statistic for a test for two means with paired data is a $t$.
$$t=0.793$$

21. State the degrees of freedom

$df = 39$
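As a numerical check of the last two steps, here is a sketch in R using the rounded summary statistics above (small rounding differences from the reported values are expected):

```r
d_bar <- 20; s_d <- 160; n <- 40            # summary statistics (kJ/d)
t_stat <- d_bar / (s_d / sqrt(n))           # about 0.79 (0.793 before rounding)
pt(t_stat, df = n - 1, lower.tail = FALSE)  # right-tail area, about 0.22
```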
22. Mark the test statistic and $P$-value on a graph of the sampling distribution

The test statistic is plotted on the horizontal axis. The $P$-value is shaded in blue. This image was copied from the normal probability applet. Since the test statistic is a $t$, not a $z$, this is not the exact image for this procedure. We label the image "For illustrative purposes only":

23. Find the $P$-value and compare it to the level of significance

$$P\textrm{-value}=0.2163 > 0.1 = \alpha$$

Notice that the $P$-value is half as large for a one-tailed test as it would have been for a two-tailed test. Since we have a one-sided alternative hypothesis, we are only interested in the right tail of the $t$-distribution. Since the $P$-value is greater than the level of significance, we fail to reject the null hypothesis.

25. Present your conclusion in an English sentence, relating the result to the context of the problem

There is insufficient evidence to suggest that the mean REE is increased by listening to stressful classical music. Lying still and listening to stressful classical music is probably not the best way to increase your metabolism!

Note that we did not say we "accept" the null hypothesis. We do not know that listening to stressful classical music has no effect on a person's REE. Based on the data available to us, we were simply not able to reject the hypothesis that this type of music does not increase the mean REE.

#### 3.3.2 Cost of Airline Tickets

Summarize the relevant background information

Pressures of supply and demand act directly on the prices of airline tickets. As the seats available on the plane begin to fill, airlines raise the price. If seats on a flight do not sell well, an airline may discount the tickets or even cancel the flight. Business travelers frequently demand travel booked on short notice. They must pay the current price. Typically, tourists book their flights well in advance, hoping to buy tickets before the price rises.

We will consider the cost of a one-way ticket from London's Heathrow Airport to a variety of destinations in Europe. Allie Henrich, a BYU-Idaho student, compared the lowest published ticket prices of one-way flights from Heathrow to various destinations in Europe. Using Travelocity.com, she recorded the lowest published fares for nonstop midweek flights booked either 14 days in advance or 90 days in advance. The prices (in US dollars) are given in the file DirectFlightCosts. Notice that for some destinations, flights were not available.

The data are paired, because the costs were measured twice for each city. The 14-day ticket price is paired with the 90-day price for each city. We will conduct a hypothesis test to determine if there is a difference in the cost of the nonstop flights when tickets are purchased 14 days in advance compared to 90 days in advance. We will use the 0.01 level of significance.

26. State the null and alternative hypotheses and the level of significance

\begin{align} H_0: &~~ \mu_d = 0 \\ H_a: &~~ \mu_d \ne 0 \\ \alpha &= 0.01 \end{align}

27. Describe the data collection procedures

The data were collected using the website Travelocity.com. The lowest advertised ticket prices were recorded for nonstop flights from Heathrow Airport. All prices were recorded in US dollars. Data are provided on the cost of a nonstop ticket purchased with 14 days' notice compared to 90 days' notice. We will compute the difference in the costs for each destination. Some destinations did not include both flight options.
In this case, the difference is not computed and the data are omitted from the analysis.

28. Give the relevant summary statistics

The differences were computed by subtracting the 90-day price from the 14-day price. For example, for the Adnan Menderes Airport, we have
$$202.09 - 234.19 = -32.10$$

You may have chosen to subtract in the opposite order. If so, you would have obtained a value of $32.10$ dollars.

\begin{align} n&=87\\ \bar d &= 24.612\\ s_d &= 136.267 \end{align}

29. Make an appropriate graph to illustrate the data

The histogram is presented with 16 bins. If you defined your difference as the 90-day price minus the 14-day price, then with 16 bins you would have the following histogram:

30. Verify the requirements have been met

The sample size is large, so we can conclude that the sample mean, $\bar d$, is normally distributed.

31. Give the test statistic and its value

The test statistic for a test for two means with paired data is a $t$.
$$t=1.685$$

If you computed the difference as the 90-day price minus the 14-day price, the value of your test statistic is $-1.685$.

32. State the degrees of freedom

$df = 86$

33. Mark the test statistic and $P$-value on a graph of the sampling distribution

The test statistic is plotted on the horizontal axis. The $P$-value is shaded in blue. This image was copied from the normal probability applet. Since the test statistic is a $t$, not a $z$, this is not the exact image for this procedure. We label the image "For illustrative purposes only":

34. Find the $P$-value and compare it to the level of significance

$$P\textrm{-value}= 0.096 > 0.01 = \alpha$$

The $P$-value will be 0.096 no matter which order you subtracted the values in. Since the $P$-value is greater than the level of significance, we fail to reject the null hypothesis.

36. Present your conclusion in an English sentence, relating the result to the context of the problem

There is insufficient evidence to suggest that there is a difference in the mean cost of airline tickets purchased 14 days versus 90 days in advance.

## 4 Confidence Intervals

We can compute a confidence interval for the true mean of the differences for paired data. After the differences between two paired data sets have been calculated, we can create a confidence interval for the true mean of the differences. To do this, we follow the instructions for creating a confidence interval for one mean with $\sigma$ unknown, but we use the column of differences as the data set.

Excel Instructions

To calculate confidence intervals for the true mean of the differences in Excel, do the following:

- Follow the directions given above for creating a new column containing the differences between two variables.
- Open the file QuantitativeInferentialProcedures.xls.
- Click on the tab labeled "One-sample t-test".
- Enter the values of the differences you calculated.
- Set the desired confidence level.

The requirements for creating a confidence interval for the difference of means are the same as the requirements for the hypothesis test. We assume:

- A simple random sample was drawn from the population
- The mean of the differences is normally distributed

### 4.1 Mountain Pine Beetle Attacks

Summarize the relevant background information

Mountain pine beetles are small insects that bore into the bark of trees. The female beetles that first infest the tree emit pheromones to attract other beetles. In response to the pheromones, many beetles bore into the tree and ultimately kill it. The insects can destroy large tree stands within one year.
Lodgepole pines (Pinus contorta Dougl. ex Loud.) are particularly susceptible to mountain pine beetle (Dendroctonus ponderosae Hopkins) outbreaks. The image to the right shows the destruction that can be caused by these insects. The large brown patches are pines that have been killed by the beetles.

37. The mountain pine beetle threatens many forests in the United States. These tiny insects are only 0.5 cm long--about the size of a grain of rice. This photo of a mountain pine beetle is magnified greatly. These little creatures can destroy a large, healthy forest. Can you think of a spiritual parallel? (Photo credit: Ron Long, Simon Fraser University, Bugwood.org)

Describe the data collection procedures

In a study conducted in the Arapaho National Forest in Colorado, researchers from the USDA Forest Service studied the effect of pine beetle outbreaks on the average number of trees in an area. The researchers counted the number of established trees per hectare before a pine beetle outbreak and seven years after an outbreak. (One hectare is an area of 100 meters by 100 meters.) Data representative of their observations are given in the file PineBeetle.

Give the relevant summary statistics

38. Find the mean and standard deviation of the number of trees per hectare before the pine beetle outbreak. How would you describe the density of the trees in this forest? Express this in terms that make sense to you.

The mean was 1028.41 trees per hectare and the standard deviation was 57.03 trees per hectare. Note that the values were rounded to two decimal places, since the data were given to one decimal place.

Answers will vary regarding the description of the density. Here is one possible response. There is roughly one tree every $\frac{100 \times 100}{1028.41} = 9.7$ square meters. In other words, on average, each tree would have a space of about $\sqrt{9.7} = 3.1$ meters long and 3.1 meters wide in which to grow.

39. Repeat Question 38 for the number of trees per hectare after the outbreak.

The mean was 592.87 trees per hectare and the standard deviation was 45.31 trees per hectare.

Answers will vary regarding the description of the density. Here is one possible response. The trees are about half as dense as they were before the pine beetle infestation. About $\frac{592.87}{1028.41} = 0.58 = 58\%$ of the trees remained, so $100\% - 58\% = 42\%$ of the trees were killed by the pine beetles!

40. Create a new column of data in the file PineBeetle by subtracting the "before" counts from the "after" counts:
$$\text{Difference} = \text{After} - \text{Before}$$
For these differences, report the mean, the standard deviation, and the sample size.

Summary statistics:
- Mean: $\bar d = -435.535$
- Standard deviation: $s_d = 17.082$
- Sample size: $n = 170$

Make an appropriate graph to illustrate the data

41. Create a histogram of the differences in the density of the trees.

42. Verify the requirements have been met.

a. It is not explicitly stated, but we assume the plots of land were selected at random.

b. For the pine beetle data, the histogram indicates that the data are not normally distributed. This can be confirmed with a Q-Q plot. Since the sample size is large ($n = 170$), we can assume the sample mean is normally distributed.

The requirements for creating the confidence interval seem to be satisfied.

43. Find the confidence interval. Use the 95% level of confidence.

$(-438.121,~ -432.948)$
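A quick check of this interval from the summary statistics, sketched in R (values as reported above):

```r
d_bar <- -435.535   # mean of the differences (trees per hectare)
s_d   <- 17.082     # standard deviation of the differences
n     <- 170        # sample size

# 95% confidence interval: d_bar +/- t* s_d / sqrt(n)
t_star <- qt(0.975, df = n - 1)
d_bar + c(-1, 1) * t_star * s_d / sqrt(n)   # about (-438.12, -432.95)
```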
Present your observations in an English sentence, relating the result to the context of the problem

Interpret the confidence interval we created. We are 95% confident that the true mean change in the number of trees per hectare after a pine beetle outbreak is between $-438.121$ and $-432.948$ trees per hectare. Stated differently, we are 95% confident that the true mean decrease in the number of trees per hectare after a pine beetle outbreak is between $432.948$ and $438.121$ trees per hectare.

### 4.2 Sleep-Inducing Drugs

Summarize the relevant background information

In William Sealy Gosset's landmark paper on the $t$-distribution, he cites data on a sleep-inducing drug. In a paper published in 1905, Arthur R. Cushny and A. Roy Peebles reported the effect of Lævorotary Hyoscyamine Hydrobromate (L-Hyoscyamine) on the length of time that people sleep before waking. The primary research question is: does L-Hyoscyamine impact the mean amount of time that people sleep? We will compute a 90% confidence interval for the true mean difference in the times.

Describe the data collection procedures

Eleven subjects were included in the study. At the start of the study, the researchers observed the average length of time that each of the participants slept before waking. Later, each subject was given 0.6 mg of L-Hyoscyamine and the duration of uninterrupted sleep was again measured. The difference in the amount of time each person slept was computed by subtracting the sleep duration with no drug from the sleep duration when taking the drug. The data are summarized in the table below.

Mean hours of sleep:

| Subject | Control (no drug) | L-Hyoscyamine | Difference |
|---------|-------------------|---------------|------------|
| 1 | 0.6 | 1.3 | 0.7 |
| 2 | 3 | 1.4 | -1.6 |
| 3 | 4.7 | 4.5 | -0.2 |
| 4 | 5.5 | 4.3 | -1.2 |
| 5 | 6.2 | 6.1 | -0.1 |
| 6 | 3.2 | 6.6 | 3.4 |
| 7 | 2.5 | 6.2 | 3.7 |
| 8 | 2.8 | 3.6 | 0.8 |
| 9 | 1.1 | 1.1 | 0 |
| 10 | 2.9 | 4.9 | 2 |
| 11 | - | 6.3 | - |

Notice that the "control" data for Subject #11 is missing. It is not possible to compute a difference for this person, so their data will be omitted from our analysis. For this analysis, we will use the remaining $n=10$ observations. You may find it easier to copy and paste the data from the following list. The last row has been omitted.

Increase in hours of sleep: 0.7, -1.6, -0.2, -1.2, -0.1, 3.4, 3.7, 0.8, 0, 2

Give the relevant summary statistics

44. Report the mean, standard deviation, and sample size for the differences.

Summary statistics:
- Mean: $\bar d = 0.75$ hours
- Standard deviation: $s_d = 1.79$ hours
- Sample size: $n = 10$

Make an appropriate graph to illustrate the data

45. Create a histogram of the differences in the hours of sleep. Here is a histogram of the data with 6 bins:

46. Verify the requirements have been met.

a. We assume the subjects represent a random sample from the population.

b. A Q-Q plot of the differences indicates that it is reasonable to conclude that the data are normally distributed, even though the sample size is small:

The requirements for creating the confidence interval seem to be satisfied.

47. Find the confidence interval. Use the 90% level of confidence.

$(-0.287, 1.787)$

48. Present your observations in an English sentence, relating the result to the context of the problem

We are 90% confident that the true mean difference in the amount of time people sleep when taking this drug compared to not taking the drug is between $-0.287$ hours and $1.787$ hours.

Notice that 0 is in the confidence interval. This suggests that 0 is a plausible value for the mean difference in the times. In other words, the drug does not seem to affect the amount of time people sleep. L-Hyoscyamine is not an effective sleep aid--at least at these dosage levels.
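Because the ten differences are listed above, this interval is easy to verify. A sketch in R:

```r
# The ten differences (increase in hours of sleep, drug minus control)
d <- c(0.7, -1.6, -0.2, -1.2, -0.1, 3.4, 3.7, 0.8, 0, 2)

# 90% confidence interval for the true mean difference
t.test(d, conf.level = 0.90)$conf.int   # about (-0.287, 1.787)
```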
## 5 Summary

Remember...

- The key characteristic of dependent samples (or matched pairs) is that knowing which subjects will be in group 1 determines which subjects will be in group 2.
- We use slightly different variables when conducting inference using dependent samples:
  - Group 1 values: $x_1$
  - Group 2 values: $x_2$
  - Differences: $d$
  - Population mean: $\mu_d$
  - Sample mean: $\bar d$
  - Sample standard deviation: $s_d$
- When conducting hypothesis tests using dependent samples, the null hypothesis is always $\mu_d=0$, indicating that there is no change between the first population and the second population. The alternative hypothesis can be left-tailed ($<$), right-tailed ($>$), or two-tailed ($\ne$).
# Math Help - Residue theorem

1. ## Residue theorem

For the first question, I have tried looking at examples and have noted that the bounds have been provided in a form like |z| = 1 (as given in part ii). I am not sure how to transform the given |z - pi| = pi into such a format, although I suspect it would be something like |z| = 2pi? Where do I go from here?

For part ii), I don't understand how to treat the z^m term. Does this imply that z^m is a series expansion, or is it trying to say m is a positive integer? How do I solve this equation?
# Bootstrap for Mean with 95% Confidence Interval

I've been working through the book Modern Data Science with R and I have a conceptual question about bootstrapping and confidence intervals.

Say you do a bootstrap for a mean 1000 times. How do you get the 95% confidence interval? According to the demonstration in the book, you simply calculate the .025 and .975 quantiles. Can anybody explain why this is so? I'm wondering why this process doesn't include the familiar steps of calculating a confidence interval like you'd do in a t-test.

Just in case there are any R users that want a reference to a specific example of the book exercise I am working with, it is here: https://mdsr-book.github.io/instructor/foundations-ex.html

I am using R and the data for the 2nd exercise is the Gestation dataset available in the MosaicData package. This question was prompted by the difference between the 1st exercise and the 2nd one. The 1st exercise simply asked to calculate a confidence interval, which I solved simply with the t.test function. The 2nd exercise I first solved with the Mosaic package (following the book demonstration) but didn't really know "why" the answer works. (The book showed the procedure but didn't explain.) So I'm basically wondering WHY the 95% confidence interval can be obtained by getting 1,000 or so means with resampling (e.g., bootstrap) and then getting the appropriate quantile.

• There are many varieties of bootstraps: parametric (distribution family assumed) and non-parametric (re-sample from data themselves); bias-corrected and not. From an (admittedly impatient) browsing of your link, I was not able to fill in the blanks. Please state the specific context of your question. – BruceET Jun 23 '18 at 20:40

Here is an example of a nonparametric bootstrap confidence interval -- with some explanation of how it is obtained.

Suppose I have $n = 30$ observations from an unknown distribution and want a 95% confidence interval for the population mean $\mu$. (Ignore the numbers in brackets.)

    y
     [1] 22.1 25.9 30.3  6.7 18.1 13.6 13.4 40.4 14.9 37.3 16.9 22.1 26.3 24.7 39.6
    [16] 27.0 22.5 11.1 10.8 31.4 38.4 22.3 30.4 24.3 26.5 31.7 14.0 13.9 49.2 47.9
    mean(y)
    [1] 25.12333

I take $\bar Y = 25.12333$, denoted a.obs in the program below, as a point estimate of $\mu$.

In order to make a confidence interval (CI), I have to know about the variability of the population around its mean. If I knew the distribution of $D = \bar Y - \mu$, I could find numbers $L$ and $U$ such that $P(L \le D = \bar Y - \mu \le U) = 0.95$. Then I would have $P(\bar Y - U \le \mu \le \bar Y - L) = 0.95$, and a 95% CI for $\mu$ would be of the form $(\bar Y - U, \bar Y - L)$.

Not knowing the values $L$ and $U$, I enter the 'bootstrap world' in order to get estimates $L^*$ and $U^*$ of these values, respectively. Momentarily, I take the observed $\bar Y$ as a proxy for the unknown $\mu$. I take a large number $B$ of "re-samples" of the data. Each re-sample is of size $n = 30$ and re-samples are taken with replacement from the original sample.
For each re-sample, I find the mean $\bar Y^*$ and $D^* = \bar Y^* - \bar Y$. This gives me $B$ values of $D^*$. I cut 2.5% from the lower and upper ends of this collection of $D^*$'s to find the required values $L^*$ and $U^*$.

Returning to the "real world", $\bar Y$ returns to its original role as the observed mean of the sample, and a 95% nonparametric bootstrap CI for $\mu$ is of the form $(\bar Y - U^*, \bar Y - L^*)$.

In the following R program, suffixes .re are used instead of $*$'s to indicate quantities that result from re-sampling, and the observed $\bar Y$ is called a.obs. The program assumes that the data y are already present.

    set.seed(624);  B = 10^4;  d.re = numeric(B)
    a.obs = mean(y);  n = length(y)
    for (i in 1:B) {
      a.re    = mean(sample(y, n, repl=T))
      d.re[i] = a.re - a.obs
    }
    L.re = quantile(d.re, .025);  U.re = quantile(d.re, .975)
    c(a.obs - U.re, a.obs - L.re)
       97.5%     2.5%
    21.14325 28.88333

Thus a 95% nonparametric bootstrap CI for $\mu$ is $(21.1, 28.9)$. Each run of the program gives a slightly different result if you omit the set.seed statement; retain that statement to replicate the exact answer above. However, with $B = 10{,}000$ iterations, differences from one run to another will be small; a second run with an unknown seed gave the interval $(21.2, 29.0)$.

A 95% t confidence interval is $(21.0, 29.2)$. It is based on the assumption that the data are normal (and contemplates the symmetrical tails of a normal population). The bootstrap CI assumes that the data are a random sample from a population with mean $\mu$. It assumes only that the population is capable of producing the values observed.

Notes: (1) The data y were randomly sampled from a gamma distribution with shape parameter 5 and mean 25. (2) This is a 'bias-corrected' bootstrap CI. A version without bias correction would be to bootstrap a.re and use quantile(a.re, c(.025,.975)) as the CI. Some authors do that and then apply bias correction retroactively, using 2*a.obs - quantile(a.re, c(.025,.975)). (This is equivalent to the program above, but then it's not so easy to explain the role of 2*a.obs.)
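For contrast, here is a short sketch of the plain percentile version described in note (2) -- bootstrap the means themselves and take their quantiles (same assumptions as the program above, with y, n, B, and a.obs already defined):

    a.re = numeric(B)
    for (i in 1:B) a.re[i] = mean(sample(y, n, repl=T))
    quantile(a.re, c(.025, .975))             # percentile CI, no bias correction
    2*a.obs - quantile(a.re, c(.975, .025))   # retroactive bias correction (lower, upper)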
# Nice to meet you :)

#### Latiole

##### New Member

Hello,

23 years old, male. I'm working on different models to estimate volatility, such as GARCH, for my studies. The problem is that I'm a beginner in financial statistics. I decided to use Matlab and GRETL (with a lot of difficulties) to work on these problems.

I could see this forum is dynamic. If my knowledge is sufficient, I will help as often as I can. I'm also here to get answers. :yup:

Bye :wave:
# Q gives rise to D => E => V => I => H.

1. Jun 5, 2014

### ugenetic

Charge is the most fundamental quantity. Consider a point charge Q to simplify our mental picture: Q will give rise to an immutable D, whatever material Q was placed in. E will then = D/ε. A stronger dielectric material will really weaken your E here. However, this is not a contradiction to how a capacitor works. If you integrate E along some path connecting 2 points, you will get the potential difference between those 2 points. SURE, you can artificially fix E and claim D will vary in different materials; to me, that is a misunderstanding of how the physical world behaves.

I => H or B is muddy in my head. In a coil: does dv/dt drive ø directly, or does V drive I, and NI drives ø, and that ø will reversibly affect I (with saturation, who drives whom matters)?

2. Jun 5, 2014

### Simon Bridge

It is certainly a fundamental thingy - but there are more fundamental ones depending on who you ask. Unfortunately Nature does not care about what things are like to each of us. Nature just is. Consider: can photons exist without charges?

Another formulation of electromagnetism is in terms of potentials... the charge can still give rise to a potential via Poisson's equation:
$$\epsilon \nabla^2\phi = -\rho_{free}$$
... then the potential gives rise to the fields.

The relationships between E, D, and whatever are not cause and effect. One does not drive the other. They are just relationships.

Currents are moving charges. So you can get things back to your favorite fundamental. Either or both, depending on the exact setup. The current is to magnetism what charge is to electricity. However, the subject is best understood as unified electromagnetism - all together. Look up "Maxwell's equations".

3. Jun 5, 2014

### Delta²

I am not sure what you mean by the symbol of the empty set (that is how it renders in my Internet Explorer browser when you say dv/dt drives it), but I guess you mean the magnetic flux. In the case of a coil connected to a voltage source V, V gives rise to current I, which creates a time-varying magnetic field B, which in turn will create a time-varying, non-conservative electric field, which will affect the original current I.

4. Jun 5, 2014

### ugenetic

Thank you very much for your replies, I think I am getting somewhere.

First, I think the $\epsilon \nabla^2\phi = -\rho_{free}$ example actually reinforced my mental picture. The intention of that equation is: I have a potential field, and I wonder what the underlying cause of it is? And the answer is: a density of charges. And you can see that the potential is plagued by the MATERIAL $\epsilon$ it is in. Potential, despite its noble qualities, still bears in its bodily frame the indelible mark of its lowly origin.

That "currents are moving charges" was my biggest temptation, along with M dipoles and quantum spin, to think that charges are the fundamental driving force of the magnetic world. But the rate of flow does not meaningfully translate into the speed of movement of charges, so I am not quite sure.

Why am I so obsessed with this heretic idea of "fundamental"? It is exactly because of this question: "Who is exciting the coil to produce flux?" The voltage across the coil or the current flowing in the coil? According to Mr. Bridge and Delta², and my personal belief, it should be the current. But if that's the case, then why do so many textbooks just maddeningly assume a sine V will get a PERFECTLY shaped sine flux (considering saturation and hysteresis)?
5. Jun 6, 2014

### DrDu

D is not very fundamental; on a microscopic level, it is sufficient to consider E. Also note that even in material media, Q fixes only the longitudinal part of D, while the transversal part depends on boundary conditions and the like.

6. Jun 6, 2014

### Simon Bridge

Or you could ask how that charge density got there if not in response to the potential? The intention of the equation is to show a relationship - not cause and effect. How you look at it depends on the question you want to ask.

Sure it does. If n charges per unit volume of magnitude e move with a drift velocity v through a cross-section area A, then the current is I = neAv. So currents are moving charges - but what are they moving with respect to? Why - with respect to the ammeter of course! If the ammeter was moving along with the charges, then there would be no current and no magnetic field - only the electric field from the charges remains.

The person turning the dial is causing the changes. If the dial controls voltage, then the voltage causes everything; if the dial controls the current, then the current controls everything. It just depends on how the equipment is rigged up.

Because they are not taking into account hysteresis or saturation.

An EM field can propagate through space all by itself even if the charges that originated it have vanished. It consists of self-consistent E and B fields, where the E field induces a B field which is just the right shape to induce the original E field... but it's chicken-and-egg time: the same description works if you start with the B field.

All this is classical. Go to relativity and your idea that charges are behind everything looks better. At the quantum level, though, we get a picture where there are intrinsic magnets: no current. Electromagnetism is understood in terms of interactions with photons. Now photons are quanta of the EM field - and charges are then understood in terms of their interactions... thus the field gives rise to the charge?? More generally, particles are understood in QFT as disturbances in an underlying Field - so the Field gives rise to the particles.

I'm not trying to convince you of anything - I'm just trying to show you that what causes what is not so straightforward. At the level you are trying to understand these things, you should not be thinking that the equations imply one side of the equals sign is any more fundamental than the other.

Last edited: Jun 6, 2014

7. Jun 6, 2014

### Delta²

Mathematical relations like $V=-\dfrac{d\phi}{dt}$ don't necessarily give you the full picture of the underlying physical reality. Even in classical mechanics, where for a rigid body you have $F_{net}=ma_{com}$, this equation tells you nothing about the internal forces in the rigid body, which in the general case are the ones responsible for accelerating the c.o.m. of the body.

8. Jun 6, 2014

### ugenetic

$V=-d\phi/dt$ is perfectly acceptable to me, as it is the 3rd Maxwell equation. And I will let go of the "fundamental" stuff. Let's just focus on the magnetization graph I posted above. Why is a sine wave voltage across a coil inducing a perfect sine wave flux? Shouldn't both the current and the flux be skewed by the hysteresis and saturation? I don't think this is how the 4th Maxwell equation is applied. And I don't think the 3rd equation can be reversed either.

9. Jun 6, 2014

### Simon Bridge

There is no way of knowing from the graph alone - where did you get it from? I note that there are three curves; only one looks like a good sine wave... that is labelled with a phi.
Probably flux. The other two are V (a voltage maybe) and $i_{exc}$ (some sort of current). These don't look like nice sine waves at all - though the V curve is sort-of sinusoidal. I would guess that the flux was applied and the other two curves are calculated from that, taking into account... pretty much whatever the author wanted to.

10. Jun 7, 2014

### Delta²

Ok, I think I understand you now. I think the confusion is between the concepts of the voltage drop and the emf of the coil. It's the emf of the coil that will always be $-d\phi/dt$; however, the voltage drop across the coil is affected by other things, for example the ohmic resistance of the coil (in that case it will be $V=IR-d\phi/dt$). So even if the voltage drop V across a coil is perfectly sinusoidal due to a source voltage applied there, the term $d\phi/dt$ can be different.

11. Jun 7, 2014

### vanhees71

An EMF is not a potential. To the contrary, it is related to the curl of the electric field according to the Maxwell equation (Faraday's Law)
$$\vec{\nabla} \times \vec{E}=-\frac{1}{c} \frac{\partial \vec{B}}{\partial t}.$$
The integral form is often not given correctly. It reads
$$\text{EMF}=\int_{\partial A} \mathrm{d} \vec{x} \cdot \left (\vec{E}+\frac{\vec{v}}{c} \times \vec{B} \right )=-\frac{\mathrm{d}}{\mathrm{d} t} \frac{1}{c} \int_A \mathrm{d}^2 \vec{F} \cdot \vec{B}.$$
For the derivation, see the Wikipedia.

12. Jun 7, 2014

### ugenetic

I really, really appreciate all of your time and effort in understanding my concern and answering. This is brilliant. My bad for not structuring my scenario clearly, so here is the whole setup: the circuit is simply a sine wave voltage source (ideal) connected to a single coil with a closed iron core inside. It would look like the picture below (ignore the labels). Only consider saturation and hysteresis of the core, and assume no resistance and no eddy-current core loss or leakage. The relation between the voltage of the source, the current in the coil, and the flux will look like this:

So many textbooks just say: "Faraday said $v = d\phi/dt$, so if $v$ is sin(something) then $\phi$ is cos(something), right there, and done." I was like, WTF, that's the induced voltage from a driving flux. Flux has nothing to do with voltage; flux is the result of current: the line integral $H \cdot l = i$. The E inside Maxwell's 4th equation in this case means the E inside the wires, and the H it generates around the winding... OK... that apparently does imply a relationship $V_{magnetizing} = d(\phi_{result})/dt$... I just confused myself. I thought the 4th equation does not apply here, but apparently it does apply inside the winding wires. But... what about the current side of the story? The current flowing in the wires will get some H going around the winding wires as well.
# Probabilistic characterization of the effect of transient stochastic loads on the fatigue-crack nucleation time

The rainflow counting algorithm for material fatigue is both simple to implement and extraordinarily successful for predicting material failure times. However, it neglects memory effects and time-ordering dependence, and therefore runs into difficulties dealing with highly intermittent or transient stochastic loads with heavy-tailed distributions. Such loads appear frequently in a wide range of applications in ocean and mechanical engineering, such as wind turbines and offshore structures. In this work we employ the Serebrinsky-Ortiz cohesive envelope model for material fatigue to characterize the effects of load intermittency on the fatigue-crack nucleation time. We first formulate efficient numerical integration schemes, which allow for the direct characterization of the fatigue life in terms of any given load time-series. Subsequently, we consider the case of stochastic intermittent loads with given statistical characteristics. To overcome the need for expensive Monte-Carlo simulations, we formulate the fatigue life as an up-crossing problem of the coherent envelope. Assuming statistical independence for the large intermittent spikes and using probabilistic arguments, we derive closed expressions for the up-crossing properties of the coherent envelope and obtain analytical approximations for the probability mass function of the failure time. The analytical expressions are derived directly in terms of the probability density function of the load, as well as the coherent envelope. We examine the accuracy of the analytical approximations and compare the predicted failure time with the standard rainflow algorithm for various loads. Finally, we use the analytical expressions to examine the robustness of the derived probability distribution for the failure time with respect to the geometrical properties of the coherent envelope.

## 1 Introduction

Modern engineering applications increasingly rely on enormously capital-intensive structures, which are placed in extreme conditions and subject to extreme loads that vary significantly throughout the structure's expected lifetime. Failure costs are astronomical, including forgone profits, legal penalties, tort payouts, and reputation damage [8].
Minimizing lifetime costs requires safe-life engineering and a conservative assessment of failure probabilities. Unfortunately, while material fatigue is a major contributor to failure, non-destructively measuring fatigue is both difficult and expensive [30]. For many classes of structures, fatigue loads have a stochastic character with transient features that cannot be captured through a statistically stationary description. Examples include loads in wind turbines due to control and wind gusts, transitions between chaotic and regular responses in oil risers, and slamming loads in ship motions [13, 28, 29]. For such applications, traditional frequency-domain approaches have difficulty predicting the fatigue effects of intermittent loading, as those are inherently connected to time-ordering and therefore cannot be captured by the spectral content of the load.

In this work, we first develop an efficient time-marching scheme for the fatigue model developed by Serebrinsky and Ortiz [23, 11], which takes into account time-ordering effects. Based on this model we then derive analytical approximations for the probability mass function (pmf) of the failure time in terms of the load probability density function (pdf) and the coherent envelope. We demonstrate the developed ideas in several examples involving intermittent loads. We also compare with standard fatigue life estimation methods and discuss their performance in various loading scenarios. Finally, we examine the robustness of the derived pmf with respect to geometrical properties of the coherent envelope.

## 2 The Serebrinsky-Ortiz (SO) model

We consider a single material element with one-dimensional loading. The applied load is a random process $\sigma(t)$, and the corresponding opening displacement, $\delta(t)$, depends on the element constitutive relation. Further, this constitutive relation depends on the fatigue state of the material; after some number of loading/unloading cycles, the material stiffness will degrade and eventually the material will fail. Our aim is the relationship between the load $\sigma(t)$ and the failure time $N_f$.

For the characterization of fatigue-crack initiation we consider the hysteretic cohesive-law model by Serebrinsky and Ortiz [23, 11]. In this model the fatigue-crack nucleation problem is formulated as a first-upcrossing problem of the opening displacement-load curve with a critical boundary: the cohesive envelope. We begin with a brief description of the cohesive model and explain in what sense it can be used for the quantification of fatigue-crack nucleation.

A cohesive law, i.e. the critical boundary, is expressed as a curve in the $(\delta, \sigma)$ plane. When the $(\delta, \sigma)$ curve meets the descending branch of the monotonic cohesive envelope, the material interface loses stability and we have the nucleation of a fatigue crack. The cohesive envelope is typically described by the relation

$$\sigma = F(\delta) \triangleq e\,\sigma_c \frac{\delta}{\delta_c} e^{-\delta/\delta_c}, \tag{1}$$

where $\sigma_c$ and $\delta_c$ are constants that characterize the material.

The next step is to characterize the evolution of the opening displacement in terms of a given loading history $\sigma(t)$. We will employ a simple phenomenological model [23, 11]:

$$\dot{\sigma} = \begin{cases} K^- \dot{\delta}, & \text{if } \dot{\delta} < 0, \\ K^+ \dot{\delta}, & \text{if } \dot{\delta} > 0. \end{cases} \tag{2}$$

For simplicity we assume that unloading always takes place towards the origin and is determined by the unloading point. This condition fully defines $K^-$. By contrast, the loading stiffness, $K^+$, is assumed to evolve in accordance with the kinetic relation

$$\dot{K}^+ = \begin{cases} (K^+ - K^-)\dfrac{\dot{\delta}}{\delta_a}, & \text{if } \dot{\delta} < 0, \\[4pt] -K^+ \dfrac{\dot{\delta}}{\delta_a}, & \text{if } \dot{\delta} > 0, \end{cases} \tag{3}$$

where the fatigue endurance length, $\delta_a$, is a characteristic opening displacement.
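As a quick numerical sanity check on (1): the envelope peaks exactly at $\delta = \delta_c$ with value $\sigma_c$, and its slope near the origin is $e\,\sigma_c/\delta_c$. A minimal sketch (the unit material constants are illustrative assumptions, not values from the paper):

```python
import numpy as np

def cohesive_envelope(delta, sigma_c=1.0, delta_c=1.0):
    """Cohesive envelope, eq. (1): sigma = e * sigma_c * (delta/delta_c) * exp(-delta/delta_c)."""
    return np.e * sigma_c * (delta / delta_c) * np.exp(-delta / delta_c)

delta = np.linspace(0.0, 5.0, 1001)
sigma = cohesive_envelope(delta)

# The ascending branch ends and the descending branch begins at delta = delta_c,
# where the envelope attains its maximum value sigma_c.
print(delta[np.argmax(sigma)], sigma.max())   # -> ~1.0, ~1.0

# Slope of F near the origin: F(delta)/delta -> e*sigma_c/delta_c as delta -> 0;
# this limit reappears below as the initial loading stiffness.
print(cohesive_envelope(1e-9) / 1e-9)         # -> ~2.718 (= e)
```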
The initial loading stiffness is given by $K_0^+ = \sigma_0^+/\delta^-$, where $\sigma_0^+$ is the first peak of the load function, and $\delta^-$ is the corresponding value of the ascending part of the cohesive envelope. Assuming that a typical initial load is much smaller than $\sigma_c$, we can express $K_0^+$ in terms of the coherent envelope as

$$K_0^+ = \lim_{\delta \to 0} \frac{F(\delta)}{\delta}. \tag{4}$$

If a subsequent load peak results in an intersection with the ascending part of the cohesive envelope, then a new initial loading stiffness is defined and it evolves according to equation (3). This typically happens during extreme loading events and it is a mechanism that results in strong variability of the fatigue lifetime of the material. By simple integration one can identify the evolution of $\delta$ and $K^+$ and estimate when the $(\delta, \sigma)$ curve will eventually intersect the descending part of the cohesive envelope, at which point we have fatigue nucleation.

The presented model allows us to take into account not only the number and amplitude of the cycles but also their specific sequence. In this sense it can be employed to study the effect of noise but also of high-frequency intermittent loads acting on the structure.

### 2.1 Time-discretization of the SO model

We assume that the loading function is a continuous random function with positive values. If the loading function becomes negative, the opening displacement vanishes and the loading stiffness remains constant until the next positive load. Therefore, we assume a positive load and we approximate $\sigma(t)$ with a piece-wise linear function between local maxima/minima, i.e.

$$\dot{\sigma}(t) = \frac{\Delta\sigma_n^-}{\Delta t_n^-}, \quad t \in [\tau_{n-1},\, \tau_{n-1} + \Delta t_n^-],$$
$$\dot{\sigma}(t) = \frac{\Delta\sigma_n^+}{\Delta t_n^+}, \quad t \in [\tau_{n-1} + \Delta t_n^-,\, \tau_{n-1} + \Delta t_n^- + \Delta t_n^+],$$

where $n$ is the cycle number, $\Delta\sigma_n^-$ ($\Delta\sigma_n^+$) is the negative (positive) increment of the cycle, and $\Delta t_n^-$ ($\Delta t_n^+$) is the corresponding duration (Figure 1). The initial time of the cycle is denoted as $\tau_{n-1}$; the initial load and opening displacement (both local maxima) are denoted as $\sigma_{n-1}$ and $\delta_{n-1}$, respectively, while $K_{n-1}^+$ represents the loading stiffness. In addition, $\delta_n^-$ denotes the local minimum of the opening displacement during the cycle.

This approximation allows us to derive a solution in the form of an iterative map. We first express the evolution of the opening displacement during the descending part of the cycle, i.e. after the increment $\Delta\sigma_n^-$. In this case,

$$K_n^- = \frac{\sigma_{n-1}}{\delta_{n-1}}. \tag{5}$$

Therefore, by direct integration we have the evolution of the opening displacement during unloading:

$$\delta_n^- = \delta_{n-1} + \frac{\Delta\sigma_n^-}{K_n^-} = \delta_{n-1}\left(1 + \frac{\Delta\sigma_n^-}{\sigma_{n-1}}\right). \tag{6}$$

Moreover, during unloading we have, by combining equations (2) and (3) (note that $\dot{\delta} = \dot{\sigma}/K^-$):

$$\frac{\mathrm{d}K^+}{K^- - K^+} = -\frac{\mathrm{d}\sigma}{K^- \delta_a}.$$

Integrating, we obtain

$$\frac{K_n^- - K_{n,u}^+}{K_n^- - K_{n-1}^+} = \exp\left(\frac{\Delta\sigma_n^-}{\delta_a K_n^-}\right), \tag{7}$$

or equivalently,

$$K_{n,u}^+ = K_n^- - \exp\left(\frac{\Delta\sigma_n^-}{\delta_a K_n^-}\right)\left(K_n^- - K_{n-1}^+\right), \tag{8}$$

where $K_{n,u}^+$ is the loading stiffness right after the end of the negative (unloading) increment of the cycle.

Next, we compute the opening displacement after the positive increment of the cycle. We first compute the evolution of the loading stiffness $K^+$ during loading. We combine equations (2) and (3) to obtain

$$\dot{K}^+ = -\frac{\dot{\sigma}}{\delta_a}, \tag{9}$$

which is integrated to obtain the value of $K^+$ during the loading part of the cycle:

$$K_n^+(\sigma) = K_{n,u}^+ - \frac{\sigma}{\delta_a}, \quad \sigma \in [0, \Delta\sigma_n^+]. \tag{10}$$

At the end of the cycle we will have

$$K_n^+ = K_{n,u}^+ - \frac{\Delta\sigma_n^+}{\delta_a}. \tag{11}$$

The initial loading stiffness is given by $K_0^+ = \sigma_0^+/\delta^-$, where $\sigma_0^+$ is the first peak of the load function, and $\delta^-$ is defined through the ascending part of the cohesive envelope, as the solution of the equation

$$\sigma_0^+ = e\,\sigma_c \frac{\delta^-}{\delta_c} e^{-\delta^-/\delta_c}, \quad \delta^- \leq \delta_c. \tag{12}$$

If a subsequent load peak results in an intersection with the ascending part of the cohesive envelope, then a new initial loading stiffness is defined and it evolves according to equation (3).
We substitute expression (10) into eq. (2):

$$\mathrm{d}\delta = \frac{\mathrm{d}\sigma}{K_{n,u}^+ - \dfrac{\sigma}{\delta_a}}. \tag{13}$$

By integration we have

$$\delta_n - \delta_n^- = -\delta_a \log\left(1 - \frac{\Delta\sigma_n^+}{\delta_a K_{n,u}^+}\right), \tag{14}$$

which gives the local maximum of the opening displacement history, $\delta_n$, at the end of the cycle. Crossing of this quantity with the descending part of the cohesive envelope (1) is an indication of fatigue-crack nucleation.

Equations (6), (7), (10) and (14) provide a piece-wise linear approximation of the opening displacement evolution, $\delta(t)$. A graphical summary of the described scheme is given in Figure 2. Based on the equation of the cohesive envelope (1), we have material failure for the minimum number of cycles, $N_f$, for which we have an up-crossing of the descending part of the cohesive envelope:

$$\sigma_{N_f}^+ \geq e\,\sigma_c \frac{\delta_{N_f}^+}{\delta_c} e^{-\delta_{N_f}^+/\delta_c} \quad \text{and} \quad \delta_{N_f}^+ > \delta_c. \tag{15}$$

A sample of the evolution of the stiffness, $K^+$, is shown in Figure 3. We observe that the evolution is typically linear except for several discrete jumps. These jumps are associated with crossings of the ascending part of the coherent envelope due to intermittent loading events. In the next section we formulate an approximation scheme that takes into account the almost linear evolution of the stiffness away from extreme events and significantly accelerates the computation.

### 2.2 Simulation of a loading time series using the probabilistic decomposition-synthesis method

Here we formulate an approximation scheme for the failure time under an arbitrary loading time series. We first decompose the loading signal into segments associated with extreme events and regular loading events. For segments of the loading time series where there is no up-crossing of the coherent envelope, we will show that the evolution of $K_n^+$ can be linearly approximated:

$$K_n^+ - K_{n-1}^+ \approx \Delta K, \tag{16}$$

where $\Delta K$ has a constant value that we will estimate later. The remaining segments of the loading time series are associated with the discontinuous jumps in Figure 3, corresponding to the intersections of the $(\delta, \sigma)$ curve with the ascending part of the coherent envelope. This breakdown of an intermittent process into a linear region and an extreme region parallels the probabilistic decomposition-synthesis framework developed in [15, 14].

Let the sequence of local maxima be the discretization of $\sigma(t)$, and let $\hat{\sigma}$ be a fixed threshold. This sequence may be broken into two sets:

• the linear (quiescent) region – the peaks below the threshold $\hat{\sigma}$;
• the intermittent spikes – the peaks at or above the threshold.

For points in the quiescent set, we will use the simplified update equation (16). For points in the spike set, we will use the full SO update step as described in the previous section, regardless of whether the $(\delta, \sigma)$ curve actually crosses the coherent envelope. Finally, for technical reasons we will remove the first few peaks from the quiescent set and add them to the spike set. This helps to initialize the algorithm in the case when there are otherwise few or no early spikes in the spike set.

#### 2.2.1 Estimation of the slope ΔK

We can directly estimate the slope $\Delta K$ in (16) from the SO model and the input signal statistics. Combining equations (8) and (11) we have

$$\Delta K_n^+ = \left(1 - \exp\left(\frac{\Delta\sigma_n^-}{\delta_a K_n^-}\right)\right)\left(K_n^- - K_{n-1}^+\right) - \frac{\Delta\sigma_n^+}{\delta_a}. \tag{17}$$

The argument of the exponential, which has an inverse dependence on the endurance length $\delta_a$, is small unless a very large spike happens at very small $K_n^-$, i.e., towards the end of the material lifetime. Therefore, expanding the exponential to first order produces

$$\Delta K_n^+ = \frac{\Delta\sigma_n^-}{\delta_a}\left(\frac{K_{n-1}^+}{K_n^-} - 1\right) - \frac{\Delta\sigma_n^+}{\delta_a}. \tag{18}$$

The first term on the right-hand side of equation (18) depends on the descending increment of the input signal, but it also depends on the small quantity $K_{n-1}^+/K_n^- - 1$ and therefore may be neglected in the small-increment regime.
The second term is state-independent and is directly proportional to the size of the ascending increment of the input signal. When these approximations are made, we may relate the increments of $K^+$ to the increments of $\sigma$ using the following expression:

$$\Delta K_n^+ = -\frac{\Delta\sigma_n^+}{\delta_a}. \tag{19}$$

Assuming known statistical characteristics for the loading process, while the condition $\sigma^+ < \hat{\sigma}$ holds, the pdf and expected mean of $\Delta K^+$ are given by

$$f_{\Delta K^+}(k) = \delta_a\, f_{\Delta\sigma^+ \mid \sigma^+ < \hat{\sigma}}(-\delta_a k), \tag{20}$$

$$\Delta K = \mathbb{E}[\Delta K^+] = -\frac{\mathbb{E}[\Delta\sigma^+ \mid \sigma^+ < \hat{\sigma}]}{\delta_a}, \tag{21}$$

where $\mathbb{E}[\cdot]$ is the expected value (ensemble mean) operator, and the conditional pdf is given by

$$f_{\Delta\sigma^+ \mid \sigma^+ < \hat{\sigma}}(\Delta\sigma^+) = f_{\Delta\sigma^+}(\Delta\sigma^+) \times \frac{\mathbb{I}(\sigma^+ < \hat{\sigma})}{P[\sigma^+ < \hat{\sigma}]}, \tag{22}$$

where $\mathbb{I}$ is the indicator function. We note that one can also estimate the mean value empirically through a direct simulation of a short segment of the loading history.

Under this approximation, and assuming no up-crossing of the coherent envelope, the evolution of the stiffness can be approximated by

$$K_n^+ = K_0^+ - n\Delta K. \tag{23}$$

This is a valid assumption as long as the spread of values of $\Delta K_n^+$ is not systematically large. In this case, the material life will extend until the material stiffness vanishes, i.e.

$$K_{N_f}^+ = K_0^+ - N_f \Delta K = 0. \tag{24}$$

Below we consider some additional cases where analytical approximations can be obtained.

##### Narrow-banded Gaussian Process.

Suppose that $\sigma(t)$ is a narrow-banded zero-mean Gaussian random process with standard deviation $\varrho$. Since the negative values do not matter for the SO model, we need to characterize only the positive peaks. The pdf for the peaks of the process is given by (see, for instance, [18])

$$f_{\sigma^+}(a) = \frac{a}{\varrho^2}\exp\left(-\frac{a^2}{2\varrho^2}\right), \quad a \geq 0. \tag{25}$$

##### Narrow-banded Non-Gaussian Process.

Suppose instead that $\sigma(t)$ is narrow-banded but not Gaussian. If the process is zero-mean then we can rely again on the amplitude of the positive peaks. The pdf of the peaks can be found as

$$f_{\sigma^+}(a) = -\frac{1}{v_\sigma^+(0)}\frac{\mathrm{d}v_\sigma^+(a)}{\mathrm{d}a}, \quad a \geq 0, \tag{26}$$

where $v_\sigma^+(a)$ is the $a$-upcrossing rate, given by the Rice formula:

$$v_\sigma^+(a) = \lim_{\Delta t \to 0}\frac{1}{\Delta t}\mathbb{E}[N^+(a, \Delta t)] = \int_0^\infty \dot{\sigma}\, f_{\sigma\dot{\sigma}}(a, \dot{\sigma})\,\mathrm{d}\dot{\sigma}. \tag{27}$$

For detailed derivations of these relations we refer to [18].

##### Full Increment Calculation.

Suppose that we relax the requirement that the distribution of peaks be an adequate substitute for the distribution of increments. Following [25], we obtain an expression for the joint distribution of peak $P$, valley $V$, and gap $T$:

$$f_{P,V}(a_1, a_2, \tau) = \frac{\displaystyle\int_{-\infty}^0\!\int_0^\infty -\ddot{x}_1\ddot{x}_2\, f_{X_1 X_2 \dot{X}_1 \dot{X}_2 \ddot{X}_1 \ddot{X}_2}(a_1, a_2, 0, 0, \ddot{x}_1, \ddot{x}_2)\,\mathrm{d}\ddot{x}_1\mathrm{d}\ddot{x}_2}{\displaystyle\int_{-\infty}^0\!\int_0^\infty -\ddot{x}_1\ddot{x}_2\, f_{\dot{X}_1 \dot{X}_2 \ddot{X}_1 \ddot{X}_2}(0, 0, \ddot{x}_1, \ddot{x}_2)\,\mathrm{d}\ddot{x}_1\mathrm{d}\ddot{x}_2}, \tag{28}$$

an upper bound on the distribution of $T$,

$$f_T(\tau) \leq \int_{-\infty}^0\!\int_0^\infty -\ddot{x}_1\ddot{x}_2\, f_{\dot{X}_1 \dot{X}_2 \ddot{X}_1 \ddot{X}_2}(0, 0, \ddot{x}_1, \ddot{x}_2)\,\mathrm{d}\ddot{x}_1\mathrm{d}\ddot{x}_2, \tag{29}$$

and an expression for the distribution of the increments $\Delta\sigma^+$. Equation (28) resembles a Rice-type equation, and it in turn may be simplified by assuming a Gaussian closure for the second derivatives.

### 2.3 The rainflow counting algorithm

The simplest method to characterize material failure properties is to apply a harmonic load with known amplitude and count the number of cycles until material failure, $N_f$. The relationship between the amplitude and $N_f$ is a material characteristic; Figure 4 shows the S–N curve for the SO constitutive model. Although realistic S–N curves are generally found to depend on the mean stress (compression versus tension) and loading (axial or torsional), simple cycle counting is found to work well for constant-amplitude loading in the absence of intermittency [22].
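For concreteness, the per-cycle map of Section 2.1 can be driven with a constant-amplitude load to trace an S–N curve like the one in Figure 4. The sketch below transcribes eqs. (5), (6), (8), (11), (14) and the failure condition (15); the material constants are illustrative assumptions, the first peak uses the small-load slope of the envelope at the origin, and the ascending-branch resets (relevant only for intermittent spikes) are omitted:

```python
import numpy as np

def F(delta, sigma_c=1.0, delta_c=1.0):
    """Cohesive envelope, eq. (1)."""
    return np.e * sigma_c * (delta / delta_c) * np.exp(-delta / delta_c)

def cycles_to_failure(a, sigma_c=1.0, delta_c=1.0, delta_a=20.0, n_max=10**6):
    """Constant-amplitude load cycling between 0 and a, pushed through the
    per-cycle SO map. Returns the number of cycles until nucleation."""
    K0 = np.e * sigma_c / delta_c          # slope of the envelope at the origin
    delta, K_plus = a / K0, K0             # state after the first loading ramp
    for n in range(1, n_max + 1):
        K_minus = a / delta                                   # eq. (5)
        d_lo = 0.0                         # eq. (6) with increment -a: unload to the origin
        K_u = K_minus - np.exp(-a / (delta_a * K_minus)) * (K_minus - K_plus)  # eq. (8)
        if K_u * delta_a <= a:             # reloading would exhaust the stiffness
            return n
        delta = d_lo - delta_a * np.log(1.0 - a / (delta_a * K_u))  # eq. (14)
        K_plus = K_u - a / delta_a                            # eq. (11)
        if delta > delta_c and a >= F(delta, sigma_c, delta_c):     # eq. (15)
            return n
    return n_max

# S-N pairs: smaller amplitudes survive more cycles
for a in (0.9, 0.7, 0.5, 0.3):
    print(a, cycles_to_failure(a))
```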
This frequency-domain approach is easily adapted to non-harmonic signals by the Palmgren–Miner rule, given by

$$\sum_i \frac{n_i}{N_{f_i}} = C, \tag{31}$$

which allows for the calculation of equivalent fatigue by breaking the load signal into individual cycles, each of whose contributions is separately determined by reference to the S–N curve [4]. Based on the Palmgren–Miner rule, the material fails when $\sum_i n_i/N_{f_i}$ becomes greater than $C$. The well-known rainflow counting algorithm [5] implements this rule by breaking a given signal into the corresponding set of increments. The use of equation (31) combined with a cycle-counting rule is standard in the literature on fatigue from random loads [26, 2, 6]. Note, however, that frequency-domain methods such as rainflow counting completely ignore the time-ordering effects of the load history.

### 2.4 A stochastic load model with intermittency

To demonstrate the considered models we employ a load produced by a dynamical system exhibiting intermittent instabilities. This represents typical loads found in a wide range of engineering problems:

$$\dot{y} + (\lambda - q_1 z^2)y = \nu_1 \dot{W}_y(t; \omega), \qquad \ddot{z} + c\dot{z} + (k - q_2 y)z = \nu_2 \dot{W}_z(t; \omega), \qquad \sigma = Ez, \tag{32}$$

where $\dot{W}_y, \dot{W}_z$ are white-noise terms, $\lambda, c, k, \nu_1, \nu_2$ are constants, $q_1, q_2$ are nonlinear coupling terms, and $E$ is the Young modulus. The model represents a structural mode, $z$, which interacts nonlinearly with another mode, $y$, creating intermittent energy transfers. It is a normal form arising in a wide range of systems related to fluid-structure interaction and mechanics, such as vortex-induced vibrations, buckling, turbulent modes and waves [21]. The mode $y$ represents the 'reservoir' of energy (e.g. the axial mode in buckling) while the mode $z$ represents the mechanical mode which is excited by additive noise as well as through intermittent energy transfers. The nonlinear coupling terms are associated with intermittent energy transfers from the $y$ mode to the $z$ mode. The displacement $z$ defines the evolution of the stress that the material exhibits.

In Figure 5 we present two random samples (first column) of the load produced by the system (32). We can clearly observe the intermittent character of the time series. In the right column we present the cohesive envelope (1) together with the local maxima of the opening displacement (red circles). It is important to note that the two loading time series are qualitatively similar. However, while in the first case the load has a much larger maximum and several other intense peaks, the predicted number of cycles is close to the standard rainflow counting prediction, since for this number of cycles the Palmgren–Miner sum reaches its failure threshold, i.e. we have failure based on the Palmgren–Miner rule. On the other hand, for the second realization of the load, the peaks of the extreme events are much smaller in magnitude, yet we have fatigue-crack nucleation much earlier. We emphasize that this material failure is inherently connected with the time history of the load and cannot be captured by the rainflow-counting algorithm, which predicts a significantly larger number of cycles until material failure; at the number of cycles where the SO model nucleates, the Palmgren–Miner sum is still below its failure threshold.

## 3 Analytical approximation of the failure time pmf

In the absence of intersections between the $(\delta, \sigma)$ curve and the coherent envelope, the failure time would be approximated by the linear relationship (see eq. (24))

$$N_f = \frac{K_0^+}{\Delta K}, \tag{33}$$

where $K_0^+$ is the initial stiffness and $\Delta K$ is the expected change in $K^+$ over a typical cycle corresponding to the load distribution.
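Under the narrow-banded Gaussian load of Section 2.2.1 this baseline is easy to evaluate numerically: the peaks follow the Rayleigh density (25) and, for a narrow-banded load that unloads to near zero between peaks, the ascending increment of a quiescent cycle is approximately the peak height itself (an assumption of this sketch), so eq. (21) gives $\Delta K$ and eq. (33) the failure time. All numeric values below are illustrative assumptions:

```python
import numpy as np

rho = 0.1          # load standard deviation (assumed)
sigma_hat = 0.4    # spike threshold (assumed)
delta_a = 20.0     # endurance length (assumed)
K0 = np.e          # initial stiffness for sigma_c = delta_c = 1

# Rayleigh peak density, eq. (25), conditioned on staying below the threshold (eq. (22))
a = np.linspace(0.0, sigma_hat, 100_000)
f = (a / rho**2) * np.exp(-(a**2) / (2 * rho**2))
f /= np.trapz(f, a)

# Mean stiffness loss per quiescent cycle (magnitude of eq. (21)) and baseline N_f
dK = np.trapz(a * f, a) / delta_a
print("mean stiffness loss per cycle:", dK)
print("baseline cycles to failure, eq. (33):", K0 / dK)
```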
However, the coherent envelope changes things by adding three effects:

1. Extremely large loads that immediately cause material failure.
2. Up-crossings of the ascending part of the envelope, which cause jumps (discontinuities) in the evolution of $K^+$.
3. Up-crossings of the descending part of the envelope, which cause failure before $K^+$ has reached zero.

Each of these effects will be considered in order to derive an analytical approximation for the pmf of the failure time. This is essential, as a probabilistic quantification based on Monte-Carlo methods requires an enormous number of samples in order to accurately resolve the very small probabilities in the tail of the pmf. While there are techniques to somewhat improve on these numbers (e.g., importance sampling), this is a critical barrier for Monte-Carlo approaches to estimating the long tail of rare failure events.

### 3.1 Setup

We will write the coherent envelope, introduced in Section 2, in the generic form

$$\sigma = F(\delta), \tag{34}$$

which has both a monotonically ascending and a monotonically descending branch:

$$\frac{\partial F}{\partial \delta} > 0, \quad \delta < \delta_c, \qquad \frac{\partial F}{\partial \delta} < 0, \quad \delta > \delta_c,$$

and two corresponding inverses, $F_{\mathrm{asc}}^{-1}$ and $F_{\mathrm{des}}^{-1}$. Additionally, for what follows we define the functions

$$\kappa(\sigma) \triangleq \frac{\sigma}{F_{\mathrm{asc}}^{-1}(\sigma)} \quad \text{and} \quad \eta(\sigma) \triangleq \frac{\sigma}{F_{\mathrm{des}}^{-1}(\sigma)}, \tag{35}$$

which are assumed to be monotonic. These functions express the stiffness induced by an up-crossing of the ascending/descending part of the envelope and will be essential for our analysis. The monotonicity requirement is satisfied by typical coherent envelopes (e.g. eq. (1)).

We will assume that in the absence of envelope up-crossings the material stiffness, $K^+$, evolves linearly with the number of cycles, with its gradient given by the mean value $\Delta K$ (see eq. (19)), which has been estimated through one of the methods detailed in Section 2.2.1. This is a valid assumption as long as the spread of values of $\Delta K_n^+$ is not systematically large. If this is not the case, one can adopt a more complex model for the case of no envelope up-crossing, using e.g. the Palmgren–Miner rule. Further, we assume a known pdf of local load maxima, $f_{\sigma^+}$, as well as a known cumulative distribution:

$$F_{\sigma^+}(\sigma) = \int_{-\infty}^{\sigma} f_{\sigma^+}(s)\,\mathrm{d}s.$$

Finally, we will assume the independent spike hypothesis: that the amplitudes of local maxima are uncorrelated. This is not true in general for narrow-banded processes, but it is approximately true for 'large enough' maxima, which is what is needed for our analysis. This will be a key assumption in determining the probability distribution for several intermediate quantities below.

### 3.3 Failure time distribution

#### 3.3.1 Damage due to terminal loads larger than σc

The maximum value of the envelope is $\sigma_c$, and this is the maximum load the material can sustain. When the load on the material exceeds $\sigma_c$, it will fail no matter the fatigue history. We call these loads terminal. The failure cycle for this terminal mechanism is given by

$$n_x = \operatorname*{arg\,min}_{i > 0}\{\sigma_i^+ > \sigma_c\}. \tag{36}$$

This is the expression for the first crossing time of the threshold $\sigma_c$. Following the independent spike hypothesis, the probability of seeing an extreme load above a given threshold may be modeled by sequential Bernoulli trials. In this case the pmf of the first cycle at which we have a spike of critical magnitude follows a geometric distribution:

$$p_{N_x}(n) = (1 - p_c)^{n-1} p_c, \quad n = 1, 2, \ldots \tag{37}$$

where

$$p_c = P[\sigma_i > \sigma_c] = \int_{\sigma_c}^{\infty} f_{\sigma^+}(s)\,\mathrm{d}s. \tag{38}$$

#### 3.3.2 Damage due to up-crossings of the ascending part of the envelope

In general, the opening displacement-load curve may have multiple intersections with the ascending part, leading to multiple discontinuous jumps in $K^+$ (Figure 6).
However, as we show below, the total fatigue-lifetime effect depends only on the last such intersection with the ascending part of the envelope and on the cycle at which it occurs.

Invariance of the damage with respect to intermediate up-crossings. To prove this property we consider two scenarios with the same initial stiffness $K_0^+$. The last point where we have an intersection with the ascending part of the coherent envelope is assumed to be the same for both scenarios (having slope $\kappa(\sigma^{+*})$), occurring at the same cycle, $n^*$. In the first scenario we have an additional intersection with the envelope, at slope $\kappa(\sigma^{+\prime})$, occurring at some earlier cycle, $n'$, while in the second case the only intersection occurs at $n^*$ (Figure 6). As we observe, for the first loading scenario we have a jump at $n'$. Using simple geometry, the number of lost cycles due to this jump is given by (see Figure 6, right)

$$\phi_{a1} \triangleq \frac{K_0^+ - n'\Delta K - \kappa(\sigma^{+\prime})}{\Delta K}. \tag{39}$$

Using the same argument we also obtain the number of lost cycles due to the second jump, occurring at $n^*$:

$$\phi_{a2} \triangleq \frac{\kappa(\sigma^{+\prime}) - (n^* - n')\Delta K - \kappa(\sigma^{+*})}{\Delta K}. \tag{40}$$

Finally, by direct computation of the total number of lost cycles, we observe that this is equal to the lost cycles in the second loading scenario:

$$\phi_b \triangleq \frac{K_0^+ - n^*\Delta K - \kappa(\sigma^{+*})}{\Delta K} = \phi_{a1} + \phi_{a2}. \tag{41}$$

The above argument can be generalized to an arbitrary number of intermediate jumps and proves that the damage due to intersections with the ascending part of the envelope depends only on the last such intersection.

Damage quantification due to up-crossings of the ascending part for random loading. We have concluded that the number of lost cycles due to up-crossings of the ascending part of the envelope does not depend on intermediate up-crossings but only on the last up-crossing of the ascending curve. To quantify this damage we define the damage quotient $\phi$,

$$\phi(\sigma, n) \triangleq \frac{K_0^+ - \kappa(\sigma)}{\Delta K} - n, \tag{42}$$

which is meaningful only when it is positive, i.e. only when we have an up-crossing of the envelope. The damage quotient essentially measures the magnitude of each jump in the material stiffness, expressed as a number of cycles lost due to this jump, every time we have an up-crossing. For a generic loading sequence of peaks $\{\sigma_i^+\}_{i=1}^{\infty}$, the number of lost cycles due to up-crossings of the ascending part of the envelope will be given by the maximum of this quantity, attained at the cycle

$$n_a(\{\sigma_i^+\}_{i=1}^{\infty}) = \operatorname*{arg\,max}_{0 \leq n < \infty} \phi(\sigma_n^+, n) = \operatorname*{arg\,max}_{0 \leq n < \infty}\left(\frac{K_0^+ - \kappa(\sigma_n^+)}{\Delta K} - n\right), \tag{43}$$

with the condition that this maximum is a positive number, i.e. we have at least one up-crossing of the ascending part of the envelope. If no up-crossing occurs then $n_a = 0$. The constant $K_0^+$ used in the normalization of $\phi$ is the 'initial stiffness', and corresponds to an infinitesimal first load peak. Its value is given by (eq. (4))

$$K_0^+ = \lim_{\sigma \to 0} \kappa(\sigma), \tag{44}$$

that is, the slope of the coherent envelope near the origin.

To quantify the pmf of $n_a$, we will begin by using the pdf of the load peaks, $f_{\sigma^+}$, and the monotonicity of the function $\kappa$ to obtain the pdf of $\kappa$:

$$f_\kappa(a) = \frac{f_{\sigma^+}(\kappa^{-1}(a))}{|\kappa'(\kappa^{-1}(a))|}. \tag{45}$$

Based on this pdf we now obtain the pdf of the damage quotient $\phi_n$:

$$f_{\phi_n}(a) = \Delta K\, f_\kappa\!\left(K_0^+ - n\Delta K - a\Delta K\right), \tag{46}$$

and the corresponding cdf:

$$F_{\phi_n}(a) = 1 - F_\kappa\!\left(K_0^+ - n\Delta K - a\Delta K\right). \tag{47}$$
For any sequence of independent random variables with corresponding pdfs $f_j$ and cdfs $F_j$, one can show, using derived distributions, that the cumulative distribution function of the maximum is given by

$$F_{\max}(x) = \prod_{j=1}^{N} F_j(x), \tag{48}$$

while using a total-probability argument one can prove that the pmf for the argument of the maximum is given by [9]

$$p_{\max}(n) = \int_{-\infty}^{\infty} \prod_{\substack{j=1 \\ j \neq n}}^{N} F_j(x)\, f_n(x)\,\mathrm{d}x = \int_{-\infty}^{\infty} F_{\max}(x)\,\frac{f_n(x)}{F_n(x)}\,\mathrm{d}x, \quad n = 1, \ldots, N. \tag{49}$$

Applying this result to our problem leads to

$$p_{n_a}(n) = \int_{-\infty}^{\infty} Q_\kappa(x\Delta K)\,\frac{\Delta K\, f_\kappa(K_0^+ - n\Delta K - x\Delta K)}{1 - F_\kappa(K_0^+ - n\Delta K - x\Delta K)}\,\mathrm{d}x, \quad n = 1, \ldots, \tag{50}$$

where $Q_\kappa$ is a function that is independent of $n$ (i.e. it has to be computed only once):

$$Q_\kappa(y) = \lim_{n \to \infty}\prod_{j=1}^{n}\left(1 - F_\kappa(K_0^+ - j\Delta K - y)\right). \tag{51}$$

Note that

$$F_{\phi_{n_a}}(x) = Q_\kappa(x\Delta K). \tag{52}$$

To compute the function $Q_\kappa$ we consider its logarithm:

$$\log(Q_\kappa(y)) = \sum_{j=1}^{\infty}\log\left(1 - F_\kappa(K_0^+ - j\Delta K - y)\right). \tag{53}$$

We multiply both sides of the equation by $\Delta K$ and consider the limit as $\Delta K \to 0$, since for typical applications $\Delta K$ has a very small value, which corresponds to a large number of cycles until failure. This allows us to express the right-hand side in terms of an integral:

$$\lim_{\Delta K \to 0}\left[\Delta K \log(Q_\kappa(y))\right] = \int_0^{\infty}\log\left(1 - F_\kappa(K_0^+ - y - z)\right)\mathrm{d}z. \tag{54}$$

Therefore, for finite (and small) $\Delta K$ we have

$$Q_\kappa(y) \simeq \exp\left(\frac{1}{\Delta K}\int_0^{\infty}\log\left(1 - F_\kappa(K_0^+ - y - z)\right)\mathrm{d}z\right). \tag{55}$$

Substituting the above into (50) results in the pmf of the cycle of the last up-crossing of the ascending part of the envelope:

$$p_{n_a}(n_a) = \int_{-\infty}^{\infty} Q_\kappa(x\Delta K)\, W_\kappa(n_a\Delta K + x\Delta K)\,\Delta K\,\mathrm{d}x, \quad n_a = 1, \ldots, \tag{56}$$

where

$$W_\kappa(y) \triangleq \frac{f_\kappa(K_0^+ - y)}{1 - F_\kappa(K_0^+ - y)}. \tag{57}$$

#### 3.3.3 Damage due to up-crossings of the descending part of the envelope

Any intersection with the descending part of the coherent envelope will immediately cause material failure. As such, there can only be one such intersection. In order to quantify the statistics of this event we define the anticipation function $\psi$:

$$\psi(\sigma_n, n) \triangleq \eta(\sigma_n) - K_n^+, \qquad K_n^+ = K_{n_a}^+ - (n - n_a)\Delta K, \tag{58}$$

where $K_n^+$ is the material stiffness coefficient before cycle $n$, and $K_{n_a}^+$ is the stiffness right after the last ascending up-crossing. The material stiffness can be expressed in terms of the damage quotient (eq. (42)) and its maximum value $\phi_{n_a}$ as follows:

$$K_n^+ = K_0^+ - (\phi_{n_a} + n)\Delta K, \tag{59}$$

where the cdf of $\phi_{n_a}$ is given by eq. (52). The material failure time is given by the first zero up-crossing of the anticipation function:

$$N_f = \min\left\{n : \psi(\sigma_n, n) = \eta(\sigma_n) - K_0^+ + (\phi_{n_a} + n)\Delta K > 0\right\}. \tag{60}$$

We first compute the pdf of $\eta$. This is given by

$$f_\eta(a) = \frac{f_{\sigma^+}(\eta^{-1}(a))}{|\eta'(\eta^{-1}(a))|}. \tag{61}$$

Therefore, conditioning on $\xi = \phi_{n_a}\Delta K$, which expresses the maximum lost stiffness of the material due to an up-crossing of the ascending part of the envelope, and on $n_a$, the cycle when this up-crossing occurs, we will have

$$F_{\psi_n}(a \mid \xi, n_a) = 1 - F_\eta\left(a + K_0^+ - \xi - n_a\Delta K\right). \tag{62}$$

To this end, the probability of having a material failure at $n$ cycles is

$$p_{N_f}(n \mid \xi, n_a) = \left(1 - F_\eta(K_0^+ - \xi - n\Delta K)\right)\prod_{m=n_a+1}^{n-1} F_\eta\left(K_0^+ - \xi - m\Delta K\right), \quad n = n_a + 1, n_a + 2, \ldots \tag{63}$$

where $\xi$ follows the cdf in eqs. (52) and (55), while $n_a$ follows the pmf in eq. (50). We consider the logarithm of this equation:

$$\log p_{N_f}(n \mid \xi, n_a) = \log\left(1 - F_\eta(K_0^+ - \xi - n\Delta K)\right) + \sum_{m=n_a+1}^{n-1}\log\left(F_\eta(K_0^+ - \xi - m\Delta K)\right).$$

For the last term we set

$$\log V_\eta \triangleq \sum_{m=n_a+1}^{n-1}\log\left(F_\eta(K_0^+ - \xi - m\Delta K)\right). \tag{64}$$

We multiply and divide the right-hand side by $\Delta K$ and express it as an integral:

$$\log V_\eta \simeq \frac{1}{\Delta K}\int_{n_a\Delta K}^{n\Delta K}\log\left(F_\eta(K_0^+ - \xi - s)\right)\mathrm{d}s = \frac{1}{\Delta K}\int_{n_a\Delta K + \xi}^{n\Delta K + \xi}\log\left(F_\eta(K_0^+ - s)\right)\mathrm{d}s, \tag{65}$$

$$V_\eta(u, v) = \exp\left(\frac{1}{\Delta K}\int_u^v \log\left(F_\eta(K_0^+ - s)\right)\mathrm{d}s\right), \tag{66}$$

where the approximation is based on the assumption of small $\Delta K$ and a large number of cycles until failure. Going back to the pmf of $N_f$ we have

$$p_{N_f}(n \mid \xi, n_a) = \left(1 - F_\eta(K_0^+ - \xi - n\Delta K)\right)V_\eta(n_a\Delta K + \xi,\; n\Delta K + \xi), \quad n = n_a + 1, n_a + 2, \ldots \tag{67}$$

The special case where no up-crossings of the ascending part of the envelope occur corresponds to the magnitude of the maximum stiffness jump being zero, $\xi = 0$ and $n_a = 0$. In this case we will have

$$p_{N_f}(n \mid \xi = 0, n_a = 0) = \left(1 - F_\eta(K_0^+ - n\Delta K)\right)V_\eta(0, n\Delta K), \quad n = 1, 2, \ldots \tag{68}$$
Using eq. (50), assuming that the probable values of $n$ are much larger than the probable values of $n_a$ (so we do not have to formally condition on $n_a$), and taking $\Delta K$ small, we have

$$p_{N_f}(n \mid \xi) = \sum_{n_a=1}^{\infty} p_{N_f}(n \mid \xi, n_a)\, p(n_a) = \left(1 - F_\eta(K_0^+ - \xi - n\Delta K)\right)\int_{-\infty}^{\infty}\sum_{n_a=1}^{\infty} Q_\kappa(x\Delta K)\, V_\eta(n_a\Delta K + \xi,\; n\Delta K + \xi)\, W_\kappa(n_a\Delta K + x\Delta K)\,\Delta K\,\mathrm{d}x$$
$$= \left(1 - F_\eta(K_0^+ - \xi - n\Delta K)\right)\int_{-\infty}^{\infty}\int_0^{\infty} Q_\kappa(x\Delta K)\, V_\eta(\zeta + \xi,\; n\Delta K + \xi)\, W_\kappa(\zeta + x\Delta K)\,\mathrm{d}\zeta\,\mathrm{d}x$$
$$= \frac{1}{\Delta K}\left(1 - F_\eta(K_0^+ - \xi - n\Delta K)\right)\int_{-\infty}^{\infty}\int_0^{\infty} Q_\kappa(y)\, V_\eta(\zeta + \xi,\; n\Delta K + \xi)\, W_\kappa(\zeta + y)\,\mathrm{d}\zeta\,\mathrm{d}y.$$

Finally, we integrate over the variable $\xi$ after multiplying by the corresponding pdf $Q_\kappa'(\xi)$:

$$p_{N_f}(n) = \frac{1}{\Delta K}\int_{-\infty}^{\infty}\left(1 - F_\eta(K_0^+ - \xi - n\Delta K)\right)Q_\kappa'(\xi)\int_{-\infty}^{\infty}\int_0^{\infty} Q_\kappa(y)\, V_\eta(\zeta + \xi,\; n\Delta K + \xi)\, W_\kappa(\zeta + y)\,\mathrm{d}\zeta\,\mathrm{d}y\,\mathrm{d}\xi.$$

This expression can also be written in a more compact form:

$$p_{N_f}(n) = \frac{1}{\Delta K}\int_{-\infty}^{\infty}\int_0^{\infty}\left(1 - F_\eta(K_0^+ - \xi - n\Delta K)\right)V_\eta(\zeta + \xi,\; n\Delta K + \xi)\, Q_\kappa'(\xi)\, S_\kappa(\zeta)\,\mathrm{d}\zeta\,\mathrm{d}\xi, \tag{69}$$

where

$$S_\kappa(\zeta) \triangleq \int_{-\infty}^{\infty} Q_\kappa(y)\, W_\kappa(\zeta + y)\,\mathrm{d}y. \tag{70}$$

Note that the integrands in equations (69) and (70) have compact support. Expression (69), together with the functions $V_\eta$ (eq. (66)), $Q_\kappa$ (eq. (55)), $W_\kappa$ (eq. (57)) and $S_\kappa$ (eq. (70)), constitutes a full approximation of the pmf of the cycles until material failure. These functions are given in terms of the coherent envelope shape and the load peak statistics.

#### 3.3.4 Combined failure time

From equations (37) and (69), we have expressions for the distributions of the terminal failure cycle and of the failure time due to envelope up-crossings.
# Find the unit's digit in the product 2467^153*341^72

Math Expert — 26 Sep 2019

What is the unit's digit in the product $$2467^{153} * 341^{72}$$?

(A) 0
(B) 1
(C) 2
(D) 7
(E) 9

VP — 26 Sep 2019

Unit digit of $$2467^{153}∗341^{72}$$ is the same as the unit digit of $$7^{153}∗1^{72}$$ = $$7^{153}$$ = $$7^{4*38 + 1}$$ (since the cyclicity of 7 is 4) = $$7^1$$ = $$7$$

IMO Option D

Senior Manager — 26 Sep 2019

2467^153 x 341^72. Taking each of the terms separately and computing the unit digits correspondingly:

341^72: the unit digit of 341 is 1, and all powers of 1 end in 1, hence the unit digit of 341^72 = 1.

2467^153: the unit digit of 2467 is 7. The unit digits of the powers of 7 are as follows:
7^1 = 7
7^2 = 9
7^3 = 3
7^4 = 1
7^5 = 7
7^6 = 9
so we can see that the cyclicity of the powers of 7 is 4. Since 153 = 38*4 + 1, the unit digit of 7^153 = unit digit of 7^1 = 7.

Hence the unit digit of 2467^153 x 341^72 = 7 x 1 = 7.

Manager — 26 Sep 2019

Ans D - 7

2467^153∗341^72: the units digit of the product depends on the units digits of 2467^153 and 341^72. The units digit of 1^72 is 1 because 1's cyclicity is 1. The units digit of 2467^153 is 7. The cyclicity of 7 is 4:
7^1 = 7
7^2 = 9
7^3 = 3
7^4 = 1
7^5 = 7
So the product of these two numbers ends in 7*1 = 7.

Senior Manager — 26 Sep 2019

153 = 4k + 1, and 7^(4k+1) has last digit 7. 341^72 has last digit 1, so 7*1 = 7. OA: D

Senior Manager — 26 Sep 2019

Since the unit digit of 341 is 1, which always yields a unit digit of 1 when raised to a power, multiplying by $$2467^{153}$$ leaves a unit digit equal to that of $$7^{153}$$.
$$2467^{153} * 341^{72} = 2467^{153} * 1^{72}$$
$$= 7^{153}$$
$$= 7^{38*4 + 1}$$
$$= 7^{38*4} * 7^1$$
= 1 * 7
= 7

GMAT Club Legend — 27 Sep 2019

Cyclicity of 7: 7, 9, 3, 1. Unit digit of 2467^153 = 7 and of 341^72 = 1, so the answer IMO is D: 7.

Director — 27 Sep 2019

$$units: 2467^{153}∗341^{72}=7^{153}*1^{72}=7^{153}$$
$$cycles[7]:(7,9,3,1)=4$$
$$7^{153}…\frac{153}{4}=remainder[1]=1st.of=(7,9,3,1)=7$$

Manager — 27 Sep 2019

7^153 * 1^72; 153/4 leaves remainder 1 (and 72/4 leaves remainder 0). 7^1 * 1 = 7 * 1 = 7. Ans. D

Manager — 27 Sep 2019

Cyclicity of 7 is 4 and that of 1 is 1. Hence, the unit digit of 7^153 is 7 and of 1^72 is 1. The unit digit of the expression is 7. D is correct.

Senior Manager — 27 Sep 2019

A number with 7 in the unit's digit has powers cycling through 7, 9, 3, 1, so the cycle length is 4. 153 ÷ 4 = 38 remainder 1, so 2467^153 has 7 in its unit's digit. 7×1 = 7. Option D

Intern — 27 Sep 2019

Units of the first number: 7, so we need the cycle of 7: (7,4,1,7,4,1) — the cycle repeats every three numbers. 153/3 = 51, remainder 0, therefore the units digit of the first number is 1, since with no remainder we take the last number of the cycle. The units digit of the second number is 1, so to any power it stays 1. 1*1 = 1

Senior Manager — 28 Sep 2019

What is the unit's digit in the product $$2467^{153}∗341^{72}$$?

$$...7^{1}=...7$$
$$...7^{2}=...9$$
$$...7^{3}=...3$$
$$...7^{4}=...1$$
$$...7^{5}=...7$$
$$...7^{6}=...9$$
$$...7^{7}=...3$$
$$...7^{8}=...1$$

It repeats every four terms. So,
$$...7^{149}=...7$$
$$...7^{150}=...9$$
$$...7^{151}=...3$$
$$...7^{152}=...1$$
$$...7^{153}=...7$$

$$341^{72}=...1^{72}=...1$$ — the units digit of this term is always 1.

Units digit is 7*1 = 7.
e-GMAT Representative — 29 Sep 2019

Solution

Given: the expression 2467^153∗341^72.
To find: the units digit of 2467^153∗341^72.

Approach and working out:
• The units digit of 2467^153∗341^72 is the same as the units digit of 7^153∗1^72.
• Units digit of 7^153∗1^72 = (units digit of 7^153) * (units digit of 1^72) = (units digit of 7^153) * 1.
• Units digit of 7^153 = units digit of 7^(4*38 + 1) = units digit of 7^1 = 7.

Hence, the units digit of 7^153∗1^72 = 7 * 1 = 7. Thus, option D is the correct answer.
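The cyclicity arguments above are easy to sanity-check with modular exponentiation; Python's built-in three-argument pow works mod 10 directly:

```python
# Units digit of 2467^153 * 341^72: work mod 10 throughout
print(pow(2467, 153, 10) * pow(341, 72, 10) % 10)  # -> 7

# The cycle of 7's units digits that the solutions rely on
print([pow(7, k, 10) for k in range(1, 9)])        # [7, 9, 3, 1, 7, 9, 3, 1]
```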
# Khovanov's homology for tangles and cobordisms

```
@article{BarNatan2004KhovanovsHF,
  title={Khovanov's homology for tangles and cobordisms},
  author={Dror Bar-Natan},
  journal={Geometry \& Topology},
  year={2004},
  volume={9},
  pages={1443-1499}
}
```

D. Bar-Natan • Published 22 October 2004 • Mathematics • Geometry & Topology

We give a fresh introduction to the Khovanov Homology theory for knots and links, with special emphasis on its extension to tangles, cobordisms and 2-knots. By staying within a world of topological pictures a little longer than in other articles on the subject, the required extension becomes essentially tautological. And then a simple application of an appropriate functor (a "TQFT") to our pictures takes them to the familiar realm of complexes of (graded) vector spaces and ordinary homological…
# How to type \sim above another symbol x?

I want to put a "~" above x, just like \hat x but replacing \hat with \sim. How to do that?

## 1 Answer

I think you're after \tilde x.

– Or \widetilde if the character is (or characters are) long. – Andrew Stacey Jun 10 '11 at 10:43
– @Andrew Yes, good remark! – chl Jun 10 '11 at 10:44
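For reference, a minimal snippet contrasting the accents mentioned above (plus \overset from amsmath, if you literally want the \sim glyph placed above the symbol):

```latex
\documentclass{article}
\usepackage{amsmath} % for \overset
\begin{document}
$\tilde{x}$          % narrow tilde accent
$\widetilde{xyz}$    % wide tilde, stretches over longer arguments
$\overset{\sim}{x}$  % places the actual \sim symbol above x
\end{document}
```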
# Which primes are sums of two cubes?

Released Book Chapter

Author: Zagier, Don (Max Planck Institute for Mathematics, Max Planck Society)

Citation: Rodriguez Villegas, F., & Zagier, D. (1995). Which primes are sums of two cubes? In Number theory. Fourth conference of the Canadian Number Theory Association, July 2-8, 1994, Dalhousie University, Halifax, Nova Scotia, Canada (pp. 295-306). Providence, RI: American Mathematical Society.

Cite as: http://hdl.handle.net/21.11116/0000-0004-38FD-1

Abstract

Let $L(E_p, s)$ be the $L$-series of the elliptic curve $E_p: x^3 + y^3 = p z^3$, $p$ being a rational prime $p \equiv 1 \pmod 9$; let $c_p = \sqrt{3}\,\Gamma(1/3)^3 (2\pi)^{-1} p^{-1/3}$. Then $L(E_p, 1) = c_p S_p$ with $S_p \in \mathbb{Z}$. The authors give several formulae for $S_p$. They prove, in particular, that $S_p = \operatorname{tr} \alpha_p$ and $S_p = (\operatorname{tr} \beta_p)^2$ for some algebraic integers $\alpha_p, \beta_p$; thus $S_p$ is always a square, as expected. One of their formulae implies that $S_p = 0 \iff p \mid f_{n(p)}(0)$, where $f_n(t)$ is a certain sequence of polynomials defined by a simple recursion relation and $n(p) := 2(p-1)/3$. According to the Birch and Swinnerton-Dyer conjecture, $S_p = 0$ if and only if $E_p(\mathbb{Q}) \neq 0$; therefore the authors' formulae give a (conjectural) answer to the question posed in the title of their paper.
# Analytic regularity of Stokes flow on polygonal domains in countably weighted Sobolev spaces. (English) Zbl 1121.35098

The authors investigate the analytic regularity of Stokes flow on polygonal domains with piecewise analytic data. They introduce weighted Sobolev spaces related to Kondrat'ev's weighted spaces, and they establish a shift theorem near the corners.

##### MSC:

35Q30 Navier-Stokes equations
76D03 Existence, uniqueness, and regularity theory for incompressible viscous fluids
35A20 Analyticity in context of PDEs
35B65 Smoothness and regularity of solutions to PDEs
35D10 Regularity of generalized solutions of PDE (MSC2000)
76D07 Stokes and related (Oseen, etc.) flows
# Converse relation

In mathematics, the converse relation, or transpose, of a binary relation is the relation that occurs when the order of the elements is switched in the relation. For example, the converse of the relation 'child of' is the relation 'parent of'. In formal terms, if $X$ and $Y$ are sets and $L \subseteq X \times Y$ is a relation from $X$ to $Y$, then $L^{\operatorname{T}}$ is the relation defined so that $y L^{\operatorname{T}} x$ if and only if $x L y$. In set-builder notation,

$$L^{\operatorname{T}} = \{(y, x) \in Y \times X : (x, y) \in L\}.$$

The notation is analogous with that for an inverse function. Although many functions do not have an inverse, every relation does have a unique converse. The unary operation that maps a relation to the converse relation is an involution, so it induces the structure of a semigroup with involution on the binary relations on a set, or, more generally, induces a dagger category on the category of relations as detailed below. As a unary operation, taking the converse (sometimes called conversion or transposition) commutes with the order-related operations of the calculus of relations, that is, it commutes with union, intersection, and complement.

Since a relation may be represented by a logical matrix, and the logical matrix of the converse relation is the transpose of the original, the converse relation is also called the transpose relation.[1] It has also been called the opposite or dual of the original relation,[2] or the inverse of the original relation,[3][4][5] or the reciprocal $L^{\circ}$ of the relation $L$.[6]

Other notations for the converse relation include $L^{\operatorname{C}}$, $L^{-1}$, $\breve{L}$, $L^{\circ}$, or $L^{\vee}$.
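Since a finite relation is just a set of ordered pairs, the definition translates directly into code; a small sketch (the example pairs are illustrative):

```python
# A relation L ⊆ X × Y as a set of ordered pairs ("parent of")
L = {("alice", "bob"), ("alice", "carol"), ("dan", "bob")}

# Converse relation: swap every pair, per the set-builder definition above ("child of")
L_T = {(y, x) for (x, y) in L}
print(L_T)

# Taking the converse is an involution: (L^T)^T == L
assert {(y, x) for (x, y) in L_T} == L
```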
4557

Accelerated Multi-band Magnetic Resonance Fingerprinting Using Spiral in-out with additional kz Encoding and Modified Sliding Window Reconstruction

Di Cui1, Xiaoxi Liu1, Hing-Chiu Chang1, Queenie Chan2, and Edward S Hui1

1Diagnostic Radiology, The University of Hong Kong, Hong Kong, China, 2Philips Healthcare, Hong Kong, China

Synopsis

Multi-band Magnetic Resonance Fingerprinting can be achieved using UNFOLD-like acquisition and dictionary matching, without using parallel imaging methods. However, the MR parametric maps after dictionary matching in one slice suffer from artifacts due to the high frequency components of the other simultaneously acquired slices. In this work, a new acquisition strategy is proposed for multi-band acquisition, where a spiral-in-out trajectory is used to provide extra kz encoding. A modified sliding window reconstruction is also proposed to reduce the high frequency oscillations.

Purpose

Magnetic Resonance Fingerprinting (MRF) [1] is a novel approach to efficiently quantify multiple MR parameters, and multi-band (MB) acquisition has been implemented in MRF for further acceleration. UNFOLD [2] is an effective method to decouple the simultaneously acquired slices in multi-band MRF, given the different frequencies of their signal evolutions. A major problem in UNFOLD-based MB-MRF is that the desired low frequency component of the target slice is usually collapsed with high frequency artifacts from other slices due to undersampling and signal evolution. Here we propose spiral-in-out MB-MRF to alleviate this problem in two ways. First, an additional blip gradient is added to the spiral-in-out sequence, leading to better separation of frequency components using UNFOLD. Second, a modified sliding window [3] reconstruction is proposed to remove the high frequency artifacts before dictionary matching.

Methods

A FISP-MRF [4] sequence was used; the readout gradient of our proposed sequence is shown in Fig 1a and the associated spiral-in-out trajectory in Fig 1b. The flip angle (FA) and repetition time (TR) trains are shown in Figs 1c and 1d, respectively. The additional Gz blip (the green trapezoid in Fig 1a) is inserted between the spiral-in and spiral-out readout portions, thereby introducing a $\Delta k_{z}$ shift between the two readouts. The additional Gz blip is equivalent to a phase encoding step along kz. The encoding pattern of our proposed MRF acquisition is better visualized in the kz-t domain in Fig 2. Figs 2a and 2c show the interleaved acquisition pattern, which is similar to stack-of-spirals 3D MRF [6]. Figs 2b and 2d show two representative sampling patterns using the additional Gz blip in our proposed acquisition strategy. Note that the density should be compensated along kz before reconstruction or matching for the sampling strategy in Fig 2b. Given our proposed sampling strategy, the image series of the different slices are encoded with unique phase patterns that favor UNFOLD. An example for an MB factor of 3 is illustrated in Fig 3.

The dictionary matching process searches for the maximum inner product between the acquired MR signal and the dictionary: $\sum_{t=0}^N m_{i,j}(t) = \sum_{t=0}^N s_{i}(t)d_{j}(t)$, where $\textbf{s}_i$ and $\textbf{d}_j$ respectively indicate the signal evolution vector for the ith voxel and the jth dictionary vector, $\textbf{m}_{i,j}=\textbf{s}_i\cdot\textbf{d}_j$ (element-wise), $s_{i}(t)$, $d_{j}(t)$ and $m_{i,j}(t)$ are their $t$th elements, and N is the number of TRs.
If $\textbf{S}_{i}$, $\textbf{D}_{j}$ and $\textbf{M}_{i,j}$ are the Fourier transforms of $\textbf{s}_i$, $\textbf{d}_j$ and $\textbf{m}_{i,j}$, then $\textbf{M}_{i,j}=\textbf{S}_{i}\circledast\textbf{D}_{j}$, where $\circledast$ indicates convolution, so $\sum_{t=0}^N m_{i,j}(t) = M_{i,j}(0)=\sum_{\omega=0}^N S_{i}(\omega)D_{j}(-\omega)$. The target low frequency component of the signal is then selected via $D_{j}(\omega)$. Due to the high in-plane undersampling factor, the high frequency components of the signal are not negligible, and the low frequency component is collapsed with the high frequency components of the other slices. With the extra kz encoding, the frequencies of the different slices become more sparsely distributed, and thus have less effect on each other.

To further reduce the high frequency artifact, a modified sliding window reconstruction method was proposed. As shown in Fig 2e, sliding window gridding was performed for each kz so that the combined k-space data still followed the desired sampling pattern in kz-t, and in-plane artifacts and, more importantly, high frequency components of the signals were subsequently reduced.

All data were acquired with a 3T MRI scanner (Achieva TX, Philips Healthcare) with an 8-channel head coil. An in-house FISP-MRF sequence was used with imaging parameters: TE = 6 ms, TI = 20 ms, FOV = 300 x 300 mm2, image resolution = 1.17 x 1.17 x 5 mm3, acquisition matrix size = 256 x 256. The acquisition window is 8.4 ms, with a golden-angle rotation increment of 222.5° between dynamics, and TR varied between 12-14 ms.

Results and Discussion

Figs 4 and 5 show the parameter maps with MB = 3 and MB = 4, respectively. For 2D MRF, a spiral-in-out trajectory can reduce the error caused by off-resonance effects, especially with a long acquisition window. For MB-MRF, the use of our proposed spiral-in-out trajectory in conjunction with an additional Gz blip allows two different kz encodings within one TR, which helps unfold the slices in subsequent procedures. During the reconstruction of the dynamic images, a modified sliding window method was proposed to remove the high frequency artifacts from irrelevant slices. Our results showed that the proposed method improved the quality of UNFOLD-like MRF matching, thereby permitting a higher MB factor (Figs 4 and 5).

Conclusion

A new acquisition strategy and reconstruction method were proposed for MB-MRF, consisting of an additional Gz blip for kz encoding and a modified sliding window for improving the UNFOLD performance of MB-MRF.

References

[1] Ma, D., Gulani, V., Seiberlich, N., Liu, K., Sunshine, J. L., Duerk, J. L., & Griswold, M. A. (2013). Magnetic resonance fingerprinting. Nature, 495(7440), 187.

[2] Madore, B., Glover, G. H., & Pelc, N. J. (1999). Unaliasing by Fourier-encoding the overlaps using the temporal dimension (UNFOLD), applied to cardiac imaging and fMRI. Magnetic Resonance in Medicine, 42(5), 813-828.

[3] Cao, X., Liao, C., Wang, Z., Chen, Y., Ye, H., He, H., & Zhong, J. (2017). Robust sliding-window reconstruction for accelerating the acquisition of MR fingerprinting. Magnetic Resonance in Medicine, 78(4), 1579-1588.

[4] Jiang, Y., Ma, D., Seiberlich, N., Gulani, V., & Griswold, M. A. (2015). MR fingerprinting using fast imaging with steady state precession (FISP) with spiral readout. Magnetic Resonance in Medicine, 74(6), 1621-1631.

[5] Zahneisen, B., Poser, B. A., Ernst, T., & Stenger, V. A.
(2014). Three-dimensional Fourier encoding of simultaneously excited slices: generalized acquisition and reconstruction framework. Magnetic Resonance in Medicine, 71(6), 2071-2081.
[6] Ma, D., Jiang, Y., Chen, Y., McGivney, D., Mehta, B., Gulani, V., & Griswold, M. (2018). Fast 3D magnetic resonance fingerprinting for a whole-brain coverage. Magnetic Resonance in Medicine, 79(4), 2190-2197.

Figures
Fig 1. (a) Our proposed spiral-in-out readout for the MB-MRF sequence. The additional Gz blip (green) was introduced to provide extra kz encoding. (b) Spiral-in-out trajectory. (c) Flip angle and (d) TR trains.
Fig 2. Multi-band MRF acquisition strategy in the kz-t domain for an MB factor of 3 (a) without and (b) with extra kz encoding; for an MB factor of 4 (c) without and (d) with extra kz encoding. (e) Our proposed modified sliding window reconstruction method.
Fig 3. (a) MR signal evolution from MB-MRF with an MB factor of 3. (b) Simulated signal of the related dictionary entry. (c) The Fourier transform of the signal in b, and (d) the Fourier transform of the signal in a. (e)-(g) MR signal evolutions of the 3 simultaneously acquired slices in a. The shaded area indicates the low frequency components of the signal from one slice in e, which is collapsed with the high frequency components of the signals from the other two slices in f and g.
Fig 4. MR parametric maps from MB-MRF with an MB factor of 3 obtained from (a) the single-slice MRF result, (b) without and (c) with additional kz encoding, (d) with the modified sliding window, and (e) with both additional kz encoding and the modified sliding window.
Fig 5. MR parametric maps from MB-MRF with an MB factor of 4 obtained from (a) the single-slice MRF result, (b) without and (c) with additional kz encoding, (d) with the modified sliding window, and (e) with both additional kz encoding and the modified sliding window.
Proc. Intl. Soc. Mag. Reson. Med. 27 (2019) 4557
# Perl (programming language)

Paradigms: procedural, modular, partly object-oriented
Publishing year: 1987
Designer: Larry Wall
Developers: Larry Wall, the Perl Porters
Current version: 5.32.0 (June 20, 2020)
Typing: weak, dynamic, implicit
Influenced by: awk, BASIC-PLUS, C/C++, Smalltalk, Lisp, Pascal, Python, sed, Unix shell
Influenced: PHP, Ruby, Python, JavaScript, Windows PowerShell
Operating system: platform independent
www.perl.org

Perl [pɝːl] is a free, platform-independent and interpreted programming language (scripting language) that supports several programming paradigms. The linguist Larry Wall designed it in 1987 as a synthesis of C, awk, the Unix commands and other influences. Originally intended as a tool for processing and manipulating text files, particularly for system and network administration (e.g. evaluation of log files), Perl has also found widespread use in the development of web applications and in bioinformatics. Perl is also traditionally well represented in the financial world, especially in the processing of data streams from various news sources. The main goals are fast problem solving and the greatest possible freedom for programmers. The processing of text with the help of regular expressions and a large freedom of design are strengths of the language.

## History

### Origins

Larry Wall was employed as an administrator and programmer at Unisys, where from March 1987 he was involved in developing a secure network for the NSA under the name Blacker. He received several assignments to create tools for monitoring and remote maintenance of the emerging software. One of the main tasks was to generate clear reports from scattered log files. Since the existing languages and tools seemed too cumbersome to him for this, he gradually developed his own language, with the help of his then teammate Daniel Faigin and his brother-in-law Mark Biggar, to solve his tasks. He also drew on his knowledge and experience as a linguist and designed Perl to be close to human language habits. This is expressed in minimal requirements for beginners, a strong combinability of language elements, and a rich vocabulary that also allows commands whose meanings overlap. Wall sees in this a practitioner's need for freedom and intuitive expression.

In line with this practical approach, Perl borrowed its vocabulary and logical structures from the languages and tools used under Unix in the 1980s, which made learning easier, but also inverted the Unix philosophy. Unix and its system commands were compiled and mostly written in C. These commands were logical units and were meant to master exactly one task: "Do one thing and do it well". Interpreted shell scripts quickly and easily combined the commands, which could pass their results on to one another through pipes. Perl violates this philosophy by making these commands part of the programming language, combining C and shell and bypassing the existing commands and tools. This became necessary because shell scripts were unsuitable for complex tasks: their flow of control is very simple, they can store data only to a limited extent, and the pipes are bottlenecks for data exchange. On the other hand, they allow a much more compact programming style, since the use of one UNIX tool can replace many lines of C source code. In order to take advantage of both types of programming, Wall created a combination of C and tools like sed, awk, grep and sort.
He added features of the Bourne shell, to a lesser extent elements from Pascal and BASIC, as well as his own ideas. This fusion enabled short, powerful programs that could be written quickly and tested at any time without having to be compiled, which also accelerated development. Later, features of other languages such as Lisp, Smalltalk, Python and Ruby were "imported".

### Name

The name Perl refers to a quote from the Gospel of Matthew (Mt 13:46), in which Jesus describes the kingdom of heaven with the image of a merchant who wants to sell all his possessions in order to buy a precious pearl. Before the release, the name was changed from "Pearl" to "Perl", as there was already a programming language called PEARL. The backronyms Practical Extraction and Report Language and Pathologically Eclectic Rubbish Lister are also widely used and accepted by Larry Wall. The spelling "Perl" designates the language, whereas "perl" designates the program that interprets this language. Furthermore, the Perl community attaches great importance to the spelling "PERL" not being used, since it is not an acronym.

### Perl 1 to 4

Larry Wall was an employee of the Jet Propulsion Laboratory (JPL) at the time. On December 18, 1987, he published his program on Usenet as Perl 1.0, which at that point was a more powerful shell that could handle texts and files, control other programs, and output legible reports. Version 2.0 was released on June 5 of the following year with a completely new and expanded regex engine and a few other improvements. Perl 3 followed on October 18, 1989; it could handle binary data and also enabled network programming. The GPL was chosen as the new license. Almost unchanged, it was available as Perl 4 from March 21, 1991, now either under the GPL or the Artistic License developed by Larry Wall. The real reason for the new version, however, was the Camel Book, which appeared at the same time and was published as a reference for the current version, marked as version 4. Until then, the UNIX man pages available since Perl 1 were the only documentation. They offer a well-founded and extensive treatment of every topic, but not an easy introduction for Perl beginners. The book written by Randal L. Schwartz, Larry Wall and Tom Christiansen was meant to fill this gap. It was published by O'Reilly, which with this and other titles became known as a renowned specialist publisher for programmers. O'Reilly's Perl books became the authoritative publications, a status that was only put into perspective in the new millennium. The publisher also operated the most popular online magazine for Perl programming at perl.com and organized the largest Perl conference, TPC (The Perl Conference, today OSCON). O'Reilly benefited from Perl's growing popularity, and in return Tim O'Reilly paid his friend Larry Wall a fixed salary in the years that followed, so that he could devote himself to the further development of the language without further obligations or constraints. In 1993, when Perl reached version 4.036, Larry Wall stopped developing it and created Perl 5 from scratch.

### Perl 5

Perl 5.0 was released on October 18, 1994 and was the biggest advance in the language to date. With Plain Old Documentation, formatted documentation could now be inserted into the source text. From then on, the language could also be extended with separate modules, which led to the creation of the CPAN, one of the largest free software archives, in the following year.
This large archive of freely available modules eventually became an important reason for using Perl itself. Another important innovation was the introduction of references, which for the first time allowed the simple creation of composite data structures. With version 5 it also became possible to program object-oriented Perl. Larry Wall chose an unusual route and derived the syntax for it almost exclusively from existing language elements (packages, package functions and package variables, as well as the new references). Only the function `bless()` for creating an object and the arrow operator (`->`) for calling methods were added (the arrow operator is actually the dereferencing operator, which here dereferences a particular method of an object consisting of a reference blessed into the class). XS was also created, an interface description language that enables Perl programs to be extended with other languages, or any software or hardware to be addressed from Perl programs.

Since the release of Perl 5, Larry Wall has hardly participated in the development of the language. This has been done voluntarily by Perl-enthusiastic programmers, the so-called Perl 5 Porters, who communicate via the p5p mailing list founded in May 1994, but also increasingly decide on bug fixes and new language features via their own bug and request tracker (RT). A so-called pumpking takes over the management of each release. The term Perl Porter comes from the original task of the p5p list, coordinating the porting of Perl to other operating systems.

In the years after version 5.0, not only was Perl ported to Macintosh and Windows, but the numbering of the versions also shifted. Since the syntax did not change significantly, the 5 was kept and the first decimal place was increased for larger milestones, with additional digits used to count the intermediate steps. Since Perl could only handle version numbers containing several dots from 5.6 onwards, versions were written as, for example, Perl 5.001 or Perl 5.001012. From 5.6 on, the version scheme used by Linux at the time was also adopted, in which even numbers indicate stable user versions and odd numbers indicate developer versions in which new features are incorporated. Series of user versions (e.g. 5.8.x) are kept binary compatible with each other, which means that a binary module compiled for Perl 5.8.7 also works with 5.8.8, but not necessarily with 5.10 or 5.6.1.

#### Perl 5.6

This version, released on March 22, 2000, brought some new experimental capabilities that only matured later, such as Unicode and UTF-8, threads, and cloning. 64-bit processors could now also be used. Linguistically, this series, headed by Gurusamy Sarathy, mainly added lexically declared global variables (with `our`) and a vector notation that allows the comparison of multi-digit version numbers, as well as the special variables `@-` and `@+`.

#### Perl 5.8

The 5.8.x series, first released on July 18, 2002 and looked after by Nicholas Clark, mainly resolved the problems with Unicode and threads, but input/output (IO), signals and numerical accuracy were also decisively improved.

#### From Perl 5.10

In addition to reduced memory consumption and a renewed, now interchangeable regex engine, this version, released on December 18, 2007 under the leadership of Rafaël Garcia-Suarez, brought above all innovations drawn from the design of Perl 6, whose use must be enabled individually or collectively with `use feature ':5.10';` or, shorter, `use v5.10;`. From this version on, this applies to all features that could break compatibility.
These include `say`, `given` and `when` (analogous to the `switch` statement in C), the smartmatch operator (`~~`), the defined-or operator (`//`) and `state` variables, which simplify the creation of closures. Other noteworthy innovations include the relocatable installation path, stackable file test operators, definable lexical pragmas, optional C3 serialization of object inheritance, and field hashes (for "inside-out" objects). The regex engine now works iteratively instead of recursively, which enables recursive expressions. Complex search patterns can now be formulated in a more understandable and less error-prone manner using named captures. The special variables `$#` and `$*` and the interpreter interfaces perlcc and JPL were removed. The following year, the source was switched from Perforce to Git, which made it much easier to develop and release new versions.

#### Perl 5.12

This version of April 12, 2010, directed by Jesse Vincent, contains far fewer major and visible changes than 5.10. `use v5.12;` implies `use strict;` and `use feature 'unicode_strings';`, which makes all commands treat strings as Unicode. Among the technical improvements, the updated Unicode support (5.2), DTrace support and the safe handling of dates beyond 2038 are particularly noteworthy; suidperl was removed. The ellipsis operator (yada yada) and the regex escape sequence `\N` were adopted from Perl 6, and module versions can now be managed by `package` and `use`. Another new feature is the ability to define your own keywords using Perl routines; however, this is marked as experimental and subject to change. For better planning and collaboration with distributions, a developer version now appears on the 20th of every month, a small user version every 3 months, and a large one every year.

#### Perl 5.14

As of May 14, 2011, new modifiers and control characters make it easier to use Unicode, which was updated to 6.0. Built-ins for lists and hashes automatically dereference their arguments (autoderef), and large parts of the documentation were revised or rewritten. Support for IPv6 was also improved, and binding multithreaded libraries was made easier.

#### Perl 5.16

The version released on May 20, 2012 contains numerous syntactic refinements, partly updated documentation, and the change to Unicode 6.1. Jesse Vincent and, from November 2011, Ricardo Signes were in charge. Thanks to a newly opened donation fund of the Perl Foundation, two long-time developers were engaged to take on thankless tasks and to simplify the build process. The only fundamentally new functionality is the `__SUB__` token, a reference to the current routine, which can be activated with `use feature 'current_sub';` or `use v5.16;`.

#### Perl 5.18

The functionalities released on May 18, 2013, lexical subroutines (lexical_subs) and character classes generated with set operations within regular expressions, are both experimental. Such features, which also include the lexical context variable (lexical_topic) and the smartmatch operator, now generate warnings, which can be switched off with `no warnings 'experimental::feature_name';` or `no warnings 'experimental';`. Hashes were consistently randomized in order to better protect programs against DoS attacks.

#### Perl 5.20

Also under the leadership of Ricardo Signes, the experimental functionalities of subroutine signatures (signatures) and an alternative postfix syntax for dereferencing (postderef) arrived on May 27, 2014.
The autoderef introduced in 5.14 was downgraded to experimental. Unicode 6.3 is supported, and with drand48 Perl now has its own good, platform-independent random number generator. String and array sizes are now 64-bit values.

#### Perl 5.22

Released on June 1, 2015, this version brought the double diamond operator (`<<>>`), bitwise string operators (`&. |. ^. ~.`), a 'strict' mode for regular expressions with `use re 'strict';` (re_strict), Unicode 7.0, aliasing of references (refaliasing), and constant routines (const_attr), which always return the constant value determined at first compilation. All functionalities mentioned (names in brackets) are experimental for the time being.

#### Perl 5.24

This version brought speedups for blocks and numerical operations as well as Unicode 8.0 on May 9, 2016. The postderef and postderef_qq features were accepted; autoderef and lexical_topic were removed.

#### Perl 5.26

Under the direction of SawyerX, the regex option xx, indentable here-documents and Unicode 9.0 were introduced on May 30, 2017. The lexical_subs feature was adopted, and `'.'` (the current directory) was removed from `@INC` (the list of search paths for modules) by default for security reasons.

#### Perl 5.28

Released on June 22, 2018. In addition to Unicode 10.0, Perl received long and short versions of alpha assertions. These are aliases for special regex groups with meaningful names: for example, instead of `(?=...)`, now also `(*positive_lookahead:...)` or `(*pla:...)`. `(*script_run:...)` or `(*sr:...)` was introduced in order to recognize uniformly encoded text, which helps avoid attacks through manipulated input. Three critical security holes were closed, multiple dereferences and string concatenations were accelerated, and the operators `&. |. ^. ~.` are no longer experimental. In addition, it was decided to keep a record in the document perldeprecation of when each feature will be removed (with two versions of advance warning).

#### Perl 5.30

Updated on May 22, 2019 to Unicode 12.1, introduced Unicode wildcard properties, and allowed a lookbehind to be limited in length. Removed were `$[`, `$*`, `$#` and `File::Glob::glob`, as well as variable declarations in trailing conditional expressions.

#### Perl 5.32

Introduced chained comparison operators (`$d < $e <= $f`), the isa operator (which checks class membership) and Unicode 13 on June 20, 2020. `\p{name=...}` allows expressions to be interpolated into Unicode names within a regex.

#### Current versions

Even though the latest user version is 5.32.0, the version series 5.30.x is currently still being maintained (the current version is 5.30.3). Versions 5.28.3, 5.26.3, 5.24.4, 5.22.4, 5.20.3, 5.18.2, 5.16.3, 5.14.4, 5.12.5, 5.10.1 and 5.8.9 are the end of their series; security-relevant improvements are provided for up to 3 years after the release of a version. Core modules usually maintain compatibility back to 5.6, important CPAN modules usually back to 5.8.3. Current changes are being made in the developer branch 5.31, which is not intended for general use. The next version of Perl will be Perl 7, not Perl 6. Perl 6, which was renamed Raku in 2019, is a sister language whose interpreter and surrounding infrastructure have been completely redesigned; see Raku.

## Features

### Principles

Perl was developed for practical use and therefore focuses on quick and easy programmability, completeness and adaptability. This philosophy is expressed in the following slogans and phrases, most of which come from Larry Wall.
#### Multiple ways

The best-known and most fundamental Perl motto is "There is more than one way to do it", usually abbreviated to TIMTOWTDI (rarely TMTOWTDI) or, as an English contraction, "Tim Toady". In contrast to languages like Python, Perl makes fewer prescriptions and intentionally offers several formulations and solutions for every problem (syntactic sugar). For example, logical operators can be written as `||` and `&&` (as in C) or, with nuances of meaning, as `or` and `and` (as in Pascal); numerous commands with overlapping functionality, such as `map` and `for`, also allow different formulations of the same situation. Some commands, like the diamond operator (`<>`), offer abbreviated notations for already existing functionality (here `<STDIN>` would be roughly equivalent, with slight differences, but much longer to write). This diversity is also visible in the CPAN, where several modules often fulfill a very similar purpose, or one that could also be implemented ad hoc (if more laboriously), for example `Getopt::Long`. Another catchphrase, which can be seen as an extension of TIMTOWTDI, describes Perl as the first postmodern programming language. This means that Perl combines different paradigms, and the user is free to combine structured, object-oriented, functional and imperative language features.

#### Simple and possible

The other important motto is "Perl makes easy jobs easy and hard jobs possible". First of all, this includes the goal of simplifying common tasks as far as possible with short, ready-made solutions. For example, `-e filename` checks the existence of a file. For Perl, however, keeping simple tasks simple also means not requiring any preparatory programming instructions, such as declaring variables or writing a class. Second, Perl tries to be complete and to provide at least the basics for every problem so that a solution is possible. The third goal, not letting the first two goals collide, becomes ever more important with the growing language scope of Perl 6, where, following the idea of Huffman coding, the notations of the most frequently used commands are kept as short as possible without breaking the logic of the notation of similar commands.

#### Context sensitive

In Perl there are commands that have different meanings depending on the context in which they are used. Data structures like the array are context-sensitive in this way. If it is assigned to another array, its contents are transferred; if the receiver is a single value (a scalar), it receives the length of the array (e.g. `my $count = @array;`).

### Technical features

The Perl interpreter itself is a program written in C that can be compiled on almost any operating system. However, precompiled versions for rarely used systems such as BeOS or OS/2 are not always up to date. The source code is around 50 MB and also contains Perl scripts that take over the function of Makefiles, as well as the test suite. The compiled program is typically around 850 KB in size, but this can vary depending on the operating system, compiler and libraries used.

Perl scripts are stored in text files with any line separator. When a script is started, it is read in by the Perl interpreter and converted into a parse tree, which in turn is converted into bytecode that is then executed.
The parser integrated into the interpreter is an adapted version of GNU Bison. Strictly speaking, Perl is not an interpreted language, because a Perl program is compiled before it is executed. As a result, unlike in purely interpreted languages, a program with syntax errors will not start.

### Distribution

In the beginning, Perl was a UNIX tool specially designed for processing text files, controlling other programs and producing reports. To this day, it is used on all common operating systems, and not just by system administrators. Perl also gained the reputation of a glue language, because incompatible software can be linked with the help of relatively quickly written Perl scripts. To this day, Perl is part of the basic configuration of all POSIX-compatible and Unix-like systems.

With the spread of the World Wide Web, Perl was increasingly used to connect web servers, databases and other programs and data, and to output the results in the form of HTML pages. The Perl interpreter is addressed by the web server via CGI or FastCGI, or is embedded directly in the server (mod_perl in Apache, ActiveState PerlEx in Microsoft IIS). Even though PHP has become more popular for this server-side script programming, Perl is still used by many large and small sites and Internet services such as Amazon.com, IMDb.com, slashdot.org, Movable Type, LiveJournal and XING. Since Perl scripts work barely noticed in many important places, Perl has also jokingly been called the duct tape that holds the Internet together. Frameworks such as Mason, Catalyst, Jifty, Mojolicious and Dancer were also created in Perl; they allow complex and easily modifiable websites to be developed very quickly. Wiki software is also often written in Perl, e.g. Socialtext (which is based on Mason), Kwiki, TWiki, Foswiki, ProWiki and UseMod. Popular ticket systems with web interfaces such as Bugzilla or RT are also written in Perl.

However, web applications are still only one of Perl's many uses. Important Perl programs in the e-mail area are SpamAssassin (a spam filter), PopFile and Open WebMail. Perl is used for system administration, for example in debconf, a part of the package management of the Debian operating system. Other main fields of application are data munging and bioinformatics, where Perl has been the most frequently used language since 1995 and is still important. The reasons for this are again the ability to process information in text form, and the flexibility and openness of the language, which allow the international research community to work together despite the differing standards of institutes. BioPerl is the most important collection of freely available tools here, focusing primarily on genome sequence analysis. Perl played an important role in the Human Genome Project.

Even desktop applications and games like Frozen Bubble can be written in Perl. Today's computers are fast enough to run these programs fluently.

Areas in which scripting languages such as Perl cannot be used sensibly are, on the one hand, applications with high requirements for hardware proximity or speed, such as drivers or codecs. On the other hand, they should not be used in highly safety-critical areas (e.g. aircraft control), since due to the lax syntax checking (e.g. a missing or very weak type system) many errors only appear at runtime, and verification is generally not possible. Perl has been ported to over 100 operating systems.
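As a small illustration of the log-file reporting role described above, here is a minimal sketch; the log format and the matching pattern are assumptions for the example, not taken from the article:

```
#!/usr/bin/perl
use strict;
use warnings;

# Count HTTP status codes in a web server access log and print a report.
# Assumes a common-log-style line: ... "GET /index.html HTTP/1.1" 200 512
my %count;
while (my $line = <>) {
    my ($status) = $line =~ /"\s(\d{3})\s/;   # status code follows the quoted request
    $count{$status}++ if defined $status;
}
printf "%s: %d\n", $_, $count{$_} for sort keys %count;
```

Run as, for example, `perl report.pl access.log`; the diamond operator `<>` reads the files named on the command line, or standard input if none are given.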
### Perl and other programming languages

For tasks that are difficult or slow to solve in Perl, Perl offers the Inline module, an interface through which program parts in other languages can be integrated into a Perl program. Supported languages include, among others, C, C++, assembler, Java, Python, Ruby, Fortran and Octave. Areas of application include:

• computationally intensive formulas (C, assembler),
• solving complex problems with existing systems (Octave, Fortran libraries) and
• merging applications in different languages (the "glue function" of Perl).

Using Inline is relatively easy and well documented. For compiled program parts, Inline keeps a record of the version status using an MD5 checksum, which avoids repeated compilation of the same code. With Inline, the transfer of the parameters and the return of the results require some effort. For short calculations, this effort outweighs the gain in speed. For example, if the Mandelbrot set is computed by implementing the formula $z \mapsto z^2 + c$ via Inline as a C function, but leaving the iteration in Perl, program execution actually slows down compared to a pure Perl implementation. If, on the other hand, the iteration loop is also moved to C, the performance increases significantly.

### Logos

Tim O'Reilly was one of Perl's key supporters for many years, and his publishing house holds the rights to perhaps the most important Perl logo: the camel. A dromedary serves as Perl's mascot. It first adorned the cover of the reference work Programming Perl, also known as the Camel Book. Its publisher, Tim O'Reilly, jokingly gave as a reason in an interview: Perl is ugly and can go without water for long stretches. The dromedary can be seen on the Programming Republic of Perl emblem, which is often viewed as the official Perl logo and which O'Reilly allows to be used for non-commercial purposes. Other logos used in connection with Perl, besides pearls, are the sliced onion (the trademark of the Perl Foundation) and the Komodo dragon, which adorns the widespread Perl distribution ActivePerl from ActiveState.

## Syntax

### Free format

Perl allows conditionally free-format source code. This means that indentation and additional spaces are syntactically irrelevant, and line breaks can be inserted as desired. For this, commands within a block must be separated by semicolons. Some language elements, such as formats, heredocs and ordinary regular expressions, are not free-form.

### Variables

A characteristic of Perl is that variables are identified by a prefix (also called a sigil), which indicates their data type. Here are some examples:

• `$` for scalars: `$scalar`
• `@` for arrays: `@array`
• `%` for hashes (associative arrays): `%hash`
• `&` for functions (often optional): `&function`
• `*` for typeglobs: `*all`

File handles, directory handles and formats have no prefix, but are also independent data types. Each data type has its own namespace in Perl.

Basic data types in Perl are scalar variables, arrays and hashes (associative arrays).

• Scalars are typeless variables for single values; they can contain strings, numbers (integer/floating point) or references to other data or functions. Strings and numbers are converted into one another automatically and transparently when required, a central feature of Perl.
• Arrays combine several scalars under one variable name. Array entries have an index. The count starts at 0, unless otherwise specified.
• Hashes also combine several scalars, but here individual values are identified and addressed not by numerical indices but with the help of associated keys. Any character string, or anything that can be converted into one, can be used as a key.

Hashes and arrays can be assigned to one another, whereby hashes are viewed as lists of key/value pairs. Data of different types can be combined as desired into new data structures using references; for example, hashes are conceivable that contain individual scalars alongside (references to) arrays.

Package variables are created automatically the first time they are used. In modern usage, lexically scoped variables, which must be declared with `my`, are used far more frequently. `our` makes a variable visible throughout the program, and `undef $variable` releases the specified variable again.

### Control structures

The basic control structures hardly differ from those in C, Java and JavaScript.

#### Conditional execution

`if` works as known from C; `unless (<condition>)`, a special feature of Perl, is a notation for `if (!(<condition>))`. A case or switch statement (`given`/`when`) has only been available since Perl 5.10; before that, this structure had to be imitated with `if … elsif … else`. However, `given` sets the context variable (`$_`) as `for` does, and `when` applies smartmatch (`~~`) to it, which makes this construct much more versatile than traditional case statements. The optional `default` corresponds to an `else`. The short-circuit logical operators also allow conditional execution: with `or` (or `||`) the second expression is evaluated if the result of the first is not a true value; `and` (or `&&`) works analogously.

```
if (<condition>) {<statements>}
[elsif (<condition>) {<statements>}]
[else {<statements>}]

unless (<condition>) {<statements>}
[else {<statements>}]

given (<variable>) {
    [when (<value>) {<statements>}]
    [default {<statements>}]
}

<condition> ? <statement1> : <statement2>;
<expression1> || <expression2>;
<expression1> && <expression2>;
```

#### Loops

As in C, `while` iterates as long as its condition is true, `until` iterates until its condition becomes true, and `for` (in the C-style variant) works as in C; `foreach` iterates over a list. In Perl 5, `for` and `foreach` are interchangeable.

```
[label:] while (<condition>) {<statements>} [continue {<statements>}]
[label:] until (<condition>) {<statements>} [continue {<statements>}]
[label:] for ([<initialization>]; [<condition>]; [<update>]) {<statements>} [continue {<statements>}]
[label:] for[each] [[my] $element] (<list>) {<statements>} [continue {<statements>}]
```

`last` immediately exits the loop, `redo` repeats the current iteration, and `next` jumps to the `continue` block before continuing with the next iteration. These commands can be followed by a label identifier which, in the case of nested structures, determines which loop the command refers to.

```
do {<statements>} while <condition>;  # special case: in this form,
do {<statements>} until <condition>;  # at least one execution occurs
```

#### Trailing control structures

The control structures listed above apply to a block of several statements. For single statements, an abbreviated, trailing notation can also be chosen, which makes the code easier for (English-speaking) readers to understand through its natural-language word order.
```
<statement> if <condition>;
<statement> unless <condition>;
<statement> for <list>;
<statement> while <condition>;
<statement> until <condition>;
```

### Regular expressions

Since its beginnings, regular expressions (regexes) have been a special feature of Perl, since until then only specialized languages like Snobol and awk had similar capabilities. Through its widespread use, Perl set an unofficial standard, which was taken up by the PCRE library, which is independent of Perl and partially differs from it, and which is now used by several important languages and projects. Starting with version 5.0, Perl expanded its regex capabilities with many features, such as backreferences. Regular expressions can also be used much more directly in Perl than, for example, in Java, via the `=~` operator, as they are a core part of the language and not a library that has to be brought in. The actual regular expression is delimited by forward slashes. Because slashes often appear within regular expressions, many other characters can be used for delimitation instead. This improves readability, because characters can be chosen that stand out from the content of the regular expression.

Perl knows two regular expression commands, whose behavior can be changed with many trailing options.

• The `m` command stands for match. The `m` itself can be omitted when the standard regular expression delimiters, namely slashes, are used. The following expression searches the content of the variable `$var`. In list context, the search with the `g` option activated returns all matches; without it, it returns the recognized subexpressions. In scalar context, the expression returns a true value if the search expression was found. `c` (together with `g`) keeps the search position after a failed match, `i` ignores upper and lower case, `o` interpolates variables only once, `m` treats the string as multi-line, and `s` treats it as single-line. The `x` option makes it possible to spread the search expression over several lines for better readability and to annotate it with comments.

```
$var =~ [m]/<search pattern>/[g[c]][i][m][o][s][x];
```

• The `s` command stands for substitute. It replaces the part of the given text matched by the search pattern with the replacement expression.

```
$var =~ s/<search pattern>/<replacement>/[e][g][i][m][o][s][x];
```

After successfully applying a regular expression, the following special variables are available:

• `$&` - the matched string
• `` $` `` - the string before the matched string
• `$'` - the string after the matched string
• `$1` .. `$n` - the results of the parenthesized subexpressions
• `$+` - the last matched subexpression
• `@-` - start offsets of the match and sub-matches
• `@+` - the associated end offsets

The `tr` operator, often mentioned in the same breath as `m//` and `s///`, has only the notation in common with them. It is modeled on the UNIX command `tr`, which replaces individual characters. `tr` can also synonymously be written as `y`.

```
$var =~ tr/<search characters>/<replacement characters>/[c][d][s];
```

In addition to these two, the command `split`, which divides a character string using a separator that can itself be a regular expression, deserves mention.
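A brief sketch tying together the forms just described; the data and variable names are invented for the example, and the `s///` form uses braces as alternative delimiters, as discussed above:

```
use strict;
use warnings;

my $var = "date: 2020-06-20";

# m// in list context returns the captured subexpressions.
if (my ($y, $m, $d) = $var =~ m/(\d{4})-(\d{2})-(\d{2})/) {
    print "year $y, month $m, day $d\n";
}

# s/// replaces the matched part; with /g it would replace every occurrence.
(my $us = $var) =~ s{(\d{4})-(\d{2})-(\d{2})}{$2/$3/$1};
print "$us\n";    # prints "date: 06/20/2020"

# split divides a string on a separator, which may itself be a regex.
my @fields = split /:\s*/, $var;
print "$fields[0] => $fields[1]\n";    # prints "date => 2020-06-20"
```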
### Quoting and Interpolation

Quoting operators:

• `q` - quote an uninterpolated string (equivalent to '')
• `qq` - quote an interpolated string (equivalent to "")
• `qw` - quote words, a list of strings separated by whitespace
• `qr` - quote a regex
• `qx` - quote an external command to be executed

Alternative quoting and variable interpolation lead to particularly easy-to-read code. An example to clarify:

• String concatenation and quote characters in the text make the code difficult to read.

```
$text = 'He\'s my friend ' . $name . ' from ' . $town . '.'
      . ' ' . $name . ' has worked in company "' . $company
      . '" for ' . $years . ' years.';
```

• Interpolating the variables in the string makes the result recognizable, but the escapes (\") disturb the flow of the text.

```
$text = "He's my friend $name from $town. $name has worked in company \"$company\" for $years years.";
```

• Exchanging the quote character makes the escapes superfluous, and the code is now easy to read. `qq` initiates an interpolated string; the character that follows it (here `{`, paired with `}`) serves as the delimiter.

```
$text = qq{He's my friend $name from $town. $name has worked in company "$company" for $years years.};
```

## Criticism

The most common criticism of Perl is its poor readability, and any non-trivial program is read far more often than it is written. In fact, Perl offers an above-average amount of freedom, which can lead to illegible code (see Disciplines). On the other hand, the same freedom also makes it possible to program close to the logical structure of the problem or to human understanding. The freedom to follow personal preferences, valued by Perl programmers, must be restricted by self-imposed rules in projects that are developed by several programmers or over long periods of time, in order to avoid later problems. This requires additional communication effort or the use of software such as Perl::Critic.

Some parts of the syntax, such as object orientation and signatures, are simple and very powerful, but are often perceived as outdated compared to comparable languages such as Python or Ruby, and they demand additional typing and thinking in standardized approaches, especially from Perl beginners. These problems are meant to be fixed in Perl 6, or can be avoided in Perl 5 with additional modules. With Moose there is a very modern and extensive object system that leans heavily on that of Perl 6; Moose is now considered the de facto standard for object-oriented programming in Perl. Signatures were introduced in 5.20 but are still classified as experimental.

Perl has also been accused of violating the UNIX philosophy; for more information, see the section on its origins.

On the occasion of Perl's 30th birthday in December 2017, iX wrote that the language had fallen behind the successes of Java, PHP, Ruby and Python and could no longer catch up. Perl's reputation was "ruined": it is currently considered a "'write once, never read again' language, fed by years of 'obfuscated is cool' culture and by propagating the worst aspects of Perl", and is "now a niche language, pursued by enthusiasts in their spare time."

Strong criticism has also been leveled at Perl 6, which set itself overly ambitious goals and after many years still showed no visible results, instead paralyzing the future of Perl 5. From the beginning, Perl 6 was proclaimed a long-term project, based exclusively on volunteer work that could not always be planned, and with little financial support.
Its concrete goals only emerged in the course of development, and there were clear problems with communication and external presentation. However, since Perl 5.10, significant innovations have flowed from Perl 6 back into Perl 5.

## Perl culture and fun

### Community

As with other free software projects, there are special social bonds between many developers and users of the language, and a culture of its own has developed from them. Perl culture is characterized by openness, hospitality and helpfulness, but also by individualism, playfulness and humor. In the beginning, Larry Wall was certainly a role model for this, since he already held a prominent position in UNIX development circles through other projects such as rn and patch when Perl was released; today, Randal L. Schwartz, Damian Conway, Audrey Tang, Brian Ingerson and Adam Kennedy are also among the leading figures who receive a lot of attention through their work within the "scene". In contrast to commercial programming languages, almost all activities can be traced back to personal motivation. Accordingly, the Perl Foundation is a pure volunteer organization that sees itself as the hub of a self-governing community and uses the donated money for influential projects and people, the organization of developer conferences, and the operation of the most important Perl-related websites.

### Meetings, workshops and conferences

Local user groups, which usually hold casual meetings once or twice a month at which talks may also be given, call themselves Perl Mongers and can be found in over 200 major cities around the world. The annual workshops are larger, much more tightly organized and mostly country-specific; the well-established German Perl Workshop was the first of them. Workshops aim to bring committed developers together locally in a setting that is as affordable as possible. The larger Yet Another Perl Conferences (YAPC), which are held for the regions of North America, Brazil, Europe, Asia, Russia and Israel, have a similar goal. The largest, but also the most expensive, is The Perl Conference (TPC) organized by O'Reilly in the USA, which is now part of OSCON. Since around 2005, hackathons have also been held for committed contributors, mostly on the topics of quality assurance and Perl 6.

### Disciplines

Many of Perl's language features invite creative play with code. This has led to various intellectual, sometimes humorous, sometimes bizarre competitions and traditions around the Perl programming language.

Golf: Golf is a sport for programmers in which the shortest program (in ASCII characters) that completely fulfills a given task wins. Since Perl knows many, sometimes tricky, abbreviations and shortcuts for common techniques, this is a particularly popular discipline among Perl programmers.

Poetry: Since Perl contains many elements of the English language, there are regular competitions in which the best examples of Perl poetry are honored. In addition to the free form, whose only subject is Perl, attempts are made here to write poems that the interpreter executes without warnings or error messages. There is also a Perl haiku competition dedicated to this Japanese form of poetry.

Obfuscation: The discipline of obfuscation is also famous and notorious; there is an annual competition for it (the "Obfuscated Perl Contest"), most comparable to the International Obfuscated C Code Contest, which Larry Wall himself won twice.
The aim here is to disguise the function of a program in an unusual and creative way. This is particularly easy in Perl because there are abbreviations for almost everything, the language itself is very dynamic, and many things happen automatically depending on the context, which is often referred to as "Perl magic". An example from Mark Jason Dominus, who won second prize in the 5th Annual Obfuscated Perl Contest in 2000 (this program outputs the text "Just another Perl / Unix hacker"):

```
@P=split//,".URRUU\c8R";@d=split//,"\nrekcah xinU / lreP rehtona tsuJ";sub p{
@p{"r$p","u$p"}=(P,P);pipe"r$p","u$p";++$p;($q*=2)+=$f=!fork;map{$P=$P[$f^ord
($p{$_})&6];$p{$_}=/ ^$P/ix?$P:close$_}keys%p}p;p;p;p;p;map{$p{$_}=~/^[P.]/&&
close$_}%p;wait until$?;map{/^r/&&<$_>}%p;$_=$d[$q];sleep rand(2)if/\S/;print
```

JAPH: A kind of sub-category of obfuscation is the JAPH discipline, publicly started by Randal L. Schwartz. These are signatures containing small Perl programs which usually output nothing more than the name of the author or a message, in as incomprehensible a way as possible. The letters JAPH are the initials of Schwartz's signature Just Another Perl Hacker.

Perligata: The Perl module Lingua::Romana::Perligata by Damian Conway is probably one of the most bizarre modules of all: it enables the user to write Perl entirely in Latin. As in Latin, word order is (largely) irrelevant to the meaning of an expression; instead, the relationships between individual words are established through their inflection. From variables to references and multi-dimensional arrays, everything is covered in this new language definition. Almost all special characters have been removed from the language; variables with the same name but different structure (e.g. $next and @next) are declined in order to address the corresponding variable. Some sample code:

```
insertum stringo unum tum duo excerpemento da.  # equivalent to: substr($string,1,2) = $insert;
clavis hashus nominamentum da.                  # equivalent to: @keys = keys %hash;
```

"Language modules" for Klingon, Borg and Leetspeak arose from a similar impulse. Such modules are a good example of the amount of time that many people devote to Perl; in this sense, Perl can definitely be called a hobby.

Acme: With his well-known module Acme, which does nothing more than certify to the user that his program has reached the highest level of perfection, Brian Ingerson laid the foundation for a CPAN category of modules that deliberately have no productive use, are counterproductive, or pretend to offer a function that is impossible to achieve, and are to be understood more as jokes. This game of bizarre ideas includes impressive ASCII art, and modules that make the source code invisible (Acme::Bleach) or manipulate it in some other humorous way, for example by peppering it with typical language errors of President Bush, or by randomly deleting methods in order to simulate the presence of a supposed thieving magpie.

### Mottos and quotes

Perl programmers see camels of all kinds as mascots; the London Perl Mongers even adopted one from the London Zoo. There are many well-known mottos and quotes that deal with Perl itself or the possibilities of the language; here are some samples:

• "Perl: the Swiss Army chainsaw of programming languages." (An allusion to the versatility of Swiss Army knives.)
• "Perl is the only language that looks the same before and after RSA encryption." (Keith Bostic)
• "Only perl can parse Perl." (Larry Wall)
• "… We often joke that a camel is a horse designed by a committee, but if you think about it, the camel is pretty well adapted for life in the desert. The camel has evolved to be relatively self-sufficient. On the other hand, the camel has not evolved to smell good. Neither has Perl." (Larry Wall, on the camel as Perl's mascot)
• "The very fact that it's possible to write messy programs in Perl is also what makes it possible to write programs that are cleaner in Perl than they could ever be in a language that attempts to enforce cleanliness." (Larry Wall, Linux World, 1999)
• "Perl: Write once - never understand again." (An allusion to Java's mantra "Write once, run everywhere.")

## References

1. Perl source on GitHub.
2. Larry Wall: Programming is Hard, Let's Go Scripting…, perl.com, December 6, 2007; accessed June 1, 2019.
3. Larry Wall: Perl, the first postmodern computer language; accessed December 31, 2018.
4. Man page of Perl 1.0, in the Perl timeline on perl.org.
5. Wired interview, issue 8.10, 2000.
6. Table of all release dates of Perl (POD document) in the official CPAN distribution.
7. TMTOWTDI in the English-language Wiktionary.
8. Getopt::Long on CPAN.
9. List of known ports on CPAN.
10. Inline module on CPAN.
11. GNU Octave on gnu.org.
12. Susanne Schmidt: Happy Birthday, Perl! A scripting language celebrates its 30th birthday. In: iX, No. 12, December 18, 2017, p. 108; accessed December 25, 2017.
13. German Perl Workshop.
14. Perl developers among themselves: the results of the Perl QA Hackathon 2015.
15. For example the Perl Poetry category on perlmonks.org.
$$\begin{array}{ll} \mbox{minimize} & q_0(y) \\ \mbox{subject to} & q_i(y) \leq 0, \quad i = 1, \dots, m, \end{array}$$

where $$q_i(y) = \frac{1}{2} y^t Q_i y + y^t b_i + c_i, \quad y \in \mathbb{R}^n,$$ for all $$i = 0, 1, \dots, m$$. The problem is convex if $$Q_i$$ is positive semidefinite ($$Q_i \succeq 0$$) for all $$i$$, in which case an elegant duality structure is available.
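To make the duality structure concrete, here is a standard sketch (not part of the original text) of the Lagrangian dual under the convexity assumption above. Collecting the multipliers $\lambda \in \mathbb{R}^m_{+}$, the Lagrangian is again a quadratic:

$$L(y,\lambda) = q_0(y) + \sum_{i=1}^m \lambda_i q_i(y) = \frac{1}{2} y^t Q(\lambda) y + y^t b(\lambda) + c(\lambda),$$

where $Q(\lambda) = Q_0 + \sum_{i=1}^m \lambda_i Q_i$, $b(\lambda) = b_0 + \sum_{i=1}^m \lambda_i b_i$ and $c(\lambda) = c_0 + \sum_{i=1}^m \lambda_i c_i$. Since $\lambda \geq 0$ preserves $Q(\lambda) \succeq 0$, the dual function has the closed form

$$g(\lambda) = \inf_y L(y,\lambda) = c(\lambda) - \frac{1}{2}\, b(\lambda)^t Q(\lambda)^{+} b(\lambda)$$

whenever $b(\lambda)$ lies in the range of $Q(\lambda)$, and $g(\lambda) = -\infty$ otherwise, with $Q(\lambda)^{+}$ denoting the Moore-Penrose pseudoinverse. The dual problem is then to maximize $g(\lambda)$ subject to $\lambda \geq 0$.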
# Purpose

IRON provides an easy-to-use facility for using and sharing packages of quality Eiffel components, automating many of the tasks involved. Most often a package is a library or a set of libraries, but it can also contain other resources, such as tools.

Definition: IRON | a package management solution based on repositories.

# Package Repository vs Library Repository

Certainly the IRON repository is a repository of Eiffel libraries. However, sometimes libraries are used together, or cross-reference one another, and are therefore appropriately delivered together as a unit. Such a unit can also include other types of files, such as external .c files that may need to be compiled on the local platform to make .LIB or .OBJ files available to the linker (.a or .o on Unix and Linux systems), scripts or executables that need to be run as part of the installation process (e.g. to generate other files required by the library, install environment variables, or generate source code from LEX files), or tool kits that are part of, or needed by, the library. Since the IRON repository permits programmers to install software components in "units", and since those units can sometimes contain more than one library as well as other types of files, a new term was required to convey this concept: package.

Definition: package | a downloadable unit of software from an IRON repository that contains one or more Eiffel libraries and their related files.

# Application to .ecf files

Empowering Teams of Developers

To configure Eiffel projects, programmers use "ECFs" (Eiffel Configuration Files; Microsoft Visual Studio users can think of "solution files"). One application of ECFs is to reference libraries installed in different locations. Without IRON, the usual solution is to use relative or absolute paths, generally built from environment variables such as ISE_LIBRARY or EIFFEL_LIBRARY, plus a few package-specific variables such as GOBO, and so on.

Typical library references without IRON in an ECF:

<library name="base" location="$ISE_LIBRARY\library\base\base.ecf"/>
<library name="xml_parser" location="$EIFFEL_LIBRARY\library\text\parser\xml\parser\xml_parser.ecf"/>
<library name="dummy_foobar" location="$LIB_DUMMY\src\foo\bar.ecf"/>

As projects grow and multiply, the number of these variables adds up quickly. A dozen or more such environment variables per system was (prior to IRON) not uncommon, and coordinating their use and evolution among a team of programmers can become a challenge.

IRON has made it possible to simplify this scenario dramatically. For any commonly used library, it suffices to:

• Install the related package from an IRON repository.
• For any project that uses the library, include an IRON uri in the related ECF.
• An IRON uri has the form: iron:package-name:relative-path-to-file.ecf
• As you can see, there are no more environment variables, only a relative path from the root of the package.

This greatly simplifies referencing and package/library management: there is no need to know where the installed files are located on the local machine. That's all! In the above example, the locations of the libraries in the project then become something like:

<library name="base" location="iron:base:base.ecf" />
<library name="xml_parser" location="iron:xml:parser/xml_parser.ecf" />
<library name="foobar" location="iron:dummy:src/foo/bar.ecf" />

There is no more need for a set of environment variables. IRON and EiffelStudio take care of the remaining details.
All developers on a project simply share the same ECF file, with no further worry about where the libraries are. If you do need information about IRON locations, use the command: iron path ...

| Command | Result | Example |
| --- | --- | --- |
| iron path | Base directory of IRON (the IRON_PATH value) | C:\Users\jfiat\Documents\Eiffel User Files\14.05\iron |
| iron path base | Location of the installed package base | C:\Users\jfiat\Documents\Eiffel User Files\14.05\iron\packages\base |
| iron path xml | Location of the installed package xml | C:\Users\jfiat\Documents\Eiffel User Files\14.05\iron\packages\xml |

Notes:

• It is possible to override the default location of the IRON installation directory by setting the environment variable IRON_PATH or ISE_IRON_PATH.
• In the scope of the .ecf file, the IRON_PATH variable is always set: either to the default path or to the value of the environment variable IRON_PATH.
• In addition to the IRON way of referencing a library, it is also possible to use the absolute URL in the repository; see advanced usage for more details.

https://iron.eiffel.com/14.05/com.eiffel/library/base/base.ecf
https://iron.eiffel.com/14.05/com.eiffel/library/text/parser/xml/parser/xml_parser.ecf
https://iron.eiffel.com/14.05/others/dummy/src/foo/bar.ecf

But this implies putting the version in the URL (i.e. 14.05), or you could set ISE_LIBRARY to https://iron.eiffel.com/14.05/com.eiffel

# IRON client tool

The iron client executable is a facility that permits Eiffel programmers to easily install, remove, update, list, examine, search and share Eiffel packages. Additionally, it permits easy maintenance of a local list of IRON repositories. A default IRON server is provided, and a default repository is added automatically by the iron executable, based on the version of EiffelStudio that installed it (example: https://iron.eiffel.com/14.05).

The IRON facility consists of three parts:

• A default repository at https://iron.eiffel.com/14.05 (it provides the web API and web interface for the repositories stored there; you can add other IRON servers as they become available).
• The iron executable utility on the local machine (installed with EiffelStudio; the program that interacts with the repositories).
• Within EiffelStudio and the Eiffel compiler, the ability to read and use IRON references from the Eiffel Configuration File (.ecf). The Project Settings tool and the "Add Library" dialog also provide support for IRON packages.

# How to Use IRON

## Install the wanted Eiffel packages

Using the EiffelStudio command prompt (installed with EiffelStudio), execute the IRON command to install the packages you want to use:

iron install <package_name>

Note that compiling an Eiffel project that depends on uninstalled Eiffel package(s) will prompt the user to install the missing packages (this is currently supported with ec in command-line mode and in graphical mode).

## Add references to installed package libraries

Simply add the library with the IRON uri iron:package-name:relative-path-to-file.ecf, as you previously would have for local libraries. This can be done directly by editing the .ecf file, or by using the Project Settings tool in EiffelStudio: Project Settings -> <target_name> -> Groups -> Libraries, then right-click Libraries and add a library. This pops up the Add Library dialog, which expects a name and a location; simply select the available library from the grid. Note that the Iron tab of the "Add Library" dialog also gives you an easy way to install and remove IRON packages.
• The default and recommended location for an IRON package library is the IRON URI: iron:package-name:relative-path-to-file.ecf
• It is also possible to use the absolute URL in the IRON repository, such as: https://iron.eiffel.com/14.05/com.eiffel/library/base/base.ecf
• Another solution would be to use the IRON_PATH environment variable to locate the installed libraries, such as: $IRON_PATH\packages\base\base.ecf

This latter method, while it works, is not recommended, simply because it defeats some of the advantages of using the IRON repository in the first place.

## External dependencies

If the package needs some other kind of linking with your Eiffel projects, e.g. to an external .dll or .so, then instructions for this should be provided within the package.

## Optional

By default, the base directory for IRON is under <Eiffel User Files>/<EiffelStudio_Version>/iron, but it is possible to override this value by setting the environment variable IRON_PATH (or ISE_IRON_PATH). If you do not define one of these environment variables, the default location is used. Note that if the physical location does not exist, the local iron executable will create it. Setting IRON_PATH can be a way to set up different development environments (with different registered repositories, ...).

# How to Get Information About IRON Packages

At the website provided by a particular IRON server, you can get information about available packages in a number of ways. You can start by simply visiting the server's base address: https://iron.eiffel.com/ .

• Select the version that matches the version of EiffelStudio you have installed.

Example: clicking on version 14.05 takes you to https://iron.eiffel.com/repository/14.05/ where you can list existing packages, or add a new package if you have an account on the server. If you click the "Package list" or "All packages" link, it takes you to a list of packages available under that version.

## Search/filter

To filter this list, you can use the search window. You can specify search criteria in this format: criterion:search_string

Criteria available:

Criterion | Meaning
--- | ---
name | string is contained in the package name (wildcards are supported)
title | string is contained in the title
tag | package contains search_string in its tags (i.e. keywords)

If a criterion is omitted, name is used by default. Operators available: or, and, not (example: name:er and not name:parser)

Finally, when you have found the package you want, click on its title, and the page displayed will contain detailed information about the package.

## Associated paths

Part of the information is a portion of the URI which you can use to define the path to the package. For the base library (title: EiffelBase), these URIs look like this:

/14.05/com.eiffel/library/base
/14.05/com.eiffel/library/data_structure/adt/base

Given that the server's HTTP address is (in this example) https://iron.eiffel.com/, you can compose full paths from this, and use them in your Eiffel project.
In this case, you can include the EiffelBase library in your project by specifying either:

https://iron.eiffel.com/14.05/com.eiffel/library/base/base.ecf

or

https://iron.eiffel.com/14.05/com.eiffel/library/data_structure/adt/base/base.ecf

Both will cause your project to compile with the same EiffelBase library provided by this IRON repository, provided you previously issued the following command on your system:

> iron install base

or

> iron install https://iron.eiffel.com/14.05/com.eiffel/library/base

IMPORTANT: those associated URIs may be deprecated soon in favor of the IRON URI iron:base:base.ecf.

# Using IRON from the Command Line

The "iron" executable is used to perform various operations such as search, install, remove, update and share. This executable is installed with EiffelStudio in $ISE_EIFFEL/tools/spec/$ISE_PLATFORM/bin/.

## Quick Help from the Command Line

• iron help lists the actions that are available.
• iron <action> --help displays detailed usage syntax for the action specified.

Note that most of the actions have a -v (verbose) option that will display additional helpful information about the action performed, including (when relevant) the local path to the package.

## Action Summary

Action | Meaning
--- | ---
list | displays a list of available packages, and whether they are installed
search | searches for a specified package
info | displays information about a specified package
install | installs specified packages
remove | removes specified packages
repository | manages the repository list
share | shares and manages your packages (an account on the IRON server is required)

## Examples

### Update cached iron repository information

iron update

### Get information about a package

For instance, about the api_wrapper package:

iron info api_wrapper

If the package is installed, the installation path will also be displayed.

### Search for a package by name, ID or URI

Package IDs and URIs are displayed by the "info" action.

iron search base

### List available packages

iron list

### List installed packages

iron list --installed

### Install a package

iron install base

or

iron install https://iron.eiffel.com/14.05/com.eiffel/library/base

(This latter form is useful in resolving name conflicts when, for instance, you have multiple IRON repositories registered on your system, and two or more contain a package called "base".)

### Uninstall a package

iron remove base

### Install all available packages

iron install --all

### Uninstall all installed packages

iron remove --all

## Managing Multiple Repositories

It is possible to have more than one IRON repository server registered. Examples:

iron repository --list
iron repository --add https://iron.eiffel.com/14.05
iron repository --add https://custom.example.com/14.05
iron repository --add C:\eiffel\my_repository
iron repository --remove https://custom.example.com/14.05

### Multiple-Repository Name Conflict Resolution

If you have more than one IRON repository registered on your system, it is possible that the same package name may exist on more than one repository. If this is the case, and you attempt to perform operations using that name only, the repository that will be used is the first repository in the list that contains a package with that name. If you need the package with that name from a different repository, then use the "id" or "uri" form to identify the package you want.
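For example, if both of the repositories registered above provide a package named "base" (the custom.example.com repository is, as above, just a placeholder), the full URI selects the one you want unambiguously:

> iron install https://iron.eiffel.com/14.05/com.eiffel/library/base
> iron install https://custom.example.com/14.05/com.eiffel/library/base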
If the sequence of repositories is not to your liking, you can change it in three ways:
• use the iron executable to remove and add repositories to re-sequence them, or
• delete the repositories.conf file and use the iron executable to add them again in the sequence you want them, or
• edit the file repositories.conf with a plain text editor in the directory indicated by your IRON_PATH (or ISE_IRON_PATH) environment variable (or <Eiffel User Files>/<EiffelStudio_Version>/iron/ if neither of these environment variables is defined on your system).

(Note that this latter method, while possible, is not recommended, since the syntax of that file can change with new releases.)

## Building a package

An IRON package has to provide, at its root, a file package.iron. This file describes the package with its name, description, and various other information. See, for instance, the package.iron for the Eiffel Base package:

package base

project
	base_safe = "base-safe.ecf"
	base = "base.ecf"
	base_testing = "testing/testing-safe.ecf"
	base_testing = "testing/testing.ecf"

note
	title: Eiffel Base
	description: "Eiffel Base: kernel library classes, data structure, Input and Output"
	tags: base,kernel,structure,io
	license: Eiffel Forum License v2 (see http://www.eiffel.com/licensing/forum.txt)
	copyright: 1984-2013 Eiffel Software and others
	link[doc]: "Documentation" http://eiffelroom.com/
	link[source]: "Subversion" https://svn.eiffel.com/eiffelstudio/trunk/Src/library/base
	link[license]: http://www.eiffel.com/licensing/forum.txt
	maps: /com.eiffel/library/data_structure/adt/base

end

Note: The package.iron file for the Eiffel Base package is available online at https://svn.eiffel.com/eiffelstudio/trunk/Src/library/base/package.iron .

Current status:
• only the name of the package is required
• the section "project" lists the various available .ecf projects
• the section "note" contains title, description, tags, and other information. The format is similar to an Eiffel indexing note; in addition it supports brackets in the note name, as in link[doc].
• The "link" declaration: link[category]: "Optional Title" associated-url
• The following notes have semantics that are processed by Iron: title, description, tags, link[..], and maps (for now mostly on the Iron server).
• It is possible to use any note name. Currently such notes are simply stored and never displayed; in the future, Iron may support additional semantics for them.

A few packages may require post-installation operations, such as compiling C code. For that, use the section setup, and in particular the compile_library entry. During installation, iron will launch the compile_library tool delivered with EiffelStudio on the provided directory.

package cURL

setup
	compile_library = Clib
	...

This compile_library tool relies on finish_freezing -library and thus processes the Makefile-win.SH or Makefile.SH.

## Using your own IRON packages locally

There are various ways to use your own Eiffel package libraries:
• Using a local location, as was done before 14.05 (i.e. a relative or absolute path, possibly using an environment variable...).
• Sharing the package on an IRON server, and then installing it from that server:
	• The default https://iron.eiffel.com/ is the recommended server.
	• But it is possible to host your own server easily (server how-to documentation will be provided soon).
• And there is another solution: a local repository. Local repositories rely heavily on the package.iron files.
So if a folder is registered as an iron repository, iron will internally search this folder recursively for package.iron files. Example on Windows:

iron repository --add %ISE_LIBRARY%\library

It should find and list all the official ISE IRON packages. Now if you want to install the time package from it, just do:

> iron install time
Searching [time] -> several packages for name [time]!
 1) time (https://iron.eiffel.com/14.05) "EiffelTime"
 2) time (file:///C:/EiffelDev/Src/library)
> Select a package [1] (q=cancel): 2
-> Install time (file:///C:/EiffelDev/Src/library)
Installing [time (file:///C:/EiffelDev/Src/library)] -> successfully installed.

To make development easier, you may want to edit/update the repositories.conf file in order to put that file://... local repository at the top.

> iron path
C:\Users\jfiat\Documents\Eiffel User Files\14.05\iron

and then edit "C:\Users\jfiat\Documents\Eiffel User Files\14.05\iron\repositories.conf".

• However, unless you are using the iron tool in batch mode (--batch flag), you will be asked to choose which package you want to install.
• You can also use the EiffelStudio "Add Library" dialog via the "Iron" tab to install and uninstall the various packages.
• And as a last solution, you can use the full URL, as in:

> iron install file:///C:/EiffelDev/Src/library/time

Of course, do not forget that a local repository should be used only for code in progress; otherwise, you should share that library and use it as a simple user. One of the goals of IRON is to encourage people to share their libraries with other Eiffel users.

To build and share your own packages on an IRON server, you will need a user account on the IRON server which will host your packages. Please visit https://iron.eiffel.com/repository/account/?register to create a new account.

As usual, to see the available options, use:

iron share --help

Example: To build the gps_nmea package from your library c:\eiffel\library\gps_nmea\ :

iron share create --username <your_id> --password <your_password> --repository https://iron.eiffel.com/14.05 --package "c:\eiffel\library\gps_nmea\package.iron" --package-name "gps_nmea"

This command will:
• create a new package named gps_nmea on the IRON repository https://iron.eiffel.com/14.05,
• using the local package c:\eiffel\library\gps_nmea (i.e. you need to provide the package.iron file).

Note:
• the --package-name is for now required, even if the package.iron already provides such information.
• see iron share --help for advanced usage (such as --index, --package-archive-source, ...).

After adding such a package to the repository, it is recommended that you go to the website and double-check that the package was created the way you wanted it to be; you can also edit its information there. Then, using the iron executable, install the package on your system, go through the steps of using it in an Eiffel project, and correct any problems discovered, to verify that end users will be able to use your package productively.

It is also strongly encouraged to include (or provide a link to) documentation that orients the user, and answers basic questions such as: What is the package? What motivated you to create it? What problem(s) does it address? Under what circumstances can the package be productively used? Under what circumstances should it not be used (if applicable)? And some basic examples of its use. If the package is complex, it can be very helpful to include a well-commented application that demonstrates the intended reuse of the package in software.
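Recall that iron share create expects you to supply the package.iron file. Here is a minimal sketch of one for the hypothetical gps_nmea package above, modeled on the Eiffel Base example (all values are illustrative; only the package name is strictly required):

package gps_nmea

project
	gps_nmea = "gps_nmea.ecf"

note
	title: GPS NMEA
	description: "Illustrative one-line description of the library"
	tags: gps,nmea,parser

end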
Important note: having clear documentation that enables end users to easily learn how to use your package is a VITAL link in the ability to reuse software components, as is so aptly described in Object-Oriented Software Construction, 2nd Edition, in the Modular Understandability criterion:

"A method favors Modular Understandability if it helps produce software in which a human reader can understand each module without having to know the others, or, at worst, by having to examine only a few of the others."

and the Self-Documentation Principle:

"The designer of a module should strive to make all information about the module part of the module itself."

The point: reuse is only possible when end users can easily and quickly learn how to reuse the software components available to them.

# Origin of the Name IRON

As many readers will know, the name "Eiffel" was chosen to reflect the elegance and soundness of constructing large, complex software systems from simple, individual components, each of which is a unit by itself and has its own existence, and can be tested for integrity as a separate unit, but whose role in the larger scheme of things is to be used as a "building block" for constructing high-integrity software systems. The picture on the front of the book Object-Oriented Software Construction, 2nd Edition illustrates this. This of course is intentionally meant as a direct parallel to the famous structure built by the architect and civil engineer Alexandre Gustave Eiffel. That structure was constructed from simple, individual components, each of which is a unit by itself and has its own existence, and can be tested for integrity as a separate unit, but whose role in the larger scheme of things is to be used as a "building block" for constructing a high-integrity structure: the Eiffel Tower.

As a parallel to this, the name "IRON" was chosen to reflect the fact that those individual building blocks were themselves made from iron. In the Eiffel world, constructing a large, complex software system is done with libraries of high-quality reusable components. Thus, the "building blocks" are made from iron, and software systems are made from those building blocks. Hence, IRON provides the "raw material" from which complex Eiffel systems are developed.

# Planned Enhancements

This documentation describes the version of iron released with EiffelStudio 14.05. More features are planned or are already under development:
• the ability to analyze the contents of the package, to extract information related to its .ECF file(s)
• a way of ensuring that the package compiles under the specified version of EiffelStudio
• support for test suites
• detection of, and actions related to, package dependencies
• package versioning
# Thread: Functions and Transformations Project.

1. ## Functions and Transformations Project.

Hi guys! i'm new here at the forum. just got a math project on functions and transformations, but had some problems with the last part. so i need some urgent help as i have 2 more days until the project is due. so here we go:

we were given a basic function f(x) = x^2. we were told to graph it and compare it to the graphs of various other functions. we were also asked to deduce the graphs and produce a table of results. so here is what i came up with:

Type of Transformation | Effect on Graph of f(x)
--- | ---
1. f(x) + c, c > 0 | upward shift by c
2. f(x) + c, c < 0 | downward shift by |c|
3. f(x + c), c > 0 | horizontal shift to the left by c
4. f(x + c), c < 0 | horizontal shift to the right by |c|
5. -f(x) | reflection in the x axis
6. f(-x) | reflection in the y axis
7. α f(x), α > 1 | vertical stretch by α
8. α f(x), 0 < α < 1 | vertical compression by α
9. f(αx), α > 1 | horizontal compression by α
10. f(αx), 0 < α < 1 | horizontal stretch by α

now ill just post the final part from the project:

Part VIII
In this final part of the Project you and your group members are to use the knowledge gained from Parts I-VII to apply the rules of functional transformations to a function γ(x) where the formula is not known, but the graph of γ(x) is known. On the last page of the lab is the graph of a function γ(x) for which the algebraic formula is unknown. Using the information gained in this lab, draw the graph of a new function β(x) defined according to β(x) = 1/2(1 - γ(-2x)) + 1. Be sure to fully explain your reasoning in deducing the graph of β(x) in your final lab report.

now from my understanding, i have to transform the graph given above using the formula for β(x) above. but the formula is just so complicated that i cant understand what i need to do. any help will be greatly appreciated.

2. Originally Posted by satish555
...draw the graph of a new function β(x) defined according to β(x) = 1/2(1 - γ(-2x)) + 1... but the formula is just so complicated that i cant understand what i need to do. any help will be greatly appreciated.
Draw a diagram for each of these steps.

$\gamma(-x)$ is your reflection in the y-axis.
$\gamma(-2x)$ is your horizontal compression (by a factor of 2).
$-\gamma(-2x)$ is a reflection in the x-axis.
$1 - \gamma(-2x)$ is a vertical shift upwards by 1 unit.
$\frac{1}{2}[1 - \gamma(-2x)]$ is a vertical compression (by a factor of 2).
$\frac{1}{2}[1 - \gamma(-2x)] + 1$ is a vertical shift upwards by 1 unit.

3. Originally Posted by satish555
...but the formula is just so complicated that i cant understand what i need to do. any help will be greatly appreciated.

Try breaking it up into "pieces" (that way you may better explain what is going on, using previous knowledge).

4. thanx a lot guys! really helpful.
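5. One more way to sanity-check the final graph: push a single known point through all the steps. For instance, if the given graph happens to show $\gamma(-2) = 3$ (a made-up sample value, since the actual graph isn't posted here), then
$$\beta(1) = \tfrac{1}{2}\left(1 - \gamma(-2\cdot 1)\right) + 1 = \tfrac{1}{2}(1 - 3) + 1 = 0,$$
so the point $(-2, 3)$ on the graph of $\gamma$ ends up at $(1, 0)$ on the graph of $\beta$. In general, each point $(a, b)$ of $\gamma$ maps to $\left(-\tfrac{a}{2},\ \tfrac{1-b}{2} + 1\right)$ on $\beta$, which is exactly the composition of the six steps listed above.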
# Lagrange's theorem

Dear, I don't understand left cosets and Lagrange's theorem... I'm working on some project, so please write me anything that could help me understand it better... milica from Montenegro

- What is your background? Are you studying some introductory course on group theory? –  Pandora Jan 8 '11 at 14:19
- yes yes :):):):):):):):):) –  Milica Jan 8 '11 at 16:42
- I wrote some course notes on cosets once, which are posted at math.uconn.edu/~kconrad/blurbs/grouptheory/coset.pdf. Maybe that will have something useful in it. –  KCd Jan 9 '11 at 7:03

Suppose $G$ is a group, and $H$ is any subgroup of $G$. I would like to try to define something similar to "congruence modulo $m$" for integers, but for this arbitrary group $G$ and subgroup $H$. In the integers, we say that $a\equiv b\pmod{m}$ if and only if $a-b$ is an element of the subgroup $m\mathbb{Z}$. So we are going to try something similar for $H$ and $G$, taking into account that $G$ need not be abelian. I'm not going to assume the group is finite until I say so explicitly.

Definition. Let $G$ be a group and let $H$ be a subgroup. If $x,y\in G$, we say that $x$ and $y$ are congruent on the right modulo $H$ if and only if $xy^{-1}\in H$. We write this $x\equiv_H y$.

Proposition. $\equiv_H$ is an equivalence relation on $G$.

Proof. We need to show that the relation is reflexive, symmetric, and transitive. Let $x\in G$. Then $xx^{-1} = e\in H$ (since $H$ is a subgroup, hence contains $e$), so $x\equiv_H x$. Now let $x,y\in G$, and suppose that $x\equiv_H y$; we want to show that $y\equiv_H x$ holds as well. Since $x\equiv_H y$, then $xy^{-1}\in H$. Since $H$ is a subgroup, it is closed under taking inverses, so $(xy^{-1})^{-1} = (y^{-1})^{-1}x^{-1} = yx^{-1}\in H$. By definition, this means that $y\equiv_H x$, as desired. Finally, suppose that $x,y,z\in G$ and that $x\equiv_H y$ and $y\equiv_H z$. We want to show that $x\equiv_H z$. The first congruence implies $xy^{-1}\in H$; the second that $yz^{-1}\in H$. Since $H$ is a subgroup, it is closed under products, so $(xy^{-1})(yz^{-1})=xz^{-1}\in H$, hence $x\equiv_H z$, as desired. QED

Notice that the three basic properties of a subgroup are precisely what is needed for the three basic properties of an equivalence relation: that $H$ contains the identity is used for reflexivity; that $H$ is closed under inverses is used for symmetry; and that $H$ is closed under products is used for transitivity.

Now, since $\equiv_H$ is an equivalence relation on $G$, by the Fundamental Theorem of Equivalence Relations, $\equiv_H$ induces a partition on $G$; that is, $G$ is broken up into disjoint parts, one for each equivalence class. Our next goal is to figure out if we have some description of the equivalence classes that is independent of the equivalence relation (this is very useful in general). Indeed, we do:

Theorem. Let $G$ be a group and let $H$ be a subgroup; let $x\in G$. The equivalence class of $x$ under the relation $\equiv_H$ is equal to the set $Hx = \{hx\mid h\in H\}$. That is,
$$[x]_H = \{y\in G\mid x\equiv_H y\} = \{hx\mid h\in H\} = Hx.$$

Proof. We need to show that each element in $Hx$ is in $[x]_H$, and that each element in $[x]_H$ is in $Hx$. Let $z\in Hx$; that means that $z = hx$ for some $h\in H$. Then
$$xz^{-1} = x(hx)^{-1} = x(x^{-1}h^{-1}) = h^{-1}\in H,$$
so by definition we have that $x\equiv_H z$. Thus, if $z\in Hx$, then $z\in[x]_H$. So $Hx\subseteq[x]_H$. Conversely, let $y\in [x]_H$. Then $x\equiv_H y$, so $xy^{-1}\in H$. Thus, there exists $h\in H$ such that $xy^{-1}=h$.
Multiplying on the right by $y$ we get $x=hy$, and multiplying on the left by $h^{-1}$ we get $h^{-1}x = y$. Since $h^{-1}\in H$, we get $y = h^{-1}x \in Hx$. Thus, if $y\in[x]_H$ then $y\in Hx$, so $[x]_H\subseteq Hx$. Putting the two inclusions together, we conclude that $[x]_H = Hx$ for each $x\in G$. QED

Corollary. Let $G$ be a group, $H$ a subgroup. Then:
1. $Hx$ is nonempty for each $x\in G$.
2. $\displaystyle G = \bigcup_{x\in G} Hx$.
3. For all $x,y\in G$, if $Hx\cap Hy\neq\emptyset$, then $Hx=Hy$.

Proof. This follows from the fact that since the sets of the form $Hx$ are exactly the equivalence classes of the equivalence relation $\equiv_H$, they form a partition of $G$. QED

Corollary. Let $G$ be a group, $H$ a subgroup. For all $x,y\in G$, $x\equiv_H y$ if and only if $Hx = Hy$.

We give the sets of the form $Hx$ a name:

Definition. Let $G$ be a group, $H$ a subgroup, and $x\in G$. The set
$$Hx = \{ hx\mid h\in H\}$$
is called the right coset of $x$ modulo $H$.

"Coset" is short for "congruence set", because the right coset is exactly the set of all things that are congruent on the right to $x$ modulo $H$.

In general, when you have an equivalence relation, the equivalence classes can have different sizes. But not so with these equivalence relations: because they are obtained by taking all possible products with elements of $H$, all equivalence classes are bijectable.

Theorem. Let $G$ be a group and $H$ a subgroup. Let $x,y\in G$ be any two elements. Then there is a bijection between the sets $Hx$ and $Hy$, given by
\begin{align*} \psi\colon Hx &\to Hy\\ hx&\mapsto hy \end{align*}

Proof. To show that $\psi$ is one-to-one, let $hx,h'x\in Hx$ be such that $\psi(hx)=\psi(h'x)$. Then $hy = h'y$. Multiplying on the right by $y^{-1}$, we get $h=h'$, so $hx=h'x$. Thus, $\psi$ is one-to-one. To show $\psi$ is onto, let $hy\in Hy$. Then $hx\in Hx$, and $\psi(hx)=hy$. Thus, $\psi$ is one-to-one and onto, so it is a bijection. QED

Now, let's assume that $G$ is finite, and $H$ is a subgroup. Then the equivalence relation $\equiv_H$ partitions $G$ into equivalence classes; each equivalence class is of the form $Hx$ for some $x\in G$, and by the previous theorem, they are all bijectable with one another, so they all have the same size, $k$. If there are $m$ distinct equivalence classes, each of size $k$, and they partition $G$, then the size of $G$ is the sum of the sizes of the distinct equivalence classes; that is, $|G|=mk$.

But what is $k$? $k$ is the size of the equivalence classes; they are all the same size. In particular, the equivalence class $He$ has size $k$. But what is $He$? Well,
$$He = \{ he\mid h\in H\} = \{h\mid h\in H\} = H.$$
That is, $He = H$, so that $H$ itself has size $k$. Since $|G|=mk = m|H|$, that means that $|H|$ divides $|G|$. That is:

Corollary [Lagrange's Theorem]. If $G$ is a finite group, and $H$ is a subgroup, then the size of $H$ divides the size of $G$.

You might ask if you can define a "congruence on the left"; yes, you can. We say $x$ and $y$ are congruent on the left modulo $H$ if $x^{-1}y\in H$, and we write $x{}_H\equiv y$. If we proceed as above, then we get that the equivalence class of $x$ modulo $H$ on the left is $xH$ (instead of $Hx$); the process is completely analogous; these are called left cosets of $H$. It turns out that there is a bijection between left and right cosets.
One is tempted to define the bijection by sending the coset $xH$ to the coset $Hx$, but it turns out that this is not well defined (you can have $xH = yH$, but also have $Hx\neq Hy$). However, mapping $xH$ to the coset $Hx^{-1}$ works out (I'll leave it to you to work it out), so that the number of left cosets and the number of right cosets of $H$ in $G$ are the same, and the size of any left coset is the same as the size of any right coset of $H$.

In general, the equivalence relation "congruent on the left modulo $H$" is not the same as the equivalence relation "congruent on the right modulo $H$"; the case when they are the same is interesting in its own right, and corresponds to the case when $H$ is a normal subgroup, which has a lot of important consequences all of its own that you will no doubt discover soon. -

- Arturo, where did you learn that etymology for coset? I'd never heard any etymology for it at all, so this is quite interesting. –  KCd Jan 9 '11 at 7:06
- @KCd: I first learned all of this in Spanish, where cosets are "lateral classes" (left or right, as appropriate). I can't pinpoint in my memory when I learned this in English; it seems like I've always known it. I'm at the JMM, so I can't look it up to see if the etymology is in any of my basic references, but I will look it up tomorrow. –  Arturo Magidin Jan 9 '11 at 13:34
- @KCd: I've been unable to find the etymology in any of my books; I hope I didn't make it up! I did, however, track down the apparent first use of the term: a paper by Miller, in the Quarterly Journal, published in 1910 (volume 41). I'm trying to see if I can track down a copy, though it is not available in my local library, to see what he meant when he coined it. –  Arturo Magidin Jan 10 '11 at 19:40
- I finished my project... –  Milica Jan 12 '11 at 13:59
- @Milica: $ah$ is just the group product of $a$ and $h$. –  Zhen Lin Jan 8 '11 at 15:39
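A concrete illustration of the above (a standard small example, added for the sake of the original question rather than taken from the answer): let $G = S_3$, the symmetric group on three letters, and $H = \{e, (1\,2)\}$. Composing permutations right-to-left, the right coset of $(1\,3)$ is
$$H(1\,3) = \{(1\,3),\ (1\,2)(1\,3)\} = \{(1\,3),\ (1\,3\,2)\},$$
while the left coset is
$$(1\,3)H = \{(1\,3),\ (1\,3)(1\,2)\} = \{(1\,3),\ (1\,2\,3)\},$$
so $H(1\,3) \neq (1\,3)H$: left and right cosets genuinely differ when $H$ is not normal. And there are $|G|/|H| = 6/2 = 3$ right cosets, each of size $2$, partitioning $S_3$, exactly as Lagrange's Theorem predicts.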
# Documents: Wunsch, Jared | records found: 12

## Correspondence between Ruelle resonances and quantum resonances for non-compact Riemann surfaces
Guillarmou, Colin | CIRM H Post-edited

## Understanding the growth of Laplace eigenfunctions (part 1 of 2)
Canzani, Yaiza | CIRM H Post-edited
Partial Differential Equations; Geometry
In this talk we will discuss a new geodesic beam approach to understanding eigenfunction concentration. We characterize the features that cause an eigenfunction to saturate the standard supremum bounds in terms of the distribution of $L^{2}$ mass along geodesic tubes emanating from a point. We also show that the phenomena behind extreme supremum norm growth are identical to those underlying extreme growth of eigenfunctions when averaged along submanifolds. Using the description of concentration, we obtain quantitative improvements on the known bounds in a wide variety of settings.

## Interview au CIRM : Virginie Bonnaillie-Noël
Bonnaillie-Noël, Virginie | CIRM H Post-edited
Outreach; Mathematics Education and Popularization of Mathematics
CNRS Research Director at the DMA, UMR 8553 (Analysis team). Deputy Scientific Director at Insmi, in charge of site policy (Institut des Sciences Mathématiques et de leurs Interactions - CNRS). Deputy Scientific Delegate and Referent at CNRS.

## A Polyakov formula for sectors
Rowlett, Julie | CIRM H Multi angle
Partial Differential Equations; Geometry; Mathematical Physics
Polyakov's formula expresses a difference of zeta-regularized determinants of Laplace operators, an anomaly of global quantities, in terms of simple local quantities. Such a formula is well known in the case of closed surfaces (Osgood, Philips, & Sarnak 1988) and surfaces with smooth boundary (Alvarez 1983). Due to the abstract nature of the definition of the zeta-regularized determinant of the Laplacian, it is typically impossible to compute an explicit formula. Nonetheless, Kokotov (genus one Kokotov & Klochko 2007, arbitrary genus Kokotov 2013) demonstrated such a formula for polyhedral surfaces! I will discuss joint work with Clara Aldana concerning the zeta-regularized determinant of the Laplacian on Euclidean domains with corners. We determine a Polyakov formula which expresses the dependence of the determinant on the opening angle at a corner. Our ultimate goal is to determine an explicit formula, in the spirit of Kokotov's results, for the determinant on polygonal domains.
## The wave equation for Weil-Petersson metrics
Melrose, Richard | CIRM H Multi angle
In this somewhat speculative talk I will briefly describe recent results with Xuwen Zhu on the boundary behaviour of the Weil-Petersson metric (on the moduli space of Riemann surfaces) and ongoing work with Jesse Gell-Redman on the associated Laplacian. I will then describe what I think happens for the wave equation in this context and what needs to be done to prove it.

## Hyperbolic triangles with no positive Neumann eigenvalues
Judge, Christopher | CIRM H Multi angle
In joint work with Luc Hillairet, we show that the Laplacian associated with the generic finite-area triangle in the hyperbolic plane with one vertex of angle zero has no positive Neumann eigenvalues. This is the first evidence for the Phillips-Sarnak philosophy that does not depend on a multiplicity hypothesis. The proof is based on a method that we call asymptotic separation of variables.

## Linear stability of slowly rotating Kerr spacetimes
Hintz, Peter | CIRM H Multi angle
Partial Differential Equations
I will describe joint work with Dietrich Häfner and Andràs Vasy in which we study the asymptotic behavior of linearized gravitational perturbations of Schwarzschild or slowly rotating Kerr black hole spacetimes. We show that solutions of the linearized Einstein equation decay at an inverse polynomial rate to a stationary solution (given by an infinitesimal variation of the mass and angular momentum of the black hole), plus a pure gauge term. Our proof uses a detailed description of the resolvent of an associated wave equation on symmetric 2-tensors near zero energy.

## Quantum Sabine law for resonances in transmission problems
Galkowski, Jeffrey | CIRM H Multi angle
Partial Differential Equations; Mathematical Physics
We prove a quantum Sabine law for the location of resonances in transmission problems. In this talk, our main applications are to scattering by strictly convex, smooth, transparent obstacles and highly frequency dependent delta potentials. In each case, we give a sharp characterization of the resonance free regions in terms of dynamical quantities. In particular, we relate the imaginary part of resonances to the chord lengths and reflectivity coefficients for the ray dynamics and hence give a quantum version of the Sabine law from acoustics.
## Geodesic beams in eigenfunction analysis (part 2 of 2)
Galkowski, Jeffrey | CIRM H Multi angle
Partial Differential Equations; Geometry
This talk is a continuation of 'Understanding the growth of Laplace eigenfunctions'. We explain the method of geodesic beams in detail and review the development of these techniques in the setting of defect measures. We then describe the tools and give example applications in concrete geometric settings.

## ALC manifolds with exceptional holonomy
Foscolo, Lorenzo | CIRM H Multi angle
Geometry
We will describe the construction of complete non-compact Ricci-flat manifolds of dimension 7 and 8 with holonomy $G_{2}$ and Spin(7) respectively. The examples we consider all have non-maximal volume growth and an asymptotic geometry, so-called ALC geometry, that generalises to higher dimensions the asymptotic geometry of 4-dimensional ALF hyperkähler metrics. The interest in these metrics is motivated by the study of codimension-1 collapse of compact manifolds with exceptional holonomy. The constructions we will describe are based on the study of adiabatic limits of ALC metrics on principal Seifert circle fibrations over asymptotically conical orbifolds, cohomogeneity-one techniques and the desingularisation of ALC spaces with isolated conical singularities. The talk is partially based on joint work with Mark Haskins and Johannes Nordström.

## Emergence of the quantum wave equation in classical deterministic hyperbolic dynamics
Faure, Frédéric | CIRM H Multi angle
In the 80's, D. Ruelle, D. Bowen and others introduced probabilistic and spectral methods in order to study deterministic chaos ("Ruelle resonances"). For a geodesic flow on a strictly negative curvature Riemannian manifold, following this approach and using microlocal analysis, one obtains that long-time fluctuations of classical probabilities are described by an effective quantum wave equation. This may be surprising because there is no added quantization procedure. We will discuss consequences for the zeros of dynamical zeta functions. This shows that the problems of classical chaos and quantum chaos are closely related. Joint work with Masato Tsujii.
## Diffraction of singularities for the wave equation on manifolds with corners
Melrose, Richard; Vasy, Andras; Wunsch, Jared | Société Mathématique de France 2013
Book - vi; 135 p. - ISBN 978-2-85629-367-6 - Astérisque, 0351
Location: Periodicals, 1st floor
Keywords: wave equation # corner # wave front set # diffraction
# How does one denote the set of all positive real numbers?

What is the "standard" way to denote all positive (or non-negative) real numbers? I'd think $$\mathbb R^+$$ but I believe that that is usually used to denote "all real numbers including infinity". So is there a standard way to denote the set $$\{x \in \mathbb R : x \geq 0\} \; ?$$

• Note that $0$ is not positive. – Yuval Filmus Mar 19 '11 at 15:08
• Also, I wouldn't agree that $R_+$ usually includes $\infty$. The extended real line is used only in certain areas. – Yuval Filmus Mar 19 '11 at 15:09
• I removed the set theory tag since this isn't a set theory question. – Apostolos Mar 19 '11 at 15:09
• $[0,\infty)$ or if you want to work with the extended real line, $[0, +\infty]$. – cardinal Mar 19 '11 at 15:12
• @YuvalFilmus Do not forget that this is just an English convention. In France, for example, we usually say that 0 is both positive and negative. I have often seen $\mathbb{R}^+$ for all positive/null numbers and $\mathbb{R}^{\ast +}$ for all strictly positive numbers. – ThR37 Jun 17 '14 at 9:53

Not that I know of. There are many, e.g.
• $\mathbb{R^+_0}$,
• $\mathbb{R^+}$ and
• $[0, \infty)$.

The unambiguous notations are: for the positive real numbers
$$\mathbb{R}_{>0} = \left\{ x \in \mathbb{R} \mid x > 0 \right\} \;,$$
and for the non-negative real numbers
$$\mathbb{R}_{\geq 0} = \left\{ x \in \mathbb{R} \mid x \geq 0 \right\} \;.$$
Notations such as $\mathbb{R}_{+}$ or $\mathbb{R}^{+}$ are non-standard and should be avoided, because it is not clear whether zero is included. Furthermore, the subscripted version has the advantage that $n$-dimensional spaces can be properly expressed. For example, $\mathbb{R}_{>0}^{3}$ denotes the positive-real three-space, which would read $\mathbb{R}^{+,3}$ in non-standard notation.

In algebra one may come across the symbol $\mathbb{R}^\ast$, which refers to the multiplicative units of the field $\big( \mathbb{R}, +, \cdot \big)$. Since all real numbers except $0$ are multiplicative units, we have
$$\mathbb{R}^\ast = \mathbb{R}_{\neq 0} = \left\{ x \in \mathbb{R} \mid x \neq 0 \right\} \;.$$
But caution! The positive real numbers can also form a field, $\big( \mathbb{R}_{>0}, \cdot, \star \big)$, with the operation $x \star y = \mathrm{e}^{ \ln(x) \cdot \ln(y) }$ for all $x,y \in \mathbb{R}_{>0}$. Here, all positive real numbers except $1$ are the "multiplicative" units, and thus
$$\mathbb{R}_{>0}^\ast = \left\{ x \in \mathbb{R}_{>0} \mid x \neq 1 \right\} \;.$$

• The last objection makes no sense since one could simply use $\mathbb R_+^3$. – Did Oct 25 '15 at 10:51
• Actually for $\mathbb R^+\times\mathbb R^+\times\mathbb R^+$ I'd write $(\mathbb R^+)^3$. The notation with a comma doesn't look right to me. – celtschk Jun 13 '17 at 6:08
• Would that it were so simple. In Probability with Martingales, Williams tells me "Everyone is agreed that $\mathbb{R}^+$ is $[0,\infty)$." – Addem Oct 17 '17 at 19:38

I'd completely avoid using $\mathbb{R}^+$ since people won't know if $0$ is included or not. So $\mathbb{R}_0^+$ would be a possibility, but then how would you denote $\{x\in\mathbb{R}:x>0\}$? Again, with $\mathbb{R}^+$ people won't know that $0$ isn't included. Personally, I prefer writing $[0,\infty)$ and $(0,\infty)$ when it's clear from the context that an interval in $\mathbb{R}$ is meant.

• All the mathematicians I ever met (a lot) understood that $R^+$ meant the positive reals.
– DanielWainfleet Aug 25 '15 at 20:20
• @user254665: Well, certainly it means the positive reals, but now ask them what they mean by "positive" :-) Seriously, I know mathematicians who mean "$\ge0$" and others who mean "$>0$". – Hendrik Vogt Aug 28 '15 at 17:26
• Edit: I think that $\mathbb{R}^{+} \backslash \Bigl\{\left((\mathbb{R}^{+} \backslash \mathbb{R}_0^{+}) \cup (\mathbb{R}_0^{+} \backslash \mathbb{R}^{+})\right)\Bigr\} \cup \{1\}$ will be unambiguous :-) – Kusavil Jan 28 '18 at 1:13

Some of my profs use $\mathbb{R}^{\ge 0}$. I like to add whatever to the top, so $\mathbb{R}^{\le a}$ just means all reals at most $a$.

• This definitely strikes me as nonstandard, at least in the U.S. I'd be curious to know where all this is used. (Not saying it's a bad notation, just never seen it in any texts of common mathematics publishers, for example.) – cardinal Mar 19 '11 at 18:55
• I learned this from my math prof who grew up in Canada. But yeah, I've never seen it outside her notes, but it does make writing $\{ x \in R \mid x < a\}$ easier! – hwong557 Mar 19 '11 at 19:00
• Interval notation does not per se fix the basic set. – Raphael Mar 19 '11 at 21:11
• @cardinal: I've seen it used many times in Europe (but rather as a subscript: $\mathbb{R}_{\geq 0}$) and some people even write $\mathbb{Z}_{\geq 0}$ instead of $\mathbb{N}$ because the latter is ambiguous as to whether $0$ is in it or not. And of course all obvious variants such as $\mathbb{R}_{\lt t}$ and so on are also used. But certainly, interval notation is more common. – t.b. Mar 21 '11 at 3:00
• @cardinal: I think I can confirm that to a certain extent. I'm pretty sure we exclusively used interval notation à la Bourbaki in elementary and high school in Switzerland (I had at least 6 math teachers at various places) and it is exclusively used in at least four elementary texts on (what we call) algebra in my bookshelf. – t.b. Mar 21 '11 at 13:30

The following is also pretty common notation for the non-negative reals: $\mathbb{R}_{\geq 0}$ or $\mathbb{R}_{+}$.

I've learned in elementary school that $\mathbb{R}_{*}$ means the set without the zero, so $\mathbb{R}^{+}=[0,\infty)$ and $\mathbb{R}^{+}_{*}=(0,\infty)$.

• And I learned in school that $\mathbb R^+ = (0,\infty)$, and $\mathbb R_0^+ = [0,\infty)$. Well, except that we would have written those intervals as $]0;\infty[$ and $[0;\infty[$ … – celtschk Jun 13 '17 at 6:21

$\mathbb{R}^+$ includes $0$ in Probability Tutorials. $\mathbb{R}^+_0$ is clearer though, so I've used it in the exercises.

I find $\mathbb R_{\geq 0}$ clumsy (I would never write this on a board when working, and I don't often see papers writing functions $f$ defined as $f:\mathbb R_{\geq 0}\rightarrow \mathbb R_{\geq 0}$). $\mathbb R^+$ seems restrictive, not least if you wish to consider higher dimensions. I like $[0,\infty)$, but it can be awkward in certain settings such as $f:[0,\infty)\times (0,\infty)\rightarrow [0,\infty)$ or $$\left\{E\times[0,\infty)\times (0,\infty)\right\}$$ Instead I prefer $\bar{\mathbb R}_+$ for the nonnegative reals and $\mathbb R_+$ for the positive reals. This fits with the notion of closure in $\mathbb R$. (This might not suit those who regularly deal with the extended reals, but given that $\mathbb R$ is so standard, it seems natural to take the closure there.) The function $f: \bar{\mathbb R}_+\rightarrow \bar{\mathbb R}_+$ is then clear and reasonably compact.
Moreover, $$\left\{E\times\bar{\mathbb R}_+\times \mathbb R_+\right\}$$ and $f: \bar{\mathbb R}_+\times \mathbb R_+\rightarrow \bar{\mathbb R}_+$ seem to be substantially easier to read than the interval versions above. Consistency then dictates that $\mathbb Z_+$ denotes the positive integers, and whilst $\bar{\mathbb Z}_+$ is arguably unsatisfactory notation for the nonnegative integers, because the closure story no longer applies, I would adopt it in order to be consistent. You could use $\mathbb N=\mathbb Z_+\cup\{0\}$, but that seems worse. I guess it depends on the problem at hand.

ps. I have also seen $\mathbb R_{++}$ for the positive reals and $\mathbb R_+$ for the nonnegative.
# Prime Ideals in Skew and Q-skew Polynomial Rings

Book - 1994

There has been continued interest in skew polynomial rings and related constructions since Ore's initial studies in the 1930s. New examples not covered by previous analyses have arisen in the current study of quantum groups. The aim of this work is to introduce and develop new techniques for understanding the prime ideals in skew polynomial rings $S=R[y;\tau, \delta]$, for automorphisms $\tau$ and $\tau$-derivations $\delta$ of a noetherian coefficient ring $R$. Goodearl and Letzter give particular emphasis to the use of recently developed techniques from the theory of noncommutative noetherian rings. When $R$ is an algebra over a field $k$ on which $\tau$ and $\delta$ act trivially, a complete description of the prime ideals of $S$ is given under the additional assumption that $\tau^{-1}\delta\tau = q\delta$ for some nonzero $q\in k$. This last hypothesis is an abstraction of behavior found in many quantum algebras, including $q$-Weyl algebras and coordinate rings of quantum matrices, and specific examples along these lines are considered in detail.

Publisher: Providence, R.I. : American Mathematical Society, 1994
ISBN: 9780821825839 0821825836
Branch Call Number: QA3 .A57 no. 521
Characteristics: vi, 106 p. : ill. ; 26 cm
# Talk:Spherical coordinate system

WikiProject Mathematics (Rated B-class, High-importance)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks. Mathematics rating: B Class, High Importance. Field: Basics. One of the 500 most frequently viewed mathematics articles.

Previous discussions pertaining to the old article have been archived to Talk:Spherical coordinate system/Archive. --Carl (talk|contribs) 01:25, 18 September 2006 (UTC)

## Printing

I printed out this page and the images came out as black squares. Get new images, these ones are garbage! —Preceding unsigned comment added by 62.136.133.147 (talk) 08:26, 5 May 2008 (UTC)

The main article would be improved considerably by changing the images to pairs of stereoptic 3D images. 216.99.219.151 (talk) 22:14, 15 June 2009 (UTC)

## Elements

This needs to be fixed: why is there no info for spherical coordinates like they have for cylindrical, such as dV and dS? (From the cylindrical article:)

Line and volume elements
In many problems involving cylindrical polar coordinates, it is useful to know the line and volume elements; these are used in integration to solve problems involving paths and volumes. The line element is $dl = dr\,\mathbf{\hat r} + r\,d\theta\,\boldsymbol{\hat\theta} + dz\,\mathbf{\hat z}$. The volume element is $dV = r\,dr\,d\theta\,dz$. The gradient is $\nabla = \mathbf{\hat r}\frac{\partial}{\partial r} + \boldsymbol{\hat \theta}\frac{1}{r}\frac{\partial}{\partial \theta} + \mathbf{\hat z}\frac{\partial}{\partial z}$.

Even MathWorld has dS... I have added surface and volume. –Pomte 11:13, 10 April 2007 (UTC)

## American convention

This article itself states that the international recommended standard for spherical polars is r, ϑ, φ for distance, zenith, and azimuth. If this is so, why use the American convention for the rest of the article? Shouldn't we use the international convention on the basis of the international applicability of the article?

In the diagram, the "r" length isn't the "r" mentioned in the text, right? It's confusing that it was labelled r. Perhaps there should be different diagrams for the different coordinate systems. Unfortunately, someone has to draw them. Shinobu 10:15, 16 November 2006 (UTC)

The symbols in the text and in the figure don't match. For instance: the azimuth angle in the text is referred to as θ, but the same symbol is taken for the polar angle in the diagram. The polar angle in the text is a small phi (φ) and a capital phi (Φ) is taken for the azimuth angle in the figure. That is very confusing. (RolandRo 12:37, 11 January 2007 (UTC))

I have about the same problem, but all phis in the text are small (lowercase). I think it's the weird TeX small phi, which looks very much like a capital phi. It's a matter of different fonts. Rursus 18:04, 4 February 2007 (UTC)

I changed all φ to Φ to avoid confusion. TeX is showing the uppercase phi. If someone feels φ should be used instead, please revert all instances of them, even in TeX. –Pomte 11:13, 10 April 2007 (UTC)

But TeX isn't showing a capital phi $\Phi\,$, it's showing a lowercase phi $\phi\,$. If you want to use the other TeX lowercase phi $\varphi\,$, that's OK, but please don't use uppercase - the convention is to use lowercase (as in the diagram). --Zundark 11:26, 10 April 2007 (UTC)

Ugh, sorry, I got confused.
Should I revert despite the visual discrepancy? If the \varphi symbol is indeed the convention, I'll convert all \phi to \varphi. I've seen \phi more often than \varphi, but that's no good indicator. –Pomte 11:37, 10 April 2007 (UTC)

I've already reverted. (My revert also reverted your later edit - you can restore that, of course.) I think \phi is more usual than \varphi for this purpose. The visual discrepancy depends on what font you are using - there's no way to fix it for everyone (except by doing everything in TeX), so I think we have to live with it. --Zundark 11:43, 10 April 2007 (UTC)

I do not understand the statement that the so-called American notation makes things more compatible with polar or cylindrical coordinates. On the contrary: using the notation in the article (with ranges indicated for $\phi,\theta$), cylindrical coordinates would be of the form $(\rho,z,\theta)$. Whereas starting with the notation $(r,\theta)$ (or $(\rho,\theta)$) for polar coordinates, cylindrical and spherical coordinates are obtained by simply making the coordinate pair a coordinate triple (with the caveat that $\rho$ would be $\rho=\sqrt{x^2+y^2}$ rather than $\rho=\sqrt{x^2+y^2+z^2}$ with both cylindrical and polar, i.e., the projection onto the $xy$-plane). It seems to me that the debate is more between how physicists and mathematicians use different notations, rather than between an American notation and a notation used elsewhere. Looking at three calculus textbooks written by reputable American authors, all three use the (length, azimuth, zenith) notation. Since I also read French, I took a look at the literature, and there the issue is the same: mathematicians tend to write (length, azimuth, zenith), physicists tend to write (length, zenith, azimuth). The idea is that, making a broad generalizing statement, in mathematics we do not really work with spherical coordinates, they are merely a tool, just another change of basis. Whereas in physics, they are commonly used as a coordinate system proper, in which case it is important that such things as the right-hand rule apply, because it makes computations easier to manage. Jarino1 18:44, 9 July 2007 (UTC)

The difference isn't in the placement of the coordinates; it's which letters are used for which coordinates. In mathematics θ is used for the azimuthal angle and φ is used for the zenith angle. Whereas in physics θ is used for the zenith angle and φ is used for the azimuthal angle. This should be made clear in the article. I myself have studied under the mathematical convention in the United States and it is always denoted (ρ, θ, φ); so I disagree with the placement of the coordinates in the article; but this is a separate issue. The mathematical convention is more compatible with polar and cylindrical coordinates because in the xy-plane, the azimuthal angle (which is denoted by θ) is the same as the polar angle used in polar and cylindrical coordinates (which is also denoted by θ). --Spoon! 21:32, 12 September 2007 (UTC)

I really like the ISO 31-11 notation, but for it to be compatible with polar coordinates it would be a great idea to start using φ for the polar angle. That way all the systems would be compatible with each other, and there would be minimal confusion. 78.91.38.146 (talk) 11:38, 27 November 2008 (UTC)

I'm an unlucky physicist who started in math, so the ISO 31-11 notation always confuses me.
That said, I've had a few textbooks and professors / colleagues who use the math convention, or other stranger conventions, so I didn't even know that someone had tried to make a physics convention before reading this article. I'd be surprised if the ISO 31-11 notation was actually widely dominant, but it is certainly possible. As a result, I'd definitely prefer using the more consistent (and personally less confusing) math convention. naturalnumber (talk) 04:27, 16 January 2010 (UTC)
If it were a matter of 99 to 1, the most common notation should definitely be used. However, from these responses and a few checks of sources, it seems to me that both variants are fairly common. Trying to decide which is more common would be both hopeless and pointless. Thus I would say that either choice is equally good (or equally bad, take your pick), as long as the existence of variant notations is clearly noted, and the same notation is consistently used for the whole article. (On the other hand, due to the way that Wikipedia works, consistency of notation between separate articles is so unlikely to be achieved that it is not even worth trying.) Readers who are used to the other notation will be inconvenienced, but that will happen either way. In any case, both sides must be prepared to read books and articles written in the "wrong" notation; so the unlucky readers may regard this article as a good exercise in that art 8-). As for following the ISO standard, I would quote the saying "the nice thing about standards is that there are so many of them to choose from" 8-) All the best, --Jorge Stolfi (talk) 15:59, 16 January 2010 (UTC)
I think there is no real conflict between the choice of φ for the azimuth and the polar planar coordinates. When you go from 2D to 3D through a revolution (for instance, when moving from elliptic coordinates to prolate spheroidal coordinates, or from Bipolar coordinates to Bispherical coordinates) you simply rotate around an axis and call the new coordinate φ. In the same way, spherical coordinates are a rotation of polar planar coordinates around the Z axis. θ is the same in both systems and the azimuth φ is the new coordinate. --Gonfer (talk) 19:56, 12 February 2010 (UTC)
The 'conflict' is that the angular coordinate of polar coordinates (which ranges over a full circle) seems to be commonly denoted by θ. Cylindrical coordinates are essentially polar + height, and their only angular coordinate is the azimuth. In spherical coordinates there are two angular coordinates, but only one of them ranges over a full 360° circle, namely the azimuth. So consistency would dictate using the same symbol for these three coordinates, and a distinct symbol φ for elevation (which ranges from -90° to +90°) or inclination (which ranges from 0° to 180°). Alas, the world is not consistent, and it is not Wikipedia's role to make it so. All the best, --Jorge Stolfi (talk) 20:45, 14 February 2010 (UTC)
## Longitude
I don't think this statement is correct: The longitude is the azimuth angle shifted 180° from θ to give a domain of -180° ≤ θ ≤ 180°. Assuming the x axis goes from the center of the Earth through the equator at 0° longitude, then I believe the correct wording is: The longitude is the azimuth angle, but for 180° < θ < 360°, subtract 360° so that -180° < longitude ≤ 180°.
Actually, it would probably be more correct to bring up East and West when talking about longitude, but in any case, saying a shift by 180° is wrong (unless the x axis is supposed to point through the 180° longitude point, which I doubt). DaraParsavand 19:01, 18 January 2007 (UTC)
You're right, it's only a shift on half the values. I think it's pointless to discern whether any shift starts at θ = 0° or θ = 90° or anywhere else, so I've left it as an east-west distinction. –Pomte 11:37, 10 April 2007 (UTC)
## Angles
Radians should be used, not degrees. Jarino1 18:55, 9 July 2007 (UTC)
• Both are used, depending on the field of application. As said in the article, mathematicians and physicists seem to prefer radians, while engineers and geographers often use degrees. --Jorge Stolfi (talk) 20:12, 24 November 2009 (UTC)
## Range of azimuth
There seems to be a mistake in the page. atan2 is said to return a result between pi and -pi, and yet phi is said to be in the range of 0 and 2pi. What's going on here? —Preceding unsigned comment added by 132.67.97.20 (talk) 11:30, 19 February 2009 (UTC)
• As explained in the beginning of the article, the range of the azimuth is arbitrary; some use 0 to 360, others use -180 to +180, and possibly others use still other ranges. Actually the range matters only if it is important to have unique coordinates; then the chosen range should span 360 degrees, but may start at any angle. --Jorge Stolfi (talk) 20:12, 24 November 2009 (UTC)
Actually it matters so that one can truly use a given coordinate system to find referenced loci, not just for uniqueness. But of course that is exactly the level of detail that someone would look to this article for, and which it fails to provide because of its over-generality. Dlw20070716 (talk) 09:05, 19 July 2011 (UTC)
## Comparison with Euler angles
Conventional spherical coordinates as described in this article are a combination of rotations along the z and y axes. However, Euler angles are most commonly combinations of z and x rotations. Maybe someone could mention a few words about this issue? I was a bit confused by this difference. I propose to discuss this at Talk:Euler angles. Han-Kwang (t) 10:50, 4 December 2007 (UTC)
## Angle symbols swapped?
Compared to the first diagram, have theta and phi been swapped by the time it gets to this point?
${x}=r \, \sin\varphi \, \cos\theta \quad$
${y}=r \, \sin\varphi \, \sin\theta \quad$
${z}=r \, \cos\varphi. \quad$
137.205.76.233 (talk) 18:34, 9 March 2008 (UTC)
I also think that this is the case. I have been working in OpenGL with Cartesian and spherical coordinates and the current symbols are definitely swapped. Z must be invariant with $\theta$. At least my math classes denoted them so. -67.171.122.139 (talk) 18:23, 13 May 2008 (UTC)
• The naming of the coordinates is a matter of convention. Different authors/systems may use different conventions. --Jorge Stolfi (talk) 18:53, 24 November 2009 (UTC)
• The triple integral article uses phi for elevation and theta for azimuthal. Is there a Wikipedia standard for these we could apply to all articles? —Preceding unsigned comment added by Ban Bridges (talkcontribs) 19:54, 18 March 2010 (UTC)
Yeah, they look swapped to me. http://tutorial.math.lamar.edu/Classes/CalcIII/SphericalCoords.aspx Methinks the coordinate system on this page needs to be standardized.
--Secruss (talk) 20:56, 18 May 2011 (UTC)
• As always, the Wikipedia standard is to use published conventions from reliable publications and document same with references (which this article fails to do). Dlw20070716 (talk) 08:57, 19 July 2011 (UTC)
The formulae in this section use the less-used unsigned spherical coordinates. For formulae using the more common geographical system, see the Great Circle article, this website. The article distinguishes between "inclination" (represented by theta), which starts at zero at the south pole, and "elevation", which starts at zero at the equator. Similarly, the rotation angle given here goes from zero to 360° (2π) whereas longitude starts at the Prime Meridian and extends to plus and minus 180°. L e cox (talk) 23:06, 4 September 2011 (UTC)
## θ is referred to as elevation
φ is referred to as the azimuth and θ is referred to as elevation, right? —Preceding unsigned comment added by 131.180.34.78 (talk) 13:01, 22 May 2008 (UTC)
• This is explained at the beginning of the article. It depends; some authors may swap the letters and/or use inclination instead of elevation.
## Immediate Action
Is this article going to get fixed soon, by someone smart enough? It is full of errors. Anyone using this as a reference (and not knowing any better) will have a very hard time learning accurately. —Preceding unsigned comment added by 98.179.13.179 (talk) 22:58, 4 October 2009 (UTC)
• Please point out the errors. --Jorge Stolfi (talk) 18:49, 24 November 2009 (UTC)
• The errors are everywhere! I just fixed a handful of them. Somebody destroyed this page. —Preceding unsigned comment added by 129.65.149.8 (talk) 00:57, 8 December 2009 (UTC)
• Please check the conventions used in cylindrical coordinate system and in this article (in the introduction, and right below the formulas in question). In both articles the azimuth angle is denoted by φ, not θ; the latter is the spherical inclination or elevation. Thus the conversion preserves φ, while θ must be computed from z, or vice versa. (There was one θ = θ which should have been φ = φ; it's fixed now.) Also, the spherical radial coordinate is denoted here by r, while the cylindrical radial coordinate is ρ, here and in the other article. Am I missing something? Perhaps you are used to a different convention? I am sorry that you have wasted your time, but the conventions used in those two articles appear to be fairly common (if not predominant) and seem acceptable to most editors. All the best, --Jorge Stolfi (talk) 03:25, 8 December 2009 (UTC)
• This article is not consistent in its notation. The transformation formulas to go from Cartesian to spherical coordinates and vice versa (that I wrote some time ago) use the angle θ as the polar angle, measured from the zenith (which is the ISO standard). If at the beginning of the article the elevation is called θ, the formulas are confusing. The same goes for the images on the right. Elevation (latitude) should be called λ and θ reserved for the polar angle. —Preceding unsigned comment added by Gonfer (talkcontribs) 15:27, 12 February 2010 (UTC)
• I agree that using theta for both elevation and inclination is confusing, but that unfortunately seems to be the situation in the real world. The use of lambda for elevation seems common in geography, but geographic coordinates (lat/lon/alt) are not quite spherical coordinates, due to the way latitude and altitude are defined (reference spheroid and all that).
Outside geography, theta and phi seem to be used for [inclination or elevation] and [azimuth] — and not even "respectively"! The article tries to be consistent (if it is not, it should be fixed) and to give formulas for both conventions when possible; but even so it is necessary to say every time what theta is, for the benefit of readers who are used to the other convention.
• I think for purposes of this article, we can ignore that the earth is only a spheroid. A spherical coordinate system can be readily projected onto a spheroid. Latitude and longitude together with altitude are indeed a complete spherical coordinate system, and as they are the only one most people are familiar with, it behooves any Wikipedia author working on this article to treat it with respect and expertise. With zero altitude defined as a theoretical mean sea level, most people never notice that they don't live on a sphere. Dlw20070716 (talk) 08:51, 19 July 2011 (UTC)
## Error in Cartesian formulas?
Anonymous user 146.87.52.54 added a note to the "Geographic coordinates" section saying "There is a mistake on the 3rd equation of the formula of the cartesian coordinates!". Would the author please clarify? (Perhaps he/she assumed that θ in the Cartesian formulas was latitude? It is actually inclination, that is, 90° minus the latitude.) --Jorge Stolfi (talk) 18:48, 24 November 2009 (UTC)
Since geographical coordinates use latitude and not inclination, there is an obvious error to be addressed. A Wikipedia author is not free to redefine the geographical coordinate system to include inclination! If it is mentioned at all, you'd better have an ironclad reference for it. Dlw20070716 (talk) 08:22, 19 July 2011 (UTC)
Maybe he's referring to the fact that the equations for theta and phi appear to be swapped? See Mathworld. mitch_feaster (talk) 17:08, 10 August 2010 (UTC)
## Directrix?
Someone has introduced the name "directrix" for the reference direction on the equatorial plane (azimuth zero). However the directrix article does not mention such a sense, and I have never seen the word used for that. Is there a reference for it? Thanks and all the best, --Jorge Stolfi (talk) 05:53, 8 December 2009 (UTC)
Hi there, professor Jorge! It was me who edited this page without first signing in. I was looking for the standard for spherical coordinates, which I was about to use in my own mathematical derivation project, when I noticed this article could use some improvement, to state in clear, unambiguous, and elegant form the most popular of the standards used to date. I'm honored to see that you adopted some of my suggestions (though I fail to see why you removed the limits in the definition of the angular coordinates). As you point out, I used the name "directrix" incorrectly to name a reference line, something from my distant, confused memory: it should be taken out and I'll do so right away. I made several minor contributions to various articles previously, without noticing that I had offended anyone, until now. This is my first attempt at having a discussion with another contributor and I'm getting a better picture of how it all works. I now also have my own, very brief, User Page. Regards. --Toolnut (talk) 08:24, 8 December 2009 (UTC)
However, note that the ranges of the coordinates need not be the ones you put in. The ranges are relevant only if it is important to have unique coordinates for each point; and, even then, there are several choices in common use. This issue is discussed in a later section.
More generally, we must avoid putting things in the definition (such as the letters z and x for the axes) that are not universal. One can do that in a textbook or a paper, but an encyclopedia article must limit the definition to the really essential concepts, and discuss conventions separately. Indeed, I am a bit unhappy with the current "Definition" section, because it uses the letters r, θ, φ as if they were part of the definition. Actually there are several conventions in use, and Wikipedia should not take sides on that (just as we should not take sides on degrees vs. radians). We must eventually pick a notation, but it should be clear that it is only one of various common choices. All the best, --Jorge Stolfi (talk) 15:28, 8 December 2009 (UTC)
PS. Also, the term "clockwise" only makes sense for physical three-space, and is not defined in mathematics or for more abstract three-dimensional manifolds (say, Pressure/Temperature/Time). In general, one must explicitly define the positive sense for azimuth. Moreover, the right-hand convention is not universal (if I am not mistaken, on planets other than the Earth longitudes are measured in the opposite sense). --Jorge Stolfi (talk) 16:32, 8 December 2009 (UTC)
## Linear algebra, analytic geometry, or geometry?
### Unit Vectors, Position Vectors, Frame of Reference, Orthonormal Basis
The above words should be incorporated in the definitions of any coordinate system. There is a reference zenith direction which is more accurately represented by a "unit vector," and it should be called exactly that, and may be impartially named $\hat z$ for succinctness in references to it. There is a reference azimuth direction, also, more precisely a unit vector, which may be called $\hat x$, again, for the purposes of succinctness in the descriptions of the various conventions used. We need to adopt a convention in order to describe other conventions in precise mathematical terms. These two constitute an orthonormal basis, a frame of reference, relative to which the coordinates of a point, represented by a position vector OP, may be defined. "Spherical coordinates" is a mathematical phrase, more than geographical or astronomical, and it should therefore be treated in precise terms. Three-dimensional coordinate systems have their roots in the Cartesian system, which universally uses $\hat x$, $\hat y$, & $\hat z$ as its unit vectors, and which are just as universally mapped to the spherical coordinate system as I suggested.
• "Mathematical precision" does not mean "mathematical formalism". The latter may be more concise, but I am not sure that it is more understandable or even more precise. You and I are used to 'algebraic prose' like "let x be a direction on the plane P orthogonal to z"; and, for instance, we understand that this sentence is defining not only x but also P. Well, I do not think we can assume that much from readers who come to this article to know what "spherical coordinates" are. I suspect that a 'plain English' definition, like the current one in the article, will actually be easier for them to understand (and no less precise) than an 'algebraic prose' definition. The term "direction", in particular, is mathematically precise (in this context, it has no possible interpretation other than the one intended). It is no less precise than "plane" and "point". Ditto for the other terms.
A "unit vector" is a vector with unit length; but that length is not needed anywhere in the definition of spherical coordinates, so using a unit vector to define a direction (outside of linear algebra) is a mathematical overkill. (My favorite way of defining "direction" is a point at infinity in oriented projective space; but I will not use that here, either 8-). Moreover, the concept of "vector" is actually much harder to define than a spherical coordinate system. If we were defining spherical coordinates *relative to a previously defined Cartesian coordinate system*, it would be natural to use the unit vectors x, y, and z; but it is not the case here. The reasonable foundation for the article is Euclidean geometry. We cannot assume (or pretend, or postulate) that Cartesian coordinates are more fundamental than spherical ones. Furthermore, two orthogonal vectors do not define a Cartesian system in Euclidean 3-space, because the direction of the third vector cannot be defined mathematically; it must be chosen explicitly. (In Euclidean geometry there is no absolute handedness.) In spherical coordinate system, the third vector (y) does not appear anywhere except to define the positive sense of turning on the reference plane. So, again, if one is defining spherical coords directly, rather than though a prior Cartesian system, it is cleaner and simpler to choose the sense of positive rotation directly. All the best, --Jorge Stolfi (talk) 21:26, 8 December 2009 (UTC) The term "direction" is no less imprecise than "vector", it's just the fact that you don't use the former so much when you're actually establishing the foundations of 3-D geometry. I have attempted to introduce the concept of a vector through illustration, for now, making do with the less-than-ideal graphics that are there. My reason for the introduction and use of (a) unit vectors $\hat z$ & $\hat x$ to represent the reference directions and (b) the position vector OP, from the outset, has to do with the fact that Cartesian, spherical, or any other system's coordinates may later be more conveniently derived from, and defined by, just these two orthonormal (basis) vectors and the position vector: $\hat y = \hat z \times \hat x$ $\hat r = \frac{OP}{|OP|}$ $\hat \phi = \frac {\hat z \times \hat r}{|\hat z \times \hat r|} \text{, unit vector in } \hat z \times \hat r \text{ direction}$ $\hat \theta = \hat \phi \times \hat r$ $x=OP \cdot \hat x,\,y = OP \cdot \hat y,\,z = OP \cdot \hat z$ $r = OP \cdot \hat r$ $\theta = \arccos ( \hat r \cdot \hat z )$ $\phi = \arg ( \hat x \cdot \hat r + j( \hat x \times \hat r \cdot \hat z) ),\, j=\sqrt{-1} \text{, or}$ $\sin \theta \cos \phi = \hat x \cdot \hat r ,\,\sin \theta \sin \phi = \hat x \times \hat r \cdot \hat z$ All it takes is the definition of dot product and cross product to put things in the above, unambiguous, symbolic form: it is immeasurably harder to solve problems in 3-D geometry without these well-established concepts. The use of these two basis vectors is essential regardless of what orthogonal coordinate system you're deriving. Our only connection to the Cartesian system is the reuse of two of its coordinate names. The reason why we cannot use $\hat r,\,\hat\theta,\,\hat\phi$ as basis vectors is that they are position-dependent and not fixed in space; they are only used to describe the orientation of a vector in a vector field. Also, as I've shown above, two orthogonal vectors do define the Cartesian (or any other) system, as the third vector is derived from the first two. 
Finally, the concept of "position vector" is in every textbook in engineering and physics I've ever read, so I don't understand why you are taking pains not to use it here, in favor of "Euclidean distance" (is that more understandable?) or "line segment." I know, not everyone would like to get deeper into this subject, but for the benefit of those who might, it doesn't hurt to put this different perspective out there. Regards. Toolnut (talk) 08:56, 9 December 2009 (UTC)
• You are assuming that one can or should define spherical coordinates only after defining Cartesian coordinates, dot product, cross product, etc. This approach is a big overkill, and not appropriate for an encyclopedia! It may be adequate for a textbook on calculus or a university math curriculum, where students are assumed to learn one before going on to the other. But that is not a reasonable assumption for Wikipedia readers. In any case, Cartesian coordinates and algebra (much less linear algebra!) are absolutely NOT necessary to define spherical coordinates, and do NOT make them easier to understand. In geography, for instance, people have always used and defined spherical coordinates directly, without even mentioning Cartesian ones. Linear algebra (vectors, dot product, cross product, etc.) is definitely NOT "the foundation of 3D geometry", only of analytical geometry. People have been and are doing some pretty advanced 3D geometry, including spherical coordinates, without reference to any of those concepts. Finally, the cross product formula can be used ONLY after you have chosen all THREE Cartesian basis vectors, so you cannot use it to define the third vector in terms of the other two. Either you choose the third vector explicitly, or you choose a handedness for 3-space (which is not "built in" in Euclidean geometry). You cannot escape that. So please keep the original simple, direct, algebra-free, non-circular definition; there was nothing wrong with it. Please. All the best, --Jorge Stolfi (talk) 15:29, 9 December 2009 (UTC)
PS. I added a note on the "position vector". However, note that it too is a concept of linear algebra that is useful only if you are going to use linear algebra operations on it (which is surely the case in all the technical texts that you mention). The local "spherical" frame vectors $\hat r$, $\hat \varphi$, $\hat \theta$ would be fine in the "calculus" section of the article (together with line element, etc.), but are not necessary to define spherical coordinates per se. And one must note the degeneracy at the poles. All the best, --Jorge Stolfi (talk) 16:03, 9 December 2009 (UTC)
### Dot product, cross product, handedness; abstract and concrete
1. You insist that one "cannot use [the concept of cross product] to define the third vector in terms of the other two." Do we at least agree that any 3-D frame of reference can be represented (i.e. fixed in space) by exactly two basis vectors? Does the concept of "right angle" assume knowledge of the Cartesian coordinate system? The third basis vector in establishing such a system is simply defined as being simultaneously at right angles to the first two, with a particular handedness: you fix the system in space by only fixing two of these basis vectors; the last one follows in a predetermined way.
• Yes, *once you choose a "right" handedness for the space*, then two orthogonal directions define a third direction orthogonal to both, unambiguously, by that "right hand" rule.
But classical Euclidean 3D geometry usually does not distinguish "right handed" from "left handed". I presume that this "hand neutrality" is a consequence of Euclidean geometry having been developed first for the plane, in contexts where the handedness is irrelevant. Consider for example the theorem or axiom "two triangles are equal if corresponding sides are equal": it implicitly assumes that you can flip one triangle over to match it with the other triangle. For *physical* 3D space (the one we move in) we of course cannot "flip" a solid over, so one may assume that "right-handed" is well defined and use that to define the third axis. (But you should read Martin Gardner's discussion in The Ambidextrous Universe. And if the physical universe turns out to have the topology of a non-orientable 3D manifold, then we are in trouble again...) However, if we are choosing coordinates for some abstract 3D space (such as the state of a compressed gas), then there is no natural definition of "right-handed", and one must specify the three axes explicitly. I suppose that there are people who define and work with "oriented Euclidean 3D space" (classical Euclidean space plus a distinguished handedness), but that does not seem to be common.
• PS. I forgot to say: yes, in Euclidean geometry the concept of "perpendicular" is defined (and almost primitive). In the plane, two lines are perpendicular if all four angles between them are congruent (and "congruent" is a primitive concept). "Parallel" is a more primitive notion: two lines are parallel if they lie on the same plane but never meet. The existence of parallel lines is an axiom (much disputed until Lobachevsky/Riemann).
• Thanks! I guess I was confusing the definition of "frame of reference" with the definition of a "coordinate system," which requires further elaboration, such as the definition of the third basis vector (in Cartesian coordinates) even though it is based on the first two; the function that defines the third basis is what adds that third dimension. In spherical coordinates, there are three functions of the same basis vectors defining three alternate dimensions (not basis vectors, but nevertheless having the ability to uniquely define a point in space, except at the degeneracies). Toolnut (talk) 21:48, 9 December 2009 (UTC)
2. Can you show me the derivation of the most general equations for 3-D rotation, a complex geometry problem, without the use of vector geometry algebra? Is it just as easy to accomplish? Toolnut (talk) 18:30, 9 December 2009 (UTC)
• In geometry, a rotation is defined as a mapping of points to points that preserves all distances between point pairs and has only one line's worth of fixed points. So there are no equations to derive. (The last condition is only needed to exclude reflections and screw motions; the first condition alone defines the class of congruences or isometries, which are much more important in geometry.) Given enough data to determine the rotation (for example, the fixed axis line and two directions perpendicular to it) there is a simple 3D geometric construction that will yield the rotated image of any given point. Again without any formulas. What you ask is a problem of analytic geometry. All the best, --Jorge Stolfi (talk) 19:36, 9 December 2009 (UTC)
• Right, I was talking about 3D rotation about an arbitrary axis and the derivation of the new coordinates of an arbitrary point (x, y, z), or (r, θ, φ), from knowledge of the direction numbers of the axis of rotation (represented by a unit vector).
Is "analytic geometry" really what I'm after, not so much "linear algebra"? I'm not aware that linear algebra deals with cross products and their physical meaning, or the physical meaning of dot products, either, especially because it is more general than just 3D space. Cross products don't carry over to higher dimensions very well, even without a physical meaning. (My last attempt to do that implied that a cross product becomes a trinary operation in 4D, an operation on three 4D vectors.) Anyway, there is no easier way to do this task (derive the coords of a rotated point) than through the use of vectors and their dot and cross products, is there?Toolnut (talk) 21:48, 9 December 2009 (UTC) • Cross products can be generalized to any dimension n and any number mn of vectors as follows: assemble an m by n matrix M where each row is one of the vectors, in the order given. Then take the m by m minors, that is, the determinants of all m by m submatrices (formed by choosing m out of the n columns) of M, each multiplied by (−1)p where p is the "parity" of the corresponding column subset. List these minors in some canonical order, to get a vector with choose(n,m) coordinates. These numbers identify the linear subspace spanned by the given vectors --- uniquely, except by a common scale factor. In projective geometry these are called the Grassman or Plücker coordinates of that subspace; I don't know what they are called in linear algebra. The parity p is the number of column swaps that one must perform to bring the chosen columns to columns 1..m, without changing their relative order or that of the remaining columns. In particular, if m = 2 and n = 3, and one lists the determinants in the order {2,3}, {1,3}, {1,2}, one gets the standard 3D cross product; the respective parities are 2, 1, 0. If m=1 and n=2, one gets the formula for rotating a 2D vector by 90 degrees. If m=n−1, as you noted, one gets another n-vector that is orthogonal to the given ones. If m=n one gets a single number, the determinant of M; which is the volume of the parallelotope whose sides are the given vectors, with a sign that depends on their n-dimensional handedness (in the given order). All the best, --Jorge Stolfi (talk) 22:17, 9 December 2009 (UTC) • As for your other questions: in linear algebra 'vectors' are either abstract objects defined indirectly by their properties (the usual mathematicians' view), or lists of real numbers (a more pdestrian, let's say "engineers'" view). Analytic geometry ties algebra and linear algebra to geometry, usually by choosing a Cartesian coordinate system, mapping points to Cartesian triplets, and then using algebra and engineers' linear algebra on those. However one can also define a vector in geometric terms (as an Euclidean "translation", for example) and use abstract linear algebra on them, without ever choosing a coordinate system. Of course, if you want coordinates out, you need coordinates in. In abstract liner algebra one can define an abstract dot product too, as being a binary operation with certain properties. Once you have a dot product you can define "distance" and "perpendicular", and then you can do with abstract vectors anything that you can do in Euclidean geometry, for any dimension n --- all abstractly. (But you don't get handedness yet; for that you need an n-ary "mixed product" operation, the n by n signed volume determinant, in the engineers' view.) 
You can define what a "rotation" is in abstract linear algebra, and "compute" its effect from given data (as in the Euclidean rotation example above), all without coordinates; but again, if you want coordinates out, you need coordinates in. Hope it helps. All the best, --Jorge Stolfi (talk) 22:54, 9 December 2009 (UTC)
## Inclination or elevation, which should come first
I have reworded the lead to define the 'inclination' variant first, then 'elevation'. The rest of the article uses mostly the inclination variant, and that seems to be the preference of some editors (however the 'elevation' fans may have been a silent majority). On the other hand, the 'inclination' variant is a bit more awkward to define, since one needs to introduce both the zenith and the reference plane; whereas to define the 'elevation' variant one needs only the reference plane.
I think the inclination variant is pretty unnatural for the reason you stated (it requires a zenith not otherwise needed). I have only encountered elevation variants (I think). Both the world coordinate system and the celestial coordinate system are elevational, and because of their importance and familiarity I would recommend that they should come first in an article that attempts to be generally applicable like this one does. By the way, the celestial coordinate system uses the term declination, not elevation, and not inclination. Dlw20070716 (talk) 07:58, 19 July 2011 (UTC)
I first encountered elevation, in undergraduate studies. Only later did I encounter instances of the inclination variant. I would summarise them as follows: The inclination variant is convenient for emphasising symmetry about the pole. Conversely, the elevation variant is convenient for emphasising symmetry about the reference plane. ...Feel free to quote (and cite) me. :-) —DIV (138.194.11.244 (talk) 07:53, 29 August 2011 (UTC))
## Figure order
The first three figures were recently reordered. I have partially restored their original order. The two line diagrams should come first since they illustrate the two variants of spherical coordinates (inclination vs. elevation) defined in the first paragraph. The raytraced figure is somewhat redundant; it does not show the coordinates proper (only their isosurfaces for one point), and requires a much longer caption to make sense. On the other hand, I have retained the new order between the two line diagrams (with inclination before elevation), since I have also reordered the definitions of the two variants in the lead paragraph (see above). Note that swapping the two figures requires rewriting their captions. All the best, --Jorge Stolfi (talk) 21:30, 14 February 2010 (UTC)
## Unique spherical coordinates for origin?
The section "If it is necessary to define a unique set of spherical coordinates for each point..." does not appear to deal with what happens at the origin. Could someone who knows comment on any approaches or conventions there might be for this? Thanks! Gwideman (talk) 04:33, 17 March 2010 (UTC)
## Describe to me why one of the angles has to be 0 to pi
The range of the angles bothers me. 0<r< 0<θ —Preceding unsigned comment added by 168.18.148.126 (talk) 23:16, 11 August 2010 (UTC)
Consider longitude and latitude: longitude goes from 180° east to 180° west, i.e. -π to π, or equivalently 0 to 2π. Latitude goes from 90° south to 90° north, i.e. -π/2 to π/2. If you measure the angle from the north pole then this will be 0 to π.
If they both went 0 to 2π then you would actually cover the sphere twice. --Salix (talk): 23:29, 11 August 2010 (UTC)
## Uniqueness of Representation
The section on unique representations is not quite correct. For example, if $r = 1$ and $\theta = 0$, then we are identifying the north pole of the unit sphere for any value of $\phi$. Austinmohr (talk) 01:28, 2 September 2010 (UTC)
I fixed it. --Patrick (talk) 07:07, 2 September 2010 (UTC)
## Inconsistency with Dimensions
Using the stated formulas for converting Cartesian to spherical coordinates returns results I believe are not consistent with the spatial dimensions in "Dimensions". ("The spherical coordinates (r, θ, φ) of a point can be obtained from its Cartesian coordinates (x, y, z) by the formulas ....") E.g., converting (1,0,0) returns θ = π/2 and φ = π/2; according to "Dimensions" it ought to be θ = 0. The inconsistency can well be in "Dimensions": θ and φ might adhere to a different convention with respect to their relation to x, y, z than this article. Could somebody with knowledge please check and align if required? Thanks, it is appreciated. Aporio (talk) 20:50, 13 October 2010 (UTC)
## dxdy
$dx=r\cos\theta\cos\phi \,d\theta-r\sin\theta\sin\phi \,d\phi$
$dy=r\cos\theta\sin\phi \,d\theta+r\sin\theta\cos\phi \,d\phi$
then
$dx\,dy=r^{2}\cos\theta\sin\theta\left(\cos^{2}\phi-\sin^{2}\phi\right)d\theta \,d\phi\ne r^{2}\cos\theta\sin\theta \,d\theta \,d\phi$
Does anyone know why it comes out like this??? Anyway, just replace dA in one system with dA in another system, and one dV with another dV; no other adjustment is required. Jackzhp (talk) 13:45, 28 March 2011 (UTC)
It's $dx\wedge dy$. See differential form. Sławomir Biały (talk) 14:54, 28 March 2011 (UTC)
## Distinguishing between inclination and elevation variables
The distinction would be achieved by using $\theta_{inc}$ (or $\theta$) and $\theta_{el}$ instead of using the same $\theta$'s in this article. Adding a description of the order of the 3D variables would, furthermore, be useful: $(r,\ \phi,\ \theta_{el})$ in a right-handed coordinate system and $(r,\ \theta_{el},\ \phi)$ in a left-handed coordinate system. Kkddkkdd (talk) 11:07, 21 May 2011 (UTC)
As evidenced by all the discussion above, this article has major problems, mostly because of conflicting usages by various authors and communities. That, and it has no references at all (and nobody so far seems to have noticed because of all the other problems)! Because of all the conflicting conventions, I recommend turning it into a disambiguation-like page and redirecting to pages that better discuss specific spherical coordinate systems as they are actually used in specific systems, like geographical location, mathematics, computer graphics, etc. The page as it stands is too confusing and just plain erroneous to be useful. It would be useful to have an article that covers the conventions used in mathematics if done well, but that appears to be hopeless because of the confusion reigning therein.
Dlw20070716 (talk) 07:44, 19 July 2011 (UTC)
## Incorrect Formula from Cartesian to Spherical
The spherical coordinates (r, θ, φ) of a point can be obtained from its Cartesian coordinates (x, y, z) by the formulae
$r=\sqrt{x^2+y^2+z^2}$
$\theta=\cos^{-1} \left( \frac{z}{r}\right)$
$\varphi = \tan^{-1} \left( \frac{y}{x} \right)$
Wolfram disagrees with this here: http://mathworld.wolfram.com/SphericalCoordinates.html
Images:
http://mathworld.wolfram.com/images/equations/SphericalCoordinates/Inline33.gif
http://mathworld.wolfram.com/images/equations/SphericalCoordinates/Inline36.gif
http://mathworld.wolfram.com/images/equations/SphericalCoordinates/Inline39.gif
— Preceding unsigned comment added by 129.116.33.62 (talk) 21:34, 5 September 2011 (UTC)
No, Wolfram and this article are agreeing. You just need to study up on your trigonometry to understand the relation. cos(theta) = adjacent leg of triangle divided by hypotenuse of triangle. So cos(theta) = z/r and theta = cos−1(z/r). — Preceding unsigned comment added by 99.137.50.90 (talk) 12:35, 23 October 2012 (UTC)
Well, in our version of these formulae theta and phi are swapped compared to Wolfram. But that is not incorrect; there are two common ways to define them, as explained in our article. We also explicitly state which of the definitions we use for the formulae just above them. I think that's all we can do. — HHHIPPO 17:22, 23 October 2012 (UTC)
My apologies for not knowing the proper way to point this out to you WikiPeople, but it seems the "theta = arccos(-z/r)" equation should read "theta = arccos(z/r)", no? — Preceding unsigned comment added by 71.237.117.249 (talk) 16:09, 28 December 2013 (UTC)
## Need to deal in the main with the main topic
The main topic here is the mathematical spherical coordinate system. It should not have the stuff from celestial coordinate system and geographical coordinate system mixed into the main article. There are separate articles on those. I think it is a good idea to have a separate section on those pointing out the differences from what is done in maths, but this article has become a mess by mixing them into the main article. Dmcq (talk) 20:08, 25 March 2012 (UTC)
Agreed. a13ean (talk) 20:13, 25 March 2012 (UTC)
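Since the same conversion questions keep recurring in the threads above, here is a small illustrative sketch (plain Python, my own naming, not text from the article) of the two conventions, the atan2 range, and the azimuth-to-longitude wrap discussed in the Longitude section:

```python
import math

def cartesian_to_spherical_iso(x, y, z):
    """Physics/ISO-style convention: theta = inclination from +z in [0, pi],
    phi = azimuth in the xy-plane; atan2 returns values in (-pi, pi]."""
    r = math.sqrt(x * x + y * y + z * z)
    if r == 0:
        raise ValueError("the origin is a degenerate point, as discussed above")
    theta = math.acos(z / r)   # inclination (zenith angle)
    phi = math.atan2(y, x)     # azimuth; atan2 handles x = 0 safely
    return r, theta, phi

def cartesian_to_spherical_math(x, y, z):
    """Common 'math' convention: the same quantities with the two angle
    symbols swapped, i.e. (rho, theta, phi) with theta = azimuth."""
    r, inclination, azimuth = cartesian_to_spherical_iso(x, y, z)
    return r, azimuth, inclination

def azimuth_to_longitude(az_deg):
    """Wrap an azimuth in [0, 360) to a longitude in (-180, 180],
    as proposed in the Longitude thread above."""
    lon = az_deg % 360.0
    return lon - 360.0 if lon > 180.0 else lon

# Example: (1, 0, 0) has inclination 90 degrees and azimuth 0.
print(cartesian_to_spherical_iso(1.0, 0.0, 0.0))  # (1.0, 1.5707..., 0.0)
print(azimuth_to_longitude(270.0))                # -90.0
```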
Korean Journal of Financial Studies 2013;42(1):263-284.
Published online February 28, 2013.
Principal-Protected ESOP for Diversifying Employees' Financial Risks
Hyoung Tae Kim, Hong Sun Song, Hyo Seob Lee
Abstract
An employee stock ownership plan (ESOP) allows employees to purchase and hold shares in the company at favorable conditions. ESOPs are popular because they offer incentives to work, and the profit from a higher stock price helps employees accumulate savings. But problems arise when the company is in danger of bankruptcy. In this case, employees can be exposed to dual risks: they may not be able to recover unpaid wages, and stock ownership will incur investment losses. In the volatile Korean stock market, the potential losses from stock price declines are cited as the biggest obstacle to ESOPs being used more widely. As an alternative, this study proposes a "principal-protected ESOP" which spreads the dual risks and helps employees save money more stably. Based on the mean-variance expected utility model, we show that the principal-protected ESOP significantly increases utility for employees. We also present a desirable principal-protected ESOP structure that takes into account moral hazard to provide optimal utility to employees. Considering the cost increases from moral hazard because of the principal protection structure, an ESOP that has partial principal protection and distributes some investment returns to the ESOP provider (financial firm) is found to provide the highest expected utility to employees. We hope the results of this study will contribute to a more incentive-compatible ESOP system and provide fresh policy implications for Korea's ESOP development.
Key Words: Dual Risks, Employee Stock Ownership Plan, Mean-Variance Expected Utility, Moral Hazard, Principal-Protected
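For readers unfamiliar with the criterion named in the abstract, the generic textbook form of mean-variance expected utility (an illustration, not the paper's exact specification) scores a random terminal wealth $W$ for an employee with risk-aversion coefficient $\lambda > 0$ as

$$U(W) = \mathbb{E}[W] - \frac{\lambda}{2}\,\operatorname{Var}(W).$$

Under such a criterion, a guarantee that truncates downside losses lowers $\operatorname{Var}(W)$, which is the mechanism by which a principal-protected design can raise expected utility even after the protection fee reduces $\mathbb{E}[W]$.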
# A and B borrowed Rs. 5,000 and Rs. 8,000 respectively at the same rate of interest for 8 years. B paid Rs. 960 more as interest than A. Find the rate
A and B borrowed Rs. 5,000 and Rs. 8,000 respectively at the same rate of interest for 8 years. B paid Rs. 960 more as interest than A. Find the rate of interest.
1. 1%
2. 4%
3. 8%
4. 9%
Correct Answer - Option 2 : 4%
Given:
Time = 8 years
Principal for A = Rs. 5,000
Principal for B = Rs. 8,000
Formula used:
Simple interest = (p × r × t)/100, where p, r, and t are the principal, rate, and time respectively
Calculation:
(8000 × r × 8)/100 = (5000 × r × 8)/100 + 960
⇒ (8000 × r × 8)/100 − (5000 × r × 8)/100 = 960
⇒ 640r − 400r = 960
⇒ 240r = 960
⇒ r = 4%
∴ The rate of interest is 4%
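As a quick numeric check of the calculation above (a minimal sketch; the variable names are mine):

```python
# Verify: with r = 4% simple interest, B pays exactly Rs. 960 more than A.
def simple_interest(p, r, t):
    return p * r * t / 100

for r in (1, 4, 8, 9):  # the four answer options
    diff = simple_interest(8000, r, 8) - simple_interest(5000, r, 8)
    print(r, diff)      # only r = 4 gives 960.0
```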
Apr.–June 2014 (vol. 21, no. 2), pp. 22–31. © 2014 IEEE. Published by the IEEE Computer Society.
Clustering Faces in Movies Using an Automatically Constructed Social Network
Mei-Chen Yeh, National Taiwan Normal University
Wen-Po Wu, Reallusion Corporation
Clustering faces in movies is a challenging task because faces in a feature-length film are relatively uncontrolled and vary widely in appearance. Such variations make it difficult to appropriately measure the similarity between faces under significantly different settings. In this article, the authors develop a method that improves face-clustering accuracy by incorporating the social context information inherent among characters in a movie. In particular, they study the relation of social network construction and face clustering and present a fusion scheme that eliminates ambiguities and bridges information from two fields. Experiments on real-world data show superior clustering performance compared with state-of-the-art methods. Furthermore, their method can help incrementally build a character's social network that is similar to a manually labeled example.
To enable efficient actor searches and content management, automatic face-clustering techniques can help recognize faces in films and video footage and organize them by character names. However, despite the vast amount of research conducted on this subject, 1-4 the face-clustering task remains highly challenging, especially for feature-length films, which are relatively uncontrolled and vary widely in appearance. Numerous factors other than character identity—such as lighting conditions, facial expressions, poses, and partial occlusions—change the way a face appears. Thus, state-of-the-art face description and modeling methods have had only limited success in real-world testing.
Alternatively, social network techniques have increasingly gained attention in movie content analysis because they provide additional cues to audio-visual features for organizing the growing volumes of available movie data. A social network is a collection of relationships that demonstrate how people are socially connected to one another. 5,6 A weighted undirected graph can represent the social context information, where vertices denote the people and edges indicate the social closeness of two individuals. The analyses of social networks have shown some success in discovering hidden structures that cannot be directly perceived by analyzing low-level audio-visual features. 5,7,8
In this article, we study the relation of social network construction and face clustering in movies and TV. More specifically, we look at how knowledge of the characters' social activities can enhance face clustering and how clustered faces can help estimate the relationships among characters. There are several interesting connections between the two tasks. For example, an effective social network is usually built upon a robust face-clustering result. 5 Similarly, because the communities among characters in a movie are relatively limited compared with those on social networking websites, the social relationships inherent in a movie should benefit from the face-clustering task. 8 We demonstrate through experiments that our proposed framework may eliminate a certain level of ambiguity in each case and bridge information from both fields.
For many real-world applications, the connection could produce more valuable information, such as a name and the social partners associated with a detected face, which in turn would result in considerable benefits to media content management. To automate the analysis process, we need to address two technical problems:
• How do we construct characters' social networks from a movie?
• How do we use social networks for face clustering?
A preliminary version of the work 9 focused on constructing characters' social networks, along with a fully automatic approach for solving the problem. We extend that work here by using the social network to facilitate the face-clustering problem and performing an empirical analysis on how the two visual tasks benefit each other. Our empirical study shows that an automatically built social network both provides meaningful information that describes characters' social interactions and helps resolve ambiguities when distinct identities are captured under similar shooting conditions. However, the extraction of useful cues from a noisy social graph is not trivial—the discriminative power of social features for face clustering depends on the way we utilize them. We examine various design choices and study how social contexts should be utilized to truly aid in the face-clustering process.
Proposed Approach
To iteratively perform face clustering and social network construction in movies, we propose a framework to connect these tasks (see Figure 1). We start with a state-of-the-art appearance-based face recognition approach 10 and describe our implementation and experimental results, which are considered the baseline performance. Then, we present a fully automatic approach for constructing roles' social networks from movies 9 and a new method that explores the social contexts inherent in a movie for enhancing the clustering performance on an unconstrained face set. (See the "Related Work in Movie Content Analysis" sidebar for earlier work in this field.)
Figure 1. The framework of bridging face clustering and social network construction. Given grouped faces, a social network that describes characters and their interactions can be automatically built. The social network is then used to extract social features that enhance face clustering.
Associate-Predict Model: The Baseline
The associate-predict model addresses the issue of large intrapersonal variations in face recognition. 10 The model is built on an extra generic identity dataset (machine memory) in which each identity contains multiple images with considerable intrapersonal variations. Unlike conventional recognition approaches that directly compare two faces, each input face is first associated with a similar generic identity, and its new appearance is predicted under the similar setting of the other face. The face matching is finally performed on the predicted new face pairs. Following a memory construction procedure similar to that described in earlier work, 10 we built the machine memory from the Multi-PIE dataset. 11 This extra generic identity dataset contains 129 subjects, and each identity has five poses and five lighting conditions.
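To make the associate-predict idea concrete, here is a minimal sketch; the memory layout (identity-by-setting grids, as in Multi-PIE) and the plain Euclidean comparison are illustrative assumptions, not the model's exact implementation:

```python
import numpy as np

# Assumed memory layout: memory[identity][setting] -> feature vector,
# where a "setting" indexes one (pose, lighting) combination.
def associate_predict_distance(face_a, setting_a, face_b, setting_b, memory):
    """Compare face_a with face_b after 'moving' face_a into face_b's setting."""
    # Associate: find the generic identity most similar to face_a
    # under face_a's own setting.
    best_id = min(memory,
                  key=lambda i: np.linalg.norm(memory[i][setting_a] - face_a))
    # Predict: reuse that identity's stored appearance under face_b's setting.
    predicted_a = memory[best_id][setting_b]
    # Match the predicted appearance against face_b directly.
    return np.linalg.norm(predicted_a - face_b)
```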
To evaluate the effectiveness of this model for clustering faces in movies, we conducted a comparison study with a direct-matching approach. We used the OpenCV library (http://opencv.org) to detect face regions in the full-length film The Devil Wears Prada. Each face is described by a 59-dimensional local binary pattern (LBP), 12 and we use the chi-squared distance to measure face-to-face proximity. The distances are calculated through an extra generic identity dataset in the associate-predict approach. Finally, we apply the affinity propagation (AP) clustering algorithm 13 to group faces.
Figure 2 shows a direct comparison between the associate-predict approach 10 and the direct-matching method. We set various preference values in AP to control the number of data points selected as exemplars, with low preferences leading to few clusters and vice versa. (We define the two measures we used later on.) The resulting curves clearly show that the associate-predict method (blue dashed curve) outperforms the direct-matching method (black dotted curve), with a performance gain of 2.15 percent in purity and 3.51 percent in normalized mutual information (NMI) on average. Even though we apply the state-of-the-art model, the improvement is limited when using the appearance-based features alone.
Figure 2. A direct comparison of the clustering accuracy between the direct-matching method, the associate-predict model, and our approach in terms of (a) purity and (b) normalized mutual information.
Constructing a Social Network
Social networks are conventionally represented by an undirected weighted graph $G = (V, E, W)$, where $V = \{v_1, v_2, \ldots, v_N\}$ denotes the set of characters in a movie, $E = \{e_{ij} \mid v_i \text{ and } v_j \text{ have a relationship}\}$, and the element $w_{ij}$ in $W$ represents the social closeness between $v_i$ and $v_j$. 5 The face clusters derived from the previous step constitute the vertices in $V$. To compute $w_{ij}$, most existing methods utilize coappearance to quantify the characters' interrelationships—the social closeness of two characters is measured by the number of scenes in which both characters are present. 5,6 These approaches require precise scene boundaries. The social network is semiautomatically constructed because we need to apply a scene-detection method to obtain initial scene boundaries and perform manual labeling to correct errors introduced by the scene-detection method.
The interrelationships of the characters may be more appropriately represented by interaction than by coappearance 9 for two reasons. First, two characters can interact even if they do not physically appear in the same space at the same time. For example, in the movie You've Got Mail, Joe Fox and Kathleen Kelly unknowingly become friends over email. Second, the fact that two people are present in the same scene does not imply that they interact with each other. For example, the camera may capture an unknown person who coincidentally appears in the background. In this article, we use both the coappearance-based and interaction-based cues.
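Before detailing the social network construction, here is a minimal sketch of the appearance-based baseline just described (LBP description, chi-squared proximity, affinity propagation); the scikit-image and scikit-learn calls are stand-ins, not the authors' actual implementation:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.cluster import AffinityPropagation

def lbp_histogram(gray_face, P=8, R=1):
    """59-bin uniform LBP histogram, matching the 59-dimensional descriptor."""
    codes = local_binary_pattern(gray_face, P, R, method="nri_uniform")  # 59 labels
    hist, _ = np.histogram(codes, bins=59, range=(0, 59), density=True)
    return hist

def chi2_distance(h1, h2, eps=1e-10):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def cluster_faces(gray_faces, preference=None):
    """Group faces with AP on negated chi-squared distances; the preference
    parameter controls how many exemplars (clusters) emerge."""
    hists = np.array([lbp_histogram(f) for f in gray_faces])
    n = len(hists)
    sim = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            sim[i, j] = -chi2_distance(hists[i], hists[j])
    ap = AffinityPropagation(affinity="precomputed", preference=preference)
    return ap.fit_predict(sim)
```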
First, we implement a shot-change-detection method to determine shot boundaries. The social closeness $w_{ij}$ between two face clusters $v_i$ and $v_j$ is quantified as follows:

$$w_{ij} = \sum_{p \in v_i} \sum_{q \in v_j} \mathbf{1}\left(\left| p.\text{shot} - q.\text{shot} \right| \le 1\right) \tag{1}$$

where $p.\text{shot}$ denotes the shot ID of a face region $p$ and $\mathbf{1}(\cdot)$ is an indicator function. That is, the relationship between two characters is quantified by the amount of coappearance and shot alternation between two face clusters. Figure 3 illustrates the social graph for the movie The Devil Wears Prada constructed using the automatic approach.
Figure 3. The character social networks from the movie The Devil Wears Prada: (a) automatic approach, first iteration; (b) automatic approach, second iteration; and (c) manual approach (ground truth). Nodes that represent the same character are manually combined for better visualization.
This method is not limited to dialog scenes involving two characters, and it can generally deal with interactions that involve more than two people. Compared with coappearance-based approaches, the method is easy to implement and requires only the shot-boundary information to fairly describe the characters' interactions. As we showed in our preliminary work, 9 the approach can be used to generate similar social graphs from either the manually labeled or automatically clustered faces.
Extracting Social Features from the Social Network
The automatically constructed social network is usually noisy. For example, multiple nodes in the social graph may correspond to the same character, and a node may contain faces of more than one character. Thus, we need to extract useful cues from the noisy social network to improve face clustering. We decompose the problem into two steps: selecting an anchor set and computing social context cues given the anchor set.
Anchor Set
To deal with the noise that inevitably exists in an automatically constructed social network, we compute social cues from a subset of the social network. This idea was inspired by the work of Peng Wu and Feng Tang, 8 who apply "significant clusters" to refine and use the social graph. We first observe that nodes in the social network are not equally and entirely important for revealing identities. For example, if a character $v_i$ has a relationship with every other character in the movie, co-occurring with $v_i$ provides nearly no information. Intuitively, large clusters are usually selected because they tend to contain the faces of main characters. 8 Besides the selection of top large clusters, we also examine another strategy: selecting the clusters that have large variances in relationship magnitude. More specifically, given the social closeness matrix $W$, we compute the variance of each row. A cluster with a large variance implies that the corresponding character interacts with only a few particular characters in the movie. We consider those clusters as elements in the "anchor set" because the probabilities of interacting with them would be more skewed and, thus, probably discriminative. The size of the anchor set is automatically determined as follows: clusters are sorted by size (or magnitude variance), and the difference of values between adjacent clusters is computed. Suppose the peak of the difference curve occurs at K. We select the top K clusters with the largest sizes (or magnitude variances).
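A minimal sketch of Equation 1 and of the anchor-set selection just described; the (shot_id, cluster_id) data layout is an assumption for illustration:

```python
import numpy as np

def social_closeness(faces, num_clusters):
    """Equation 1: count coappearances and shot alternations between clusters.
    `faces` is a list of (shot_id, cluster_id) pairs for each detected face.
    Naive O(N^2) version for clarity; a scan over shot-sorted faces is linear."""
    W = np.zeros((num_clusters, num_clusters))
    for shot_p, ci in faces:
        for shot_q, cj in faces:
            if ci != cj and abs(shot_p - shot_q) <= 1:
                W[ci, cj] += 1
    return W

def select_anchors(scores):
    """Pick the top-K clusters, with K at the peak of the difference curve of
    the sorted scores (cluster sizes, or row variances of W)."""
    order = np.argsort(scores)[::-1]           # indices, largest score first
    sorted_scores = np.asarray(scores)[order]
    diffs = sorted_scores[:-1] - sorted_scores[1:]
    K = int(np.argmax(diffs)) + 1              # peak of the difference curve
    return order[:K]

# Example usage with the variance strategy:
# W = social_closeness(faces, num_clusters)
# anchors = select_anchors(W.var(axis=1))
```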
Social-Based Proximity
Our main approach of face clustering using a social graph is derived from the following observations:
• Two connected nodes tend not to be the same person.
• Two nodes that have a similar connectivity (connection and no connection) to those in the anchor set tend to be the same person because they have consistent social behaviors.
The second observation is particularly valid in movies because characters usually form a limited number of communities in a story. Unlike those in social networking websites such as Facebook, the social context inherent in the characters' social network is not sparse.
We model each face cluster as a node and associate each node with a social feature vector $\mathbf{x}_{v_i} = (w_{ik})$, where $w_{ik}$ is the social closeness of $v_i$ and $v_k$, and $k$ indicates the node index of those in the anchor set. That is, $\mathbf{x}_{v_i}$ is a $K$-dimensional feature vector, assuming we have $K$ nodes in the anchor set. The social-based similarity between two clusters $v_i$ and $v_j$ is given by

$$s\left(v_i, v_j\right) = f\left(\mathbf{x}_{v_i}, \mathbf{x}_{v_j}\right) \times g\left(\mathbf{x}_{v_i}, \mathbf{x}_{v_j}\right), \tag{3}$$

where $f(\cdot)$ measures the similarity of the nonzero social elements and $g(\cdot)$ measures that of the zero elements in two social feature vectors. More specifically, based on the idea from earlier research, 9,12 $f(\cdot)$ incorporates the observation that two clusters may correspond to a character if they have similar social connections. The similarity is determined by their cosine value adjusted by the social closeness between $v_i$ and $v_j$:

$$f\left(\mathbf{x}_{v_i}, \mathbf{x}_{v_j}\right) = \cos\left(\mathbf{x}_{v_i}, \mathbf{x}_{v_j}\right) \times \exp\left(-\frac{w_{ij}}{\left\| W \right\|_{\max}}\right). \tag{4}$$

If two people interact (that is, $w_{ij} > 0$), they cannot be the same person even if they have similar social connections to other people. Moreover, we propose that the social behavior of two people is similar if they both have no interaction with certain people. For example,

$$g\left(\mathbf{x}_{v_i}, \mathbf{x}_{v_j}\right) = \frac{1}{K}\left(\sum_{k=1}^K \mathbf{1}\left(\mathbf{x}_{v_i}(k) = 0,\ \mathbf{x}_{v_j}(k) = 0\right)\right) \times \exp\left(-\left(1 - \frac{\min\left(|v_i|, |v_j|\right)}{\max |v|}\right)\right). \tag{5}$$
\tag {5}$$$${\displaylines g\left ({{\bf{x}}_{v_i } ,{\bf{x}}_{v_j } } \right) = {1 \over K}\left ({\sum\limits_{k = 1}^K {1\left ({{\bf{x}}_{v_i } \left (k \right) = 0,{\bf{x}}_{v_j } \left (k \right) = 0} \right)} } \right)} \times \exp \left ({ - \left ({1 - {{\min \left| {v_i } \right|,\left| {v_j } \right|} \over {\max \left| v \right|}}} \right)} \right). \tag {5}$$ The first term computes the number of 0-0 matches in${\bf{x}}_{v_i } $${\bf{x}}_{v_i } and {\bf{x}}_{v_j }$${\bf{x}}_{v_j } $, and the similarity is adjusted by the number of faces in each cluster, denoted by |$v_i$$v_i|. If a face cluster has just a few faces, it is more likely to introduce 0-0 matches when the cluster is compared with others. Thus, the similarity value should be inversely proportional to the number of faces in the cluster. Combing Facial and Social Cues The original appearance-based clustering result may be inaccurate because of large intrapersonal variation. We believe that an improvement could be obtained by considering social cues in the clustering process. We combine the social-based measure (Equation 3) extracted from the social network with the appearance-based measure (the chi-squared distance \chi ^2$$\chi ^2 $of two LBP descriptors) to determine the overall proximity of two faces p and q. The computation is simply multiplying two terms:$$d(p,{\hbox{ }}q) = \chi ^2 (p,{\hbox{ }}q) \times (1 - s(v_i ,{\hbox{ }}v_j )), \tag {6}$$$$d(p,{\hbox{ }}q) = \chi ^2 (p,{\hbox{ }}q) \times (1 - s(v_i ,{\hbox{ }}v_j )), \tag {6}$$ where$p \in v_i$$p \in v_i and q \in v_j$$q \in v_j$. We did not use a linear weighting method because it is unclear how the weights should be determined to fairly reflect the true proximity. Moreover, if one of the measures reflects that two faces are unalike, we tend not to group them into the same cluster. The refined proximity matrix is finally fed the AP clustering algorithm, and we obtain the second clustering result; based on this, the social network is again constructed. The system generates webpages that visualize the clustering results and the social network for each round. These processes are iteratively performed until the number of clusters reaches the number desired or the results are accepted by users. Computational Complexity Analysis Our social network construction approach quantifies the social closeness between two characters based on shot alternation and coappearance. The weight matrix W can be built by linearly examining the cluster IDs of each detected face by the order of the shot sequence. The time complexity is O( N), where N is the number of detected faces in the film. Once we obtain the social network, the computation of the social-based proximity matrix is$O\left ({|V|^2 K} \right)$$O\left ({|V|^2 K} \right), where | V| is the number of clusters and K is the dimension of a social feature. The most computationally expensive component in the framework is the AP clustering method, 13 which is used to group faces given a proximity matrix. The AP clustering approach is performed by exchanging messages between data points until a high-quality set of clusters gradually emerges. The approach needs O\left ({|V|^2 } \right)$$O\left ({|V|^2 } \right)$for each iteration, but it does not depend on the number of clusters or the dimensionality, and it has been shown to be much faster than k-means for high-dimensional data. 
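Equations 3 through 6 translate directly into a few lines of code. Below is a minimal sketch (an illustration, not the authors' implementation); the argument names are ours, and `chi2_pq` stands for whatever appearance distance is in use:

```python
import numpy as np

def social_similarity(x_i, x_j, w_ij, W_max, n_i, n_j, n_max):
    """Equations 3-5: similarity of two clusters from their K-dim social features.

    x_i, x_j : social feature vectors over the anchor set
    w_ij     : social closeness between the two clusters
    W_max    : largest entry of the closeness matrix W
    n_i, n_j : number of faces in each cluster; n_max is the largest cluster size
    """
    K = len(x_i)
    cos = np.dot(x_i, x_j) / (np.linalg.norm(x_i) * np.linalg.norm(x_j) + 1e-12)
    f = cos * np.exp(-w_ij / W_max)                            # Eq. 4: related information
    zero_matches = np.sum((x_i == 0) & (x_j == 0))             # 0-0 matches over the anchors
    g = (zero_matches / K) * np.exp(-(1 - min(n_i, n_j) / n_max))  # Eq. 5: unrelated information
    return f * g                                               # Eq. 3

def refined_distance(chi2_pq, s_ij):
    """Equation 6: appearance distance scaled down when social behavior agrees."""
    return chi2_pq * (1 - s_ij)
```

The multiplicative combination means either cue can veto a merge: a large appearance distance or a near-zero social similarity keeps two faces apart.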
Computational Complexity Analysis

Our social network construction approach quantifies the social closeness between two characters based on shot alternation and coappearance. The weight matrix W can be built by linearly examining the cluster IDs of each detected face in the order of the shot sequence. The time complexity is O(N), where N is the number of detected faces in the film. Once we obtain the social network, the computation of the social-based proximity matrix is $O(|V|^2 K)$, where |V| is the number of clusters and K is the dimension of a social feature. The most computationally expensive component in the framework is the AP clustering method,13 which is used to group faces given a proximity matrix. The AP clustering approach is performed by exchanging messages between data points until a high-quality set of clusters gradually emerges. The approach needs $O(|V|^2)$ per iteration, but it does not depend on the number of clusters or the dimensionality, and it has been shown to be much faster than k-means for high-dimensional data.13 Once faces are detected, the social network construction and clustering tasks take just a few seconds to process a two-hour film in our experiments.

Experiments

We evaluated the proposed method on automatically detected face sets from five feature-length films and two sitcoms. The detection was performed using OpenCV on every frame, producing 320,508 face images including false positives. Table 1 shows the dataset statistics, including genre, length, and the number of faces and characters. We manually labeled each face and constructed the ground truth data, which we used to evaluate the clustering performance and the quality of the social network.

Table 1. Dataset statistics.

Index | Title* | Year | Genre | Length (minutes) | No. of faces | No. of characters
M1 | The Devil Wears Prada | 2006 | Comedy, drama, romance | 102 | 69,211 | 15
M2 | Little Fockers | 2010 | Comedy | 91 | 69,077 | 10
M3 | Taken | 2008 | Action, crime, thriller | 87 | 43,741 | 5
M4 | In Time | 2011 | Action, sci-fi, thriller | 97 | 48,604 | 7
M5 | Seven Pounds | 2008 | Drama | 111 | 37,701 | 7
S1 | The Big Bang Theory (S5-E1) | 2011 | Comedy | 20 | 17,287 | 8
S2 | Gossip Girl (S5-E23) | 2012 | Drama, romance | 40 | 34,887 | 9

* The numbers after the TV entries indicate the season and episode numbers.

We used two metrics, purity and normalized mutual information (NMI), to evaluate the clustering performance. Purity is computed as the ratio of correctly assigned face labels to the total number of faces, where each cluster is assigned to the most frequent label in the cluster:

$$\text{purity}(\Omega, C) = \frac{1}{N} \sum_{i} \max_{j} \left| \omega_i \cap c_j \right|, \tag{7}$$

where $\Omega = \{\omega_1, \omega_2, \ldots, \omega_m\}$ is the set of clusters that the approach discovered, $C = \{c_1, c_2, \ldots, c_n\}$ denotes the ground truth clusters, and N is the number of faces. Purity can be inflated simply by discovering many clusters; in the extreme, a purity value of 1 is achieved when each face gets its own cluster. Alternatively, NMI considers the trade-off between the number of clusters and the clustering performance:

$$\text{NMI}(\Omega, C) = \frac{I(\Omega; C)}{\sqrt{H(\Omega)\, H(C)}}, \tag{8}$$

where $I(\Omega; C)$ is the mutual information between the discovered and ground truth clusters and $H(\cdot)$ is the entropy of a clustering result. $\text{NMI}(\Omega, C)$ ranges from 0 to 1, where the value 1 means the two cluster sets are identical and the value 0 means they are independent.
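Both metrics are easy to reproduce from two label arrays; here is a minimal sketch (ours, for illustration), assuming integer-coded labels and using scikit-learn for the NMI part:

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def purity(pred_labels, true_labels):
    """Eq. 7: each discovered cluster votes for its most frequent true label."""
    pred_labels = np.asarray(pred_labels)
    true_labels = np.asarray(true_labels)   # assumes nonnegative integer labels
    correct = 0
    for cluster in np.unique(pred_labels):
        members = true_labels[pred_labels == cluster]
        correct += np.bincount(members).max()   # size of the majority label
    return correct / len(true_labels)

# Eq. 8 uses the geometric-mean normalization, which matches:
# normalized_mutual_info_score(true_labels, pred_labels, average_method="geometric")
```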
Effectiveness of Social Cues

We first evaluate the effectiveness of social cues for face clustering. Figure 4 shows a direct comparison of the clustering accuracy of the state-of-the-art appearance-based method, the associate-predict model10 (blue bars), and the proposed method (red bars). The proposed method (one iteration only in this experiment) consistently outperforms the associate-predict model for all films we tested. The gain was particularly significant on the movie data. The number of characters in movies is usually larger than in a sitcom, and the interactions between characters are more diverse. That probably explains why our approach, which exploits social relationships, achieves good performance in such cases.

Figure 4. Comparison of the clustering accuracy between the associate-predict model10 and our approach (one iteration only) in terms of (a) purity and (b) normalized mutual information.

To further understand the clustering performance with respect to the number of clusters, see Figure 2. Our approach (red solid curve) significantly outperforms appearance-based approaches across a range of cluster numbers. For example, at 28 clusters, the proposed method shows an improvement of 9.5 percent in purity and 26.7 percent in NMI compared with the associate-predict model. Our approach combines facial similarity and social information to refine the measurement of how alike two faces are. While the associate-predict model handles intrapersonal variations, the integration of social cues improves the discriminative capability because two visually dissimilar faces of the same character can have a refined distance if their social activities are consistent.

It is worth mentioning that we also applied our social-enhanced approach on a social network built differently from the proposed approach.6 Although that method6 builds the social network in a different and more sophisticated manner, the social-context-enhanced clustering results are similar, as the red and green curves in Figure 2 show. That is, we can achieve the clustering gain no matter which approach we use to build the social network, as long as it fairly captures the character relationships.

Empirical Analysis

Now we examine various design choices in the process of extracting features from a social graph. We obtained the following experimental results using the movie The Devil Wears Prada.

Anchor Set

Figure 5 shows a comparison of the two different strategies for finding the subset of clusters that we use to identify the characters' interactions: nodes with a large degree (the green dashed curve) and nodes with a large magnitude variance (the blue dotted curve). It is interesting to see that these strategies have little effect on the overall clustering performance. Moreover, the use of the anchor set is effective, and its size should be appropriately set.

Figure 5. Clustering performances for two strategies of selecting an anchor set from a noisy social graph, in terms of (a) purity and (b) normalized mutual information (NMI). (c) Clustering accuracy in relation to the anchor set's size for each approach.

Figure 5c shows how the anchor set size affects the clustering performance. The clustering accuracy decreases when many nodes (more than 40 percent) are selected in the anchor set. For example, the performance gain is 12.44 percent in purity and 25.93 percent in NMI when we use only 10 percent of the nodes, compared with the case when we use the entire social graph. The automatic selection approach usually selects a few nodes in the anchor set; that is, K is approximately equal to one-eighth of the nodes in the testing cases.
Using the anchor set not only improves the effectiveness of social features but also reduces the feature dimension and the computational time for the similarity calculation in the next step.

Social Proximity

Next, we examine the effectiveness of related and unrelated social information when computing the similarity of two clusters. Intuitively, similar social behaviors mean two people have interactions with some of the same people (related information, described in Equation 4) or neither of them interacts with certain people (unrelated information, described in Equation 5). Figure 6 validates the usefulness of both cues. The green dashed curve is derived when only related information is used, and the blue dotted curve is obtained by using only unrelated information. The combination achieves the best performance, providing an additional improvement over using either cue alone. The result shows that the similarity measure should consider more than just the adjacent nodes, because the other nodes provide complementary information necessary to describe the linking status.

Figure 6. Clustering performances with respect to related and unrelated information in terms of (a) purity and (b) normalized mutual information.

Iterative Process

The automatic framework illustrated in Figure 1 bridges two visual tasks and generates two results: face clusters and social graphs. We now demonstrate that these tasks can benefit each other. We first examine the characters' social networks automatically built by the approach. Figure 3a shows the social network derived by the proposed method, which is built on the AP clustering result using the appearance-based features alone. Figure 3b shows the social network obtained from the second-round AP clustering. For comparison, Figure 3c presents the social network established on manually labeled faces. Nodes that represent the same character are combined for better visualization. It is interesting to observe that the learned social network evolves from a noisy graph toward the ground truth, which implies that the automatic approach can produce fairly meaningful information that describes the characters' social relationships from an initially noisy face-clustering result. The social network derived from the second-round AP clustering already has a structure similar to that of the ground truth graph.

A social graph's fidelity has an effect on face clustering as well. Table 2 shows the clustering performance over four rounds of the iterative process. The experiment was conducted using a fixed preference value in the AP clustering algorithm. Because values in the distance matrix (Equation 6) become smaller, the number of discovered clusters decreases as more iterations are performed. Although both the purity rate and the number of discovered clusters decrease, we observe that the clustering performance improves in terms of NMI. However, the gain is increasingly limited after the first round. That is, the use of social cues provides a significant gain in measuring face similarities at the beginning, but the refined social graphs may not provide much additional information for resolving identity ambiguities. We conclude that a practical strategy to improve clustering is to use the appearance-based approach to obtain a large number of relatively pure clusters and then apply a social-context-enhanced approach to refine the clusters.

Table 2. Clustering performance in each iteration.
Iteration | 0 | 1 | 2 | 3 | 4
No. of clusters | 134 | 107 | 89 | 78 | 60
Purity | 0.8954 | 0.8725 | 0.8586 | 0.8255 | 0.8164
NMI | 0.4881 | 0.5111 | 0.5139 | 0.5226 | 0.5254

Conclusion

Our experiments show that the proposed fully automated system can produce a social network nearly as good as the ground truth. To extend this work, we have been developing a practical movie content management system that can be used to label faces and generate relationship charts of characters, benefiting from the proposed techniques and the empirical results.

References

1. O. Arandjelovic and A. Zisserman, "Automatic Face Recognition for Film Character Retrieval in Feature-Length Films," Proc. IEEE Int'l Conf. Computer Vision and Pattern Recognition (CVPR), 2005, pp. 860–867.
2. R.G. Cinbis, J. Verbeek, and C. Schmid, "Unsupervised Metric Learning for Face Identification in TV Video," Proc. IEEE Int'l Conf. Computer Vision (ICCV), 2011, pp. 1559–1566.
3. J. Sivic, M. Everingham, and A. Zisserman, "Who Are You? Learning Person Specific Classifiers from Video," Proc. IEEE Int'l Conf. Computer Vision and Pattern Recognition (CVPR), 2009, pp. 1145–1152.
4. W. Zhao et al., "Face Recognition: A Literature Survey," ACM Computing Surveys, vol. 35, no. 4, 2003, pp. 399–458.
5. C.-Y. Weng, W.-T. Chu, and J.-L. Wu, "RoleNet: Movie Analysis from the Perspective of Social Network," IEEE Trans. Multimedia, vol. 11, no. 2, 2009, pp. 256–271.
6. P. Wu and D. Tretter, "Close & Closer: Social Cluster and Closeness from Photo Collections," Proc. ACM Int'l Conf. Multimedia, 2009, pp. 709–712.
7. Z. Stone, T. Zickler, and T. Darrell, "Autotagging Facebook: Social Network Context Improves Photo Annotation," Proc. IEEE Int'l Workshop Internet Vision, Computer Vision and Pattern Recognition Workshops, 2008, pp. 1–8.
8. P. Wu and F. Tang, "Improving Face Clustering Using Social Context," Proc. ACM Int'l Conf. Multimedia, 2010, pp. 907–910.
9. M. Yeh, M.-C. Tseng, and W.-P. Wu, "Automatic Social Network Construction from Movies Using Film-Editing Cues," Proc. IEEE Int'l Conf. Multimedia and Expo Workshops (ICMEW), 2012, pp. 242–247.
10. Q. Yin, X. Tang, and J. Sun, "An Associate-Predict Model for Face Recognition," Proc. IEEE Int'l Conf. Computer Vision and Pattern Recognition (CVPR), 2011, pp. 497–504.
11. R. Gross et al., "Multi-PIE," Proc. IEEE Int'l Conf. Automatic Face and Gesture Recognition, 2008; http://research.microsoft.com/pubs/69512/multipie-fg-08.pdf.
12. T. Ahonen, A. Hadid, and M. Pietikäinen, "Face Description with Local Binary Patterns: Application to Face Recognition," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 28, no. 12, 2006, pp. 2037–2041.
13. B.J. Frey and D. Dueck, "Clustering by Passing Messages Between Data Points," Science, vol. 315, 2007, pp. 972–976.

Mei-Chen Yeh is an assistant professor in the Computer Science and Information Engineering Department at National Taiwan Normal University. Her research interests include multimedia computing, visual retrieval, and pattern recognition. Yeh has a PhD in electrical and computer engineering from the University of California, Santa Barbara. She is a member of IEEE. Contact her at myeh@csie.ntnu.edu.tw.

Wen-Po Wu is a research and development software engineer at Reallusion. His research interests include multimedia computing, animation, and computer graphics. Wu has an MS in computer science and information engineering from National Taiwan Normal University. Contact him at robertWu@reallusion.com.tw.
# Greek numerals

The Greeks used two number systems: one mainly for currency and everyday counting, and a more sophisticated number system which was used by the learned. Strictly speaking there were many Greek number systems, since each island had its own system; however, they were all pretty similar.

The everyday acrophonic system was similar to the Egyptian. A 'rod', or the letter I, was used to count units, i.e. the numbers 1 to 4. A new symbol was introduced for 5. The Greeks took the first letter of the word five to symbolise that. The word five in Greek is Pente (think 'pentagon'), so the letter used was Pi, the Greek 'P': $\Pi$. The word ten in Greek is Deka (think 'decagon'), so the letter used was Delta, the Greek 'D': $\Delta$, and so on (in the quiz later on, you will be challenged to decipher other Greek numerals…).

The intellectual elite used many more symbols for their numbers, mainly for 'scientific' writing. In fact, they used all the letters of the alphabet, and in multiple 'cases' or 'fonts'. They probably did this to reduce the length of the numbers they wrote; however, it made number reading difficult. Here is a glance at their system:

For the numbers 1-9, they used:

$\alpha=1$ $\beta=2$ $\gamma=3$ $\delta=4$ $\epsilon=5$ $\digamma=6$ $\zeta=7$ $\eta=8$ $\theta=9$

Then, we have the 'tens':

$\iota=10$ $\kappa=20$ $\lambda=30$ $\mu=40$ $\nu=50$ $\xi=60$ $\omicron=70$ $\pi=80$ $\unicode[greek]{985}=90$

and the 'hundreds':

$\rho=100$ $\sigma=200$ $\tau=300$ $\upsilon=400$ $\phi=500$ $\chi=600$ $\psi=700$ $\omega=800$ $\unicode[greek]{993}=900$

Numbers were constructed using addition; for example, the number 429 would be written $\upsilon\kappa\theta$ (400 + 20 + 9). Numbers larger than 999 were constructed using extra symbols denoting the thousands, ten thousands, etc., in a similar way that a 'bar' was written over large Roman numerals for the same purpose. To denote thousands, a subscript or superscript iota, $\iota$, was used: $_\iota\epsilon\upsilon\kappa\theta$ would be 5429.

Because all the numbers can be represented by letters, the art of Isopsephy, giving words a numeric value and vice versa, arose. This is similar to the Hebrew Gematria, which was widely practiced throughout the ages. The Greek word for fire, $\pi\upsilon\rho$, for example, has a numeric value of $\pi+\upsilon+\rho=80+400+100=580$.

If all of this interests you, you may want to learn ancient Greek! Here is a great site to start, but don't forget to return to the course. We still have a lot of things to learn…

## Discussion

Join the discussion and share some other numeral systems with us.
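To make the additive rule concrete, here is a small sketch (not part of the course materials) that evaluates a Greek numeral string with the 1–999 symbols from the tables above; the Unicode characters stand in for the letters shown there:

```python
# Values of the alphabetic numerals from the tables above (1-999 only).
GREEK_VALUES = {
    'α': 1, 'β': 2, 'γ': 3, 'δ': 4, 'ε': 5, 'ϛ': 6, 'ζ': 7, 'η': 8, 'θ': 9,
    'ι': 10, 'κ': 20, 'λ': 30, 'μ': 40, 'ν': 50, 'ξ': 60, 'ο': 70, 'π': 80, 'ϟ': 90,
    'ρ': 100, 'σ': 200, 'τ': 300, 'υ': 400, 'φ': 500, 'χ': 600, 'ψ': 700, 'ω': 800, 'ϡ': 900,
}

def greek_value(numeral):
    """Add up the symbol values: the system is purely additive."""
    return sum(GREEK_VALUES[ch] for ch in numeral)

print(greek_value('υκθ'))   # 429
print(greek_value('πυρ'))   # 580, the isopsephy value of the word for fire
```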
GSI Forum GSI Helmholtzzentrum für Schwerionenforschung

RICH B-TOF Abstract and summary

Thu, 12 April 2018 10:37
Sebastian Zimmermann
Messages: 11 Registered: January 2017

Here is the abstract and the summary (attached) for the B-TOF contribution for the RICH conference. The submission deadline is on Sunday the 15th, so I would ask for feedback until Saturday the 14th.

Kind regards
Sebastian

Abstract: The barrel Time-of-Flight detector is one of the outer layers of the multi-layer design of the PANDA target spectrometer, covering a polar angle range of $22^\circ < \theta_{lab} < 150^\circ$. PANDA, which is being built at the FAIR facility, will use cooled antiprotons on a fixed hydrogen or nuclear target to study broad topics in hadron physics. The detector is a scintillating tile hodoscope with SiPM readout. A single unit consists of a $90 \times 30 \times 5$ mm$^3$ fast plastic scintillator tile and $3 \times 3$ mm$^2$ SiPM photosensors on both ends. Four SiPMs are connected in series to overcome the limited sensor size of a single SiPM and to improve the time resolution drastically (from ~100 ps to ~50 ps). While the PANDA experiment is equipped with DIRC detectors for PID of faster particles, the barrel TOF complements the setup by providing additional PID information up to ~1.4 GeV/c and a $\pi$/K separation of ~5 sigma up to the Cherenkov threshold. In this contribution we will also review recent topics on SiPMs and compare them to MCP-PMTs.

• Attachment: Summary.pdf
# Step-by-step Solution

## Solve the inequality $\left(x+1\right)\left(x-3\right)\leq 0$

$-1\leq x\leq 3$

## Step-by-step explanation

Problem to solve: $\left(x+1\right)\left(x-3\right)\le 0$

1. Expand the product $\left(x+1\right)\left(x-3\right)$:

$x^2-2x-3\leq 0$

2. To find the roots of a polynomial of the form $ax^2+bx+c$ we use the quadratic formula, where $a=1$, $b=-2$ and $c=-3$:

$x =\frac{-b\pm\sqrt{b^2-4ac}}{2a}$

3. Substituting gives the roots $x=-1$ and $x=3$. Since the parabola $x^2-2x-3$ opens upward, the expression is less than or equal to zero between the roots:

$-1\leq x\leq 3$
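As a quick check, the same inequality can be solved symbolically; a minimal SymPy sketch:

```python
from sympy import symbols, solve_univariate_inequality

x = symbols('x', real=True)
solution = solve_univariate_inequality((x + 1)*(x - 3) <= 0, x)
print(solution)  # (-1 <= x) & (x <= 3)
```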
OpenStudy (anonymous): Need help with a discrete math problem. Contributor to the Great Internet Mersenne Prime Search, Curtis Cooper, of the University of Central Missouri, recently discovered that 2^57,885,161 - 1 is prime. To date, no larger number has been shown to be prime. What is the greatest common divisor of 2^57,885,161 - 1 and 2^57,885,161 + 1? 4 years ago
OpenStudy (anonymous): Do you have any ideas? 4 years ago
OpenStudy (anonymous): not really 4 years ago
OpenStudy (anonymous): Do you know how to start? 4 years ago
OpenStudy (anonymous): Well I know how to find the gcd of smaller numbers like 16 and 27 by dividing the larger number by the smaller number. Then I take the remainder and divide the smaller number by the remainder and continue that process until the remainder is 0. That leaves me with the answer. But with a number this big that process would take very long. So I'm not really sure how to start. 4 years ago
OpenStudy (anonymous): So if I used that approach perhaps I could cancel the 2^.........'s and be left with +1 and -1? 4 years ago
OpenStudy (anonymous): When subtracting 4 years ago
OpenStudy (anonymous): 4 years ago
OpenStudy (anonymous): no 4 years ago
OpenStudy (anonymous): do you know how it is done? 4 years ago
OpenStudy (anonymous): Hmm, the number being prime probably has something to do with the problem 4 years ago
OpenStudy (anonymous): idk =\ 4 years ago
OpenStudy (anonymous): It does. 4 years ago
OpenStudy (anonymous): But I have no clue how that would play into the solution. 4 years ago
OpenStudy (anonymous): =( 4 years ago
OpenStudy (anonymous): Help me with it? :O 4 years ago
OpenStudy (anonymous): Find out the prime in the would be with the remainder of 0 4 years ago
OpenStudy (kinggeorge): Well, let $$2^{57,885,161}-1=p$$. Then you're asked to find the gcd of $$p$$ and $$p+2$$. Since both $$p$$ and 2 are prime, any common divisor greater than 1 must divide both $$p$$ and 2. Does this make sense, and can you finish it from here? 4 years ago
OpenStudy (anonymous): Sort of makes sense 4 years ago
OpenStudy (anonymous): how did you get the p and p + 2? 4 years ago
OpenStudy (amistre64): 2^n - 1, is prime ... this is given, let it be p. 2^n - 1 + 2 = 2^n + 1 = p+2 4 years ago
OpenStudy (anonymous): ah that makes sense 4 years ago
OpenStudy (anonymous): Alright, thanks for the help everyone! I will look over it some more and see if I can get it all making sense. 4 years ago
OpenStudy (anonymous): Ok good luck:) 4 years ago
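For intuition, the same argument can be checked numerically on smaller Mersenne primes (a quick sketch, not part of the original thread). Any common divisor of p and p + 2 divides their difference, 2, and both numbers are odd, so the gcd is 1:

```python
from math import gcd

for n in [2, 3, 5, 7, 13, 17]:   # exponents of some small Mersenne primes
    p = 2**n - 1
    print(n, gcd(p, p + 2))      # always prints 1
```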
# How do I choose between QuadGK and Cubature when I do singular integral of a complex-valued function?

A simple example is f(x) = 1 / (x + im*10^(-6)), integrated from -1 to 1. How do I write the Julia code? Note that I don't want to substitute f(x) with a delta function here.

Since this is a 1d integral, I would use QuadGK:

```julia
julia> quadgk(x -> 1 / (x + im*10^(-6)), -1, 1, rtol=1e-8)
(-3.3306690738754696e-16 - 3.141590653589794im, 1.4895084794082606e-9)
```

Note that it correctly finds that the integral is close to $-i\pi$. (In fact, the exact answer is -2im*atan(1e6), which it's finding to nearly 16 digits.) Because QuadGK is h-adaptive, it will place more quadrature points close to the singularity. (But it still needs 1245 evaluation points, as you can see by wrapping the integrand in counter(f) = x -> (global count += 1; f(x)) and initializing a global count = 0.)

Don't use Cubature.jl for this: since Cubature.jl is based on a C library, it doesn't directly support any return type other than Float64 or Vector{Float64}, so to integrate complex-valued integrands you would have to use a vector of the real and imaginary parts. HCubature.jl is the native-Julia analogue of Cubature.jl, and supports complex-valued integrands directly. In 1d, HCubature's algorithm is the same as the default one in QuadGK, but QuadGK's is more optimized for the 1d case (and also provides the option of using higher-order rules).

That being said, if you are doing lots of near-singular integrals like this, especially in higher dimensions, you should consider using a semi-analytical singularity-subtraction procedure to handle the near-singular part analytically and only use quadrature for the rest. This is a common procedure in integral-equation methods, for example, where the integrands often have integrable singularities. See also this thread if you are interested in Cauchy principal values: Numerical integration of cauchy principal value

For example, suppose that you are integrating:

$$I = \int_a^b \frac{g(x)}{x - i\alpha} dx$$

for a small $0 < \alpha \ll 1$. For $\alpha \to 0^+$, it approaches $i\pi g(0)$ if $a = -b$, but for small $\alpha > 0$ you have to numerically integrate (for a general function $g(x)$) a function with a sharp spike at $x=0$, which will require a large number of quadrature points. But you can subtract out the singularity analytically:

$$I = \int_a^b \left[ \frac{g(x)-g(0)}{x - i\alpha} + \frac{g(0)}{x - i\alpha} \right] dx = \int_a^b \frac{g(x)-g(0)}{x - i\alpha}\,dx + \underbrace{g(0) \left[\frac{1}{2}\log(x^2 + \alpha^2) + i\tan^{-1}(x/\alpha) \right]_a^b}_{I_0}$$

and then you only need to numerically integrate $I - I_0$, which has the spike subtracted. A little caution is required in the tolerance of this numerical integral because you only need a small relative error compared to $I_0$, not compared to the remainder integral, so you should pass an absolute tolerance like atol = rtol*I₀ if you want a certain relative tolerance rtol in the overall integral $I$. Or an even simpler trick is just to add $I_0 / (b-a)$ to the integrand of your numerical integral.

Another useful trick is to tell quadgk to compute $\int_a^0 + \int_0^b$, i.e. give it x=0 as an explicit endpoint, since we know the integrand is badly behaved there (and may even be Inf or NaN if $\alpha = 0$). This ensures that quadgk never evaluates the integrand exactly at x=0, and also can help it adaptively subdivide the domain more efficiently. In code:

```julia
using QuadGK

function int_slow(g, α, a, b; kws...)
```
    if a < 0 < b
        # put an explicit endpoint at x=0 since we know it is badly behaved there
        return quadgk(x -> g(x) / (x - im*α), a, 0, b; kws...)
    else
        return quadgk(x -> g(x) / (x - im*α), a, b; kws...)
    end
end

function int_fast(g, α, a, b; kws...)
    g₀ = g(0)
    denom_int(x) = log(x^2 + α^2)/2 + im * atan(x/α)
    I₀ = g₀ * (denom_int(b) - denom_int(a))
    if a < 0 < b
        # put an explicit endpoint at x=0 since we know it is badly behaved there
        return quadgk(x -> I₀/(b-a) + (g(x) - g₀) / (x - im*α), a, 0, b; kws...)
    else
        return quadgk(x -> I₀/(b-a) + (g(x) - g₀) / (x - im*α), a, b; kws...)
    end
end
```

If you do int_slow(cos, 1e-6, -1, 1) and int_fast(cos, 1e-6, -1, 1) (with the default rtol ≈ 1e-8), they agree to about 13 digits, but the slow brute-force method requires 1230 function evaluations while the fast singularity-subtracted method requires only 31 function evaluations. As an added bonus, int_fast works even for $\alpha = 0$, where it gives you $i\pi g(0)$ (for $0 \in (a,b)$) plus the Cauchy principal part.

I just added this example (along with several others) to the QuadGK manual.

This would be very helpful. Thanks!
Is the collection of all functions between two sets a set?

Can we say "the set of all functions between two sets" as easily as we could say "the set of all real numbers", for example?

• Do you mean between two specific sets, or the class of all functions? – copper.hat Mar 13 '14 at 6:15
• I mean GIVEN two sets. – PatrickMcGill Mar 13 '14 at 6:29

Yes. This is allowed because, set-theoretically, functions $A \rightarrow B$ are special subsets of $A \times B$. Sets are closed under Cartesian products, and comprehension allows you to take arbitrary subsets (as long as you're able to specify the membership condition in your logic).
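Spelled out (a standard construction, not part of the original answer), the function set is carved out of the power set of the product by separation:

$$B^A \;=\; \{\, f \in \mathcal{P}(A \times B) \;:\; \forall a \in A\; \exists!\, b \in B\; (a,b) \in f \,\}.$$

Power set, Cartesian product, and separation are all available in ZF, so $B^A$ is a set.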
Posts

Exploring a Logarithmic Tolerance of Suffering 2021-04-12T01:39:18.679Z
Confusion about implications of "Neutrality against Creating Happy Lives" 2021-04-11T15:54:34.503Z
derber's Shortform 2021-04-11T06:46:05.617Z

Comment by David Reber (derber) on Exploring a Logarithmic Tolerance of Suffering · 2021-04-12T19:06:29.144Z · EA · GW

Here I'm using $x$ and $y$ to denote amounts of suffering/happiness, whether constrained to one individual or spread among many (or even distributed among some non-individualistic sentience). Using an exponentially-scaled linear tolerance seems equivalent mathematically. If anything, it highlights to me that how you define the measures for happiness and suffering is quite impactful, and needs to be carefully considered.

Comment by David Reber (derber) on derber's Shortform · 2021-04-11T23:55:03.639Z · EA · GW

# Logarithmic Tolerance of Suffering

There are two approaches for how to trade off suffering and happiness which don't sit right with me when considering astronomical scenarios.

1. Linear Tolerance: Set some (possibly large) constant $c$. Then $x$ amount of suffering is offset by $y$ amount of happiness so long as $y>cx$.

My impression is that Linear Tolerance is pretty common among EAers (and *please* correct this if I'm wrong). For example, this is my understanding of most usages of "net benefit", "net positive", and so on: it's a linear tolerance of $c=1$ (where the suffering measure may have been implicitly scaled to reflect how much worse it is than happiness). This seems ok for the quantities of suffering/happiness we encounter in the present and near future, but in my opinion becomes unpalatable in astronomical quantities.

2. No Significant Tolerance: There exists some threshold $t$ of suffering such that no amount of happiness $y$ can offset $x$ if $x>t$.

This is almost verbatim "Torture-level suffering cannot be counterbalanced", and perhaps the practical motivation behind "Neutrality against making happy people" (creating a person which has a 99% chance of being happy and otherwise experiences intense suffering isn't worth the risk; or, creating a person who experiences 1 unit of intense suffering for any $y$ units of happiness isn't worth it). However, this seems to either A. claim infrequent-and-intense suffering is worse than frequent-but-low suffering, or B. accept frequent-but-low suffering as equally bad, and prefer to kill off even almost-entirely happy lifeforms as soon as the threshold $t$ is exceeded (where by almost-entirely happy, I mean only experiencing infinitesimal suffering). Since my life is lower than almost-entirely happy yet I find it worth living, I am unsatisfied with this approach.

I think the primary intuitions Linear Tolerance and No Significant Tolerance are trying to tap into are:

* it seems like small amounts of suffering can be offset by large amounts of happiness
* but once suffering gets large enough, the amount of happiness needed to offset it seems unimaginable (to the point of being impossible)

I don't think these need to contradict each other:

3. Log Tolerance: Set coefficients $a,b$. Then $x$ amount of suffering is offset by $y$ amount of happiness so long as $a+b\log(y)>x$.

Log Tolerance is stricter than Linear Tolerance: the marginal tradeoff rate of $\frac{d}{dx}\log(x)=\frac{1}{x}$ will eventually drop below any linear tradeoff rate $c$. Furthermore, in the limit the cumulative "effective" linear tradeoff rate of $\frac{\log(x)}{x}$ goes to zero. Meanwhile, Log Tolerance also requires nigh-impossible amounts of happiness to offset intense suffering: while $\log$ *technically* goes to infinity, nobody has ever observed it to do so. Consequently any astronomically expanding sentience/civilization would need to get better and better at reducing suffering.

On the other hand, because $\log$ is monotonically increasing, the addition of almost-entirely happy life is always permissible, which I suspect fits better with the intuitions of most longtermists. One way we could stay below a log upper bound is if some fixed percentage of future resources are committed to reducing future s-risk as much as possible.

The practical impact Log Tolerance would have on how longtermists analyze risks is to shift from "does this produce more happiness than suffering?" to "does this produce mechanisms by which happiness can grow exponentially relative to the growth of suffering?"

notes:

* Without loss of generality we can assume $\log$ is just the natural logarithm
* I came up with this while focused on asymptotic behavior, so I'm only considering the nonnegative support of the tolerance function. I don't know how to interpret a negative tolerance, and suspect it's not useful.

## Open Questions

* Are there any messy ethical implications of log tolerance?
* I think any sublinear, monotonically nondecreasing function $f$ satisfying $\lim_{x\to\infty}\frac{f(x)}{x}=0$ would have the same nice properties; but may allow for more/less suffering, or model the marginal tradeoff rate as decreasing at different rates, etc.
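To make the contrast between the rules concrete, here is a minimal sketch (with purely illustrative coefficients, not ones anyone has endorsed) of the minimal happiness each rule demands to offset a given amount of suffering:

```python
import math

# Illustrative coefficients: c for Linear Tolerance, (a, b) for Log Tolerance.
c, a, b = 10, 0, 10

for x in [10, 100, 1000]:                 # amounts of suffering
    y_linear = c * x                      # minimal y under Linear Tolerance (grows linearly)
    y_log = math.exp((x - a) / b)         # minimal y under Log Tolerance (grows exponentially)
    print(f"x={x}: linear needs y>{y_linear}, log needs y>{y_log:.3g}")
```

The exponential blow-up of the log rule's requirement is exactly the "nigh-impossible amounts of happiness" behavior described above.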
Comment by David Reber (derber) on derber's Shortform · 2021-04-11T00:51:44.104Z · EA · GW

As I understand, the following two positions are largely accepted in the EA community:

1. Temporal position should not impact ethics (hence longtermism)
2. Neutrality against creating happy lives

But if we are time-agnostic, then neutrality against making happy lives seems to imply a preference for extinction over any future where even a tiny amount of suffering exists. So am I missing something here? (Perhaps "neutrality against creating happy lives" can't be expressed in a way that's temporally agnostic?)
Two uniform rough circular discs of moments of inertia $I_1$ and $\frac{I_1}{2}$ are rotating with angular velocities $\omega_1$ and $\frac{\omega_1}{2}$, respectively, in the same direction. One disc is now placed on the other coaxially. The change in kinetic energy of the system is:

(a) $-\frac{1}{24}I_1\omega_1^2$  (b) $\frac{1}{24}I_1\omega_1^2$  (c) $\frac{1}{12}I_1\omega_1^2$  (d) $-\frac{1}{12}I_1\omega_1^2$
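The page gives only the choices; a worked check (ours) via conservation of angular momentum about the common axis:

$$L = I_1\omega_1 + \frac{I_1}{2}\cdot\frac{\omega_1}{2} = \frac{5}{4}I_1\omega_1, \qquad \omega_f = \frac{L}{I_1 + \frac{I_1}{2}} = \frac{5}{6}\omega_1,$$

$$\Delta KE = \frac{1}{2}\left(\frac{3I_1}{2}\right)\omega_f^2 - \left(\frac{1}{2}I_1\omega_1^2 + \frac{1}{2}\cdot\frac{I_1}{2}\cdot\frac{\omega_1^2}{4}\right) = \frac{25}{48}I_1\omega_1^2 - \frac{9}{16}I_1\omega_1^2 = -\frac{1}{24}I_1\omega_1^2,$$

so option (a); the lost kinetic energy is dissipated by friction while the discs reach a common angular velocity.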
# Cofactor

Given a factor $a$ of a number $x = ab$, the cofactor of $a$ is

$$b = \frac{x}{a}.$$

For example, 3 is a factor of 12, and its cofactor is $12/3 = 4$.
# Kerodon

Definition 2.1.2.5. Let $\operatorname{\mathcal{C}}$ be a nonunital monoidal category. A unit of $\operatorname{\mathcal{C}}$ is a pair $( \mathbf{1}, \upsilon )$, where $\mathbf{1}$ is an object of $\operatorname{\mathcal{C}}$ and $\upsilon : \mathbf{1} \otimes \mathbf{1} \xrightarrow {\sim } \mathbf{1}$ is an isomorphism, which satisfies the following additional condition:

$(\ast )$ The functors

$$\operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{C}}\quad \quad C \mapsto \mathbf{1} \otimes C$$
$$\operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{C}}\quad \quad C \mapsto C \otimes \mathbf{1}$$

are fully faithful.
# 3 vector product

## Recommended Posts

what is A.B.C? where A, B, C are vectors.

##### Share on other sites

Do you mean the scalar triple product $\vec a \cdot (\vec b \times \vec c)$, or the vector triple product $\vec a \times (\vec b \times \vec c)$?

##### Share on other sites

i just mean A.B.C

##### Share on other sites

That doesn't make sense.

##### Share on other sites

actually i saw it in a book, may be a misprint then. shall wait for a few more negative answers before requesting the moderators to close.

##### Share on other sites

First off, swaha, do you understand the difference between the inner product of two vectors, $\vec a \cdot \vec b$, and the cross product, $\vec a \times \vec b$? There are two products of three vectors in three-space. I named both in post #2, perhaps a bit too tersely.

The first is the scalar triple product $\vec a \cdot (\vec b \times \vec c)$. Since the inner product is a commutative operation, this is the same as $(\vec b \times \vec c)\cdot \vec a$. One could eliminate the parentheses in these forms because $\vec a \cdot \vec b \times \vec c$ has only one viable interpretation. One geometric interpretation of this product is the volume of a parallelepiped with sides specified by the vectors $\vec a$, $\vec b$, and $\vec c$. Rearrangements (permutations) of the vectors $\vec a$, $\vec b$, and $\vec c$ might change the sign of the result, but never the absolute value.

The second triple product is the vector triple product $\vec a \times (\vec b \times \vec c)$. Unlike the scalar triple product, those parentheses are essential here. Specifying things in the right order is also essential. In other words, $\vec a \times (\vec b \times \vec c)\ne(\vec a \times \vec b) \times \vec c\ne \vec b \times (\vec a \times \vec c)$, and so on. One use of the vector triple product is to compute the component of a vector normal to another vector. Suppose $\hat a$ is a unit vector in $\mathbb R^3$ and $\vec b$ is some other vector in $\mathbb R^3$. The component of $\vec b$ normal to $\hat a$ is $\hat a \times (\vec b \times \hat a)$.

##### Share on other sites

If B and C are on the same carrier, or parallel to A, then:

A.B.C = A.(B.C) = (A.B).C = C.(B.A) = (C.B).A = B.(A.C) ........e.t.c e.t.c

Otherwise: A.(B.C) $\neq (A.B).C$

##### Share on other sites

Use the right nomenclature, please. There are two products defined for vectors in $\mathbb R^3$, the inner product and the cross product. Neither is denoted with a period.

##### Share on other sites

If B and C are on the same carrier, or parallel to A, then: A.B.C = A.(B.C) = (A.B).C = ... Otherwise: A.(B.C) $\neq (A.B).C$

why? pls explain. i think its so when they are perpendicular not parallel.

##### Share on other sites

Ignore triclino. What he wrote doesn't make sense.

Please, people. Learn to use the correct nomenclature. There are two well-defined products for 3-vectors, the scalar product denoted by a center dot, and the cross product denoted by $\times$. This doesn't make a lick of sense: $\vec a \cdot \vec b \cdot \vec c$. That can only mean triclino was talking about the cross product, and what he wrote isn't correct for that either. The correct condition under which $\vec a \times (\vec b \times \vec c) = (\vec a \times \vec b)\times \vec c$ is that $\vec c$ is parallel to $\vec a$, i.e., $\vec c = \alpha \vec a$ where $\alpha$ is some scalar. There is no constraint on $\vec b$. If all three are parallel to one another the vector triple product is identically zero for all arrangements of the factors in the product.
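As a quick numerical check of the statements above (a sketch, not part of the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))

# Scalar triple product: invariant (up to sign) under rearrangement.
stp = np.dot(a, np.cross(b, c))
print(np.isclose(stp, np.dot(np.cross(b, c), a)))        # True

# Vector triple product is NOT associative in general...
lhs = np.cross(a, np.cross(b, c))
rhs = np.cross(np.cross(a, b), c)
print(np.allclose(lhs, rhs))                             # False (generically)

# ...but equality holds when c is parallel to a:
c_par = 2.5 * a
print(np.allclose(np.cross(a, np.cross(b, c_par)),
                  np.cross(np.cross(a, b), c_par)))      # True
```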
oh thanks.

##### Share on other sites

This doesn't make a lick of sense: $\vec a \cdot \vec b \cdot \vec c$. That can only mean triclino was talking about the cross product, and what he wrote isn't correct for that either.

Why did you not ask me what I meant, but make such a fuss over minor details?? This is a physics forum and people know what a dot product is, and can very easily understand that A.(B.C) is really A(B.C), since the dot product is always a scalar. Now is it not true that if the vectors are on the same carrier or parallel, then A(B.C) = (A.B)C ???

##### Share on other sites

You left out your "otherwise" in post #7 from the above, triclino. Furthermore, using A.B for the cross product is very, very bad form. That period looks a lot more like a dot than a cross. This is not a minor detail since there are many products for vectors. For example, the inner or dot product, the cross product for vectors in 3- and 7-space, the outer product, the exterior or wedge product, etc. Each has its own symbol and none of them is denoted with a period.

##### Share on other sites

This is much easier: $\vec{a} (\vec{b} \cdot \vec{c})$

Use LaTeX to get the point across.

##### Share on other sites

$\mathbf{a} \cdot \mathbf{b} \cdot \mathbf{c}$ is not equal to $a(\mathbf{b} \cdot \mathbf{c})$, because you cannot dot a vector and a scalar; you can multiply them, but not dot them. This is kinda trivial, but the reason I say it is that if there were some type of proof or equation with a similar form, you would not be able to do an operation like this. Either way, $\mathbf{a} \cdot \mathbf{b} \cdot \mathbf{c}$ cannot work for the reason above.
MathSciNet bibliographic data
MR2549593 30C62 (20H10 30F40)
Yang, Shihai. Test maps and discrete groups in ${\rm SL}(2,C)$. Osaka J. Math. 46 (2009), no. 2, 403–409.
# Disjoint Common Transversals of Two Families of Sets

Let $E$ be a finite set. Let $d,m,n\in\mathbb N$. Let $\mathcal A:=\{A_1,\dots,A_m\}$ and $\mathcal B:=\{B_1,\dots,B_n\}$ be two families of subsets of $E$. A partial transversal of $\mathcal A$ is the image of an injective function $f$ from a subset of $\{1,\dots,m\}$ to $E$ such that for all $i$ in the domain of $f$, $f(i)\in A_i$. A common partial transversal of $\mathcal A$ and $\mathcal B$ is a subset of $E$ that is a partial transversal of both $\mathcal A$ and $\mathcal B$.

The family of partial transversals of $\mathcal A$ is (the family of independent sets of) a matroid on $E$. (See, for example, Theorem 6.5.2 of Mirsky's book, Transversal Theory.)

If one wants to determine the cardinality of the largest set that is a union of $d$ common partial transversals, is there a way of doing so by considering intersections of independent sets arising from possibly different matroid structures on possibly different sets?

The inspiration behind this is that if one has a matroid on a set and one wants to find the subset of maximum size that is a union of $d$ independent sets, one can translate this into the problem of finding the subset of maximum size that is independent in two different matroid structures on a different ground set. (See, for example, the end of $\S 6$ of Chapter 8 of Lawler's book, Combinatorial Optimization: Networks and Matroids.)
# Intro1 - Homework

Vishal Bakshi
Tuesday, August 25, 2020

## Batch Size

I changed the batch size for one of the image classifiers we trained in the intro notebook to this course, and noticed that the valid_loss value changed significantly. I'll run the image classifier models to show you what I mean:

In [5]:

```python
from fastai.vision.all import *

path = untar_data(URLs.PETS)/'images'

def is_cat(x): return x[0].isupper()

log = {'pets':         {2: 0, 4: 0, 8: 0, 16: 0},
       'segmentation': {2: 0, 4: 0, 8: 0, 16: 0}}

for i in range(4):
    batch_size = 2**(i+1)
    # constructor line restored from the garbled export; it assumes the
    # course notebook's standard loader with the batch size threaded through
    dls = ImageDataLoaders.from_name_func(
        path, get_image_files(path), bs=batch_size, valid_pct=0.2, seed=42,
        label_func=is_cat, item_tfms=Resize(224))
    learn = cnn_learner(dls, resnet34, metrics=error_rate)
    learn.fine_tune(1)
    log['pets'][batch_size] = learn.final_record[1]
```

epoch train_loss valid_loss error_rate time
0 0.671831 0.144316 0.050744 01:51
epoch train_loss valid_loss error_rate time
0 0.646514 0.352912 0.138024 02:21
epoch train_loss valid_loss error_rate time
0 0.418658 0.077647 0.024357 01:00
epoch train_loss valid_loss error_rate time
0 0.179817 0.050372 0.011502 01:13
epoch train_loss valid_loss error_rate time
0 0.246792 0.049255 0.017591 00:32
epoch train_loss valid_loss error_rate time
0 0.125174 0.023476 0.006089 00:43
epoch train_loss valid_loss error_rate time
0 0.137739 0.052795 0.014208 00:24
epoch train_loss valid_loss error_rate time
0 0.057923 0.013332 0.003383 00:33

In [6]:

```python
path = untar_data(URLs.CAMVID_TINY)

for i in range(4):
    batch_size = 2**(i+1)
    # constructor line restored from the garbled export; the original may have
    # passed additional arguments (e.g. codes=) that did not survive extraction
    dls = SegmentationDataLoaders.from_label_func(
        path, bs=batch_size, fnames=get_image_files(path/"images"),
        label_func=lambda o: path/'labels'/f'{o.stem}_P{o.suffix}')
    learn = unet_learner(dls, resnet34)
    learn.fine_tune(8)
    log['segmentation'][batch_size] = learn.final_record[1]
```

epoch train_loss valid_loss time
0 1.753514 1.250025 00:04
epoch train_loss valid_loss time
0 1.133678 1.048806 00:03
1 1.102538 1.008577 00:03
2 0.981480 0.827040 00:03
3 0.845606 0.736060 00:03
4 0.742655 0.778525 00:03
5 0.644585 0.667181 00:03
6 0.571420 0.669122 00:03
7 0.525055 0.656813 00:03
epoch train_loss valid_loss time
0 2.513621 2.418835 00:03
epoch train_loss valid_loss time
0 1.627781 1.192425 00:02
1 1.382911 1.021066 00:02
2 1.208196 0.872863 00:02
3 1.083026 0.861396 00:02
4 0.955392 0.667616 00:02
5 0.841550 0.637717 00:02
6 0.751018 0.609362 00:02
7 0.684736 0.602068 00:02
epoch train_loss valid_loss time
0 2.916275 2.496579 00:03
epoch train_loss valid_loss time
0 2.078728 1.570321 00:01
1 1.735551 1.315718 00:01
2 1.524138 1.064513 00:01
3 1.349213 0.929341 00:01
4 1.203711 0.811217 00:01
5 1.077349 0.751397 00:01
6 0.973786 0.733066 00:01
7 0.896015 0.722129 00:01
epoch train_loss valid_loss time
0 3.222250 2.311272 00:02
epoch train_loss valid_loss time
0 2.168895 1.820541 00:01
1 1.927316 1.797157 00:01
2 1.764727 1.526917 00:01
3 1.617956 1.200572 00:01
4 1.495027 1.048440 00:01
5 1.390191 0.985106 00:01
6 1.297539 0.928951 00:01
7 1.223987 0.911800 00:01

In [8]:

```python
temp_log = log.copy()
```

## Pets Image Classification Learner

For the pets dataset, the error rate decreases as batch size increases.

In [19]:

```python
plt.plot(log['pets'].keys(), log['pets'].values())
```

Out[19]:

[<matplotlib.lines.Line2D at 0x7fd195e9bf10>]

## Segmentation Learner

For the segmentation learner, the best performance was for a batch size of 4 images.
In [21]:

```python
plt.plot(log['segmentation'].keys(), log['segmentation'].values())
```

Out[21]:

[<matplotlib.lines.Line2D at 0x7fd195e7ebe0>]

The next day, I came across a very relevant fast.ai forum post which spoke of various reasons that smaller batch sizes may result in a lower loss on the validation set, which led me to find this exchange on the same question, which referenced the following excerpt from a paper on this topic:

In this paper, we present ample numerical evidence that supports the view that large-batch methods tend to converge to sharp minimizers of the training and testing functions -- and that sharp minima lead to poorer generalization.

What I understood from that paper after a very quick skim was that larger batch sizes are sensitive to sharp minima and can't find their way out quickly like smaller batches can. One result of this is that the model does not generalize well. A sharp minimum means very few x-values correspond to a y-value close to the minimum. A smooth or flat minimum means many x-values correspond to a y-value close to the minimum. As your model is tested with new data, there's a wider breadth of inputs that can lead to the ideal output. A model trained on a sharp minimum is picky; a model trained on a smooth minimum is less picky.

The paper ended with a series of questions for next steps and my favorite one was:

(e) is it possible, through algorithmic or regulatory means to steer LB methods away from sharp minimizers?

And I'll have to go and find what's already out there on this!

As I learn the fastai library piece by piece, I learn more about how to program a learner. I decided to figure out what was in dls, the return value of SegmentationDataLoaders.from_label_func(). It turns out that dls is an iterable that holds two DataLoaders: one for the training set, and one for the validation set:

In [24]:

```python
dls[0] == dls.train
```

Out[24]:

True

In [25]:

```python
dls[1] == dls.valid
```

Out[25]:

True

Each batch from a DataLoader holds the batch size number of images (recall that the final learner was trained with a batch size of 16):

In [31]:

```python
i = 0
for dl in dls[0]:
    img = dl[0][15]
    print('length of dataloader:', len(dl[0]))  # restored: matches the output below
    print('tensor:', img.size())
    print('tensor.permute(1,2,0):', img.cpu().permute(1,2,0).size())
    plt.imshow(img.cpu().permute(1,2,0))
    i += 1
    if i == 1: break
```

Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
length of dataloader: 16
tensor: torch.Size([3, 96, 128])
tensor.permute(1,2,0): torch.Size([96, 128, 3])

The TensorImage has to be restructured a bit before it fits the shape that plt.imshow is expecting for RGB data: (rows, columns, 3). Since we want to plot horizontal images, the rows (96 of them) are the first input and the columns (128 of them) are the second. This is why the inputs to permute are 1, 2, 0. The original order is 3, 96, 128 and the corresponding indices are 0, 1, 2. The 0th value (3) is sent to the end, and the 1st (96) and 2nd (128) values each move up one position. 3, 96, 128 gets transformed to 96, 128, 3. If 96 and 128 were flipped, all of the images would be vertical:

In [33]:

```python
i = 0
for dl in dls[0]:
    img = dl[0][15]
    print('length of dataloader:', len(dl[0]))  # restored, as above
    print('tensor:', img.size())
    print('tensor.permute(2,1,0):', img.cpu().permute(2,1,0).size())
    plt.imshow(img.cpu().permute(2,1,0))
    i += 1
    if i == 1: break
```

Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
length of dataloader: 16
tensor: torch.Size([3, 96, 128])
tensor.permute(2,1,0): torch.Size([128, 96, 3])

Lastly, the image looks burnt, meaning it's likely that the image data is not normalized. Thanks to ptrblck's consistently helpful replies in the pytorch forums I was able to fix that:

In [35]:

```python
i = 0
for dl in dls[0]:
    img = dl[0][15]
    img -= img.min()   # shift the minimum to 0
    img /= img.max()   # scale the maximum to 1
    print('length of dataloader:', len(dl[0]))  # restored, as above
    print('tensor:', img.size())
    print('tensor.permute(1,2,0):', img.cpu().permute(1,2,0).size())
    plt.imshow(img.cpu().permute(1,2,0))
    i += 1
    if i == 1: break
```

length of dataloader: 16
tensor: torch.Size([3, 96, 128])
tensor.permute(1,2,0): torch.Size([96, 128, 3])

In [ ]:
# Why is hexadecimal used in binary model files?

20 replies to this topic

### #1 gchris6810 Members
Posted 07 July 2013 - 12:42 PM

Hi, I am trying to write a Blender exporter for my game engine's data. When I look at other exporters they often use hexadecimal to identify the different chunks and sub-chunks in the binary file. Why is hexadecimal used and what does it represent in the file?

### #2 Brother Bob Moderators
Posted 07 July 2013 - 12:47 PM

There's nothing special about a hexadecimal value, but its representation is convenient because each digit in a hexadecimal number is exactly 4 bits. Thus, each pair of hexadecimal digits represents 8 bits, or exactly a whole byte. That is, the hexadecimal value 0x1234 represents the byte sequence {0x12, 0x34}, assuming big-endian storage, but as a value it is no different from the decimal value 4660.

### #3 Servant of the Lord Members
Posted 07 July 2013 - 02:28 PM

To add onto what Brother Bob says, since memory is laid out mostly in powers of two*, and 16 is a power of two, it is very convenient to describe memory-related values in hexadecimal.

1 byte = 2 digit hex exactly
2 bytes = 4 digit hex exactly
4 bytes = 8 digit hex exactly

Binary would work just as well... but it is too wordy. 11101010 11010101 10010011 11011101 versus just 0xEAD593DD. It's a lot more compact to display. Decimal is also a lot more compact to display (3939865565), but it doesn't line up with the powers of two like binary and hexadecimal do. Some people also use octal (base 8), though I don't see that all too often.

The final benefit of hexadecimal is that you can spell words in it: 0xDEADC0DE

*The basic units of memory (bytes) don't have to be laid out in powers of two, but they almost always are nowadays. Example: 32-bit PCs and 64-bit PCs. Sure, there are some weird systems where bytes are 10 bits, but hey.

Edited by Servant of the Lord, 07 July 2013 - 02:32 PM.

### #4 marcClintDion Members
Posted 07 July 2013 - 03:07 PM

The final benefit of hexadecimal is that you can spell words in it: 0xDEADC0DE

Funny!

Here are some pages that deal with what you are working on, and they use a format that is a little more human readable, I think anyways (decimal). They aren't entirely complete but they may be the start you need.

http://stackoverflow.com/questions/13327379/how-to-export-per-vertex-uv-coordinates-in-blender-export-script
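Brother Bob's point that hex digits map cleanly onto bytes, and that only the notation differs, is easy to see in code; a small sketch (not from the thread):

```python
import struct

value = 0x1234                      # the same number as decimal 4660
print(value == 4660)                # True: the base only changes the notation

big    = struct.pack('>H', value)   # big-endian 16-bit: b'\x12\x34'
little = struct.pack('<H', value)   # little-endian 16-bit: b'\x34\x12'
print(big.hex(), little.hex())      # 1234 3412

# Each byte is exactly two hex digits:
print(' '.join(f'{b:02X}' for b in big))   # 12 34
```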
### #5 gchris6810 Members
Posted 08 July 2013 - 12:44 AM

Thanks for the great replies. There still is something that I don't get. When hexadecimal is used to describe the chunks of a binary format, does it represent the size of each chunk in memory?

### #6 MarkS Members
Posted 08 July 2013 - 02:43 AM

Binary files don't have "chunks". I'm more than a little confused by that. A binary file is a string of bytes that spans the length of the file. The only time I've seen "chunks" was in a human-readable format. Typically a binary file will be formatted to have a header with offsets to the various data, and that header will be immediately followed by the file data. What a binary (hexadecimal) string represents to the file loader is entirely defined by the file format.

Here is a binary file. It is a simple 5x5 pixel TGA file. The first 18 bytes are the file header and the image data starts at byte 19 (the 00 immediately following the 08).

00 00 02 00 00 00 00 00 00 00 00 00 05 00 05 00 20 08
00 00 00 FF 00 00 00 FF 00 00 00 FF 00 00 00 FF 00 00 00 FF
00 00 00 FF 00 00 00 FF 00 00 00 FF 00 00 00 FF 00 00 00 FF
00 00 00 FF 00 00 00 FF 00 00 00 FF 00 00 00 FF 00 00 00 FF
00 00 00 FF 00 00 00 FF 00 00 00 FF 00 00 00 FF 00 00 00 FF
00 00 00 FF 00 00 00 FF 00 00 00 FF 00 00 00 FF 00 00 00 FF

The "02" in the header tells the loader that this is an uncompressed, true-color image. The first "05 00" defines a 16-bit word that tells the loader the width of the image; in this case, 5 pixels. The second "05 00" defines a 16-bit word that tells the loader the height of the image. The "20" that follows is hexadecimal for 32, telling the loader that each pixel in the pixel data is 32 bits (4 bytes). The "08" is an encoded byte, whose bits tell the loader if the image is flipped, as well as the number of alpha channel bits. The other bytes that I did not explain have special meaning as well, in certain cases, but are unused in this file. The image data is stored as BGRA (blue, green, red, alpha) and in this case, each pixel has the values "00 00 00 FF".

If you are seeing anything other than raw bytes (shown here as hexadecimal) in a file, it isn't a binary file.

Edited by MarkS, 08 July 2013 - 02:56 AM.
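MarkS's byte-by-byte walkthrough maps directly onto a few struct calls; a minimal sketch (not from the thread) that reads the header fields he describes:

```python
import struct

# The 18-byte TGA header from the post above.
header = bytes.fromhex(
    '00 00 02 00 00 00 00 00 00 00 00 00 05 00 05 00 20 08'.replace(' ', ''))

# '<' = little-endian: TGA stores its 16-bit fields least-significant byte first.
(id_len, cmap_type, image_type, cmap_first, cmap_len, cmap_bpp,
 x_origin, y_origin, width, height, bpp, descriptor) = struct.unpack('<BBBHHBHHHHBB', header)

print(image_type)     # 2  -> uncompressed true-color
print(width, height)  # 5 5
print(bpp)            # 32 -> 4 bytes per pixel, stored as BGRA
print(descriptor)     # 8  -> 8 alpha bits; origin flags live in the upper bits
```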
So I resolved to make my own (binary) format, and after looking at a few example Python exporters (3DS, FBX) I found hex was often used, and I needed to know exactly how the exporter worked to be able to produce one myself that wasn't just a copy. I hope this is enough explanation.

### #11 Brother Bob  Moderators Posted 08 July 2013 - 09:58 AM

As stated earlier, a hexadecimal value is nothing more than a value. You can use decimal values instead if you like; there is nothing special about hexadecimal values, just their textual representation. But as I suspected, it sounds like your question is not why hexadecimal values are used, but why specific hexadecimal values are used. It is not a question of why, for example, 0x1234 is used instead of 4660, but why the value 0x1234, or equivalently 4660, was used in the first place and what it actually means. That can only be determined by studying the format the code is written to export, which states what data has to be written where in the file.

### #12 gchris6810  Members Posted 08 July 2013 - 10:04 AM

Okay, thanks. I think, as you said, I'll look further into other file formats to ascertain exactly how a binary file is made.

Edited by gchris6810, 08 July 2013 - 10:04 AM.

### #13 Servant of the Lord  Members Posted 08 July 2013 - 12:10 PM

Binary files don't have "chunks". I'm more than a little confused by that. A binary file is a string of bytes that span the length of the file. The only time I've seen "chunks" was in a human readable format.

The terms 'chunks' or 'sections' are used to describe different portions of some binary file formats. In your TGA file example, the file format might call the header portion of bytes the "header chunk", and the pixel portion of bytes the "pixel chunk" (for example). Some binary file formats even allow their 'chunks' to be put in whatever order, but use code values to identify them so you know how to process them.

### #14 MarkS  Members Posted 08 July 2013 - 12:17 PM

Binary files don't have "chunks". I'm more than a little confused by that. A binary file is a string of bytes that span the length of the file. The only time I've seen "chunks" was in a human readable format.

The terms 'chunks' or 'sections' are used to describe different portions of some binary file formats. In your TGA file example, the file format might call the header portion of bytes the "header chunk", and the pixel portion of bytes the "pixel chunk" (for example). Some binary file formats even allow their 'chunks' to be put in whatever order, but use code values to identify them so you know how to process them.

I thought of that, but it didn't seem to be what he is asking. I'm fairly certain that he is looking at a human readable file format with hexadecimal tags.

### #15 gchris6810  Members Posted 08 July 2013 - 01:23 PM

No, I don't think that's what I'm looking for. The file format I was referencing was the 3DS file format which, now that I have looked into it, seems to be quite unique in using chunks identified by hexadecimal. What I was trying to find out was why certain hex numbers were used. In the 3DS file format spec the primary chunk is 0x4D4D, which seems quite random. Why is this used?
Here is the link to the spec if necessary: http://www.martinreddy.net/gfx/3d/3DS.spec. Thanks.

### #16 Brother Bob  Moderators Posted 08 July 2013 - 01:40 PM

The document you linked is not the specification itself, but an apparently reverse-engineered documentation. It explicitly says in the link that the specification had (at the time) not been released. If you want to know the decision behind the choice of using the value 19789 for the main chunk, then, if it's not written anywhere, you have to ask the authors of the original format (and, as I implied in the first paragraph, the link is not the specification and thus not by the original authors). Perhaps there was a reason for that particular value, or perhaps it was just random.

Now, some formats do encode properties in the chunk name, so it is not an unreasonable question to ask. But the specification should note that if the information is important. Otherwise the number is just arbitrary.

### #17 MarkS  Members Posted 08 July 2013 - 01:42 PM

I see now. These values may be arbitrary, or more likely, they are chosen because they are unlikely/less likely to show up as data values.

I was doing several things at once and typed slowly. Brother Bob beat me to it.

Edited by MarkS, 08 July 2013 - 01:43 PM.

### #18 Bregma  Members Posted 08 July 2013 - 01:46 PM

While the numbers do look fairly arbitrary, they do seem systematic. I'd like to point out that the code for the main chunk (0x4D4D) is 'MM' in ASCII. Some other codes are likewise two-letter ASCII combinations, but not all are.

Stephen M. Webb Professional Free Software Developer

### #19 Servant of the Lord  Members Posted 08 July 2013 - 03:10 PM

Looking here, assuming this is the same format you're talking about, it seems some of the numbers are spaced so as to provide future expansion or revisions. For example:

0x4000 // Object Block
│ │ ├─ 0x4100 // Triangular Mesh
│ │ │ ├─ 0x4110 // Vertices List
│ │ │ ├─ 0x4120 // Faces Description
│ │ │ │ ├─ 0x4130 // Faces Material
│ │ │ │ └─ 0x4150 // Smoothing Group List
│ │ │ ├─ 0x4140 // Mapping Coordinates List
│ │ │ └─ 0x4160 // Local Coordinates System
│ │ ├─ 0x4600 // Light
│ │ │ └─ 0x4610 // Spotlight
│ │ └─ 0x4700 // Camera

You'll notice they leave spaces for future blocks, and I'd bet that the lowest digit (0x414X) is for future versions of the same block. Once they start putting out files with numbers, those numbers are locked in stone. They can add new numbers, but they can't reuse old numbers or it'll break backwards compatibility. If they need an arbitrary number for identification, e.g. to identify a "light" chunk (0x4600) of the object block (0x4000), then they might as well space out their numbers enough to leave room for additional chunk types, additional subchunks (like Spotlight - 0x4610), and versions. I'm speculating that the final digit is for versions.

See FourCC (did someone already post that? I thought someone did).

"In 1985, Electronic Arts introduced the Interchange File Format (IFF) meta-format (family of file formats), originally devised for use on the Amiga. These files consisted of a sequence of "chunks" which could contain arbitrary data, each chunk prefixed by a four-byte ID. The IFF specification explicitly mentions that the origins of the FourCC idea lie with Apple. ... Other file formats that make important use of the four-byte ID concept are the Standard MIDI File Format, the PNG image file format, the 3DS (3D Studio Max) mesh file format and the ICC profile format."
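To see how such a chunk ID actually lands in a file, here is a small hedged Python sketch (illustrative; it loosely follows the 3DS convention of a 2-byte ID followed by a 4-byte chunk length, as described in the linked document):

import struct

MAIN3DS = 0x4D4D  # reads as the two ASCII bytes "MM"
print(MAIN3DS == (ord("M") << 8) | ord("M"))  # True

def write_chunk(out, chunk_id, payload):
    # 2-byte ID, then 4-byte total length (6-byte header + payload), little endian
    out.write(struct.pack("<HI", chunk_id, 6 + len(payload)))
    out.write(payload)

with open("example.3ds.part", "wb") as f:
    write_chunk(f, MAIN3DS, b"")  # an empty main chunk, just for illustration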
It also says, "Four byte identifiers are useful because they can be made up of four human-readable characters with mnemonic qualities, while still fitting in the four byte memory space typically allocated for integers in 32-bit systems (although endian issues may make them less readable). Thus, the codes can be used efficiently in program code as integers as well as giving cues in binary data streams when inspected."

So, when inspecting a binary file in a hex editor, it's easier to visually spot the chunk identifiers. See 0xDEADC0DE again. This kind of thing is useful for debugging. Imagine trying to figure out where you are in a pile of hex values in RAM in Microsoft Visual Studio, and suddenly you see 0xBAADF00D. You know the memory was A) allocated on the heap and B) never initialized properly.

Edited by Servant of the Lord, 08 July 2013 - 03:20 PM.

### #20 kburkhart84  Members Posted 08 July 2013 - 03:57 PM

An example of binary models I've worked with in the past is the MD2 model format, used originally by Quake 2. The "header" section for all of the models would start with 4 bytes, which is a single int, and as 4 characters was "IDP2". ID and '2' make sense, but I don't know what the P was for. Then the next 4 bytes must be an integer value '8'. This supposedly is the MD2 version, but I don't think it was ever used for anything. Then, other parts of the header are things like the number of frames, uvs, vertices, etc... Also included are offsets that say how far to move the "file cursor" to get to certain things in the file.

I've also worked a bit with the OBJ file format. It is a text format, but you can learn something from it which could apply to binary formats. Instead of having offsets to "chunks", each line simply has a one- or two-letter intro that says what the line is. So 'v' is a vertex, 'vt' is a uv coordinate, and 'vn' is a normal. These things are basically lists of vertices, etc., and then you have a list of faces which index into those lists, so a triangle could be 5/4/1, 6/2/3, 4/1/2, which means that the vertex positions would be the 5th, 6th, and 4th vertex in the list ('v'), the uvs would be the 4th, 2nd, and 1st set in the list ('vt'), and the normals would be the 1st, 3rd, and 2nd in the list ('vn'). That is how you would reconstruct the faces from the lists.

I have also created a bit of software for GameMaker which converts a series of OBJ files into my own binary format. The "header" simply says how many frames the file has, and how many faces there are in each one. Instead of storing a series of vertices, I store the faces themselves, so the file may be slightly larger than it has to be, but it is easier to read and convert into data for GameMaker. Instead of using offsets, you simply read the amount of bytes you need, and so each frame follows the previous in the binary file.

The thing to understand about file formats is that they can contain basically whatever you need. You choose whatever you want to be in them, and as long as whoever is doing the reading knows the format, they should be able to read it.
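Checking that kind of magic number is a one-liner. A hedged Python sketch (the "IDP2" magic and version 8 are as described in the post; the rest of the MD2 header is omitted):

import struct

def looks_like_md2(data):
    # first 4 bytes are the magic "IDP2", next 4 are a 32-bit version that should be 8
    ident, version = struct.unpack("<4si", data[:8])
    return ident == b"IDP2" and version == 8

print(looks_like_md2(b"IDP2" + struct.pack("<i", 8)))  # True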
Some formats store things based on offsets, while others simply assume that you are going to read through the whole thing sequentially, and therefore don't need offsets.

My honest opinion is that you are likely better off just using a known format, the one that best fits your needs. I'm not saying to blatantly waste space, but on modern PCs a bit of waste in media files most likely won't hurt anything. And you'll save in the long run by saving your time, which could be better spent on actual game design than on creating file formats and exporters.
music + math

WELCOME TO THE 5TH DIMENSION | This isn't meant to be understood — it's meant to be enjoyed.

Love music and science? Explore my collaboration with Max Cooper where we tell the story of infinities and animate the digits of π. Both tracks appear on Max's Yearning for the Infinite album. Another collaboration with Max!

# Max Cooper's Ascent — Making of the Music video

## Enter the 5th dimension

Ascent answers the question: if you were living in a 5-dimensional room and projected digits of $\pi$ onto its walls, what would you see?

## 1 · How it started — prototype scenes

Here, I show some early prototype scenes generated from the animation system during development and testing. There was a lot of testing. These scenes are short and evolve slowly. They were built from keyframes (though fewer of them) in the same way as the final Ascent video. The animations here have no audio.

## 2 · From one to many dimensions

A cube evolves from 2 to 8 dimensions. The colored lines in the center show the unit axes. This scene served as the inspiration for the start of the Ascent video. The scene ends with the dimensions shrinking back to zero, one at a time. Notice the variety in the complexity of the projected scene as we rotate through various angles.

MAX COOPER'S ASCENT PROTOTYPES | growth of dimensions (9:35)

## 3 · Attack of the toothpicks

One of my favourite scenes. Cubes are added to the scene as the camera zooms in. The lines are formed by the area maps of digits of $\pi$ projected onto faces of the cubes. Each scene evolves with one additional dimension added.

MAX COOPER'S ASCENT PROTOTYPES | lines (2:07)

## 4 · Plenty of corners for punishment

MAX COOPER'S ASCENT PROTOTYPES | corners (2:00)

## 5 · Mixed bag

A variety of short scenes in black-and-white and color. Rectangles correspond to area maps on the faces of the cube, color-coded by digit.

MAX COOPER'S ASCENT PROTOTYPES | mix (1:27)

## 6 · Snowstorm

Area maps projected onto cubes with transparency encoding the z-position (distance from camera).

MAX COOPER'S ASCENT PROTOTYPES | snowstorm (2:00)

## 7 · 2-hour color chill

A long and slow mix of various color scenes.

MAX COOPER'S ASCENT PROTOTYPES | 2-hour color remix (2:03:19)

## 8 · 1-hour monochrome chill

A long and slow mix of various black-and-white scenes.

MAX COOPER'S ASCENT PROTOTYPES | 1-hour black-and-white remix (1:00:02)

news + thoughts

# Cell Genomics cover

Mon 16-01-2023

Our cover on the 11 January 2023 Cell Genomics issue depicts the process of determining parent-of-origin using differential methylation of alleles at imprinted regions (iDMRs), imagined as a circuit. Designed in collaboration with Carlos Urzua.

Our Cell Genomics cover depicts parent-of-origin assignment as a circuit (volume 3, issue 1, 11 January 2023). (more)

Akbari, V. et al. Parent-of-origin detection and chromosome-scale haplotyping using long-read DNA methylation sequencing and Strand-seq (2023) Cell Genomics 3(1).

Browse my gallery of cover designs. A catalogue of my journal and magazine cover designs. (more)

# Science Advances cover

Thu 05-01-2023

My cover design on the 6 January 2023 Science Advances issue depicts DNA sequencing read translation in high-dimensional space. The image shows how 672 bases of sequencing barcodes, generated by three different single-cell RNA sequencing platforms, were encoded as oriented triangles on the faces of three 7-dimensional cubes.
My Science Advances cover that encodes sequence onto hypercubes (volume 9, issue 1, 6 January 2023). (more)

Kijima, Y. et al. A universal sequencing read interpreter (2023) Science Advances 9.

Browse my gallery of cover designs. A catalogue of my journal and magazine cover designs. (more)

# Regression modeling of time-to-event data with censoring

Mon 21-11-2022

If you sit on the sofa for your entire life, you're running a higher risk of getting heart disease and cancer. —Alex Honnold, American rock climber

In a follow-up to our Survival analysis — time-to-event data and censoring article, we look at how regression can be used to account for additional risk factors in survival analysis. We explore accelerated failure time regression (AFTR) and the Cox Proportional Hazards model (Cox PH).

Nature Methods Points of Significance column: Regression modeling of time-to-event data with censoring. (read)

Dey, T., Lipsitz, S.R., Cooper, Z., Trinh, Q., Krzywinski, M. & Altman, N. (2022) Points of significance: Regression modeling of time-to-event data with censoring. Nature Methods 19.

# Music video for Max Cooper's Ascent

Tue 25-10-2022

My 5-dimensional animation sets the visual stage for Max Cooper's Ascent from the album Unspoken Words. I have previously collaborated with Max on telling a story about infinity for his Yearning for the Infinite album.

I provide a walkthrough of the video, describe the animation system I created to generate the frames, and show you all the keyframes.

Frame 4897 from the music video of Max Cooper's Ascent.

The video recently premiered on YouTube. Renders of the full scene are available as NFTs.

# Gene Cultures exhibit — art at the MIT Museum

Tue 25-10-2022

I am more than my genome and my genome is more than me.

The MIT Museum reopened at its new location on 2nd October 2022. The new Gene Cultures exhibit featured my visualization of the human genome, which walks through the size and organization of the genome and some of the important structures.

My art at the MIT Museum Gene Cultures exhibit shows the scale and structure of the human genome. Pay no attention to the pink chicken.

# Annals of Oncology cover

Wed 14-09-2022

My cover design on the 1 September 2022 Annals of Oncology issue shows 570 individual cases of difficult-to-treat cancers. Each case shows the number and type of actionable genomic alterations that were detected and the length of therapies that resulted from the analysis.

An organic arrangement of 570 individual cases of difficult-to-treat cancers showing genomic changes and therapies. Appears on the Annals of Oncology cover (volume 33, issue 9, 1 September 2022).

Pleasance, E. et al. Whole-genome and transcriptome analysis enhances precision cancer treatment options (2022) Annals of Oncology 33:939–949.

My Annals of Oncology 570 cancer cohort cover (volume 33, issue 9, 1 September 2022). (more)

Browse my gallery of cover designs. A catalogue of my journal and magazine cover designs. (more)
# Why do we use dG < 0 to describe a spontaneous process?

With something like a reaction or phase change, all sources use the criterion that $dG < 0$ for the reaction to be spontaneous, and then substitute an appropriate expression for $dG$ for the specific application. For a phase change my book states $(\mu'-\mu'')dn' < 0$, where $'$ is phase A and $''$ is phase B, $\mu$ is the chemical potential, and $dn' > 0$ means a gain of molecules in phase A.

However, reactions require activation energy and phase changes require nucleation (I think this is kind of like an activation energy); both of these processes require an increase in $G$ before a larger decrease. Why do we not use $\Delta G$ (between initial and final equilibrium states) instead?

Edit: I am confused by the use of $dG$ (a differential) as opposed to $\Delta G$ (a net change between end states), because I'm not sure how one would integrate this for, say, a phase change, since it must pass through a potential barrier where $G$ must increase. Is the integral just $G_2-G_1$ between the end states, with the process thermodynamically ignoring the barrier (since thermodynamics deals with equilibrium only)? A typical thermodynamic process I'm imagining is quasistatic compression, where we can get work from the integral of $P\,dV$ and $P$ is defined as the system goes through an infinite number of equilibrium states; I'm not sure if it's possible to draw a parallel here for free energy and phase change/reacting systems.

Edit 2: From this graph it looks like $G$ increases then decreases to stable equilibrium after the phase change. If we find $dG/dr$ from this graph and integrate $dG(r)$, should we get the same $\Delta G$ as from thermodynamics (before and after phase equilibrium)? If so, this seems to me like finding the work of a rapid piston-cylinder compression of an insulated system; $P\,dV$ is undefined since we are not passing through a set of equilibrium states, but we can still find work from $\Delta U$. The difference is that in this case our initial state is technically not in equilibrium/stable, since the system favours a phase change, so we are beginning from a state that technically does not exist on the phase diagram (like if we have a supercooled/superheated liquid).

• Activation barriers are kinetic phenomena. In pure thermodynamic considerations they are not included. There, only the free energies of the initial and the final state are considered. The path your system takes from initial to final is of no concern. Oct 16, 2014 at 20:35
• Definitely +1 to Philipp here... I would say though that you can compare the transition complex thermodynamically... but as I said in the other question it is a mix-up between kinetics and thermodynamics :) Oct 16, 2014 at 20:45
• @Philipp I suspected that I was mixing up concepts. My confusion is that thermodynamics deals with equilibrium initial/final states (compressing a piston between two equilibrium states, etc.). However, phase transitions for instance only happen when the system is out of equilibrium (i.e. a supercooled liquid); how do we define an initial equilibrium state for such a case? Oct 16, 2014 at 21:46
• @Yandle When judging the spontaneity of a process from the thermodynamics perspective you compare $G$ for an initial equilibrium state to $G$ for a final equilibrium state. "Usual" thermodynamics is not able to describe non-equilibrium states.
Oct 16, 2014 at 22:00
• @Philipp I am confused by the fact that dG = 0 when a system is in equilibrium, but dG is not 0 when the system, say, has too much of phase A relative to phase B (the chemical potentials do not balance). In that sense, I am confused about how the initial state is in equilibrium. Oct 16, 2014 at 22:03

Maybe it's best to see how we generally derive the Gibbs energy... If you think about this then the answer should be clear ;)

The Gibbs potential is a function of $P$, $T$ and $N$. That is to say:
$$G=G(T,P,N)$$
Where $T$ is the temperature, $P$ is the pressure and $N$ is the number of particles.
$$dG=\frac{\partial G}{\partial P}\bigg|_{T,N}dP+\frac{\partial G}{\partial T}\bigg|_{P,N}dT+\sum _{i=1}^{I}\frac{\partial G}{\partial N_i}\bigg|_{P,T,N_{j\neq i}}dN_i$$
Use the standard identities $(\partial G/\partial P)_{T,N}=V$ and $(\partial G/\partial T)_{P,N}=-S$ (look these up):
$$dG=VdP-SdT+\sum ^{I}_{i=1}\mu _i dN_i$$
At constant temperature and pressure this reduces to:
$$dG=\sum _{i=1}^{I}\mu _idN_i$$
Therefore you are actually just expressing the Gibbs energy in a different way when your book quotes the chemical potential. In fact the very definition of the chemical potential is:
$$\mu _i=\bigg(\frac{\partial G}{\partial N_i}\bigg)_{P,T,N_{j\neq i}}$$
I don't know your mathematical ability, but if it helps... it's pretty clear that this is just a simple Legendre transform of the internal energy to switch between $U$ and $G$.

(As an aside), since $G=\sum_i \mu_i N_i$ for an extensive system (Euler's theorem), a little product ruling gives:
$$dG=\sum _{i=1}^{I}\big(\mu _idN_i+N_id\mu _i\big)$$
This leaves us with the Gibbs-Duhem relation:
$$\sum ^{I}_{i=1}N_id\mu_i=-SdT+VdP$$
Which is a handy result.

Also, just to add on the end here about nucleation points... let's imagine precipitation in a solution... As the size of the nucleus grows (as more bits "clump" together) the Gibbs energy of the particle will rise... this is due to two competing effects: 1) the interface interaction (increasing energy) and 2) the volume free energy (decreasing energy). The total energy is just the resultant of these processes... at small sizes the surface interactions outcompete the volume energy liberation... as the radius of the particle increases we see a switch.

• I follow the math, but it's the logic I'm having trouble with. I edited my original question to make it more clear. Oct 17, 2014 at 5:50
• Is this first or second year university chemistry? (I read about chemistry in my spare time and don't think I've ever come across this, so I'm a little curious). Oct 17, 2014 at 6:35
• @SherlockHolmes This is usually introduced in 1st year and expanded upon in second year... at least in the UK. Oct 17, 2014 at 10:02
• A really good book on classical thermodynamics is by Finn... I read that before I tackled statistical mechanics... It helped a lot. Oct 17, 2014 at 10:07
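To spell out the Legendre transform the answer alludes to (standard textbook material, added for completeness):
$$G = U - TS + PV \quad\Rightarrow\quad dG = dU - T\,dS - S\,dT + P\,dV + V\,dP$$
Substituting the fundamental relation $dU = T\,dS - P\,dV + \sum_i \mu_i\,dN_i$ collapses this to
$$dG = V\,dP - S\,dT + \sum_{i=1}^{I} \mu_i\,dN_i,$$
which is exactly the differential used above.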
# Why integrate on cubes that are not injective?

Again, this is a conceptual (soft) problem I had while reading Spivak's Calculus on Manifolds. There, to develop the theory of integration, Spivak chose to integrate k-forms on singular cubes. However, as pointed out here, singular cubes can collapse dimension, so it seems theoretically possible that one pulls back a differential form onto a higher-dimensional cubical region to evaluate the integral, and I wonder when that could ever be useful. Why don't we just add the assumption that singular cubes are injective? Also, without this injectivity I find the picture of chains even less geometrically intuitive...
### Development of a sensitive GC-C-IRMS method for the analysis of androgens in doping control

Michaël Polet (UGent), Wim Van Gansbeke (UGent) and Peter Van Eenoo (UGent) (2012) p.21-21

abstract

All doping control laboratories accredited by the World Anti-Doping Agency (WADA) have been confronted with the task of developing analytical methods and establishing criteria that allow endogenous steroids to be distinguished from their synthetic copies. It has been known for some time that gas chromatography-combustion-isotope ratio mass spectrometry (GC-C-IRMS) is capable of meeting this challenge by comparison of the 13C/12C ratios of the target compounds with those of endogenous reference compounds that are not affected by the administration of synthetic androgens. Testosterone and/or its main metabolites function as target compounds. Typical endogenous reference compounds include 5β-pregnanediol, 11-ketoetiocholanolone and 11β-hydroxyandrosterone. Synthetic copies are generally derived from stigmasterol and sitosterol, plant sterols obtained from soybean (Glycine max), which have a significantly different carbon isotope composition compared to endogenous steroids. As a consequence, the administration of synthetic analogs is detectable through a change in the carbon isotopic composition of testosterone and its metabolites. GC-C-IRMS however remains a very laborious and expensive technique, because one can only determine the 13C/12C ratio of a pure compound. This means that a lot of purification steps have to be conducted, and fractionation caused by any of these steps is unacceptable. On top of that, substantial amounts of urine are needed to meet the sensitivity requirements of the IRMS. Because the amount of urine received from an athlete is limited, doping control laboratories have to make their analyses as sensitive as possible so all the required doping tests can be executed. If less urine is consumed, then more is available for additional tests. In this work we introduce a new type of injection which takes GC-C-IRMS to the next level. With the aid of a programmed temperature vaporizer we were able to increase the sensitivity of the IRMS by a factor of 10, and we drastically reduced the required amount of urine and the limit of detection. All this is achieved without having to change any of the IRMS detection parameters.

author: Michaël Polet (UGent), Wim Van Gansbeke (UGent), Peter Van Eenoo (UGent)
year: 2012
type: conference
publication status: published
subject keywords: androgens, doping control, solvent vent injection, GC-C-IRMS
in: Benelux Association of Stable Isotope Scientists, Annual meeting, Abstracts
pages: 21–21
publisher: Cito Arnhem
place of publication: Arnhem, Nederland
conference name: Benelux Association of Stable Isotope Scientists annual meeting 2012 (BASIS 2012)
conference location: Nijmegen, The Netherlands
conference dates: 2012-04-12 to 2012-04-13
language: English
UGent publication?: yes
classification: C3
id: 3010271
handle: http://hdl.handle.net/1854/LU-3010271
date created: 2012-10-10 08:25:54
date last changed: 2012-10-10 14:58:21

@inproceedings{3010271, abstract = {All doping control laboratories accredited by the World Anti-Doping Agency (WADA) have been confronted with the task to develop analytical methods and establish criteria that allow endogenous steroids to be distinguished from their synthetic copies.
It has been known for some time that gas chromatography-combustion-isotope ratio mass spectrometry (GC-C-IRMS) is capable of meeting this challenge by comparison of the 13C/12C ratios of the target compounds with those of endogenous reference compounds that are not affected by the administration of synthetic androgens. Testosterone and/or its main metabolites function as target compounds. Typical endogenous reference compounds include 5\ensuremath{\beta}-pregnanediol, 11-ketoetiocholanolone and 11-\ensuremath{\beta}hydroxyandrosterone. Synthetic copies are generally derived from stigmasterol and sitosterol; plant sterols obtained from soybean (Glycine max) which have a significantly different carbon isotope composition compared to endogenous steroids. As a consequence, the administration of synthetic analogs is detectable through a change in the carbon isotopic composition of testosterone and its metabolites. GC-C-IRMS however remains a very laborious and expensive technique because one can only determine the 13C/12C ratio of a pure compound. This means that a lot of purification steps have to be conducted and fractionation caused by one of these steps is unacceptable. On top of that, substantial amounts of urine are needed to meet the sensitivity requirements of the IRMS. Because the amount of received urine from an athlete is limited, doping control laboratories have to make their analysis as sensitive as possible so all the required doping tests can be executed. If less urine is consumed, then there is more available for additional tests. In this work we introduce a new type of injection which takes GC-C-IRMS to the next level. With the aid of a programmed temperature vaporizer we were able to increase the sensitivity of the IRMS with a factor of 10 and we drastically reduced the required amount of urine and the limit of detection. All this is achieved without having to change any of the IRMS detection parameters.}, author = {Polet, Micha{\"e}l and Van Gansbeke, Wim and Van Eenoo, Peter}, booktitle = {Benelux Association of Stable Isotope Scientists, Annual meeting, Abstracts}, keyword = {androgens,doping control,solvent vent injection,GC-C-IRMS}, language = {eng}, location = {Nijmegen, The Netherlands}, pages = {21--21}, publisher = {Cito Arnhem}, title = {Development of a sensitive GC-C-IRMS method for the analysis of androgens in doping control}, year = {2012}, } Chicago Polet, Michaël, Wim Van Gansbeke, and Peter Van Eenoo. 2012. “Development of a Sensitive GC-C-IRMS Method for the Analysis of Androgens in Doping Control.” In Benelux Association of Stable Isotope Scientists, Annual Meeting, Abstracts, 21–21. Arnhem, Nederland: Cito Arnhem. APA Polet, M., Van Gansbeke, W., & Van Eenoo, P. (2012). Development of a sensitive GC-C-IRMS method for the analysis of androgens in doping control. Benelux Association of Stable Isotope Scientists, Annual meeting, Abstracts (pp. 21–21). Presented at the Benelux Association of Stable Isotope Scientists annual meeting 2012 (BASIS 2012), Arnhem, Nederland: Cito Arnhem. Vancouver 1. Polet M, Van Gansbeke W, Van Eenoo P. Development of a sensitive GC-C-IRMS method for the analysis of androgens in doping control. Benelux Association of Stable Isotope Scientists, Annual meeting, Abstracts. Arnhem, Nederland: Cito Arnhem; 2012. p. 21–21. MLA Polet, Michaël, Wim Van Gansbeke, and Peter Van Eenoo. 
“Development of a Sensitive GC-C-IRMS Method for the Analysis of Androgens in Doping Control.” Benelux Association of Stable Isotope Scientists, Annual Meeting, Abstracts. Arnhem, Nederland: Cito Arnhem, 2012. 21–21. Print.
#### Volume 14, issue 2 (2014)

Algebraic & Geometric Topology
ISSN (electronic): 1472-2739
ISSN (print): 1472-2747

A note on subfactor projections

### Samuel J Taylor

Algebraic & Geometric Topology 14 (2014) 805–821

##### Abstract

We extend some results of Bestvina and Feighn [arXiv:1107.3308 (2011)] on subfactor projections to show that the projection of a free factor $B$ to the free factor complex of the free factor $A$ is well defined and has uniformly bounded diameter, unless either $A$ is contained in $B$ or $A$ and $B$ are vertex stabilizers of a single splitting of ${F}_{n}$, i.e., they are disjoint. These projections are shown to satisfy properties analogous to subsurface projections, and we give as an application a construction of fully irreducible outer automorphisms using the bounded geodesic image theorem.

##### Keywords

subfactor projections, $\operatorname{Out}(F_n)$, fully irreducible automorphisms

Primary: 20F65
Secondary: 57M07
# Issues with steady state, residuals, and rank condition

Hello friends,

I am working on a New Keynesian model including banks. I have set up initial values and try to get some results. However, I face three problems:

1. The steady-state values of some variables are not what I can get with pen and paper; e.g., the steady-state interest rate (R) given by Dynare is much larger than the value I derive analytically: $\frac{1}{\beta} - 1$.
2. Even though resid; returns all residuals equal to zero, some residuals will not be zero if I change parameters. For example, \mu is the default probability of banks in my model. I can get zero residuals only if I set it to a number between 0.00 and 0.008, which is actually not realistic! There is the same issue with some other parameters.
3. Finally, the rank condition is not satisfied. I get the following error:

There are 10 eigenvalue(s) larger than 1 in modulus for 10 forward-looking variable(s)
The rank condition ISN'T verified!

I tried different timing settings regarding loans, deposits, and net worth, but every time I end up with the unverified rank condition. I am really baffled. I have been working on these problems for days, but I cannot resolve the issues. I have tried simpler models with fewer equations and different values for parameters, but the problems are still there. model_diagnostics does not find a problem in the code either. I will be thankful if someone can help me resolve these issues. I attach the mod file for reference.

giobanking.mod (7.3 KB)

0 = 1/Ce - (beta_e*(1+R))/(1+Pi(+1)*Ce(+1));

Seems to be missing a bracket:

0 = 1/Ce - (beta_e*(1+R))/((1+Pi(+1))*Ce(+1));

1 Like

Dear professor,

Thank you for your response. I corrected the equations, and now I get more reasonable steady states. However, there are SS values that are virtually zero but not reported as zero (e.g., 6.23175e-22). Is there any way to change the precision? Furthermore, even though the number of jump variables is equal to the number of eigenvalues larger than one, the rank condition is still not verified. I was wondering if infinite or repeated eigenvalues could imply something about the problem with the model? Also, when I change some parameters or initial values, I get different SS values, but very close to the previous ones. Could this mean that there is no stable equilibrium at all, and that is why I still get the error? Here is the new mod-file:

New.mod (7.5 KB)

Just one more thing which seems weird to me: whenever I get SS values and set them as initial values again, the new SS values are different or the residuals are not equal to zero.

I would focus on

Equation number 17 : -48.3018 : Augmenting factor for SDF

There must be a mistake either in the equation or the steady state computations.

1 Like

Thank you very much. After changing some equations, I managed to solve the model without any errors. However, there is still one issue: when I put resid after the initial values and before steady, some residuals of the static equations are not zero, while all residuals are zero when I write resid after steady. I am curious if this can be a problem, since Dynare can still solve the model and the rank condition is also verified. My understanding is that the residuals after steady are what matter, since the residuals in the first case (before steady) are just calculated using the given initial values. Am I wrong?
New1.mod (7.7 KB)

If a steady state is found, then obviously the residuals will be 0 after calling steady; The residuals before that call are based on the initial values. If those initial values were computed analytically, then nonzero residuals there may signal a problem.

1 Like
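For readers hitting the same confusion, the ordering discussed here can be made explicit in the mod file. A hedged sketch (generic Dynare commands only; the variable name and value are placeholders, not from the attached models):

initval;
  R = 1/beta - 1;  // analytical steady-state guesses go here
end;

resid;   // residuals evaluated at the initval guesses
steady;  // numerically solve for the steady state
resid;   // residuals at the computed steady state; should be (numerically) zero
check;   // eigenvalues and the Blanchard-Kahn rank condition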
# Error Message: “Missing number, treated as zero.”

I've searched around a lot to fix this issue; usually it only takes a few seconds, but now I have no luck. LaTeX puts out the error message mentioned above. Here's my code:

\usepackage{caption, subcaption}
...
\begin{figure}
\begin{subfigure}[h]
\includegraphics[scale=0.7]{kurzpass.png}
\caption[]{Filterkurve eines typischen Kurzpassfilters\footnotemark}
\label{fig:kurzpass}
\end{subfigure}

% Error Message:
! Missing number, treated as zero.
\unhbox
l.137 \includegraphics
[scale=0.7]{kurzpass.png}
? X

The subfigure environment provided by subcaption has the following format

\begin{subfigure}[<pos>]{<width>}
% ...your subfigure here...
\end{subfigure}

While the <pos>ition is optional, you have to specify a <width> for the subfigure. You've neglected to do that. To fix this, add a length, for example

\begin{subfigure}{.5\linewidth}
% ...your subfigure here...
\end{subfigure}

• Ah okay, merci! But after doing this there is always a message like "Overfull \hbox (2.37161pt too wide) in paragraph at lines 137--138" or "Underfull \hbox (badness 10000) in paragraph at lines 136--147". I do not get it. – FreaxMATE Jul 31 at 17:44
# CALC: Undocumented cell format found, for astro-nav calcs in degrees:minutes:seconds and decimal seconds. I request full info soon if available.

As a beginner in astronomy, I understand that: in astro-navigation etc., angles are usually written in degrees:minutes:seconds of arc, in the form [+-dd:dm:ds.s] for latitudes -90 to +90 degrees or [+-ddd:dm:ds.s] for longitudes 0 to 180 degrees east or west, or their corresponding 'Hour Angles' [+-hh:mm:ss.s], usually between ±12 hours, there being 15 degrees of longitude per hour.

I find that Calc allows these formats, not documented in the "Format: Cells: Number" menus or Help. Calc sometimes automatically 'corrects' negative angles wrongly as displayed, but fortunately without changing the content of the cell as entered. I found that inserting the text prefix ' for those angles, e.g. ['-ddd:dm:ds.s], surprisingly solves the problem, preventing 'automatic correction' of entered data while still allowing its use in calculations giving correct results.

For use in trig functions [SIN() etc.] it is necessary to convert such angles to decimal format, then to radians by multiplying the decimal angle by PI()/180, and the reverse for inverse trig functions ASIN() etc.

Can 'Cell Format' and Help info be updated?

• You request soon what? Regarding your "inquiry": please be more specific about which format you use, which values give you what, and why it's incorrect. The description you gave uses some strange format specifiers, possibly valid for your area, but not for Calc. ( 2018-11-15 16:06:01 +0200 )

An input of '-ddd:dm:ds.s appears to work because the leading apostrophe forces the input to text. It does not enter a numeric value into the cell. An input of ddd:dm:ds.s yields a time (wall clock or duration) input, not anything related to degrees. For example, input 123:34:56 and clear the automatically applied time format (Ctrl+M) on the cell and you'll see the underlying number 5.14925925925926 (the time value in days: 123 hours plus 34 minutes plus 56 seconds). Date+time are calculated as serial date numbers, days since the null date (usually 1899-12-30), and time is a fraction of a day, so 0.5 = 12h. You can format such a number (e.g. -0.5 to 0.5) as [HH]:MM:SS.00 to display the corresponding hours:minutes:seconds.fractions value. There also is no "degrees" number format, so there's nothing to add to the cell format help.
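The arithmetic the question describes is easy to sanity-check outside Calc. A small illustrative Python sketch:

import math

def dms_to_radians(degrees, minutes, seconds):
    # degrees:minutes:seconds -> decimal degrees -> radians
    sign = -1.0 if degrees < 0 else 1.0
    decimal = sign * (abs(degrees) + minutes / 60 + seconds / 3600)
    return decimal * math.pi / 180

print((123 + 34/60 + 56/3600) / 24)   # 5.14925925925926, Calc's serial value for 123:34:56
print(dms_to_radians(123, 34, 56))    # about 2.1569 radians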
Working with spatial data is a key feature in ecological research. Using R to handle this type of data has the great advantage of keeping both variable extraction and modelling in the same environment, instead of resorting to external GIS software to compute some variables and then turning to R for modelling. In this example I'll use as my base data an altimetry contour line map, then I'll compute a DEM (digital elevation model) from the altimetry contour lines, derive some maps with variables related to altimetry, and finally I'll measure the variables at sampling sites.

In this example packages "rgdal" and "rgeos" are used to perform vector operations, package "raster" to obtain and work with raster maps, package "gstat" for interpolation, and packages "rgl" and "rasterVis" to visualize 3D plots.

library(raster)
library(rgdal)
library(rgeos)
library(gstat)
library(rgl)
library(rasterVis)

The base data can be downloaded here, and is composed of: 1) altimetry, 2) sampling points and 3) study area boundary. As the data is in vector format it can be imported with readOGR().

altimetry <- readOGR(dsn = ".", layer = "altimetry", verbose = F)
study_area <- readOGR(dsn = ".", layer = "study_area", verbose = F)
sampling_points <- readOGR(dsn = ".", layer = "sampling_points", verbose = F)

To visualize the data it is best to plot all the different information in one plot.

# Plot contour lines
plot(altimetry, col = "dark blue")
points(sampling_points, col = "red", cex = 0.5)

Spatial data manipulation can be very time consuming, so it's wise to clip the altimetry map to the study area to speed up further computations. Function intersect() from "raster" can be used to this end. The new altimetry map is shown below.

# Clip altimetry map to study area
altimetry_clip <- intersect(altimetry, study_area)
# Plot clipped altimetry map
plot(altimetry_clip, col = "dark blue")
points(sampling_points, col = "red", cex = 0.5)

To create a DEM from altimetry contour lines the following steps are needed: 1) create a blank raster grid to interpolate the elevation data onto; 2) convert the contour lines to points so you can interpolate between elevation points; 3) interpolate the elevation data onto the raster grid.

1. First we need to create a blank raster grid. The extent and the projection of the raster should be the same as the altimetry map, so we are going to use the information of this shape for our new raster grid. Afterwards the pixel size of the raster is also defined. In this example I'll use a 5m x 5m pixel.

# Obtain extent
dem_bbox <- bbox(altimetry_clip)
# Create raster
dem_rast <- raster(xmn = dem_bbox[1, 1], xmx = ceiling(dem_bbox[1, 2]), ymn = dem_bbox[2, 1], ymx = ceiling(dem_bbox[2, 2]))
# Set projection
projection(dem_rast) <- CRS(projection(altimetry_clip))
# Set resolution
res(dem_rast) <- 5

2. Since interpolation methods were conceived to work with point data, we need to convert the elevation contour lines to elevation points. Essentially we are creating points along each contour line that have as their value the elevation of the associated contour line.

# Convert to elevation points
dem_points <- as(altimetry_clip, "SpatialPointsDataFrame")

3. To perform the interpolation of the point data, two methods are widely used: Nearest Neighbor and Inverse Distance Weighted. The difference between the two methods is that in nearest neighbor all the surrounding points have the same weight.
In inverse distance weighting, points that are further away get less weight. The function used is the same, gstat(), but for the nearest neighbor method the argument "idp", the inverse distance power, must equal zero. For inverse distance weighting some positive value of idp should be set.

# Compute the interpolation function
dem_interp <- gstat(formula = ALT ~ 1, locations = dem_points, set = list(idp = 0), nmax = 5)
# Obtain interpolation values for raster grid
DEM <- interpolate(dem_rast, dem_interp)

Now that we have our elevation model ready we can plot it in 2D with some contour lines added for better visualization.

# Subset contour lines to 20m to enhance visualization
contour_plot <- altimetry_clip[(altimetry_clip$ALT) %in% seq(min(altimetry_clip$ALT), max(altimetry_clip$ALT), 20), ]
# Plot 2D DEM with contour lines
plot(DEM, col = terrain.colors(20))
plot(contour_plot, add = T)

Or make an interactive 3D plot that can be controlled with the mouse.

plot3D(DEM)

With the DEM ready, other altitude-related variables can be derived, such as slope, aspect or roughness, among others. As aspect is a circular variable, i.e., the minimum value (0°) and the maximum value (360°) represent the same thing (north), a better way of using this information is to convert it into two new variables: northness = cos(aspect) and eastness = sin(aspect). Please take note that aspect must be in radians (radians = degrees * pi / 180). Northness will take values close to 1 if the aspect is mostly northward, close to -1 if the aspect is southward, and close to 0 if the aspect is either east or west. Eastness behaves similarly, except that values close to 1 represent east-facing slopes. Another approach would be to reclassify aspect into N, S, E and W.

# Obtain DEM derived maps
derived_vars <- terrain(DEM, opt = c('slope', 'roughness', 'aspect'), unit = "degrees")
slope <- derived_vars[["slope"]]
roughness <- derived_vars[["roughness"]]
northness <- cos(derived_vars[["aspect"]] * pi / 180)
eastness <- sin(derived_vars[["aspect"]] * pi / 180)
# Plot maps
par(mfrow = c(2, 2))
plot(slope, col = heat.colors(20), main = "slope", axes = F)
plot(roughness, col = heat.colors(20), main = "roughness", axes = F)
plot(northness, col = heat.colors(20), main = "northness", axes = F)
plot(eastness, col = heat.colors(20), main = "eastness", axes = F)

Having all the maps prepared, the last thing to do is to measure the values of these maps at the sampling points and create a data frame that can be used in further modelling. A very simple way would be to just measure the values at the exact location of the points, but a better way is to use a buffer around the sampling points and summarize the values in the buffer, using mean, mode, max, etc.
# Create buffers with 100m radius around sampling points
sp_buff <- gBuffer(sampling_points, width = 100, quadsegs = 50, byid = TRUE)
# Measure variables in buffer area
sp_alt <- extract(DEM, sp_buff, fun = mean)
sp_slope <- extract(slope, sp_buff, fun = mean)
sp_rough <- extract(roughness, sp_buff, fun = mean)
sp_north <- extract(northness, sp_buff, fun = mean)
sp_east <- extract(eastness, sp_buff, fun = mean)
# Prepare dataframe
results <- data.frame("id" = sampling_points@data$id, "altitude" = sp_alt, "slope" = sp_slope, "roughness" = sp_rough, "northness" = sp_north, "eastness" = sp_east)
# View results
print(results)

## id altitude slope roughness northness eastness
## 1 1 487.2113 1.980742 0.4936204 -0.06990387 0.9496363
## 2 2 637.9079 26.458705 6.5107228 0.25682424 -0.9029607
## 3 3 534.3797 28.770571 7.0379747 0.93539489 0.2672350
## 4 4 537.3855 18.411266 4.4562798 -0.24734534 0.6485864
## 5 5 583.4479 27.955300 6.9689737 -0.28556347 0.8485821

Using R as a GIS to handle spatial data has not only the advantage of keeping both variable extraction and modelling in the same environment, but also makes it possible to script the variable extraction, with obvious advantages for future analyses.
# Math Help - Permutations in S_6

1. ## Permutations in S_6

Hey there, I'm having a bit of trouble with a practise question from my abstract algebra class. The question is:

Let $\sigma = (1\ 2\ 3\ 4\ 5\ 6)$ in $S_6$. Show that $G=\{\epsilon,\sigma, \sigma^2,\sigma^3,\sigma^4,\sigma^5\}$ is a group using the operation of $S_6$. Is G abelian? How many elements $\tau$ of G satisfy $\tau^2=\epsilon$? How many satisfy $\tau^3=\epsilon$?

I said that the operation of $S_6$ is closed and associative, that every element in $S_6$ has an inverse, and that $\epsilon=1_G$. I think this is enough to show that G is a group. However, I'm not sure what to do for the abelian or last part. I know that multiplication of disjoint cycles is abelian, but the elements in G that aren't the identity aren't disjoint. I'm kind of lost. Any hints?

2. Be careful. You need to show that G is closed, not S6 (obviously S6 is closed since it is a group). The operation on S6 is function composition. To show that G is a group is trivial. Since all you have are powers of sigma (which I will now denote by s), (s^n)(s^m) = s^r where r = m+n (mod 6). Thus it is clear that every element has an inverse, since (s)(s^5)=s^6=e, (s^2)(s^4)=e, and (s^3)(s^3)=e. Also G is abelian since it doesn't matter which permutation you take first; they are just powers of sigma. The last part is easy as well: clearly s^3 is the only element of order 2. The elements of order 3 are s^2 and s^4.

3. (1 2 3 4 5 6) IS a disjoint cycle, of length 6. To be fair, showing closure of G also involves proving σ^6 = e (so that the powers of σ listed are all there are). After that is shown, the rest is easy, since G is just <σ>. Not only is G abelian, it is *cyclic*. It is isomorphic to Z6. Here is the isomorphism φ:Z6-->G, φ(k) = σ^k. What are the elements of order 2 in Z6? What are the elements of order 3?
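A quick sanity check of the last part (an illustrative Python sketch that works with exponents mod 6 rather than composing actual permutations):

def order(k, n=6):
    # order of s^k in a cyclic group of order n: smallest m >= 1 with k*m divisible by n
    m = 1
    while (k * m) % n != 0:
        m += 1
    return m

print({k: order(k) for k in range(6)})            # {0: 1, 1: 6, 2: 3, 3: 2, 4: 3, 5: 6}
print([k for k in range(6) if (k * 2) % 6 == 0])  # [0, 3]:    e and s^3 satisfy t^2 = e
print([k for k in range(6) if (k * 3) % 6 == 0])  # [0, 2, 4]: e, s^2, s^4 satisfy t^3 = e

Note that the identity satisfies both equations; s^3 is the only element of order exactly 2, matching the replies above.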
# How do you find an equation of the circle of radius 4 that is tangent to the x-axis and has its center on x-2y=2?

Mar 19, 2016

The given circle touches the x-axis and its radius is 4, so its center will be 4 units away from the point of contact on the x-axis, i.e., the ordinate of its center should be +4 or -4.

Let the x-coordinate of its center be h; then its center is (h, 4) or (h, -4).

Again, it is given that the center is on the line $x - 2 y = 2$. So we can write:

For center (h, 4):
$h - 2 \cdot 4 = 2 \implies h = 10$
The equation of the circle having center $\left(10 , 4\right)$ and radius 4 is
${\left(x - 10\right)}^{2} + {\left(y - 4\right)}^{2} = 16$

For center (h, -4):
$h - 2 \cdot \left(- 4\right) = 2 \implies h = - 6$
The equation of the circle having center $\left(- 6 , - 4\right)$ and radius 4 is
${\left(x + 6\right)}^{2} + {\left(y + 4\right)}^{2} = 16$
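A two-line numeric check of both answers (illustrative Python):

for h, k in [(10, 4), (-6, -4)]:
    # each center must lie on x - 2y = 2 and sit |k| = 4 units from the x-axis
    print(h - 2 * k == 2, abs(k) == 4)  # True True for both centers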
# Math Help - Complex Analysis

1. ## Complex Analysis

On my final today, there was the question that no nonreal complex number has a real nth root. I know this is true but I wasn't able to prove it.

Another question was: Let S be a finite set $\{z_1, z_2, z_3,\cdots z_n\}$ where z=a+bi. Prove S is bounded.

For this one, I said since S is finite, S can be put in a 1-1 correspondence with $\mathbb{N}$. Also, I was allowed to assume $S\subset\mathbb{C}$. I then said $S\subseteq\mathbb{N}$. Now, we have $S\subseteq\mathbb{N}\subset\mathbb{C}$. So S must be bounded. I know it is wrong, but how should it be done, or what could I have added or altered to make it correct?

2. Originally Posted by dwsmith
On my final today, there was the question that no nonreal complex number has a real root. I know this is true but I wasn't able to prove it.
What kind of root? $n$-th root, where $n$ is a positive integer?

Originally Posted by dwsmith
Another question was: Let S be a finite set $\{z_1, z_2, z_3,\cdots z_n\}$ where z=a+bi. Prove S is bounded. For this one, I said since S is finite, S can be put in a 1-1 correspondence with $\mathbb{N}$. Also, I was allowed to assume $S\subset\mathbb{C}$. I then said $S\subseteq\mathbb{N}$. Now, we have $S\subseteq\mathbb{N}\subset\mathbb{C}$. So S must be bounded. I know it is wrong, but how should it be done, or what could I have added or altered to make it correct?
Many things are wrong with this. First, a finite set can't be put in bijection with an infinite set! Two sets can be put in bijection with each other only if they have the same cardinality. Second, even supposing you had established a bijection, that doesn't mean $S \subset \mathbb{N}$! I can put my fingers in bijection with $\{1, 2, \dots, 10\}$, but that doesn't mean my fingers are themselves integers between 1 and 10. Finally, even if $S$ were a subset of $\mathbb{N}$, that wouldn't mean that it's bounded. For instance, $\mathbb{N}$ itself isn't bounded (as a subset of $\mathbb{C}$).

What you should have said is that the set $\{|z_1|, \dots, |z_n|\}$ consisting of the moduli of the elements of $S$ is a finite set of real numbers. A finite set of real numbers always has a finite upper bound - you should be able to prove this! This upper bound is a number $M$ such that $|z|\leq M$ for every $z \in S$, i.e. it's a bound for $S$ as a subset of $\mathbb{C}$.

3. Yes, a real nth root.

4. A real number raised to an integer power is always real. Doesn't that show it by contradiction? Apologies if I overlooked something.

5. You are correct. It would have been easier to try a proof by contradiction. I was trying to do it straightforwardly and wasn't getting anywhere.

6. Originally Posted by dwsmith
Yes, a real nth root.
Are you satisfied with what I wrote regarding your other problem?

7. I still don't know how to do it, but it is fine since the class is over.

8. Originally Posted by dwsmith
I still don't know how to do it, but it is fine since the class is over.
Why did you ask the question, then? I gave the full answer above. What don't you understand?

9. I asked the question to see if it was something easily understood by me. I don't know how to prove S has an upper bound M. M should be a fraction though, correct?

10. Originally Posted by dwsmith
I asked the question to see if it was something easily understood by me. I don't know how to prove S has an upper bound M. M should be a fraction though, correct?
If you can't prove that a finite set of real numbers has a finite upper bound, how could you possibly ever take this "complex analysis" course?
Did you ever take an introductory real analysis course?

11. Haven't taken Real yet. Real isn't a prerequisite for Complex Analysis.
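For completeness, the bound the answer asks for can be written in one line (a standard argument, added here as a sketch):
$$M = \max\{|z_1|, \dots, |z_n|\}$$
exists because any finite set of real numbers has a maximum (induction on $n$: the maximum of one number is itself, and $\max\{a_1,\dots,a_{n+1}\} = \max\{\max\{a_1,\dots,a_n\},\, a_{n+1}\}$). By construction $|z| \leq M$ for every $z \in S$, so $S$ is bounded.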
## College Physics (4th Edition) (a) Work is done at the rate of $3600~J/s$ (b) $Work = 5.4~J$ (a) We can find the power delivered by the electric organs: $P = V~I$ $P = (200~J/C)(18~C/s)$ $P = 3600~J/s$ Work is done at the rate of $3600~J/s$ (b) We can find the work done during one pulse: $Work = P~t$ $Work = (3600~J/s)(1.5\times 10^{-3}~s)$ $Work = 5.4~J$
# Dividing the pentagon

It is easy to divide an equilateral triangle into three equal, though not equilateral, triangles. It is even simpler to divide a square into four equal squares. The difficult part is whether you can divide a regular pentagon into five equal pentagons.

Note: "Equal" means equal in area.

• Can we have 1 extra pentagon in the end, or is the question about exactly 5? – manshu Feb 24 '16 at 19:21
• If all are equal in terms of area, then I'm ok with it :-) – ABcDexter Feb 24 '16 at 19:24
• @manshu Because all of the areas are symmetrical - if all the red lines are drawn in the exact same way and are simply $72^\circ$ rotations of each other. – Paul Evans Feb 24 '16 at 19:42
# Math Help - predicate logic 1. ## predicate logic “Exponentiation on reals has no left identity.” What does this mean exactly? 2. ## Identity element Hello thehollow89 Originally Posted by thehollow89 “Exponentiation on reals has no left identity.” What does this mean exactly? Exponentiation on reals takes a real number, $x$, and raises a base, $a$, say, to the power of $x$. So, using $\circ$ notation, exponentiation can be defined as the binary operation: $a\circ x = a^x$. A left identity is an element $i$ in the domain of $\circ$, such that $i \circ x = x, \forall x$ in the domain. So to say that exponentiation on reals has no left identity is to say that there is no real number $i$ for which $i^x = x, \forall x \in \mathbb{R}$.
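The claim itself has a one-line verification (added for completeness): if some real $i$ satisfied $i^x = x$ for all $x \in \mathbb{R}$, then $x = 1$ forces $i^1 = 1$, i.e. $i = 1$; but then $i^2 = 1 \neq 2$, a contradiction. So no left identity exists.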
# Fourier Transform (And Inverse) Of Images

I attended SIGGRAPH this year and there were some amazing talks. There were quite a few talks dealing with the Fourier transform of images and sampling patterns, signal frequencies and bandwidth, so I feel compelled to write up a blog post about the Fourier transform and inverse Fourier transform of images, as a transition to some other things that I want to write up.

At the bottom of this post is the source code to the program I used to make the examples. It's a single CPP file that does not link to any libraries, and does not include any non-standard headers (besides windows.h). It should be super simple to copy, paste, compile and use!

## Fourier Transform Overview

The Fourier transform converts data into the frequencies of sine and cosine waves that make up that data. Since we are going to be dealing with sampled data (pixels), we are going to be using the discrete Fourier transform.

After you perform the Fourier transform, you can run the inverse Fourier transform to get the original image back out. You can also optionally modify the frequency data before running the inverse Fourier transform, which would give you an altered image as output.

In audio, a Fourier transform is 1D, while with images, it's 2D. That slows things down because a 1D Fourier transform is $O(N^2)$ while a 2D Fourier transform is $O(N^4)$. This is quite an expensive operation as you can see, but there are some things that can mitigate the issue:

• The operation is separable on each axis, so only the naive implementation is $O(N^4)$.
• There is something called "The Fast Fourier Transform" which can make a 1D Fourier transform go from $O(N^2)$ to $O(N \log N)$ time complexity.
• Each pixel of output can be calculated without consideration of the other output pixels. The algorithm only needs read access to the source image or source data. This means that you can run this across however many cores you have on your CPU or GPU.

The items above are true of both the Fourier transform as well as the inverse Fourier transform.

The 2D Fourier transform takes a grid of REAL values as input of size MxN and returns a grid of COMPLEX values as output, also of size MxN. The inverse 2D Fourier transform takes a grid of COMPLEX values as input, of size MxN, and returns a grid of REAL values as output, also of size MxN.

The complex values are (of course!) made up of two components. You can get the amplitude of the frequency represented by the complex value by treating these components as a vector and getting the length. You can get the phase (the angle that the frequency starts at) by treating it like a vector and getting the angle it represents – like by using atan2(imaginary, real).

For more detailed information about the Fourier transform or the inverse Fourier transform, including the mathematical equations, please see the links at the end of this post!
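For readers who want to poke at these ideas without compiling the C++ below, here is a hedged NumPy sketch of the same round trip (not the program used for the images in this post):

import numpy as np

img = np.random.rand(80, 84)        # stand-in for a greyscale image

freq = np.fft.fft2(img)             # complex frequency grid, same size as the image
amplitude = np.abs(freq)            # length of each complex value = frequency amplitude
phase = np.angle(freq)              # angle of each complex value = frequency phase

# shift DC to the center and log-scale the amplitudes, as in the images below
display = np.log(1.0 + np.fft.fftshift(amplitude))

roundtrip = np.fft.ifft2(freq).real # inverse transform recovers the image
print(np.allclose(img, roundtrip))  # True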
## Image Examples

I'm going to show some examples of images that have been Fourier transformed, modified, and inverse Fourier transformed back. This should hopefully give you a more intuitive idea of what this stuff is all about.

I'm working with the source images in greyscale so I only have to work with one color channel (more on how to do that here: Blog.Demofox.Org: Converting RGB to Grayscale). You could easily do this same stuff with color images, but you would need to work with each color channel individually.

Zelda Guy

Here is the old man from “The Legend of Zelda” who gives you the sword.

This image is 84×80, which takes about 1.75 seconds to do a Fourier or inverse Fourier transform with my naive implementation of unoptimized code.

Taking a Fourier transform of the greyscale version of that image gives me the following frequency amplitude (first) and phase (second) information. Note that we put the frequency amplitude through a log function to make the lesser represented frequencies show up more visibly. Note also that the center of the image represents frequency 0, aka DC. As you move out from the center, you get to higher and higher frequencies.

If you put that information into the inverse Fourier transform, you get the original image back out (in greyscale of course):

What if we changed the phase information though? Here's what it looks like if we set all the frequencies to start at phase (angle) 0 instead of the proper angles, and then do an inverse Fourier transform:

It came out to be a completely different image! It has all the right frequencies, but the image is completely unrecognizable due to us messing with the phase data. Interestingly, while your eyes are good at noticing differences in phase, your ears are not. That means that if this were a sound instead of an image, you wouldn't even be able to tell the difference. Strange, isn't it?

Now let's do a low pass filter on our data. In other words, we are going to zero out the amplitude of all frequencies that are above a certain amount. We are going to zero out the frequencies that are farther than 10% of the image diagonal radius. That makes our frequency information look like this:

If we run the inverse Fourier transform on it, we get this:

The image got blurrier because the high frequencies were removed. The high frequencies represent the small details of the image. We also got some "ringing artifacts", which are the things that look like halos around the old man. This is also due to removing high frequency details. The short explanation for this is that it is very difficult to add sinusoids of different amplitudes and frequencies together to make a flat surface. To do so, you need a lot of small high frequency waves to fill in the areas next to the round humps to flatten them out. It's the same issue you see when trying to make a square wave with additive synthesis, if you've read any of my posts on audio synthesis.

Now let's try a high pass filter, where we remove frequencies that are closer than 10% of the image diagonal radius. First is the frequency amplitude information, and then the resulting image:

The results look pretty strange. These are the high frequency details that the blurry image is missing! You could use a high pass filter on an image to do edge detection. You could use a low pass filter on an image to remove high frequency details before making the image smaller, to prevent the aliasing that can happen when making an image smaller.

Let's look at some other images that have been given similar treatment.

SIGGRAPH

Here's a picture of myself at SIGGRAPH with my friend Paul, who I used to work with at inXile! The image is 100×133 and takes about 6.5-7 seconds to do a Fourier transform or an inverse Fourier transform.

Here is the Fourier transform and inverse Fourier transform:

Here is the low pass frequency info and inverse Fourier transform:

Here is the high pass:

And here is the zero phase:

Simple Images

Lastly, here are some simple images, along with their frequency magnitude and phases. Sorry that they are so small, but hopefully you get the idea.
Horizontal Stripes:

Horizontal Stripe:

Vertical Stripes:

Vertical Stripe:

Diagonal Stripe:

You might notice that the Fourier transform frequency amplitudes actually run perpendicular to the orientation of the stripes. Look for a post soon which makes use of this property (:

## Example Code

Here is the code I used to generate the examples above. If you pass this program the name of a 24 bit bmp file, it will generate and save the DFT, and also the inverse DFT to show that the image can survive a round trip. It will also do a low pass filter, high pass filter, and set the phase of all frequencies to zero, saving off both the frequency amplitude information as well as the image generated from the frequency information for those operations.

The program below is written for clarity, not speed. In particular, the DFT and IDFT code is naively implemented, so is O(N^4). To speed it up, it should be threaded, do the work on each axis separately, and also use a fast Fourier transform implementation.

#define _CRT_SECURE_NO_WARNINGS

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <math.h>
#include <array>
#include <vector>
#include <complex>
#include <windows.h> // for bitmap headers and performance counter. Sorry non windows people!

const float c_pi = 3.14159265359f;
const float c_rootTwo = 1.41421356237f;

typedef uint8_t uint8;

struct SProgress
{
    SProgress (const char* message, int total) : m_message(message), m_total(total)
    {
        m_amount = 0;
        m_lastPercent = 0;
        printf("%s 0%%", message);
        QueryPerformanceFrequency(&m_freq);
        QueryPerformanceCounter(&m_start);
    }

    ~SProgress ()
    {
        // make it show 100%
        m_amount = m_total;
        Update(0);

        // show how long it took
        LARGE_INTEGER end;
        QueryPerformanceCounter(&end);
        float seconds = float(end.QuadPart - m_start.QuadPart) / float(m_freq.QuadPart);
        printf(" (%0.2f seconds)\n", seconds);
    }

    void Update (int delta = 1)
    {
        m_amount += delta;
        int percent = int(100.0f * float(m_amount) / float(m_total));
        if (percent <= m_lastPercent)
            return;

        m_lastPercent = percent;
        // erase the old percentage with backspace characters before printing the new one
        printf("%c%c%c%c", 8, 8, 8, 8);

        if (percent < 100)
            printf(" ");
        if (percent < 10)
            printf(" ");

        printf("%i%%", percent);
    }

    int m_lastPercent;
    int m_amount;
    int m_total;
    const char* m_message;

    LARGE_INTEGER m_start;
    LARGE_INTEGER m_freq;
};

struct SImageData
{
    SImageData ()
        : m_width(0)
        , m_height(0)
    { }

    long m_width;
    long m_height;
    long m_pitch;
    std::vector<uint8> m_pixels;
};

struct SImageDataComplex
{
    SImageDataComplex ()
        : m_width(0)
        , m_height(0)
    { }

    long m_width;
    long m_height;
    std::vector<std::complex<float>> m_pixels;
};

bool LoadImage (const char *fileName, SImageData& imageData)
{
    // open the file if we can
    FILE *file;
    file = fopen(fileName, "rb");
    if (!file)
        return false;

    // read the bitmap headers and reject anything that isn't a 24 bit bmp.
    // (this header-reading section is a reconstruction; the original was lost in extraction)
    BITMAPFILEHEADER header;
    BITMAPINFOHEADER infoHeader;
    if (fread(&header, sizeof(header), 1, file) != 1 ||
        fread(&infoHeader, sizeof(infoHeader), 1, file) != 1 ||
        header.bfType != 0x4D42 || infoHeader.biBitCount != 24)
    {
        fclose(file);
        return false;
    }
    imageData.m_width = infoHeader.biWidth;
    imageData.m_height = infoHeader.biHeight;

    // read in our pixel data if we can.
    // Note that it's in BGR order, and width is padded to the next multiple of 4.
    imageData.m_pitch = imageData.m_width*3;
    if (imageData.m_pitch & 3)
    {
        imageData.m_pitch &= ~3;
        imageData.m_pitch += 4;
    }
    imageData.m_pixels.resize(imageData.m_pitch*imageData.m_height);
    fseek(file, header.bfOffBits, SEEK_SET);
    if (fread(&imageData.m_pixels[0], imageData.m_pixels.size(), 1, file) != 1)
    {
        fclose(file);
        return false;
    }

    fclose(file);
    return true;
}

bool SaveImage (const char *fileName, const SImageData &image)
{
    // open the file if we can
    FILE *file;
    file = fopen(fileName, "wb");
    if (!file)
    {
        printf("Could not save %s\n", fileName);
        return false;
    }

    // make the bitmap headers.
    // (this header-writing section is a reconstruction; the original was lost in extraction)
    BITMAPFILEHEADER header;
    BITMAPINFOHEADER infoHeader;
    header.bfType = 0x4D42;
    header.bfReserved1 = 0;
    header.bfReserved2 = 0;
    header.bfOffBits = sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER);
    header.bfSize = header.bfOffBits + (DWORD)image.m_pixels.size();
    infoHeader.biSize = sizeof(BITMAPINFOHEADER);
    infoHeader.biWidth = image.m_width;
    infoHeader.biHeight = image.m_height;
    infoHeader.biPlanes = 1;
    infoHeader.biBitCount = 24;
    infoHeader.biCompression = 0;
    infoHeader.biSizeImage = 0;
    infoHeader.biXPelsPerMeter = 0;
    infoHeader.biYPelsPerMeter = 0;
    infoHeader.biClrUsed = 0;
    infoHeader.biClrImportant = 0;

    // write the data and close the file
    fwrite(&header, sizeof(header), 1, file);
    fwrite(&infoHeader, sizeof(infoHeader), 1, file);
    fwrite(&image.m_pixels[0], image.m_pixels.size(), 1, file);
    fclose(file);

    printf("%s saved\n", fileName);
    return true;
}

void ImageToGrey (const SImageData &srcImage, SImageData &destImage)
{
    destImage = srcImage;

    for (int x = 0; x < srcImage.m_width; ++x)
    {
        for (int y = 0; y < srcImage.m_height; ++y)
        {
            const uint8 *src = &srcImage.m_pixels[(y * srcImage.m_pitch) + x * 3];
            uint8 *dest = &destImage.m_pixels[(y * destImage.m_pitch) + x * 3];

            uint8 grey = uint8((float(src[0]) * 0.3f + float(src[1]) * 0.59f + float(src[2]) * 0.11f));
            dest[0] = grey;
            dest[1] = grey;
            dest[2] = grey;
        }
    }
}

std::complex<float> DFTPixel (const SImageData &srcImage, int K, int L)
{
    std::complex<float> ret(0.0f, 0.0f);

    for (int x = 0; x < srcImage.m_width; ++x)
    {
        for (int y = 0; y < srcImage.m_height; ++y)
        {
            // Get the pixel value (assuming greyscale) and convert it to [0,1] space
            const uint8 *src = &srcImage.m_pixels[(y * srcImage.m_pitch) + x * 3];
            float grey = float(src[0]) / 255.0f;

            // Add to the sum of the return value
            float v = float(K * x) / float(srcImage.m_width);
            v += float(L * y) / float(srcImage.m_height);
            ret += std::complex<float>(grey, 0.0f) * std::polar<float>(1.0f, -2.0f * c_pi * v);
        }
    }

    return ret;
}

void DFTImage (const SImageData &srcImage, SImageDataComplex &destImage)
{
    // NOTE: this function assumes srcImage is greyscale, so works on only the red component of srcImage.
    // ImageToGrey() will convert an image to greyscale.
    // size the output dft data
    destImage.m_width = srcImage.m_width;
    destImage.m_height = srcImage.m_height;
    destImage.m_pixels.resize(destImage.m_width*destImage.m_height);

    SProgress progress("DFT:", srcImage.m_width * srcImage.m_height);

    // calculate 2d dft (brute force, not using fast fourier transform)
    for (int x = 0; x < srcImage.m_width; ++x)
    {
        for (int y = 0; y < srcImage.m_height; ++y)
        {
            // calculate DFT for that pixel / frequency
            destImage.m_pixels[y * destImage.m_width + x] = DFTPixel(srcImage, x, y);

            // update progress
            progress.Update();
        }
    }
}

uint8 InverseDFTPixel (const SImageDataComplex &srcImage, int K, int L)
{
    std::complex<float> total(0.0f, 0.0f);
    for (int x = 0; x < srcImage.m_width; ++x)
    {
        for (int y = 0; y < srcImage.m_height; ++y)
        {
            // Get the pixel value
            const std::complex<float> &src = srcImage.m_pixels[(y * srcImage.m_width) + x];

            // Add to the sum of the return value
            float v = float(K * x) / float(srcImage.m_width);
            v += float(L * y) / float(srcImage.m_height);
            std::complex<float> result = src * std::polar<float>(1.0f, 2.0f * c_pi * v);

            // sum up the results
            total += result;
        }
    }

    float idft = std::abs(total) / float(srcImage.m_width*srcImage.m_height);

    // make sure the values are in range
    if (idft < 0.0f)
        idft = 0.0f;
    if (idft > 1.0f)
        idft = 1.0f;

    return uint8(idft * 255.0f);
}

void InverseDFTImage (const SImageDataComplex &srcImage, SImageData &destImage)
{
    // size the output image
    destImage.m_width = srcImage.m_width;
    destImage.m_height = srcImage.m_height;
    destImage.m_pitch = srcImage.m_width * 3;
    if (destImage.m_pitch & 3)
    {
        destImage.m_pitch &= ~3;
        destImage.m_pitch += 4;
    }
    destImage.m_pixels.resize(destImage.m_pitch*destImage.m_height);

    SProgress progress("Inverse DFT:", srcImage.m_width*srcImage.m_height);

    // calculate inverse 2d dft (brute force, not using fast fourier transform)
    for (int x = 0; x < srcImage.m_width; ++x)
    {
        for (int y = 0; y < srcImage.m_height; ++y)
        {
            // calculate inverse DFT for that pixel / frequency
            uint8 idft = InverseDFTPixel(srcImage, x, y);
            uint8* dest = &destImage.m_pixels[y*destImage.m_pitch + x * 3];
            dest[0] = idft;
            dest[1] = idft;
            dest[2] = idft;

            // update progress
            progress.Update();
        }
    }
}

void GetMagnitudeData (const SImageDataComplex& srcImage, SImageData& destImage)
{
    // size the output image
    destImage.m_width = srcImage.m_width;
    destImage.m_height = srcImage.m_height;
    destImage.m_pitch = srcImage.m_width * 3;
    if (destImage.m_pitch & 3)
    {
        destImage.m_pitch &= ~3;
        destImage.m_pitch += 4;
    }
    destImage.m_pixels.resize(destImage.m_pitch*destImage.m_height);

    // get floating point magnitude data
    std::vector<float> magArray;
    magArray.resize(srcImage.m_width*srcImage.m_height);
    float maxmag = 0.0f;
    for (int x = 0; x < srcImage.m_width; ++x)
    {
        for (int y = 0; y < srcImage.m_height; ++y)
        {
            // Offset the information by half width & height in the positive direction.
            // This makes frequency 0 (DC) be at the image origin, like most diagrams show it.
            int k = (x + srcImage.m_width / 2) % srcImage.m_width;
            int l = (y + srcImage.m_height / 2) % srcImage.m_height;
            const std::complex<float> &src = srcImage.m_pixels[l*srcImage.m_width + k];

            float mag = std::abs(src);
            if (mag > maxmag)
                maxmag = mag;

            magArray[y*srcImage.m_width + x] = mag;
        }
    }
    if (maxmag == 0.0f)
        maxmag = 1.0f;

    const float c = 255.0f / log(1.0f+maxmag);

    // normalize the magnitude data and send it back in [0, 255]
    for (int x = 0; x < srcImage.m_width; ++x)
    {
        for (int y = 0; y < srcImage.m_height; ++y)
        {
            float src = c * log(1.0f + magArray[y*srcImage.m_width + x]);
            uint8 magu8 = uint8(src);

            uint8* dest = &destImage.m_pixels[y*destImage.m_pitch + x * 3];
            dest[0] = magu8;
            dest[1] = magu8;
            dest[2] = magu8;
        }
    }
}

void GetPhaseData (const SImageDataComplex& srcImage, SImageData& destImage)
{
    // size the output image
    destImage.m_width = srcImage.m_width;
    destImage.m_height = srcImage.m_height;
    destImage.m_pitch = srcImage.m_width * 3;
    if (destImage.m_pitch & 3)
    {
        destImage.m_pitch &= ~3;
        destImage.m_pitch += 4;
    }
    destImage.m_pixels.resize(destImage.m_pitch*destImage.m_height);

    // get floating point phase data, and encode it in [0,255]
    for (int x = 0; x < srcImage.m_width; ++x)
    {
        for (int y = 0; y < srcImage.m_height; ++y)
        {
            // Offset the information by half width & height in the positive direction.
            // This makes frequency 0 (DC) be at the image origin, like most diagrams show it.
            int k = (x + srcImage.m_width / 2) % srcImage.m_width;
            int l = (y + srcImage.m_height / 2) % srcImage.m_height;
            const std::complex<float> &src = srcImage.m_pixels[l*srcImage.m_width + k];

            // get phase, and change it from [-pi,+pi] to [0,255].
            // note: atan2 takes (imaginary, real), matching the description earlier in the post.
            float phase = (0.5f + 0.5f * std::atan2(src.imag(), src.real()) / c_pi);
            if (phase < 0.0f)
                phase = 0.0f;
            if (phase > 1.0f)
                phase = 1.0f;
            uint8 phase255 = uint8(phase * 255);

            // write the phase as grey scale color
            uint8* dest = &destImage.m_pixels[y*destImage.m_pitch + x * 3];
            dest[0] = phase255;
            dest[1] = phase255;
            dest[2] = phase255;
        }
    }
}

int main (int argc, char **argv)
{
    bool showUsage = argc < 2;
    char *srcFileName = argv[1];
    if (showUsage)
    {
        printf("Usage: <source>\n\n");
        return 1;
    }

    // trim off file extension from source filename so we can make our other file names
    char baseFileName[1024];
    strcpy(baseFileName, srcFileName);
    for (int i = strlen(baseFileName) - 1; i >= 0; --i)
    {
        if (baseFileName[i] == '.')
        {
            baseFileName[i] = 0;
            break;
        }
    }

    // Load source image if we can
    // (the "if (LoadImage(...))" was lost in extraction; it is implied by the "else" at the end)
    SImageData srcImage;
    if (LoadImage(srcFileName, srcImage))
    {
        printf("%s loaded (%i x %i)\n", srcFileName, srcImage.m_width, srcImage.m_height);

        // do DFT on a greyscale version of the image, instead of doing it per color channel
        SImageData greyImage;
        ImageToGrey(srcImage, greyImage);
        SImageDataComplex frequencyData;
        DFTImage(greyImage, frequencyData);

        // save magnitude information
        {
            char outFileName[1024];
            strcpy(outFileName, baseFileName);
            strcat(outFileName, ".raw.mag.bmp");

            SImageData destImage;
            GetMagnitudeData(frequencyData, destImage);
            SaveImage(outFileName, destImage);
        }

        // save phase information
        {
            char outFileName[1024];
            strcpy(outFileName, baseFileName);
            strcat(outFileName, ".raw.phase.bmp");

            SImageData destImage;
            GetPhaseData(frequencyData, destImage);
            SaveImage(outFileName, destImage);
        }

        // inverse dft the unmodified frequency data and save the result
        {
            char outFileName[1024];
            strcpy(outFileName, baseFileName);
            strcat(outFileName, ".raw.idft.bmp");

            SImageData modifiedImage;
            InverseDFTImage(frequencyData, modifiedImage);
            SaveImage(outFileName, modifiedImage);
        }

        // Low Pass Filter: Remove high frequencies,
        // write out frequency magnitudes, write out inverse dft
        {
            printf("\n=====LPF=====\n");

            // remove frequencies that are too far from frequency 0.
            // Note that even though our output frequency images have frequency 0 (DC) in the center, that
            // isn't actually how it's stored in our SImageDataComplex structure. Pixel (0,0) is frequency 0.
            SImageDataComplex dft = frequencyData;
            float halfWidth = float(dft.m_width / 2);
            float halfHeight = float(dft.m_height / 2);
            for (int x = 0; x < dft.m_width; ++x)
            {
                for (int y = 0; y < dft.m_height; ++y)
                {
                    float relX = 0.0f;
                    float relY = 0.0f;
                    if (x < halfWidth)
                        relX = float(x) / halfWidth;
                    else
                        relX = (float(x) - float(dft.m_width)) / halfWidth;
                    if (y < halfHeight)
                        relY = float(y) / halfHeight;
                    else
                        relY = (float(y) - float(dft.m_height)) / halfHeight;

                    float dist = sqrt(relX*relX + relY*relY) / c_rootTwo; // divided by root 2 so our distance is from 0 to 1
                    if (dist > 0.1f)
                        dft.m_pixels[y*dft.m_width + x] = std::complex<float>(0.0f, 0.0f);
                }
            }

            // write dft magnitude data
            char outFileName[1024];
            strcpy(outFileName, baseFileName);
            strcat(outFileName, ".lpf.mag.bmp");
            SImageData destImage;
            GetMagnitudeData(dft, destImage);
            SaveImage(outFileName, destImage);

            // inverse dft and save the image
            strcpy(outFileName, baseFileName);
            strcat(outFileName, ".lpf.idft.bmp");
            SImageData modifiedImage;
            InverseDFTImage(dft, modifiedImage);
            SaveImage(outFileName, modifiedImage);
        }

        // High Pass Filter: Remove low frequencies, write out frequency magnitudes, write out inverse dft
        {
            printf("\n=====HPF=====\n");

            // remove frequencies that are too close to frequency 0.
            // Note that even though our output frequency images have frequency 0 (DC) in the center, that
            // isn't actually how it's stored in our SImageDataComplex structure. Pixel (0,0) is frequency 0.
            SImageDataComplex dft = frequencyData;
            float halfWidth = float(dft.m_width / 2);
            float halfHeight = float(dft.m_height / 2);
            for (int x = 0; x < dft.m_width; ++x)
            {
                for (int y = 0; y < dft.m_height; ++y)
                {
                    float relX = 0.0f;
                    float relY = 0.0f;
                    if (x < halfWidth)
                        relX = float(x) / halfWidth;
                    else
                        relX = (float(x) - float(dft.m_width)) / halfWidth;
                    if (y < halfHeight)
                        relY = float(y) / halfHeight;
                    else
                        relY = (float(y) - float(dft.m_height)) / halfHeight;

                    float dist = sqrt(relX*relX + relY*relY) / c_rootTwo; // divided by root 2 so our distance is from 0 to 1
                    if (dist < 0.1f)
                        dft.m_pixels[y*dft.m_width + x] = std::complex<float>(0.0f, 0.0f);
                }
            }

            // write dft magnitude data
            char outFileName[1024];
            strcpy(outFileName, baseFileName);
            strcat(outFileName, ".hpf.mag.bmp");
            SImageData destImage;
            GetMagnitudeData(dft, destImage);
            SaveImage(outFileName, destImage);

            // inverse dft and save the image
            strcpy(outFileName, baseFileName);
            strcat(outFileName, ".hpf.idft.bmp");
            SImageData modifiedImage;
            InverseDFTImage(dft, modifiedImage);
            SaveImage(outFileName, modifiedImage);
        }

        // ZeroPhase
        {
            printf("\n=====Zero Phase=====\n");

            // Set phase to zero for all frequencies.
            // Note that even though our output frequency images have frequency 0 (DC) in the center, that
            // isn't actually how it's stored in our SImageDataComplex structure. Pixel (0,0) is frequency 0.
            SImageDataComplex dft = frequencyData;
            for (int x = 0; x < dft.m_width; ++x)
            {
                for (int y = 0; y < dft.m_height; ++y)
                {
                    // keep each frequency's magnitude but set its phase (angle) to zero
                    std::complex<float>& v = dft.m_pixels[y*dft.m_width + x];
                    float mag = std::abs(v);
                    v = std::complex<float>(mag, 0.0f);
                }
            }

            // write dft magnitude data
            char outFileName[1024];
            strcpy(outFileName, baseFileName);
            strcat(outFileName, ".phase0.mag.bmp");
            SImageData destImage;
            GetMagnitudeData(dft, destImage);
            SaveImage(outFileName, destImage);

            // inverse dft and save the image
            strcpy(outFileName, baseFileName);
            strcat(outFileName, ".phase0.idft.bmp");
            SImageData modifiedImage;
            InverseDFTImage(dft, modifiedImage);
            SaveImage(outFileName, modifiedImage);
        }
    }
    else
        printf("could not read 24 bit bmp file %s\n\n", srcFileName);

    return 0;
}

Here are some links that I found useful:

Fourier Transform: http://www.thefouriertransform.com/
Introduction To Fourier Transforms For Image Processing

# Intro To Audio Synthesis For Music Presentation

Today I gave a presentation at work on the basics of audio synthesis for music. It seemed to go fairly well, and I was surprised to hear that so many others also dabbled in audio synth and music.

The slide deck and example C++ program (uses portaudio) are both up on github here: https://github.com/Atrix256/MusicSynth

Questions, feedback, etc? Drop me a line (:
# Understanding 2-category theory

There are a lot of examples of categories, functors and natural transformations — one can find them anywhere. On the contrary, (weak) 2-categorical structure seems to be more subtle. I have come to understand that, while categories intuitively consist of sets with structure and structure-preserving maps, 2-categories consist of categories with structure, structure-preserving functors and natural transformations between such functors. I know several such examples, but are there other examples of 2-categories (which are not $$Ord$$-enriched)? And a similar question about pseudofunctors ((op)lax functors): where can I find examples of non-trivial morphisms between 2-categories?

• It might also make sense to look at enriched categories in general. nLab seems to be very good at giving the general enriched point of view wherever possible. – jgon Dec 28 '18 at 16:07
• I wrote up something like an answer, and then found myself unsure if it answered your question. What kinds of examples might count as answers to your questions, and which not? It's easy to think of examples of subcategories of Cat, categories of enriched categories, categories of internal categories, etc., but I'm not certain if you're trying to explicitly avoid such examples. – Malice Vidrine Dec 28 '18 at 23:36

As I mention in my comment above, I'm not sure if you are wanting to rule out examples that are obviously related to 2-categories of categories. But in case such examples do constitute at least a partial answer, I offer the following, with the disclaimer that my examples will definitely be slanted towards the topics I'm familiar with.

As you suggest, any time you're dealing with some class of categories you're likely to use 2-categorical notions. We use the fact that $$\mathbf{Cat}$$ is a 2-category constantly in our use of functor categories. The 2-categorical structure on $$\mathfrak{Top}$$ (the 2-category of toposes and geometric morphisms) is how we are able to state things like classifying topos theorems (which need $$\mathfrak{Top}/\mathbf{Set}(\mathcal{E},\mathcal{F})$$ to be a category for any Grothendieck toposes $$\mathcal{E},\mathcal{F}$$). Many other categories of categories, like the category of finite categories, or the category of groupoids, the category of abelian groups, will all be examples of 2-categories where the hom-categories are not typically partial orders; and for related reasons, so will the categories of internal categories in a finitely complete category. Additionally, as someone alludes to in the comments, categories of $$\mathcal{V}$$-enriched categories also have a 2-categorical structure (which can look different from the one on $$\mathbf{Cat}$$; e.g. the category of $$\mathbf{Ab}$$-enriched/pre-additive categories).

Pseudofunctors are also quite common; many results in topos theory, like Diaconescu's theorem (the one about flat functors, not the one about the axiom of choice), are best stated in the language of indexed categories, which are just pseudofunctors from a category to $$\mathbf{Cat}$$.

• Given a geometric morphism $$f:\mathcal{E}\to\mathcal{S}$$, the operation sending $$A\in\mathcal{S}$$ to $$\mathcal{E}/f^*(A)$$ is a pseudofunctor $$\mathcal{S}^{op}\to \mathfrak{Top}$$. (This form of indexing is especially helpful in the proof of Diaconescu's theorem.)
• The operation sending $$\mathbb{A}\in\mathbf{cat}(\mathcal{E})$$ (the category of internal categories in a topos $$\mathcal{E}$$) to the category of internal diagrams on $$\mathbb{A}$$ is a pseudofunctor $$\mathbf{cat}(\mathcal{E})^{op}\to\mathfrak{Top}$$. • For finitely complete $$\mathcal{C}$$, the assignment $$A\in \mathcal{C}$$ to $$\mathcal{C}/A$$ is a pseudofunctor $$\mathcal{C}^{op}\to\mathbf{Cat}$$. (Indexed categories of this form will show up in some guise almost any time you deal with indexed categories.) • In fact in categorical logic, indexed categories (or the very closely related fibred category) are ubiquitous as semantics for type theories, so the literature there will give you plenty of examples. (I exclude mention of the action on morphisms above as I assume they're fairly transparent.) Not only are indexed categories defined in terms of a 2-categorical notion, but they and their notion of "indexed functor" and "indexed natural transformation" also form a 2-category. And as above, we can also talk about an internal indexed category (or cloven fibration) in a 2-category $$\mathcal{C}$$, the category of which in $$\mathcal{C}$$ will again have the structure of a 2-category. And this is without touching on things like bicategories---of objects and spans in a category with pullbacks; categories with profunctors or anafunctors; and their internal counterparts---and pseudofunctors between them. Most of these examples are, in a sense, unsurprising places to find 2-categorical notions, since most of them are examples where the objects of the 2-category are some form of actual category. At the moment I can't think of good examples of surprising 2-categories. But I hope I've given the impression that the unsurprising cases are not mere novelties, but are ubiquitous and quite useful.
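To make the pseudofunctoriality in the third bullet explicit (my own gloss, using standard conventions, not part of the original answer): for a morphism $$f:A\to B$$ in a finitely complete $$\mathcal{C}$$, the action on morphisms is given by pullback, $$f^*:\mathcal{C}/B\to\mathcal{C}/A$$. The "pseudo" part is that pullbacks are only defined up to canonical isomorphism, so for composable $$f:A\to B$$ and $$g:B\to C$$ one gets coherent isomorphisms rather than equalities:

$$(g\circ f)^*\cong f^*\circ g^*, \qquad (\mathrm{id}_A)^*\cong \mathrm{id}_{\mathcal{C}/A}.$$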
# The primitive equations approximation of the anisotropic horizontally viscous 3D Navier-Stokes equations @article{Li2022ThePE, title={The primitive equations approximation of the anisotropic horizontally viscous 3D Navier-Stokes equations}, author={Jinkai Li and Edriss S. Titi and Guo Yuan}, journal={Journal of Differential Equations}, year={2022} } • Published 1 June 2021 • Mathematics, Physics • Journal of Differential Equations 2 Citations Global well-posedness of $z$-weak solutions to the primitive equations without vertical diffusivity • Mathematics, Physics • 2021 In this paper, we consider the initial boundary value problem in a cylindrical domain to the three dimensional primitive equations with full eddy viscosity in the momentum equations but with only Local Martingale Solutions and Pathwise Uniqueness for the Three-dimensional Stochastic Inviscid Primitive Equations We study the stochastic effect on the three-dimensional inviscid primitive equations (PEs, also called the hydrostatic Euler equations). Specifically, we consider a larger class of noises than ## References SHOWING 1-10 OF 63 REFERENCES Global well-posedness of the three-dimensional viscous primitive equations of large scale ocean and atmosphere dynamics • Mathematics, Physics • 2005 In this paper we prove the global existence and uniqueness (regularity) of strong solutions to the three-dimensional viscous primitive equations, which model large scale ocean and atmosphere The primitive equations as the small aspect ratio limit of the Navier–Stokes equations: Rigorous justification of the hydrostatic approximation • Mathematics, Physics Journal de Mathématiques Pures et Appliquées • 2019 An important feature of the planetary oceanic dynamics is that the aspect ratio (the ratio of the depth to horizontal width) is very small. As a result, the hydrostatic approximation (balance), Global Well–Posedness of the 3D Primitive Equations with Partial Vertical Turbulence Mixing Heat Diffusion • Physics, Mathematics • 2010 The three–dimensional incompressible viscous Boussinesq equations, under the assumption of hydrostatic balance, govern the large scale dynamics of atmospheric and oceanic motion, and are commonly Finite-Time Blowup for the Inviscid Primitive Equations of Oceanic and Atmospheric Dynamics • Physics, Mathematics • 2012 In an earlier work we have shown the global (for all initial data and all time) well-posedness of strong solutions to the three-dimensional viscous primitive equations of large scale oceanic and Global Well-Posedness of the Three-Dimensional Primitive Equations with Only Horizontal Viscosity and Diffusion • Mathematics • 2016 © 2015 Wiley Periodicals, Inc. In this paper, we consider the initial boundary value problem of the three-dimen-sional primitive equations for planetary oceanic and atmospheric dynamics with only Global existence of weak solutions to 3D compressible primitive equations with degenerate viscosity • Physics • 2020 In this paper, we investigate the compressible primitive equations (CPEs) with density-dependent viscosity for large initial data. The CPE model can be derived from the 3D compressible and Mathematical Justification of the Hydrostatic Approximation in the Primitive Equations of Geophysical Fluid Dynamics • Mathematics, Computer Science SIAM J. Math. Anal. • 2001 A convergence and existence theorem is proved for this asymptotic model of the time-dependent incompressible Navier-Stokes equations by means of anisotropic estimates and a new time-compactness criterium. 
The hydrostatic approximation for the primitive equations by the scaled Navier–Stokes equations under the no-slip boundary condition • Physics, Mathematics Journal of Evolution Equations • 2020 In this paper we justify the hydrostatic approximation of the primitive equations in the maximal $L^p$-$L^q$-setting in the three-dimensional layer domain $\Omega = \mathbb{T}^2 \times (-1, 1)$ under the

Strong solutions to the 3D primitive equations with only horizontal dissipation: near $H^1$ initial data • Mathematics, Physics • 2016 In this paper, we consider the initial-boundary value problem of the three-dimensional primitive equations for oceanic and atmospheric dynamics with only horizontal viscosity and horizontal
# Application For Degree Certificate Sample

Sample Request Letter For The Certificate From The School — Application For Degree Certificate Sample.
# aluminum barrel use cylinder barrel Maker Romania

Oct 08, 2020 · No one CF barrel maker has yet made barrels that are totally consistent in performance and durability. ... I think the tension barrel design that leaves air space between the steel barrel liner and places the carbon fiber in a cylinder around it makes the most sense and best use of the different materials. ... Mercedes Benz glues their aluminum ...

MIT Barrel Recovery Machine. MIT S.A. is located in Santiago, Chile in one of the world's prominent wine making regions. This machine recovers used barrels with a maximum height of 41" and a minimum height of 33", a minimum diameter (from the extremes) of 19" and a maximum diameter (from the extremes) of 25", and a minimum diameter (from the center) of 19" and a maximum diameter (from the ...

May 28, 2020 · The barrel looked like a mirror but accuracy continued to drop. He sent it back to the barrel maker and the owner called him after they got it. The owner was pretty funny. He said "I'll cut you some slack on a replacement barrel. But do you want a 7mm or a 7.2mm this time." My friend swore off the compounds after that.

Jun 25, 2009 · The older barrel makers used rather soft steel and many of those barrels were extremely accurate and are still in use today giving excellent results. With modern computer aided machinery any barrel maker should be able to turn out an excellent barrel.

China Pneumatic Aluminum Tube manufacturers - Select 2020 high quality Pneumatic Aluminum Tube products in best price from certified Chinese Water Tube manufacturers, Industrial Tube suppliers, wholesalers and factory on Made-in-China

Mar 21, 2016 · A USA barrel maker picks a sweet spot diameter and goes from these. Every barrel component follows this pattern. The #2 deviation is the norm. Until you can verify your trunnion ID you are kinda stuck in finding a barrel that will/should fit if it is as large as you have noted.

Aug 13, 2020 · Use a drill press to drill out a piece of round rod (clamp really good, start small, work your way up the sizes, go slow, be careful, use a lot of lube/oil, use quality bits), use a reamer to finish the bore, make a rifling button, push down the bore, once rifled make or buy a reamer to drill the chamber, then crown the end of the barrel.

Yellowmark is a line of construction equipment parts developed by Caterpillar to offer a reliable, lower cost alternative for construction equipment.

May 15, 2014 · You might listen to Jim Lederer. He is a barrel maker from one of our well known Wisconsin barrel companies. You may also pay attention to Don Nielson. Don is a toolmaker and very knowledgeable gunmaker. He has had a few long range records. I do have prints for the air actuated set up for a following rest.

Food grade aluminum piston with O-Ring Seal to keep product from leaking up above the piston; Used widely in kitchens, restaurants and many food processing places; Reversible Cylinder (can be made for both right or left hand use); Cylinder thickness: 1.2 mm, much thicker than other 0.8mm cylinders.

A barrel, cask, or tun is a hollow cylindrical container, traditionally made of vertical wooden staves and bound by wooden or metal hoops. Traditionally, the barrel was a standard size of measure referring to a set capacity or weight of a given commodity. For example, a beer barrel originally had a capacity of 36 US gallons (140 L) while an ale barrel had a capacity of 32 US gallons (120 L).

Oct 20, 2016 · Aluminum shim stock or even heavy cardstock works pretty well.
At least in a 4-jaw. ... had a chance to work on a hammer forge 'barrel maker' years ago. It was destroying the sensors for the readouts. A small thick cylinder of steel became a chamber and barrel over a relatively short cycle. At the time not very many minutes. Jun 23, 2011 · Hand lapping or polishing a finished barrel is a good way to "improve" it. It works this way: After being very impressed with the glossy bore you created, you find that accuracy has gone to hell. Then you replace the polished barrel with a good quality fitted barrel and just shoot bullet through it... Apr 18, 2012 · They use a wooden rifling bench,with about a 3" diameter maple cylinder with the rifling grooves cut into it with hand tools. The cutter is a single tooth cutter that is adjusted for height by packing a thin piece of paper under the cutter,sandwiched between … The causes of cylinder head oil leaks, stretched studs, leaky gaskets and a number of recommendations for various sticky products to help with these problems Halite head gaskets and torqueing up. The only cylinder head gasket to use is the standard steel ringed halite gasket. One problem, is in using gasket jointing compound on the head gasket. May 06, 2010 · As it sits on the bench, I would think the bore, rifling and contour. When the thread started, the picture did not appear for me, so I assumed a blank meant cylinder. Others will probably know better, but this barrel maker has a long history of top quality in a rifling method that has since taken over the accuracy game. May 06, 2010 · While my bore scope says the barrel makers of today are making a better product than what any of the old timers made in years past, I would still snatch up any Obermeyer barrel I could find just as fast as I could grab my wallet. And my Palma barrel has shot 14 consecutive x's at 600 yards with iron sights even with me shooting it. Add rifling to your brass barrels with this easy-to-use stencil set. Permanently increase the accuracy and performance of your strongest blasters without stringing SCARs or adding length! The Merlynn mod, pioneered by Nerf accuracy aficionado Chris Cartaya, is powerful but notoriously complicated. Jun 16, 2010 · The triangular barrel is not a gimmick, it cuts down on weight but maintains stiffness and torsional rigidity. Look at all the building structures that use triangular tubes or supports. Not saying that it is far superior to fluting a barrel, but 5R barrel is a way to cut down the weight and maintain barrel … Apr 11, 2020 · building a dual caliber Colt 1911 Gunsmithing & Troubleshooting. 1911Forum > Hardware & Accessories > Gunsmithing & Troubleshooting: Polishing the bore? Jun 12, 2011 · Get someone to make carbon fiber medium contour barrels. Basically a standard profile barrel wrapped in carbon fiber with the important areas exposed (op rod guide, gas cylinder and suppressor). Stiffen the barrel while adding minimal weight. Could probably get away with an aluminum or titanium trigger housing. May 04, 2007 · I am currently building a new project with a Nesika Model L, 1.470 Diameter Action with a Krieger 30", 1.450 straight cylinder barrel. Some people say, " you need a barrel block", however, the majority of long range shooters, many gunsmiths, and action makers have informed me a barrel block is not needed if using a Large Diameter Custom Action. This double crozing machine is designed to process the barrel heads, build the croze and simultaneously chamfer and clean the barrel ends. 
Designed for a barrel with a minimum height of 33 inches and a maximum height of 42 inches. It includes an adjustable cross slider for cutter blocks. Comes with a hydraulic power unit and full electrical panel.

14DL-0182 hot sale 60mm 70mm 80mm 90mm mortise master lock cylinder barrel for aluminum door. US $1.50-$5.00 / Set 100 Sets (Min. Order) 2 YRS . Foshan Zhong Nuo Door & Window Accessory Technology Co., Ltd. Contact Supplier ...

You can also choose from plastic, steel, and wooden. As well as from turkey. And whether unit barrel is manufacturing plant, hotels, or building material shops. There are 12,881 unit barrel suppliers, mainly located in Asia. The top supplying country or region is China, which supply 100% of unit barrel …

In addition to the contour 16.5” barrel that weighs 1.35 lbs (18.16 ounces), Beyer also provides an alternative tapered barrel. For those who don't know, Green Mountain Rifle Barrel Company is an OEM barrel maker. One TacSol option that stands out from other makers is the X-Ring Open sight barrel.

LOCTITE ® Purple Threadlocker. LOCTITE ® Purple Threadlocker, also known as LOCTITE ® 222™, has become one of our most successful products. LOCTITE ® 222™ cures in 24 hours. It can also be used on low-strength metals such as aluminum and brass. This offers a lot of flexibility to the user. Find customer testimonials and more information on our purple threadlocker in When and Why to Use ...

Jan 08, 2019 · You won't be replacing a two-piece barrel with a one-piece barrel. The shroud fits into mating surfaces in the frame, and it's the torqueing of the inner flanged barrel tube that retains the shroud in place. A custom aftermarket barrel maker would have to fabricate another flanged inner barrel tube to fit inside the original shroud. Little point.

Jul 23, 2006 · The barrel is constrained at the end of the forearm at the gas cylinder assembly, with a specified, AND significant amount of pressure (a result of how the action is bedded in the stock). The first accurized National Match M-14s did not have fancy aftermarket, stress relieved barrels, but this bedding method continued with the advent of ...

This rustic elegant wine barrel ring light fixture is unique and one-of-a-kind, made from the rings of used wine barrels.
Its simplicity will add style and beauty to any room. The light fixture is new and UL approved. Bulb (60 watt max). Approximate Measurements: 18-1/2 L x 18-1/2 W x 20 H NOTE:

Synonyms for barrel include cask, butt, drum, keg, firkin, hogshead, tun, vat, kilderkin and pipe. Find more similar words at wordhippo!

Additionally, the increased diameter of the aluminum tube would expose more surface to the air, thus cooling faster. Thus this claim seems plausible to me. As for the rigidity, conventional rifle barrel wisdom dictates that a thicker barrel is more rigid than a thinner one. A straight cylinder barrel 1.25 inches in diameter should be pretty stiff.

CZ Barrel 9mm 5.0" L x 0.55" D Threaded 1/2 x 28 By CZC SP01 . Rating: 100%. 1 Review . SKU: 30123 $275.00. Add to Cart. CZC Match Barrel 9mm 5.2" L x 0.55" D "SP01 EXTENDED BARREL" Rating: 100%. 1 Review . SKU: 30137 $275.00. Add to Cart. New. CZ 75 Bull Upper with fixed rear sight ...

Definition of barrel in the Definitions.net dictionary. Meaning of barrel. What does barrel mean? Information and translations of barrel in the most comprehensive dictionary definitions resource on …

Definition of barrel of a capstan in the Definitions.net dictionary. Meaning of barrel of a capstan. What does barrel of a capstan mean? Information and translations of barrel of a capstan in the most comprehensive dictionary definitions resource on the web.

barrel \bar"rel\ (băr"rĕl), n. [OE. barel, F. baril, prob. fr. barre bar. Cf. barricade.] 1. A round vessel or cask, of greater length than breadth, and bulging in the middle, made of staves bound with hoops, and having flat ends or heads. 2. The quantity which constitutes a full barrel. This varies for different articles, and also in different places for the same article, being ...

Greek words for barrel include βαρέλι, κύλινδρος, βυτίο, υδροθάλαμος, σφόνδυλος κιόνα, σωλήνας όπλου and κύλινδρος αντλίας. Find more Greek words at wordhippo!

Browning Buck Mark Semi-Automatic Pistol, #515ZM20983, .22 LR cal., 5.5'' bull barrel, matte finish, black aluminum frame, factory hard rubber grips, ... Romanian M44 Mosin-Nagant Bolt Action Rifle. Lot # 747 (Sale Order: 46 of 713) Romanian M44 Mosin-Nagant Bolt Action ...

Price: $1,995.00 Description: Serial #16005, .22 WCF, 10 1/4 inch part octagon barrel with an excellent, bright bore. This is a very interesting handgun, obviously based on the Stevens Large Frame Tip-Up design, but built by Thom...

Barrel Relining Service
SUPERSTABILITY OF FUNCTIONAL INEQUALITIES ASSOCIATED WITH GENERAL EXPONENTIAL FUNCTIONS

Lee, Eun-Hwi

• Received : 2009.09.01
• Published : 2010.12.25

Abstract

We prove the superstability of a functional inequality associated with general exponential functions as follows:

$|f(x+y) - a^{x^2 y + x y^2} g(x) f(y)| \leq H_p(x, y).$

It is a generalization of the superstability theorem for the exponential functional equation proved by Baker.

Keywords

Exponential functional equation; Stability of functional equations; Superstability

References

1. J. Baker, The stability of the cosine equation, Proc. Amer. Math. Soc. 80 (1980), 411-416. https://doi.org/10.1090/S0002-9939-1980-0580995-3
2. J. Baker, J. Lawrence and F. Zorzitto, The stability of the equation f(x+y) = f(x)f(y), Proc. Amer. Math. Soc. 74 (1979), 242-246.
3. G. L. Forti, Hyers-Ulam stability of functional equations in several variables, Aequationes Math. 50 (1995), 146-190. https://doi.org/10.1007/BF01831117
4. R. Ger, Superstability is not natural, Rocznik Naukowo-Dydaktyczny WSP w Krakowie, Prace Mat. 159 (1993), 109-123.
5. D.H. Hyers, On the stability of the linear functional equation, Proc. Nat. Acad. Sci. U.S.A. 27 (1941), 222-224. https://doi.org/10.1073/pnas.27.4.222
6. D.H. Hyers and Th.M. Rassias, Approximate homomorphisms, Aequationes Math. 44 (1992), 125-153. https://doi.org/10.1007/BF01830975
7. D.H. Hyers, G. Isac and Th.M. Rassias, Stability of Functional Equations in Several Variables, Birkhäuser, Basel-Berlin (1998).
8. K.W. Jun, G.H. Kim and Y.W. Lee, Stability of generalized gamma and beta functional equations, Aequationes Math. 60 (2000), 15-24. https://doi.org/10.1007/s000100050132
9. S.-M. Jung, On the general Hyers-Ulam stability of gamma functional equation, Bull. Korean Math. Soc. 34, No. 3 (1997), 437-446.
10. S.-M. Jung, On the stability of the gamma functional equation, Results Math. 33 (1998), 306-309. https://doi.org/10.1007/BF03322090
11. G.H. Kim and Y.W. Lee, The stability of the beta functional equation, Studia Univ. Babes-Bolyai Mathematica XLV (1) (2000), 89-96.
12. Y.W. Lee, On the stability of a quadratic Jensen type functional equation, J. Math. Anal. Appl. 270 (2002), 590-601. https://doi.org/10.1016/S0022-247X(02)00093-8
13. Y.W. Lee, The stability of derivations on Banach algebras, Bull. Institute of Math. Academia Sinica 28 (2000), 113-116.
14. Y.W. Lee and B.M. Choi, The stability of Cauchy's gamma-beta functional equation, J. Math. Anal. Appl. 299 (2004), 305-313. https://doi.org/10.1016/j.jmaa.2003.12.050
15. Th.M. Rassias, On the stability of the linear mapping in Banach spaces, Proc. Amer. Math. Soc. 72 (1978), 297-300. https://doi.org/10.1090/S0002-9939-1978-0507327-1
16. Th.M. Rassias, On a problem of S. M. Ulam and the asymptotic stability of the Cauchy functional equation with applications, General Inequalities 7, MFO, Oberwolfach, Birkhäuser Verlag, Basel, ISNM Vol. 123 (1997), 297-309.
17. Th.M. Rassias, On the stability of the quadratic functional equation and its applications, Studia Univ. Babes-Bolyai XLIII (3) (1998), 89-124.
18. Th.M. Rassias, The problem of S. M. Ulam for approximately multiplicative mappings, J. Math. Anal. Appl. 246 (2000), 352-378. https://doi.org/10.1006/jmaa.2000.6788
19. Th.M. Rassias, On the stability of functional equations in Banach spaces, J. Math. Anal. Appl. 251 (2000), 264-284. https://doi.org/10.1006/jmaa.2000.7046
20. Th.M. Rassias, On the stability of functional equations and a problem of Ulam, Acta Applicandae Math. 62 (2000), 23-130.
https://doi.org/10.1023/A:1006499223572
21. Th.M. Rassias and P. Semrl, On the behavior of mappings which do not satisfy Hyers-Ulam stability, Proc. Amer. Math. Soc. 114 (1992), 989-993. https://doi.org/10.1090/S0002-9939-1992-1059634-1
22. S.M. Ulam, Problems in Modern Mathematics, Chap. VI, Wiley, New York, 1964.

Cited by

1. Hyperstability and Superstability, vol. 2013, 2013, https://doi.org/10.1155/2013/401756
# Math2111: Chapter 4: Surface integrals. Section 2: Surface area and surface integrals of scalar fields In this blog entry you can find lecture notes for Math2111, several variable calculus. See also the table of contents for this course. This blog entry printed to pdf is available here. In the following we assume that the surfaces are smooth, that is, they are assumed to be images of parameterised surfaces $\boldsymbol{\Phi}:D\to\mathbb{R}^3$ for which: • $D$ is a non-empty, compact and Jordan-measurable subset of $\mathbb{R}^2$; • the mapping $\boldsymbol{\Phi}$ is one-to-one; • $\boldsymbol{\Phi}$ is continuously differentiable • the normal vector $\boldsymbol{n}=\frac{\partial \boldsymbol{\Phi}}{\partial u} \times \frac{\partial \boldsymbol{\Phi}}{\partial v} \neq \boldsymbol{0}$ except possibly at a finite number of points; (Notice, the condition that ${}D$ is compact can also be replaced by the condition that the surface $S=\{\boldsymbol{\Phi}(u,v): (u,v)\in D\}$ is compact.) Surface area In a previous post we discussed parameterised surfaces. Now we calculate the area of parameterised surfaces. Recall that the area of a parallelogram in $\mathbb{R}^3$ spanned by two vectors $\boldsymbol{a}$ and $\boldsymbol{b}$ is given by the Euclidean norm $\|\cdot\|_2$ of the vector obtained by taking the cross product of these two vectors, that is, by $\|\boldsymbol{a}\times \boldsymbol{b}\|_2.$ From the parameterisation $\boldsymbol{\Phi}:D\subset\mathbb{R}^2\to\mathbb{R}^3$ we obtain two tangent vectors $\frac{\partial \boldsymbol{\Phi}}{\partial u}$ and $\frac{\partial \boldsymbol{\Phi}}{\partial v}.$ We can approximate a piece of the surface at some point $\boldsymbol{\Phi}(u_k,v_k)$ ($(u_k,v_k)\in D$) by a parallelogram spanned by the vectors $\frac{\partial \boldsymbol{\Phi}}{\partial u}(u_k,v_k) \Delta u_k$ and $\frac{\partial \boldsymbol{\Phi}}{\partial v}(u_k,v_k) \Delta v_k,$ whose area can be approximated by $\displaystyle \begin{array}{lr} \left\|\frac{\partial \boldsymbol{\Phi}}{\partial u}(u_k,v_k) \Delta u_k \times \frac{\partial \boldsymbol{\Phi}}{\partial v}(u_k,v_k) \Delta v_k \right\|_2 & \\ & \\ = \left\|\frac{\partial \boldsymbol{\Phi}}{\partial u}(u_k,v_k) \times \frac{\partial \boldsymbol{\Phi}}{\partial v}(u_k,v_k) \right\|_2 \Delta u_k \Delta v_k. & \end{array}$ By summing over all pieces which approximate the whole surface, i.e. forming the sum $\displaystyle \sum_{k=1}^N \left\|\frac{\partial \boldsymbol{\Phi}}{\partial u}(u_k,v_k) \times \frac{\partial \boldsymbol{\Phi}}{\partial v}(u_k,v_k) \right\|_2 \Delta u_k \Delta v_k,$ and considering the limit when the size of the pieces goes to zero we obtain the integral $\displaystyle A(S) = \iint_D \left\|\frac{\partial \boldsymbol{\Phi}}{\partial u}(u,v) \times \frac{\partial \boldsymbol{\Phi}}{\partial v}(u,v) \right\|_2 \,\mathrm{d} u \,\mathrm{d} v = \int_D \left\|\boldsymbol{n}(u,v) \right\|_2 \,\mathrm{d} u \,\mathrm{d} v.$ (Here, $\boldsymbol{n}$ is the normal vector defined here.) We call $A(S)$ the surface area of the surface ${}S$. 
Definition Let $\boldsymbol{\Phi}:D\subset\mathbb{R}^2\to\mathbb{R}^3$ be a parameterisation of a surface ${}S.$ Then the surface area $A(S)$ of ${}S$ is defined by $\displaystyle A(S)= \iint_D \left\|\frac{\partial \boldsymbol{\Phi}}{\partial u}(u,v) \times \frac{\partial \boldsymbol{\Phi}}{\partial v}(u,v) \right\|_2 \,\mathrm{d} u \,\mathrm{d} v.$ The last formula can also be written as $\displaystyle \begin{array}{rcl} A(S) &=& \iint_D \|\boldsymbol{n}(u,v)\|_2 \,\mathrm{d} u \,\mathrm{d} v \\ && \\ &=& \iint_D \sqrt{\left(\frac{\partial (Y,Z)}{\partial (u,v)} \right)^2 + \left(\frac{\partial (Z,X)}{\partial (u,v)} \right)^2 +\left(\frac{\partial (X,Y)}{\partial (u,v)} \right)^2} \,\mathrm{d}u\,\mathrm{d} v. \end{array}$ If the surface is a graph of a function $f:D\subset\mathbb{R}^2\to\mathbb{R},$ then $\boldsymbol{n}=\left(-\frac{\partial f}{\partial x}, -\frac{\partial f}{\partial y}, 1\right)$ and hence the surface area of the graph is given by $\displaystyle A(S)=\iint_D \sqrt{\left(\frac{\partial f}{\partial x} \right)^2 +\left(\frac{\partial f}{\partial y} \right)^2 + 1} \,\mathrm{d} x\,\mathrm{d} y.$ Example Consider a sphere of radius $R > 0.$ To calculate its surface area, notice that, because of symmetry, we can calculate the surface area of the upper hemisphere and multiply the result by ${}2$ to obtain the surface area of the whole sphere. The upper hemisphere is given by the equation $x^2+y^2+z^2=R^2,$ where $z > 0.$ We can set $z = f(x,y)=\sqrt{R^2-x^2-y^2}$ and use the parameterisation of surfaces for functions as shown in Section 1. The parameter domain ${}D$ is in this case $D=\{(x,y)\in\mathbb{R}^2: x^2+y^2\le R^2\}$ and the normal vector is $\displaystyle \boldsymbol{n}(x,y)= \frac{x}{\sqrt{R^2-x^2-y^2}} \widehat{\boldsymbol{i}} + \frac{y}{\sqrt{R^2-x^2-y^2}} \widehat{\boldsymbol{j}} + \widehat{\boldsymbol{k}}.$ The length of this vector is $\displaystyle \|\boldsymbol{n}(x,y)\|_2= \frac{R}{\sqrt{R^2-x^2-y^2}}.$ Hence the surface area of the hemisphere (which we shall denote by $A(S/2)$) is given by $\displaystyle A(S/2)= \iint_D \|\boldsymbol{n}(x,y)\|_2 \,\mathrm{d} x\,\mathrm{d} y = \iint_D \frac{R}{\sqrt{R^2-x^2-y^2}} \,\mathrm{d} x\,\mathrm{d} y.$ The last integral can be evaluated using polar coordinates, by which we obtain $\displaystyle A(S/2)=\int_0^R\int_{0}^{2\pi} \frac{R}{\sqrt{R^2-r^2}} r \,\mathrm{d} \theta \,\mathrm{d} r = 2\pi R \int_0^R \frac{r}{\sqrt{R^2-r^2}} \,\mathrm{d} r = 2\pi R^2.$ Hence the area of the sphere is given by $\displaystyle A(S) = 2A(S/2) = 4\pi R^2.$ $\Box$ Exercise Calculate the surface area of a cone parameterised by $x=r\cos\theta,$ $y = r\sin\theta$ and $z=r,$ where $0\le \theta \le 2\pi$ and $0\le r\le 1.$ $\Box$
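(For checking your answer, here is a solution sketch added to these notes, not part of the original exercise. With $\boldsymbol{\Phi}(r,\theta)=(r\cos\theta, r\sin\theta, r)$ one finds $\boldsymbol{n} = \frac{\partial \boldsymbol{\Phi}}{\partial r} \times \frac{\partial \boldsymbol{\Phi}}{\partial \theta} = (-r\cos\theta, -r\sin\theta, r),$ so $\|\boldsymbol{n}\|_2 = r\sqrt{2}$ and $\displaystyle A(S)=\int_0^{2\pi}\!\int_0^1 r\sqrt{2} \,\mathrm{d} r\,\mathrm{d}\theta = \sqrt{2}\,\pi.$)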
Scalar surface integrals We now integrate scalar fields over surfaces. This is in analogy to scalar line integrals considered in Chapter 3, Section 1. Definition Let $\boldsymbol{\Phi}:D\subset\mathbb{R}^2\to\mathbb{R}^3$ be a parameterisation of the surface $S=\mbox{Image}(\boldsymbol{\Phi})$ and let $f:S\to\mathbb{R}$ be continuous. Then the integral of ${}f$ over ${}S$ is given by $\displaystyle \iint_S f \,\mathrm{d} \mathcal{S} = \iint_D f(\boldsymbol{\Phi}(u,v)) \left\|\frac{\partial \boldsymbol{\Phi}}{\partial u} \times \frac{\partial \boldsymbol{\Phi}}{\partial v} \right\|_2\,\mathrm{d} u\,\mathrm{d} v.$ The last formula can also be written as $\displaystyle \begin{array}{rcl} \iint_S f \,\mathrm{d} \mathcal{S} &=& \iint_D f(\boldsymbol{\Phi}(u,v)) \|\boldsymbol{n}(u,v)\|_2 \,\mathrm{d} u \,\mathrm{d} v \\ && \\ &=& \iint_D f(\boldsymbol{\Phi}(u,v)) \sqrt{\left(\frac{\partial (Y,Z)}{\partial (u,v)} \right)^2 + \left(\frac{\partial (Z,X)}{\partial (u,v)} \right)^2 +\left(\frac{\partial (X,Y)}{\partial (u,v)} \right)^2} \,\mathrm{d}u\,\mathrm{d} v. \end{array}$ If the surface is the graph of a function $g:D\to\mathbb{R},$ then we also have $\displaystyle \iint_S f \,\mathrm{d} \mathcal{S} = \iint_D f(x,y,g(x,y)) \sqrt{\left(\frac{\partial g}{\partial x}\right)^2 + \left(\frac{\partial g}{\partial y}\right)^2 + 1} \,\mathrm{d} x\,\mathrm{d} y .$ Example Let a surface ${}S$ be given by $z^2=x^2+y^2$ with $0\le z \le 1$ and let $f(x,y,z)=1+z + x^2+y^2.$ Then set $X(r,\theta)=r\cos\theta,$ $Y(r,\theta)=r\sin\theta$ and $Z(r,\theta)=r.$ Then $\displaystyle \frac{\partial (Y,Z)}{\partial (r,\theta)} = r\cos \theta, \quad \frac{\partial (Z,X)}{\partial (r,\theta)} =-r\sin\theta, \quad \frac{\partial (X,Y)}{\partial (r,\theta)}=r.$ Hence $\displaystyle \begin{array}{rcl} \iint_S f\,\mathrm{d} \mathcal{S} &= & \int_{0}^1 \int_{0}^{2\pi} f(X(r,\theta), Y(r,\theta),Z(r,\theta)) \sqrt{r^2\sin^2\theta+r^2\cos^2\theta+r^2}\,\mathrm{d}\theta\,\mathrm{d} r \\ && \\ &=& \int_0^1 \int_0^{2\pi} (1+r+r^2) r \sqrt{2}\,\mathrm{d}\theta\,\mathrm{d} r = 2\sqrt{2}\pi \frac{13}{12}. \end{array}$ $\Box$ Exercise The surface in the previous example is the graph of a function. Use this to parameterise the surface and calculate the scalar surface integral using this approach. $\Box$ Surface integrals of scalar valued functions over graphs Suppose the surface ${}S$ is the graph of a function $z=g(x,y)$ defined on a domain $D\subset\mathbb{R}^2.$ Then we can use the parameterisation $\boldsymbol{\Phi}(x,y)=x\widehat{\boldsymbol{i}} + y\widehat{\boldsymbol{j}} + g(x,y) \widehat{\boldsymbol{k}}.$ Then the normal vector is $\boldsymbol{n}(x,y)= -\frac{\partial g}{\partial x} \widehat{\boldsymbol{i}} -\frac{\partial g}{\partial y} \widehat{\boldsymbol{j}} + \widehat{\boldsymbol{k}}.$ Hence $|\boldsymbol{n} \cdot \widehat{\boldsymbol{k}}| = 1$ and therefore $\displaystyle \iint_S f \,\mathrm{d}\mathcal{S}= \iint_D f \|\boldsymbol{n}\|_2\,\mathrm{d} A = \iint_D f \frac{\|\boldsymbol{n}\|_2}{|\boldsymbol{n}\cdot \widehat{\boldsymbol{k}}|}\,\mathrm{d} A = \iint_D \frac{f}{|\widehat{\boldsymbol{n}}\cdot \widehat{\boldsymbol{k}}|} \,\mathrm{d} A,$ where $\widehat{\boldsymbol{n}}$ is the unit normal vector to the surface ${}S.$ Example Let a surface ${}S$ be given by $2x+y-2z=1$ and $x,y\ge 0$ and $z \le 0.$ Let $f(x,y,z) = y.$ Then a normal vector to the surface is $\boldsymbol{n} = (2,1,-2).$ Since $\|\boldsymbol{n}\|_2=3,$ the unit normal vector is $\widehat{\boldsymbol{n}}=(2/3,1/3,-2/3).$ Hence $|\widehat{\boldsymbol{n}} \cdot \widehat{\boldsymbol{k}}| = 2/3$.
We can describe the surface as a graph of the function $z = g(x,y)=x+y/2-1/2,$ where the domain is $D=\{(x,y): 0 \le x \le 1/2, 0 \le y \le 1-2x \}.$ Therefore $\displaystyle \iint_S f \,\mathrm{d} \mathcal{S}=\iint_D \frac{f}{|\widehat{\boldsymbol{n}}\cdot \widehat{\boldsymbol{k}}|} \,\mathrm{d} A = \int_0^{1/2} \int_0^{1-2x} 3y/2 \,\mathrm{d} y \,\mathrm{d} x = \frac{1}{8}.$ $\Box$ Applications Scalar surface integrals can be used to calculate mass, center of mass and moments of inertia of thin shells. Let $\delta$ be the density function of a very thin shell. • Mass $M =\iint_S \delta \,\mathrm{d} \mathcal{S}$ • Center of mass $\overline{x} = \frac{1}{M}\iint_S x \delta \,\mathrm{d} \mathcal{S},$ $\overline{y}=\frac{1}{M} \iint_S y \delta \,\mathrm{d} \mathcal{S}$ and $\overline{z} = \frac{1}{M} \iint_S z \delta \,\mathrm{d} \mathcal{S}.$ • Moment of inertia $I_x=\iint_S (y^2+z^2)\delta \,\mathrm{d} \mathcal{S},$ $I_y=\iint_S (x^2+z^2)\delta \,\mathrm{d} \mathcal{S}$ and $I_z=\iint_S (x^2+y^2)\delta \,\mathrm{d} \mathcal{S}$
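As a quick numerical sanity check of these formulas, the following sketch (Python with NumPy; not part of the original notes, and the grid sizes are arbitrary) approximates the cone integral $\iint_S (1+z+x^2+y^2)\,\mathrm{d}\mathcal{S}$ from the example above with a midpoint rule in the parameters $(r,\theta)$, using the surface element $\|\boldsymbol{n}\|_2 = r\sqrt{2}$ computed there.

```python
import numpy as np

# Midpoint-rule approximation of the cone example:
# S: z^2 = x^2 + y^2, 0 <= z <= 1, f = 1 + z + x^2 + y^2 = 1 + r + r^2,
# with surface element ||n|| dr dtheta = r*sqrt(2) dr dtheta.
n_r, n_t = 2000, 2000
r = (np.arange(n_r) + 0.5) / n_r              # midpoints of [0, 1]
t = (np.arange(n_t) + 0.5) * 2 * np.pi / n_t  # midpoints of [0, 2*pi]
R, T = np.meshgrid(r, t, indexing="ij")

f = 1 + R + R**2                                   # integrand on the surface
dS = R * np.sqrt(2) * (1.0 / n_r) * (2 * np.pi / n_t)

print(np.sum(f * dS))                              # ~ 9.6263
print(2 * np.sqrt(2) * np.pi * 13 / 12)            # exact value from the notes
```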
## C Specification

The VkPhysicalDeviceFeatures2 structure is defined as:

```c
// Provided by VK_VERSION_1_1
typedef struct VkPhysicalDeviceFeatures2 {
    VkStructureType             sType;
    void*                       pNext;
    VkPhysicalDeviceFeatures    features;
} VkPhysicalDeviceFeatures2;
```

or the equivalent

```c
// Provided by VK_KHR_get_physical_device_properties2
typedef VkPhysicalDeviceFeatures2 VkPhysicalDeviceFeatures2KHR;
```

## Members

• sType is the type of this structure.
• pNext is NULL or a pointer to a structure extending this structure.
• features is a VkPhysicalDeviceFeatures structure describing the fine-grained features of the Vulkan 1.0 API.

## Description

The pNext chain of this structure is used to extend the structure with features defined by extensions. This structure can be used in vkGetPhysicalDeviceFeatures2 or can be included in the pNext chain of a VkDeviceCreateInfo structure, in which case it controls which features are enabled in the device in lieu of pEnabledFeatures.

Valid Usage (Implicit)

• VUID-VkPhysicalDeviceFeatures2-sType-sType sType must be VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2
# 91 is Pseudoprime to 35 Bases less than 91

## Theorem

$91$ is a Fermat pseudoprime in $35$ bases less than itself:

$3, 4, 9, 10, 12, 16, 17, 22, 23, 25, 27, 29, 30, 36, 38, 40, 43, 48, 51, 53, 55, 61, 62, 64, 66, 68, 69, 74, 75, 79, 81, 82, 87, 88, 90$

## Proof

By definition of a Fermat pseudoprime, we need to check, for each $a < 91$, whether:

$a^{90} \equiv 1 \pmod {91}$

is satisfied. Since $91 = 7 \times 13$, by the Chinese Remainder Theorem this is equivalent to checking whether:

$a^{90} \equiv 1 \pmod 7$

and:

$a^{90} \equiv 1 \pmod {13}$

are both satisfied.

If $a$ is a multiple of $7$ or $13$, then $a^{90} \not \equiv 1 \pmod {91}$. Therefore we consider $a$ divisible by neither $7$ nor $13$.

By Fermat's Little Theorem, we have:

$a^6 \equiv 1 \pmod 7$

and thus:

$a^{90} \equiv 1^{15} \equiv 1 \pmod 7$

Now by Fermat's Little Theorem again:

$a^{12} \equiv 1 \pmod {13}$

and thus:

$a^{90} \equiv \left(a^{12}\right)^7 a^6 \equiv a^6 \pmod {13}$

We have:

$(\pm 1)^6 \equiv 1 \pmod {13}$
$(\pm 2)^6 \equiv -1 \pmod {13}$
$(\pm 3)^6 \equiv 1 \pmod {13}$
$(\pm 4)^6 \equiv 1 \pmod {13}$
$(\pm 5)^6 \equiv -1 \pmod {13}$
$(\pm 6)^6 \equiv -1 \pmod {13}$

and thus $a$ must be congruent to $1, 3, 4, 9, 10, 12 \pmod {13}$. This gives $1$ and the $35$ bases less than $91$ listed above.

$\blacksquare$

## Historical Note

This result is attributed by David Wells in his $1997$ work Curious and Interesting Numbers, 2nd ed. to Tiger Redman, but no corroboration can be found for this on the internet.
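The theorem is also easy to verify by brute force. A quick computational check (Python; a sketch, not part of the original article) using built-in fast modular exponentiation:

```python
# Which bases a < 91 satisfy a^90 ≡ 1 (mod 91)?
bases = [a for a in range(1, 91) if pow(a, 90, 91) == 1]
print(len(bases))   # 36: the trivial base 1 plus the 35 bases in the theorem
print(bases[1:])    # [3, 4, 9, 10, 12, 16, ..., 87, 88, 90]
```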
# PCA FOR STOCK PICKING

Let's say I am an equity analyst and I want to figure out which fundamental metrics to use when analyzing an industry. I could run PCA on a bunch of stocks in an industry using their fundamental data, with metrics like return on equity, book value, return on assets, and so on. My question is: if I ran a PCA on fundamental data from stocks in an industry, what should the first and second principal components represent?

This is the PCA analysis I did:

```
Importance of components:
                          PC1    PC2    PC3    PC4     PC5     PC6     PC7     PC8     PC9    PC10   PC11    PC12
Standard deviation     1.6224 1.4924 1.3076 1.1561 1.06703 0.97266 0.85922 0.79106 0.73160 0.71013 0.6182 0.40416
Proportion of Variance 0.2025 0.1713 0.1315 0.1028 0.08758 0.07278 0.05679 0.04814 0.04117 0.03879 0.0294 0.01256
Cumulative Proportion  0.2025 0.3738 0.5053 0.6081 0.69571 0.76849 0.82528 0.87342 0.91459 0.95338 0.9828 0.99535
                          PC13
Standard deviation     0.24599
Proportion of Variance 0.00465
Cumulative Proportion  1.00000

Rotation (n x k) = (13 x 13):
                          PC1         PC2         PC3         PC4         PC5          PC6           PC7          PC8
price_book1        -0.2326294  0.23808656 -0.34383928  0.39506594 -0.17589636  0.005631939  1.481288e-01  -0.12544212
price_sales1       -0.2953341  0.03231056 -0.39791599  0.08079662 -0.44956572  0.178211731  1.283100e-01   0.25338915
profit_margin1      0.2452919  0.15146781 -0.23584723 -0.08324291 -0.40242616 -0.415042109 -5.372024e-01  -0.36709976
operating_margin1   0.4604949  0.05695158 -0.44611853  0.02911251  0.13084999  0.141162009  9.830128e-02   0.06904643
rnd1               -0.1481195 -0.51008130  0.08082843  0.31825795 -0.14767174 -0.133578701 -2.235869e-01   0.31669428
wacc1               0.1170286  0.32598489  0.12421475  0.55174254 -0.00349785  0.050214012 -3.885664e-02   0.35618351
si1                -0.1299393  0.09449171 -0.26881615 -0.21901039  0.38126586 -0.671389356  1.775648e-01   0.41125604
revenue1            0.1000749 -0.57110300 -0.18342643  0.14168133  0.01143720 -0.095472155 -2.735005e-01   0.13974257
ev_ebitda1         -0.4190633  0.10875543 -0.30716490  0.01541679  0.04935613 -0.122987639 -9.171709e-02  -0.13389575
ebitda_revnue1      0.4892202 -0.10134004 -0.38000743  0.01587338  0.07191896  0.170465503  1.075778e-01   0.10582745
cashflow1          -0.1778912 -0.22024820 -0.20037303  0.36000644  0.53208906  0.073914046  2.114301e-05  -0.50658912
eps_growth1         0.2009673 -0.22566598  0.14730493  0.22313596 -0.31215893 -0.426656558  6.599544e-01  -0.27885383
analysts1          -0.1921450 -0.30498605 -0.20079300 -0.41950787 -0.17999617  0.253170549  2.235785e-01  -0.03446128
```

Thank you, your help will be greatly appreciated.

• If you have "Multivariate Statistics" by Johnson and Wichern, check out the PCA chapter. They have an example that I think answers your question. I just can't remember what it is, and the book is not easily accessed by me because it's in a different state. Zivot's SplusFinmetrics may also have an example, but I'm certain that the multivariate book has one. – mark leeds Oct 29 '19 at 5:36
• If PCA tells you that the 1st PC is a linear combination of some factors, the hard thing is giving it a name, not understanding what it is. – Lisa Ann Oct 29 '19 at 7:22
• You will have the problem that your data is not scaled (therefore you will have to standardise, which is essentially doing PCA on the correlation matrix), and it is potentially not time specific or consistent (i.e. accounts are infrequently updated with some of the fundamental values), so based on the preprocessing needed for those problems it is entirely unclear to me what the first PC will contain. Is this within an industry or across industries? That will also hugely impact the result. – Attack68 Oct 29 '19 at 15:54
• @Attack68 the data is scaled and it is within an industry – Pelumi Oct 29 '19 at 17:39
• The example in Johnson and Wichern may have been contrived, because I remember that the interpretation made sense (which, as Lisa Ann said, can be rare). But it's still worth checking out. – mark leeds Oct 29 '19 at 20:25
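For reference, the standardise-then-PCA workflow the comments describe looks like this in Python with scikit-learn (a sketch with a synthetic DataFrame; the column names are placeholders, not the asker's actual data):

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical fundamentals matrix: rows = stocks in one industry, cols = metrics.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(40, 4)),
                 columns=["roe", "price_book", "ev_ebitda", "profit_margin"])

# Standardising first makes PCA operate on the correlation structure,
# so no single metric dominates purely because of its units.
Z = StandardScaler().fit_transform(X)
pca = PCA().fit(Z)

print(pca.explained_variance_ratio_)        # analogue of "Proportion of Variance"
loadings = pd.DataFrame(pca.components_.T,  # analogue of R's rotation matrix
                        index=X.columns,
                        columns=[f"PC{i+1}" for i in range(X.shape[1])])
print(loadings)
```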
# axiom of choice

The principle of set theory known as the Axiom of Choice (AC) was formulated by Zermelo in 1904 and states that, given any set of mutually disjoint nonempty sets, there exists at least one set that contains exactly one element in common with each of the nonempty sets. Stated in terms of choice functions, the axiom asserts that for every family $\mathcal{H}$ of nonempty sets there is a function $f$ (a choice function) with $f(X) \in X$ for each $X \in \mathcal{H}$; a subset $S \subseteq \bigcup \mathcal{H}$ containing exactly one element of each member of $\mathcal{H}$ is called a selector, or transversal, for $\mathcal{H}$. As a scheme, for an arbitrary predicate $\phi(x,y)$ with domain $X$, AC asserts:

$\forall x \in X\ \exists y\ \phi(x,y) \rightarrow \exists f\ \forall x \in X\ \phi(x, fx).$

Although the axiom as usually stated appears humdrum, even self-evident, this seemingly innocuous principle has far-reaching mathematical consequences, many indispensable, some startling, and it provoked controversy comparable only to that surrounding Euclid's axiom of parallels, which was introduced more than two thousand years ago (Fraenkel, Bar-Hillel & Levy 1973). Zermelo introduced the axiom through his proof of the well-ordering theorem, and in a 1908 paper he introduced a modified form of it. Although the usefulness of AC quickly became clear, its nonconstructive character drew objections from the mathematicians of the day: the axiom guarantees the existence of choice functions without indicating how the choices are actually to be effected. For finite families no axiom is needed, since a choice function can often be defined outright; for instance, a choice function on a set of pairs of natural numbers is obtained by assigning to each pair its greatest element.

Many statements are provably equivalent to AC over the remaining axioms of set theory, among them:

• Every set can be well-ordered (the well-ordering principle).
• Zorn's Lemma (ZL): if $(P, \le)$ is a partially ordered set in which every chain (a totally ordered subset $C$ of $P$) has an upper bound, then $P$ has a maximal element, that is, an element $a \in P$ for which the set of upper bounds of $\{a\}$ coincides with $\{a\}$ (Zorn 1935; Teichmüller, Bourbaki and Tukey independently gave similar maximal-principle reformulations).
• Every infinite cardinal number is equal to its square (Tarski).
• The product of any set of non-zero cardinal numbers is non-zero.
• Every field has an algebraic closure (Steinitz 1910).

Other consequences are strictly weaker than AC; for example, the prime ideal theorem for Boolean algebras (equivalently, the representation theorem for Boolean algebras) and the existence of a Lebesgue nonmeasurable set of real numbers (Vitali 1905).

It was not until the middle of the twentieth century that the question of the soundness of the axiom was resolved. Fraenkel (1922) devised permutation models, built using a group $G$ of automorphisms of a set of atoms, to show the independence of AC from set theories with atoms. In 1938 Gödel proved that if the other standard Zermelo-Fraenkel axioms (ZF) are consistent, then they do not disprove the axiom of choice: he introduced a new hierarchy of sets, the constructible hierarchy, defined by recursion on the ordinals with $L_{\alpha+1} = \mathrm{Def}(L_{\alpha})$, where $\mathrm{Def}(X)$ is the set of all subsets of $X$ definable over $X$, and showed that AC and the generalized continuum hypothesis hold in the resulting universe of constructible sets (Gödel 1938, 1964). In 1963 Cohen established the complementary result that ZF, if consistent, does not prove AC, completing the independence proof. "ZF" is used to denote Zermelo-Fraenkel set theory without the axiom of choice, while "ZFC" denotes it with the axiom included (Mendelson 1997; Boyer and Merzbacher 1991, pp. 610-611). Through the continuum hypothesis, the axiom of choice is also related to the first of Hilbert's problems.

Finally, AC interacts strikingly with constructive mathematics: in systems based on intuitionistic logic together with certain mild further assumptions, the axiom of choice implies the law of excluded middle, the assertion that $A \vee \neg A$ holds for any proposition $A$ (a result associated with Diaconescu and with Goodman and Myhill). The standard argument considers, for a given proposition $A$, the subsets $U = \{x \in \{0,1\} : x = 0 \vee A\}$ and $V = \{x \in \{0,1\} : x = 1 \vee A\}$; applying a choice function to $\{U, V\}$ and using the presupposition that $0 \ne 1$, one derives $A \vee \neg A$. In constructive settings one accordingly calls a subset $U$ of a set $A$ detachable if membership in $U$ is decidable, and the full axiom of choice fails to be constructively acceptable.
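To make the notion of a choice function concrete in the finite case (where, as noted above, no axiom is needed because a rule such as "take the greatest element" can be written down explicitly), here is a tiny illustration (Python; the family of sets is made up):

```python
# An explicit choice function on a finite family of nonempty sets of numbers,
# defined by the rule "pick the greatest element of each set".
family = [{2, 5}, {1, 9}, {4}, {3, 7, 8}]

choice = {frozenset(s): max(s) for s in family}   # f(X) ∈ X for every X
selector = set(choice.values())                   # a transversal, since the sets are disjoint

print(choice)     # {frozenset({2, 5}): 5, frozenset({1, 9}): 9, ...}
print(selector)   # {8, 9, 4, 5}
```

The axiom is only genuinely needed for infinite families where no such uniform rule is definable.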
# Discrete simulation code tutorial

There's a very preliminary tutorial to the C++ code library (uploaded to the Azimuth Code Project) at Discrete simulation code tutorial. It's not hugely comprehensible yet, but I'll probably slowly refine it over time.

Comment 1: If you like, you could also add a link to this page on the home page of the Azimuth code project.

Comment 2: Hi David, I couldn't build your C++ DSE code on archlinux with the existing makefile, so a couple of friends and I made some mods to the source and added a cmake makefile and a readme. I emailed a zip to the email address listed for you on the google repo 2 or 3 weeks ago. Did you receive it?

Comment 3 (edited February 2012): Hi Jim, I received your email, and had been thinking about the issues you raised. Unfortunately pretty much every aspect of my life has gone wrong recently, so I just haven't had time to actually respond. (This isn't to elicit sympathy, just to say that for the foreseeable future I'm unlikely to have the time or resources to even read Azimuth regularly, let alone actually do anything. So regard me as a priori incredibly flaky and unreliable about doing anything or even responding to email.)

I think a cmake file would be a very useful addition (I just never got around to learning cmake myself). Regarding the order of standard header includes, they were the ones that worked on ubuntu at the time I wrote it, but C++ headers are notorious for being changed and causing problems like this, so I can imagine that the modifications are required. Feel free to commit either of those changes: the code is completely open source.

You also asked about making this available via a "server in the cloud". I don't have any good ideas about that: part of the design was to embed the modeling within any C++ program rather than writing a more complete "simulation environment", which

• on the one hand makes it easier to use a standard programming language for writing/reusing other parts of the simulation investigation system (eg, how you want to log the data, visualise it, etc., which may be constrained by legacy code you've already written, other things you want to interface with, etc.)
• but, on the other hand, the lack of "standard" facilities makes it difficult to provide as a "drop your model code in" cloud service.

I'm not sure what would be the best way -- in the sense of achieving the most useful result given limited volunteer developer time -- of making the code easily usable in a cloud server sense. Again, if you've got any ideas, feel free to both run with them and ask me anything related.

Comment 4: FWIW, I had been thinking a lot about simulation recently and, in the case of the kind of ecological/human systems I was thinking about with that simulation code -- as opposed to systems which naturally "move to" equilibrium behaviour -- I'm not sure that that approach is the best way of doing things. This brief note describes what I was thinking about, in case it's of any use to anyone else on Azimuth thinking about this stuff.

The reason I think the high-level design is actually not the best fit is that the simple models you want to look at tend to be simple low-order Markov or, in more simple terms, have very limited "memory" that will affect the future evolution. This is particularly so if you've got a lot of "operators that map large amounts of the state space to the same state" in the model, such as:

1. suppose you've got a minimal breeding population below which the model says a species goes extinct: then regardless of how you get there, the behaviour (at least for that species) remains the same;
2. suppose foxes naturally eat some stochastic number of rabbits in a given range of rabbits: then once the number of foxes is above a given level they'll eat all the rabbits regardless of the precise number of foxes;

and similar situations. Since the published code follows a single simulation path, it can't take advantage of the fact that different initial paths will have the same (stochastically defined, of course) behaviour once they "pass through the same state" (I believe the nice term "coalesce" is already defined for a more precise notion, so I'll stick with the long form), and I can't think of a reasonably efficient way to modify the code to achieve it. So I was thinking about how one could

1. build a representation of the "Markov transition matrix" and then
2. use the single-step representation to draw conclusions about longer term behaviour.

In terms of long term behaviour I'm not thinking about strict equilibrium state behaviour, because I think that this is less likely to be an occurrence in practical ecological modeling: apart from anything else, if a model does tend to an equilibrium state it's probably not in need of human "assistance". But one could certainly envisage that the complete transition matrix wouldn't need to be computed (which would be computationally completely prohibitive for any interesting sized model) in order to draw useful conclusions in step 2.

One thing that I was thinking about (something that would, in reliable practice, be unachievable using just standard C++) was taking the literal model code and doing things like automatic differentiation on it, then fixing some of the parameters -- particularly given knowledge of operators like $max$ occurring in the models -- and specialising the resulting code to produce a simpler, and hence faster, piece of code for estimating numerical values for part of the transition matrix. (Actually, there are more details, but they're not completely coherent in my mind yet, so I won't write them unless anyone really wants to know.) This is the kind of thing for which the popular LLVM codebase would provide most of the heavy lifting (the JIT optimisation and compilation), with the "simulation framework" providing some high-level priming of certain facts about the program obtained from the model that would be very difficult for the general purpose LLVM routines to "infer".

It's unlikely I'm going to be actually implementing any of these ideas in the near future, so if anyone wants to try any of these things, feel free.
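As a very much simplified illustration of step 1 in the comment above, here is a sketch (Python with NumPy; the toy state space and step rule are made up for illustration, not taken from the DSE code) of estimating a Markov transition matrix empirically by sampling each state directly, rather than following one long simulation path:

```python
import numpy as np

rng = np.random.default_rng(42)
n_states = 5

# A toy stochastic step function standing in for one tick of a simulation model.
def step(state):
    # Hypothetical dynamics: drift upward, reflect at the boundaries.
    move = rng.choice([-1, 0, 1], p=[0.2, 0.3, 0.5])
    return min(max(state + move, 0), n_states - 1)

# Step 1: estimate the one-step transition matrix by sampling each state many times.
counts = np.zeros((n_states, n_states))
for s in range(n_states):
    for _ in range(10_000):
        counts[s, step(s)] += 1
P = counts / counts.sum(axis=1, keepdims=True)

# Step 2: use the one-step representation for longer-term conclusions,
# e.g. the 50-step state distribution starting from state 0.
dist = np.linalg.matrix_power(P, 50)[0]
print(np.round(P, 3))
print(np.round(dist, 3))
```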
Comment 5: Hi David, thanks for your very helpful response. It's stimulated me to want to write some climate code, and John has already proposed an energy balance spec (which just got a lot more complex after one comment :)). So now I have a lot more reading to do!

I've been playing a bit with automatic differentiation code in Haskell written by Dan Piponi, and the library written by Lennart Augustsson based on papers by Jerzy Karczmarczuk. Fascinating stuff which immediately got me, even if I've not got it yet. The llvm backend to Haskell seemed to work well on my previous compiler version, and benchmarks appear to be improving all the time. I understand that changes have been requested from the llvm folks to match impedances with ghc's IR. I'd be happy to try out the fad library given some specification. There may be some delay because, as usual, I've upgraded my ghc to 7.4.1, breaking a majority of libraries, including fad. This will be corrected.

John Baez has said he could enquire about setting up a web service account to run interactive software. From my POV javascript seems an obvious possible choice ("assembly language of the web") for the front-end. Whether there would be a possibility of running compilers for other languages I don't yet know.

I'd have to learn a lot more before I could comment on your Markov model ideas. I hope you'll be able to boost your mileage soon, if that's not an inappropriate metaphor for an environmental blog. Cheers

Comment 6: David, in a recent comment I mentioned the idea of estimating linear Markov models of the climate. This has some precedent in the literature under the name "linear inverse models". They look like this:

$$d\mathbf{x}/dt = \mathbf{Lx} + \mathbf{F} + \mathbf{\xi}$$

Here $\mathbf{x}$ is a vector of climate variables, $\mathbf{L}$ is a dynamical transition matrix (not a matrix of Markov transition probabilities, but it can be used to derive one), $\mathbf{F}$ is some exogenous time-dependent forcing, and $\mathbf{\xi}$ is a noise process. (By the way, in the above expression, why is the first "d" italicized but the second isn't? It seems to have to do with the boldface font.)

Usually the system is first dimensionally reduced using principal component analysis before estimating the transition matrix (so the climate vector is just the projection onto the leading components). One question is how well these models, estimated from relatively short time steps (months), can predict climate out to decadal or longer time scales. Climatologists have actually used them to try to predict climate modes of variability, like ENSO or the AMO.
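A minimal sketch of how such a linear inverse model might be estimated from data (Python with NumPy; synthetic data, a crude Euler discretisation, and no forcing term, so this illustrates the idea rather than the methods used in the climate literature):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic truth: dx/dt = L x + noise, integrated with a simple Euler scheme.
L_true = np.array([[-0.5,  0.3],
                   [-0.2, -0.4]])
dt, n = 0.01, 20_000
X = np.zeros((n, 2))
for k in range(n - 1):
    X[k + 1] = X[k] + dt * (L_true @ X[k]) + np.sqrt(dt) * rng.normal(0, 0.1, 2)

# Estimate L by least squares on finite differences: (x_{k+1} - x_k)/dt ≈ L x_k.
dX = (X[1:] - X[:-1]) / dt
L_hat, *_ = np.linalg.lstsq(X[:-1], dX, rcond=None)
print(np.round(L_hat.T, 2))   # transpose of the lstsq solution; close to L_true
```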
Comment 7:

> By the way, in the above expression, why is the first "d" italicized but the second isn't? It seems to have to do with the boldface font.

If you write two letters without separation, like $CO_2$ or $dt$, it will come out straight in iTex.

Comment 8:

> if you write two letters without separation, like $CO_2$ or $dt$ it will come out straight in iTex.

Right, it has nothing to do with the boldface font. Andrew Stacey and other iTeX fans claim this is a 'feature', and you just have to learn to leave spaces between letters if you want them to come out italic instead of roman. I consider it more of a nuisance.

Comment 9 (edited March 2012): Hi Jim et al, one thing I didn't mention explicitly was one of the things I was thinking about using automatic differentiation for. Suppose we're considering a system with a transition $\vec{y}=\vec{f}(\vec{x})$. Then for a point $\vec{x}_0$ that maps to a point $\vec{y}_0$, it is well known, in the simple case that there's only one point mapping to $\vec{y}_0$, that the pdf $p_x$ of $\vec{x}$ is related to the pdf $p_y$ of $\vec{y}$ by:

$$p_y(\vec{y}_0)\, d\vec{y} \propto \left(\mathrm{abs}\left(\det\left(\partial \vec{y}/\partial \vec{x}\,|_{x_0}\right)\right)\right)^{-1} p_x(\vec{x}_0)\, d\vec{x}$$

where $\partial \vec{y}/\partial \vec{x}$ is the Jacobian matrix of the transformation $\vec{f}$. (This is just stating the change of variables rule for an integral in the particular case of a pdf.) In the general case there's a sum over all the points $\vec{x}$ that map to $\vec{y}_0$, but that complicates the notation without affecting the particular issues here. It doesn't seem unreasonable to try and track how the pdf evolves by looking at samples and doing some form of "extrapolation" (eg, using splines, or quadrature, or something else).

So this all looks OK, but what happens for something like $y=\min(x,T)$ where $T$ is a fixed constant? Remember, we just want to know what to do to find the value at a particular point, not necessarily any closed form. Well, in this 1-D case, if $x_0 \le T$ then $\partial y/\partial x|_{x_0}=1$. If $x_0 \gt T$, then $\partial y/\partial x|_{x_0}=0$, but we also know from probability theory that "the result" should be a Dirac delta function (since we've got a finite amount of probability mass at the exact mathematical point $T$), which, if you squint, you can imagine might come from $1/0$.

But what happens in multiple dimensions? For example, for input variables $x$ and $y$ and output variables $u$ and $v$, what is the relationship between the input and output pdfs in the two cases $b \le T$ and $b \gt T$ for the following simple example (not meant to actually model anything) that still shows some of the issues?

~~~~
a := 2 * x
b := a + y
c := min(b,T)
d := c / b
u := a * d
f := y * d
v := 2 * f
~~~~

I was working towards an idea, hope to get back to it at some point...
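To make the change-of-variables formula in the comment above concrete, away from the problematic $\min$ case, here is a small numerical sketch (Python with NumPy; the smooth map and the densities are invented for illustration) comparing the Jacobian-based density with a sampled histogram estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Smooth invertible map (made up): (u, v) = f(x, y) = (exp(x), x + y).
def f(x, y):
    return np.exp(x), x + y

# Input density: independent standard normals.
def p_xy(x, y):
    return np.exp(-(x**2 + y**2) / 2) / (2 * np.pi)

# Change of variables: the Jacobian determinant is exp(x) = u,
# so p_uv(u, v) = p_xy(x0, y0) / u at the unique preimage (x0, y0).
u0, v0 = 1.5, 0.3
x0, y0 = np.log(u0), v0 - np.log(u0)
p_formula = p_xy(x0, y0) / u0

# Monte Carlo check: fraction of samples landing in a small box around (u0, v0).
x, y = rng.normal(size=2_000_000), rng.normal(size=2_000_000)
u, v = f(x, y)
h = 0.05
hits = np.mean((np.abs(u - u0) < h / 2) & (np.abs(v - v0) < h / 2))
print(p_formula, hits / h**2)   # the two numbers should roughly agree (~0.097)
```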
Comment 10: Hi David, Tim van B. has set up an account and access rights for me on the code.google repo. Is the best thing to start version control for your software in some form such as DSE-0.0.1 for your initial version and then DSE-0.0.2 for the upgrade, or just to merge them together? At least author details need to be in the README. Do you want to add copyright and licence info to this? I see the GPL licence mentioned on the google site; I don't know whether there's an Azimuth policy on this. I'm used to the Haskell community, which uses BSD and is, I think, less restrictive. Best wishes

Comment 11 (edited March 2012): FWIW, the new implementation I'm thinking about will be very different, so it won't so much be a "new version" as a "new approach". If I ever get anywhere, I'll add it as something like a new directory "DSE-using-transition-estimation". (I don't imagine snappy names will matter much for Azimuth.)

Regarding licences, my personal preference for library code is LGPL. (For non-software people, the three big licence choices are:

• BSD: anyone can use and modify the code any way they want, including making changes they keep to themselves.
• GPL: anyone can use and modify the code in any combined program, but they have to make modifications to all parts of the program available to anyone who wants them, on the same terms.
• LGPL (Lesser GPL): anyone can call the library code from any combined program. The library can be modified, but those modifications have to be made available. However, this obligation doesn't extend to code in the combined program outside the library.)

However, the DSE code is templated inline C++, which to my limited understanding doesn't really allow an LGPL licence (since there's not really any way to create a program where the library remains a distinctly separate block of object code), although I'm not an expert on this (and have no burning desire to become one...). So I'm happy to make the DSE code BSD licenced, providing no-one more active in the code project says this will cause problems (say, code with different licences in the same repository not being possible). Certainly feel free to add my name and email to any others added to the README, and to clarify the licence once it's clear what the best licence is.

Comment 12: Hi David, I only seem to receive notifications of some discussion posts, so I've only just found your reply. I know nothing about code.google.com. I uploaded DSE.zip ("to upload go to the download tab" or some such) with the title DSE; it appears under downloads. It said I could upload compressed files so I did, but I can find no information about unzipping an archive on the site. Do I have to create a local svn repo with the same structure and then sync it to the project?

I too care nothing for copyrights and licenses. I didn't add any information at all to the README, not even your authorship, expecting to edit it on the site. As our "band of 3" could no longer build your source, might it be best to just merge the diffs from this archive? I have no idea how to do this but will keep reading.

BTW, have you come across Jun S. Liu, Monte Carlo Strategies in Scientific Computing, Springer 2001? It has some interesting cases where Markov chains are not the best tool.

Comment 13: I recall that when I first created my extension I "checked out" a full working svn repository of all the code on the Azimuth project (I think using the instructions on [this page](http://code.google.com/p/azimuthproject/source/checkout)), then created a new subdirectory, copied over and svn add'ed my files, and then did an svn checkin. There's definitely a source control system on googlecode, but I don't know much about it. In addition, the web browser seems to claim there aren't any files in the directory, but I've no idea why. Added to that is the complication that at the time I had a networked home Linux machine, so running svn commands wasn't a problem; currently I'm stuck using internet cafe machines (eg, now), so it's essentially impossible to work using svn. Maybe Tim, or someone else who's reading who's active with a different googlecode project, can shed some light on things.

There is actually a more methodical development history in git on my local machine/backups (I was planning on using git for "real development", because it was designed to encourage more granular check-ins to local-only branches as you develop, and then dumping "releases" into the google-code svn). Should I ever get the time and resources to do regular development on Azimuth stuff, I'm thinking that I'll probably try to make detailed "in progress" development visible by hosting a repository on github, but again do "release dumps" to the googlecode svn repo. It's a bit of a dilemma: on the one hand the Azimuth googlecode pages present a strong unifying and "advertising" force, but svn is such a bad fit for decentralised development that it's probably worth the hassle of maintaining a separate "in progress" tree in git. If you can figure out any way to "import" improvements to the code, go for it, even if it doesn't look like the "official" way to do it. And as I haven't said it: thanks for your interest and work taking this stuff forward.

I haven't read the Liu book, but it sounds interesting. I can imagine that there are lots of ways of modelling ideas which aren't Markov chains/processes.
Comment 14: Thanks for the prompt reply. Again, I didn't get any email notification of your post. I haven't used svn for a long time; I'd better read up on if and how I can do partial checkins and checkouts. Some light from Tim van B. or somebody would be most welcome. Interworking svn and git sounds like it might be a complete nightmare.

I meant Markov chains only in the context of Monte Carlo methods, and only mentioned it because I didn't know there were non-Markov Monte Carlo methods. The Liu book has 10 lines on Euler-Langevin moves for hybrid Monte Carlo methods. He points out that a Langevin update is equivalent to a one-step hybrid Monte Carlo.
Comment 15: Hi Jim, I don't know if anyone's mentioned it, but regarding email notifications you probably need to ask about it in a thread in "Technical", since Andrew Stacey only generally reads that classification.

Regarding the Monte Carlo stuff, there are at least two ways in which "Markov chain" is used:

1. The common one, called Markov Chain Monte Carlo (MCMC), might be called in detail "Monte Carlo sampling using Markov chains in the implementation", and there's no need for whatever model you're trying to do Monte Carlo sampling on to be any kind of Markov chain.
2. You can have a model which is actually a Markov chain, and to get some insight about it you can try to use Monte Carlo sampling. (Indeed, you could use MCMC to do your Monte Carlo sampling on your Markov chain.)

It's this second case that I've been thinking about, because it's actually really quite hard to come up with a well-motivated model which isn't Markov to some small order, i.e., where "direct influence" from the past is limited to a small number of immediately previous timesteps. But there are definitely more general processes going on when you look at the physical world in much greater detail: for example, initially it looks like genetic change through sexual combination is a Markov process, since you've got both parents' DNA interacting but no direct involvement of the grandparents' DNA, etc. But apparently (I'm not an expert) there are mitochondrial DNA influences and other epigenetic phenomena that provide a certain degree of influence from further in the past. Likewise, in fully realistic population modelling there are non-Markov influences. I'm just experimenting, seeing if there's still interesting empirical stuff to find out when the processes are Markov but there's lots of non-classical-analysis-amenable functions being used.

Comment 16 (edited March 2012): David wrote:

> there's no need for whatever model you're trying to do Monte Carlo sampling on to be any kind of Markov chain.

Yes. That's what I didn't know but found out from the Liu book.

> the processes are Markov but there's lots of non-classical-analysis-amenable functions being used.

That sounds difficult, but nonetheless I'm looking forward to your further thoughts about this spec. Thanks for the tip about Technical; I thought there must be such a page somewhere.

Comment 17: By the way, Jim, I changed your quotes from

    <pre>this</pre>

to the more desirable Markdown syntax

    > this

No big deal, but at least the 'good' way takes fewer keystrokes!

Comment 18: Thanks John. Instiki is still high on my ignorance list.
Finding sides using trigonometry

• Jan 27th 2007, 05:30 AM, Tom G

I've tried to do this question but I keep getting strange answers that don't 'look' right. Can someone explain how it is done please? In a quadrilateral PQRS, PQ = 10 cm, QR = 7 cm, RS = 6 cm, ∠PQR = 65° and ∠PSR = 98°. Find the length PS.

• Jan 27th 2007, 06:34 AM, earboth

Quote: Originally Posted by Tom G (as above)

Hello, Tom, first draw a sketch (see attachment). First calculate the line PR. Use the Cosine rule:

$(|\overline{PR}|)^2=10^2+7^2-2 \cdot 10 \cdot 7 \cdot \cos(65^\circ) \approx 89.8334433...$

Then calculate the side PS in the triangle PRS. Use the Cosine rule again (for convenience I use PS = x):

$(|\overline{PR}|)^2=x^2+6^2-2 \cdot 6 \cdot x \cdot \cos(98^\circ).$

This is a quadratic equation:

$x^2+0.2783462 x-53.8334433=0$.

Now use the formula to solve quadratic equations. I've got:

$x=7.19927...\ \vee \ \underbrace{x=-7.4...}_{\text{not very realistic here}}$

So the side PS is nearly 7.2 cm long. EB

• Jan 27th 2007, 07:28 AM, CaptainBlack

Thanks to earboth's diagram, it is obvious that there is insufficient information to solve this. For if we drew a circle with PR as a chord, and S also on the circle, we could replace S by a point S' on the arc PSR, and the angle PS'R would still be 98 degrees; so the quadrilateral PQRS' would satisfy all the conditions of the problem, but the length of line segment PS would not equal the length of line segment PS'. The problem does not have a unique solution; in fact a quadrilateral with the length of PS taking any value from 0 to the length of PR could be constructed. RonL

• Jan 29th 2007, 12:52 AM, earboth

Quote: Originally Posted by CaptainBlack: Thanks to earboth's diagram it is obvious that there is insufficient information to solve this... RonL

Hello, CaptainBlack, I'm awfully sorry that my sketch is the basis of your argument, and the reason why it is not valid: in the original text the line RS = 6 cm was given.
> So the left upper triangle is determined by two lines and one angle, which will give a unique solution. My apologies! EB

Grr..., and it was such a nice demonstration of unsolvability too.

RonL

• Jan 30th 2007, 04:15 AM — earboth

Quote: Originally Posted by Glaysher
> Haven't worked all the way through this answer but 7.2 is not the one in the mark scheme to this question ...

Hello,

you are right - and I've found my mistake. The first steps are all OK, but in the quadratic equation I used a wrong factor. Here are the correct equations:

$x^2+1.670077 x-53.8334433=0$

Now use the formula to solve quadratic equations. I've got:

$x=6.5494...\ \vee \ \underbrace{x=-8.219...}_{\text{not very realistic here}}$

So the side PS is nearly 6.55 cm long.

EB

PS: To prove my calculations I've done an exact drawing of the quadrilateral. (See attachment.)

• Jan 30th 2007, 11:33 PM — Glaysher

Quote: Originally Posted by earboth
> Hello, PS: To prove my calculations I've done an exact drawing of the quadrilateral. (See attachment.)

What program did you use to do that drawing?

• Jan 31st 2007, 12:20 AM — earboth

Quote: Originally Posted by Glaysher
> What program did you use to do that drawing?

Hello,

have a look here: EUKLID DynaGeo Header Page

I've attached the complete construction including all necessary steps.

EB

• Jan 31st 2007, 04:00 AM — ThePerfectHacker

In German, Euclid is spelled "Euklid"?

• Jan 31st 2007, 04:31 AM — earboth

Quote: Originally Posted by ThePerfectHacker
> In German, Euclid is spelled "Euklid?"

Hello, TPH,

in Greek, Euclid's name is spelled: Ευκλείδης. As you may see, it uses the letter kappa, so the German version "Euklid" is a little bit nearer to the original.

EB
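For readers who want to check the corrected arithmetic above, here is a short Python sketch (mine, not part of the original thread) that reproduces earboth's working:

```python
import math

# Quadrilateral PQRS: PQ = 10, QR = 7, RS = 6, angle PQR = 65 deg, angle PSR = 98 deg.
PR2 = 10**2 + 7**2 - 2 * 10 * 7 * math.cos(math.radians(65))  # cosine rule in PQR
print(PR2)  # approx 89.8334433

# Cosine rule in triangle PRS with PS = x:
#   PR^2 = x^2 + 6^2 - 2*6*x*cos(98 deg)
#   =>   x^2 - 12*cos(98 deg)*x + (36 - PR^2) = 0
b = -12 * math.cos(math.radians(98))  # approx +1.670077, the corrected factor
c = 36 - PR2                          # approx -53.8334433
x = (-b + math.sqrt(b * b - 4 * c)) / 2
print(x)  # approx 6.55, the realistic positive root
```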
The post contains C++ and Python code for converting a rotation matrix to Euler angles and vice-versa, based on Matlab's rotm2euler. 3D rotation matrices can make your head spin. I know it is a bad pun, but truth can sometimes be very punny!

The first line creates a new 4-by-4 matrix and initializes it to the identity matrix. The glm::rotate function multiplies this matrix by a rotation transformation of 180 degrees around the Z axis. Remember that since the screen lies in the XY plane, the Z axis is the axis perpendicular to it.

Rotate a Matrix List in Python (July 15, 2020). The challenge: you are given an n x n 2D matrix representing an image. Rotate the image by 90 degrees (clockwise). Note: you have to rotate the image in-place, which means you have to modify the input 2D matrix directly.

`Rotation.as_matrix()` - represent as a rotation matrix. 3D rotations can be represented using rotation matrices, which are 3 x 3 real orthogonal matrices with determinant equal to +1 [1]. Returns: matrix - ndarray, shape (3, 3) or (N, 3, 3); the shape depends on the shape of the inputs used for initialization. Notes: this function was called `as_dcm` before.

48. Rotate Image. You are given an n x n 2D matrix representing an image; rotate the image by 90 degrees (clockwise). You have to rotate the image in-place, which means you have to modify the input 2D matrix directly. DO NOT allocate another 2D matrix and do the rotation.

The tutorial code can be found here in C++, Python and Java. The images used in this tutorial can be found here (left*.jpg). ... (see also 2) as this does not ensure that the resulting rotation matrix will be orthogonal, and the scale is estimated roughly by normalising the first column to 1. A solution is to recover a proper rotation matrix explicitly.

You can rotate a raster using an affine transformation. Several packages can do this, including gdal (see the Raster API tutorial) and rasterio (see this answer to Defining Affine transform with rasterio). However, the order of the parameters is not the same between the transformation functions of gdal and rasterio, so be careful.

One structure can be derived from another through a rotation and a translation applied in that order. The word rotation stands not only for 2-, 3-, 4- or 6-fold rotation, but also for reflections in a point or in a plane. The translations are along axes or diagonals of the unit cell.

Example - Euler angles to rotation vector in Python (fragment):

```python
import math
import numpy as np

# RPY/Euler angles to rotation vector
def euler_to_rotVec(yaw, pitch, roll):
    # compute the rotation matrix first
    Rmat = euler_to_rotMat(yaw, pitch, roll)
    ...
```

So, here is the main logic for left rotation in C++. Please dry-run the code with one of the given examples above; it will help you to understand the logic behind this approach.
```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int siz, op;                          // array size and number of left rotations
    cin >> siz >> op;
    vector<long int> arr(siz), arr1(siz); // two arrays of the same size
    for (int i = 0; i < siz; i++) cin >> arr[i];
    for (int i = 0; i < siz; i++) arr1[i] = arr[(i + op) % siz]; // left rotate by op
    for (int i = 0; i < siz; i++) cout << arr1[i] << " ";
    return 0;
}
```

Clockwise and counterclockwise rotation of a matrix using the NumPy library: the built-in function rot90 is used. It rotates the matrix by 90 or 180 degrees as per requirement, clockwise or counterclockwise.

In this post you will find the solution for the Rotate Array LeetCode problem in C++, Java and Python.

Write a Python program to find the second largest number in a NumPy array. We use the NumPy sort function to sort the array in ascending order, then print the value at the last-but-one index position.

```python
# Find the second largest item in a NumPy array
import numpy as np

secLarr = np.array([11, 55, 99, 22, 7, 35, 70])
secLarr.sort()                          # ascending order
print("Array items:", secLarr)
print("Second largest:", secLarr[-2])   # last-but-one position
```

Rotation should be in the anti-clockwise direction. An $n \times n$ matrix $A$ is an orthogonal matrix if $AA^T = I$, where $A^T$ is the transpose of $A$ and $I$ is the identity matrix. (Figure 2, Translation Matrix.) It is built so that the translation is applied first, then the rotation. It is a special VTK data structure in the collection of 3D data structures provided ....

Random Rotation Matrix in Python (May 12, 2015). Making a random rotation matrix is somewhat hard. You can't just use "random elements"; that's not a random rotation matrix. First attempt: rotate around a random vector.

Here $R$ is the rotation matrix of shape (3, 3) and $O$ is the translation offset of shape (3, 1). We can get the change-of-basis matrix by taking the inverse of the final transformation matrix. This change-of-basis matrix of shape (4, 4) is called the extrinsic camera matrix, denoted by $E$.

```
# create a virtual environment in anaconda
conda create -n camera-calibration-python python=3.6 anaconda
conda activate camera-calibration-python
```

This matrix is usually of the form:

$$M = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \tag{1}$$

OpenCV provides the ability to define the center of rotation for the image and a scale factor to resize the image as well.
In that case, the transformation matrix gets modified:

$$M = \begin{bmatrix} \alpha & \beta & (1-\alpha) c_x - \beta c_y \\ -\beta & \alpha & \beta c_x + (1-\alpha) c_y \end{bmatrix} \tag{2}$$

In the above matrix:

$$\alpha = \text{scale} \cdot \cos\theta, \qquad \beta = \text{scale} \cdot \sin\theta \tag{3}$$

where $c_x$ and $c_y$ are the coordinates of the center around which the image is rotated.

The most general three-dimensional rotation matrix represents a counterclockwise rotation by an angle $\theta$ about a fixed axis that lies along the unit vector $\hat{n}$. The rotation matrix operates on vectors to produce rotated vectors, while the coordinate axes are held fixed. This is called an active transformation.

```python
import numpy as np

def quaternion_rotation_matrix(Q):
    """Convert a quaternion into a full three-dimensional rotation matrix.

    Input
    :param Q: A 4 element array representing the quaternion (q0, q1, q2, q3)

    Output
    :return: A 3x3 element matrix representing the full 3D rotation matrix.
    """
    # Body reconstructed from the standard unit-quaternion formula;
    # assumes Q is normalised (a unit quaternion).
    q0, q1, q2, q3 = Q
    return np.array([
        [1 - 2*(q2*q2 + q3*q3), 2*(q1*q2 - q0*q3),     2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),     1 - 2*(q1*q1 + q3*q3), 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),     2*(q2*q3 + q0*q1),     1 - 2*(q1*q1 + q2*q2)],
    ])
```

Rotation matrices: the orientation of coordinate frame $i$ relative to coordinate frame $j$ can be denoted by expressing the basis vectors $[\hat{x}_i \; \hat{y}_i \; \hat{z}_i]$ in terms of the basis vectors $[\hat{x}_j \; \hat{y}_j \; \hat{z}_j]$. This yields $[{}^j\hat{x}_i \; {}^j\hat{y}_i \; {}^j\hat{z}_i]$, which when written together as a $3 \times 3$ matrix is known as the rotation matrix ${}^jR_i$. The components of ${}^jR_i$ are the dot products of the basis vectors of the two coordinate frames.

Calculate a 3x3 rotation matrix about the X axis: for a given rotation angle, in degrees or radians, either the active rotation (rotate the object) or the passive rotation (rotate the coordinates) can be calculated.

Rotate an array by 90 degrees in the plane specified by axes. The rotation direction is from the first towards the second axis. Parameters: m (array_like) - an array of two or more dimensions; k (integer) - the number of times the array is rotated by 90 degrees; axes ((2,) array_like) - the plane to rotate in, defined by two axes. Axes must be different.

189. Rotate Array: given an array, rotate the array to the right by k steps, where k is non-negative.

Matrix clockwise rotation:

```python
mat = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
clock_rotated_mat = list(zip(*mat[::-1]))      # [(7, 4, 1), (8, 5, 2), (9, 6, 3)]
# mat[::-1] reverses the rows; zip(*...) transposes the result.
# similarly, matrix anti-clockwise rotation:
anticlock_rotated_mat = list(zip(*mat))[::-1]  # [(3, 6, 9), (2, 5, 8), (1, 4, 7)]
```

Matrix multiplication with complex numbers can also be performed. Matrices can be not only two-dimensional but also one-dimensional (vectors), so you can multiply vectors, a vector by a matrix, and vice versa. After calculation you can multiply the result by another matrix.

In PyMOL:

```
rotate x, 45, pept
rotate [1,1,1], 10, chain A
```

Electrostatic map caveat: if you have an electrostatic map and it's not rotating with the molecule as you expect it to, see the Turn command. Turn moves the camera, and thus the protein and map will be changed. SEE ALSO: object matrix, Translate, Turn, Model_Space_and_Camera_Space.

Blog post: https://colorfulcodesblog.wordpress.com/2018/10/30/rotate-a-matrix-in-place-python/

The rotation matrix in OpticStudio describes how the local coordinate can be converted to the global coordinate.
The above formula could describe the rotation matrix created by an extrinsic rotation: first tilt about Z, then tilt about Y, lastly tilt about X. This is equivalent to rotating the system intrinsically.

Rotation of a point in 3-dimensional space by $\theta$ about an arbitrary axis defined by a line between two points $P_1 = (x_1, y_1, z_1)$ and $P_2 = (x_2, y_2, z_2)$ can be achieved by the following steps: (1) translate space so that the rotation axis passes through the origin; (2) rotate space about the x axis so that the rotation axis lies in the xz plane; (3) rotate space about the y axis so that the rotation axis lies along the z axis.

The P-gate performs a rotation of $\phi$ around the Z-axis direction. It has the matrix form:

$$P(\phi) = \begin{bmatrix} 1 & 0 \\ 0 & e^{i\phi} \end{bmatrix}$$

where $\phi$ is a real number.

Dynamically create matrices in Python: it is possible to create an n x m matrix by listing a set of elements (say n) and then making each of the elements linked to another 1D list of m elements. Here is a code snippet for this:

```python
n = 3
m = 3
val = [0] * n
for x in range(n):
    val[x] = [0] * m
print(val)  # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```

How to rotate a vector about its axis in Python (Jun 16, 2022): let $a$ be a unit vector along an axis, so $a = \text{axis}/\text{norm}(\text{axis})$. Let $A = I \times a$, the cross product of $a$ with an identity matrix $I$. Then $\exp(\theta A)$ is the rotation matrix. Finally, dotting the rotation matrix with the vector will rotate the vector.

Python program to rotate each ring of a matrix by k elements (fragment; other language solutions are mentioned in the source):

```python
# Python 3 program to rotate each ring of a matrix clockwise by k elements
class Rotation:
    def rotateRing(self, matrix, row, col, index):
        # Loop controlling variables
        i = col - 1
        data = matrix[row][col]
        temp = 0
        # Case A ...
```

...with rotation about a principal axis - that's why the equations looked simpler.

• If a body is rotating solely about a principal axis (call it the $i$ axis), then its angular momentum is parallel to the angular velocity: $L = I_i \omega$.
• If we can find a set of principal axes for a body, we call the three non-zero inertia tensor elements the principal moments of inertia.

The topic describes how affine spatial transformation matrices are used to represent the orientation and position of a coordinate system within a "world" coordinate system, and how spatial transformation matrices can be used to map from one coordinate system to another one. It will be described how sub-transformations such as scale, rotation and translation are combined.

This is a 270-degree rotation, but we can also say that this is a left rotation, because this is how a matrix is rotated in the left direction (May 12, 2021). See also: How to generate an array filled with a value in NumPy? Rotation by 180 degrees.

Rotate an image in Python using OpenCV (Sep 24, 2020): to rotate an image, apply a matrix transformation. To create the matrix transformation, use the cv2.getRotationMatrix2D() method and pass the origin that we want the rotation to happen around. If we pass the origin (0, 0), then it will start transforming the matrix from the top-left corner.
You can use a much simpler algorithm in Python. Transpose the matrix: `zip(*matrix)`. Reverse the rows in the transposed matrix (equals rotating right): `list(list(x)[::-1] for x in zip(*matrix))`. However, if you want to rotate left, you first need to reverse the rows and then transpose, which is slightly more code.

One by one, rotate all rings of elements, starting from the outermost. To rotate a ring, we need to do the following: 1) move the elements of the top row; 2) move the elements of the last column; 3) move the elements of the bottom row; 4) move the elements of the first column. Repeat the above steps for the inner ring while there is an inner ring.

Determining the rotation matrix needed to rotate the sensor into the proper position requires some mathematics. First, the vector coming out of the sensor and our target vector must be normalized. This is necessary to make the equations solvable. Without normalization the terms become quite unwieldy, involving imaginary terms and conjugates.

A Nifti image contains, along with its 3D or 4D data content, a 4x4 matrix encoding an affine transformation that maps the data array into millimeter space. Rotatable Tetrahedron (Python recipe): draws a 3D tetrahedron and allows a user to rotate it (mouse left button and wheel). Rotation of a point around i (red), j (green) and k (blue).

Write a Python program to right rotate a NumPy array by n positions. In this example, we use negative numbers to slice the array from the right side, and combine the two slices using the NumPy concatenate method:

```python
import numpy as np

rtArray = np.array([10, 15, 25, 35, 67, 89, 97, 122, 175])
n = 3
rtRotate = np.concatenate((rtArray[-n:], rtArray[:-n]))  # right rotate by n
print(rtRotate)  # [ 97 122 175  10  15  25  35  67  89]
```

Python NumPy rotate 3D array: let us see how to rotate a 3-dimensional NumPy array in Python. By using np.rot90 we can easily rotate a NumPy array by 90 degrees. Syntax: `numpy.rot90(m, k=1, axes=(0, 1))`.

The transpose of a matrix is obtained by moving the row data to the columns and the column data to the rows. If we have an array of shape (X, Y), then the transpose of the array will have the shape (Y, X). The Python NumPy module is mostly used to work with arrays, and we can use the `transpose()` function to get the transpose of an array.
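The truncated `euler_to_rotVec` fragment earlier calls an undefined helper `euler_to_rotMat`. A minimal sketch of such a helper (my reconstruction, assuming the common Z-Y-X yaw-pitch-roll convention; the function name simply mirrors the fragment) could look like this:

```python
import numpy as np

def euler_to_rotMat(yaw, pitch, roll):
    """Rotation matrix from yaw (about Z), pitch (about Y), roll (about X),
    composed as Rz @ Ry @ Rx. Angles are in radians."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

R = euler_to_rotMat(0.1, 0.2, 0.3)
print(np.allclose(R @ R.T, np.eye(3)))    # True: R is orthogonal
print(np.isclose(np.linalg.det(R), 1.0))  # True: determinant is +1
```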
But the difference between them is, the symmetric matrix is equal to its transpose whereas skew-symmetric matrix is a matrix whose transpose is equal to its negative. If A is a symmetric matrix, then A = A T and if A is a skew-symmetric matrix then A T = - A. Here, in this method, the elements of the matrix are shifted by one place in order to achieve the rotated matrix. 4 8 7 Let Matrix, A = 6 7 5 3 2 6. After Rotating the Matrix, 6 4 8 A = 3 7 7 2 6 5. The image below shows the number of cycles it takes to rotate the matrix in the given method. ### miraculous ladybug and cat noir porn comics yugioh attribute vs typeegypt address generatordc overvoltage fault in acs800 zig zag array hackerrank solution in java act of escaping something duty or taxes ### fifa pack simulator unblocked jazz standards pdf pianoomori steamunlockedautomapper create new object >
# Fight Finance

The following equation is called the Dividend Discount Model (DDM), Gordon Growth Model or the perpetuity with growth formula:

$$P_0 = \frac{ C_1 }{ r - g }$$

What is $g$? The value $g$ is the long term expected:

For a price of $13, Carla will sell you a share which will pay a dividend of $1 in one year and every year after that forever. The required return of the stock is 10% pa. Would you like to buy Carla's share or politely decline?

For a price of $6, Carlos will sell you a share which will pay a dividend of $1 in one year and every year after that forever. The required return of the stock is 10% pa. Would you like to buy his share or politely decline?

Do you think that the following statement is true or false? "Buying a single company stock usually provides a safer return than a stock mutual fund."

Suppose you had $100 in a savings account and the interest rate was 2% per year. After 5 years, how much do you think you would have in the account if you left the money to grow? More than $102, exactly $102 or less than $102?

Imagine that the interest rate on your savings account was 1% per year and inflation was 2% per year. After one year, would you be able to buy more than, exactly the same as, or less than today with the money in this account?

Value the following business project to manufacture a new product.

| Project Data | |
|---|---|
| Project life | 2 yrs |
| Initial investment in equipment | $6m |
| Depreciation of equipment per year | $3m |
| Expected sale price of equipment at end of project | $0.6m |
| Unit sales per year | 4m |
| Sale price per unit | $8 |
| Variable cost per unit | $5 |
| Fixed costs per year, paid at the end of each year | $1m |
| Interest expense per year | 0 |
| Tax rate | 30% |
| Weighted average cost of capital after tax per annum | 10% |

Notes

1. The firm's current assets and current liabilities are $3m and $2m respectively right now. This net working capital will not be used in this project, it will be used in other unrelated projects. Due to the project, current assets (mostly inventory) will grow by $2m initially (at t=0), and then by $0.2m at the end of the first year (t=1). Current liabilities (mostly trade creditors) will increase by $0.1m at the end of the first year (t=1). At the end of the project, the net working capital accumulated due to the project can be sold for the same price that it was bought.
2. The project cost $0.5m to research which was incurred one year ago.

Assumptions

• All cash flows occur at the start or end of the year as appropriate, not in the middle or throughout the year.
• All rates and cash flows are real. The inflation rate is 3% pa.
• All rates are given as effective annual rates.
• The business considering the project is run as a 'sole tradership' (run by an individual without a company) and is therefore eligible for a 50% capital gains tax discount when the equipment is sold, as permitted by the Australian Tax Office.

What is the expected net present value (NPV) of the project?

In the dividend discount model:

$$P_0 = \dfrac{C_1}{r-g}$$

The return $r$ is supposed to be the:

The following is the Dividend Discount Model used to price stocks:

$$p_0=\frac{d_1}{r-g}$$

Which of the following statements about the Dividend Discount Model is NOT correct?

The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation.

$$p_0=\frac{d_1}{r_\text{eff}-g_\text{eff}}$$

Which expression is NOT equal to the expected capital return?
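Many of the questions above reduce to the perpetuity-with-growth formula. As a study aid (not part of the original question bank), here is a minimal Python sketch of that pricing rule; the example figures are Carla's and Carlos's shares from above, and the comparison with their asking prices is mine.

```python
def ddm_price(d1, r, g):
    """Dividend Discount Model / perpetuity with growth: P0 = d1 / (r - g).
    Requires r > g, otherwise the price diverges."""
    if r <= g:
        raise ValueError("DDM requires r > g")
    return d1 / (r - g)

# Carla's and Carlos's shares both pay $1 pa forever (g = 0) at r = 10% pa,
# so the DDM value is $10: compare with the $13 and $6 asking prices.
print(ddm_price(1.0, 0.10, 0.0))  # 10.0
```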
Find Candys Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.

| Candys Corp Income Statement for year ending 30th June 2013 | $m |
|---|---|
| Sales | 200 |
| COGS | 50 |
| Operating expense | 10 |
| Depreciation | 20 |
| Interest expense | 10 |
| Income before tax | 110 |
| Tax at 30% | 33 |
| Net income | 77 |

| Candys Corp Balance Sheet as at 30th June | 2013 $m | 2012 $m |
|---|---|---|
| Assets | | |
| Current assets | 220 | 180 |
| PPE cost | 300 | 340 |
| Accumul. depr. | 60 | 40 |
| Carrying amount | 240 | 300 |
| Total assets | 460 | 480 |
| Liabilities | | |
| Current liabilities | 175 | 190 |
| Non-current liabilities | 135 | 130 |
| Owners' equity | | |
| Retained earnings | 50 | 60 |
| Contributed equity | 100 | 100 |
| Total L and OE | 460 | 480 |

Note: all figures are given in millions of dollars ($m).

Why is Capital Expenditure (CapEx) subtracted in the Cash Flow From Assets (CFFA) formula?

$$CFFA=NI+Depr-CapEx - \Delta NWC+IntExp$$

Find Trademark Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.

| Trademark Corp Income Statement for year ending 30th June 2013 | $m |
|---|---|
| Sales | 100 |
| COGS | 25 |
| Operating expense | 5 |
| Depreciation | 20 |
| Interest expense | 20 |
| Income before tax | 30 |
| Tax at 30% | 9 |
| Net income | 21 |

| Trademark Corp Balance Sheet as at 30th June | 2013 $m | 2012 $m |
|---|---|---|
| Assets | | |
| Current assets | 120 | 80 |
| PPE cost | 150 | 140 |
| Accumul. depr. | 60 | 40 |
| Carrying amount | 90 | 100 |
| Total assets | 210 | 180 |
| Liabilities | | |
| Current liabilities | 75 | 65 |
| Non-current liabilities | 75 | 55 |
| Owners' equity | | |
| Retained earnings | 10 | 10 |
| Contributed equity | 50 | 50 |
| Total L and OE | 210 | 180 |

Note: all figures are given in millions of dollars ($m).

Cash Flow From Assets (CFFA) can be defined as:

A firm has forecast its Cash Flow From Assets (CFFA) for this year and management is worried that it is too low. Which one of the following actions will lead to a higher CFFA for this year (t=0 to 1)? Only consider cash flows this year. Do not consider cash flows after one year, or the change in the NPV of the firm. Consider each action in isolation.

A new company's Firm Free Cash Flow (FFCF, same as CFFA) is forecast in the graph below. To value the firm's assets, the terminal value needs to be calculated using the perpetuity with growth formula:

$$V_{\text{terminal, }t-1} = \dfrac{FFCF_{\text{terminal, }t}}{r-g}$$

Which point corresponds to the best time to calculate the terminal value?

A new company's Firm Free Cash Flow (FFCF, same as CFFA) is forecast in the graph below. To value the firm's assets, the terminal value needs to be calculated using the perpetuity with growth formula:

$$V_{\text{terminal, }t-1} = \dfrac{FFCF_{\text{terminal, }t}}{r-g}$$

Which point corresponds to the best time to calculate the terminal value?

An old company's Firm Free Cash Flow (FFCF, same as CFFA) is forecast in the graph below. To value the firm's assets, the terminal value needs to be calculated using the perpetuity with growth formula:

$$V_{\text{terminal, }t-1} = \dfrac{FFCF_{\text{terminal, }t}}{r-g}$$

Which point corresponds to the best time to calculate the terminal value?

Here are the Net Income (NI) and Cash Flow From Assets (CFFA) equations:

$$NI=(Rev-COGS-FC-Depr-IntExp).(1-t_c)$$

$$CFFA=NI+Depr-CapEx - \varDelta NWC+IntExp$$

What is the formula for calculating annual interest expense (IntExp) which is used in the equations above? Select one of the following answers. Note that D is the value of debt which is constant through time, and $r_D$ is the cost of debt.

Interest expense (IntExp) is an important part of a company's income statement (or 'profit and loss' or 'statement of financial performance').
How does an accountant calculate the annual interest expense of a fixed-coupon bond that has a liquid secondary market? Select the most correct answer. Annual interest expense is equal to:

Which one of the following will increase the Cash Flow From Assets in this year for a tax-paying firm, all else remaining constant?

A company increases the proportion of debt funding it uses to finance its assets by issuing bonds and using the cash to repurchase stock, leaving assets unchanged. Ignoring the costs of financial distress, which of the following statements is NOT correct:

Find Scubar Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.

| Scubar Corp Income Statement for year ending 30th June 2013 | $m |
|---|---|
| Sales | 200 |
| COGS | 60 |
| Depreciation | 20 |
| Rent expense | 11 |
| Interest expense | 19 |
| Taxable income | 90 |
| Taxes at 30% | 27 |
| Net income | 63 |

| Scubar Corp Balance Sheet as at 30th June | 2013 $m | 2012 $m |
|---|---|---|
| Inventory | 60 | 50 |
| Trade debtors | 19 | 6 |
| Rent paid in advance | 3 | 2 |
| PPE | 420 | 400 |
| Total assets | 502 | 458 |
| Trade creditors | 10 | 8 |
| Bond liabilities | 200 | 190 |
| Contributed equity | 130 | 130 |
| Retained profits | 162 | 130 |
| Total L and OE | 502 | 458 |

Note: All figures are given in millions of dollars ($m).

The cash flow from assets was:

Which one of the following will decrease net income (NI) but increase cash flow from assets (CFFA) in this year for a tax-paying firm, all else remaining constant?

Remember:

$$NI=(Rev-COGS-FC-Depr-IntExp).(1-t_c )$$

$$CFFA=NI+Depr-CapEx - \Delta NWC+IntExp$$

What is the net present value (NPV) of undertaking a full-time Australian undergraduate business degree as an Australian citizen? Only include the cash flows over the duration of the degree, ignore any benefits or costs of the degree after it's completed. Assume the following:

• The degree takes 3 years to complete and all students pass all subjects.
• There are 2 semesters per year and 4 subjects per semester.
• University fees per subject per semester are $1,277, paid at the start of each semester. Fees are expected to stay constant for the next 3 years.
• There are 52 weeks per year.
• The first semester is just about to start (t=0). The first semester lasts for 19 weeks (t=0 to 19).
• The second semester starts immediately afterwards (t=19) and lasts for another 19 weeks (t=19 to 38).
• The summer holidays begin after the second semester ends and last for 14 weeks (t=38 to 52). Then the first semester begins the next year, and so on.
• Working full time at the grocery store instead of studying full-time pays $20/hr and you can work 35 hours per week. Wages are paid at the end of each week.
• Full-time students can work full-time during the summer holiday at the grocery store for the same rate of $20/hr for 35 hours per week. Wages are paid at the end of each week.
• The discount rate is 9.8% pa. All rates and cash flows are real. Inflation is expected to be 3% pa. All rates are effective annual.

The NPV of costs from undertaking the university degree is:

A stock is expected to pay a dividend of $15 in one year (t=1), then $25 for 9 years after that (payments at t=2, 3, ..., 10), and on the 11th year (t=11) the dividend will be 2% less than at t=10, and will continue to shrink at the same rate every year after that forever. The required return of the stock is 10%. All rates are effective annual rates. What is the price of the stock now?

The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation.
$$P_{0} = \frac{C_1}{r_{\text{eff}} - g_{\text{eff}}}$$

What would you call the expression $C_1/P_0$?

The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation.

$$p_{0} = \frac{c_1}{r_{\text{eff}} - g_{\text{eff}}}$$

What is the discount rate '$r_\text{eff}$' in this equation?

The following is the Dividend Discount Model (DDM) used to price stocks:

$$P_0 = \frac{d_1}{r-g}$$

Assume that the assumptions of the DDM hold and that the time period is measured in years. Which of the following is equal to the expected dividend in 3 years, $d_3$?

When using the dividend discount model to price a stock:

$$p_{0} = \frac{d_1}{r - g}$$

The growth rate of dividends (g):

Currently, a mining company has a share price of $6 and pays constant annual dividends of $0.50. The next dividend will be paid in 1 year. Suddenly and unexpectedly the mining company announces that due to higher than expected profits, all of these windfall profits will be paid as a special dividend of $0.30 in 1 year.

If investors believe that the windfall profits and dividend is a one-off event, what will be the new share price? If investors believe that the additional dividend is actually permanent and will continue to be paid, what will be the new share price? Assume that the required return on equity is unchanged. Choose from the following, where the first share price includes the one-off increase in earnings and dividends for the first year only $(P_\text{0 one-off})$, and the second assumes that the increase is permanent $(P_\text{0 permanent})$:

Note: When a firm makes excess profits they sometimes pay them out as special dividends. Special dividends are just like ordinary dividends but they are one-off and investors do not expect them to continue, unlike ordinary dividends which are expected to persist.

Estimate Microsoft's (MSFT) share price using a price earnings (PE) multiples approach with the following assumptions and figures only:

• Apple, Google and Microsoft are comparable companies.
• Apple's (AAPL) share price is $526.24 and historical EPS is $40.32.
• Google's (GOOG) share price is $1,215.65 and historical EPS is $36.23.
• Microsoft's (MSFT) historical earnings per share (EPS) is $2.71.

Source: Google Finance 28 Feb 2014.

For certain shares, the forward-looking Price-Earnings Ratio ($P_0/EPS_1$) is equal to the inverse of the share's total expected return ($1/r_\text{total}$). For what shares is this true?

Assume:

• The general accounting definition of 'payout ratio' which is dividends per share (DPS) divided by earnings per share (EPS).
• All cash flows, earnings and rates are real.

You own an apartment which you rent out as an investment property. What is the price of the apartment using discounted cash flow (DCF, same as NPV) valuation? Assume that:

• You just signed a contract to rent the apartment out to a tenant for the next 12 months at $2,000 per month, payable in advance (at the start of the month, t=0). The tenant is just about to pay you the first $2,000 payment.
• The contract states that monthly rental payments are fixed for 12 months. After the contract ends, you plan to sign another contract but with rental payment increases of 3%. You intend to do this every year. So rental payments will increase at the start of the 13th month (t=12) to be $2,060 (=2,000(1+0.03)), and then they will be constant for the next 12 months.
Rental payments will increase again at the start of the 25th month (t=24) to be $2,121.80 (=2,000(1+0.03)^2), and then they will be constant for the next 12 months until the next year, and so on.
• The required return of the apartment is 8.732% pa, given as an effective annual rate.
• Ignore all taxes, maintenance, real estate agent, council and strata fees, periods of vacancy and other costs. Assume that the apartment will last forever and so will the rental payments.

When using the dividend discount model, care must be taken to avoid using a nominal dividend growth rate that exceeds the country's nominal GDP growth rate. Otherwise the firm is forecast to take over the country since it grows faster than the average business forever.

Suppose a firm's nominal dividend grows at 10% pa forever, and nominal GDP growth is 5% pa forever. The firm's total dividends are currently $1 billion (t=0). The country's GDP is currently $1,000 billion (t=0). In approximately how many years will the company's total dividends be as large as the country's GDP?

Assume that the Gordon Growth Model (same as the dividend discount model or perpetuity with growth formula) is an appropriate method to value real estate.

The rule of thumb in the real estate industry is that properties should yield a 5% pa rental return. Many investors also regard property to be as risky as the stock market, therefore property is thought to have a required total return of 9% pa which is the average total return on the stock market including dividends.

Assume that all returns are effective annual rates and they are nominal (not reduced by inflation). Inflation is expected to be 2% pa.

You're considering purchasing an investment property which has a rental yield of 5% pa and you expect it to have the same risk as the stock market. Select the most correct statement about this property.

A project to build a toll bridge will take two years to complete, costing three payments of $100 million at the start of each year for the next three years, that is at t=0, 1 and 2.

After completion, the toll bridge will yield a constant $50 million at the end of each year for the next 10 years. So the first payment will be at t=3 and the last at t=12. After the last payment at t=12, the bridge will be given to the government.

The required return of the project is 21% pa given as an effective annual nominal rate. All cash flows are real and the expected inflation rate is 10% pa given as an effective annual rate. Ignore taxes.

The Net Present Value is:

Most listed Australian companies pay dividends twice per year, the 'interim' and 'final' dividends, which are roughly 6 months apart.

You are an equities analyst trying to value the company BHP. You decide to use the Dividend Discount Model (DDM) as a starting point, so you study BHP's dividend history and you find that BHP tends to pay the same interim and final dividend each year, and that both grow by the same rate.

You expect BHP will pay a $0.55 interim dividend in six months and a $0.55 final dividend in one year. You expect each to grow by 4% next year and forever, so the interim and final dividends next year will be $0.572 each, and so on in perpetuity.

Assume BHP's cost of equity is 8% pa. All rates are quoted as nominal effective rates. The dividends are nominal cash flows and the inflation rate is 2.5% pa.

What is the current price of a BHP share?

Find UniBar Corp's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.
| UniBar Corp Income Statement for year ending 30th June 2013 | $m |
|---|---|
| Sales | 80 |
| COGS | 40 |
| Operating expense | 15 |
| Depreciation | 10 |
| Interest expense | 5 |
| Income before tax | 10 |
| Tax at 30% | 3 |
| Net income | 7 |

| UniBar Corp Balance Sheet as at 30th June | 2013 $m | 2012 $m |
|---|---|---|
| Assets | | |
| Current assets | 120 | 90 |
| PPE cost | 360 | 320 |
| Accumul. depr. | 40 | 30 |
| Carrying amount | 320 | 290 |
| Total assets | 440 | 380 |
| Liabilities | | |
| Current liabilities | 110 | 60 |
| Non-current liabilities | 190 | 180 |
| Owners' equity | | |
| Retained earnings | 95 | 95 |
| Contributed equity | 45 | 45 |
| Total L and OE | 440 | 380 |

Note: all figures are given in millions of dollars ($m).

Find Piano Bar's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.

| Piano Bar Income Statement for year ending 30th June 2013 | $m |
|---|---|
| Sales | 310 |
| COGS | 185 |
| Operating expense | 20 |
| Depreciation | 15 |
| Interest expense | 10 |
| Income before tax | 80 |
| Tax at 30% | 24 |
| Net income | 56 |

| Piano Bar Balance Sheet as at 30th June | 2013 $m | 2012 $m |
|---|---|---|
| Assets | | |
| Current assets | 240 | 230 |
| PPE cost | 420 | 400 |
| Accumul. depr. | 50 | 35 |
| Carrying amount | 370 | 365 |
| Total assets | 610 | 595 |
| Liabilities | | |
| Current liabilities | 180 | 190 |
| Non-current liabilities | 290 | 265 |
| Owners' equity | | |
| Retained earnings | 90 | 90 |
| Contributed equity | 50 | 50 |
| Total L and OE | 610 | 595 |

Note: all figures are given in millions of dollars ($m).

Find World Bar's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.

| World Bar Income Statement for year ending 30th June 2013 | $m |
|---|---|
| Sales | 300 |
| COGS | 150 |
| Operating expense | 50 |
| Depreciation | 40 |
| Interest expense | 10 |
| Taxable income | 50 |
| Tax at 30% | 15 |
| Net income | 35 |

| World Bar Balance Sheet as at 30th June | 2013 $m | 2012 $m |
|---|---|---|
| Assets | | |
| Current assets | 200 | 230 |
| PPE cost | 400 | 400 |
| Accumul. depr. | 75 | 35 |
| Carrying amount | 325 | 365 |
| Total assets | 525 | 595 |
| Liabilities | | |
| Current liabilities | 150 | 205 |
| Non-current liabilities | 235 | 250 |
| Owners' equity | | |
| Retained earnings | 100 | 100 |
| Contributed equity | 40 | 40 |
| Total L and OE | 525 | 595 |

Note: all figures above and below are given in millions of dollars ($m).

Question 345  capital budgeting, break even, NPV

| Project Data | |
|---|---|
| Project life | 10 yrs |
| Initial investment in factory | $10m |
| Depreciation of factory per year | $1m |
| Expected scrap value of factory at end of project | $0 |
| Sale price per unit | $10 |
| Variable cost per unit | $6 |
| Fixed costs per year, paid at the end of each year | $2m |
| Interest expense per year | 0 |
| Tax rate | 30% |
| Cost of capital per annum | 10% |

Notes

1. The firm's current liabilities are forecast to stay at $0.5m. The firm's current assets (mostly inventory) are currently $1m, but are forecast to grow by $0.1m at the end of each year due to the project. At the end of the project, the current assets accumulated due to the project can be sold for the same price that they were bought.
2. A marketing survey was used to forecast sales. It cost $1.4m which was just paid. The cost has been capitalised by the accountants and is tax-deductible over the life of the project, regardless of whether the project goes ahead or not. This amortisation expense is not included in the depreciation expense listed in the table above.

Assumptions

• All cash flows occur at the start or end of the year as appropriate, not in the middle or throughout the year.
• All rates and cash flows are real. The inflation rate is 3% pa.
• All rates are given as effective annual rates.

Find the break even unit production (Q) per year to achieve a zero Net Income (NI) and Net Present Value (NPV), respectively. The answers below are listed in the same order.
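As another study aid (again not part of the question bank), the NPV and annuity machinery that these capital-budgeting questions rely on fits in a few lines of Python; the cash flows in the usage lines are invented purely for illustration.

```python
def npv(r, cashflows):
    """NPV of cashflows, where cashflows[t] occurs at the end of year t
    (cashflows[0] occurs now, at t=0). r is an effective annual rate."""
    return sum(cf / (1 + r) ** t for t, cf in enumerate(cashflows))

def annuity_pv(c1, r, T):
    """PV of an annuity paying c1 at t = 1..T with no growth."""
    return (c1 / r) * (1 - 1 / (1 + r) ** T)

print(npv(0.10, [-100, 60, 60]))   # 4.13...: invest 100 now, receive 60 twice
print(annuity_pv(60, 0.10, 2))     # 104.13...: PV of the two inflows alone
```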
The following is the Dividend Discount Model used to price stocks:

$$p_0=\frac{d_1}{r-g}$$

All rates are effective annual rates and the cash flows ($d_1$) are received every year. Note that the r and g terms in the above DDM could also be labelled as below:

$$r = r_{\text{total, 0}\rightarrow\text{1yr, eff 1yr}}$$

$$g = r_{\text{capital, 0}\rightarrow\text{1yr, eff 1yr}}$$

Which of the following statements is NOT correct?

The following is the Dividend Discount Model (DDM) used to price stocks:

$$P_0=\dfrac{C_1}{r-g}$$

If the assumptions of the DDM hold, which one of the following statements is NOT correct? The long term expected:

The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation.

$$P_0=\frac{d_1}{r-g}$$

A stock pays dividends annually. It just paid a dividend, but the next dividend ($d_1$) will be paid in one year. According to the DDM, what is the correct formula for the expected price of the stock in 2.5 years?

In the dividend discount model:

$$P_0= \frac{d_1}{r-g}$$

The pronumeral $g$ is supposed to be the:

The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation.

$$p_0= \frac{c_1}{r-g}$$

Which expression is equal to the expected dividend return?

For a price of $102, Andrea will sell you a share which just paid a dividend of $10 yesterday, and is expected to pay dividends every year forever, growing at a rate of 5% pa. So the next dividend will be $10(1+0.05)^1=10.50$ in one year from now, and the year after it will be $10(1+0.05)^2=11.025$ and so on. The required return of the stock is 15% pa. Would you like to buy the share or politely decline?

For a price of $1040, Camille will sell you a share which just paid a dividend of $100, and is expected to pay dividends every year forever, growing at a rate of 5% pa. So the next dividend will be $100(1+0.05)^1=105.00$, and the year after it will be $100(1+0.05)^2=110.25$ and so on. The required return of the stock is 15% pa. Would you like to buy the share or politely decline?

For a price of $10.20 each, Renee will sell you 100 shares. Each share is expected to pay dividends in perpetuity, growing at a rate of 5% pa. The next dividend is one year away (t=1) and is expected to be $1 per share. The required return of the stock is 15% pa. Would you like to buy the shares or politely decline?

For a price of $129, Joanne will sell you a share which is expected to pay a $30 dividend in one year, and a $10 dividend every year after that forever. So the stock's dividends will be $30 at t=1, $10 at t=2, $10 at t=3, and $10 forever onwards. The required return of the stock is 10% pa. Would you like to buy the share or politely decline?

For a price of $95, Sherylanne will sell you a share which is expected to pay its first dividend of $10 in 7 years (t=7), and will continue to pay the same $10 dividend every year after that forever. The required return of the stock is 10% pa. Would you like to buy the share or politely decline?

Which firms tend to have low forward-looking price-earnings (PE) ratios? Only consider firms with positive earnings, disregard firms with negative earnings and therefore negative PE ratios.

Which firms tend to have high forward-looking price-earnings (PE) ratios?

Your poor friend asks to borrow some money from you. He would like $1,000 now (t=0) and every year for the next 5 years, so there will be 6 payments of $1,000 from t=0 to t=5 inclusive. In return he will pay you $10,000 in seven years from now (t=7).
What is the net present value (NPV) of lending to your friend? Assume that your friend will definitely pay you back so the loan is risk-free, and that the yield on risk-free government debt is 10% pa, given as an effective annual rate.

Which of the following investable assets are NOT suitable for valuation using PE multiples techniques?

Which one of the following will decrease net income (NI) but increase cash flow from assets (CFFA) in this year for a tax-paying firm, all else remaining constant?

Remember:

$$NI = (Rev-COGS-FC-Depr-IntExp).(1-t_c )$$

$$CFFA=NI+Depr-CapEx - \Delta NWC+IntExp$$

Find Sidebar Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.

| Sidebar Corp Income Statement for year ending 30th June 2013 | $m |
|---|---|
| Sales | 405 |
| COGS | 100 |
| Depreciation | 34 |
| Rent expense | 22 |
| Interest expense | 39 |
| Taxable income | 210 |
| Taxes at 30% | 63 |
| Net income | 147 |

| Sidebar Corp Balance Sheet as at 30th June | 2013 $m | 2012 $m |
|---|---|---|
| Inventory | 70 | 50 |
| Trade debtors | 11 | 16 |
| Rent paid in advance | 4 | 3 |
| PPE | 700 | 680 |
| Total assets | 785 | 749 |
| Trade creditors | 11 | 19 |
| Bond liabilities | 400 | 390 |
| Contributed equity | 220 | 220 |
| Retained profits | 154 | 120 |
| Total L and OE | 785 | 749 |

Note: All figures are given in millions of dollars ($m).

The cash flow from assets was:

Over the next year, the management of an unlevered company plans to:

• Achieve firm free cash flow (FFCF or CFFA) of $1m.
• Pay dividends of $1.8m.
• Complete a $1.3m share buy-back.
• Spend $0.8m on new buildings without buying or selling any other fixed assets. This capital expenditure is included in the CFFA figure quoted above.

Assume that:

• All amounts are received and paid at the end of the year so you can ignore the time value of money.
• The firm has sufficient retained profits to pay the dividend and complete the buy back.
• The firm plans to run a very tight ship, with no excess cash above operating requirements currently or over the next year.

How much new equity financing will the company need? In other words, what is the value of new shares that will need to be issued?

Two years ago Fred bought a house for $300,000. Now it's worth $500,000, based on recent similar sales in the area.

Fred's residential property has an expected total return of 8% pa. He rents his house out for $2,000 per month, paid in advance. Every 12 months he plans to increase the rental payments.

The present value of 12 months of rental payments is $23,173.86. The future value of 12 months of rental payments one year ahead is $25,027.77.

What is the expected annual growth rate of the rental payments? In other words, by what percentage increase will Fred have to raise the monthly rent by each year to sustain the expected annual total return of 8%?

A residential investment property has an expected nominal total return of 6% pa and nominal capital return of 3% pa. Inflation is expected to be 2% pa. All rates are given as effective annual rates.

What are the property's expected real total, capital and income returns? The answer choices below are given in the same order.

Stocks in the United States usually pay quarterly dividends. For example, the retailer Wal-Mart Stores paid a $0.47 dividend every quarter over the 2013 calendar year and plans to pay a $0.48 dividend every quarter over the 2014 calendar year.

Using the dividend discount model and net present value techniques, calculate the stock price of Wal-Mart Stores assuming that:

• The time now is the beginning of January 2014.
• The next dividend of $0.48 will be received in 3 months (end of March 2014), with another 3 quarterly payments of $0.48 after this (end of June, September and December 2014).
• The quarterly dividend will increase by 2% every year, but each quarterly dividend over the year will be equal. So each quarterly dividend paid in 2015 will be $0.4896 ($=0.48×(1+0.02)^1$), with the first at the end of March 2015 and the last at the end of December 2015. In 2016 each quarterly dividend will be $0.499392 ($=0.48×(1+0.02)^2$), with the first at the end of March 2016 and the last at the end of December 2016, and so on forever.
• The total required return on equity is 6% pa.
• The required return and growth rate are given as effective annual rates.
• All cash flows and rates are nominal. Inflation is 3% pa.
• Dividend payment dates and ex-dividend dates are at the same time.
• Remember that there are 4 quarters in a year and 3 months in a quarter.

What is the current stock price?

Your friend overheard that you need some cash and asks if you would like to borrow some money. She can lend you $5,000 now (t=0), and in return she wants you to pay her back $1,000 in two years (t=2) and every year after that for the next 5 years, so there will be 6 payments of $1,000 from t=2 to t=7 inclusive.

What is the net present value (NPV) of borrowing from your friend? Assume that banks loan funds at interest rates of 10% pa, given as an effective annual rate.

Which of the following investable assets are NOT suitable for valuation using PE multiples techniques?

Which one of the following will have no effect on net income (NI) but decrease cash flow from assets (CFFA or FFCF) in this year for a tax-paying firm, all else remaining constant?

Remember:

$$NI=(Rev-COGS-FC-Depr-IntExp).(1-t_c )$$

$$CFFA=NI+Depr-CapEx - \Delta NWC+IntExp$$

Find Ching-A-Lings Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.

| Ching-A-Lings Corp Income Statement for year ending 30th June 2013 | $m |
|---|---|
| Sales | 100 |
| COGS | 20 |
| Depreciation | 20 |
| Rent expense | 11 |
| Interest expense | 19 |
| Taxable income | 30 |
| Taxes at 30% | 9 |
| Net income | 21 |

| Ching-A-Lings Corp Balance Sheet as at 30th June | 2013 $m | 2012 $m |
|---|---|---|
| Inventory | 49 | 38 |
| Trade debtors | 14 | 2 |
| Rent paid in advance | 5 | 5 |
| PPE | 400 | 400 |
| Total assets | 468 | 445 |
| Trade creditors | 4 | 10 |
| Bond liabilities | 200 | 190 |
| Contributed equity | 145 | 145 |
| Retained profits | 119 | 100 |
| Total L and OE | 468 | 445 |

Note: All figures are given in millions of dollars ($m).

The cash flow from assets was:

Over the next year, the management of an unlevered company plans to:

• Make $5m in sales, $1.9m in net income and $2m in equity free cash flow (EFCF).
• Pay dividends of $1m.
• Complete a $1.3m share buy-back.

Assume that:

• All amounts are received and paid at the end of the year so you can ignore the time value of money.
• The firm has sufficient retained profits to legally pay the dividend and complete the buy back.
• The firm plans to run a very tight ship, with no excess cash above operating requirements currently or over the next year.

How much new equity financing will the company need? In other words, what is the value of new shares that will need to be issued?

Three years ago Frederika bought a house for $400,000. Now it's worth $600,000, based on recent similar sales in the area.

Frederika's residential property has an expected total return of 7% pa. She rents her house out for $2,500 per month, paid in advance. Every 12 months she plans to increase the rental payments.
The present value of 12 months of rental payments is $29,089.48. The future value of 12 months of rental payments one year ahead is $31,125.74.

What is the expected annual capital yield of the property?

A residential investment property has an expected nominal total return of 8% pa and nominal capital return of 3% pa. Inflation is expected to be 2% pa. All rates are given as effective annual rates.

What are the property's expected real total, capital and income returns? The answer choices below are given in the same order.

Which statement is the most correct?

Question 65  annuity with growth, needs refinement

Which of the below formulas gives the present value of an annuity with growth?

Hint: The equation of a perpetuity without growth is:

$$V_\text{0, perp without growth} = \frac{C_\text{1}}{r}$$

The formula for the present value of an annuity without growth is derived from the formula for a perpetuity without growth.

The idea is that an annuity with T payments from t=1 to T inclusive is equivalent to a perpetuity starting at t=1 with fixed positive cash flows, plus a perpetuity starting T periods later (t=T+1) with fixed negative cash flows. The positive and negative cash flows after time period T cancel each other out, leaving the positive cash flows between t=1 to T, which is the annuity.

$$\begin{aligned} V_\text{0, annuity} &= V_\text{0, perp without growth from t=1} - V_\text{0, perp without growth from t=T+1} \\ &= \dfrac{C_\text{1}}{r} - \dfrac{ \left( \dfrac{C_\text{T+1}}{r} \right) }{(1+r)^T} \\ &= \dfrac{C_\text{1}}{r} - \dfrac{ \left( \dfrac{C_\text{1}}{r} \right) }{(1+r)^T} \\ &= \dfrac{C_\text{1}}{r}\left(1 - \dfrac{1}{(1+r)^T}\right) \\ \end{aligned}$$

The equation of a perpetuity with growth is:

$$V_\text{0, perp with growth} = \dfrac{C_\text{1}}{r-g}$$

A person is thinking about borrowing $100 from the bank at 7% pa and investing it in shares with an expected return of 10% pa. One year later the person will sell the shares and pay back the loan in full. Both the loan and the shares are fairly priced.

What is the Net Present Value (NPV) of this one year investment? Note that you are asked to find the present value ($V_0$), not the value in one year ($V_1$).

A company selling charting and technical analysis software claims that independent academic studies have shown that its software makes significantly positive abnormal returns. Assuming the claim is true, which statement(s) are correct?

(I) Weak form market efficiency is broken.
(II) Semi-strong form market efficiency is broken.
(III) Strong form market efficiency is broken.
(IV) The asset pricing model used to measure the abnormal returns (such as the CAPM) had mis-specification error so the returns may not be abnormal but rather fair for the level of risk.

Select the most correct response:

Your friend claims that by reading 'The Economist' magazine's economic news articles, she can identify shares that will have positive abnormal expected returns over the next 2 years. Assuming that her claim is true, which statement(s) are correct?

(i) Weak form market efficiency is broken.
(ii) Semi-strong form market efficiency is broken.
(iii) Strong form market efficiency is broken.
(iv) The asset pricing model used to measure the abnormal returns (such as the CAPM) is either wrong (mis-specification error) or is measured using the wrong inputs (data errors) so the returns may not be abnormal but rather fair for the level of risk.

Select the most correct response:

Select the most correct statement from the following.
'Chartists', also known as 'technical traders', believe that:

Fundamentalists who analyse company financial reports and news announcements (but who don't have inside information) will make positive abnormal returns if:

A very low-risk stock just paid its semi-annual dividend of $0.14, as it has for the last 5 years. You conservatively estimate that from now on the dividend will fall at a rate of 1% every 6 months.

If the stock currently sells for $3 per share, what must be its required total return as an effective annual rate?

If risk free government bonds are trading at a yield of 4% pa, given as an effective annual rate, would you consider buying or selling the stock?

The stock's required total return is:

The total return of any asset can be broken down in different ways. One possible way is to use the dividend discount model (or Gordon growth model):

$$p_0 = \frac{c_1}{r_\text{total}-r_\text{capital}}$$

Which, since $c_1/p_0$ is the income return ($r_\text{income}$), can be expressed as:

$$r_\text{total}=r_\text{income}+r_\text{capital}$$

So the total return of an asset is the income component plus the capital or price growth component.

Another way to break up total return is to use the Capital Asset Pricing Model:

$$r_\text{total}=r_\text{f}+\beta(r_\text{m}-r_\text{f})$$

$$r_\text{total}=r_\text{time value}+r_\text{risk premium}$$

So the risk free rate is the time value of money and the term $\beta(r_\text{m}-r_\text{f})$ is the compensation for taking on systematic risk.

Using the above theory and your general knowledge, which of the below equations, if any, are correct?

(I) $r_\text{income}=r_\text{time value}$
(II) $r_\text{income}=r_\text{risk premium}$
(III) $r_\text{capital}=r_\text{time value}$
(IV) $r_\text{capital}=r_\text{risk premium}$
(V) $r_\text{income}+r_\text{capital}=r_\text{time value}+r_\text{risk premium}$

Which of the equations are correct?

Government bonds currently have a return of 5%. A stock has a beta of 2 and the market return is 7%. What is the expected return of the stock?

Which statement(s) are correct?

(i) All stocks that plot on the Security Market Line (SML) are fairly priced.
(ii) All stocks that plot above the Security Market Line (SML) are overpriced.
(iii) All fairly priced stocks that plot on the Capital Market Line (CML) have zero idiosyncratic risk.

Select the most correct response:

The security market line (SML) shows the relationship between beta and expected return. Investment projects that plot above the SML would have:

A firm changes its capital structure by issuing a large amount of equity and using the funds to repay debt. Its assets are unchanged. Ignore interest tax shields. According to the Capital Asset Pricing Model (CAPM), which statement is correct?

Examine the following graph which shows stocks' betas $(\beta)$ and expected returns $(\mu)$:

Assume that the CAPM holds and that future expectations of stocks' returns and betas are correctly measured. Which statement is NOT correct?

You're the boss of an investment bank's equities research team. Your five analysts are each trying to find the expected total return over the next year of shares in a mining company. The mining firm:

• Is regarded as a mature company since it's quite stable in size and was floated around 30 years ago. It is not a high-growth company;
• Share price is very sensitive to changes in the price of the market portfolio, economic growth, the exchange rate and commodities prices.
Due to this, its standard deviation of total returns is much higher than that of the market index;
• Experienced tough times in the last 10 years due to unexpected falls in commodity prices.
• Shares are traded in an active liquid market.

Your team of analysts present their findings, and everyone has different views. While there's no definitive true answer, whose calculation of the expected total return is the most plausible? Assume that:
• The analysts' source data is correct and true, but their inferences might be wrong;
• All returns and yields are given as effective annual nominal rates.

A man inherits $500,000 worth of shares. He believes that by learning the secrets of trading, keeping up with the financial news and doing complex trend analysis with charts that he can quit his job and become a self-employed day trader in the equities markets. What is the expected gain from doing this over the first year? Measure the net gain in wealth received at the end of this first year due to the decision to become a day trader. Assume the following:
• He earns $60,000 pa in his current job, paid in a lump sum at the end of each year.
• He enjoys examining share price graphs and day trading just as much as he enjoys his current job.
• Stock markets are weak form and semi-strong form efficient.
• He has no inside information.
• He makes 1 trade every day and there are 250 trading days in the year. Trading costs are $20 per trade. His broker invoices him for the trading costs at the end of the year.
• The shares that he currently owns and the shares that he intends to trade have the same level of systematic risk as the market portfolio.
• The market portfolio's expected return is 10% pa.

Measure the net gain over the first year as an expected wealth increase at the end of the year.

A managed fund charges fees based on the amount of money that you keep with them. The fee is 2% of the start-of-year amount, but it is paid at the end of every year. This fee is charged regardless of whether the fund makes gains or losses on your money. The fund offers to invest your money in shares which have an expected return of 10% pa before fees. You are thinking of investing $100,000 in the fund and keeping it there for 40 years when you plan to retire. What is the Net Present Value (NPV) of investing your money in the fund? Note that the question is not asking how much money you will have in 40 years, it is asking: what is the NPV of investing in the fund? Assume that:
• The fund has no private information.
• Markets are weak and semi-strong form efficient.
• The fund's transaction costs are negligible.
• The cost and trouble of investing your money in shares by yourself, without the managed fund, is negligible.

A stock's correlation with the market portfolio increases while its total risk is unchanged. What will happen to the stock's expected return and systematic risk?

Your friend just bought a house for $400,000. He financed it using a $320,000 mortgage loan and a deposit of $80,000. In the context of residential housing and mortgages, the 'equity' tied up in the value of a person's house is the value of the house less the value of the mortgage. So the initial equity your friend has in his house is $80,000. Let this amount be E, let the value of the mortgage be D and the value of the house be V. So $V=D+E$. If house prices suddenly fall by 10%, what would be your friend's percentage change in equity (E)?
Assume that the value of the mortgage is unchanged and that no income (rent) was received from the house during the short time over which house prices fell. Remember:

$$r_{0\rightarrow1}=\frac{p_1-p_0+c_1}{p_0}$$

where $r_{0\rightarrow1}$ is the return (percentage change) of an asset with price $p_0$ initially, $p_1$ one period later, and paying a cash flow of $c_1$ at time $t=1$.

The equations for Net Income (NI, also known as Earnings or Net Profit After Tax) and Cash Flow From Assets (CFFA, also known as Free Cash Flow to the Firm) per year are:

$$NI=(Rev-COGS-FC-Depr-IntExp).(1-t_c)$$

$$CFFA=NI+Depr-CapEx - \varDelta NWC+IntExp$$

For a firm with debt, what is the amount of the interest tax shield per year?

The equations for Net Income (NI, also known as Earnings or Net Profit After Tax) and Cash Flow From Assets (CFFA, also known as Free Cash Flow to the Firm) per year are:

$$NI=(Rev-COGS-FC-Depr-IntExp).(1-t_c)$$

$$CFFA=NI+Depr-CapEx - \varDelta NWC+IntExp$$

For a firm with debt, what is the formula for the present value of interest tax shields if the tax shields occur in perpetuity? You may assume:
• The value of debt (D) is constant through time.
• The cost of debt and the yield on debt are equal and given by $r_D$.
• The appropriate rate to discount interest tax shields is $r_D$.
• $\text{IntExp}=D.r_D$

Question 121  capital structure, leverage, costs of financial distress, interest tax shield

Fill in the missing words in the following sentence: All things remaining equal, as a firm's amount of debt funding falls, benefits of interest tax shields __________ and the costs of financial distress __________.

Which of the following discount rates should be the highest for a levered company? Ignore the costs of financial distress.

Which of the following statements about the weighted average cost of capital (WACC) is NOT correct?

There are many different ways to value a firm's assets. Which of the following will NOT give the correct market value of a levered firm's assets $(V_L)$? Assume that:
• The firm is financed by listed common stock and vanilla annual fixed coupon bonds, which are both traded in a liquid market.
• The bonds' yield is equal to the coupon rate, so the bonds are issued at par. The yield curve is flat and yields are not expected to change. When bonds mature they will be rolled over by issuing the same number of new bonds with the same expected yield and coupon rate, and so on forever.
• Tax rates on the dividends and capital gains received by investors are equal, and capital gains tax is paid every year, even on unrealised gains regardless of when the asset is sold.
• There is no re-investment of the firm's cash back into the business. All of the firm's excess cash flow is paid out as dividends so real growth is zero.
• The firm operates in a mature industry with zero real growth.
• All cash flows and rates in the below equations are real (not nominal) and are expected to be stable forever. Therefore the perpetuity equation with no growth is suitable for valuation.
Where:

$$r_\text{WACC before tax} = r_D.\frac{D}{V_L} + r_{EL}.\frac{E_L}{V_L} = \text{Weighted average cost of capital before tax}$$

$$r_\text{WACC after tax} = r_D.(1-t_c).\frac{D}{V_L} + r_{EL}.\frac{E_L}{V_L} = \text{Weighted average cost of capital after tax}$$

$$NI_L=(Rev-COGS-FC-Depr-\mathbf{IntExp}).(1-t_c) = \text{Net Income Levered}$$

$$CFFA_L=NI_L+Depr-CapEx - \varDelta NWC+\mathbf{IntExp} = \text{Cash Flow From Assets Levered}$$

$$NI_U=(Rev-COGS-FC-Depr).(1-t_c) = \text{Net Income Unlevered}$$

$$CFFA_U=NI_U+Depr-CapEx - \varDelta NWC= \text{Cash Flow From Assets Unlevered}$$

A firm is considering a new project of similar risk to the current risk of the firm. This project will expand its existing business. The cash flows of the project have been calculated assuming that there is no interest expense. In other words, the cash flows assume that the project is all-equity financed. In fact the firm has a target debt-to-equity ratio of 1, so the project will be financed with 50% debt and 50% equity. To find the levered value of the firm's assets, what discount rate should be applied to the project's unlevered cash flows? Assume a classical tax system.

A manufacturing company is considering a new project in the more risky services industry. The cash flows from assets (CFFA) are estimated for the new project, with interest expense excluded from the calculations. To get the levered value of the project, what should these unlevered cash flows be discounted by? Assume that the manufacturing firm has a target debt-to-assets ratio that it sticks to.

Which statement about risk, required return and capital structure is the most correct?

A firm's weighted average cost of capital before tax ($r_\text{WACC before tax}$) would increase due to:

A company issues a large amount of bonds to raise money for new projects of similar risk to the company's existing projects. The net present value (NPV) of the new projects is positive but small. Assume a classical tax system. Which statement is NOT correct?

A firm has a debt-to-assets ratio of 50%. The firm then issues a large amount of equity to raise money for new projects of similar systematic risk to the company's existing projects. Assume a classical tax system. Which statement is correct?

A firm's WACC before tax would decrease due to:

A company has:
• 50 million shares outstanding.
• The market price of one share is currently $6.
• The risk-free rate is 5% and the market return is 10%.
• Market analysts believe that the company's ordinary shares have a beta of 2.
• The company has 1 million preferred shares which have a face (or par) value of $100 and pay a constant dividend of 10% of par. They currently trade for $80 each.
• The company's debentures are publicly traded and their market price is equal to 90% of their face value.
• The debentures have a total face value of $60,000,000 and the current yield to maturity of corporate debentures is 10% per annum.

The corporate tax rate is 30%. What is the company's after-tax weighted average cost of capital (WACC)? Assume a classical tax system.

A firm can issue 5 year annual coupon bonds at a yield of 8% pa and a coupon rate of 12% pa. The beta of its levered equity is 1. Five year government bonds yield 5% pa with a coupon rate of 6% pa. The market's expected dividend return is 4% pa and its expected capital return is 6% pa. The firm's debt-to-equity ratio is 2:1. The corporate tax rate is 30%. What is the firm's after-tax WACC? Assume a classical tax system.
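For the three-security WACC question above (ordinary shares, preference shares and debentures), here is a minimal sketch of one common way to lay out the computation. Treating the three securities as separate capital components weighted at market values, and applying the tax shield only to the debt (preference dividends are not tax-deductible), are assumptions about the intended method rather than an official solution.

```python
# Sketch: after-tax WACC with three capital components, using the
# figures from the question above. A common textbook layout, assumed
# rather than confirmed to be the intended one.

E = 50e6 * 6.0      # ordinary equity: 50m shares at $6 each
P = 1e6 * 80.0      # preference shares: 1m shares at $80 market price
D = 0.9 * 60e6      # debentures: 90% of $60m total face value

r_e = 0.05 + 2.0 * (0.10 - 0.05)   # CAPM: rf + beta*(rm - rf) = 15%
r_p = (0.10 * 100.0) / 80.0        # $10 fixed dividend / $80 price = 12.5%
r_d = 0.10 * (1 - 0.30)            # 10% yield after 30% corporate tax

V = E + P + D                      # total market value of capital
wacc = (E * r_e + P * r_p + D * r_d) / V
print(f"after-tax WACC = {wacc:.4%}")   # ~ 13.54%
```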
One of Miller and Modigliani's (M&M's) important insights is that a firm's managers should not try to achieve a particular level of leverage or interest tax shields under certain assumptions. So the firm's capital structure is irrelevant. This is because investors can make their own personal leverage and interest tax shields, so there's no need for managers to try to make corporate leverage and interest tax shields. This is true under the assumptions of equal tax rates, interest rates and debt availability for the person and the corporation, no transaction costs and symmetric information. This principle of 'home-made' or 'do-it-yourself' leverage can also be applied to other topics. Read the following statements to decide which are true:

(I) Payout policy: a firm's managers should not try to achieve a particular pattern of equity payout.
(II) Agency costs: a firm's managers should not try to minimise agency costs.
(III) Diversification: a firm's managers should not try to diversify across industries.
(IV) Shareholder wealth: a firm's managers should not try to maximise shareholders' wealth.

Which of the above statement(s) are true?

Assume that there exists a perfect world with no transaction costs, no asymmetric information, no taxes, no agency costs, equal borrowing rates for corporations and individual investors, the ability to short the risk free asset, semi-strong form efficient markets, the CAPM holds, investors are rational and risk-averse and there are no other market frictions. For a firm operating in this perfect world, which statement(s) are correct?

(i) When a firm changes its capital structure and/or payout policy, shareholders' wealth is unaffected.
(ii) When the idiosyncratic risk of a firm's assets increases, shareholders do not expect higher returns.
(iii) When the systematic risk of a firm's assets increases, shareholders do not expect higher returns.

Select the most correct response:

A newly floated farming company is financed with senior bonds, junior bonds, cumulative non-voting preferred stock and common stock. The new company has no retained profits and due to floods it was unable to record any revenues this year, leading to a loss. The firm is not bankrupt yet since it still has substantial contributed equity (same as paid-up capital). On which securities must it pay interest or dividend payments in this terrible financial year?

A fast-growing firm is suitable for valuation using a multi-stage growth model. Its nominal unlevered cash flow from assets ($CFFA_U$) at the end of this year (t=1) is expected to be $1 million. After that it is expected to grow at a rate of:
• 12% pa for the next two years (from t=1 to 3),
• 5% over the fourth year (from t=3 to 4), and
• -1% forever after that (from t=4 onwards). Note that this is a negative one percent growth rate.

Assume that:
• The nominal WACC after tax is 9.5% pa and is not expected to change.
• The nominal WACC before tax is 10% pa and is not expected to change.
• The firm has a target debt-to-equity ratio that it plans to maintain.
• The inflation rate is 3% pa.
• All rates are given as nominal effective annual rates.

What is the levered value of this fast growing firm's assets?

Question 99  capital structure, interest tax shield, Miller and Modigliani, trade off theory of capital structure

A firm changes its capital structure by issuing a large amount of debt and using the funds to repurchase shares. Its assets are unchanged.
Assume that:
• The firm and individual investors can borrow at the same rate and have the same tax rates.
• The firm's debt and shares are fairly priced and the shares are repurchased at the market price, not at a premium.
• There are no market frictions relating to debt such as asymmetric information or transaction costs.
• Shareholders' wealth is measured in terms of utility. Shareholders are wealth-maximising and risk-averse. They have a preferred level of overall leverage. Before the firm's capital restructure all shareholders were optimally levered.

According to Miller and Modigliani's theory, which statement is correct?

A credit card offers an interest rate of 18% pa, compounding monthly. Find the effective monthly rate, effective annual rate and the effective daily rate. Assume that there are 365 days in a year. All answers are given in the same order:

$$r_\text{eff monthly} , r_\text{eff yearly} , r_\text{eff daily}$$

A three year bond has a face value of $100, a yield of 10% and a fixed coupon rate of 5%, paid semi-annually. What is its price?

You want to buy an apartment priced at $300,000. You have saved a deposit of $30,000. The bank has agreed to lend you the $270,000 as a fully amortising loan with a term of 25 years. The interest rate is 12% pa and is not expected to change. What will be your monthly payments? Remember that mortgage loan payments are paid in arrears (at the end of the month).

Your friend wants to borrow $1,000 and offers to pay you back $100 in 6 months, with more $100 payments at the end of every month for another 11 months. So there will be twelve $100 payments in total. She says that 12 payments of $100 equals $1,200 so she's being generous. If interest rates are 12% pa, given as an APR compounding monthly, what is the Net Present Value (NPV) of your friend's deal?

A fixed coupon bond was bought for $90 and paid its annual coupon of $3 one year later (at t=1 year). Just after the coupon was paid, the bond price was $92 (at t=1 year). What was the total return, capital return and income return? Calculate your answers as effective annual rates. The choices are given in the same order: $r_\text{total},r_\text{capital},r_\text{income}$.

What is the NPV of the following series of cash flows when the discount rate is 10% given as an effective annual rate? The first payment of $90 is in 3 years, followed by payments every 6 months in perpetuity after that which shrink by 3% every 6 months. That is, the growth rate every 6 months is actually negative 3%, given as an effective 6 month rate. So the payment at $t=3.5$ years will be $90(1-0.03)^1=87.3$, and so on.

Bonds X and Y are issued by the same US company. Both bonds yield 10% pa, and they have the same face value ($100), maturity, seniority, and payment frequency. The only difference is that bond X and Y's coupon rates are 8 and 12% pa respectively. Which of the following statements is true?

Government bonds currently have a return of 5% pa. A stock has an expected return of 6% pa and the market return is 7% pa. What is the beta of the stock?

Stock A has a beta of 0.5 and stock B has a beta of 1. Which statement is NOT correct?

Treasury bonds currently have a return of 5% pa. A stock has a beta of 0.5 and the market return is 10% pa. What is the expected return of the stock?

According to the theory of the Capital Asset Pricing Model (CAPM), total variance can be broken into two components, systematic variance and idiosyncratic variance.
Which of the following events would be considered the most diversifiable according to the theory of the CAPM?

A firm changes its capital structure by issuing a large amount of debt and using the funds to repurchase shares. Its assets are unchanged. Ignore interest tax shields. According to the Capital Asset Pricing Model (CAPM), which statement is correct?

A fairly priced stock has an expected return of 15% pa. Treasury bonds yield 5% pa and the market portfolio's expected return is 10% pa. What is the beta of the stock?

All things remaining equal, the variance of a portfolio of two positively-weighted stocks rises as:

According to the theory of the Capital Asset Pricing Model (CAPM), total risk can be broken into two components, systematic risk and idiosyncratic risk. Which of the following events would be considered a systematic, undiversifiable event according to the theory of the CAPM?

A fairly priced stock has a beta that is the same as the market portfolio's beta. Treasury bonds yield 5% pa and the market portfolio's expected return is 10% pa. What is the expected return of the stock?

A stock has a beta of 0.5. Its next dividend is expected to be $3, paid one year from now. Dividends are expected to be paid annually and grow by 2% pa forever. Treasury bonds yield 5% pa and the market portfolio's expected return is 10% pa. All returns are effective annual rates. What is the price of the stock now?

The security market line (SML) shows the relationship between beta and expected return. Investment projects that plot on the SML would have:

All things remaining equal, according to the capital asset pricing model, if the systematic variance of an asset increases, its required return will increase and its price will decrease. If the idiosyncratic variance of an asset increases, its price will be unchanged. What is the relationship between the price of a call or put option and the total, systematic and idiosyncratic variance of the underlying asset that the option is based on? Select the most correct answer. Call and put option prices increase when the:

A fairly priced stock has an expected return equal to the market's. Treasury bonds yield 5% pa and the market portfolio's expected return is 10% pa. What is the stock's beta?

Portfolio Details

| Stock | Expected return | Standard deviation | Correlation | Beta | Dollars invested |
|---|---|---|---|---|---|
| A | 0.2 | 0.4 | 0.12 | 0.5 | 40 |
| B | 0.3 | 0.8 | | 1.5 | 80 |

What is the beta of the above portfolio?

Diversification is achieved by investing in a large number of stocks. What type of risk is reduced by diversification?

Stock A and B's returns have a correlation of 0.3. Which statement is NOT correct?

Diversification in a portfolio of two assets works best when the correlation between their returns is:

A firm has a debt-to-assets ratio of 50%. The firm then issues a large amount of debt to raise money for new projects of similar risk to the company's existing projects. Assume a classical tax system. Which statement is correct?

Your friend just bought a house for $1,000,000. He financed it using a $900,000 mortgage loan and a deposit of $100,000. In the context of residential housing and mortgages, the 'equity' or 'net wealth' tied up in a house is the value of the house less the value of the mortgage loan. Assuming that your friend's only asset is his house, his net wealth is $100,000. If house prices suddenly fall by 15%, what would be your friend's percentage change in net wealth? Assume that:
• No income (rent) was received from the house during the short time over which house prices fell.
• Your friend will not declare bankruptcy, he will always pay off his debts.

What is the NPV of the following series of cash flows when the discount rate is 5% given as an effective annual rate? The first payment of $10 is in 4 years, followed by payments every 6 months forever after that which shrink by 2% every 6 months. That is, the growth rate every 6 months is actually negative 2%, given as an effective 6 month rate. So the payment at $t=4.5$ years will be $10(1-0.02)^1=9.80$, and so on.

A stock pays annual dividends which are expected to continue forever. It just paid a dividend of $10. The growth rate in the dividend is 2% pa. You estimate that the stock's required return is 10% pa. Both the discount rate and growth rate are given as effective annual rates. Using the dividend discount model, what will be the share price?

A stock is expected to pay the following dividends:

Cash Flows of a Stock

| Time (yrs) | 0 | 1 | 2 | 3 | 4 | ... |
|---|---|---|---|---|---|---|
| Dividend ($) | 0.00 | 1.00 | 1.05 | 1.10 | 1.15 | ... |

After year 4, the annual dividend will grow in perpetuity at 5% pa, so:
• the dividend at t=5 will be $1.15(1+0.05),
• the dividend at t=6 will be $1.15(1+0.05)^2, and so on.
The required return on the stock is 10% pa. Both the growth rate and required return are given as effective annual rates. What is the current price of the stock?

A stock is expected to pay the following dividends:

Cash Flows of a Stock

| Time (yrs) | 0 | 1 | 2 | 3 | 4 | ... |
|---|---|---|---|---|---|---|
| Dividend ($) | 0.00 | 1.00 | 1.05 | 1.10 | 1.15 | ... |

After year 4, the annual dividend will grow in perpetuity at 5% pa, so:
• the dividend at t=5 will be $1.15(1+0.05),
• the dividend at t=6 will be $1.15(1+0.05)^2, and so on.
The required return on the stock is 10% pa. Both the growth rate and required return are given as effective annual rates. What will be the price of the stock in three and a half years (t = 3.5)?

A stock is expected to pay the following dividends:

Cash Flows of a Stock

| Time (yrs) | 0 | 1 | 2 | 3 | 4 | ... |
|---|---|---|---|---|---|---|
| Dividend ($) | 0.00 | 1.15 | 1.10 | 1.05 | 1.00 | ... |

After year 4, the annual dividend will grow in perpetuity at -5% pa. Note that this is a negative growth rate, so the dividend will actually shrink. So:
• the dividend at t=5 will be $1(1-0.05) = 0.95$,
• the dividend at t=6 will be $1(1-0.05)^2 = 0.9025$, and so on.
The required return on the stock is 10% pa. Both the growth rate and required return are given as effective annual rates. What is the current price of the stock?

A stock pays semi-annual dividends. It just paid a dividend of $10. The growth rate in the dividend is 1% every 6 months, given as an effective 6 month rate. You estimate that the stock's required return is 21% pa, as an effective annual rate. Using the dividend discount model, what will be the share price?

A stock is expected to pay the following dividends:

Cash Flows of a Stock

| Time (yrs) | 0 | 1 | 2 | 3 | 4 | ... |
|---|---|---|---|---|---|---|
| Dividend ($) | 0.00 | 1.15 | 1.10 | 1.05 | 1.00 | ... |

After year 4, the annual dividend will grow in perpetuity at -5% pa. Note that this is a negative growth rate, so the dividend will actually shrink. So:
• the dividend at t=5 will be $1(1-0.05) = 0.95$,
• the dividend at t=6 will be $1(1-0.05)^2 = 0.9025$, and so on.
The required return on the stock is 10% pa. Both the growth rate and required return are given as effective annual rates. What will be the price of the stock in four and a half years (t = 4.5)?

The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation.

$$p_0 = \frac{d_1}{r - g}$$

Which expression is NOT equal to the expected dividend yield?
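Since most of the surrounding questions lean on the Gordon growth equation just quoted, here is a minimal sketch of it in Python. The inputs (a $2 next dividend, 8% required return, 3% growth) are assumed for illustration and are not taken from any specific question above.

```python
# Sketch: Gordon growth (dividend discount) model, p0 = d1 / (r - g).
# Inputs are assumed for illustration; the formula requires r > g.

def gordon_price(d1, r, g):
    return d1 / (r - g)

d1, r, g = 2.0, 0.08, 0.03
p0 = gordon_price(d1, r, g)
print(p0)           # 40.0
print(d1 / p0)      # expected dividend (income) yield = r - g = 5%
print(r - d1 / p0)  # expected capital yield = g = 3%
```

The last two lines illustrate the decomposition these questions keep returning to: under the model, the expected dividend yield is $r - g$ and the expected capital yield is $g$.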
A share just paid its semi-annual dividend of $10. The dividend is expected to grow at 2% every 6 months forever. This 2% growth rate is an effective 6 month rate. Therefore the next dividend will be $10.20 in six months. The required return of the stock is 10% pa, given as an effective annual rate. What is the price of the share now?

A share just paid its semi-annual dividend of $10. The dividend is expected to grow at 2% every 6 months forever. This 2% growth rate is an effective 6 month rate. Therefore the next dividend will be $10.20 in six months. The required return of the stock is 10% pa, given as an effective annual rate. What is the price of the share now?

A stock pays annual dividends. It just paid a dividend of $3. The growth rate in the dividend is 4% pa. You estimate that the stock's required return is 10% pa. Both the discount rate and growth rate are given as effective annual rates. Using the dividend discount model, what will be the share price?

A stock is expected to pay the following dividends:

Cash Flows of a Stock

| Time (yrs) | 0 | 1 | 2 | 3 | 4 | ... |
|---|---|---|---|---|---|---|
| Dividend ($) | 8 | 8 | 8 | 20 | 8 | ... |

After year 4, the dividend will grow in perpetuity at 4% pa. The required return on the stock is 10% pa. Both the growth rate and required return are given as effective annual rates. What is the current price of the stock?

A stock is expected to pay the following dividends:

Cash Flows of a Stock

| Time (yrs) | 0 | 1 | 2 | 3 | 4 | ... |
|---|---|---|---|---|---|---|
| Dividend ($) | 8 | 8 | 8 | 20 | 8 | ... |

After year 4, the dividend will grow in perpetuity at 4% pa. The required return on the stock is 10% pa. Both the growth rate and required return are given as effective annual rates. What will be the price of the stock in 5 years (t = 5), just after the dividend at that time has been paid?

A stock pays annual dividends. It just paid a dividend of $5. The growth rate in the dividend is 1% pa. You estimate that the stock's required return is 8% pa. Both the discount rate and growth rate are given as effective annual rates. Using the dividend discount model, what will be the share price?

A stock is expected to pay the following dividends:

Cash Flows of a Stock

| Time (yrs) | 0 | 1 | 2 | 3 | 4 | ... |
|---|---|---|---|---|---|---|
| Dividend ($) | 2 | 2 | 2 | 10 | 3 | ... |

After year 4, the dividend will grow in perpetuity at 4% pa. The required return on the stock is 10% pa. Both the growth rate and required return are given as effective annual rates. What is the current price of the stock?

A stock is expected to pay the following dividends:

Cash Flows of a Stock

| Time (yrs) | 0 | 1 | 2 | 3 | 4 | ... |
|---|---|---|---|---|---|---|
| Dividend ($) | 2 | 2 | 2 | 10 | 3 | ... |

After year 4, the dividend will grow in perpetuity at 4% pa. The required return on the stock is 10% pa. Both the growth rate and required return are given as effective annual rates. What will be the price of the stock in 5 years (t = 5), just after the dividend at that time has been paid?

A share pays annual dividends. It just paid a dividend of $2. The growth rate in the dividend is 3% pa. You estimate that the stock's required return is 8% pa. Both the discount rate and growth rate are given as effective annual rates. Using the dividend discount model, what is the share price?

A stock is expected to pay the following dividends:

Cash Flows of a Stock

| Time (yrs) | 0 | 1 | 2 | 3 | 4 | ... |
|---|---|---|---|---|---|---|
| Dividend ($) | 0 | 6 | 12 | 18 | 20 | ... |

After year 4, the dividend will grow in perpetuity at 5% pa. The required return of the stock is 10% pa. Both the growth rate and required return are given as effective annual rates. What is the current price of the stock?

A stock is expected to pay the following dividends:

Cash Flows of a Stock

| Time (yrs) | 0 | 1 | 2 | 3 | 4 | ... |
|---|---|---|---|---|---|---|
| Dividend ($) | 0 | 6 | 12 | 18 | 20 | ... |

After year 4, the dividend will grow in perpetuity at 5% pa. The required return of the stock is 10% pa. Both the growth rate and required return are given as effective annual rates. What will be the price of the stock in 7 years (t = 7), just after the dividend at that time has been paid?

A stock just paid its annual dividend of $9. The share price is $60. The required return of the stock is 10% pa as an effective annual rate. What is the implied growth rate of the dividend per year?

A share just paid its semi-annual dividend of $5. The dividend is expected to grow at 1% every 6 months forever. This 1% growth rate is an effective 6 month rate. Therefore the next dividend will be $5.05 in six months. The required return of the stock is 8% pa, given as an effective annual rate. What is the price of the share now?

A company's shares just paid their annual dividend of $2 each. The stock price is now $40 (just after the dividend payment). The annual dividend is expected to grow by 3% every year forever. The assumptions of the dividend discount model are valid for this company. What do you expect the effective annual dividend yield to be in 3 years (dividend yield from t=3 to t=4)?

Stocks in the United States usually pay quarterly dividends. For example, the software giant Microsoft paid a $0.23 dividend every quarter over the 2013 financial year and plans to pay a $0.28 dividend every quarter over the 2014 financial year. Using the dividend discount model and net present value techniques, calculate the stock price of Microsoft assuming that:
• The time now is the beginning of July 2014. The next dividend of $0.28 will be received in 3 months (end of September 2014), with another 3 quarterly payments of $0.28 after this (end of December 2014, March 2015 and June 2015).
• The quarterly dividend will increase by 2.5% every year, but each quarterly dividend over the year will be equal. So each quarterly dividend paid in the financial year beginning in September 2015 will be $0.287 $(=0.28×(1+0.025)^1)$, with the last at the end of June 2016. In the next financial year beginning in September 2016 each quarterly dividend will be $0.294175 $(=0.28×(1+0.025)^2)$, with the last at the end of June 2017, and so on forever.
• The total required return on equity is 6% pa.
• The required return and growth rate are given as effective annual rates.
• Dividend payment dates and ex-dividend dates are at the same time.
• Remember that there are 4 quarters in a year and 3 months in a quarter.

What is the current stock price?

Estimate the US bank JP Morgan's share price using a price earnings (PE) multiples approach with the following assumptions and figures only:
• The major US banks JP Morgan Chase (JPM), Citi Group (C) and Wells Fargo (WFC) are comparable companies;
• JP Morgan Chase's historical earnings per share (EPS) is $4.37;
• Citi Group's share price is $50.05 and historical EPS is $4.26;
• Wells Fargo's share price is $48.98 and historical EPS is $3.89.

Note: Figures sourced from Google Finance on 24 March 2014.

Estimate the Chinese bank ICBC's share price using a backward-looking price earnings (PE) multiples approach with the following assumptions and figures only. Note that the renminbi (RMB) is the Chinese currency, also known as the yuan (CNY).
• The 4 major Chinese banks ICBC, China Construction Bank (CCB), Bank of China (BOC) and Agricultural Bank of China (ABC) are comparable companies;
• ICBC's historical earnings per share (EPS) is RMB 0.74;
• CCB's backward-looking PE ratio is 4.59;
• BOC's backward-looking PE ratio is 4.78;
• ABC's backward-looking PE ratio is also 4.78;

Note: Figures sourced from Google Finance on 25 March 2014. Share prices are from the Shanghai stock exchange.

Portfolio Details

| Stock | Expected return | Standard deviation | Correlation | Dollars invested |
|---|---|---|---|---|
| A | 0.1 | 0.4 | 0.5 | 60 |
| B | 0.2 | 0.6 | | 140 |

What is the expected return of the above portfolio?

Portfolio Details

| Stock | Expected return | Standard deviation | Covariance $(\sigma_{A,B})$ | Beta | Dollars invested |
|---|---|---|---|---|---|
| A | 0.2 | 0.4 | 0.12 | 0.5 | 40 |
| B | 0.3 | 0.8 | | 1.5 | 80 |

What is the standard deviation (not variance) of the above portfolio? Note that the stocks' covariance is given, not correlation.

Portfolio Details

| Stock | Expected return | Standard deviation | Correlation $(\rho_{A,B})$ | Dollars invested |
|---|---|---|---|---|
| A | 0.1 | 0.4 | 0.5 | 60 |
| B | 0.2 | 0.6 | | 140 |

What is the standard deviation (not variance) of the above portfolio?

Find the sample standard deviation of returns using the data in the table:

Stock Returns

| Year | Return pa |
|---|---|
| 2008 | 0.3 |
| 2009 | 0.02 |
| 2010 | -0.2 |
| 2011 | 0.4 |

The returns above and standard deviations below are given in decimal form.

There are many ways to write the ordinary annuity formula. Which of the following is NOT equal to the ordinary annuity formula?

A student just won the lottery. She won $1 million in cash after tax. She is trying to calculate how much she can spend per month for the rest of her life. She assumes that she will live for another 60 years. She wants to withdraw equal amounts at the beginning of every month, starting right now. All of the cash is currently sitting in a bank account which pays interest at a rate of 6% pa, given as an APR compounding per month. On her last withdrawal, she intends to have nothing left in her bank account. How much can she withdraw at the beginning of each month?

There are many ways to calculate a firm's free cash flow (FFCF), also called cash flow from assets (CFFA). Some include the annual interest tax shield in the cash flow and some do not. Which of the below FFCF formulas include the interest tax shield in the cash flow?

$$(1) \quad FFCF=NI + Depr - CapEx -ΔNWC + IntExp$$
$$(2) \quad FFCF=NI + Depr - CapEx -ΔNWC + IntExp.(1-t_c)$$
$$(3) \quad FFCF=EBIT.(1-t_c )+ Depr- CapEx -ΔNWC+IntExp.t_c$$
$$(4) \quad FFCF=EBIT.(1-t_c) + Depr- CapEx -ΔNWC$$
$$(5) \quad FFCF=EBITDA.(1-t_c )+Depr.t_c- CapEx -ΔNWC+IntExp.t_c$$
$$(6) \quad FFCF=EBITDA.(1-t_c )+Depr.t_c- CapEx -ΔNWC$$
$$(7) \quad FFCF=EBIT-Tax + Depr - CapEx -ΔNWC$$
$$(8) \quad FFCF=EBIT-Tax + Depr - CapEx -ΔNWC-IntExp.t_c$$
$$(9) \quad FFCF=EBITDA-Tax - CapEx -ΔNWC$$
$$(10) \quad FFCF=EBITDA-Tax - CapEx -ΔNWC-IntExp.t_c$$

The formulas for net income (NI also called earnings), EBIT and EBITDA are given below. Assume that depreciation and amortisation are both represented by 'Depr' and that 'FC' represents fixed costs such as rent.

$$NI=(Rev - COGS - Depr - FC - IntExp).(1-t_c )$$
$$EBIT=Rev - COGS - FC-Depr$$
$$EBITDA=Rev - COGS - FC$$
$$Tax =(Rev - COGS - Depr - FC - IntExp).t_c= \dfrac{NI.t_c}{1-t_c}$$

A retail furniture company buys furniture wholesale and distributes it through its retail stores. The owner believes that she has some good ideas for making stylish new furniture. She is considering a project to buy a factory and employ workers to manufacture the new furniture she's designed.
Furniture manufacturing has more systematic risk than furniture retailing. Her furniture retailing firm's after-tax WACC is 20%. Furniture manufacturing firms have an after-tax WACC of 30%. Both firms are optimally geared. Assume a classical tax system. Which method(s) will give the correct valuation of the new furniture-making project? Select the most correct answer.

A method commonly seen in textbooks for calculating a levered firm's free cash flow (FFCF, or CFFA) is the following:

$$\begin{aligned} FFCF &= (Rev - COGS - Depr - FC - IntExp)(1-t_c) + \\ &\space\space\space+ Depr - CapEx -\Delta NWC + IntExp(1-t_c) \\ \end{aligned}$$

Does this annual FFCF include or exclude the annual interest tax shield?

One formula for calculating a levered firm's free cash flow (FFCF, or CFFA) is to use earnings before interest and tax (EBIT).

$$\begin{aligned} FFCF &= (EBIT)(1-t_c) + Depr - CapEx -\Delta NWC + IntExp.t_c \\ &= (Rev - COGS - Depr - FC)(1-t_c) + Depr - CapEx -\Delta NWC + IntExp.t_c \\ \end{aligned}$$

Does this annual FFCF include or exclude the annual interest tax shield?

The US firm Google operates in the online advertising business. In 2011 Google bought Motorola Mobility which manufactures mobile phones. Assume the following:
• Google had a 10% after-tax weighted average cost of capital (WACC) before it bought Motorola.
• Motorola had a 20% after-tax WACC before it merged with Google.
• Google and Motorola have the same level of gearing.
• Both companies operate in a classical tax system.

You are a manager at Motorola. You must value a project for making mobile phones. Which method(s) will give the correct valuation of the mobile phone manufacturing project? Select the most correct answer. The mobile phone manufacturing project's:

Your friend is trying to find the net present value of a project. The project is expected to last for just one year with:
• a negative cash flow of -$1 million initially (t=0), and
• a positive cash flow of $1.1 million in one year (t=1).

The project has a total required return of 10% pa due to its moderate level of undiversifiable risk. Your friend is aware of the importance of opportunity costs and the time value of money, but he is unsure of how to find the NPV of the project. He knows that the opportunity cost of investing the $1m in the project is the expected gain from investing the money in shares instead. Like the project, shares also have an expected return of 10% since they have moderate undiversifiable risk. This opportunity cost is $0.1m $(=1m \times 10\%)$ which occurs in one year (t=1). He knows that the time value of money should be accounted for, and this can be done by finding the present value of the cash flows in one year. Your friend has listed a few different ways to find the NPV which are written down below.

(I) $-1m + \dfrac{1.1m}{(1+0.1)^1}$
(II) $-1m + \dfrac{1.1m}{(1+0.1)^1} - \dfrac{1m}{(1+0.1)^1} \times 0.1$
(III) $-1m + \dfrac{1.1m}{(1+0.1)^1} - \dfrac{1.1m}{(1+0.1)^1} \times 0.1$
(IV) $-1m + 1.1m - \dfrac{1.1m}{(1+0.1)^1} \times 0.1$
(V) $-1m + 1.1m - 1.1m \times 0.1$

Which of the above calculations give the correct NPV? Select the most correct answer.

For a bond that pays fixed semi-annual coupons, how is the annual coupon rate defined, and how is the bond's annual income yield from time 0 to 1 defined mathematically?
Let: $P_0$ be the bond price now, $F_T$ be the bond's face value, $T$ be the bond's maturity in years, $r_\text{total}$ be the bond's total yield, $r_\text{income}$ be the bond's income yield, $r_\text{capital}$ be the bond's capital yield, and $C_t$ be the bond's coupon at time t in years. So $C_{0.5}$ is the coupon in 6 months, $C_1$ is the coupon in 1 year, and so on.

If a project's net present value (NPV) is zero, then its internal rate of return (IRR) will be:

A two year Government bond has a face value of $100, a yield of 0.5% and a fixed coupon rate of 0.5%, paid semi-annually. What is its price?

The theory of fixed interest bond pricing is an application of the theory of Net Present Value (NPV). Also, a 'fairly priced' asset is not over- or under-priced. Buying or selling a fairly priced asset has an NPV of zero. Considering this, which of the following statements is NOT correct?

One method for calculating a firm's free cash flow (FFCF, or CFFA) is to ignore interest expense. That is, pretend that interest expense $(IntExp)$ is zero:

$$\begin{aligned} FFCF &= (Rev - COGS - Depr - FC - IntExp)(1-t_c) + Depr - CapEx -\Delta NWC + IntExp \\ &= (Rev - COGS - Depr - FC - 0)(1-t_c) + Depr - CapEx -\Delta NWC - 0\\ \end{aligned}$$

Does this annual FFCF with zero interest expense include or exclude the annual interest tax shield?

Project Data

| Item | Value |
|---|---|
| Project life | 2 yrs |
| Initial investment in equipment | $600k |
| Depreciation of equipment per year | $250k |
| Expected sale price of equipment at end of project | $200k |
| Revenue per job | $12k |
| Variable cost per job | $4k |
| Quantity of jobs per year | 120 |
| Fixed costs per year, paid at the end of each year | $100k |
| Interest expense in first year (at t=1) | $16.091k |
| Interest expense in second year (at t=2) | $9.711k |
| Tax rate | 30% |
| Government treasury bond yield | 5% |
| Bank loan debt yield | 6% |
| Levered cost of equity | 12.5% |
| Market portfolio return | 10% |
| Beta of assets | 1.24 |
| Beta of levered equity | 1.5 |
| Firm's and project's debt-to-equity ratio | 25% |

Notes
1. The project will require an immediate purchase of $50k of inventory, which will all be sold at cost when the project ends. Current liabilities are negligible so they can be ignored.

Assumptions
• The debt-to-equity ratio will be kept constant throughout the life of the project. The amount of interest expense at the end of each period has been correctly calculated to maintain this constant debt-to-equity ratio. Note that interest expense is different in each year.
• Thousands are represented by 'k' (kilo).
• All cash flows occur at the start or end of the year as appropriate, not in the middle or throughout the year.
• All rates and cash flows are nominal. The inflation rate is 2% pa.
• All rates are given as effective annual rates.
• The 50% capital gains tax discount is not available since the project is undertaken by a firm, not an individual.

What is the net present value (NPV) of the project?

In Australia, nominal yields on semi-annual coupon paying Government Bonds with 2 years until maturity are currently 2.83% pa. The inflation rate is currently 2.2% pa, given as an APR compounding per quarter. The inflation rate is not expected to change over the next 2 years. What is the real yield on these bonds, given as an APR compounding every 6 months?

In Germany, nominal yields on semi-annual coupon paying Government Bonds with 2 years until maturity are currently 0.04% pa. The inflation rate is currently 1.4% pa, given as an APR compounding per quarter. The inflation rate is not expected to change over the next 2 years.
What is the real yield on these bonds, given as an APR compounding every 6 months?

You are a banker about to grant a 2 year loan to a customer. The loan's principal and interest will be repaid in a single payment at maturity, sometimes called a zero-coupon loan, discount loan or bullet loan. You require a real return of 6% pa over the two years, given as an effective annual rate. Inflation is expected to be 2% this year and 4% next year, both given as effective annual rates. You judge that the customer can afford to pay back $1,000,000 in 2 years, given as a nominal cash flow. How much should you lend to her right now?

When valuing assets using discounted cash flow (net present value) methods, it is important to consider inflation. To properly deal with inflation:

(I) Discount nominal cash flows by nominal discount rates.
(II) Discount nominal cash flows by real discount rates.
(III) Discount real cash flows by nominal discount rates.
(IV) Discount real cash flows by real discount rates.

Which of the above statements is or are correct?

Unrestricted negative gearing is allowed in Australia, New Zealand and Japan. Negative gearing laws allow income losses on investment properties to be deducted from a tax-payer's pre-tax personal income. Negatively geared investors benefit from this tax advantage. They also hope to benefit from capital gains which exceed the income losses. For example, a property investor buys an apartment funded by an interest only mortgage loan. Interest expense is $2,000 per month. The rental payments received from the tenant living on the property are $1,500 per month. The investor can deduct this income loss of $500 per month from his pre-tax personal income. If his personal marginal tax rate is 46.5%, this saves $232.50 per month in personal income tax. The advantage of negative gearing is an example of the benefits of:

The US government recently announced that subsidies for fresh milk producers will be gradually phased out over the next year. Newspapers say that there are expectations of a 40% increase in the spot price of fresh milk over the next year. Option prices on fresh milk trading on the Chicago Mercantile Exchange (CME) reflect expectations of this 40% increase in spot prices over the next year. Similarly to the rest of the market, you believe that prices will rise by 40% over the next year. What option trades are likely to be profitable, or to be more specific, result in a positive Net Present Value (NPV)? Assume that:
• Only the spot price is expected to increase and there is no change in expected volatility or other variables that affect option prices.
• No taxes, transaction costs, information asymmetry, bid-ask spreads or other market frictions.

Economic statistics released this morning were a surprise: they show a strong chance of consumer price inflation (CPI) reaching 5% pa over the next 2 years. This is much higher than the previous forecast of 3% pa. A vanilla fixed-coupon 2-year risk-free government bond was issued at par this morning, just before the economic news was released. What is the expected change in bond price after the economic news this morning, and in the next 2 years? Assume that:
• Inflation remains at 5% over the next 2 years.
• Investors demand a constant real bond yield.
• The bond price falls by the (after-tax) value of the coupon the night before the ex-coupon date, as in real life.
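The two real-yield questions above can be checked with the exact Fisher relation $(1+r_\text{nominal}) = (1+r_\text{real})(1+r_\text{inflation})$ once all rates are converted to effective annual form. The sketch below uses the Australian figures; the two helper functions are assumed conversions, not anything taken from the source.

```python
# Sketch: real yield from a nominal yield and an inflation rate that are
# quoted as APRs with different compounding frequencies (the Australian
# figures above: 2.83% pa nominal, 2.2% pa inflation).

def apr_to_eff_annual(apr, m):
    """APR compounding m times per year -> effective annual rate."""
    return (1 + apr / m) ** m - 1

def eff_annual_to_apr(eff, m):
    """Effective annual rate -> APR compounding m times per year."""
    return ((1 + eff) ** (1 / m) - 1) * m

nominal_eff = apr_to_eff_annual(0.0283, 2)   # bond yield, APR comp. semi-annually
inflation_eff = apr_to_eff_annual(0.022, 4)  # inflation, APR comp. quarterly
real_eff = (1 + nominal_eff) / (1 + inflation_eff) - 1   # exact Fisher relation
print(eff_annual_to_apr(real_eff, 2))        # ~ 0.0062 = 0.62% pa
```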
In the Merton model of corporate debt, buying a levered company's debt is equivalent to buying risk free government bonds and:

Below are 4 option graphs. Note that the y-axis is payoff at maturity (T). What options do they depict? List them in the order that they are numbered.

Below are 4 option graphs. Note that the y-axis is payoff at maturity (T). What options do they depict? List them in the order that they are numbered.

You have just sold an 'in the money' 6 month European put option on the mining company BHP at an exercise price of $40 for a premium of $3. Which of the following statements best describes your situation?

The 'option price' in an option contract is paid at the start when the option contract is agreed to. True or false?

The 'option strike price' in an option contract, also known as the exercise price, is paid at the start when the option contract is agreed to. True or false?

Which one of the following is NOT usually considered an 'investable' asset for long-term wealth creation?

You believe that the price of a share will fall significantly very soon, but the rest of the market does not. The market thinks that the share price will remain the same. Assuming that your prediction will soon be true, which of the following trades is a bad idea? In other words, which trade will NOT make money or prevent losses?

Which option position has the possibility of unlimited potential losses?

In the Merton model of corporate debt, buying a levered company's shares is equivalent to:

In the Merton model of corporate debt, buying a levered company's debt is equivalent to buying the company's assets and:

Which of the following is the least useful method or model to calculate the value of a real option in a project?

A risky firm will last for one period only (t=0 to 1), then it will be liquidated. So its assets will be sold and the debt holders and equity holders will be paid out in that order. The firm has the following quantities: $V$ = Market value of assets. $E$ = Market value of (levered) equity. $D$ = Market value of zero coupon bonds. $F_1$ = Total face value of zero coupon bonds which is promised to be paid in one year. The levered equity graph above contains bold labels a to e. Which of the following statements about those labels is NOT correct?

A risky firm will last for one period only (t=0 to 1), then it will be liquidated. So its assets will be sold and the debt holders and equity holders will be paid out in that order. The firm has the following quantities: $V$ = Market value of assets. $E$ = Market value of (levered) equity. $D$ = Market value of zero coupon bonds. $F_1$ = Total face value of zero coupon bonds which is promised to be paid in one year. The risky corporate debt graph above contains bold labels a to e. Which of the following statements about those labels is NOT correct?

One of the reasons why firms may not begin projects with relatively small positive net present values (NPV's) is because they wish to maximise the value of their:

A moped is a bicycle with pedals and a little motor that can be switched on to assist the rider. Mopeds offer the rider:

You're thinking of starting a new cafe business, but you're not sure if it will be profitable. You have to decide what type of cups, mugs and glasses you wish to buy. You can have your cafe's name printed on them, or plain un-marked ones. For marketing reasons it's better to have the cafe name printed, but the plain un-marked cups, mugs and glasses maximise your:

An expansion option is best modeled as a call or a put option?
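Many of the option questions above (the payoff diagrams, the sold BHP put, and the long call and put payoff expressions that follow) rest on the same two maturity payoffs. A minimal sketch, with an assumed $40 strike echoing the BHP question:

```python
# Sketch: European option payoffs at maturity T. s_t is the underlying
# price at maturity, k the strike; a short position's payoff is the
# negative of the corresponding long position. Premiums are ignored.

def long_call(s_t, k):
    return max(s_t - k, 0.0)

def long_put(s_t, k):
    return max(k - s_t, 0.0)

k = 40.0  # assumed strike, echoing the $40 BHP put above
for s_t in (20.0, 40.0, 60.0):
    print(s_t,
          long_call(s_t, k), long_put(s_t, k),     # long positions
          -long_call(s_t, k), -long_put(s_t, k))   # short positions
```

Plotting these four payoff functions against the maturity price reproduces the four standard hockey-stick diagrams that the graph questions refer to.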
An abandonment option is best modeled as a call or a put option?

A timing option is best modeled as a call or a put option?

According to option theory, it's rational for students to submit their assignments as early or as late as possible?

Some financially minded people insist on a prenuptial agreement before committing to marry their partner. This agreement states how the couple's assets should be divided in case they divorce. Prenuptial agreements are designed to give the richer partner more of the couple's assets if they divorce, thus maximising the richer partner's:

The cheapest mobile phones available tend to be those that are 'locked' into a cell phone operator's network. Locked phones can not be used with other cell phone operators' networks. Locked mobile phones are cheaper than unlocked phones because the locked-in network operator helps create a monopoly by:

Your firm's research scientists can begin an exciting new project at a cost of $10m now, after which there’s a:
• 70% chance that cash flows will be $1m per year forever, starting in 5 years (t=5). This is the A state of the world.
• 20% chance that cash flows will be $3m per year forever, starting in 5 years (t=5). This is the B state of the world.
• 10% chance of a major breakthrough in which case the cash flows will be $20m per year forever starting in 5 years (t=5), or the project can be expanded by investing another $10m (at t=5) which is expected to give cash flows of $60m per year forever, starting at year 9 (t=9). This is the C state of the world.

The firm's cost of capital is 10% pa. What's the present value (at t=0) of the option to expand in year 5?

A European call option will mature in $T$ years with a strike price of $K$ dollars. The underlying asset has a price of $S$ dollars. What is an expression for the payoff at maturity $(f_T)$ in dollars from owning (being long) the call option?

A European put option will mature in $T$ years with a strike price of $K$ dollars. The underlying asset has a price of $S$ dollars. What is an expression for the payoff at maturity $(f_T)$ in dollars from owning (being long) the put option?

A levered firm has zero-coupon bonds which mature in one year and have a combined face value of $9.9m. Investors are risk-neutral and therefore all debt and equity holders demand the same required return of 10% pa. In one year the firm's assets will be worth:
• $13.2m with probability 0.5 in the good state of the world, or
• $6.6m with probability 0.5 in the bad state of the world.

A new project presents itself which requires an investment of $2m and will provide a certain cash flow of $3.3m in one year. The firm doesn't have any excess cash to make the initial $2m investment, but the funds can be raised from shareholders through a fairly priced rights issue. Ignore all transaction costs. Should shareholders vote to proceed with the project and equity raising? What will be the gain in shareholder wealth if they decide to proceed?

A levered firm has a market value of assets of $10m. Its debt is all comprised of zero-coupon bonds which mature in one year and have a combined face value of $9.9m. Investors are risk-neutral and therefore all debt and equity holders demand the same required return of 10% pa. Therefore the current market capitalisation of debt $(D_0)$ is $9m and equity $(E_0)$ is $1m.
A new project presents itself which requires an investment of $2m and will provide a:
• $6.6m cash flow with probability 0.5 in the good state of the world, and a
• -$4.4m (notice the negative sign) cash flow with probability 0.5 in the bad state of the world.

The project can be funded using the company's excess cash, no debt or equity raisings are required. What would be the new market capitalisation of equity $(E_\text{0, with project})$ if shareholders vote to proceed with the project, and therefore should shareholders proceed with the project?

You just signed up for a 30 year interest-only mortgage with monthly payments of $3,000 per month. The interest rate is 6% pa which is not expected to change. How much did you borrow? After 15 years, just after the 180th payment at that time, how much will be owing on the mortgage? The interest rate is still 6% and is not expected to change. Remember that the mortgage is interest-only and that mortgage payments are paid in arrears (at the end of the month).

Which of the following statements is NOT correct? Borrowers:

Which of the following statements is NOT correct? Lenders:

Which of the following statements is NOT equivalent to the yield on debt? Assume that the debt being referred to is fairly priced, but do not assume that it's priced at par.

Question 109  credit rating, credit risk

Bonds with lower (worse) credit ratings tend to have:

A highly leveraged risky firm is trying to raise more debt. The types of debt being considered, in no particular order, are senior bonds, junior bonds, bank accepted bills, promissory notes and bank loans. Which of these forms of debt is the safest from the perspective of the debt investors who are thinking of investing in the firm's new debt?

You're considering making an investment in a particular company. They have preference shares, ordinary shares, senior debt and junior debt. Which is the safest investment? Which will give the highest returns?

An 'interest payment' is the same thing as a 'coupon payment'. True or false?

An 'interest rate' is the same thing as a 'coupon rate'. True or false?

An 'interest rate' is the same thing as a 'yield'. True or false?

Which of the following statements about effective rates and annualised percentage rates (APR's) is NOT correct?

A three year corporate bond yields 12% pa with a coupon rate of 10% pa, paid semi-annually. Find the effective six month yield, effective annual yield and the effective daily yield. Assume that each month has 30 days and that there are 360 days in a year. All answers are given in the same order: $r_\text{eff semi-annual}$, $r_\text{eff yearly}$, $r_\text{eff daily}$.

A bond maturing in 10 years has a coupon rate of 4% pa, paid semi-annually. The bond's yield is currently 6% pa. The face value of the bond is $100. What is its price?

A 30 year Japanese government bond was just issued at par with a yield of 1.7% pa. The fixed coupon payments are semi-annual. The bond has a face value of $100. Six months later, just after the first coupon is paid, the yield of the bond increases to 2% pa. What is the bond's new price?

In these tough economic times, central banks around the world have cut interest rates so low that they are practically zero. In some countries, government bond yields are also very close to zero. A three year government bond with a face value of $100 and a coupon rate of 2% pa paid semi-annually was just issued at a yield of 0%. What is the price of the bond?

Which of the following statements about risk free government bonds is NOT correct?
Hint: Total return can be broken into income and capital returns as follows:

$$\begin{aligned} r_\text{total} &= \frac{c_1}{p_0} + \frac{p_1-p_0}{p_0} \\ &= r_\text{income} + r_\text{capital} \end{aligned}$$

The capital return is the growth rate of the price. The income return is the periodic cash flow. For a bond this is the coupon payment.

Bonds X and Y are issued by different companies, but they both pay a semi-annual coupon of 10% pa and they have the same face value ($100) and maturity (3 years). The only difference is that bond X and Y's yields are 8 and 12% pa respectively. Which of the following statements is true?

A European company just issued two bonds, a
• 2 year zero coupon bond at a yield of 8% pa, and a
• 3 year zero coupon bond at a yield of 10% pa.

What is the company's forward rate over the third year (from t=2 to t=3)? Give your answer as an effective annual rate, which is how the above bond yields are quoted.

An Australian company just issued two bonds:
• A 1 year zero coupon bond at a yield of 8% pa, and
• A 2 year zero coupon bond at a yield of 10% pa.

What is the forward rate on the company's debt from years 1 to 2? Give your answer as an APR compounding every 6 months, which is how the above bond yields are quoted.

A firm wishes to raise $20 million now. They will issue 8% pa semi-annual coupon bonds that will mature in 5 years and have a face value of $100 each. Bond yields are 6% pa, given as an APR compounding every 6 months, and the yield curve is flat. How many bonds should the firm issue?

The coupon rate of a fixed annual-coupon bond is constant (always the same). What can you say about the income return ($r_\text{income}$) of a fixed annual coupon bond? Remember that:

$$r_\text{total} = r_\text{income} + r_\text{capital}$$

$$r_\text{total, 0 to 1} = \frac{c_1}{p_0} + \frac{p_1-p_0}{p_0}$$

Assume that there is no change in the bond's total annual yield to maturity from when it is issued to when it matures. Select the most correct statement. From its date of issue until maturity, the income return of a fixed annual coupon:

You just bought $100,000 worth of inventory from a wholesale supplier. You are given the option of paying within 5 days and receiving a 2% discount, or paying the full price within 60 days. You actually don't have the cash to pay within 5 days, but you could borrow it from the bank (as an overdraft) at 10% pa, given as an effective annual rate. In 60 days you will have enough money to pay the full cost without having to borrow from the bank. What is the implicit interest rate charged by the wholesale supplier, given as an effective annual rate? Also, should you borrow from the bank in 5 days to pay the supplier and receive the discount? Or just pay the full price on the last possible date? Assume that there are 365 days per year.

You just signed up for a 30 year fully amortising mortgage loan with monthly payments of $1,500 per month. The interest rate is 9% pa which is not expected to change. How much did you borrow? After 10 years, how much will be owing on the mortgage? The interest rate is still 9% and is not expected to change.

You're advising your superstar client 40-cent who is weighing up buying a private jet or a luxury yacht. 40-cent is just as happy with either, but he wants to go with the more cost-effective option. These are the cash flows of the two options:
• The private jet can be bought for $6m now, which will cost $12,000 per month in fuel, piloting and airport costs, payable at the end of each month. The jet will last for 12 years.
You're advising your superstar client 40-cent who is weighing up buying a private jet or a luxury yacht. 40-cent is just as happy with either, but he wants to go with the more cost-effective option. These are the cash flows of the two options:
• The private jet can be bought for $6m now, which will cost $12,000 per month in fuel, piloting and airport costs, payable at the end of each month. The jet will last for 12 years.
• Or the luxury yacht can be bought for $4m now, which will cost $20,000 per month in fuel, crew and berthing costs, payable at the end of each month. The yacht will last for 20 years.
What's unusual about 40-cent is that he is so famous that he will actually be able to sell his jet or yacht for the same price as it was bought since the next generation of superstar musicians will buy it from him as a status symbol. Bank interest rates are 10% pa, given as an effective annual rate. You can assume that 40-cent will live for another 60 years and that when the jet or yacht's life is at an end, he will buy a new one with the same details as above. Note that the effective monthly rate is $r_\text{eff monthly}=(1+0.1)^{1/12}-1=0.00797414$

Details of two different types of light bulbs are given below:
• Low-energy light bulbs cost $3.50, have a life of nine years, and use about $1.60 of electricity a year, paid at the end of each year.
• Conventional light bulbs cost only $0.50, but last only about a year and use about $6.60 of energy a year, paid at the end of each year.
The real discount rate is 5%, given as an effective annual rate. Assume that all cash flows are real. The inflation rate is 3% given as an effective annual rate. Find the Equivalent Annual Cost (EAC) of the low-energy and conventional light bulbs. The below choices are listed in that order.

Carlos and Edwin are brothers and they both love Holden Commodore cars. Carlos likes to buy the latest Holden Commodore car for $40,000 every 4 years as soon as the new model is released. As soon as he buys the new car, he sells the old one on the second hand car market for $20,000. Carlos never has to bother with paying for repairs since his cars are brand new. Edwin also likes Commodores, but prefers to buy 4-year old cars for $20,000 and keep them for 11 years until the end of their life (new ones last for 15 years in total but the 4-year old ones only last for another 11 years). Then he sells the old car for $2,000 and buys another 4-year old second hand car, and so on. Every time Edwin buys a second hand 4 year old car he immediately has to spend $1,000 on repairs, and then $1,000 every year after that for the next 10 years. So there are 11 payments in total from when the second hand car is bought at t=0 to the last payment at t=10. One year later (t=11) the old car is at the end of its total 15 year life and can be scrapped for $2,000. Assuming that Carlos and Edwin maintain their love of Commodores and keep up their habits of buying new ones and second hand ones respectively, how much larger is Carlos' equivalent annual cost of car ownership compared with Edwin's? The real discount rate is 10% pa. All cash flows are real and are expected to remain constant. Inflation is forecast to be 3% pa. All rates are effective annual. Ignore capital gains tax and tax savings from depreciation since cars are tax-exempt for individuals.
An industrial chicken farmer grows chickens for their meat. Chickens:
1. Cost $0.50 each to buy as chicks. They are bought on the day they're born, at t=0.
2. Grow at a rate of $0.70 worth of meat per chicken per week for the first 6 weeks (t=0 to t=6).
3. Grow at a rate of $0.40 worth of meat per chicken per week for the next 4 weeks (t=6 to t=10) since they're older and grow more slowly.
4. Feed costs are $0.30 per chicken per week for their whole life. Chicken feed is bought and fed to the chickens once per week at the beginning of the week. So the first amount of feed bought for a chicken at t=0 costs $0.30, and so on.
5. Can be slaughtered (killed for their meat) and sold at no cost at the end of the week. The price received for the chicken is their total value of meat (note that the chicken grows fast then slow, see above).
The required return of the chicken farm is 0.5% given as an effective weekly rate. Ignore taxes and the fixed costs of the factory. Ignore the chicken's welfare and other environmental and ethical concerns. Find the equivalent weekly cash flow of slaughtering a chicken at 6 weeks and at 10 weeks so the farmer can figure out the best time to slaughter his chickens. The choices below are given in the same order, 6 and 10 weeks.

You just bought a nice dress which you plan to wear once per month on nights out. You bought it a moment ago for $600 (at t=0). In your experience, dresses used once per month last for 6 years. Your younger sister is a student with no money and wants to borrow your dress once a month when she hits the town. With the increased use, your dress will only last for another 3 years rather than 6. What is the present value of the cost of letting your sister use your current dress for the next 3 years? Assume: that bank interest rates are 10% pa, given as an effective annual rate; you will buy a new dress when your current one wears out; your sister will only use the current dress, not the next one that you will buy; and the price of a new dress never changes.

You own a nice suit which you wear once per week on nights out. You bought it one year ago for $600. In your experience, suits used once per week last for 6 years. So you expect yours to last for another 5 years. Your younger brother said that retro is back in style so he wants to borrow your suit once a week when he goes out. With the increased use, your suit will only last for another 4 years rather than 5. What is the present value of the cost of letting your brother use your current suit for the next 4 years? Assume: that bank interest rates are 10% pa, given as an effective annual rate; you will buy a new suit when your current one wears out and your brother will not use the new one; your brother will only use your current suit so he will only use it for the next four years; and the price of a new suit never changes.

You're about to buy a car. These are the cash flows of the two different cars that you can buy:
• You can buy an old car for $5,000 now, for which you will have to buy $90 of fuel at the end of each week from the date of purchase. The old car will last for 3 years, at which point you will sell the old car for $500.
• Or you can buy a new car for $14,000 now for which you will have to buy $50 of fuel at the end of each week from the date of purchase. The new car will last for 4 years, at which point you will sell the new car for $1,000.
Bank interest rates are 10% pa, given as an effective annual rate. Assume that there are exactly 52 weeks in a year. Ignore taxes and environmental and pollution factors. Should you buy the old car or the new car?

Details of two different types of desserts or edible treats are given below:
• High-sugar treats like candy, chocolate and ice cream make a person very happy. High sugar treats are cheap at only $2 per day.
• Low-sugar treats like nuts, cheese and fruit make a person equally happy if these foods are of high quality. Low sugar treats are more expensive at $4 per day.
The advantage of low-sugar treats is that a person only needs to pay the dentist $2,000 for fillings and root canal therapy once every 15 years. Whereas with high-sugar treats, that treatment needs to be done every 5 years. The real discount rate is 10%, given as an effective annual rate. Assume that there are 365 days in every year and that all cash flows are real. The inflation rate is 3% given as an effective annual rate. Find the equivalent annual cash flow (EAC) of the high-sugar treats and low-sugar treats, including dental costs. The below choices are listed in that order. Ignore the pain of dental therapy, personal preferences and other factors.
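The light bulb, car and dessert comparisons above all reduce to an equivalent annual cost calculation: find the present value of owning the asset over one life, then spread it as a level annuity. A minimal Python sketch, not part of the original questions (function names are mine):

```python
def annuity_factor(r, n):
    """Present value of $1 paid at the end of each of n periods, at rate r per period."""
    return (1 - (1 + r) ** -n) / r

def equivalent_annual_cost(upfront, annual_cost, r, life_years):
    """EAC: the constant end-of-year cash flow with the same PV as owning the asset."""
    pv_costs = upfront + annual_cost * annuity_factor(r, life_years)
    return pv_costs / annuity_factor(r, life_years)

# Light bulb question above (real rate 5% pa, all cash flows real):
print(round(equivalent_annual_cost(3.50, 1.60, 0.05, 9), 2))  # low-energy: ~2.09 per year
print(round(equivalent_annual_cost(0.50, 6.60, 0.05, 1), 2))  # conventional: ~7.13 per year
```

The same pattern handles resale values and repair schedules: just add their (signed) present values into `pv_costs` before dividing by the annuity factor.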
Acquirer firm plans to launch a takeover of Target firm. The deal is expected to create a present value of synergies totaling $2 million. A cash offer will be made that pays the fair price for the target's shares plus 70% of the total synergy value. The cash will be paid out of the firm's cash holdings, no new debt or equity will be raised.

Firms Involved in the Takeover
                       Acquirer  Target
Assets ($m)            60        10
Debt ($m)              20        2
Share price ($)        10        8
Number of shares (m)   4         1

Ignore transaction costs and fees. Assume that the firms' debt and equity are fairly priced, and that each firms' debts' risk, yield and values remain constant. The acquisition is planned to occur immediately, so ignore the time value of money. Calculate the merged firm's share price and total number of shares after the takeover has been completed.

Acquirer firm plans to launch a takeover of Target firm. The deal is expected to create a present value of synergies totaling $2 million. A scrip offer will be made that pays the fair price for the target's shares plus 70% of the total synergy value.

Firms Involved in the Takeover
                       Acquirer  Target
Assets ($m)            60        10
Debt ($m)              20        2
Share price ($)        10        8
Number of shares (m)   4         1

Ignore transaction costs and fees. Assume that the firms' debt and equity are fairly priced, and that each firms' debts' risk, yield and values remain constant. The acquisition is planned to occur immediately, so ignore the time value of money. Calculate the merged firm's share price and total number of shares after the takeover has been completed.

Acquirer firm plans to launch a takeover of Target firm. The deal is expected to create a present value of synergies totaling $0.5 million, but investment bank fees and integration costs with a present value of $1.5 million are expected. A 10% cash and 90% scrip offer will be made that pays the fair price for the target's shares only. Assume that the Target and Acquirer agree to the deal. The cash will be paid out of the firms' cash holdings, no new debt or equity will be raised.

Firms Involved in the Takeover
                       Acquirer  Target
Assets ($m)            60        10
Debt ($m)              20        2
Share price ($)        10        8
Number of shares (m)   4         1

Assume that the firms' debt and equity are fairly priced, and that each firms' debts' risk, yield and values remain constant. The acquisition is planned to occur immediately, so ignore the time value of money. Calculate the merged firm's share price and total number of shares after the takeover has been completed.

In a takeover deal where the offer is 100% cash, the merged firm's number of shares will be equal to the acquirer firm's original number of shares. True or false?

In a takeover deal where the offer is 100% scrip (shares), the merged firm's number of shares will be equal to the acquirer firm's original number of shares. True or false?

In a takeover deal where the offer is 100% scrip (shares), the merged firm's number of shares will be equal to the sum of the acquirer and target firms' original number of shares. True or false?
Acquirer firm plans to launch a takeover of Target firm. The deal is expected to create a present value of synergies totaling $105 million. A cash offer will be made that pays the fair price for the target's shares plus 75% of the total synergy value. The cash will be paid out of the firm's cash holdings, no new debt or equity will be raised.

Firms Involved in the Takeover
                       Acquirer  Target
Assets ($m)            6,000     700
Debt ($m)              4,800     400
Share price ($)        40        20
Number of shares (m)   30        15

Ignore transaction costs and fees. Assume that the firms' debt and equity are fairly priced, and that each firms' debts' risk, yield and values remain constant. The acquisition is planned to occur immediately, so ignore the time value of money. Calculate the merged firm's share price and total number of shares after the takeover has been completed.

Acquirer firm plans to launch a takeover of Target firm. The deal is expected to create a present value of synergies totaling $105 million. A scrip offer will be made that pays the fair price for the target's shares plus 75% of the total synergy value.

Firms Involved in the Takeover
                       Acquirer  Target
Assets ($m)            6,000     700
Debt ($m)              4,800     400
Share price ($)        40        20
Number of shares (m)   30        15

Ignore transaction costs and fees. Assume that the firms' debt and equity are fairly priced, and that each firms' debts' risk, yield and values remain constant. The acquisition is planned to occur immediately, so ignore the time value of money. Calculate the merged firm's share price and total number of shares after the takeover has been completed.

Acquirer firm plans to launch a takeover of Target firm. The firms operate in different industries and the CEO's rationale for the merger is to increase diversification and thereby decrease risk. The deal is not expected to create any synergies. An 80% scrip and 20% cash offer will be made that pays the fair price for the target's shares. The cash will be paid out of the firms' cash holdings, no new debt or equity will be raised.

Firms Involved in the Takeover
                       Acquirer  Target
Assets ($m)            6,000     700
Debt ($m)              4,800     400
Share price ($)        40        20
Number of shares (m)   30        15

Ignore transaction costs and fees. Assume that the firms' debt and equity are fairly priced, and that each firms' debts' risk, yield and values remain constant. The acquisition is planned to occur immediately, so ignore the time value of money. Calculate the merged firm's share price and total number of shares after the takeover has been completed.

Acquirer firm plans to launch a takeover of Target firm. The deal is expected to create a present value of synergies totaling $105 million. A 40% scrip and 60% cash offer will be made that pays the fair price for the target's shares plus 75% of the total synergy value. The cash will be paid out of the firm's cash holdings, no new debt or equity will be raised.

Firms Involved in the Takeover
                       Acquirer  Target
Assets ($m)            6,000     700
Debt ($m)              4,800     400
Share price ($)        40        20
Number of shares (m)   30        15

Ignore transaction costs and fees. Assume that the firms' debt and equity are fairly priced, and that each firms' debts' risk, yield and values remain constant. The acquisition is planned to occur immediately, so ignore the time value of money. Calculate the merged firm's share price and total number of shares after the takeover has been completed.
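For the 100% cash offers above, the mechanics are simple enough to sketch in a few lines of Python. This is my own illustrative helper, not part of the question set; it assumes (as the questions state) that the cash comes from existing holdings, so no new shares are issued:

```python
def merged_price_cash_offer(e_acq, n_acq, e_tgt, synergy, premium_frac):
    """Merged share price after a 100% cash takeover funded from cash holdings.

    Cash paid = fair value of target equity + premium_frac of the synergies.
    With a pure cash offer no new shares are issued, so the share count
    stays at the acquirer's original number of shares.
    """
    cash_paid = e_tgt + premium_frac * synergy
    merged_equity = e_acq + e_tgt + synergy - cash_paid
    return merged_equity / n_acq

# First $105m-synergy question above: E_acq = $40 x 30m = $1,200m, E_tgt = $20 x 15m = $300m
print(merged_price_cash_offer(1200, 30, 300, 105, 0.75))  # 40.875 per share, still 30m shares
```

Scrip offers differ only in that the payment is made by issuing new shares at the merged price, which raises the share count but leaves total equity value untouched.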
One year ago you bought $100,000 of shares partly funded using a margin loan. The margin loan size was $70,000 and the other $30,000 was your own wealth or 'equity' in the share assets. The interest rate on the margin loan was 7.84% pa. Over the year, the shares produced a dividend yield of 4% pa and a capital gain of 5% pa. What was the total return on your wealth? Ignore taxes, assume that all cash flows (interest payments and dividends) were paid and received at the end of the year, and all rates above are effective annual rates. Hint: Remember that wealth in this context is your equity (E) in the asset (V = D+E), which is funded by the loan (D) and your deposit or equity (E).

You just bought a house worth $1,000,000. You financed it with an $800,000 mortgage loan and a deposit of $200,000. You estimate that:
• The house has a beta of 1;
• The mortgage loan has a beta of 0.2.
What is the beta of the equity (the $200,000 deposit) that you have in your house? Also, if the risk free rate is 5% pa and the market portfolio's return is 10% pa, what is the expected return on equity in your house? Ignore taxes, assume that all cash flows (interest payments and rent) were paid and received at the end of the year, and all rates are effective annual rates.

You just bought a residential apartment as an investment property for $500,000. You intend to rent it out to tenants. They are ready to move in, they would just like to know how much the monthly rental payments will be, then they will sign a twelve-month lease. You require a total return of 8% pa and a rental yield of 5% pa. What would the monthly paid-in-advance rental payments have to be this year to receive that 5% annual rental yield? Also, if monthly rental payments can be increased each year when a new lease agreement is signed, by how much must you increase rents per year to realise the 8% pa total return on the property? Ignore all taxes and the costs of renting such as maintenance costs, real estate agent fees, utilities and so on. Assume that there will be no periods of vacancy and that tenants will promptly pay the rental prices you charge. Note that the first rental payment will be received at t=0. The first lease agreement specifies the first 12 equal payments from t=0 to 11. The next lease agreement can have a rental increase, so the next twelve equal payments from t=12 to 23 can be higher than previously, and so on forever.

Which of the following statements about short-selling is NOT true?

Katya offers to pay you $10 at the end of every year for the next 5 years (t=1,2,3,4,5) if you pay her $50 now (t=0). You can borrow and lend from the bank at an interest rate of 10% pa, given as an effective annual rate. Ignore credit risk. Will you accept or refuse Katya's deal?

A European company just issued two bonds, a
• 1 year zero coupon bond at a yield of 8% pa, and a
• 2 year zero coupon bond at a yield of 10% pa.
What is the company's forward rate over the second year (from t=1 to t=2)? Give your answer as an effective annual rate, which is how the above bond yields are quoted.

An Australian company just issued two bonds:
• A 1 year zero coupon bond at a yield of 10% pa, and
• A 2 year zero coupon bond at a yield of 8% pa.
What is the forward rate on the company's debt from years 1 to 2? Give your answer as an APR compounding every 6 months, which is how the above bond yields are quoted.

An Australian company just issued two bonds:
• A 6-month zero coupon bond at a yield of 6% pa, and
• A 12 month zero coupon bond at a yield of 7% pa.
What is the company's forward rate from 6 to 12 months? Give your answer as an APR compounding every 6 months, which is how the above bond yields are quoted.
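The forward-rate questions here and earlier all rest on the same no-arbitrage relation between zero-coupon yields. A minimal Python sketch, mine rather than the question bank's (where yields are quoted as APRs compounding semi-annually, convert them to effective six-month rates before applying it):

```python
def forward_rate(y_short, y_long, t_short, t_long):
    """Effective annual forward rate implied by two zero-coupon yields (effective annual).

    No-arbitrage: (1 + y_long)**t_long = (1 + y_short)**t_short * (1 + f)**(t_long - t_short)
    """
    growth = (1 + y_long) ** t_long / (1 + y_short) ** t_short
    return growth ** (1 / (t_long - t_short)) - 1

def effective_annual_to_apr_semi(r_eff):
    """Convert an effective annual rate to an APR compounding every 6 months."""
    return 2 * ((1 + r_eff) ** 0.5 - 1)

# European company above: 1yr zero at 8% pa, 2yr zero at 10% pa, forward over year 2
f12 = forward_rate(0.08, 0.10, 1, 2)
print(round(f12, 6))                           # ~0.120370, about 12.04% pa effective
print(round(effective_annual_to_apr_semi(f12), 4))  # ~0.1169 as an APR comp. semi-annually
```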
A company has:
• 140 million shares outstanding.
• The market price of one share is currently $2.
• The company's debentures are publicly traded and their market price is equal to 93% of the face value.
• The debentures have a total face value of $50,000,000 and the current yield to maturity of corporate debentures is 12% per annum.
• The risk-free rate is 8.50% and the market return is 13.7%.
• Market analysts estimated that the company's stock has a beta of 0.90.
• The corporate tax rate is 30%.
What is the company's after-tax weighted average cost of capital (WACC) in a classical tax system?

A firm can issue 3 year annual coupon bonds at a yield of 10% pa and a coupon rate of 8% pa. The beta of its levered equity is 2. The market's expected return is 10% pa and 3 year government bonds yield 6% pa with a coupon rate of 4% pa. The market value of equity is $1 million and the market value of debt is $1 million. The corporate tax rate is 30%. What is the firm's after-tax WACC? Assume a classical tax system.

A company has:
• 10 million common shares outstanding, each trading at a price of $90.
• 1 million preferred shares which have a face (or par) value of $100 and pay a constant dividend of 9% of par. They currently trade at a price of $120 each.
• Debentures that have a total face value of $60,000,000 and a yield to maturity of 6% per annum. They are publicly traded and their market price is equal to 90% of their face value.
• The risk-free rate is 5% and the market return is 10%.
• Market analysts estimate that the company's common stock has a beta of 1.2.
The corporate tax rate is 30%. What is the company's after-tax Weighted Average Cost of Capital (WACC)? Assume a classical tax system.

A company has:
• 100 million ordinary shares outstanding which are trading at a price of $5 each. Market analysts estimated that the company's ordinary stock has a beta of 1.5. The risk-free rate is 5% and the market return is 10%.
• 1 million preferred shares which have a face (or par) value of $100 and pay a constant annual dividend of 9% of par. The next dividend will be paid in one year. Assume that all preference dividends will be paid when promised. They currently trade at a price of $90 each.
• Debentures that have a total face value of $200 million and a yield to maturity of 6% per annum. They are publicly traded and their market price is equal to 110% of their face value.
The corporate tax rate is 30%. All returns and yields are given as effective annual rates. What is the company's after-tax Weighted Average Cost of Capital (WACC)? Assume a classical tax system.

A firm plans to issue equity and use the cash raised to pay off its debt. No assets will be bought or sold. Ignore the costs of financial distress. Which of the following statements is NOT correct, all things remaining equal?

The CAPM can be used to find a business's expected opportunity cost of capital: $$r_i=r_f+β_i (r_m-r_f)$$ What should be used as the risk free rate $r_f$?

A company announces that it will pay a dividend, as the market expected. The company's shares trade on the stock exchange which is open from 10am in the morning to 4pm in the afternoon each weekday. When would the share price be expected to fall by the amount of the dividend? Ignore taxes. The share price is expected to fall during the:
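The WACC questions above combine a CAPM cost of equity with market-value weights and a tax shield on debt. A minimal Python sketch of the standard approach, not taken from the question bank (function names and the optional preferred-equity leg are my own framing):

```python
def after_tax_wacc(e, d, r_e, r_d, tc, p=0.0, r_p=0.0):
    """Classical-tax-system after-tax WACC with optional preferred equity.

    e, d, p are MARKET values of ordinary equity, debt and preferred equity.
    Only the cost of debt gets the (1 - tc) tax shield.
    """
    v = e + d + p
    return (e / v) * r_e + (p / v) * r_p + (d / v) * r_d * (1 - tc)

# First WACC question above: E = 140m x $2 = $280m; D = 0.93 x $50m = $46.5m;
# CAPM cost of equity: r_E = 8.5% + 0.9 x (13.7% - 8.5%) = 13.18%; r_D = 12%; tc = 30%
r_e = 0.085 + 0.90 * (0.137 - 0.085)
print(round(after_tax_wacc(280e6, 46.5e6, r_e, 0.12, 0.30), 4))  # ~0.1250, about 12.5% pa
```

Note the debt weight uses the traded market price (93% of face), not the face value, consistent with the "fairly priced" assumption used throughout these questions.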
A large proportion of a levered firm's assets is cash held at the bank. The firm is financed with half equity and half debt. Which of the following statements about this firm's enterprise value (EV) and total asset value (V) is NOT correct?

A European company just issued two bonds, a
• 3 year zero coupon bond at a yield of 6% pa, and a
• 4 year zero coupon bond at a yield of 6.5% pa.
What is the company's forward rate over the fourth year (from t=3 to t=4)? Give your answer as an effective annual rate, which is how the above bond yields are quoted.

Your credit card shows a $600 debt liability. The interest rate is 24% pa, payable monthly. You can't pay any of the debt off, except in 6 months when it's your birthday and you'll receive $50 which you'll use to pay off the credit card. If that is your only repayment, how much will the credit card debt liability be one year from now?

For a price of $100, Vera will sell you a 2 year bond paying semi-annual coupons of 10% pa. The face value of the bond is $100. Other bonds with similar risk, maturity and coupon characteristics trade at a yield of 8% pa. Would you like to buy her bond or politely decline?

For a price of $100, Carol will sell you a 5 year bond paying semi-annual coupons of 16% pa. The face value of the bond is $100. Other bonds with similar risk, maturity and coupon characteristics trade at a yield of 12% pa. Would you like to buy her bond or politely decline?

For a price of $100, Rad will sell you a 5 year bond paying semi-annual coupons of 16% pa. The face value of the bond is $100. Other bonds with the same risk, maturity and coupon characteristics trade at a yield of 6% pa. Would you like to buy the bond or politely decline?

For a price of $100, Andrea will sell you a 2 year bond paying annual coupons of 10% pa. The face value of the bond is $100. Other bonds with the same risk, maturity and coupon characteristics trade at a yield of 6% pa. Would you like to buy the bond or politely decline?

For a price of $95, Nicole will sell you a 10 year bond paying semi-annual coupons of 8% pa. The face value of the bond is $100. Other bonds with the same risk, maturity and coupon characteristics trade at a yield of 8% pa. Would you like to buy the bond or politely decline?

Bonds A and B are issued by the same company. They have the same face value, maturity, seniority and coupon payment frequency. The only difference is that bond A has a 5% coupon rate, while bond B has a 10% coupon rate. The yield curve is flat, which means that yields are expected to stay the same. Which bond would have the higher current price?

A two year Government bond has a face value of $100, a yield of 2.5% pa and a fixed coupon rate of 0.5% pa, paid semi-annually. What is its price?

The theory of fixed interest bond pricing is an application of the theory of Net Present Value (NPV). Also, a 'fairly priced' asset is not over- or under-priced. Buying or selling a fairly priced asset has an NPV of zero. Considering this, which of the following statements is NOT correct?

Bonds A and B are issued by the same Australian company. Both bonds yield 7% pa, and they have the same face value ($100), maturity, seniority, and payment frequency. The only difference is that bond A pays coupons of 10% pa and bond B pays coupons of 5% pa. Which of the following statements is true about the bonds' prices?

A three year bond has a fixed coupon rate of 12% pa, paid semi-annually. The bond's yield is currently 6% pa. The face value is $100. What is its price?
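All of these coupon-bond questions price the same structure: an annuity of coupons plus a discounted face value. A minimal Python sketch, mine rather than the source's, assuming coupon rates and yields are quoted as APRs compounding at the payment frequency (which is how these questions quote them):

```python
def bond_price(face, coupon_rate_pa, yield_pa, years, freq=2):
    """Price of a fixed-coupon bond: PV of the coupon annuity plus PV of the face value."""
    c = face * coupon_rate_pa / freq       # coupon per period
    r = yield_pa / freq                    # yield per period
    n = int(round(years * freq))           # number of periods
    annuity = (1 - (1 + r) ** -n) / r
    return c * annuity + face * (1 + r) ** -n

# Three year bond above: 12% pa semi-annual coupons, 6% pa yield, $100 face
print(round(bond_price(100, 0.12, 0.06, 3), 2))  # ~116.25, trading at a premium
```

The premium/discount questions follow directly: coupon rate above the yield means a price above face value, and vice versa.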
Bonds X and Y are issued by different companies, but they both pay a semi-annual coupon of 10% pa and they have the same face value ($100), maturity (3 years) and yield (10%) as each other. Which of the following statements is true?

A four year bond has a face value of $100, a yield of 6% and a fixed coupon rate of 12%, paid semi-annually. What is its price?

Which one of the following bonds is trading at a discount?

A five year bond has a face value of $100, a yield of 12% and a fixed coupon rate of 6%, paid semi-annually. What is the bond's price?

Which one of the following bonds is trading at par?

A firm wishes to raise $8 million now. They will issue 7% pa semi-annual coupon bonds that will mature in 10 years and have a face value of $100 each. Bond yields are 10% pa, given as an APR compounding every 6 months, and the yield curve is flat. How many bonds should the firm issue?

Which one of the following bonds is trading at a premium?

An investor bought two fixed-coupon bonds issued by the same company, a zero-coupon bond and a 7% pa semi-annual coupon bond. Both bonds have a face value of $1,000, mature in 10 years, and had a yield at the time of purchase of 8% pa. A few years later, yields fell to 6% pa. Which of the following statements is correct? Note that a capital gain is an increase in price.

A firm wishes to raise $10 million now. They will issue 6% pa semi-annual coupon bonds that will mature in 8 years and have a face value of $1,000 each. Bond yields are 10% pa, given as an APR compounding every 6 months, and the yield curve is flat. How many bonds should the firm issue?

A four year bond has a face value of $100, a yield of 9% and a fixed coupon rate of 6%, paid semi-annually. What is its price?

A 10 year bond has a face value of $100, a yield of 6% pa and a fixed coupon rate of 8% pa, paid semi-annually. What is its price?

Bonds X and Y are issued by the same company. Both bonds yield 10% pa, and they have the same face value ($100), maturity, seniority, and payment frequency. The only difference is that bond X pays coupons of 6% pa and bond Y pays coupons of 8% pa. Which of the following statements is true?

A 10 year Australian government bond was just issued at par with a yield of 3.9% pa. The fixed coupon payments are semi-annual. The bond has a face value of $1,000. Six months later, just after the first coupon is paid, the yield of the bond decreases to 3.65% pa. What is the bond's new price?

Bonds X and Y are issued by the same US company. Both bonds yield 6% pa, and they have the same face value ($100), maturity, seniority, and payment frequency. The only difference is that bond X pays coupons of 8% pa and bond Y pays coupons of 12% pa. Which of the following statements is true?

A European bond paying annual coupons of 6% offers a yield of 10% pa. Convert the yield into an effective monthly rate, an effective annual rate and an effective daily rate. Assume that there are 365 days in a year. All answers are given in the same order: $$r_\text{eff, monthly} , r_\text{eff, yearly} , r_\text{eff, daily}$$

You really want to go on a backpacking trip to Europe when you finish university. Currently you have $1,500 in the bank. Bank interest rates are 8% pa, given as an APR compounding per month. If the holiday will cost $2,000, how long will it take for your bank account to reach that amount?

You want to buy an apartment worth $500,000. You have saved a deposit of $50,000. The bank has agreed to lend you the $450,000 as a fully amortising mortgage loan with a term of 25 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments?
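The amortising-mortgage questions here and below follow one recipe: the loan principal is the present value of the level payment annuity, and the balance at any date is the present value of the payments still to come. A minimal Python sketch (my own helpers, assuming the quoted rate is an APR compounding monthly, which is the convention these questions state or imply):

```python
def mortgage_payment(principal, rate_pa_apr, years, m=12):
    """Monthly payment on a fully amortising loan; rate is an APR compounding monthly."""
    r = rate_pa_apr / m
    n = years * m
    return principal * r / (1 - (1 + r) ** -n)

def balance_after(principal, rate_pa_apr, years, years_elapsed, m=12):
    """Amount still owing = PV of the remaining payments at the same rate."""
    r = rate_pa_apr / m
    pmt = mortgage_payment(principal, rate_pa_apr, years, m)
    n_left = (years - years_elapsed) * m
    return pmt * (1 - (1 + r) ** -n_left) / r

# Last question above: $450,000 over 25 years at 6% pa
print(round(mortgage_payment(450_000, 0.06, 25), 2))   # ~2899.36 per month
print(round(balance_after(450_000, 0.06, 25, 10), 2))  # still owing after 10 years
```

Run in reverse, the same formulas recover the amount borrowed from a known payment, which is what several of the following questions ask.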
Calculate the effective annual rates of the following three APRs:
• A credit card offering an interest rate of 18% pa, compounding monthly.
• A bond offering a yield of 6% pa, compounding semi-annually.
• An annual dividend-paying stock offering a return of 10% pa compounding annually.
All answers are given in the same order: $r_\text{credit card, eff yrly}$, $r_\text{bond, eff yrly}$, $r_\text{stock, eff yrly}$

You want to buy an apartment worth $400,000. You have saved a deposit of $80,000. The bank has agreed to lend you the $320,000 as a fully amortising mortgage loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments?

You're trying to save enough money to buy your first car which costs $2,500. You can save $100 at the end of each month starting from now. You currently have no money at all. You just opened a bank account with an interest rate of 6% pa payable monthly. How many months will it take to save enough money to buy the car? Assume that the price of the car will stay the same over time.

You want to buy an apartment priced at $500,000. You have saved a deposit of $50,000. The bank has agreed to lend you the $450,000 as a fully amortising loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments?

A 2 year government bond yields 5% pa with a coupon rate of 6% pa, paid semi-annually. Find the effective six month rate, effective annual rate and the effective daily rate. Assume that each month has 30 days and that there are 360 days in a year. All answers are given in the same order: $r_\text{eff semi-annual}$, $r_\text{eff yrly}$, $r_\text{eff daily}$.

You just signed up for a 30 year fully amortising mortgage loan with monthly payments of $2,000 per month. The interest rate is 9% pa which is not expected to change. How much did you borrow? After 5 years, how much will be owing on the mortgage? The interest rate is still 9% and is not expected to change.

You just signed up for a 30 year fully amortising mortgage with monthly payments of $1,000 per month. The interest rate is 6% pa which is not expected to change. How much did you borrow? After 20 years, how much will be owing on the mortgage? The interest rate is still 6% and is not expected to change.

You just agreed to a 30 year fully amortising mortgage loan with monthly payments of $2,500. The interest rate is 9% pa which is not expected to change. How much did you borrow? After 10 years, how much will be owing on the mortgage? The interest rate is still 9% and is not expected to change. The below choices are given in the same order.

A 2 year corporate bond yields 3% pa with a coupon rate of 5% pa, paid semi-annually. Find the effective monthly rate, effective six month rate, and effective annual rate. $r_\text{eff monthly}$, $r_\text{eff 6 month}$, $r_\text{eff annual}$.

You want to buy a house priced at $400,000. You have saved a deposit of $40,000. The bank has agreed to lend you $360,000 as a fully amortising loan with a term of 30 years. The interest rate is 8% pa payable monthly and is not expected to change. What will be your monthly payments?

You're trying to save enough money for a deposit to buy a house. You want to buy a house worth $400,000 and the bank requires a 20% deposit ($80,000) before it will give you a loan for the other $320,000 that you need. You currently have no savings, but you just started working and can save $2,000 per month, with the first payment in one month from now.
Bank interest rates on savings accounts are 4.8% pa with interest paid monthly and interest rates are not expected to change. How long will it take to save the $80,000 deposit? Round your answer up to the nearest month.

Which of the below statements about effective rates and annualised percentage rates (APRs) is NOT correct?

Regarding the NPV, IRR, profitability index and average accounting return project evaluation methods, which of the following statements is NOT correct?

A 180-day Bank Accepted Bill has a face value of $1,000,000. The interest rate is 8% pa and there are 365 days in the year. What is its price now?

A 90-day Bank Accepted Bill (BAB) has a face value of $1,000,000. The simple interest rate is 10% pa and there are 365 days in the year. What is its price now?

A 30-day Bank Accepted Bill has a face value of $1,000,000. The interest rate is 8% pa and there are 365 days in the year. What is its price now?

A 90-day Bank Accepted Bill has a face value of $1,000,000. The interest rate is 6% pa and there are 365 days in the year. What is its price?

A 60-day Bank Accepted Bill has a face value of $1,000,000. The interest rate is 8% pa and there are 365 days in the year. What is its price now?

A 30-day Bank Accepted Bill has a face value of $1,000,000. The interest rate is 2.5% pa and there are 365 days in the year. What is its price now?

On 27/09/13, three month Swiss government bills traded at a yield of -0.2%, given as a simple annual yield. That is, interest rates were negative. If the face value of one of these 90 day bills is CHF1,000,000 (CHF represents Swiss Francs, the Swiss currency), what is the price of one of these bills?

Project Data
Project life: 1 year
Initial investment in equipment: $8m
Depreciation of equipment per year: $8m
Expected sale price of equipment at end of project: 0
Unit sales per year: 4m
Sale price per unit: $10
Variable cost per unit: $5
Fixed costs per year, paid at the end of each year: $2m
Interest expense in first year (at t=1): $0.562m
Corporate tax rate: 30%
Government treasury bond yield: 5%
Bank loan debt yield: 9%
Market portfolio return: 10%
Covariance of levered equity returns with market: 0.32
Variance of market portfolio returns: 0.16
Firm's and project's debt-to-equity ratio: 50%

Notes
1. Due to the project, current assets will increase by $6m now (t=0) and fall by $6m at the end (t=1). Current liabilities will not be affected.

Assumptions
• The debt-to-equity ratio will be kept constant throughout the life of the project. The amount of interest expense at the end of each period has been correctly calculated to maintain this constant debt-to-equity ratio.
• Millions are represented by 'm'.
• All cash flows occur at the start or end of the year as appropriate, not in the middle or throughout the year.
• All rates and cash flows are real. The inflation rate is 2% pa. All rates are given as effective annual rates.
• The project is undertaken by a firm, not an individual.
What is the net present value (NPV) of the project?
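The Bank Accepted Bill questions above all use simple-interest discounting of a single face value. A minimal Python sketch, my own helper rather than anything from the source:

```python
def bab_price(face, simple_rate_pa, days, days_in_year=365):
    """Price of a discount security (e.g. a Bank Accepted Bill) under simple interest."""
    return face / (1 + simple_rate_pa * days / days_in_year)

# 180-day BAB above: $1,000,000 face, 8% pa simple interest
print(round(bab_price(1_000_000, 0.08, 180), 2))   # ~962,045.34
# Negative yields work the same way: the Swiss bill prices above its face value
print(round(bab_price(1_000_000, -0.002, 90), 2))  # ~1,000,493.39
```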
Project Data
Project life: 1 year
Initial investment in equipment: $6m
Depreciation of equipment per year: $6m
Expected sale price of equipment at end of project: 0
Unit sales per year: 9m
Sale price per unit: $8
Variable cost per unit: $6
Fixed costs per year, paid at the end of each year: $1m
Interest expense in first year (at t=1): $0.53m
Tax rate: 30%
Government treasury bond yield: 5%
Bank loan debt yield: 6%
Market portfolio return: 10%
Covariance of levered equity returns with market: 0.08
Variance of market portfolio returns: 0.16
Firm's and project's debt-to-assets ratio: 50%

Notes
1. Due to the project, current assets will increase by $5m now (t=0) and fall by $5m at the end (t=1). Current liabilities will not be affected.

Assumptions
• The debt-to-assets ratio will be kept constant throughout the life of the project. The amount of interest expense at the end of each period has been correctly calculated to maintain this constant debt-to-equity ratio.
• Millions are represented by 'm'.
• All cash flows occur at the start or end of the year as appropriate, not in the middle or throughout the year.
• All rates and cash flows are real. The inflation rate is 2% pa.
• All rates are given as effective annual rates.
• The 50% capital gains tax discount is not available since the project is undertaken by a firm, not an individual.
What is the net present value (NPV) of the project?

One formula for calculating a levered firm's free cash flow (FFCF, or CFFA) is to use net operating profit after tax (NOPAT).

\begin{aligned} FFCF &= NOPAT + Depr - CapEx -\Delta NWC \\ &= (Rev - COGS - Depr - FC)(1-t_c) + Depr - CapEx -\Delta NWC \\ \end{aligned}

Does this annual FFCF include or exclude the annual interest tax shield?

The hardest and most important aspect of business project valuation is the estimation of the:

There are many ways to calculate a firm's free cash flow (FFCF), also called cash flow from assets (CFFA). One method is to use the following formulas to transform net income (NI) into FFCF including interest and depreciation tax shields:

$$FFCF=NI + Depr - CapEx -ΔNWC + IntExp$$

$$NI=(Rev - COGS - Depr - FC - IntExp).(1-t_c )$$

Another popular method is to use EBITDA rather than net income. EBITDA is defined as:

$$EBITDA=Rev - COGS - FC$$

One of the below formulas correctly calculates FFCF from EBITDA, including interest and depreciation tax shields, giving an identical answer to that above. Which formula is correct?

The perpetuity with growth formula is: $$P_0= \dfrac{C_1}{r-g}$$ Which of the following is NOT equal to the total required return (r)?

A residential real estate investor believes that house prices will grow at a rate of 5% pa and that rents will grow by 2% pa forever. All rates are given as nominal effective annual returns. Assume that:
• His forecast is true.
• Real estate is and always will be fairly priced and the capital asset pricing model (CAPM) is true.
• Ignore all costs such as taxes, agent fees, maintenance and so on.
• All rental income cash flow is paid out to the owner, so there is no re-investment and therefore no additions or improvements made to the property.
• The non-monetary benefits of owning real estate and renting remain constant.
Which one of the following statements is NOT correct? Over time:
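The EBITDA route to FFCF follows by substituting EBITDA = Rev − COGS − FC into the net-income formula above. A minimal Python sketch of that identity (the function is my own, but the algebra is exactly the source's two formulas combined):

```python
def ffcf_from_ebitda(ebitda, depr, capex, d_nwc, int_exp, tc):
    """FFCF including interest and depreciation tax shields, from EBITDA.

    Identical to FFCF = NI + Depr - CapEx - dNWC + IntExp with
    NI = (EBITDA - Depr - IntExp) * (1 - tc).
    """
    ni = (ebitda - depr - int_exp) * (1 - tc)
    return ni + depr - capex - d_nwc + int_exp

# Hypothetical illustrative numbers (not from any question above):
print(ffcf_from_ebitda(ebitda=50, depr=10, capex=12, d_nwc=3, int_exp=5, tc=0.30))  # 24.5
```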
A pharmaceutical firm has just discovered a valuable new drug. So far the news has been kept a secret. The net present value of making and commercialising the drug is $200 million, but $600 million of bonds will need to be issued to fund the project and buy the necessary plant and equipment. The firm will release the news of the discovery and bond raising to shareholders simultaneously in the same announcement. The bonds will be issued shortly after. Once the announcement is made and the bonds are issued, what is the expected increase in the value of the firm's assets (ΔV), market capitalisation of debt (ΔD) and market cap of equity (ΔE)? The triangle symbol is the Greek letter capital delta which means change or increase in mathematics. Ignore the benefit of interest tax shields from having more debt. Remember: $ΔV = ΔD+ΔE$

Interest expense on debt is tax-deductible, but dividend payments on equity are not. True or false?

Issuing debt doesn't give away control of the firm because debt holders can't cast votes to determine the company's affairs, such as at the annual general meeting (AGM), and can't appoint directors to the board. True or false?

A levered company's required return on debt is always less than its required return on equity. True or false?

Companies must pay interest and principal payments to debt-holders. They're compulsory. But companies are not forced to pay dividends to shareholders. True or false?

The "interest expense" on a company's annual income statement is equal to the cash interest payments (but not principal payments) made to debt holders during the year. True or false?

All things remaining equal, the higher the correlation of returns between two stocks:

The following table shows a sample of historical total returns of shares in two different companies A and B.

Stock Returns (total effective annual returns)
Year    $r_A$    $r_B$
2007    0.2      0.4
2008    0.04     -0.2
2009    -0.1     -0.3
2010    0.18     0.5

What is the historical sample covariance ($\hat{\sigma}_{A,B}$) and correlation ($\rho_{A,B}$) of stock A and B's total effective annual returns?

Three important classes of investable risky assets are:
• Corporate debt which has low total risk,
• Real estate which has medium total risk,
• Equity which has high total risk.
Assume that the correlation between total returns on:
• Corporate debt and real estate is 0.1,
• Corporate debt and equity is 0.1,
• Real estate and equity is 0.5.
You are considering investing all of your wealth in one or more of these asset classes. Which portfolio will give the lowest total risk? You are restricted from shorting any of these assets. Disregard returns and the risk-return trade-off, pretend that you are only concerned with minimising risk.

Two risky stocks A and B comprise an equal-weighted portfolio. The correlation between the stocks' returns is 70%. If the variance of stock A increases but the:
• Prices and expected returns of each stock stays the same,
• Variance of stock B's returns stays the same,
• Correlation of returns between the stocks stays the same.
Which of the following statements is NOT correct?

In the so-called 'Swiss Loans Affair' of the 1980s, Australian banks offered loans denominated in Swiss Francs to Australian farmers at interest rates as low as 4% pa. This was far lower than interest rates on Australian Dollar loans which were above 10% due to very high inflation in Australia at the time. In the late-1980s there was a large depreciation in the Australian Dollar. The Australian Dollar nearly halved in value against the Swiss Franc. Many Australian farmers went bankrupt since they couldn't afford the interest payments on the Swiss Franc loans because the Australian Dollar value of those payments nearly doubled. The farmers accused the banks of promoting Swiss Franc loans without making them aware of the risks. What fundamental principle of finance did the Australian farmers (and the bankers) fail to understand?
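The covariance/correlation question above is a direct computation from the four yearly observations. A minimal Python sketch, assuming the usual sample (n − 1) convention (the helper is mine):

```python
from statistics import mean

def sample_cov_corr(xs, ys):
    """Historical sample covariance and correlation, with the n - 1 denominator."""
    n = len(xs)
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    sx = (sum((x - mx) ** 2 for x in xs) / (n - 1)) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / (n - 1)) ** 0.5
    return cov, cov / (sx * sy)

# Returns of stocks A and B from the table above
cov_ab, rho_ab = sample_cov_corr([0.2, 0.04, -0.1, 0.18], [0.4, -0.2, -0.3, 0.5])
print(round(cov_ab, 6), round(rho_ab, 4))  # ~0.053333 and ~0.9363
```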
Suppose the Australian cash rate is expected to be 8.15% pa and the US federal funds rate is expected to be 3.00% pa over the next 2 years, both given as nominal effective annual rates. The current exchange rate is at parity, so 1 USD = 1 AUD. What is the implied 2 year forward foreign exchange rate?

An 'interest only' loan can also be called a:

There are a number of ways that assets can be depreciated. Generally the government's tax office stipulates a certain method. But if it didn't, what would be the ideal way to depreciate an asset from the perspective of a business owner?

The 'initial margin', also known as the performance bond in a futures contract, is paid at the start when the futures contract is agreed to. True or false?

The 'futures price' in a futures contract is paid at the start when the futures contract is agreed to. True or false?

A bathroom and plumbing supplies shop offers credit to its customers. Customers are given 60 days to pay for their goods, but if they pay within 7 days they will get a 2% discount. What is the effective interest rate implicit in the discount being offered? Assume 365 days in a year and that all customers pay on either the 7th day or the 60th day. All rates given in this question are effective annual rates.

The Australian cash rate is expected to be 6% pa while the US federal funds rate is expected to be 4% pa over the next 3 years, both given as effective annual rates. The current exchange rate is 0.80 AUD per USD. What is the implied 3 year forward foreign exchange rate?

Let the variance of returns for a share per month be $\sigma_\text{monthly}^2$. What is the formula for the variance of the share's returns per year $(\sigma_\text{yearly}^2)$? Assume that returns are independently and identically distributed (iid) so they have zero auto correlation, meaning that if the return was higher than average today, it does not indicate that the return tomorrow will be higher or lower than average.

A stock's standard deviation of returns is expected to be:
• 0.09 per month for the first 5 months;
• 0.14 per month for the next 7 months.
What is the expected standard deviation of the stock per year $(\sigma_\text{annual})$? Assume that returns are independently and identically distributed (iid) and therefore have zero auto-correlation.

You just signed up for a 30 year fully amortising mortgage loan with monthly payments of $1,500 per month. The interest rate is 9% pa which is not expected to change. To your surprise, you can actually afford to pay $2,000 per month and your mortgage allows early repayments without fees. If you maintain these higher monthly payments, how long will it take to pay off your mortgage?

Your main expense is fuel for your car which costs $100 per month. You just refueled, so you won't need any more fuel for another month (first payment at t=1 month). You have $2,500 in a bank account which pays interest at a rate of 6% pa, payable monthly. Interest rates are not expected to change. Assuming that you have no income, in how many months time will you not have enough money to fully refuel your car?

On his 20th birthday, a man makes a resolution. He will deposit $30 into a bank account at the end of every month starting from now, which is the start of the month. So the first payment will be in one month. He will write in his will that when he dies the money in the account should be given to charity. The bank account pays interest at 6% pa compounding monthly, which is not expected to change. If the man lives for another 60 years, how much money will be in the bank account if he dies just after making his last (720th) payment?
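The two volatility questions above both rely on variances (not standard deviations) adding across independent periods. A minimal Python sketch, mine rather than the source's:

```python
import math

def annualise_sigma(sigma_monthly):
    """With iid returns, variances add: sigma_yearly^2 = 12 * sigma_monthly^2."""
    return sigma_monthly * math.sqrt(12)

# Two-regime question above: sd 0.09/month for 5 months, then 0.14/month for 7 months.
# Sum the monthly VARIANCES over the year, then take the square root:
var_annual = 5 * 0.09**2 + 7 * 0.14**2
print(round(math.sqrt(var_annual), 4))  # ~0.4215 per year
```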
A student won $1m in a lottery. Currently the money is in a bank account which pays interest at 6% pa, given as an APR compounding per month. She plans to spend $20,000 at the beginning of every month from now on (so the first withdrawal will be at t=0). After each withdrawal, she will check how much money is left in the account. When there is less than $500,000 left, she will donate that remaining amount to charity. In how many months will she make her last withdrawal and donate the remainder to charity?

A managed fund charges fees based on the amount of money that you keep with them. The fee is 2% of the end-of-year amount, paid at the end of every year. This fee is charged regardless of whether the fund makes gains or losses on your money. The fund offers to invest your money in shares which have an expected return of 10% pa before fees. You are thinking of investing $100,000 in the fund and keeping it there for 40 years when you plan to retire. How much money do you expect to have in the fund in 40 years? Also, what is the future value of the fees that the fund expects to earn from you? Give both amounts as future values in 40 years. Assume that:
• The fund has no private information.
• Markets are weak and semi-strong form efficient.
• The fund's transaction costs are negligible.
• The cost and trouble of investing your money in shares by yourself, without the managed fund, is negligible.
• The fund invests its fees in the same companies as it invests your funds in, but with no fees.
The below answer choices list your expected wealth in 40 years and then the fund's expected wealth in 40 years.

Let the standard deviation of returns for a share per month be $\sigma_\text{monthly}$. What is the formula for the standard deviation of the share's returns per year $(\sigma_\text{yearly})$? Assume that returns are independently and identically distributed (iid) so they have zero auto correlation, meaning that if the return was higher than average today, it does not indicate that the return tomorrow will be higher or lower than average.

Your neighbour asks you for a loan of $100 and offers to pay you back $120 in one year. You don't actually have any money right now, but you can borrow and lend from the bank at a rate of 10% pa. Rates are given as effective annual rates. Assume that your neighbour will definitely pay you back. Ignore interest tax shields and transaction costs. The Net Present Value (NPV) of lending to your neighbour is $9.09. Describe what you would do to actually receive a $9.09 cash flow right now with zero net cash flows in the future.

Jan asks you for a loan. He wants $100 now and offers to pay you back $120 in 1 year. You can borrow and lend from the bank at an interest rate of 10% pa, given as an effective annual rate. Ignore credit risk. Remember: $$V_0 = \frac{V_t}{(1+r_\text{eff})^t}$$ Will you accept or refuse Jan's deal?

The required return of a project is 10%, given as an effective annual rate. Assume that the cash flows shown in the table are paid all at once at the given point in time. What is the Net Present Value (NPV) of the project?

Project Cash Flows
Time (yrs)    Cash flow ($)
0             -100
1             0
2             121
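These NPV questions are one-liners once the cash flows are listed by year. A minimal Python sketch (my own helper):

```python
def npv(rate, cash_flows):
    """NPV of a cash flow list, where cash_flows[t] is paid all at once at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Project above: -100 now, nothing at t=1, 121 at t=2, at a 10% pa required return
print(npv(0.10, [-100, 0, 121]))  # ~0: 121/1.1^2 = 100, so the project is fairly priced
```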
The required return of a project is 10%, given as an effective annual rate. Assume that the cash flows shown in the table are paid all at once at the given point in time. What is the Net Present Value (NPV) of the project?

Project Cash Flows
Time (yrs)    Cash flow ($)
0             -100
1             11
2             121

One and a half years ago Frank bought a house for $600,000. Now it's worth only $500,000, based on recent similar sales in the area. The expected total return on Frank's residential property is 7% pa. He rents his house out for $1,600 per month, paid in advance. Every 12 months he plans to increase the rental payments. The present value of 12 months of rental payments is $18,617.27. The future value of 12 months of rental payments one year in the future is $19,920.48. What is the expected annual rental yield of the property? Ignore the costs of renting such as maintenance, real estate agent fees and so on.

A company runs a number of slaughterhouses which supply hamburger meat to McDonalds. The company is afraid that live cattle prices will increase over the next year, even though there is widespread belief in the market that they will be stable. What can the company do to hedge against the risk of increasing live cattle prices? Which statement(s) are correct?
(i) buy call options on live cattle.
(ii) buy put options on live cattle.
(iii) sell call options on live cattle.
Select the most correct response:

You operate a cattle farm that supplies hamburger meat to the big fast food chains. You buy a lot of grain to feed your cattle, and you sell the fully grown cattle on the livestock market. You're afraid of adverse movements in grain and livestock prices. What options should you buy to hedge your exposures in the grain and cattle livestock markets? Select the most correct response:

A risky firm will last for one period only (t=0 to 1), then it will be liquidated. So its assets will be sold and the debt holders and equity holders will be paid out in that order. The firm has the following quantities:
$V$ = Market value of assets.
$E$ = Market value of (levered) equity.
$D$ = Market value of zero coupon bonds.
$F_1$ = Total face value of zero coupon bonds which is promised to be paid in one year.
What is the payoff to equity holders at maturity, assuming that they keep their shares until maturity?

A risky firm will last for one period only (t=0 to 1), then it will be liquidated. So its assets will be sold and the debt holders and equity holders will be paid out in that order. The firm has the following quantities:
$V$ = Market value of assets.
$E$ = Market value of (levered) equity.
$D$ = Market value of zero coupon bonds.
$F_1$ = Total face value of zero coupon bonds which is promised to be paid in one year.
What is the payoff to debt holders at maturity, assuming that they keep their debt until maturity?

A mature firm has constant expected future earnings and dividends. Both amounts are equal. So earnings and dividends are expected to be equal and unchanging. Which of the following statements is NOT correct?

An American call option with a strike price of $K$ dollars will mature in $T$ years. The underlying asset has a price of $S$ dollars. What is an expression for the current intrinsic value in dollars from owning (being long) the American call option? Note that the intrinsic value of an option does not subtract the premium paid to buy the option.
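The payoff and intrinsic-value questions above are the standard option-like payoffs of a levered firm. A minimal Python sketch of those payoffs (functions are mine; the relations themselves are the textbook ones the questions test):

```python
def equity_payoff(v, face):
    """Levered equity at liquidation is a call on firm assets: max(V - F, 0)."""
    return max(v - face, 0.0)

def debt_payoff(v, face):
    """Zero coupon debt at maturity: min(V, F) - bondholders get F, or all of V if V < F."""
    return min(v, face)

def call_intrinsic(s, k):
    """Intrinsic value of a (long) American call: max(S - K, 0); the premium paid is ignored."""
    return max(s - k, 0.0)

# If the firm's assets fetch 80 but bondholders were promised a face value of 100:
print(equity_payoff(80, 100), debt_payoff(80, 100))  # 0.0 and 80: equity is wiped out first
```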
A bank grants a borrower an interest-only residential mortgage loan with a very large 50% deposit and a nominal interest rate of 6% that is not expected to change. Assume that inflation is expected to be a constant 2% pa over the life of the loan. Ignore credit risk. From the bank's point of view, what is the long term expected nominal capital return of the loan asset?

The following cash flows are expected:
• 10 yearly payments of $60, with the first payment in 3 years from now (first payment at t=3).
• 1 payment of $400 in 5 years and 6 months (t=5.5) from now.
What is the NPV of the cash flows if the discount rate is 10% given as an effective annual rate?

A project has an internal rate of return (IRR) which is greater than its required return. Select the most correct statement.

A text book publisher is thinking of asking some teachers to write a new textbook at a cost of $100,000, payable now. The book would be written, printed and ready to sell to students in 2 years. It will be ready just before semester begins. A cash flow of $100 would be made from each book sold, after all costs such as printing and delivery. There are 600 students per semester. Assume that every student buys a new text book. Remember that there are 2 semesters per year and students buy text books at the beginning of the semester. Assume that text book publishers will sell the books at the same price forever and that the number of students is constant. If the discount rate is 8% pa, given as an effective annual rate, what is the NPV of the project?

The following cash flows are expected:
• 10 yearly payments of $80, with the first payment in 3 years from now (first payment at t=3).
• 1 payment of $600 in 5 years and 6 months (t=5.5) from now.
What is the NPV of the cash flows if the discount rate is 10% given as an effective annual rate?

A project's net present value (NPV) is negative. Select the most correct statement.

A project's NPV is positive. Select the most correct statement:

A project's Profitability Index (PI) is less than 1. Select the most correct statement:

A stock is expected to pay the following dividends:

Cash Flows of a Stock
Time (yrs)     0   1   2   3   4   ...
Dividend ($)   0   6   12  18  20  ...

After year 4, the dividend will grow in perpetuity at 5% pa. The required return of the stock is 10% pa. Both the growth rate and required return are given as effective annual rates. If all of the dividends since time period zero were deposited into a bank account yielding 8% pa as an effective annual rate, how much money will be in the bank account in 2.5 years (in other words, at t=2.5)?

You want to buy an apartment worth $300,000. You have saved a deposit of $60,000. The bank has agreed to lend you $240,000 as an interest only mortgage loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments?

You want to buy an apartment priced at $500,000. You have saved a deposit of $50,000. The bank has agreed to lend you the $450,000 as an interest only loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments?

Two put options are exactly the same, but one has a low and the other has a high exercise price. Which option would you expect to have the higher price, the option with the low or the high exercise price, or should they have the same price?

Will the price of a call option on equity rise or fall if the standard deviation of returns (risk) of the underlying shares becomes higher?

Will the price of an out-of-the-money put option on equity rise or fall if the standard deviation of returns (risk) of the underlying shares becomes higher?
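The delayed-annuity questions above combine the ordinary annuity formula with an extra discounting step, since the formula values the payments one period before the first one. A minimal Python sketch (my own helper; fractional-period discounting is fine here because the rate is effective annual):

```python
def annuity_pv(c, r, n):
    """PV of n end-of-period payments of c, valued one period before the first payment."""
    return c * (1 - (1 + r) ** -n) / r

# First delayed-annuity question above: 10 payments of $60 with the first at t=3,
# plus a single $400 at t=5.5, all discounted at 10% pa effective.
r = 0.10
pv = annuity_pv(60, r, 10) / (1 + r) ** 2 + 400 / (1 + r) ** 5.5
print(round(pv, 2))  # ~541.50
```

The annuity starting at t=3 is valued at t=2 by the formula, hence the division by (1 + r)² rather than (1 + r)³.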
Two put options are exactly the same, but one matures in one year and the other matures in two years. Which option would you expect to have the higher price, the option which matures sooner or later, or should they have the same price?

Two call options are exactly the same, but one has a low and the other has a high exercise price. Which option would you expect to have the higher price, the option with the low or the high exercise price, or should they have the same price?

The required return of a project is 10%, given as an effective annual rate. Assume that the cash flows shown in the table are paid all at once at the given point in time. What is the Profitability Index (PI) of the project?

Project Cash Flows
Time (yrs)    Cash flow ($)
0             -100
1             0
2             121

A project has the following cash flows:

Project Cash Flows
Time (yrs)    Cash flow ($)
0             -400
1             200
2             250

What is the Profitability Index (PI) of the project? Assume that the cash flows shown in the table are paid all at once at the given point in time. The required return is 10% pa, given as an effective annual rate.

A project has the following cash flows:

Project Cash Flows
Time (yrs)    Cash flow ($)
0             -90
1             30
2             105

The required return of a project is 10%, given as an effective annual rate. Assume that the cash flows shown in the table are paid all at once at the given point in time. What is the Profitability Index (PI) of the project?

Which of the following companies is most suitable for valuation using PE multiples techniques?

Which of the following investable assets is the LEAST suitable for valuation using PE multiples techniques?

A three year project's NPV is negative. The cash flows of the project include a negative cash flow at the very start and positive cash flows over its short life. The required return of the project is 10% pa. Select the most correct statement.

What is the Internal Rate of Return (IRR) of the project detailed in the table below? Assume that the cash flows shown in the table are paid all at once at the given point in time. All answers are given as effective annual rates.

Project Cash Flows
Time (yrs)    Cash flow ($)
0             -100
1             0
2             121

A project has the following cash flows:

Project Cash Flows
Time (yrs)    Cash flow ($)
0             -400
1             0
2             500

The required return on the project is 10%, given as an effective annual rate. What is the Internal Rate of Return (IRR) of this project? The following choices are effective annual rates. Assume that the cash flows shown in the table are paid all at once at the given point in time.

You just started work at your new job which pays $48,000 per year. The human resources department have given you the option of being paid at the end of every week or every month. Assume that there are 4 weeks per month, 12 months per year and 48 weeks per year. Bank interest rates are 12% pa given as an APR compounding per month. What is the dollar gain over one year, as a net present value, of being paid every week rather than every month?

You have $100,000 in the bank. The bank pays interest at 10% pa, given as an effective annual rate. You wish to consume an equal amount now (t=0), in one year (t=1) and in two years (t=2), and still have $50,000 in the bank after that (t=2). How much can you consume at each time?

You have $100,000 in the bank. The bank pays interest at 10% pa, given as an effective annual rate. You wish to consume an equal amount now (t=0) and in one year (t=1) and have nothing left in the bank at the end. How much can you consume at each time?
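For the IRR questions above, a root-finder is enough: the IRR is the discount rate at which NPV is zero. A minimal bisection sketch in Python (mine, and it assumes the conventional pattern of one initial outflow followed by inflows, so NPV falls as the rate rises):

```python
def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """IRR by bisection: the rate making NPV zero. Assumes one sign change in cash flows."""
    def npv(r):
        return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid   # NPV still positive: the IRR is a higher rate
        else:
            hi = mid
    return (lo + hi) / 2

# First IRR question above: -100 now, 0 at t=1, 121 at t=2
print(round(irr([-100, 0, 121]), 6))  # 0.10, i.e. 10% pa, since 121/1.1^2 = 100
```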
The bank has agreed to lend you the $270,000 as an interest-only loan with a term of 25 years. The interest rate is 12% pa and is not expected to change. What will be your monthly payments? Remember that mortgage payments are paid in arrears (at the end of the month).

You just borrowed $400,000 in the form of a 25-year interest-only mortgage with monthly payments of $3,000 per month. The interest rate is 9% pa, which is not expected to change. You actually plan to pay more than the required interest payment. You plan to pay $3,300 in mortgage payments every month, which your mortgage lender allows. These extra payments will reduce the principal and the minimum interest payment required each month. At the maturity of the mortgage, what will be the principal? That is, after the last (300th) payment of $3,300 in 25 years, how much will be owing on the mortgage?

A prospective home buyer can afford to pay $2,000 per month in mortgage loan repayments. The central bank recently lowered its policy rate by 0.25%, and residential home lenders cut their mortgage loan rates from 4.74% to 4.49%. How much more can the prospective home buyer borrow now that interest rates are 4.49% rather than 4.74%? Give your answer as a proportional increase over the original amount he could borrow ($V_\text{before}$), so:

$$\text{Proportional increase} = \frac{V_\text{after}-V_\text{before}}{V_\text{before}}$$

Assume that:

• Interest rates are expected to be constant over the life of the loan.
• Loans are interest-only and have a life of 30 years.
• Mortgage loan payments are made every month in arrears and all interest rates are given as annualised percentage rates compounding per month.
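As a worked illustration of the interest-only mechanics behind several of the mortgage questions above (an editorial sketch, not part of the original question set; it uses the $240,000 loan from the first apartment question and assumes, per the bullet list, that rates compound monthly):

```latex
% An interest-only loan never amortises, so each monthly payment is
% just one month's interest on the unchanged principal.
\[
r_{\text{month}} = \frac{6\%}{12} = 0.5\%, \qquad
\text{Payment} = \$240{,}000 \times 0.005 = \$1{,}200\ \text{per month}.
\]
```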
Home Grails 3 - Security without plugins

I am implementing Spring Security with SAML on a Grails 3.x application without the use of plugins. I have added the necessary jars as below:

```groovy
compile group: 'ca.juliusdavies', name: 'not-yet-commons-ssl', version: '0.3.17'
compile 'org.opensaml:opensaml:2.6.0'
compile 'org.opensaml:openws:1.4.1'
compile 'org.opensaml:xmltooling:1.3.1'
compile group: 'org.springframework.security', name: 'spring-security-core', version: '3.2.5.RELEASE'
compile group: 'org.springframework.security', name: 'spring-security-config', version: '3.2.5.RELEASE'
compile group: 'org.springframework.security.extensions', name: 'spring-security-saml2-core', version: '1.0.5.BUILD-SNAPSHOT'
```

I have my securitycontext.xml file integrated within resources.groovy:

```groovy
beans = {
    importBeans('classpath:security/springSecuritySamlBeans.xml')
}
```

I need to add the following security filter in my web.xml, but in Grails 3 this is not possible. Can anyone suggest where and how it needs to be added?

```xml
<filter>
    <filter-name>springSecurityFilterChain</filter-name>
    <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
</filter>
<filter-mapping>
    <filter-name>springSecurityFilterChain</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
```

From what I read these filters need to be defined in resources.groovy, but I am not sure how this is done.
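One commonly suggested approach, as a sketch rather than a verified answer from the original thread: Grails 3 runs on Spring Boot, so a servlet filter can be registered as a FilterRegistrationBean directly in resources.groovy. The bean names securityFilterChainProxy and securityFilterChainRegistration are illustrative, and the FilterRegistrationBean package varies by Spring Boot version (org.springframework.boot.context.embedded in older releases, org.springframework.boot.web.servlet in later ones).

```groovy
import org.springframework.boot.web.servlet.FilterRegistrationBean
import org.springframework.web.filter.DelegatingFilterProxy

beans = {
    importBeans('classpath:security/springSecuritySamlBeans.xml')

    // Proxy that delegates to the 'springSecurityFilterChain' bean
    // defined inside the imported SAML security context XML.
    securityFilterChainProxy(DelegatingFilterProxy) {
        targetBeanName = 'springSecurityFilterChain'
    }

    // Register the proxy with the embedded servlet container,
    // replacing the <filter-mapping> that web.xml would have provided.
    securityFilterChainRegistration(FilterRegistrationBean) {
        filter = ref('securityFilterChainProxy')
        urlPatterns = ['/*']
    }
}
```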
(This page is for the Classic ruleset. For Cities in the Multiplayer ruleset, see the corresponding page.)

A city is created when settlers are given the build city command on suitable terrain (any terrain except Glaciers or Ocean), removing the unit from play to provide the city with its first citizen. A city may grow to include dozens of citizens, some working within the city while others are dispatched as new settlers. Famine and war kill citizens and reduce population; with the loss of its last citizen a city disappears. On the Freeciv map each city is labeled with its population, also called its size.

Cities are your sole instrument for developing natural resources and channeling them toward expansion, technological progress, and warfare. After describing how city citizens extract natural resources, we will examine how cities themselves may be developed and cultivated to increase their value and productivity.

### Working land

Each city may work terrain within the 5×5 region centered on the city, minus its corners. To extract resources from a square you must have a citizen working there. The example city shown on the right has all four of its citizens working in nearby squares; each active square is labelled with the number of food points, production points, and trade points it is generating every turn. By taking one or more of these squares out of production, the player could choose other squares for his citizens to work, or place his citizens in other roles entirely. Review the section on terrain to determine how the output of each square is affected by the terrain, the presence of special resources such as game or minerals, and improvements like roads, irrigation, or mines.

Note that the square on which the city itself rests (the city center) gets worked for free, without being assigned a citizen. The city's square always produces at least one food point and at least one production point. It also gains whatever advantages the terrain offers when irrigated (because cities come with water systems built in), but this may not be used as a basis for irrigating other squares; for that, workers must explicitly irrigate the square. City squares are also automatically developed with roads (except if on a river before Bridge Building is known, in some rulesets) or, when technology has made them available, railroads (because cities come with transportation built in). If the city has a Supermarket, its square additionally gets any food bonus associated with farmland.

You cannot begin working a square which a neighboring city is already working, nor can you work terrain upon which an enemy unit is standing or terrain inside another player's borders. Thus you can simulate conditions of siege by stationing your units atop valuable resources around an enemy city. Units can also be ordered to pillage, which damages improvements. Workers, settlers and engineers could even transform the terrain to make the square less productive, like the Romans sowing the fields of Carthage with salt.

### Buildings and wonders

Cities may be enhanced with a wide variety of buildings, each with a different effect; each city may have only one of each building. Buildings are listed and described here. Some buildings require others — as when you must have a Marketplace before building a Bank. Most buildings become available only when you achieve certain technologies, while technology makes others become obsolete.
It costs production points to construct buildings — often taking several turns — and, once completed, many buildings require an upkeep of several gold pieces. You may dismantle and sell a building, receiving one gold piece for each production point used in its construction. If a turn comes on which you cannot pay the upkeep on all of your buildings, some of them will be automatically sold; obviously this should be avoided, as the buildings chosen might not be the ones you would have preferred to sell.

Great wonders are unique structures that can each be constructed by only one civilization per game. Players often race to be the first to complete a coveted wonder. While buildings affect only their own city, many wonders benefit their entire civilization. And while buildings must be built using local production points, caravans and freight built in other cities can contribute their full cost in production points towards the construction of a wonder (simply disbanding units returns only half of their cost). Wonders can't be sold or destroyed in any way unless the city itself perishes.

There are also small wonders, which each player with sufficient technologies can build, but only in one city per nation. Almost all rulesets have at least one small wonder, the Palace, which marks the player's capital; it appears in the player's first city for free and, moreover, is relocated to a random other city if the old capital is lost. Small wonders cannot be sold, but a player may usually relocate them by simply building a new one in another city; all shields invested in the previous building are lost in this case. We should also mention so-called special buildings, which are counted as city improvements during their production but will never actually be placed in any city: the Mint simply converts all shields invested in it into gold, and spaceship parts go to the player's spaceship.

### Specialists

The first citizens of each city are usually workers, each toiling to yield up the resources of one terrain square. But there are several other roles citizens may assume once they are relieved of having to work terrain. In fact, taking another role is the only way they can stop working. Watch carefully when you remove a citizen from one terrain square and assign him to another — you will see him briefly become an entertainer in the moment when he is not assigned terrain.

Even small cities can support entertainers, which each produce two luxury points per turn for their city (whose effects are described in the next section). When cities reach five citizens in size, two other specialists become available, which your user interface will probably let you select by clicking on your specialist citizens. These other two specialists contribute toward your civilization as a whole rather than to their own city: each tax collector provides three extra gold per turn for your treasury, while each scientist adds three points to your research output.

When your cities grow and produce new citizens, the game starts them off as workers — even if this throws the city into disorder as described below! The game assigns new workers to the terrain that is generally going to produce the fastest growth. You may want to inspect cities that have just grown and adjust the role in which the new citizen has been placed. If all land is in use then new citizens become entertainers.
Three buildings allow you to multiply the research produced by your city:

| Building | Cost | Upkeep | Requires |
| --- | --- | --- | --- |
| Library | 60 | 1 | Writing |
| University | 120 | 3 | University |
| Research Lab | 120 | 3 | Computers |

Other buildings enhance the efforts of your entertainers and tax collectors:

| Building | Cost | Upkeep | Requires |
| --- | --- | --- | --- |
| Marketplace | 60 | 0 | Currency |
| Bank | 80 | 2 | Banking |
| Stock Exchange | 120 | 3 | Economics |

### Civilization and its Malcontents

Main article: Happiness

Unfortunately, city growth produces crowding, which makes it difficult to maintain worker morale. Each citizen is either happy, content, unhappy or angry. Only the first four workers are naturally content; the rest are naturally unhappy, which is quite serious, as even one unhappy worker can throw the city into disorder. Cities in disorder produce no food or production surplus, science, or taxes; only luxury production remains. They are also more prone to revolt, and prolonged disorder in a democracy can even result in national revolution. It should be stressed that only workers vary in morale — entertainers, scientists, and tax collectors enjoy enough privilege to remain perpetually content. Thus one solution to the problem of an unhappy worker is simply to assign that citizen to the role of a specialist. But if cities are ever to work more than four terrain squares at once, the problem of morale must be confronted more directly.

There are two means of saving large cities from disorder. Most of the buildings and wonders that affect morale (all are listed in the next section) merely make unhappy workers content, which does prevent disorder but is without further benefit. The more interesting option is to produce happy workers. These can balance the effect of unhappy workers — a city will not fall into disorder unless unhappy workers outnumber happy ones — but can also produce other desirable effects. Cities with three or more citizens celebrate when half their citizens are happy workers and none remain unhappy. Under monarchy and communism this gives terrain around the city the trade bonus (one point for each square that produces any trade on its own) normally available only under representative government. Under a republic or democracy the effect can be even more spectacular — the city enters rapture and grows by one citizen each turn that the city produces a food surplus (otherwise it refuses to grow for fear of starvation). Without rapture, large cities can grow only by struggling to produce a food surplus — which can be difficult enough — and then waiting dozens of turns for their granary to fill.

### Managing Happiness

For some technical details, see Happiness.

Workers are made happy when you provide them with luxury. For every two luxury points a city produces, one content worker is made happy (or, if there are no content workers left, one unhappy worker becomes content). Besides the luxury points produced by entertainers, cities receive back some of the trade points they produce as luxury points when you allocate some of your trade points for luxury.

Military units can affect city happiness. Under authoritarian regimes this is helpful, as military units stationed in a city can prevent unhappiness by enforcing order. Under representative governments the only effect is negative — citizens become unhappy when their city is supporting military units which have been deployed into an aggressive stance.
"Aggressive" here covers any unit that is not inside your borders, in a friendly city (including the cities of your allies), or in a fortress within three squares of a friendly city; field units (missiles, helicopters, and bombers), however, cause unhappiness regardless of location. See the section on governments for the number of workers affected by each of these factors.

All of the above discussion assumed that cities can grow to size four without unhappiness, with the fifth worker being the first unhappy one. This limit actually decreases as you gain large numbers of cities, to simulate the difficulty of imposing order upon a large empire. Different governments can support different numbers of cities before encountering this limit for the first time; see the section on government for details. Continued empire growth may lead to further penalty steps, with the precise details again being dependent on government. In empires that grow beyond the point where no citizens are naturally content, angry citizens will appear; these must all be made merely unhappy before any unhappy citizens can be made content, but in all other respects they behave as unhappy citizens.

The wonders which affect morale are the Oracle, Hanging Gardens, Michelangelo's Chapel, Shakespeare's Theatre, J.S. Bach's Cathedral, Women's Suffrage, and the Cure for Cancer. The buildings that affect city growth and worker happiness are:

| Building | Cost | Upkeep | Requires |
| --- | --- | --- | --- |
| Aqueduct | 60 | 2 | Construction |
| Sewer System | 80 | 2 | Sanitation |
| Temple | 30 | 1 | Ceremonial Burial |
| Courthouse | 60 | 1 | Code of Laws |
| Colosseum | 70 | 4 | Construction |
| Cathedral | 80 | 3 | Monotheism |
| Police Station | 50 | 2 | Communism |

### Pollution

Pollution can plague large cities, especially as your civilization becomes more industrialized. The chance of pollution appearing around a city depends on the sum of its population (aggravated by the advances Industrialization, The Automobile, Mass Production, and Plastics) and its production point output. When this sum exceeds 20 (civstyle.base_pollution in game.ruleset), the excess is the percent chance of pollution appearing each turn; this percentage is shown in the city dialogue. For example, a city whose sum comes to 28 has an 8% chance of new pollution appearing each turn.

Pollution appears as gunk covering one of the terrain squares around the city or on the city center. Some terrains may be defined with the "NoPollution" flag (v2.5 and earlier) or have no native "Pollution"-caused extras (v2.6 and later); such terrain won't be polluted, and the pollution will probably go to another city tile[1]. The pollution can only be cleared by dispatching workers, settlers or engineers with the clean pollution order (which takes three settler-turns to carry out). A polluted terrain square generates only half its usual food, production and trade. Pollution on oceanic tiles, if defined by the ruleset, can be cleaned by Engineers or workers on transport ships, or even with Transports themselves.

When an unused square becomes polluted, there is the temptation to avoid the effort of cleaning it; but the spread of pollution has far more terrible results — every polluted square increases the chance of global warming. Each time global warming advances, the entire world loses coastal land to jungles and swamps, and inland squares are lost to desert. This tends to devastate cities and leads to global impoverishment.
Several buildings affect pollution:

| Building | Cost | Upkeep | Requires |
| --- | --- | --- | --- |
| Hydro Plant | 180 | 4 | Electronics |
| Mass Transit | 120 | 4 | Mass Production |
| Recycling Center | 140 | 2 | Recycling |
| Nuclear Plant | 120 | 2 | Nuclear Power |
| Mfg. Plant | 220 | 6 | Robotics |
| Solar Plant | 320 | 4 | Environmentalism |

## Conquest

Because of the crucial importance of cities for any civilization, most wars waged by the players aim at conquering some or all enemy cities. See Combat for the bonuses the city center provides to units defending on it; here we discuss what happens when the defenders are absent or unsuccessful in combat.

If a city is attacked by an enemy unit of certain unit classes and the attacker does not lose to a defender (this includes attacks by all bombarding units of these classes, though note that currently one can't bombard an empty city), the city population is reduced by one (if the city is above size 1). This effect may be canceled by building City Walls, but that is not always good news. If there are no units in the city, a land enemy unit or a Helicopter may enter the city and claim it as the property of its owner; this event always reduces the city population by 1, and cities of size 1 are destroyed (often you would rather just lose a city than provide the enemy with yet another base and share a tech by conquest). If a city is conquered and not destroyed, it claims the surrounding land for its new owner (though the claims are resolved largely in favour of cities that have not been conquered). The conqueror loots about $\text{(loser treasury)}\times\left(\frac{\text{(city size)}}{200}+\mathrm{rand}(0, 0.05)\right)$ gold; for example, taking a size-10 city from a player with 1,000 gold in the treasury yields roughly 50 to 100 gold. Regular city improvements run some (game-option-controlled) risk of being destroyed, and this risk rises by 30% if the conqueror is a land barbarian; great wonders are never destroyed, while small wonders always are (though some small wonders, notably the Palace, are automatically rebuilt in a random other city of the previous owner). Buildings obsolete for the new owner are sold if they survive the conquest; if the conqueror's tech level is sufficiently higher than the loser's, workers will gather in the city to build free roads. All units supported by the conquered city are lost unless they are in the center of another of their owner's cities, in which case they are rehomed to that city.

If the conquered or destroyed city was a capital, the losing nation may under certain conditions suffer further harm: the risk of a civil war, or the loss of its spaceship. If the last defender was a Leader unit and the killer advances after the attack, the killer's owner takes the city just before the rest of the unfortunate civilization perishes.

Previous: Terrain | Next: Economy

1. Up to 100 attempts to place it on a random city tile are made before the bad ecology gives up.
## Stream: general

### Topic: Variables in Lean

#### Benedikt Ahrens (Jan 14 2020 at 00:22):

Hi, I would like to write the following code:

```lean
variables A B C : Prop

section simple_proof
variable a : A
variable b : B
variable c : C

theorem foobar : (A ∧ B) ∧ (A ∧ C) :=
begin
  apply and.intro,
  apply and.intro,
  apply a,
end

end simple_proof
```

but see "unknown identifier 'a'". Of course, I would like to use the 'a' assumed as a variable above. Is there a way to write this?

#### Mario Carneiro (Jan 14 2020 at 00:23):

You need to include a. Lean only includes variables that are used in the statement of the theorem by default.

#### Simon Hudon (Jan 14 2020 at 00:24):

Also, if you enclose your Lean code between \`\`\`lean and \`\`\`, it will be prettier and more readable.

#### Benedikt Ahrens (Jan 14 2020 at 00:44):

@Mario Carneiro: thanks, that's excellent to know.
@Simon Hudon: thanks, done - looks much better indeed.
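An aside for readers landing here from search: a minimal sketch of the fix Mario describes, assuming Lean 3 syntax (the `include` command pulls section variables into the context even when they do not appear in the statement; the term-mode proof is editorial, not from the thread):

```lean
variables A B C : Prop

section simple_proof
variables (a : A) (b : B) (c : C)

-- `include` forces these variables into the context of
-- every declaration that follows inside this section.
include a b c

theorem foobar : (A ∧ B) ∧ (A ∧ C) :=
⟨⟨a, b⟩, ⟨a, c⟩⟩

end simple_proof
```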
# As loan analyst for Utrillo Bank, you have been presented the following information

As loan analyst for Utrillo Bank, you have been presented the following information.

|  | Toulouse Co. | Lautrec Co. |
| --- | --- | --- |
| **Assets** |  |  |
| Cash | $119,600 | $330,000 |
| Receivables | 221,600 | 302,200 |
| Inventories | 561,700 | 519,500 |
| Total current assets | 902,900 | 1,151,700 |
| Other assets | 496,700 | 610,200 |
| Total assets | $1,399,600 | $1,761,900 |
| **Liabilities and Stockholders' Equity** |  |  |
| Current liabilities | $297,500 | $350,200 |
| Long-term liabilities | 391,400 | 496,700 |
| Capital stock and retained earnings | 710,700 | 915,000 |
| Total liabilities and stockholders' equity | $1,399,600 | $1,761,900 |
| Annual sales | $935,400 | $1,498,500 |
| Rate of gross profit on sales | 30% | 40% |

Each of these companies has requested a loan of $49,290 for 6 months with no collateral offered. Because your bank has reached its quota for loans of this type, only one of these requests is to be granted.

Compute the various ratios for each company. (Round answer to 2 decimal places, e.g. 2.25.)

| Ratio | Toulouse Co. | Lautrec Co. |
| --- | --- | --- |
| Current ratio | ____ : 1 | ____ : 1 |
| Acid-test ratio | ____ : 1 | ____ : 1 |
| Accounts receivable turnover | ____ times | ____ times |
| Inventory turnover | ____ times | ____ times |
| Cash to current liabilities | ____ : 1 | ____ : 1 |
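A worked illustration for the first requested ratio, computed from the balance-sheet figures above (an editorial sketch; the remaining ratios follow the same pattern of dividing the relevant statement figures):

```latex
% Current ratio = total current assets / current liabilities
\[
\text{Toulouse: } \frac{902{,}900}{297{,}500} \approx 3.03 : 1,
\qquad
\text{Lautrec: } \frac{1{,}151{,}700}{350{,}200} \approx 3.29 : 1.
\]
```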
# ML Aggarwal Class 7 Solutions for ICSE Maths Chapter 7 Percentage and Its Applications Objective Type Questions

Question 1. Fill in the blanks:
(i) 6% of ₹ 50 = ……….
(ii) If 25% of a number is 12, then the number is ……….
(iii) The mixed fraction $1\frac{3}{4}$ converted to percentage form is ……….
(iv) If a number increases from 20 to 28, then the increase percentage is ………
(v) If the cost price is ₹ 400 and the loss is 15%, then the selling price is ……..
(vi) The profit or loss percentage is always calculated on ……….
(vii) The simple interest on a sum of ₹ 5600 at 8% p.a. for one year is ……….
(viii) 135% converted to decimal is ………
(ix) ……… is 50% more than 60.
(x) 25 mL is ………… percent of 5 litres.
Solution:

Question 2. State whether the following statements are true (T) or false (F):
(i) 20% more than 30 is 36.
(ii) The ratio 2 : 5 converted to percentage is 60%.
(iii) $6\frac{1}{4}\%$ expressed as a fraction is $\frac{1}{16}$.
(iv) 80% of 450 m is equal to 360 m.
(v) If a number decreases from 20 to 15, then the decrease is 25%.
(vi) If Feroz obtains 336 marks out of 600 marks, then the percentage of marks obtained by him is 33.6.
(vii) 0.018 is equivalent to 8%.
(viii) 250 cm is 4% of 1 km.
(ix) If the S.P. of an article is ₹ 540 and the loss is ₹ 40, then its C.P. is ₹ 500.
(x) By selling a book for ₹ 50, a shopkeeper suffers a loss of 10%. The cost price of the book is ₹ 60.
Solution:

Multiple Choice Questions

Choose the correct answer from the given four options (3 to 16):

Question 3. The ratio 2 : 3 expressed as percent is
(a) 40% (b) 60% (c) $66\frac{2}{3}\%$ (d) $33\frac{1}{3}\%$
Solution:

Question 4. The ratio of Fatima's income to her savings is 4 : 1. The percentage of money saved by her is
(a) 20% (b) 25% (c) 40% (d) 80%
Solution:

Question 5. 225% is equal to
(a) 2 : 3 (b) 3 : 2 (c) 4 : 9 (d) 9 : 4
Solution:

Question 6. If 30% of x is 72, then x is equal to
(a) 120 (b) 240 (c) 360 (d) 480
Solution:

Question 7. If x% of 80 = 12, then x is equal to
(a) 15 (b) 20 (c) 25 (d) 30
Solution:

Question 8. 0.025 when expressed as a percent is
(a) 250% (b) 25% (c) 4% (d) 2.5%
Solution:

Question 9. In a class, 45% of the students are girls. If there are 22 boys in the class, then the total number of students in the class is
(a) 30 (b) 36 (c) 40 (d) 44
Solution:

Question 10. What percent of $\frac{1}{7}$ is $\frac{2}{35}$?
(a) 20% (b) 25% (c) 30% (d) 40%
Solution:

Question 11. If a man buys an article for ₹ 80 and sells it for ₹ 100, then the gain percentage is
(a) 20% (b) 25% (c) 40% (d) 125%
Solution:

Question 12. If a man buys an article for ₹ 120 and sells it for ₹ 100, then his loss percentage is
(a) 10% (b) 20% (c) 25% (d) $16\frac{2}{3}\%$
Solution:

Question 13. The salary of a man is ₹ 24000 per month. If he gets an increase of 25% in the salary, then the new salary per month is
(a) ₹ 2500 (b) ₹ 28000 (c) ₹ 30000 (d) ₹ 36000
Solution:

Question 14. On selling an article for ₹ 100, Renu gains ₹ 20. Her gain percentage is
(a) 25% (b) 20% (c) 15% (d) 40%
Solution:

Question 15. The simple interest on ₹ 6000 at 8% p.a. for one year is
(a) ₹ 600 (b) ₹ 480 (c) ₹ 400 (d) ₹ 240
Solution:

Question 16. If Rohit borrows ₹ 4800 at 5% p.a. simple interest, then the amount he has to return at the end of 2 years is
(a) ₹ 480 (b) ₹ 5040 (c) ₹ 5280 (d) ₹ 5600
Solution:

Value Based Questions

Question 1.
One bad apple is accidentally mixed with some good apples in a basket, as a result of which 25% of the total apples go bad. Now the number of good apples in the basket is 30. Find the number of good apples kept in the basket previously. What will happen if one bad person is mixed with some good ones?
Solution:

Question 2. There is a group of 50 people who are patriotic, out of which 40% believe in non-violence. Find the number of persons who believe in non-violence. Explain the importance of non-violence in patriotism.
Solution:

Higher Order Thinking Skills (HOTS)

Question 1. A person preparing medicine wants to convert a 15% alcohol solution into a 32% alcohol solution. Find how much pure alcohol he should mix with 400 mL of the 15% alcohol solution to obtain it.
Solution:

Question 2. A manufacturer sells an item to an agency at a profit of 25%. The agency sells the item to a shopkeeper at 10% profit, and the shopkeeper sells the item at a profit of 20%. If the selling price of the item is ₹ 594, find the manufacturing price.
Solution:
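The worked solutions on the original page were images and did not survive extraction. As editorial sketches, here are the two HOTS computations:

```latex
% HOTS Q1: 400 mL of a 15% solution contains 60 mL of alcohol.
% Adding x mL of pure alcohol must yield a 32% solution:
\[
\frac{60+x}{400+x}=0.32 \;\Rightarrow\; 60+x=128+0.32x
\;\Rightarrow\; 0.68x=68 \;\Rightarrow\; x=100\ \text{mL}.
\]
% HOTS Q2: three successive mark-ups of 25%, 10% and 20%:
\[
594 = P \times 1.25 \times 1.10 \times 1.20 = 1.65\,P
\;\Rightarrow\; P = \text{₹ } 360.
\]
```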
# Electronic – Counter in assembly, using interrupt to prevent multiple counts with single push

assembly, counter, debounce, display, microcontroller

I am completely new to assembly, and I must develop a counter using a PIC16F628A, a push button, and a display. Additionally, there will be an external oscillator (555). I made some progress on this, but I think I need some help from you people.

At first I did a delay based on decrements in order to be able to watch numbers on the display. My problem now is that once I press the button, I need it to count only one number, independently of how much time I keep it pressed. Something like: if it changes state up, it will increment by 1. I believe this must be done with interrupts. Now, what is the best solution to my problem? An external interrupt, an interrupt on state change? Any other thing?

---

Since you cannot use a timer (gathered from comments you've made), you need a suitable delay routine to provide a specific period of time. I like the period of $8\:\textrm{ms}$, from prior experience, but you can use any period you feel is appropriate. Assuming that your processor is using the factory-calibrated $4\:\textrm{MHz}$ rate, the instruction cycle will be $1\:\mu\textrm{s}$ and it will take $8{,}000$ cycles to make up an $8\:\textrm{ms}$ period.

The delay code should probably be made into a subroutine, to avoid having to replicate it over and over.

```
DELAY8MS
        MOVLW   0x3E        ; DLO = 62
        MOVWF   DLO
        MOVLW   0x07        ; DHI = 7
        MOVWF   DHI
        DECFSZ  DLO, F      ; inner count-down
        GOTO    $+2
        DECFSZ  DHI, F      ; outer count-down
        GOTO    $-3
        NOP                 ; padding so the total comes out exact
        GOTO    $+1
        RETURN
```

The total time occupied by the above routine can be computed as:

$$t=5\cdot\left[D_{LO}+2+256\cdot\left(D_{HI}-1\right)\right]$$

where $1 \le D_{LO}\le 256$ and $1 \le D_{HI}\le 256$, with 0 interpreted as 256. The CALL and RETURN instructions take up 2 cycles each, and the above code takes all of that into account. Calling it should take exactly $8{,}000$ cycles and, at $4\:\textrm{MHz}$, this means $8\:\textrm{ms}$.

You will have to create those two variables, $D_{LO}$ and $D_{HI}$, somewhere. That can be done like this, I think:

```
        CBLOCK
            DLO
            DHI
        ENDC
```

There are, of course, other ways. And you can add an absolute address to the CBLOCK line if you want to place the block somewhere specific.

Now that you have a delay routine, you can proceed to the next step. You need two new routines: one that repeatedly delays until the button becomes active and one that repeatedly delays until the button becomes inactive. Debouncing is included here:

```
ACTIVE
        CALL    DELAY8MS
        BTFSC   PORTx, PINy     ; skip next if the button reads active (0)
        GOTO    ACTIVE
        CALL    DELAY8MS        ; second sample confirms the debounce
        BTFSC   PORTx, PINy
        GOTO    ACTIVE
        RETURN

INACTIVE
        CALL    DELAY8MS
        BTFSS   PORTx, PINy     ; skip next if the button reads inactive (1)
        GOTO    INACTIVE
        CALL    DELAY8MS        ; second sample confirms the debounce
        BTFSS   PORTx, PINy
        GOTO    INACTIVE
        RETURN
```

I don't know your port or pin number, so I just put in "dummy" values there. You need to replace them properly. The above two routines assume that 0 is active and 1 is inactive.

Now you can write your main code:

```
MAIN
        ; <code to reset your counter value>
        GOTO    LOOP_NXT
LOOP
        CALL    ACTIVE
        ; <code to increment your counter value>
LOOP_NXT
        ; <code to display your counter value>
        CALL    INACTIVE
        GOTO    LOOP
```

The above code resets your counter value to whatever you want to start at and then jumps into the loop, where it displays the value and waits for the button to become inactive. The effect here is that if you start up your code with the button pressed (it should not be, but what if it is?), then the code will still reset the counter and display it... but it will wait until you release the button before continuing. So you have to let up on the switch.
Then, once that has happened, the basic loop just waits for a debounced active state of the switch. When it sees that, it increments the counter immediately (on the press, not on the release) and then waits for the button to be released before continuing again. That's about it. You still need to write appropriate code for the counter and display, but that gets the idea across for the rest.
# No need to normalize this quaternion

## Recommended Posts

Just a quicky. I'm not gonna pretend like I know what I'm talking about. Quaternions are pretty new to me (Heh, [looksaround] ..matrices are pretty new to me). From what I understand, a normalized unit quaternion is one where the square root of the sum of x, y, z and w squared is 1, right? Or something like this:

```cpp
(x*x) + (y*y) + (z*z) + (w*w) == 1
```

If I have two unit quaternions, and I want to add them together, do I need to normalize if the percentages I'm adding together will always sum to 1? Example:

```cpp
float perc = 0.7f;
float opp_perc = 1.0f - perc;

// these two quaternions come from elsewhere, and are both
// always units and normalized..
QUAT a, b;

// interpolated result
QUAT c = (a * perc) + (b * opp_perc);
```

Will quaternion c always be a normalized unit? Testing it seems great, but my lack of knowledge in this area means there could be deep holes somewhere down the road. Thanks for any advice.

---

I may have made a mistake in the following, but...

```
a = (w,x,y,z) with ww+xx+yy+zz=1
b = (W,X,Y,Z) with WW+XX+YY+ZZ=1

c = p (w,x,y,z) + (1-p) (W,X,Y,Z)
  = (W+pw-pW, X+px-pX, Y+py-pY, Z+pz-pZ)

||c|| = (W+pw-pW)^2 + (X+px-pX)^2 + (Y+py-pY)^2 + (Z+pz-pZ)^2
      = WW + ppww + ppWW + 2Wpw - 2WpW - 2ppwW
      + XX + ppxx + ppXX + 2Xpx - 2XpX - 2ppxX
      + YY + ppyy + ppYY + 2Ypy - 2YpY - 2ppyY
      + ZZ + ppzz + ppZZ + 2Zpz - 2ZpZ - 2ppzZ
      = 1 + pp + pp + 2p ( Ww - WW - pwW + Xx - XX - pxX
                         + Yy - YY - pyY + Zz - ZZ - pzZ )
      = 1 - 2p + 2pp + 2p(1-p) ( Ww + Xx + Yy + Zz )
      = 1 + 2p(1-p)(a.b - 1)
```

Which won't be equal to 1 unless a.b = 1, p=0 or p=1.

[Edited by - Fruny on September 19, 2004 7:18:00 PM]

---

Nope, you can't guarantee that. Quaternion addition is no different to vector addition, so think of it like a 2D vector. (EDIT: Well, a 4D vector, but the number of dimensions doesn't matter and 2D is easy to do ASCII art for [smile])

```
^
|
|  A
|
+ <-------- B
```

Is the length of A+B equal to the length of A + the length of B? Nope.

---

Hmm.. I think you'd more than likely need to renormalize it. I haven't checked, but I'm thinking unless A == B or perc == 0 or 1, you'll never get a normalized result. To visualize, think about the case of 2D vectors:

```
 ^    ^    ^
  \   |   /
 B \ C|  / A
    \ | /
     \|/
```

If A and B are normalized (length == 1), then any vector between them (C) is going to have a length less than 1. Quaternions are 4D, but the idea is the same.

And just as an aside, if you're using linear interpolation to interpolate quaternions for skeletal animation, you'd be able to get better (smoother) results by using spherical linear interpolation (slerp).

-nohbdy

---

You're saying that the result of adding them will not be a unit, or that a unit is not calculated in the way that I mentioned? I can't understand why. If the sum of 4 numbers is 1, and another sum of 4 numbers is 1, isn't this always true?

```
(sum1 * factor) + (sum2 * (1.0 - factor)) == 1?

// In other words, sum1 * 0.5 should make sum1 equal 0.5
// instead of 1. Right? sum1 * 0.7 makes the sum equal to 0.7.
(0.10 + 0.60 + 0.20 + 0.10) * 0.5 == (0.05 + 0.30 + 0.10 + 0.05)
(0.30 + 0.20 + 0.10 + 0.40) * 0.5 == (0.15 + 0.10 + 0.05 + 0.20)
// so
(0.05 + 0.30 + 0.10 + 0.05) + (0.15 + 0.10 + 0.05 + 0.20)
// results in
(0.20 + 0.40 + 0.15 + 0.25)
// HEAVILY edited - sorry for messing this up so bad
```

Where am I getting lost?
---

Quote: Original post by Jiia — If the sum of 4 numbers is 1, and another sum of 4 numbers is 1, isn't this always true?

We don't care about the sum of the numbers, we care about the sum of their squares:

```
(pw+(1-p)W)^2 + (px+(1-p)X)^2 + (py+(1-p)Y)^2 + (pz+(1-p)Z)^2
```

---

Here are some printed-out values of real in-game quaternions. Am I accidentally getting values that work? It always seems to work..? *** Source Snippet Removed *** Format is:

```
  (Ax, Ay, Az, Aw) * opp_perc
+ (Bx, By, Bz, Bw) * perc
= (Cx, Cy, Cz, Cw) (sum of C's squared values)
```

```
  (-0.016, -0.197, 0.007, 0.980) * 0.210
+ (-0.016, -0.218, 0.006, 0.976) * 0.790
= (-0.016, -0.214, 0.006, 0.977) (sum of squares = 1.000)

  (-0.016, -0.229, 0.006, 0.973) * 0.320
+ (-0.015, -0.248, 0.008, 0.969) * 0.680
= (-0.016, -0.242, 0.008, 0.970) (sum of squares = 1.000)

  (-0.015, -0.259, 0.011, 0.966) * 0.460
+ (-0.013, -0.267, 0.015, 0.964) * 0.540
= (-0.014, -0.263, 0.013, 0.965) (sum of squares = 1.000)

  (-0.012, -0.281, 0.020, 0.959) * 0.540
+ (-0.010, -0.289, 0.025, 0.957) * 0.460
= (-0.011, -0.285, 0.022, 0.958) (sum of squares = 1.000)

  (-0.006, -0.313, 0.033, 0.949) * 0.590
+ (-0.000, -0.320, 0.040, 0.947) * 0.410
= (-0.004, -0.316, 0.036, 0.948) (sum of squares = 1.000)

  (0.012, -0.021, 0.080, 0.997) * 0.880
+ (0.006, 0.035, 0.079, 0.996) * 0.120
= (0.011, -0.015, 0.079, 0.996) (sum of squares = 1.000)

  (-0.015, 0.149, 0.061, 0.987) * 0.810
+ (-0.017, 0.174, 0.054, 0.983) * 0.190
= (-0.016, 0.154, 0.060, 0.986) (sum of squares = 1.000)

  (-0.016, -0.197, 0.007, 0.980) * 0.990
+ (-0.016, -0.218, 0.006, 0.976) * 0.010
= (-0.016, -0.197, 0.007, 0.980) (sum of squares = 1.000)
```

[Edited by - Jiia on September 19, 2004 8:19:08 PM]

---

Quote: Original post by Jiia — Here are some printed-out values of real in-game quaternions. [...]

First, those test quaternions are all very similar, so they will produce similar results. Second, you're cutting off precision:

-0.004^2 + -0.316^2 + 0.036^2 + 0.948^2 = 0.999872

Try using extreme cases, such as a rotation of 90 degrees around the X axis interpolated with a 90-degree rotation around the Y axis. You may be lucky in that in-game values are 'close enough' that occasional normalisation to avoid numerical errors will hide the problem. But then again, you may not.

---

Quote: Original post by joanusdmentia — First, those test quaternions are all very similar [...] But then again, you may not.

Actually, those are in-game bones animating while running. But that's all I needed to hear. Thanks [smile]

---

A really obvious example would be:

Q1 = (1, 0, 0, 0)
Q2 = (-1, 0, 0, 0)

Both have unit length.

Q3 = 0.5*(1, 0, 0, 0) + 0.5*(-1, 0, 0, 0) = (0, 0, 0, 0)

Which isn't unit length by any stretch of the imagination.
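The practical upshot of the thread, as a sketch (assuming a simple QUAT struct with component-wise arithmetic like the one in the original post; the normalize step is the standard "nlerp" fix and is editorial, not code from any poster):

```cpp
#include <cmath>

struct QUAT { float x, y, z, w; };

// Blend two unit quaternions and renormalize the result ("nlerp").
// The lerp alone shortens the quaternion (see Fruny's derivation),
// so we divide by the new length to restore unit magnitude.
// (Degenerate when a == -b, where the lerp collapses to zero length.)
QUAT nlerp(const QUAT& a, const QUAT& b, float perc)
{
    float opp = 1.0f - perc;
    QUAT c = { a.x * perc + b.x * opp,
               a.y * perc + b.y * opp,
               a.z * perc + b.z * opp,
               a.w * perc + b.w * opp };

    float len = std::sqrt(c.x*c.x + c.y*c.y + c.z*c.z + c.w*c.w);
    c.x /= len; c.y /= len; c.z /= len; c.w /= len;
    return c;
}
```

As nohbdy notes, slerp gives smoother results for widely separated rotations; nlerp is cheaper and is usually adequate for close keyframes like the running-animation samples above.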
# Multi-step protocol for HTVS

For high-throughput virtual screening (HTVS) applications, where computing performance is important, the recommended RxDock protocol is to limit the search space (i.e. rigid receptor), apply the grid-based scoring function, and/or use a multi-step protocol to stop sampling of poor scorers as soon as possible. Using a multi-step protocol for the DUD system COMT, the computational time can be reduced 7.5-fold without affecting performance by:

1. Running 5 docking runs for all ligands;
2. ligands achieving a score of -22 or lower run 10 further runs;
3. for those ligands achieving a score of -25 or lower, continue up to 50 runs.

The optimal protocol is specific to each particular system and parameter set, but can be identified with a purpose-built script (see the Reference guide, section rbhtfinder). Here you will find a tutorial showing how to create and run a multi-step protocol for a HTVS campaign.

## Step 1: Create the multi-step protocol

These are the instructions for running rbhtfinder:

1st) Run an exhaustive docking of a small, representative part of the whole library.
2nd) Store the result of sdreport -t over that exhaustive dock in a file that will be the input of this script.
3rd) rbhtfinder <sdreport_file> <output_file> <thr1max> <thr1min> <ns1> <ns2>

<ns1> and <ns2> are the number of steps in stage 1 and in stage 2. If not present, the default values are 5 and 15. <thr1max> and <thr1min> set up the range of thresholds that will be simulated in stage 1. The threshold of stage 2 depends on the value of the threshold of stage 1. An input of -22 -24 will try the protocols:

```
5 -22 15 -27
5 -22 15 -28
5 -22 15 -29
5 -23 15 -28
5 -23 15 -29
5 -23 15 -30
5 -24 15 -29
5 -24 15 -30
5 -24 15 -31
```

The output of the program is 7 columns of values. The first column represents the time; this is a percentage of the time it would take to do the docking in exhaustive mode, i.e. docking each ligand 100 times. Anything above 12 is too long. The second column is the first percentage: the percentage of ligands that pass the first stage. The third column is the second percentage: the percentage of ligands that pass the second stage. The four last columns represent the protocol. All the protocols tried are written at the end. The ones for which time is less than 12%, perc1 is less than 30% and perc2 is less than 5% but bigger than 1% will have a series of *** after them, to indicate they are good choices.

WARNING! This is a simulation based on a small set. The numbers are an indication, not factual values.

### Step 1, substep 1: Exhaustive docking

Hence, as stated, the first step is to run an exhaustive docking of a representative part of the whole library to be docked. For RxDock, exhaustive docking means doing 100 runs for each ligand, whereas standard docking means 50 runs for each ligand.

### Step 1, substep 3: rbhtfinder script

The last step is to run the rbhtfinder script (download sdreport_results.txt for testing).
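The command listings for both substeps appear to have been lost in extraction. A plausible reconstruction, assuming RxDock's standard command-line tools: the file names receptor.prm, dock.prm and ligands.sd are placeholders, and the exact sdreport invocation may differ in your version.

```bash
# Substep 1: exhaustive docking (100 runs per ligand) of the small,
# representative subset, then tabulate the scores with sdreport -t.
rbdock -i ligands.sd -o exhaustive_out -r receptor.prm -p dock.prm -n 100
sdreport -t exhaustive_out.sd > sdreport_results.txt

# Substep 3: simulate multi-step protocols over the tabulated scores,
# here trying stage-1 thresholds from -22 to -24 with 5 and 15 runs.
rbhtfinder sdreport_results.txt protocol_simulations.txt -22 -24 5 15
```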
# An Inequality in Triangle, V

The inequality below establishes a relation between the medians and the exradii of a triangle:

$m_am_bm_c\ge r_ar_br_c.$

The inequality was posted at the mathematical inequalities facebook group by Daniel Culea, who found it in the book 360 Problems for Mathematical Contests by Titu Andreescu and Dorin Andrica. I am grateful to Leo Giugiuc for reposting the problem at the CutTheKnotMath facebook page and communicating to me a brilliant solution he and Dan Sitaru came up with.

### Proof

We invoke a result proved previously, viz., $m_al_a\ge p(p-a),$ where $l_a$ is the length of the bisector of angle $A,$ and $p$ the semiperimeter of $\Delta ABC.$ Similarly $m_bl_b\ge p(p-b)$ and $m_cl_c\ge p(p-c).$

In any triangle and at any vertex the median is farther from the altitude than the angle bisector, showing that, say, $m_a\ge l_a.$

Thus, by Heron's formula,

$m_a^2m_b^2m_c^2\ge m_al_a\cdot m_bl_b\cdot m_cl_c\ge p^3(p-a)(p-b)(p-c)=p^2S^2,$

where $S=[\Delta ABC],$ the area of $\Delta ABC.$

On the other hand, since $r_a=\frac{S}{p-a}$ and similarly for the other exradii, Heron's formula also gives $r_ar_br_c=\frac{S^3}{(p-a)(p-b)(p-c)}=pS.$ Taking square roots in the display above proves the required inequality.
## Definition of limit (multivariable)

In mathematics, a limit is the value that a function (or sequence) approaches as the input (or index) approaches some value. Informally, a function f assigns an output f(x) to every input x; we say that the function has a limit L at an input p if f(x) gets closer and closer to L as x gets closer and closer to p. Limits are essential to calculus and mathematical analysis, and are used to define continuity, derivatives, and integrals.

Formal definitions, first devised in the early 19th century, make this precise. The modern (epsilon-delta) definition reads: to say that the limit of f(x) as x approaches a is equal to L means that we can make the value of f(x) lie within a distance of epsilon units from L simply by keeping x within some distance delta of a. The ε-δ definition is an algebraically precise formulation of evaluating the limit of a function, and we use it to guide the formal definition of a limit for multivariable functions. A trivial single-variable example: $\lim_{x \to 0} x^2 = 0$.

For functions of two variables, the idea of a disk appears in the definition: f(x, y) must be close to L for every point (x, y) in a small disk centered on (a, b), excluding (a, b) itself. The definition refers only to the distance between (x, y) and (a, b); it does not refer to the direction of approach. While in one variable only the left and right limits must match for the limit to exist, a multivariable limit must take the same value along every path of approach. Multivariable limits are therefore harder than their one-variable counterparts, and textbook examples usually focus on limits that fail to exist when approached along different straight lines.

Computing limits directly from the definition is rather cumbersome. Sometimes a limit is trivial to calculate: as in single-variable calculus, if the function is continuous at the point (a, b), then simply substitute the values of (a, b) into the function,

$$\lim_{(x,y)\to(a,b)} f(x,y) = f(a,b).$$

However, there are also many limits for which this won't work easily. When direct substitution fails (for instance for a quotient h(x) = f(x)/g(x) whose denominator vanishes at a), one can instead find a function that is equal to h(x) for all x ≠ a over some interval containing a, and then apply the limit laws to that function.
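A standard illustration of the path requirement (an editorial example, not recovered from the original page): the function below has different limits along different lines through the origin, so its limit at (0, 0) does not exist.

```latex
\[
f(x,y)=\frac{xy}{x^{2}+y^{2}}:\qquad
\lim_{\substack{(x,y)\to(0,0)\\ y=0}} f=\lim_{x\to 0}\frac{0}{x^{2}}=0,
\qquad
\lim_{\substack{(x,y)\to(0,0)\\ y=x}} f=\lim_{x\to 0}\frac{x^{2}}{2x^{2}}=\tfrac{1}{2}.
\]
```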
# Math Help - Hyperbolic integration problem

1. ## Hyperbolic integration problem

So here's the problem.

Let $I_r = \int_{0}^{\ln 2}\tanh^{2r}\theta\, d\theta$. Show that

$I_{r-1}-I_r = \frac{1}{2r-1}\left(\frac{3}{5}\right)^{2r-1}$

My initial approach was to go by integration by parts, letting $v'=\tanh x$ and $u=(\tanh x)^{2r-1}$; this leads me to

$I_r =\left[ \ln(\cosh \theta)\tanh^{2r-1} \theta \right]^{\ln 2}_0 - (2r-1)\int_{0}^{\ln 2}\operatorname{sech}^2 \theta\,\ln (\cosh \theta)\tanh^{2r-2}\theta\, d\theta$

Now this has a $(2r-1)$ in it and also a $\tanh^{2r-2} \theta= \tanh^{2(r-1)}\theta$, which leads me to think I might be on the right track. However it looks such a mess that I can't help but think I've made a mistake somewhere. If I haven't, then I really can't see how to progress from here either.

Any help is much appreciated,

Stonehambey

2. Originally Posted by Stonehambey
So here's the problem. Let $I_r = \int_{0}^{\ln 2}\tanh^{2r}\theta\, d\theta$. Show that $I_{r-1}-I_r = \frac{1}{2r-1}\left(\frac{3}{5}\right)^{2r-1}$

Write $I_r = \int_0^{\ln 2} \tanh^{2(r-1)}\theta \cdot \tanh^2 \theta\, d\theta$. Remember the identity $\tanh^2 \theta = 1 - \operatorname{sech}^2 \theta$. Thus,

$I_r = \int_0^{\ln 2} \tanh^{2(r-1)} \theta\, d\theta - \int_0^{\ln 2} \tanh^{2(r-1)} \theta \cdot \operatorname{sech}^2 \theta\, d\theta$

In the second integral let $x = \tanh \theta \implies x' = \operatorname{sech}^2 \theta$; also $x(0)=0$ and $x(\ln 2) = \tfrac{3}{5}$ (since $\tanh(\ln 2) = \frac{2-\frac{1}{2}}{2+\frac{1}{2}} = \frac{3}{5}$). Thus,

$I_r = I_{r-1} - \int_0^{3/5} x^{2(r-1)}\, dx$

3. Or so

$I_r = \int\limits_0^{\ln 2} \tanh^{2r}(\theta)\, d\theta = \int\limits_0^{\ln 2} \sinh^{2r-1}(\theta)\,\frac{\sinh(\theta)}{\cosh^{2r}(\theta)}\, d\theta.$

$\int\limits_0^{\ln 2} \frac{\sinh(\theta)}{\cosh^{2r}(\theta)}\, d\theta = \int\limits_0^{\ln 2} \frac{d(\cosh(\theta))}{\cosh^{2r}(\theta)} = \int\limits_0^{\ln 2} \left[\cosh(\theta)\right]^{-2r} d(\cosh(\theta)) = \left. \frac{\left[\cosh(\theta)\right]^{1-2r}}{1-2r} \right|_0^{\ln 2} = \left. -\frac{1}{(2r-1)\cosh^{2r-1}(\theta)} \right|_0^{\ln 2}.$

So we have

$I_r = \left. -\frac{1}{2r-1}\,\frac{\sinh^{2r-1}(\theta)}{\cosh^{2r-1}(\theta)} \right|_0^{\ln 2} + \frac{1}{2r-1}\int\limits_0^{\ln 2} \frac{d\left(\sinh^{2r-1}(\theta)\right)}{\cosh^{2r-1}(\theta)} = -\frac{1}{2r-1}\left(\frac{3}{5}\right)^{2r-1} + \int\limits_0^{\ln 2} \tanh^{2(r-1)}(\theta)\, d\theta$

$= -\frac{1}{2r-1}\left(\frac{3}{5}\right)^{2r-1} + I_{r-1} \Leftrightarrow I_{r-1} - I_r = \frac{1}{2r-1}\left(\frac{3}{5}\right)^{2r-1}.$
# Graphs

Graphs are data structures used to represent road networks, the web, social networks, etc. Moreover, hundreds of computational problems are defined on graphs. They have two main ingredients:

• $Vertices\ (V)$, also known as nodes.
• $Edges\ (E)$: pairs of vertices, which can be undirected or directed.

# Quicksort

Quicksort is a sorting algorithm that applies the divide-and-conquer paradigm. Quicksort has a worst-case running time of $O(n^{2})$; however, it runs in $O(n \log n)$ time on average, which makes it very efficient. Moreover, it works in place but is not stable. The performance of quicksort depends on selecting the pivot and partitioning around it. During the partition procedure the subarray is divided into four regions: $\leq x$, $> x$, unrestricted, and finally the pivot itself.

# Selection Sort

Selection sort is an in-place comparison sort algorithm with $O(n^2)$ running time in both the best and worst case. Even though its logic is similar to insertion sort, it is a very inefficient sorting algorithm. The list is divided into two parts, sorted and unsorted, where initially the sorted part is empty. In every iteration, the algorithm finds the minimum (or maximum) key in the unsorted sublist, then swaps it with the leftmost element of the unsorted part. Because elements far apart can get swapped, selection sort is not stable.

# Merge Sort

Merge sort is another comparison-based sorting algorithm. It closely follows the divide-and-conquer paradigm and provides $O(n \log n)$ running time in the worst and average cases, at the cost of $O(n)$ space complexity. Merge sort can be used for sorting huge data sets that do not fit into memory. It is also a stable sort, which preserves the order of equal elements.

# Binary Search

Binary search is a searching algorithm that works on a sorted (ascending or descending) array. In each turn, the given key is compared with the key at the middle of the array; if they are equal, that index is returned. Otherwise, if the given key is less than the key at the middle, binary search continues on the left subarray $( A[0..mid-1] )$, and similarly on the right subarray $( A[mid+1..N-1] )$ if the given key is greater. If the given key cannot be found, an indication of "not found" is returned.
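Since the description above is purely verbal, here is a minimal runnable sketch of the iterative variant of binary search (assuming an ascending array; the function name and the `-1` "not found" value are my choices, not from the original notes):

```python
def binary_search(a, key):
    """Return the index of key in the ascending sorted list a, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2        # compare against the middle element
        if a[mid] == key:
            return mid
        elif a[mid] < key:
            lo = mid + 1            # continue on the right subarray A[mid+1..N-1]
        else:
            hi = mid - 1            # continue on the left subarray A[0..mid-1]
    return -1                       # indication of "not found"

assert binary_search([2, 3, 5, 7, 11], 7) == 3
assert binary_search([2, 3, 5, 7, 11], 4) == -1
```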
Jan's Blog on DFIR, TI, REM,....

23 Oct 2021

# Gradual Evidence Acquisition From an Erroneous Drive

## tl;dr

A hard drive is a relatively fragile data store: after the first indicators of a drive failure appear, the drive might die suddenly. This blog post therefore discusses the gradual acquisition of evidence from an erroneous drive by utilizing the synergy of the open source tools ddrescue and partclone. In order to spare the mechanics of the drive and acquire the most critical data first, partclone is used to create a so-called domain file, in which the used blocks of the file system are noted.

```
# Record actually used blocks in domainfile
partclone.[fstype] -s /dev/sdXY -D --offset_domain=$OFF_IN_BYTES \
  -o domainfile
```

This is the basis for ddrescue's recovery process, where a blocksize-changing data recovery algorithm is utilized that will only cover these areas for the moment.

```
# Restrict rescue area via domainfile
ddrescue --domain-log domainfile /dev/sdX out.img mapfile
```

Afterwards, additional runs can be conducted to acquire the remaining sectors.

## Background

Since HDDs are rather sensitive mechanical components, it is not uncommon for them to exhibit read errors after a certain amount of usage and wear and tear of the magnetic platters, or alternatively as a consequence of shock events, which lead to mechanical damage inside the drive. So-called head crashes, which most commonly occur when the HDD drops during regular operation, might be lethal for an HDD and would require a complete dismantling of the drive in a specialized laboratory. Grinding sounds are typical of such a scenario and require an immediate stop of operation. However, minor shock events and/or impacts while the actuator arm is in its "parking position" might not lead to great physical damage, but result in mechanical malfunctioning and read/write errors. This regularly leads to clicking, knocking or ticking noises, which stem from abnormal behaviour of the disk's read-and-write head when it is repeatedly trying to read a sector. If a hard disk makes noise, data loss is likely to occur in the near future or has already happened. Grinding or screeching noise should be an indicator to power down the device immediately and hand it over to a specialized data recovery laboratory, in order to secure the remaining evidence. Given minor clicking or knocking noise, one might try to recover the data with the help of specialized software as soon as possible, as discussed in this blog post.

## Acquisition of data from erroneous drives

### Standard approach with GNU ddrescue

GNU ddrescue is the go-to tool for performing data recovery tasks with open source tooling 1. It maximizes the amount of recovered data by reading the unproblematic sectors first and scheduling areas with read errors for later stages, keeping track of all visited sectors in a so-called mapfile. ddrescue has an excellent and exhaustive manual to consult 2. To give a first glimpse, ddrescue's procedure, which employs a block-size-changing algorithm, can be summarized as follows. By default, its operation is divided into four phases, where the first and last can be subdivided into passes; each phase consults the mapfile to keep track of the status of each sector (or area).

1. Copying: Read non-tried parts, forwards and backwards, with increasing granularity in each pass. Record the blocks which could not be read as non-trimmed in the mapfile.

2.
Trimming: Blocks which were marked as non-trimmed are trimmed in this phase, meaning ddrescue reads from the edge forward sector by sector until a read error is encountered, then reads the sectors backwards from the edge at the block's end until a sector read fails, and keeps track of the sectors in between as non-scraped in the mapfile.

3. Scraping: In this phase every non-scraped block is scraped forward sector by sector, while unreadable sectors are marked as bad.

4. Retrying: Lastly, the bad sectors can be read $n$ times, with reversed direction on each try; this is disabled by default and can be set via the parameter --retry-passes=n.

Unreadable sectors are filled with zeros in the resulting image (or device). Using ddrescue with its sane default settings is as simple as running

```
ddrescue /dev/sdX out.img mapfile
```

In order to activate direct disk access and omit kernel caching, one must use -d/--idirect and set the sector size via -b/--sector-size. An indicator of kernel caching is that the positions and sizes in the mapfile are always a multiple of the sector size 3.

```
# Check the disk's sector size
SECTOR_IN_BYTES=$(cat /sys/block/sdX/queue/physical_block_size)

# Run ddrescue with direct disk access
ddrescue -d -b $SECTOR_IN_BYTES /dev/sdX out.img mapfile
```

### Gradual approach by combining partclone and ddrescue

While the straightforward sector-by-sector copying of a failing HDD with ddrescue often yields good results, it might be very slow. Given that acquiring evidence after a damage is a race against the clock, because with every rotation of the platter the probability of an ultimate drive failure increases, one might want to ensure that critical data gets acquired first by determining the actually used blocks of the filesystem and prioritizing those 4. To accomplish this, the open source tool for cloning partitions, partclone, comes into the (inter)play with ddrescue. partclone "provide[s] utilities to backup used blocks" and supports most of the widespread filesystems, like ext{2,3,4}, btrfs, xfs, NTFS, FAT, ExFAT and even Apple's HFS+ 5. One of its features is the ability to list "all used blocks as domain file", so that "it could make ddrescue smarter and faster when dumping a partition" 4. partclone operates in a similar manner to ddrutility's tool ddru_ntfsbitmap, which extracts the bitmap file from an NTFS partition and creates a domain file 6, but it works with other filesystems as well by looking at their block allocation structures to determine the used blocks and store those in the aforementioned domain mapfile 7. The term rescue domain describes the "[b]lock or set of blocks to be acted upon" 8. By specifying --domain-mapfile=file, the tool is restricted to look only at areas which are marked with a + 9.

#### Generating a domain mapfile

To generate a domain file, simply use partclone with the -D flag and specify the resulting domain file via -o:

```
partclone.[fstype] -s /dev/sdXY -D -o sdXY.mapfile
```

If you want to run ddrescue on the whole disk and not just the partition, in order to image the whole thing iteratively, it is necessary to use --offset_domain=N, which specifies the offset in bytes to the start of the partition. This will be added to all position values in the resulting domain mapfile.
To create such a file, use the following commands:

```
# Retrieve the offset in sectors
OFF_IN_SECTORS=$(mmls /dev/sdX | awk '{ if ($2 == "001") print $3 }')

# Retrieve sector size
SECTOR_IN_BYTES=$(mmls /dev/sdX | grep -P 'in\s\d*\-byte sectors' | \
  grep -oP '\d*')

# Calculate offset
OFF_IN_BYTES=$((OFF_IN_SECTORS * SECTOR_IN_BYTES))

# Create domain file
partclone.[fstype] -s /dev/sdXY -D --offset_domain=$OFF_IN_BYTES \
  -o domainfile
```

The resulting domain file looks like the following listing:

```
cat domainfile
# Domain logfile created by unset_name v0.3.13
# Source: /dev/sdXY
# Offset: 0x3E900000
# current_pos current_status
0xF4240000 ?
# pos size status
0x3E900000 0x02135000 +
0x40A35000 0x05ECB000 ?
0x46900000 0x02204000 +
0x48B04000 0x005FC000 ?
<snip>
```

The offset at the top denotes the beginning of the file system. The current_pos corresponds to the last sector used by the file system. Used areas are marked with a + and unused areas with a ? 7.

#### Acquiring the used blocks with ddrescue

To acquire with ddrescue only those areas which are actually used by the file system, and which have therefore been denoted with a + in the domain file, use the following command:

```
# Clone only blocks, which are actually used (of part Y)
ddrescue --domain-log domainfile /dev/sdX out.img mapfile

# Check if acquisition was successful
fsstat -o $OFF_IN_SECTORS out.img
```

Since you already know the offset, you might skip cloning the partition table on the first run. After completion of the aforementioned command, you can be sure that the mission-critical file system blocks have been acquired, which can be double-checked by diffing the domain file and the mapfile, like this: diff -y domainfile mapfile.

#### Acquiring the remaining blocks with ddrescue

The additional sectors of the disk, which might contain previously deleted data, can then be imaged in a subsequent and lengthy run without having to fear a definitive drive failure too much. To do this, simply supply ddrescue the mapfile generated in the previous run, without restricting the rescue domain this time, so that it will add the remaining blocks, which were previously either zero-filled or omitted entirely:

```
# Clone remaining blocks
ddrescue /dev/sdX out.img mapfile

# Check result by inspecting the partition table
mmls out.img
```

After the completion of this procedure, which is a fragile process on its own, some kind of integrity protection should be employed, even though the source media could not be hashed itself. For example, this could be done by hashing the artifacts and signing the resulting file which contains the hashes.

## Summary

This blog post discussed the usage of ddrescue as well as the gradual imaging of damaged drives. In order to acquire mission-critical data first and rather timely, partclone was used to determine the blocks which are actually used by the file system residing on the partition in question. This information was recorded in a so-called domain file and fed to ddrescue via the command line parameter --domain-log, so that the tool limits its operation to the blocks specified therein. Afterwards, another lengthy run could be initiated to image the remaining sectors.

# Logical imaging with AFF4-L

<2021-08-03>

## tl;dr

Using AFF4-L containers is an efficient and forensically sound way of storing selectively imaged evidence. There exist two open source implementations to perform this task: c-aff4 and pyaff4.
To image a directory with aff4.py, run:

```
python3 aff4.py --verbose --recursive --create-logical \
  $(date +%Y-%m-%d)_EVIDENCE_1.aff4 /path/to/dir
```

If you have to acquire logical files on Windows systems, c-aff4 can be used with Powershell in a convenient way to recurse the directory of interest:

```
Get-ChildItem -Recurse C:\Users\sysprog\Desktop\ | ForEach {$_.FullName} |
  .\aff4imager.exe --input '@' --output .\$(Get-Date -UFormat '+%Y-%m-%dT%H%M%S')_EVIDENCE_2.aff4
```

Using the @-sign as the input filename (after --input) makes aff4imager read the list of files to acquire from stdin and place those in the resulting container, specified by --output. To list the acquired files stored in the container and the corresponding metadata, run:

```
# List files in container, --list
.\aff4imager.exe -l .\container.aff4

# View metadata, --view
.\aff4imager.exe -v .\container.aff4
```

For the full documentation on aff4imager.exe's usage, refer to the official documentation at http://docs.aff4.org/en/latest/. While this worked well for me, aff4imager.exe does not seem to be an overall mature tool. Once I got an error when I tried to image a large directory with the -i '@' syntax on a Debian system. Another time I was very surprised that aff4imager.exe (Commit 657dc28) truncates an existing output file without asking for approval, although the help page states it would append to an existing container by default. So please take this warning seriously, use the tool only after extensive testing, and consider contributing some bug fixes eventually.

## Verdict

Logical imaging is nowadays often the preferred way of acquiring digital evidence. AFF4-L seems to be the best container format defined in an open standard. pyaff4 and aff4imager (provided by c-aff4) are two open source tools to image files logically "into" AFF4-L containers. This blog post gave an introduction to their usage. Unfortunately, the AFF4-L container format and the corresponding imaging tools do not seem to get the momentum and attention of the open source community that they deserve, which might change as their popularity grows.

## Footnotes:

4 Schatz, B. L. (2019). AFF4-L: a scalable open logical evidence container. Digital Investigation, 29, S143-S149. https://www.sciencedirect.com/science/article/pii/S1742287619301653

5 Ibid, p. 15

9 The oneliner version is: Get-ChildItem -Recurse C:\Users\sysprog\Desktop\ | ForEach {$_.FullName} | .\aff4imager.exe -i '@' -o .\$(Get-Date -UFormat '+%Y-%m-%dT%H%M%S')_EVIDENCE_2.aff4

# Analyzing VM images

<2021-06-20>

## tl;dr:

Virtualization is everywhere nowadays, so when you have to perform an analysis of an incident, you often come across virtual hard disks in the form of sparsely allocated VMDKs, VDIs, QCOW2s and the like. To inspect the data in the virtual machine images you have several options. guestmount is a helpful tool to perform a logical inspection of the filesystem, while qemu-nbd is a good choice if you want to work on a raw block device without having to convert a sparsely allocated virtual disk into a raw image or rely on proprietary software.

## Motivation

Popular open source forensic analysis suites like Sleuthkit cannot handle sparsely allocated VMDKs and several other virtual machine image formats, like VDI, QCOW2 etc. Since virtualized systems are part of almost every investigation, a conversion of some kind is often needed, which can be cumbersome if you decide to convert each and every piece of evidence to raw format or .E01.
Furthermore, you might depend on proprietary tools like vmware-mount, which are not freely available. Maybe you have heard that Sleuthkit can handle VMDKs. This is only partly true, since it can handle the so-called flat format but not sparsely allocated VMDKs. To check whether you could ingest the .vmdk file in question, you have to look at the so-called text descriptor within the .vmdk, which describes the layout of the data in the virtual disk 1 and starts at offset 0x200. It may look like the following snippet:

```
# Disk DescriptorFile
version=1
CID=7cecb317
parentCID=ffffffff
createType="monolithicSparse"

# Extent description
RW 4194304 SPARSE "NewVirtualDisk.vdi.vhd.vmdk"

# The disk Data Base
<snip>
```

Within this descriptor you have to look at the "createType" field, which specifies whether the disk is flat or monolithic and whether it was sparsely allocated or is of fixed size. To conveniently check for this, use the following command, which greps for the string within the byte range between offsets 0x200 and 0x400:

```
xxd -s 0x200 -l 0x200 file.vmdk | xxd -r | strings | grep -i sparse
```

If it is a sparsely allocated .vmdk file, some renowned practitioners in the forensics community recommend using qemu-img to convert it to a raw image 2, in order to be able to conduct a proper post-mortem analysis.

```
# Check the size of the resulting raw image
cat evidences/srv.ovf | grep '<Disk'

# Convert .vmdk to raw
qemu-img convert evidences/srv.vmdk ./srv.dd
```

While this works well, it is a time- as well as space-consuming endeavour, so I have been looking for alternative solutions.

## Inspecting sparsely-allocated virtual disks without conversion

### Using vmware-mount

One solution is – as already mentioned above – the usage of VMWare's tool vmware-mount, which creates a flat representation of the disk at a given mount point when the -f option is specified. This approach, however, requires a piece of proprietary software named Virtual Disk Development Kit (VDDK), which is not easily available (and of course not in Debian's package archives ;)).

### Using guestmount for content inspection

If you only need to look at and extract certain files, guestmount of the libguestfs library is a handy solution. Libguestfs is the library for accessing and modifying virtual machine images 3. After installing it via the package manager of your distribution, you can inspect the partitions inside the virtual machine image in question by using virt-filesystems. Once you have identified the partition of interest, you can mount it via guestmount – read-only of course – on a mount point of your forensics workstation. Alternatively you might explore it interactively with guestfish – the so-called guest filesystem shell. (If you install libguestfs-rescue, the tool virt-rescue comes with TSK already installed, but it is a bit cumbersome to use imho.) So you might use the following commands to get started with guestmount:

```
# Install it via apt
sudo apt-get install libguestfs-tools # Debian/Ubuntu

# Check which partition to mount
virt-filesystems -a disk.img -l

# Mount it via guestmount on a mount point of the host
guestmount -a evidences/srv.vmdk -m /dev/sda2 --ro ./mntpt

# Alternatively: Inspect it interactively with guestfish
guestfish --ro -a evidences/srv.vmdk -m /dev/sda2
```

At times just accessing the filesystem might not be enough. If this is the case, you might look at the following solution.
### Using qemu-nbd

To access the sparsely allocated virtual disk image as a raw device, it is advisable to use qemu-nbd, the QEMU Disk Network Block Device Server. Install the package containing it, named qemu-utils, via apt and load the NBD kernel module via modprobe, then use qemu-nbd to expose the virtual disk image as a read-only block device – in the following example /dev/nbd0. Then you can work on it with Sleuthkit or your favorite FS forensics tool.

```
# Install qemu-nbd
sudo apt install qemu-utils

# Load NBD kernel module
sudo modprobe nbd

# Check, that it was loaded
sudo lsmod | grep nbd

# Use QEMU Disk Network Block Device Server to expose .vmdk as NBD
# Note: make it read-only !
sudo qemu-nbd -r -c /dev/nbd0 evidences/srv.vmdk

# Check partition table
sudo mmls /dev/nbd0

# Access partitions directly
sudo fsstat /dev/nbd0p2

# Disconnect NBD
sudo qemu-nbd -d /dev/nbd0

# Remove kernel module
sudo rmmod nbd
```

IMHO this is the quickest and most elegant way of performing post-mortem analysis or triage on a sparsely allocated virtual machine disk image. There might be some alternatives, though.

### Specific tasks with vminspect

Another interesting solution is vminspect, a set of tools developed in Python for disk forensic analysis. It provides APIs and a command line tool for analysing disk images of various formats, relying on libguestfs under the hood. It focuses on the automation of virtual disk analysis and on safely supporting multiple file systems and disk formats. On the one hand it is not as generic as the previously presented solutions, but it offers some specific capabilities helpful in forensic investigations, like extracting event timelines of NTFS disks or parsing Windows Event Log files. To make it tasty for you and get you going, refer to the following command snippets taken from VMInspect's documentation 4:

```
# Compare two registries
vminspect compare --identify --registry win7.qcow2 win7zeroaccess.qcow2

# Extract the NTFS USN journal
vminspect usnjrnl_timeline --identify --hash win7.qcow2

# Parse event logs
vminspect eventlog win7.qcow2 C:\\Windows\\System32\\winevt\\Logs\\Security.evtx
```

If you know a better way of doing this or want to leave any notes, proposals or comments, please contact me under ca473c19fd9b81c045094121827b3548 at digital-investigations.info.

## Footnotes:

# Dump Linux process memory

<2021-05-27>

## tl;dr

If you need to acquire the process memory of a process running on a Linux system, you can use gcore 1 to create a core file, or alternatively retrieve its memory areas from /proc/<PID>/maps and use GDB 2 itself to dump the contents to a file. For a convenient way to do this, refer to a basic shell script hosted as a gist named dump_pmem.sh 3.

## Motivation

It is well known that process memory contains a wealth of information; therefore it is often needed to inspect the memory contents of a specific process. Since I wanted to write autopkgtests for the continuous integration of memory forensics software packaged as Debian packages, I was looking for a convenient way to dump process memory (preferably with on-board equipment).

## One-liner solution

I found a neat solution from A. Nilsson on serverfault.com 4, which I enhanced to create a single output file. Basically it reads all memory areas from the proc filesystem, which is a pseudo-filesystem providing an interface to kernel data structures 5, and then utilizes gdb's memory dumping capability to copy those memory regions into a file 6.
To use the one-liner solution, which is a bit ugly indeed, just modify the PID and run the following command:

```
sudo su -; \
PID=2633; \
grep rw-p /proc/${PID}/maps \
  | sed -n 's/^\([0-9a-f]*\)-\([0-9a-f]*\) .*$/\1\t\2/p' \
  | while read start stop; \
    do sudo gdb --batch --pid ${PID} -ex "append memory ${PID}.dump 0x$start 0x$stop" > /dev/null 2>&1; \
    done;
```

Note that GDB has to be available on the system, whereas glibc sources are not required.

## Script dump_pmem.sh

Furthermore, I created a basic shell script, which can be found in the gist mentioned above. It simplifies the process of dumping and creates an additional acquisition log (which is printed to stderr). This is how you use it:

```
sudo ./dump_pmem.sh
Usage: dump_pmem.sh <PID>
Example: ./dump_pmem.sh 1137 > 1337.dmp
```

Note that root permissions are obviously needed and a process ID has to be supplied as a positional argument. The resulting output has to be redirected to a file. Informational output printed to stderr looks like the following snippet:

```
2021-05-27T08:48:34+02:00 Starting acquision of process 1337
2021-05-27T08:48:34+02:00 Proc cmdline: "opensslenc-aes-256-cbc-k-p-mdsha1"
2021-05-27T08:48:34+02:00 Dumping 55a195984000 - 55a19598c000
2021-05-27T08:48:34+02:00 Dumping 55a19598c000 - 55a19598e000
<snip>
2021-05-27T08:48:36+02:00 Dumping 7f990d714000 - 7f990d715000
2021-05-27T08:48:37+02:00 Dumping 7ffe3413f000 - 7ffe34160000
2021-05-27T08:48:37+02:00 Resulting SHA512: cb4e949c7b...
```

Note that the script currently does not perform zero-padding to recreate the virtual address space as seen by the process.

gcore is part of the GNU debugger gdb, see https://manpages.debian.org/buster/gdb/gcore.1.en.html
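As a complement to the gdb one-liner, here is a minimal Python sketch of the same /proc-based idea that reads the regions directly from /proc/<PID>/mem instead of going through gdb. The function name and the restriction to rw regions mirror the one-liner above, but the script itself is my illustration, not part of the original gist:

```python
import re
import sys

def dump_rw_regions(pid, out_path):
    """Dump all readable+writable memory regions of a process into one file.

    The region list comes from /proc/<pid>/maps; the bytes are copied from
    /proc/<pid>/mem. Requires root (or ptrace permission on the target).
    """
    maps = open(f"/proc/{pid}/maps").read().splitlines()
    with open(f"/proc/{pid}/mem", "rb") as mem, open(out_path, "wb") as out:
        for line in maps:
            m = re.match(r"([0-9a-f]+)-([0-9a-f]+) rw", line)
            if not m:
                continue
            start, stop = int(m.group(1), 16), int(m.group(2), 16)
            try:
                mem.seek(start)
                out.write(mem.read(stop - start))
            except OSError:
                pass  # region vanished or is unreadable (e.g. [vvar])

if __name__ == "__main__":
    dump_rw_regions(int(sys.argv[1]), f"{sys.argv[1]}.dump")
```

Like the shell one-liner, this sketch concatenates the regions without zero-padding, so offsets in the dump do not correspond to virtual addresses.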
## Adobe Photoshop 2022 (Version 23.2) Crack Keygen (LifeTime) Activation Code Free [Win/Mac] 🕹️

June 30, 2022

Tips & Tricks

Here are some helpful tips and tricks for using Photoshop:

If you want to fade a color out of an image, you can add a fade filter to a layer. You can set the distance of the fade by choosing how many pixels from the center it will fade, and whether it is linear or radial. You can then click OK to add the effect and finish editing the image, or you can click Apply to immediately apply the effect (that is, apply it before you continue editing the image). Ctrl-click (Windows) or Command-click (Mac) the layer to select it for editing.

You can add a blur filter to the same layer you added the fade to and create a blurred image as a result. You can also add a Gaussian blur, which increases the blur as you move closer to the edges of the image.

Scale the edges of the image by painting. You can easily paint the edges of an image by using the Direct Selection tool. With the Direct Selection tool, click and drag across the image (outside of the selection), creating an edge. You can use the brush tool or the eraser tool to smooth it out.

Click File > Save for Web and Devices or File > Export for Web and Devices, then choose Render. Click File > Page Setup and choose "Fit to page" or "Scale to fit." Save the image. Click File > Save, then choose one of the options at the bottom of the window that appears.

You can change the file type in the Save as Type menu. To change file types, first select the format you want to use for the image (JPEG, TIFF, etc.), then select it from the menu at the bottom of the Save As Type menu and click OK. You can also change the photo format in the Save for Web and Devices and Export for Web and Devices dialog boxes.

Create a GIF. With the Image Processor tools, you can create a GIF file from an image using a recipe. Click File > Create GIF or Web Gallery from Image. Set the dimensions of the GIF; you can set the dimensions after the file is created. Set the frames.

## Adobe Photoshop 2022 (Version 23.2) Crack+ X64

The range of functions in Photoshop Elements 15 is much greater than in the Photoshop version. There are many advanced features and tips in this version for Mac users, and they are all quite simple and straightforward to use. You will learn how to use it and get closer to its secrets, about all the ways of image editing with Photoshop Elements. So, let's get ready to learn how to work with Photoshop Elements.

What is Photoshop Elements

In the handbook of the software, the descriptions are pretty straightforward. However, it helps to make a rough estimate of the range of software features. All versions have a main menu bar at the top of the page with a lot of possible functions. However, the range of functions differs depending on the version.

In the 10.0 version of Photoshop Elements, all the available functions were simplified. However, the functions of professional users are now at your disposal. It is enough to make different types of manipulations.

In Elements 12 (E16), you can edit your images without any problems. This version is still in beta; however, it seems that it will be effective. For some functions, it is still better to use Photoshop, as the features are not yet complete.

Photoshop Elements 15 (E17) is very similar to its predecessor, but it has a lot of new functions and is even more powerful than its predecessor.
Features of Photoshop Elements

What else makes it the best all-rounder image editor for photographers?

Features: editing and modification of images; converting images between formats, analog and digital.

This version drastically simplifies the way editing is done, and a little learning helps. Adobe Photoshop Elements is extremely easy to use. It is designed to make processing of images simple and convenient. Using Photoshop Elements is as simple as it gets: editing makes everything simple and smooth, and the resolution of images goes far beyond photo editing with traditional programs. Editing is very easy, and from there it gets even easier.

Note: Adobe Photoshop Elements 15.0.2 is very similar to Photoshop, and the features are almost the same. The only difference is that Photoshop Elements 15.0.2 is still in a beta stage.

Edit and modify images

It is a simple and convenient process to work with Photoshop Elements. All you need to know is Photoshop Elements. If you know Photoshop,

## Adobe Photoshop 2022 (Version 23.2) [Win/Mac] [Latest 2022]

Q: How to implement a binomial distribution and how to choose the number of samples?

Let $X_1, X_2, \dots$ be i.i.d. random variables in $[0,1]$ with $\mu = \mathbb{E}(X_1)$, $\sigma^2 = Var(X_1)$. Let $Y_1, Y_2, \dots$ be another i.i.d. sequence of random variables in $[0,1]$, independent of the first. Let $n$ and $N$ be positive integers, $n \geq N$. I am interested in estimating the probability $\mathbb{P}(Y_1 + \dots + Y_n \leq \mu + \epsilon)$ where $\epsilon = \frac{1}{N}$.

Let us say that we want to calculate the probability of the event $B \triangleq \{Y_1 + \dots + Y_N = \mu + \epsilon \}$ and that we have $N$ data points. What is the probability of sampling the data from the uniform distribution on $[0,1]$ without replacement, i.e. $\mathbb{P}\left(\mathcal{U} \leq \frac{\epsilon}{N}\right)$, where $\mathcal{U}$ is the uniform distribution? (I understand that one can simulate the samples from the uniform distribution by drawing random samples with replacement, but I am interested in doing it exactly instead of approximating.)

Can you suggest how to choose $N$ so that for any $\mu \pm \epsilon$ we can estimate the probability accurately?

A: You probably have to use Markov's inequality for the desired probability. Let $m$ be the smallest integer such that $m\epsilon > \mu-\mu$; since $Y_1+\dots+Y_n$ is stochastically dominated by a non-negative integer-valued random variable with finite mean $\mu+\epsilon$, Markov's inequality says $\mathbb P\{Y_1+\d$
# type redefinition error when including 2 header files

I am compiling a Matlab mex file (using VS2010 under Windows), and the following 2 includes:

```
#include <algorithm> // for std::copy
#include "mex.h"
```

give me a compile error:

1>d:\svn\trunk\dev\matlab\extern\include\matrix.h(337): error C2371: 'char16_t' : redefinition; different basic types

I have tried putting it in a namespace:

```
namespace Algo {
#include <algorithm>
}
```

But then I get tons of other compile errors, without even using anything defined in `<algorithm>`, for example:

```
Error 1 error C2039: 'set_terminate' : is not a member of '`global namespace'' C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\include\exception 192
Error 2 error C2873: 'set_terminate' : symbol cannot be used in a using-declaration C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\include\exception 192
```

How can I solve this?

- Have you tried the other way around, i.e., putting the mex header in a namespace? Read the header, and find the conflict. Often it is possible to define a symbol to circumvent it. – daramarak Jan 24 '12 at 8:55
- It works, thanks. Put it into an answer and I will be glad to accept it. Btw, why doesn't it work the other way round? – Itamar Katz Jan 24 '12 at 8:59
All ideas on sale—most 50% off Asa Candler invented the coupon in 1895. Candler, a pharmacist by trade, later purchased the Coca-Cola company from the original inventor Dr. John Pemberton, also a pharmacist. Candler created the notion of coupons to help promote his new drink. He later became the mayor of Atlanta. Today Ken and I want to talk about a special July Fourth Sale on mathematical ideas. All ideas are 50% off, this holiday through the weekend. We made up a coupon that is good for 50% off on all ideas this Fourth of July. Since we usually charge zero for what we tell you, it is also 100% off. We hope that you will still value the ideas even though they are on sale. As is usually the case we make no warranty on how useful these ideas are—all sales are final. No refunds of any kind. We do however allow you to send in your own ideas as comments, so think of that as a “return-policy.” July Fourth in the USA is our celebration of independence. It is an official holiday commemorating the adoption of the Declaration of Independence on July 4, 1776, which separated us from the UK. There are many important traditions on this day: parades, fireworks, cook-outs, and more. One that I like since I grew up in the New York City area is the annual Nathan’s Hot Dog Eating Contest in Coney Island, which started in 1916. Their hot dogs are great, but I could only eat a couple. Of course most nations have yearly celebrations like the 4th of July is for the US. In France it is the 14th of July, in India the 15th of August, and in Mexico it is the 16th of September—but the Mexicans celebrate Cinco de Mayo on May 5th. Ken’s family once celebrated Quebec’s National Day (Fête St.-Jean) in Quebec City on June 24, with fireworks on the Plains of Abraham battle site. I found the following question and answer on the web: • When did July 4th start? • The 4th of July started on July 4th, 1776. But it is wrong. It seems a joke to say July 4 started with the guy who invented the calendar, but it’s actually true—and he even got July named after himself. Okay not too funny, so let’s move on to our ideas that are on sale today. ## Ideas On Sale ${\bullet }$ Factoring: This is still one of my favorite problems. At the Princeton Turing meeting I asked Peter Sarnak directly what he believed and he immediately answered: “it is obviously in polynomial time.” This will be hard to solve, of course, but there seems to be an intermediate idea that he started. Consider $\displaystyle \frac{1}{x}\sum_{k \le x} \mu(k)f(k),$ where ${f(k)}$ is an “easy” to compute integer function, and ${\mu}$ is the Möbius function. Can we show that this tends to zero as ${x}$ goes to infinity? For functions ${f(k)}$ from ${\mathsf{AC}^{0}}$ this was resolved beautifully by Ben Green—see here. There are two ideas on sale: • Try to extend his result to a larger class of functions. I believe that even ${\mathsf{AC}^{0}[2]}$ is still open. • The other approach is to try and show that the sum will not go to zero for functions that are sufficiently powerful. For even polynomial time computable functions we cannot show that it does not tend to zero. Note that if ${f(k)}$ is the indicator function for the primes, then the expression goes to zero only as $\displaystyle \frac{1}{\ln x}.$ But can we do better? This seems to be an approachable problem. ${\bullet }$ Lonely Runner: I have discussed this problem before here and previously. Can we extend the ideas there by using triples instead of pairs of runners? I believe that this should be worked out. 
Actually I would love someone to step up, grab the coupon for this problem, and help write a full paper. ${\bullet }$ Jacobi Circuits: These may just be a knock-off of something already on the market, but here goes. Fix a composite number ${m}$, and consider this variation on circuits with unbounded fan-in Boolean and modulo-${m}$ gates. The input wires to a gate ${g}$ may carry values -1 as well as 0 or 1, so that their sum ${a}$ can be negative. For Boolean gates -1 is the same as 1, but the mod-${m}$ gates compute the Jacobi symbol $\displaystyle \left(\frac{a}{m}\right) = \left(\frac{a}{p_1}\right)^{\alpha_1}\cdots\left(\frac{a}{p_k}\right)^{\alpha_k}$ where ${m = p_1^{\alpha_1}\cdots p_k^{\alpha_k}}$ is the unique prime factorization of ${m}$, and for prime ${p}$, ${\left(\frac{a}{p}\right)}$ is the Legendre symbol giving 0 if ${a}$ is a multiple of the prime ${p}$, otherwise +1 or -1 according as ${a}$ is a quadratic residue mod ${p}$ or not. For polynomial size and constant depth, are these just the same as ${\mathsf{ACC}^0[m]}$ circuits? They could be more general, but could also be more restricted. We haven’t looked at the warranty very closely—besides, it’s in French. Our last sale item needs its own section. ## Manic Monomials ${\bullet }$ Minimal Monomials: If you are given polynomials ${p_1,\dots,p_s}$ in variables ${x_1,\dots,x_n}$, can you combine them to make a monomial? By combine we mean finding polynomials ${\alpha_1,\dots,\alpha_s}$ to use as multipliers and forming the expression $\displaystyle r = \alpha_1 p_1 + \alpha_2 p_2 + \cdots + \alpha_s p_s.$ When can you make ${r}$ a monomial, and how many different monomials ${r}$ can you make this way? Note that if ${r'}$ is a monomial multiple of ${r}$, then you can make ${r'}$ too by defining ${\alpha'_i = (r'/r)\alpha_i}$ for each ${i}$, so the question is really how many monomials ${r}$ can you make that are minimal with this property—no proper divisor of ${r}$ can be made. It follows from theorems of David Hilbert that the number ${\nu = \nu(p_1,\dots,p_s)}$ of minimal monomials is always finite. Actually there is a sense, namely measure in the real or complex space of coefficients of terms of the ${p_i}$, in which the number is almost always zero. Here are two important cases when ${s = n = m^2}$ and ${A}$ is the ${m \times m}$ matrix of variables: • The ${p_i}$ are the ${(m-1)\times(m-1)}$ sub-determinants of ${A}$. Then ${\nu = 0}$. The same goes for the set of ${k \times k}$ sub-determinants, any ${k < m}$. • The ${p_i}$ are the ${(m-1)\times(m-1)}$ sub-permanents of ${A}$. Then ${\nu}$ is ${\dots}$ not ${0}$. In fact it is gigantic as a function of ${m}$. How gigantic? No one knows. For ${m = 4}$ Ken did a computation that found at least 2,196 minimal monomials. For ${m = 5}$ Ken estimated that the analogous computation with the ${4 \times 4}$ sub-permanents would take about 100 years on the hardware he had. So he ran with the ${3 \times 3}$ sub-permanents, and after 37-1/4 days he found about 128,000 minimal monomials. Fireworks indeed. For a simple example with the ${3 \times 3}$ permanent, consider $\displaystyle \begin{array}{|ccc|} a & b & c\\ d & e & f\\ g & h & i \end{array}\;,\quad p_1 = ae+bd,\quad p_2 = af+cd,\quad p_3 = bf+ce.$ Then ${\frac{1}{2} (f p_1 - e p_2 + d p_3)}$ yields the monomial ${bdf}$. Whereas for the sub-determinants ${ae-bd}$, ${af-cd}$, and ${bf-ce}$, the analogous expression using a positive multiplier ${e}$ on the second one cancels everything away. 
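Both claims about this small example can be checked symbolically. A quick sketch using sympy (the variable names mirror the $3\times 3$ array above; the check is mine, not from the post):

```python
import sympy as sp

a, b, c, d, e, f = sp.symbols('a b c d e f')

# 2x2 sub-permanents and sub-determinants from the example
p1, p2, p3 = a*e + b*d, a*f + c*d, b*f + c*e
d1, d2, d3 = a*e - b*d, a*f - c*d, b*f - c*e

print(sp.expand((f*p1 - e*p2 + d*p3) / 2))  # -> b*d*f, a monomial
print(sp.expand(f*d1 - e*d2 + d*d3))        # -> 0, everything cancels
```

The first expression reproduces the monomial ${bdf}$; applying the same sign pattern to the sub-determinants collapses to zero.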
Curiously, it is completely impossible to get any monomials from sub-determinants, and the simple proof is that the sub-determinants form a Gröbner basis, which (in reduced form) always contains at least one monomial if the ideal contains any. Clearly this is a wild structural difference between the permanent and determinant. But what can be proved with it? Ken thought ${\log \nu}$ could be a lower bound on arithmetical complexity analogous to the log of solution-set sizes in Volker Strassen’s lower-bound theorem, which we covered here. But this got falsified for a family ${F_n}$ of constant-degree polynomials with linear-size circuits whose ${n}$-many first partial derivatives combine to make ${2^{2^n}}$ minimal monomials. Hence this idea is part of our sale.

## Open Problems

Our sale items also have an optional service agreement with us. Does that influence you to buy them?

Will this July 4th also be remembered for Higgs fireworks? Have a safe and fun Fourth if celebrating it, and if not have a safe day anyway.

8 Comments leave one →

1. July 4, 2012 4:42 am

Nice post as usual. Question: Do people consider computation with polynomial ideals over $\mathbb{C}$? This paper http://www.math.ucdavis.edu/~deloera/RECENT_WORK/jsc09_issac08.pdf suggests an algorithm to find an infeasibility certificate for a polynomial system. On the other hand, the Lasserre relaxation shows how to check whether a system is feasible. The combination of the two can guarantee the answer – feasible/not feasible. One can create an ideal that accepts only 0 and 1, e.g. $x_i(x_i-1)$, plus additional equations that restrict the possible 0-1 solutions – encoding the problem in the ideal of polynomials. The advantage is that this system tests the whole solution space at once, similar to quantum computers. Moreover, the intermediate results do not have to be unique, and can be represented by the null space of possible monomial values. For example, is it possible to represent the Quantum Fourier transform in this way, and use the resulting null space for “interference” through an additional set of polynomials? More specifically, is it possible to write a system of polynomials over $\mathbb{C}$ such that the ideal of this system is the Fourier transform of the integer n, and then add additional polynomials that restrict the ideal to be the encoding of the factors of n?

2. July 6, 2012 6:46 am

Proving Möbius randomness for the Rudin-Shapiro sequence will already be a major breakthrough! http://mathoverflow.net/questions/97261/mobius-randomness-of-the-rudin-shapiro-sequence

In more detail: Let $a_n = \sum \epsilon_i\epsilon_{i+1}$, where $\epsilon_1,\epsilon_2,\dots$ are the digits in the binary expansion of $n$. $WS(n)$, the $n$th term of the Rudin-Shapiro sequence, is defined by $WS(n)=(-1)^{a_n}$. The question is: Prove that $\sum_{i=0}^n WS(i) \mu (i) = o(n)$.

3. Thomas Spencer permalink July 22, 2012 7:30 pm

Re: the lonely runner. There are known extensions of the probabilistic method that might apply here. I believe that these methods are not well known, since I have not seen any other potential application, but they are described in “Generalized Bonferroni inequalities” in the Journal of Applied Probability 1994, pp 409-417. I no longer have easy access to this paper, so I do not remember exactly what it said. However, it dealt with the following situation. Let $Z=\sum_i X_i$, where the $X_i$ are 0-1 random variables that are not necessarily independent. Then let $S_1 = \sum_i \Pr(X_i=1)$, $S_2=\sum_{i < j} \Pr(X_i=1 \text{ and } X_j=1)$, and so on.
We want to bound the probability that $Z=0$ based on the $S_k$. The probabilistic method says that if $S_1 < 1$ then $\Pr(Z=0) > 0$. It is also known that if $S_1-S_2+S_3 < 1$ then $\Pr(Z=0) > 0$. There are potentially stronger results in the cited paper.

4. P Devlin permalink October 12, 2012 9:40 am

I believe that there is a very simple solution to the lonely runner conjecture which proves it to be true for any n, without making any assumptions concerning the runners' speeds etc. It was just recently that I came across the problem. However, it relates to something that I gave consideration to a number of years back. I am an engineer and not a mathematician, so whilst the explanation of the solution is as easily stated as the problem itself, I am not in a position to explain it using strictly mathematical formalism. I would be interested to share this idea. Email details are provided.
# Let $G$ be a group with $n$ conjugacy classes. Prove there are at most $2^{n-1}$ normal subgroups.

Let $G$ be a group with $n$ conjugacy classes. Prove there are at most $2^{n-1}$ normal subgroups.

I know the cosets of a normal subgroup $H$ with index $j$ have the form: $g_1H,g_2H,\cdots ,g_jH$, $g_i\in G$. In addition, I think if $\exists g\in G$ with $g^{-1}H_1g=H_2$, where $H_1,H_2$ are normal subgroups, then they consist of the same conjugacy classes. I don't know how to approach the problem. Any help is welcome. Thanks!

Hint: every normal subgroup is the disjoint union of some of the conjugacy classes, and the conjugacy class $\{1\}$ is always part of that.

• There are $n-1$ conjugacy classes other than $\{1\}$. Suppose $N$ is a normal subgroup; each such class either is part of the disjoint union of conjugacy classes forming $N$ ($1$) or is not ($0$). There are $2^{n-1}$ different options, hence there are at most $2^{n-1}$ normal subgroups? Is it correct? – algo Jun 18 at 20:32
• Yes you got it, well done! Jun 18 at 21:01

If $n = 1$, then $G = \{e\}$, because no non-identity element is ever conjugate to the identity element. Thus, in this case, we see equality happens. That is, the number of normal subgroups is equal to $2^{n-1}$.

Likewise, if $n=2$, then equality happens. This is because if $n=2$, the group $G$ is non-trivial, and so it has $2 = 2^{2-1}$ normal subgroups: itself and the identity subgroup.

I claim that these are the only cases where equality is achieved. Suppose $G$ is a group with $n$ conjugacy classes with $n\geq 3$. Then it has fewer than $2^{n-1}$ normal subgroups.

Proof: We prove it by contradiction. So, assume $G$ is a group with $n\geq 3$ conjugacy classes and $2^{n-1}$ normal subgroups. Since $n\geq 3$, we can find three distinct conjugacy classes. Call them $A_0 =\{e\}$, $A_1$, and $A_2$. From Nicky's answer, each of $H_1 = A_0\cup A_1$, $H_2 = A_0\cup A_2$, and $H_{12}=A_0\cup A_1\cup A_2$ must be a normal subgroup of $G$. Choose $g_1\in A_1$ and $g_2\in A_2$. Since $H_1$ is a subgroup of $G$ and $g_1\in H_1$, $g_1^{-1}\in H_1$. But note that $g_1\neq e$, so $g_1^{-1}\neq e$, so $g_1^{-1} \in A_1$. Analogously, $g_2^{-1}\in A_2$. Since $H_{12}$ is a subgroup of $G$, $g_1 g_2\in H_{12}$. Notice that $g_1g_2\notin A_1$, for if it were, then $g_2 = g_1^{-1}(g_1g_2)\in H_1$, and since $g_2 \neq e$ this would give $g_2 \in A_1$. Likewise, $g_1g_2\notin A_2$. Thus, we conclude $g_1g_2\in A_0 = \{e\}$. In other words, $g_2 = g_1^{-1}$. Thus $g_2\in A_2 \cap A_1$, giving a contradiction. $\square$

So, if $n\geq 3$ is the number of conjugacy classes of a group $G$, then the number of normal subgroups is at most $2^{n-1}-1$. This bound can actually be achieved: consider $G = D_6$, the dihedral group of order $6$. This has $n=3$ conjugacy classes (the identity, the two non-trivial rotations, and all the reflections), and it has $2^{3-1} - 1 = 3$ normal subgroups: the trivial subgroup, the subgroup of all rotations, and the whole group. On the other hand, not all groups with $n=3$ have $3$ normal subgroups: consider $\mathbb{Z}/3\mathbb{Z}$, which has three conjugacy classes but only two normal subgroups.
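To see the counting argument in action, here is a small brute-force check for $S_3$ (isomorphic to the dihedral group of order $6$ used above). The script and its helper names are mine, purely illustrative:

```python
from itertools import permutations, combinations

# Elements of S3 as tuples (images of 0,1,2), with composition and inversion.
G = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))   # (p o q)(i) = p(q(i))
invert = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))

# Conjugacy classes: orbits of g under g -> h g h^{-1}.
classes, seen = [], set()
for g in G:
    if g in seen:
        continue
    cls = {compose(compose(h, g), invert(h)) for h in G}
    classes.append(cls)
    seen |= cls

e = tuple(range(3))
# Every normal subgroup is a union of classes that includes {e};
# count which of those 2^{n-1} candidate unions are actually subgroups.
others = [c for c in classes if e not in c]
normal = 0
for r in range(len(others) + 1):
    for pick in combinations(others, r):
        H = {e}.union(*pick) if pick else {e}
        if all(compose(a, invert(b)) in H for a in H for b in H):
            normal += 1
print(len(classes), "classes;", normal, "normal subgroups")
# -> 3 classes; 3 normal subgroups (so the bound 2^{3-1} = 4 is not attained)
```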
MSC2020 65J15

### Continuous method of second order with constant coefficients for monotone equations in Hilbert space

#### I. P. Ryazantseva1

Annotation

Convergence of an implicit second-order iterative method with constant coefficients for nonlinear monotone equations in Hilbert space is investigated. For non-negative solutions of a second-order numerical difference inequality, an estimate from above is established. This estimate is used to prove the convergence of the iterative method under study. The convergence of the iterative method is established under the assumption that the operator of the equation on a Hilbert space is monotone and satisfies the Lipschitz condition. Sufficient conditions for convergence of the proposed method also include some relations connecting the parameters that determine the specified properties of the operator in the equation to be solved and the coefficients of the second-order difference equation that defines the method under study. The admissibility of the proposed parameter choices is confirmed by an example. The proposed second-order method with constant coefficients has a better upper estimate of the convergence rate than the same method with variable coefficients that was studied earlier.

Keywords: Hilbert space, strongly monotone operator, Lipschitz condition, difference equation, second-order iterative process, estimate from above for the solution of a second-order numerical difference inequality, Stolz's theorem, convergence

1Irina P. Ryazantseva, Professor, Department of Applied Mathematics, Nizhny Novgorod State Technical University named after R. E. Alekseev (24 Minina St., Nizhny Novgorod 603950, Russia), Dr.Sci. (Physics and Mathematics), ORCID: http://orcid.org/0000-0001-6215-1662, lryazantseva@applmath.ru

Citation: I. P. Ryazantseva, "[Continuous method of second order with constant coefficients for monotone equations in Hilbert space]", Zhurnal Srednevolzhskogo matematicheskogo obshchestva, 22:4 (2020) 449–455 (In Russian) DOI 10.15507/2079-6900.22.202004.449-455
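The article's text is not reproduced on this page, so purely as illustration of the kind of scheme the abstract describes: a generic two-step (second-order, constant-coefficient) iteration for a monotone, Lipschitz operator can be sketched as below. The scheme, the coefficients theta and beta, and the test operator are my assumptions, not taken from the paper:

```python
import numpy as np

def second_order_iteration(A, u0, theta=0.05, beta=0.5, steps=200):
    """Generic heavy-ball-style two-step iteration
        u_{n+1} = u_n + beta*(u_n - u_{n-1}) - theta*A(u_n)
    for solving A(u) = 0 with a monotone, Lipschitz operator A.
    Illustrative only: the paper's scheme and its coefficient
    conditions are not reproduced here.
    """
    prev, cur = u0, u0
    for _ in range(steps):
        prev, cur = cur, cur + beta * (cur - prev) - theta * A(cur)
    return cur

# Example: A(u) = Bu + b with B symmetric positive definite (strongly monotone)
B = np.array([[2.0, 0.5], [0.5, 1.5]])
b = np.array([1.0, -1.0])
root = second_order_iteration(lambda u: B @ u + b, np.zeros(2))
print(root, B @ root + b)   # the residual should be near zero
```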
# American Institute of Mathematical Sciences

November 2016, 15(6): 2247-2280. doi: 10.3934/cpaa.2016036

## Well-posedness for the Cauchy problem of the Klein-Gordon-Zakharov system in four and more spatial dimensions

Isao Kato 1

1 Graduate School of Mathematics, Nagoya University, Chikusa-ku, Nagoya, 464-8602, Japan

Received January 2016 Revised July 2016 Published September 2016

We study the Cauchy problem of the Klein-Gordon-Zakharov system in spatial dimension $d \ge 4$ with radial or non-radial initial datum $(u, \partial_t u, n, \partial_t n)|_{t=0} \in H^{s+1}(R^d) \times H^s(R^d) \times \dot{H}^s(R^d) \times \dot{H}^{s-1}(R^d)$. The critical value of $s$ is $s_c=d/2-2$. By the radial Strichartz estimates and $U^2, V^2$ type spaces, we prove that small data global well-posedness and scattering hold at $s=s_c$ in $d \ge 4$ for radial initial datum. For non-radial initial datum, we prove that local well-posedness holds at $s=1/4$ when $d=4$ and at $s=s_c+1/(d+1)$ when $d \ge 5$.

Citation: Isao Kato. Well-posedness for the Cauchy problem of the Klein-Gordon-Zakharov system in four and more spatial dimensions. Communications on Pure & Applied Analysis, 2016, 15 (6) : 2247-2280. doi: 10.3934/cpaa.2016036
# zbMATH — the first resource for mathematics

The Manin-Mumford conjecture: a brief survey. (English) Zbl 1073.14525

Summary: The Manin-Mumford conjecture asserts that if $K$ is a field of characteristic zero, $C$ a smooth proper geometrically irreducible curve over $K$, and $J$ the Jacobian of $C$, then for any embedding of $C$ in $J$, the set $C(K) \cap J(K)_{\text{tors}}$ is finite. Although the conjecture was proved by M. Raynaud [Invent. Math. 71, 207–233 (1983; Zbl 0564.14020)] in 1983, and several other proofs have appeared since, a number of natural questions remain open, notably concerning bounds on the size of the intersection and the complete determination of $C(K) \cap J(K)_{\text{tors}}$ for special families of curves $C$. The first half of this survey paper presents the Manin-Mumford conjecture and related general results, while the second describes recent work mostly dealing with the above questions.

##### MSC:

14G25 Global ground fields in algebraic geometry
11G10 Abelian varieties of dimension $> 1$
11G30 Curves of arbitrary genus or genus $\ne 1$ over global fields
14H40 Jacobians, Prym varieties
# STARBUCKS: Free Cash Flows Valuation of Starbucks' Common Equity

In Integrative Case 10.1, we projected financial statements for Starbucks for Years +1 through +5. In this portion of the Starbucks Integrative Case, we use the projected financial statements from Integrative Case 10.1 and apply the techniques in Chapter 12 to compute Starbucks' required rate of return on equity and share value based on the free cash flows valuation model. We also compare our value estimate to Starbucks' share price at the time of the case development to provide an investment recommendation.

The market equity beta for Starbucks at the end of 2008 is 0.58. Assume that the risk-free interest rate is 4.0 percent and the market risk premium is 6.0 percent. Starbucks has 735.5 million shares outstanding at the end of 2008. At the start of Year +1, Starbucks' share price was $14.17.

Required

#### Part I—Computing Starbucks' Share Value Using Free Cash Flows to Common Equity Shareholders

a. Use the CAPM to compute the required rate of return on common equity capital for Starbucks.

b. Using your projected financial statements from Integrative Case 10.1 for Starbucks, begin with projected net cash flows from operations and derive the projected free cash flows for common equity shareholders for Starbucks for Years +1 through +5. You must determine whether your projected changes in cash are necessary for operating liquidity purposes.

c. Project the continuing free cash flow for common equity shareholders in Year +6. Assume that the steady-state long-run growth rate will be 3 percent in Year +6 and beyond. Project that the Year +5 income statement and balance sheet amounts will grow by 3 percent in Year +6; then derive the projected statement of cash flows for Year +6. Derive the projected free cash flow for common equity shareholders in Year +6 from the projected statement of cash flows for Year +6.

d. Using the required rate of return on common equity from Part a as a discount rate, compute the sum of the present value of free cash flows for common equity shareholders for Starbucks for Years +1 through +5.

e. Using the required rate of return on common equity from Part a as a discount rate and the long-run growth rate from Part c, compute the continuing value of Starbucks as of the start of Year +6 based on Starbucks' continuing free cash flows for common equity shareholders in Year +6 and beyond. After computing continuing value as of the start of Year +6, discount it to present value at the start of Year +1.

f. Compute the value of a share of Starbucks common stock. (1) Compute the total sum of the present value of free cash flows for equity shareholders (from Parts d and e). (2) Adjust the total sum of the present value using the midyear discounting adjustment factor. (3) Compute the per-share value estimate.

Note: If you worked Integrative Case 11.1 from Chapter 11 and computed Starbucks' share value using the dividends valuation approach, compare your value estimate from that case with the value estimate you obtain here. They should be the same.

#### Part II—Computing Starbucks' Share Value Using Free Cash Flows to All Debt and Equity Stakeholders

g. At the end of 2008, Starbucks had $1,263 million in outstanding interest-bearing short-term and long-term debt on the balance sheet and no preferred stock.
Assume that the balance sheet value of Starbucks' debt equals the market value of the debt. Starbucks faces an interest rate of roughly 6.25 percent on its outstanding debt. Assume that Starbucks will continue to face the same interest rate on this outstanding debt capital over the remaining life of the debt. Using the amounts on Starbucks' 2008 income statement in Exhibit 1.27 for Integrative Case 1.1 in Chapter 1, compute Starbucks' average tax rate in 2008. Assume that Starbucks will continue to face the same income tax rate over the forecast horizon. Compute the weighted average cost of capital for Starbucks as of the start of Year +1. Compare your computation of Starbucks' weighted average cost of capital with your estimate of Starbucks' required return on equity from Part a. Why do the two amounts differ?

h. Based on your projections of Starbucks' financial statements, begin with projected net cash flows from operations and derive the projected free cash flows for all debt and equity stakeholders for Years +1 through +5. Compare your forecasts of Starbucks' free cash flows for all debt and equity stakeholders for Years +1 through +5 with your forecast of Starbucks' free cash flows for equity shareholders in Part b. Why are the amounts not identical—what causes the difference each year?

i. Project the continuing free cash flows for all debt and equity stakeholders in Year +6. Use the projected financial statements for Year +6 from Part c to derive the projected free cash flow for all debt and equity stakeholders in Year +6.

j. Using the weighted average cost of capital from Part g as a discount rate, compute the sum of the present value of free cash flows for all debt and equity stakeholders for Starbucks for Years +1 through +5.

k. Using the weighted average cost of capital from Part g as a discount rate and the long-run growth rate from Part c, compute the continuing value of Starbucks as of the start of Year +6 based on Starbucks' continuing free cash flows for all debt and equity stakeholders in Year +6 and beyond. After computing continuing value as of the start of Year +6, discount it to present value at the start of Year +1.

l. Compute the value of a share of Starbucks common stock. (1) Compute the value of Starbucks' net operating assets using the total sum of the present value of free cash flows for all debt and equity stakeholders (from Parts j and k). (2) Subtract the value of outstanding debt to obtain the value of equity. (3) Adjust the present value of equity using the midyear discounting adjustment factor. (4) Compute the per-share value estimate.

m. Compare your share value estimate from Part f with your share value estimate from Part l. These values should be similar.

#### Part III—Sensitivity Analysis and Recommendation

n. Using the free cash flows to common equity shareholders, recompute the value of Starbucks shares under two alternative scenarios. Scenario 1: Assume that Starbucks' long-run growth will be 2 percent, not 3 percent as before, and assume that Starbucks' required rate of return on equity is 1 percentage point higher than the rate you computed using the CAPM in Part a. Scenario 2: Assume that Starbucks' long-run growth will be 4 percent, not 3 percent as before, and assume that Starbucks' required rate of return on equity is 1 percentage point lower than the rate you computed using the CAPM in Part a.
To quantify the sensitivity of your share value estimate for Starbucks to these variations in growth and discount rates, compare (in percentage terms) your value estimates under these two scenarios with your value estimate from Part f.

o. At the end of 2008, what reasonable range of share values would you have expected for Starbucks common stock? At that time, where was the market price for Starbucks shares relative to this range? What would you have recommended?

p. If you computed Starbucks' common equity share value using the dividends-valuation approach in Integrative Case 11.1, compare the value estimate you obtained in that case with the estimate you obtained in this case. They should be identical.
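As a quick illustration of the mechanics in Parts a, d, e, and f (not a solution: the cash-flow figures below are made-up placeholders, since the projections from Integrative Case 10.1 are not reproduced here), a sketch in Python:

```python
# Sketch of the CAPM / free-cash-flow valuation mechanics.
# The FCFE projections are hypothetical placeholders, NOT the case data.

rf, beta, mrp = 0.04, 0.58, 0.06
r_e = rf + beta * mrp                       # Part a: CAPM required return = 7.48%

fcfe = [800.0, 900.0, 1000.0, 1100.0, 1200.0]  # hypothetical Years +1..+5, $ millions
g = 0.03                                        # long-run growth rate (Part c)

pv_horizon = sum(cf / (1 + r_e) ** t for t, cf in enumerate(fcfe, start=1))  # Part d

fcfe_6 = fcfe[-1] * (1 + g)                 # continuing FCFE in Year +6
cv_6 = fcfe_6 / (r_e - g)                   # continuing value at start of Year +6
pv_cv = cv_6 / (1 + r_e) ** 5               # Part e: discount to start of Year +1

total = (pv_horizon + pv_cv) * (1 + r_e) ** 0.5  # Part f(2): midyear adjustment
per_share = total / 735.5                         # Part f(3): 735.5 million shares

print(f"required return {r_e:.2%}, value per share ${per_share:.2f}")
```

With the real case projections substituted for the placeholder FCFE list, the same few lines reproduce Parts d through f; the sensitivity scenarios in Part n amount to rerunning them with (g, r_e) of (0.02, r_e + 0.01) and (0.04, r_e - 0.01).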
# An exact sequence on the ideal class group of a Noetherian domain of dimension 1

Let $A$ be a Noetherian domain of dimension 1. Let $K$ be its field of fractions. Let $B$ be the integral closure of $A$ in $K$. Suppose $B$ is finitely generated as an $A$-module. It is well-known that $B$ is a Dedekind domain. Let $I(A)$ be the group of invertible fractional ideals of $A$. Let $P(A)$ be the group of principal fractional ideals of $A$. Similarly we define $I(B)$ and $P(B)$. Then there exists the following exact sequence of abelian groups (Neukirch, Algebraic Number Theory, p. 78).

$0 \rightarrow B^*/A^* \rightarrow \bigoplus_{\mathfrak{p}} (B_{\mathfrak{p}})^*/(A_{\mathfrak{p}})^* \rightarrow I(A)/P(A) \rightarrow I(B)/P(B) \rightarrow 0$

Here, $\mathfrak{p}$ runs over all the maximal ideals of $A$. Since we use this result to prove this, it'd be nice to have the proof here (I don't understand Neukirch's proof well).

EDIT Since someone wonders what my question is (though I think it is obvious), I state it more clearly: How do you prove it?

EDIT [July 11, 2012] May I ask the reason for the downvote so that I could improve my question?

• Thank you for the clarification. It would be best if you included a summary (or scan of the relevant pages) of Neukirch's proof, so people know what approach you are confused by and can either explain it in more detail, or offer alternative proofs. – Zev Chonoles Jun 26 '12 at 2:34
• I forgot why I didn't understand Neukirch's proof well. Personally this does not matter anymore because I came up with my own proof. – Makoto Kato Jun 26 '12 at 2:53
• I noticed that someone serially upvoted my questions. While I appreciate them, I would like to point out that serial upvotes are automatically reversed by the system. – Makoto Kato Nov 27 '13 at 7:07

Lemma 1 Let $A$ be a Noetherian domain of dimension 1. Let $K$ be its field of fractions. Then there exists the following exact sequence of abelian groups.

$0 \rightarrow K^*/A^* \rightarrow \bigoplus K^*/(A_{\mathfrak{p}})^* \rightarrow I(A)/P(A) \rightarrow 0$

Here, $\mathfrak{p}$ runs over all the maximal ideals of $A$.

Proof: By this, $I(A)$ is canonically isomorphic to $\bigoplus I(A_{\mathfrak{p}})$. By this, $I(A_{\mathfrak{p}})$ is the group of principal fractional ideals of $A_{\mathfrak{p}}$. Hence $I(A_{\mathfrak{p}})$ is canonically isomorphic to $K^*/(A_{\mathfrak{p}})^*$. On the other hand, $P(A)$ is canonically isomorphic to $K^*/A^*$. QED

Lemma 2 Let $A$ be a Noetherian domain of dimension 1. Let $K$ be its field of fractions. Let $B$ be the integral closure of $A$ in $K$. Suppose $B$ is finitely generated as an $A$-module. Then there exists the following exact sequence of abelian groups.

$0 \rightarrow K^*/B^* \rightarrow \bigoplus K^*/(B_{\mathfrak{p}})^* \rightarrow I(B)/P(B) \rightarrow 0$

Here, $\mathfrak{p}$ runs over all the maximal ideals of $A$.

Proof: This follows immediately from the proposition of this. QED

The proof of the exactness of the title sequence: There exists a canonical morphism from the exact sequence of Lemma 1 to that of Lemma 2. The exactness of the title sequence follows immediately by the snake lemma. QED
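To spell out the final snake-lemma step (a sketch in the notation of the lemmas above): the canonical morphism of short exact sequences is

$$\begin{array}{ccccccccc}
0 & \rightarrow & K^*/A^* & \rightarrow & \bigoplus_{\mathfrak{p}} K^*/(A_{\mathfrak{p}})^* & \rightarrow & I(A)/P(A) & \rightarrow & 0\\
& & \downarrow & & \downarrow & & \downarrow & & \\
0 & \rightarrow & K^*/B^* & \rightarrow & \bigoplus_{\mathfrak{p}} K^*/(B_{\mathfrak{p}})^* & \rightarrow & I(B)/P(B) & \rightarrow & 0
\end{array}$$

The first two vertical maps are the obvious quotient maps; they are surjective, with kernels $B^*/A^*$ and $\bigoplus_{\mathfrak{p}} (B_{\mathfrak{p}})^*/(A_{\mathfrak{p}})^*$ respectively. The snake lemma therefore gives the exact sequence

$$0 \rightarrow B^*/A^* \rightarrow \bigoplus_{\mathfrak{p}} (B_{\mathfrak{p}})^*/(A_{\mathfrak{p}})^* \rightarrow \ker\big(I(A)/P(A) \rightarrow I(B)/P(B)\big) \rightarrow 0,$$

since the first two cokernels vanish; their vanishing also forces the cokernel of $I(A)/P(A) \rightarrow I(B)/P(B)$ to vanish, i.e. that map is surjective. Splicing these two facts together yields exactly the title sequence.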
The minor of a particular element of a matrix is found by eliminating the row and column of that element and finding the determinant of the remaining matrix.

## Glossary

### matrix

a rectangular or square grid of numbers.

### minor of an element of a matrix

found by eliminating the row and column containing that element and finding the determinant of the remaining matrix.

### union

The union of two sets A and B is the set containing all the elements of A and B.

## This question appears in the following syllabi:

| Syllabus | Module | Section | Topic | Exam Year |
| --- | --- | --- | --- | --- |
| AQA A-Level (UK - Pre-2017) | FP4 | Matrix algebra | 3x3 matrices | - |
| AQA A2 Further Maths 2017 | Pure Maths | Further Matrices | 3x3 Matrices | - |
| AQA AS/A2 Further Maths 2017 | Pure Maths | Further Matrices | 3x3 Matrices | - |
| CBSE XII (India) | Algebra | Determinants | Determinant of a square matrix (up to 3 x 3) | - |
| CCEA A-Level (NI) | FP1 | Matrix algebra | 3x3 matrices | - |
| Edexcel A-Level (UK - Pre-2017) | FP3 | Matrix algebra | 3x3 matrices | - |
| Edexcel AS Further Maths 2017 | Core Pure Maths | Matrices | 3x3 Matrices | - |
| Edexcel AS/A2 Further Maths 2017 | Core Pure Maths | Matrices | 3x3 Matrices | - |
| Methods (UK) | M5 | Matrix algebra | 3x3 matrices | - |
| OCR A-Level (UK - Pre-2017) | FP1 | Matrix algebra | 3x3 matrices | - |
| OCR AS Further Maths 2017 | Pure Core | Determinants, Inverses and Equations | 3x3 Matrices | - |
| OCR MEI A2 Further Maths 2017 | Core Pure B | Matrices and Transformations | 3x3 Matrices | - |
| OCR-MEI A-Level (UK - Pre-2017) | FP2 | Matrix algebra | 3x3 matrices | - |
| Universal (all site questions) | M | Matrix algebra | 3x3 matrices | - |
| WJEC A-Level (Wales) | FP1 | Matrix algebra | 3x3 matrices | - |
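A small worked example of the definition (the matrix is chosen arbitrarily for illustration): for

$$A = \begin{pmatrix} 1 & 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 10 \end{pmatrix},$$

the minor of the element $a_{11} = 1$ is found by deleting row 1 and column 1 and taking the determinant of what remains:

$$M_{11} = \begin{vmatrix} 5 & 6\\ 8 & 10 \end{vmatrix} = 5 \cdot 10 - 6 \cdot 8 = 2.$$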
# Deriving the derivative boundary conditions from the natural formulation

#### maistral

Summary: How to derive the finite difference derivative formulation from the natural boundary formulation?

PS: This is not an assignment, this is more of a brain exercise.

I intend to apply a general derivative boundary condition f(x,y). While I know that the boxed formulation is correct, I have no idea how to acquire the same formulation if I come from the general natural boundary condition formulation. I honestly do not know what I am doing wrong. Can someone check where I am incorrect?

#### Chestermiller Mentor

Looks OK. So what is the problem?

#### maistral

> Looks OK. So what is the problem?

The left side and the right side have different results.

#### Chestermiller Mentor

If $q_x$ is the heat flux in the x direction, then $q_x$ is always given by
$$q_x=-k\frac{dT}{dx}$$
irrespective of whether it is the left or the right boundary. Of course, the sign of the flux in the x direction can be negative. So, at the fictitious point at the left boundary, you have:
$$T(-\Delta x)=T(+\Delta x)+q_x(0)\Delta x$$
and, at the fictitious point at the right boundary, you have:
$$T(L+\Delta x)=T(L-\Delta x)-q_x(L)\Delta x$$
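A minimal numerical sketch of the fictitious-point idea from the reply above, applied inside an explicit time step of the 1-D heat equation (all names and values are illustrative; the ghost relations below are the standard central-difference form, which matches the reply up to how the flux is normalized by 2Δx/k):

```python
import numpy as np

# Explicit FTCS step for T_t = alpha * T_xx on [0, L], with prescribed
# boundary fluxes handled via fictitious (ghost) nodes.
# Standard central-difference ghost relations (conductivity k):
#   q(0) = -k*(T[1] - T[-1])/(2*dx)  =>  T[-1] = T[1]  + 2*dx*q0/k
#   q(L) = -k*(T[n] - T[n-2])/(2*dx) =>  T[n]  = T[n-2] - 2*dx*qL/k

L, n = 1.0, 51
dx = L / (n - 1)
alpha, k = 1.0, 1.0
dt = 0.4 * dx**2 / alpha      # FTCS stability requires alpha*dt/dx**2 <= 0.5
q0, qL = 1.0, -1.0            # illustrative prescribed fluxes at x=0 and x=L

T = np.zeros(n)
for _ in range(1000):
    ghost_left = T[1] + 2 * dx * q0 / k     # fictitious node at x = -dx
    ghost_right = T[-2] - 2 * dx * qL / k   # fictitious node at x = L + dx
    Tp = np.concatenate(([ghost_left], T, [ghost_right]))
    T = T + alpha * dt / dx**2 * (Tp[2:] - 2 * Tp[1:-1] + Tp[:-2])

print(T[:3], T[-3:])
```

The point of the substitution is that the ghost values never appear in the final update: they are eliminated in favor of interior temperatures and the prescribed flux, which is exactly what the boxed formulation in the opening post does in one step.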
# The company at which Mark is employed has 80 employees, each

The company at which Mark is employed has 80 employees, each of whom has a different salary. Mark's salary of $43,700 is the second-highest salary in the first quartile of the 80 salaries. If the company were to hire 8 new employees at salaries that are less than the lowest of the 80 salaries, what would Mark's salary be with respect to the quartiles of the 88 salaries at the company, assuming no other changes in the salaries?

A The fourth-highest salary in the first quartile
B The highest salary in the first quartile
C The second-lowest salary in the second quartile
D The third-lowest salary in the second quartile
E The fifth-lowest salary in the second quartile

Practice Questions Question: 13 Page: 341 Difficulty: hard

Solution

In this question you are told that Mark's salary is the second-highest in the first quartile. From this you can conclude that the word quartile refers to one of the four groups that are created by listing the data in increasing order and then dividing the data into four groups of equal size. When the salaries of the 80 employees are listed in order, the 20 lowest salaries (that is, the salaries in the first quartile) are the first 20 salaries in the list. Since Mark's salary is the second-highest in the first quartile, 18 salaries in that quartile are lower than his, and one salary in that quartile is higher than his. After the salaries of the 8 new employees are added, there are 26 salaries that are lower than Mark's. The lowest 22 of those would be in the first quartile of the 88 salaries, and the remaining 4 (salaries 23 to 26) would be in the second quartile, followed by Mark's salary. This puts Mark at the fifth-lowest salary in the second quartile.

The correct answer is E.

Alternative Solution

Another way to approach this problem is to think of all 80 salaries numbered in order from least to greatest, the lowest salary at the number 1 position and the greatest salary at the number 80 position. There are 20 positions in each quartile, and Mark's salary is at position 19, the second-highest position in the first quartile. To see what Mark's position is with respect to the quartiles of the 88 salaries, you need to add the 8 new salaries to the list, renumber the list from 1 to 88, and put 22 salaries in each quartile.
Because the 8 new salaries are less than the original 80 salaries, they must be listed in positions 1 through 8, and all salaries in the original list must move up by 8 positions in the renumbered list. In particular, Mark's salary moves from position 19 to position 27, which is the fifth position in the second quartile (the second quartile now covers positions 23 through 44). Since Mark's salary is in the fifth position in the second quartile and the salaries are listed in order from least to greatest, Mark's salary would be the fifth-lowest in the second quartile.
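The positional argument is easy to verify mechanically (a quick sketch; the salary values are arbitrary since only the ordering matters):

```python
# Verify Mark's position among the quartiles mechanically.
old = list(range(9, 89))        # 80 distinct "salaries"; only ordering matters
mark = sorted(old)[18]          # position 19 of 80: second-highest in quartile 1

new = sorted(list(range(1, 9)) + old)   # add 8 salaries below all existing ones
pos = new.index(mark) + 1               # 1-based position in the 88-salary list
quartile = (pos - 1) // 22 + 1          # 22 salaries per quartile of 88
rank_in_quartile = (pos - 1) % 22 + 1   # 1 = lowest within that quartile
print(pos, quartile, rank_in_quartile)  # 27 2 5 -> fifth-lowest in quartile 2
```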
# Functions

This page describes some of the internal workings of PmWiki by explaining how some of the functions in pmwiki.php work. For a more brief list/overview on functions useful to for instance cookbook writers, see Cookbook:Functions.

## pmsetcookie($name, $val="", $exp=0, $path="", $dom="", $secure=null, $httponly=null)

This function is intended as a replacement for setcookie(). It will automatically set the $secure and $httponly arguments if they are not set by the caller function and if $EnableCookieSecure and $EnableCookieHTTPOnly are enabled.

## PCCF($php_code, $callback_template='default', $callback_arguments='$m')

Deprecated since PHP 7.2

The PCCF() function (PmWiki Create Callback Function) can be used to create callback functions used with preg_replace_callback. It is required for PHP 5.5, but will also work with earlier PHP versions.

The first argument is the PHP code to be evaluated. The second argument (optional) is the callback template, a key from the global $CallbackFnTemplates array. There are two templates that can be used by recipe authors:

• 'default' will pass $php_code as a function code
• 'return' will wrap $php_code like "return $php_code;" (since PmWiki 2.2.62)

The third argument (optional) is the argument of the callback function. Note that PmWiki uses the '$m' argument to pass the matches of a regular expression search, but your function can use other argument(s).

PCCF() will create an anonymous (lambda) callback function containing the supplied code, and will cache it. On subsequent calls with the same $php_code, PCCF() will return the cached function name.

PHP 7.2 deprecates create_function() and future versions will remove it. If you need to migrate older code that used PCCF(), you can usually write regular functions and pass the function name where you previously passed the result of PCCF(). For example, suppose you had a pattern like this:

  '/(?<=^| )([a-z])/' => PCCF("return strtoupper(\$m[1]);"),

For PHP 7.2 compatibility, you can write a callback function:

  function my_callback($m) { return strtoupper($m[1]); }

then change the pattern to look like this:

  '/(?<=^| )([a-z])/' => 'my_callback',

See also: the recipe PccfToPcfOverride allows existing recipes to run on PHP 7 without causing deprecated create_function() messages.

## PPRA($array_search_replace, $string)

The PPRA() function (PmWiki preg_replace array) can be used to perform a regular expression replacement with or without evaluation, for PHP 5.5 compatibility. Since PmWiki 2.2.56, PmWiki uses this function to process the following arrays: $MakePageNamePatterns, $FmtP, $QualifyPatterns, $ROEPatterns, $ROSPatterns, $SaveAttrPatterns, $MakeUploadNamePatterns. Any custom settings should continue to work for PHP 5.4 and earlier, but wikis running on PHP 5.5 may need to make a few changes.

The first argument contains the 'search'=>'replace' pairs, the second is the "haystack" string to be manipulated. The 'replace' parts of the array can be strings or function names. If the 'replace' part is a callable function name, it will be called with the array of matches as a first argument via preg_replace_callback(). If not a callable function, a simple preg_replace() will be performed.

Previously, PmWiki used such constructs:

  $fmt = preg_replace(array_keys($FmtP), array_values($FmtP), $fmt);

It is now possible to use simply this:

  $fmt = PPRA($FmtP, $fmt);

Note that since PHP 5.5, the search patterns cannot have an /e evaluation flag.
When creating the $array_search_replace array, before PHP 5.5 we could use something like (e.g. for $MakePageNamePatterns):

  '/(?<=^| )([a-z])/e' => "strtoupper('$1')",

Since PHP 5.5, we should use this (it will also work in PHP 5.4 and earlier):

  '/(?<=^| )([a-z])/' => PCCF("return strtoupper(\$m[1]);"),

Note that the /e flag should now be omitted; instead of '$0', '$1', '$2', we should use $m[0], $m[1], $m[2], etc. in the replacement code, and there is no need to call PSS() in the replacement code, as backslashes are not automatically added.

For PHP 7.2 and newer, instead of using PCCF() to create anonymous functions, we add a real function in our add-on, and then pass the function name as the pattern replacement (see example at PCCF, which will also work on PHP 4 and 5):

  '/(?<=^| )([a-z])/' => 'my_callback',

## PPRE($search_pattern, $replacement_code, $string)

Deprecated since PHP 7.2

The PPRE() function (PmWiki preg_replace evaluate) can be used to perform a regular expression replacement with evaluation. Since PHP 5.5, the preg_replace() function has deprecated the /e evaluation flag, and displays warnings when the flag is used. The PPRE() function automatically creates a callback function with the replacement code and calls it.

Before PHP 5.5, it was possible to use such calls:

  $fmt = preg_replace('/\\$([A-Z]\\w*Fmt)\\b/e', '$GLOBALS["$1"]', $fmt);

Since PHP 5.5, it is possible to replace the previous snippet with the following (which also works before PHP 5.5):

  $fmt = PPRE('/\\$([A-Z]\\w*Fmt)\\b/', '$GLOBALS[$m[1]]', $fmt);

Note that the /e flag should now be omitted; instead of '$0', '$1', '$2', we should use $m[0], $m[1], $m[2], etc. in the replacement code, and there is no need to call PSS() in the replacement code, as backslashes are not automatically added.

For PHP 7.2 and newer, calling this function will raise "deprecated" notices. You need to rewrite your code to use preg_replace_callback, by moving the code into real functions:

  $fmt = preg_replace_callback('/\\$([A-Z]\\w*Fmt)\\b/', 'my_global_var_callback', $fmt);

  function my_global_var_callback($m) { return $GLOBALS[$m[1]]; }

Instead of using PCCF() to create anonymous functions, we add a real function in our add-on, and then pass the function name as the pattern replacement (see example at PCCF, which will also work on PHP 4 and 5):

  '/(?<=^| )([a-z])/' => 'my_callback',

## Qualify($pagename, $text)

Qualify() applies $QualifyPatterns to convert relative links and references into absolute equivalents. This function is called by usual wiki markups that include text from other pages. It will rewrite links like [[Page]] into [[Group/Page]], and page (text) variables like {$Title} into {Group.Page$Title} so that they work the same way in the source page and in the including page. See also $QualifyPatterns and RetrieveAuthSection().

## PHSC($string_or_array, $flags=ENT_COMPAT, $encoding=null, $double_encode=true)

The PHSC() function (PmWiki HTML special characters) is a replacement for the PHP function htmlspecialchars. The htmlspecialchars() function was modified since PHP 5.4 in two ways: it now requires a valid string for the supplied encoding, and it changes the default encoding to UTF-8.
This can cause sections of the page to become blank/empty on many sites using the ISO-8859-1 encoding without having set the third argument ($encoding) when calling htmlspecialchars(). The PHSC() function calls htmlspecialchars() with an 8-bit encoding as third argument, whatever the encoding of the wiki (unless you supply an encoding). This way the string never contains invalid characters.

It should be safe for recipe developers to replace all calls to htmlspecialchars() with calls to PHSC(). Only the first argument is required when calling PHSC(), although authors may wish to call PHSC($string_or_array, ENT_QUOTES). Unlike htmlspecialchars(), the PHSC() function can process arrays recursively (only the values are converted, not the keys of the array).

## PSS($string)

The PSS() function (PmWiki Strip Slashes) removes the backslashes that are automatically inserted in front of quotation marks by the /e option of PHP's preg_replace function. PSS() is most commonly used in replacement arguments to Markup(), when the pattern specifies /e and one or more of the parenthesized subpatterns could contain a quote or backslash. ("PSS" stands for "PmWiki Strip Slashes".)

From PM: PmWiki expects PSS() to always occur inside of double-quoted strings and to contain single-quoted strings internally. The reason for this is that we don't want the $1 or $2 to accidentally contain characters that would then be interpreted inside of the double-quoted string when the PSS is evaluated.

  Markup('foo', 'inline', '/(something)/e', 'Foo(PSS("$1"))'); # wrong
  Markup('foo', 'inline', '/(something)/e', "Foo(PSS('$1'))"); # right

Note, the extra slashes are only added by preg_replace with an /e modifier. The markup definitions with Markup_e() do NOT need to use PSS() in the replacement strings. The new-type markup definitions with Markup() and a simple function name as a replacement do NOT need to use PSS() inside the replacement function. If you migrate old markup rules to the new format, delete the PSS() calls.

### Example

This is a fictitious example where PSS() should be used. Let us assume that you wish to define a directive (:example:) such that (:example "A horse":) results in the HTML <div>"A horse"</div>. Here is how the markup rule can be created:

  Markup('example', 'directives',
    '/\\(:example\\s(.*?):\\)/e',
    "Keep('<div>'.PSS('$1').'</div>')");

We need to use PSS() around the '$1' because the matched text could contain quotation marks, and the /e will add backslashes in front of them.

## stripmagic($string)

This function should be used when processing the contents of $_POST or $_GET variables when they could contain quotes or backslashes. It verifies get_magic_quotes(); if true, it strips the automatically inserted escapes from the string. The function can process arrays recursively (only the values are processed).

## FmtPageName($fmt, $pagename)

Returns $fmt, with $variable and $[internationalisation] substitutions performed, under the assumption that the current page is pagename. See PmWiki.Variables for an (incomplete) list of available variables, PmWiki.Internationalizations for internationalisation. Security: not to be run on user-supplied data. This is one of the major functions in PmWiki, see PmWiki.FmtPageName for lots of details.

## Markup($name, $when, $pattern, $replace)

Adds a new markup to the conversion table. Described in greater detail at PmWiki.CustomMarkup. This function is used to insert translation rules into PmWiki's translation engine.
The arguments to Markup() are all strings, where:

$name The string names the rule that is inserted. If a rule of the same name already exists, then this rule is ignored.

$when This string is used to control when a rule is to be applied relative to other rules. A specification of "<xyz" says to apply this rule prior to the rule named "xyz", while ">xyz" says to apply this rule after the rule "xyz". See CustomMarkup for more details on the order of rules.

$pattern This string is a regular expression that is used by the translation engine to look for occurrences of this rule in the markup source.

$replace This string gives the replacement that is substituted when $pattern matches; in new-type markup definitions it may instead be the name of a replacement function (see the note at PSS() above).

## mkdirp($dir)

The function mkdirp($dir) creates a directory, $dir, if it doesn't already exist, including any parent directories that might be needed. For each directory created, it checks that the permissions on the directory are sufficient to allow PmWiki scripts to read and write files in that directory. This includes checking for restrictions imposed by PHP's safe_mode setting. If mkdirp() is unable to successfully create a read/write directory, mkdirp() aborts with an error message telling the administrator the steps to take to either create $dir manually or give PmWiki sufficient permissions to be able to do it.

## MakeLink($pagename, $target, $txt, $suffix, $fmt)

The function MakeLink($pagename, $target, $txt, $suffix, $fmt) returns an html-formatted anchor link. Its arguments are as follows:

$pagename is the source page
$target is where the link should go
$txt is the value to use for '$LinkText' in the output
$suffix is any suffix string to be added to $txt
$fmt is a format string to use

If $txt is NULL or not specified, then it is automatically computed from $target. If $fmt is NULL or not specified, then MakeLink uses the default format as specified by the type of link. For page links this means the $LinkPageExistsFmt and $LinkPageCreateFmt variables; for intermap-style links it comes from either the $IMapLinkFmt array or from $UrlLinkFmt. Inside of the formatting strings, $LinkUrl is replaced by the resolved url for the link, $LinkText is replaced with the appropriate text, and $LinkAlt is replaced by any "title" (alternate text) information associated with the link.

## IsAuthorized($chal, $source, &$from)

IsAuthorized takes a page-attributes string (e.g. "id:user1$1$Ff3w34HASH...") in $chal. $source is simply returned and used for building the authcascade (pageattributes - groupattributes - $DefaultPassword). $from will be returned if $chal is empty, because it is not checked before calling IsAuthorized(); this is needed for the authcascade. IsAuthorized() returns an array with three values: $auth (1 - authenticated, 0 - not authenticated, -1 - refused); $passwd; $source from the parameter list.

## CondAuth($pagename, 'auth level')

CondAuth implements the ConditionalMarkup for (:if auth level:). For instance CondAuth($pagename, 'edit') is true if the authorization level is 'edit'. Use inside local configuration files to build conditionals with a check of authorization level, similar to using (:if auth level:) on a wiki page.

Note that CondAuth() should be called after all authorization levels and passwords have been defined. For example, if you use it with Drafts, you should include the draft.php script before calling CondAuth():

  $EnableDrafts = 1;
  $DefaultPasswords['publish'] = pmcrypt('secret');
  include_once("$FarmD/scripts/draft.php");
  if (! CondAuth($pagename, 'edit')) { /* whatever */ }

Best is to use CondAuth() near the bottom of your config.php script.
## RetrieveAuthPage($pagename, $level, $authprompt=true, $since=0)

where:

$pagename - name of page to be read
$level - authorization level required (read/edit/auth/upload)
$authprompt - true if user should be prompted for a password if needed
$since - how much of the page history to read:
  0 == read entire page including all of history
  READPAGE_CURRENT == read page without loading history
  timestamp == read history only back through timestamp

The $since parameter allows PmWiki to stop reading from a page file as soon as it has whatever information is needed -- i.e., if an operation such as browsing isn't going to need the page's history, then specifying READPAGE_CURRENT can result in a much faster loading time. (This can be especially important for things such as searching and page listings.) However, if combined with UpdatePage, the updated page will have no history.

Use e.g. $page = @RetrieveAuthPage('Main.MyPage', 'read') to obtain a page object that contains all the information of the corresponding file in separate keys, e.g. $page['text'] will contain a string with the current wiki markup of Main.MyPage.

Use this generally in preference to the alternative function ReadPage($pagename, $since=0) since it respects the authorisation of the user, i.e. it checks the authorisation level before loading the page, or it can be set to do so. ReadPage() reads a page regardless of permission.

Passing 'ALWAYS' as the authorization level (instead of 'read', 'edit', etc.) will cause RetrieveAuthPage to always read and return the page, even if it happens to be protected by a read password.

## RetrieveAuthSection($pagename, $pagesection, $list=NULL, $auth='read')

RetrieveAuthSection extracts a section of text from a page. If $pagesection starts with anything other than '#', the text before the first '#' (or all of it, if there is no '#') identifies the page to extract text from. Otherwise RetrieveAuthSection looks in the pages given by $list (which should be an array), or in $pagename if $list is not specified.

• The selected page is placed in the global $RASPageName variable.
• The caller is responsible for calling Qualify() as needed, i.e. if you need to control how unqualified page and variable names shall be resolved.
  • To have them act as in the original text, let Qualify() resolve them relative to the source page.
  • If the imported text was not meant as wikitext but as some other kind of markup that might happen to contain double pairs of square brackets, or dollar signs inside curly brackets, you probably don't want to Qualify() them. If you output them into wikitext, you'll probably need to Keep() the text (in case of HTML, XML, RSS or similar output, PHSC() first!), to prevent later stages of processing from interpreting the apparent wiki markups in context of the target page.
  • If your code produces wikitext for an auxiliary page that is meant to be included by another page higher up in the inclusion chain, and you want links and variables to work as if they were in the auxiliary page, use the auxiliary page's "GroupName.PageName" as the $pagename argument for Qualify().

This provides a way to limit the array that is returned by ReadPage, so that it only pulls the content up to a specific section marker.
For example, pulling from start of page to '##blogend':

  function FeedText($pagename, &$page, $tag) {
    $text = RetrieveAuthSection($pagename, '##blogend');
    $content = MarkupToHTML($pagename, $text);
    return "<$tag><![CDATA[$content]]></$tag>";
  }

The '##blogend' argument says to read from the beginning of the page to just before the line containing the marker. See IncludeOtherPages for more information about the section specifications. This version won't read text from pages that are read-protected; if you want to get text even from read-protected pages, then

  $text = RetrieveAuthSection($pagename, '##blogend', NULL, 'ALWAYS');

## UpdatePage($pagename, $old (page object), $new (page object))

UpdatePage() allows cookbook recipes to mimic the behavior of editing wiki pages via the browser. Internally, PmWiki does several housekeeping tasks which are accessible via this function (preserving history/diff information, updating page revision numbers, updating RecentChanges pages, sending email notifications, etc.)

• "Page object" refers to an array pulled from RetrieveAuthPage($pagename, $level, $authprompt=true, $since=0); (preferred), or ReadPage($pagename); (disregards page security). Note that $new['text'] should contain all page data for the new version of the page.
• If a page doesn't exist, UpdatePage() will attempt to create it.
• Ignoring $old (e.g. UpdatePage($pagename, '', $new);) will erase all historical page data---a tabula rasa.
• If you retrieved $old using RetrieveAuthPage($pagename, $auth, $prompt, READPAGE_CURRENT) and set $new = $old, then UpdatePage will also erase all historical data.

UpdatePage() cannot be called directly from config.php because there are necessary initializations which occur later in pmwiki.php. It is not enough to just load stdconfig.php. If you want to use UpdatePage() you will need to do it within a custom markup, a custom markup expression, or a custom action.
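A minimal sketch of such a custom action (the function and action names are invented for illustration; RetrieveAuthPage() and UpdatePage() are used as documented above, and in a real recipe the action would be registered via $HandleActions):

  ## Hypothetical custom action appending a line to a page.
  ## Register with: $HandleActions['appenddemo'] = 'HandleAppendDemo';
  function HandleAppendDemo($pagename, $auth = 'edit') {
    # Respect the page's edit password; prompt the user if necessary.
    $old = RetrieveAuthPage($pagename, $auth, true);
    if (!$old) Abort("?cannot edit $pagename");
    $new = $old;                         # keep the full history in the new version
    $new['text'] .= "\n\nAppended by the demo action.";
    UpdatePage($pagename, $old, $new);   # handles history, RecentChanges, notifications
    Redirect($pagename);                 # send the browser back to the page
  }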
## Brazilian Journal of Probability and Statistics

### Assigning probabilities to hypotheses in the context of a binomial distribution

#### Abstract

Given is the outcome $s$ of $S\sim{\mathrm{B}}(n,p)$ ($n$ known, $p$ fully unknown) and two numbers $0<a\leq b<1$. Required are probabilities $\alpha_{<}(s)$, $\alpha_{a,b}(s)$, and $\alpha_{>}(s)$ of the hypotheses $\mathrm{H}_{<}$: $p<a$, $\mathrm{H}_{a,b}$: $a\leq p\leq b$, and $\mathrm{H}_{>}$: $p>b$, such that their sum is equal to 1. The degenerate case $a=b(=c)$ is of special interest. A method, optimal with respect to a class of functions, is derived under Neyman–Pearsonian restrictions, and applied to a case from medicine.

#### Article information

Source: Braz. J. Probab. Stat., Volume 30, Number 1 (2016), 127-144.
Dates: Accepted September 2014; first available in Project Euclid 19 January 2016.
Permanent link: https://projecteuclid.org/euclid.bjps/1453211806
Digital Object Identifier: doi:10.1214/14-BJPS264
Mathematical Reviews number (MathSciNet): MR3453518
Zentralblatt MATH identifier: 1381.62046

#### Citation

Albers, Casper J.; Kardaun, Otto J. W. F.; Schaafsma, Willem. Assigning probabilities to hypotheses in the context of a binomial distribution. Braz. J. Probab. Stat. 30 (2016), no. 1, 127-144. doi:10.1214/14-BJPS264. https://projecteuclid.org/euclid.bjps/1453211806
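To make the problem statement concrete, here is an illustration only: a flat-prior Bayesian assignment with the same interface, which is not the Neyman–Pearson-constrained optimal method the paper derives (scipy is assumed available):

```python
from scipy.stats import beta

# Illustration: a flat-prior (Beta(1,1)) Bayesian assignment of probabilities
# to H_<: p < a, H_{a,b}: a <= p <= b, H_>: p > b, given s successes in n
# trials. NOT the paper's method; just a baseline with the same interface.
def hypothesis_probs(s, n, a, b):
    post = beta(s + 1, n - s + 1)          # posterior of p under a uniform prior
    alpha_lt = post.cdf(a)                 # alpha_<(s)  = P(p < a | s)
    alpha_ab = post.cdf(b) - post.cdf(a)   # alpha_ab(s) = P(a <= p <= b | s)
    alpha_gt = 1.0 - post.cdf(b)           # alpha_>(s)  = P(p > b | s)
    return alpha_lt, alpha_ab, alpha_gt    # sums to 1 by construction

print(hypothesis_probs(s=7, n=20, a=0.3, b=0.5))
```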
# Remove boxes under preservation of shape

This challenge is specific to languages of the APL family. Although it might be solvable in other languages, please do not expect this to be easy as it depends on some specific concepts of this language family.

Many array oriented languages have the concept of a box (also called enclosure). A box encapsulates an array, hiding its shape. Here is an example of a boxed array:

┌─┬───────┬─────┐
│0│1 2 3 4│0 1 2│
│ │       │3 4 5│
└─┴───────┴─────┘

This array was generated from the J sentence 0 ; 1 2 3 4 ; i. 2 3 and contains three boxes, the first containing the scalar 0, the second containing a vector of shape 4, and the third containing a matrix of shape 2 3.

Some arrays of boxes look like they are plain arrays broken apart by boxes:

┌─┬───────────┐
│0│1 2 3 4    │
├─┼───────────┤
│5│ 9 10 11 12│
│6│13 14 15 16│
│7│17 18 19 20│
│8│21 22 23 24│
└─┴───────────┘

The previous array was generated from the J expression 2 2 $ 0 ; 1 2 3 4 ; (,. 5 6 7 8) ; 9 + i. 4 4.

Your goal in this challenge is to take an array like that and remove the boxes separating the subarrays from one another. For instance, the previous array would be transformed into

0  1  2  3  4
5  9 10 11 12
6 13 14 15 16
7 17 18 19 20
8 21 22 23 24

Submit a solution as a monadic verb / function. The verb must work on arrays of arbitrary rank. You may assume that the shapes of the boxed arrays fit, that is, boxed subarrays adjacent on an axis have to have the same dimensions in all but that axis. Behaviour is undefined if they don't.

This challenge is code golf. The submission comprising the fewest characters wins.

• Which languages are considered part of the APL family? – Alex A. Apr 1 '15 at 15:37
• @AlexA. Languages like APL, J, and K. – FUZxxl Apr 1 '15 at 15:37
• @randomra An array of boxes such that the arrays in the boxes if concatenated along the respective axes would form one large array. Imagine each boxed subarray to be a cuboid or rectangle and you put these together as specified by the boxes' positions to get a large cuboid or rectangle. – FUZxxl Apr 1 '15 at 15:46
• @randomra Your input would be considered invalid as the subarrays are not directly concatenable along the axes they would be concatenated. – FUZxxl Apr 1 '15 at 15:51
• In the interest of making this less language-specific, why not provide the example also using simple nested array notation for other languages? – Martin Ender Apr 1 '15 at 18:43

## 2 Answers

# J, 73 chars

f=.3 :0
>".'z',~,(',"','&.>/',~":)"*1+i.#$z=.(,$~$,~1#~(#$y)-#@$)&.>y
)

Usage and tests:

a=.2 2 $ 0 ; 1 2 3 4 ; (,. 5 6 7 8) ; 9 + i. 4 4
f a
0  1  2  3  4
5  9 10 11 12
6 13 14 15 16
7 17 18 19 20
8 21 22 23 24
f a,:a  NB. rank 3 test
0  1  2  3  4
5  9 10 11 12
6 13 14 15 16
7 17 18 19 20
8 21 22 23 24

0  1  2  3  4
5  9 10 11 12
6 13 14 15 16
7 17 18 19 20
8 21 22 23 24

Golfing and explanation coming tomorrow.

• Your solution looks like you took a sledgehammer to the problem and beat it with brute force until it ceased to exist. – FUZxxl Apr 1 '15 at 18:43
• @FUZxxl Exactly. Interested in nice solutions. – randomra Apr 1 '15 at 18:47

# J, 39

f=:([:;(([:(;"1@|:)(;/&> ::]))&.>)@<"1)

Usage and tests

a=.2 2 $ 0 ; 1 2 3 4 ; (,. 5 6 7 8) ; 9 + i. 4 4
f a
0  1  2  3  4
5  9 10 11 12
6 13 14 15 16
7 17 18 19 20
8 21 22 23 24

• Is the name for v really needed? This can be shortened I think. – FUZxxl Apr 1 '15 at 20:33
• good point.
shortened – joebo Apr 1 '15 at 20:37 • You can leave out some parentheses: [:;([:;"1@|:;/&> ::])&.>@<"1 it's not needed to assign the verb to a name either. – FUZxxl Apr 2 '15 at 10:45 • This looks a little bit like it would only work with two-dimensional test-cases. – FUZxxl Apr 2 '15 at 10:46
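Following up on the comment about making this less language-specific: the same "remove the boxes" operation can be illustrated outside the APL family with nested array notation. Below is a minimal sketch in Python with NumPy, an illustration of the task rather than a golfed submission, in which np.block plays the role of the box-removing verb for the rank-2 example above:

    import numpy as np

    # The 2 2 $ 0 ; 1 2 3 4 ; (,. 5 6 7 8) ; 9 + i. 4 4 example,
    # written as a 2x2 nested list of "boxes" (here: 2-D blocks).
    blocks = [
        [np.array([[0]]),                np.array([[1, 2, 3, 4]])],
        [np.array([[5], [6], [7], [8]]), np.arange(9, 25).reshape(4, 4)],
    ]

    # np.block concatenates the innermost blocks along the last axis and
    # each outer nesting level along the next-higher axis, which removes
    # the boxes and yields one plain array.
    print(np.block(blocks))

Running this prints the same 5x5 matrix as the expected challenge output.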
# Getting Started/Build/Windows/GCC And MinGW

### Introduction

This step-by-step tutorial shows how to compile the current kdelibs from the upcoming Qt4-based KDE4 under Microsoft Windows.

### Installing kdewin-installer

This is an installer that lets you easily install all the requirements for building kdelibs. Its package list also includes helpful and necessary tools, such as the MinGW compiler suite, Subversion clients, and debugging tools. The kdewin-installer is available in GUI and console-only forms from the kdewin-installer download page.

### Using kdewin-installer to install the requirements

For the GUI version: if you are running the installer for the first time, have a look at the Settings page (click the 'Settings' button). You can select another installation directory if you want; choose one without spaces in its path, as some parts of the KDE4 build system appear to have problems with spaces. Select the compiler you use and accept the settings.

You'll see a tree of all available packages on the right side. In that tree, select kdesupport-mingw and win32libs in the all category under the dependencies subtree. This should select all packages needed to build kdelibs. Additionally, you might want to select the MinGW, cmake, TortoiseSVN, mingw-utils, and zip packages from the tools category if you haven't already installed them. TortoiseSVN lets you check out KDE4 from Subversion, cmake is a required build tool, and mingw contains the compiler.

After clicking 'Finish', the installer will download and install the packages. For MinGW, CMake, and TortoiseSVN it only downloads and launches an installer executable, so expect additional installation dialogs for those.

### Setting up the user's environment

The next step is setting up a proper environment to build KDE4 applications. This includes setting the PATH variable and some additional variables needed by CMake and for running KDE applications. The path C:\kde4\win32libs is used as the installation directory of the kdewin-installer, and C:\kde4\kdelibs-install as the path where all KDE4 modules will be installed. Obviously, you have to change the actual values to suit your system. The sources from SVN reside in C:\kde4\kdelibs-src.

Create a file environment.bat and add the following lines:

    @set SOURCE_PATH=C:\kde4\kdelibs-src
    @set INSTALL_PATH=C:\kde4\kdelibs-install
    @set UTILS_PATH=C:\kde4\win32libs
    @set DBUSDIR=%UTILS_PATH%
    @set KDEDIRS=%UTILS_PATH%;%INSTALL_PATH%
    @set KDEWIN_DIR=%UTILS_PATH%
    @set PATH=%PATH%;%UTILS_PATH%\bin;%INSTALL_PATH%\bin;%INSTALL_PATH%\lib
    @set QT_PLUGIN_PATH=%INSTALL_PATH%\lib\kde4\plugins;%UTILS_PATH%\plugins
    @set STRIGI_HOME=%UTILS_PATH%
    @set XDG_DATA_DIRS=%UTILS_PATH%\share;%INSTALL_PATH%\share

You will have to run this file every time you start a new cmd shell. If you don't want to run it each time, you can make the variable entries permanent: open the Control Panel and select the System entry, go to the Extended tab, and select Environment Variables. In the section titled 'user variables', add the above entries. Make sure cmake, mingw32-make, etc. are in your PATH as well.

### Building kdelibs from svn

Now that all requirements are installed and the environment variables are set, you can check out the kdelibs module from svn://anonsvn.kde.org/home/kde/trunk/KDE/kdelibs into the dedicated source directory.
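If you prefer a command-line Subversion client to TortoiseSVN, the checkout might look like this (a sketch assuming the svn command-line client is installed and on your PATH; the target directory matches the SOURCE_PATH variable defined above):

    C:\>svn checkout svn://anonsvn.kde.org/home/kde/trunk/KDE/kdelibs %SOURCE_PATH%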
Afterwards, create a new subdirectory named build inside the kdelibs source directory. Then open a command console: select Start->Run, type cmd into the field, and navigate to the just-created build directory.

The next step is running CMake. For an easy start, just run (using the installation directory that was set in the environment variables section):

    cmake -G "MinGW Makefiles" .. -DCMAKE_INSTALL_PREFIX=%INSTALL_PATH%\ -DCMAKE_INCLUDE_PATH=%UTILS_PATH%\include\ -DCMAKE_LIBRARY_PATH=%UTILS_PATH%\lib

Enter this command as a single line. You can set other CMake variables in the same way; for example, you can create a debug build by adding -DCMAKE_BUILD_TYPE=Debug.

Now you can let MinGW build and install the module by issuing:

    mingw32-make
    mingw32-make install

This will take some time.

### Building KDE4 applications

The process for building other modules containing applications of interest to you is the same as the one you've just followed for kdelibs. There might be additional requirements for some KDE4 modules that you have to build first; in particular, kdebase might be required (which in turn requires kdepimlibs).

### Running KDE4 applications

This can be done from the same command window where you've built KDE4. Just type the application name and hit Enter. This should automatically start D-Bus and all KDE4 daemons that are required for KDE4 applications to run. For example, after building and installing both the kdevplatform and kdevelop modules, the KDevelop4 IDE can be started by executing kdevelop.

If you get errors about missing DLLs, make sure the directories containing those DLLs are included in your PATH variable. You can add a path either by changing the environment variables via the Control Panel (be sure to restart cmd.exe!) or by entering (if C:\example\path\to\dll\lib is the location of the DLL):

    C:\>set PATH=%PATH%;C:\example\path\to\dll\lib
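As an optional alternative to the Control Panel route, newer Windows versions also ship the setx tool, which makes such a change permanent from the shell. Note that, unlike set, setx does not affect the already-open shell session, and it stores the expanded value of %PATH% at the time of the call:

    C:\>setx PATH "%PATH%;C:\example\path\to\dll\lib"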
## Precalculus (6th Edition) Blitzer

The solution is $\underline{x=\frac{\pi }{2},\frac{7\pi }{6},\frac{3\pi }{2},\frac{11\pi }{6}}$.

We have to solve the equation
\begin{align}
& \sin 2x+\cos x=0 \\
& 2\sin x\cos x+\cos x=0
\end{align}
using the double-angle formula $\sin 2\theta =2\sin \theta \cos \theta$.

Factor:
$\cos x\left( 2\sin x+1 \right)=0$

By the zero-product property, either $\cos x=0$ or $2\sin x+1=0$.

From the first factor:
\begin{align}
& \cos x=0 \\
& x=\left( 2n+1 \right)\frac{\pi }{2}
\end{align}

From the second factor:
\begin{align}
& 2\sin x+1=0 \\
& 2\sin x=-1 \\
& \sin x=-\frac{1}{2}
\end{align}

In the interval $\left[ 0,2\pi \right),$ the sine function equals $-\frac{1}{2}$ at $\frac{7\pi }{6}\text{ and }\frac{11\pi }{6}$, according to the trigonometric table. Since the period of the sine function is $2\pi$, the general solutions from this factor are:
$x=\frac{7\pi }{6}+2n\pi$ and $x=\frac{11\pi }{6}+2n\pi$

To list the solutions in $\left[ 0,2\pi \right),$ put $n=0,1,2,3,\ldots$

For $n=0$:
\begin{align}
& x=\left( 2n+1 \right)\frac{\pi }{2}=\frac{\pi }{2}, \\
& x=\frac{7\pi }{6}+2n\pi =\frac{7\pi }{6}, \\
& x=\frac{11\pi }{6}+2n\pi =\frac{11\pi }{6}
\end{align}

For $n=1$:
\begin{align}
& x=\left( 2n+1 \right)\frac{\pi }{2}=\frac{3\pi }{2}
\end{align}

For other values of $n$, the solutions fall outside the interval $\left[ 0,2\pi \right).$
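As a quick check, substituting each solution back into the original equation $\sin 2x+\cos x=0$ confirms it:
\begin{align}
& x=\frac{\pi }{2}: & \sin \pi +\cos \frac{\pi }{2}=0+0=0 \\
& x=\frac{7\pi }{6}: & \sin \frac{7\pi }{3}+\cos \frac{7\pi }{6}=\frac{\sqrt{3}}{2}-\frac{\sqrt{3}}{2}=0 \\
& x=\frac{3\pi }{2}: & \sin 3\pi +\cos \frac{3\pi }{2}=0+0=0 \\
& x=\frac{11\pi }{6}: & \sin \frac{11\pi }{3}+\cos \frac{11\pi }{6}=-\frac{\sqrt{3}}{2}+\frac{\sqrt{3}}{2}=0
\end{align}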