Lab 8: Hypothesis Testing & Randomization

Now that we have covered the basics of the Central Limit Theorem, today we move one step further and discuss hypothesis testing, the gold-standard practice in science. Hypothesis testing is very much like the saying that "every person is innocent until proven guilty beyond a reasonable doubt." Similarly, in hypothesis testing, we treat each estimate (e.g., a mean, or the difference between two groups) as a product of chance (i.e., innocent) until we show that chance is very unlikely to have produced it. The presumption of innocence is the presumption of chance in the world of statistics. So what counts as reasonable doubt?

Let's take a more practical example. Suppose you found a magic pill that improves memory. To test its effectiveness, you need to run an experiment, so you sample random students from Brooklyn College and assign them to two groups: a control group and an experimental group. The experimental group takes the magic pill just before studying for an exam; the control group takes a placebo pill that does nothing. After the memory test, you have two sets of grades: one from the control group and one from the experimental group. Subtracting the control mean from the experimental group mean, you find an average difference of 6% (meaning the grades of the experimental group were, on average, 6% better than the grades of the control group).

In hypothesis testing, you first define the alternative hypothesis, which is that the magic pill improves students' memory (and hence their grades are better). The null hypothesis is that the magic pill does not improve students' memory. The basic concept is that we can't prove that the pill actually improves students' memory, because we can't calculate the probability of scoring 6% better given that you take the pill.
However, we can calculate the probability of obtaining a 6% difference by chance, and then decide for ourselves whether this probability is "beyond reasonable doubt" or, in more statistical terms, whether it is less than our alpha criterion (5%). If it is less than alpha, we reject the null hypothesis and conclude that this 6% is very unlikely to have been produced by chance. If the probability is bigger than alpha, we say that we failed to reject the null hypothesis. Be careful: this does not prove that the null hypothesis is true! If a person is not proven guilty beyond a reasonable doubt, that does not mean this person did not commit the crime.

Let's now talk simulations. We will simulate two groups with the same parameters (mean and sd) from a normal distribution (or any distribution you like), calculate the mean difference between the two groups, and finally visualize the distribution of those differences. We will repeat this process 1,000 times:

```r
mean_differnces <- c()
sample_size <- 20
n_exps <- 1000
pop_mean <- 70
pop_sd <- 20

for (i in 1:n_exps){
  group_experimental <- rnorm(n=sample_size, mean=pop_mean, sd=pop_sd)
  group_control <- rnorm(n=sample_size, mean=pop_mean, sd=pop_sd)
  mean_experimental <- mean(group_experimental)
  mean_control <- mean(group_control)
  mean_diff <- mean_experimental - mean_control
  mean_differnces[i] <- mean_diff
}
```

Since both groups come from the same mean and standard deviation, we know that any reported difference can only be due to chance (random sampling). Let's look at the range of those mean differences:

```r
range(mean_differnces)
```

```
## [1] -18.20509  15.61411
```

Oops! We can actually observe large differences, even though those differences are purely due to random sampling. Let's make a histogram:

```r
ggplot(data.frame(mean_differnces), aes(x=mean_differnces)) +
  geom_histogram(bins = 15)
```

We see that we have a few extreme values, but most values are actually close to zero.
This is the range of differences that could be obtained if NO manipulation took place. In other words, this is the null distribution of differences. According to the CLT, we know that this distribution should approximate a normal distribution if we increase the sample size of each group. Since we know how to calculate probabilities from a normal distribution, that means we can now calculate the probability of observing any difference by chance. Let's calculate the probability of observing a difference of 6% or more (in either direction), first using the samples we have:

```r
mean(mean_differnces > 6 | mean_differnces < -6)
```

```
## [1] 0.353
```

And using pnorm, which gives us the true mathematical probability if we sampled an infinite number of subjects in both groups:

```r
pnorm(q=6, mean=mean(mean_differnces), sd=sd(mean_differnces), lower.tail=FALSE) +
  pnorm(q=-6, mean=mean(mean_differnces), sd=sd(mean_differnces), lower.tail=TRUE)
```

```
## [1] 0.333289
```

This is the role of chance in this experiment. Is that role small enough, beyond reasonable doubt, for you to reject the null hypothesis and claim victory? Well, we usually set alpha to 5% or lower, so this would not be considered significant by scientific standards. Does that make your magic pill ineffective? Not at all. Let's repeat the same simulation but with a much larger sample (N=200 per group this time):

```r
mean_differnces <- c()
sample_size <- 200
n_exps <- 1000
pop_mean <- 70
pop_sd <- 20

for (i in 1:n_exps){
  group_experimental <- rnorm(n=sample_size, mean=pop_mean, sd=pop_sd)
  group_control <- rnorm(n=sample_size, mean=pop_mean, sd=pop_sd)
  mean_experimental <- mean(group_experimental)
  mean_control <- mean(group_control)
  mean_diff <- mean_experimental - mean_control
  mean_differnces[i] <- mean_diff
}
```

And now we calculate the probability of observing a 6% difference by chance:

```r
mean(mean_differnces > 6 | mean_differnces < -6)
```

```
## [1] 0.001
```

The probability is now so small that the magic pill could actually be your ticket into the billionaire club.
But 6% is actually not that big! A small probability of chance does not mean the magnitude of the effect is big or meaningful in any way.

The Randomization Test

Until now, we have assumed that the difference scores are normally distributed. And they are, because we are sampling both groups from the same normal distribution. However, that may not be the case for all the things we are interested in. What should we do to calculate the probability of chance given any data (or any distribution)? Try the randomization test. It is a non-parametric method that we'll begin to use more often to make inferential statements. Basically, we no longer require the real world to behave according to our math, but that means we'll rely on simulations to make inferential statements.

Let's explain how the randomization test works in the context of hypothesis testing. You have two sets of scores from your experiment, one from the control group and the other from the experimental group. You take the difference between the two, and you want to calculate the probability of observing this difference by chance. We first calculate the observed difference. Then, to build the null distribution, we repeatedly shuffle the group assignments and recalculate the difference. After we do that many times, we can use this distribution of differences (under shuffled group assignments) as our null distribution and use it to calculate the probability.
Let's see how to do that:

```r
sample_size <- 20
n_exps <- 1000
pop_mean <- 70
pop_sd <- 20

group_experimental <- rnorm(n=sample_size, mean=pop_mean, sd=pop_sd)
group_control <- rnorm(n=sample_size, mean=pop_mean, sd=pop_sd)

group_score_both <- c(group_experimental, group_control)
group_assignments <- rep(c("A","B"), each=20)
```

Let's see what group_score_both and group_assignments look like:

```
## [1] 60.40451 87.56891 71.95822 42.40252 75.02995 29.60500
## [1] "A" "A" "A" "A" "A" "A"
```

Basically, the first score is from group "A", the second from group "A", and so on. Let's review what sampling without replacement does:

```r
sample(group_assignments, replace=FALSE)
```

```
## [1] "A" "B" "A" "A" "A" "B" "A" "B" "B" "B" "A" "B" "B" "A" "B" "B" "B"
## [18] "B" "A" "A" "B" "A" "A" "A" "B" "A" "B" "B" "B" "B" "A" "A" "B" "A"
## [35] "B" "A" "A" "A" "A" "B"
```

As you can see, sampling without replacement just shuffles the letters around and breaks their order. Now we will perform the randomization test: shuffle the group assignments at random, then calculate the difference.

```r
mean_differnces <- c()
# Note, we already have a set of scores for both groups
for (i in 1:n_exps){
  shuffled_assignments <- sample(group_assignments, replace=FALSE) # without replacement
  group_experimental_shuffled <- group_score_both[shuffled_assignments=='A']
  group_control_shuffled <- group_score_both[shuffled_assignments=='B']
  mean_experimental <- mean(group_experimental_shuffled)
  mean_control <- mean(group_control_shuffled)
  mean_diff <- mean_experimental - mean_control
  mean_differnces[i] <- mean_diff
}
```

Now we can make a distribution of those differences (counting differences in both directions, because we are doing a two-tailed or non-directional test):

```r
ggplot(data.frame(mean_differnces), aes(x=mean_differnces)) +
  geom_histogram(bins = 15)
```

Those differences are examples of what you would expect if there were no effect.
If the observed difference looks like all those differences, then we conclude that chance could have produced the effect. However, if it looks more extreme, then we conclude that it is very unlikely to have been produced by chance. Given an alpha criterion of 5%, we can now calculate the critical values (the smallest differences that would lead to rejection):

```r
quantile(mean_differnces, c(0.025, 0.975)) # 5% divided by two tails: 2.5% each
```

```
##      2.5%     97.5%
## -11.05458  10.76974
```

Any observed difference between those two numbers would be a failure to reject the null hypothesis. Observed differences outside that range would occur only 5% of the time, and hence we could reject the null hypothesis. Having observed 6%, we can simply calculate the probability of such a difference:

```r
mean(mean_differnces < -6 | mean_differnces > 6)
```

```
## [1] 0.275
```

This is much higher than our alpha (and 6% is inside the critical range).

Exercise: Make a simulation of two groups (N=10 per group) drawn from the same population with the following parameters: mean=40 and sd=10.

• Use the CLT to find the probability of observing a difference of 4 or more between the two groups (either positive or negative) by chance.
• Calculate the same probability using the randomization test.
• Given the similarity of the two probabilities, what is the biggest advantage of using the randomization test?
• While keeping everything else fixed, what happens to those probabilities if we increase the number of subjects (N) from 10 to 1,000, and why? (No need to provide the code; just write what happens and the reason you think is behind this effect.)
Explaining changes in real-world data

The success of deep learning is a testament to the power of statistical correlation: if certain image features are consistently correlated with the label "cat", you can teach a machine learning model to identify cats.

But sometimes, correlation is not enough; you need to identify causation. For example, during the COVID-19 pandemic, a retailer might have seen a sharp decline in its inventory for a particular product. What caused that decline? An increase in demand? A shortage in supply? Delays in shipping? The failure of a forecasting model? The remedy might vary depending on the cause.

Earlier this month, at the International Conference on Artificial Intelligence and Statistics (AISTATS), my colleagues and I [presented a new technique](https://www.amazon.science/publications/why-did-the-distribution-change) for identifying the causes of shifts in a probability distribution. Our approach involves causal graphs, which are [graphical](https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)) blueprints of sequential processes.

Each node of the graph, together with its incoming edges, represents a causal mechanism, or the probability that a given event will follow from the event that precedes it. We show how to compute the contribution that changes in the individual mechanisms make to changes in the probability of the final outcome.

We tested our approach using simulated data, so that we could stipulate the probabilities of the individual causal mechanisms, giving us a ground truth to measure against. Our approach yielded estimates that were very close to the ground truth: a deviation of only 0.29 according to [L1 distance](https://en.wikipedia.org/wiki/Taxicab_geometry). And we achieved that performance even at small sample sizes, as few as 500 samples, drawn at random from the probability distributions we stipulated.

Consider a causal graph representing factors that contribute to the amount of inventory a retailer has on hand. (This is a drastic simplification; the causal graphs for real-world inventory counts might have dozens of factors, rather than five.)

In this simplified model, the simulation system estimates the cost (X1) of replenishing inventory; the forecasting algorithm estimates demand (X2); a planning algorithm (X3) determines the size and timing of purchase orders; bidding (X4) occurs opportunistically, as when a large supply of some product becomes available at a discounted rate; and together, all those factors contribute to the inventory on hand (X5).

Each input-output relation in this network has an associated conditional probability distribution, or causal mechanism. The probabilities associated with the individual causal mechanisms determine the joint distribution of all the variables (X1-X5), or the probability that any given combination of variables will occur together. That in turn determines the probability distribution of the target variable: the amount of inventory on hand.

A large change to the final outcome may be accompanied by changes to all the causal mechanisms in the graph. Our technique identifies the causal mechanism whose change is most responsible for the change in outcome.

Our fundamental insight is that any given causal mechanism in the graph could, in principle, change without affecting the others. So given a causal graph, the initial causal mechanisms, and data that imply new causal mechanisms, we update the causal mechanisms one by one to determine the influence each has on the outcome.

In this version of the graph, the mechanism for cost has been updated, followed by the mechanism for demand, which accounts for 25% of the total change in on-hand inventory.

The problem with this approach is that our measurement of each node's contribution depends on the order in which we update the nodes. The measurement evaluates the consequences of changing the node's causal mechanism given every possible value of the other variables in the graph. But the probabilities of those values change when we update causal mechanisms. So we'll get different measurements, depending on which causal mechanisms have already been updated.

To address this problem, we run through every permutation of the update order and average the per-node results, an adaptation of a technique from game theory called computing the Shapley value.

In practice, of course, causal mechanisms are something we have to infer from data; we're not given probability distributions in advance. But to test our approach, we created a simple causal graph in which we could stipulate the distributions. Then, using those distributions, we generated data samples.

Across 100 different random changes to the causal mechanisms of our graph, our method performed very well; with 500 data samples per change, it achieved an average deviation from ground truth of 0.29 as measured by [L1 distance](https://en.wikipedia.org/wiki/Taxicab_geometry). Our ground truth is at least a 3-D vector (6-D at most), with at least one component whose magnitude is at least one (five at most).

We tested different volumes of data samples, from 500 to 4,000, but adding more samples had little effect on the accuracy of the approximation.

Internally, we have also applied our technique to questions of supply chain management. For a particular family of products, we were able to identify the reasons for a steady decline in on-hand inventory during the pandemic, when that figure had held steady for the preceding year.

ABOUT THE AUTHOR

#### **[Kailash Budhathoki](https://www.amazon.science/author/kailash-budhathoki)**

Kailash Budhathoki is an applied scientist at Amazon.
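The permutation-averaging idea can be made concrete with a toy example. The sketch below is hypothetical (a two-node chain X -> Y with made-up Bernoulli mechanisms, not the paper's graph, data, or implementation), but it shows how Shapley-style per-mechanism contributions to a change in P(Y=1) are computed:

```python
from itertools import permutations

# Hypothetical toy mechanisms (illustrative numbers, not from the paper):
# "X" maps to P(X=1); "Y" maps to P(Y=1 | X) for X in {0, 1}.
old = {"X": 0.2, "Y": {0: 0.1, 1: 0.7}}
new = {"X": 0.5, "Y": {0: 0.1, 1: 0.9}}

def p_y1(mech):
    """Target functional: P(Y=1) implied by a set of mechanisms."""
    px1 = mech["X"]
    return (1 - px1) * mech["Y"][0] + px1 * mech["Y"][1]

def shapley_contributions(old, new):
    """Average each mechanism's marginal effect over all update orders."""
    nodes = list(old)
    contrib = {n: 0.0 for n in nodes}
    orders = list(permutations(nodes))
    for order in orders:
        mech = dict(old)                # start from the old mechanisms
        for node in order:
            before = p_y1(mech)
            mech[node] = new[node]      # update this node's mechanism
            contrib[node] += (p_y1(mech) - before) / len(orders)
    return contrib

c = shapley_contributions(old, new)
# The per-mechanism contributions sum to the total change in P(Y=1).
assert abs(sum(c.values()) - (p_y1(new) - p_y1(old))) < 1e-9
```

Because each update order telescopes to the same total, the averaged contributions always add up to the overall change in the outcome; the averaging only affects how that total is split between mechanisms.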
Assuming 15,000 cm³ of wood, I got: 12,750 g.

Kate, remember that you need to include the units as well! For example, you have a volume of 15,000... cm³? You may use a table to find the DENSITY of your material (basically, how much stuff fits into a given volume of this stuff); these tables can be found in textbooks, websites, etc.

Density tells you the amount of mass in grams that fits into a volume of 1 cm³; for wood it is 0.85 g/cm³.

So:

density = mass / volume

and rearranging:

mass = density × volume

If you have 15,000 cm³ of wood, then:

mass = 0.85 × 15,000 = 12,750 g

If you have different types of wood (cedar, fir, pine, etc.), you can change the corresponding value of density and evaluate the new value of mass.
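The rearranged formula is easy to script; this short snippet just re-runs the arithmetic above:

```python
# mass = density * volume, using the values from the answer above
density_g_per_cm3 = 0.85   # density of wood, g/cm^3
volume_cm3 = 15000         # given volume
mass_g = density_g_per_cm3 * volume_cm3
print(mass_g)              # about 12,750 g
```

Swapping in the density of cedar, fir, or pine gives the corresponding mass for the same volume.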
TreeView Knowledge Base Browser - Number

Sigma KEE - Number

Parents

Quantity: Any specification of how many or how much of something there is. Accordingly, there are two subclasses of Quantity: Number (how many) and PhysicalQuantity (how much).

Children

ComplexNumber: A Number that has the form x + yi, where x and y are RealNumbers and i is the square root of -1.

ImaginaryNumber: Any Number that is the result of multiplying a RealNumber by the square root of -1.

MultipoleVariable: A variable that describes energetic interactions between multipoles.

RealNumber: Any Number that can be expressed as a (possibly infinite) decimal, i.e., any Number that has a position on the number line.
(II) Draw a graph like Fig. 14-11 for a horizontal spring whose spring constant is 95 N/m and which has a mass of 75 g on the end of it. Assume the spring was started with an initial amplitude of 2.0 cm. Neglect the mass of the spring and any friction with the horizontal surface. Use your graph to estimate (a) the potential energy and (b) the kinetic energy for x = 1.5 cm.
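For checking a graph-based estimate, the exact energy bookkeeping behind this problem (total energy E = ½kA², split between elastic potential and kinetic energy at displacement x) can be sketched as follows, using the values from the problem statement:

```python
# Energy conservation for a mass on a horizontal spring (no friction)
k = 95.0      # spring constant, N/m
A = 0.020     # initial amplitude, m (2.0 cm)
x = 0.015     # displacement of interest, m (1.5 cm)

E_total = 0.5 * k * A**2   # total mechanical energy, constant in time
U = 0.5 * k * x**2         # elastic potential energy at x
K = E_total - U            # kinetic energy at x: about 8.3 mJ
print(f"E = {E_total:.4f} J, U = {U:.4f} J, K = {K:.4f} J")
```

A graph reading should land near these exact values; note that the 75 g mass affects the period of the motion but not the energy split at a given displacement.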
Volkan Emre Arpinar^1,2, Kevin Koch^1,2, Sampada Bhave^1, L Tugan Muftuler^2,3, Baolian Yang^4, S Sivaram Kaushik^4, Suchandrima Banerjee^4, and Andrew Nencka^1,2

^1Radiology, Medical College of Wisconsin, Milwaukee, WI, United States; ^2Center for Imaging Research, Medical College of Wisconsin, Milwaukee, WI, United States; ^3Neurosurgery, Medical College of Wisconsin, Milwaukee, WI, United States; ^4GE Healthcare, Waukesha, WI, United States

Simultaneous multi-slice (SMS) imaging requires the application of a parallel imaging algorithm for image unaliasing. Including coil compression in SMS image reconstruction reduces the computational load of the reconstruction algorithm and can better condition the matrix that is inverted in the unaliasing algorithm. The goal of this abstract is to evaluate the optimal level of coil compression to use with slice-ARC in Human Connectome Project (HCP)-compliant and other SMS protocols with a Nova Medical 32-channel head coil. It was found that, for all levels of coil compression, application of the compression algorithm yielded a benefit in reconstruction performance. Additionally, it was found that the application of coil compression does not significantly impact the selection of a CAIPI shift factor unless a coil compression of 50% or greater is used.

SMS imaging requires the application of a parallel imaging algorithm for image unaliasing. While an increase in the number of distinct coil array elements better conditions the matrix inversion in the unaliasing problem, it also increases the computational burden of the unaliasing algorithm. Field-of-view shifts from CAIPIRINHA are also applied while acquiring data to further improve the conditioning of the matrix inverted in the unaliasing problem^1. Coil compression provides a means to reduce the number of coils to be reconstructed, while yielding synthetic channels with reduced correlation and preserving maximal information.
It was hypothesized that a given level of coil compression would yield image reconstruction quality equivalent to reconstruction without compression, while requiring reduced reconstruction time.

Phantom and human imaging experiments were conducted. In each case, a time series of length 114 was acquired with multi-phase echo planar imaging, including TE 30 ms, TR 1100 ms, acquisition duration 2:05, flip angle 50°, and matrix size 104x104. In the phantom acquisition, SMS factors with CAIPI field-of-view shifts (SMS factor/FOV shift) of 2/0.5, 3/0.33, 4/0.5, 4/0.25, 5/0.2, 6/0.5, 6/0.33, 6/0.17, 7/0.14, 8/0.5, 8/0.25, and 8/0.12 were collected. Optimal shifts from the phantom experiment with SMS factors of 4, 6, and 8 were acquired in two consented healthy volunteers (mean age: 30 years; mean weight: 155 lbs). Data were saved for off-line reconstruction, wherein an Orchestra C++ algorithm was modified to include coil compression^2,3 with different numbers of virtual coils (ranging from 8 to 32) before image unaliasing. Reconstruction performance was evaluated via time series temporal signal-to-noise ratio (tSNR; larger values indicate better performance) and the temporal derivative of the time course of RMS variance over voxels (DVARS; smaller values indicate better performance)^3, both calculated using the Connectome Project's functional quality pipeline. Additionally, a 4-minute acquisition with right-hand finger tapping (20 s OFF, 20 s ON) was conducted with a TR of 0.8 s and SMS factor/FOV shift of 8/0.25 to evaluate the effect of coil compression on the task's t-score maps on the brain and on activation curves within a 0.042 cm³ ROI.

Figure 1 (a) and (b) show the 32-channel Nova head coil receive element coil sensitivities in axial, coronal, and sagittal views with a human participant, without and with coil compression. This demonstrates that the virtual coil sensitivities differ from the physical coil elements.
The total signal with respect to the coil/virtual coil elements is also shown; it is an indicator of an element's sensitivity to the imaging volume. Figure 1 (c) shows that 22 virtual coils contain over 90% of the image signal. Figure 2 shows the results of the phantom experiment, with tSNR increasing and DVARS decreasing as the number of virtual coils grows. Of interest, the final point in each plot shows the measurements with no coil compression, yielding compromised metrics compared to reduced numbers of virtual coils. Of further interest, in all cases the optimal CAIPI shift used for uncompressed coils remained preferred for coil compression factors up to 50%. Figure 3 shows the results for the average of the healthy volunteers' experiments, which are consistent with the results of the phantom experiment. As shown in Figure 3(c), overall processing times are linearly related to the number of coil elements. Processing times for different SMS factors are similar because the FOV coverage, number of slices, and resolution were matched. Figure 4 shows activation maps (left), maps of activation t-scores (top right), time series plots in a region of activation (middle right), and a table of correlation coefficients between uncompressed and compressed time series, as well as SSIMs between t-statistic maps without and with compression (bottom right). Activation curves with and without coil compression are extremely similar. All the activation curves are correlated within the ROI (Figure 4, blue line), with a Pearson correlation coefficient larger than 0.93 (p<0.001), and higher than 0.98 (p<0.001) when coil compression retains at least 50% of the input channel count. Similarly, the task's t-score maps were not affected by coil compression: if coil compression retains 50% or more of the channels, the structural similarity index (SSIM) between the compressed and uncompressed cases is larger than 0.89.
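The coil compression step itself (refs. 2 and 3) amounts to an SVD/PCA across channels: virtual coils are the dominant singular vectors of the stacked multi-channel data. The following is a minimal, hypothetical sketch with simulated data (the sizes, noise level, and variable names are made up for illustration; this is not the modified Orchestra implementation described above):

```python
import numpy as np

rng = np.random.default_rng(0)
n_coils, n_samples = 32, 5000

# Simulate correlated multi-coil data: a few latent sources mixed
# into 32 channels plus additive noise (illustrative, not MRI data).
sources = rng.standard_normal((6, n_samples))
mixing = rng.standard_normal((n_coils, 6))
data = mixing @ sources + 0.1 * rng.standard_normal((n_coils, n_samples))

# SVD across channels; virtual coils are the left singular vectors.
U, s, _ = np.linalg.svd(data, full_matrices=False)

n_virtual = 8                                   # compress 32 -> 8 channels
compressed = U[:, :n_virtual].conj().T @ data   # project onto virtual coils

# Fraction of signal energy retained by the virtual coils.
retained = (s[:n_virtual] ** 2).sum() / (s ** 2).sum()
print(f"retained energy with {n_virtual} virtual coils: {retained:.3f}")
```

The retained-energy curve computed this way is the analogue of a plot like Figure 1(c): when channels are strongly correlated, a small number of virtual coils captures most of the signal.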
Discussion / Conclusions
Coil compression is beneficial in SMS image reconstruction: as would be expected, the reduction in coils reduces computational time, and the included image processing yields improved unaliasing performance and image quality metrics. Similarly, it does not have a substantial impact on fMRI activation curves and task t-score maps. This work was supported by a GE Healthcare technological development grant and the Daniel M. Soref Charitable Foundation. I would like to thank Dr. Alexander Cohen for his help on fMRI processing and AFNI.
1. Setsompop K, Gagoski BA, Polimeni JR, et al. Blipped-controlled aliasing in parallel imaging for simultaneous multislice echo planar imaging with reduced g-factor penalty. Magn Reson Med. 2012;
2. Zhang T, Pauly JM, Vasanawala S, Lustig M. Coil compression for accelerated imaging with Cartesian sampling. Magn Reson Med. 2013; 69:571-582.
3. Buehrer M, Pruessmann KP, Boesiger P, Kozerke S. Array compression for MRI with large coil arrays. Magn Reson Med. 2007; 57:1131-1139.
4. Marcus DS, Harms MP, Snyder AZ, Jenkinson M, et al. Human Connectome Project informatics: quality control, database services, and data visualization. Neuroimage. 2013; 80:202-219.
Making Angles
To explore different types of angles including acute, right and obtuse
Pencils or Cuisenaire© rods
To help students practice the terms acute, right, and obtuse angles, ask them to make various types of angles using two pencils or Cuisenaire© rods and describe them in words. When there is a disagreement about whether a particular angle fits the classification you gave, allow a student to compare the angle to the corner of a piece of paper (a right angle). See how many 90 degree right angles they can make with pencils or rods facing different directions. Then have children make acute and obtuse angles. To extend the activity, call out an angle measurement and have students show the kind of angle (not exact measurement) needed to equal 180 degrees. For example, if I say 45 degrees (acute angle), students create an obtuse angle because this is the type of angle needed to bring the total to 180 degrees.
About the sequence
Part 1 asks students to use two pencils or Cuisenaire© rods to make right angles. Part 2 asks students to use the pencils or rods to make acute and obtuse angles. The extension is based on students' understanding that a straight line = 180 degrees.
Part 1
Let's use our pencils/rods to create right angles. Once you have 1 right angle, can you use your pencils to create another right angle facing a different direction? Remember you can use the corner of your paper to visualize the exact measurement of a right angle. While children are enjoying their building of mastery, feel free to repeat. When children are eager for more, try Part 2.
Part 2
Now, let's use our pencils or rods to create some acute angles (less than 90 degrees) as well as some obtuse angles (more than 90 degrees). Once you've created one of each type of angle, see if you can make other examples of each angle with your pencils laid out differently. As always, when children seem excited for a new challenge, move on.
I’ll name a measurement, then you use your pencils to model whether a right, acute or obtuse angle is needed for the sum of the two angles to equal a total of 180 degrees. • 50 degrees (needs an obtuse angle) • 120 degrees (needs an acute angle) • 93 degrees (needs an acute angle) • 90 degrees (needs a right angle)
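The logic of the extension game can be double-checked with a few lines of Python. This is purely an illustration for the teacher preparing call-outs; it is not part of the classroom activity itself.

```python
def classify(angle):
    """Classify an angle measure (in degrees) as acute, right, or obtuse."""
    if angle < 90:
        return "acute"
    if angle == 90:
        return "right"
    return "obtuse"

def partner_type(angle):
    """Type of the angle needed so the pair sums to 180 degrees."""
    return classify(180 - angle)

# The four call-outs from the list above
for called in (50, 120, 93, 90):
    print(called, "degrees ->", partner_type(called))
```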
Breakthrough Curve
Kinetics in Breakthrough Curve Experiments
General information
As already explained in the section “Breakthrough Curves“, the kinetics play a major role for the spreading of the mass transfer zone. Vice versa, results from dynamic sorption experiments can be used for the quantification of the sorption rate. If conditions similar to those in industrial adsorbers are selected, the values can serve as input parameters for process simulations and support the dimensioning of technical plants. A direct comparison of different adsorbent materials is also possible. Such conditions are:
• Ratio of adsorber diameter to particle diameter equal to or greater than 10:1
• Gas velocity in the range of 0.2-0.4 m/s
• Sufficient sample volume (> 100 ccm)
• Similar adsorptive concentration in experiment and industrial process
• Well-known impurities present in the industrial process should be included in the experiment
• Same pressure range
• Same temperature range
Mass Transfer and Flow Rate Dependence in Breakthrough Curve Experiments
The shape of the breakthrough curves is mainly influenced by the sorption rate, i.e. the mass transfer from the gas phase to the adsorption sites inside the adsorbent particles. Therefore, it is possible to quantify this parameter from the experimental breakthrough curves by applying different models. The relationship between the mass transfer and breakthrough curves is shown in the next figure. In particular the black breakthrough curve shows an interesting behaviour. In this case and for a certain gas flow, the kinetics are too slow and adsorptive molecules travel to the adsorber outlet before they can enter the pores of the particle. This effect is widely used for separation applications, e.g. CO2 removal from CH4-rich gas mixtures on carbon molecular sieves. Here, the methane will have a very slow sorption kinetic, which leads to a spontaneous breakthrough, whereas carbon dioxide will be held back.
Based on these considerations, very effective separation processes with reduced loss of methane can be derived.
Dependence of breakthrough curves on mass transfer (schematic). In this figure the mass transfer coefficient of the black breakthrough curve is 10 times lower than the coefficient of the green curve, 100 times lower than for the blue curve and 1000 times lower compared to the red curve, respectively. Due to lower sorption rates the breakthrough curves become flatter and flatter, and below a certain value a spontaneous breakthrough can be observed (black curve).
Axial Dispersion and Flow Rate Dependence
If a fixed bed is charged with a gas mixture as a sharp step function, the response signal after passing the fixed bed will be broadened. This axial dispersion effect is not coupled to adsorption and can also be observed using inert material. The axial dispersion increases the asymmetry of the breakthrough curve, which is shown in the next figure.
Influence of different axial dispersion coefficients on the shape of breakthrough curves. Here the black curve is based on an axial dispersion coefficient that is 2 times higher than the value for the green curve, 5 times higher than for the blue curve, and 100 times higher than for the red curve, respectively. You can see the asymmetric effect on the point of intersection above a relative concentration of 0.5.
The shape of breakthrough curves is mainly influenced by the mass transfer of adsorptive molecules into the adsorbent particles. Based on well-established models this relationship can be used for the determination of transport properties for the system under investigation.
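One common analytic description of such curves is the Thomas model, which makes the rate-constant dependence easy to see: a lower rate constant flattens the curve and shifts breakthrough earlier, while the stoichiometric midpoint (C/C0 = 0.5) stays fixed. The model choice and all parameter values below are illustrative assumptions, not the model behind the schematic figures.

```python
import math

def thomas_breakthrough(t, k, q0=2.0, m=10.0, c0=0.05, flow=1.0):
    """Relative outlet concentration C/C0 of the Thomas breakthrough model.
    k       : rate constant governing curve steepness (slower kinetics -> flatter)
    q0, m   : equilibrium loading and adsorbent mass
    c0, flow: feed concentration and volumetric flow rate
    All parameter values are arbitrary illustration numbers."""
    return 1.0 / (1.0 + math.exp(k * q0 * m / flow - k * c0 * t))

# Midpoint (C/C0 = 0.5) lies at the stoichiometric time q0*m/(c0*flow) = 400
fast, slow = 1.0, 0.1   # two mass-transfer rate constants
early = 100.0           # a time well before the midpoint

# Slow kinetics already show significant breakthrough at t=100,
# while fast kinetics still hold the adsorptive back almost completely.
print(thomas_breakthrough(early, slow), thomas_breakthrough(early, fast))
```

The early-time gap between the two curves is exactly the effect exploited in kinetic separations such as the CO2/CH4 example above: the slowly adsorbing component breaks through almost immediately.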
SGEGV (3) - Linux Manuals
sgegv.f - subroutine sgegv (JOBVL, JOBVR, N, A, LDA, B, LDB, ALPHAR, ALPHAI, BETA, VL, LDVL, VR, LDVR, WORK, LWORK, INFO)
SGEGV computes the eigenvalues and, optionally, the left and/or right eigenvectors of a real matrix pair (A,B)
Function/Subroutine Documentation
subroutine sgegv (character JOBVL, character JOBVR, integer N, real, dimension( lda, * ) A, integer LDA, real, dimension( ldb, * ) B, integer LDB, real, dimension( * ) ALPHAR, real, dimension( * ) ALPHAI, real, dimension( * ) BETA, real, dimension( ldvl, * ) VL, integer LDVL, real, dimension( ldvr, * ) VR, integer LDVR, real, dimension( * ) WORK, integer LWORK, integer INFO)
This routine is deprecated and has been replaced by routine SGGEV. SGEGV computes the eigenvalues and, optionally, the left and/or right eigenvectors of a real matrix pair (A,B). Given two square matrices A and B, the generalized nonsymmetric eigenvalue problem (GNEP) is to find the eigenvalues lambda and corresponding (non-zero) eigenvectors x such that A*x = lambda*B*x. An alternate form is to find the eigenvalues mu and corresponding eigenvectors y such that mu*A*y = B*y. These two forms are equivalent with mu = 1/lambda and x = y if neither lambda nor mu is zero. In order to deal with the case that lambda or mu is zero or small, two values alpha and beta are returned for each eigenvalue, such that lambda = alpha/beta and mu = beta/alpha. The vectors x and y in the above equations are right eigenvectors of the matrix pair (A,B). Vectors u and v satisfying u**H*A = lambda*u**H*B or mu*v**H*A = v**H*B are left eigenvectors of (A,B). Note: this routine performs "full balancing" on A and B. JOBVL is CHARACTER*1 = 'N': do not compute the left generalized eigenvectors; = 'V': compute the left generalized eigenvectors (returned in VL).
JOBVR is CHARACTER*1 = 'N': do not compute the right generalized eigenvectors; = 'V': compute the right generalized eigenvectors (returned in VR). N is INTEGER The order of the matrices A, B, VL, and VR. N >= 0. A is REAL array, dimension (LDA, N) On entry, the matrix A. If JOBVL = 'V' or JOBVR = 'V', then on exit A contains the real Schur form of A from the generalized Schur factorization of the pair (A,B) after balancing. If no eigenvectors were computed, then only the diagonal blocks from the Schur form will be correct. See SGGHRD and SHGEQZ for details. LDA is INTEGER The leading dimension of A. LDA >= max(1,N). B is REAL array, dimension (LDB, N) On entry, the matrix B. If JOBVL = 'V' or JOBVR = 'V', then on exit B contains the upper triangular matrix obtained from B in the generalized Schur factorization of the pair (A,B) after balancing. If no eigenvectors were computed, then only those elements of B corresponding to the diagonal blocks from the Schur form of A will be correct. See SGGHRD and SHGEQZ for details. LDB is INTEGER The leading dimension of B. LDB >= max(1,N). ALPHAR is REAL array, dimension (N) The real parts of each scalar alpha defining an eigenvalue of GNEP. ALPHAI is REAL array, dimension (N) The imaginary parts of each scalar alpha defining an eigenvalue of GNEP. If ALPHAI(j) is zero, then the j-th eigenvalue is real; if positive, then the j-th and (j+1)-st eigenvalues are a complex conjugate pair, with ALPHAI(j+1) = -ALPHAI(j). BETA is REAL array, dimension (N) The scalars beta that define the eigenvalues of GNEP. Together, the quantities alpha = (ALPHAR(j),ALPHAI(j)) and beta = BETA(j) represent the j-th eigenvalue of the matrix pair (A,B), in one of the forms lambda = alpha/beta or mu = beta/alpha. Since either lambda or mu may overflow, they should not, in general, be computed. VL is REAL array, dimension (LDVL,N) If JOBVL = 'V', the left eigenvectors u(j) are stored in the columns of VL, in the same order as their eigenvalues.
If the j-th eigenvalue is real, then u(j) = VL(:,j). If the j-th and (j+1)-st eigenvalues form a complex conjugate pair, then u(j) = VL(:,j) + i*VL(:,j+1) u(j+1) = VL(:,j) - i*VL(:,j+1). Each eigenvector is scaled so that its largest component has abs(real part) + abs(imag. part) = 1, except for eigenvectors corresponding to an eigenvalue with alpha = beta = 0, which are set to zero. Not referenced if JOBVL = 'N'. LDVL is INTEGER The leading dimension of the matrix VL. LDVL >= 1, and if JOBVL = 'V', LDVL >= N. VR is REAL array, dimension (LDVR,N) If JOBVR = 'V', the right eigenvectors x(j) are stored in the columns of VR, in the same order as their eigenvalues. If the j-th eigenvalue is real, then x(j) = VR(:,j). If the j-th and (j+1)-st eigenvalues form a complex conjugate pair, then x(j) = VR(:,j) + i*VR(:,j+1) x(j+1) = VR(:,j) - i*VR(:,j+1). Each eigenvector is scaled so that its largest component has abs(real part) + abs(imag. part) = 1, except for eigenvectors corresponding to an eigenvalue with alpha = beta = 0, which are set to zero. Not referenced if JOBVR = 'N'. LDVR is INTEGER The leading dimension of the matrix VR. LDVR >= 1, and if JOBVR = 'V', LDVR >= N. WORK is REAL array, dimension (MAX(1,LWORK)) On exit, if INFO = 0, WORK(1) returns the optimal LWORK. LWORK is INTEGER The dimension of the array WORK. LWORK >= max(1,8*N). For good performance, LWORK must generally be larger. To compute the optimal value of LWORK, call ILAENV to get blocksizes (for SGEQRF, SORMQR, and SORGQR.) Then compute: NB -- MAX of the blocksizes for SGEQRF, SORMQR, and SORGQR; The optimal LWORK is: 2*N + MAX( 6*N, N*(NB+1) ). If LWORK = -1, then a workspace query is assumed; the routine only calculates the optimal size of the WORK array, returns this value as the first entry of the WORK array, and no error message related to LWORK is issued by XERBLA. INFO is INTEGER = 0: successful exit < 0: if INFO = -i, the i-th argument had an illegal value.
= 1,...,N: The QZ iteration failed. No eigenvectors have been calculated, but ALPHAR(j), ALPHAI(j), and BETA(j) should be correct for j=INFO+1,...,N. > N: errors that usually indicate LAPACK problems: =N+1: error return from SGGBAL =N+2: error return from SGEQRF =N+3: error return from SORMQR =N+4: error return from SORGQR =N+5: error return from SGGHRD =N+6: error return from SHGEQZ (other than failed iteration) =N+7: error return from STGEVC =N+8: error return from SGGBAK (computing VL) =N+9: error return from SGGBAK (computing VR) =N+10: error return from SLASCL (various calls)
Univ. of Tennessee Univ. of California Berkeley Univ. of Colorado Denver NAG Ltd. November 2011
Further Details: This driver calls SGGBAL to both permute and scale rows and columns of A and B. The permutations PL and PR are chosen so that PL*A*PR and PL*B*PR will be upper triangular except for the diagonal blocks A(i:j,i:j) and B(i:j,i:j), with i and j as close together as possible. The diagonal scaling matrices DL and DR are chosen so that the pair DL*PL*A*PR*DR, DL*PL*B*PR*DR have elements close to one (except for the elements that start out zero.) After the eigenvalues and eigenvectors of the balanced matrices have been computed, SGGBAK transforms the eigenvectors back to what they would have been (in perfect arithmetic) if they had not been balanced.
Contents of A and B on Exit
If any eigenvectors are computed (either JOBVL='V' or JOBVR='V' or both), then on exit the arrays A and B will contain the real Schur form[*] of the "balanced" versions of A and B. If no eigenvectors are computed, then only the diagonal blocks will be correct. [*] See SHGEQZ, SGEGS, or read the book "Matrix Computations", by Golub & van Loan, pub. by Johns Hopkins U. Press.
Definition at line 306 of file sgegv.f. Generated automatically by Doxygen for LAPACK from the source code.
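For a concrete sense of the problem SGEGV solves, the following NumPy sketch solves a small GNEP A*x = lambda*B*x by reduction to a standard eigenproblem. This only works for nonsingular B; the QZ iteration used by SGEGV/SGGEV never forms inv(B) and also handles singular or ill-conditioned B, so this is a didactic illustration rather than a substitute for the routine.

```python
import numpy as np

def generalized_eig(A, B):
    """Solve A x = lam B x for nonsingular B by reducing to the standard
    eigenproblem inv(B) A x = lam x.  Didactic only: the QZ iteration in
    SGEGV/SGGEV avoids inverting B and copes with singular pairs, which
    this reduction cannot."""
    lam, X = np.linalg.eig(np.linalg.solve(B, A))
    return lam, X

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[2.0, 0.0], [0.0, 1.0]])
lam, X = generalized_eig(A, B)

# Verify the defining relation A x = lam B x for each eigenpair
for j in range(2):
    x = X[:, j]
    assert np.allclose(A @ x, lam[j] * (B @ x))
print("all eigenpairs satisfy A x = lam B x")
```

The alpha/beta representation in the man page exists precisely because this naive lambda can overflow when B is nearly singular; SGEGV returns lambda = alpha/beta in separated form instead.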
YEAR 2 MASTERY MATHS COVERS EVERY OBJECTIVE
• count in steps of 2, 3, and 5 from 0, and in 10s from any number, forward and backward
• recognise the place value of each digit in a two-digit number (10s, 1s)
• compare and order numbers from 0 up to 100; use <, > and = signs
• read and write numbers to at least 100 in numerals and in words
• use place value and number facts to solve problems
• solve problems with addition and subtraction: using concrete objects and pictorial representations (including those involving numbers, quantities and measures) and applying their increasing knowledge of mental and written methods
• recall and use addition and subtraction facts to 20 fluently, and derive and use related facts to 100
• add and subtract numbers using concrete objects, pictorial representations, and mentally, including: a two-digit number and 1s; a two-digit number and 10s; 2 two-digit numbers; adding 3 one-digit numbers
• show that addition of two numbers can be done in any order (commutative) and subtraction cannot
• recognise and use the inverse relationship between addition and subtraction and use this to check calculations and solve missing number problems
• recall and use multiplication and division facts for the 2, 5 and 10 times tables
• recognise odd and even numbers
• calculate mathematical statements for multiplication and division within the times tables and write them using the multiplication (×), division (÷) and equals (=) signs
• show that multiplication of two numbers can be done in any order (commutative) and division cannot
• solve problems involving multiplication and division, using materials, arrays, repeated addition, mental methods, and multiplication and division facts, including problems in contexts
• recognise, find, name and write fractions (third/quarter/two quarters/three quarters) of a length, set of objects or quantity
• write simple fractions of quantities (e.g. half of 6 is 3)
• recognise the equivalence of half and two quarters
• choose and use appropriate standard units to estimate and measure length/height in any direction (m/cm); mass (kg/g); temperature (°C); capacity (litres/ml) to the nearest appropriate unit, using rulers, scales, thermometers and measuring vessels
• compare and order lengths/mass/volume/capacity and record the results using >, < and =
• recognise and use the symbols for pounds (£) and pence (p); combine amounts to make a particular value
• find different combinations of coins that equal the same amounts of money
other year groups available
Assignment Statements :: CIS 301 Textbook Assignment Statements Assignment statements in a program come in two forms – with and without mutations. Assignments without mutation are those that give a value to a variable without using the old value of that variable. Assignments with mutation are variable assignments that use the old value of a variable to calculate a value for the variable. For example, an increment statement like x = x + 1 MUTATES the value of x by updating its value to be one bigger than it was before. In order to make sense of such a statement, we need to know the previous value of x. In contrast, a statement like y = x + 1 assigns to y one more than the value in x. We do not need to know the previous value of y, as we are not using it in the assignment statement. (We do need to know the value of x). Assignments without mutation We have already seen the steps necessary to process assignment statements that do not involve variable mutation. Recall that we can declare as a Premise any assignment statement or claim from a previous proof block involving variables that have not since changed. 
For example, suppose we want to verify the following program so the assert statement at the end will hold (this example again eliminates the Logika mode notation and the necessary import statements, which we will continue to do in subsequent examples): val x: Z = 4 val y: Z = x + 2 val z: Z = 10 - x //the assert will not hold yet assert(y == z & y == 6) Since none of the statements involve variable mutation, we can do the verification in a single proof block: val x: Z = 4 val y: Z = x + 2 val z: Z = 10 - x 1 ( x == 4 ) by Premise, //assignment of unchanged variable 2 ( y == x + 2 ) by Premise, //assignment of unchanged variable 3 ( z == 10 - x ) by Premise, //assignment of unchanged variable 4 ( y == 4 + 2 ) by Subst_<(1, 2), 5 ( z == 10 - 4 ) by Subst_<(1, 3), 6 ( y == 6 ) by Algebra*(4), 7 ( z == 6 ) by Algebra*(5), 8 ( y == z ) by Subst_>(7, 6), 9 ( y == z ∧ y == 6 ) by AndI(8, 6) //now the assert will hold assert(y == z & y == 6) Note that we did need to do AndI so that the last claim was y == z ∧ y == 6 , even though we had previously established the claims y == z and y == 6. In order for an assert to hold (at least until we switch Logika modes in chapter 10), we need to have established EXACTLY the claim in the assert in a previous proof block. Assignments with mutation Assignments with mutation are trickier – we need to know the old value of a variable in order to reason about its new value. For example, if we have the following program: var x: Z = 4 x = x + 1 //this assert will not hold yet assert(x == 5) Then we might try to add the following proof blocks: var x: Z = 4 1 ( x == 4 ) by Premise //from previous variable assignment x = x + 1 1 ( x == x + 1 ) by Premise, //NO! Need to distinguish between old x (right side) and new x (left side) 2 ( x == 4 ) by Premise, //NO! x has changed since this claim //this assert will not hold yet assert(x == 5) …but then we get stuck in the second proof block.
There, x is supposed to refer to the CURRENT value of x (after being incremented), but both our attempted claims are untrue. The current value of x is not one more than itself (this makes no sense!), and we can tell from reading the code that x is now 5, not 4. To help reason about changing variables, Logika has a special Old(varName) function that refers to the OLD value of a variable called varName, just before the latest update. In the example above, we can use Old(x) in the second proof block to refer to x’s value just before it was incremented. We can now change our premises and finish the verification as follows: var x: Z = 4 1 ( x == 4 ) by Premise //from previous variable assignment x = x + 1 1 ( x == Old(x) + 1 ) by Premise, //Yes! x equals its old value plus 1 2 ( Old(x) == 4 ) by Premise, //Yes! The old value of x was 4 3 ( x == 4 + 1 ) by Subst_<(2, 1), 4 ( x == 5 ) by Algebra*(3) //Could have skipped line 3 and used "Algebra*(1, 2)" instead //now the assert will hold assert(x == 5) By the end of the proof block following a variable mutation, we need to express everything we know about the variable’s current value WITHOUT using the Old terminology, as its scope will end when the proof block ends. Moreover, we only ever have one Old value available in a proof block – the variable that was most recently changed. This means we will need proof blocks after each variable mutation to process the changes to any related facts. Variable swap example Suppose we have the following program: var x: Z = Z.read() var y: Z = Z.read() val temp: Z = x x = y y = temp //what do we want to assert we did? We can see that this program gets two user input values, x and y, and then swaps their values. So if x was originally 4 and y was originally 6, then at the end of the program x would be 6 and y would be 4. We would like to be able to assert what we did – that x now has the original value from y, and that y now has the original value from x. 
To do this, we might invent dummy constants called xOrig and yOrig that represent the original values of those variables. Then we can add our assert: var x: Z = Z.read() var y: Z = Z.read() //the original values of both inputs val xOrig: Z = x val yOrig: Z = y val temp: Z = x x = y y = temp //x and y have swapped //x has y's original value, and y has x's original value assert(x == yOrig ∧ y == xOrig) //this assert will not yet hold We can complete the verification by adding proof blocks after assignment statements, being careful to update all we know (without using the Old value) by the end of each block: var x: Z = Z.read() var y: Z = Z.read() //the original values of both inputs val xOrig: Z = x val yOrig: Z = y 1 ( xOrig == x ) by Premise, 2 ( yOrig == y ) by Premise //swap x and y val temp: Z = x x = y 1 ( x == y ) by Premise, //from the assignment statement 2 ( temp == Old(x) ) by Premise, //temp equaled the OLD value of x 3 ( xOrig == Old(x) ) by Premise, //xOrig equaled the OLD value of x 4 ( yOrig == y ) by Premise, //yOrig still equals y 5 ( temp == xOrig ) by Algebra*(2, 3), 6 ( x == yOrig ) by Algebra*(1, 4) y = temp 1 ( y == temp ) by Premise, //from the assignment statement 2 ( temp == xOrig ) by Premise, //from the previous proof block (temp and xOrig are unchanged since) 3 ( yOrig == Old(y) ) by Premise, //yOrig equaled the OLD value of y 4 ( x == xOrig ) by Algebra*(1, 2), 5 ( x == yOrig ) by Premise, //from the previous proof block (x and yOrig are unchanged since) 6 ( x == yOrig ∧ y == xOrig ) by AndI(5, 4) //x and y have swapped //x has y's original value, and y has x's original value assert(x == yOrig ∧ y == xOrig) //this assert will hold now Notice that in each proof block, we express as much as we can about all variables/values in the program. In the first proof block, even though xOrig and yOrig were not used in the previous assignment statement, we still expressed how the current values our other variables compared to xOrig and yOrig. 
It helps to think about what you are trying to claim in the final assert – since our assert involved xOrig and yOrig, we needed to relate the current values of our variables to those values as we progressed through the program.
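Outside Logika, the same before/after reasoning can be mimicked dynamically by snapshotting the "old" values before mutation and asserting the postcondition at the end. The Python sketch below is an informal analogue of the swap proof above, not Logika syntax, and dynamic assertions check only the inputs you actually run, unlike Logika's static verification.

```python
def swap(x, y):
    """Swap two values via a temporary, asserting the postcondition the
    Logika proof establishes: each output equals the other input."""
    x_orig, y_orig = x, y   # snapshots playing the role of xOrig/yOrig (Old values)
    temp = x
    x = y
    y = temp
    # Postcondition from the proof: x == yOrig and y == xOrig
    assert x == y_orig and y == x_orig
    return x, y

print(swap(4, 6))
```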
Yield Curve Empirics | Scalable Capital
Disclaimer – The views and opinions expressed in this blog are those of the author and do not necessarily reflect the views of Scalable Capital GmbH or its subsidiaries. Further information can be found at the end of this article.
In the first blog post of this series on fixed-income securities we have seen how yield curves reflect the level of compensation that the financial market requires for lending money. This level of compensation is derived from market prices of fixed-income securities and as such it significantly varies over time depending on the prevailing market environment. How this level of compensation varied in the past is the subject of this blog post, which explores historic yield curves for multiple markets. The whole analysis of this post can be done with publicly available data, as most central banks provide historic data on several interest rates. However, interest rates of different central banks generally do not come with a fully aligned setup. In particular, interest rates could differ with regards to the methodology used to compute them, their unit of measurement or the selection of eligible securities that are used for estimation. For example, interest rates could be given as zero-coupon yields with continuous compounding, as par yields with bi-annual compounding, or in one of several other units. Knowing this unit of measurement is particularly important if the data is used for pricing of fixed-income securities, because the unit will determine the exact formula that is required to translate yield curves back to security prices. Hence, we will also give some background on all data shown, such that it becomes clearer how to use it in potential applications.
Let's start with interest rates estimated from US Treasury securities. Probably the easiest way to get US interest rate data is by downloading constant maturity Treasury (CMT) rates from the US Treasury.
Constant maturity means that on each date the respective estimated yield curve is evaluated at certain constant maturities. This way, fixed-income security prices are used to determine the parameters of a yield curve, such that interest rates can also be estimated for maturities where no matching fixed-income security is available. This data can also be obtained from Quandl. Please note the following characteristics of these CMT rates which can also be found on either the Treasury Yield Curve Methodology page or the Frequently Asked Questions page:
„Negative Yields and Nominal Constant Maturity Treasury Series Rates (CMTs): At times, financial market conditions, in conjunction with extraordinary low levels of interest rates, may result in negative yields for some Treasury securities trading in the secondary market. Negative yields for Treasury securities most often reflect highly technical factors in Treasury markets related to the cash and repurchase agreement markets, and are at times unrelated to the time value of money. At such times, Treasury will restrict the use of negative input yields for securities used in deriving interest rates for the Treasury nominal Constant Maturity Treasury series (CMTs). Any CMT input points with negative yields will be reset to zero percent prior to use as inputs in the CMT derivation. This decision is consistent with Treasury not accepting negative yields in Treasury nominal security auctions. In addition, given that CMTs are used in many statutorily and regulatory determined loan and credit programs as well as for setting interest rates on non-marketable government securities, establishing a floor of zero more accurately reflects borrowing costs related to various programs.“
Now that we know the data that we are dealing with, let's take a look at it. Figure 1 shows CMT rates over time since January 1st, 1990 for several maturities: four different maturities below one year and eight maturities of at least one year.
As we can see, there was a rather persistent downward trend of interest rates, particularly for rates of longer durations. In line with what we have read regarding negative yields, there is not a single observation for any maturity that is below zero, even though interest rates were fairly close to zero at the short end for a period of multiple years between 2008 and 2016. Only afterwards have they increased again, such that rates of all maturities are between 1.5% and 2.5% at the time of writing. Although the CMT rates dataset can be used quite conveniently to get an overview of interest rates over time, my preferred dataset for US Treasury yields is the one published by the Federal Reserve, where yields follow the methodology of Gurkaynak et al. 2006. This dataset consists of daily parameter estimates of a Svensson yield curve model (Svensson 1994) with continuous compounding, applied to US Treasuries from 1961 to the present. It can also be obtained from Quandl. Given the parameters over time, one can derive a yield curve for each day given the functional form of the Svensson yield curve model. This functional form can be expressed in terms of either yields, forward rates or discount functions, but it is most concisely expressed for the instantaneous forward rates:
f(t) = β0 + β1·exp(−t/τ1) + β2·(t/τ1)·exp(−t/τ1) + β3·(t/τ2)·exp(−t/τ2)
In Figure 2 we can see yields over time that we get by converting forward curves into yield curves and evaluating these yield curves at constant maturities over time. From this chart we can see that the downward trend of interest rates has actually been in place since the 1980s, when rates of all maturities were above 12%. And before that time, the evolution of interest rates was characterised by an upward trend that pushed interest rates from a range of 2.5% to 5% for all maturities at the beginning of the dataset to above 12% within 15 years.
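Converting the published Svensson parameters into zero-coupon yields amounts to averaging the forward curve over [0, t], which has a closed form. The sketch below uses that standard closed form with made-up parameter values purely for illustration; the Fed dataset supplies the actual daily estimates of the four betas and two taus.

```python
import math

def svensson_yield(t, b0, b1, b2, b3, tau1, tau2):
    """Zero-coupon yield (continuous compounding) of the Svensson model,
    i.e. the average of the instantaneous forward curve over [0, t].
    Requires t > 0."""
    x1, x2 = t / tau1, t / tau2
    term1 = (1 - math.exp(-x1)) / x1
    term2 = term1 - math.exp(-x1)
    term3 = (1 - math.exp(-x2)) / x2 - math.exp(-x2)
    return b0 + b1 * term1 + b2 * term2 + b3 * term3

# Made-up parameters purely for illustration (yields in percent)
params = dict(b0=3.0, b1=-1.0, b2=1.5, b3=-0.5, tau1=1.5, tau2=10.0)
for t in (1, 5, 10, 30):
    print(t, "y:", round(svensson_yield(t, **params), 3))
```

Two sanity checks follow directly from the functional form: the yield approaches b0 + b1 as t goes to zero (the short rate) and b0 as t grows large (the long-run level), which is why b0 is often read as the long-term rate.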
From the chart we can also see that the level of interest rates that we face today is pretty similar to the level of rates that prevailed at the beginning of the dataset. Probably unexpectedly at that time, the period of low interest rates in the 1960s was followed by a period of astonishingly high interest rates in the 1980s. This raises the question: should we expect interest rates to rise again to such heights at some point in the future, if such an increase already happened in the past? Or have the driving forces behind interest rates changed in a way that will lead to a different behaviour of interest rates going forward? Figure 2: Historic Constant Maturity US Treasury Rates from Svensson Yield Curves But before we try to find an answer to these questions, there is one misleading characteristic of the chart that deserves mentioning. From the chart it seems like long-term interest rates would experience sudden temporary shifts from time to time. For example, long-term rates jumped from roughly 5% to 10% and back again in a very short time in 1970. However, this is just an artefact of the data which is caused by the selection of fixed-income securities that are used to estimate the yield curve parameters. In this particular case, the problem is that 30 year Treasury securities did not exist at that time. Hence, interest rates for such long maturities have to be estimated with market prices of fixed-income securities of shorter duration only. In other words: the estimated yield curve has to extrapolate, going beyond the range of maturities where market prices actually exist. This might not work too well some of the time. A similar distortion also exists at the short end of interest rates, as this part of the yield curve is estimated from bond prices with maturity longer than 1 year, such that any yield below 1 year has to be obtained by extrapolation.
So one important takeaway is that one should always be aware of the set of underlying securities that were used for estimation, because for maturities outside of this set the estimated yield curve might not be particularly representative. As we have seen so far, US interest rates have varied wildly over the last 50 years. Hence, the question arises: what exactly were the driving forces behind these large fluctuations of interest rates? Probably the most important force behind these changes is the Effective Federal Funds Rate, which represents the weighted average of rates at which financial institutions trade federal funds. Federal funds are reserves held at the Federal Reserve Banks. Institutions with excess liquidity lend money overnight to other banks that are in need of further liquidity. Although the rate that needs to be paid by the borrowing institution is freely negotiated between both counterparties, it is strongly influenced by the Federal Reserve through open market operations, i.e. by buying and selling government bonds. This way, the Federal Reserve tries to steer the Effective Federal Funds Rate such that it is in line with the Federal Funds target rate that was determined in the previous Federal Open Market Committee (FOMC) meeting. The Effective Federal Funds Rate is the most important interest rate in the US financial markets and it influences all other interest rates. Figure 3 shows the Effective Federal Funds Rate from the FRED Economic Data section2 of the Federal Reserve Bank of St. Louis together with two different constant maturity rates. As can be seen, constant maturity rates of 1 year maturity move almost perfectly in line with the Effective Federal Funds Rate. But even 10 year CMT rates follow the Effective Federal Funds Rate to a significant degree.
Figure 3: Historic Constant Maturity US Treasury Rates Compared to Federal Funds Rate So if the Effective Federal Funds Rate has such a huge influence on short-term yields, the question arises: how does the FOMC actually determine which level of rates to target? Since the late 1970s, the Federal Reserve System and the FOMC have been entrusted with the so-called "dual mandate": interest rates should be set in such a way that they effectively promote the goals of both maximum employment and price stability3. So in order to make its monetary policy decisions, the FOMC needs to have a good understanding of how interest rates interact with inflation and the economy. On a higher level, these quite complicated relationships are described by FRED as follows2: "If the FOMC believes the economy is growing too fast and inflation pressures are inconsistent with the dual mandate of the Federal Reserve, the Committee may set a higher federal funds rate target to temper economic activity. In the opposing scenario, the FOMC may set a lower federal funds rate target to spur greater economic activity." Given the dual mandate of the FED, inflation has an impact on interest rate decisions by design. We now want to see this relationship in the data as well. As a measure of inflation we will use Consumer Price Index (CPI) data measured by the US Bureau of Labor Statistics. CPI data measures the average prices paid by urban consumers for a basket of goods and services. In particular, we will use core CPI data4 to derive inflation rates, such that food and energy prices are excluded from the basket, because both tend to have comparatively volatile prices. For more details on CPI data you can also have a look at the CPI FAQ page. Changes of core CPI over time can be seen in Figure 4, in comparison to the 1 year and 10 year CMT rates. As we can see, inflation rates show a downward trend similar to the one we have seen for interest rates.
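Deriving an inflation rate from a price index like core CPI is a plain year-over-year percentage change; a minimal sketch with made-up index levels (not actual BLS data):

```python
# Hypothetical core CPI index levels, twelve months apart
cpi_prev, cpi_now = 255.0, 260.1

# Year-over-year inflation rate as a relative change of the index
inflation_yoy = cpi_now / cpi_prev - 1
print(f"{inflation_yoy:.2%}")  # 2.00%
```

Applying this to every month of the index produces the inflation series compared to CMT rates in Figure 4.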
During the 1980s, inflation rates reached levels above 10% in some years. After that, inflation rates gradually decreased to a level of roughly 2% in recent years. Figure 4: Historic Constant Maturity US Treasury Rates Compared to Inflation Rates Note: General wisdom seems to be that inflation rates and interest rates are negatively correlated. An increase in interest rates will decrease the supply of money, and hence the value of money will increase (so inflation decreases). But from Figure 4 we can see that inflation rates and yields actually have a rather positive relationship. A different way to see this is by looking at inflation rates and interest rates in the cross-section of individual countries, as shown in Figure 5 for two exemplary years: 2005 and 2018. The data used for the charts is similar to Uribe 2018: CPI inflation rates5 and interest rates derived from lending rates6 and risk premia on lending7 from the World Development Indicators database provided by The World Bank. Figure 5: Inflation Rates and Interest Rates for Selected Countries It actually makes sense that interest rates and inflation rates are rather positively related in the long run, because the difference between both is another meaningful financial indicator: the real rate. It measures the compensation for lending money at financial markets after subtraction of inflation rates. Only when interest rates and inflation rates move approximately in line (hence having a positive relationship) is it ensured that real rates also stay within a reasonable range. One potential way to compute real rates is by subtracting the inflation rates that we get from core CPI from CMT rates, as shown in Figure 6. But keep in mind that this is just a rough proxy for real rates, mainly due to two reasons. First, prevailing interest rates are actually forward looking, while inflation rates are backward looking: CPI measures the evolution of prices in the past.
And second, we did not correct for differences in maturities. The real rates shown are just the difference between interest rates of different maturities and inflation rates that were computed for individual years. Nevertheless, the real rates computed that way should still capture the bigger patterns of real rates over time. To verify that these real rates are meaningful, they are also compared to a different way of estimating real rates. Besides regular Treasury securities that are quoted in nominal terms and are subject to inflation risk, the US Treasury also issues so-called US Treasury Inflation-Protected Securities (TIPS). TIPS do not have a fixed payoff at maturity; instead, the bond's par value is coupled to the inflation rate, which is usually derived from the CPI. This way, the payoff generated from TIPS reflects both the changes in CPI and the prevailing real yield, and we get another proxy for real rates simply by taking the quoted yield from TIPS. As a comparison to the other derived real rates, Figure 6 also shows the trajectory of the 10 year real constant maturity Treasury rate (R-CMT), which is derived from estimated real yield curves and can be obtained with ticker DFII10 from FRED8. Note that another nice use case of R-CMT rates is to derive expectations about long-term inflation rates by computing the difference between CMT and R-CMT rates9. In Figure 3 we have already seen how the short end of yield curves is largely driven by the Effective Federal Funds Rate. However, we can also see that this does not equally hold for long-term interest rates, which follow the FED target rate less closely. In general, long-term interest rates tend to be higher than short-term rates, because when you commit to lend money for a longer period of time, you usually also have to face higher risks than for shorter periods.
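Both real-rate proxies discussed above, and the breakeven inflation measure, are simple differences of published series; a sketch with hypothetical numbers (none of these values are actual observations):

```python
# All rates in percent; the numbers are made up for illustration
cmt_10y  = 1.8   # nominal 10 year constant maturity Treasury rate
infl_yoy = 2.1   # trailing year-over-year core CPI inflation
rcmt_10y = 0.2   # 10 year real CMT rate from the TIPS curve (FRED: DFII10)

real_rate_proxy = cmt_10y - infl_yoy   # nominal rate minus realised inflation
breakeven_infl  = cmt_10y - rcmt_10y   # market-implied long-term inflation expectation
print(round(real_rate_proxy, 2), round(breakeven_infl, 2))  # -0.3 1.6
```

Note the mismatch the text warns about: the first proxy subtracts a backward-looking inflation number from a forward-looking rate, while the breakeven measure compares two forward-looking market rates.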
As we will see in more detail in subsequent blog posts in this series on fixed-income securities, bonds with long durations are more sensitive to interest rate changes, because a change in interest rates alters the discount rates on more cash-flows that lie further in the future. The higher compensation required for long-term commitments is also called the liquidity premium. However, even though financial market participants usually require more compensation for longer durations, the actual level of compensation nevertheless fluctuates quite significantly over time. This can be seen in Figure 7, which shows the difference between 10 year and 1 year CMT rates over time (also called the "slope" of the yield curve). As we can see, most of the time the line is above zero, because long-term rates are usually higher than short-term rates. However, at some specific points in time this relationship does invert and short-term rates have actually been higher than long-term rates, which is shown by the line being below zero. In these situations one speaks of an "inverted" yield curve. The presence of an inverted yield curve with negative liquidity premium on long-term rates is actually such an abnormal situation that it is usually an indicator of more fundamental distortions in financial markets, and as such it has been a good predictor of US recessions, which are indicated by the grey colored bars in the background of the chart. As we can see, every US recession in the last 60 years has been preceded by a period of inverted yield curves10. Figure 7: Yield Curve Slopes (10 Year minus 1 Year CMT Rates) and US Recessions Until now we have only looked at US Treasury rates, because publicly available data goes back much further than for the EU area. From the ECB we can obtain a dataset for EU government interest rates similar to what we have for US Treasury rates: daily parameter estimates of a Svensson yield curve model, estimated for EU Government bonds with AAA rating11.
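The slope series shown in Figure 7 is just a difference of two CMT rates, and an inversion is simply a negative slope; a minimal sketch with made-up dates and values (illustrative, not actual data):

```python
# Hypothetical (1 year CMT, 10 year CMT) rate pairs in percent, by date
rates = {
    "2000-06": (6.2, 6.0),   # short end above long end
    "2010-06": (0.3, 3.2),   # steep, ordinary curve
    "2019-08": (1.8, 1.6),   # short end above long end again
}

for date, (cmt_1y, cmt_10y) in rates.items():
    slope = cmt_10y - cmt_1y              # "slope" of the yield curve
    label = "inverted" if slope < 0 else "normal"
    print(date, round(slope, 2), label)
```

Running this over a daily series and shading recession periods reproduces the structure of the chart.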
The data goes back to the end of 2004, and the yield curve parameters can again be converted into constant maturity rates. Figure 8 shows constant maturity rates for some selected maturities over time. In contrast to US rates, EU rates actually did cross the line into negative territory in 2015, remaining below zero until the time of writing. At the end of the sample, rates are negative for almost all shown maturities. Figure 8: Constant Maturity Rates for EU Government Bonds with AAA Rating Similar to the US, the short end of the yield curve is driven by central bank decisions in the EU as well. Figure 9 shows constant maturity rates of one year maturity, together with the so-called ECB deposit facility rate (DFR) and marginal lending facility rate (MLFR). Deposit facility rates define the interest that banks receive when they deposit money with the ECB overnight; marginal lending facility rates are the rates at which banks can borrow overnight from the ECB. Both these rates are set every six weeks by the ECB and hence are piecewise constant. As we can see from the chart, short-term interest rates are essentially driven by these two rates. So far we have seen that interest rates in both the US and the EU are significantly driven by central banks at the short end, and how both have evolved over time in the past. Last but not least, Figure 10 shows how official central bank rates of both regions tend to move together over time. Both US Government rates as well as EU Government rates for countries with AAA rating are usually perceived to be almost free of credit risk. Without credit risk, you will always get back your principal as well as all coupon payments that have been promised to you by your counterpart. However, many bonds do have a non-negligible probability of default, such that financial market participants require a compensation for the credit risk involved.
This means that they either require higher coupon payments, or they are willing to pay less for any given promised series of future cash-flows, because some of these future cash-flows might actually never get paid. This characteristic of the asset pricing process for bonds with credit risks is reflected in higher interest rates: market participants require an additional spread on top of risk-free rates. One way to see this is by looking at yields of different corporate bonds in comparison to risk-free government bond yields. Figure 11 shows constant maturity rates of US government bonds with 20 year maturities together with yields for corporate bonds of two different credit qualities: AAA and BAA rated bonds. Data for both corporate bond yields is from Moody's and provided by FRED12. These yields are based on bonds with maturities of 20 years and above, hence the comparison to CMT rates with 20 years maturity. As economic theory suggests, corporate bond yields are higher than US government yields at basically all points in time. Even AAA rated bonds traded at a small spread compared to government yields, and BAA rated bonds exhibit even higher rates. Figure 11: Historic US Corporate and US Government Yields for Long Maturities A similar conclusion can be drawn for bonds with shorter maturities. Figure 12 shows a comparison of the ICE BofAML US High Yield Master II Effective Yield (BAMLH0A0HYM2EY)13 and government bond yields. The ICE US High Yield relates to USD denominated corporate debt with a rating below investment grade that is issued in the US domestic market. It is a capitalisation-weighted basket of fixed-income securities with remaining maturity greater than 1 year. From the chart we can see that the spread between US government and corporate bond yields varies significantly over time. While the spread was roughly 2.5% at the end of 2007, it increased to more than 15% at the peak of the Global Financial Crisis of 2008 / 2009.
This, however, does not mean that expected returns of US corporate bonds were more than 15% larger than for US government bonds. The additional yield of bonds with credit risk could only be fully harvested if one actually received all promised future cash-flows. This would be tantamount to a case of zero defaults, which is highly unlikely in the case of the ICE US High Yield, given that it consists of securities with ratings below investment grade. In other words: the ex-ante corporate bond yields are larger than the actually expected returns, and the difference between yields and returns is caused by losses due to defaults. Figure 12: Historic US Corporate and US Government Yields for Medium Maturities Credit spreads are also a popular indicator for the amount of credit risk in an economy. For example, the so-called TED spread (or TED rate) measures the spread between 3-month LIBOR based on US dollars and 3-month Treasury Bill rates. Historic values of the TED spread (TEDRATE)14 are shown in Figure 13. The maximum value of the time series occurred during the Global Financial Crisis of 2008 / 2009. Credit risk is not only a characteristic of corporate bonds: government bonds of countries with high levels of debt or other economic vulnerabilities can also be subject to defaults and hence generally carry higher interest rates than those of governments with the highest creditworthiness. One way to see this is by looking at long-term yields (maturities of close to ten years15) of several European government bonds, as shown in Figure 14. Although interest rates of most countries follow a similar trend of decreasing interest rates, some individual countries traded at significantly higher yields in the past. Looking at yields of different countries today (Figure 15), we can also see the wide variations in yields.
For example, Greek government bonds of ten year maturity currently trade at a yield of slightly above 1.25%, while German government bonds trade at a negative yield of almost -0.5%. This is the lowest value of all the selected countries, but it is not the only one with interest rates in negative territory: roughly half of the countries currently have interest rates below zero. Gurkaynak, Refet S., Sack, Brian, and Wright, Jonathan H. (2006), The U.S. Treasury Yield Curve: 1961 to the Present, Federal Reserve Board Finance and Economics Discussion Series. Svensson, L. E. O. (1994), Estimating and Interpreting Forward Rates: Sweden 1992-4, National Bureau of Economic Research Working Paper #4871. Uribe, M. (2018), The Neo-Fisher Effect: Econometric Evidence from Empirical and Optimizing Models, National Bureau of Economic Research Working Paper #25089. 10 For further details on this topic you can listen to the Meb Faber Show podcast episode with Cam Harvey: https://mebfaber.com/2019/08/28/ 11 Data is available under the Financial market instrument ticker G_N_A (e.g. YC.B.U2.EUR.4F.G_N_A.SV_C_YM.BETA0) at the ECB Yield curves section of the Statistical Data Warehouse: http://
seminars - Topology of singular toric varieties By "toric topology", we mean a branch of mathematics studying various topological spaces with torus symmetries. One of the central objects in toric topology is the toric variety, which has provided a fertile testing ground for general theories in different fields of mathematics such as algebraic geometry, representation theory and combinatorics. Due to their nice torus symmetries, one can expect toric varieties to have several nice properties, one of which is the cohomological rigidity conjecture. It asks if the family of smooth toric varieties can be classified by their cohomology rings. In this talk, we will look at this conjecture in the context of singular toric varieties and introduce several recent works on the topology of singular toric varieties. Also held concurrently on Zoom: https://snu-ac-kr.zoom.us/j/2473239867
Basic Statistics for the Behavioral Sciences 7th Edition by Gary Heiman, ISBN-13: 978-1133956525 [PDF eBook eTextbook] • Publisher: Cengage Learning; 7th edition (January 1, 2013) • Language: English • 504 pages • ISBN-10: 1133956521 • ISBN-13: 978-1133956525 Packed with real-world illustrations and the latest data available, BASIC STATISTICS FOR THE BEHAVIORAL SCIENCES, 7e demystifies and fully explains statistics in a lively, reader-friendly format. The author's clear, patiently crafted explanations, with an occasional touch of humor, teach readers not only how to compute an answer but also why they should perform the procedure or what their answer reveals about the data. Offering a conceptual-intuitive approach, this popular book presents statistics within an understandable research context, deals directly and positively with potential weaknesses in mathematics, and introduces new terms and concepts in an integrated way. Table of Contents: Half Title Brief Contents Preface to the Instructor Ch 1: Introduction to Statistics Getting Started Why is it Important to Learn Statistics (and how do I do that?)
Review of Mathematics Used in Statistics Putting it all Together Chapter Summary Key Terms Review Questions Application Questions Ch 2: Statistics and the Research Process Getting Started The Logic of Research Applying Descriptive and Inferential Statistics Understanding Experiments and Correlational Studies The Characteristics of Scores Statistics in Published Research: Using Statistical Terms Putting it all Together Chapter Summary Key Terms Review Questions Application Questions Ch 3: Frequency Distributions and Percentiles Getting Started New Statistical Notation Why is it Important to know about Frequency Distributions? Simple Frequency Distributions Types of Simple Frequency Distributions Relative Frequency and the Normal Curve Computing Cumulative Frequency and Percentile Statistics in Published Research: Apa Publication Rules A Word about Grouped Frequency Distributions Putting it all Together Chapter Summary Key Terms Review Questions Application Questions Integration Questions Summary of Formulas Ch 4: Measures of Central Tendency: The Mean, Median, and Mode Getting Started New Statistical Notation Why is it Important to know about Central Tendency What is Central Tendency The Mode The Median Transformations and the Mean Deviations around the Mean Describing the Population Mean Summarizing Research Statistics in Published Research: Using the Mean Putting it all Together Chapter Summary Key Terms Review Questions Application Questions Integration Questions Summary of Formulas Ch 5: Measures of Variability: Range, Variance, and Standard Deviation Getting Started New Statistical Notation Why is it Important to know about Measures of Variability? 
Understanding the Variance and Standard Deviation The Population Variance and the Population Standard Deviation A Summary of the Variance and Standard Deviation Computing Formulas for the Variance and Standard Deviation Applying the Variance and Standard Deviation to Research Statistics in Published Research: Reporting Variability Putting it all Together Chapter Summary Key Terms Review Questions Application Questions Integration Questions Summary of Formulas Ch 6: z-Scores and the Normal Curve Getting Started New Statistical Notation Why is it Important to know about z-Scores? Understanding z-Scores Interpreting z-Scores Using the z-Distribution Using z-Scores to Compare Different Variables Using z-Scores to Determine the Relative Frequency of Raw Scores Statistics in Published Research: Using z-Scores Using z-Scores to Describe Sample Means Putting it all Together Chapter Summary Key Terms Review Questions Application Questions Integration Questions Summary of Formulas Ch 7: The Correlation Coefficient Getting Started New Statistical Notation Why is it Important to know about Correlation Coefficients? Understanding Correlational Research Types of Relationships Strength of the Relationship The Pearson Correlation Coefficient The Spearman Rank-Order Correlation Coefficient The Restriction of Range Problem Statistics in Published Research: Correlation Coefficients Putting it all Together Chapter Summary Key Terms Review Questions Application Questions Integration Questions Summary of Formulas Ch 8: Linear Regression Getting Started New Statistical Notation Why is it Important to know about Linear Regression? 
Understanding Linear Regression The Linear Regression Equation The Standard Error of the Estimate Computing the Proportion of Variance Accounted for A Word About Multiple Correlation and Regression Statistics in Published Research: Linear Regression Putting it all Together Chapter Summary Key Terms Review Questions Application Questions Integration Questions Summary of Formulas Halfway Review Ch 9: Using Probability to Make Decisions about Data Getting Started New Statistical Notation Why is it Important to know about Probability? The Logic of Probability Computing Probability Obtaining Probability from the Standard Normal Curve Random Sampling and Sampling Error Deciding Whether a Sample Represents a Population Putting it all Together Chapter Summary Key Terms Review Questions Application Questions Integration Questions Summary of Formulas Ch 10: Introduction to Hypothesis Testing Getting Started New Statistical Notation Why is it Important to know about the z-Test? The Role of Inferential Statistics in Research Setting Up Inferential Procedures Performing the z-Test Interpreting Significant Results Interpreting Nonsignificant Results Summary of the z-Test The One-Tailed Test Errors in Statistical Decision Making Statistics in Published Research: Reporting Significance Tests Putting it all Together Chapter Summary Key Terms Review Questions Application Questions Integration Questions Summary of Formulas Ch 11: Performing the One-Sample t-Test and Testing Correlation Coefficients Getting Started Why is it Important to know about t-Tests? 
Performing the One-Sample t-Test Estimating µ by Computing a Confidence Interval Statistics in Published Research: Reporting the t-Test Significance Tests for Correlation Coefficients Maximizing the Power of Statistical Tests Putting it all Together Chapter Summary Key Terms Review Questions Application Questions Integration Questions Summary of Formulas Ch 12: The Two-Sample t-Test Getting Started New Statistical Notation Why is it Important to know about the two-Sample t-Test? Understanding the Two-Sample t-Test The Independent-Samples t-Test Summary of the Independent-Samples t-Test The Related-Samples t-Test Statistical Hypotheses for the Related-Samples t-Test Summary of the Related-Samples t-Test Describing the Relationship in a Two-Sample Experiment Statistics in Published Research: The Two-Sample Experiment Putting it all Together Chapter Summary Key Terms Review Questions Application Questions Integration Questions Summary of Formulas Ch 13: The One-Way Analysis of Variance Getting Started New Statistical Notation Why is it Important to know about ANOVA? An Overview of ANOVA Understanding the ANOVA Performing the ANOVA Performing Post HOC Comparisons Summary of Steps in Performing a One-Way ANOVA Additional Procedures in the One-Way ANOVA Statistics in Published Research: Reporting ANOVA Putting it all Together Chapter Summary Key Terms Review Questions Application Questions Integration Questions Summary of Formulas Ch 14: The Two-Way Analysis of Variance Getting Started New Statistical Notation Why is it Important to know about the Two-Way ANOVA? 
Understanding the Two-Way Design Overview of the Two-Way, Between-Subjects ANOVA Computing the Two-Way ANOVA Completing the Two-Way Experiment Summary of the Steps in Performing a Two-Way ANOVA Putting it All Together Chapter Summary Key Terms Review Questions Application Questions Integration Questions Summary of Formulas Ch 15: Chi Square and Other Nonparametric Procedures Getting Started Why is it Important to know about Nonparametric Procedures? Chi Square Procedures One-Way Chi Square The Two-Way Chi Square Statistics in Published Research: Reporting Chi Square Nonparametric Procedures for Ranked Data Putting it all Together Chapter Summary Key Terms Review Questions Application Questions Integration Questions Summary of Formulas Second-Half Review A: Additional Statistical Formulas B: Using SPSS C: Statistical Tables D: Answers to Odd-Numbered Questions Gary Heiman is a professor at Buffalo State College. Praised by reviewers and adopters for his readable prose and effective pedagogical skills, he has written four books for Houghton Mifflin (now Cengage Learning): STATISTICS FOR THE BEHAVIORAL SCIENCES, RESEARCH METHODS IN PSYCHOLOGY, UNDERSTANDING RESEARCH METHODS AND STATISTICS, AND ESSENTIAL STATISTICS FOR THE BEHAVIORAL SCIENCES. He received his Ph.D. in cognitive psychology from Bowling Green State University.
Convexity Adjustment in Bonds: Calculations and Formulas

What Is a Convexity Adjustment?

A convexity adjustment is a change required to be made to a forward interest rate or yield to get the expected future interest rate or yield. This adjustment is made in response to a difference between the forward interest rate and the future interest rate; this difference has to be added to the former to arrive at the latter. The need for this adjustment arises because of the non-linear relationship between bond prices and yields.

Key Takeaways
• Convexity adjustment involves modifying a bond's convexity based on the difference in forward and future interest rates.
• As its name suggests, convexity is non-linear. It is for this reason that adjustments to it must be made from time to time.
• A bond's convexity measures how its duration changes as a result of changes in interest rates or time to maturity.

The Formula for Convexity Adjustment Is

$$
\begin{aligned}
&CA = CV \times 100 \times (\Delta y)^2 \\
&\textbf{where:} \\
&CV = \text{Bond's convexity} \\
&\Delta y = \text{Change of yield}
\end{aligned}
$$

What Does the Convexity Adjustment Tell You?

Convexity refers to the non-linear change in the price of an output given a change in the price or rate of an underlying variable. Rather than depending linearly on the underlying variable, the price of the output also depends on the second derivative. In reference to bonds, convexity is the second derivative of bond price with respect to interest rates. Bond prices move inversely with interest rates: when interest rates rise, bond prices decline, and vice versa. To state this differently, the relationship between price and yield is not linear, but convex. To measure interest rate risk due to changes in the prevailing interest rates in the economy, the duration of the bond can be calculated. Duration is the weighted average of the present value of coupon payments and principal repayment.
It is measured in years and estimates the percent change in a bond’s price for a small change in the interest rate. One can think of duration as the tool that measures the linear change of an otherwise non-linear function.

Convexity is the rate at which the duration changes along the yield curve. Thus, it's the first derivative of the equation for the duration and the second derivative of the equation for the price-yield function, or the function for change in bond prices following a change in interest rates. Because the estimated price change using duration may not be accurate for a large change in yield due to the convex nature of the yield curve, convexity helps to approximate the change in price that is not captured or explained by duration.

A convexity adjustment takes into account the curvature of the price-yield relationship shown in a yield curve in order to estimate a more accurate price for larger changes in interest rates. To improve the estimate provided by duration, a convexity adjustment measure can be used.

Example of How to Use Convexity Adjustment

Take a look at this example of how convexity adjustment is applied:

\begin{aligned} &\text{AMD} = -\text{Duration} \times \text{Change in Yield} \\ &\textbf{where:} \\ &\text{AMD} = \text{Annual modified duration} \\ \end{aligned}

\begin{aligned} &\text{CA} = \frac{ 1 }{ 2 } \times \text{BC} \times \text{Change in Yield} ^2 \\ &\textbf{where:} \\ &\text{CA} = \text{Convexity adjustment} \\ &\text{BC} = \text{Bond's convexity} \\ \end{aligned}

Assume a bond has an annual convexity of 780 and an annual modified duration of 25.00. The yield to maturity is 2.5% and is expected to increase by 100 basis points (bps):

$\text{AMD} = -25 \times 0.01 = -0.25 = -25\%$

Note that 100 basis points is equivalent to 1%.
$\text{CA} = \frac{1}{2} \times 780 \times 0.01^2 = 0.039 = 3.9\%$

The estimated price change of the bond following a 100 bps increase in yield is:

$\text{Annual Duration} + \text{CA} = -25\% + 3.9\% = -21.1\%$

Remember that an increase in yield leads to a fall in prices, and vice versa.

An adjustment for convexity is often necessary when pricing bonds, interest rate swaps, and other derivatives. This adjustment is required because of the unsymmetrical change in the price of a bond in relation to changes in interest rates or yields. In other words, the percentage increase in the price of a bond for a defined decrease in rates or yields is always more than the decline in the bond price for the same increase in rates or yields. Several factors influence the convexity of a bond, including its coupon rate, duration, maturity, and current price.
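The worked example above is straightforward to script. Here is a minimal sketch (the helper function and its name are my own, not from the article) that reproduces the combined duration-plus-convexity estimate of -21.1%:

```python
def estimate_price_change(mod_duration, convexity, dy):
    """Duration-plus-convexity estimate of a bond's fractional price
    change for a yield change dy (in decimal, e.g. 0.01 for 100 bps)."""
    duration_effect = -mod_duration * dy        # linear (duration) term
    convexity_adj = 0.5 * convexity * dy ** 2   # curvature correction
    return duration_effect + convexity_adj

# The article's numbers: modified duration 25.00, convexity 780, +100 bps.
change = estimate_price_change(25.0, 780.0, 0.01)
print(f"{change:.1%}")  # -21.1%
```

Note that the convexity adjustment term is always positive (it involves a square), so it softens the price drop for rising yields and amplifies the price gain for falling yields, matching the asymmetry described above.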
1985 AHSME Problems/Problem 30

Let $\left\lfloor x\right\rfloor$ be the greatest integer less than or equal to $x$. Then the number of real solutions to $4x^2-40\left\lfloor x\right\rfloor+51 = 0$ is

$\mathrm{(A)\ } 0 \qquad \mathrm{(B) \ }1 \qquad \mathrm{(C) \ } 2 \qquad \mathrm{(D) \ } 3 \qquad \mathrm{(E) \ }4$

We rearrange the equation as $4x^2 = 40\left\lfloor x\right\rfloor-51$, where the right-hand side is now clearly an integer, meaning that $4x^2 = n$ for some non-negative integer $n$. Therefore, in the case where $x \geq 0$, substituting $x = \frac{\sqrt{n}}{2}$ gives \[40\left\lfloor\frac{\sqrt{n}}{2}\right\rfloor-51 = n.\] To proceed, let $a$ be the unique non-negative integer such that $a \leq \frac{\sqrt{n}}{2} < a+1$, so that \begin{align*}&\left\lfloor \frac{\sqrt{n}}{2}\right\rfloor = a, \text{ and} \\ &4a^2 \leq n < 4a^2+8a+4,\end{align*} and our equation reduces to \[40a-51 = n.\] The above inequalities therefore become \[4a^2 \leq 40a-51 < 4a^2+8a+4 \iff 4a^2-40a+51 \leq 0 \text{ and } 4a^2-32a+55 > 0,\] where the first inequality can now be rewritten as $(2a-10)^2 \leq 49$, i.e. $\left\lvert 2a-10\right\rvert \leq 7$. Since $(2a-10)$ is even for all integers $a$, we must in fact have \begin{align*}\left\lvert 2a-10\right\rvert \leq 6 &\iff \left\lvert a-5\right\rvert \leq 3 \\ &\iff 2 \leq a \leq 8.\end{align*} The second inequality similarly simplifies to $(2a-8)^2 > 9$, i.e. $\left\lvert 2a-8\right\rvert > 3$. As $(2a-8)$ is even, this is equivalent to \begin{align*}\left\lvert 2a-8 \right\rvert \geq 4 &\iff \left\lvert a-4\right\rvert \geq 2 \\ &\iff a \geq 6 \text{ or } a \leq 2,\end{align*} so the values of $a$ satisfying both inequalities are $2$, $6$, $7$, and $8$.
Since $n = 40a-51$, each of these distinct values of $a$ gives a distinct solution for $n$, and thus for $x = \frac{\sqrt{n}}{2}$, giving a total of $4$ solutions in the $x \geq 0$ case. As $4$ is already the largest of the answer choices, this suffices to show that the answer is $\text{(E)}$, but for completeness, we will show that the $x < 0$ case indeed gives no other solutions. If $x = -\frac{\sqrt{n}}{2}$ (and so $n > 0$), we require \[40\left\lfloor -\frac{\sqrt{n}}{2}\right\rfloor-51 = n,\] and recalling that $\left\lfloor -x\right\rfloor = -\left\lceil x\right\rceil$ for all $x$, this equation can be rewritten as \[-40\left\lceil \frac{\sqrt{n}}{2}\right\rceil-51 = n.\] Since $n$ is positive, the least possible value of $\left\lceil \frac{\sqrt{n}}{2}\right\rceil$ is $1$, but this means \begin{align*}n &= -40\left\lceil\frac{\sqrt{n}}{2}\right\rceil-51 \\ &\leq -40 \cdot 1 - 51 \\ &= -91,\end{align*} which is a contradiction. Therefore the $x < 0$ case indeed gives no further solutions, confirming that the total number of solutions is precisely $\boxed{\text{(E)} \ 4}$.

See Also

The problems on this page are copyrighted by the Mathematical Association of America's American Mathematics Competitions.
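The casework can also be sanity-checked numerically. The short script below (my own, not part of the original solution) enumerates the candidates $x = \frac{\sqrt{40a-51}}{2}$ and confirms the count:

```python
import math

# Brute-force check: for x >= 0, 4x^2 = 40*floor(x) - 51 forces
# x = sqrt(n)/2 with n = 40a - 51, where a = floor(x) is a
# non-negative integer.  (For x < 0, floor(x) <= -1 makes the
# right-hand side at most -91 < 0, so no negative solutions exist.)
solutions = []
for a in range(100):
    n = 40 * a - 51
    if n < 0:
        continue
    x = math.sqrt(n) / 2
    if math.floor(x) == a:  # candidate is consistent with floor(x) = a
        solutions.append(x)

print(len(solutions), [math.floor(x) for x in solutions])
# 4 [2, 6, 7, 8]
```

The enumeration recovers exactly the four values of $a$ found in the algebraic argument.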
Leverage Score Sampling for Tensor Product Matrices in Input Sparsity Time

Proceedings of the 39th International Conference on Machine Learning, PMLR 162:23933-23964, 2022.

We propose an input sparsity time sampling algorithm that can spectrally approximate the Gram matrix corresponding to the q-fold column-wise tensor product of q matrices using a nearly optimal number of samples, improving upon all previously known methods by poly(q) factors. Furthermore, for the important special case of the q-fold self-tensoring of a dataset, which is the feature matrix of the degree-q polynomial kernel, the leading term of our method’s runtime is proportional to the size of the dataset and has no dependence on q. Previous techniques either incur a poly(q) factor slowdown in their runtime or remove the dependence on q at the expense of having sub-optimal target dimension, and depend quadratically on the number of data-points in their runtime. Our sampling technique relies on a collection of q partially correlated random projections which can be simultaneously applied to a dataset X in total time that only depends on the size of X, and at the same time their q-fold Kronecker product acts as a near-isometry for any fixed vector in the column span of $X^{\otimes q}$. We also show that our sampling methods generalize to other classes of kernels beyond polynomial, such as Gaussian and Neural Tangent kernels.
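As a rough illustration only, the basic leverage score sampling primitive that the paper accelerates can be sketched with a dense SVD. This sketch ignores the input-sparsity and tensor-product machinery that is the paper's actual contribution, and the function names are mine:

```python
import numpy as np

def leverage_scores(X):
    """Row leverage scores of X: squared row norms of U, where
    X = U S V^T is the thin SVD.  They lie in [0, 1] and sum to rank(X)."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return np.sum(U ** 2, axis=1)

def leverage_sample(X, m, rng):
    """Sample m rows of X with probability proportional to leverage,
    rescaled so that S.T @ S is an unbiased estimate of X.T @ X."""
    ell = leverage_scores(X)
    p = ell / ell.sum()
    idx = rng.choice(X.shape[0], size=m, replace=True, p=p)
    return X[idx] / np.sqrt(m * p[idx])[:, None]

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
S = leverage_sample(X, 2000, rng)
err = np.linalg.norm(S.T @ S - X.T @ X, 2) / np.linalg.norm(X.T @ X, 2)
print(f"rows kept: {S.shape[0]}, relative spectral error: {err:.3f}")
```

The point of the paper is that for the tensor-product matrix $X^{\otimes q}$ one cannot afford to materialize the matrix (let alone its SVD) as this sketch does; the correlated random projections let the scores be approximated in input sparsity time instead.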
316 | Working With Heat-Molded Boots Molded boots and transitions are an important part of professional motorsport harness construction. In this webinar we’ll learn what these are, how to select a suitable boot, how to recover them correctly and how to seal them with epoxy. 00:00 - Hey team, Andre from High Performance Academy, welcome along to another one of our webinars and in this one we're going to be dealing with heat molded boots, heat molded parts, something that we will be dealing with a lot, particularly at the professional level of motorsport wiring. 00:15 Exhibit A here is our little test harness and I'll just get this under our overhead camera. 00:21 I'm going to assume it's probably a safe assumption that most people have already seen these heat molded boots before but for those who haven't, if you have been living under a rock, that's essentially what we're going to be talking about. 00:34 Now as usual, if you do have any questions, as we move through today's webinar, you can ask those questions in the chat and we'll deal with those at the end. 00:44 So let's get started with what a heat molded boot is or heat molded part and why we would need them. 00:51 So essentially they're a semi rigid boot that we can heat shrink down using a controlled temperature from a heat gun like this Dewalt one that I've got here. 01:02 And I'll just get this out of the way, they come in a range of different shapes and sizes so in this case in our overhead here, we've got a T intersection heat molded boot which is a really common way of branching out our wiring harnesses. 01:18 Particularly if we've got a V configuration engine and we want to run the harness down both banks of the engine, maybe for our injectors etc, then a T boot is a really nice way of making that 01:30 We've also got straight boots like this one here, obviously this one is a little bit smaller, they come in a range of sizes. 
01:39 We've got all sorts of different shapes and I'm only scratching the surface here on what is available. 01:47 This particular one here is from Hellermann and this is one that I actually used on our FJ40 wiring harness which has just been added in as a worked example to our pro wiring course. 01:58 And this is being used in the injector and ignition coil wiring harness. 02:03 So essentially that harness, the main branch runs along one of the fuel rails on each side. 02:08 And then at each point we can branch out the wiring for our injectors and our coils. 02:14 So it kind of gets everything nice and neatly arranged and heading in the right direction. 02:19 Unrecovered, they tend to look a little bit more like this. 02:24 So we'll get this under our overhead as well. 02:26 This one here is a 90° boot so we would use this on the likes of the back of an autosport connector. 02:32 So we can see that in their stock form they are actually reasonably flexible. 02:36 But if we look at one of these recovered parts, I mean I can't budge that, it is absolutely solid. 02:43 And that really comes down to two of the points of why we would use these heat molded boots in the first place. 02:50 One is that they do a great job of sealing our harness from the environment. 02:55 So they're going to prevent dust and moisture ingress into our harness when they're installed properly and when they are sealed with a epoxy glue. 03:05 The other element though that's just as important is that they provide really good strain relief to our harness when they are recovered down because they are essentially semi rigid. 03:16 So if we get our harness back into our shot here and we can see that harness, I cannot move that. 03:25 So obviously in behind this boot, we've got all of our service loops anyway. 
03:30 So that's helping with our strain relief but there's likely to also be a number of crimps and splices, splices is what I meant to say, in there, so this is probably the most dangerous part of our harness in terms of reliability so having that heat molded part in there, it is glued to the DR25 of our main harness, it is also glued to the back of the autosport connector here and basically then protects our harness from any strain being placed into the contacts at the back of that autosport connector. 04:03 There's a couple of suppliers, the main suppliers in this market, Raychem and Hellermann. 04:08 The parts, for all intents and purposes, at least as I've experienced them, are interchangeable. 04:14 Part numbers are different between Hellermann and Raychem but essentially it's going to come down, in terms of preference it's going to come down to the supplier that you have access to and what they're actually stocking. 04:28 So the parts are interchangeable, there is a cross reference between a Hellermann and Raychem part so not a real big deal there using one or the other. 04:39 We tend to end up with a bit of a combination of both just depending on what we can get access to. 04:45 Now what I'll do is just bring into shot as well a couple of other examples here. 04:53 So the heat moulded parts are not the only way of dealing with environmentally protecting our harness. 05:01 So this DTM 4 pin connector here has actually been sealed with a section of SCL heat shrink. 05:07 So similar in some regards in terms of it's a semi rigid product so obviously not a lot of flexibility in there. 05:17 It's doing the same job in so much as it's sealing the harness from dust and moisture and that is also a glue lined heat shrink as well so basically when you shrink down SCL, it will have a bit of a black epoxy ooze out, that's pretty typical.
05:34 Problem with SCL is that it is a straight heat moulded heat shrink so it gets difficult again in our example here, we've obviously gone from quite a large diameter down to quite a small 05:49 It's not so bad with our little 2 pin here, you can see the difference in diameters are not quite so dramatic but it can get a little bit tricky with the SCL. 05:59 In order to get that to shrink down to the smaller diameter, so often you'll see people using 2 lengths of SCL, one is just going to be shrunk down directly onto the DR25 to essentially create a little bit more bulk so that the second piece that fits over the back of the DTM connector can then shrink down onto it. 06:19 There's a few little oddities like that whereas the heat moulded parts from TE Connectivity, Raychem and Hellermann are actually designed to actually shrink down into different sizes at their inlet and their outlet so that's why if we again get our little sample under the overhead, we can see that we've got a dramatically different diameter at the large side compared to at the small side and we can size these to suit our harness requirements. 06:52 Now before we get into sizing, also talk about glue versus non glued boots. 07:01 They are available in both forms and really a lot of this comes down to personal preference. 07:06 Let me just get this one under the overhead. 07:10 It might be a little tricky to see, no actually I don't think it's too tricky to see at all. 07:14 This sort of grey finish that we can see in here, this is actually a glue lined heat shrink boot so this glue here is pre applied to the inside of the heat moulded boot and when we recover that down using our heat gun that's actually going to essentially melt and can be used them to glue that boot down to whatever we are recovering it onto. 07:38 So that's one option, the other option is non glue lined and this one here is a good example of this. 
07:46 So we can see, nothing in this, it is just purely a heat moulded boot and we're going to need to apply our own glue to that. 07:56 So pros and cons with this, using a non glue lined boot allows us a little bit more control over the application of the epoxy and what we're going to use. 08:09 Obviously it is a bit of a messier process and it does involve another step and some more consumables so there is that to consider. 08:17 Either will work, generally I prefer to work with non glue lined boots but that is partly a personal preference here. 08:26 If we are working with non glue lined boots then you are going to need an epoxy. 08:32 There are two options that are sort of the industry standard, this is the Hellermann Tyton V9500. 08:39 The other one is the ResinTech RT125. 08:43 I haven't been able to find a difference between them, I'd suspect they're actually exactly the same product but I can't guarantee that. 08:52 As its name implies, it is a 2 part epoxy, you use a mixing gun like this. 08:57 There are a couple of options for application. 08:59 You can get a mixing nozzle, so that has essentially a spiral pass through it, it is quite long, probably sort of a good 75 mm in length, 3 inches in length or thereabouts. 09:12 And basically by the time the 2 parts of the epoxy have gone through that spiral, they've been properly mixed together and when they come out, you can be comfortable in knowing that they are mixed as they should be. 09:24 I find particularly when we're only using a small amount of epoxy though for maybe gluing just one boot down, that's a really wasteful way of using what is quite an expensive product. 09:36 I tend to use this without the mixing nozzle and what I'll do is apply the amount of epoxy that I think I'm going to use onto something that I can physically mix with a spatula or something of that nature and then what I do is I use a small syringe like this. 09:57 I'll get this under our overhead so you can get a better shot of it. 
10:00 I mean nothing particularly special here, this is just a 5 mL syringe. 10:03 They will come like this and these are probably $1 or $2 at your local pharmacy or drug store. 10:11 So all you need to do is remove the plunger from it, you can load your epoxy into there and then that gives you a lot more control over how it's going to be applied. 10:21 I do however also use these little extensions here, I guess it's kind of a needle. 10:29 Relatively small point and it allows you really precise application of the epoxy to the inside of the boot or to the back of a autosport connector or wherever you're applying it. 10:40 Little trick when you are using a syringe like this, the epoxy can be quite thick. 10:47 Particularly if you're in a colder climate, maybe coming into winter. 10:50 So what I find is a really good idea just before you go and apply the epoxy, I use my heat gun and just lightly heat the epoxy in the syringe. 11:00 That just allows it to flow a little bit better. 11:03 Do need to be a little bit mindful if you're doing that though, obviously that can result in the epoxy setting up a little bit faster. 11:10 In general I've never found that to be an issue because these epoxies really need a good 24 hours to properly cure so not too much of a concern. 11:20 Now that is the process but I won't be demonstrating, that's what we're trying to do but I won't be demonstrating the full process with the epoxy just because it does tend to get a little bit messy, particularly when we're trying to do this on camera. 11:35 Alright so talking about sizing, the key with any of these heat molded parts is making sure that they are the correct size and they do come in a range of sizes. 11:47 I haven't got the right one for this autosport connector but one of the nice features with these is that when you have got the correct boot size, it will actually go over the autosport connector so you don't have to assemble these onto your harness as you're building it. 
12:04 As I've said, this one is not the correct size, it is a little bit too small but yeah if you've got the right one, you'll actually be able to feed this over the autosport connector once the harness is built and shrink it into place. 12:15 That's also handy because if you do ever need to cut that boot off, you can install another one without having to depin the entire autosport connector and start from scratch. 12:25 So it's a fine, relatively fine line between a boot that's too big and a boot that's too small. 12:31 These do have a really high shrink ratio so they're actually going to shrink down a lot more than you think. 12:39 And the other element which can be a little bit surprising, we'll get this one under our overhead, I'm going to demonstrate shrinking this one down but as it shrinks, what they're going to do is actually grow in length as well. 12:53 So when you're getting some of these boots and you're sort of looking at them compared to the junction that you may be shrinking them over, it can be hard to believe that they're actually going to shrink out and cover the entire section of your harness but you will be quite surprised. 13:10 You don't need to guess though because there is a wide range of information available from TE on exactly what these are going to do. 13:20 So first of all, let's jump across to my laptop screen, this is a resource that we include in our practical motorsport harness construction course. 13:27 You'll be able to find this by searching TE Connectivity molded parts. 13:32 It is a pretty extensive list of all of the parts and part numbers that they offer and as I scroll through this, you can sort of get an understanding of the wide variation in shapes that they do offer. 
13:45 What I would say as well though is depending where you are in the world and who your suppliers are, just because you can find a shape or a part that will suit your application, let's say this crazy looking thing here which branches off in 4 different directions, you may not be able to find that locally so this could end up being difficult or you may need to import it from a supplier overseas so it's always a good idea to really think about these things as you're doing the construction plan for your harness so that if you do need any peculiar boots that you can actually order these in ahead of time so if there's a delay on them, it's not going to affect your harness construction. 14:28 I would say, at least in my experience here in New Zealand, all of the conventional straight and 90° boots like this one here that we use on the back of the autosport connectors, there's never really an issue with those. 14:41 The little angled boot that I showed for our injector and ignition wiring I did have to get these in from overseas but again depends who your supplier is and what they're using. 14:52 There's also a really good breakdown on the part numbering for the Raychem boots so you actually understand what these numbers mean. 15:03 The next element that I want to talk about, oh no we'll talk about sizing still. 15:07 So that's the main catalog, again I'm not going to go through all of it. 15:10 Then it comes down to sizing. 15:15 So let's talk about a pretty conventional straight boot which would be like this little one here. 15:22 And we want to know what size or what boot part number we need to order. 15:26 So this is another one of the resources from Raychem for their boots. 15:32 Hellermann have their own. 15:34 But this gives you a really good rundown on exactly what's what. 15:37 So first of all we have the boot in its unrecovered form here. 
15:41 Really important, particularly when we start dealing with some of the more intricate shapes like the T boots, to make sure we understand the letters on each of the ends of the boot. 15:56 So let's get this under our overhead. 15:58 And might be a little bit tricky to see but what we've got here, this side here is J, this side is H and this side is K. 16:09 And when you've got this unrecovered, and unfortunately I don't have one here that I can show you, it actually looks like it's a straight through boot so it's important to know once it's recovered, which of the legs is going to actually transition through 90°, otherwise you could end up creating a bit of an issue for yourself if you recover it down and you find that you've actually got your harness in the wrong leg of that particular boot. 16:32 So in this instance though, what we can see is H, that is the inlet when it recovers down that becomes the biggest side and J is our outlet and when that is recovered down, that's the one that's going to shrink down like this. 16:49 When you are holding these it does become pretty apparent, the H side obviously it is labelled but that is a bit thicker in the material, it's a bit stiffer whereas if we get the J side, that one is much much thinner, much more pliable because that's the side that is designed to actually shrink down in that bottle shape. 17:10 We've got all of the dimensions though for each of the parts. 17:13 So our recovered and unrecovered size is for our H so for example here if we're looking at a 202K121, the unrecovered size for the H is 24 mm and when it is recovered down, the minimum size it'll recover down to is 10.4 mm. 17:35 So it goes from 24 down to 10 mm so it is a massive shrink ratio, a fairly significant shrink ratio. 17:42 So what you want to do in this instance here is essentially measure whatever you're shrinking that down onto. 17:48 So let's say we've got our autosport connector. 
17:50 We're going to be measuring the outside diameter of the section it's going to shrink down on and that's going to give us a guide. 17:58 Obviously when we're choosing the correct boot for this, we want to make sure that we're choosing a boot where that size falls within that minimum and maximum range. 18:08 However it's not just the back of our connector we need to consider. 18:11 Obviously the wiring that comes out of the connector, we also need the diameter of that wiring and that becomes our J diameter, this one here and our J diameter is listed here, again minimum and maximum so we can see what that's going to be. 18:29 So for that 202K121, the inlet, the H leg will shrink down to 10.4 mm, the J leg starts at the same 24 mm, that'll shrink all the way down to 5.6 mm. 18:43 So within this range, you're probably going to be able to find a boot that's going to be able to suit most of the common applications you'd need them for. 18:53 We've got another resource in our practical motorsport level wiring course that lists the correct boot size for the common shell sizes for the autosport connectors. 19:03 These documents that you've just seen here, these also exist for the T boots and I've got this one here for our 90° boots. 19:14 Now a little tip that I would give as well is that when you are terminating the back of an autosport connector, particularly ones where it's getting a little bit busy, maybe you've got a bunch of splices going on there. 19:28 It's really a good idea to print out and scale the drawing from TE Connectivity like this, or Hellermann for that matter and scale it out so it suits the finished size of your connector and then use that essentially as a silhouette, you can cut it out and you can use it as a silhouette when you are locating all of your splices to make sure that they are going to nicely sit within the finished recovered boot space. 
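The sizing rule just described can be written down as a small checker. The 202K121 numbers below are the ones read out in the webinar; the fit rule itself (each end must sit between the boot's minimum recovered diameter and its unrecovered diameter) is my simplified reading of the process, so always confirm against the TE/Raychem drawing for the actual part:

```python
def boot_fits(connector_dia_mm, wire_bundle_dia_mm, boot):
    """Check a straight boot against two measured diameters.

    Assumed rule: each end must be at least the minimum recovered
    diameter (so the boot shrinks down tight and grips) and smaller
    than the unrecovered diameter (so it can slide over the part).
    """
    h_ok = boot["h_recovered_min"] <= connector_dia_mm < boot["h_unrecovered"]
    j_ok = boot["j_recovered_min"] <= wire_bundle_dia_mm < boot["j_unrecovered"]
    return h_ok and j_ok

# Dimensions quoted in the webinar for the Raychem 202K121 (mm).
boot_202k121 = {
    "h_unrecovered": 24.0, "h_recovered_min": 10.4,
    "j_unrecovered": 24.0, "j_recovered_min": 5.6,
}

print(boot_fits(13.0, 8.0, boot_202k121))  # True
print(boot_fits(13.0, 4.0, boot_202k121))  # False: bundle below J minimum
```
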
19:58 Otherwise you're going to end up with something that's going to be a little bit messy to work with and probably just not look that aesthetically pleasing when it's finished. 20:08 So that's the process of sizing it, going through those documents and again you'll find all of these if you just search for them. 20:15 If you are looking at a boot number, if you search that particular boot number, you're going to end up getting to this document that I've just shown you here, not very difficult. 20:25 The other element to consider with these is that these boots are available in lipped or non lipped variants. 20:34 And what that refers to is the end, or the H end here, if you are going to be using this in conjunction with an autosport connector, then if we get this one under our overhead, what you'll see is that we've got this knurled section here and then it actually steps down and we've got a lip in here so the lip on the boot is designed to actually lock in behind that knurling so we do want to use a lipped variant when we are using autosport connectors however, not always do we want to do that. 21:10 Let's just get our harness under our overhead and this is where we've used a Raychem boot for a transition from our harness, obviously we've got 2 branches coming in here and we've got several coming out and in this instance obviously we don't need the lipped variant so just depending on your application whether or not you want to use the lipped variant. 21:35 Alright so I'm going to try a couple of demonstrations here and hopefully everything works out really nicely. 21:41 We'll see how that all goes and the first of those we're going to look at applying a heat moulded boot to this potted ignition coil. 21:51 Little bit of a back story, this is a Toyota ignition coil, these have a known problem with the pin retention, contact retention on the factory connector.
22:03 So they're fine in a road car application until they get very high in the mileage but with the vibration associated with motorsport use, they can be problematic. We had these coils on a 1ZZ-FE that was on our engine dyno and we potted them to give us a little bit more reliability. 22:22 So once we've potted them we can essentially leave them like this, I'll get it under our overhead and we can see that essentially the wires have been soldered directly to the terminals inside here and then once that's done, to provide some mechanical strain relief, it's been back filled with a 2 pot epoxy, our ResinTech RT125. 22:42 So we could leave it like this, it is environmentally sealed but what we want to do is provide a little bit more strain relief as well. 22:49 It is possible to have that solder essentially wick up the conductor strands and make its way further up into the wires which is really the problem with solder. 22:57 So a heat molded boot is a really nice way of sort of finishing that off and adding reliability. 23:05 So the boot that we've actually got, this is a Raychem boot, it is, for those who are interested, a 202D932, slightly more unusual shape in that it is already quite reduced but we do need that due to the difference in the size of the plastic section for the connector body and obviously we've only got three 20 gauge Tefzel wires coming out of that with some DR25 over the top of it. 23:38 So we'll have a look at this in a second, I haven't at this stage talked about what you're going to need in order to recover these boots in too much detail. 23:47 We're going to need a good quality heat gun and you're going to need one where you can actually control the temperature. 23:55 So I'll get this hopefully under our overhead, that's probably not ideal but we can see we've got a screen there. 24:02 I'll just turn this on and our screen, there we go, lights up and we can control the temperature.
24:11 So at the moment we definitely don't want that to be at 500°C+ so for our heat molded boot recovery, we want to be, there will be a guide for each of the manufacturer's boots and we want to be making sure we follow that guide. 24:29 Generally somewhere in the 250, maybe 275°C vicinity is about right. 24:36 Now there's a couple of reasons why we want to be very careful with that temperature. 24:41 The first of these is that if we use too much heat, we can end up damaging the heat molded part, we can damage the connector body beneath that and we can end up damaging our wiring so obviously not really a lot of good's going to come out of that. 24:55 The other element there though is that we can actually end up completely melting the boot if we're not too careful and we can end up with that boot recovering too fast for us to be able to actually manipulate it, control it and get it in the right position. 25:12 These can be quite tricky, particularly as we get into the more complex T molded boots. 25:18 And getting everything to align in the right spot and basically manipulating it as it's starting to shrink, that can be a pretty tricky job so if you are using too much temperature, the boot is going to recover very quickly and you can find that you've got a bit of a mess on your hands because it's recovered before you can really get it properly into location so lower temperature gives you just a little bit more time to work with that and makes your life a little bit easier. 25:45 The other thing we're going to need in order to work with this or at least I recommend it is a pair of gloves or at least a glove on the hand you're going to be working with. 25:53 Hot things are in fact quite hot and you can end up burning yourself if you're not careful. 
25:59 And there's nothing worse than having a boot that you know needs to be repositioned or moved but it's also absolutely stinking hot and when you touch it, it burns you so a glove is a really good way of getting around that. 26:12 Couple of other elements that we will need, before we apply our heat molded part, we are going to need to abrade the DR25 around it and we're going to want to just rough that surface up a little bit so it gives something for that glue to really bite into. 26:30 So just some emery paper, doesn't need to be too rough. 26:34 I'll just rip a little bit off there and we'll get our ignition coil back under our overhead and do it under here. 26:42 So we can kind of get a sense for where this boot is going to recover down and what we want to do is just gently abrade the surface of the DR25 and if you are using a transition on your harness and it's not actually going onto an autosport connector or the like, you want to abrade all of the DR25 on both sides of the boot, the inlet and the outlet. 27:06 Right so that's abraded down enough now and what we also want to do is use some isopropyl alcohol and just clean back that surface as well as the back of the connector body. 27:20 So I've just got a bit of isopropyl alcohol in a squirty bottle here and we'll just go ahead and clean that down so not exactly rocket science, basically just make sure particularly on the connector body that we've removed any oils, anything that's going to prevent the boot from sticking. 27:38 Not a bad idea to abrade the back of that connector, plastic housing as well just to allow a little bit better grip so I'll get this back into our little vice here and we'll get this a little bit more central so we can see what's going on. 27:55 So what I'm going to do is just turn my heat gun on and allow that to warm up and we'll just gently get this located.
28:04 This one should be reasonably easy and what we're going to do is we're going to start by shrinking down the inlet and this will allow us to correctly position this as we're going and what I want to do is continuously move the heat gun around so that I'm not shrinking it down, or concentrating the heat for too long in one particular area. 28:23 And obviously with the heat as well, again, with a sensitive product like an ignition coil, we want to be making sure that we're not heating the coil itself too much so let's grab this glove here, get that on and we'll start shrinking that down. 28:39 So again I'll just check my temperature which has for some reason again reverted to 600°C which probably wouldn't end too well. 28:48 Alright so at the moment I don't really need to start by touching the boot and we're just going to focus our heat around the outside here, constantly moving. 29:02 And what we'll find is to start with, it doesn't really seem like too much is happening but as we start building up a bit more heat into that boot, we will start to find that it will shrink. 29:17 So it's just starting to move now and the other thing I will mention as well is it's quite important to have a fairly narrow tip in your heat gun for this process just so that you don't end up sort of overheating something that you don't want to that's kind of in the path of the heat gun. 29:37 Alright so it is starting to move, I promise. 29:39 I might actually pull this up a little bit in terms of temperature. 29:43 Generally sort of between 250 and up to 300°. 29:47 300° you'll find is a little bit higher than will be recommended in the literature but I have found over time that a little bit more heat than is recommended actually seems to be required in some instances like this one and it doesn't do any damage as long as you're again not concentrating that heat in one place for too long.
30:11 So we are just getting this to start shrinking now and this is the point where as it's starting to shrink down, you do want to start just manipulating the positioning of the boot, making sure that we have got it in the right spot, looking pretty good at the moment so I'll just continue going around here. 30:36 And I've started obviously right at that lip, right at the entrance and now as it's starting to shrink down, I am just bringing my heat gun back out to start applying heat to the remainder of that boot as well and we want to just basically slowly but surely work our way out. 31:03 And I must admit, it does take an absolute eternity when you're trying to do this for a webinar vs when you're actually doing it and you're building a harness. 31:13 This thing always seems to go really really quickly, much quicker than you want it to when you are shrinking it down onto a harness but now that I'm doing it for a live webinar, it is taking its sweet time but bear with us because we will get there, it's almost there. 31:31 But again the really important point here is not to concentrate the heat in one place for too long, keep it moving, keep working your way around. 31:43 And then keep working your way from the inlet side of the boot to the outlet side. 31:50 I'm actually just going to give it a little bit more heat again for the purposes of our webinar here so that we're not here for the next half an hour 'cause I'm sure everyone else has places to be and things to do as well. 32:02 Alright so we're just getting down to the outlet now and this is where positioning the wiring as well becomes a little bit important because we do have flexibility in this until it actually cools down and once it's cooled down, everything's going to essentially be stuck in the location that we've got it so we want to make sure that we've got our wiring coming out at the right angle, that we haven't applied any undue stress and strain to that. 
32:34 Once we've got everything shrunk down and we're happy with it, we also just want to spend a little bit of time basically going over the whole boot and applying a little bit more heat and just making sure that everything is fully recovered. 32:48 I'm going to leave it there, it does actually require a little bit more time than that but I'm sure at this point you basically can get the gist of the process that is required. 32:56 Obviously it can end up being hot when we remove it from our vice as well so little bit of care is required in how we deal with that. 33:04 So even now, there's still a little bit of flexibility left in that but it is starting to set up. 33:10 We'll get this under our overhead shot and we can see again this does require a little bit more heat because it hasn't fully recovered here but all that is is just a case of time. 33:21 Now I'll just go back over those temperature settings as well because as I mentioned, sort of 250 to a maximum of 300, that's about as much heat as I would recommend. 33:32 I have punched that up just a little bit for the purposes of our demonstration just to save us a little bit of time. 33:37 But you will be thankful of that lower temperature when you're actually doing this because everything does happen really quickly, particularly with the thinner conventional boots, this one is a little bit unique in the shape that we've got but these ones will shrink reasonably quickly which we'll see in our second demonstration in a moment. 33:57 I promise this one will go a little bit more smoothly. 34:00 Before we do that though, I will mention that if you've got any questions, now's a great time to ask them. 34:07 Once we've got through this second demonstration we'll jump into those. 34:09 So for our second demonstration, I've prepared a little 2 pin DTM connector just with two 22 gauge wires crimped into it.
34:20 Now if you are working with DTM connectors and you want to boot them, you will need to make sure that you are purchasing the DTM connectors that have this little modification here with a lip for a heat molded boot. 34:36 That's really important. 34:38 Now on the heat front as well with these, these are probably one of the connector bodies that you will really notice if you are using excessive heat because it is quite easy to overheat the connector body and it will end up melting so care is required there. 34:54 The boot that we will be using here is this little Hellermann one. 34:59 Again the process is exactly the same as what we've just seen, we need to start by abrading the DR25 and making sure everything is nice and clean so we will go ahead and do that now. 35:11 Not particularly tricky, this is an area that a lot of people overlook but it is really important because otherwise that epoxy, that glue really has nothing to get a good grip or bite into. 35:26 So once we've done that, again a little bit of our IPA and then we can just clean that down on the DR25 as well as cleaning the back of the connector body. 35:40 Now this particular boot that I am using here is not glue lined and normally again we would be applying our epoxy onto this as we go and for simplicity and cleanliness I'm not going to but you'll be able to sort of understand that process anyway. 35:59 So we'll just get the boot installed. 36:03 Now these are ones actually I will mention as well where you do need to be careful because the shape of this, it will not fit over the DTM connector which is obviously, get that back out, it's pretty apparent, it's not going to go. 36:17 So you're going to need to think ahead when you are assembling the harness here and actually feed these on before you terminate and pin out the connector bodies, otherwise you're not going to get them on. 36:28 So we'll get this connector body back into our little vice here. 
36:32 We'll turn our heat gun on and again make sure that it hasn't actually just jumped back up to 600° which it hasn't I don't think. 36:45 Right so I'll get my glove on here and again we're going to start by just working our way around the connector body itself trying to make sure that we are targeting just the inlet end of the boot and this one is already starting to shrink down so it's responding a little bit quicker than the last boot and here I'm just actually rotating the boot rather than the heat gun and we want to make sure that as that shrinks down we've got it located nicely on the lipped area at the back of the DTM connector. 37:30 So again our heat gun temperature is giving me enough time to work with this. 37:34 It's shrunk down enough now that it's sitting on the connector body without me needing to sort of manipulate it. 37:41 So I'm just going to work my way around here. 37:47 And now that we've got the heat moulded boot shrunk down nicely onto the connector body, let's get some heat around the underside of it as well. 37:56 Now I'm just going to work my way out a little bit further and we'll start to see the outlet end of that boot start to shrink down as well. 38:05 So it's a case of working our way from the inlet to the outlet slowly allowing the boot to properly recover. 38:18 And we're almost there. 38:29 Well one out of two demonstrations that went pretty smoothly. 38:32 So that one has shrunk down quite nicely. 38:34 Let's just get it under our overhead and we'll have a look at the finished result. 38:39 So at this point it is still nice and soft so this is a point where if we've got our wiring coming off at an angle like this and we leave it like that, that's how it's going to set up so we want to make sure we've got everything manipulated so it's in the right spot. 
38:53 At this point we can also move the boot still a little bit and make sure that it is in exactly the right position but essentially job done there and we just need to allow that to cool down and it's ready for installation so nice way, albeit slightly more expensive way than using SCL, of finishing off our connections, waterproofing or moisture proofing our harness and protecting it from our dirt and dust ingress. 39:22 Let's get into our questions and we'll see what we've got. 39:30 DynoDoug's asked, I've been working on repairing an OEM harness for a 350Z. 39:34 All of the existing PVC sheathing is hard and cracking. 39:37 Depinning every connector to resheath the entire harness seems like a lot more effort than it is worth. 39:42 Do you guys have any favourite products for putting new sheathing on an existing harness that does not require the harness to be disassembled? I'm also curious how other folks clean harnesses and connectors. 39:53 Is there a good chemical out there that cleans quickly and will not cause corrosion to the terminals? Ideally something I can put in my ultrasonic cleaner, then just submerge the entire 40:02 OK I can't speak to the ultrasonic cleaner, I don't have one and I've never used one so I couldn't really give you any input there. 40:10 Isopropyl alcohol in terms of cleaning down a harness is a good product, it doesn't degrade the sheathing on the harness so that's probably my go to. 40:20 Can be a bit difficult obviously with an existing harness that hasn't been protected with the likes of DR25, over time you're likely to get dirt, grease, grime and oil impregnated into the harness so can be quite tricky to clean out. 40:36 I don't use it very often so I cannot off the top of my head remember the name but there is a looming tape from Hellermann, I'll just see if I can find it while I'm talking. 40:48 Yeah a looming tape from Hellermann, there we go, it's called Hellermann Tyton cloth tape.
40:55 Let's just head over to my laptop screen for a moment and this is just from RS Components but this is essentially what it looks like. 41:01 Oh that's the wrong button. 41:04 So that one there you can basically wrap around the harness so it's a nicer variant of the old PVC black electrical insulation tape which never really lasts and melts and goes all gooey and horrible as soon as it gets hot. 41:20 This is designed for protecting a harness. 41:22 There's a number of other products like this. 41:25 I don't tend to use them so I can't speak extensively about them and their pros and cons. 41:30 Obviously the biggest pro is you don't need to disassemble a harness but because it is a cloth type tape, you aren't going to get the level of protection to the harness from further moisture, dust, dirt etc getting into it that you would with the likes of DR25. 41:47 Downside is the only way of getting DR25 onto the harness is going to be depinning all of the connectors as you mentioned so that might be something that's worth considering. 41:56 Yeah there are some split loom products as well, again you're not going to get the sort of protection for the harness using those though. 42:07 Right we'll head back over to our questions. 42:16 Our next question comes from Zack who's asked, I am about to start a project where I am delooming, rerouting some of the wiring and relooming an OEM Honda wiring harness. 42:23 Is there a place for Raychem tubing and boots on a project like this? Problem with using heat molded parts is that it really requires you to be working with motorsport grade products from the get go. 42:37 The heat that you need to use in order to recover those boots, is going to melt cheaper lower grade PVC insulation. 42:46 So you really need to be using TXL at a minimum wire, or ideally Tefzel and really it's designed to work with the likes of DR25 sheathing. 
42:57 So it's kind of a step up and yeah you're not really going to incorporate this on entry level products, entry level harnesses. 43:06 So depends how much time and money you really want to put into the harness as to what the correct materials are to work with. 43:12 If you are working with DR25 and TXL then absolutely these are a great addition. 43:18 Justin has asked, how would you go about sealing the end of the harness to a factory connector, say coils, cam position sensors, TPS, things that don't have a step for a boot to attach to? So I mean really not too dissimilar to what I just looked at with the little demonstration here. 43:36 This is a coil, it's an OEM coil and there isn't a lip for a boot on this but by, in this instance there are a couple of little tabs that serve to locate and positively lock into the factory connector body so those work as a nice locating device for that particular boot anyway. 43:58 But provided you are going to abrade the plastic surface that you're going to shrink that down onto, it's still not going to move around once it's glued down, it's still going to do a pretty good job there. 44:12 Zach has asked, are there any negatives to not heating the boot up fast enough? Other than taking an eternity. 44:17 No just taking an eternity, I mean there is a minimum temperature rated for recovery. 44:24 I've always found that that minimum temperature doesn't even get the boot budging so you really do need to be sort of in that 250°C + vicinity but yeah it's just a case of time but that time honestly, when you're not presenting a webinar, you'd be thankful for that time because it allows you to really position that boot and do a good job of getting it all aligned and nice exactly how you want it to sit. 44:50 Again as I mentioned, particularly when you're working with some of the more complex transitions such as those T boots.
44:58 DynoDoug's asked, on the harness shown early in the webinar, there were two trunks coming into a junction with multiple branches leaving the junction. 45:06 Why are two trunks used in this case in lieu of one? Is it a strategy to keep looms more flexible or avoid the need for extra large DR25 which is expensive and difficult to source? No the only reason that that was done like that, and I've actually since remade this harness with a single autosport connector, at the time we could not source a 79 way connector for the bulkhead so two were used instead. 45:34 There are some advantages in that, the two autosport connectors on that harness, one uses size 22 contacts and one uses size 20 so when we want to use wiring that can support a little bit more current handling, 20 gauge is around 7.5, 8 amps whereas 22 gauge, you're only going to be able to get about 5 amps through that safely so a few idiosyncrasies in terms of what you want to choose and why. 46:02 Really a lot of it comes down to the planning of your harness construction but absolutely that would not be the norm and certainly not an essential element. 46:12 Chase has asked, would there be any benefits or drawbacks of using a 2-part RTV to backfill boots as opposed to the traditional epoxy? Slightly softer cure, maybe less stress imparted on the pin retaining tabs? So if you're talking about an RTV that stays semi, sort of cures not fully rigid, the answer to that I would expect would be a no. 46:36 We're not back filling here and if we do backfill, what you're going to end up with is a really big problem if you ever come to actually effect a repair on that particular connector.
46:47 Now this little example here, no big deal on this guy but if you are looking at the autosport connectors, there's a lot going on behind these, now granted the actual wiring is wrapped in Kapton tape first anyway, the Kapton tape is pretty important because it stops the epoxy from actually getting onto the individual conductor strands and means that if we have to cut that boot off we can remove the Kapton tape and the wiring is accessible beneath. 47:19 If you want to fill that with an RTV style product then that's going to make it a little bit harder again I think if you need to effect a repair. 47:26 Really in terms of strain relief, the two key elements in that, or 3 key elements are first of all the actual rigidity of the shrunk down recovered boot, secondly the ability of the epoxy to actually secure it to the autosport connector in that knurling and then thirdly the ability of the epoxy to also grip down onto the DR25 there. 47:48 So those are the elements that are going to give us the protection and strain relief that we want. 47:58 Next question comes from Papapetad I think it is, sorry if I've messed up your name there, you're using heat shrink tubing for all the main lengths and runs of the harness, right? And is it all glue lined or are you relying mainly on the branching/splicing/connection zones to insulate the harness from the elements? So to be clear, yeah everything is protected using heat shrink. 48:20 This one, the main branch is here covered in a product called DR25. 48:27 That is not glue lined, that is just a heat shrink product that is designed for harness protection, particularly it is impermeable to all of the common chemicals that we see in the automotive 48:38 It's to a degree, a reasonable degree, abrasion resistant as well so it provides good mechanical protection to the harness but no it is not glue lined as such.
48:50 We are relying on the heat molded parts to provide the protection from the environment as you've correctly guessed there. 48:59 OK last question here, little bit off the actual topic of today, from TheKillerMarine who's asked, what membership would you recommend for someone who just wants to learn how to build a wire harness for an off road trail rig? So it would have to be watertight and handle the abuse of hitting trails. 49:14 So I would probably recommend with that, we have our Wiring Starter Package and you can find that on our hpacademy.com website. 49:25 That starter package is still at 50% off with our Black Friday sale as well at the moment. 49:31 So with that you can choose our club level or our professional motorsport level course, slightly different price points on those. 49:39 Both include our Wiring Fundamentals course which is really important no matter what sort of harness you're building, the wiring fundamentals remain the same. 49:48 If you do want to produce a properly sealed harness then typically you're going to be working at the professional level though using DR25 and these heat molded boots. 49:58 Particularly with an offroad vehicle or trail rig, it's pretty common to get pretty muddy and to clean it down maybe with a pressure washer, you would not be wanting to do that with an unsealed harness because you can almost guarantee in time you're going to end up with some moisture getting into some of the connector bodies. 50:17 Alright that's brought us to the end for today. 50:19 Thanks to everyone who has joined in. 50:22 Now for those who have not watched live and are watching this in our archive, if you do have any questions, please feel free to ask those on the forum and I'll be happy to answer them there. 50:32 Thanks to everyone for joining today and we look forward to seeing you along at the next one. 
0:00 - Intro 0:44 - What are they 2:43 - Why we use them 4:03 - Raychem vs Hellermann 4:46 - SCL heat shrink 6:52 - Glued vs non glued boots 11:35 - Sizing 20:26 - Lipped or non lipped variants 21:35 - Demo | Heat molded boot to potted ignition coil 34:09 - Demo | DTM connector 39:22 - Questions
Introduction to Mathematical Physics/N body problem in quantum mechanics/Molecules - Wikibooks, open books for an open world

Vibrations of a spring model

We treat here a simple molecule model to underline the importance of using symmetry in the study of molecules. The water molecule H$_2$O belongs to the point group called $C_{2v}$. This group comprises four symmetry operations: the identity $E$, a rotation $C_2$ of angle $\pi$, and two plane symmetries $\sigma_v$ and $\sigma'_v$ with respect to two planes passing through the rotation axis of the $C_2$ operation (see figure: Water molecule). The symmetry group $C_{2v}$ corresponds to the set of operations: identity $E$, rotation $C_2$ of angle $\pi$ around the vertical axis, symmetry $\sigma_v$ with respect to the plane perpendicular to the paper sheet, and symmetry $\sigma'_v$ with respect to the sheet's plane. The group $C_{2v}$ is one of the 32 possible point groups ([ma:group:Jones90][ph:solid:Ashcroft76]). The nomenclature is explained in the figure "Nomenclature of symmetry groups".

Figure: Nomenclature of symmetry groups in chemistry. The occurrence of symmetry operations is successively tested, starting from the top of the tree. The tree is traversed depending on the answers to the questions, "o" for yes and "n" for no. $C_n$ labels a rotation of angle $2\pi/n$, $\sigma_h$ denotes a symmetry operation with respect to a horizontal plane (perpendicular to the $C_n$ axis), $\sigma_v$ denotes a symmetry operation with respect to a vertical plane (passing through the $C_n$ axis), and $i$ denotes the inversion. Names of groups are framed.
Each of these groups can be characterized by a table of "characters" that defines the possible irreducible representations of the group. The character table for the group $C_{2v}$ is:

$C_{2v}$ | $E$ | $C_2$ | $\sigma_v$ | $\sigma'_v$
$A_1$    |  1  |   1   |     1      |      1
$A_2$    |  1  |   1   |    -1      |     -1
$B_1$    |  1  |  -1   |     1      |     -1
$B_2$    |  1  |  -1   |    -1      |      1

All the representations of the group $C_{2v}$ are one dimensional. There are four representations, labelled $A_1$, $A_2$, $B_1$ and $B_2$. In the water molecule case, the state space has nine dimensions, spanned by vectors $e_i$, $i=1,\dots,9$: indeed, each atom is represented by three coordinates. A representation corresponds here to the choice of a linear combination $u$ of the vectors $e_i$ such that for each element $g$ of the symmetry group, one has $g(u) = M_g u$. The character table provides, for each operation $g$, the trace of the representation matrix $M_g$. As all the representations considered here are one dimensional, the character is simply the (unique) eigenvalue of $M_g$. Figure figmodesmol sketches the nine representations of the $C_{2v}$ group for the water molecule. It can be seen that the space spanned by the vectors $e_i$ can be split into nine subspaces invariant under the operations $g$. Introducing representation sums ([ma:group:Jones90]), the representation $D$ considered here can be written as a sum of irreducible representations: $D = 3A_1 \oplus A_2 \oplus 2B_1 \oplus 3B_2$.

Figure: Eigenmodes of the H$_2$O molecule. Vibrating modes are framed.
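The decomposition $D = 3A_1 \oplus A_2 \oplus 2B_1 \oplus 3B_2$ stated above can be recovered from the standard character-based reduction formula $n_\alpha = \frac{1}{|G|}\sum_g \chi(g)\,\chi_\alpha(g)$. Here is a minimal sketch; the characters of the nine-dimensional Cartesian representation, $\chi = (9, -1, 1, 3)$ in the order $E, C_2, \sigma_v, \sigma'_v$, are assumed from the usual unmoved-atom counting and are not given explicitly in the text:

```python
# Reduce the 9-dimensional Cartesian representation of the water molecule
# under C2v using the reduction formula
#   n_alpha = (1/|G|) * sum_g chi(g) * chi_alpha(g).
# Operation order: E, C2, sigma_v, sigma_v'.
# chi_cartesian is assumed from unmoved-atom counting: O is fixed under every
# operation, both H atoms are fixed only under the molecular-plane reflection.

GROUP_ORDER = 4
chi_cartesian = [9, -1, 1, 3]

character_table = {
    "A1": [1, 1, 1, 1],
    "A2": [1, 1, -1, -1],
    "B1": [1, -1, 1, -1],
    "B2": [1, -1, -1, 1],
}

def reduce_representation(chi, table, order):
    """Multiplicity of each irreducible representation in the reducible chi."""
    return {name: sum(c * ca for c, ca in zip(chi, chars)) // order
            for name, chars in table.items()}

decomposition = reduce_representation(chi_cartesian, character_table, GROUP_ORDER)
print(decomposition)  # {'A1': 3, 'A2': 1, 'B1': 2, 'B2': 3}
```

The output matches the decomposition in the text: three $A_1$, one $A_2$, two $B_1$ and three $B_2$ modes, nine in total.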
Other modes correspond to rotations and translations. It appears that among the nine modes, there are three translation modes and three rotation modes. Those modes leave the distances between the atoms of the molecule unchanged. The three actual vibration modes are framed in figure figmodesmol. The dynamics is in general defined by:

$\frac{d^2x}{dt^2} = Mx$

where $x$ is the vector defining the state of the system in the $e_i$ basis. The dynamics is then diagonalized in the coordinate system corresponding to the three vibration modes. Here, symmetry considerations are sufficient to obtain the eigenvectors. The eigenvalues can then be quickly evaluated once the numerical values of the coefficients of $M$ are known.

Two nuclei, one electron

This case corresponds to the study of the H$_2^+$ molecule (see references). The Born-Oppenheimer approximation we use here consists in assuming that the protons are fixed (the movement of the protons is slow with respect to the movement of the electrons).

Remark: This problem can be solved exactly. However, we present here the variational approximation, which can be used for more complicated cases. The LCAO (Linear Combination of Atomic Orbitals) method we introduce here is a particular case of the variational method. It consists in approximating the electron wave function by a linear combination of the one-electron wave functions of the atoms (that is, the space of solutions is approximated by the subspace spanned by the atomic wave functions):

$\psi = a\psi_1 + b\psi_2$

More precisely, let us choose as basis functions the functions $\phi_{s,1}$ and $\phi_{s,2}$ that are $s$ orbitals centred on atoms $1$ and $2$ respectively. This approximation becomes more valid as $R$ is large (see figure figH2plusS).
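Carrying the trial function $\psi = a\psi_1 + b\psi_2$ into the variational principle yields a $2\times 2$ secular problem. The following sketch uses standard notation that the text does not define and is assumed here: $H_{ij} = \langle\psi_i|\hat H|\psi_j\rangle$, overlap $S = \langle\psi_1|\psi_2\rangle$, and $H_{11} = H_{22}$ by the symmetry of the two protons:

```latex
% Secular determinant for the two-state LCAO ansatz (standard notation assumed):
\det\begin{pmatrix} H_{11}-E & H_{12}-ES \\ H_{12}-ES & H_{11}-E \end{pmatrix} = 0
\quad\Longrightarrow\quad
E_g = \frac{H_{11}+H_{12}}{1+S}, \qquad E_u = \frac{H_{11}-H_{12}}{1-S},
% with eigenvectors proportional to (psi_1 + psi_2) and (psi_1 - psi_2),
% i.e. the symmetric and antisymmetric combinations introduced next.
```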
Figure: Molecule H$_2^+$: choice of the $1s$ functions associated with each of the hydrogen atoms as the basis used for the variational approach.

The problem's symmetries lead one to write the eigenvectors as:

$\psi_g = N_g(\psi_1 + \psi_2)$
$\psi_u = N_u(\psi_1 - \psi_2)$

The notation using indices $g$ and $u$ is adopted, recalling the parity of the functions: $g$ for gerade, which means even in German, and $u$ for ungerade, which means odd in German. Figure figH2plusLCAO represents those two functions.

Figure: The functions $\psi_g$ and $\psi_u$ are the solutions of the variational approximation's problem on the basis of the two $s$ orbitals of the hydrogen atoms.

Taking into account the hamiltonian allows one to lift the degeneracy of the energies, as shown in the diagram of figure figH2plusLCAOener.

Figure: Energy diagram for the $H_2^+$ molecule deduced from the LCAO method using the $s$ orbitals of the hydrogen atoms as basis.

N nuclei, n electrons

In this case, consideration of symmetries allows one to find eigensubspaces that simplify the spectral problem. Those considerations are related to point group representation theory. When the atoms of a molecule are located in a plane, this plane is a symmetry element. In the case of a linear molecule, any plane containing the molecular axis is also a symmetry plane. Two types of orbitals are distinguished:

Definition: Orbitals $\sigma$ are conserved by reflexion with respect to the symmetry plane.
Definition: Orbitals $\pi$ change their sign in the reflexion with respect to this plane.

Let us consider a linear molecule. For other examples, please refer to the references.

Example: Molecule BeH$_2$.
We look for a wave function in the space spanned by the orbitals $2s$ and $z$ of the beryllium atom Be and by the two $1s$ orbitals of the two hydrogen atoms. The space thus has four dimensions (orbitals $x$ and $y$ are not used) and the hamiltonian to diagonalize in this basis is written in general as a $4\times 4$ matrix. Taking into account the symmetries of the considered molecule allows one to put this matrix in block diagonal form. Let us choose the following basis as an approximation of the state space: $\{2s,\ z,\ 1s_1+1s_2,\ 1s_1-1s_2\}$. Then symmetry considerations imply that the orbitals have to be:

$\sigma_s = \alpha_1\, 2s + \beta_1(1s_1+1s_2)$
$\sigma_s^* = \alpha_2\, 2s - \beta_2(1s_1+1s_2)$
$\sigma_p = \alpha_3\, z + \beta_3(1s_1-1s_2)$
$\sigma_p^* = \alpha_4\, z - \beta_4(1s_1-1s_2)$

Those bindings are delocalized over the three atoms and are sketched in figure figBeH2orb.

Figure: Study of the BeH$_2$ molecule by the LCAO method. The basis chosen comprises the $2s$ and $z$ orbitals of beryllium and the two $1s$ orbitals of the hydrogen atoms.

We have two binding orbitals and two anti-binding orbitals. The energy diagram is represented in figure figBeH2ene. In the fundamental state, the four electrons occupy the two binding orbitals.

Figure: Energy diagram for the BeH$_2$ molecule by the LCAO method.

Experimental study of molecules shows that the characteristics of bondings depend only slightly on the nature of the other atoms. The problem is thus simplified by considering a $\sigma$ molecular orbital as being dicentric, that is, located between two atoms. Those orbitals are called hybrids.

Example: let us take again the example of the BeH$_2$ molecule. This molecule is linear.
This geometry is well described by $s$-$p$ hybridization. The following hybrid orbitals are defined:

$${\begin{matrix}d_{1}&=&{\frac {1}{\sqrt {2}}}(s+z)\\d_{2}&=&{\frac {1}{\sqrt {2}}}(s-z)\end{matrix}}$$

Instead of the basis $\{2s,\; z,\; 1s_{1},\; 1s_{2}\}$, the basis $\{d_{1},\; d_{2},\; 1s_{1},\; 1s_{2}\}$ is considered directly. The spectral problem is thus well advanced from the start.
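As a quick check (assuming, as is implicit above, that the $s$ and $z$ orbitals are orthonormal), the hybrids $d_1$ and $d_2$ are themselves orthonormal:

```latex
\langle d_1 | d_2 \rangle
 = \tfrac{1}{2}\big( \langle s|s\rangle - \langle s|z\rangle
                   + \langle z|s\rangle - \langle z|z\rangle \big)
 = \tfrac{1}{2}(1 - 0 + 0 - 1) = 0 ,
\qquad
\langle d_i | d_i \rangle = \tfrac{1}{2}(1 + 1) = 1 .
```

This is why passing to $\{d_1, d_2, 1s_1, 1s_2\}$ is a legitimate change of basis rather than an additional approximation.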
Unit 4 Rational Functions and Expressions

Lesson 1
Learning Focus
Understand the behavior of for very large values and for values near .
Graph and describe the features of using appropriate notation.
Lesson Summary
In this lesson, we learned about the function , a rational function. We learned about the features of the function and its behavior near the horizontal and vertical asymptotes.

Lesson 2
Learning Focus
Transform the graph of . Write equations from graphs. Predict the horizontal and vertical asymptotes of a function from the equation.
Lesson Summary
In this lesson, we learned to graph functions that are transformations of . We learned that the transformations work just as with other functions, with horizontal shifts associated with the inputs to the function and the vertical effects associated with the outputs. Using these ideas, we also wrote equations to correspond with graphs and generalized each part of the equation in the form: .

Lesson 3
Learning Focus
Define a rational function. Explore rational functions, and find patterns that predict the asymptotes and intercepts.
Lesson Summary
In this lesson, we learned to identify the horizontal and vertical asymptotes of a rational function by comparing the degree of the numerator to the degree of the denominator. The vertical asymptotes occur where the function is undefined, and the horizontal asymptote describes the end behavior of the function. Finding the intercepts is the same as for other functions we know, but there are ways to be more efficient with rational functions.

Lesson 4
Learning Focus
Write equivalent rational expressions. Find the features of rational functions with numerators that are one degree greater than the denominator.
Lesson Summary
In this lesson, we learned that equivalent expressions can be found for rational expressions, just as for rational numbers, when there are common factors in the numerator and denominator.
When the degree of the numerator is greater than the degree of the denominator, we learned that a rational expression can be written in an equivalent form by dividing the numerator by the denominator. When this operation is performed on a rational function, the quotient indicates the end behavior or slant asymptote of the function.

Lesson 5
Learning Focus
Add, subtract, multiply, and divide rational expressions.
Lesson Summary
In this lesson, we learned that performing operations on rational expressions is just like performing operations on rational numbers. Multiplication is performed by multiplying the numerators together, multiplying the denominators together, and dividing out any common factors. Division is performed by inverting the divisor and then multiplying the two fractions. Addition and subtraction require obtaining a common denominator and then combining the numerators into one fraction with the common denominator.

Lesson 6
Learning Focus
Determine a process for graphing rational functions from an equation.
Lesson Summary
In this lesson, we learned to sketch graphs of rational functions by finding the intercepts and the asymptotes, and by determining the behavior near the asymptotes. From this information we sketched the general shape of the graph without calculating exact points.

Lesson 7
Learning Focus
Solve equations that contain rational expressions.
Lesson Summary
In this lesson, we learned several strategies for solving rational equations. We found that it is often useful to combine two fractions into one expression or to multiply both sides of the equation by the common denominator of the fractions. Solving rational equations sometimes produces an extraneous solution that makes the denominator of one of the rational expressions in the original equation equal to zero and is therefore not an actual solution to the equation.
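As a sketch of the division idea from Lesson 4, here is polynomial long division in plain Python (the example polynomial and the helper's name are illustrative, not from the lessons): dividing the numerator by the denominator exposes the slant asymptote.

```python
def poly_divide(num, den):
    """Divide polynomials given as coefficient lists, highest degree first.

    Returns (quotient, remainder)."""
    num = list(num)
    q = []
    while len(num) >= len(den):
        c = num[0] / den[0]          # leading coefficient of this quotient term
        q.append(c)
        # subtract c * den aligned at the front, then drop the (now zero) lead
        for i in range(len(den)):
            num[i] -= c * den[i]
        num.pop(0)
    return q, num

# f(x) = (x^2 + 1) / (x - 1)  ->  quotient x + 1, remainder 2,
# i.e. f(x) = (x + 1) + 2/(x - 1), so the slant asymptote is y = x + 1.
quotient, remainder = poly_divide([1, 0, 1], [1, -1])
print(quotient, remainder)   # [1.0, 1.0] [2.0]
```

The remainder term 2/(x − 1) vanishes for large x, which is exactly why the quotient gives the end behavior.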
Phasor Diagram & Back EMF of Synchronous Motor

In this article, let us draw the phasor diagram of the synchronous motor and derive the expressions for the back emf E[b] and the load angle α under various power factors.

Analysis of Phasor Diagram Under Normal Conditions:
• V = Supply voltage per phase
• I[a] = Armature current per phase
• Φ = p.f. angle, i.e., the angle between V and I[a]
• cos Φ = p.f. at which the motor is working
• α = Load angle or torque angle corresponding to the load on the motor

The phasor diagram with all the above details at normal excitation is shown below. When the back emf is equal to the applied voltage (E[b] = V), the synchronous motor is said to be at normal excitation. The angle θ is called the internal machine angle or impedance angle. It is constant for a motor. The significance of θ is that it tells us that I[a] lags behind E[r] by the angle θ. Practically, the armature resistance R[a] is very small compared to the reactance X[s], and hence θ tends to 90°. It is expressed as,

From the phasor diagram, the armature current is given by I[a] = E[r] / Z[s], where the synchronous impedance is Z[s] = R[a] + jX[s]. The vector difference of E[b] and V gives the resultant emf E[r], which represents I[a] Z[s]. The resultant emf E[r] is expressed as,

The nature of the power factor is lagging if I[a] lags V by the angle Φ, while it is leading if I[a] leads V by the angle Φ. Let us see the phasor diagram and the expressions for back emf and load angle at different power factor loads.

Phasor Diagram at Lagging PF:
When the field excitation is such that the back emf is less than the applied voltage (E[b] < V), the motor is said to be 'Under-Excited'. Here the torque angle α is small and I[a] lags behind V with a poor power factor angle Φ. The phasor diagram is shown below. Applying the cosine rule to triangle OAB, Applying the sine rule to triangle OAB, Hence the load angle α can be calculated once E[b] is known.
Phasor Diagram at Leading PF:
When the excitation is increased in such a way that E[b] > V, the motor is said to be 'Over-excited'. Here the current I[a] is comparatively larger and leads the voltage V by an angle Φ, as shown below. Applying the cosine rule to triangle OAB, Applying the sine rule to triangle OAB, Hence the load angle α can be calculated once E[b] is known.

Phasor Diagram at Unity PF:
At a certain excitation, the armature current is in phase with the voltage, so that the power factor becomes unity; this occurs when E[b] ≅ V. Therefore Φ = 0 and cos Φ = 1. Applying the cosine rule to triangle OAB, Applying the sine rule to triangle OAB,
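Since the cosine- and sine-rule expressions above were given as figures, here is a numerical phasor sketch of the underlying relation implied by the diagram, V = E[b] + I[a]Z[s]. All numeric values below are illustrative, not from the article:

```python
import cmath, math

# Illustrative per-phase values (not from the article)
V = 230 + 0j                       # supply voltage, taken as reference phasor
phi = math.radians(30)             # lagging power-factor angle
Ia = 10 * cmath.exp(-1j * phi)     # armature current lags V by phi
Zs = 0.5 + 5j                      # synchronous impedance Zs = Ra + jXs

Er = Ia * Zs                       # resultant emf Er represents Ia * Zs
Eb = V - Er                        # back emf phasor, from V = Eb + Ia*Zs
alpha = math.degrees(cmath.phase(Eb))   # load angle of Eb relative to V

print(abs(Eb), alpha)
```

With these illustrative numbers, |E[b]| comes out below |V| (under-excited) and α is negative (E[b] lags V), matching the lagging-p.f. case described above.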
Can You Hear a Room? Sound propagation has distinctive features associated with the environment where it happens. Human ears can often clearly distinguish whether a given sound recording was produced in a small room, large room, or outdoors. One can even get a sense of a direction or a distance from the sound source by listening to a recording. These characteristics are defined by the objects around the listener or a recording microphone such as the size and material of walls in a room, furniture, people, etc. Every object has its own sound reflection, absorption, and diffraction properties, and all of them together define the way a sound propagates, reflects, attenuates, and reaches the listener. In acoustic signal processing, one often needs a way to model the sound field in a room with certain characteristics, in order to reproduce a sound in that specific setting, so to speak. Of course, one could simply go to that room, reproduce the required sound and record it with a microphone. However, in many cases, this is inconvenient or even infeasible. For example, suppose we want to build a Deep Neural Net (DNN)-based voice assistant in a device with a microphone that receives pre-defined voice commands and performs actions accordingly. We need to make our DNN model robust to various room conditions. To this end, we could arrange many rooms with various conditions, reproduce/record our commands in those rooms, and feed the obtained data to our model. Now, if we decide to add a new command, we would have to do all this work once again. Other examples are Virtual Reality (VR) applications or architectural planning of buildings where we need to model the acoustic environment in places that simply do not exist in reality. In the case of our voice assistant, it would be beneficial to be able to encode and digitally record the acoustic properties of a room in some way so that we could take any sound recording and “embed” it in the room by using the room “encoding”. 
This would free us from physically accessing the room every time we need it. In the case of VR or architectural planning applications, the goal then would be to digitally generate a room’s encoding only based on its desired physical dimensions and the materials and objects contained in it. Thus, we are looking for a way to capture the acoustic properties of a room in a digital record, so that we can reproduce any given audio recording as if it was played in that room. This would be a digital acoustic model of the room representing its geometry, materials, and other things that make us “hear a room” in a certain sense. What is RIR? Room impulse response (RIR for short) is something that does capture room acoustics, to a large extent. A room with a given sound source and a receiver can be thought of as a black-box system. Upon receiving on its input a sound signal emitted by the source, the system transforms it and outputs whatever is received at the receiver. The transformation corresponds to the reflections, scattering, diffraction, attenuation and other effects that the signal undergoes before reaching the receiver. Impulse response describes such systems under the assumption of time-invariance and linearity. In the case of RIR, time-invariance means that the room is in a steady state, i.e, the acoustic conditions do not change over time. For example, a room with people moving around, or a room where the outside noise can be heard, is not time invariant since the acoustic conditions change with time. Linearity means that if the input signal is a scaled superposition of two other signals, x and y, then the output signal is a similarly scaled superposition of the output signals corresponding to x and y, individually. Linearity holds with sufficient fidelity in most practical acoustic environments (while time-invariance can be achieved in a controlled environment). Let us take a digital approximation of a sound signal. 
It is a sequence of discrete samples, as shown in Fig. 1. Each sample is a positive or negative number that corresponds to the degree of instantaneous excitation of the sound source, e.g., a loudspeaker membrane, as measured at discrete time steps. It can be viewed as an extremely short sound, or an impulse. The signal can thus be approximately viewed as a sequence of scaled impulses. Now, given time-invariance and linearity of the system, some mathematics shows that the effect of a room-source-receiver system on an audio signal can be completely described by its effect on a single impulse, which is usually referred to as an impulse response. More concretely, the impulse response is a function h(t) of time t > 0 (the response to a unit impulse at time t = 0) such that for an input sound signal x(t), the system's output is given by the convolution between the input and the impulse response. This is a mathematical operation that, informally speaking, produces a weighted sum of the delayed versions of the input signal, where the weights are defined by the impulse response. This reflects the intuitive fact that the received signal at time t is a combination of delayed and attenuated values of the original signal up to time t, corresponding to reflections from walls and other objects, as well as scattering, attenuation and other acoustic effects. For example, in the recordings below, one can hear a RIR recorded with a clapping sound, an anechoic recording of singing, and their convolution.

[Audio examples: "Singing anechoic", "Singing with RIR"]

It is often useful to consider sound signals in the frequency domain, as opposed to the time domain. It is known from Fourier analysis that every well-behaved periodic function can be expressed as a sum (infinite, in general) of scaled sinusoids. The sequence of the (complex) coefficients of the sinusoids within the sum, the Fourier coefficients, provides another, yet equivalent representation of the function.
In other words, a sound signal can be viewed as a superposition of sinusoidal sound waves or tones of different frequencies, and the Fourier coefficients show the contribution of each frequency in the signal. For finite sequences of practical interest, such as digital audio, such decompositions into periodic waves can be efficiently computed via the Fast Fourier Transform. For non-stationary signals such as speech and music, it is more instructive to do the analysis using the short-time Fourier transform (STFT). Here, we split the signal into short equal-length segments and compute the Fourier transform for each segment. This shows how the frequency content of the signal evolves with time (see Fig. 2). That is, while the signal waveform and Fourier transform give us only time and only frequency information about the signal (although one is recoverable from the other), the STFT provides something in between. A visual representation of an STFT, such as the one in Fig. 2, is called a spectrogram. The horizontal and vertical axes show time and frequency, respectively, while the color intensity represents the magnitude of the corresponding Fourier coefficient on a logarithmic scale (the brighter the color, the larger the magnitude of the frequency at the given time).

Measurement and Structure of RIR

In theory, the impulse response of a system can be measured by feeding it a unit impulse and recording whatever comes at the output with a microphone. Still, in practice, we cannot produce an instantaneous and powerful audio signal. Instead, one could record the RIR approximately by using short impulsive sounds. One could use a clapping sound, a starter gun, a balloon popping sound, or the sound of an electric spark discharge. The results of such measurements (see, for example, Fig. 3) may not be sufficiently accurate for a particular application, due to the error introduced by the structure of the input signal.
An ideal impulse, in some mathematical sense, has a flat spectrum, that is, it contains all frequencies with equal magnitude. The impulsive sounds above usually significantly deviate from this property. Measurements with such signals may also be poorly reproducible. Alternatively, a digitally created impulsive sound with desired characteristics could be played with a loudspeaker, but the power of the signal would still be limited by speaker characteristics. Among other limitations of measurements with impulsive sounds are: particular sensitivity to external noise (from outside the room), sensitivity to nonlinear effects of the recording microphone or emitting speaker, and the directionality of the sound source. Fortunately, there are more robust methods of measuring room impulse response. The main idea behind these techniques is to play a transformed impulsive sound with a speaker, record the output, and apply an inverse transform to recover the impulse response. The rationale is: since we cannot play an impulse as it is with sufficient power, we "spread" its power across time, so to speak, while maintaining the flat spectrum property over a useful range of frequencies. An example of such a "stretched impulse" is shown in Fig. 4. Other variants of such signals are Maximum Length Sequences and the Exponential Sine Sweep. An advantage of measurement with such non-localized and reproducible test signals is that ambient noise and microphone nonlinearities can be effectively averaged out. There are also some technicalities that need to be dealt with. For example, the need for synchronization of the emitting and recording ends, ensuring that the test signal covers the whole length of the impulse response, and the need for deconvolution, that is, applying an inverse transform for recovering the impulse response. The waveform in Fig. 5 shows another measured RIR. The initial spike at 0-3 ms corresponds to the direct sound that has arrived at the microphone along a direct path.
The smaller spikes following it, starting from about 3-5 ms after the first spike, clearly show several early specular reflections. After about 80 ms there are no distinctive specular reflections left, and what we see is the late reverberation or the reverberant tail of the RIR. While the spectrogram of the RIR seems not very insightful apart from the remarks so far, there is some information one can extract from it. It shows, in particular, how the intensity of different frequencies decreases with time due to losses. For example, it is known that the intensity loss due to air absorption (attenuation) is stronger for higher frequencies. At low frequencies, the spectrogram may exhibit distinct persistent frequency bands, room modes, that correspond to standing waves in the room. This effect can be seen below a certain frequency threshold depending on the room geometry, the Schroeder frequency, which for most rooms is 20 – 250 Hz. Those modes are visible due to the lower density of resonant frequencies of the room near the bottom of the spectrum, with wavelengths comparable to the room dimensions. At higher frequencies, modes overlap more and more and are not distinctly visible. The RIR can also be used to estimate certain parameters associated with a room, the most well-known of them being the reverberation time or RT60. When an active sound source in a room is abruptly stopped, it will take a longer or shorter time for the sound intensity to drop to a certain level, depending on the room's geometry, materials, and other factors. In the case of RT60, the question is how long it takes for the sound energy density to decrease by 60 decibels (dB), that is, to one millionth of its initial value. As noted by Schroeder (see the references), the average signal energy at time t used for computing reverberation time is proportional to the tail energy of the RIR, that is, the total energy after time t.
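This backward-integration idea, together with a line fit on the decay curve in dB, can be sketched in a few lines of Python. The sketch runs on a synthetic exponential decay; the sample rate and decay constant are illustrative, not from the article:

```python
import math

def rt60_schroeder(h, fs):
    """Estimate RT60 from an impulse response h sampled at fs Hz."""
    energy = [v * v for v in h]
    # Schroeder backward integration: tail energy remaining after each sample
    edc, acc = [0.0] * len(energy), 0.0
    for n in range(len(energy) - 1, -1, -1):
        acc += energy[n]
        edc[n] = acc
    db = [10 * math.log10(e / edc[0]) for e in edc]
    # least-squares line fit on the -5 dB .. -25 dB part of the decay curve
    pts = [(n / fs, d) for n, d in enumerate(db) if -25.0 <= d <= -5.0]
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    return -60.0 / slope     # seconds needed for a 60 dB drop

# Synthetic RIR with a purely exponential envelope: its decay is linear
# on a dB scale, so the line fit is clean.
fs, tau = 8000, 1000.0
h = [math.exp(-n / tau) for n in range(8000)]
print(rt60_schroeder(h, fs))
```

For a real measured RIR one would, as noted below, restrict the fit to the part of the curve that is actually linear before the noise floor takes over.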
Thus, we can compute RT60 by plotting the tail energy level of the RIR on a dB scale (with respect to the total energy). For example, the plot corresponding to the RIR above is shown in Fig. 6. In theory, the RIR tail energy decay should be exponential, that is, linear on a dB scale, but, as can be seen here, it drops irregularly starting at -25 dB. This is due to RIR measurement limitations. In such cases, one restricts attention to the linear part, normally between the values -5 dB and -25 dB, and obtains RT60 by fitting a line to the measurements of the RIR in logarithmic scale, by linear regression, for example.

RIR Simulation

As mentioned in the introduction, one often needs to compute a RIR for a room with given dimensions and material specifications without physically building the room. One way of achieving this would be by actually building a scaled model of the room. Then we could measure the RIR by using test signals with accordingly scaled frequencies, and rescale the recorded RIR frequencies. A more flexible and cheaper way is through computer simulations, by building a digital model of the room and modeling sound propagation. Sound propagation in a room (or other media) is described with differential equations called wave equations. However, the exact solution of these equations is out of reach in most practical settings, and one has to resort to approximate methods for simulations. While there are many approaches for modeling sound propagation, most common simulation algorithms are based on either a geometrical simplification of sound propagation or element-based methods. Element-based methods, such as the Finite Element method, rely on the numerical solution of wave equations over a discretized space. For this purpose, the room space is approximated with a discrete grid or a mesh of small volume elements. Accordingly, functions describing the sound field (such as the sound pressure or density) are defined down to the level of a single volume element.
The advantage of these methods is that they are more faithful to the wave equations and hence more accurate. However, the computational complexity of element-based methods grows rapidly with frequency, as higher frequencies require a higher resolution of the mesh (smaller volume element size). For this reason, for wideband applications like speech, element-based methods are often used to model sound propagation only for low frequencies, say, up to 1 kHz. Geometric methods, on the other hand, work in the time domain. They model sound propagation in terms of sound rays or particles with intensity decreasing with the squared path length from the source. As such, wave-specific interference between rays is abstracted away. Thus rays effectively become sound energy carriers, with the sound energy at a point being computed as the sum of the energies of rays passing through that point. Geometric methods give plausible results for not-too-low frequencies, e.g., above the Schroeder frequency. Below that, wave effects are more prominent (recall the remarks on room modes above), and geometric methods may be inaccurate. The room geometry is usually modeled with polygons. Walls and other surfaces are assigned absorption coefficients; the fraction of incident sound energy that is not absorbed is reflected back into the room by the surface (the absorbed part is "lost" from the simulation perspective). One may also need to model air absorption and sound scattering by rough materials with features that are not too small compared to the sound wavelengths. Two well-known geometric methods are stochastic Ray Tracing and the Image Source method. In Ray Tracing, a sound source emits a (large) number of sound rays in random directions, also taking into account the directivity of the source. Each ray has some starting energy.
It travels at the speed of sound and reflects from the walls while losing energy with each reflection, according to the absorption coefficients of the walls, as well as due to air absorption and other losses. The reflections are either specular (incident and reflected angles are equal) or scattering happens, the latter usually being modeled by a random reflection direction. The receiver registers the remaining energy, time, and angle of arrival of each ray that hits its surface. Time is tracked in discrete intervals. Thus, one gets an energy histogram corresponding to the RIR, with a bucket for each time interval. In order to synthesize the temporal structure of the RIR, a random Poisson-distributed sequence of signed unit impulses can be generated, which is then scaled according to the energy histogram obtained from the simulation to give a RIR. For psychoacoustic reasons, one may want to treat different frequency bands separately. In this case, the procedure of scaling the random impulse sequence is done for band-passed versions of the sequence, and then their sum is taken as the final RIR. The Image Source method models only specular reflections (no scattering). In this case, a reflected ray from a source towards a receiver can be replaced with rays coming from "mirror images" of the source with respect to the reflecting wall, as shown in Fig. 8. This way, instead of keeping track of reflections, we construct images of the source relative to each wall and consider straight rays from all sources (including the original one) to the receiver. These first order images cover single reflections. For rays that reach the receiver after two reflections, we construct the images of the first order images, call them second order images, and so on, recursively. For each reflection, we can also incorporate material absorption losses, as well as air absorption.
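A minimal sketch of this mirror-image construction for a rectangular ("shoebox") room follows; the room dimensions, source and receiver positions, and speed of sound are illustrative assumptions, not values from the article:

```python
import math

def first_order_images(src, room):
    """Mirror the source across each of the 6 walls of a shoebox room
    [0, Lx] x [0, Ly] x [0, Lz]; one image point per wall."""
    (sx, sy, sz), (Lx, Ly, Lz) = src, room
    return [(-sx, sy, sz), (2 * Lx - sx, sy, sz),
            (sx, -sy, sz), (sx, 2 * Ly - sy, sz),
            (sx, sy, -sz), (sx, sy, 2 * Lz - sz)]

def arrival_delays(src, rcv, room, c=343.0):
    """Delays (s) of the direct path and the six first-order reflections:
    each reflected path length equals the image-to-receiver distance."""
    dist = lambda a, b: math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    paths = [src] + first_order_images(src, room)
    return [dist(p, rcv) / c for p in paths]

room, src, rcv = (4.0, 5.0, 3.0), (1.0, 1.0, 1.0), (3.0, 1.0, 1.0)
print([round(d * 1000, 2) for d in arrival_delays(src, rcv, room)])  # ms
```

Higher-order images would be generated by recursively reflecting these image points again, exactly as described above.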
The final RIR is constructed by considering each ray as an impulse that undergoes scaling due to absorption and distance-based energy losses, as well as a distance-based phase shift (delay) for each frequency component. Before that, we need to filter out invalid image sources for which the image-receiver path does not intersect the image reflection wall or is blocked by other walls. While the Image Source method captures specular reflections, it does not model scattering, which is an important aspect of the late reverberant part of a RIR. It does not model wave-based effects either. More generally, each existing method has its advantages and shortcomings. Fortunately, the shortcomings of different approaches are often complementary, so it makes sense to use hybrid models that combine several of the methods described above. For modeling late reverberation, stochastic methods like Ray Tracing are more suitable, while they may be too imprecise for modeling the early specular reflections in a RIR. One could further rely on element-based methods like the Finite Element method for modeling the RIR below the Schroeder frequency, where wave-based effects are more prominent.

Room impulse response (RIR) plays a key role in modeling acoustic environments. Thus, when developing voice-related algorithms, be it for voice enhancement, automatic speech recognition, or something else, here at Krisp we need to keep in mind that these algorithms must be robust to changes in acoustic settings. This is usually achieved by incorporating the acoustic properties of various room environments, as was briefly discussed here, into the design of the algorithms. This provides our users with a seamless experience, largely independent of the room from which Krisp is being used: they don't hear the room.

References

1.
[Overview of room acoustics techniques] M. Vorländer, Auralization: Fundamentals of Acoustics, Modelling, Simulation, Algorithms and Acoustic Virtual Reality. Springer, 2008.
2. [Overview of room acoustics techniques] H. Kuttruff, Room Acoustics (5th ed.). CRC Press, 2009.
3. [Signals and systems, including some Fourier analysis] K. Deergha Rao, Signals and Systems. Birkhäuser Cham, 2018.
4. [Exposition of simulation methods] D. Schröder, Physically Based Real-Time Auralization of Interactive Virtual Environments. PhD thesis, RWTH Aachen, 2011.
5. [Maximum Length Sequences for RIR measurement] M. R. Schroeder, “Integrated-impulse Method for Measuring Sound Decay without using Impulses”. The Journal of the Acoustical Society of America, vol. 66, pp. 497–500, 1979.
6. [Stretched impulse method for RIR measurement] N. Aoshima, “Computer-generated Pulse Signal applied for Sound Measurement”. The Journal of the Acoustical Society of America, vol. 69, no. 5, pp. 1484–1488, 1981.
7. [Exponential Sine Sweep technique for RIR measurement] A. Farina, “Simultaneous Measurement of Impulse Response and Distortion with a Swept-sine Technique”. In Audio Engineering Society Convention 108, 2000.
8. [Comparison of RIR measurement techniques] G. B. Stan, J. J. Embrechts, and D. Archambeau, “Comparison of different Impulse Response Measurement Techniques”. Journal of the Audio Engineering Society, vol. 50, pp. 249–262, 2002.
9. [Schroeder Integration for RT60 calculation] M. R. Schroeder, “New Method of Measuring Reverberation Time”. The Journal of the Acoustical Society of America, vol. 37, no. 3, pp. 409–412, 1965.

The article is written by:
• Tigran Tonoyan, PhD in Computer Science, Senior ML Engineer II
• Hayk Aleksanyan, PhD in Mathematics, Architect, Tech Lead
• Aris Hovsepyan, MS in Computer Science, Senior ML Engineer I
Using Units in Calc with Emacs

Calc is an advanced desk calculator and mathematical tool written by Dave Gillespie that runs as part of the GNU Emacs environment. One special interpretation of algebraic formulas is as numbers with units. For example, the formula 5 m / s^2 can be read “five meters per second squared.” Of course it can!

3 thoughts on “Using Units in Calc with Emacs”

1. Grant Rettke: Using Units in Calc with Emacs http://t.co/KgXh27E50g
2. Cool! Btw, where’s the quote from?
3. RICK HANSON: The quotes are referenced with the links inside each quote itself and appear in or near the first line of content on each of those referenced pages. Thought it might be simpler than two links. What do you think?
Lie integration

I have now typed a detailed proof of the claim that for the Lie $n$-algebra $b^n \mathbb{R}$ its Lie integration to a smooth $n$-group is indeed $\mathbf{B}^n \mathbb{R}$. I always expected that this should be a corollary of a relation between two model structures, responsible for two nonabelian cohomology theories – for Lie groups and Lie algebras; or corresponding infinity categories. Is this too far from the present understanding?

Okay, I have now fully boosted up statement and proof at Lie integration to line n-groups using Domenico’s argument.

Okay, good. That then gives a nice elegant proof that $\exp(b^{n-1}\mathbb{R}) \simeq \mathbf{B}^n \mathbb{R}$ also without the truncation. (One can show it with my original style of argument, too, but by far not as elegantly.)

Yes, it should: the argument picks a solution of $d A=\omega$ once it is known that $\omega$ is exact. So on an $n$-sphere it applies to any closed $(0\lt k\lt n)$-form, and to closed $n$-forms whose integral over $S^n$ vanishes.

Thanks, Domenico. That looks of course like a more elegant/powerful/better argument. Hm, I still need to think about this. But maybe not tonight. But this should also give the result nicely for extension of closed $(k \lt n)$-forms from $S^n$ to $D^{n+1}$.

added an argument to pass from a single $n$-form to a smooth family.

I have further polished the proof at integration to line n-group (now the use of symbols might even be consistent…) and have tried – following a suggestion by Domenico – to indicate better how we are essentially just invoking the de Rham theorem but need to be careful to do it properly in smooth families.

Your SVG works for me (Firefox 3.5.5 on Windows - this is work’s setup)

Oh, for me, too. What I meant was that the editor didn’t work!
When I open an nLab page, hit “edit”, click on a point in the edit pane such that the “create an SVG”-button appears, then click on that button, what I get is a window that tries to display the SVG editor properly but fails, and which does not accept any mouse click input. But the same SVG editor can be found elsewhere on the web, and that works for me. and looks very pretty I might add! Okay. I was very dissatisfied, as it has a large amount of free hand drawing and I didn’t really have the nerve for that. I didn’t figure out how to create a copy of some element, for instance. And is there a way to invoke a grid such as to facilitate drawing well-aligned lines? (Just asking. If the answer is: no, but you can use any of the one hundred other SVG editors out there, that’s fine with me.) Your SVG works for me (Firefox 3.5.5 on Windows - this is work’s setup) - and looks very pretty I might add! didn’t find much time over the weekend, but tried to work a bit on material related to Lie integration. I have now typed a detailed proof of the claim that for the Lie $n$-algebra $b^n \mathbb{R}$ its Lie integration to a smooth $n$-group is indeed $\mathbf{B}^n \mathbb{R}$. That’s in the section Integration to line n-groups. I had gone through the trouble of preparing an SVG graphic, displayed there, that is supposed to illustrate the idea of how to identify smooth forms with sitting instants on the $n$-simplex with smooth forms on the $n$-ball. This was the first time I used the SVG editor, but we haven’t become friends yet. (And by the way: the version linked to from within the nLab edit pages does not work for me (Firefox on Win). It appears with duplicated menu items that don’t react properly to mouse clicks. I used the corresponding version found elsewhere on the web.) I tried to polish the discussion of forms on simplices that have sitting instants a bit. But it is still not really good.
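For readers without the entry at hand: the Lie integration under discussion — where these simplex forms with sitting instants enter — is, roughly in the entry’s notation, the simplicial presheaf (stated here for orientation only; see the entry for the precise smoothness and sitting-instant conditions):

```latex
\exp(\mathfrak{g})
  \;\colon\;
  (U, [k])
  \;\mapsto\;
  \mathrm{Hom}_{dgcAlg}\big(\, CE(\mathfrak{g}),\; \Omega^\bullet_{si,vert}(U \times \Delta^k) \,\big)
```

For $\mathfrak{g} = b^{n-1}\mathbb{R}$ the Chevalley–Eilenberg algebra is free on a single closed degree-$n$ generator, so a $k$-cell is simply a closed vertical $n$-form on $U \times \Delta^k$; this is what makes the comparison with $\mathbf{B}^n \mathbb{R}$ tractable.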
did some polishing of the exposition at Lie integration, following a list of comments by Jim Stasheff I have now expanded the (currently) three Examples-sections at Lie integration: • integration of Lie algebras to Lie groups • integration to line/circle Lie n-groups; • integration of Lie 2-algebras to Lie 2-groups. There is considerably more to be said. But I am running out of steam. am working on the entry Lie integration Here is what I did so far: • moved the discussion of references from the introduction to a References-section at the end and polished slightly. • created a Definition-section with two subsections: □ first is the Sullivan-Hinich-Getzler integration to a “bare oo-groupoid” (no smooth structure), □ second is the integration to $\infty$-Lie groupoids. By the way, I added now to Lie infinity-groupoid in the section Lie group: differential coefficients a discussion of the general abstract mechanism underlying the Getzler-Henriques prescription for integration of oo-Lie algebras: the claim is that for $G$ an $\infty$-Lie group 1. the object $\mathbf{\flat}_{dR} \mathbf{B}G$ is essentially given by the sheaf of flat $\mathfrak{g}$-valued forms, for $\mathfrak{g}$ the corresponding $L_\infty$-algebra; (and this is demonstrated in the entry) 2. moreover the object $\mathbf{\Pi}_{dR} \mathbf{\flat}_{dR} \mathbf{B}G$ is $\exp(\mathfrak{g})$ (in the notation of the entry) i.e. is the Getzler-Henriques prescription extended to simplicial presheaves in the evident way. This is (in somewhat different notation) that old observation of mine that we should be thinking of the Getzler-Henriques prescription as forming the path oo-groupoid of the sheaf of $L_\infty$-algebra valued forms.
And indeed, this is evidently true for the evident naive model of the path $\infty$-groupoid: if for $X$ a sheaf I write ${\tilde \mathbf{\Pi}}(X) : U \mapsto Hom(U \times \Delta^\bullet_{diff}, X)$ for the simplicial presheaf whose $k$-cells are $k$-dimensional paths in $X$, then for $X = \mathbf{\flat}_{dR} \mathbf{B}G = \Omega^1_{flat}(-, \mathfrak{g})$ we have the ordinary pushout $\array{ \mathbf{\flat}_{dR} \mathbf{B}G &\to& * \\ \downarrow && \downarrow \\ {\tilde \mathbf{\Pi}} \mathbf{\flat}_{dR} \mathbf{B}G &\to& \exp(\mathfrak{g}) }$ in the category of simplicial presheaves. So this all looks like it should look. Unfortunately, I still have to fully understand if and why the naive model $\tilde \mathbf{\Pi}$ in fact does model the correct left derived functor that defines $\mathbf{\Pi}$ in this case: because the formula for that which I have is like this $\tilde \mathbf{\Pi}$, but much thicker, with various cofibrant replacements thrown in. I still cannot show that it may be modeled by $\tilde \mathbf{\Pi}$ when applied to 0-truncated simplicial sheaves. That’s been a stumbling block for me for quite some time now. And I replied: I forgot why I had put it there. Somebody had told me about some complaint about false attribution of originality. But I forget the details. Maybe we should just remove it. In 2 and 3 Zoran and David are asking you what is the meaning of the strange sentence that the origins of Henriques’s paper from 2004 predate the origins of Getzler’s 2002 paper. Ahm, Zoran? And Getzler does not treat arbitrary L_oo algebras, but nilpotent ones :) No, wait before you edit, I tried to say that correctly: Getzler notices that for any $L_\infty$-algebra, the Sullivan-like construction should be regarded as Lie integration. It is only for the special case of nilpotent $L_\infty$-algebras that he gives a prescription to cut down the large result of the Sullivan construction to a smaller equivalent one.
For the general notion of Lie integration however, this is pretty irrelevant. Its point is to give more tractable models. I do not understand “whose origin possibly precedes that of the previous article and which considers Banach manifold structure on the resulting ∞-groupoids” I saw this comment again when I reworked the entry now, but I noticed that I forgot why I had put it there. Somebody had told me about some complaint about false attribution of originality. But I forget the details. Maybe we should just remove it. And Getzler does not treat arbitrary L_oo algebras, but nilpotent ones :) I’d edit the page but for the spam blocker. Perhaps, Zoran, Urs means the technique or idea from Henriques predates Getzler’s. I updated references a bit (it would be good, when you put the reference link to arXiv, to have the number of the article in the link name sometimes; e.g. when using printouts of nLab pages offline it is nice to have the reference handy, and just “arxiv” does not help much). I do not understand “whose origin possibly precedes that of the previous article and which considers Banach manifold structure on the resulting ∞-groupoids” about Henriques’ article in comparison to Getzler’s. What does it mean that its origin is earlier? Getzler claims e.g. that it took him 7 years just to get a crucial improvement into a calculus of Dupont which is essential in his paper. polished/reworked the entry Lie integration. But it’s still somewhat stubby. Right, I think it is a little subtle: consider the non-smooth case, the $\infty$-topos just over bare dgc-algebras. There is then the kind of inclusion of dg-algebras discussed at function algebras on infinity-stacks which gives a right Quillen functor (roughly) $Spec : dgcAlg^{op} \to [dgcAlg,sSet]_{proj}$ that is given by the “Lie integration”-formula $Spec A : (Spec U, [k]) \mapsto dgcAlg(A , U \otimes \Omega^\bullet_{pl}(\Delta^k)) \,,$ where on the right we have the standard polynomial differential forms on the simplex.
The fibrant objects in $dgcAlg^{op}$ are cofibrant in $dgcAlg$ hence in particular semi-free hence may be thought of as CE-algebras of $L_\infty$-algebras. So that’s good. And we can generalize this general situation to a smooth setup, as Herman Stel has done in his thesis (master thesis Stel (schreiber)). However, what I am still unfortunately lacking is an understanding of the relation of the Spec-functor as obtained there to the formula for smooth Lie integration with smooth forms on the simplex. I can guess how it should all be related, but I cannot prove it yet. Maybe it’s easy and I am just being dense, of course. But for the present purpose I like to adopt a slightly different perspective anyway. From general abstract reasoning in cohesive $\infty$-toposes, one finds that “exponentiated $\infty$-Lie algebras” are objects that are sent by $\Pi$ to the point. One can show that smooth Lie integration produces such objects in $Smooth \infty Grpd$ and then just take it as a machine that acts as a source for examples of such objects. Then you can work backwards and check if that machine sends an $L_\infty$-algebra to the smooth $\infty$-groupoid that you would expect to be associated to it, and the current proofs at Lie integration serve to confirm that. There is yet another issue: even in the non-smooth case the Spec functor is not necessarily doing what you would think it does. That’s because its right derived functor involves fibrant replacement. Now $b^{n-1}\mathbb{R}$ is fibrant in $dgcAlg^{op}$, but for instance a general non-abelian Lie algebra $\mathfrak{g}$ is not. It’s not clear that $Spec CE(\mathfrak{g})$ is indeed equivalent to what we are calling here $\exp(\mathfrak{g})$, because it is $\exp( P \mathfrak{g})$ for some fibrant replacement $P\mathfrak{g}$. I think there are different ways here in which $L_\infty$-algebras map into $\infty$-stacks, and it is important to keep them sorted out.
For instance there is yet another way where we don’t apply $\exp(-)$ or $Spec$ but realize the $L_\infty$-algebras directly as “infinitesimal $\infty$-groupoids”. After some back and forth I am now thinking that this is described by the discussion at cohesive oo-topos – Infinitesimal cohesion. (sorry, I am writing all this in a bit of a haste, this really deserves to be discussed in more detail) (sorry, I am writing all this in a bit of a haste It is still a fantastic answer! lacking is an understanding of the relation of the Spec-functor as obtained there, to the formula for smooth Lie integration with smooth forms on the simplex. I can guess how it should all be related, but I cannot prove it yet. It looks very optimistic conceptually being squeezed to a somewhat lower-level technical question! Thanks for sharing the state of the art. I have worked a bit more on the Idea section at Lie integration, expanded it, tried to make it read more smoothly, and added more pointers to the references. I have worked on further polishing and streamlining the entry. Have collected all the discussion of $\mathfrak{a}$-valued differential forms on simplices into a new subsection Higher dimensional paths in an infinity-Lie algebroid. Today’s reference • Pavol Ševera, Michal Širaň, Integration of differential graded manifolds, arxiv/1506.04898 inserted into Lie integration, without comments. Thanks! Pavol told me about this result a few weeks back when he visited Prague. That’s neat. added statement of Vincent Braunack-Mayer’s result that higher Lie integration as defined in FSS 12 is right Quillen as a functor to smooth $\infty$-groupoids (here): There is a Quillen adjunction $dgcAlg^{op}_{\mathbb{R}, \geq 0, proj} \; \underoverset {\underset{ Spec }{\longrightarrow}} {\overset{ \mathcal{O} }{\longleftarrow}} {\phantom{A}\phantom{{}_{Qu}}\bot_{Qu}\phantom{A}} \; [CartSp^{op}, sSet]_{proj, loc}$ between the opposite of the projective model structure on connective dgc-algebras and the projective local model structure on simplicial presheaves over CartSp, regarded as a site via the good open cover coverage (i.e. presenting smooth ∞-groupoids); given by nerve and realization with respect to the functor of smooth differential forms on simplices $CartSp \times \Delta \overset{\Omega^\bullet_{vert,si}}{\longrightarrow} dgcAlg_{\mathbb{R}, conn}^{op}$ from this Def.: 1. the right adjoint $Spec$ sends a dgc-algebra $A \in dgcAlg_{\mathbb{R},\geq 0}$ to the simplicial presheaf which in degree $k$ is the set of dg-algebra-homomorphisms from $A$ into the dgc-algebras of smooth differential forms on simplices $\Omega^\bullet_{si,vert}(-)$ (this Def.): $Spec(A) \;\colon\; \mathbb{R}^n \times \Delta[k] \;\mapsto\; Hom_{dgcAlg_{\mathbb{R}}} \left( A , \Omega^\bullet_{si, vert}(\mathbb{R}^n \times \Delta^k_{mfd}) \right)$ 2. the left adjoint $\mathcal{O}$ is the Yoneda extension of the functor $\Omega^\bullet_{vert,si} \;\colon\; CartSp \times \Delta \to dgcAlg_{\mathbb{R},conn}^{op}$ assigning dgc-algebras of smooth differential forms on simplices from this Def., hence which acts on a simplicial presheaf $\mathbf{X} \in [CartSp^{op}, sSet] \simeq [CartSp^{op} \times \Delta^{op}, Set]$, expanded via the co-Yoneda lemma as a coend of representables, as $\mathcal{O} \;\colon\; \mathbf{X} \simeq \int^{n,k} y(\mathbb{R}^n \times \Delta[k]) \times \mathbf{X}(\mathbb{R}^n)_k \;\mapsto\; \int_{n,k} \underset{\mathbf{X}(\mathbb{R}^n)_k}{\prod} \Omega^\bullet_{si,vert}(\mathbb{R}^n \times \Delta^k_{mfd})$ diff, v61, current I think something is the wrong way around. You have, in the adjunction, written $Spec$ as going from presheaves to algebras. Thanks for catching this. Fixed now. The big question now is whether this Quillen adjunction exhibits R-cohomology localization, as in section 3 of function algebras on infinity-stacks. It must come at least close…. Isn’t this a particular instance of the nerve-realization adjunction? Shouldn’t it be indicated as such?
Yes, it does say so: …given by nerve and realization with respect to the functor of smooth differential forms on simplices $CartSp \times \Delta \overset{\Omega^\bullet_{vert,si}}{\longrightarrow} dgcAlg_{\mathbb{R}, conn}^{op}$ from this Def.:… Indeed, somehow I didn’t notice it originally. Integration from Lie algebroids to groupoids is also studied in the dual language and generality of integration of Lie–Rinehart algebras and commutative Hopf algebroids, • Alessandro Ardizzoni, Laiachi El Kaoutit, Paolo Saracco, Towards differentiation and integration between Hopf algebroids and Lie algebroids, arXiv:1905.10288 diff, v67, current added pointer to: • Rui Loja Fernandes, Marius Crainic, Lectures on Integrability of Lie Brackets, Geometry & Topology Monographs 17 (2011) 1–107 [arXiv:math.DG/0611259, doi:10.2140/gtm.2011.17.1] diff, v68, current Some questions I didn’t find answers to in Cech cocycles or $L_{\infty}$-algebra connections or on this page (maybe for lack of a thorough search): 1. Do we know if $\mathrm{exp}_{\Delta}(\mathfrak{g})$ satisfies homotopy descent over cartesian spaces? 2. Given a Lie $\infty$-group $G$, and its tangent $L_{\infty}$-algebra $\mathfrak{g}$, we have a canonical map $\mathrm{exp}_{\Delta}(\mathfrak{g})\to \mathbf{B}G$. Can we say when this map is a weak equivalence? We know that it is when $G$ is a simply connected Lie group for example (or the other cases listed on this page). What can be said about general Lie $n$-groups and Lie $\infty$-groups? On the second question: For $G$ a Lie group and $\mathfrak{g}$ its Lie algebra, it will be the 1-truncation $\tau_1 \exp_\Delta(\mathfrak{g})$ which is weakly equivalent to $\mathbf{B}G$. The higher truncations of $\exp_\Delta(\mathfrak{g})$ will pick up higher stacky homotopy groups from the ordinary homotopy groups of $G$. For simply connected Lie groups we can equivalently take $\tau_2 \exp_\Delta(\mathfrak{g})$.
This is modeled by $cosk_3 \exp_\Delta(\mathfrak{g})$ and this is for instance made use of in constructing the stacky refinement of $\tfrac{1}{2} \mathbf{p}_1 \,:\, \mathbf{B} Spin(n) \to \mathbf{B}^3 U(1)$ as a map of simplicial presheaves out of $cosk_3 \exp_\Delta(\mathfrak{g})$. Regarding the first question: This is a good question, which, I am afraid, I had never really discussed. But one can make some progress using the recognition Lemma for local fibrancy over $CartSp$ which we more recently proved with Dmitri Pavlov, recorded on p. 134 in our “Equivariant Principal $\infty$-bundles”. This gives that $cosk_2 \exp_\Delta(\mathfrak{g})$, being isomorphic to the $\overline{W}(-)$ of the sheaf of groups $G$, is locally fibrant on $CartSp$.
Missing Number Multiplication Worksheet Mathematics, and multiplication in particular, forms the cornerstone of many academic disciplines and real-world applications. Yet, for many students, mastering multiplication can be a challenge. To address this difficulty, teachers and parents have embraced an effective tool: the Missing Number Multiplication Worksheet. Introduction to Missing Number Multiplication Worksheets Our multiplication worksheets start with the basic multiplication facts and progress to multiplying large numbers in columns. We emphasize mental multiplication exercises to improve numeracy skills. Choose your grade topic: Grade 2 multiplication worksheets, Grade 3 multiplication worksheets, Grade 4 mental multiplication worksheets. Missing Number Multiplication Worksheets: Click here to return to the main worksheet index. Click here for our other Missing Number Problems Worksheets. Fill in the missing value in multiplication problems, e.g. 9 × 4 = 36. Next, try our Missing Number Division problems (Missing Number Multiplication Worksheet 1). Significance of Multiplication Practice Understanding multiplication is crucial, laying a strong foundation for advanced mathematical concepts. Missing Number Multiplication Worksheets offer structured and targeted practice, promoting a deeper comprehension of this essential arithmetic operation.
Development of Missing Number Multiplication Worksheets Number Line Fill in the Missing Numbers, Type 1: Grade 2 and grade 3 students need to observe the hops on the number lines and write down the missing multipliers, multiplicands or products to complete the multiplication sentences. The products are limited to 20 in these Type 1 worksheets. Number Line Fill in the Missing Numbers, Type 2: Welcome to the Missing Numbers in Equations (Blanks) Multiplication (Range 1 to 9) Math Worksheet from the Algebra Worksheets Page at Math-Drills. This math worksheet was created or last revised on 2013-02-14 and has been viewed 228 times this week and 31 times this month. From traditional pen-and-paper exercises to digital interactive formats, Missing Number Multiplication Worksheets have evolved, catering to diverse learning styles and preferences. Types of Missing Number Multiplication Worksheets Basic Multiplication Sheets: easy exercises focusing on multiplication tables, helping learners build a solid arithmetic base. Word Problem Worksheets: real-life scenarios integrated into problems, enhancing critical thinking and application skills. Timed Multiplication Drills: tests designed to improve speed and accuracy, building quick mental math.
Benefits of Using Missing Number Multiplication Worksheets To find the missing number in a multiplication sentence, students may use skip counting or the relationship between multiplication and division. Students practice this concept extensively in the multiplication facts of 2, 3, 4 and 5 worksheet. This worksheet is about practicing with the horizontal format, in which numbers are written side by side. How can I use this resource to teach my children? A complete set of missing number multiplication challenges to help you see how your class is getting on with their times tables. Each times table has an individual worksheet. There are also mixed table sheets according to the curriculum requirements for each year group in KS2. Improved Mathematical Skills: consistent practice builds multiplication proficiency, boosting overall mathematics ability. Improved Problem-Solving Abilities: word problems in worksheets develop logical reasoning and strategy application. Self-Paced Learning Advantages: worksheets accommodate individual learning paces, promoting a comfortable and flexible learning environment. How to Create Engaging Missing Number Multiplication Worksheets Incorporating Visuals and Colors: vibrant visuals and colors capture attention, making worksheets visually appealing and engaging. Including Real-Life Scenarios: relating multiplication to everyday situations adds relevance and practicality to exercises. Tailoring Worksheets to Different Skill Levels: adjusting worksheets to varying proficiency levels ensures inclusive learning. Interactive and Online Multiplication Resources Digital Multiplication Tools and Games: technology-based resources provide interactive learning experiences, making multiplication engaging and enjoyable.
Interactive Sites and Apps: online platforms provide varied and accessible multiplication practice, supplementing traditional worksheets. Customizing Worksheets for Various Learning Styles Visual Learners: visual aids and diagrams support comprehension for students inclined toward visual learning. Auditory Learners: verbal multiplication problems or mnemonics cater to students who grasp concepts through listening. Kinesthetic Learners: hands-on tasks and manipulatives support kinesthetic learners in understanding multiplication. Tips for Effective Use in Learning Consistency in Practice: regular practice strengthens multiplication skills, promoting retention and fluency. Balancing Repetition and Variety: a mix of repeated exercises and varied problem formats maintains interest and comprehension. Providing Constructive Feedback: feedback helps identify areas for improvement, encouraging continued growth. Challenges in Multiplication Practice and Solutions Motivation and Engagement Hurdles: dull drills can lead to disinterest; creative approaches can reignite motivation. Overcoming Fear of Mathematics: negative perceptions of mathematics can impede progress; creating a positive learning environment is crucial. Impact of Missing Number Multiplication Worksheets on Academic Performance Studies and Research Findings: research suggests a positive correlation between regular worksheet use and improved mathematics performance. Missing Number Multiplication Worksheets emerge as versatile tools, fostering mathematical proficiency in learners while accommodating diverse learning styles. From basic drills to interactive online resources, these worksheets not only strengthen multiplication skills but also promote critical thinking and problem-solving abilities.
Multiplication with Missing Number: printable math multiplication with missing numbers. Learn to multiply two numbers and then try finding out which number is missing in the given multiplication equation. Before doing these multiplication problems, one should do the basic multiplication (vertical and horizontal) problem worksheets.
FAQs (Frequently Asked Questions). Are Missing Number Multiplication Worksheets appropriate for all age groups? Yes, worksheets can be customized to different age and ability levels, making them adaptable for different learners. How frequently should students practice using Missing Number Multiplication Worksheets? Consistent practice is key. Regular sessions, ideally a few times a week, can produce significant improvement. Can worksheets alone improve math skills? Worksheets are a useful tool but should be supplemented with diverse learning methods for comprehensive skill growth. Are there online platforms offering free Missing Number Multiplication Worksheets? Yes, many educational websites offer free access to a wide range of Missing Number Multiplication Worksheets. How can parents support their children's multiplication practice at home? Encouraging consistent practice, providing help, and creating a positive learning environment are beneficial steps.
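The division relationship mentioned earlier on this page (find the missing factor in 9 × ? = 36 by computing 36 ÷ 9) is easy to sketch in code. This is an illustrative sketch only, not taken from any of the worksheet sites referenced here:

```python
import random

def make_problem(max_factor=9):
    """Build one missing-factor problem: a * b = product, with b hidden."""
    a = random.randint(2, max_factor)
    b = random.randint(2, max_factor)
    return a, b, a * b  # the worksheet would print: "a x __ = product"

def solve_missing_factor(a, product):
    """Recover the hidden factor via the inverse relationship with division."""
    if product % a != 0:
        raise ValueError("no whole-number solution")
    return product // a

# Example from the text: 9 x __ = 36  ->  36 / 9 = 4
print(solve_missing_factor(9, 36))  # 4
```

The same idea covers missing-product and missing-multiplicand variants; only which of the three numbers is blanked out changes.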
6-2B Solving by Linear Combinations. Warm-up (IN). Learning Objective: to solve systems of equations using linear combinations. Solve the systems using substitution.
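Only the slide title survives in this extraction, so as an illustration with made-up numbers: solving a 2×2 system by linear combinations means scaling the equations so one variable cancels when they are combined. A sketch (assumes the first x-coefficient is nonzero for the back-substitution step):

```python
def solve_by_linear_combination(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by elimination.

    Forming a2*(eq1) - a1*(eq2) cancels x, leaving
    (a2*b1 - a1*b2) * y = a2*c1 - a1*c2.
    """
    if a1 * b2 - a2 * b1 == 0:
        raise ValueError("no unique solution")
    y = (a2 * c1 - a1 * c2) / (a2 * b1 - a1 * b2)
    x = (c1 - b1 * y) / a1  # back-substitute into eq1 (assumes a1 != 0)
    return x, y

# Made-up warm-up system: 2x + 3y = 12 and 4x - 3y = 6 (adding eliminates y)
print(solve_by_linear_combination(2, 3, 12, 4, -3, 6))  # (3.0, 2.0)
```

By hand, the same system gives 6x = 18 after adding the equations, so x = 3 and then y = 2, matching the code.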
Torque moment: definition, formula and examples Torque, also known as moment of force, is a fundamental concept in the study of dynamics and mechanical engineering that describes the tendency of a force to rotate an object about an axis. This phenomenon is vitally important in a variety of applications, from simple engines and machines to architectural structures. Definition and basic fundamentals Torque is defined as the product of an applied force and the distance from the point of application to the axis of rotation. Mathematically, it is expressed as τ = r · F, where: • τ is the torque, • r is the distance from the point of application of the force to the axis of rotation, and • F is the applied force. The standard unit for torque in the SI is the newton-meter (Nm). The direction of the torque is determined by the right-hand rule. If the thumb of the right hand is placed along the axis of rotation and the fingers in the direction of the applied force, the torque is positive if the rotation is counterclockwise and negative if it is in the clockwise direction. Calculation formula The calculation of the torque involves not only the magnitude of the applied force and the distance to the axis of rotation, but also the angle between the line of action of the force and the line connecting the point of application to the axis of rotation. The complete formula is τ = r · F · sin(θ), where: • τ is the torque. • r is the distance from the point of application of the force to the axis of rotation. • F is the magnitude of the applied force. • θ is the angle between the line of action of the force and the line connecting the point of application to the axis of rotation. This formula takes into account the perpendicular component of the force that contributes to the torque. If the force is applied directly in the radius direction (no perpendicular component), the term sin(θ) becomes 0, and therefore the torque will also be 0.
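As a quick numerical check of the formula τ = r · F · sin(θ) described above, with made-up values:

```python
import math

def torque(r, force, angle_deg=90.0):
    """Torque tau = r * F * sin(theta), in newton-meters.

    r: distance from the axis to the point of application (m)
    force: magnitude of the applied force (N)
    angle_deg: angle between the force and the lever arm (degrees)
    """
    return r * force * math.sin(math.radians(angle_deg))

# A 50 N force applied perpendicularly at 0.30 m from the axis:
print(torque(0.30, 50.0))        # 15.0
# The same force applied along the radius (theta = 0) produces no torque:
print(torque(0.30, 50.0, 0.0))   # 0.0
```

The second call illustrates the closing remark of the section: with no perpendicular component, sin(θ) = 0 and the torque vanishes.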
Resulting torque The resulting torque is the algebraic sum of all the individual torques acting on an object. When multiple forces exert torque in different directions, it is crucial to calculate the net torque to understand the combined effect on the rotating object. This resulting moment is obtained by adding or subtracting the individual torques, taking into account their direction and magnitude. Accurate consideration of these moments is essential in engineering and physics to predict the behavior of rotating structures, machines and systems. The basic formula for calculating the resulting torque is τ_R = ∑ τ_i, where τ_i represents each individual torque. Relationship between torque and angular acceleration The relationship between torque (τ) and angular acceleration (α) is described by Newton's second law for rotation. The fundamental equation is τ = I × α, where I is the moment of inertia of the rotating object. This relationship states that the torque applied to an object is equal to the product of its moment of inertia and the resulting angular acceleration. In other words, the magnitude of the torque determines how rapidly an object's rotation changes: for a given moment of inertia, a larger torque produces a greater angular acceleration, highlighting the essential interconnection between applied force and an object's rotational response. Examples of practical applications • Wrench: When you apply force to a wrench to tighten or loosen a bolt, you are generating a torque. The distance from the point of force application to the axis of rotation (the bolt) and the force applied determine the torque. • Doors: When you open or close a door, you are applying a torque around its hinges. The force you apply to the edge of the door and the distance from the edge to the hinges determine the torque. • Exercise with weights: When you lift a weight with a bar, you are generating torques.
The distance from the axis of rotation (the elbow joint) to where you hold the bar, multiplied by the gravitational force acting on the weight, determines the torque. • Screws and nuts: When tightening a screw with a wrench, the force applied and the distance from the axis of rotation (the axis of the screw) generate a torque that secures the joint. • Human arm: When you raise your forearm in the air, the muscles apply a torque around the elbow joint. The distance from the elbow to where the force is applied and the force itself determine this torque.
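The resultant-torque and angular-acceleration relations described earlier (τ_R = ∑ τ_i and τ = I × α) can be sketched with invented numbers; signs follow the counterclockwise-positive convention from the definition section:

```python
def net_torque(torques):
    """Resultant torque: algebraic sum of signed individual torques (N*m)."""
    return sum(torques)

def angular_acceleration(tau_net, inertia):
    """Newton's second law for rotation: alpha = tau / I (rad/s^2)."""
    return tau_net / inertia

# Two counterclockwise torques and one clockwise torque about the same axis:
tau = net_torque([4.0, 2.5, -1.5])      # 5.0 N*m net, counterclockwise
alpha = angular_acceleration(tau, 2.0)  # a body with I = 2.0 kg*m^2
print(tau, alpha)  # 5.0 2.5
```

A zero net torque (equal and opposite contributions) would give zero angular acceleration: rotational equilibrium.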
KSEEB Solutions for Class 7 Maths Chapter 9 Rational Numbers Ex 9.2 Students can download Chapter 9 Rational Numbers Ex 9.2 questions and answers and notes PDF. These KSEEB Solutions for Class 7 Maths (Karnataka State Board) help you revise the complete syllabus and score more marks in your examinations. Karnataka State Syllabus Class 7 Maths Chapter 9 Rational Numbers Ex 9.2 Question 1. Find the sum: The L.C.M. of 3 and 5 is 15. The L.C.M. of 10 and 15 is 30. The L.C.M. of 11 and 9 is 99. The L.C.M. of 19 and 57 is 57. The L.C.M. of 3 and 5 is 15. Question 2. The L.C.M. of 13 and 15 is 195. The L.C.M. of 9 and 1 is 9. Question 3. Find the product: Question 4. Find the value of:
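Each of the sums above starts by finding the LCM of the two denominators. The LCM values quoted in the worked steps can be checked with a short generic helper (a sketch, not code from the KSEEB material):

```python
from math import gcd

def lcm(a, b):
    # LCM via GCD: lcm(a, b) = |a * b| / gcd(a, b)
    return abs(a * b) // gcd(a, b)

# Denominator LCMs used in the worked sums above:
for pair in [(3, 5), (10, 15), (11, 9), (19, 57), (13, 15), (9, 1)]:
    print(pair, "->", lcm(*pair))
```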
How do we take into account QN conservation when writing an MPO by hand? I want to use the iDMRG code in a 2D system, so I am trying to write a 2D version of the Heisenberg.h file for my 2D spinless fermionic system. I managed to write the MPO with the W-matrices, similarly as in the Heisenberg.h code, and also took into account the Jordan-Wigner transformation. Still, I do not know how to handle QN conservation (I want the number of fermions to be conserved), and when I run the code I get an error related to the divergence of the IQTensors. I understand that I should arrange the tensor in non-zero blocks, but in practice how can I do it when adding terms to the W-matrices? I do not see any special line in the Heisenberg.h file related to this (I see that in lines 84-94 the different terms are added and that's all), but I guess there is something more involved in my system, which is 2D and requires the Jordan-Wigner strings mediating interactions. Thanks a lot for your time, Hi Sergi, Thanks for the question. This is definitely a rather more complicated, technical topic that we don't have much documentation for at the moment. To begin with, please look at the question and answer here, which is similar to yours: If you understand everything in the discussion there, then you will be a long way toward understanding all of the aspects of the code in Heisenberg.h. If not, then please ask some more questions in the comments below about the parts you do not yet understand. To answer some of your questions above more specifically: • the convention for the blocking of the tensors in the Heisenberg.h code (and in QN-conserving ITensors more generally) is set by how the Index objects are constructed.
The ordering and arrangement of QN blocks (QN subspaces) determines, ahead of time, the blocking pattern of the tensors • when various terms are added into the W tensors in that code, the addition operator of ITensor automatically figures out where the terms should go into the existing W tensor. So if the term being added in has certain non-zero blocks, then because the Index objects are the same (and carry all of the QN details), the ITensor system can figure out on its own how to correctly place the tensor being added into the block structure of the W tensor. So you don’t really have to manage this yourself; you just have to add ITensors together that carry the same total QN flux and it always works correctly (or you get an error if the flux is different). • for your Jordan-Wigner strings, it is a complication but you may find it’s not too bad of one. In a normal (non fermionic) MPO, there will be identity operators that go before a set of non-identity operators, or after. These will remain identity operators in your case. But there will be some identity operators which go in between a pair of fermionic operators (operators which change the fermion parity, such as C and Cdag). These should be replaced by F operators. • the 2D aspect is more of a serious complication, but I’m sure you know how to handle it, just mapping terms in 2D into various 1D distances. E.g. on a square lattice of “width” Ny, horizontal, nearest-neighbor terms (which have a 2D distance of 1) have a 1D distance of Ny (assuming you are using the “zig zag” MPS path which always goes up each column, rather than a snaking path which goes up odd columns and down even columns. I recommend the zig zag path.) Hope that helps you make some progress. One thing I like to do is to devise a bunch of tests of a new MPO I’m making by using the ITensor function inner to compute matrix elements between various product states. 
Like you can make a product state |phij> which has a single fermion on some site j and |phik> which has a single fermion on site k, then compute <phik|H|phij> and work out by hand what it should be for your H. Then you can check whether you get the right thing from inner. Often you can do this for every single term in the Hamiltonian and thus be nearly sure that it's bug free. (I say nearly because checking the JW string requires at least two particles to be present, but you can do checks for that too.) Also of course you can do some DMRG calculations in limits where you know what the answer should be. P.S. as mentioned, please ask more questions in the comments below if you have them Dear Miles, thank you very much for your fast and extended answer, it really helped me a lot. I think that the key is in your second point. If I understood correctly, regarding QN conservation, the main modification that I have to do to Heisenberg.h is: links.at(l) = Index(QN({"Nf", 0}),1, and the rest will be carried out automatically by the ITensor system, am I right? I wasn't understanding properly how the link Index works until I read the related post, and I had this part of the code wrong. Then, I think that I have the other points (JW strings, 2D system) under control, but for sure I will need to check things with the trick that you suggested in order to debug the Hamiltonian, so thanks again. Glad the answer was helpful! Yes, what you said sounds right, depending on what Hamiltonian you're making of course, in terms of the QN subspaces needed and their sizes. You can work that out by determining the fluxes of each operator, then enforcing the convention of making each MPO tensor a zero-flux ITensor. Then the role of the virtual/link Index QNs is to cancel off those of the operators. If you make each term that you are adding in out of the same link indices, then yes, the ITensor system (the ITensor "+" operator specifically) will handle all the details of summing terms into your MPO tensors.
You can print out each step using PrintData to see the contents of the tensors changing as you go if you'd like. Actually, I think I would still need to figure out some details for the part: links.at(l) = Index(QN({"Nf", 0}),?, I guess that the 0, +1, -1 are okay, as my Hamiltonian only contains density-density interactions, which do not change the QN, and hoppings, which carry +-1. Still, even after reading carefully your explanation in the related post, I do not know how to figure out the values of the dimensions (the ?'s) above. Naively, I would say that for the QNs +-1 I would have dimension 1, as I only have one type of term for each in the Hamiltonian (c^t c + h.c.). For the QN(0) I would say that I have dimension 2 for identities, plus 1 for density-density interactions (analogous to S_z S_z), plus 1 for F operators appearing due to the Jordan-Wigner transformation, which would make it 4 overall. But this is not working. I guess my reasoning is wrong and the long-range nature of interactions, with F operators appearing in between, makes these dimensions more difficult to count, right? Further comment: I now understand that the link indexes are very much related to the matrix representation of my 2D MPO, which is much more involved than the Heisenberg.h one. Therefore, even though it is maybe too technical to ask/answer, should I write the link indexes according to the operators on the first row of my matrix representation? If with Heisenberg.h the first operators on the left are I, I, S_z, Sm, Sp and this gives {0, 0, 0, -2, 2}, then if the left part of my matrix is something like I, I, N, C, Cdag, F, should it be {0, 0, 0, -1, 1, 0}? Hi, so to hopefully answer your questions: there is a great deal of freedom in how you arrange the operators inside your MPO construction, and thus the pattern of the link indices can vary quite a lot depending on these choices too. So there's no one standard choice.
For example, in a lot of MPOs that you see written in papers, by authors such as McCulloch, Zaletel, etc., you will see an identity in the upper left and lower right corners. This is pretty common. But for QN-conserving MPOs I often group those two identities together into the same block. Which leads me to my main point: a key thing when constructing QN-conserving MPOs is to arrange the operators so that when you compute the QN fluxes of the link index subspaces, the subspaces (settings, or values of the Index) which have the *same flux get grouped together*. This allows for maximum blocking of non-zero elements together. So that's the pattern I always follow: first the flux-zero sector, which includes things like the starting and ending identity operators, as well as on-site operators or density-density operators. Then the rest of the flux sectors after that. So I would do the following: 1. write out the matrix form of your MPO the best way you know how. 2. Now label the rows and columns by the fluxes, which are uniquely determined by requiring every MPO tensor to have flux zero (so once you start on the left hand side, and use the fluxes of the individual operators inside each tensor, the fluxes of the links get determined). 3. Then ask: are all of the fluxes of the same value grouped together? 4. If not, swap rows and corresponding columns so that they do get grouped more together. Such a swap is a gauge transformation of the MPO and will not change the overall operator it makes. 5. Continue doing these row/column swaps until all of the same-flux Index values are grouped together, then you're done. If you follow the above steps it should give you the MPO that you want. It's also technically OK to have more than one subspace of a QN Index in ITensor with the same QN value; it's just that it will lead to ITensors with more blocks than necessary.
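The claim in step 4, that swapping a row and the corresponding column of the W matrices is a gauge transformation leaving the overall operator unchanged, can be verified numerically. The sketch below uses plain NumPy and a toy transverse-field Ising MPO chosen for illustration; it is not the questioner's 2D fermionic model and not the ITensor API:

```python
import numpy as np

# Pauli matrices and the standard 3x3 MPO for the transverse-field Ising
# chain H = sum_i Z_i Z_{i+1} + g * sum_i X_i (an assumed toy model).
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
g = 0.7

W = np.zeros((3, 3, 2, 2))   # indices: (left link, right link, out, in)
W[0, 0] = I2                 # no term started yet
W[0, 1] = Z                  # start a Z Z term
W[0, 2] = g * X              # on-site field term
W[1, 2] = Z                  # finish the Z Z term
W[2, 2] = I2                 # all terms placed

def mpo_to_matrix(tensors):
    """Contract an open-boundary MPO into the full 2^N x 2^N matrix.
    First tensor: shape (Dr, d, d); the rest: shape (Dl, Dr, d, d)."""
    res = tensors[0]
    for T in tensors[1:]:
        res = np.einsum('aij,abkl->bikjl', res, T)
        b, i, k, j, l = res.shape
        res = res.reshape(b, i * k, j * l)
    return res[0]            # trailing link has dimension 1

N = 4
first, last = W[0], W[:, 2:3]          # boundary row / column of W
H1 = mpo_to_matrix([first] + [W] * (N - 2) + [last])

# Gauge transformation: permute the link basis. Applying the same
# permutation to the right link of one tensor and to the left link of
# the next inserts P P^{-1} on each bond, so the operator cannot change.
p = np.array([2, 0, 1])
Wp = W[p][:, p]
H2 = mpo_to_matrix([first[p]] + [Wp] * (N - 2) + [last[p]])

print(np.allclose(H1, H2))   # True: same operator, reshuffled blocks
```

The same reasoning carries over to QN-conserving tensors: reordering link-index subspaces so that equal fluxes sit adjacent is exactly such a permutation and leaves the Hamiltonian untouched.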
GR9677 #60 (GREPhysics.NET). The problem statement is still being typed. Official Solution: Wave Phenomena $\Rightarrow$ Light Doppler Shift. One can derive the Doppler shift for light as follows. For source/observer moving towards each other, the wavelength emitted from the source decreases, thus $\lambda = (c\,dt - v\,dt) = (c - v) t_0\gamma$. For source/observer moving away from each other, the wavelength emitted from the source increases, thus $\lambda = (c\,dt + v\,dt) = (c + v) t_0\gamma$. In the last equality of each expression, one applies time dilation from special relativity, $t = t_0 \gamma$, and the fact that $c=\lambda f = \lambda/t$ in general. Now that one has the proper battle equipment, one can proceed with the problem. This problem is essentially the difference in wavelengths seen from a red shift and a blue shift, i.e., light moving towards and away from the observer. $\Delta \lambda = \left( 2v \right)\gamma\lambda_0/c \Rightarrow v = \frac{\Delta \lambda c}{2 \gamma \lambda_0} \approx \frac{\Delta \lambda c}{2 \lambda_0} \approx \frac{1.8\times 10^{-12} \times 3\times 10^{8}}{244 \times 10^{-9}} \approx 2\times 10^{3}$ m/s, where the approximation $\gamma \approx 1$ is made since one assumes the particle is moving at a non-relativistic speed. 2 km/s is closest to choice (B). Alternate Solutions rizkibizniz: just elaborating on, and correcting, @nitin's answer. Since we know the sun's speed is much less than c, we use the nonrelativistic Doppler effect: $\lambda ' = \lambda \frac{c \pm v}{c}$, $\lambda ' = \lambda \pm \Delta\lambda$, $\pm$ depending on whether it is going towards (-) or away from (+) the observer. Since the difference is taken when the particle is at opposite ends, $(\lambda+\Delta\lambda)-(\lambda-\Delta\lambda) = 1.8\times 10^{-12}$ m, so $2\Delta \lambda = 1.8\times 10^{-12}$ m. So to solve this correctly, not just to the correct order, we must take $1.8\times 10^{-12}$ m as twice $\Delta \lambda$.
So: $\Delta\lambda = 0.9\times 10^{-12}$ m, $\lambda = 122\times 10^{-9}$ m; plug them into the equation $v = c\frac{\Delta\lambda}{\lambda}$ and we get 2.2 km/s, choice (B). peglegjeff: I thought up a different approach that simplifies things, while still using the relativistic Doppler. The two sources are at opposite sides of the sun on the equator: one is moving towards you at $V$, one away at $V$. $\bigtriangleup\lambda = \lambda\{\sqrt{\frac{1+\beta}{1-\beta}}+\sqrt{\frac{1-\beta}{1+\beta}} \}$. Combining under a common denominator and dividing by $\lambda$, $\frac{\bigtriangleup\lambda}{\lambda} = \frac{1.8\times 10^{-12}}{122\times 10^{-9}} \approx 1.5\times 10^{-5}$, so by inspection $\beta \ll 1$, and $\sqrt{1-\beta^2} \approx 1$. $\beta \approx \frac{\bigtriangleup\lambda}{2\lambda} \approx 7.5\times 10^{-6}$, $V = c \beta \approx 3\times 10^{8}\times 7.5\times 10^{-6} \frac{m}{s} \approx 2.2\times 10^{3} \frac{m}{s} = 2.2 \frac{km}{s}$. Thus B is correct. faith: i got lucky and guessed that the speed is very much less than c, and also the fact that, heck, i'm not gonna let ETS eat up my time.
hence the formula (from Halliday) $v = \frac{|\Delta\lambda|}{\lambda} c$; i divide it by two to average it out, and arrived at the same answer, B. again i got lucky. wittensdog: Just to expand on what nitin said, if you look at the answer choices, the fastest answer, 2200 km/s, is still more than 100 times less than c. All of the answer choices are separated by powers of ten, and at 100 times less than c, there's no way we're seeing effects that big. So we're definitely pretty safely in the non-relativistic range here. So using the non-relativistic version of the formula seems like a very good idea. In general, whenever you have a problem where it looks like relativity may be involved, I would always examine the characteristic speeds in the problem. If they're something like 100 times less than c, and the answer choices have some reasonable spacing between them, I would toss the relativistic version of the formula in the interest of time. If you don't remember the non-relativistic version, you can always get it by taking the relativistic version and throwing out terms of the form $(v/c)^2$. istezamer: I got this question right in less than 5 seconds!! First of all I knew that this is ONLY a trick question!! And the trick is in changing the answer that you will get from m/sec to km/sec. The only number that can be changed into km/s and is listed in the choices given is (E) 2200, so if we change it to km/s it yields 2.2 km/s: choice (B). istezamer: Oh, of course, noting that the particle at the sun moves with a relatively high velocity, more than 0.22 km/sec. Although it is a completely non-scientific approach, I hate to see a question that takes that long to solve when I still have 99 more questions to solve!! wittensdog: It took me a minute to realize what you were saying, but this is very clever. I guess it relies on the assumption that ETS will always try to trick people.
Then again, I guess that's a pretty solid bet. Barney: you could forget to change 220 m/s into 0.22 km/s just as well... not a reliable solution. jmason86: I guess I'm just lucky in that I've been doing solar physics research for the last 2 years. I look at data of the sun all the time and we frequently have to correct for the differential rotation. So if you had all that under your belt, it would be easy to see that answers (A), (D) and (E) were all absurd. The sun takes 27 days (on average) to rotate and it has a pretty large radius... so something on the order of 2 km/s is reasonable. peglegjeff: I thought up a different approach that simplifies things, while still using the relativistic Doppler (the same solution given above under Alternate Solutions). niux: did you mean a minus sign instead of summing, when initially stating delta(lambda)=.....? shak: Thank you very much! this one is the easiest approach :) faith: a lil'r typo on the 1st equation. it is, i think, to be a subtraction of the two wavelengths, which was immediately corrected at the 2nd line. otherwise, a thorough solution. maryami: Thanks a lot, I got confused by Yosun's solution (I thank Yosun too for the helpful site); this one is complete and understandable!
dstahlke: There is one last stupid trick to this problem. I computed the answer in meters but the problem wants the answer in kilometers. Perhaps that is why only 9% of the people got this right, making it the second hardest question on the test. It's just something to keep your eye on. dirichlet: I think there is a mistake in using $\gamma$; that should be $1/\gamma$ in formulating the shift as suggested by S.T.R. Though it does not affect the result, as the velocity of the sun along the equator is small compared to that of light, so both $\gamma$ and $1/\gamma$ approach 1. dirichlet: Sorry, I was wrong, that was correct. nitin: Use the nonrelativistic Doppler shift formula instead: $v=\frac{\Delta\lambda}{\lambda}c$, where $v$ is the speed of the particle. In this case, $\Delta\lambda=1.8\times 10^{-12}$ m, $\lambda=122$ nm. globalphysics: Great simple way of getting the answer. With that approach it takes about 10 seconds! welshmj: This method gives an answer that is off by a factor of 1/2; however, it gives the correct order, which is really what is important. theodiggers: Funny, but derivation from the non-relativistic Doppler shift formula correctly gives the 1/2 factor. Still, I didn't think to express it in such a compact form, rock on. Shtego: the change in lambda given is the difference between the source coming towards and going away. I suspect the change in lambda written in the non-relativistic Doppler shift formula is between a source moving and as if the source were not moving.
So, cut the given change in lambda by half, and now you've got the missing half factor. hisperati: All wrong... The 1/2 comes from the question. Read it. You're calculating the difference in velocity of the different sides of the sun, but the sun is only rotating at half that speed. Further, you must subtract unity from the right side of that equation nitin has there. But the real problem is doing the arithmetic. In fact the arithmetic is so hellish, only 9% of the people taking this exam get this question right. engageengage: Actually the relativistic equation turns out to simplify to this once you work through all the algebra: $\frac{1}{\lambda}=\frac{1}{\lambda_0}\sqrt{\frac{1-\beta}{1+\beta}}$. You then have to subtract $\lambda_0$ from this to get $\Delta\lambda$ on the left, and then simplify all of it. You end up getting $v=\frac{c \Delta \lambda}{\lambda}$, and then you have to remember to take half of the speed since, as already mentioned, the velocity is actually half, because you are comparing the shifts from opposite ends of the planet, which effectively gives you double the shift. That might just be a good one to memorize. engageengage: adding onto my earlier comment, to get that same result you have to throw away second-order powers of $\Delta \lambda$, which are tiny anyway. niux: Hisperati is right, the sun's speed is half of that. And if you want a similar but more accurate approach than nitin's (very nice, I have to admit), then start from the expression that peglegjeff gets. It is the same used by nitin but accounts for the factor of 2.
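The arithmetic the comments wrestle with (the factor of 2 and the conversion to km/s) is easy to check numerically; this is just a sketch of the official solution's numbers:

```python
c = 3e8              # m/s, speed of light
d_lambda = 1.8e-12   # m, total shift between opposite limbs of the sun
lam = 122e-9         # m, the spectral line used in the problem

# Each limb contributes half of the total shift, hence the factor of 2:
v = c * (d_lambda / 2) / lam
print(v)             # ~2.2e3 m/s, i.e. about 2.2 km/s: choice (B)
```

Forgetting either the factor of 2 or the m/s-to-km/s conversion shifts the result by exactly the spacing between answer choices, which is presumably why so few test-takers got it right.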
What is the frequency response of SparkFun's Qwiic Speaker Amp DEV-20690? I am pleased with the SparkFun Qwiic Speaker Amp DEV-20690 dynamic range compressor but not pleased with the low-frequency response. I am using the default (out of the box) settings. The TPA2016D2 amp spec is 20 Hz to 20 kHz. I notice that SparkFun put a 100 ohm resistor and 1.0 uF capacitor in series with the inputs. (The same 100 ohm resistor and another 47 nF capacitor form a 33 kHz low-pass filter, which is fine.) I am worried that the series resistor and capacitor on the inputs are cutting off the low frequencies. Using Equation 6 in the TI data sheet, 1/(2π × Ri × Ci), I calculate a corner frequency of 1591 Hz. This doesn't make sense, so I ask this question: what is the frequency response of SparkFun's Qwiic Speaker Amp DEV-20690? The 1.0 uF input capacitor functions as a DC-blocking capacitor. The cap has impedance 1/(2πfC), which is about 8 kΩ at 20 Hz, but to estimate the low-frequency cutoff you need to know the input impedance of the amplifier. That does not seem to be stated in the TPA2016D2 data sheet but, extrapolating from the input current spec of 1 uA, it is presumably very high, in which case 8 kΩ is negligible and not a limiting factor. Poor perceived low-frequency response is more likely to be associated with whatever transducer (speaker) you are using to reproduce the sound, and whether the amplifier has sufficient output power to drive it properly. True. TI states the cutoff frequency is 1/(2πRiCi). Your reply suggests Ri is the unstated input impedance; I took it to mean the input series resistor. Your interpretation is more accurate. The 1 uA input current at a line level of 1 V p-p (0 dBV) suggests an input impedance of 1 megohm, giving a low-frequency cutoff of <1 Hz. The amp drives the input of a mixer that has very good low-frequency response. The speakers are driven by internal amplifiers.
The mixer is likely high impedance and not meeting the “Minimum Load Resistance 3.2 ohms.” I suspect that has something to do with it. Thank you for your thoughts.
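The RC figures discussed in this thread come from two one-line formulas; the sketch below reproduces them with the component values quoted above (the function names are mine, not from any datasheet):

```python
import math

def cap_impedance_ohms(f_hz, c_farads):
    # Capacitor impedance magnitude: |Z| = 1 / (2*pi*f*C)
    return 1.0 / (2 * math.pi * f_hz * c_farads)

def cutoff_hz(r_ohms, c_farads):
    # RC corner frequency: f_c = 1 / (2*pi*R*C)
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

print(cap_impedance_ohms(20, 1e-6))  # ~8 kOhm: the 1.0 uF blocking cap at 20 Hz
print(cutoff_hz(100, 47e-9))         # ~33.9 kHz: the 100 ohm + 47 nF low-pass
print(cutoff_hz(100, 1e-6))          # ~1591 Hz: the (mistaken) corner using Ri = 100 ohm
print(cutoff_hz(1e6, 1e-6))          # ~0.16 Hz: with the ~1 Mohm input impedance
```

The last two lines show why the interpretation of Ri matters: with the 100 ohm series resistor the corner lands at 1591 Hz, but with the amplifier's megohm-scale input impedance it drops well below 1 Hz.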
DSpace Archive :: Browsing by author "Sanver, MR". Showing results 1 - 9 of 9. • Another characterization of the majority rule (Elsevier Science Sa, 2002) Asan, G; Sanver, MR Given any (finite) society confronting two alternatives, May [Econometrica 20 (1952) 680] characterizes the majority rule in terms of anonymity, neutrality and positive responsiveness. This final condition is usually criticized as being too strong. Thus, we drop it and give a similar characterization in terms of anonymity, neutrality, Pareto optimality and a condition we call weak path independence. (C) 2002 Elsevier Science B.V. All rights reserved. • Efficiency in the degree of compromise (Kluwer Academic Publ, 2004) Özkal-Sanver, I; Sanver, MR We introduce a social choice axiom called efficiency in the degree of compromise. Our axiom is based on the trade-off between the quantity and quality of support that an alternative receives. What we mean by the quantity of support is the number of voters behind an alternative, while the quality of support concerns the definition of being behind, depending on the rank of an alternative in voters' preference orderings. Naturally, one can increase the quantity of support of an alternative at the expense of giving up some of its quality. We say that an alternative is an efficient compromise if there exists no other alternative with at least an equal quantity of support with a higher quality. Our efficient compromise axiom is based on not choosing inefficient compromises. We introduce it and show that many standard social choice rules of the literature, such as Condorcet-consistent rules, plurality with a runoff, the Borda count and the single transferable vote, may choose inefficient compromises.
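The quantity/quality trade-off described in the "Efficiency in the degree of compromise" abstract can be illustrated with a small script in the spirit of the Majoritarian Compromise. This is my own illustration over a made-up 5-voter profile, not code from the paper:

```python
def support(profile, alt, k):
    """Quantity of support at quality k: number of voters ranking
    `alt` among their top k choices (smaller k = higher quality)."""
    return sum(1 for ranking in profile if alt in ranking[:k])

def majoritarian_compromise(profile, alts):
    """Pick winners at the highest quality (smallest k) at which some
    alternative gathers a majority; break ties by quantity of support."""
    maj = len(profile) // 2 + 1
    for k in range(1, len(alts) + 1):
        sup = {x: support(profile, x, k) for x in alts}
        winners = [x for x in alts if sup[x] >= maj]
        if winners:
            best = max(sup[x] for x in winners)
            return [x for x in winners if sup[x] == best]
    return list(alts)

# 5 voters over alternatives a, b, c, each ranking listed best-first.
profile = [list("abc"), list("abc"), list("bca"), list("bca"), list("cab")]
print(majoritarian_compromise(profile, "abc"))  # ['b']
```

At quality k=2 all three alternatives reach a majority (supports 3, 4, 3), so a, and c are inefficient compromises in the abstract's sense: b offers at least as much quantity of support at the same quality.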
• Implementing matching rules by type pretension mechanisms (Elsevier Science Bv, 2005) Özkal-Sanver, I; Sanver, MR We consider a two-sided matching model where agents' preferences are a function of the types of their potential mates. Matching rules are manipulated by type misrepresentation. We explore the implementability of the G-core in G-Strong Nash Equilibria. Although direct type pretension mechanisms rule out bad equilibria, the existence of equilibrium cannot be generally guaranteed. However, taking G as the discrete partition, the individually rational matching correspondence is partially implementable in Nash equilibria. On the other hand, incorporating a certain degree of hypocrisy in the mechanism, i.e., allowing agents to pretend different types to different potential mates, ensures the full implementability of the G-core in G-Strong Nash Equilibria. (c) 2005 Elsevier B.V. All rights reserved. • Maskin monotonic aggregation rules (Elsevier Science Sa, 2006) Asan, G; Sanver, MR Given a society confronting two alternatives, we show that the set of anonymous, neutral and Maskin monotonic aggregation rules coincides with the family of absolute qualified majority rules. We also explore the effect of incorporating Pareto optimality in our characterization. (c) 2005 Elsevier B.V. All rights reserved. • Minimal monotonic extensions of scoring rules (Springer, 2005) Erdem, O; Sanver, MR Noting the existence of social choice problems over which no scoring rule is Maskin monotonic, we characterize minimal monotonic extensions of scoring rules. We show that the minimal monotonic extension of any scoring rule has a lower and upper bound, which can be expressed in terms of alternatives with scores exceeding a certain critical score. In fact, the minimal monotonic extension of a scoring rule coincides with its lower bound if and only if the scoring rule satisfies a certain weak monotonicity condition (such as the Borda and antiplurality rule). 
On the other hand, the minimal monotonic extension of a scoring rule approaches its upper bound as its degree of violating weak monotonicity increases, an extreme case of which is the plurality rule with a minimal monotonic extension reaching its upper bound. • Nash implementing non-monotonic social choice rules by awards (Springer, 2006) Sanver, MR By a slight generalization of the definition of implementation (called implementation by awards), Maskin monotonicity is no more needed for Nash implementation. In fact, a weaker condition, to which we refer as almost monotonicity is both necessary and sufficient for social choice correspondences to be Nash implementable by awards. Hence our framework paves the way to the Nash implementation of social choice rules which otherwise fail to be Nash implementable. In particular, the Pareto social choice rule, the majority rule and the strong core are almost monotonic (hence Nash implementable by awards) while they are not Maskin monotonic (hence fail to be Nash implementable in the standard framework). • Scoring rules cannot respect majority in choice and elimination simultaneously (Elsevier Science Bv, 2002) Sanver, MR I show that there exists no scoring rule which ensures that an alternative considered as best by a strict majority is chosen while an alternative considered as worst by a strict majority remains outside of the choice set. The negative result is valid for standard scoring rules where scores depend on the number of alternatives only, as well as for generalized ones defined via vectors of scores which are functions of both the number of alternatives and agents. (C) 2002 Elsevier Science BY. All rights reserved. • Sets of alternatives as Condorcet winners (Springer-Verlag, 2003) Kaymak, B; Sanver, MR We characterize sets of alternatives which are Condorcet winners according to preferences over sets of alternatives, in terms of properties defined on preferences over alternatives. 
We state our results under certain preference extension axioms which, at any preference profile over alternatives, give the list of admissible preference profiles over sets of alternatives. It turns out to be that requiring from a set to be a Condorcet winner at every admissible preference profile is too demanding, even when the set of admissible preference profiles is fairly narrow. However, weakening this requirement to being a Condorcet winner at some admissible preference profile opens the door to more permissive results and we characterize these sets by using various versions of an undomination condition. Although our main results are given for a world where any two sets - whether they are of the same cardinality or not - can be compared, the case for sets of equal cardinality is also considered. • Strong equilibrium outcomes of voting games are the generalized Condorcet winners (Springer, 2004) Sertel, MR; Sanver, MR We consider voting games induced by anonymous and top-unanimous social choice functions. The class of such social choice functions is quite broad, including every t-refinement of the Plurality Rule, Plurality with a Runoff, the Majoritarian Compromise and the Single Transferable Vote, i.e., any selection from either of these social choice rules which is obtained via tie-breaking among candidates according to any total order t on the set of alternatives. As announced in our title, the strong equilibrium outcomes of the voting games determined by such social choice functions turn out to be nothing but generalized Condorcet winners, namely the (n,q)-Condorcet winners. In the case of social choice functions (such as those just listed) which are furthermore top-majoritarian, they coincide with the classical Condorcet winners.
screenshot of monoface, mono's happy new year flash greeting card (slightly modified in order to make it look more like me..:)). via cuartoderecha.
The application monoface mixes up face parts of people from mono. The application is reminiscent of a popular game on paper, where various people have to draw body parts in order to design a complete figure. cloneGiz (daytar 3/2004) also "mixes parts of faces". It has a mathematical "selection layer", meaning you do not select by picking with your mouse as in monoface, but you have to write a formula, which produces the outcome. The main idea of cloneGiz was not to mix funny faces (although you can do this) but to ask the question of how one can control the final product (the mixed face) by changing the mathematical formula. It links the mathematical language to a visual "representation". This project was also made in order to illustrate an ongoing project of linking "objects" to "mathematical code" (so far via the string rewriter jSymbol).
Fubiz Says: January 27th, 2007 at 12:27 am
truly excellent
Victor Says: August 10th, 2011 at 2:38 pm
I took the above image to find you in the social media crowd on facebook, but I couldn't find you.
Fränzy Says: August 10th, 2011 at 8:18 pm
Ey Victor hast völlich recht: Picksel sollte man nur mit saubren Fingern ausdrückn Wampe
Logorythmus von halb und halb Says: August 10th, 2011 at 9:05 pm
Tomatn uff de Ojen Fränzy? dit sind Kieksl, keene Picksl
Postdoctoral Fellow Spotlight: Sarah Dijols
Can you tell me about your research background, how you became interested in this field and why you chose UBC?
I work in representation theory, and more precisely on representations of reductive groups over p-adic fields. This is also closely related to automorphic (or modular) forms, which can be thought of as functions on reductive groups over global fields with many symmetries. A quote attributed to the mathematician Martin Eichler is that "There are five fundamental operations in mathematics: Addition, subtraction, multiplication, division and modular forms."
I think I accidentally ended up working in this area! As a master's student in Jussieu, Paris, I asked one of my favorite instructors from undergrad, who specialized in hyperbolic geometry, if he could be my master's thesis advisor. I told him I was looking for a topic at the intersection of number theory and geometry, which were my favorite topics. I may have heard about the Langlands program at the time, but had no idea what it was, and he told me: "If you are looking for something at this intersection, then you should work on the Langlands program". He then sent me to meet Jens Funke, who became my main advisor for the master's thesis and taught me a lot about theta series and L-functions. I loved that topic, and felt I had been sent in the right direction. I chose UBC mostly because of my postdoc mentor here, Julia Gordon. It happens that UBC was also familiar to me, as I was an exchange undergraduate student here in 2010!
What is the focus of your current research, and what are the key questions or problems you are addressing?
I am currently working on three different projects. I will describe them in chronological order: The first one addresses the question of classifying the representations of the exceptional (p-adic) group G2 which are distinguished by its subgroup SO4 (meaning they have an SO4-invariant linear form).
I have already written a paper on this case, but the current work, joint with Nadir Matringe, aims at completing it and bringing in some new perspective. The quotient G2/SO4 being a symmetric space, we can mostly apply some known recipe to discover those representations. The difficulty lies in the fact that the group G2 is very bizarre; for instance, this quotient corresponds to the set of quaternionic subalgebras of the p-adic split octonions!
In another project, joint with Taiwang Deng, we are exploring how a certain root system known as \Sigma_sigma could be related to certain endoscopic subgroups in the context of the local Langlands correspondence.
In a third project, with Mishty Ray, we are working on the description of the ABV-packet (a packet of representations) associated to a specific Arthur parameter of G2 (again!) that Mishty studied in a different paper.
What methodologies or techniques are you using in your research, and why did you choose them?
I think my area is good at combining techniques developed in different other areas. For instance, in the G2/SO4 project, there is a lot of geometric intuition coming in, and some linear algebra. Recently, I have been learning techniques and concepts coming from geometric representation theory that we can use in the project with Mishty (in particular), and there I feel that again algebra won't be as useful as developing one's geometric intuition and finding mental pictures for a number of objects. I think each project's theoretical constraints push the need to learn and use a limited set of techniques, and in my experience, I have rarely found different techniques to tackle the same question (albeit that has happened!), so I would say that my choice mostly lies in what question/topic to study.
What have been some of the biggest challenges or obstacles you've faced in your research, and how have you addressed them?
Surprisingly maybe, the biggest obstacles I have faced in research were more related to the context of my research than to the research itself (which can be challenging too, but in a richer way). I was very isolated during my PhD years, and in an environment that, back then and retrospectively (based on observing other PhD students and advisors around me in the past many years), I see as uncaring, and even hostile toward the end of my PhD as my independence was becoming more obvious. After my PhD, I therefore had to win back the confidence in myself I had progressively lost, relearn the capacity to discuss mathematics in a relaxed way, and to enjoy myself doing it. This may sound very surprising from a North American perspective, where the well-being of and respect due to students or learners, or even professionals in general, seem quite central, and their opinions are asked for and listened to. This is not as much the case in French universities, and my story is not so unusual there.
Can you share any significant findings or outcomes from your research so far?
I find it difficult to calibrate what is "significant" in my area; it might be significant to someone who knows the history and the context of a problem, and very technical and narrow to someone who doesn't. I guess results which have been significant to me were the ones that either took me quite some time to reach (for instance the proof of the generalized injectivity conjecture) or contradicted some initial intuition I had. For instance, I have finally finished some computations on the root systems \Sigma_sigma using SageMath in the context of exceptional groups, and the results are surprising and reveal some interesting patterns (and even some possible connection to a seemingly unrelated concept, but this is still a secret, as it is part of the work in progress with Taiwang Deng).
What are your future research plans and career goals, and how do you see your work evolving over the next few years?
Outside the three projects I already mentioned, I have started some discussions with my mentor Julia Gordon here, and we are exploring the possibility of characterizing some properties of Langlands parameters using nilpotent orbits. I also started discussing with Felix Baril Boudreau, a former PIMS postdoc at the University of Lethbridge, some ideas toward a joint work. I would love to continue in academia, and to get a job in Europe or in France, to be closer to my relatives. The last couple of years in Canada have been enriching and productive for me, and I am trying to take as much advantage as possible of the excellent work conditions and opportunities given here!
what is the calculation delay in PID used for?
If you open the Controller block, you can have a look at the implementation (Ctrl+U) of the digital as well as the analog controller. The digital version is implemented using a C-Script block and the analog one uses gain blocks and a continuous-time integrator. Both versions are implemented in the known parallel controller structure (as opposed to serial). I would suggest you use the analog version if you don't have experience in discrete-time domain modelling.
The PI element can be expressed as an inverted zero in the s-domain. R. W. Erickson has some good lecture notes on Bode plots, google it.
G(s) = k_r * (1 + w_c / s)
Writing the analog PI version of the controller as an equation, we get G(s) = k_p + k_i / s. Using coefficient comparison, you can determine the values for k_p and k_i:
k_r + k_r * w_c / s = k_p + k_i / s
->> k_p = k_r
->> k_i = k_r * w_c
For the digital version, there is more stuff involved. Hope this helps.
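The coefficient comparison above is easy to sanity-check in code. This is a small sketch; the gain values are made-up examples, not taken from the thread:

```python
# Map the "inverted zero" PI form G(s) = k_r * (1 + w_c / s)
# to the parallel form G(s) = k_p + k_i / s, and back.

def parallel_from_inverted_zero(k_r, w_c):
    # Coefficient comparison gives k_p = k_r and k_i = k_r * w_c.
    return k_r, k_r * w_c

def inverted_zero_from_parallel(k_p, k_i):
    # Inverse mapping: k_r = k_p, w_c = k_i / k_p.
    return k_p, k_i / k_p

k_p, k_i = parallel_from_inverted_zero(k_r=2.0, w_c=100.0)
print(k_p, k_i)  # 2.0 200.0
```

Both forms describe the same controller; the inverted-zero form just makes the corner frequency w_c explicit on a Bode plot.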
Arithmetic operators in Python - techPiezo
In this article, we would cover Arithmetic operators in Python. These help us perform certain mathematical operations - Addition, Division, Exponentiation, Floor Division, Modulus, Multiplication and Subtraction.
Arithmetic operators in Python
We use two variables x and y here, which would be common for all except for a few.
x = 100
y = 10
I. Addition (+) - sum of two numbers. It would result in - 110
II. Division (/) - to split a larger part into smaller equal parts. It would return with - 10.0
III. Exponentiation (**) - raise x to the power of y, i.e. x^y. Output - 100000000000000000000
IV. Floor division (//) - Pretty similar to regular division we do. But there is a difference: it results in an outcome which gets rounded down to the nearest integer (the floor). Here, we need different variables a and b.
a = 24
b = 7
It would result in - 3
But what if a is equal to 27 -
a = 27
b = 7
Again, it would return with - 3
Clearly, we can see how it rounds the outcome (i.e. 3.42 and 3.85) down to the nearest integer (i.e. 3).
V. Modulus (%) - It returns the remainder of division. Again, using different variables a and b.
a = 25
b = 7
It would result in - 4
VI. Multiplication (*) - product of two numbers. It would return with - 1000
VII. Subtraction (-) - difference between two numbers. Output - 90
In conclusion, we have discussed arithmetic operators in Python here.
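The operators above can be put into one self-contained snippet, using the same values as the examples in the article:

```python
x, y = 100, 10
assert x + y == 110        # addition
assert x - y == 90         # subtraction
assert x * y == 1000       # multiplication
assert x / y == 10.0       # true division always returns a float
assert x ** y == 10 ** 20  # exponentiation: 100^10
assert 24 // 7 == 3        # floor division: 3.42... rounded down
assert 27 // 7 == 3        # floor division: 3.85... rounded down
assert 25 % 7 == 4         # modulus: remainder of 25 / 7
print("all arithmetic operators behave as described")
```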
The History of the Strongtalk Project There were really two different threads to the prehistory of the Strongtalk system, starting with two separate research efforts on different sides of the country. On the West Coast, the Self group at Sun Microsystems Labs, headed by David Ungar and Randall Smith, spent years working on some really radical virtual machine technology, originally with the goal of getting their prototype-based object language, Self, to perform well. They had a very advanced VM architecture, with excellent garbage collection, but the real challenge was in the compilation technology, because Self, like Smalltalk, was a pure object language, meaning that all the basic data-types in the system were real objects, unlike C++, Java, or C#. That, combined with the fact that Self (also like Smalltalk) is a dynamically-typed language, imposes a significant cost when manipulating really fundamental things like booleans and integers, because when the compiler sees a+b, or (flag ifTrue: [...]), it can't assume that they are integers or booleans, because they might be something else, each and every time they get executed, and you have to handle those other cases somehow. Also, both Self and Smalltalk depend on Blocks (function objects with closures) for all control structures, which also imposes a lot of overhead. Making the problem even worse for Self was the fact that they didn't have any direct variable access- ALL variable access had to go through accessor methods (the apparent variable access syntax was just sugar for accessor messages). So they put a tremendous amount of effort into better compilation technology. The real breakthrough on the VM side came with Urs Hoelzle's type-feedback compiler, which for the first time allowed the vast majority of message sends in general purpose code to be inlined. 
Once things are inlined (often many levels deep), the compiler can do a much better job of optimizing the code, and this is necessary to produce big performance gains. This requires a lot of really exotic technology, like optimistic inlining with the ability to deoptimize and back out on-the-fly if something happens that violates the optimistic inlining assumptions. The Self system was a real research tour-de-force, but Self has quite a few fundamental differences from Smalltalk, and the system was not designed for commercial or production use, since it was not very stable and used an enormous amount of memory. But it showed for the first time that pure, dynamically-typed languages like Self and Smalltalk in principle could be gotten much closer to the performance of C. On the East Coast, I (Dave Griswold) was frustrated with the fact that there were still a lot of obstacles to using Smalltalk in most kinds of production applications. ParcPlace and Digitalk had made a lot of progress, especially with Deutsch/Schiffman dynamic translation technology, which sped up Smalltalk by a factor of 1.6 or so at the time. But it was still way too slow for any kind of compute intensive application (which I was doing), and I felt there were several other obstacles to widespread use as well. One of the biggest among these in my mind was the lack of any kind of type system, which although it makes the language extremely flexible, also means that organizing and understanding large-scale software systems is a lot harder. Another was poor support for native user interfaces, in the interest of portability. Although this was a nice idea for people who were ideologically dedicated to portability, in practice at the time (and to a large extent even now) people needed to write UIs that weren't out of place on Windows machines (emulated widgets just don't cut it). 
Several people had tried to build type systems for Smalltalk (Borning&Ingalls, Palsberg&Schwartzbach, Graver&Johnson), but it was clearly an enormously difficult task, because of the vastly more flexible nature of the way Smalltalk is used compared to any existing statically-typed language, not to mention the unprecedented problem of having to retrofit a type system onto existing untyped code. In addition to the fact that none of the few existing type-system efforts worked on anything other than tiny bodies of code, it was obvious that none of the previous efforts were even close to being the right kind of technology for the real world. However, I was convinced that it was possible to do something about it, and so I hired Gilad Bracha, who knew a lot about this stuff, and who also had neat ideas about mixins and things, and we set about building a type system for Smalltalk that would actually work. The first generation of the type system, which we wrote about in the '93 OOPSLA proceedings, worked but was pretty ungainly because it was grafted on top of the ParcPlace libraries. This makes things a lot harder, because to really do a typed Smalltalk right, you need to structure your basic libraries differently so you can typecheck the inheritance relationships. The existing Smalltalk libraries are full of inheritance relationships that just aren't subtype compatible (e.g. Dictionary and Set), and so we had to use a declared hierarchy that differed from actual underlying hierarchy. At the same time, I was exploring various paths to speeding up Smalltalk (since the type system was not used for optimization), but without the kind of exotic optimistic-inlining technology the Self group used, the obstacle seemed insurmountable. 
The best inlining approach I could come up with without type-feedback was basically a form of a technique the Self group used, customization (copying methods down through inheritance, which means, in this case, that the class can be treated as constant, allowing self-sends to be inlined), but I computed that for the ParcPlace library the best that would do would be to inline about 25% of sends, statically. I suspect other people trying to make Smalltalk faster were running into basically the same problem, and we all thought the Self system had the kind of technology that would eventually solve the problem, but it looked so advanced and complicated that it looked at least 10 years away from commercialization. I think that incredible apparent difficulty was what stopped everyone else from adopting the Self technology. It was just too daunting. The two technologies came together when I started talking to Urs Hölzle, who had finished the second-generation Self compiler (and his Stanford thesis), and was looking for something interesting to do. After reading his thesis on type-feedback, I realized that the type-feedback technology was actually not as conceptually difficult as most people had thought: people had read all the Self papers and been impressed but terrified of it. No one else seemed to pick up on the fact that the type-feedback technology was actually nicely suited for a good, production-quality compiler, although a lot of changes and adaptations were needed compared to the way it was used in Self. So this was a perfect opportunity- with Urs' technology (as well as Lars Bak, who had done a tremendous amount of work on the Self VM and knew its architecture inside and out), we had a type system and a compilation technology, which together were perfectly suited for a great production Smalltalk system, since they were independent of each other. 
This independence was critical, since the system would need to accept untyped as well as typed code, so that people could use the type system as much or as little as they wanted to, without impacting performance. So then we found some other really talented people, and put together a great team (in alphabetical order):
• Lars Bak was the VM wizard.
• Gilad Bracha wrote the typechecker, the reflective interface support, and mixins at the Smalltalk level.
• Steffen Grarup worked not only on the VM, especially the garbage collector, but also on the Smalltalk side, where he wrote the programming environment, as well as the source code manager and other things.
• Robert Griesemer wrote the interpreter, the interpreter generator, most of the compiler, and other VM stuff. (He also wrote an even better compiler than the one running in this version, but it wasn't quite finished enough for us to use for this release - it would have been considerably faster.)
• David Griswold wrote the typed "Blue Book" libraries, and the glyph-based user-interface framework, the widgets and the HTML browser, and also managed the group.
• Urs Hölzle of course worked on the compiler and the tricky inlining infrastructure that it used, and other VM stuff.
• Later, Srdjan Mitrovic joined and did most of the adaptation of the technology to Java.
As mentioned in the introduction, work started on the system in the fall of 1994, and by 1996 the system was working nicely, but then the Java phenomenon happened and we eventually had to switch to Java before ever releasing it. The only public display of the technology was in late 1996, when we had a booth at OOPSLA and got quite a bit of attention. A few people got to evaluate it privately, and got terrific benchmark results (one well-known guy even got a speedup of 12 on some real Smalltalk code), but after that it disappeared from view, as we focused on Java.
As for the future: Strongtalk contains innovations that are still far ahead of virtually any existing mainstream language or VM. Now that Strongtalk is open source, the future is up to you!
Simplify expressions with absolute value sign
Related topics: differential equation applications .ppt | how to solve algebra problem at year 14 | algebra solving software | solving radical expressions | partial fraction decomposition online calculator | 8th grade algebra exercise work books | numbers least to greatest calculator | Grade 9 Math Tree Graphs | multiplying & dividing powers | algebra online quiz | square roots | square root of 56 | saxon algebra 2 answer keys
Author Message
Sammj Sontboj
Posted: Friday 11th of Apr 11:10
Hey Friends, I really hope some math wiz reads this. I am stuck on this test that I have to submit in the next week and I can't seem to find a way to finish it. You see, my tutor has given us this test on simplifying expressions with absolute value signs, the midpoint of a line and converting decimals, and I just can't understand it. I am thinking of going to some private tutor to help me solve it. If one of you friends can lend me a hand, I will be very appreciative.
Registered: 23.05.2003
From: Savannah, GA
Back to top
Jahm Xjardx
Posted: Saturday 12th of Apr 13:32
The attitude you've adopted towards simplifying expressions with absolute value signs is not a good one. I do understand that one can't really think of anything else in such a situation. It's nice that you still want to try. My key to successful problem solving is Algebrator. I would advise you to give it a try at least once.
Registered: 07.08.2005
From: Odense, Denmark, EU
Back to top
Xane
Posted: Monday 14th of Apr 08:01
Algebrator is used by almost every student in our class. Most of the students in my class work part-time. Our teacher introduced this software to us and we all have been using it since then.
Registered: 16.04.2003
From: the wastelands between insomnia and
Back to top
Noobarmj
Posted: Monday 14th of Apr 17:58
Ok, after hearing so much about Algebrator, I think it definitely is worth a try. How can I get hold of it? Thanks!
Registered: 04.04.2006
From: London
Back to top
Momepi
Posted: Tuesday 15th of Apr 12:41
Accessing the program is simple. All you need to know about it is available at https://softmath.com/algebra-policy.html. You are guaranteed satisfaction. And besides, there is a money-back guarantee. Hope this is the end of your hunt.
Registered: 22.07.2004
From: Ireland
Back to top
erx
Posted: Tuesday 15th of Apr 17:40
A truly great piece of math software is Algebrator. Even I faced similar problems while solving scientific notation, dividing fractions and least common denominator problems. Just by typing in the problem from homework and clicking on Solve - a step by step solution to my math homework would be ready. I have used it through several math classes - Remedial Algebra, Pre Algebra and Basic Math. I highly recommend the program.
Registered: 26.10.2001
From: PL/DE/ES/GB/HU
Back to top
Corporate bond valuation, A share of stock is currently selling for $31.80. If the, Finance Basics
1. Valuation - corporate bond
A $1,000 corporate bond with 10 years to maturity pays a coupon of 8% (semi-annual) and the market required rate of return is a) 7.2% and b) 10%. What is the current selling price for a) and b)?
2. Valuation - options
The following information refers to a six-month call option on the stock of XYZ, Inc.
• Price of the underlying stock: $50.
• Strike price of the three-month call: $45.
• Market price of the option: $10.
a) What is the intrinsic value of the option?
b) What is the option's time premium at this price?
3. Valuation - zero-coupon bond
A U.S. Government bond with a face amount of $10,000 with 13 years to maturity is yielding 5.5%. What is the current selling price?
4. A share of stock is currently selling for $31.80. If the anticipated constant growth rate for dividends is 6% and investors are seeking a 16% return, what is the dividend just paid?
5. A $1,000 convertible bond with a conversion price of $50 sells for $1,120 despite the fact that the bond's coupon rate and the market rate are equal. The common stock acquired upon conversion is selling for $54 per share. What is the convertible bond's conversion premium?
Reference No:- TGS024286
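For question 1, the price is the present value of the coupon annuity plus the discounted face value. This is a sketch of that standard formula; the function name and structure are illustrative, not from the original site:

```python
def bond_price(face, coupon_rate, years, ytm, freq=2):
    """Present value of a level-coupon bond.

    face: face value; coupon_rate and ytm: annual rates; freq: coupons per year.
    """
    c = face * coupon_rate / freq          # periodic coupon payment
    r = ytm / freq                         # periodic discount rate
    n = years * freq                       # number of periods
    annuity = (1 - (1 + r) ** -n) / r      # PV factor for the coupon stream
    return c * annuity + face * (1 + r) ** -n

# a) 7.2% required return -> the bond trades at a premium to face value
print(round(bond_price(1000, 0.08, 10, 0.072), 2))  # about 1056.34
# b) 10% required return -> the bond trades at a discount to face value
print(round(bond_price(1000, 0.08, 10, 0.10), 2))   # about 875.38
```

When the required return equals the coupon rate, the same formula returns the face value, which is why the convertible bond in question 5 would sell at par absent the conversion feature.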
Quantum many-body systems all of whose excitations are gapped fall into distinct equivalence classes - quantum phases of matter. Apart from the familiar cases described within the Landau paradigm of symmetry breaking, there are also more interesting phases, for example those exhibiting Integer and Fractional Quantum Hall effects. Such exotic phases are usually called topological phases of matter, since their low-energy properties can often be described by Topological Quantum Field Theory (TQFT). However, this terminology may be becoming obsolete, since the recently discovered fracton phases cannot be described by TQFT. One direction of my work is to understand invariants which distinguish different topological phases. Another one is to classify a particular class of topological phases: invertible phases (including Symmetry Protected Topological phases). I use methods from quantum statistical mechanics, quantum information theory, differential geometry, and homotopy theory.
The Berry connection was discovered by Michael Berry while studying quantum systems with parameters. It refines the work of von Neumann and Wigner, who studied energy level crossings in such systems and showed that in the absence of symmetries they occur in codimension 3. In my work, I study generalizations of the Berry connection for infinite-volume lattice systems. From the field theory viewpoint, these are known as Wess-Zumino-Witten terms. Equivalently, I am studying the topology of the space of gapped quantum lattice systems. The problem of the classification of gapped quantum phases is a special case, since it amounts to finding the set of connected components of the space of gapped systems.
Transport coefficients such as conductivity, heat conductivity, and thermoelectric coefficients are often defined using Kubo formulas. There are many subtleties associated with them, the most basic one being why the limits occurring in these formulas exist.
Consistency with the laws of thermodynamics is also far from obvious. I am also interested in nonlinear transport. Hydrodynamics in a generalized sense describes homogeneous macroscopic systems which are in local equilibrium. Hydrodynamic equations of motion are strongly constrained by the local version of the Kubo-Martin-Schwinger condition. I am interested in classifying varieties of hydrodynamic behavior and finding examples of exotic hydrodynamics in nature.
Ranking Sequence - Verbal Reasoning Multiple Choice questions | EduGoog.com
In a queue of children, Kashish is fifth from the left and Mona is sixth from the right. When they interchange their places among themselves, Kashish becomes thirteenth from the left. Then, what will be Mona's position from the right?
If Atul finds that he is twelfth from the right in a line of boys and fourth from the left, how many boys should be added to the line such that there are 28 boys in the line?
In a row of girls, Shilpa is eighth from the left and Reena is seventeenth from the right. If they interchange their positions, Shilpa becomes fourteenth from the left. How many girls are there in the row?
In a class of 60, where girls are twice that of boys, Kamal ranked seventeenth from the top. If there are 9 girls ahead of Kamal, how many boys are after him in rank?
Aruna ranks twelfth in a class of forty-six. What will be her rank from the last?
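These interchange puzzles reduce to simple position arithmetic (total = position from left + position from right - 1). Here is a quick sketch checking two of the questions above; the variable names are mine:

```python
# Shilpa is 8th from the left; Reena is 17th from the right.  After the
# swap, Shilpa is 14th from the left - i.e. Reena's old seat was 14th
# from the left AND 17th from the right, so:
total_girls = 14 + 17 - 1
print(total_girls)  # 30

# Same trick for the first question: Kashish (5th from left) moves to a
# seat that is 13th from the left and was Mona's, 6th from the right.
total_children = 13 + 6 - 1                    # 18 children in the queue
mona_new_from_right = total_children - 5 + 1   # Mona now sits in Kashish's old seat
print(mona_new_from_right)  # 14
```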
For Numerical Operations, if the child gave the following answer (2.84) for item 20, is this correct?
Article ID: 1444
Last updated: 23 Nov, 2008
YES. If dollar signs are missing they still receive full credit if they have the correct answer with the decimal in the right place.
User Documentation
A monitoring model
ISHM (Inductive System Health Monitoring) is an unsupervised learning algorithm which enables a system to detect anomalies and perform root-cause analysis based on Failure Mode and Effect Analysis (FMEA) in a process. FMEA is a systematic method used for evaluating how a process will fail and the impact of different failure modes.
ISHM Analysis
The objective of ISHM is to provide a knowledge base cluster of related range values for the input parameters. Each cluster defines a range of allowable values for each parameter in a given input vector. Points that are inside the inner center of the cluster are considered to be within the system operating range; those further away can be considered outliers. In DATAmaestro Analytics the following clustering algorithms are implemented:
• K-Means: a method which aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean. This results in a partitioning of the data space based on distance to points in a specific subset of the plane. The number of clusters K has to be defined beforehand. The algorithm cannot be used with arbitrary distance functions or on non-numerical data.
• Subclu: SUBCLU stands for density-connected Subspace Clustering. Subclu uses the concept of the density-based algorithm DBSCAN (Density-Based Spatial Clustering of Applications with Noise). Given a set of points in some space, the algorithm groups closely gathered points into clusters, while points which lie alone in low-density regions are considered outliers.
• IMS: IMS builds small clusters characterized by the min, max and center of the clusters, which become the min, max and center of the ISHM boxes. When a new data point is between the min and max values in each dimension, the point is considered inside the box and the distance is 0.
If the data point is not within any box, it is associated to the nearest cluster based on the min and max Learning set empty Certain models in DATAmaestro are able to handle missing values, while other models are not. For example, clustering methods used by ISHM, like K-means, are not able to handle missing values. If any row of data has a missing value, even for just one variable, the row will need to be ignored by the algorithm. “Learning set is empty” is the message to indicate that all rows have been removed due to one or more missing values per row by the algorithm. If you have any variables with a high number of missing values, it is recommended to remove them or to use the “Fill missing values” tool under the “Transform” menu in DATAmaestro Analytics. Calculation time In DATAmaestro, the calculation time can vary depending on the number of records, number of input variables and the type of algorithm that is being used. For ISHM, for example, Subclu is significantly slower than K-means for larger data sets. If you have a large dataset, it is recommend to use K-means. To launch this model tool, select Models > ISHM from the menu. Create an ISHM Analysis The parameters for this method are defined on two tabs at the top of the page: Properties and Advanced tabs. On the Properties tab: 1. Select a Datasource from the list (if applicable). 2. Enter Model Name. 3. Select a Learning set from the list. In ISHM the learning set if a set of healthy mode operational observations. It maps the system operating range.Use a visualization tool to define the healthy record set. 4. Select a Testing set from the list. The test set should contain all types of records (healthy and non healthy). Once the system detects that the record is out of the healthy mapping clusters it calculated a distance and find which are the cause variables. 5. Select Model type,options Kmeans or Subclu. 6. If Model Type Kmeans: 1. Enter Number of Clusters, default value: 5. 2. 
Enter Maximum number of iterations (Default value: 100). For more information, see Maximum number of iterations. 7. If Model Type Subclu: 8. Enter a Variable prefix. 9. Enter a Distance variable name, default: ISHM-distance. 10. Enter Number of Causes, default value: 3. For more information, see Number of causes. 11. If Model Type IMS: 1. Enter Epsilon (Default value: 0.1). Epsilon is the maximal distance for a point to be in a cluster. A larger value tends to lead to a lower number of clusters. 12. Enter a Variable prefix. 13. Enter a Distance variable name, default: ISHM-distance. 14. Enter Number of Causes, default value: 3. For more information, see Number of causes. 15. Select an Variable Set, if required. 16. Select variable(s) from the list for the Input. 17. Select variable(s) from the list for the Cond (as Condition). 18. Select a variable from the list as an Index. 19. Click Save. On the Advanced tab: 1. Enter a Conditional class count, default 3 based on: cause frequency, cause importance or both frequency and importance. 2. Select Temporal Units, for Trends, options: Excel time, Mac excel time, Unix time (ms) or Unix time (s). 3. Select a Cluster Standardisation. 1. Normalize: transform the variable to have a max of 1 and a min of 0. The value is calculated with: scaled(x) = (x - min)/(max - min) where the min and max values are based on the learning data set. 2. Standardize: transform the variable to have a mean of 0 and a standard deviation of 1. The value is calculated with: scaled(x) = (x - µ)/(STDEV) where the average (µ) and standard deviation (STDEV) values are calculated on the learning data set. 4. Select Keep Predict Output. 1. Keep all: it keeps all predicted output variables namely, ISHM-actual, ISHM-predict, ISHM-predict-high and ISHM-predict-low. 2. Remove predict: it removes the output variables ISHM-predict, which is the average between ISHM-predict-high and ISHM-predict-low. 3. 
Remove predict and actual: it removes the output variables ISHM-predict and ISHM-actual, which is the input value of the variable.
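The IMS box logic described above (inside a box → distance 0, otherwise the distance to the nearest cluster based on min and max) can be sketched in a few lines. This is an illustrative assumption of how such a check might work, not DATAmaestro source code; the function name `ishm_distance` and the `(mins, maxs)` box representation are hypothetical.

```python
def ishm_distance(point, boxes):
    """Distance of a point to a set of ISHM boxes.

    Each box is a (mins, maxs) pair of per-dimension bounds. A point that
    falls between min and max in every dimension is inside a box and gets
    distance 0; otherwise return the Euclidean distance to the nearest box,
    clamping each coordinate to that box's bounds (assumed metric).
    """
    best = float("inf")
    for mins, maxs in boxes:
        d2 = 0.0
        for x, lo, hi in zip(point, mins, maxs):
            if x < lo:
                d2 += (lo - x) ** 2
            elif x > hi:
                d2 += (x - hi) ** 2
        best = min(best, d2 ** 0.5)
    return best  # 0.0 when the point lies inside some box
```

For example, a point at (0.5, 0.5) inside a unit box gets distance 0, while (2, 0.5) lies 1 unit outside it.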
Modular inverse of a matrix

This calculator finds the modular inverse of a matrix using the adjugate matrix and the modular multiplicative inverse.

Previous matrix calculators: Determinant of a matrix, Matrix Transpose, Matrix Multiplication, Inverse matrix calculator

The theory, as usual, is below the calculator.

In linear algebra, an n-by-n (square) matrix A is called invertible if there exists an n-by-n matrix $A^{-1}$ such that
$AA^{-1} = A^{-1}A = E$

This calculator uses the adjugate matrix to find the inverse, which is inefficient for large matrices due to its recursion, but perfectly suits us. The final formula uses the determinant and the transpose of the matrix of cofactors (the adjugate matrix):
$A^{-1} = \frac{1}{\det A}\cdot C^*$

The adjugate of a square matrix is the transpose of the cofactor matrix:
${C}^{*}= \begin{pmatrix} {A}_{11} & {A}_{21} & \cdots & {A}_{n1} \\ {A}_{12} & {A}_{22} & \cdots & {A}_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ {A}_{1n} & {A}_{2n} & \cdots & {A}_{nn} \\ \end{pmatrix}$

The cofactor of $a_{ij}$ is $A_{ij} = (-1)^{i+j}M_{ij}$, where $M_{ij}$ is the determinant of the matrix which is cut down from A by removing row i and column j (the first minor).

The main difference between this calculator and the Inverse matrix calculator is modular arithmetic: the modulo operation is used in all calculations, and division by the determinant is replaced with multiplication by the modular multiplicative inverse of the determinant; refer to the Modular Multiplicative Inverse Calculator.
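As an illustration of the method (an editorial sketch, not the calculator's source code — the function names are hypothetical), the adjugate-based modular inverse can be written in a few lines of Python:

```python
from math import gcd

def minor(m, i, j):
    """Matrix obtained from m by removing row i and column j (first minor)."""
    return [row[:j] + row[j+1:] for k, row in enumerate(m) if k != i]

def det_mod(m, mod):
    """Determinant via cofactor expansion along the first row, reduced mod `mod`.
    Recursive, so only practical for small matrices."""
    if len(m) == 1:
        return m[0][0] % mod
    return sum((-1) ** j * m[0][j] * det_mod(minor(m, 0, j), mod)
               for j in range(len(m))) % mod

def mod_matrix_inverse(m, mod):
    n = len(m)
    d = det_mod(m, mod)
    if gcd(d, mod) != 1:
        raise ValueError("determinant has no modular inverse")
    d_inv = pow(d, -1, mod)  # modular multiplicative inverse (Python 3.8+)
    # Adjugate C* = transpose of the cofactor matrix: C*[i][j] = (-1)^(i+j) * M[j][i]
    adj = [[(-1) ** (i + j) * det_mod(minor(m, j, i), mod) for j in range(n)]
           for i in range(n)]
    return [[d_inv * adj[i][j] % mod for j in range(n)] for i in range(n)]
```

For example, the inverse of [[1, 2], [3, 4]] modulo 5 is [[3, 1], [4, 2]]: the determinant is -2 ≡ 3 (mod 5), whose modular multiplicative inverse is 2.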
4.10) Quiz 8: Answers – Unit of Measure – Edexcel GCSE Maths Foundation

1) 8.6 g/cm^3
2) 1.6 g/cm^3
3) 21.3 g/cm^3
4) 65 g
5) 129.6 g
6) 2400 g
7) 115 g
8) 4.3 g/cm^3
9) a) 110.4 g  b) 22.8 g  c) 3.1 g/cm^3
10) 4.08 g/cm^3

You may use a calculator for this quiz.

1) A 25 cm^3 piece of copper has a mass of 215 g. What is the density of copper? Give your answer in g/cm^3.

2) A block of magnesium has a mass of 102.4 g and a volume of 64 cm^3. What is the density of magnesium? Give your answer in g/cm^3.

3) A 6.4 kg block of platinum has a volume of 300 cm^3. What is the density of platinum? Give your answer in g/cm^3 and give your answer to 1 decimal place.

4) Granite has a density of 2.6 g/cm^3. A lump of granite has a volume of 25 cm^3. What is the mass of the lump of granite?

5) I have a block of aluminium that has a volume of 48 cm^3. Aluminium has a density of 2.7 g/cm^3. What is the mass of this aluminium block?

6) I have a lump of steel that has a volume of 320 cm^3. Steel has a density of 7.6 g/cm^3. What is the mass of the lump of steel? Give your answer to two significant figures.

7) Ice has a density of 0.92 g/cm^3. What is the mass of an ice cube where each length is 5 cm?

8) The shape below is a cuboid made out of a substance. The mass of the cuboid is 1.677 kg. What is the density of the substance? Give your answer in g/cm^3.

9) I compete in a running race and get a medal for finishing. The medal has two different parts: a metal part and a ribbon. The metal part of the medal has a volume of 24 cm^3 and the metal that it is made out of has a density of 4.6 g/cm^3. The ribbon part of the medal has a density of 1.2 g/cm^3 and a volume of 19 cm^3.
a) What is the mass of the metal part of the medal?
b) What is the mass of the ribbon part of the medal?
c) What is the overall density of the medal? Give your answer in g/cm^3 and give your answer to 2 significant figures.

10) A wooden box contains plastic dominos. The density of the box with the dominos is 3.2 g/cm^3 and the volume of the box and the dominos is 900 cm^3. The wooden box has a density of 0.9 g/cm^3. The dominos have a volume of 650 cm^3. What is the density of the dominos? Give your answer in g/cm^3 and to 3 significant figures.
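Every answer above follows from density = mass ÷ volume. As an editorial addition (not part of the original quiz sheet), a few of the answers can be checked in Python:

```python
# density = mass / volume; verify a few of the answers above
assert round(215 / 25, 1) == 8.6          # Q1: copper
assert round(102.4 / 64, 1) == 1.6        # Q2: magnesium
assert round(6400 / 300, 1) == 21.3       # Q3: platinum (6.4 kg = 6400 g)
assert round(0.92 * 5 ** 3) == 115        # Q7: 5 cm ice cube, volume 125 cm^3

# Q10: subtract the box's mass from the total, then divide by the dominos' volume
total_mass = 3.2 * 900                    # 2880 g for box + dominos
box_mass = 0.9 * (900 - 650)              # 225 g for the box alone
assert round((total_mass - box_mass) / 650, 2) == 4.08
```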
SFU Mathematics of Computation, Application and Data ("MOCAD") Seminar: Wuyang Chen Towards Data-Efficient OOD Generalization of Scientific Machine Learning Models In recent years, there has been growing promise in coupling machine learning methods with domain-specific physical insights to solve scientific problems based on partial differential equations (PDEs). However, there are two critical bottlenecks that must be addressed before scientific machine learning (SciML) can become practically useful. First, SciML requires extensive pretraining data to cover diverse physical systems and real-world scenarios. Second, SciML models often perform poorly when confronted with unseen data distributions that deviate from the training source, even when dealing with samples from the same physical systems that have only slight differences in physical parameters. In this line of work, we aim to address these challenges using data-centric approaches. To enhance data efficiency, we have developed the first unsupervised learning method for neural operators. Our approach involves mining unlabeled PDE data without relying on heavy numerical simulations. We demonstrate that unsupervised pretraining can consistently reduce the number of simulated samples required during fine-tuning across a wide range of PDEs and real-world problems. Furthermore, to evaluate and improve the out-of-distribution (OOD) generalization of neural operators, we have carefully designed a benchmark that includes diverse physical parameters to emulate real-world scenarios. By evaluating popular architectures across a broad spectrum of PDEs, we conclude that neural operators achieve more robust OOD generalization when pretrained on physical dynamics with high-frequency patterns rather than smooth ones. This suggests that data-driven SciML methods will benefit more from learning from challenging samples. Event Type Scientific, Seminar
On the Combinatorial Complexity of Approximating Polytopes

Speaker: Guilherme D. da Fonseca, Université Clermont Auvergne.
Date: 14 May 2018, 13h.
Place: Room 407, Bloco H, Campus Gragoatá, UFF.

Abstract: Approximating convex bodies succinctly by convex polytopes is a fundamental problem in discrete geometry. A convex body K of diameter diam(K) is given in Euclidean d-dimensional space, where d is a constant. Given an error parameter ε > 0, the objective is to determine a polytope of minimum combinatorial complexity whose Hausdorff distance from K is at most ε diam(K). By combinatorial complexity we mean the total number of faces of all dimensions of the polytope. A well-known result by Dudley implies that O(1/ε^((d-1)/2)) facets suffice, and a dual result by Bronshteyn and Ivanov similarly bounds the number of vertices, but neither result bounds the total combinatorial complexity. We show that there exists an approximating polytope whose total combinatorial complexity is Õ(1/ε^((d-1)/2)), where Õ conceals a polylogarithmic factor in 1/ε. This is a significant improvement upon the best known bound, which is roughly O(1/ε^(d-2)). Our result is based on a novel combination of both new and old ideas. First, we employ Macbeath regions, a classical structure from the theory of convexity. The construction of our approximating polytope employs a new stratified placement of these regions. Second, in order to analyze the combinatorial complexity of the approximating polytope, we present a tight analysis of a width-based variant of Bárány and Larman's economical cap covering, which may be of independent interest. Finally, we use a deterministic variation of the witness-collector technique (developed recently by Devillers et al.) in the context of our stratified construction.

Ref.: Sunil Arya, Guilherme D. da Fonseca, and David M. Mount; SoCG 2016, 11:1-15, 2016. http://arxiv.org/abs/1604.01175.
Push-down automaton

Pushdown automata are abstract devices defined in automata theory. They are similar to finite automata, except that they have access to a potentially unlimited amount of memory in the form of a single stack. Pushdown automata exist in deterministic and non-deterministic varieties, and the two are not equipotent. Every pushdown automaton accepts a formal language. The languages accepted by the non-deterministic pushdown automata are precisely the context-free languages.

If we allow a finite automaton access to two stacks instead of just one, we obtain a device much more powerful than a pushdown automaton: it is equivalent to a Turing machine.

All Wikipedia text is available under the terms of the GNU Free Documentation License
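To illustrate (an editorial sketch, not part of the original encyclopedia entry): the single stack is exactly what lets a pushdown automaton accept a context-free language such as { aⁿbⁿ : n ≥ 0 }, which no finite automaton can accept. Below is a hypothetical simulation of a deterministic PDA for that language, with a Python list standing in for the stack.

```python
def accepts_anbn(s):
    """Simulate a deterministic PDA accepting { a^n b^n : n >= 0 }."""
    stack = []
    state = "reading_a"
    for ch in s:
        if state == "reading_a":
            if ch == "a":
                stack.append("A")        # push one stack symbol per 'a'
            elif ch == "b" and stack:
                stack.pop()              # match each 'b' against a pushed 'A'
                state = "reading_b"
            else:
                return False             # e.g. 'b' with an empty stack
        elif state == "reading_b":
            if ch == "b" and stack:
                stack.pop()
            else:
                return False             # an 'a' after a 'b', or too many 'b's
    return not stack                     # accept only if every 'a' was matched
```

The machine pushes while reading a's, pops while reading b's, and accepts when the input ends with an empty stack — so "aabb" is accepted while "aab" and "ba" are rejected.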
Topic: Formulas - Need advice

jaychant « on: March 03, 2009, 09:55:43 pm »

For a space game I am working on, I have four stats for each ship: mass, thrust, turn, and thrust_wait. These stats determine the actual performance of the ships. Currently I have in place the following formulas:

dirnum = round(16 * ((mass * 2) / (turn / 2)));
accel = round((thrust * 3) / mass) / (THRUST_WAIT_MAX - (thrust_wait + 1));
max_speed = ((thrust * 2) / mass);

But I'm not sure about them. Can someone just sort of critique my formulas? Thanks!

Elvish Pillager « Reply #1 on: March 03, 2009, 10:16:33 pm »

So, let's see - the more turn_wait you have, the faster you accelerate?

jaychant « Reply #2 on: March 03, 2009, 10:26:07 pm »

You're looking at thrust_wait. There isn't a turn_wait in my game. I wanted to keep huge ships from having a large acceleration, but I couldn't think of a way that made sense, so I added that little bit in. I really think all my formulas suck, which is why I'm asking for advice.

Elvish Pillager « Reply #3 on: March 03, 2009, 10:34:52 pm »

I meant thrust_wait. I'd think that more thrust_wait would give you less acceleration, not more. Also, the "dirnum" formula (whatever that is) is equivalent to the simpler:

dirnum = round(64 * mass / turn);

jaychant « Reply #4 on: March 04, 2009, 12:22:21 am »

dirnum is the number of directions that the ship can face. Higher numbers mean slower turning. thrust_wait is how many frames the game waits before allowing the player to thrust. Acceleration is how much speed is gained each thrust. I think I will probably take that bit out and stop the player from creating super-thrusters another way.

Elvish Pillager « Reply #5 on: March 04, 2009, 02:37:30 am »

...like making large ships not get as much thrust?

jaychant « Reply #6 on: March 04, 2009, 03:55:27 am »

Think of it this way: let's assume the propulsion happens by a series of explosions. If you increase the intensity of the explosion, it therefore costs more energy to produce each explosion, reducing the number of explosions. However, I think I will revert to my original plan and make the thrust_wait also determined by a formula.

The point is not to make large ships not have as much thrust; on the contrary, the idea is that larger ships will have more thrust, because without the extra thrust, the ship would be much too slow and unwieldy to fight in deadly combat. For example, if you take my formula for the top speed with a mass of 500 and a thrust power of 2, you get a maximum speed of 1/500, which is extremely low. I'm going to update the formulas right now, and I will soon have an update.

UPDATE: Here are the new formulas. I actually have more confidence in these:

dirnum = round(64 * (mass / turn));
accel = ((thrust) / (mass * 3));
max_speed = round((thrust * 2) / mass);
thrust_wait = round(THRUST_WAIT_MAX / mass);

Alvarin « Reply #7 on: March 04, 2009, 03:43:16 pm »

To behave more realistically, the relation of propulsion parameters to mass should be squared. Probably the energy level will be constant per engine type, E(k) = (1/2)*m*V^2. Twice more mass, four times less speed. Thrust is not related to mass; it's again an engine property. But, as you can fit a bigger engine into a bigger ship, you could maybe positively tie them - more mass = more thrust.

jaychant « Reply #8 on: March 04, 2009, 04:32:39 pm »

When you say thrust, do you mean acceleration, thrust_wait, or max speed? Also, when you say "propulsion", do you mean acceleration, thrust_wait, or maximum speed? Assuming that formula would be for acceleration, I tried translating it into my game:

accel = (((1/2) * mass * (thrust ^ 2)) / 1000);

Is that what you meant? I assumed that V stood for velocity.

Death 999 « Reply #9 on: March 04, 2009, 04:33:57 pm »

Quote: "Twice more mass, four times less speed."

On the other hand, momentum is the quantity that's actually conserved in this system, and it gives

dP/dt = 0 = engine thrust + (M dV/dt)ship

Rearranging, we get

A = engine thrust / ship mass

You can use the energy formulation if instead of a time step you use a distance step. This is because momentum is force integrated in time, and energy is force integrated in distance. Using a distance step would be really inconvenient for a game, as each particle would update asynchronously. Plus, you'd need to add in an extra step to apportion the energy properly between the exhaust and the ship. All around, just use the momentum approach. In short, you were right the first time.

jaychant « Reply #10 on: March 04, 2009, 04:44:53 pm »

OK, thx. Changed the formula back.

EDIT: These are the current formulas:

dirnum = round(64 * (mass / (turn * 2)));
accel = (thrust / mass);
thrust_wait = round(thrust / 5);

I took out the max_speed formula; it will be defined by the user.

Elvish Pillager « Reply #11 on: March 04, 2009, 09:45:57 pm »

*bows out of thread*

Alvarin « Reply #12 on: March 07, 2009, 08:54:34 am »

Yep, in space inertia would be a better parameter than just the mass. And the scientific side of engine parameters is probably less relevant to an arcade-style game, unless you were going for a physics engine. By "letting the user define top speed", do you mean the actual parameter, or that if the acceleration button is pressed the ship will go faster indefinitely? The latter is more correct, again from a real-world point of view... unless I'm missing something again.
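For reference, the thread's final formulas (Reply #10) drop into a small helper like the sketch below. This is an editorial illustration; the function name `ship_params` and the sample numbers are not from the thread.

```python
def ship_params(mass, thrust, turn):
    """Derived ship stats from jaychant's final formulas (Reply #10)."""
    dirnum = round(64 * (mass / (turn * 2)))  # number of facing directions
    accel = thrust / mass                     # speed gained per thrust (A = F/m)
    thrust_wait = round(thrust / 5)           # frames to wait between thrusts
    return dirnum, accel, thrust_wait

# e.g. a hypothetical mid-weight ship
print(ship_params(100, 20, 32))  # → (100, 0.2, 4)
```

Note how the acceleration line matches Death 999's momentum argument: acceleration is simply engine thrust divided by ship mass.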
Global regularity for the Maxwell-Klein-Gordon equation with small critical Sobolev norm in high dimensions

We show that in dimensions n ≥ 6 one has global regularity for the Maxwell-Klein-Gordon equations in the Coulomb gauge provided that the critical Sobolev norm Ḣ^(n/2-1) × Ḣ^(n/2-2) of the initial data is sufficiently small. These results are analogous to those recently obtained for the high-dimensional wave map equation [17, 7, 14, 12], but unlike the wave map equation, the Coulomb gauge non-linearity cannot be iterated away directly. We shall use a different approach, proving Strichartz estimates for the covariant wave equation. This in turn will be achieved by use of Littlewood-Paley multipliers and a global parametrix for the covariant wave equation constructed using a truncated, microlocalized Cronstrom gauge.
How to use the accuracy check tool

Please refer to the explanations below.

The accuracy check tool is an automated verification of the computed point cloud in Smart Construction Edge. The verification is done by comparing calculated x, y and z coordinates in the resulting point cloud to the corresponding predefined locations on the jobsite that have a verified x, y and z coordinate. These verified coordinates are measured beforehand with survey equipment, such as a GNSS rover.

The accuracy check can be executed within Smart Construction Edge (data processing screen in the Projects tab): after the point cloud is rendered, there is the option to run the check, as shown in the following image:

Be aware that if Edge does not recognize the file format, it will display the following message:

If this last message is displayed, you can either click on 'instructions' to view it within the app or follow the steps below to create an appropriate accuracy check file.

Accuracy check file

The accuracy check requires a reference checkpoint file. This file is a CSV (comma-separated values) file containing checkpoint names and coordinates. A minimum of 5 checkpoints is recommended for a single battery mission to ensure a thorough accuracy check.

1. The checkpoint file should be called 'checkpoints.CSV' and be in the root directory (main folder) of the SD card or USB drive.
2. Checkpoints must be in the same coordinate system and the same units as the point cloud.
   a. Edge supports global coordinate systems and local coordinate systems (EPSG/Geoid, or custom using localization features).
   b. Edge supports meters, feet, and US feet.
3. Checkpoints will be imported at the following level of precision:
   a. 3 decimal digits for meters, feet and US feet
   b. 8 decimal digits for degrees
   c. Any additional decimal digits beyond the above will be discarded. This will not affect the centimetre-level precision of the accuracy check.
Example files

CSV format (ENZ) --> Easting (x coordinate), Northing (y coordinate), Z (height)

Name,E(m),N(m),Z(m)
GPS001,335881.904,440457.002,11.498
GPS002,335882.867,440484.491,10.498
GPS003,335934.134,440455.286,11.723
GPS004,336844.283,439396.335,9.246
GPS005,336881.626,439400.588,9.858
GPS006,336838.404,439448.905,6.359
GPS007,336888.652,439348.810,7.050
GPS008,337626.177,439125.584,20.687
GPS009,337545.164,439138.801,19.187
GPS010,335503.020,440594.849,6.484

For the height, Edge supports meters (m), feet (ft) and US feet (us-ft).

CSV format (degrees, minutes, seconds)

Name,lat(dms),lon(dms),z(m)
cp1,N52 3 37.046533,E10 11 34.936252,199.83
cp2,N52 3 32.608942,E10 11 37.885621,205.787
cp3,N52 3 30.428706,E10 11 36.720154,205.415
cp4,N52 3 31.875640,E10 11 35.686721,202.431
cp5,N52 3 28.766756,E10 11 31.653096,197.927
cp6,N52 3 32.911782,E10 11 33.455756,199.652

In order to provide latitude and longitude in DMS format, the header should include "(dms)" as shown above. The direction is specified by letter (N, E, S, W). There must be spaces between degrees, minutes and seconds. For the height, Edge supports meters (m), feet (ft) and US feet (us-ft).

CSV format (decimal degrees)

Name,lat(dd),lon(dd),z(m)
ch1,50.9443603,4.4424267,13.300
ch2,50.9441110,4.4456740,14.031
ch3,50.9425193,4.4453612,13.978
ch4,50.9416111,4.4445197,14.257
ch5,50.9415229,4.4424393,13.764
ch6,50.9429743,4.4419483,13.256

In order to provide latitude and longitude in decimal degree format, the header should include "(dd)" as shown above. In Europe we only have positive values; in the parts of the world below the equator there would be negative values. Values in decimal degrees format must have 8 digits beyond the decimal place as shown above. For the height, Edge supports meters (m), feet (ft) and US feet (us-ft).

Each of the columns should be separated by a comma; you achieve this by exporting a CSV through Excel. Some European PCs export a CSV with ";" instead of "," as the separation symbol.
You can check this by opening the file in a text editor (word, note pad...) and replace the symbols afterwards. Validating checkpoint file After the import, Edge will validate the checkpoint file and highlight if there is any issues with it. If there is any issue, an error message will be displayed. Checkpoints that are outside the point clouds' horizontal bounds will be excluded from the accuracy check. Edge will display a summary of accuracy check results along with a recommended vertical offset to reduce the error between the point cloud and the checkpoints. The suggested offset will always be the same value as the mean Z error but in the opposite direction. Positive Z error (mean) If the mean Z error is positive then the point cloud is above the checkpoints and the suggested offset will move the point cloud down towards the checkpoints. Negative Z error (mean) If the mean Z error is negative, then the point cloud is below the checkpoints and the suggested offset will move the point cloud up towards the checkpoints. By tapping 'apply suggested offset', the vertical offset tool will automatically shift the point cloud to reduce the error. Checkpoints that exceed the tolerance of 2 sigma from average Z error are considered outliers. Outliers are highlighted in the results and an error message will be shown. The outliers are excluded from the calculation of all values in the mean row at the bottom of the result screen. If there are outliers, the CSV will contain two sets of aggregates. 1. Aggregates including outliers 2. 
Aggregates excluding outliers CSV Headers Checkpoint Name User provided checkpoint name Nearby points found Number of points in point cloud that were found near the checkpoint coordinate (within search radius XY, at any Z value) Checkpoint X Checkpoint coordinate X value Checkpoint Y Checkpoint coordinate Y value Checkpoint Z Checkpoint coordinate Z value Nearest X X value of point in point cloud that is closest (in terms of X axis only) to the checkpoint Nearest Y Y value of point in point cloud that is closest (in terms of Y axis only) to the checkpoint Nearest Z Z value of point in point cloud that is closest (in terms of Z axis only) to the checkpoint Min distance Absolute value of the distance (in 3D space) between checkpoint and nearest point in point cloud Z error (mean) Mean value of error measured in Z direction between checkpoints and points found in point cloud Z error (median) Median of all Z error values. In case of an even number of error values, the average of the two middle error values is returned. Z error (Min) Smallest error measured in Z direction between checkpoint and and points found in point cloud Z error (lower bound) Lower bound (most negative) error measured in Z direction between checkpoint and points found in point cloud Z error (upper bound) Upper bound (most positive) error measured in Z direction between checkpoint and points found in point cloud Std dev Z Standard deviation based on all Z error values. 
Always a positive value.
Mean +3S: the right side of the confidence interval, obtained by adding 3 times the std dev Z to the Z error mean
Mean -3S: the left side of the confidence interval, obtained by subtracting 3 times the std dev Z from the Z error mean

CSV aggregates

RMSE: root mean square error
std dev: standard deviation of the mean Z error of all checkpoints
nearby points found: number of points in the point cloud found near the checkpoint coordinate (within the XY search radius, at any Z value)
mean Z error: mean of the Z error values found for each checkpoint
mean Z error lower bound: mean of the Z error (lower bound) values found for each checkpoint
mean Z error upper bound: mean of the Z error (upper bound) values found for each checkpoint
mean +3S: confidence interval bound; total mean Z error plus 3 times the total Z error std dev

Search radius

The search radius is the radius used around the checkpoint coordinate to find points in the point cloud that are used to determine the accuracy. If the search radius is too high, the algorithm will use points that are far from the checkpoint and report misleadingly poor accuracy. If the search radius is too low, the algorithm will not find enough points. To avoid these issues, the accuracy check feature automatically sets the search radius based on the point cloud density so that an average of 8-13 points can be found.

Density: Low = 0,95 m; Medium = 0,425 m; High = 0,2125 m

Vertical offset tool

The suggested offset from the accuracy check can be applied automatically by simply tapping 'apply offset' on the accuracy check results screen. The vertical offset tool can also be opened at any time from the processing details screen.

Total offset

The processing details screen will always display the total offset that has been applied (compared to the original point cloud). A positive offset means the point cloud is now above the original.
A negative offset means the point cloud is now below the original. To apply an offset manually, the user enters the amount by which they would like to shift the point cloud. The units will match the units of the point cloud. A positive value (e.g. 0,02 m) will move the point cloud up from its current position. A negative value (e.g. -0,02 m) will move the point cloud down from its current position. The screen will always display the current offset from the original in case the user would like to revert. To revert to the original, enter the current offset with the opposite sign; the screen will then indicate that the final offset is 0.
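The arithmetic the accuracy check performs — nearby-point search within the XY radius, per-checkpoint Z errors, aggregates with and without 2-sigma outliers, and the suggested offset (mean Z error with the opposite sign) — can be sketched in Python. This is an illustrative reimplementation of the calculations described above, not Edge's actual code; the function names and data layout are my own:

```python
import math

def z_errors_for_checkpoints(cloud, checkpoints, search_radius):
    """For each checkpoint, look for cloud points within the XY search
    radius (at any Z value) and record the Z error of the nearest point
    (cloud Z minus checkpoint Z, so positive = cloud above checkpoint)."""
    errors = []
    for cx, cy, cz in checkpoints:
        nearby = [p for p in cloud
                  if math.hypot(p[0] - cx, p[1] - cy) <= search_radius]
        if not nearby:
            continue  # checkpoint skipped: no points found within the radius
        nearest = min(nearby, key=lambda p: math.dist(p, (cx, cy, cz)))
        errors.append(nearest[2] - cz)
    return errors

def aggregates(errors):
    """Mean, std dev, RMSE and the mean +/- 3S bounds of a set of Z errors."""
    n = len(errors)
    mean = sum(errors) / n
    std = math.sqrt(sum((e - mean) ** 2 for e in errors) / n)
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    return {"mean": mean, "std": std, "rmse": rmse,
            "mean+3S": mean + 3 * std, "mean-3S": mean - 3 * std}

def accuracy_check(errors, sigma_tol=2.0):
    """Aggregate with and without outliers (beyond sigma_tol std devs from
    the mean) and suggest the offset: the mean Z error, opposite sign."""
    incl = aggregates(errors)
    outliers = [e for e in errors
                if abs(e - incl["mean"]) > sigma_tol * incl["std"]]
    kept = [e for e in errors
            if abs(e - incl["mean"]) <= sigma_tol * incl["std"]]
    excl = aggregates(kept)
    return incl, excl, outliers, -excl["mean"]

# A cloud sitting roughly 0.03 m above five good checkpoints, plus one
# bad checkpoint whose error falls far outside 2 sigma.
errors = [0.031, 0.028, 0.030, 0.033, 0.029, 0.45]
incl, excl, outliers, offset = accuracy_check(errors)
print(outliers)  # only the 0.45 m checkpoint is flagged as an outlier
print(offset)    # negative: shift the point cloud down towards the checkpoints
```

The sign convention matches the article: a positive mean Z error means the cloud sits above the checkpoints, so the suggested offset comes out negative and moves the cloud down. Reverting is then just applying the current total offset with the opposite sign.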
Giggz's 236bhp Race-Tech TD04 Glanza Turbo.

Well folks, thought it was time I got around to doing a thread as I've decided to keep the Glanza now the roads are drying up. It started around a year ago when I got the horn for wanting a turbo car, but with being 22 it was tricky getting something biggish insured. Got a few quotes from my insurance and for a Glanza it didn't change at all so I thought get in. Spent a month looking around and finally picked her up, this is how she was when I bought her:

It came with the following: HKS SSQV BOV, grooved discs on front, painted calipers, Rota alloys, some ExhaustsUK system, xenon headlights.

Needless to say, going from a Saxo VTR to this I thought it'd be slower being a 1.3, how wrong was I lol, grinning like a Cheshire cat whilst driving it home. Gave it a good clean out and polish and got a few more snaps:

Trying to be artistic with this one but the iPhone camera is semi fail.

Had a really good look over the car and ripped the head unit out due to the dodgy wiring from the previous owner.

One of the first things I wanted to do was get the intercooler out on show from behind the bumper. Spoke to Idress and at the time, due to mine being a '96, the bumpers he had in stock wouldn't fit mine BUT he did have the factory optional lip so snatched that up. Also got the ID-Workz cooling panel at the same time and got that fitted. Took it to the local body shop as the front bumper needed respraying due to chips and got it back looking like this:

Around a month later the VTR finally sold so I had a fair bit of cash to play with. In the end I just thought feck it, bigger turbo time. Got speaking to Ricky @ Race-Tech regarding the TD04 kit and decided on that; going from a pretty much standard Glanza to that would be some power increase.
Anyway I popped round for a chat to see what was what, and it turns out he was an old friend, small world. So got booked in and Ricky had the car for just under a week, and I had the following fitted: TD04 turbo (reconditioned), Race-Tech SS ramhorn manifold, Race-Tech SS 2.5" modular downpipe, Race-Tech SS screamer pipe, Race-Tech SS braided oil & water lines, Tial 38mm V-band external wastegate, induction kit right behind the front bumper piggybacked straight onto the turbo, E-Manage Blue (setup and tuned).

That's the only pic I've got on my PC of under the bonnet atm; ignore the comedy catch cans, they'll be gone shortly when I get the bay all cleaned up, as after this winter it's sporting rally slag.

Whilst Ricky had it, I got the 3-pod A-pillar ordered from TM-Developments and got a boost / oil temp / oil pressure gauge fitted. Was a good job I had the car in for the work, as the CT9 that came off had a lot of shaft play and was on its way out.

Got the car back from Ricky at like 11pm so took it straight out, and damn, the difference was unreal, screamer pipe roaring; in the words of Jezza, 'I had a crysis'.

This was just before the UKSC RR day, so decided to get myself down on the rollers. Headed down there and met up with Socks, Steve GT and Ro55ifumi. I didn't half get a shock when it came back after its first run: 236bhp was reached. Not sure who caught the back end on film, but here she is just getting to the end of the run and dumping. Gotta love the HKS chirp.

Graphs from RR:

After looking over the graphs, realised that the wheels had spun slightly on the rollers.

Since then I've kept her how she is due to fundage and the winter weather, just giving her a service and a few little bits like getting the tracking redone.

Plans to happen: fit a proper oil catch can, get the under-bonnet tidied up with coloured pipes, rocker cover sprayed, possibly a Varis front bumper (if she goes through MOT fine without stinging me too much), anti-lift kit, maybe strip the interior out, still undecided on that.
this figure is what i'm hoping for. you on standard injectors still?

Yeah all internals are standard.

by that i'm guessing injectors are standard too. sweet. maybe i'll get mine done by nick and peeps can see standard internals at 1.2 by him.

String, I'd take it to Ricky instead, especially if you're going for 1.2.

Tidy build-up mate, car looks mint and made excellent power too!

Yeh nice to see a build thread off you, have been waiting since last year for this lol, was a gobsmacking result on the day, I remember it well.

was good to meet you giggz, was a good day and you weren't the only one that was surprised, i think u pulled the highest figure on stock internals that day! impressive, was so loud in the unit tho! lol look forward to convoying to a few events this year with u dude!

Glad you're keeping it, I demand some videos now after the teaser on youtube. Your car is also the reason I am travelling to Race-Tech for my mapping, really impressed me at the RR day. Tidy up that bay and a nice front bumper and it will be one sweet car.

Tidy car and impressive results! Are you still running the pcv valve? Is it gutted and running like a breather? And where is the vacuum from the inlet going?

wish i had the guts to turn mine up to 1.2bar on standard internals lol

You'll be fine.

Quote: "Glad you're keeping it, I demand some videos now after the teaser on youtube. Your car is also the reason I am travelling to Race-Tech for my mapping, really impressed me at the RR day. Tidy up that bay and a nice front bumper and it will be one sweet car."
I'll get some vids up once the weather's better, trying to source a better quality cam or get a mate to film a few fly-bys as the dash mount is just crap quality, sounds so bad on that.

Quote: "Tidy car and impressive results! Are you still running the pcv valve? Is it gutted and running like a breather? And where is the vacuum from the inlet going?"
It's still there, will be gutted when I get round to fitting the catch can on. I'll have a look next time I'm under; iirc it goes to the wastegate.

Small update, some things arrived today thanks to LukEp, and thanks to Thorpy I've got an adjustable rear strut brace fitted. Though I'm not sure if it's mounted right? Only bolts onto 2 of the 4 bolts? Won't reach the coilovers? On the way home though can feel it's already made a difference going around corners, car seems to feel a lot stiffer now.

believe socks has same problem with his brace. believe you have to trim the plastic surround mate.

Quote: "believe socks has same problem with his brace. believe you have to trim the plastic surround mate."
Went at it with a Stanley knife earlier and cut the hole big enough. Brace now fitted on all 4 studs.

Treating the car to some new parts today: Civic rad, anti-lift kit, oil catch can and possibly a small rise on the bonnet.

Just got her back. So it's gone from this:

To this, going to get around to cleaning the rest of the bay up at some point:

Not to everyone's taste, but if it helps with cooling, all good:

Also got anti-lift kitted, but iPhone with no flash in this light fails.

awesome. take the front engine lift hook off the block.. bit neater. look forward to seeing it soon!

Looks ten times better mate. Where does the catch can breathe to? and what goes to the PCV inlet on the inlet manifold instead?

looks a lot tidier!

Cheers, will look better when I get around to giving it a good clean.

Quote: "Looks ten times better mate. Where does the catch can breathe to? and what goes to the PCV inlet on the inlet manifold instead?"
VTA for the catch can; piping from the wastegate and BOV meet at a T-piece that feeds in. How it was set up by Ricky when I had the kit put on.
Reviews (Secondary Math)

Rated 5 Stars by Students and Parents. Join Miracle Math for maths tuition, experience a miracle today! Be a part of our success stories!

Secondary Math GCE O-Level E-Math Track Record: Greatest improvement from U grade / F9 to A1. Nothing is Impossible!

“Before I joined miracle math, I was getting Es and Fs. I thought that I was never going to pass math even after working really hard. I even joined multiple math tuition centres to help me improve my grades but my grades remained the same/improved by either 1 or 2 marks. However, once I joined miracle math, my grades improved to As within half a year of joining. I was able to understand the topics better and my teacher was very approachable, so I could ask her questions if I had any doubts. My teacher also makes the class very engaging by letting us play learning games online that help us understand topics or questions better.” Shameera, Bedok View Secondary, Sec 1, A1 for Math (improved from F9 to A1).

“Before joining Miracle Math about 3 months ago, I was getting Cs and Ds for my common tests and weighted assessments. I have joined many tuition centres in my life, in the hope of improving my math. Miracle Math has been the best for me so far. The teacher explains the concepts and questions very well and clarifies our doubts. She also goes the extra mile, teaching us other more convenient ways to solve the questions, leaving us with more things to discover and learn each lesson. After these few months, my math Weighted Assessment 2 had a jump of 29 marks and I attained A1! Miracle Math has made it easier for me to understand and, not only that, has made me approach math with more enthusiasm! thank you miracle math” Karris, Dunman High School, Sec 1, A1 for Math (improved from D7 to A1).

“I highly recommend miracle math tuition centre for their exceptional math tutoring.
With their dedicated tutors, well-structured curriculum, and innovative teaching methods, my math results have significantly improved. The personalised attention and supportive learning environment make it an excellent choice for anyone seeking to excel in math. The tutor goes the extra mile just to make sure we understand the topic.” Gordon, Bedok View Secondary, Sec 1, A1 for Math.

“I have been at Miracle Math tuition for a year already and all I can say is that this is a very good math tuition as the teacher teaches math professionally. The teacher has also made math fun and enjoyable. Thus, I was able to get As overall throughout the whole year for all papers. I highly recommend you to join this tuition centre if you want to get an A for O’level. 🙂” Ming Jun, Maris Stella High School, Sec 2, A1 for Math (improved from B3 to A1).

“Before joining Miracle Math, I was always failing math as I could not grasp certain concepts. However, after joining, my grades went up as the teacher is very patient in explaining and helped me understand the concepts better, resulting in me enjoying lessons and the subject itself. Classes are always fun to attend and since it is a smaller group it allows me to ask more questions without feeling pressured. I was able to attain an A for my O levels because of Miracle Math!” Lok Yee, Anglican High, Sec 4, A2 for Math (improved from F9 to A2), A2 for Additional Math.

“Before I joined miracle math I did horrendously badly at math, failing almost every test, but after joining miracle math I got an A2 for my first exam followed by A1 for my second exam. Compared with other tuitions you may find the lessons hard at first, but the teacher will guide you through it. I have been at miracle math since the start of the year and it has helped my math a lot; overall miracle math is good and it's really helpful for improvement.” Bryan, Bedok Green Secondary, Sec 3, A1 for Math (improved from D7 to A1).
“Before I joined Miracle Math, I was borderline passing even emath and scored straight F9s for amath throughout sec 3. However the teacher at Miracle Math was extremely patient and encouraging, helping me understand numerous concepts and strategies that helped me improve substantially even when I couldn’t get them immediately. Miracle Math ultimately helped me score straight As for emath and finally pass amath in Os.” Xavier, St. Patrick’s School, Sec 4, A2 for Math (improved from C6 to A2), C6 for A-Math (improved from F9 to C6). “Before joining miracle math i was getting Cs all the time and i hated math so much. surprisingly, after joining miracle math, i started to get the motivation to do well in math. the teacher is really supportive and gives us the energy and motivation to do our homework every week. she also calls everyone out to answer some questions, ensuring that every single one of us is not lost and paying attention. i now get no less than an A1 for every math exam and i had even gotten perfect scores, topping the class thrice this year ! the teacher is also really encouraging, she made me and my friends love math a lot. the subject isn’t like the one that i had dreaded and hated for years now, since miracle math has taught me well and made math so easy for me. its absolutely a miracle that my grades have been improving at such a fast rate. im so glad that i had signed up for math tuition here!” Yeonjin, Bedok View Secondary School, Sec 2, A1 for Math (improved from C5 to A1). “After attending Miracle Math at the beginning of Sec 2, I improved from a grade of D7 in Sec 1 to scoring a grade of A1 in Sec 2. I gained confidence as I was able to understand the topics and concepts more clearly. I was also able to display accurate workings and steps in my work. When I encountered concepts that I don’t understand in school, I would be able to ask my Miracle Math teacher for help. 
My teacher is very patient, nice and always tries her best to help me to get the Math workings right and helps me learn in an efficient way. She also always encourages me, allowing me to have more confidence in my work.” Xin Yu, Anglican High School, Sec 2, A1 for Math (improved from D7 to A1). “I think joining this tuition centre was beneficial to me. I was able to regain my ability to pass math after consistently failing it before thanks to joining this tuition centre. The learning environment is conducive for allowing me to learn as much as i can during each lesson. I find the explanations from the teacher easy to understand which allows me to easily make sense out of the various topics. I was also able to make alot of new friends throughout the lessons as we learnt together and helped and encouraged each other to understand topics and do work. I honestly did not think that i would be able to pass again. Im happy to start thinking about getting A1 for math instead of always thinking to fail.” Rubens, St. Patrick’s School, Sec 3, A1 for Math (improved from F9 to A1). “Prior to joining the class last year, I felt lost when studying for Math and A-Math. It has been a year since I joined the class and my grades jumped from an E8 to an A1 for Math for my preliminary examinations. Without the help of Miracle Math, I believe I would be stuck with the same grades. The lessons are very active and allow the student to focus for 2 hours straight with little to no distraction. Every lesson was encouraging and motivating. I highly recommend this centre if you are willing to co-operate with the teacher.” Kashif, East Spring Secondary School, Sec 4, A1 for Math (improved from E8 to A1). “Before joining Miracle Math, I often struggled to get good results for both my EMath and Amath even though I studied so much as I was just memorising the steps of solving the question answering not understanding how to solve it. 
However, after joining Miracle Math I was able to improve my EMath from E8 to A1 and AMath from F9 to B4, thanks to their unique approach to teaching. The teacher's approach is about understanding the question, which is what I struggled with. Moreover, the encouragement the teacher gave led me to reach new heights and made me look at results in a different way: instead of being fearful when looking at the mistakes I made, I actually wanted to learn why I made them. Additionally, I would like to applaud the care the teacher showed us by having conversations with us, making sure that we were alright, asking how she could help us improve, and sending us motivational quotes to keep putting in the effort and not give up easily. The lessons are conducted in a conducive environment that is very good for learning, and the 2 hours could just fly by without me noticing as I was engaged in learning in class. Without a doubt I would definitely recommend this tuition centre, as the approach to learning is very unique and very helpful in a way you can't learn anywhere else.” Zhi Yang, Manjusri Secondary, Sec 4, A1 for Math (improved from E8 to A1).

“I joined miracle math near the start of s4 after getting consistent B’s throughout s3. Though i was new to the class, the teacher was very kind, patient and approachable and would always help students when needed. The teacher gives relevant math tips and explains the solution thoroughly to make sure each and every student understands. The test papers and the notes in the booklets they give are very concise and an excellent tool for revision. I managed to get an A1 for math in o levels thanks to miracle maths. 10/10 recommend” Isacc, St. Patrick’s School, Sec 4, A1 for Math (improved from B4 to A1).

“Last year, when I received my PSLE results for Maths, I got very discouraged and thought I could never do well for Maths.
After going for your lessons, you made Maths fun for me and that inspired me to work harder for Maths. Now, I am doing well for Maths and I also find it fun. Thank you!” Yuraj, St. Gabriel’s Secondary, Sec 1, A1 for Math.

“This class is so engaging and fun. This class helped me pull up my grades so much and I got an A1!” Lynette, Anglican High School, Sec 2, A1 for Math.

“I used to just pass math, but after attending the tuition, I was passing with flying colours. The teachers are nice and the work helps me a lot.” Francisca, Pasir Ris Secondary School, Sec 2, A1 for Math (improved from B4 to A1).

“In Secondary 4, months before my E and A math O levels, I was averaging a B4 and a C5 respectively. Lessons were very resourceful and the teacher's dedication showed in staying back after lessons to clarify my doubts, making sure that I definitely understood the concepts. Eventually for O levels I scored an A1 for Emath and A2 for Amath. There are some lessons which I missed due to competitions; with Miracle Math's online recorded lessons, I managed to get back on track. For those students who are struggling with Emath or Amath, I would strongly recommend enrolling in Miracle Math to excel in Mathematics.” Jonathan, Maris Stella High School, Sec 4, A1 for Math (improved from B4 to A1), A2 for Additional Math (improved from C5 to A2).

“I joined in Sec 2, and I can thank Miracle Math Tuition for my A1 streak. I have always struggled to find interest in math due to its tedious equations and uninteresting formulas. With Miracle Math's help, I am able to find motivation to practise, as well as revise my math, which has helped me maintain good grades. The classes are small so it's easier to make sure we all engage in class discussions, and the concepts taught are very clear-cut. The teacher ensures that we are all paying attention and the materials taught will be very useful during exams.
In 2 hours you will be able to learn a lot, which is what made me enjoy classes every week. Please consider this tuition centre if you want to see quick improvements in math!” Sabrina, Cedar Girls’ Secondary School, Sec 2, A1 for Math.

“Thank you Miracle Math for helping my son, Axel, in his Math, from F9 to B3 for his O-level result he had gotten today! I'm very proud of his achievements – he is eligible to enroll in any course he wishes to. Fantastic job, such a miracle indeed!” – Medelyn (parent) Axel, Broadrick Secondary School, Sec 4, B3 for Math (improved from F9 to B3).

“joining this tuition centre really helped me to gain more interest and confidence in mathematics. math was one of my weaker subjects, and so i'm glad to have joined miracle math as it helped change my perspective and the way i view math as a subject. it provides a conducive and a good learning environment where students feel supported in their learning, and are able to consolidate their learning too. the teacher is extremely supportive and is always willing to help, making lessons more carefree and entertaining. i personally really recommend joining this tuition centre as it has helped me tremendously, in terms of my interest and my grades as well!” Jolie, Tanjong Katong Girls’ School, Sec 2, A2 for Math (improved from B4 to A2).

“Before joining Miracle Math, my math grade was constantly at a D7/E8. But after joining the classes, my grades started to improve gradually to a B3. The materials and worksheets provided were very helpful and the teacher is always willing to clarify any doubts that I had. In comparison to other tuitions, Miracle Math also gave students access to recorded tuition sessions and materials; whenever the student cannot be present for class or wants to revise weak topics, they can simply use the google drive.
Furthermore, teaching is done in a strategic and engaging way which made learning math less mundane and more interesting!!” Shermaine, St Anthony’s Canossian Secondary School, Sec 4, B3 for Math (improved from E8 to B3).

“teacher is supportive and patient when teaching the students” Darion, Ngee Ann Secondary School, Sec 3, A1 for Math (improved from D7 to A1).

“I joined last year during April, and my math has improved greatly, from c5 to a1. This was the first secondary school math tuition I had. The teacher is kind and explains the topics well.” Raelynn, Anglican High, Sec 1, A1 for Math (improved from C5 to A1).

“it really helped me to understand concepts and the classes are very engaging and helped me improve from c6 to full marks!!” Erynn, Anglican High School, Sec 2, A1 for Math (improved from C6 to A1).

“The teacher is very nice and takes the initiative to stay back after class to answer the additional questions I have. I managed to clear my doubts and improved from my WA1 to an A1 in WA2.” Raeanne, Anglican High School, Sec 2, A1 for Math (improved from B3 to A1).

“Miracle Math is the only tuition centre that delivers results! We've tried many branded tuition centres and they disappointingly don't produce results! We enrolled into Miracle Math in 2018 and stayed with them till 2021 when my son finished his ‘O’ level. When my son first started his tuition with them, he was getting a C6 for his E. Math and F9 for his A. Math. After attending their classes, my son consistently improved and managed to get A1 for both his E. Math and A. Math for his ‘O’ level. We are absolutely grateful for their guidance and we can't thank them enough. This is one authentic tuition centre that is committed to helping children with a heart and which is also committed to delivering results! Highly recommended!” – Maureen (parent) Joash, Victoria School, Sec 4, A1 for Math (improved from C6 to A1), A1 for Additional Math (improved from F9 to A1).
“My grades improved from mostly a 60 to a 70 or 75. I’m really glad I joined this tuition. The environment there is also very calm and friendly and moreover my tuition teacher is very patient in clearing our doubts which encourages us to ask more queries we have about the topic. Joining Miracle Math certainly improved my grades from a B to an A. Once again my goal of improving my math was fulfilled when I joined this tuition 🙂 ” Bharathi, Pasir Ris Crest Secondary School, Sec 2, A1 for Math (improved from B3 to A1). “Math has always been the nemesis of my gal. Though she has worked hard and excelled in the rest of her subjects, she mostly only managed a B for Math despite home tuition. This is until we found Miracle Math. I must say the teaching method works. What I am most grateful for is that she no longer fear math. In fact, she looks forward to her lessons and is always motivated to do better. And she finally did, by scoring an A1 for her recent test. Thank you Miracle Math! Pls keep up the good work!” – Eva (parent) Chloe, Tanjong Katong Girls’ School, Sec 1, A1 for Math. “my results improved from a f9 to an a1 thanks to the help of my teacher. the teacher makes every lesson really interesting and helps me understand different methods easily.” Kayla, CHIJ Katong Convent, Sec 2, A1 for Math (improved from F9 to A1). “We are glad that Miracle Math opened a centre near our place. My daughter is in Sec 2 and all along her weakest subject is Math and she failed her Math in Sec 1 and she improved greatly to A2 in Sec 2. Some of her friends or classmates were in other branded learning centres to get that kind of grade with premium. Miracle Math class size is small and more interactive. Even during COVID-19 time when lessons were carried out in zoom, the teacher will WhatsApp me if the student is not submitting the assignment. I am glad her grade has improved and hope she can maintain her standard.” – Ang Arika, Ngee Ann Secondary School, Sec 2, A2 for Math. 
“Before joining Miracle Math, I was usually getting Cs and Bs. However, after being in this tuition for a few months, my E Math grade jumped to an A1 for WA1.” Sophia, Bedok Green Secondary, Sec 4, A1 for Math (improved from B4 to A1). “when i 1st joined in sec 1 i hated math and was getting f9 for my math i thought i was done for math and did not even consider amath to be one of the subj i would choose in sec 2 however miracle math made me enjoy doing math and helped me get an A for math eoy and now i have even considered taking amath that last year i would have never have thought of.” Priselle, Bedok South Secondary School, Sec 2, A2 for Math (improved from F9 to A2). “At first I barely pass my maths. However after joining miracle math for 7 months, I got my first distinction in a long time. The classroom environment is peaceful for studying and the teacher is very approachable. Materials given are also useful and enough for self practice and classes are also being recorded and uploaded so that students can review it. overall it is a great experience at miracle math.” Shannon, Hai Sing Catholic High, Sec 2, A1 for Math (improved from D7 to A1). “At the beginning of the year my math results were not consistent at all because i lacked understanding in certain topic. Miracle Math has wonderful and helpful teachers who has guided me through the thick and thins and now for my final term examination I scored an A2. The learning environment is also very conducive which helps us focus better. The teachers, the help I received, the materials as well as the productive environment help pull up my grades.” Lakshana, Temasek Secondary, Sec 2, A2 for Math. “Miracle Math has been a great tuition centre, with great teachers who are always willing to help if we need the help. The small group size means that the teacher is able to devote more time to helping each student master the topic. 
The teachers are also enthusiastic, approachable and knowledgeable; as such, whenever I have any questions regarding the work or homework, I would always approach the teachers for help. Great job, I love this environment and it has helped me score well for both Math and A-Math. I would recommend this tuition centre to anyone if they need help with Math or A-Math!” Jun Rong, Anglican High School, Sec 4, A1 for Math (improved from B3 to A1), A1 for Additional Math.

“Before joining i was only getting about C6. But after i joined i began to show improvement by scoring A's. The teacher is very approachable which makes asking questions very easy. Under the teacher's guidance, you just have to put in your effort, and you can expect an improvement. Classes are also enjoyable.” Royston, Bedok View Secondary School, Sec 2, A1 for Math (improved from C6 to A2).

“It's never cumbersome to go to Miracle Math for tuition. I receive great care and professional lectures from the teacher and the class environment is very conducive too. My grades have also improved and stabilised with just a few months of lessons. It's always joyful and worth looking forward to when attending tuition.” Kelly, Anglican High, Sec 3, A1 for Math.

“Teacher is very patient and always explains very clearly for all the questions. Advance learning here also makes lessons in school easier. I used to get B for math in Sec 1 but after joining I got A1 all the way from Sec 2 till now Sec 3” Ding Feng, St. Patrick’s School, Sec 3, A1 for Additional Math, A1 for Math (improved from B3 to A1).

“before joining miracle math tuition, i got an undergrade for my end of year exam in secondary one. however, after i joined i could see quite an improvement in my school work.
due to covid , some of us prefer to attend tuition lesson at home (digital lessons) , some might think it’s not as effective as going to the tuition centre physically , but to me i still learn well and there is not much difference between the going to tuition physically or doing the digital plan ; you would be attending tuition with the other students that are going there physically and learn together with them!” Natalia, St Anthony’s Canossian Secondary School, Sec 2, A1 for Math (improved from U to A1). “Before joining miracle math my understanding of math wasn’t very good but after a few lesson and guidance from my teacher, I began to notice the improvements. When it came to my wa2 I finally scored an a1. The methods taught are also very easy to understand and the teacher is very approachable.” Fayth, Anglican High, Sec 2, A1 for Math. “To anyone who asked me how my math was, I would say that math was my most consistent subject – I was always an F9. Having gotten an F9 in my Sec 3 EOYs, many around me encouraged me to drop A math. I was also starting to lose hope in myself, believing that I was inherently bad at math. It was at the beginning of Sec 4 when I joined Miracle Math. The teachers there believed in me when no one else did and would always go the extra mile for me, often staying back after class to answer my questions. The Covid-19 pandemic came and the Miracle Math’s teaching prowess really shone through. They adapted quickly, incorporating technology with education seamlessly. Using a high tech whiteboard, they would record the lessons so that us students could rewatch the lessons as many times are we liked. Not only did my math grades improve tremendously under the guidance of Miracle Math, they also helped me find my confidence in math. They made math less intimidating and gave me hope. I am extremely grateful for Miracle Math for believing in me and giving me a renewed sense of courage to face math head on. 
So, if you too are fearful of math, do not despair; you are not alone. With hard work and the right guidance, you will get there. There are miracles awaiting you at Miracle Math!”
Hui Si, Anglican High, Sec 4, A2 for Math (improved from E8 to A2), B4 for Additional Math (improved from F9 to B4).

“My daughter joined Miracle Math when she was struggling with A-Math and very lacking in confidence in her Math abilities. Miracle Math did a wonderful job teaching and encouraging her, building her confidence in both Maths. Her results at O levels were a huge improvement, especially in A-Math, where she went from failing to a B3 at the Os, and an A2 for E-Math as well! Thank you for your patience and nurturing… she has enjoyed your lessons very much. We are thankful to have found you!” – Serena (parent)
Jadyn, Tanjong Katong Girls’ School, Sec 4, A2 for Math (improved from B3 to A2), B3 for Additional Math (improved from D7 to B3).

“Before joining Miracle Math, I scored only a C6 for my Math WA3. After joining Miracle Math, my Math improved and I obtained an A2 for my year-end exams. Teacher is really approachable and her methods of solving questions are easily understandable. She is also very patient and explains concepts to us until we understand. Thank you for teaching me and being patient with us.”
Yi Xuan, St Anthony’s Canossian Secondary School, Sec 3, A2 for Math (improved from C6 to A2).

“Since joining Miracle Math at the start of my Sec 1, I have seen progressive improvements from B3 to A1. The teaching methods for the syllabus are easy to understand and digest. In addition, Miracle Math provides me with sufficient exercises to study the methods to solve and understand. Teacher is patient and ensures all students in the class understand how to solve the questions. No one is left behind. Teacher will joke in class, creating a fun learning environment. I strongly recommend Miracle Math.”
Jeanette, St Anthony’s Canossian Secondary School, Sec 1, A1 for Math (improved from B3 to A1).
“Before I joined Miracle Math I failed my mid-year examinations in Secondary 2. Mathematics was the only subject that I had a problem with. I was one of the worst in Mathematics in my class. I was almost about to give up on Mathematics because I found it too difficult to understand. However, Miracle Math managed to show me a different perspective towards Mathematics. She was extremely patient and would always ensure that I understood what she was teaching. Teacher made Mathematics very fun for me. Of course, my grades improved, but I never thought they would improve so significantly. My F9 in my mid-years turned into an A1 in my end-of-years, allowing me to get into the stream that I wanted. I am truly thankful to Miracle Math for believing in me and helping me achieve such great results.”
Balraj, St. Joseph’s Institution, Sec 2, A1 for Math (improved from F9 to A1).

“Sometimes the explanations that the teachers give in my school do not really answer my question, or I do not understand what they are saying. After I started going to this tuition centre, the questions and doubts that I had before were all cleared up by the teacher when I asked. I felt that the explanations she gave were clear and easy to understand. If you are someone who is bad at or needs help with math, I would recommend joining this tuition.”
Yuen Zen, Anglican High School, Sec 1, A1 for Math (improved from B4 to A1).

“Before joining Miracle Maths, I just passed my first weighted assignment. But after joining Miracle Maths, I got an A2 for my end of year. I am very glad I got my teacher to teach me, because she is very patient, so thank you.”
Jerica, Loyang View Secondary School, Sec 3, A2 for Math (improved from C6 to A2).

“Before joining Miracle Math, my Math WAs only scored C5 and C6. After joining Miracle Math, within half a year, under the teacher’s motivation and patience during class, I received an A1 for my Math EOY.
The classes were very engaging and fun to go to, and I would highly encourage you to join Miracle Math.”
Mandy, Bedok Green Secondary School, Sec 2, A1 for Math (improved from C6 to A1).

“Before joining Miracle Math, I had constantly struggled with math and usually did not pass. However, after joining for just a short period of time, my grades have improved drastically from F9 to A1. Thank you, Teacher, for always being so patient and willing to teach me. I wouldn’t have done it without your help! :>”
Valerie, Hai Sing Catholic High, Sec 3, A1 for Math (improved from F9 to A1).

“Before joining Miracle Math tuition I felt math was a very common subject, but after joining I feel math is very interesting. Actually, I had just come to Singapore, but Miracle Math helped me catch up in math and helped me score A1 in seven months. Teacher is a very patient teacher. If you have any question, she will explain it patiently and carefully. Thank you Miracle Math for your guidance! 🙂”
Xin Yi, Bedok View Secondary School, Sec 2, A1 for Math.

“I was scoring D7s and C6s before coming to this tuition. However, after a few months of tuition with Miracle Math, I improved tremendously! I went from Ds and Cs to Bs and even As! She is very good at explaining many different concepts to us and breaking them down into understandable chunks of information. It felt like my head was underwater all along, and I could finally see through the murky water into the clear bright world!!! My bad grades have ceased and desisted. She goes through the homework every week and gives quizzes on a regular basis, and I believe this has gone a long way in helping me improve. The environment is also very comfortable: a very spacious room, cooling aircon. Tuition mates are also very friendly, particularly the Thursday 7PM-9PM Sec 2 class of 2020. Overall, class damn good one, confirm plus chop will improve as long as you are engaged!!! 🙂”
Khin, Chung Cheng High School (Main), Sec 2, A1 for Math (improved from D7 to A1).
“Before I attended Miracle Math I had been constantly getting C5 to C6 in Secondary 1, but after I joined Miracle Math in Secondary 2 I managed to score an A1 for my WA1 and full marks for my WA3. And this is because of the teacher who helped me along the way, and I owe my gratitude to her. Whenever I was confused with a question she would always tell me where my mistake was, and even when I still did not understand the question she would be very patient with me until I finally understood. She is very energetic and is also very friendly to everyone in class. I would definitely recommend attending Miracle Math to improve your maths!”
Spencer, Pasir Ris Secondary School, Sec 2, A1 for Math (improved from C6 to A1).

“Math has been my daughter’s weakest subject. After trying out a few tuition centres, I am so grateful to have found Miracle Math. I am confident that Miracle Math is THE BEST tuition centre for my daughter, who is currently in Secondary 1. Not only have I seen an excellent improvement in my daughter’s score for her Math, but I also observed that my daughter is starting to get more comfortable with the subject, and it is all thanks to Teacher’s help. My daughter is now more confident and motivated to practise challenging Math questions on her own. As a parent, I am so relieved by these positive changes. Teacher has been the positive influence in my daughter’s attitude towards the subject. Teacher is passionate and well versed in teaching the subject. She is very patient when my daughter does not understand a certain topic and would help clear her doubts. Thank you very much, Miracle Math, for providing high quality teaching. Looking forward to a continuing partnership with Miracle Math in achieving success for my daughter with the subject!” – Fazielah (parent)
Filzah, Loyang View Secondary School, Sec 1, A1 for Math.

“Teacher is very friendly and she makes sure to teach the students well!
When we don’t understand a topic, she attends to us individually and helps us understand it. She is very organised and has everything planned out well, so there is no buffering time during lessons. I was initially very worried about my math results, but thanks to her, I jumped from an F9 to an A1!”
Ji Ahn, Anglican High School, Sec 2, A1 for Math (improved from F9 to A1).

“Before I joined Miracle Math, my E-Math and A-Math were constantly stuck at either E8 or F9 🙁 Even when I did what my previous two home tutors told me to do, I wasn’t able to improve. I wanted to drop A-Math because I couldn’t do it. However, joining Miracle Math helped me tremendously. Teacher would go through the concepts patiently and would drill them into my head until I got it. I remember being the slowest in class and not even having the courage to ask questions, but Teacher helped me after class, and I think that’s how I improved so much!! Even so, I remember every time she gave us homework she would say ‘do it not because you need to, but because you want to!!’ and somehow it motivated me HAHA… Nevertheless, I completed my O levels scoring A2 in E-Math and B3 in A-Math in just 5 months 🙂 I would definitely encourage joining Miracle Math, as not only did I get my desired results, my hatred for math changed to an interest in it!! THANKS MIRACLE MATH 🙂”
Jan, Anglican High School, Sec 4, A2 for Math (improved from F9 to A2), B3 for Additional Math (improved from F9 to B3).

“Before joining Miracle Math, I was getting E8 or D7 in my math class tests and examinations, and I really didn’t enjoy math lessons in school. However, after attending several sessions of tuition at Miracle Math, I could see a great improvement in my results and I started to actually enjoy doing math. Under Teacher’s guidance, I managed to improve my results from an E8 to a B3 for my end-of-year examinations and also received the Edusave Best Progress Award for this accomplishment!
Overall, I am very happy about my experience at Miracle Math and I hope others can have the same experience!”
Rachel, CHIJ Katong Convent, Sec 3, B3 for Math (improved from E8 to B3).

“Before I started tuition at Miracle Math, I wasn’t doing very well in my math. I was struggling a lot to memorise the formulas, and I was so worried I wouldn’t be able to catch up with my classmates. My mom noticed how stressed I was and found Miracle Math. On my first day at Miracle Math, Teacher was super welcoming and friendly. She asked me to do some questions first as I was early, so I tried them and I was struggling really badly. Teacher saw me struggling and explained the formula to me until I got it right. She was really very patient with me. Ever since the Covid-19 outbreak, we started having online lessons. Miracle Math will record the lessons and send them to you, so if you missed out on anything during the lesson, you can always watch the recording until you understand! I will always re-watch the recording, and I just got back my WA3 and I got an A1! I’m pretty sure that if it were not for Miracle Math, I wouldn’t have gotten my A1.”
Zoanne, Anglican High School, Sec 1, A1 for Math.

“Before I joined Miracle Math, my Mathematics was really bad; I failed the first semester and got a D7. However, after I joined Miracle Math, my grades got better and I got a B overall at the end of Sec 2. Despite not meeting the criteria for taking A-Math, I managed to appeal for A-Math and get into my subject combination in Sec 3. I continued with the tuition, and in the first term I scored A1 for both my E-Math and A-Math. For people who are seeking help in their Mathematics, I would strongly recommend you join Miracle Math. Teacher helps you to better understand concepts and makes Mathematics interesting and fun.”
Jonathon, Bedok View Secondary School, Sec 3, A1 for Additional Math, A1 for Math (improved from D7 to A1).
“I am Jonathan Lim from Hai Sing Catholic, Sec 2 this year. I have attended this tuition since P6, as I was failing math for my P6 mid-year exams. For PSLE, with Miracle Math’s help, I went from D to A. I was very happy! For my 2019 Sec 1 WA2, I barely passed, so I came back to Miracle Math tuition to get some help! With Miracle Math’s help, I jumped grades from D7 to A1 again! I am very satisfied with my results! With their relocation to Heartbeat@Bedok, a more conducive environment to study has been in place for me and my tuition mates. My tuition mates are very fun and engaging, and I interact with them and we constantly discuss during pop quizzes. And with that came good grades! This demonstrates the effectiveness of the small group tuition which Miracle Math has become known for. I would definitely recommend this tuition to students who need help with their math. 🙂 🙂 🙂”
Jonathan, Hai Sing Catholic High, Sec 2, A1 for Math (improved from D7 to A1), A for PSLE Math (improved from D to A).

“Before joining Miracle Math, I got F9 for my Math EOY. After joining, in the span of 2 months my marks went drastically from F9 to A1. I enjoyed going to Miracle Math as the class is so fun, but I still learn many new math concepts, which helps me a lot in school!”
Lok Yee, Anglican High School, Sec 2, A1 for Math (improved from F9 to A1).

“Before joining Miracle Math, my math grades were consistently poor, I had difficulties understanding the concepts of math and I found it hard to pass math. However, on the recommendation of a friend, I found out about Miracle Math and went to sign up for it. Teacher is a very patient and understanding teacher who will explain all the concepts one by one to her students and will not get angry when we are slow in understanding. She makes lessons fun, and under her tuition I gradually started to find my liking for math. Thank you, Teacher, for changing my whole perspective on math!
Miracle Math is really a miracle.”
Xin Zi, Pasir Ris Secondary School, Sec 4, A1 for Math (improved from F9 to A1).

“Before I joined Miracle Math, my parents had tried to send me to another tuition centre in the hopes of helping me improve my Mathematics results from a B, which, disappointingly, was to no avail. No matter how hard I tried to do better in Maths, which I was always weak in because I had always found it difficult to grasp the concepts and solve word problems that I just could not understand, I had always been stuck in my slump at B. However, after joining Miracle Math, although I had newer Math topics to learn since I had just joined Secondary school, my results took a positive turn, and since then I have been scoring A1s and have been enjoying the process of learning new concepts, which was very new to me because I had always thought of Maths as a burden and something I was just forced to do. Of course, all of this could not have been possible without the help and guidance from Teacher, who has not only taught me with care and understanding, but also made lessons fun and more engaging by chatting with us from time to time, so as not to make us too tired, and having a little laugh before continuing on with the lesson. Overall, my experience with Miracle Math has been very positive and enjoyable, and I hope that more people like me who need additional help with Maths will also join Miracle Math and be able to experience the true joy and fun of learning and scoring better for Maths as a result. Lastly, to Miracle Math: thank you very much for helping me improve, and I definitely look forward to more fruitful lessons in the future! 🙂”
Insyirah, Greendale Secondary School, Sec 1, A1 for Math.

“I was doing very badly in math for Sec 1-3, getting C6s in Sec 2 and F9 for my Sec 3 EOY. Taking this tuition helped me improve a lot, from getting F9s to A1 in Sec 4. The environment is fun and yet still very educational.
Despite having to wake up so early on a Saturday, it is something to look forward to, as Teacher is extremely good at teaching and funny. The tuition has really improved my understanding of and foundation in math!”
Jovis, Pasir Ris Secondary School, Sec 4, A1 for Math (improved from F9 to A1).

“Before joining Miracle Maths, my maths was bad. I kept on getting Cs and sometimes failed my exams. But when I joined Miracle Maths, they encouraged me that I could do better in my maths and I could improve. Teacher was patient and explains anything that I am not sure of. She also gives practice papers to practise on, and she would revise the topics she teaches. Teacher always makes the lessons fun and interesting. Joining Miracle Maths helped me tremendously in my studies. Now, because of Miracle Math, I managed to get an A for my WA1. I strongly encourage those who want to improve their maths to join Miracle Math.”
Yu Xuan, CHIJ Katong Convent, Sec 2, A1 for Math (improved from D7 to A1).

“Before joining Miracle Math, I was just an average student, barely passing math. However, just one month after joining Miracle Math, I got an A2 for my mid-years. For my prelims, I scored an A1! Teacher has made math much more enjoyable to learn, and through her lessons I was able to understand many more math concepts easily! Thank you, Teacher, for your guidance! 🙂”
Hao Yue, Pasir Ris Secondary School, Sec 4, A1 for Math (improved from D7 to A1).

“I am Sec 1 this year and I joined Miracle Math 2 months before my exam. Before I joined the class, I usually got A2 or B3 for my examinations, but after I joined Miracle Math my grades improved tremendously and I got A1 for my examination! Tuition is fun!”
Xavier, Beatty Secondary School, Sec 1, A1 for Math (improved from B3 to A1).

“I really benefitted greatly from this tuition centre. Not only did it allow me to jump from a low B3 to an A1, I had improved a lot even after only being in the tuition for 3 months.
I really enjoyed and learnt a lot from this tuition, as Teacher is very patient and explains the sums extremely well. She is a passionate teacher who insists on teaching us the best and always has a bright smile while teaching us. She is also very kind and encouraging, always cheering us on. Being in Miracle Math really helped me to understand the topics well and pushed me on to do well in math. I’m glad I joined Miracle Math tuition.”
Calise, Tanjong Katong Girls’ School, Sec 1, A1 for Math (improved from B3 to A1).

“When I started this tuition I had some basics. But after joining this tuition I understand concepts in a way that is easy for me. And for the end-of-year exam I scored 70/100. I am very grateful for this tuition and I never regret it 🙂 :)”
Yong Si, Bedok South Secondary School, Sec 1, A2 for Math (improved from C5 to A2).

“I’m Secondary One this year and I have been in this tuition since the start of the year 2019, and it’s September now. Teacher is a really great and patient teacher who really helped my grades improve. In my first WA, I scored 11/30. After a few months of this tuition, I scored 18/30 for my WA2, and now I got an A for math, 21/30, for WA3! The improvement is really drastic, and I realised that this was a tuition that was really fun and engaging, to the point where I could still focus and enjoy the tuition yet score really well. This tuition is different from other centres because of the teacher, who, once again, is very kind, patient and very fun, in the sense that lessons are never boring. After a good few months of this tuition, my parents, and myself included, are very pleased with my results, and we certainly recommend anyone who has trouble with Math to join this centre.”
Amber, Temasek Secondary School, Sec 1, A2 for Math (improved from F9 to A2).

“When I first came I was only receiving a score of 56 percent. After half a year, I jumped to a score of 90 percent!
Teacher is a dedicated teacher and is never tired of having to recap concepts. She has a lot of materials for us to do during her class to improve our skills.”
Joash, Victoria School, Sec 2, A1 for Math (improved from C5 to A1).

“Before joining Miracle Math, I used to get F9 for math. After joining Miracle Math, my grades improved from F9 to A1. Teacher is helpful, and class is fun. The teacher helped me to understand and clarify the things that I was not sure of.”
Kah Nyee, Dunman Secondary School, Sec 2, A1 for Math (improved from F9 to A1).

“Before joining Miracle Math I hated math and had joined many math tuitions before that didn’t really help and improve my grades. My dad was very disappointed with my results and thought that tuition didn’t really help and was a waste of money, as I didn’t learn and improve. After getting my PSLE results, my mum knew that I needed math tuition for Sec One, so my mum found out about Miracle Math and I went for it. It turned out to be a really fun and engaging experience! My teacher taught me lots of math concepts like prime factorisation, algebra and many others. Teacher is really nice and dedicated, and I treat her like a friend whom I can share problems and stories with. After a few weeks of tuition I improved leaps and bounds and even volunteered to be the math rep, and I started to love math after that. For my WA2 I got an 80/100, and I did pretty well for WA1 too, so I got an A1 combined. My parents were pretty pleased, and my dad finally saw an improvement in my math! Those who haven’t signed up yet: sign up now! I am very sure you will have an enriching experience! She’s one of the best math tutors you can ever find!”
Keira, CHIJ Katong Convent, Sec 1, A1 for Math.

“Teacher is a very fun and patient teacher! Before I joined Miracle Math I failed math, and in my recent math MYE I got a B3.”
Devon, Edgefield Secondary School, Sec 2, B3 for Math (improved from D7 to B3).
“A teacher who is always familiar with the latest Singapore math syllabus, compared to other learning centres and teachers, and who is able to help any student excel. When in doubt about how to answer a question or question type, she can definitely be of help.”
Wayne, Tampines Meridian Junior College, JC 1, A for H2 Math; Jurong Secondary School, A1 for Math (improved from C6 to A1), B3 for Additional Math (improved from F9 to B3).

“Before I joined this tuition, I had failed my tests, and after I came in, I got 80/100, an A, for my math. The environment here is very calming and very conducive. One of the reasons is that Teacher would go through the topics even before the school teaches us those topics.”
Joshua, Bedok South Secondary, Sec 2, A1 for Math (improved from D7 to A1).

“Miracle Math tuition is, hands down, the best tuition centre I have attended. A dedicated and patient teacher: dedicated to getting the best out of her students, and patient in teaching all the concepts of a chapter properly before advancing to the next one. She never gets tired of recapping the things that students forget or don’t get the first time. It is amazing how she is able to make class so fun and yet we still learn so much. She takes every question asked seriously and is a very hardworking teacher. Would definitely recommend if you want to tremendously improve your Mathematics!”
Jezer, Hai Sing Catholic High, Sec 5, A2 for Math (improved from C6 to A2), A2 for Additional Math (improved from B3 to A2).

“Miracle Math has been my math tutor ever since I came to stay in Singapore. I was in the Secondary One Express stream and my results weren’t good; however, with her guidance, I improved a lot and my end-of-year result was an A.”
Kazu, Springfield Secondary School, Sec 1, A1 for Math (improved from D7 to A1).

“A passionate teacher who teaches me well. The learning environment is excellent for me. Currently I am a Sec 2 Express student who is going to be promoted to Sec 3 Express next year.
Before I went to Miracle Math for tuition, I got 9/20 for my Term 2 class test 2. After I went to Miracle Math, I got 62/100 for my Mid-Year Exams, and I got 81/100 for my End-of-Year Exams, an A1.”
Hui Xuan, Manjusri Secondary School, Sec 2, A2 for Math (improved from D7 to A1).

“Thank you so much for giving me all the guidance I needed in Math! With your help, I got 20/23 for one of the tests, and frankly it would have been impossible if not for you! I sincerely wish to soar to greater heights under your guidance. Thank you for everything! May this miracle last forever!”
Nicholas, Anglican High, Sec 2, A1 for Math (improved from E8 to A1).

“The best maths tutor I’ve ever had! Teacher is incredibly kind, super encouraging, and is always ready to help!”
Pearl, CHIJ Katong Convent, A for A-Level H2 Math, A1 for O-Level Additional Math, A1 for O-Level Math.

“Miracle Math was my math tutor when I was in secondary school. She was always kind, patient, and very encouraging. Tutoring sessions were always fun with her.”
Peam, Ping Yi Secondary School, Sec 2.

“My math tutor while I was in secondary school and JC! I remember always looking forward to her sessions 🙂 She was a very patient and encouraging tutor, exactly the style that I needed in order to do well and have fun at the same time!”
Pam, Victoria Junior College, CHIJ Katong Convent, A for A-Level H2 Math, A1 for O-Level A-Math, A1 for O-Level Math.

“My math tutor in secondary school and JC: she was always patient and encouraging in explaining the concepts to me. Thanks for all your time and effort! 🙂”
Jamie, St Andrew’s Junior College, St. Margaret’s Secondary School, A for A-Level H2 Math, A1 for O-Level Additional Math, A1 for O-Level Math.

Miracle Math Tuition Centre, Heartbeat@Bedok #01-06. By appointment only. Located in front of the vehicle drop-off point at Heartbeat@Bedok, a 5-minute walk from Bedok MRT.

© 2024 Miracle Math Tuition Centre. All Rights Reserved. Registered with the Ministry of Education (MOE), Singapore.
Web 3D Graphics Programming for Beginners

There seems to be a bit of a misconception that libraries like Three.js make 3D graphics programming so easy that you don't need any knowledge of how a 3D graphics engine actually works. While they are certainly much easier than programming raw WebGL code, you really should have some concept of what is happening at that layer, as it informs everything on top. Just reading the Three.js documentation, for instance, assumes a working knowledge of standard 3D graphics math and scene graph structure. If you are new to 3D graphics programming and want to really understand it, I would suggest the following course of study.

A Bit of Math

There's no getting around it: you have to learn a bit of math to really get anywhere. Of course, you'll need a general understanding of geometry and trigonometry. You'll be using both degrees and radians when working with angles, so make sure to memorize the common angles in radians if you haven't already. If you haven't taken a Linear Algebra course, you will need to get comfortable with the Transformation Matrix, as it is fundamental to dealing with points in 3D space. It is worth spending the time to get a solid understanding of this because you will see it everywhere. Don't worry, it's not difficult for these purposes. You just need to understand what transformation matrices are used for and how you use them to transform 3D coordinates between different coordinate bases. The libraries themselves do all the calculations; you just need to know the concepts. You will also want to learn about the problem of Gimbal Lock when using Euler angles and how you get around it by using a rotation matrix or a quaternion. Thankfully, you can pick up these things as you need them. It's normal to bump into things that you have probably never heard of while reading through the documentation. Just take a minute to look things up and see how they are being used, and it all usually makes sense.
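To make the math above concrete, here is a minimal sketch in plain JavaScript (no Three.js; every function name here is illustrative, not library API) showing the degree-to-radian conversion, a rotation matrix applied to a point, and a quaternion producing the same rotation:

```javascript
// Degrees to radians: a 90-degree turn is PI/2 radians.
function degToRad(deg) { return deg * Math.PI / 180; }

// A 3x3 rotation matrix about the Z axis, stored row-major.
function rotZ(theta) {
  const c = Math.cos(theta), s = Math.sin(theta);
  return [
    [c, -s, 0],
    [s,  c, 0],
    [0,  0, 1],
  ];
}

// Transform a point [x, y, z] by a 3x3 matrix: p' = M * p.
function apply(m, p) {
  return m.map(row => row[0] * p[0] + row[1] * p[1] + row[2] * p[2]);
}

// Rotating (1, 0, 0) by 90 degrees about Z should land on (0, 1, 0).
const rotated = apply(rotZ(degToRad(90)), [1, 0, 0]);

// The same rotation as a unit quaternion q = (cos(t/2), axis * sin(t/2)),
// rotating a vector v via the standard expansion v' = v + w*t + q x t,
// where t = 2 * (q x v) and q is the quaternion's vector part.
function quatRotate(axis, theta, v) {
  const h = theta / 2;
  const w = Math.cos(h);
  const [qx, qy, qz] = axis.map(a => a * Math.sin(h));
  const [vx, vy, vz] = v;
  const tx = 2 * (qy * vz - qz * vy);
  const ty = 2 * (qz * vx - qx * vz);
  const tz = 2 * (qx * vy - qy * vx);
  return [
    vx + w * tx + (qy * tz - qz * ty),
    vy + w * ty + (qz * tx - qx * tz),
    vz + w * tz + (qx * ty - qy * tx),
  ];
}

const rotatedQ = quatRotate([0, 0, 1], degToRad(90), [1, 0, 0]);
// Both paths agree up to floating-point error: the matrix and the
// quaternion describe the same rotation, but chained quaternions never
// suffer the gimbal lock that chained Euler angles can.
```

In Three.js the equivalent machinery lives in its `Matrix4` and `Quaternion` classes, which do this arithmetic for you; the point of working it through once by hand is knowing what those classes are actually computing.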
Graphics Vocabulary

In order to learn all the graphics terminology, what everything is and what it means, I would suggest learning how to use Blender very well. This will teach you about Meshes, Materials, Textures, Lights, Animations and more. There are plenty of tutorials out there that will walk you through the whole program. Don't worry if you don't have any artistic talent; it isn't necessary in order to understand what each element does. By becoming familiar with all the elements that make up a 3D scene, you will have good context for all the classes in Three.js, and most any other 3D engine for that matter. Most of the classes have a lot of parameters, and you will appreciate having been exposed to them in Blender instead of having to look up the definitions of five things every time you instantiate a new object. As a side benefit, you will get hands-on knowledge of how to create a 3D scene, which will make it a lot easier to communicate with artists and designers. If you have to create a special tool, for instance, you can use the controls and workflows that they are going to be most familiar with. In fact, you may end up just using Blender's Python API to create whatever they may need.

Once you have a really good understanding of all the parts of a 3D scene, I would recommend learning enough WebGL to get a textured cube drawn to the screen. This will show you enough of the lower-level API to see how the CPU sends data to the GPU to be drawn. This will really help you understand why the Three.js API is structured the way it is. You will intuitively be able to tell which parts are essentially WebGL wrappers and which parts are the higher-level conveniences. Finally, having learned all of this fundamental information, you are more than ready to dive deep into the full Three.js API, or any other 3D engine.

Onward and Upward

3D graphics programming is a very wide field of study that can range from complex light physics to very basic animation.
Having a good grasp of the fundamentals will make any topic at least approachable. Suddenly you will find that listening to a John Carmack talk isn't so foreign. I hope this helps some newcomers to 3D graphics programming build a good foundation. If you want a more complete educational experience that will guide you through every aspect of 3D graphics programming, there are a few books that I recommend. All by David H. Eberly, these are extremely detailed textbooks on every aspect of 3D engines. If you really want to become an expert, I highly recommend you pick them up. They aren't cheap, but they are worth it.
Magnetic Field due to current Homework Help, Questions with Solutions - Kunduz

Magnetic Field due to current: Questions and Answers

- Relative permittivity and permeability of a material are ε_r and μ_r respectively. Which of the following values of these quantities are allowed for a diamagnetic material? (AIEEE 2008)
- A wire at a height of 4 m from the ground carries a current of 100 A from east to west.
- A charged particle with charge q enters a region of constant, uniform and mutually orthogonal fields E and B with a velocity v perpendicular to both E and B, and comes out without any change in the magnitude or direction of v. Express v in terms of E and B. (AIEEE 2007)
- A bar magnet is hung by a thin cotton thread in a uniform horizontal magnetic field and is in equilibrium. The energy required to rotate it by 60° is W. Find the torque required to keep the magnet in this new position.
- A current-carrying loop in the form of a right-angled isosceles triangle ABC is placed in a uniform magnetic field acting along AB. If the magnetic force on the arm BC is F, find the force on the arm AC. (AIPMT 2011)
- A metallic rod of mass per unit length 0.5 kg/m lies horizontally on a smooth inclined plane which makes an angle of 30° with the horizontal. The rod is kept from sliding down by passing a current through it while a magnetic field of induction 0.25 T acts on it in the vertical direction. Find the current flowing in the rod that keeps it stationary. (NEET 2018)
- A bar magnet of length l and magnetic dipole moment M is bent in the form of an arc as shown in the figure. Find the new magnetic dipole moment. (NEET 2013)
- A cylindrical conductor of radius R carries a constant current. Which plot correctly represents the magnitude of the magnetic field B against the distance d from the centre of the conductor? (NEET 2019)
- A rectangular loop of length 20 cm along the y-axis and breadth 10 cm along the z-axis carries a current of 12 A. Find the torque acting on it in the given uniform magnetic field.
- A current loop consists of two identical semicircular parts, each of radius R, one lying in the x-y plane and the other in the x-z plane. If the current in the loop is i, find the resultant magnetic field due to the two semicircular parts at their common centre. (AIPMT Mains 2010)
- State the vector form of the Biot-Savart law for a current-carrying element: dB = (μ₀/4π) I (dl × r̂)/r². (RPMT 2009)
- Two insulated wires of infinite length lie mutually at right angles to each other as shown; currents of 2 A and 1.5 A respectively flow in them. Find the magnetic induction at the point P, at distances 3 cm and 4 cm from the wires.
- Two particles having charges in the ratio 1:2 are projected in a uniform magnetic field with
same momentum If they are projected normally to magnetic field then the ratio of radii of their circular paths will be 1 2 3 w Magnetic Field due to current 1 A series R C circuit is connected to an alternating voltage source Consider two situations a When capacitor is air filled b When capacitor is mica filled Current through resistor is i and voltage across capacitor is V then a i i c V Vb a Subo Doubt s b V Vb d VV 2015 a Magnetic Field due to current 19 There are two squares loops A and B When A moves towards B a current starts flowing in B as shown in figure and current in B stops when A stops moving From that we can infer that Assume loop B is at rest b A a There is a constant current in clockwise direction in A C B There is a varying current in A There is no current in A d There is a constant current in counter clockwise direction in A Magnetic Field due to current FIGURE 1 35 In a certain region of space electric field is along the z direction throughout The magnitude of electric field is however not constant but increases uniformly along the positive z direction at the rate of 105 NC per metre What are the force and torque experienced by a system having a total dipole moment equal to 107 Cm in the negative z direction Magnetic Field due to current A particle of mass m and charge q enters the region between the two charged plates initially moving along x axis with speed v like particle 1 in Fig 1 33 The length of plate is L and an uniform electric field E is maintained between the plates Show that the vertical deflection of the particle at the far edge of the plate is qEL 2m v2 Compare this motion with motion of a projectile in gravitational field discussed in Secti las cic Magnetic Field due to current on external magnetic 4 No field lines exists inside the magnet A magnetic needle lying parallel to a magnetic fiel requires W units of work to turn it through 60 TH magnitude of torque required to maintain t needle in same field at an angle of 30 with fielc 
N mis 1 2W 3 3 W 2 W W 4 3 w Magnetic Field due to current ducting wire bent in the form of a y 4x is shown in figure and current flowing in the wire is 3 A If the wire is placed in a uniform magnetic field B 4k T then magnetic force acting on the wire in newton is y m 1 48 3 96 3 A A 4 x m B 2 48 i 4 96 i Magnetic Field due to current Amagnet is suspended horizontally in the earth s magnetic flux When it is displaced and then released it oscillates in a horizontal plane with a period T If a piece of wood of the same moment of inertia about the axis of rotation as the magnet is attached to the magnet what would the new period of oscillation of the system become b a T I 3 T c 2 T 1 2 d T 2 1 1 Magnetic Field due to current As shown in figure a rectangular loop of length L and breadth b is placed near a very long wire carrying current I The side of the loop nearer to the wire is at a distance a from the wire Find magnetic I flux linked with the loop Hint dx In x Holb L a x a dx L b x L a Magnetic Field due to current Two coils placed near to each other have number of turns equal to 600 and 300 respectively On passing a current of 3 0 A through coil A the flux linked with each turn of coil A is 1 2 x 104 Wb and the total flux linked with coil B is 9 0 x 105 Wb Find 1 self inductance of A 2 The mutual inductance of the system formed by A and B ooo ooo00 A B Magnetic Field due to current A thin wire loop carrying current is placed in a uniform magnetic field B pointing out of the plane of the coil as shown in figure The loop will tend to O B Move towards right Move towards left Contract Expond Magnetic Field due to current A conducting rod of length 1 m and mass 1 kg is suspended by two vertical wires through its ends An external magnetic field of 2 T is applied perpendicular to rod and plane of arrangement The current to be passed through the rod to make tension zero is g 10 m s 0 5 A 15 A 5 A 15 A Magnetic Field due to current The combination of straight and the 
circular wire carrying current i is shown in the figure It consists of a circular arc of radius R and central angle made by the arc is radians and two straight sections whose extensions intersect at the centre C of the arc The magnetic field that the current carrying wires produce at point C is O B Hoi 8 R R out of the plane of the figure OB o out of the plane of the figure SR Magnetic Field due to current lon of radius R in which uniform magnetic field increasing at a constant rate dB dt from the centre is 1 2 3 a The induced electric field at a distance r ar 2 aR 2r ar 2 ar x X X X X X X X X X X xxxx xx X X X X X X X X X X X X XXX for all values of r X for r Rand X for all values of r aR 2r aR for r R for r R 1 r 3 87 Cha In the A is Magnetic Field due to current A magnet of magnetic moment Moscillating freely in earth s horizontal magnetic field makes n oscillations per minu If the magnetic moment is doubled and the earth s horizontal field is also doubled then number of oscillations mac per minute would be 2n 4 2n 2 2n Magnetic Field due to current Question No 19 There are two circular coils of equal areas and equal magnetic flux through them The resistances of the colls are in the ratio 1 2 If the magnetic flux through the coils is switched off then the ratio of the charges flown through the coils will be 2 1 1 3 4 1 1 6 Magnetic Field due to current 13 A particle of mass m and charge q is fastened to one end of length 7 The other end of the string is fixed to the point O The whole system lies on a frictionless horizontal plane Initially the mass is at rest at A A uniform electric field in the direction shown is then switched on Then a the speed of the particle when it reaches Bis 29 El m GET b the speed of the particle when it reaches Bis V m the tension in the string when the particle reaches at Bis qE the tension in the string when the particle reaches at Bis zero 60 A Magnetic Field due to current 14 The ele a distance from the centre d 24n Eor c 6a Ep 15 
Two identical coaxial rings each of radius R are separated by a distance of 3R They are uniformly charged with charges Qand Q respectively The minimum kinetic energy with which a charged particle charge q should be projected from the centre of the negatively charged ring along the axis of the rings such that it reaches the centre of the positively charged ring is a 6a Epr Qq ARE R Qg b d Qq 2TE R 3Qq 4 R Magnetic Field due to current An electron is moving in a circular orbit of radius r makes N rotations per second The magnetic field produced at the centre has a magnitude O HoNe 2r O N e 2r HoNe Magnetic Field due to current 103 How do velocity of a charge particle moving 10 in a circle depends on the radius of the circle if the field is a magnetic b electric A a vor b v o 1 Tr B a vx b v x r C a v x b vxr Tr D a v or b v c Magnetic Field due to current A triangular ent carrying loop i is placed in a uniform and transverse magnetic field as shown in the figure X A 30 C F 1 F 3 120 X X 30 X xB If the magnetic force on the sides AC CD and DA respectively have a magnitude of F F and F3 then 2 F and F have an angle of 120 between them 3 F 3 F Magnetic Field due to current 3 10 6 N A proton and an a particle enter a uniform magnetic field perpendicular with the same speed If proton takes 20 s to make 5 revolutions then the periodic time for the a particle would be 1 5 s 2 8 s 3 4 16 s from origin with a velocity voi in a uniform magnetic 10 s Magnetic Field due to current View in English A proton of mass 1 67 x 10 27 kg and charge 10 x 10 19 C is projected with a speed of 2 x 106 m s at an angle of 60 to the x axis If a uniform magnetic field of 0 104 T is applied along y axis the path of proton s helix with circular O radius 0 2m O radius 01m O radius 0 5m radius 0 7m Magnetic Field due to current A diamagnetic object is hung by a thread from a very sensitive spring balance between the poles of an electromagnet When the magnetic field of the electromagnet is increased 
in magnitude the reading of the spring balance will S Spring balance sample N A slightly decrease B slightly increase C increase or decrease depending on the shape of the object D increase or decrease depending on the direction of the magnetic field bat Magnetic Field due to current An electron enters in a region where magnetic E and electric fields E are mutually perpendicular one another then 1 It will always move in the direction of B 2 It will always move in the direction of E 3 It always possess circular motion also go undeflected Magnetic Field due to current associated with the spin where e is An electron has intrinsic angular momentum I called spin and a permanent magnetic dipole moment M the positive electron charge m is the electron mass and g is a number called g factor An electron with its spin aligned along its initial velocity in the x direction enters a region of uniform magnetic field in the z direction Due to magnetic force the torque on the spin is Mx B and perpendicular to M also perpendicular to I Since torque is perpendicular to I therefore it ensures that the spin turns with initial angular velocity wpwhich is given by the following equation 7 wp x L Due to charge on the e magnetic force also acts on the electron What is the value of wp Magnetic Field due to current A B Consider two very long straight insulated wires oriented at right angles The wires carry currents of equal magnitude in the directions shown in the figure above What is the net magnetic field at point P Hol 2ra x 9 Hol 2 Ho C 2 D Hol na a x y z a La Magnetic Field due to current A rectangular coil 25 cm by 45 cm has 150 turns This coil produces a maximum emf of 75 V when it rotates with an angular speed of 190 rad s in a magnetic field of strength B Find the value of B 0 98 T 0 72 T 0 023 T 0 054T Magnetic Field due to current A very long wire ABDMNDC is shown in figure carrying current i AB and BC parts are straight long and at right angle At D wire forms a circular turn DMND 
of radius R. The AB and BC parts are tangential to the circular turn at N and D. The magnetic field at the centre of the circle is [answer options and figure lost in extraction].
Magnetic Field due to current: Q28, JEE Main 2020 (9 January, Evening). An electron gun is placed inside a long solenoid of radius R on its axis. The solenoid has n turns per unit length and carries a current I. The electron gun shoots an electron along the radius of the solenoid with speed v. If the electron does not hit the surface of the solenoid, the maximum possible value of v is (all symbols have their standard meaning; options partially garbled in extraction): eμ₀nIR/2m, eμ₀nIR/m, …
Magnetic Field due to current: A long solenoid has n = 420 turns per meter and carries a current given by I = 35.0(1 − e^(−1.60t)), where I is in amperes and t is in seconds. Inside the solenoid and coaxial with it is a coil that has a radius of R = 6.00 cm and consists of a total of N = 250 turns of fine wire (see figure below). What emf is induced in the coil by the changing current? Assume the emf is in mV and t is in seconds.
Magnetic Field due to current: A steady current I flows in a small square loop of wire of side L in a horizontal plane. The loop is now folded about its middle such that half of it lies in a vertical plane. Let μ₁ and μ₂ respectively denote the magnetic moments of the current loop before and after folding. Then [answer options lost in extraction].
An infinitely long conductor PQR is bent to form a right angle as shown A current I flows through PQR The magnetic field due to this current at the point M is H Now another infinitely long straight conductor QS is connected at Q so that the current in PQ remaining unchanged The magnetic field at M is now H The ratio H H is given by IM 90 90 IR Magnetic Field due to current A circular coil of radius 20 cm and 20 turns of wire is mounted vertically with its plane in magnetic meridian A smal magnetic needle free to rotate about vertical axis is placed at the center of the coil It is deflected through 45 whe a current is passed through the coil in equilibrium Horizontal component of earth s field is 0 34 x 10 T The curre in coil is 17 1 A 10t 2 6A 3 6 x 10 A 3 4 A 50 Magnetic Field due to current The magnetic circuit of Fig 2 10 has cast steel core The cross sectional area of the central limb is 800 mm and that of each outer limb is 600 mm Calculate the exciting current needed to set up a flux of 0 8 mWb in the air gap Neglect magnetic leakage and fringing The magnetization characteristic of cast steel is given in Fig 2 16 1 400 mm I 500 turns T L Fig 2 10 1 mm 160 mm I I 400 mm 1 IN Magnetic Field due to current c 0 28 A A conducting rod of 1m length and 1kg mass is suspended by two vertical wires through its ends An external magnetic field of 2T is applied normal to the rod Now the current to be passed through the rod so as to make the tension in the wires zero is Take g 10ms 2 a 0 5A 2 Kerala PET 2007 b 15A c 5A d 1 5A flows in a conductor from east to west The nints above the Magnetic Field due to current Current is flowing in a regular hexagon shaped loop of side length a Find expression for magnetic field at the centre Only one correct answer A B C pol 20 TO a 2 gl 70 Magnetic Field due to current A current i flows in a thin wire in the shape of a regular polygon with n sides The magnetic induction at the centre of the polygon when n is R is the radius of its 
Circumcircle 1 Moni tan 2TR 6 llo i T 2 Moni T tan 2TR n Magnetic Field due to current Three identical coils A B and C are placed coaxially with their planes parallel to each other The coil A and C carry equal currents in opposite direction as shown The coils B and C are fixed and the coil A is moved towards B with a uniform speed then B 000 A there will be induced current in coil B which will be opposite to the direction of current in A 2 there will be induced current in coil B in the same direction as in A 3 there will be no induced current in B 4 current induced by coils A and C in coil B will be equal and opposite therefore net current in B will be zero ronches to ring then direction of induced current in ring is
{"url":"https://kunduz.com/questions/physics/magnetic-field-due-to-current/?page=2","timestamp":"2024-11-03T07:03:38Z","content_type":"text/html","content_length":"339984","record_id":"<urn:uuid:d2ed85c2-3b33-41f0-9dea-ef71e567bd39>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00823.warc.gz"}
Top 7 Construction Calculator Apps of 2024

The construction industry requires precision, efficiency and productivity. These calculators are useful for civil engineers, site supervisors, civil engineering students, mechanical engineers, construction project managers, construction store managers, fresher engineers, construction contractors, building contractors, store keepers, site execution engineers, estimation engineers, and many more. Even someone who only needs to do basic home calculations will find these apps useful, and since they can calculate plot areas they are handy for real estate agents as well. All are available on both the Google Play Store and the App Store. Let's explore the top picks that should be in every construction professional's digital toolkit.

1. Construction Calculator A1

A top civil engineering app, Construction Calculator A1 can calculate with both the Imperial and Metric measurement systems, and it also supports a dark theme. It is a free Android application for construction calculations that uses simple tools to simplify calculations for the construction industry, covering almost all kinds of area, estimation, and volume calculations, along with unit converters and a normal calculator. The application is divided into several parts: quantity calculator, area calculator, volume calculator, unit converter, and normal calculator.
Building calculator is divided into the following four categories:

Quantity Calculator includes: Reinforcement Steel Calculator, Steel Weight Calculator, Concrete Calculator (with volume, without volume, circular column), Excavation Calculator, Backfill Calculator, Brickwork Calculator, Tile Calculator, Plaster Calculator, Paint Calculator, Water Tank Capacity Calculator (circular and rectangular), Material Weight, AC Capacity Calculator, Swimming Pool, Solar (Electric), Solar Water Heater, Plywood Calculator, Paver Calculator, Plum Concrete, Rainwater Harvesting, Waterproofing Material Calculator, Shuttering Calculator, Grout Calculator, other quantities, and many more.

Area Calculator includes: Area Measurement, Area Calculator for Land, and area calculators for the Circle, Rectangle, Triangle, Rhombus, L-Plot, Square, Right Angle, Quadrilateral, Sector, Pentagon, Hexagon, Octagon, Trapezoid, and other shapes.

Volume Calculator includes: volume calculators for the Sphere, Cube, Block, Bucket, Semi-Sphere, Cone, Cylinder, Trapezoid, Rectangular Prism, Spherical Cap, Frustum, Hollow Rectangle, Tube, Slope, Parallelepiped, Sliced Cylinder, Barrel, and other solids.

Unit Converter includes: Length, Weight, Area, Volume, Temperature, Pressure, Time, Speed, Fuel, Angle, Force, Power, Density, Fraction to Decimal, and Number to Word.

Features available only in the pro version: Steel Weight Calculator, Steel Footing Calculator, Steel Column Calculator, Steel Beam Calculator, Steel Slab Calculator, Concrete Tube,
Concrete of Gutter, Concrete of Shear Wall, Construction Cost, AAC/CLC Block, Asphalt, Anti-Termite, Gypsum/POP Plaster.

App features: easy and simple to use; default standard values added for simplification; an easy interface for everyone, including non-technical users; accurate results; answers can be shared; fast calculation; almost all construction calculations included; data can be reset after an entry error.

2. Concrete Calculator All in One

As the name suggests, this app specializes in concrete calculations, offering a robust set of tools for estimating the quantity of concrete needed for various structures. It stands out with its ability to switch between the Imperial and Metric measurement systems and its user-friendly design, adorned with a number of customizable themes; choose the color you want. Concrete Calculator All in One is a free Android application that uses simple tools to simplify concrete calculations for the construction industry, divided into parts such as the quantity calculator and mix design.

Why Choose Concrete Calculator Pro?
• Versatile Measurement Systems: Easily switch between Imperial and Metric measurement systems for global compatibility.
• Customizable Themes: Personalize your experience with a variety of color themes.
• Comprehensive Calculations: From quantity estimation to mix design, our app covers all facets of concrete calculation.
• User-Friendly Interface: Designed for professionals and beginners alike, ensuring accuracy and simplicity in calculations.

Features at Your Fingertips:
• Extensive Calculation Categories: Including Columns, Footings, Beams, Slabs, Roads, Culverts, Staircases, Walls, and more.
• Robust Mix Design Support: Adapt to global standards with mix designs from British, Asian, Indian, Canadian, and Australian standards, plus the option to add your own.
• In-Depth Testing Tools: Evaluate cement quality, fresh and hardened concrete, aggregates, and more with comprehensive testing modules.
• Knowledge Hub: Enhance your expertise with study materials on concrete, cement, and aggregates, and a dedicated quiz section to test your knowledge.
• BOQ & Document Generation: Easily create and customize Bill of Quantities (BOQ) documents with integrated calculations.
• Added Conveniences: Save favorites, share results, and access a scientific calculator for all your calculation needs.

3. Brick and Plaster Calculator

Discover Precision with Every Estimate – The Brick & Plaster Calculator app is an essential tool for civil engineers, construction professionals, and DIY enthusiasts. Precision-engineered, it enables users to accurately determine the number of bricks and the amount of plaster needed for any construction project, ensuring efficiency and accuracy.

Designed for the Industry's Best – Specifically crafted for civil engineers, site supervisors, construction project managers, store managers, new engineers, contractors, site execution engineers, estimation engineers, and more. It's also ideal for individuals embarking on home projects, offering simplicity and precision for all skill levels.

• Accurate & Detailed: Effortlessly calculate bricks, mortar, plaster, and cement for any project.
• Multiple Materials: Compatibility with Fly Ash, Clay, AAC, CLC, Sand Plaster, and more.
• Beyond Bricks: Calculate volume, mortar needs, cement bags, and cost estimates.
• Easy to Use: A user-friendly interface suitable for beginners and professionals alike.
• Powerful Features: Includes options for Stretcher, Header, English, and Flemish bonds.
• Bonus Tools: Equipped with a scientific calculator, customizable themes, standard values, and answer-sharing capabilities.
• Keywords: brick calculator, plaster calculator, construction app, masonry calculator, quantity estimation, brickwork, mortar, cement, bonds, DIY, civil engineering.

Comprehensive Calculations at Your Fingertips – Seamlessly navigate through construction projects with a broad range of calculations, including:
• Fly Ash Brick, Clay Brick, AAC Block, CLC Block
• Sand Plaster, Gypsum, POP
• Volume of Walls, Number of Bricks, Mortar Dry Volume
• Cement Bags, Sand Calculator, Brick Cost
• Bonds: Stretcher, Header, English, Flemish

4. Civil Rebar, BBS Calculator

Rebar is your one-stop shop for accurate & fast reinforcement calculations. Get detailed bar bending schedules, weight estimates, and more for slabs, columns, beams, footings, & retaining walls. Ideal for civil engineers, contractors, students, & homeowners.

Key Features:
• Comprehensive Bar Bending Schedules: Easily generate detailed bar bending schedules for slabs, columns, retaining walls, footings, and beams. Our intuitive interface simplifies complex calculations, ensuring precision and saving valuable time.
• Advanced Steel Weight Calculator: Calculate the weight of reinforcement steel with unmatched accuracy. From footing steel calculations to slab steel estimations, this feature covers all aspects of construction steel requirements.
• Versatile Application: Ideal for a wide range of professionals, including mechanical engineers, construction contractors, building contractors, site execution engineers, estimation engineers, and store managers. Real estate agents can also benefit from plot area calculations.
• User-Friendly Design: With its easy-to-use interface, the Rebar & BBS Steel Calculator is accessible to both technical and non-technical users.
Achieve accurate construction calculations with a few taps.
• Robust Quantity Calculations: Our app provides detailed insights into the total and individual lengths of bars, the weight of reinforcement steel, and more, ensuring comprehensive project planning and execution.
• BBS Shapes Codes & Tensile Strength Testing: Stay ahead with BBS shapes codes calculations and tensile strength testing features, enabling you to adhere to the latest construction standards.

5. House Construction Cost

Easily navigate through house construction planning with our free app. Effortlessly calculate material quantities, work costs, and the overall construction budget. Perfect for DIY home builders and construction professionals. Embark on your house construction journey with "House Construction Cost," the definitive free tool designed to simplify the planning and budgeting of your construction projects. Whether you're crafting your dream home from the ground up or a seasoned constructor, our app provides intuitive solutions to calculate material quantities and estimate the costs of various construction phases.

Key Features:
• Comprehensive Material Quantity Calculator: Precisely calculate the needed quantities of essential materials like cement, sand, aggregates, steel, paint, bricks, and more for each construction phase.
• Versatile Work Cost Estimator: Gain accurate estimates of expenses for crucial construction activities, including excavation, foundation laying, RCC work, and finishing touches.
• In-Depth Calculation Reports: Access detailed reports offering insights into the cost, quantity, and distribution of materials and labor required for your project.
• Visual Cost Analysis: Utilize dynamic bar and pie charts to visually break down your construction budget and manage your finances more effectively.
• Project Management with Gantt Chart: Plan and track your construction timeline, from initial design to final touches, with an easy-to-use Gantt chart feature.
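The cost roll-up these features describe is, at bottom, quantities multiplied by unit rates and summed per phase. A minimal sketch of that arithmetic in Python (the phase names and rates below are illustrative assumptions, not figures from the app):

```python
# Toy construction budget: each phase lists (quantity, unit_rate) pairs;
# per-phase cost and the grand total are straight sums.
# Phase names and rates are made-up examples, not app data.

def budget(phases):
    """Return (per_phase_costs, grand_total) for {phase: [(qty, rate), ...]}."""
    per_phase = {
        name: sum(qty * rate for qty, rate in items)
        for name, items in phases.items()
    }
    return per_phase, sum(per_phase.values())

phases = {
    "excavation": [(120, 8.0)],               # 120 m3 at 8.0 per m3
    "foundation": [(45, 110.0)],              # 45 m3 of concrete at 110.0 per m3
    "finishing":  [(300, 12.0), (60, 25.0)],  # painted area plus tiled area
}
per_phase, total = budget(phases)
print(per_phase, total)  # total is 11010.0
```

The per-phase dictionary is exactly what a pie-chart breakdown like the one described above would be drawn from.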
Engage & Plan Efficiently: Our app goes beyond mere calculations, offering a graphical representation of your project's financial and scheduling needs. From visualizing your budget allocation with pie charts to tracking progress with Gantt charts, planning your construction project has never been easier.

6. Metal Calculator All in One

This metal weight calculator works with both the Imperial and Metric measurement systems, and it supports a number of themes; choose the color you want. Metal Calculator is a free application for metal quantity calculations that uses simple tools to simplify the work and covers almost all kinds of metal weight, with the application divided into parts by shape and type of metal.

Metal Calculator includes: pipe, square bar, T-bar, beam, channel, angle, flat bar, sheet, hexagonal bar, triangular bar, and triangular pipe weight calculators, for materials including steel, aluminum, magnesium, cobalt, nickel, tin, lead, zinc, cast iron, copper, glass, and coal.

App features: easy and simple to use; default standard values added for simplification; an easy interface for everyone, including non-technical users; accurate results; fast calculation.

7. Unit Converter A1

Unit Converter is an Android application for unit conversions. It uses simple tools to simplify day-to-day measurement conversions and covers almost all kinds of unit conversions. The application is divided into three parts: Basic Converter, Living Converter, and Science Converter.
Basic Converter includes: Length, Weight, Area, and Volume converters.
Living Converter includes: Temperature, Time, Speed, and Fuel converters.
Science Converter includes: Pressure, Force, Power, Angle, Digital, and Frequency converters.

App features: easy and simple to use; default standard values added for simplification; an easy interface for everyone, including non-technical users; accurate results; fast calculation.

For anyone involved in construction, from professionals to DIY enthusiasts, these apps are not just helpful; they're essential tools that can dramatically improve the accuracy and efficiency of your work. Whether you're estimating materials, planning projects, or needing quick conversions on the go, I highly recommend installing these top construction-related apps on your device. Available on both the Google Play Store and the Apple App Store, they're designed to make your life easier and your projects more successful. Don't miss out on the benefits they offer: explore these apps and install them today to take your construction work to the next level.
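As a closing illustration of what an app like Unit Converter A1 automates: a unit converter typically reduces to a factor table keyed on a base unit, converting into the base and back out. The sketch below uses standard SI length factors; the app's internal design is not public, so this is only an assumed implementation of the idea:

```python
# Minimal length converter: every unit maps to metres, so any conversion
# is two table lookups. Factors are the standard exact SI/imperial values.
TO_METRES = {
    "mm": 0.001, "cm": 0.01, "m": 1.0, "km": 1000.0,
    "in": 0.0254, "ft": 0.3048, "yd": 0.9144, "mi": 1609.344,
}

def convert_length(value, from_unit, to_unit):
    """Convert `value` from one length unit to another via metres."""
    return value * TO_METRES[from_unit] / TO_METRES[to_unit]

print(convert_length(100, "ft", "m"))  # ≈ 30.48
```

The same two-lookup pattern extends to weight, area, volume, and the other categories these apps list; only temperature needs offset handling instead of a pure factor.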
{"url":"https://constropedia.com/top-7-construction-calculator-apps-of-2024/","timestamp":"2024-11-11T07:00:58Z","content_type":"text/html","content_length":"78996","record_id":"<urn:uuid:7b4b64b5-f09d-41b7-9df7-56203e46b0fa>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00826.warc.gz"}
On the merger rate of primordial black holes: effects of nearest neighbours distribution and clustering

One of the seemingly strongest constraints on the fraction of dark matter in the form of primordial black holes (PBH) of O(10) M_⊙ relies on the merger rate inferred from the binary BH merger events detected by LIGO/Virgo. The robustness of these bounds depends however on the accuracy with which the formation of PBH binaries in the early Universe can be described. We revisit the standard estimate of the merger rate, focusing on a couple of key ingredients: the spatial distribution of nearest neighbours and the initial clustering of PBHs associated to a given primordial power spectrum. Overall, we confirm the robustness of the results presented in the literature in the case of a narrow mass function (which constrain the PBH fraction of dark matter to be f_PBH ≲ 0.001–0.01). The initial clustering of PBHs might have an effect tightening the current constraint, but only for very broad mass functions, corresponding to wide bumps in the primordial power spectra extending at least over a couple of decades in k-space.

Journal of Cosmology and Astroparticle Physics
Pub Date: October 2018
Subjects: Astrophysics - Cosmology and Nongalactic Astrophysics; High Energy Physics - Phenomenology
Comments: Extended computations and results reported in Sec. 4, clarifications added (notably in Sec. 4 and 5), several typos corrected. Results unchanged. Matches version to appear in JCAP
{"url":"https://ui.adsabs.harvard.edu/abs/2018JCAP...10..043B/abstract","timestamp":"2024-11-11T22:01:42Z","content_type":"text/html","content_length":"39801","record_id":"<urn:uuid:92d82b2e-5edc-4f88-80ac-8b9096f8e5ee>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00250.warc.gz"}
Alexander Poremba I am a postdoctoral researcher at MIT, hosted by both Vinod Vaikuntanathan and Peter Shor. I am affiliated with the Computer Science & Artificial Intelligence Laboratory (CSAIL) and the Department of Mathematics. I received my PhD from Caltech, where I was fortunate to have been advised by Thomas Vidick. My research lies at the intersection of quantum computation and cryptography. Contact: poremba (at) mit (dot) edu Office: Stata Center, 32-G678
{"url":"https://www.mit.edu/~poremba/","timestamp":"2024-11-08T05:22:35Z","content_type":"text/html","content_length":"14280","record_id":"<urn:uuid:a8ed22b8-3ab4-4f09-b9d2-ca056f73afbf>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00137.warc.gz"}
Matching string prefixes using a prefix-trie 2

After discussing the general problem of matching string prefixes two weeks ago, we started on the implementation of a prefix-trie to solve the problem efficiently last week. Our implementation so far is able to quickly construct a prefix-trie from a list of strings. What is still missing is any kind of functionality to query the data structure for information. Today we will take a look at how to use this data structure to enumerate all strings with a given prefix, and how to obtain the longest common prefix in that list.

Quick recap

Let us quickly remember what a prefix-trie is, and how we implemented it. As you can see in this example diagram, a prefix-trie is a tree, where each edge or arc represents a character, and each node represents a string. The root is the empty string, and each other node is the string composed of the characters on the arcs from the root to the node. Some nodes – including all leaves – are flagged to represent words, or members of our input string collection. We implemented this data structure using a single class to represent nodes. Each node has a string property of the value it contains, which is null if the node is not a valid value and simply represents a prefix. Further, each node contains a dictionary from characters to nodes. This represents the outgoing arcs and the respective children of the node. A leaf of our trie does not have a dictionary and the respective property is null instead. In code, this is what we have to work with:

class Node
{
    private readonly string value;
    private readonly Dictionary<char, Node> children;
}

If you are interested in how we build the prefix-trie data structure from a simple list of strings, make sure to read last week's post where we design and implement the algorithm step by step.
Getting the node for a given prefix

A prerequisite for the actual methods we are interested in is the ability to get the node that represents a given prefix. This can be done fairly easily by starting with the root and then following the arcs of the trie according to the characters in the prefix. We can implement this algorithm either recursively or iteratively. In code:

Node getNode(string prefix)
{
    return this.getNode(prefix, 0);
}

Node getNode(string prefix, int index)
{
    if (index == prefix.Length)
        return this; // node for prefix found
    Node child;
    if (this.children == null || !this.children.TryGetValue(prefix[index], out child))
        return null; // no node found, invalid prefix
    return child.getNode(prefix, index + 1);
}

Node getNode(string prefix)
{
    var node = this;
    foreach (var character in prefix)
    {
        if (node.children == null || !node.children.TryGetValue(character, out node))
            return null; // no node found, invalid prefix
    }
    return node;
}

Recursion often results in more elegant solutions than iteration. In this case I quite like both solutions, and leave it up to you which you prefer. Personally, I would most likely go for the iterative approach, since it is both shorter and, more importantly, easier on our stack. That being said, the compiler might very well be smart enough to detect the tail recursion and might then optimise by compiling to iterative machine code. No matter which approach we take however, with this method in place we are now equipped to tackle our real questions.

All strings with prefix

Implementing a method that returns all strings starting with a given prefix is now almost trivial – almost. We first use our previous method to get the node that represents the prefix, and then simply have to enumerate all values in the sub-trie of which this node is the root. That enumeration however is not necessarily trivial.
A recursive approach would be implemented something like this – in pseudo code:

if (this.value != null)
    enumerate this.value
if (this.children == null)
    stop
foreach (child in this.children)
    enumerate all values in child

And in fact we can implement this as follows:

IEnumerable<string> enumerateAllValues()
{
    if (this.value != null)
        yield return this.value;
    if (this.children == null)
        yield break;
    foreach (var s in this.children.Values
        .SelectMany(c => c.enumerateAllValues()))
        yield return s;
}

This works fine. But there is one caveat. Notice how we have to flatten the collections returned by each node's children using SelectMany, and how we then enumerate the entire result again simply so that in the end we end up with a single enumerable. Maybe this is efficient enough for some, but I would like to improve our solution and get rid of the many-fold stacked yield returns.

Better enumeration

One thing we could do is enumerate all values into a list, instead of trying to return them lazily, and then return the entire list.

IEnumerable<string> enumerateAllValues()
{
    var list = new List<string>();
    this.enumerateTo(list);
    return list;
}

void enumerateTo(List<string> list)
{
    if (this.value != null)
        list.Add(this.value);
    if (this.children == null)
        return;
    foreach (var child in this.children.Values)
        child.enumerateTo(list);
}

I consider this a much cleaner approach. However, it came at a large cost: notice how we construct the entire output before we return it. Depending on what we want to do with the results, this might need an unnecessary amount of space, and maybe we do not even care to enumerate the entire result in the first place. In either case our solution would benefit from deferred execution. But can we add this without going back to our stacked yield returns? Yes we can, by taking an iterative approach. Instead of representing the movement through the tree using our implicit recursive calls, we will now navigate to other nodes explicitly in a loop.
To make sure we do not lose track of where we are in the tree – which is easily possible since nodes do not know their parents – we need to keep a data structure representing our path from the prefix node to the current one. For this a stack's push and pop functionality is exactly what we need, so we will use one. Note how this approach still does not use constant memory; however, we moved from memory linear in the size of the output to memory linear in the length of the strings. For collections of many relatively short strings, this is a great improvement. The implementation of this algorithm is fairly straightforward:

IEnumerable<string> enumerateAllValues()
{
    var stack = new Stack<Node>();
    stack.Push(this);
    while (stack.Count > 0)
    {
        var node = stack.Pop();
        if (node.value != null)
            yield return node.value;
        if (node.children == null)
            continue;
        foreach (var child in node.children.Values)
            stack.Push(child);
    }
}

Using our data structure, this is the most efficient and clean solution possible. Unfortunately – by using an explicit stack, instead of an implicit call-stack – we add some bulk to our method which the recursive approach lacks. The only thing that could tempt me to take another look at our recursive approach would be the addition of a yield foreach functionality to C#.

Extend prefix

Now that we have a list of all strings starting with a given prefix, we could use its result to find the longest shared prefix within that list. The result would be the longest unique extension of the original prefix, which can come in handy for things like typing suggestions and auto completion. Instead of taking this long – and inefficient – route however, we can do much better by exploiting the properties of our data structure. Notice how a node representing a longest common prefix is either a leaf, or has exactly two properties: 1. it has no value itself; 2. it has more than one child node. If it does have a value, it trivially is itself the longest common prefix, since proceeding in its sub-trie would mean excluding itself.
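The explicit-stack traversal above translates directly into other languages. Below is a minimal Python sketch of the same idea, written as a generator so deferred execution is preserved; the `(value, children)` tuple layout used for nodes is an assumption made purely for illustration, standing in for the Node class above.

```python
def iter_values(root):
    """Lazily yield every stored value in the sub-trie below root.

    Nodes are assumed to be (value, children) pairs, where value is None
    for pure prefix nodes and children is a dict (or None for leaves).
    """
    stack = [root]
    while stack:
        value, children = stack.pop()
        if value is not None:
            yield value
        if children:
            stack.extend(children.values())

# A tiny hand-built trie storing "to" and "tea".
root = (None, {"t": (None, {"o": ("to", None),
                            "e": (None, {"a": ("tea", None)})})})
print(sorted(iter_values(root)))  # -> ['tea', 'to']
```

Because `iter_values` is a generator, a caller that only wants the first match can stop the iteration early without the whole sub-trie being visited.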
Further, if it has only a single child node, the prefix of the current node plus the character of the arc to its child is also a valid prefix, and it would be longer than the first one. Using these two rules we can then find the longest common prefix we are looking for: at first we again get the node of the given prefix using our very first method from above. Then we try to extend that prefix by walking through the tree until we find a node that conforms to our criteria and therefore is the node we are looking for. In the meantime we only have to keep track of all the characters we encounter and then return them as a string.

string extendPrefix(string prefix)
{
    var node = this.getNode(prefix);
    if (node == null)
        return null;
    return prefix + node.longestPrefixAddition();
}

string longestPrefixAddition()
{
    var builder = new StringBuilder();
    var node = this;
    while (node.value == null && node.children.Count == 1)
    {
        var arc = node.children.First();
        builder.Append(arc.Key);
        node = arc.Value;
    }
    return builder.ToString();
}

Note how I did not bother writing a recursive method in this case, even though it is entirely possible. I think that the iterative approach is clear enough, and to have good string-building performance we need to use a StringBuilder in either case, which we would then have to hand down to the recursive call. After implementing the construction of prefix-tries last week, we took a look at different methods of querying the data structure today. I hope this has been interesting, or even useful to you. As always, let me know what you think or if you have any questions in the comments below. Enjoy the pixels!

Reference: Matching string prefixes using a prefix-trie 2 from our NCG partner Paul Scharf at the GameDev<T> blog.
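The prefix-extension walk can be sketched the same way. The following Python version is illustrative only, again assuming the hypothetical `(value, children)` tuple representation rather than the C# Node class:

```python
def extend_prefix(root, prefix):
    """Return the longest unique extension of prefix, or None if invalid.

    Nodes are (value, children) pairs: value is None for pure prefix
    nodes, children maps characters to child nodes (None for leaves).
    """
    node = root
    for ch in prefix:  # descend to the node representing the prefix
        children = node[1]
        if not children or ch not in children:
            return None  # invalid prefix
        node = children[ch]
    out = [prefix]
    # Walk on while the node has no value and exactly one outgoing arc.
    while node[0] is None and node[1] and len(node[1]) == 1:
        (ch, node), = node[1].items()
        out.append(ch)
    return "".join(out)

# Trie storing "car" and "cat": the unique extension of "c" is "ca".
root = (None, {"c": (None, {"a": (None, {"r": ("car", None),
                                         "t": ("cat", None)})})})
print(extend_prefix(root, "c"))  # -> ca
```

The loop stops as soon as a node either terminates a word or branches, which is exactly the two-rule criterion described above.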
{"url":"https://www.dotnetcodegeeks.com/2015/10/matching-string-prefixes-using-a-prefix-trie-2.html","timestamp":"2024-11-11T07:12:50Z","content_type":"text/html","content_length":"233388","record_id":"<urn:uuid:810cf1ec-71d1-49d4-928e-7d746a520ca4>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00001.warc.gz"}
Notices by Faculty Boards, etc. - Cambridge University Reporter 6441 Notices by Faculty Boards, etc. Annual meetings of the Faculties Clinical Medicine The Chair of the Faculty Board of Clinical Medicine gives notice that the Annual Meeting of the Faculty will be held at 2 p.m. on Monday, 14 November 2016, in the Committee Room, School of Clinical Medicine (Room 202, Level 2, Bay 13). The business of the meeting will include a report by the Chair and the election of members of the Faculty Board of Clinical Medicine in class (c) in accordance with Regulation 6 of the General Regulations for Faculties (Statutes and Ordinances, p. 584) and Regulation 1(c) of the General Regulations for the Constitution of the Faculty Boards (Statutes and Ordinances, p. 585), to fill three vacancies. Two will be filled to 31 December 2020 by holders of an NHS appointment at Consultant level, who are certified by the Faculty Board to give instruction to clinical medical students; and one will be filled to 31 December 2020 by a University Officer in the Faculty who is not a Professor or Associate Lecturer. Nominations for these elections, signed by the proposer who must be a member of the relevant constituency mentioned above, and by the nominee indicating willingness to serve if elected; and notice of any other business, should be sent to: The Secretary of the Faculty Board of Clinical Medicine, School of Clinical Medicine, Box 111, Cambridge Biomedical Campus, Cambridge, CB2 0SP, to arrive no later than 12 noon on Friday, 4 November 2016. The Chair of the Faculty Board of Economics gives notice that the Annual Meeting of the Faculty will be held at 2 p.m. on Monday, 14 November 2016, in the Meade Room, Austin Robinson Building, Sidgwick Avenue. 
One of the items of business will be to elect three members of the Faculty Board in class (c) (two members to serve for four years from 1 January 2017 and one member to serve for three years from 1 January 2017), in accordance with Regulation 1 of the General Regulations for the Constitution of the Faculty Boards (Statutes and Ordinances, p. 585). Nominations in writing, signed by the proposer and seconder, together with an indication of the nominee’s willingness to serve, should reach the Secretary, Marie Butcher (email: mab30@cam.ac.uk), Faculty of Economics, Austin Robinson Building, Sidgwick Avenue, by 12 noon on Tuesday, 8 November 2016. It would be helpful if notice of any other business that members wish to be discussed were sent in writing to the Secretary by 10 a.m. on Friday, 4 November 2016.

Mathematical Tripos, Part III, 2017

The Faculty Board of Mathematics gives notice that, in accordance with Regulations 16 and 17 for the Mathematical Tripos (Statutes and Ordinances, p. 369), there will be set in 2017, if candidates desire to present themselves therein, a paper in each of the subjects in the following list. The duration of each paper is shown beside it.

102 Lie algebras and their representations – 3 hours
103 Representation theory – 3 hours
105 Analysis of partial differential equations – 3 hours
106 Functional analysis – 3 hours
107 Elliptic partial differential equations – 3 hours
108 Topics in ergodic theory – 3 hours
109 Combinatorics – 2 hours
113 Algebraic geometry – 3 hours
114 Algebraic topology – 3 hours
115 Differential geometry – 3 hours
119 Category theory – 3 hours
121 Topics in set theory – 3 hours
125 Elliptic curves – 3 hours
128 Algebras – 3 hours
129 Introduction to additive combinatorics – 2 hours
130 Ramsey theory – 2 hours
131 Riemannian geometry – 3 hours
132 Riemann surfaces and Teichmüller theory – 3 hours
133 Geometric group theory – 2 hours
134 Linear systems – 2 hours
135 Logic – 3 hours
136 Local fields – 3 hours
137 Modular forms and L-functions – 3 hours
201 Advanced probability – 3 hours
202 Stochastic calculus and applications – 3 hours
203 Schramm-Loewner evolutions – 2 hours
205 Modern statistical methods – 3 hours
206 Applied statistics – 3 hours
207 Biostatistics – 3 hours
210 Topics in statistical theory – 2 hours
214 Percolation and random walks on graphs – 2 hours
215 Mixing times of Markov chains – 2 hours
216 Bayesian modelling and computation – 3 hours
217 Gaussian processes – 2 hours
301 Quantum field theory – 3 hours
302 Symmetries, fields, and particles – 3 hours
303 Statistical field theory – 2 hours
304 Advanced quantum field theory – 3 hours
305 The standard model – 3 hours
306 String theory – 3 hours
307 Supersymmetry – 2 hours
308 Classical and quantum solitons – 2 hours
309 General relativity – 3 hours
310 Cosmology – 3 hours
311 Black holes – 3 hours
312 Advanced cosmology – 3 hours
314 Astrophysical fluid dynamics – 3 hours
315 Extrasolar planets: atmospheres and interiors – 3 hours
316 Planetary system dynamics – 3 hours
317 Structure and evolution of stars – 3 hours
320 Galactic astronomy and dynamics – 3 hours
321 Dynamics of astrophysical discs – 2 hours
322 Binary stars – 2 hours
324 Quantum computation – 2 hours
326 Inverse problems in imaging – 3 hours
327 Distribution theory and applications – 2 hours
328 Boundary value problems for linear PDEs – 2 hours
329 Slow viscous flow – 3 hours
331 Hydrodynamic stability – 3 hours
332 Fluid dynamics of the solid Earth – 3 hours
336 Perturbation methods – 2 hours
337 Convection and magnetoconvection – 2 hours
338 Optical and infrared astronomical telescopes and instruments – 2 hours
339 Topics in convex optimization – 2 hours
340 Topics in mathematics of information – 3 hours
341 Numerical solution of differential equations – 3 hours
342 Biological physics and complex fluids – 3 hours
343 Quantum fluids – 2 hours
344 Theoretical physics of soft condensed matter – 2 hours
345 Environmental fluid dynamics – 2 hours

The Faculty Board reminds candidates and Tutors that requests for papers to be set on additional subjects should be sent to the Secretary of the Faculty Board, c/o the Undergraduate Office, Faculty of Mathematics, Wilberforce Road (email: faculty@maths.cam.ac.uk) not later than 9 November 2016.

Natural Sciences Tripos, Part III (Astrophysics) and Master of Advanced Studies in Astrophysics, 2016–17

The Director of the Institute of Astronomy gives notice that the following courses will be available for examination in 2017:

Three-unit lecture courses
These papers, from Part III of the Mathematical Tripos, will be taken in June. Each will be examined by a written paper of three hours’ duration.
301. Quantum field theory
309. General relativity
310. Cosmology
311. Black holes
312. Advanced cosmology
314. Astrophysical fluid dynamics
315. Extrasolar planets: atmospheres and interiors
316. Planetary system dynamics
317. Structure and evolution of stars
320. Galactic astronomy and dynamics

Two-unit lecture courses
These papers, from Part III of the Mathematical Tripos, will be taken in June and will be examined by a written paper of two hours’ duration.
321. Dynamics of astrophysical discs
322. Binary stars
337. Convection and magnetoconvection
338.
Optical and infrared astronomical telescopes and instruments

These papers, from Part III of the Natural Sciences Tripos (Physics), will be taken at the start of the Lent Term and will be examined by a written paper of two hours’ duration. Each paper will consist of three questions of which candidates will be required to answer two; all questions carry equal weight.
Paper 1/PP. Particle physics
Paper 1/PEP. Physics of the earth as a planet
Paper 1/RAC. Relativistic astrophysics and cosmology

One-unit lecture courses
These papers, from Part III of the Natural Sciences Tripos (Physics), will be taken at the start of the Easter Term and will be examined by a written paper of one and a half hours’ duration. Each paper will consist of three questions of which candidates will be required to answer two; all questions carry equal weight.
Paper 2/FSU. Formation of structure in the universe
Paper 2/FOA. Frontiers of observational astrophysics

It is recommended that candidates take the equivalent of four 3-unit lecture courses. At least nine units should be selected from the recommended list of courses above. Up to three units may be chosen freely from Part III of the Mathematical Tripos (and need not be relevant to astrophysics), or the allowed list of courses from Part III Physics in the Natural Sciences Tripos, or a mixture of both. The courses offered in Part III of the Mathematical Tripos vary from year to year and may be found in their lecture listing at https://www.maths.cam.ac.uk/system/files/partiiiweb_8.pdf. The allowed courses from Part III Physics may be found at http://www.ast.cam.ac.uk/students/undergrad/part_iii/lectures/. Students may be examined in up to a maximum of fifteen units in addition to their compulsory project. Students should consult the Part III Course Co-ordinator for guidance about choice of courses.
Natural Sciences Tripos, Part III (Physics) and Master of Advanced Studies in Physics, 2016–17

The Head of the Department of Physics gives notice that the following Major Topics, Minor Topics, and types of further work will be available for examination in 2017.

Major Topics
These papers will be taken at the start of the Lent Term. Each Major Topic will be examined by a written paper of two hours’ duration. Each paper will consist of three questions, of which candidates will be required to answer two; all questions carry equal weight. Candidates are required to take a minimum of three papers. The titles of the papers are as follows:
Paper 1/AQC. Advanced quantum condensed matter physics
Paper 1/BIO. Biological physics
Paper 1/RAC. Relativistic astrophysics and cosmology
Paper 1/PP. Particle physics
Paper 1/PEP. Physics of the Earth as a planet
Paper 1/TQM. Theories of quantum matter
Paper 1/AOP. Atomic and optical physics
Candidates may replace one Major Topic with the paper Quantum field theory (Paper 1/QFT) from Part III of the Mathematics Tripos (taken in June).

Minor Topics
These papers will be taken at the start of the Easter Term. Each Minor Topic will be examined by a written paper of one and a half hours’ duration. Each paper will consist of three questions, of which candidates will be required to answer two; all questions carry equal weight. Candidates who are not replacing Minor Topics by other work, as specified below, are required to take a minimum of three papers. The titles of the papers are as follows:
Paper 2/EXO. Exoplanets
Paper 2/FSU. Formation of structure in the universe
Paper 2/FOA. The frontiers of observational astrophysics
Paper 2/GFT. Gauge field theory
Paper 2/MP. Medical physics
Paper 2/NOQL. Non-linear optics and quantum states of light
Paper 2/CP. Colloid physics
Paper 2/PT. Phase transitions
Paper 2/PNS. The physics of nanoelectronic systems
Paper 2/QI. Quantum information
Paper 2/SQC.
Superconductivity and quantum coherence

Each paper or piece of further work listed below may replace one Minor Topic:
• A Long Vacation Project (2/LVP) (based on pre-approved project work undertaken during the previous Long Vacation)
• The Entrepreneurship option (2/ENP), which is examined by coursework
• The paper ‘Advanced quantum field theory’ (2/AQFT) from Part III of the Mathematical Tripos (examined in June)
• The examination papers ‘Nuclear power engineering’ (2/4M16) and ‘Mathematical biology of the cell’ (2/4G1) from Part IIb of the Engineering Tripos (examined at the start of the Easter Term)
• The Interdisciplinary papers in ‘Materials, electronics, and renewable energy’ (2/IDP3); ‘Atmospheric chemistry and global change’ (2/IDP1); and ‘Climate dynamics and critical transitions’ (2/IDP2) (all examined in the second half of the Easter Term)
Where candidates take more than three Major Topics, the examiners will use the best three results in determining the class; where candidates take more than three Minor Topics, the examiners will use the best three results in determining the class: all marks will appear on the transcript.
{"url":"https://www.admin.cam.ac.uk/reporter/2016-17/weekly/6441/section4.shtml","timestamp":"2024-11-02T18:56:23Z","content_type":"application/xhtml+xml","content_length":"62083","record_id":"<urn:uuid:ffda1701-825b-4010-bfe9-ba6e489da65e>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00612.warc.gz"}
Bio.SubsMat package

Module contents

Substitution matrices, log odds matrices, and operations on them. This module provides a class and a few routines for generating substitution matrices, similar to BLOSUM or PAM matrices, but based on user-provided data. The class used for these matrices is SeqMat. Matrices are implemented as a dictionary. Each index contains a 2-tuple, which are the two residue/nucleotide types replaced. The value differs according to the matrix’s purpose: e.g. in a log-odds frequency matrix, the value would be log(Pij/(Pi*Pj)) where:

Pij: frequency of substitution of letter (residue/nucleotide) i by j
Pi, Pj: expected frequencies of i and j, respectively.

The following section is laid out in the order by which most people wish to generate a log-odds matrix. Of course, interim matrices can be generated and investigated. Most people just want a log-odds matrix, that’s all.

Generating an Accepted Replacement Matrix

Initially, you should generate an accepted replacement matrix (ARM) from your data. The values in ARM are the _counted_ number of replacements according to your data. The data could be a set of pairs or multiple alignments. So for instance if Alanine was replaced by Cysteine 10 times, and Cysteine by Alanine 12 times, the corresponding ARM entries would be: [‘A’,’C’]: 10, [‘C’,’A’]: 12. As order doesn’t matter, the user can already provide only one entry: [‘A’,’C’]: 22. A SeqMat instance may be initialized with either a full (first method of counting: 10, 12) or half (the latter method, 22) matrix. A full protein alphabet matrix would be of the size 20x20 = 400. A half matrix of that alphabet would be 20x20/2 + 20/2 = 210. That is because same-letter entries don’t change (the matrix diagonal). Given an alphabet size of N:

Full matrix size: N*N
Half matrix size: N(N+1)/2

If you provide a full matrix, the constructor will create a half-matrix automatically.
If you provide a half-matrix, make sure of a (low, high) sorted order in the keys: there should only be a (‘A’,’C’), not a (‘C’,’A’).

Internal functions:

Generating the observed frequency matrix (OFM)
Use: OFM = _build_obs_freq_mat(ARM)
The OFM is generated from the ARM, only instead of replacement counts, it contains replacement frequencies.

Generating an expected frequency matrix (EFM)
Use: EFM = _build_exp_freq_mat(OFM, exp_freq_table)
exp_freq_table should be a freqTableC instantiation. See freqTable.py for detailed information. Briefly, the expected frequency table has the frequencies of appearance for each member of the alphabet.

Generating a substitution frequency matrix (SFM)
Use: SFM = _build_subs_mat(OFM, EFM)
Accepts an OFM, EFM. Provides the division product of the corresponding values.

Generating a log-odds matrix (LOM)
Use: LOM = _build_log_odds_mat(SFM[, logbase=10, factor=10.0, roundit=1])
Accepts an SFM. logbase: base of the logarithm used to generate the log-odds values. factor: factor used to multiply the log-odds values. roundit: default - true. Whether to round the values. Each entry is generated by log(LOM[key])*factor and rounded if required. In most cases, users will want to generate a log-odds matrix only, without explicitly calling the OFM –> EFM –> SFM stages. The function build_log_odds_matrix does that. The user provides an ARM and an expected frequency table. The function returns the log-odds matrix.

Methods for subtraction, addition and multiplication of matrices:
• Generation of an expected frequency table from an observed frequency matrix.
• Calculation of linear correlation coefficient between two matrices.
• Calculation of relative entropy is now done using the _make_relative_entropy method and is stored in the member self.relative_entropy
• Calculation of entropy is now done using the _make_entropy method and is stored in the member self.entropy.
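As a rough illustration of the ARM -> OFM -> EFM -> SFM -> LOM pipeline described above, here is a simplified Python sketch. It is not the Bio.SubsMat implementation: the two-letter alphabet and counts are invented, and the factor of two applied to off-diagonal expected frequencies (each unordered pair can occur two ways) is a conventional choice assumed here.

```python
import math

def log_odds_matrix(arm, exp_freq, logbase=10, factor=10.0):
    """Turn an accepted-replacement (count) matrix into a log-odds matrix.

    arm:      {(i, j): count} with keys sorted low-high (half matrix)
    exp_freq: {letter: expected frequency}, summing to 1
    """
    total = sum(arm.values())
    lom = {}
    for (i, j), count in arm.items():
        obs = count / total                  # OFM entry: observed frequency
        exp = exp_freq[i] * exp_freq[j]      # EFM entry: expected frequency
        if i != j:
            exp *= 2                         # unordered off-diagonal pairs occur two ways
        ratio = obs / exp                    # SFM entry
        lom[(i, j)] = round(factor * math.log(ratio, logbase))
    return lom

# Invented counts over a toy two-letter alphabet {A, C}.
arm = {("A", "A"): 60, ("A", "C"): 20, ("C", "C"): 20}
lom = log_odds_matrix(arm, {"A": 0.6, "C": 0.4})
print(lom)  # -> {('A', 'A'): 2, ('A', 'C'): -4, ('C', 'C'): 1}
```

Positive entries mark pairs replaced more often than chance predicts, negative entries less often, which is the same interpretation as BLOSUM-style scores.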
• Jensen-Shannon distance between the distributions from which the matrices are derived. This is a distance function based on the distributions’ entropies.

class Bio.SubsMat.SeqMat(data=None, alphabet=None, mat_name='', build_later=0)
Bases: dict
A generic sequence matrix class. The key is a 2-tuple containing the letter indices of the matrix. Those should be sorted in the tuple (low, high), because each matrix is dealt with as a half-matrix.

__init__(self, data=None, alphabet=None, mat_name='', build_later=0)
The user may supply:
☆ data: the matrix itself
☆ mat_name: its name. See below.
☆ alphabet: an instance of Bio.Alphabet, or a subclass. If not supplied, the constructor builds its own from that matrix.
☆ build_later: skip the matrix size assertion. The user will build the matrix after creating the instance.
The constructor builds a half matrix filled with zeroes.

Calculate and set the entropy attribute.
Return sum of the results.

format(self, fmt='%4d', letterfmt='%4s', alphabet=None, non_sym=None, full=False)
Create a string with the bottom-half (default) or a full matrix. The user may pass their own alphabet, which should contain all letters in the alphabet of the matrix, but may be in a different order. This order will be the order of the letters on the axes.

print_full_mat(self, f=None, format='%4d', topformat='%4s', alphabet=None, factor=1, non_sym=None)
Print the full matrix to the file handle f or stdout.

print_mat(self, f=None, format='%4d', bottomformat='%4s', alphabet=None, factor=1)
Print a nice half-matrix. f=sys.stdout to see on the screen. The user may pass their own alphabet, which should contain all letters in the alphabet of the matrix, but may be in a different order. This order will be the order of the letters on the axes.

__sub__(self, other)
Return integer subtraction product of the two matrices.

__mul__(self, other)
Element-wise matrix multiplication. Returns a new matrix created by multiplying each element by other (if other is scalar), or by performing element-wise multiplication of the two matrices (if other is a matrix of the same size).

__rmul__(self, other)
Element-wise matrix multiplication.
Returns a new matrix created by multiplying each element by other (if other is scalar), or by performing element-wise multiplication of the two matrices (if other is a matrix of the same size).

__add__(self, other)
Matrix addition.

class Bio.SubsMat.SubstitutionMatrix(data=None, alphabet=None, mat_name='', build_later=0)
Bases: Bio.SubsMat.SeqMat
Substitution matrix.
calculate_relative_entropy(self, obs_freq_mat)
Calculate and return relative entropy w.r.t. observed frequency matrix.

class Bio.SubsMat.LogOddsMatrix(data=None, alphabet=None, mat_name='', build_later=0)
Bases: Bio.SubsMat.SeqMat
Log odds matrix.
calculate_relative_entropy(self, obs_freq_mat)
Calculate and return relative entropy w.r.t. observed frequency matrix.

Bio.SubsMat.make_log_odds_matrix(acc_rep_mat, exp_freq_table=None, logbase=2, factor=1.0, round_digit=9, keep_nd=0)
Make log-odds matrix.

Convert observed frequency table into substitution matrix.
Read a matrix from a text file.

Bio.SubsMat.two_mat_relative_entropy(mat_1, mat_2, logbase=2, diag=3)
Return relative entropy of two matrices.
Bio.SubsMat.two_mat_correlation(mat_1, mat_2)
Return linear correlation coefficient between two matrices.
Bio.SubsMat.two_mat_DJS(mat_1, mat_2, pi_1=0.5, pi_2=0.5)
Return Jensen-Shannon Distance between two observed frequency matrices.
{"url":"https://biopython.org/docs/1.76/api/Bio.SubsMat.html","timestamp":"2024-11-12T02:31:07Z","content_type":"text/html","content_length":"36094","record_id":"<urn:uuid:4eaf99ef-91c4-4c91-9694-6353ff121545>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00243.warc.gz"}
Avoiding paradoxes

The strict conditionals may avoid paradoxes of material implication. The following statement, for example, is not correctly formalized by material implication:

If Bill Gates graduated in medicine, then Elvis never died.

This condition should clearly be false: the degree of Bill Gates has nothing to do with whether Elvis is still alive. However, the direct encoding of this formula in classical logic using material implication leads to:

Bill Gates graduated in medicine → Elvis never died.

This formula is true because whenever the antecedent A is false, a formula A → B is true. Hence, this formula is not an adequate translation of the original sentence. An encoding using the strict conditional is:

□ (Bill Gates graduated in medicine → Elvis never died).

In modal logic, this formula means (roughly) that, in every possible world in which Bill Gates graduated in medicine, Elvis never died. Since one can easily imagine a world where Bill Gates is a medicine graduate and Elvis is dead, this formula is false. Hence, this formula seems to be a correct translation of the original sentence. Although the strict conditional is much closer to being able to express natural language conditionals than the material conditional, it has its own problems with consequents that are necessarily true (such as 2 + 2 = 4) or antecedents that are necessarily false.^[5] The following sentence, for example, is not correctly formalized by a strict conditional:

If Bill Gates graduated in medicine, then 2 + 2 = 4.

Using strict conditionals, this sentence is expressed as:

□ (Bill Gates graduated in medicine → 2 + 2 = 4)

In modal logic, this formula means that, in every possible world where Bill Gates graduated in medicine, it holds that 2 + 2 = 4. Since 2 + 2 is equal to 4 in all possible worlds, this formula is true, although it does not seem that the original sentence should be.
A similar situation arises with 2 + 2 = 5, which is necessarily false:

If 2 + 2 = 5, then Bill Gates graduated in medicine.

Some logicians view this situation as indicating that the strict conditional is still unsatisfactory. Others have noted that the strict conditional cannot adequately express counterfactual conditionals,^[6] and that it does not satisfy certain logical properties.^[7] In particular, the strict conditional is transitive, while the counterfactual conditional is not.^[8]

Some logicians, such as Paul Grice, have used conversational implicature to argue that, despite apparent difficulties, the material conditional is just fine as a translation for the natural-language 'if...then...'. Others still have turned to relevance logic to supply a connection between the antecedent and consequent of provable conditionals.

See also

• Edgington, Dorothy, 2001, "Conditionals," in Goble, Lou, ed., The Blackwell Guide to Philosophical Logic. Blackwell.
• For an introduction to non-classical logic as an attempt to find a better translation of the conditional, see: Priest, Graham, 2001. An Introduction to Non-Classical Logic. Cambridge Univ. Press.
• For an extended philosophical discussion of the issues mentioned in this article, see: Bennett, Jonathan, 2003. A Philosophical Guide to Conditionals. Oxford Univ. Press.
Mastering Average Percentage Calculator in PHP

Welcome to a comprehensive guide on programming an 'Average Percentage Calculator in PHP'. Do you ever wonder how to calculate percentages? PHP, a popular server-side scripting language that commonly powers web development, can be used to create a simple yet powerful average percentage calculator. In this blog, we will dive deep into the world of PHP and discover how to design and implement an average percentage calculator in PHP. The focus will solely be on the programming aspects, and throughout the journey, we will illustrate the flexibility, simplicity, and robustness that PHP provides to tackle such tasks.

Average Percentage Calculator - Code

<?php
// Function to prompt the user and read input from the command line
function prompt($message) {
    echo $message;
    return trim(fgets(STDIN));
}

// Function to validate if the input is a valid integer
function isValidInteger($input) {
    return filter_var($input, FILTER_VALIDATE_INT) !== false;
}

// Program to calculate the average percentage
$subjects = prompt("Enter the number of subjects: ");

// Validate the number of subjects
while (!isValidInteger($subjects) || $subjects <= 0) {
    $subjects = prompt("Invalid input. Please enter a positive integer for the number of subjects: ");
}

// Initialize a variable to store the total marks
$totalMarks = 0;

// Loop through each subject
for ($i = 1; $i <= $subjects; $i++) {
    $marks = prompt("Enter the marks for subject $i: ");

    // Validate the marks input
    while (!isValidInteger($marks) || $marks < 0 || $marks > 100) {
        $marks = prompt("Invalid input. Please enter a valid integer between 0 and 100 for the marks of subject $i: ");
    }

    // Add the marks to the total marks
    $totalMarks += $marks;
}

// Calculate the average percentage by dividing the total marks by the number of subjects
$averagePercentage = $totalMarks / $subjects;

// Display the average percentage to the user
echo "The average percentage is: " . number_format($averagePercentage, 2) . "%\n";

Explanation of the Code:

The highlights of the PHP code for calculating the average percentage are:

1. User Input Prompting:
   - A prompt function is used to display a message and read input from the user, ensuring a consistent way of handling user input throughout the script.
2. Input Validation:
   - The isValidInteger function validates whether the input is a valid integer, ensuring that only appropriate values are processed.
   - The script repeatedly prompts the user until they provide valid input for both the number of subjects and the marks for each subject.
3. Range Checks:
   - For the number of subjects, the input must be a positive integer.
   - For the marks, the input must be an integer between 0 and 100, inclusive.
4. User-Friendly Messages:
   - Clear and specific prompts guide the user through the input process.
   - Invalid input messages provide immediate feedback and instructions for correction.
5. Total Marks Calculation:
   - The script accumulates the total marks by iterating through the number of subjects and adding the validated marks.
6. Average Calculation and Formatting:
   - We calculate the average percentage by dividing the total marks by the number of subjects.
   - Then, we format the result to two decimal places for clarity and precision.
7. Code Reusability and Readability:
   - Functions like prompt and isValidInteger improve the reusability and readability of the code.
   - The structure of the code is logical and easy to follow, making it maintainable and scalable.

Sample run:

Enter the number of subjects: 3
Enter the marks for subject 1: 85
Enter the marks for subject 2: 90
Enter the marks for subject 3: 95
The average percentage is: 90.00%

Importance of Understanding the Concept and Formula of Average Percentage

Understanding the concept and formula of the 'Average Percentage' is crucial for writing any program related to it, such as the 'Average Percentage Calculator in PHP'. The formula provides the basis for the logic and functioning of the program.
Using PHP, we can easily implement this formula to create an efficient calculator. For a comprehensive understanding of the 'Average Percentage Formula', you can refer to this link. For practical application and understanding of the 'Average Percentage Calculator', you can experiment with this online tool. Understanding the 'Average Percentage Formula' is a vital aspect of any statistical or mathematical calculation. This formula is not only useful in academics, but it also has extensive applications in practical scenarios like data analysis, predicting patterns, understanding trends, and so on.

By now, you should have a solid understanding of how to program an 'Average Percentage Calculator in PHP', how it works, and why it is relevant. Keep practicing and applying this knowledge in real-world situations to fully grasp the concept. Remember, programming is all about problem-solving, and with the right tools and understanding, the sky's the limit!

FAQs about Average Percentage Calculator in PHP

How does the user input validation work in the PHP code?
The isValidInteger function ensures that only valid integers are accepted as input, improving the reliability of the program.

Why is the prompt function used in the PHP code?
The prompt function provides a consistent way to prompt users for input, enhancing the user experience.

Can I calculate the average percentage of any number of subjects with this PHP code?
Yes, the code dynamically adjusts to accommodate any number of subjects entered by the user.

How does the PHP code handle invalid input for marks?
It prompts the user to enter valid integers between 0 and 100 for each subject's marks until correct input is provided.

Is the PHP code reusable for similar average percentage calculation tasks?
Yes, the modular structure and clear logic of the code make it reusable for similar tasks involving average percentage calculation.
The Gigaverse is the fourth nested level of the metric -verse series and the lowest-level archverse. This archverse contains a finite or infinite number of megaverses, which are the third nested level. It is contained by the Teraverse, which is a finite or infinite set of gigaverses. At scales beginning at the Gigaverse and higher, the dimensionality becomes hard to measure. This is caused by the fractal nature of a Gigaverse and the archverses above it. The most common numbers of dimensions for Gigaverses are between 6 and 11, but their dimensionality is often given by real numbers, giving rise to fractal dimensions.
Nanometers to Nautical Leagues (International) Converter

Enter Nanometers
Nautical Leagues (International)
Switch to Nautical Leagues (International) to Nanometers Converter

How to use this Nanometers to Nautical Leagues (International) Converter

Follow these steps to convert a given length from the units of Nanometers to the units of Nautical Leagues (International).
1. Enter the input Nanometers value in the text field.
2. The calculator converts the given Nanometers into Nautical Leagues (International) in real time using the conversion formula, and displays the result under the Nautical Leagues (International) label. You do not need to click any button: if the input changes, the Nautical Leagues (International) value is re-calculated automatically.
3. You may copy the resulting Nautical Leagues (International) value using the Copy button.
4. To view a detailed step-by-step calculation of the conversion, click on the View Calculation button.
5. You can also reset the input by clicking on the button present below the input field.

What is the Formula to convert Nanometers to Nautical Leagues (International)?

The formula to convert a given length from Nanometers to Nautical Leagues (International) is:

Length[(Nautical Leagues (International))] = Length[(Nanometers)] / 5556000035558.4

Substitute the given value of length in nanometers, i.e., Length[(Nanometers)], in the above formula and simplify the right-hand side. The resulting value is the length in nautical leagues (international), i.e., Length[(Nautical Leagues (International))]. The calculation is done after you enter a valid input.

Consider that the latest smartphone screen has a pixel size of 500 nanometers. Convert this pixel size from nanometers to Nautical Leagues (International).
The length in nanometers is: Length[(Nanometers)] = 500

The formula to convert length from nanometers to nautical leagues (international) is:

Length[(Nautical Leagues (International))] = Length[(Nanometers)] / 5556000035558.4

Substitute the given length Length[(Nanometers)] = 500 in the above formula.

Length[(Nautical Leagues (International))] = 500 / 5556000035558.4 = 8.99928e-11

Final Answer: Therefore, 500 nm is equal to 8.99928e-11 nautical league.

Consider that an advanced semiconductor has a feature size of 50 nanometers. Convert this size from nanometers to Nautical Leagues (International).

The length in nanometers is: Length[(Nanometers)] = 50

Substitute the given length Length[(Nanometers)] = 50 in the same formula.

Length[(Nautical Leagues (International))] = 50 / 5556000035558.4 = 8.9993e-12

Final Answer: Therefore, 50 nm is equal to 8.9993e-12 nautical league.

Nanometers to Nautical Leagues (International) Conversion Table

The following table gives some of the most used conversions from Nanometers to Nautical Leagues (International).
Nanometers (nm)    Nautical Leagues (International) (nautical league)
0 nm               0 nautical league
1 nm               0 nautical league
2 nm               0 nautical league
3 nm               0 nautical league
4 nm               0 nautical league
5 nm               0 nautical league
6 nm               0 nautical league
7 nm               0 nautical league
8 nm               0 nautical league
9 nm               0 nautical league
10 nm              0 nautical league
20 nm              0 nautical league
50 nm              1e-11 nautical league
100 nm             2e-11 nautical league
1000 nm            1.8e-10 nautical league
10000 nm           1.8e-9 nautical league
100000 nm          1.8e-8 nautical league

A nanometer (nm) is a unit of length in the International System of Units (SI). One nanometer is equivalent to 0.000000001 meters, or approximately 0.00000003937 inches. The nanometer is defined as one-billionth of a meter, making it an extremely precise measurement for very small distances. Nanometers are used worldwide to measure length and distance in various fields, including science, engineering, and technology. They are especially important in fields that require precise measurements at the atomic and molecular scale, such as nanotechnology, semiconductor fabrication, and materials science.

Nautical Leagues (International)

A nautical league (international) is a unit of length used in maritime contexts. One nautical league is equivalent to 3 nautical miles, which is approximately 5,556 meters or 3.452 miles. The nautical league is defined as three times the length of a nautical mile, which is in turn based on the Earth's circumference and one minute of latitude. Nautical leagues were historically used for measuring distances at sea. While not commonly used in modern navigation, they remain a part of maritime history and are occasionally referenced in literature and older navigational texts.

Frequently Asked Questions (FAQs)

1. What is the formula for converting Nanometers to Nautical Leagues (International) in Length?
The formula to convert Nanometers to Nautical Leagues (International) in Length is: Nanometers / 5556000035558.4

2. Is this tool free or paid?
This Length conversion tool, which converts Nanometers to Nautical Leagues (International), is completely free to use. 3. How do I convert Length from Nanometers to Nautical Leagues (International)? To convert Length from Nanometers to Nautical Leagues (International), you can use the following formula: Nanometers / 5556000035558.4 For example, if you have a value in Nanometers, you substitute that value in place of Nanometers in the above formula, and solve the mathematical expression to get the equivalent value in Nautical Leagues (International).
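The conversion formula above can also be written as a small script. The following Python sketch is ours, not part of the converter site; the function names are hypothetical, and the constant is the one quoted on this page:

```python
# Constant quoted by this page: nanometers per nautical league (international).
NM_PER_NAUTICAL_LEAGUE = 5556000035558.4

def nanometers_to_nautical_leagues(nm: float) -> float:
    """Convert a length in nanometers to nautical leagues (international)."""
    return nm / NM_PER_NAUTICAL_LEAGUE

def nautical_leagues_to_nanometers(leagues: float) -> float:
    """Inverse conversion, for the 'switch direction' case."""
    return leagues * NM_PER_NAUTICAL_LEAGUE

# Reproduces the worked example from this page.
print(f"{nanometers_to_nautical_leagues(500):.5e}")  # 8.99928e-11
```

Dividing by the constant converts forward; multiplying converts back, so a round trip returns the original value up to floating-point rounding.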
The Power of Recourse in Online Optimization
Robust Solutions for Scheduling, Matroid and MST Problems

Submitted by Math.-Ing. José Claudio Verschae Tannenbaum, Santiago de Chile.

Dissertation approved by Faculty II - Mathematics and Natural Sciences of the Technische Universität Berlin for the academic degree of Doktor der Naturwissenschaften.

Doctoral committee (Promotionsausschuss):
Chair: Prof. Dr. Peter Bank
Reviewer: Prof. Dr. Martin Skutella
Reviewer: Prof. Dr. Gerhard J. Woeginger
Date of the scientific defense: February 6, 2012

Berlin 2012
D83

Contents

Acknowledgments

1 A Family of Robust PTASes for Parallel Machine Scheduling
  1.1 Introduction
  1.2 Robust Solutions for the Machine Covering Problem
    1.2.1 Basic Terminology
    1.2.2 General Strategy
    1.2.3 Robust Solutions: a Motivating Example
  1.3 A Robust PTAS for the Machine Covering Problem with Permanent Jobs
    1.3.1 A Stable Estimate of the Optimal Value
    1.3.2 The Structure of Robust Solutions
    1.3.3 Maintaining Stable Solutions Dynamically
    1.3.4 Reducing the Accumulated Reassignment Potential
    1.3.5 Arrival of Small Jobs
  1.4 A Robust PTAS for Temporary Jobs
  1.5 Robust PTASes for General Objective Functions
  1.6 Conclusions

2 Robust Multi-Stage Matroid Optimization
  2.1 Introduction
  2.2 Basic Concepts
  2.3 Robust Multi-Stage Optimization Under a Single Matroid Constraint
    2.3.1 Problem Definition
    2.3.2 Dynamic Changes of Independent Sets
    2.3.3 Competitive Analysis of Robust-Greedy
    2.3.4 Competitive Analysis for Maximum Weight Bases
    2.3.5 The Optimal Robust Solution and Matroid Intersection
    2.3.6 Relation to Submodular Function Maximization over a Partition Matroid
  2.4 Robust Multi-Stage Matroid Intersection
    2.4.1 Problem Definition
    2.4.2 Swap Sequences for the Intersection of Two Matroids
    2.4.3 Competitive Analysis for Matroid Intersection
    2.4.4 Intersection of Several Matroids
    2.4.5 Applications to the Maximum Traveling Salesman Problem
  2.5 Conclusions

3 Robust Multi-Stage Minimum Spanning Trees
  3.1 Introduction
    3.1.1 Related Work
    3.1.2 Our Contribution
  3.2 Problem Definition
  3.3 Basic Properties
  3.4 The Unit Budget Case
  3.5 A Near-Optimal Algorithm with Amortized Constant Budget
  3.6 Towards a Constant Competitive Factor with Constant Budget
    3.6.1 Approximate Robust Solutions
    3.6.2 A Constant Competitive Algorithm under Full Information
    3.6.3 On the Competitiveness of a Greedy Algorithm with Budget 2
  3.7 Applications to the Traveling Salesman Problem
  3.8 Conclusions

A Computing Lower Bounds in Linear Time

Acknowledgments

I cannot start this text without expressing my gratitude to my advisor, Martin Skutella. His dedication, advice, and good disposition have been of invaluable help since my first day in Berlin. I have enjoyed every conversation I had with him, and I am convinced that his example will serve me as guidance, professionally and personally, for many years to come.

It is hard to ask for a better atmosphere for doing research than in the COGA group at TU Berlin. Besides always being very stimulating in terms of research, it was also enjoyable and fun. I am deeply grateful to all the members of the group; I have learned a lot from all of you.

My work has greatly benefited from talking and discussing with several other researchers. It is impossible to name all of them in this short space. I am particularly grateful to Nicole Megow (coauthor of Chapter 3) and Andreas Wiese (coauthor of Chapters 2 and 3).

"The devil is in the details," as popular wisdom has it. I am greatly indebted to all the devil slayers of this thesis: Ashwin Arulselvan, Christina Büsing, Daniel Dressler, Jan-Philipp Kappmeier, Jannik Matuschke, Nicole Megow, Melanie Schmidt, Sebastian Stiller, my brother Rodrigo Verschae, and Andreas Wiese. The presentation of this thesis has greatly improved thanks to their work. I especially learned a lot about writing from Melanie's very detailed comments and Sebastian's suggestions for the introduction. My special thanks go to Jan-Philipp, who very kindly agreed to translate the abstract of this thesis into German.
I am also indebted to Madeleine Theile, who helped me with the bureaucracy needed to hand in my dissertation. Additionally, my German "teachers" helped make my stay in Berlin much more pleasant. For their friendship and for helping me understand what was happening around me: Jan-Philipp, Jannik, and Shahrad, ich danke Euch vielmals (thank you very much).

There are many more people to whom I owe their friendship. My GURPS group, which helped me waste many Sunday afternoons. Daniel, Janina, Jannik, Jan-Philipp, Bernd, and Pepe: to the pigs! I also thank my friends from Dortmund, Melanie and Daniel, who knew how to boost my self-confidence on each of their visits to Berlin. I am also indebted to all my other friends at TU Berlin who shared countless liters of coffee with me.

I would also like to thank the Berlin Mathematical School (BMS) and all its staff for supporting me not only financially, but also in all aspects of everyday life. I am also very grateful to all my family, especially my parents, for encouraging me throughout all these years. Lastly, and most importantly, I wish to thank my wife, Natalia, for always being there to listen. Her care and love kept my motivation high at all times.

Berlin, July 26, 2012
José Verschae

Introduction

We constantly make decisions without knowledge of the future. One example is choosing a route to drive home after work. Leaving aside people who love driving, most of us are interested in arriving home as early as possible. This problem can be modeled as finding a shortest path in a network, and it is easily solvable if the traffic information is known in advance. However, this information is subject to uncertainty. Unexpected situations might occur (e.g., a car accident), making our a priori decision suboptimal. Better solutions are obtained by adapting our route to the congestion information observed while driving. In this everyday example, the challenge lies in the fact that the information is revealed over time.
This is the type of challenge addressed in this thesis. The mathematical optimization community has long searched for models that cope with uncertainty. For around 20 years, increasing attention has been given to the study of online algorithms. Online optimization studies algorithms in settings where the input is revealed piecewise. Each piece of arriving information usually relates to a natural unit of the corresponding problem, for example, a job in a scheduling environment or a node in a graph. Whenever a new unit becomes available, the optimizer must make an irrevocable decision as to how to deal with this unit. The performance of algorithms is evaluated with a competitive analysis, which confronts the algorithm with an optimal solution computed with full knowledge of the input. The quality of an algorithm is measured by its competitive factor, defined as the worst-case ratio between the cost of the constructed solution and the optimal cost under full information. A large number of discrete optimization problems have been studied in this setting. We refer to [BE98] for a general introduction to online algorithms.

Despite the high popularity of this model, it might be too conservative in some situations: (1) it does not allow any recourse on previously taken decisions; and (2) assuming worst-case scenarios gives overly pessimistic bounds on the competitive ratios.

Robust optimization seeks to optimize the worst-case scenario given a limited amount of available information. The part of the input subject to uncertainty is assumed to belong to a given uncertainty set, representing the possible realizations of the uncertain parameters. For any solution that we might choose, we assume the worst possible realization of the uncertain data. This situation can be modeled with a min-max approach: we choose our variables in order to minimize the cost, assuming that the uncertain parameters maximize the cost for the chosen solution.

A significantly richer extension of this approach is obtained by considering multi-stage robust problems. In this setting, partial information of the input is revealed in several stages, similar to the online framework. While the information is revealed, we are allowed to adjust part of the solution to improve its quality. Given an initial uncertainty set for the parameters, it is the role of the optimizer to choose optimal rules (that is, functions) for recourse actions. The recourse helps adapt the solution while the uncertain parameters are being revealed. More often than not, the recourse itself incurs additional costs, which the model must also take into account. This setting, despite being an extremely powerful modeling tool, suffers from a high computational complexity. In particular, the optimizer must choose a rule that might belong, in principle, to an infinite-dimensional space. An extensive introduction to robust models, including the multi-stage setting, can be found in [BEGN09].

In this thesis we focus on models that combine some of the ideas from multi-stage robust optimization and online algorithms. Taking the online framework as a base, we allow a limited amount of recourse to partially adapt the solutions while the instance is revealed. To avoid the computational complexity found in multi-stage robust optimization, we do not seek to find the optimal robust strategy. Rather, we are interested in (hopefully efficient) algorithms that provide provably good solutions. The quality of our algorithms is evaluated by their competitive factor. Moreover, we bound the amount of recourse allowed at each stage by an appropriately defined parameter β. The exact interpretation and definition of β must be chosen depending on the specific problem. In general, this parameter can be interpreted as a measure of the stability or robustness of the constructed solutions.
Additionally, β reflects the cost that we are willing to incur in each stage for adjusting the solution. Note that if no recourse is allowed (usually meaning that β = 0), then we are back in the classic online setting. On the other hand, an unlimited amount of recourse (that is, β = ∞) means that we are allowed to completely change the solution in each stage. In this last case we fall back to solving a sequence of independent problems, one for each stage. In other words, β interpolates between two opposite settings: online and (a sequence of) offline problems. It is then particularly interesting to study the trade-off between β and the best achievable competitive factor.

We show that in several relevant settings a small value of β allows us to obtain near-optimal solutions. Quite surprisingly, this also occurs for problems in which no recourse implies unbounded competitive ratios. For these problems our models manage to overcome the over-conservatism of online algorithms, and the apparent higher complexity of optimizing the recourse in multi-stage robust optimization.

Outline of the Thesis

This thesis is divided into three chapters. The chapters are mostly self-contained, with the intention that they can be read independently.

Chapter 1

Scheduling problems have historically been used as a playground for combinatorial optimization models and techniques. In the first chapter we study fundamental scheduling problems on identical parallel machines. We consider a model introduced by Sanders, Sivadasan, and Skutella [SSS09] in which jobs arrive online one by one. Whenever a job arrives, we must decide on which machine to schedule it. The recourse is controlled by the migration factor β: when a job with processing time p_j arrives, the algorithm is allowed to migrate jobs with total processing time at most β · p_j. We are interested in constructing a robust PTAS, i.e., a (1 + ε)-competitive algorithm with constant migration factor β = β(ε).
In this framework we study a large family of objective functions. For some of these objectives, β = 0 implies non-constant values of the competitive ratio [AE98]. We first solve an open question left by Sanders et al. [SSS09] for the problem of maximizing the minimum load of the machines. For this setting we show that there is no robust PTAS with constant migration factor. Given this negative result, we turn our attention to a slightly relaxed model that considers an amortized version of the migration factor. We prove that this version admits a robust PTAS. Among other results, we generalize this algorithm to the case of temporary jobs (that is, jobs might leave the system), and to a large class of objective functions considered before by Alon, Azar, Woeginger, and Yadid [AAWY98].

Chapter 2

In the second chapter we study the general setting of matroid optimization. This is arguably one of the most influential modeling frameworks in combinatorial optimization, and it has contributed significantly to our understanding of efficient algorithms. In our setting we consider a weighted set of elements that is revealed in several stages. In each stage a set of new elements becomes available. The objective is to maintain a set that satisfies one or more matroid constraints and has the largest possible total weight. To limit the recourse, in each stage we are given a value that bounds the number of elements that can be inserted into the solution in this stage (the number of removed elements is not constrained). We analyze the performance of greedy algorithms in this setting. In particular, we propose an algorithm that is O(ℓ²)-competitive for the intersection of ℓ matroids. We remark that in this setting no online algorithm can be competitive when compared to the offline optimum.
Therefore, our concept of competitiveness is defined by comparing to the total weight of the optimal robust solution, that is, a solution that has the same recourse power as the online algorithm. Notice that in this respect this setting is closer to multi-stage robust models than to online algorithms. We show applications of our result to the Maximum Traveling Salesman Problem in directed graph. For this particular problem, our result implies a constant competitive algorithm with constant recourse in each stage, even when compared against the offline optimum. Chapter 3 In the last chapter we study a robust online version of the Minimum Spanning Tree problem. Our study is motivated by an important application on multi-cast routing. In our model we assume that the nodes of a complete metric graph appear one by one. Whenever a new node is revealed, we are allowed to add a constant number of edges to the solution and remove other edges in order to maintain a spanning tree. This limits the recourse allowed in the model. The objective is to minimize the total cost of the tree. If no recourse is allowed, then the competitive ratio of any algorithm is unbounded [AA93]. Additionally, a simple example shows that with constant recourse the competitive ratio of every algorithm is larger than 2. However, we can overcome this limitation by slightly relaxing the constrain and considering an amortized bound for the recourse: up to stage t we are allowed to add O(t) many edges. The main result in this chapter is a (1 + ε)-competitive algorithm with an amortized recourse of O( 1ε log 1ε ). Moreover, we show that the algorithm is best possible up to logarithmic factors on the recourse. For the non-amortized setting, we state a conjecture that implies the existence of a Outline of the Thesis constant competitive algorithm with constant recourse on each stage. 
Additionally, we study the full information case, where we obtain an O(1)-competitive algorithm with a recourse of two edges per stage. Finally, we apply our results to an online robust version of the Minimum Traveling Salesman Problem (TSP). We show that all our results for the Minimum Spanning Tree problem transfer to TSP if we increase the competitive ratio by a factor of 2 and the recourse by a factor of 4. This is done with a robust version of the shortcutting strategy used in the classic Christofides' Algorithm [Chr76].

Chapter 1
A Family of Robust PTASes for Parallel Machine Scheduling
Joint work with M. Skutella

Scheduling problems are considered a fruitful ground for developing and testing combinatorial optimization models and techniques. Their appeal comes from their rich combinatorial structure together with a large number of applications. One of the most fundamental scheduling settings considers n jobs that need to be assigned to m identical machines. Each job j has a processing time pj ≥ 0, denoting the amount of time that this job needs to be processed. The load of a machine is the total processing time of the jobs assigned to it. For this setting, we study online robust models for a large family of objective functions, which depend exclusively on the loads of the machines. The most prominent problems of this kind are the following.

• Minimum Makespan problem (Machine Packing problem). In this case the objective function asks for minimizing the makespan, that is, the maximum machine load. It models the natural situation in which the system is alive (i.e., productive) if at least one machine is running. It is the most common and widely studied objective function, and it is one of the first problems for which approximation algorithms were studied [Gra66, Gra69]. The literature for this problem is extensive; see, e.g., [Hal97, KSW98, Sga98, Alb03] and the references therein.

• Machine Covering problem.
This problem asks for a schedule maximizing the minimum machine load. This objective tries to capture a fairness property. Assume that jobs represent goods or resources that must be assigned to different players, represented by machines. In this setting, the processing time pj of a job corresponds to the value of the resource. A solution that maximizes the minimum machine load distributes the resources as fairly as possible: it maximizes the value assigned to the machine that receives the least total value. This problem has several applications, including the distribution of disks in storage area networks [SSS09], the sequencing of maintenance actions for modular gas turbine aircraft engines [FD81], fair bandwidth allocation in network optimization, and congestion control by fair queuing [ELvS11].

• Minimum ℓp-norm problem. For a given p ≥ 1, this objective function seeks to minimize the ℓp-norm of the vector of machine loads. In some sense, minimizing the ℓp-norm is similar to the Minimum Makespan problem. While the makespan (that is, the ℓ∞-norm) only considers the machine with maximum load, the ℓp-norm also tries to balance the load of the remaining machines. Note that since lim_{p→∞} ‖x‖_p = ‖x‖_∞ for any vector x, we can interpret p as a value that parametrizes the importance of the makespan versus balancing the rest of the machines. This objective function finds applications in placing a set of records on a sectored drum [CC76] and in other problems concerning the allocation of storage [CW75].

The three problems above are strongly NP-hard (by a simple reduction from 3-Partition) and admit a Polynomial-Time Approximation Scheme (PTAS); see [HS87, Woe97, AAWY98].

Online Algorithms

A significant amount of effort has been put into the study of scheduling problems in online settings. In these models, the jobs of an instance are revealed incrementally.
Sgall [Sga98] distinguishes two classes of online scheduling problems, depending on how the information is revealed. In the online-time paradigm, a scheduler must decide at each time t which job to process on which machine. Usually each job has a release date, at which the job is revealed together with its processing time. At time t no information is known about jobs released after t. In the online-list paradigm there is a list of jobs that determines the order of their appearance. In this setting jobs are revealed one by one. Again, the processing time is revealed with the appearance of the job. Whenever a job arrives, the scheduler must choose a machine to process it. We remark that in these classic online scheduling settings, all decisions are irrevocable. This is, of course, a very natural assumption in the online-time model, since changing decisions would require traveling back in time. However, the situation is different for the online-list model. As we argue below, there are situations in which a moderate amount of recourse better captures real-life problems.

In both frameworks we quantify the quality of an algorithm by its competitive ratio. Consider a (partial) instance I corresponding to the jobs known to the algorithm in some particular stage of the process. Let A(I) be the value of the algorithm on input I, and let OPT(I) be the value of an optimal (offline) solution for this instance. For minimization problems, we say that the algorithm is α-competitive if A(I) ≤ α · OPT(I) for any partial input I. For maximization problems, the algorithm is α-competitive if A(I) ≥ (1/α) · OPT(I). Notice that in both cases α ≥ 1.

Bounded Reassignment and Migration Factor

In this chapter we study an online-list scenario known as online load balancing with bounded reassignment factor, which can be considered as a relaxation of the usual online-list model.
In this setting, jobs may arrive or depart at any time, and when a new job enters the system it must be immediately assigned to a machine. Since jobs might leave the system, we say that they are temporary jobs. If jobs never depart then we are in the permanent job setting. Again, the objective is to optimize any of the objective functions mentioned above. We remark that in this setting we do not need to specify the time slot in which each job is scheduled, since we are only interested in the total load of each machine.

The strict requirement of the online-list paradigm of not allowing any decision to be reverted might be unrealistic for several applications. In some of these applications the scheduler might be willing to perform a limited recourse on the decisions taken in the past. We give an important application of this idea below. Thus, we consider a model in which the scheduler is allowed to reassign previously assigned jobs in a limited way. The premise to bound the amount of reassignment is the following: the larger the processing volume of jobs that arrive or leave the system, the more is the scheduler willing to reassign jobs. More precisely, the amount of reassignment is controlled by the reassignment factor, which is defined as follows. Let J be the set of jobs that have so far appeared in the system, and let JL ⊆ J be the set of jobs that have left the system. Additionally, let JR be the multiset of all jobs that have been migrated so far. Note that if a job has been migrated several times then several copies of it appear in JR. We define the reassignment factor r of an algorithm as the worst-case ratio between ∑_{j∈JR} pj and ∑_{j∈J} pj + ∑_{j∈JL} pj. In general we are interested in algorithms with constant reassignment factor. That is, in the worst case the amount of processing time that the algorithm is allowed to migrate must be proportional to the processing time of jobs that arrive or depart.
Alternatively, we can interpret this framework by considering that migrating a job j incurs a cost proportional to its size, c · pj. By scaling we can assume that c = 1. On the other hand, we have a budget L that denotes the reassignment potential available to migrate jobs. At the arrival or departure of a job j, our budget L is increased by pj. Whenever a job k is migrated, pk/r reassignment potential is spent. Note that r = 0 means that no reassignment is allowed, and thus we are in the classical online setting. On the other hand, r = ∞ implies that we are allowed to reassign all jobs in each iteration, and thus we fall back to the offline case. Our objective is to develop α-competitive algorithms, for some constant α, that also have constant reassignment factor. In particular, we are interested in studying the trade-off between the competitive factor α and the reassignment factor r. Arguably, the best that we can expect in this framework is a robust PTAS (also known as a dynamic PTAS), that is, a family of polynomial-time (1 + ε)-competitive algorithms with constant reassignment factor r = r(ε), for all ε > 0.

Sanders, Sivadasan, and Skutella [SSS09] consider a tighter online model, known as the bounded migration framework. This model can be interpreted in the reassignment model with the following restriction: after the arrival or departure of a job j, its reassignment potential pj must be immediately spent or is otherwise lost. In the bounded migration scenario, the value r is called the migration factor of the algorithm, and is a measure of the robustness of the constructed solutions. Notice that the reassignment factor is an amortized variant of the migration factor, and thus it can be considered an amortized measure of robustness.

Both the migration and reassignment factor frameworks are motivated by an important online application found in [SSS09]. A Storage Area Network (SAN) commonly manages several disks of different capacities.
To manage the stored information more efficiently, it is desirable to replicate the data in different storage devices for two different reasons. First, it allows data to be read simultaneously from different disks, increasing the maximum throughput of the network. Second, data replication makes the system robust against disk failures. We can model a SAN by considering a partition of the storage devices into several sub-servers, each containing copies of the same information. Therefore, the capacity of the SAN is the minimum capacity over all sub-servers. In scheduling notation, the sub-servers correspond to machines, while the disks correspond to jobs. Our objective is to maximize the capacity of the SAN, i.e., the minimum load of the machines. Moreover, we might want to increase the capacity of the SAN by attaching new disks. This corresponds to jobs that enter the system in our online model. On the other hand, job departure models disks that fail and must be removed from the network. We would like to maintain solutions that are close to optimal by reassigning a limited number of storage devices. However, the larger the capacity of arriving or departing disks, the more reassignments we are willing to accept. Notice that this problem fits in the reassignment and the bounded migration models. Nonetheless, the latter unrealistically asks the reassignment potential to be spent immediately after it is generated. This may be undesirable in practice, since it may provoke down-time of the system each time a new disk is inserted. Instead, we obtain more appropriate solutions by collecting work until a reconfiguration of disks has a larger impact on the objective function, for example, larger than ε · OPT.

Related work. We briefly review the literature related to online scheduling, settings with bounded reassignment and migration factor, and other related models.
Online-List Scheduling

The literature about online scheduling is vast; for general surveys, see [Sga98, Aza98, Alb03, PST04]. For the Minimum Makespan problem, already Graham [Gra66] noted that a greedy algorithm, called list-scheduling, is (2 − 1/m)-competitive, where m denotes the number of machines. This basic algorithm was improved through a sequence of results [BFKV95, KPT96, Alb99], culminating with the 1.9201-competitive algorithm by Fleischer and Wahl [FW00]. This is the so far best known competitive guarantee for any deterministic algorithm. For this type of algorithm, the best known lower bound is 1.88, which is shown by Rudin and Chandrasekaran [RIC03]. For randomized online algorithms there is a lower bound of e/(e − 1) ≈ 1.58; see Chen, van Vliet, and Woeginger [CvVW94] and Sgall [Sga97]. The best known randomized online algorithm is derived by Albers [Alb02] and achieves a competitive ratio of 1.916. Avidor, Sgall, and Azar [ASA01] consider the online version of the Minimum ℓp-norm problem. For the ℓ2-norm they show that the greedy algorithm is optimal and achieves a competitive ratio of 4/3. For general ℓp-norms, they show that list-scheduling is (2 − Θ(ln p)/p)-competitive (for p → ∞), and present a lower bound of 3/2 − Θ(1/p) on the competitive ratio of any deterministic online algorithm.

The online variant of the Machine Covering problem turns out to be less tractable, admitting no online algorithm with constant competitive ratio. The best possible deterministic algorithm greedily assigns jobs to the least loaded machine, and has competitive ratio m; see Woeginger [Woe97]. Azar and Epstein [AE98] show a lower bound of Ω(√m) for the competitive ratio of any randomized online algorithm, and give an almost matching Õ(√m)-competitive algorithm.

Bounded Reassignment

The first to study online scheduling problems in the reassignment model is Westbrook [Wes00].
For the Minimum Makespan problem, he gives a 6-competitive algorithm with reassignment factor 1 (according to our definition¹). Andrews, Goemans, and Zhang [AGZ99] improve this result, obtaining a 3.5981-competitive algorithm with reassignment factor 1. Furthermore, they give a (2 + ε)-competitive algorithm with constant reassignment factor r(ε) ∈ O(1/ε).

Bounded Migration

Sanders et al. [SSS09] study the bounded migration model for permanent jobs. For the Minimum Makespan problem, they give a 3/2-competitive algorithm with migration factor 4/3. Moreover, using well known rounding techniques, they formulate the problem as an integer linear programming (ILP) feasibility problem in constant dimension. Combining this with an ILP sensitivity analysis result, they obtain a robust PTAS for the bounded migration model. An important consequence of their analysis is that no special structure of the solutions is needed to achieve robustness. More precisely, it is possible to take an arbitrary (1 + ε)-approximate solution and, at the arrival of a new job, turn it into a (1 + ε)-approximate solution to the augmented instance while keeping the migration factor constant. This feature prevents their technique from working in the job departure case. Based on the same kind of ideas just described, Epstein and Levin [EL09] develop a robust asymptotic PTAS for Bin Packing. Very recently, Epstein and Levin [EL11] considered the preemptive version of the Minimum Makespan problem with permanent jobs. They derive an online algorithm that is simultaneously optimal for the Minimum Makespan problem and all ℓp-norms. Their algorithm uses a migration factor of 1 − 1/m, where m is the number of machines in the system. They also give a matching lower bound on the migration factor of any optimal algorithm for any of these objective functions. Sanders et al. [SSS09] additionally consider the Machine Covering problem, showing a 2-competitive algorithm with migration factor 1.
Moreover, they give a counterexample showing that it is not possible to start with an arbitrary (2 − ε)-approximate solution and then maintain the approximation guarantee while keeping the migration factor constant. This implies that their ideas, developed for the Minimum Makespan problem, cannot be applied directly to derive a robust PTAS with constant migration factor for the Machine Covering problem.

Other Models

A different model for controlling changes in schedules is by bounding the ratio β between the number of reassignments and the number of job arrivals or departures. For this model, Westbrook [Wes00] considers the Minimum Makespan problem in the temporary job setting and proposes a 5.83-competitive algorithm with β = 2. Andrews, Goemans, and Zhang [AGZ99] improve this result by giving an algorithm with competitive ratio 3.5981 and the same β. They also give a (3 + ε)-competitive algorithm with β ∈ O(log(1/ε)/ε). Awerbuch, Azar, Plotkin, and Waarts [AAPW01] consider this problem in the unrelated machine setting, in which the processing time of a job depends on the machine to which it is assigned. For this case they obtain an O(log n)-competitive algorithm with β ∈ O(log n), where n is the number of job arrivals and departures. Andrews, Goemans, and Zhang [AGZ99] consider a different kind of generalization in which the reassignment of a job j incurs a given cost cj. For the Minimum Makespan problem (on identical machines) they give a 3.5981-competitive algorithm for which the ratio between the total cost due to reassignments and the initial cost of assigning all jobs is bounded by 6.8285.

¹ Our definition differs slightly from the one given in [Wes00]: they do not consider the departure of jobs to add any reassignment potential, and the first assignment of a job j also spends pj/r reassignment potential. However, the concept of constant reassignment factor is the same in both models.
Our Contribution

We develop a general framework for obtaining robust PTASes in the reassignment model. This unifies and improves several of the results mentioned above. Our results can be considered from various angles and have interesting interpretations in several contexts: (i) We contribute to the understanding of various fundamental online scheduling problems on identical parallel machines, which are also relevant building blocks for many more complex real-world problems. (ii) We give valuable insights related to sensitivity analysis in scheduling. (iii) We achieve the best possible performance bound for the Minimum Makespan problem with proportional reassignment costs, improving upon earlier work by Westbrook [Wes00] and Andrews, Goemans, and Zhang [AGZ99]. (iv) We identify a broad class of problems that can be analyzed with our approach, including the Machine Covering, Minimum Makespan and Minimum ℓp-norm problems.

We start by considering the Machine Covering problem with permanent jobs. Our first result is that it does not admit a robust PTAS with constant migration factor. This implies that at least some amount of reassignment potential must be accumulated over iterations. Therefore we consider an intermediate model between the migration and reassignment frameworks. That is, we seek a robust PTAS with constant reassignment factor that uses, in each iteration, a small amount of accumulated reassignment potential. Thus, despite the impossibility of deriving a robust PTAS with constant migration factor, we can still derive solutions that are as robust as possible. More precisely, consider an arbitrary number ε > 0, and denote by OPT the optimal value in some iteration. We develop a (1 + ε)-competitive algorithm with constant reassignment factor r = r(ε) that uses at most O(ε · OPT) accumulated reassignment potential in each iteration. Note that the quantity ε · OPT is also the amount that we are willing to lose in the objective function.
Therefore, this quantity is – arguably – considered negligible by the scheduler. Recall that Sanders et al. [SSS09] show that for the Machine Covering problem a robust PTAS with constant migration needs to maintain (1 + ε)-approximate solutions with some particular structure. In Section 1.2.3 we give a series of examples showing that this is also true in the constant reassignment model if the amount of reassignment potential used in each iteration is in O(εOPT). Moreover, our examples give a better understanding of the kind of structure needed for the solutions. Namely, they indicate that robustness is achieved by maintaining solutions whose sorted vector of machine loads is (approximately) lexicographically maximal. In Sections 1.3.2 to 1.3.4 we apply the insights gained with the examples described above. We obtain a robust PTAS by rounding instances with the techniques of Alon et al. [AAWY98], constructing a lexicographically optimal solution of the rounded instance, and then bounding the reassignment factor using a sensitivity analysis result for ILPs as in [SSS09]. Our algorithm is significantly more involved than the robust PTASes in [SSS09]. The extra difficulty comes from the interaction between the rounding technique and the structure of the rounded solutions. To guarantee the competitive factor, the coarseness of our rounding must change while jobs arrive. Thus, our algorithm carefully changes the coarseness of the rounded instance dynamically by re-rounding a constant number of jobs in each iteration. We show that by carefully choosing the number of jobs to be re-rounded we can bound simultaneously the competitive ratio and the reassignment factor. We can adapt the techniques just described to the temporary job case, where jobs may leave the system. We also obtain for this case a robust PTAS with constant reassignment factor.
However, the amount of reassignment potential used in each iteration might not be in O(εOPT), and thus the solutions are robust only in an amortized manner. This is done in Section 1.4. Finally, in Section 1.5 we extend our results to a very broad class of objective functions, first considered by Alon et al. [AAWY98]. Let ℓi denote the load of machine i. For a given function f : R≥0 → R≥0, the objective functions that we consider are of the form: (I) minimize ∑i f(ℓi), (II) minimize maxi f(ℓi), (III) maximize ∑i f(ℓi), and (IV) maximize mini f(ℓi). As in [AAWY98], we must assume that ln(f(e^x)) is uniformly continuous. Moreover, for Problems (I) and (II) we assume that f is convex, and for Problems (III) and (IV) that f is concave. Under these conditions, the two robust PTASes derived for the Machine Covering problem can be adapted for these objective functions. It is easy to check that the Machine Covering, Minimum Makespan and Minimum ℓp-norm problems fall under this setting. In particular, our results improve upon the (2 + ε)-competitive algorithm with constant reassignment factor by Andrews, Goemans, and Zhang [AGZ99] for the Minimum Makespan problem. A preliminary version of this chapter appeared in the proceedings of the 18th Annual European Symposium on Algorithms (ESA 2010) [SV10].

1.2 Robust Solutions for the Machine Covering Problem

1.2.1 Basic Terminology

Consider a sequence of instances of the Machine Covering problem consisting of a set M of m identical machines and a set of jobs that arrive or depart one by one. Assume that we start with an instance (M, J0) for a given set of jobs J0. Denote by Jt the set of jobs that need to be scheduled in iteration t ∈ N0. In each iteration t ≥ 1, either a new job jt ∉ Jt−1 arrives, and thus Jt = Jt−1 ∪ {jt}, or a job jt ∈ Jt−1 departs, which implies Jt = Jt−1 \ {jt}. The processing time pjt ≥ 0 of an arriving job jt is revealed at the arrival of the job.
An algorithm in this setting must assign every job in Jt to a machine in M for each iteration t. For a given online algorithm, let St be the schedule constructed by the algorithm for iteration t. We denote by Dt the set of all jobs in Jt−1 ∩ Jt that are processed on different machines in schedules St−1 and St; that is, Dt is the set of jobs that are reassigned in iteration t (without considering jt). Also, for any subset of jobs J, we denote by p(J) the total processing time of J, that is, p(J) := ∑_{j∈J} pj.

Definition 1.1 (Reassignment factor). An online algorithm has reassignment factor r > 0 if for every possible sequence of instances (M, J0), ..., (M, Jt) and for all t ∈ N it holds that

∑_{s=1}^t p(Ds) ≤ r · ∑_{s=1}^t pjs.

An alternative way of defining the reassignment factor is as follows. For a given value of r > 1, define L0 := 0 and Lt := Lt−1 + pjt − p(Dt)/r for every t ∈ N>0. We say that Lt is the reassignment potential available at the end of iteration t. The reassignment potential Lt can be interpreted as the available budget for migrating jobs at the end of iteration t. In iteration t, a job jt arrives or departs, increasing our budget from Lt−1 to Lt−1 + pjt. Each unit of reassignment potential, that is, one unit of our budget, allows us to migrate r units of processing time in our schedule. Therefore, the reassignment of the jobs in Dt costs p(Dt)/r of the budget, leaving Lt reassignment potential available for iteration t + 1. With this definition, it is easy to check that an algorithm has reassignment factor r if and only if the reassignment potential Lt is non-negative for every instance and every t.

Definition 1.2. An online algorithm has migration factor β ≥ 0 if for every sequence of instances (M, J0), ..., (M, Jt) and all t ∈ N>0 it holds that p(Dt) ≤ β · pjt.
Note that an algorithm has migration factor β if it has reassignment factor β and uses only the budget given by the new job jt in each iteration. That is, it does not use any of the budget accumulated in previous iterations.

Definition 1.3. A family of online algorithms {ALGε}ε>0 is said to be a robust PTAS with constant migration (resp. reassignment) factor if ALGε is a (1 + ε)-competitive algorithm with migration (resp. reassignment) factor r = r(ε), for all ε > 0.

We will show that the Machine Covering problem does not admit a robust PTAS with constant migration factor. However, we can be arbitrarily close to such a result. For this consider the following definition.

Definition 1.4. An online algorithm is said to use at most Kt accumulated reassignment potential in iteration t if p(Dt) ≤ r · (pjt + min{Kt, Lt−1}).

In other words, Kt is a bound on how much budget accumulated from previous iterations the algorithm uses in iteration t. Notice that Kt measures how close an algorithm is to having migration factor r. That is, if Kt = 0 for all t, then the algorithm has not only reassignment factor r but also migration factor r. On the other hand, if Kt = ∞ then the algorithm can accumulate all the budget and use it simultaneously in one iteration. Having large values of Kt can be problematic in practice, since we might have drastic changes in our solutions in a handful of iterations, destroying the robustness property that we try to model. It is therefore desirable to construct algorithms with small values of Kt, which distribute the migration of jobs evenly over all iterations. For a given iteration t, let OPTt be the optimal value for instance (M, Jt). We give a (1 + ε)-competitive algorithm with constant reassignment factor r = r(ε) that uses, in each iteration t, at most Kt ∈ O(ε · OPTt) accumulated reassignment potential.
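The budget view of the reassignment potential Lt can be sketched in a few lines of code. The following is a minimal illustration (not taken from the thesis; the class name and method names are hypothetical): arrivals and departures of a job j add pj to the budget, and migrating a job k spends pk/r, so an algorithm has reassignment factor r exactly when the budget never becomes negative.

```python
# Hypothetical bookkeeping sketch for the reassignment potential L_t
# of Definition 1.1; names are illustrative, not from the thesis.

class ReassignmentBudget:
    def __init__(self, r):
        self.r = r
        self.L = 0.0               # accumulated reassignment potential

    def job_event(self, p_j):
        """An arrival or a departure of a job of size p_j adds p_j."""
        self.L += p_j

    def migrate(self, p_k):
        """Migrating a job of size p_k spends p_k / r potential.
        Returns False if reassignment factor r would be violated."""
        self.L -= p_k / self.r
        return self.L >= 0

budget = ReassignmentBudget(r=2.0)
budget.job_event(3.0)              # a job of size 3 arrives: L = 3
ok1 = budget.migrate(4.0)          # spends 4/2 = 2: L = 1, feasible
ok2 = budget.migrate(5.0)          # needs 5/2: L = -1.5, infeasible
print(ok1, ok2)                    # True False
```

A migration-factor-β algorithm in this picture is one that resets the unused part of pjt after each iteration instead of accumulating it, and Kt of Definition 1.4 caps how much of the accumulated part may be spent at once.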
The following lemma shows that this result is nearly tight, since Kt ∈ Ω(OPTt/r(ε)) in the worst case.

Lemma 1.5. For 0 < ε < 1/19, consider a (1 + ε)-competitive algorithm for the Machine Covering problem with constant reassignment factor r(ε). Then there exists an instance for which the accumulated reassignment potential Kt is in Ω(OPTt/r(ε)).

Proof. Let α > 0 be a parameter that determines the scale of the instance that we will construct. We consider an instance consisting of 3 machines and 7 jobs of sizes p1 = p2 = p3 = p4 = 2 · α, p5 = p6 = 3 · α and p7 = 5.7 · α. It is easy to see that the optimal solution is given by Figure 1.1a. This is, up to symmetries, the only (1 + ε)-approximate solution for any 0 < ε < 1/19. Let us assume that there exists a (1 + ε)-competitive algorithm with constant reassignment factor r(ε) such that for every iteration t, the inequality p(Dt) ≤ r(ε) · (pjt + Kt) is satisfied for some value Kt. Then we must start with the solution given by Figure 1.1a. For some δ > 0, consider that a sequence of jobs of size less than δ arrives, whose total processing times sum up to 1.3 · α. Notice that if none of the original seven jobs is migrated while the small jobs arrive, the best possible solution has a value of 6.65 · α. On the other hand, the optimum, shown in Figure 1.1b, has value 7 · α. This yields a contradiction, since 7/6.65 = 20/19 > 1 + ε. We conclude that some of the original jobs must be migrated, and thus Dt must contain at least one job of size at least 2 · α, while the arriving job has a size of at most δ. Since p(Dt) ≤ r(ε) · (δ + Kt) for every δ > 0, we conclude that Kt ≥ p(Dt)/r(ε) ≥ 2α/r(ε) ∈ Θ(OPTt/r(ε)) for some t.

Corollary 1.6. For any ε > 0, there is no (20/19 − ε)-competitive algorithm with constant migration factor.

Proof. If there were such a PTAS, then there would be an algorithm with Kt = 0 for all t in the previous lemma, contradicting the result.
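The numbers in the proof above can be checked mechanically. The following sketch (not part of the thesis) takes α = 1 and models the arriving small jobs as a divisible amount of 1.3 units of work, distributed by water-filling to maximize the minimum load; this is an upper bound on what jobs of size less than δ can achieve, which is what the proof needs.

```python
# Numerical check of the Lemma 1.5 instance (alpha = 1); illustrative only.
from itertools import product

def min_after_fill(loads, fluid):
    """Max-min load after optimally spreading a divisible amount of work."""
    loads = sorted(loads)
    level, i = loads[0], 1             # i machines currently at `level`
    while i < len(loads):
        gap = (loads[i] - level) * i   # work needed to reach the next load
        if gap >= fluid:
            return level + fluid / i
        fluid -= gap
        level = loads[i]
        i += 1
    return level + fluid / len(loads)

jobs = [2, 2, 2, 2, 3, 3, 5.7]

# Without migrating any original job, starting from the unique
# near-optimal schedule, whose loads are (7.7, 6, 6):
no_migration = min_after_fill([7.7, 6.0, 6.0], 1.3)

# Best value over all reassignments of the seven jobs to three machines:
optimum = max(
    min_after_fill(
        [sum(p for p, k in zip(jobs, assign) if k == mach) for mach in range(3)],
        1.3,
    )
    for assign in product(range(3), repeat=len(jobs))
)
print(no_migration, optimum)   # 6.65 and 7.0, i.e., a ratio of 20/19
```

The brute force over 3^7 assignments confirms the two values compared in the proof.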
This result justifies the use of the bounded reassignment model instead of the bounded migration framework.

Figure 1.1: There is no (20/19 − ε)-competitive algorithm with constant migration factor. (a) Unique optimal solution to the original instance. (b) Unique optimal solution to the instance with new jobs.

General Strategy

One of the main results of this chapter is the following.

Theorem 1.7. In the permanent job setting, the Machine Covering problem admits a robust PTAS with constant reassignment factor that uses at most Kt ≤ 2εOPTt accumulated potential in each iteration t.

We saw in the proof of Lemma 1.5 that the limitation of the bounded migration model is caused by arbitrarily small jobs, whose reassignment potential does not allow any larger job to be migrated. We show that these are the only cases where the migration model is not enough to obtain a robust PTAS. We then use this result to show the previous theorem. Our strategy to show the theorem is the following.

1. Show that there is a robust PTAS with constant migration factor for only one iteration, where the arriving job is larger than εOPTt.
2. Use this result to show that there is a robust PTAS with constant reassignment factor if all arriving jobs are larger than εOPTt.
3. Refine the previous algorithm to use constant migration factor if all arriving jobs are larger than εOPTt.
4. Remark that this algorithm can also deal with arriving jobs that are smaller than ε · OPTt, if we allow it to use at most 2εOPTt accumulated reassignment potential.

After showing this theorem, in Section 1.4 we show that we can adapt the techniques above to the job departure case, obtaining the following result.

Theorem 1.8. There exists a robust PTAS with constant reassignment factor for the Machine Covering problem in the temporary job setting.
We note that this theorem does not guarantee that a small amount of accumulated reassignment potential is used in each iteration. After proving these theorems for the Machine Covering problem, we will generalize them to other objective functions in Section 1.5.

Figure 1.2: Example showing that for the Machine Covering problem a robust PTAS must maintain solutions with extra structure. (a) A possible optimal solution for the initial instance. (b) Unique α-approximate solution for α < 2 after the arrival of a job of size 1.

Robust Solutions: a Motivating Example

As explained before, we first derive a robust PTAS with constant migration factor under the following assumption.

Assumption 1.9. For a given ε > 0, all arriving and departing jobs jt for t ≥ 1 satisfy pjt ≥ εOPTt.

An interesting characteristic of any robust PTAS with constant migration factor for the Machine Covering problem is that – even under Assumption 1.9 – it must maintain near-optimal solutions with a special structure. This contrasts with the minimum makespan case for which, starting with an arbitrary (1 + ε)-approximate solution, we can maintain the approximation guarantee with constant migration factor [SSS09]. In the following we give a collection of examples showing this fact. Our examples grow gradually in complexity to argue that solutions must be highly structured. At the same time, we discuss what kind of structures are useful for deriving a robust PTAS. The insights presented in this section contribute to the understanding of robust solutions for our scheduling problem, and will be of great help when deriving our algorithms. The first example, which was already observed by Sanders et al. [SSS09], indicates that not every (near-)optimal solution is robust.

Example 1.10. Consider an instance with m machines and 2m − 1 jobs of size 1; then one new job of size 1 arrives.
In Figure 1.2a we show a possible solution for the original instance, and Figure 1.2b depicts the unique (2 − ε)-approximate solution after the arrival of the new job (for any ε > 0). For a given solution S, let n_OPT(S) be the number of machines whose load is equal to the optimal value. Intuitively, the problem with the solution in Figure 1.2a is that n_OPT(S) = m − 1. Since the optimal value increases significantly after the arrival of the new job, we must cover – in only one iteration – all machines whose load equals the optimal value. This requires the migration of m − 1 jobs, and thus the necessary migration factor is again m − 1. A natural idea to avoid the type of solution depicted in Figure 1.2a is to choose in every iteration t a solution S_t that minimizes n_OPT(S_t) among all optimal (or near-optimal) solutions. However, as shown in the following example, this property is not enough for our purposes. Nonetheless, the idea of minimizing n_OPT(S) still points us in the right direction, as we will see below.

Example 1.11. Consider an instance with an even number of machines m. Our initial instance contains jobs of four types: set A_1 contains m/2 jobs with processing time 1, set A_2 contains m/2 jobs with processing time 3/2, set B_1 contains m/2 jobs with processing time 1, and set B_2 contains m/2 jobs with processing time 3/4. Consider now that m/2 jobs of size 3/4 arrive one after another. For the original instance, it is not hard to see that any (1 + ε)-approximate solution for ε < 1/7 has to equal, up to permutations, the solution depicted in Figure 1.3a. Consider the arrival of the first m/2 − 1 jobs. One possible strategy that minimizes n_OPT(S) is to assign the i-th new job to the i-th machine. We then obtain the solution in Figure 1.3b, which is still optimal. However, when the last job arrives, the unique (1 + ε)-approximate solution for ε < 3/7 is the one shown in Figure 1.3c.
To obtain this solution we need a migration factor of Ω(m). The reason why our strategy failed in the last example is that we did not take into consideration the machines with load larger than the optimal value, that is, the last m/2 machines. Since after the arrival of the (m/2)-th job the optimal value surpasses the load of those machines, we need to additionally cover those machines to maintain (1 + ε)-approximate solutions. Interestingly, we can cover the last m/2 machines and minimize n_OPT(S) simultaneously. Indeed, consider the arrival of the first job. There are two optimal solutions minimizing n_OPT(S): the solution that assigns the new job to the first machine and leaves the rest untouched, and the solution shown in Figure 1.4. Note that we can construct this last solution with a small migration factor, since we just need to migrate a job of size 1 and a job of size 3/4. In general, when the i-th job arrives, we can assign it to machine i and swap one job of size 1 on machine i with a job of size 3/4 on machine m/2 + i. Let us call this schedule S̄_i for i ∈ {1, . . . , m/2}. After the (m/2)-th iteration we obtain the optimal solution (Figure 1.3c). We now generalize the idea of the previous example to derive a general rule. Consider an arbitrary instance. For any schedule S, let f_S : R_{≥0} → N_0 be the function such that f_S(ℓ) equals the number of machines in S with load equal to ℓ. The previous example suggests that for two schedules S and T such that f_S(ℓ) = f_T(ℓ) for all ℓ < ℓ̄ and f_S(ℓ̄) < f_T(ℓ̄) for some ℓ̄, schedule S should be preferred over T. In this case we say that S is lexicographically smaller than T. A schedule S that is lexicographically smaller than any other schedule T is a lexicographically minimal solution². It is crucial to observe that a lexicographically minimal solution is an optimal solution for the Machine Covering problem.
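The load-profile comparison that defines this order can be sketched in a few lines of Python (the helper names `load_profile` and `lex_smaller` are ours, not from the text): a schedule is a list of per-machine job lists, and we compare the functions f_S from the smallest load value upwards.

```python
from collections import Counter

def load_profile(schedule):
    """f_S: maps a load value to the number of machines with that load."""
    return Counter(sum(jobs) for jobs in schedule)

def lex_smaller(s, t):
    """True if schedule s is lexicographically smaller than t, i.e. at the
    first load value (scanned in increasing order) where the profiles
    differ, s has strictly fewer machines with that load."""
    fs, ft = load_profile(s), load_profile(t)
    for load in sorted(set(fs) | set(ft)):
        if fs[load] != ft[load]:
            return fs[load] < ft[load]
    return False  # identical load profiles

# A balanced schedule is preferred over one with a lightly loaded machine.
balanced = [[1, 1], [1, 1]]   # loads (2, 2)
skewed   = [[1], [1, 1, 1]]   # loads (1, 3)
print(lex_smaller(balanced, skewed))  # True
```

Repeatedly preferring the lexicographically smaller profile in particular maximizes the minimum machine load, which is why a lexicographically minimal schedule is covering-optimal.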
Consider schedule S̄_1 (depicted in Figure 1.4), and let T be the solution that takes the original optimal solution (Figure 1.3a) and adds the first new job to the first machine.

² An alternative way of defining this property is by considering a schedule whose vector of loads (ℓ_1, . . . , ℓ_m), with ℓ_1 ≤ . . . ≤ ℓ_m, is lexicographically maximal.

Figure 1.3: Minimizing the number of machines whose load equals the optimal value is not enough to obtain a robust PTAS. Gray boxes depict the newly arrived jobs. (a) Unique (1 + ε)-approximate solution for the original instance. (b) Possible solution minimizing the number of machines whose load equals the optimal value. (c) Unique (1 + ε)-approximate solution after the arrival of the new jobs.

It is easy to see that f_T(ℓ) = f_{S̄_1}(ℓ) for all ℓ < 9/4 and that f_{S̄_1}(9/4) < f_T(9/4), and thus S̄_1 is lexicographically smaller than T. In general, it is not hard to see that schedule S̄_i is the unique lexicographically minimal schedule for its respective instance. At least in this example, maintaining a lexicographically minimal solution at each job arrival helps us keep optimal solutions with a small migration factor. It is possible to generalize the idea of Example 1.11 to show that, up to small variations, such a complicated structure is necessary to derive a robust PTAS with constant migration factor. This is done in the following example.

Example 1.12. Fix a (small) number δ > 0 and assume that N := 1/(2δ) is an integer. Consider m = N · k machines for some value k ∈ N_0 (chosen independently of δ). For i ∈ {1, . . . , N}, let A_i be a set containing k jobs of size p^A_i := 1 + δ · (i − 1) and let B_i be a set containing k jobs of size p^B_i := 1/2 − δ · (i − 1) − δ/i. Then, a sequence of k new jobs of size p^new := 1/2 − δ² arrives. Consider the original instance, before any job arrival.
For a fixed value of δ, it is not hard to see that the unique (1 + ε)-approximate solution, for small enough ε, is obtained by processing a job in B_i together with a job in A_i on the same machine for each i ∈ {1, . . . , N} (see Figure 1.5a).

Figure 1.4: A preferable optimal solution after the first job arrival.

After the arrival of all k new jobs, an optimal solution can be constructed as follows. First, for each i ∈ {1, . . . , N} and s ∈ {1, . . . , k}, assign the s-th job in A_i to machine (i − 1) · k + s. Additionally, for each i ∈ {1, . . . , N} and s ∈ {1, . . . , k}, process the s-th job in B_i on machine (i mod N) · k + s. Finally, for s ∈ {1, . . . , k}, assign the s-th new job to machine s (see Figure 1.5b). Let us call S_lex the schedule just constructed. We notice that the minimum load of this schedule is 3/2, and it is attained on the machines that process a job in A_2 together with a job in B_1.

Lemma 1.13. Schedule S_lex is, up to permutations, the unique solution that maximizes the minimum machine load of the instance after the k job arrivals. Thus, it is also the unique lexicographically minimal solution.

Proof. We say that a job is an A-job if it belongs to some set A_i and a B-job if it belongs to some set B_i. Also, we say that the k jobs that arrive are new. Recall that the value of S_lex is 3/2. First we observe that in an optimal solution there are no two A-jobs assigned to the same machine. Indeed, if this happens, there must be one machine that has only B-jobs or new jobs. Clearly, each machine can have at most 3 jobs, otherwise the minimum load of the schedule is less than 3/2. We conclude that there is a machine that contains at most three jobs that are B-jobs or new jobs. Since new jobs are larger than any B-job, the load of this machine is at most 3 · p^new = 3/2 − 3δ² < 3/2, which is a contradiction. This implies that all A-jobs are processed on different machines.
Therefore we can assume that the assignment of A-jobs in S_lex is correct. If a machine processing a job in A_1 has only two jobs, its load is at most p^A_1 + p^new < 3/2. This implies that each of the first k machines must process a job in A_1 plus two other jobs. Consider one of the first k machines. If a job in B_N is not assigned to one of these machines, it must be assigned to some other machine together with a job in some A_i and nothing else. The load of that machine is then at most p^A_N + p^B_N < 3/2, which is a contradiction. Also, the third job assigned to any of the first k machines must be a new job, since otherwise the load of the machine is at most p^A_1 + p^B_N + p^B_1 = 3/2 − δ/N < 3/2. For the rest of the machines, it is clear that the jobs in B_1, . . . , B_{N−1} must be assigned to the machines in non-decreasing order, from left to right.

Figure 1.5: Solutions for Example 1.12: (approximately) lexicographically optimal solutions are necessary for a robust PTAS. (a) Unique (1 + ε)-approximate solution for the original instance. (b) Unique (1 + ε)-approximate solution for the new instance (schedule S_lex). Newly arrived jobs are displayed in gray.

The lemma also implies that any solution other than S_lex has a smaller minimum load, and thus the ratio between its value and the optimum depends exclusively on δ. This implies that for small enough ε > 0, S_lex is (up to permutations) the unique (1 + ε)-approximate solution. Starting from the optimal solution for the initial instance, a sequence of lexicographically minimal solutions for the intermediate iterations can be constructed as follows. In each iteration s, assign the new job j_s to machine s. Additionally, for i ∈ {1, . . . , N}, migrate the job in B_i assigned to machine (i − 1) · k + s to machine (i mod N) · k + s. At the end of this process we obtain schedule S_lex (Figure 1.5b). Notice that whenever a new job arrives we need to migrate exactly one job from each set B_1, . . . , B_N. Hence, we need only constant migration factor, since we assumed that δ (and thus N) is fixed. With the same kind of arguments as in the previous lemma, it is easy to show that each of the intermediate schedules just described is lexicographically minimal for its respective instance. Moreover, our strategy minimizes the migration factor over all possible sequences of solutions, since it spreads the migration of jobs evenly over all iterations. This example strongly indicates that lexicographically minimal solutions, or very similar structures, are necessary for a robust PTAS with constant migration factor. Notice that at any intermediate iteration, the instance admits a large number of optimal solutions. For example, consider the following family of schedules, parametrized by a number p ∈ {1, . . . , N − 1}. For each iteration s, assign the new job j_s to machine s. Also, for i ∈ {1, . . . , p}, migrate the job in B_i assigned to machine (i − 1) · k + s to machine (i mod p) · k + s. During this procedure we obtain an optimal solution for each job arrival except for the last iteration. Moreover, to obtain the unique (1 + ε)-approximate solution (for small ε) at the end of this procedure, we need to migrate at least all jobs in B_{N−1} and B_N. It is easy to see that this requires a non-constant migration factor Ω(k) (where the big-Ω notation hides a small constant depending on δ). Notice that this also holds if we allow to accumulate O(εOPT) reassignment potential.
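Example 1.12 is concrete enough to check numerically. The sketch below (our own code, not part of the text) builds the schedule S_lex for δ = 1/8 (so N = 4) and k = 2, following the construction described above, and confirms that the minimum load is exactly 3/2, attained by a machine that holds a job of A_2 together with a job of B_1.

```python
from fractions import Fraction as F

delta, k = F(1, 8), 2
N = int(1 / (2 * delta))                      # here N = 4
pA = [1 + delta * (i - 1) for i in range(1, N + 1)]
pB = [F(1, 2) - delta * (i - 1) - delta / i for i in range(1, N + 1)]
p_new = F(1, 2) - delta ** 2

# Build S_lex on m = N*k machines (0-indexed): machine (i-1)*k + s receives
# the s-th job of A_i, the s-th job of B_i goes to machine (i mod N)*k + s,
# and the s-th new job goes to machine s.
m = N * k
machines = [[] for _ in range(m)]
for i in range(1, N + 1):
    for s in range(k):
        machines[(i - 1) * k + s].append(pA[i - 1])
        machines[(i % N) * k + s].append(pB[i - 1])
for s in range(k):
    machines[s].append(p_new)

loads = [sum(jobs) for jobs in machines]
print(min(loads))   # 3/2, on the machines pairing A_2 with B_1
```

Exact rational arithmetic (`fractions.Fraction`) avoids any floating-point doubt about the minimum being exactly 3/2.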
1.3 A Robust PTAS for the Machine Covering Problem with Permanent Jobs

1.3.1 A Stable Estimate of the Optimal Value

In the previous section we gave an example indicating that robustness can be achieved by considering lexicographically minimal solutions. We devote the next sections to showing that this idea, correctly applied, indeed yields robust solutions. As seen in previous results of this type [SSS09, EL09], rounding the processing times of jobs and grouping small jobs together is a key tool for showing robustness: it reduces the complexity of the instances and helps identify and avoid symmetries. The rounding techniques we use are similar to classical rounding techniques for other scheduling problems [AAWY98]. To apply them we need to compute an estimate of the optimal value that determines how coarse our rounding should be. In our online setting we additionally need this estimate to be stable: at the arrival of a new job, its value must not change by more than a constant factor. In our case this property cannot always be fulfilled; however, we characterize the instances for which it is not satisfied and deal with them separately. We first describe the upper bound for a particular class of instances, and then extend it to arbitrary instances and prove the stability property in general. The upper bound that we consider was previously introduced by Alon et al. [AAWY98]. Let I = (J, M) be an instance of our problem, where J is a set of n jobs and M a set of m machines. We denote by OPT the optimal value of this instance. Also, for a given schedule S, we denote by ℓ_i(S) the load of machine i and by ℓ_min(S) the minimum machine load of schedule S. The most natural upper bound to use in our setting is the average load of the instance, p(J)/m. However, this estimate is not within a constant factor of the optimum (consider, e.g., an instance with two machines and two jobs with processing times 1 and K ≫ 1, respectively).
Throughout this section we say that instance I satisfies property (∗) if p_j ≤ p(J)/m for all j ∈ J. Under condition (∗), the average load is always within a factor 2 of OPT. To show this we need the following definition.

Definition 1.14 (Locally optimal schedules). For a given schedule S, let p^i_min be the processing time of the smallest job assigned to machine i, for each i ∈ M. We say that S is locally optimal if ℓ_i(S) − p^i_min ≤ ℓ_min(S) for every machine i ∈ M.

It is not hard to see that there always exists an optimal solution that is locally optimal. Indeed, let S_OPT be an optimal schedule. For any machine i such that ℓ_i(S_OPT) − p^i_min > ℓ_min(S_OPT) = OPT, we can migrate the smallest job assigned to i to a machine whose load equals ℓ_min(S_OPT). This does not decrease the value of the solution. We can iterate this procedure until the solution is locally optimal. For any instance satisfying (∗), we will show that p(J)/m is within a factor of 2 of ℓ_min(S) for any locally optimal solution S. With our previous observation, this implies that p(J)/m is within a factor 2 of OPT.

Lemma 1.15 ([AAWY98]). Let I = (J, M) be an instance satisfying (∗), and let S be a locally optimal solution for this instance. Then p(J)/(2m) ≤ ℓ_min(S) ≤ p(J)/m.

Proof. The upper bound on ℓ_min(S) is clear. Assume by contradiction that ℓ_min(S) < p(J)/(2m). This implies that there must exist some machine i whose load is strictly larger than the average load p(J)/m. Since the processing time of every job is at most p(J)/m, machine i must contain at least two jobs. Let j be the smallest job assigned to i, and let j′ ≠ j be any other job processed on i. Since S is locally optimal we have that

p_j ≥ ℓ_i(S) − ℓ_min(S) ≥ p(J)/m − p(J)/(2m) = p(J)/(2m).

This, together with the fact that p_j + p_{j′} ≤ ℓ_i(S), implies that

ℓ_min(S) < p(J)/(2m) ≤ p_j ≤ p_{j′} ≤ ℓ_i(S) − p_j,

which is a contradiction since S is locally optimal.
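The migration argument used above is constructive, and can be sketched directly (an illustrative Python sketch; the function name is ours): while some machine violates the condition of Definition 1.14, move its smallest job to a least-loaded machine. Each move strictly raises the sorted load vector, so the loop terminates, and the minimum load never decreases.

```python
def make_locally_optimal(schedule):
    """Turn a schedule (list of per-machine job lists) into a locally
    optimal one: while some machine i has load(i) - smallest_job(i)
    exceeding the minimum load, migrate that smallest job to a
    least-loaded machine. The minimum machine load never decreases."""
    sched = [sorted(jobs) for jobs in schedule]
    while True:
        loads = [sum(jobs) for jobs in sched]
        lmin = min(loads)
        for i, jobs in enumerate(sched):
            if jobs and loads[i] - jobs[0] > lmin:
                job = jobs.pop(0)            # smallest job on machine i
                target = loads.index(lmin)   # a least-loaded machine
                sched[target].append(job)
                sched[target].sort()
                break
        else:                                # no violating machine remains
            return sched

result = make_locally_optimal([[1, 1, 1], [0.5]])
print(sorted(sum(jobs) for jobs in result))  # [1.5, 2]
```

Note that a violating machine i has load strictly above ℓ_min, so the chosen target is always a different machine.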
Now we show how to transform arbitrary instances into instances satisfying (∗) without changing the set of locally optimal solutions. Consider a locally optimal solution S. If p_j > p(J)/m ≥ ℓ_min(S), then we can assume that j is processed on a machine of its own. Indeed, if there is another job on the same machine as j, then migrating this job to a machine with load ℓ_min(S) cannot decrease ℓ_min(S). Thus, removing j together with its machine does not change the value of the solution, but it does reduce the average load of the instance. Iterating this idea we get the following algorithm.

Algorithm Stable-Average
Input: An arbitrary scheduling instance (J, M).
1. Order the jobs in J so that p_{j_1} ≥ p_{j_2} ≥ . . . ≥ p_{j_n}.
2. Initialize w := m, L := J and k := 1.
3. Set A := p(L)/w and check whether p_{j_k} ≤ A. If this holds, then return A together with w and L. Otherwise, set L := L \ {j_k}, w := w − 1, k := k + 1, and repeat Step (3).

We remark that the running time of this algorithm is O(n ln(n)). However, there is an alternative algorithm with linear running time; see Appendix A. We call the value A returned by the algorithm the stable average of instance I. Notice that all jobs in J \ L must be processed on a machine of their own in any locally optimal schedule, so we can ignore these jobs together with the corresponding m − w machines. This implies that there is a one-to-one correspondence between locally optimal solutions of instance I and locally optimal solutions of an instance with job set L and w machines. This, together with Lemma 1.15, implies the following.

Lemma 1.16. Let I = (J, M) be any instance and A its stable average. Any locally optimal solution S for this instance satisfies A/2 ≤ ℓ_min(S) ≤ A.

Since there is always an optimal solution which is locally optimal, we conclude that A/2 ≤ OPT ≤ A. Lemma 1.17.
For any Machine Covering instance I with optimal value OPT, the stable average A satisfies OPT ≤ A ≤ 2 · OPT.

It is easy to see that, in general, the factor by which the upper bound A changes at the arrival or departure of a job might not be bounded (consider two machines and two jobs of size 1 and K ≫ 1, respectively; then one job of size K − 1 arrives). However, we can show that if the stable average A increases by more than a factor 2, then the instance was trivial to solve in the first place. We first show that if the value A increases by more than a factor 2, then a significant number of jobs must have arrived to the system. In the next lemma we denote by S△T the symmetric difference of sets S and T.

Lemma 1.18. Consider two scheduling instances I = (J, M) and I′ = (J′, M). Let A, L, w and A′, L′, w′ be the returned values when applying Algorithm Stable-Average on inputs I and I′, respectively. If A′ > 2A, then |J△J′| > w/2.

Proof. Let δ be an arbitrary positive number. We assume, without loss of generality, that jobs in instances I and I′ have processing times upper bounded by A′(1 + δ). Indeed, if there is some job j with p_j > A′(1 + δ), changing its processing time to A′(1 + δ) leaves A, A′, L, L′, w, and w′ unchanged. Let k := |J′ \ J| ≤ |J′△J|; then

A′ ≤ (wA + (m − w)A′(1 + δ) + kA′(1 + δ))/m.

Simple algebraic manipulation yields that

k ≥ (w(1 − A/A′) − δ(m + w))/(1 + δ).

Notice that the limit of the right-hand side when δ → 0⁺ equals w(1 − A/A′) > w/2. The result then follows by choosing δ small enough.

Moreover, we say that an instance is trivial if Algorithm Stable-Average returns w = 1; otherwise it is non-trivial. If an instance is trivial, the optimal solution of the instance can be constructed by processing each of the m − 1 largest jobs on a machine of its own. The rest of the jobs are processed on the remaining machine. Moreover, the optimal value OPT then equals A. This observation motivates considering trivial instances separately.
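Algorithm Stable-Average translates almost line by line into code. The sketch below (the function name is ours) peels off the largest job, together with one machine, while that job exceeds the current average; it also exposes the triviality test w = 1 used above.

```python
def stable_average(jobs, m):
    """Return (A, w, L): the stable average A = p(L)/w, the number w of
    remaining machines, and the remaining job list L, obtained by
    repeatedly removing the largest job while it exceeds the current
    average load."""
    L = sorted(jobs, reverse=True)
    w = m
    while True:
        A = sum(L) / w
        if not L or L[0] <= A:
            return A, w, L
        L = L[1:]      # this job gets a machine of its own
        w -= 1

# Two machines, jobs of size 1 and 100: the plain average 101/2 is far
# from OPT = 1, but the stable average is exact here.
A, w, L = stable_average([1, 100], 2)
print(A, w, L)   # 1.0 1 [1]
# w == 1 means the instance is trivial in the sense defined above.
```

The loop is safe: once w = 1, the largest remaining job can never exceed p(L)/1 = p(L), so w never drops to zero on a non-empty job list.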
For non-trivial instances, we get the following result, which is a direct implication of Lemma 1.18.

Corollary 1.19. Consider a non-trivial instance I = (M, J) and an arbitrary instance I′ = (M, J′) with J′ = J ∪ {j∗} for some j∗ ∉ J. Then it must hold that A ≤ A′ ≤ 2 · A.

The Structure of Robust Solutions

In this section we show how to construct robust solutions in a static setting. That is, given an instance I we show how to construct a schedule S for I such that: (1) S is a (1 + O(ε))-approximate solution, and (2) at the arrival of any job larger than ε · OPT, we can construct a (1 + O(ε))-approximate solution S′ for the modified instance with constant migration factor. The solution constructed is based on rounding the instance and then constructing a lexicographically minimal solution by solving an integer linear program (ILP) of constant dimension. For our online setting, where the input is a sequence of instances (M, J_0), . . . , (M, J_t), . . . revealed online, the procedure presented in this section can be directly used to construct schedules S_0 and S_1 for instances (M, J_0) and (M, J_1), respectively. However, we cannot directly apply the same technique to later instances in the sequence. Additional complications arise from the rounding technique: the coarseness of the rounding depends on the optimal value, and this value changes through the iterations. If we want to construct lexicographically minimal solutions for our rounded instance, changing the coarseness of the rounding alone might provoke changes in the lexicographically minimal solution. We deal with this problem by carefully choosing when to change the coarseness of the rounding. This is done later in Section 1.3.3. We break our result for the static case into two parts. We first show our result for trivial instances; later we consider the non-trivial case. Lemma 1.20.
Let I = (J, M) be an instance with optimal value OPT, and consider an instance I′ = (J′, M) with one extra job, that is, J′ = J ∪ {j∗}. Let A and A′ be the stable averages of instances I and I′, respectively, and assume that A′ > 2A. Let S and S′ be locally optimal solutions for I and I′, respectively. By permuting machines in schedule S′, it is possible to transform S into S′ with a migration factor of 1.

Proof. Since A′ > 2A, Corollary 1.19 implies that I is a trivial instance. Hence, running Algorithm Stable-Average on input I yields w = 1, and the subset of jobs L returned by the algorithm satisfies A = p(L). Consider the locally optimal solution S′. We first show by contradiction that no two jobs in J′ \ L are processed on the same machine. Indeed, consider two jobs j, j′ ∈ J′ \ L that are processed on machine i, and assume that p_j ≥ p_{j′}. Notice that either j or j′ belongs to J \ L, and therefore p_j > A. On the other hand, since J \ L contains m − 1 jobs, there exists a machine i′ that contains only jobs in L. This implies that S′ is not locally optimal: if ℓ_k(S′) denotes the load of a machine k and ℓ_min(S′) the value of schedule S′, then

ℓ_i(S′) − p_{j′} ≥ p_j > A = p(L) ≥ ℓ_{i′}(S′) ≥ ℓ_min(S′).

We conclude that no two jobs in J′ \ L are processed on the same machine. Since I is trivial and S is locally optimal, it is easy to see that S processes all jobs in J \ L on different machines. We conclude that, up to a permutation of machines, schedules S and S′ differ only in the new job j∗ and the jobs in L. Thus, the total processing time of jobs that must be migrated is p(L) = A. On the other hand, it is not hard to see that A′ − A ≤ p_{j∗}. Since by hypothesis A′ > 2A, this implies that p_{j∗} ≥ A, and thus the migration factor needed is at most 1. From now on we can assume that A′ ≤ 2 · A.
Compact Description of a Schedule

As usual in PTASes, we first simplify our instance by rounding. The techniques we use are similar to the ones found, e.g., in [Woe97, AAWY98, SSS09]. Consider an arbitrary instance I = (M, J) of the Machine Covering problem with optimal value OPT. It is easy to see that rounding down all processing times to the nearest power of 1 + ε can only decrease the optimal value of the instance by a (1 + ε) factor. Thus, without loss of generality we assume the following.

Assumption 1.21. For every job j, there exists k ∈ Z so that p_j = (1 + ε)^k.

Further, we group together small jobs and round down jobs that are too large. For this, consider an index set I := {ℓ, ℓ + 1, . . . , u} ⊆ Z. After our rounding, each job will have processing time (1 + ε)^i for some i ∈ I. Given this set, we say that a job j is small if p_j ≤ (1 + ε)^ℓ, big if (1 + ε)^{ℓ+1} ≤ p_j ≤ (1 + ε)^{u−1}, and huge if p_j ≥ (1 + ε)^u. Our rounding groups small jobs into jobs of size (1 + ε)^ℓ, leaves big jobs untouched, and rounds huge jobs down to (1 + ε)^u. We can choose the set I, for example, as

I_A := { i ∈ Z : εA/(2(1 + ε)) ≤ (1 + ε)^i ≤ A(1 + ε) },     (1.1)

where A is the stable average of instance I. Then small jobs have processing time at most εA/2 ≤ εOPT (by Lemma 1.17), and huge jobs are larger than A ≥ OPT. As we will see below, this guarantees that the rounding decreases the optimal value by at most a factor 1 + O(ε). Also, with this choice of I, the number of different processing times is constant, |I_A| ∈ O(log_{1+ε} 1/ε). In general, however, we will consider I = {ℓ, . . . , u} as a superset of I_A. The purpose of this is the following: when a job arrives, the same set I can be used for rounding the new instance. This can be done so that the set I has – essentially – the same size, |I| ∈ O(log_{1+ε} 1/ε). Given an index set I, we round our instance with the following procedure.
Algorithm Rounding
Input: An arbitrary scheduling instance (J, M), and an index set I = {ℓ, . . . , u}.
1. Define the vector N = (N_i)_{i∈I} ∈ N_0^I as follows:
   N_i := |{ j ∈ J : p_j = (1 + ε)^i }| for i ∈ {ℓ + 1, . . . , u − 1},
   N_u := |{ j ∈ J : p_j ≥ (1 + ε)^u }|,
   N_ℓ := ⌊ (Σ_{j : p_j ≤ (1+ε)^ℓ} p_j) / (1 + ε)^ℓ ⌋.
2. Define a set of jobs J_N containing N_i jobs of size (1 + ε)^i for all i ∈ I.
3. Return the vector N and the instance I_N := (J_N, M).

Consider the following definition: for a given instance I = (J, M) and a number ℓ ∈ Z, we denote by J_ℓ(I) the set of small jobs with respect to (1 + ε)^ℓ, that is, J_ℓ(I) := { j ∈ J : p_j ≤ (1 + ε)^ℓ }. Also recall that for a given set of jobs J we denote p(J) := Σ_{j∈J} p_j. Notice that the definition of N_ℓ in the previous algorithm ensures that the total processing time of small jobs in I_N and I differs by at most (1 + ε)^ℓ, that is,

p(J_ℓ(I_N)) ≤ p(J_ℓ(I)) ≤ p(J_ℓ(I_N)) + (1 + ε)^ℓ.

Thus, choosing ℓ so that (1 + ε)^ℓ ≤ εOPT implies that the difference in volume of small jobs in the two instances is arbitrarily small. As shown in the following lemma, this implies that the optimal value of the rounded instance is within a 1 + O(ε) factor of OPT.

Lemma 1.22. Let I_1 = (J_1, M) and I_2 = (J_2, M) be two scheduling instances and denote by OPT_1 and OPT_2 their corresponding optimal values. Consider a given index set I = {ℓ, . . . , u} so that (1 + ε)^ℓ ≤ εOPT_1 and (1 + ε)^u ≥ OPT_1. Assume that the big jobs in both instances coincide, that is,

{ j ∈ J_1 : p_j = (1 + ε)^i } = { j ∈ J_2 : p_j = (1 + ε)^i } for all i ∈ {ℓ + 1, . . . , u − 1},

that the number of huge jobs is the same,

|{ j ∈ J_1 : p_j ≥ (1 + ε)^u }| = |{ j ∈ J_2 : p_j ≥ (1 + ε)^u }|,

and that the volumes of small jobs differ by at most (1 + ε)^ℓ,

|p(J_ℓ(I_1)) − p(J_ℓ(I_2))| ≤ (1 + ε)^ℓ.

Then OPT_1 ≤ (1 + O(ε)) · OPT_2.

Proof. Let us consider an optimal schedule S for I_1. We modify this solution to construct a schedule for I_2.
First, replace each job in J_1 with processing time at least (1 + ε)^u by a job of size at least (1 + ε)^u in J_2. The load of each affected machine is then still at least (1 + ε)^u ≥ OPT_1. Next, remove all jobs j with p_j ≤ (1 + ε)^ℓ, and apply a list-scheduling algorithm to the jobs of size at most (1 + ε)^ℓ in instance I_2, i.e., greedily assign these jobs to the least loaded machine in an arbitrary order. With this we have constructed a feasible solution for instance I_2. Let j be the last job scheduled by this procedure, and let S_j be its starting time. It is clear that the value of the schedule is at least S_j. Assume by contradiction that S_j < OPT_1 − 2(1 + ε)^ℓ. Since we are using a greedy algorithm, all jobs of size at most (1 + ε)^ℓ have completion time strictly smaller than OPT_1 − (1 + ε)^ℓ. This contradicts the fact that |p(J_ℓ(I_1)) − p(J_ℓ(I_2))| ≤ (1 + ε)^ℓ, since the small jobs in I_1 can be used to cover all machines up to OPT_1. We conclude that the value of the solution constructed is at least OPT_1 − 2(1 + ε)^ℓ, and thus OPT_2 ≥ OPT_1 − 2(1 + ε)^ℓ ≥ OPT_1 − 2εOPT_1. This implies that OPT_1 ≤ OPT_2/(1 − 2ε) = (1 + O(ε)) · OPT_2.

The following lemma follows directly from the previous result.

Lemma 1.23. Let I be a scheduling instance and OPT its optimal value. Assume that I = {ℓ, . . . , u} satisfies (1 + ε)^ℓ ≤ εOPT and OPT ≤ (1 + ε)^u. Then Procedure Rounding, on input I and I, returns an instance whose optimal value is within a 1 + O(ε) factor of OPT.

We can thus restrict ourselves to working with instance I_N; if I is chosen appropriately, its jobs take only a constant number of different sizes. With the help of the following definition we can compactly describe schedules for I_N.

Definition 1.24 (Machine configuration). For a given schedule, a machine is said to obey configuration k : I → N_0 if k(i) equals the number of jobs of size (1 + ε)^i being processed on that machine, for all i ∈ I.
Also, the load of a configuration k, denoted load(k), is the load of a machine that obeys that configuration, i.e., load(k) = Σ_{i∈I} k(i) · (1 + ε)^i.

Let us now consider the set of configurations

K_I := { k : I → N_0 | k(i) ≤ (1 + ε)^{u−ℓ} + 1 for all i ∈ I }.     (1.2)

When it is clear from the context, we omit the subscript and write K_I = K. Notice that if the cardinality of I is constant then the same holds for K, since |K| ∈ (1 + ε)^{O(|I|²)}. The next lemma ensures that K_I contains all configurations that we need to consider.

Lemma 1.25. If (1 + ε)^u ≥ OPT, then in any locally optimal solution for I_N all machines obey a configuration in K_I.

Proof. Consider a locally optimal solution for I_N. Then no job starts later than OPT ≤ (1 + ε)^u. Therefore, since all jobs are larger than (1 + ε)^ℓ, the number of jobs per machine is at most (1 + ε)^u/(1 + ε)^ℓ + 1.

We can now describe a schedule for I_N as a vector x = (x_k)_{k∈K}, where x_k denotes the number of machines that obey configuration k in the schedule. Then any locally optimal solution to I_N corresponds to a vector x satisfying the following set of constraints:

Σ_{k∈K_I} x_k = m,     (1.3)
Σ_{k∈K_I} k(i) · x_k = N_i for all i ∈ I,     (1.4)
x_k ∈ N_0 for all k ∈ K_I.

We denote by A_I the matrix defining the set of Equations (1.3) and (1.4); its corresponding right-hand side is denoted by b(N, m). Then the non-negative integral solutions to these equations correspond to the set D := { x ∈ N_0^K : A_I · x = b(N, m) }. A key point in the following argument is that the set D lies in a constant-dimensional space, i.e., D ⊆ Z^K, where |K| ∈ (1 + ε)^{O(|I|²)}.
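On small instances, the solution set D can simply be enumerated. The sketch below (a didactic brute force of our own, not the text's algorithm) lists all feasible vectors x for an instance with two job sizes, sorts the configurations by load, and returns the componentwise-smallest feasible vector; this foreshadows the lexicographically minimal vector studied in the next subsection.

```python
from itertools import product

def lex_min_vector(sizes, N, m, max_per_size):
    """Enumerate D = {x : sum_k x_k = m and sum_k k(i)*x_k = N[i]} over
    all configurations k with at most max_per_size jobs of each size, and
    return (configs, x*) where x* is the smallest feasible vector once
    configurations are sorted by load (ties broken arbitrarily)."""
    configs = sorted(product(range(max_per_size + 1), repeat=len(sizes)),
                     key=lambda k: sum(c * s for c, s in zip(k, sizes)))
    best = None
    for x in product(range(m + 1), repeat=len(configs)):
        if sum(x) != m:
            continue
        feasible = all(
            sum(x[q] * configs[q][i] for q in range(len(configs))) == N[i]
            for i in range(len(sizes)))
        if feasible and (best is None or x < best):  # tuple order
            best = x
    return configs, best

# Job sizes 1 and 2; two jobs of size 1, one of size 2, two machines.
configs, x_star = lex_min_vector([1, 2], [2, 1], m=2, max_per_size=2)
loads = sorted(l for k, n in zip(configs, x_star)
               for l in [sum(c * s for c, s in zip(k, [1, 2]))] * n)
print(loads)  # [2, 2]: the covering-optimal split {1,1} and {2}
```

This enumeration is exponential in |K| and only meant for intuition; the point of the section is precisely that the same vector can be found by integer programming in constant dimension.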
Constructing Stable Solutions

In the following we present the main structural contribution of this chapter: we show that by considering a lexicographically minimal solution for I_N (which is also optimal), upon the arrival of a new job we can maintain optimality by migrating jobs with total processing time at most f(ε) · OPT, for some function f(ε) that depends exclusively on ε (and is thus constant for fixed ε). Since we assume that the arriving job is larger than ε · OPT, this implies that the migration factor needed is upper bounded by f(ε)/ε. Let us order and relabel the set of configurations K = {k_1, . . . , k_{|K|}} such that load(k_1) ≤ load(k_2) ≤ . . . ≤ load(k_{|K|}).

Definition 1.26. Let x, x′ ∈ D. We say that x′ is lexicographically smaller³ than x, denoted x′ ≺_lex x, if x_{k_r} = x′_{k_r} for all r ∈ {1, . . . , q} and x′_{k_{q+1}} < x_{k_{q+1}}, for some q ∈ {0, 1, . . . , |K| − 1}.

It is easy to see that ≺_lex defines a total order on the solution set D, and thus there exists a unique lexicographically minimal vector, which we call x∗. We will soon see that x∗ has the structure needed for our purposes. In particular, it maximizes the minimum machine load of instance I_N.

Lemma 1.27. Let x∗ be the lexicographically minimal vector in D. Then x∗ represents an optimal schedule for instance I_N.

Proof. Consider an optimal schedule that is also locally optimal. By Lemma 1.25 we can describe such a solution by a vector x^OPT ∈ D. Clearly, x^OPT_k = 0 for all k ∈ K with load(k) < OPT. Thus, since x∗ is lexicographically minimal, x∗_k = 0 for all k ∈ K with load(k) < OPT. The result follows.

Moreover, x∗ can be computed in polynomial time by solving a sequence of integer linear programs in constant dimension. To this end, consider the following algorithm.

Algorithm
Input: A configuration set K = {k_1, . . . , k_{|K|}} and its corresponding set of feasible configuration vectors D.
1.
Solve $\min\{x_{k_1} \mid x \in D\}$ using Lenstra's algorithm [Len83] and call the optimal value $x^*_{k_1}$.
2. For each $q = 2, \ldots, |\mathcal{K}|$, use Lenstra's algorithm to compute
$$x^*_{k_q} := \min\left\{ x_{k_q} : x \in D \text{ and } x_{k_r} = x^*_{k_r} \text{ for all } r = 1, \ldots, q-1 \right\}.$$
3. Return $x^* := (x^*_{k_1}, \ldots, x^*_{k_{|\mathcal{K}|}})$.

³We remark that this definition differs from the one given in Section 1.2.3. Indeed, the concept in Section 1.2.3 does not distinguish between machines with the same load, while the new definition gives (arbitrary) priorities among configurations with the same load. Notice that in the examples of Section 1.2.3 all relevant configurations have different loads, so for these examples the two concepts are equivalent. We introduce the new definition because it has technical advantages that will simplify our proofs.

1.3. A Robust PTAS for the Machine Covering Problem with Permanent Jobs

A simple inductive argument shows that the vector $x^*$ returned by the algorithm is the lexicographically minimal element in $D$. We remark that this is a polynomial time algorithm, since $D$ is embedded in a constant-dimensional space and thus Lenstra's algorithm runs in polynomial time.

Alternatively, we can find $x^*$ by solving a single integer linear program in constant dimension. This can be achieved by minimizing a carefully chosen linear function over the set $D$. Let $\lambda := 1/(m+1)$, and define $c_q := \lambda^q$ for all $q \in \{1, \ldots, |\mathcal{K}|\}$. Consider the following problem:
$$[\mathrm{LEX}] : \quad \min\left\{ \sum_{q=1}^{|\mathcal{K}|} c_q \cdot x_{k_q} : x \in D \right\}.$$

Lemma 1.28. Let $z$ be an optimal solution to [LEX]. Then $z$ is the lexicographically minimal vector in $D$. In particular, [LEX] has a unique optimal solution.

Proof. We use the following claim, which is obtained by standard calculus techniques.

Claim. For each $\ell \in \{1, \ldots, |\mathcal{K}|-1\}$, it holds that $m \cdot \sum_{q=\ell+1}^{|\mathcal{K}|} c_q < c_\ell$.

Let $z$ be an optimal solution to [LEX], and $x^*$ the lexicographically minimal solution in $D$. We proceed by contradiction, and call $\ell$ the smallest index such that $z_{k_\ell} \neq x^*_{k_\ell}$.
Since $x^*$ is the lexicographically minimal solution, we know that $x^*_{k_\ell} \le z_{k_\ell} - 1$. Then,
$$\sum_{q=\ell}^{|\mathcal{K}|} c_q x^*_{k_q} \le c_\ell (z_{k_\ell} - 1) + \sum_{q=\ell+1}^{|\mathcal{K}|} c_q x^*_{k_q} \le c_\ell (z_{k_\ell} - 1) + \sum_{q=\ell+1}^{|\mathcal{K}|} c_q (z_{k_q} + m) < \sum_{q=\ell}^{|\mathcal{K}|} c_q z_{k_q},$$
where the last inequality follows from the claim above. Finally, adding $\sum_{q=1}^{\ell-1} c_q x^*_{k_q} = \sum_{q=1}^{\ell-1} c_q z_{k_q}$ to both sides of the last inequality yields a contradiction to the optimality of $z$.

With this last result we can already compute a $(1+\varepsilon)$-approximate solution for instance $I$: compute the stable average $A$; run Algorithm Rounding on input $I$ and $I = I_A$ as in Equation (1.1); solve [LEX] for the returned instance $I_N$ with Lenstra's algorithm. It is easy to check that this last step takes time $O(\log^2 n)$ (where the $O$-notation hides a constant depending on $1/\varepsilon$). Since we can compute $A$ in linear time (see Appendix A), the running time of this algorithm is $O(n)$.

Bounding the Migration Factor

Now that we have shown how to compute an optimal solution to the rounded instance by solving [LEX], we show that this solution is robust. Consider a new instance $I' = (J', M)$ with optimal value $\mathrm{OPT}'$. We assume that $J' = J \cup \{j^*\}$ with $p_{j^*} \ge \varepsilon\,\mathrm{OPT}'$, and that the stable average $A'$ of $I'$ satisfies $A' \le 2A$ (otherwise the result follows by Lemma 1.20). For this case we show how to update the solution given by [LEX] to a solution for $I'$ with constant migration factor. Define the set
$$I := \left\{ i \in \mathbb{Z} : \varepsilon \cdot \frac{A}{2(1+\varepsilon)} \le (1+\varepsilon)^i \le 2A(1+\varepsilon) \right\} = \{\ell, \ldots, u\}. \qquad (1.5)$$
We now run Algorithm Rounding on instances $I$ and $I'$ with the same interval $I$ just defined. Let $(N, I_N)$ and $(N', I_{N'})$ be the output of the algorithm for $I$ and $I'$, respectively. Notice that $(1+\varepsilon)^u \ge 2A \ge A' \ge \mathrm{OPT}' \ge \mathrm{OPT}$. Similarly, $(1+\varepsilon)^\ell \le \varepsilon A/2 \le \varepsilon\,\mathrm{OPT} \le \varepsilon\,\mathrm{OPT}'$. Thus, Lemma 1.23 implies that optimal solutions for $I_N$ and $I_{N'}$ yield $(1+O(\varepsilon))$-approximate solutions for $I$ and $I'$, respectively. Consider the set $\mathcal{K}_I$ as defined in Expression (1.2).
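The weighting trick of Lemma 1.28 can be sanity-checked by brute force on a toy solution set: with $\lambda = 1/(m+1)$ and $c_q = \lambda^q$, minimizing the weighted sum over $D$ picks exactly the lexicographically minimal vector. The sketch below is an illustration with an explicit small $D$ (not an ILP solver) and uses exact rational arithmetic:

```python
from fractions import Fraction

def weighted_argmin(D, m):
    """Minimize sum_q lambda**(q+1) * x_{k_q} with lambda = 1/(m+1),
    computed exactly with Fractions to avoid floating-point ties."""
    lam = Fraction(1, m + 1)
    return min(D, key=lambda x: sum(lam ** (q + 1) * v for q, v in enumerate(x)))

# Toy set D: each vector distributes m = 4 machines over 3 configurations,
# listed in order of nondecreasing load, so tuple order is lexicographic order.
m = 4
D = [(0, 2, 2), (1, 0, 3), (0, 1, 3)]
```

Because every entry is at most $m$, the claim $m \cdot \sum_{q > \ell} c_q < c_\ell$ guarantees that no combination of later coordinates can compensate an increase in an earlier one, which is exactly why the weighted minimum coincides with the plain lexicographic minimum.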
Lemma 1.25 implies that there are optimal solutions for $I_N$ and $I_{N'}$ that only use configurations in $\mathcal{K}_I$. Consider the integer linear programs
$$[\mathrm{LEX}] : \quad \min\left\{ \sum_{q=1}^{|\mathcal{K}_I|} c_q x_{k_q} : A_I \cdot x = b(N, m) \text{ and } x_k \in \mathbb{N}_0 \text{ for all } k \in \mathcal{K}_I \right\},$$
$$[\mathrm{LEX'}] : \quad \min\left\{ \sum_{q=1}^{|\mathcal{K}_I|} c_q x_{k_q} : A_I \cdot x = b(N', m) \text{ and } x_k \in \mathbb{N}_0 \text{ for all } k \in \mathcal{K}_I \right\}.$$
Let $z$ and $z'$ be optimal solutions to [LEX] and [LEX'], respectively. Notice that these two integer programs differ only in their right-hand side. Moreover, since $I$ and $I'$ differ only in one job, we have $\|N - N'\|_1 \le 1$. This observation plus the following sensitivity analysis result for integer linear programs allows us to bound the difference between $z$ and $z'$.

Lemma 1.29 ([CGST86]). Let $A$ be an integral $m \times n$-matrix such that each sub-determinant is at most $\Delta$ in absolute value. Let $b$ and $b'$ be column $m$-vectors, and let $c$ be a row $n$-vector. Suppose $\min\{c \cdot x \mid A \cdot x \le b;\ x \in \mathbb{Z}^n\}$ and $\min\{c \cdot x \mid A \cdot x \le b';\ x \in \mathbb{Z}^n\}$ are finite. Then for each optimal solution $z$ of the first problem there exists an optimal solution $z'$ of the second problem such that $\|z - z'\|_\infty \le n\Delta(\|b - b'\|_\infty + 2)$.

To be able to apply this lemma to [LEX] and [LEX'] we need to bound the value of the sub-determinants of $A_I$. This is done in the following lemma, which was previously proved in [SSS09].

Lemma 1.30. Assume that $I$ is an index set so that $|I| \le \log_{1+\varepsilon}(C/\varepsilon)$, for some constant $C$. Then, for any square sub-matrix $B$ of $A_I$ we have $|\det(B)| \in 2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$.

Proof. Let $B$ be a square sub-matrix of $A_I$. Then $B$ contains at most $|I|+1$ columns and rows, and each entry is upper bounded by $(1+\varepsilon)^{u-\ell} + 1 \le C/\varepsilon + 1$. Noting that $|I| \le \log_{1+\varepsilon}\frac{C}{\varepsilon} \in O\!\left(\frac{1}{\varepsilon}\log\frac{1}{\varepsilon}\right)$, we obtain
$$|\det(B)| \le (|I|+1)!\left(\frac{C}{\varepsilon}+1\right)^{|I|+1} \le \left((|I|+1)\left(\frac{C}{\varepsilon}+1\right)\right)^{|I|+1} \le 2^{(|I|+1)\cdot(\log(|I|+1)+\log(\frac{C}{\varepsilon}+1))} \in 2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}.$$
It is easy to check that the index interval $I$ defined in Equation (1.5) satisfies the hypothesis of the last lemma, and thus we obtain a bound on the absolute value of each sub-determinant of $A_I$.

Lemma 1.31. Let $z$ and $z'$ be the solutions of [LEX] and [LEX'], respectively. Then $\|z - z'\|_1 \in 2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$.

Proof. Let $\Delta$ be an upper bound on the absolute value of each sub-determinant of $A_I$. Since lexicographically minimal solutions are unique, and $\|N - N'\|_\infty \le 1$, Lemma 1.29 implies that
$$\|z - z'\|_\infty \le |\mathcal{K}_I| \Delta \left( \|b(N,m) - b(N',m)\|_\infty + 2 \right) \le 3 |\mathcal{K}_I| \Delta.$$
Thus, by the previous lemma and recalling that $|\mathcal{K}_I| \in (1+\varepsilon)^{O(|I|^2)} = 2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$,
$$\|z - z'\|_1 \le |\mathcal{K}_I| \cdot \|z - z'\|_\infty \le 3 |\mathcal{K}_I|^2 \Delta \in 2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}.$$

Note that by the last lemma we can compute $z'$ by an exhaustive search through all vectors feasible for [LEX'] whose components differ from $z$ by at most $2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$. Therefore, the running time needed to compute $z'$ is $2^{2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}}$. Alternatively we can use Lenstra's algorithm: the uniqueness of the optimum of [LEX'] guarantees that the output of Lenstra's algorithm will be the optimal solution $z'$ claimed in Lemma 1.29.

Let $S_N$ and $S_{N'}$ be the schedules corresponding to solutions $z$ and $z'$, respectively. Recall that these are schedules for the rounded instances $I_N$ and $I_{N'}$, respectively. By the previous lemma we can construct these schedules so that they only differ in $2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$ machines. It is straightforward to obtain a $(1+O(\varepsilon))$-approximate solution $S$ for $I$ from schedule $S_N$: (1) replace each huge job of $I_N$ by a huge job in $I$; (2) remove all $N_\ell$ small jobs from the solution; (3) order the small jobs in $I$ from largest to smallest; (4) add these jobs to the solution with a greedy algorithm (list scheduling). With the same argument as in Lemma 1.22, it is easy to see that such a solution is $(1+O(\varepsilon))$-approximate.
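Step (4) above is plain list scheduling: small jobs are taken from largest to smallest and always placed on a currently least-loaded machine. A minimal sketch (our own function names; the text does not fix an implementation):

```python
import heapq

def add_small_jobs_greedily(machine_loads, small_jobs):
    """List scheduling for machine covering: place each small job,
    largest first, on a machine of currently minimum load."""
    heap = [(lo, idx) for idx, lo in enumerate(machine_loads)]
    heapq.heapify(heap)
    assignment = {}
    for job, p in sorted(small_jobs.items(), key=lambda kv: -kv[1]):
        lo, idx = heapq.heappop(heap)   # machine with the smallest load
        assignment[job] = idx
        heapq.heappush(heap, (lo + p, idx))
    final = [0.0] * len(machine_loads)
    for lo, idx in heap:
        final[idx] = lo
    return assignment, final
```

Since every small job has size at most $(1+\varepsilon)^\ell \le \varepsilon\,\mathrm{OPT}$, filling the least-loaded machine first can decrease the minimum load by at most an $\varepsilon$-fraction, which is the intuition behind the Lemma 1.22 argument.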
More generally, notice that the solution constructed falls into the following category.

Definition 1.32 (Following Schedule). We say that a schedule $S$ for instance $I$ follows schedule $S_N$ if (i) the assignments of big and huge jobs in both schedules coincide; and (ii) assuming that in each machine jobs are ordered from largest to smallest, reassigning any small job in $S$ does not decrease its starting time.

Lemma 1.33. Assume that $I = \{\ell, \ldots, u\}$ satisfies $(1+\varepsilon)^\ell \le \varepsilon\,\mathrm{OPT}'$. If $S_{N'}$ is an optimal solution to $I_{N'}$ and $S'$ follows $S_{N'}$, then $S'$ is a $(1+O(\varepsilon))$-approximate solution to instance $I'$.

We omit the proof of this lemma since it follows by the same argument as in the proof of Lemma 1.22. We use this fact to construct a $(1+O(\varepsilon))$-approximate schedule $S'$ for $I'$ that follows $S_{N'}$ and that differs from $S$ only slightly.

Algorithm Construct-Schedule
Input: A triplet $(S_N, S_{N'}, S)$, where $S_N$ and $S_{N'}$ are solutions to instances $I_N$ and $I_{N'}$, respectively, and $S$ is a solution to $I$ that follows $S_N$.
1. Initialize schedule $S'$ as schedule $S_{N'}$.
2. Replace huge jobs by the corresponding jobs in $I'$.
3. Remove small jobs from the schedule. Assign the small jobs of $I'$ by using the same assignment of small jobs as schedule $S$.
4. Reorder the jobs in each machine from largest to smallest.
5. Let $\ell_{\min}$ be the value of the schedule constructed so far. If there exists a small job $j$ whose starting time is strictly larger than $\ell_{\min}$, migrate this job to a machine whose load equals $\ell_{\min}$ and go back to Step (4).
6. Return $S'$.

Clearly, the schedule returned by this algorithm follows schedule $S_{N'}$, and thus by Lemmas 1.23 and 1.33 it is $(1+O(\varepsilon))$-approximate. Finally, we show that the total processing time of jobs migrated to construct $S'$ from $S$ is bounded.

Lemma 1.34. Let $\delta$ be the number of machines on which the assignment of jobs in schedules $S_N$ and $S_{N'}$ differ.
If $S$ and $S'$ follow $S_N$ and $S_{N'}$, respectively, then the total amount of processing time migrated to construct $S'$ from $S$ is at most $6 \cdot A \cdot \delta$.

Proof. Note that $S_N$ and $S_{N'}$ differ only in machines which do not contain huge jobs, and thus each of these machines has load at most $2A$. Thus, we can turn $S_N$ into $S_{N'}$ by migrating jobs of total processing time at most $2 \cdot A \cdot \delta$. Since $S$ follows $S_N$ and $S'$ follows $S_{N'}$, this implies that the total processing time of big and huge jobs that must be migrated to obtain $S'$ from $S$ is at most $2 \cdot A \cdot \delta$.

Now we bound the total processing time of the migrated small jobs. Let $\Delta M \subseteq M$ be the set of machines in which schedules $S_N$ and $S_{N'}$ differ. Consider schedule $S'$ before Step (4) in Algorithm Construct-Schedule. At this stage, $S$ and $S'$ only differ in machines that belong to $\Delta M$. Let us further partition the set $\Delta M$ into the set $\Delta M^-$ of machines that have smaller load in $S'$ than in $S$, and the set $\Delta M^+$ of machines that have larger load in $S'$ than in $S$. It is then clear that small jobs migrated in Step (4) can only be reassigned from a machine in $\Delta M^+$ or to a machine in $\Delta M^-$. Since the load of each machine in $\Delta M^-$ is at most $2 \cdot A'$ in schedule $S'$, the total processing time of small jobs migrated to these machines is at most $2 \cdot |\Delta M^-| \cdot A'$. Similarly, since for each machine in $\Delta M^+$ its load in schedule $S$ is at most $2 \cdot A$, the total processing time of jobs migrated from these machines is at most $2 \cdot A \cdot |\Delta M^+|$. We conclude that the total processing time of small jobs migrated is upper bounded by
$$2 \cdot A' \cdot |\Delta M^-| + 2 \cdot A \cdot |\Delta M^+| \le 2 \cdot A' \cdot \delta \le 4 \cdot A \cdot \delta.$$
The lemma follows.

The main theorem of this section follows from collecting our previous results.

Theorem 1.35. Let $I$ be an instance of the Machine Covering problem and let $\varepsilon > 0$.
There exists a $(1+\varepsilon)$-approximate solution for $I$ so that, for any instance $I'$ that differs from $I$ in one job $j^*$ with $p_{j^*} \ge \varepsilon\,\mathrm{OPT}'$, we can construct a $(1+\varepsilon)$-approximate solution to $I'$ with migration factor $2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$.

Proof. From the previous lemma and Lemma 1.31 it follows that the total amount of processing time that needs to be migrated is $6 \cdot A \cdot \delta \in A \cdot 2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$. The theorem follows since the newly arriving job is larger than $\varepsilon\,\mathrm{OPT}' \ge \varepsilon A/2$.

Maintaining Stable Solutions Dynamically

In the previous section we gave a $(1+O(\varepsilon))$-competitive algorithm with constant migration factor if there is only one arriving job. We now extend this result to an arbitrary number of iterations. We do this in two steps. First we show our techniques for the reassignment model, where we are allowed to accumulate (an arbitrary amount of) reassignment potential. In the next section we refine this result to show that the same holds for the migration factor model. In both of these sections we still assume that the arriving job of each iteration $t$ is larger than $\varepsilon\,\mathrm{OPT}_t$ (Assumption 1.9).

In the last section, we chose the coarseness of our rounding (i.e., the set $I$) so as to ensure that the optimal values of both rounded instances $I_N$ and $I_{N'}$ are close to the optima of their respective original instances. However, as more jobs arrive, the optimal value of the new instances may become arbitrarily large, and thus the range $I$ will not be large enough to guarantee the approximation ratio. We deal with this difficulty by dynamically adjusting the set $I$, shifting it to the right to match the current instance. In doing so, we must be extremely careful not to destroy the structure of the constructed solutions, and to keep the reassignment factor bounded. In particular, notice that each time the set $I$ is shifted to the right, we must regroup small jobs into larger groups.
Then, we should avoid changing $I$ too often: it should be changed only when we can guarantee that there is enough reassignment potential accumulated to regroup all small jobs and simultaneously maintain lexicographically optimal solutions. To this end, we only update the set $I$ when the stable average increases by a factor two. Once again, we will bound the difference of lexicographically optimal solutions by using Lemma 1.29. We remark that as long as the index set $I$ is not modified, all techniques of the previous section can be iterated for an arbitrary number of iterations.

Summarizing, our algorithm iterates the technique of the previous section, and updates the index set $I$ when the stable average of the instances has increased by a factor 2. More precisely, our algorithm is as follows.

Algorithm Robust-PTAS
Input: A sequence of instances revealed online, $I_t = (J_t, M)$ for $t = 0, 1, \ldots$, so that: (i) $J_t = J_{t-1} \cup \{j_t\}$ and (ii) $p_{j_t} \ge \varepsilon\,\mathrm{OPT}_t$, where $\mathrm{OPT}_t$ is the optimal value for $I_t$.
1. Initialize $A := 0$ and $I := \emptyset$.
2. For each $t = 0, 1, \ldots$:
(a) Run Algorithm Stable-Average on $I_t$. Let $A_t$ and $w_t$ be the corresponding output.
(b) If $A_t > 2 \cdot A$, then set $A := A_t$ and
$$I := \left\{ i \in \mathbb{Z} : \varepsilon \cdot \frac{A}{4(1+\varepsilon)} \le (1+\varepsilon)^i \le 4A(1+\varepsilon) \right\}.$$
(c) Run Algorithm Rounding on input $(I_t, I)$. Let $(N^t, I_{N^t})$ be the output of the algorithm.
(d) Construct the set of configurations $\mathcal{K}_I$ and the ILP [LEX] for this set, and solve it with Lenstra's algorithm. Let $z^t$ be the optimal solution of the ILP.
(e) If $t = 0$, construct a schedule $S_{N^0}$ using $z^0$ as a template. Let $S_0$ be any schedule that follows $S_{N^0}$.
(f) If $t > 0$, construct a schedule $S_{N^t}$ for instance $I_t$ by using vector $z^t$ as a template. Permute the machines in schedule $S_{N^t}$ so that the number of machines on which schedules $S_{N^{t-1}}$ and $S_{N^t}$ differ is minimized.
(g) Run Algorithm Construct-Schedule on input $(S_{N^{t-1}}, S_{N^t}, S_{t-1})$. Call $S_t$ the output of this subroutine.
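Step (2b) only needs the two endpoints $\ell$ and $u$ of the index interval. A small sketch of that computation (our own helper; the rounding guard compensates for floating-point error in the logarithms):

```python
import math

def index_interval(A, eps):
    """Return (l, u): the smallest/largest integers i with
    eps*A/(4*(1+eps)) <= (1+eps)**i <= 4*A*(1+eps), as in Step (2b)."""
    lo = eps * A / (4 * (1 + eps))
    hi = 4 * A * (1 + eps)
    # round(..., 9) guards against log() returning e.g. 2.9999999999999996
    l = math.ceil(round(math.log(lo) / math.log(1 + eps), 9))
    u = math.floor(round(math.log(hi) / math.log(1 + eps), 9))
    return l, u
```

In Algorithm Robust-PTAS the interval is recomputed only when $A_t > 2A$, so $\ell$ and $u$ shift to the right by a bounded number of indices per update, which is exactly what the block analysis below exploits.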
The previous algorithm uses the same rounding technique as in the previous section, and constructs a schedule accordingly. The only difference is that the index set $I$ is chosen larger. However, notice that $|I| \le \log_{1+\varepsilon}(16(1+\varepsilon)^2/\varepsilon) + 1 \in O(\log_{1+\varepsilon} 1/\varepsilon)$, and thus all results from the previous section still hold for this set. Moreover, it is easy to check that $(1+\varepsilon)^\ell \le \varepsilon\,\mathrm{OPT}_t$ and $(1+\varepsilon)^u \ge \mathrm{OPT}_t$ for all $t$. By Lemmas 1.23, 1.28, and 1.33 this implies that schedule $S_t$ is a $(1+O(\varepsilon))$-approximate solution. We conclude the following.

Lemma 1.36. Algorithm Robust-PTAS is $(1+O(\varepsilon))$-competitive.

We now argue that the reassignment factor of the algorithm is constant. As explained before, the key to showing this is the fact that the algorithm updates the index set $I$ only when the stable average changes by a factor two. For the analysis we separate the iterations of the algorithm into blocks. Each block $B := \{s, s+1, \ldots, r\}$ consists of a consecutive sequence of iterations such that the value of $A$ in the algorithm is kept constant. That is, $A = A_s$, and $r+1$ is the smallest integer so that $A_{r+1} > 2A$.

Let us fix a block $B = \{s, \ldots, r\}$ and consider two consecutive instances $I_t$ and $I_{t+1}$ for $t \in \{s, \ldots, r-1\}$. Note that in this case the rounding of both instances was done with the same interval $I$, and thus $\|N^t - N^{t+1}\|_1 \le 1$. Hence, the ILPs [LEX] of these instances differ only in their right-hand side, where one entry is increased by at most one. With this observation we can use the same reasoning as in Theorem 1.35 to prove the following lemma.

Lemma 1.37. If two consecutive instances $I_t$ and $I_{t+1}$ belong to the same block, then the migration factor used to obtain $S_{t+1}$ from $S_t$ is $2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$.

It remains to consider the limit case where instances $I_t$ and $I_{t+1}$ belong to different blocks, that is, if $t = r$.
If $A_{r+1} > 2A_r$ then Lemma 1.20 implies that the migration factor needed for this iteration is 1. We assume from now on that $A_{r+1} \le 2 \cdot A_r$. Consider the value of $A$ for block $B$, and let $I = \{\ell, \ldots, u\}$ be the corresponding index interval for this block. Let $B'$ be the block following $B$. We denote by $\bar{A} = A_{r+1}$ and $I' = \{\ell', \ldots, u'\}$ the stable average and set of indices corresponding to block $B'$. Note that $\bar{A} \le 2 \cdot A_r \le 4 \cdot A$.

We now interpret the vectors $N^r \in \mathbb{N}_0^I$ and $N^{r+1} \in \mathbb{N}_0^{I'}$ in the same space. To this end, we re-round the huge jobs in $I_{N^{r+1}}$ to jobs of size $(1+\varepsilon)^u$. That is, we redefine $N_i^{r+1} = 0$ for all $i \in \{u+1, \ldots, u'\}$, and set
$$N_u^{r+1} := \left| \{ j \in J_{r+1} : p_j \ge (1+\varepsilon)^u \} \right|.$$
Note that this does not change the lexicographically minimal solution of this instance. Indeed, a job whose size is at least $(1+\varepsilon)^u \ge 4A \ge \bar{A} \ge \mathrm{OPT}_{r+1}$ is processed on a machine by itself in any locally optimal schedule, in particular in lexicographically minimal solutions. Additionally, we define $N_i^{r+1} = 0$ for all $i \in \{\ell, \ldots, \ell'-1\}$. With this we can interpret $N^{r+1}$ as a vector in $\mathbb{N}_0^I$, obtaining an equivalent set of schedules for the rounded instance.

After this modification, we can bound the difference between $N^r$ and $N^{r+1}$ in terms of $|B|$. This will help us show that the reassignment potential accumulated in block $B$ is enough to turn schedule $S_r$ into $S_{r+1}$ with constant reassignment factor. Notice that the difference between $N^r$ and $N^{r+1}$ can be attributed to two causes: the job $j_{r+1}$ that arrived in instance $I_{r+1}$, and the jobs of size $(1+\varepsilon)^i$ for $i \in \{\ell, \ldots, \ell'-1\}$ that are grouped in $N^{r+1}$ into jobs of size $(1+\varepsilon)^{\ell'}$. The next lemma bounds the total volume of (a superset of) these jobs in terms of $|B|$; this can then be used to bound their number.

Lemma 1.38. Let $B = \{s, \ldots, r\}$. Then
$$\sum_{j \in J_s : p_j \le A} p_j \le 4 \cdot |B| \cdot A.$$

Proof. Consider instance $I_s$ and the value $w_s$ returned by Algorithm Stable-Average. Since $A_{r+1} > 2A_s$, Lemma 1.18 implies that $w_s/2 < |B|$.
On the other hand, in any optimal solution for instance $I_s$, each job smaller than $A_s = A$ must be processed on one of the $w_s$ machines not containing huge jobs. Therefore, the total volume of such jobs is at most $2A_s w_s = 2Aw_s < 4 \cdot |B| \cdot A$.

Lemma 1.39. After modifying the vector $N^{r+1}$ as above, we have that
$$\sum_{i=\ell}^{u} |N_i^{r+1} - N_i^r| \in O\left( \frac{|B|}{\varepsilon} \right).$$

Proof. As argued above, it is enough to upper bound $\sum_{i=\ell}^{\ell'} |N_i^r - N_i^{r+1}|$, since $\sum_{i=\ell'+1}^{u} |N_i^r - N_i^{r+1}| \le 1$. Recall that $N_{\ell'}^{r+1}$ is defined by Algorithm Rounding (last equation of Step (1) of this algorithm). Then, since $(1+\varepsilon)^\ell \le (1+\varepsilon)^{\ell'}$, the number of regrouped jobs can be no more than the number of jobs before grouping plus one. Since also $N_{\ell'}^{r+1}$ could be increased by one due to the new job, we obtain
$$N_{\ell'}^{r+1} \le \sum_{i=\ell}^{\ell'-1} N_i^r + 2, \quad \text{and thus} \quad \sum_{i=\ell}^{\ell'} |N_i^r - N_i^{r+1}| \le 2 \cdot \sum_{i=\ell}^{\ell'-1} N_i^r + 2.$$
Since $\sum_{i=\ell}^{\ell'-1} N_i^r \le |B| + \sum_{i=\ell}^{\ell'-1} N_i^s$, it is then enough to show that $\sum_{i=\ell}^{\ell'-1} N_i^s \in O(|B|/\varepsilon)$. To this end, note that the definition of $N^s$ and the previous lemma imply that the total number of small jobs in $I_{N^s}$ is at most
$$\frac{4|B|A}{(1+\varepsilon)^\ell} \le \frac{16|B|(1+\varepsilon)}{\varepsilon} \in O\left( \frac{|B|}{\varepsilon} \right).$$
The lemma follows by collecting the inequalities above.

With the last lemma we can easily bound the difference between the solutions $z^r$ and $z^{r+1}$. To this end, we first need to interpret these two vectors in a common Euclidean space. Let $\mathcal{K}_I$ and $\mathcal{K}_{I'}$ be the sets of configurations corresponding to $I$ and $I'$, respectively. By Lemma 1.25, a configuration $k \in \mathcal{K}_{I'} \setminus \mathcal{K}_I$ is never used by a locally optimal solution; in particular it is not used by a lexicographically optimal solution. This implies that $z_k^{r+1} = 0$ for all $k \in \mathcal{K}_{I'} \setminus \mathcal{K}_I$. Thus, we can interpret $z^{r+1}$ as a vector in $\mathbb{N}_0^{\mathcal{K}_I}$; moreover, it must correspond to a lexicographically minimal solution, and thus it is the optimal solution to
$$[\mathrm{LEX'}] : \quad \min\left\{ \sum_{q=1}^{|\mathcal{K}_I|} c_q x_{k_q} : A_I \cdot x = b(N^{r+1}, m) \text{ and } x_k \in \mathbb{N}_0 \text{ for all } k \in \mathcal{K}_I \right\}.$$
Note that this is the same ILP as for $N^r$, but with the right-hand side updated to $b(N^{r+1}, m)$. This implies the following.

Lemma 1.40. The vector $z^{r+1}$ satisfies $z_k^{r+1} = 0$ for all $k \in \mathcal{K}_{I'} \setminus \mathcal{K}_I$, and
$$\sum_{k \in \mathcal{K}_I} |z_k^{r+1} - z_k^r| \in |B| \cdot 2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}.$$

Proof. We already justified that $z_k^{r+1} = 0$ for all $k \in \mathcal{K}_{I'} \setminus \mathcal{K}_I$. To conclude the lemma we argue as in Theorem 1.35. Since $(z_k^{r+1})_{k \in \mathcal{K}_I}$ is the optimal solution to [LEX'], Lemma 1.29 implies that
$$\sum_{k \in \mathcal{K}_I} |z_k^{r+1} - z_k^r| \le |\mathcal{K}_I| \cdot \max_{k \in \mathcal{K}_I} |z_k^{r+1} - z_k^r| \le |\mathcal{K}_I|^2 \Delta \left( \|N^r - N^{r+1}\|_\infty + 2 \right).$$
Finally, the right-hand side of this expression is in $|B| \cdot 2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$ by Lemmas 1.30 and 1.39.

With this last result we can directly use Lemma 1.34 to bound the migration needed to transform $S_r$ into $S_{r+1}$. Collecting all these results, we can conclude the main result of this section.

Theorem 1.41. Let $\varepsilon > 0$. For the Machine Covering problem with permanent jobs, there exists a $(1+\varepsilon)$-competitive algorithm with reassignment factor $2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$. Moreover, the algorithm runs in polynomial time.

Proof. We only need to argue that the reassignment potential accumulated in block $B$ is enough for migrating jobs when constructing schedule $S_{r+1}$. Let $r_1(\varepsilon)$ be the migration factor needed to transform schedule $S_t$ into $S_{t+1}$ for any $t \in \{s, \ldots, r-1\}$. By Lemma 1.37 we know that $r_1(\varepsilon) \in 2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$. Assume that the algorithm has a reassignment factor of $r(\varepsilon) \ge 2r_1(\varepsilon)$. Since all jobs arriving in this block are larger than $\varepsilon\,\mathrm{OPT}_t \ge \varepsilon A_t/2 \ge \varepsilon A/2$, this means that we add at least $\varepsilon A/2$ reassignment potential to our budget in each iteration. Thus, at the end of block $B$ we will have accumulated at least $\varepsilon \cdot (|B|-1) \cdot A/2 \ge \varepsilon \cdot |B| \cdot A/4$ reassignment potential, where the last inequality follows since $|B| \ge 2$ (otherwise $A_{r+1} > 2A_r = 2A_s$). We conclude that when constructing schedule $S_{r+1}$ we can migrate a total volume of $r(\varepsilon) \cdot \varepsilon \cdot |B| \cdot A/4$.
On the other hand, the previous lemma implies that transforming $S_{N^r}$ into $S_{N^{r+1}}$ requires touching at most $r_2(\varepsilon) \cdot |B|$ machines, where $r_2(\varepsilon) \in 2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$. By Lemma 1.34 we conclude that the volume of jobs that needs to be migrated to transform $S_r$ into $S_{r+1}$ is at most $6 \cdot A_r \cdot |B| \cdot r_2(\varepsilon) \le 12 \cdot A \cdot |B| \cdot r_2(\varepsilon)$. This implies that by defining $r(\varepsilon) := 48 \cdot \max\{r_1(\varepsilon), r_2(\varepsilon)\}/\varepsilon$ (which is larger than $2r_1(\varepsilon)$ for small $\varepsilon$) we accumulate enough reassignment potential by the end of the block to be able to migrate $r(\varepsilon) \cdot \varepsilon \cdot |B| \cdot A/4$ total processing time. The theorem follows by noting that $r(\varepsilon) \in 2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$.

Reducing the Accumulated Reassignment Potential

We will devote this section to reducing the reassignment potential needed by the algorithm, obtaining an algorithm with constant migration factor if all arriving jobs are larger than $\varepsilon\,\mathrm{OPT}_t$. Recall that a block in Algorithm Robust-PTAS is a set of consecutive iterations in which the value $A$ is unchanged. In this algorithm, the only iterations where we use accumulated reassignment potential are the first iterations of blocks. This potential is only needed for regrouping small jobs that were originally grouped into jobs of size $(1+\varepsilon)^\ell$ and are regrouped into jobs of size $(1+\varepsilon)^{\ell'}$. To avoid doing the regrouping in only one iteration, we spread it over all $|B|$ iterations of the block $B = \{s, \ldots, r\}$. To this end, assume that we define the index set $I$ for an arbitrary block $B$ as
$$I := \left\{ i \in \mathbb{Z} : \varepsilon \cdot \frac{A}{16(1+\varepsilon)} \le (1+\varepsilon)^i \le 4A(1+\varepsilon) \right\} = \{\ell, \ldots, u\}. \qquad (1.6)$$
As before, inside block $B$ all our rounded instances will have jobs of size $(1+\varepsilon)^i$ for some $i \in I$. Notice that this definition is very similar to the one used in Algorithm Robust-PTAS, the only difference being that $\ell$ is chosen smaller (but only by a constant factor). We will also consider a number $\ell^*$, defined as the largest integer so that $(1+\varepsilon)^{\ell^*} \le \varepsilon \cdot A/4$.
In our algorithm we will treat jobs larger than $(1+\varepsilon)^{\ell^*}$ as big, and thus we will leave them untouched (i.e., we do not group them) in this block. However, jobs whose processing time lies in $[(1+\varepsilon)^{\ell}, (1+\varepsilon)^{\ell^*}]$ can be regrouped or not depending on the situation. Let us call $J_M$ the set of those jobs in the rounded instance. Jobs in $J_M$ are the only jobs that need to be grouped into jobs larger than $(1+\varepsilon)^{\ell'}$ in iteration $r+1$ (where $\ell'$ is the value of $\ell$ for the block after $B$). To avoid regrouping these jobs in only one iteration, we would like our algorithm to group them throughout all iterations of block $B$, and thus it should satisfy the following property.

(P) If $r$ is the last iteration of $B$ and $A_{r+1} \le 2A_r$, then the rounded instance of $I_r$ does not contain any job smaller than $(1+\varepsilon)^{\ell^*}$.

Let us denote as before by $\bar{A} = A_{r+1}$ the value of $A$ for the block following block $B$. This value implies an index set $I' := \{\ell', \ldots, u'\}$ defined as in Equation (1.6) with $A$ replaced by $\bar{A}$. As in the previous section, we can assume that $\bar{A} \le 4A$, since otherwise the iteration is trivial. Then, if (P) holds, we do not need to regroup any small job for the new iteration: since $(1+\varepsilon)^{\ell^*} \ge \varepsilon \cdot A/(4(1+\varepsilon)) \ge \varepsilon \cdot \bar{A}/(16(1+\varepsilon))$, the remaining jobs already lie in the new index range and are grouped appropriately.

To ensure that property (P) is satisfied, in each iteration $t \in B$ we group $C \in \Theta(1/\varepsilon)$ jobs smaller than $(1+\varepsilon)^{\ell^*}$. If $C$ is chosen large enough, we can show that by the end of block $B$ all jobs smaller than $(1+\varepsilon)^{\ell^*}$ are grouped into jobs of size $(1+\varepsilon)^{\ell^*}$, and thus (P) holds.

We now explain the ideas of the previous paragraph in more detail. To describe our algorithm we need a subroutine, called Algorithm Round-Up-Small, that regroups $C$ small jobs into groups of size $(1+\varepsilon)^{\ell^*}$. In the algorithm we use the previously introduced notation $J_\ell(I) := \{ j \in J : p_j \le (1+\varepsilon)^\ell \}$, where $I$ is a scheduling instance equal to $(J, M)$ and $\ell \in \mathbb{Z}$.
Also, $p(J)$ denotes the total processing time of a set of jobs $J$.

Algorithm Round-Up-Small
Input: An instance $I = (J, M)$, an index set $I = \{\ell, \ldots, u\}$, a vector $N \in \mathbb{N}_0^I$, an integer $\ell^* \ge \ell$, and a number $C \in \mathbb{N}_0$.
1. Set $D := C$, and for all $i = \ell^*+1, \ldots, u$ set $N_i^* := N_i$.
2. For $i = \ell, \ldots, \ell^*-1$, set $N_i^* := \max\{N_i - D, 0\}$ and $D := D - (N_i - N_i^*)$.
3. Set
$$N_{\ell^*}^* := \left\lfloor \frac{1}{(1+\varepsilon)^{\ell^*}} \cdot \left( p(J_{\ell^*}(I)) - \sum_{p=\ell}^{\ell^*-1} N_p^* \cdot (1+\varepsilon)^p \right) \right\rfloor.$$
4. Return $N^*$.

Notice that by Step (2) of this algorithm the vector $N^*$ has $C$ fewer jobs of size less than $(1+\varepsilon)^{\ell^*}$.

Observation 1.42. Let $N^*$ be the output of Algorithm Round-Up-Small on input $N$. Then
$$\sum_{i=\ell}^{\ell^*-1} N_i^* \le \max\left\{ \sum_{i=\ell}^{\ell^*-1} N_i - C,\ 0 \right\}.$$

Additionally, in Step (3) we regroup these $C$ jobs by considering the appropriate number of jobs of size $(1+\varepsilon)^{\ell^*}$. That is, the entry $N_{\ell^*}^*$ is chosen so that the total volume of jobs smaller than $(1+\varepsilon)^{\ell^*}$ in the rounded instance and in instance $I$ differ by at most $(1+\varepsilon)^{\ell^*}$. By Lemma 1.33, this guarantees that the optimal values of the two instances are within a factor $1+O(\varepsilon)$. Using this subroutine we can describe our algorithm.

Algorithm Robust-PTAS II
Input: A sequence of instances revealed online, $I_t = (J_t, M)$ for $t = 0, 1, \ldots$, so that: (i) $J_t = J_{t-1} \cup \{j_t\}$ and (ii) $p_{j_t} \ge \varepsilon\,\mathrm{OPT}_t$, where $\mathrm{OPT}_t$ is the optimal value for $I_t$. A number $C \in \mathbb{N}_0$.
1. Initialize $A := 0$ and $I := \emptyset$.
2. For each $t = 0, 1, \ldots$:
(a) Run Algorithm Stable-Average on $I_t$. Let $A_t$ and $w_t$ be the corresponding output.
(b) If $A_t > 2 \cdot A$, then set $A := A_t$ and
$$I := \left\{ i \in \mathbb{Z} : \varepsilon \cdot \frac{A}{16(1+\varepsilon)} \le (1+\varepsilon)^i \le 4A(1+\varepsilon) \right\} = \{\ell, \ell+1, \ldots, u\},$$
and set $\ell^*$ as the largest integer so that $(1+\varepsilon)^{\ell^*} \le \varepsilon \cdot A/4$.
(c) Consider the rounding vector $N^{t-1}$ from the previous iteration (for $t = 0$ take $N^{-1}$ as the zero vector) and define
$$N_i^t := N_i^{t-1} \quad \text{for } i \in \{\ell, \ldots, \ell^*\},$$
$$N_i^t := \left| \{ j \in J_t : p_j = (1+\varepsilon)^i \} \right| \quad \text{for } i \in \{\ell^*+1, \ldots, u-1\},$$
$$N_u^t := \left| \{ j \in J_t : p_j \ge (1+\varepsilon)^u \} \right|.$$
(d) Run Algorithm Round-Up-Small on input $(I_t, I, N^t, \ell^*, C)$. Redefine $N^t$ as the output of the algorithm.
(e) Construct the set of configurations $\mathcal{K}_I$ and the ILP [LEX] for this set, and solve it with Lenstra's algorithm. Let $z^t$ be the optimal solution of the ILP.
(f) If $t = 0$, construct a schedule $S_{N^0}$ using $z^0$ as a template. Let $S_0$ be any schedule that follows $S_{N^0}$.
(g) If $t > 0$, construct a schedule $S_{N^t}$ for instance $I_t$ by using vector $z^t$ as a template. Permute the machines in schedule $S_{N^t}$ so that the number of machines on which schedules $S_{N^{t-1}}$ and $S_{N^t}$ differ is minimized.
(h) Run Algorithm Construct-Schedule on input $(S_{N^{t-1}}, S_{N^t}, S_{t-1})$. Call $S_t$ the output of this subroutine.

Note that the previous algorithm is very similar to Algorithm Robust-PTAS from the previous section. The main difference lies in the rounding of the instances: instead of rounding instance $I_t$ anew using Algorithm Rounding in each iteration, we define the vector $N^t$ by updating $N^{t-1}$. Afterwards, in Step (2d), we regroup $C$ jobs with processing time in $[(1+\varepsilon)^{\ell}, (1+\varepsilon)^{\ell^*}]$ into groups of size $(1+\varepsilon)^{\ell^*}$.

We first observe that this algorithm is $(1+O(\varepsilon))$-competitive.

Lemma 1.43. Algorithm Robust-PTAS II is a $(1+O(\varepsilon))$-competitive algorithm.

Proof. It is enough to notice that we run Algorithm Round-Up-Small to define the vector $N^t$. By the last step of this algorithm, the total volumes of small jobs in the rounded instance and in $I_t$ are almost equal, that is,
$$p(J_{\ell^*}(I_t)) - p(J_{\ell^*}(I_{N^t})) \le (1+\varepsilon)^{\ell^*} \le \frac{\varepsilon A}{4} \le \frac{\varepsilon A_t}{4} \le \varepsilon\,\mathrm{OPT}_t,$$
where the last inequality uses $\mathrm{OPT}_t \ge A_t/2$. Thus, Lemma 1.22 implies that the optimal value of $I_{N^t}$ is within a $1+O(\varepsilon)$ factor of the optimal value of $I_t$. By Lemmas 1.23, 1.28 and 1.33 we conclude that $S_t$ is a $(1+O(\varepsilon))$-approximate solution.

As in the previous section, we show that the migration factor is constant by bounding the difference between $N^t$ and $N^{t+1}$ for any $t$. For this we need the following technical lemma.

Lemma 1.44.
Let $N^*$ be the output of Algorithm Round-Up-Small on input $I = (J, M)$, $N$, $C$, and $\ell^*$. If $N$ satisfies that
$$\left| p(J_{\ell^*}(I)) - \sum_{i=\ell}^{\ell^*} N_i \cdot (1+\varepsilon)^i \right| \le (1+\varepsilon)^{\ell^*},$$
then $\|N^* - N\|_1 \le 2 \cdot C + 2 \cdot (1+\varepsilon)^{\ell^*-\ell}$.

Proof. Notice that $N_i^* = N_i$ for all $i \in \{\ell^*+1, \ldots, u\}$. Also, by the definition of $N^*$ in Step (2) of the algorithm we have that
$$\sum_{i=\ell}^{\ell^*-1} |N_i - N_i^*| \le C.$$
Thus, it is enough to bound $|N_{\ell^*}^* - N_{\ell^*}|$. Note that
$$|N_{\ell^*}^* - N_{\ell^*}| \le \frac{1}{(1+\varepsilon)^{\ell}} \cdot \left| \sum_{i=\ell}^{\ell^*} (N_i - N_i^*)(1+\varepsilon)^i \right| + \sum_{i=\ell}^{\ell^*-1} |N_i^* - N_i|$$
$$\le \frac{1}{(1+\varepsilon)^{\ell}} \cdot \left( \left| \sum_{i=\ell}^{\ell^*} N_i (1+\varepsilon)^i - p(J_{\ell^*}(I)) \right| + \left| p(J_{\ell^*}(I)) - \sum_{i=\ell}^{\ell^*} N_i^* (1+\varepsilon)^i \right| \right) + C$$
$$\le \frac{1}{(1+\varepsilon)^{\ell}} \cdot \left( (1+\varepsilon)^{\ell^*} + (1+\varepsilon)^{\ell^*} \right) + C = C + 2 \cdot (1+\varepsilon)^{\ell^*-\ell},$$
where the second-to-last inequality follows from the hypothesis on $N$ and the definition of $N_{\ell^*}^*$ in Step (3) of Algorithm Round-Up-Small. The lemma follows by combining the inequalities above.

We use the last lemma to bound the difference between $N^t$ and $N^{t+1}$ for $t \in \{s, \ldots, r-1\}$. We consider the case $t = r$ afterwards.

Lemma 1.45. Assume that $t \in \{s, \ldots, r-1\}$. Then $\|N^{t+1} - N^t\|_1 \le 2 \cdot C + 9 + 8\varepsilon$.

Proof. Consider the vector $N^{t+1}$ before Step (2d) of Algorithm Robust-PTAS II, that is, before running Algorithm Round-Up-Small on it. Up to this stage in the algorithm we have $\|N^{t+1} - N^t\|_1 \le 1$, where the difference is caused by job $j_{t+1}$. Since in iteration $t$ we redefined $N^t$ as the output of Algorithm Round-Up-Small, we have that
$$\left| p(J_{\ell^*}(I_t)) - \sum_{i=\ell}^{\ell^*} N_i^t \cdot (1+\varepsilon)^i \right| \le (1+\varepsilon)^{\ell^*}.$$
Thus, Lemma 1.44 implies that $\|N^{t+1} - N^t\|_1 \le 2 \cdot C + 2 \cdot (1+\varepsilon)^{\ell^*-\ell} + 1 \le 2 \cdot C + 9 + 8\varepsilon$.

As in the previous section, this lemma implies that the migration factor needed for iterations $t \in \{s, \ldots, r-1\}$ is upper bounded by $(C + 9 + 8\varepsilon) \cdot 2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$: simply iterate the reasoning of Lemma 1.37 $(C + 9 + 8\varepsilon)$ times.
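The subroutine Round-Up-Small can be transcribed almost verbatim; the dictionary representation of $N$ and the function name below are our own choices. The assertions afterwards check Observation 1.42 on a toy vector:

```python
def round_up_small(N, total_small_volume, eps, l, l_star, C):
    """Round-Up-Small sketch: N maps index i to the number of rounded jobs
    of size (1+eps)**i; total_small_volume plays the role of p(J_{l*}(I))."""
    Nstar = dict(N)
    D = C
    for i in range(l, l_star):              # Step (2): remove up to C tiny jobs
        removed = min(N.get(i, 0), D)
        Nstar[i] = N.get(i, 0) - removed
        D -= removed
    vol_left = sum(Nstar.get(i, 0) * (1 + eps) ** i for i in range(l, l_star))
    # Step (3): regroup the removed volume into jobs of size (1+eps)**l_star
    Nstar[l_star] = int((total_small_volume - vol_left) / (1 + eps) ** l_star)
    return Nstar
```

With sizes $1, 2, 4$ (i.e., $\varepsilon = 1$, $\ell = 0$, $\ell^* = 2$) and $N = (3, 2, 1)$, the small volume is $3 + 4 + 4 = 11$; choosing $C = 4$ deletes four jobs below size $4$ and regroups the freed volume into $\lfloor 9/4 \rfloor = 2$ extra jobs of size $4$.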
This means that the migration factor for these iterations is $2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$, since we will choose $C \in O(\frac{1}{\varepsilon})$.

Finally, we bound the difference between $N^t$ and $N^{t+1}$ when $t$ is the last iteration of a block. For this we need to show that Property (P), described at the beginning of this section, is satisfied by the output of our algorithm. This is done in the following lemma.

Lemma 1.46. Consider a block $B$ and let $r \in B$ be its last iteration. Assume that Algorithm Robust-PTAS II is run on input $C := 65/\varepsilon$. Then, $N_i^r = 0$ for all $i < \ell^*$.

Proof. Let $s$ be the first iteration of block $B$. By the same argument as in Lemma 1.38, we can show that
$$p(J_{\ell^*}(I_s)) \le \sum_{j \in J_s : p_j \le A_s} p_j \le 4(r-s+1)\cdot A.$$
Also, by definition of vector $N^s$ in Step (2d) of the algorithm, we have that
$$\sum_{i=\ell}^{\ell^*} N_i^s (1+\varepsilon)^i \le p(J_{\ell^*}(I_s)) + (1+\varepsilon)^{\ell^*} \le 4(r-s+1)A + \frac{\varepsilon A}{4}.$$
By dividing both sides of the last inequality by $(1+\varepsilon)^{\ell}$, and using that $(1+\varepsilon)^{\ell} \ge \frac{\varepsilon A}{16(1+\varepsilon)}$, we obtain that
$$\sum_{i=\ell}^{\ell^*} N_i^s \le \frac{16(1+\varepsilon)}{\varepsilon A}\cdot\Big(4(r-s+1)A + \frac{\varepsilon A}{4}\Big) = \frac{64(1+\varepsilon)}{\varepsilon}(r-s+1) + 4(1+\varepsilon).$$
Since $r - s + 1 \ge 1$, then
$$\sum_{i=\ell}^{\ell^*} N_i^s \le \Big(\frac{64(1+\varepsilon)}{\varepsilon} + 4(1+\varepsilon)\Big)(r-s+1) \le \frac{65}{\varepsilon}(r-s+1),$$
where the last inequality holds if $\varepsilon$ is small enough. Finally, since in each iteration $t \in \{s, \dots, r\}$ we call Algorithm Round-Up-Small, Observation 1.42 implies that
$$\sum_{i=\ell}^{\ell^*-1} N_i^t \le \max\Big\{\sum_{i=\ell}^{\ell^*-1} N_i^{t-1} - C,\ 0\Big\}.$$
Then,
$$\sum_{i=\ell}^{\ell^*-1} N_i^r \le \max\Big\{\sum_{i=\ell}^{\ell^*-1} N_i^s - C(r-s+1),\ 0\Big\} \le \max\Big\{\frac{65}{\varepsilon}(r-s+1) - C(r-s+1),\ 0\Big\}.$$
This concludes the lemma since $C = 65/\varepsilon$.

Finally, we bound the difference between $N^r$ and $N^{r+1}$. As before, let $\ell'$ be the value of $\ell$ for the block following $B$. As in the previous sections, we can assume that the entries $N_i^{r+1}$ for $i > u$ are zero (since jobs larger than $(1+\varepsilon)^u$ are processed on a machine of their own in any locally optimal solution, and thus we can assume their size is $(1+\varepsilon)^u$).

Since by the previous lemma $N_i^r$ is zero for any $i < \ell^*$, and $\ell' \le \ell^*$, to bound the difference (in $\ell_1$-norm) between $N^r$ and $N^{r+1}$ it is enough to bound $\sum_{i=\ell'}^{u-1} |N_i^r - N_i^{r+1}|$. This is done in the next lemma. Also notice that, without loss of generality, we can assume that $A_{r+1} \le 4A$, otherwise the migration factor is bounded by Lemma 1.20.

Lemma 1.47. Let $r$ be the last iteration of block $B$, and let $I' = \{\ell', \dots, u'\}$ be the index set for the block following $B$. If $2A \le A_{r+1} \le 4A$, then
$$\sum_{i=\ell'}^{u-1} |N_i^r - N_i^{r+1}| \le 2C + 9 + 8\varepsilon.$$

Proof. Let $B'$ be the block following $B$, and let $\ell^{**}$ be the value of variable $\ell^*$ for block $B'$. Consider now vector $N^{r+1}$ before running Step (2d) of Algorithm Robust-PTAS II in iteration $r+1$. At this stage we have that $\sum_{i=\ell'}^{\ell^{**}} |N_i^r - N_i^{r+1}| \le 1$, where the difference in the two vectors can only be due to job $j_{r+1}$. Also, the previous lemma implies that $N_i^r = 0$ for all $i < \ell^*$. Additionally, $2A < A_{r+1} = A' \le 4A$ implies $\ell^* \ge \ell'$, and therefore we obtain that $N_i^r = 0$ for all $i \in \{\ell, \dots, \ell'-1\}$.

Since $N^r$ was defined in the previous iteration as the output of Algorithm Round-Up-Small, we have that
$$\big|p(J_{\ell^*}(I_r)) - N^r_{\ell^*}(1+\varepsilon)^{\ell^*}\big| \le (1+\varepsilon)^{\ell^*}.$$
Thus, before running Step (2d) in iteration $r+1$, we have that
$$\Big|p(J_{\ell^{**}}(I_{r+1})) - \sum_{i=\ell'}^{\ell^{**}} N_i^{r+1}(1+\varepsilon)^i\Big| \le (1+\varepsilon)^{\ell^*} \le (1+\varepsilon)^{\ell^{**}}.$$
With this, Lemma 1.44 implies that the difference between vector $N^{r+1}$ before and after Step (2d) is at most $2C + 2(1+\varepsilon)^{\ell^{**}-\ell'} \le 2C + 8 + 8\varepsilon$. Together with the bound before Step (2d), this implies that $\sum_{i=\ell'}^{\ell^{**}} |N_i^r - N_i^{r+1}| \le 2C + 9 + 8\varepsilon$.

Finally, to conclude that the migration factor for iteration $r+1$ is constant, we can argue as in Lemma 1.40 and conclude that $S_{N^t}$ and $S_{N^{t+1}}$ differ in only a constant number of machines. Lemma 1.33 then implies that we can obtain $S_{t+1}$ from $S_t$ with a migration factor of
$$(2C + 9 + 8\varepsilon) \cdot 2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})} = 2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$$
if $C = 65/\varepsilon$. We conclude the following.
Theorem 1.48. For the Machine Covering problem with permanent jobs, where $p_{j_t} \ge \varepsilon\,\mathrm{OPT}_t$ for each arriving job $j_t$, Algorithm Robust-PTAS II is a $(1+O(\varepsilon))$-competitive algorithm with migration factor $2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$.

Arrival of small jobs

Finally, we explain how to adapt our algorithm if there are arriving jobs smaller than $\varepsilon\,\mathrm{OPT}_t$. As justified by Lemma 1.5, we cannot avoid using at least a small amount of accumulated reassignment potential. If a new job $j_t$ with $p_{j_t} \le \varepsilon\,\mathrm{OPT}_t$ arrives, we trick the algorithm into believing that this job has not yet appeared.⁴ Instead, we schedule this job on an arbitrary machine, for example machine 1. Once the total processing time of these jobs surpasses $\varepsilon\,\mathrm{OPT}_t$ for some iteration $t$, we feed all these jobs to the algorithm in a batch with total processing time at most $\varepsilon\,\mathrm{OPT}_t + p_{j_t} \le 2\varepsilon\,\mathrm{OPT}_t$. Clearly, this causes an increase of the reassignment factor by at most 1, and the amount of accumulated potential needed is at most $2\varepsilon\,\mathrm{OPT}_t$ for all $t$. Also, leaving out a batch of small jobs can only affect the objective function by a $1+O(\varepsilon)$ factor.

⁴ Notice that we cannot check whether $p_{j_t} \le \varepsilon\,\mathrm{OPT}_t$ in polynomial time, unless P = NP, since computing $\mathrm{OPT}_t$ is NP-hard. However, since $A_t$ is within a factor 2 of $\mathrm{OPT}_t$, we can instead check whether $p_{j_t} \le \varepsilon A_t$. Our algorithms in the previous sections work in the same way if $p_{j_t} \ge \varepsilon A_t \ge \varepsilon\,\mathrm{OPT}_t/2$ for all $t$. The only difference is that the migration (resp. reassignment) factor is increased by at most a factor of 2.

It is clear that if a batch of small jobs is introduced to the instance in iteration $t$, then its load is at most $2\varepsilon\,\mathrm{OPT}_t \le 2\varepsilon A_t$. Since in the corresponding rounded instance all jobs are larger than $(1+\varepsilon)^{\ell} \ge \frac{\varepsilon A}{16(1+\varepsilon)} \ge \frac{\varepsilon A_t}{32(1+\varepsilon)}$, the number of jobs added to the rounded instance is constant. This implies that the migration factor needed for this iteration is $2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$. To guarantee that the migration factor is bounded also at the beginning of a block, we can still ensure that Lemma 1.46 is satisfied. To this end, if at any iteration $t$ a batch of small jobs is introduced, we only add jobs of size $(1+\varepsilon)^{\ell^*}$ to the rounded instance. In other words, we define $N^t_{\ell^*}$ so that
$$\Big|p(J_{\ell^*}(I_t)) - \sum_{i=\ell}^{\ell^*} N_i^t (1+\varepsilon)^i\Big| \le (1+\varepsilon)^{\ell^*}.$$
In this way the number of rounded jobs with size in $[(1+\varepsilon)^{\ell}, (1+\varepsilon)^{\ell^*})$ is never increased, guaranteeing that Lemma 1.46 still holds. The rest of the analysis remains valid, hence we conclude that the migration factor in all iterations is at most $2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$. This implies the following theorem.

Theorem 1.49. Consider the online Machine Covering problem with permanent jobs. For any $\varepsilon > 0$ there exists a $(1+\varepsilon)$-competitive algorithm with reassignment factor $2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$ that uses $O(\varepsilon\,\mathrm{OPT}_t)$ accumulated reassignment potential in each iteration $t$.

A Robust PTAS for Temporary Jobs

In this section we adapt Algorithm Robust-PTAS to the setting with job arrivals and departures. The techniques are very similar to the ones used before, and thus we only focus on the main differences. As in the job arrival case, we start by considering trivial instances. That is, assume that we are given an initial instance $I = (J, M)$ with stable average $A$, and then a job leaves the instance, creating a new instance $I' = (J', M)$. The next lemma takes care of the case in which the stable average $A'$ of the new instance is smaller than $A/2$.

Lemma 1.50. Let $I = (J, M)$ be an instance with optimal value $\mathrm{OPT}$ and consider an instance $I' = (J', M)$ with one job less, that is, $J' = J \setminus \{j^*\}$. Let also $A$ and $A'$ be the stable averages for instances $I$ and $I'$, respectively, and assume that $A' < A/2$. Let $S$ and $S'$ be locally optimal solutions for $I$ and $I'$, respectively.
By permuting machines in schedule $S'$, it is possible to transform $S$ into $S'$ with a migration factor of 3.

Proof. We use a similar technique as in Lemma 1.20. Since $A' < A/2$, Corollary 1.19 implies that $I'$ is trivial. Let $L'$ be the set of jobs returned by Algorithm Stable-Average on input $I'$. Note that in schedule $S$ there exists at most one machine that processes more than one job in $J' \setminus L'$. Indeed, assume by contradiction that there are two such machines, $i_1$ and $i_2$. Since $I'$ is trivial, the set $J' \setminus L'$ contains exactly $m-1$ jobs. Then, there exist two machines that process only jobs in $L' \cup \{j^*\}$, and thus there exists at least one machine processing only jobs in $L'$. The load of this machine is at most $p(L') = A'$. Since jobs in $J' \setminus L'$ are larger than $A'$, machines $i_1$ and $i_2$ violate the condition of local optimality for schedule $S$. This is a contradiction.

Since $I'$ is trivial, any locally optimal schedule must process all jobs in $J' \setminus L'$ separately. Thus, $S$ and $S'$ differ, up to permutation of machines, in the jobs in $L'$ and one job $j \in J' \setminus L'$. If job $j$ must indeed be migrated, it means that it was processed together with another job $j' \in J' \setminus L'$ in schedule $S$. By assuming, without loss of generality, that $p_j \le p_{j'}$ (otherwise we migrate $j'$ instead of $j$), we have that $p_j \le p_{j'} \le A$, where the last inequality follows since $S$ is locally optimal. We conclude that the total processing time migrated is at most $A + p(L') = A + A' \le 3A/2$. On the other hand, since $A - A' \le p_{j^*}$ and $A' < A/2$, then $p_{j^*} \ge A/2$. This implies that the migration factor used is at most 3.

As before, we first deal with the case where all arriving or departing jobs are larger than $\varepsilon\,\mathrm{OPT}_t$. The algorithm we present now is basically the same as Algorithm Robust-PTAS, with only some minor modifications.

Algorithm General-Robust-PTAS

Input: A sequence of instances revealed online, $I_t = (J_t, M)$ for $t = 0, 1, \dots$, so that (i) $J_t \,\triangle\, J_{t-1} = \{j_t\}$ and (ii) $p_{j_t} \ge \varepsilon\,\mathrm{OPT}_t$, where $\mathrm{OPT}_t$ is the optimal value for $I_t$.

1. Initialize $A := 0$ and $I := \emptyset$.

2. For each $t = 0, 1, \dots$:

(a) Run Algorithm Stable-Average on $I_t$. Let $A_t$ and $w_t$ be the corresponding output.

(b) If $A_t < A/2$ or $A_t > 2A$, then set $A := A_t$ and
$$I := \Big\{ i \in \mathbb{Z} : \frac{\varepsilon A}{8(1+\varepsilon)} \le (1+\varepsilon)^i \le 4A(1+\varepsilon) \Big\}.$$

(c) Run Algorithm Rounding on input $(I_t, I)$. Let $(N^t, I_{N^t})$ be the output of the algorithm.

(d) Construct the set of configurations $\mathcal{K}_I$ and the ILP [LEX] for this set, and solve it with Lenstra's algorithm. Let $z^t$ be the optimal solution of the ILP.

(e) If $t = 0$, construct a schedule $S_{N^0}$ using $z^0$ as a template. Let $S_0$ be any schedule that follows $S_{N^0}$.

(f) If $t > 0$, construct a schedule $S_{N^t}$ for instance $I_t$ by using vector $z^t$ as a template. Permute machines in schedule $S_{N^t}$ so that the number of machines on which schedules $S_{N^{t-1}}$ and $S_{N^t}$ differ is minimized.

(g) Run Algorithm Construct-Schedule on input $(S_{N^{t-1}}, S_{N^t}, S_{t-1})$. Let $S_t$ be the output of this subroutine.

Notice that the only difference between this algorithm and Algorithm Robust-PTAS is in Step (2b). In the new algorithm the value of $A$ is also updated whenever $A_t$ is smaller than $A/2$. Also, the set $I$ is defined slightly larger than before. Notice, however, that the size of $I$ is essentially the same, that is, $|I| \in O(\log_{1+\varepsilon}(1/\varepsilon))$. Also we have that $(1+\varepsilon)^{\ell} \le \varepsilon A/8 \le \varepsilon A_t/4 \le \varepsilon\,\mathrm{OPT}_t/2$ for any iteration $t$. By Lemma 1.23, this implies that our rounding procedure can decrease the optimal value by at most a $1+O(\varepsilon)$ factor. Therefore, by Lemma 1.33, we conclude that the algorithm is $(1+O(\varepsilon))$-competitive.

To bound the reassignment factor we again consider blocks, defined as intervals of iterations where the value $A$ stays constant. Consider a block $B = \{s, \dots, r\}$. The following lemma follows by directly applying previous results.

Lemma 1.51. Consider $t \in \{s, \dots, r-1\}$.
Then the migration factor necessary to construct schedule $S_{t+1}$ from $S_t$ is in $2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$.

Proof. It is enough to notice that $\|N^{t+1} - N^t\|_1 \le 1$. Thus, Lemmas 1.29 and 1.30 imply that the migration factor necessary to convert schedule $S_{N^t}$ into $S_{N^{t+1}}$ is $2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$. The result then follows by Lemma 1.34.

We now show a bound on the reassignment factor for the last iteration of block $B$, i.e., for $t = r$. Let $B'$ be the block following $B$, and let $A'$ be the corresponding value of $A$. We can deal with the case $A' > 2A$ exactly in the same way as in Section 1.3.4. Thus we only consider the case $A' < A/2$. Moreover, if $A_{r+1} < A_r/2$, then the migration factor necessary for such an iteration is at most 3 by Lemma 1.50. Therefore, we assume that $A' = A_{r+1} \ge A_r/2 \ge A/4$.

We now bound the difference between $N^r$ and $N^{r+1}$. To this end, we first re-round huge jobs in $I_{N^r}$ to jobs of size $(1+\varepsilon)^{u'}$. That is, we redefine $N_i^r = 0$ for all $i \in \{u'+1, \dots, u\}$, and set
$$N_{u'}^r := \big|\{ j \in J_r : p_j \ge (1+\varepsilon)^{u'} \}\big|.$$
As before, it is easy to see that this does not change the lexicographically minimal solution of this instance. Additionally, we define $N_i^r = 0$ for all $i \in \{\ell', \dots, \ell-1\}$.

After this modification, we can bound the difference between $N^r$ and $N^{r+1}$ in terms of $|B|$. This will help us to show that accumulating reassignment potential during block $B$ is enough to turn schedule $S_r$ into $S_{r+1}$ with constant reassignment factor. To show this we use a similar proof technique as in the proof of Lemma 1.40 and Theorem 1.41. First we need the following result, which plays the same role as Lemma 1.38.

Lemma 1.52. Let $B = \{s, \dots, r\}$. Then
$$\sum_{j \in J_r : p_j \le A} p_j \le (4|B| + 1)\cdot A.$$

Proof. Consider instance $I_{r+1}$ and the value $w_{r+1}$ returned by Algorithm Stable-Average. Note that Lemma 1.18 implies $w_{r+1}/2 < |B|$. On the other hand, in any optimal solution for instance $I_r$, each job smaller than $A_{r+1} = A'$ must be processed on one of the $w_{r+1}$ machines not containing huge jobs. Therefore, the total volume of such jobs is at most $2A_{r+1}w_{r+1} = 2A'w_{r+1} < 4|B|\cdot A$. The lemma follows since $J_r = J_{r+1} \cup \{j_{r+1}\}$.

Lemma 1.53. After modifying vector $N^r$ as above, we have that $N_i^r = 0$ for all $i \in \{u'+1, \dots, u\}$ and
$$\sum_{i=\ell'}^{u'} |N_i^{r+1} - N_i^r| \in O\Big(\frac{|B|}{\varepsilon}\Big).$$
We skip the proof of this lemma since it follows from an argument analogous to the proof of Lemma 1.39.

Once we have bounded the difference between $N^r$ and $N^{r+1}$, it is easy to show that schedules $S_{N^r}$ and $S_{N^{r+1}}$ differ in at most $|B| \cdot 2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$ machines. This follows by the same argument as Lemma 1.40.

Lemma 1.54. Vector $z^r$ satisfies that $z_k^r = 0$ for all $k \in \mathcal{K}_I \setminus \mathcal{K}_{I'}$, and
$$\sum_{k \in \mathcal{K}_{I'}} |z_k^{r+1} - z_k^r| \in |B| \cdot 2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}.$$

This lemma and Lemma 1.34 imply that the load migrated in iteration $r+1$ is $|B| \cdot 2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$. With this we can conclude our main theorem by the same argument as in the proof of Theorem 1.41.

Theorem 1.55. Algorithm General-Robust-PTAS is a robust PTAS with reassignment factor $2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$ for temporary jobs if $p_{j_t} \ge \varepsilon\,\mathrm{OPT}_t$ for each iteration $t$.

Finally, we show how to deal with arriving or departing small jobs. For job arrivals, we can use the same technique as for Algorithm Robust-PTAS II, explained at the end of Section 1.3.4. For the departure of small jobs we proceed as follows. We ignore the fact that certain small jobs are removed as long as the following property holds: there is no machine which has lost jobs of total processing time at least $\varepsilon\,\mathrm{OPT}_t$. Under this condition, the objective function is affected by less than a factor $1-\varepsilon$. If there is such a machine, we remove all the corresponding jobs from this machine, treating this removal as a new iteration. We call this Operation O.
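Operation O lends itself to a short sketch. Everything below is illustrative and not taken from the text: the data layout (a map from machines to lists of (job, size) pairs) and the function name are assumptions; the point is only the repeat-until-stable loop in which each cleaned machine counts as a fresh iteration.

```python
def operation_O(machines, departed, eps, opt):
    """Sketch: while some machine has lost departed jobs of total size at
    least eps*opt, actually remove those jobs from it; each such removal
    is treated as a new iteration of the main algorithm."""
    iterations = 0
    while True:
        hit = [m for m, jobs in machines.items()
               if sum(p for j, p in jobs if j in departed) >= eps * opt]
        if not hit:
            return machines, iterations
        for m in hit:
            machines[m] = [(j, p) for j, p in machines[m] if j not in departed]
        iterations += len(hit)  # each cleaned machine counts as one iteration
```

For example, with threshold $\varepsilon\,\mathrm{OPT} = 4$, a machine that lost a departed job of size 5 is cleaned in one pass, while machines below the threshold keep their (conceptually still present) departed jobs.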
Whenever we remove a batch of small jobs, we treat this as a new iteration, and therefore we update our rounded instance with Algorithm Rounding. Since the batch of small jobs that is removed has total processing time in $O(\varepsilon\,\mathrm{OPT}_t)$, the updated rounded instance has only a constant number of jobs less. This implies that the accumulated potential corresponding to the jobs being removed is enough to update our schedule. Note that after performing Operation O there might be new machines which have assigned more than $\varepsilon\,\mathrm{OPT}_t$ processing time of jobs that have left. Then we need to repeat Operation O until there is no such machine.

Applying the idea of the previous paragraph to Algorithm General-Robust-PTAS, we conclude the following.

Theorem 1.56. Consider the online Machine Covering problem with temporary jobs. There exists a robust PTAS with reassignment factor $2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$.

We remark that it is not clear whether it is possible to bound the amount of accumulated reassignment potential by $O(\varepsilon\,\mathrm{OPT}_t)$ in the job departure case. The main bottleneck for such a result is the departure of small jobs. Indeed, by delaying the departure of small jobs as explained above, we might have to use a large amount of reassignment potential, since in one iteration we may have to repeat Operation O an arbitrary number of times. On the other hand, it is possible to adjust the techniques of Section 1.3.4 and obtain a robust PTAS with constant migration factor (that is, without accumulated reassignment potential) in the temporary job setting, if we additionally assume that for each arriving or departing job $j_t$ it holds that $p_{j_t} \ge \varepsilon\,\mathrm{OPT}_t$.

Robust PTASes for General Objective Functions

We now generalize our results to a larger class of objective functions. This class was previously introduced by Alon et al. [AAWY98] as a family of objective functions that admit a PTAS on parallel machines.
By using similar techniques we show that this class also admits a robust PTAS. Most of our results for the Machine Covering problem can be translated to this general setting; we only focus on the main differences when describing our results.

We consider four types of objective functions. Given a schedule $S$, recall that $\ell_i(S)$ denotes the load of machine $i$, that is, the sum of the processing times of all jobs assigned to $i$. The class of objective functions that we consider depends exclusively on the vector $(\ell_i(S))_{i \in M}$. Minimization problems are of the form $\sum_{i \in M} f(\ell_i(S))$ or $\max_{i \in M} f(\ell_i(S))$ for some given function $f : \mathbb{R}_{\ge 0} \to \mathbb{R}_{\ge 0}$. Function $f$ must satisfy the following conditions.

(C.i) Convexity:⁵ for all $0 \le x \le y$ and $0 \le \Delta \le y - x$, it must hold that $f(x + \Delta) + f(y - \Delta) \le f(x) + f(y)$.

(C.ii) For all $\varepsilon > 0$ there exists $\delta = \delta(\varepsilon) > 0$ so that for all $x, y \ge 0$, if $x \le y \le (1+\delta)x$ then
$$\frac{f(x)}{1+\varepsilon} \le f(y) \le (1+\varepsilon)f(x).$$

⁵ This definition is equivalent to the usual concept of convexity: $f(\lambda x + (1-\lambda)y) \le \lambda f(x) + (1-\lambda)f(y)$ for all $x, y \ge 0$ and $0 \le \lambda \le 1$.

We remark that, as observed by Epstein and Sgall [ES04], (C.ii) is equivalent to the uniform continuity of the function $\ln(f(e^x))$ on the real line. Formally, the minimization problems in consideration are as follows.

(I) Given a function $f$ satisfying (C.i) and (C.ii), minimize $\sum_{i \in M} f(\ell_i(S))$.

(II) Given a function $f$ satisfying (C.i) and (C.ii), minimize $\max_{i \in M} f(\ell_i(S))$.

Note that these families generalize several classic objective functions. In particular, if $f$ is the identity function then $\max_{i \in M} f(\ell_i(S))$ corresponds to the Minimum Makespan problem. Also, we can model all $\ell_p$-norms within this framework. Indeed, consider $f(x) = x^p$ for some $p \ge 1$, so that $\sum_{i \in M} f(\ell_i(S)) = \|(\ell_i(S))_{i \in M}\|_p^p$. It is easy to see that this function $f$ satisfies Conditions (C.i) and (C.ii).
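For the $\ell_p$ example, Condition (C.ii) can be made concrete: for $f(x) = x^p$ one may take $\delta(\varepsilon) = (1+\varepsilon)^{1/p} - 1$, since $y \le (1+\delta)x$ then gives $f(y) \le (1+\varepsilon)f(x)$, while $f(y) \ge f(x)$ for $y \ge x$. The helper names below are illustrative, not from the text.

```python
def cii_delta(eps, p):
    """A valid choice of delta(eps) in Condition (C.ii) for f(x) = x**p."""
    return (1 + eps) ** (1.0 / p) - 1

def cii_holds(f, x, y, eps):
    """Check the two-sided bound of (C.ii): f(x)/(1+eps) <= f(y) <= (1+eps)*f(x)."""
    return f(x) / (1 + eps) <= f(y) <= (1 + eps) * f(x)
```

For instance, with $p = 3$ and $\varepsilon = 0.2$ one gets $\delta \approx 0.063$, and loads within that relative distance indeed have cubes within a factor $1.2$ of each other.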
Moreover, a $(1+\varepsilon)$-approximate solution for this objective has an approximation guarantee of $(1+\varepsilon)^{1/p} \le (1+\varepsilon)$ for the $\ell_p$-norm.

For maximization problems we require function $f$ to be concave instead of convex.

(C.i') Concavity: for all $0 \le x \le y$ and $0 \le \Delta \le y - x$, it must hold that $f(x + \Delta) + f(y - \Delta) \ge f(x) + f(y)$.

The maximization problems that we consider are the following.

(III) Given a function $f$ satisfying (C.i') and (C.ii), maximize $\sum_{i \in M} f(\ell_i(S))$.

(IV) Given a function $f$ satisfying (C.i') and (C.ii), maximize $\min_{i \in M} f(\ell_i(S))$.

We remark that with these families we can model, among many others, the Machine Covering problem. We consider the same online setting introduced in Section 1.2.1 for all these objective functions, obtaining the same results as for the Machine Covering problem.

Theorem 1.57. For Problems (I), (II), (III) and (IV) with permanent jobs, there exists a robust PTAS with constant reassignment factor that uses at most $K_t \in O(\varepsilon A_t)$ accumulated reassignment potential in each iteration $t$.

We remark that in this theorem the bound on the accumulated reassignment potential is given in terms of $A_t$, that is, the stable average of the instance in iteration $t$, as defined in Section 1.3.1. In the corresponding result for the Machine Covering problem, i.e., Theorem 1.49, we bounded the reassignment factor in terms of $\mathrm{OPT}_t$. However, for this problem, $A_t$ and $\mathrm{OPT}_t$ are within a factor of 2, and therefore the theorem above implies Theorem 1.49.

Theorem 1.58. For Problems (I), (II), (III) and (IV) with temporary jobs, there exists a robust PTAS with constant reassignment factor.

The proofs of these two results are obtained in the same way as for the Machine Covering problem.
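The exchange form of concavity in (C.i') above is easy to test numerically. A small sketch (names are illustrative): a concave $f$ satisfies the inequality, while a strictly convex one violates it.

```python
import math

def ci_prime_holds(f, x, y, delta):
    """Check the inequality of (C.i'): for 0 <= x <= y and 0 <= delta <= y - x,
    a concave f satisfies f(x + delta) + f(y - delta) >= f(x) + f(y)."""
    assert 0 <= x <= y and 0 <= delta <= y - x
    return f(x + delta) + f(y - delta) >= f(x) + f(y) - 1e-12
```

For example, $\sqrt{3} + \sqrt{7} \ge \sqrt{1} + \sqrt{9}$ holds, whereas $3^2 + 7^2 < 1^2 + 9^2$ shows the failure for a convex function.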
The main extra properties that we need to show for these theorems are: (1) there exists an optimal solution that is locally optimal (this follows from convexity or concavity), and (2) in order to approximate the objective function it is enough to approximate the loads of the machines (this follows by Condition (C.ii)). Additionally, when dealing with small jobs, instead of comparing the size of jobs with $\varepsilon\,\mathrm{OPT}_t$ as in Section 1.3.5, we compare them with $\varepsilon A_t$ (recall that for the Machine Covering problem, $\mathrm{OPT}_t$ and $A_t$ are within a factor of 2, and thus this does not make a significant difference). The rest of the arguments follow analogously. We start by showing Property (1), which was observed before by Alon et al. [AAWY98].

Observation 1.59. Consider a schedule $S$ and let $p^i_{\min}$ be the processing time of the smallest job assigned to machine $i \in M$. If there exists a pair of machines $i, k$ so that $\ell_i(S) - p^i_{\min} \ge \ell_k(S)$, then reassigning the smallest job on $i$ to machine $k$ does not worsen the objective function for any of the Problems (I), (II), (III) or (IV).

Proof. For Problem (II), since $f$ is convex, then
$$\max\{f(x) : \ell_k(S) \le x \le \ell_i(S)\} = \max\{f(\ell_i(S)), f(\ell_k(S))\}.$$
Thus,
$$\max\{f(\ell_i(S) - p^i_{\min}), f(\ell_k(S) + p^i_{\min})\} \le \max\{f(\ell_i(S)), f(\ell_k(S))\},$$
and thus the objective function cannot increase. The result for Problem (IV) follows by the same argument, considering $-f$ instead of $f$. For Problem (I), the convexity of $f$ implies that
$$f(\ell_i(S) - p^i_{\min}) + f(\ell_k(S) + p^i_{\min}) \le f(\ell_i(S)) + f(\ell_k(S)).$$
This implies the observation for Problem (I). An analogous argument works for Problem (III).

This observation implies that we can restrict ourselves to consider locally optimal solutions. In particular we have the following.

Observation 1.60. For any of the Problems (I), (II), (III) and (IV), there exists an optimal solution that is locally optimal.
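The exchange argument of Observation 1.59 can also be checked numerically for the sum objective of Problem (I). The sketch below (hypothetical names) verifies that when $\ell_i(S) - p^i_{\min} \ge \ell_k(S)$ and $f$ is convex, moving the smallest job of machine $i$ to machine $k$ does not increase $f(\ell_i(S)) + f(\ell_k(S))$.

```python
def exchange_does_not_worsen(f, load_i, load_k, p_min):
    """For convex f and load_i - p_min >= load_k, moving the smallest job
    (of size p_min) from machine i to machine k cannot increase
    f(load_i) + f(load_k); returns True when the move is non-worsening."""
    assert load_i - p_min >= load_k
    before = f(load_i) + f(load_k)
    after = f(load_i - p_min) + f(load_k + p_min)
    return after <= before + 1e-12
```

With $f(x) = x^2$, loads $10$ and $2$, and smallest job $3$, the sum drops from $104$ to $74$, matching the displayed inequality.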
With this observation we can apply all the techniques of Section 1.3.1 for locally optimal solutions. In particular, by Lemma 1.16, there exists an optimal solution in which the load of every machine is at least $A/2$, where $A$ is the stable average of the instance.

We now observe that the rounding technique of Section 1.3.2 can also be applied to the problems in this section. Consider $\varepsilon > 0$. Condition (C.ii) implies that there exists a $\delta$ such that we can round the processing time of each job to the nearest power of $1+\delta$ without affecting the objective function by more than a $1+\varepsilon$ factor. Thus, as before, we can assume that all our jobs have processing time $(1+\delta)^i$ for some $i \in \mathbb{Z}$. The rest of our rounding techniques are still valid, as shown in the following lemma. In this lemma we use the concept of a schedule that follows another solution, defined in Section 1.3.2. Recall that in all of our algorithms before, the competitive guarantee was ensured by considering an optimal solution of a rounded instance and then constructing a solution of the original instance that follows the rounded solution. The next lemma shows that this still yields $(1+O(\varepsilon))$-approximate solutions for the problems considered in this section. In the following we consider the set $J_{\ell}(I)$ containing all jobs $j$ in instance $I$ such that $p_j \le (1+\delta)^{\ell}$; we say that these jobs are small. Similarly, we denote by $H_u(I)$ the set of jobs $j$ in $I$ such that $p_j \ge (1+\delta)^u$. In the following lemma, which is an extension of Lemma 1.22, instance $I_1$ represents an arbitrary instance, and $I_2$ a rounded version of $I_1$.

Lemma 1.61. For any $\varepsilon > 0$ there exists $\delta > 0$ satisfying the following. Consider two instances $I_1 = (J_1, M)$ and $I_2 = (J_2, M)$, and let $A_1$ and $A_2$ be the stable averages of $I_1$ and $I_2$, respectively. Let $I = \{\ell, \dots, u\}$ be an index interval so that $(1+\delta)^{\ell} \le \delta A_1/2$ and $(1+\delta)^u > 2A_1$. Additionally, assume that
$$|p(J_{\ell}(I_1)) - p(J_{\ell}(I_2))| \le (1+\delta)^{\ell},$$
$$|\{j \in J_1 : p_j = (1+\delta)^i\}| = |\{j \in J_2 : p_j = (1+\delta)^i\}| \quad \text{for all } i \in \{\ell+1, \dots, u-1\},$$
$$|H_u(I_1)| = |H_u(I_2)|.$$
Consider any of the Problems (I), (II), (III), or (IV), and let $S_2^*$ be an optimal solution to instance $I_2$ that is also locally optimal. Then, any schedule $S_1$ for $I_1$ that follows $S_2^*$ is a $(1+O(\varepsilon))$-approximate solution.

For the proof of this lemma, observe that $A_2 \le A_1(1+\delta)$, and by assuming $\delta < 1$ we obtain that $A_2 < 2A_1$. This implies that in any locally optimal solution all jobs in $H_u(I_1)$ and $H_u(I_2)$ must be processed on a machine of their own. Therefore, removing these jobs together with $|H_u(I_1)| = |H_u(I_2)|$ many machines can only make the problem harder to approximate. We conclude that, without loss of generality, we can assume that $H_u(I_1)$ and $H_u(I_2)$ are empty. Under this assumption, we show that if two schedules follow each other, then the loads of every machine in both schedules are within a $1+O(\delta)$ factor.

Lemma 1.62. Consider the same two instances as defined in the previous lemma, and assume additionally that $H_u(I_1) = H_u(I_2) = \emptyset$. Let $S_1$ and $S_2$ be feasible solutions for $I_1$ and $I_2$, respectively, and assume that $S_1$ follows $S_2$ and also $S_2$ follows $S_1$. Then,
$$\ell_i(S_1) - 3(1+\delta)^{\ell} \le \ell_i(S_2) \le \ell_i(S_1) + 3(1+\delta)^{\ell} \quad \text{for all } i \in M.$$

Proof. Recall that $\ell_{\min}(S)$ denotes the minimum load of a machine in schedule $S$. We can assume without loss of generality that $\ell_{\min}(S_2) \le \ell_{\min}(S_1)$ (otherwise invert the roles of $S_1$ and $S_2$). Observe that the same proof as in Lemma 1.22 yields that $\ell_{\min}(S_2) \ge \ell_{\min}(S_1) - 2(1+\delta)^{\ell}$. Let $i$ be an arbitrary machine. We consider two different cases.

Assume first that machine $i$ satisfies $\ell_i(S_1) > \ell_{\min}(S_2) + 3(1+\delta)^{\ell}$. Thus, $\ell_i(S_1) > \ell_{\min}(S_1) + (1+\delta)^{\ell}$, because $\ell_{\min}(S_2) \ge \ell_{\min}(S_1) - 2(1+\delta)^{\ell}$. Since $S_1$ follows $S_2$, this implies that machine $i$ does not process any small job in schedule $S_1$. If also no small job is assigned to $i$ by $S_2$, then $\ell_i(S_1) = \ell_i(S_2)$ and we are done. Otherwise, schedule $S_2$ can only assign more jobs to $i$, and thus $\ell_i(S_2) \ge \ell_i(S_1)$. We conclude the lemma for this machine $i$ since
$$\ell_i(S_1) \le \ell_i(S_2) \le \ell_{\min}(S_2) + (1+\delta)^{\ell} \le \ell_{\min}(S_1) + (1+\delta)^{\ell} \le \ell_i(S_1) + (1+\delta)^{\ell},$$
where the second inequality follows since $S_2$ assigns a small job to machine $i$.

Assume now that $\ell_i(S_1) \le \ell_{\min}(S_2) + 3(1+\delta)^{\ell}$. Then simply note that $\ell_i(S_1) - 3(1+\delta)^{\ell} \le \ell_{\min}(S_2) \le \ell_i(S_2)$. If $i$ does not process a small job in schedule $S_2$, then $\ell_i(S_2) \le \ell_i(S_1)$, and hence we are done. Thus, we can assume that $i$ processes a small job in schedule $S_2$. Then
$$\ell_i(S_2) \le \ell_{\min}(S_2) + (1+\delta)^{\ell} \le \ell_{\min}(S_1) + (1+\delta)^{\ell} \le \ell_i(S_1) + (1+\delta)^{\ell}.$$

Using this technical result we show Lemma 1.61.

Proof (Lemma 1.61). Let $S_1^*$ be an optimal solution to $I_1$ that is also locally optimal. Remove all jobs in $J_{\ell}(I_1)$ and greedily add all jobs in $J_{\ell}(I_2)$ in a list-scheduling fashion. Call this solution $S_2$. With this construction, we have that $S_2$ follows $S_1^*$. Since $S_1^*$ is locally optimal, also $S_1^*$ follows $S_2$, and hence we can use the previous lemma for these schedules. Recall that $(1+\delta)^{\ell} \le \delta A_1/2$. By Lemma 1.16 this implies that $(1+\delta)^{\ell} \le \delta\ell_{\min}(S_1^*) \le \delta\ell_i(S_1^*)$. The previous lemma then implies that $(1-3\delta)\ell_i(S_1^*) \le \ell_i(S_2) \le (1+3\delta)\ell_i(S_1^*)$. Then, Condition (C.ii) implies that for any $\varepsilon > 0$ there exists a small enough $\delta > 0$ such that
$$\frac{f(\ell_i(S_1^*))}{1+\varepsilon} \le f(\ell_i(S_2)) \le (1+\varepsilon)f(\ell_i(S_1^*)). \qquad (1.7)$$
Consider now an optimal solution $S_2^*$ to $I_2$ that is also locally optimal, and let $S_1$ be a solution to $I_1$ that follows $S_2^*$. Since $S_2^*$ is locally optimal, then $S_2^*$ also follows $S_1$. With the same argument as above we obtain
$$\frac{f(\ell_i(S_1))}{1+\varepsilon} \le f(\ell_i(S_2^*)) \le (1+\varepsilon)f(\ell_i(S_1)). \qquad (1.8)$$
Consider now Problems (I) and (II). Since these are minimization problems, Equation (1.7) implies that $\mathrm{OPT}_2 \le (1+\varepsilon)\mathrm{OPT}_1$.
Let $C(S_1)$ denote the cost of schedule $S_1$. Then, Equation (1.8) implies that $C(S_1) \le (1+\varepsilon)\mathrm{OPT}_2$, and thus $C(S_1) \le (1+\varepsilon)^2\,\mathrm{OPT}_1 \in (1+O(\varepsilon))\mathrm{OPT}_1$. This implies the lemma for Problems (I) and (II). An analogous argument works for Problems (III) and (IV).

Consider one of the robust PTASes from the previous sections. Lemma 1.61, applied to an instance $I_t$ together with its rounded version $I_{N^t}$, ensures that the rounding procedure in all our algorithms (Algorithm Rounding) does not modify the objective function by more than a $1+O(\varepsilon)$ factor. Moreover, assume that we modify the objective function of the ILP [LEX] so that it computes an optimal solution (of the rounded instance) for one of the objective functions of Problems (I)-(IV). Then, since the output of the robust PTAS for iteration $t$ follows the optimal solution for $I_{N^t}$, Lemma 1.61 ensures that the algorithm is $(1+O(\varepsilon))$-competitive.

In the following we show how to modify the objective function of our ILP [LEX] (Section 1.3.2) to model each of the Problems (I) to (IV). Modeling Problems (I) and (III) is easier than before. For a configuration $k \in \mathcal{K}$ we define its corresponding cost coefficient as $c_k := f(\mathrm{load}(k))$. Thus, changing the objective function of [LEX] to minimize (correspondingly maximize) $\sum_{k \in \mathcal{K}_I} c_k \cdot x_k$ yields an optimal solution to the rounded instance $I_N$ for Problem (I) (correspondingly Problem (III)).

For Problem (IV), we must again consider lexicographically minimal solutions. However, the order in which we consider the configurations is different. Consider the set of configurations $\mathcal{K}_I = \mathcal{K}$, and relabel the configurations in $\mathcal{K} = \{k_1, \dots, k_{|\mathcal{K}|}\}$ so that $f(\mathrm{load}(k_1)) \le \dots \le f(\mathrm{load}(k_{|\mathcal{K}|}))$. We now consider lexicographically minimal solutions with respect to this order, as defined in Section 1.3.2. With the same argument as in Lemma 1.27, finding the lexicographically minimal solution with this order yields a schedule that maximizes $\min_i f(\ell_i(S))$. The rest of the analysis follows identically as for the Machine Covering problem. For Problem (II), we can simply revert the order of the configurations, that is, we relabel $\mathcal{K} = \{k_1, \dots, k_{|\mathcal{K}|}\}$ so that $f(\mathrm{load}(k_1)) \ge \dots \ge f(\mathrm{load}(k_{|\mathcal{K}|}))$.

After defining how to model our objective functions, we note an additional difference between the Machine Covering problem and Problems (I) and (III). Recall that the ILP [LEX] for the Machine Covering problem (as well as for Problems (II) and (IV)) has a unique optimal solution. Then we have argued as follows. Given an optimal solution $z^t$ for [LEX] in iteration $t$, Lemma 1.29 implies that there exists an optimal solution $z^{t+1}$ for the following iteration such that the difference between $z^t$ and $z^{t+1}$ (in $\ell_1$-norm) is bounded. Moreover, the uniqueness of the optimal solution of [LEX] in iteration $t+1$ ensured that the solution found by Lenstra's algorithm is indeed the sought solution. With this we could bound the difference between $z^t$ and $z^{t+1}$, and hence bound the migration and reassignment factors. If the optimal solution of [LEX] is not unique, as in Problems (I) and (III), solving the ILP for each iteration independently might yield some other optimal solution, not the one claimed in Lemma 1.29. If this happens we cannot bound the difference between $z^t$ and $z^{t+1}$ as before. Thus, instead of solving the ILP in each iteration with Lenstra's algorithm, we find $z^{t+1}$ with an exhaustive search.

Recall that if $t$ is not the last iteration of a block, the difference between $z^t$ and $z^{t+1}$ is bounded by $2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$ (Lemma 1.31). Thus, we can find $z^{t+1}$ by searching through all vectors that differ from $z^t$ (in $\ell_1$-norm) by at most $2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$. Since $z^{t+1}$ belongs to a space of dimension $2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}$, the search space has size $2^{2^{O(\frac{1}{\varepsilon}\log^2\frac{1}{\varepsilon})}}$. For Algorithm Robust-PTAS II, the same argument works when $t$ is the last iteration of a block.
However, for Algorithms Robust-PTAS and General-Robust-PTAS we have that if t is the last iteration of a block, then the difference between z^t and z^{t+1} is only bounded by |B| · 2^{O((1/ε)·log²(1/ε))} (Lemmas 1.40 and 1.54). Moreover, this larger difference is caused by the re-rounding of several small jobs. In this case we can pretend that we re-round these jobs one at a time, obtaining a sequence of solutions, each corresponding to the re-rounding of one job. Each solution of the sequence can be found with the exhaustive search technique explained above, and thus finding the next solution takes time 2^{2^{O((1/ε)·log²(1/ε))}}. Moreover, we have implicitly shown (Lemmas 1.39 and 1.53) that the length of such a sequence is in O(|B|/ε). Therefore, the running time of the whole procedure is in |B| · 2^{2^{O((1/ε)·log²(1/ε))}}. With this we guarantee that our algorithms run in polynomial time.

Having modeled the objective function and adapted our algorithms as explained above, the rest of our arguments follow without modification. This is because all of them are based on properties of sensitivity analysis of ILPs (which do not depend on the objective function), or on locally optimal solutions. This implies Theorems 1.57 and 1.58.

1.6 Conclusions

In this chapter we considered online robust versions of different parallel machine scheduling problems. We started by showing that the Machine Covering problem does not admit a robust PTAS with constant migration factor. This answers an open question proposed by Sanders et al. [SSS09]. We introduced a new framework to study this problem by slightly relaxing the definition of algorithms with constant migration factor: we considered the problem in the constant reassignment setting, with the additional restriction that the amount of accumulated reassignment potential used in each iteration must be in O(εOPT_t). We derived such a result for the permanent job setting.
To this end, we identified a key property that ensures robustness, namely, considering solutions that are lexicographically minimal. Subsequently, we presented a robust PTAS with constant reassignment factor for the temporary job setting. This constitutes the first result of this kind that considers job departures. In the last section we identified key properties of objective functions that guarantee the existence of robust PTASes.

Our work leaves several open questions. Recall that in the setting with temporary jobs, our robust PTAS might accumulate an unbounded amount of reassignment potential and use it all in one iteration. Is it possible to avoid this? As discussed at the end of Section 1.4, the main difficulty in answering this question is how to deal with small leaving jobs. In all robust PTASes found in the literature, as well as in our algorithms, the migration or reassignment factor has an exponential dependency on 1/ε. An interesting open question, and maybe the most important one for this kind of problem, is whether it is possible to obtain a polynomial dependency on 1/ε. Answering this question for any of the problems considered above – including Machine Covering, Minimum Makespan and the Minimum ℓp-norm problems – would be a big step towards better understanding robust solutions for scheduling problems. Finally, it would be interesting to derive algorithms with constant migration or reassignment factor in more general machine settings. For example, a natural generalization would be to consider the problem on related machines. In this setting, each machine i has a speed si and job j needs to be processed for pj/si units of time if assigned to machine i. Finding a robust PTAS in this setting seems to be considerably more complicated than in the parallel machine case. For the Minimum Makespan problem, Jansen [Jan10] has recently shown an offline PTAS based on an ILP to solve the rounded instance.
This technique might be useful to obtain a robust PTAS by again using the sensitivity analysis result (Lemma 1.29) for ILPs.

Chapter 2
Robust Multi-Stage Matroid Optimization

Joint work with M. Skutella and A. Wiese

2.1 Introduction

Matroid Optimization

Matroid optimization is one of the most successful modeling frameworks in combinatorial optimization. To motivate this setting, consider the following three natural problems.

• Maximum Spanning Trees. This is one of the best understood problems in combinatorial optimization. Consider a graph G with a node set V and an edge set E. Given a weight function on the edges, the task is to find a set of edges of maximum weight that contains no cycle, that is, a forest of the graph. Note that if G is connected and the weights are positive, the optimal solution will be a spanning tree. See Chapter 3 for an interesting application of this problem and the Minimum Spanning Tree problem.

• Maximum Weight Single Machine Scheduling. Consider the following scheduling problem that appears naturally in production environments. We are given a set of unit length jobs that need to be scheduled on a single machine. Each job j has a positive weight, a release date rj and a due date dj > rj. The task is to pick a subset of jobs with maximum total weight satisfying the following constraint: it must be possible to assign each picked job j to an integral time slot in the interval [rj, dj], so that no two jobs are assigned to the same slot.

• Maximum Weight Vector Basis. Assume that we are given a set V of vectors in Rⁿ. Each vector has an associated positive weight, and the objective is to find a subset of V with maximum weight among all linearly independent subsets. This problem and its minimization version (where we seek a basis of minimum weight) have a vast number of applications, including electrical networks, structural engineering, chemistry and biochemistry, and periodic timetabling (see [Mic06] and the references therein).
All these problems, which are seemingly very different, can be modeled with the matroid paradigm. Notice that in each of these settings we are given a set of elements E (corresponding to edges, jobs and vectors, respectively) from which a subset of maximum weight must be chosen. Let us denote by I ⊆ 2^E the collection of all feasible sets. This corresponds to the collection of forests in the first example, the collection of feasible subsets of jobs in the second example, and the collection of linearly independent sets in the third one. It is not hard to check that all three problems satisfy the following properties (see, e.g., [Sch03, Chapter 39]).

1. Inclusion Property: If I ∈ I and J ⊆ I then J ∈ I.
2. Extension Property: If I, J ∈ I and |J| < |I|, then there exists e ∈ I \ J such that J + e ∈ I.

A pair (E, I) is a matroid if I is non-empty and the two properties above are satisfied. Set E is called the ground set, and sets in I are said to be independent. Moreover, inclusion-wise maximal independent sets are the bases of the matroid. Notice that these concepts coincide with the definitions of independence of vectors and bases in linear algebra. A famous result by Rado [Rad57], Gale [Gal68], and Edmonds [Edm71] states that a pair (E, I) satisfying the Inclusion Property is a matroid if and only if the greedy algorithm computes a maximum weight independent set for arbitrary weights. This characterization highlights the relevance of matroids in the area of efficient algorithms and combinatorial optimization. An analogous result shows that a minimum weight basis can also be found with the greedy algorithm.

An even greater modeling power is obtained if we allow taking intersections of matroids. That is, we consider a system (E, I) with I = ∩_{i=1}^{ℓ} I_i, where (E, I_i) is a matroid for each i ∈ {1, …, ℓ}. Two prominent examples of problems that can be modeled with this framework are the following.

• Maximum Weight Matching on Bipartite Graphs.
Given a bipartite graph with weights on the edges, the objective is to find a maximum weight matching. In other words, we are interested in finding a subset of edges with maximum weight in which no two edges cover the same node. Well-known approaches exist for solving this problem in polynomial time; see for example [Sch03, Chapter 16].

• Maximum Weight Directed Hamiltonian Path Problem (MaxHam). Consider a salesman that travels on the edges of a weighted graph that is directed and complete. Assuming that the weights are non-negative, the salesman must visit each node exactly once with the objective of maximizing the total weight of the traversed edges. This problem can be shown to be NP-hard with a straightforward reduction from the Hamiltonian Path problem [GJ79]. This setting is closely related to the directed Maximum Traveling Salesman Problem (MaxTSP), where the salesman must additionally return to the starting node.

It is not hard to see that the first problem can be modeled with the intersection of two matroids. Each matroid upper bounds by one the degree of each node on one side of the bipartite graph. Similarly, the MaxHam problem can be modeled as the intersection of three matroids: one matroid representing the collection of forests of the graph, and two matroids bounding the in- and out-degree of each node by one. Edmonds [Edm70] observed that the weighted matroid intersection problem on two matroids can be solved efficiently. Besides the maximum matching problem just described, this setting has applications in many areas such as edge connectivity [Gab91], survivable network design [BMM98], constrained [HL04] as well as degree-bounded [Goe06] minimum spanning trees, and multicast network codes [HKM05]. As in the case of the MaxHam problem, considering the intersection of more than two matroids leads into the world of NP-hard optimization problems, and thus such problems cannot be solved efficiently unless P = NP.
Robust Multi-Stage Optimization

In this thesis we are interested in discrete optimization problems where the input data is revealed in several stages, similarly to online problems. We study robust versions of this idea, where we allow a limited amount of recourse in each stage. In this chapter we study such robust multi-stage optimization problems from a broad perspective in the general context of matroid optimization. More precisely, we consider the problem of finding a subset of maximum weight in the intersection of several matroids over a common weighted ground set. In our model, the input data – the elements of the ground set – is revealed over time in discrete stages. At every stage t, a set of new elements becomes available and we are allowed to add at most kt of the current set of available elements to our solution, while dropping others in order to maintain feasibility. A sequence satisfying this property is said to be robust. The objective is to build a robust sequence of solutions with large weight. As in an online setting, decisions must be taken without any information on the elements (and their weights) that will arrive in the future. In each stage, we compare the solution computed by our algorithms to the best robust sequence. Our measure of performance is the worst-case ratio between their weights. If this ratio equals α, we say that the algorithm is α-competitive against robust solutions. Notice that the optimal robust sequence is obtained using complete knowledge of all stages in advance. It is interesting to mention that, due to the limited flexibility in each stage, robust solutions are in general not competitive when compared against the offline optimum (as is done in classical competitive analysis of online algorithms). Despite this fact, there are interesting problems that admit constant competitive algorithms when compared with the offline optimum in each stage.
In particular, we prove that for MaxHam and MaxTSP, there exist constant competitive robust solutions (compared against the offline optimum), even for small values of kt.

Related Work

To the best of our knowledge, multi-stage robust optimization on matroids has not been studied in its own right. We now review the literature on several related models in which the main difficulty is also the uncertainty of the input.

Matroid Secretary Problem. A particularly interesting example of matroid optimization under uncertainty is the matroid secretary problem. In this problem, the elements of the ground set of a matroid arrive in a random order and the algorithm has to take an irrevocable decision to take an element or not. Babaioff, Immorlica, and Kleinberg [BIK07] give an O(log r)-competitive online algorithm for any matroid of rank r. For some important classes of matroids they even give constant-competitive online algorithms. The competitive ratios in the latter and other important cases are subsequently improved in a series of papers [BDG+09, DP08, KP09, Sot11]. For the random assignment model, where there is an adversary that chooses a set of weights which are then randomly assigned to the ground set, Soto [Sot11] gives the first constant factor competitive algorithm for general matroids. A further generalization of the matroid secretary problem to the discounted and the weighted setting (weighted meaning that up to K secretaries can be chosen and have to be allocated to one of K positions) is given by Babaioff et al. [BDG+09].

α-Robustness. A different concept of robustness in the context of matroid intersection is considered by Hassin and Rubinstein [HR02]. In their setting, an independent set in the intersection of two matroids is said to be α-robust if the weight of its p heaviest elements is within a factor α of the maximum weight common independent set of p elements.
They give an easy proof showing that for ℓ matroids a greedy solution is ℓ-robust. In the same paper they present an algorithm to compute a √2-robust solution for the Maximum Weight Matching problem in complete graphs. Later, Fujita, Kobayashi, and Makino [FKM10] show that for two matroids there exists a √2-robust solution. Moreover, they give a matching lower bound by showing that it is NP-hard to decide whether there exists an α-robust solution for any α < √2.

Graphic Matroids. A particular class of matroids that has been studied in the online setting is graphic matroids, corresponding to the maximum (or minimum) spanning tree problem stated above. Imase and Waxman [IW91] consider a problem where the vertices of a graph appear one by one and, upon arrival, have to be connected to the rest of the graph. For the Minimum Spanning Tree problem, they show that the greedy online algorithm achieves a performance ratio of O(log n) in any metric space. Due to the metric property, this immediately generalizes to the Online Steiner Tree Problem. Additionally, they give a lower bound of Ω(log n) for any online algorithm. They also consider an online robust model similar to ours. By allowing a limited number of edges to be rearranged in each stage, they give a constant competitive algorithm which needs in total O(n^{3/2}) rearrangements for a graph with n nodes. They leave as an open problem whether it is possible to achieve a constant competitive factor by rearranging a constant number of edges in each stage. Chapter 3 is devoted to studying this problem.

Online Weighted Matchings. A classical example of matroid intersection is the minimum weight bipartite matching problem. An online version of this problem can be defined as follows: the vertices on one side of the bipartition arrive one by one, and at the appearance of each node we must choose an edge in the matching covering it.
In the setting with metric weights, the best possible competitive factor of a deterministic online algorithm is 2n − 1 (where n denotes the number of vertices on each side) [KP93, KMV94]. By allowing randomization, poly-logarithmic competitive factors can be achieved [MNP06]. For the maximization version, Kalyanasundaram and Pruhs [KP93] give a 3-competitive algorithm and show a matching lower bound.

The Semi-Streaming Model. Muthukrishnan [Mut05] introduced the semi-streaming model, motivated by optimization problems in graphs with a large number of edges. In this setting, the nodes of a graph are known and the edges are given one by one in a streaming fashion. Using a limited amount of memory, namely O(n · polylog(n)) where n is the number of nodes, the objective is to optimize a given graph problem (e.g., maximum matching, shortest paths, diameter). The model considered is adversarial, assuming a worst case ordering of the edges in the stream. In this setting Feigenbaum, Kannan, McGregor, Suri, and Zhang [FKM+04] studied the maximum matching problem (which is closely related to the matroid intersection problem), obtaining a 6-approximation algorithm. McGregor [McG05] and Zelke [Zel10] improved on this, obtaining approximation factors of 5.828 and 5.585, respectively. The best algorithm known so far is by Epstein, Levin, Mestre, and Segev [ELMS11] and achieves an approximation guarantee of 4.91 + ε. This last algorithm works by classifying edges by their weight, keeping a maximal matching for each class and at the end combining the solutions to obtain a matching for the complete graph.

Our Contribution

We first consider our problem – the Robust Multi-Stage Matroid problem – in the single matroid case. We obtain a constant factor competitive algorithm by showing that a natural greedy algorithm is 2-competitive against robust solutions.
To this end, we first identify a convenient class of instances that constitutes the worst-case scenario for robust algorithms. This reduction allows us to base our analysis on the study of sequences of local changes of independent sets. We derive several properties of these sequences, extending the work by Gabow and Tarjan [GT84]. Using these tools we analyze the competitive factor of the algorithm by bounding the loss in the objective function caused by any single wrong decision taken by the algorithm. Moreover, we give a tight example matching the analysis of our algorithm. An extension of our analysis yields analogous results if we require robust solutions to be bases in each stage. Additionally, we characterize the structure of optimal robust solutions by showing that the robustness property can be modeled with an extra matroid constraint. This means that in the full information case, in which the arrival sequence of elements is known in advance, we can compute an optimal robust solution with any matroid intersection algorithm.

In the second part of this chapter we consider our problem in the matroid intersection setting. We give an O(ℓ²)-competitive algorithm against robust solutions when solutions must be independent for ℓ matroids. Since already the offline problem is NP-hard, we identify a subclass of robust solutions – called lazy-removing – that can be constructed online without sacrificing more than a factor O(ℓ) in the objective function. This type of solution is derived by extending the ideas of Feigenbaum et al. [FKM+04] for the matching problem in the semi-streaming model. To study such solutions, we analyze the weight of the elements that the solution chooses either not to include or to remove in a certain stage. We introduce a carefully constructed charging scheme to compare their weight to the weight of the final solution. The extra structure of lazy-removing solutions allows us to generalize
the techniques derived for the single matroid case, showing that our online algorithm is O(ℓ²)-competitive against robust solutions.

To demonstrate the power of our techniques, we apply our results to the directed MaxTSP problem. In the robust multi-stage version of this problem, each stage corresponds to the arrival of a new city (together with its connections to known cities). Our algorithm for the intersection of three matroids yields O(1)-competitive common independent sets when compared to the offline optimum, even when kt = 1. By increasing the budget to kt = 6, we obtain a constant competitive algorithm even when compared to the offline optimum in each stage.

This chapter is structured as follows. In Section 2.2 we start by introducing basic matroid notions and results. In Section 2.3.1 we give a precise definition of our problem, and Section 2.3.2 studies how an independent set can be changed by applying a sequence of local changes (swaps) to it, as well as basic properties of swap sequences. These properties are used in Section 2.3.3 to show that our online algorithm is 2-competitive against robust solutions. In Section 2.3.4 we generalize the techniques to the case in which we require solutions to be bases at the end of each stage. Section 2.3.5 studies the full information case, showing that we can find an optimal robust solution in polynomial time if we know the complete input of our instance. In Section 2.4 we generalize our techniques to the matroid intersection setting. We define the problem for the intersection of two matroids in Section 2.4.1, and in Section 2.4.2 we extend the idea of swap sequences to this setting and derive several of their properties. In Section 2.4.3 we analyze the competitive ratio of an online algorithm. In Section 2.4.4 we shortly discuss how to extend our techniques to the intersection of several matroids. We conclude by applying our model to the MaxTSP problem in Section 2.4.5.
Remark. Very recently, a result on submodular function maximization by Fisher, Nemhauser, and Wolsey [FNW78b] came to our attention.¹ They consider the problem of maximizing a submodular function over a partition matroid, showing that a greedy algorithm is 2-competitive. We notice that this implies our first result, namely, the analysis of the 2-competitive greedy algorithm for the Robust Multi-Stage Matroid problem. Indeed, the robustness property in our setting can be modeled with a partition matroid. Additionally, maximizing a linear function over a matroid defines a submodular function, called the weighted rank. We discuss this connection in more detail in Section 2.3.6. We remark that maximizing over the intersection of two or more matroids does not yield a submodular function. This implies that our results for the matroid intersection setting do not follow from [FNW78b], requiring significantly more ideas.

2.2 Basics Concepts

In the following we give a succinct introduction to matroids and submodularity. For a more exhaustive exposition of these topics see, e.g., [Sch03, Oxl93].

¹ We are very grateful to an anonymous referee who pointed us to this remark and to references [FKM+04] and [FNW78b].

Matroid Basics

Given a set X and an element x we denote X + x := X ∪ {x} and X − x := X \ {x}. Similarly, given a set X and two elements x, y, we denote

    X − y + x := (X − y) + x   if x ≠ y,
    X − y + x := X             if x = y.

Consider a set E, called the ground set, and a collection I ⊆ 2^E of subsets of E. Recall that the pair M = (E, I) is called a matroid if the following properties hold.

1. I is non-empty.
2. Inclusion Property: If I ∈ I and J ⊆ I then J ∈ I.
3. Extension Property: If I, J ∈ I and |J| < |I|, then there exists e ∈ I \ J such that J + e ∈ I.

Sets in I are said to be independent, and an inclusion-wise maximal set in I is called a basis of the matroid.
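The two defining properties above can be verified by brute force on a tiny instance. The sketch below (a toy example, not from the text; the union-find oracle `is_forest` is a hypothetical helper) checks the Inclusion and Extension Properties for the graphic matroid of a triangle graph:

```python
from itertools import combinations

# Ground set: edges of a triangle graph on nodes {0, 1, 2}.
EDGES = [(0, 1), (1, 2), (0, 2)]

def is_forest(edge_subset):
    """Independence oracle for the graphic matroid: no cycle among the edges."""
    parent = {v: v for e in EDGES for v in e}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for u, v in edge_subset:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # adding (u, v) would close a cycle
        parent[ru] = rv
    return True

# Enumerate all independent sets (forests) of the triangle.
independent = [frozenset(s) for r in range(len(EDGES) + 1)
               for s in combinations(EDGES, r) if is_forest(s)]

# Inclusion Property: every subset of an independent set is independent.
assert all(frozenset(t) in independent
           for I in independent for r in range(len(I))
           for t in combinations(I, r))

# Extension Property: if |J| < |I|, some e in I \ J extends J.
assert all(any(J | {e} in set(independent) for e in I - J)
           for I in independent for J in independent if len(J) < len(I))
```

Here all seven subsets with at most two edges are forests, while the full edge set is the unique circuit, so both properties hold.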
It is easy to see from the definition that all bases have the same cardinality. This cardinality is called the rank of M, denoted rank_M(E). As mentioned before, a prominent example of matroids are the so-called graphic matroids, defined as follows. Given an undirected graph G = (V, E), we define the system M = (E, I) where I is the collection of all forests² in G. It is not hard to show that M is indeed a matroid. Notice that if G is connected then bases correspond to spanning trees, and the rank is simply |V| − 1. It is often useful to think in terms of graphic matroids when first encountering matroid terms, since several definitions are named after graph analogs.

A set D is called dependent if D ∉ I, and an element e ∈ E such that {e} is dependent is called a loop. Another important matroid concept that we will use extensively is that of a circuit. A set C is a circuit of M if it is an inclusion-wise minimal dependent set. Note that in graphic matroids circuits correspond to simple cycles in the graph. Given a basis B and any element f ∉ B, the set B + f contains exactly one circuit, which we call C(B, f) ⊆ B + f. Given g ∈ C(B, f), by definition of circuit we have that C(B, f) − g is an independent set. By the Extension Property this implies that B − g + f is also independent and therefore a basis. An extension of this simple idea is given by the following result.

Lemma 2.1 (Symmetric Exchange Property [Sch03, Theorem 39.12]). Let B1 and B2 be two bases of M. Then for any f ∈ B1 \ B2 there exists g ∈ B2 \ B1 such that B2 + f − g and B1 + g − f are bases.

Consider now a non-negative weight function w : E → R≥0. For a given set of elements I ⊆ E we denote w(I) = Σ_{e∈I} w(e). One of the most fundamental results in matroid optimization is that an independent set I maximizing w(I) can be computed with the following greedy algorithm:

1. Order the elements in E = {e1, …, e_|E|} such that w(e1) ≥ w(e2) ≥ … ≥ w(e_|E|).
2. Initialize I := ∅.
3. For each i = 1, …, |E|, if I + ei is independent then set I := I + ei.

This result was originally shown by Rado [Rad57], Gale [Gal68], and Edmonds [Edm71].

Lemma 2.2 ([Rad57, Gal68, Edm71]). The greedy algorithm returns an independent set of maximum weight.

Additionally, for finding a minimum weight basis of a matroid, an analogous greedy algorithm (obtained as before but reverting the ordering of the elements) yields an optimal solution. Note that in graphic matroids this algorithm is equivalent to Kruskal's algorithm [Kru56].

² A forest is a subset of edges containing no cycle.

Weighted Rank and Submodularity

Recall that all bases of a matroid M have the same cardinality, rank_M(E). More generally, given a subset X ⊆ E, every inclusion-wise maximal independent set in X has the same cardinality. This follows easily from the Extension Property. The cardinality of these sets is called the rank of X, rank_M(X) := max{|I| : I ∈ I and I ⊆ X}. Arguably, one of the most important properties of this function is that it is submodular.³

Definition 2.3 (Submodularity). A function R : 2^E → R is submodular if for any sets X, Y ⊆ E with X ⊆ Y and e ∈ E it holds that R(X ∪ {e}) − R(X) ≥ R(Y ∪ {e}) − R(Y).

Notice that submodularity means that the gain in R obtained by adding e to X cannot increase if we instead add e to a superset of X. Intuitively, this property can be interpreted as a discrete analog of convexity.

Lemma 2.4 ([Sch03, Theorem 39.8]). For any matroid M, its rank function rank_M(·) is submodular.

Proof. Consider two sets X, Y ⊆ E with X ⊆ Y and an element e ∈ E \ Y. Given an independent set I ⊆ X of maximal cardinality, the Extension Property ensures that there exists an independent set J of maximum cardinality such that I ⊆ J ⊆ Y. Assume that rank_M(Y + e) − rank_M(Y) = 1, otherwise there is nothing to show. By the Extension Property this implies that J + e is independent.
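The greedy algorithm of Lemma 2.2 can be sketched in a few lines of code; it only needs an independence oracle. In the toy instance below (a hypothetical weighted graph, not from the text), the oracle is the acyclicity test of the graphic matroid, so this is exactly Kruskal's algorithm run on negated order:

```python
def greedy_max_weight(elements, weight, independent):
    """Greedy of Lemma 2.2: scan elements by non-increasing weight,
    keeping each element whose addition preserves independence."""
    I = []
    for e in sorted(elements, key=weight, reverse=True):
        if independent(I + [e]):
            I.append(e)
    return I

# Illustrative instance: a 4-cycle a-b-c-d with a chord a-c.
edges = {('a', 'b'): 5, ('b', 'c'): 4, ('c', 'd'): 3,
         ('d', 'a'): 2, ('a', 'c'): 6}

def acyclic(edge_list):
    """Graphic-matroid independence oracle via union-find."""
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            v = parent[v]
        return v
    for u, v in edge_list:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

basis = greedy_max_weight(edges, edges.get, acyclic)
# Greedy takes ('a','c')=6 and ('a','b')=5, rejects ('b','c') (cycle a-b-c),
# takes ('c','d')=3, rejects ('d','a'): total weight 14.
```

The run mirrors the three steps above: sort once, then a single pass with one oracle call per element.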
Thus, by the Inclusion Property the set I + e is independent. We conclude that rank_M(X + e) − rank_M(X) = 1, and thus the lemma follows.

More generally, given a non-negative weight function w : E → R≥0, we define the weighted rank function of matroid M as

    R_M(X) := max{w(I) : I ∈ I and I ⊆ X}.

To simplify notation we write R(·) = R_M(·). Notice that R(X) can be computed with the greedy algorithm described above. Moreover, as we now show, this function is also submodular.

³ There exist several equivalent definitions of submodularity. We present here the form that best suits our needs.

Lemma 2.5 ([FNW78a, Proposition 3.1]). For any matroid M = (E, I) and non-negative weight function w, the corresponding weighted rank function is submodular.

Proof. Let us fix a set X ⊆ E and let n := |E|. Assume that E = {e1, …, en} with w(e1) ≥ … ≥ w(en) and denote Ei := {e1, …, ei}. To compute R(X) we can use the greedy algorithm with the ordering of elements e1, …, en restricted to the set X. Assume that the greedy algorithm returns a set I, so that R(X) = w(I). For any set S, let χ_S denote the indicator function of S. Defining w(e_{n+1}) := 0, we compute that

    R(X) = w(I) = Σ_{i=1}^{n} χ_I(ei) · w(ei)
                = Σ_{i=1}^{n} χ_I(ei) · Σ_{j=i}^{n} (w(ej) − w(e_{j+1}))
                = Σ_{j=1}^{n} Σ_{i=1}^{j} χ_I(ei) · (w(ej) − w(e_{j+1}))
                = Σ_{j=1}^{n} |I ∩ Ej| · (w(ej) − w(e_{j+1})).

Since I is chosen by the greedy algorithm with the ordering of elements as above, I ∩ Ej is a maximum cardinality independent set of X ∩ Ej. Denote by rank_j(·) the rank function of matroid M restricted to the elements in Ej. Thus, |I ∩ Ej| = rank_j(X ∩ Ej) and therefore we obtain that R(X) = Σ_{j=1}^{n} rank_j(X ∩ Ej) · (w(ej) − w(e_{j+1})). Recall that rank_j(·) is submodular by the previous lemma. Moreover, a straightforward extension of this implies that rank_j((·) ∩ Ej) is submodular. We conclude that R(X) is a conical combination of submodular functions and therefore is also submodular.
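Lemma 2.5 can be checked exhaustively on a toy instance. The sketch below (a uniform matroid of rank 2 with hypothetical weights; R computed by brute force rather than by the greedy algorithm) verifies the inequality of Definition 2.3 for every pair X ⊆ Y and every element e:

```python
from itertools import combinations

# Uniform matroid of rank 2 on four weighted elements:
# a set is independent iff it has at most 2 elements.
w = {'a': 4.0, 'b': 3.0, 'c': 2.0, 'd': 1.0}
E = set(w)
RANK = 2

def R(X):
    """Weighted rank: maximum weight of an independent subset of X,
    found by brute force over all subsets of size <= RANK."""
    return max(sum(w[e] for e in S)
               for r in range(RANK + 1)
               for S in combinations(sorted(X), r))

# Check Definition 2.3 for all X subset of Y and all elements e.
for ry in range(len(E) + 1):
    for Y in map(set, combinations(sorted(E), ry)):
        for rx in range(len(Y) + 1):
            for X in map(set, combinations(sorted(Y), rx)):
                for e in E:
                    assert R(X | {e}) - R(X) >= R(Y | {e}) - R(Y) - 1e-9
```

For a uniform matroid of rank 2 the weighted rank is simply the sum of the two heaviest weights in X, which makes the diminishing-returns behavior easy to see by hand.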
Additionally, it will be useful to have a formula for computing R(X + e) − R(X). This is stated in the following result.

Lemma 2.6. Given a matroid M = (E, I) and a non-negative weight function w, consider the corresponding weighted rank function R. For a subset X ⊆ E, let I ∈ I be a maximum weight set such that I ⊆ X, and consider a given element e ∈ E. If I + e is independent then R(X + e) − R(X) = w(e). Otherwise, let g be a minimum weight element in the circuit C(I, e) ⊆ I + e. Then, R(X + e) − R(X) = w(e) − w(g).

Proof. Assume first that I + e is independent. Then, by submodularity of R, we have that

    w(e) = R(∅ + e) − R(∅) ≥ R(X + e) − R(X) ≥ w(I + e) − w(I) = w(e),

where the last inequality follows since I + e is independent and thus R(X + e) ≥ w(I + e). The lemma then follows if I + e is independent.

Assume now that I + e is dependent, and relabel the elements in I + e = {e1, …, en} so that w(e1) ≥ … ≥ w(en). Since g is an element of smallest weight in the circuit C(I, e), we can assume that in the ordering e1, …, en element g comes after all other elements of C(I, e). More precisely, let j be such that g = ej; by definition of g we have that w(ei) ≥ w(g) for all ei ∈ C(I, e), so we can assume that i < j for each such ei ≠ g. For any i ∈ {1, …, n}, denote Ei := {e1, …, ei} and let Ii be a maximum weight independent set in Ei computed by the greedy algorithm with the ordering of the elements as above. Notice that E_{j−1} ⊆ I − g + e, and thus E_{j−1} is independent. Thus, because w is non-negative, I_{j−1} = E_{j−1}. Also, Ej contains C(I, e) and thus g = ej is not selected by the algorithm. This implies that Ij = I_{j−1} = E_{j−1} = Ej − g. On the other hand, Ej − e is contained in I and thus it is independent. This implies that R(Ej − e) = w(Ej) − w(e). We conclude that

    R(Ej) − R(Ej − e) = w(Ij) − (w(Ej) − w(e)) = w(Ej) − w(g) − (w(Ej) − w(e)) = w(e) − w(g).
Thus, since R is submodular and Ej − e ⊆ X, we have that R(X + e) − R(X) ≤ w(e) − w(g). Finally, we notice that R(X) = w(I) and R(X + e) ≥ w(I − g + e). The lemma follows since

    w(e) − w(g) = w(I − g + e) − w(I) ≤ R(X + e) − R(X) ≤ w(e) − w(g).

2.3 Robust Multi-Stage Optimization Under a Single Matroid Constraint

2.3.1 Problem Definition

Now that we have introduced the basic definitions and properties of matroids and weighted rank functions, we state the Robust Multi-Stage Matroid problem in the single matroid case. Consider a matroid M = (E, I) and a weight function w : E → R≥0 that are both revealed incrementally in n stages. For technical reasons we add to matroid M a dummy element ε ∈ E which is a loop and has zero weight. Initially, we are given a starting set of available elements E0 ⊆ E, together with an initial independent set I0 ⊆ E0. In each stage t ∈ {1, …, n}, a set Et ⊆ E, with E_{t−1} ⊆ Et, is revealed together with the weights of the new elements in Et. The objective is to maintain an independent set at all times, with the largest possible total weight. We are also given a value kt for each t ∈ {1, …, n} that denotes the current capacity for changing our solution. We are looking for an online sequence of sets It for t ∈ {1, …, n} satisfying the following properties.

(P1) Feasibility: It ⊆ Et is an independent set.
(P2) Robustness: |It \ I_{t−1}| ≤ kt.

We assume the usual independence-testing oracle model, where in each stage t an algorithm is allowed to test independence of any subset of Et. For this problem, we are interested in analyzing algorithms that are online, meaning that for each stage t the algorithm must construct a solution satisfying Properties (P1) and (P2) without knowledge of the elements that will appear in the future. In particular, the total number of stages n is unknown to the algorithm.
As in the classic online framework, it would be desirable to construct online solutions that are constant competitive, that is, there exists some number α ≥ 1 so that w(It) ≥ w(OPTt)/α for all t ∈ {1, . . . , n}, where OPTt is a maximum weight independent set contained in Et. In our case this is not possible, even if we assume that I0 = OPT0. Indeed, consider a uniform matroid^4 of rank r with an initial set E0 = I0 containing r elements of weight 1. In the first stage r new elements arrive with weight M, but the budget is k1 = O(1). Thus, the best possible competitive ratio is Ω(M). This counterexample makes the usual competitive analysis for online algorithms meaningless in our case. Therefore, we compare our solutions to the best possible robust solution. A sequence of independent sets I1, . . . , In is said to be robust if it satisfies (P1) and (P2) for each t. For a given t ∈ {1, . . . , n}, let I1∗, . . . , It∗ be a sequence of independent sets maximizing w(It∗) among all robust sequences. We say that an online algorithm is α-competitive against robust solutions if the solution computed by the algorithm satisfies w(It) ≥ w(It∗)/α for each t and every possible input. In general, since the value of n is a priori unknown, this is equivalent to showing that w(In) ≥ w(In∗)/α. To simplify notation, we will say that the algorithm is α-competitive if it is clear from the context that we are comparing it to the best robust solution. We use this framework to analyze the following algorithm.

Algorithm (Robust-Greedy). In each stage t ∈ {1, . . . , n}, we construct the independent set It based on the solution It−1 to the previous stage.
1. Set I := It−1 and k := 1.
2. While k ≤ kt, find a pair of elements (f, g) ∈ Et × Et maximizing w(f) − w(g) among all pairs such that I − g + f ∈ I. Set I := I − g + f and k := k + 1.
3. Return It := I.
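A minimal Python sketch of one stage of the algorithm above, under the independence-oracle model (the function names, the toy uniform-matroid oracle, and the use of None for the dummy element ε are illustrative assumptions; for simplicity the sketch stops early instead of performing void swaps with f = g):

```python
def robust_greedy_stage(I_prev, E_t, k_t, w, is_independent):
    """One stage of Robust-Greedy: perform up to k_t swaps (f, g) with
    I - g + f independent, each time choosing the pair that maximizes
    the gain w(f) - w(g).  g = None plays the role of the dummy ε."""
    I = set(I_prev)
    for _ in range(k_t):
        best, best_gain = None, 0.0
        for f in E_t:
            for g in I | {None}:
                J = (I - {g}) | {f}     # None is never in I, so I - {None} = I
                if is_independent(J):
                    gain = w[f] - (w[g] if g is not None else 0.0)
                    if gain > best_gain:
                        best, best_gain = (f, g), gain
        if best is None:                # no strictly improving swap: stop early
            break
        f, g = best
        I = (I - {g}) | {f}
    return I

# Toy uniform matroid of rank 2 with a heavy newly revealed element x.
is_indep = lambda S: len(S) <= 2
w = {'a': 1, 'b': 1, 'x': 5}
I1 = robust_greedy_stage({'a', 'b'}, {'a', 'b', 'x'}, 1, w, is_indep)
print(sorted(I1))   # the single budget unit is spent swapping x in for a or b
```

With budget k1 = 1 the stage performs exactly one swap of gain 4, so the returned solution has total weight 6.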
Note that in the previous algorithm, element g might be equal to the dummy element ε, and therefore I − g + f = I + f. In the following sections we show that this algorithm is 2-competitive against robust solutions and that our analysis is tight.

Dynamic Changes of Independent Sets

Notice that Robust-Greedy operates by locally changing the current solution in an iterative way. Hence, before analyzing our algorithm we first study sequences of local changes and their basic properties. In particular we extend some definitions and basic results previously introduced by Gabow and Tarjan [GT84]. To simplify notation we consider the following definitions. First we extend the notion of a circuit as follows.

Definition 2.7 (Circuit (Extension)). Let I be an independent set and f ∈ E.
1. If I + f is dependent, then C(I, f) is defined as the unique circuit contained in I + f.
2. If f ∈ I, then C(I, f) := {f}.
3. If f ∉ I and I + f is independent, then C(I, f) := {ε}.

In each stage, the algorithm chooses a pair of elements (f, g) and modifies the current solution I, obtaining I − g + f. This motivates the following definition.

Definition 2.8 (Swap). An ordered pair of elements (f, g) ∈ E × E is a swap for set I ∈ I if g ∈ C(I, f). We say that applying the swap to set I yields the independent set I − g + f, and we define ∆(f, g) := w(f) − w(g) as the gain of swap (f, g). It is worth recalling that by definition of I − g + f (Section 2.2), we have that I − g + f = I if f = g.

^4 A matroid M = (E, I) is a uniform matroid of rank r if I = {I ⊆ E : |I| ≤ r}.
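The gain ∆(f, g) of a swap is exactly the change in solution weight. A small Python sketch of applying a single swap as in Definition 2.8 (the toy rank-2 oracle and names are illustrative; None again stands for the dummy element ε, and the sketch only checks independence of the result rather than verifying g ∈ C(I, f) in full):

```python
def apply_swap(I, f, g, w, is_independent):
    """Apply swap (f, g) to independent set I, returning (I - g + f, gain).
    g = None encodes the dummy element ε, so that I - g + f = I + f."""
    J = (I - {g}) | {f}
    # Necessary sanity check; a full implementation would verify g ∈ C(I, f).
    assert is_independent(J), "(f, g) is not a valid swap for I"
    gain = w[f] - (w[g] if g is not None else 0.0)
    return J, gain

is_indep = lambda S: len(S) <= 2        # uniform matroid of rank 2
w = {'a': 1, 'b': 1, 'x': 5}
I, gain = apply_swap({'a', 'b'}, 'x', 'a', w, is_indep)
print(sorted(I), gain)                  # ['b', 'x'] 4.0
```

As the remark after Definition 2.8 notes, w(I − g + f) = w(I) + ∆(f, g): here 6 = 2 + 4.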
Notice that we have three different cases when applying a swap (f, g) for independent set I: (1) if I + f is dependent, then the swap adds element f ∉ I and removes the element g of the unique circuit contained in I + f; (2) if f ∈ I, then the swap leaves I = I − f + f untouched; (3) if f ∉ I and I + f is independent, then g = ε and the swap augments I to I − ε + f = I + f. These three cases correspond to the three cases in the definition of C(I, f). Moreover, it is not hard to check that in all three cases the gain ∆(f, g) corresponds to the increase in weight of the solution, w(I − g + f) − w(I).

Definition 2.9 (Swap Sequence). Consider an independent set I and a tuple of pairs S = ((f1, g1), . . . , (fk, gk)) (or S = (ft, gt)kt=1 for short). Define It(S) recursively so that I0(S) := I and It(S) := It−1(S) − gt + ft for all t ∈ {1, . . . , k}. We say that S is a swap sequence of length k for set I if (ft, gt) is a swap for It−1(S) for each t ∈ {1, . . . , k}.

Definition 2.10 (Total gain). The total gain of a swap sequence S = (ft, gt)kt=1 is defined as ∆(S) := ∆(f1, g1) + · · · + ∆(fk, gk).

Given a swap sequence S = (ft, gt)kt=1 for I, notice that the set It(S) is independent for all t and that w(Ik(S)) = w(I) + ∆(S). These definitions give a framework to study an independent set that changes by applying a sequence of swaps to it. We will show in the following sections that studying such changes is enough to analyze Robust-Greedy. In the following we derive basic properties of swap sequences that will be useful to analyze our algorithms. Let I be an independent set and consider a fixed set of elements f1, . . . , fk to be added to I. Our first observation on a swap sequence S = (ft, gt)kt=1 for I is that choosing g1, . . . , gk greedily maximizes the total gain of S. More precisely, consider the following definition.

Definition 2.11.
Sequence S = (ft, gt)kt=1 is said to be greedy-removing if gt ∈ arg min{w(e) : e ∈ C(It−1(S), ft)} for all t ∈ {1, . . . , k}.

Note that given a sequence of elements f1, . . . , fk to be added, we can easily compute a greedy-removing sequence S = (ft, gt)kt=1 by picking the elements gt greedily. Also, note that all the swaps of a greedy-removing sequence have non-negative gain, since for every t either gt = ε or ft ∈ C(It−1(S), ft), and thus w(gt) ≤ w(ft). Furthermore, greedy-removing sequences satisfy the following important property.

Lemma 2.12. Let S = (ft, gt)kt=1 be a greedy-removing swap sequence for independent set I. Then Ik(S) is a maximum weight independent set for matroid M restricted to the elements in I ∪ {f1, . . . , fk}.

Proof. It is enough to apply Lemma 2.6 iteratively.

This lemma directly implies the following observation, which allows us to restrict ourselves to greedy-removing sequences.

Observation 2.13. Let S1 = (ft, gt)kt=1 and S2 = (ft, et)kt=1 be two swap sequences for I ∈ I. If S1 is greedy-removing then ∆(S1) ≥ ∆(S2).

In our online setting, it is crucial to study what happens if we insert an element by a greedy swap (f, g) at some particular stage, or if we insert it at a later stage by another greedy swap (f, g′). The following result states that the gain of adding an element to our solution in stage t is non-increasing in t. This property is essentially equivalent to the submodularity of the weighted rank function of our matroid, defined in Section 2.2.

Lemma 2.14. Let S = (ft, gt)kt=1 be a greedy-removing swap sequence for independent set I, and consider an element e ∈ E. If for each t ∈ {1, . . . , k}, element dt is an element of minimum weight in C(It−1(S), e), then ∆(e, d1) ≥ ∆(e, d2) ≥ . . . ≥ ∆(e, dk).

Proof. Let R be the weighted rank function of our matroid as defined in Section 2.2.
Notice that by Lemma 2.12, the set It−1(S) is a maximum weight independent set in Ft−1 := I ∪ {f1, . . . , ft−1}. Similarly, It−1(S) − dt + e is a maximum weight independent set in Ft−1 + e. This implies that R(Ft−1 + e) − R(Ft−1) = ∆(e, dt). Because I ⊆ F1 ⊆ . . . ⊆ Fk, the lemma then follows by the submodularity of R (Lemma 2.5).

It is possible to extend the previous lemma to show that the gain of a whole swap sequence decreases if the sequence is applied in a later iteration of the algorithm. This is formalized in the following statement.

Lemma 2.15. Let I be an independent set and let (f, g) be a swap for I such that g ∈ arg min{w(e) : e ∈ C(I, f)}. Consider a set of elements e1, . . . , ek to be added to I. Let S1 := (et, dt)kt=1 be a swap sequence for I and let S2 = (et, dt′)kt=1 be a swap sequence for set I − g + f. If (dt)kt=1 and (dt′)kt=1 are chosen so that S1 and S2 are greedy-removing, then ∆(S1) ≥ ∆(S2).

Proof. Let g′ be an element of minimum weight in C(Ik(S1), f), and consider the swap sequence S1′ = ((e1, d1), . . . , (ek, dk), (f, g′)) for I. Lemma 2.12 implies that w(Ik+1(S1′)) = w(Ik(S2)), since both Ik+1(S1′) and Ik(S2) are maximum weight independent sets for matroid M restricted to the elements in I ∪ {f, e1, . . . , ek}. Hence,

w(I) + ∆(S1) + ∆(f, g′) = w(Ik+1(S1′)) = w(Ik(S2)) = w(I) + ∆(f, g) + ∆(S2),

and thus ∆(S1) + ∆(f, g′) = ∆(f, g) + ∆(S2). By Lemma 2.14 we know that w(g′) ≥ w(g), and thus ∆(S1) ≥ ∆(S2).

Competitive Analysis of Robust-Greedy

In this section we analyze the competitive ratio of Robust-Greedy. To this end we first consider a special class of instances that we call regular. After analyzing our algorithm for this type of instance, we show that regular instances constitute a worst case scenario.

Definition 2.16 (Regular Instance).
We say that an instance of the Robust Multi-Stage Matroid problem is regular if kt = 1 for all t ∈ {1, . . . , n}.

Consider a regular instance of the Robust Multi-Stage Matroid problem. Intuitively, in each stage of this instance, a robust solution adds one element to the current solution and, depending on whether the resulting set is independent, it may or may not remove one element. In other words, for each stage t a solution must choose a swap (f, g) for set It−1 (where g might be equal to ε if It−1 + f is independent). Note that the algorithm may choose f = g, leaving the solution untouched.

Definition 2.17 (Robust Feasible Swap Sequence). Given a regular instance, a swap sequence S = (ft, gt)nt=1 for set I0 ∈ I is said to be robust feasible (or feasible for short) if ft ∈ Et for all t ∈ {1, . . . , n}.

We note that the swap sequence corresponding to Robust-Greedy is robust feasible. However, a robust sequence I1, . . . , In does not always correspond to a feasible swap sequence, even in regular instances. This happens if the sequence drops several elements in one iteration (recall that our model does not put any restriction on the number of elements that can be dropped per iteration). On the other hand, dropping elements can never improve the weight of the solutions. This is shown in the following lemma.

Lemma 2.18. Let I1, . . . , In be a robust sequence. There exists a robust feasible swap sequence S for I0 so that w(In(S)) ≥ w(In).

Proof. Let ft be the unique element in It \ It−1 (if the latter is empty we define ft as ε). We construct a sequence S = (ft, gt)nt=1 such that It(S) ⊇ It for all t. Assume that we have such a sequence up to stage t − 1. We now choose gt as any element in C(It−1(S), ft) \ It. Element gt must exist, since otherwise It would contain a circuit. By appending swap (ft, gt) to S we conclude that It(S) ⊇ It. Therefore In(S) ⊇ In, and thus the lemma follows.
In particular, this lemma implies that the optimal robust solution is given by a feasible swap sequence S∗ = (ft∗, gt∗)nt=1. In other words, S∗ is a feasible swap sequence that maximizes w(In(S∗)), and this value equals the maximum of w(In∗) over all robust sequences I1∗, . . . , In∗. Let S = (ft, gt)nt=1 be the feasible swap sequence corresponding to the greedy solution. Notice that S is greedy-removing by definition, and by Observation 2.13 we can assume that S∗ is also greedy-removing. In what follows we show that Robust-Greedy is 2-competitive against robust solutions for regular instances. To this end we will first show a slightly stronger property, namely that ∆(S) ≥ ∆(S∗)/2. This fact will be proven iteratively, and the following lemma will imply the inductive step.

Lemma 2.19. Consider an independent set I, and let f be an arbitrary element. Let S1 = (et, dt)kt=1 be any greedy-removing swap sequence for set I, and consider the swap sequence for I

S2 := ((f, g), (e2, d2′), . . . , (ek, dk′)).

If g, d2′, . . . , dk′ are set such that S2 is greedy-removing, then ∆(S2) ≥ ∆(S1) − ∆(e1, d1).

Proof. We construct an auxiliary swap sequence S3 := ((e2, d2′′), . . . , (ek, dk′′)) for set I, where d2′′, . . . , dk′′ are defined so that S3 is greedy-removing. Lemma 2.12 implies that w(Ik(S2)) ≥ w(Ik−1(S3)), since S2 yields the optimal solution to a matroid with one more element than S3. Therefore, ∆(S2) ≥ ∆(S3). On the other hand, Lemma 2.15 implies that ∆(S3) ≥ ∆(e2, d2) + · · · + ∆(ek, dk). Combining these two inequalities we obtain that

∆(S2) ≥ ∆(S3) ≥ ∆(e2, d2) + · · · + ∆(ek, dk) = ∆(S1) − ∆(e1, d1).

In the setting of the last lemma, let I = I0 be the initial set of our regular instance, let f = f1 be the first element inserted by Robust-Greedy, and let S1 = S∗ be an optimal feasible sequence.
Then the lemma implies that there exists a feasible sequence S2 for I0 whose first swap is the greedy swap (f1, g1), and whose total gain is at least ∆(S∗) − ∆(f1∗, g1∗) ≥ ∆(S∗) − ∆(f1, g1). Intuitively, this means that by applying a greedy swap (f1, g1) to I0, we lose for future stages at most what we gained with the current swap (f1, g1). The same argument can be iterated for the rest of the stages. This is the main insight needed to prove the following lemma.

Lemma 2.20. Consider a regular instance of the Robust Multi-Stage Matroid problem. If S = (ft, gt)nt=1 is the swap sequence computed by Robust-Greedy and S∗ = (ft∗, gt∗)nt=1 is the optimal robust feasible swap sequence, then ∆(S) ≥ ∆(S∗)/2.

Proof. For each ℓ ∈ {1, . . . , n − 1}, consider the sequence for set Iℓ−1(S)

Sℓ := ((fℓ, gℓ), (fℓ+1∗, gℓ+1′), . . . , (fn∗, gn′)),

where gℓ+1′, . . . , gn′ are chosen greedily so that Sℓ is greedy-removing. Similarly, consider the swap sequence Sℓ∗ := (ft∗, gt∗∗)nt=ℓ for set Iℓ−1(S), where gℓ∗∗, . . . , gn∗∗ are set so as to make Sℓ∗ greedy-removing. Additionally, Sn := (fn, gn) and Sn∗ := (fn∗, gn∗∗) are defined as two greedy-removing sequences (of one swap each) for set In−1(S), where gn∗∗ is chosen greedily. Lemma 2.19 applied to Sℓ and Sℓ∗ implies that

∆(Sℓ) ≥ ∆(Sℓ∗) − ∆(fℓ∗, gℓ∗∗) ≥ ∆(Sℓ∗) − ∆(fℓ, gℓ),   (2.1)

where the last inequality follows since (fℓ, gℓ) is chosen as the available swap with largest gain in Iℓ−1(S). Notice that, without loss of generality, the last n − ℓ swaps of Sℓ correspond to sequence Sℓ+1∗. This is because both swap sequences add elements fℓ+1∗, . . . , fn∗ to Iℓ(S) and remove elements greedily, so that they are greedy-removing. This together with Inequality (2.1) implies that, for each ℓ ∈ {1, . . . , n − 1},

w(Iℓ(S)) + ∆(Sℓ+1∗) = w(Iℓ−1(S)) + ∆(Sℓ) ≥ w(Iℓ−1(S)) + ∆(Sℓ∗) − ∆(fℓ, gℓ).
Iterating this last expression yields

w(In(S)) = w(In−1(S)) + ∆(Sn)
≥ w(In−1(S)) + ∆(Sn∗) − ∆(fn, gn)
≥ w(In−2(S)) + ∆(Sn−1∗) − ∆(fn, gn) − ∆(fn−1, gn−1)
. . .
≥ w(I1(S)) + ∆(S2∗) − (∆(f2, g2) + · · · + ∆(fn, gn))
≥ w(I0(S)) + ∆(S1∗) − (∆(f1, g1) + · · · + ∆(fn, gn))
= w(I0) + ∆(S∗) − ∆(S).

Since also w(In(S)) = w(I0) + ∆(S), we obtain that w(I0) + ∆(S) ≥ w(I0) + ∆(S∗) − ∆(S), implying that 2∆(S) ≥ ∆(S∗).

Corollary 2.21. Consider a regular instance of the Robust Multi-Stage Matroid problem. Then Robust-Greedy is 2-competitive against robust solutions.

Proof. It is enough to add w(I0) to both sides of the inequality ∆(S) ≥ ∆(S∗)/2.

To conclude this section, we show this result for instances that are not necessarily regular.

Lemma 2.22. Any α-competitive algorithm for regular instances implies an α-competitive algorithm for general instances.

Proof. Consider an arbitrary instance of the Robust Multi-Stage Matroid problem. We split each stage t of this instance into kt stages, each having Et as the available set of elements and allowing to insert at most one element. Notice that a robust solution of the modified instance trivially yields a solution to the original instance of the same weight. We now observe that any robust solution to the original instance also implies a solution to the modified instance with equal total weight. Indeed, consider a solution I1, . . . , In to the original instance. For a given stage t, we have kt corresponding stages in the modified instance. We construct a sequence of kt sets J1, . . . , Jkt corresponding to the solution of the modified instance for these kt stages. For this we simply consider It \ It−1 = {f1, . . . , fkt} and define Js := (It−1 ∩ It) ∪ {f1, . . . , fs} for all s ∈ {1, . . . , kt}. It is clear that in this sequence at most one element is added to the solution in each stage, and that Jkt = It. Notice that at the beginning it may happen that we drop more than one element; however, our model does not have any restriction on the number of elements that we are allowed to remove in each stage. We conclude that the optimal solutions of the two instances coincide, and that applying an algorithm to the regular instance yields a solution to the original one of the same weight. The lemma follows.

With this we conclude the main theorem of this section. We remark that the following theorem also follows from a result by Fisher et al. [FNW78b], see Section 2.3.6 for more details.

Theorem 2.23. For any instance of the Robust Multi-Stage Matroid problem, algorithm Robust-Greedy is 2-competitive against the optimal robust solution.

Proof. The proof follows directly from Corollary 2.21 and Lemma 2.22.

On the other hand, we show that the analysis of Robust-Greedy is tight, even for regular instances.

Theorem 2.24. For any δ > 0, there is an instance on which Robust-Greedy achieves a competitive factor (against robust solutions) no better than 2 − δ. This holds even when considering only regular instances and when M is a graphic or partition matroid.

Proof. The instances are described via a graph G = (V, E) whose edges E are revealed in several stages. The corresponding graphic matroid M is then (E, I), where I simply denotes all forests in G. We define N := 2^T, where T is an integer to be defined later. In the instance, there are T + 1 iterations t ∈ {1, . . . , T + 1} with kt = N/2^t for t < T + 1 and kT+1 := 1. We start with a graph G0 = (V, E0) with V = {v∗, v0, . . . , vN−1} and E0 = ∅. In iteration t = 1, a set of edges E1 = {v∗vi : i ∈ {0, . . . , N − 1}} is revealed with weight 1 each. For each iteration t with 1 < t ≤ T + 1 we define

VtL := {vi ∈ V : (1 − 1/2^{t−1})N ≤ i < (1 − 1/2^t)N}  and  VtR := {vi ∈ V : i ≥ (1 − 1/2^t)N}.

In each iteration t > 1, a new set of edges Et = {v∗vi : vi ∈ VtR} is revealed with weight 2^{t−1} each (note that the instance contains parallel edges).
If Robust-Greedy selects an edge e ∈ Et, it might be necessary to drop an edge e′ ∈ ∪t′<t Et′ in iteration t. This results in an overall weight of ∑_{t=1}^{T} N · (1/2^t) · 2^{t−1} = T · N/2. The claim follows by choosing T sufficiently large. Moreover, it is straightforward to turn the above instance into a regular instance. Also, it is easy to check that the instance can be interpreted as a partition matroid.

Competitive Analysis for Maximum Weight Bases

In this section we analyze a variant of our problem in which we require solutions to be bases at all times. We show that in this case the greedy algorithm is also 2-competitive, which follows from a small adjustment to the proof above for the case of independent sets. As before, we consider an underlying matroid M = (E, I) and weights w : E → R≥0. At the beginning of the procedure we are given a set E0, and in each stage t ∈ {1, . . . , n} the set of available elements corresponds to Et ⊇ Et−1. For each stage t we denote by Mt the matroid M restricted to set Et, that is, Mt = (Et, It), where It = {I ⊆ Et : I ∈ I}. Additionally, we are allowed to add at most kt elements in each stage t ∈ {1, . . . , n}. Assuming that we are given a starting basis B0 for matroid M0, a robust solution for this setting is a sequence of sets B1, . . . , Bn so that for each t ∈ {1, . . . , n},

• Bt is a basis for Mt, and
• |Bt \ Bt−1| ≤ kt.

Notice that for a robust solution to exist, the budget for each t must satisfy kt ≥ rankM(Et) − rankM(Et−1), and thus we assume that property to be true from now on. The algorithm that we analyze is a natural variant of the algorithm that we proposed for the independent set case.

Algorithm (Basis-Robust-Greedy). In each stage t, we construct the current basis Bt based on the solution Bt−1 for the previous stage.
1. Set B := Bt−1 and k := 1.
2. While B is not a basis for Mt, find an element f ∈ Et \ B of maximum weight such that B + f is independent.
Set B := B + f and k := k + 1.
3. While k ≤ kt, find a feasible swap (f, g) ∈ Et × Et for B maximizing ∆(f, g), and redefine B := B − g + f and k := k + 1.
4. Return Bt := B.

To analyze this algorithm we will reuse the techniques derived in the previous section. Let B1, . . . , Bn be the sequence of bases computed by the algorithm. Note that Basis-Robust-Greedy computes a swap sequence S = (fs, gs)Ns=1, where N = k1 + · · · + kn. For any t ∈ {1, . . . , n} we define st := k1 + · · · + kt. With this definition we have that Ist(S) = Bt, and gs = ε for any s ∈ {st−1 + 1, . . . , st−1 + rankM(Et) − rankM(Et−1)} and t ∈ {1, . . . , n}. Now we show a technical lemma that will help us show that the optimal robust solution B1∗, . . . , Bn∗ can also be described by a swap sequence.

Lemma 2.25. Let I be an independent set, let B be a basis for M, and define ℓ := |B \ I|. There exists a swap sequence S = (et, ht)ℓt=1 for I so that Iℓ(S) = B. Moreover, the elements {et}ℓt=1 can be inserted to I by S in an arbitrary order.

Proof. Consider the elements {e1, . . . , eℓ} = B \ I. We iteratively insert these elements to I, always removing elements in (I \ B) ∪ {ε}. To show that this can be done, assume by induction that we have created a sequence of swaps St := (es, hs)ts=1 for some t ≥ 0 (the base of the induction, the case t = 0 of the empty sequence, is trivial). Adding element et+1 to It(St) will create a circuit C(It(St), et+1) (which may be equal to {ε}). If C(It(St), et+1) = {ε} we simply define ht+1 := ε and define St+1 by appending (et+1, ht+1) to St. If C(It(St), et+1) ≠ {ε}, then It(St) + et+1 is dependent and C(It(St), et+1) denotes the unique fundamental circuit in It(St) + et+1. In this case, note that (I \ B) ∩ C(It(St), et+1) is nonempty, since if it were empty then C(It(St), et+1) ⊆ B, which contradicts the fact that B is independent.
Then we can take ht+1 to be any element in (I \ B) ∩ C(It(St), et+1), and define St+1 by appending (et+1, ht+1) to St. This concludes the induction. We define our sequence S to be Sℓ. Since S inserts all elements in B \ I and removes only elements in I \ B, we conclude that Iℓ(S) ⊇ B. Since Iℓ(S) is also independent and B is a basis, we obtain that Iℓ(S) = B. The lemma follows by noting that the elements in {e1, . . . , eℓ} were inserted in an arbitrary order.

Applying this lemma subsequently to the sets Bt−1∗ and Bt∗ (recall that Bt∗ is a basis for Mt and Bt−1∗ is an independent set for this matroid), we obtain that there exists a sequence S∗ = (fs∗, gs∗)Ns=1 for B0 so that Ist(S∗) = Bt∗ for all t ∈ {1, . . . , n}. To show the competitive factor we will follow essentially the same proof as in Lemma 2.20. Notice that in that proof, the only property that we required of the greedy sequence S = (fs, gs)Ns=1 and the optimal sequence S∗ = (fs∗, gs∗)Ns=1 is that

∆(fs, gs) ≥ ∆(fs∗, gs′) for each s ∈ {1, . . . , N},   (2.2)

where (fs∗, gs′) is a greedy-removing swap for Is−1(S). In the independent set case this was trivial, since (fs, gs) is always chosen as the available swap with maximum gain. In the case where we require solutions to be bases we must be more careful. Notice that for each stage t, the greedy algorithm works in two phases: in the first phase the algorithm only adds elements that do not create circuits (and thus the element removed by each swap is always ε), until the current solution is a basis for Mt; in the second phase the algorithm chooses swaps greedily. Thus, for each stage t, the property required in (2.2) follows directly for swaps corresponding to the second phase of the algorithm in this stage. To show that it also holds in the first phase we need to add more structure to sequence S∗. We do so by using the following technical lemma.

Lemma 2.26.
Let B1, B2 be two bases for matroid M = (E, I), and let ℓ := rankM(E). We can label the elements of B1 and B2, B1 = {f1, . . . , fℓ} and B2 = {h1, . . . , hℓ}, such that ht+1 ∉ {f1, . . . , ft} and {f1, . . . , ft, ht+1} ∈ I for all t ∈ {1, . . . , ℓ − 1}.

Proof. Order the elements of B1 arbitrarily, so that B1 = {f1, . . . , fℓ}. We find an ordering of the elements in B2 satisfying the claim of the lemma by iteratively applying the Symmetric Exchange Property (Lemma 2.1). If fℓ ∈ B2 then we simply choose hℓ = fℓ. Otherwise, fℓ ∈ B1 \ B2, and thus by the symmetric exchange property there exists an element hℓ ∈ B2 \ B1 so that B1 − fℓ + hℓ is also a basis. Thus, {f1, . . . , fℓ−1, hℓ} is a basis. Repeating the same argument for basis B2 and {f1, . . . , fℓ−1, hℓ}, either fℓ−1 ∈ B2, in which case we define hℓ−1 = fℓ−1, or there exists an element hℓ−1 ∈ B2 \ {f1, . . . , fℓ−1, hℓ} so that {f1, . . . , fℓ−1, hℓ} − fℓ−1 + hℓ−1 = {f1, . . . , fℓ−2, hℓ−1, hℓ} is a basis. Iterating this argument we obtain that {f1, . . . , ft, ht+1, . . . , hℓ} is a basis for all t ∈ {1, . . . , ℓ − 1}, which implies the lemma.

We are now ready to show the main result of this section.

Theorem 2.27. Algorithm Basis-Robust-Greedy is a 2-competitive algorithm against robust solutions for the Basis Robust Multi-Stage Matroid problem. Moreover, the analysis is tight.

Proof. For a given stage t, consider the optimal sequence S∗ restricted to stages st−1 + 1 to st, that is, St∗ := (fs∗, gs∗)sts=st−1+1. By using Lemma 2.25 to construct S∗, sequence St∗ can insert the elements in Bt∗ \ Bt−1∗ to Bt−1∗ in an arbitrary order. Thus, since Bt∗ is a basis for Mt, we can assume that the first rankM(Et) − rankM(Et−1) many swaps of St∗ remove the element ε, that is, gs∗ = ε for all s ∈ {st−1 + 1, . . . , st−1 + rankM(Et) − rankM(Et−1)}. Let us denote st′ := st−1 + rankM(Et) − rankM(Et−1). Then, Ist′(S∗) is a basis for matroid Mt.
Moreover, because in the first phase of stage t the greedy algorithm only removes the element ε in each swap, we also obtain that Ist′(S) is a basis for Mt. Now consider the contraction of matroid Mt to the elements Et−1, that is, Mt/Et−1.^5 Then B1 := Ist′(S∗) \ Et−1 and B2 := Ist′(S) \ Et−1 are bases for Mt/Et−1. Applying Lemma 2.26 to bases B1 and B2, we obtain that for all s ∈ {st−1 + 1, . . . , st′ − 1} the set (Is(S) \ Et−1) + fs+1∗ is independent for Mt/Et−1, and thus Is(S) + fs+1∗ is independent for Mt. We conclude that w(fs+1) ≥ w(fs+1∗), since Basis-Robust-Greedy picks element fs+1 as the largest weight element such that Is(S) + fs+1 is independent. With this construction we showed that Inequality (2.2) is satisfied for all swaps of the first phase of our algorithm. Since for the second phase Inequality (2.2) follows by definition of Basis-Robust-Greedy, we conclude that Inequality (2.2) holds for all swaps in S and S∗. Thus, we can apply the same proof as in Lemma 2.20 to conclude that ∆(S) ≥ ∆(S∗)/2. Adding w(B0) to both sides of this inequality yields that w(Bn) ≥ w(Bn∗)/2, which implies that Basis-Robust-Greedy is 2-competitive. Finally, that the analysis is tight follows from the same construction as in Theorem 2.24.

^5 Given a matroid M = (E, I) and a subset of elements F ⊆ E, the contraction of M to F is defined as the matroid M/F := (E \ F, I|F), where I|F := {I ⊆ E \ F : I ∪ B ∈ I} and B is an arbitrary maximal independent subset of F.

The Optimal Robust Solution and Matroid Intersection

Consider an instance of the Robust Multi-Stage Matroid problem, and assume that we have full knowledge of the instance, that is, matroid M = (E, I), sets E0, . . . , En and the values k1, . . . , kn are known in advance. For this case we would like to compute in polynomial time the robust sequence maximizing w(In).
In this section we show that this is indeed possible, by identifying a connection between the optimal robust sequence and finding a set of maximum weight in the intersection of two matroids. In other words, we encode the robustness property in a second matroid. The matroid encoding the robustness property will be a so-called transversal matroid. Transversal matroids are a classical example of matroids and are defined as follows. Consider a set of elements E, a family of subsets A = (A1, . . . , Am) where Ai ⊆ E for all i ∈ {1, . . . , m}, and a multiplicity mi ∈ N0 for each set Ai. We say that a subset I ⊆ E is a partial transversal if we can assign each element e ∈ I to a set Ai ∋ e, so that each Ai has at most mi elements assigned to it. Formally, I ⊆ E is a partial transversal for A if there exists a map ψ : I → {1, . . . , m} satisfying (i) for all e ∈ I, element e belongs to Aψ(e), and (ii) |ψ−1({i})| ≤ mi for all i ∈ {1, . . . , m}. Let I′ be the family of all partial transversals of A. It is not hard to see that the system M′ = (E, I′) is a matroid (for details see [Sch03, Chapter 39]). For our particular problem we construct a transversal matroid as follows. Consider the sets {Et}nt=0 from the instance of the Robust Multi-Stage Matroid problem. To model the robustness property of our solutions, we define the family of subsets A = (I0, E1, . . . , En). The multiplicity of I0 is |I0| and the multiplicity of Et is kt for each t ∈ {1, . . . , n}. Let M′ = (E, I′) be the transversal matroid corresponding to the family A just defined, and recall that M = (E, I) is the underlying matroid defined in the Robust Multi-Stage Matroid instance. In what follows we show that the optimal robust sequence corresponds to a maximum weight common independent set in I ∩ I′.

Lemma 2.28. For each set I in I ∩ I′, there exists a sequence of sets I1, . . . , In with In = I that is feasible for the instance of the Robust Multi-Stage Matroid problem.
Given I, the sequence I1, . . . , In can be computed in polynomial time.

Proof. Notice that since I ∈ I′, we can decompose I as a disjoint union

I = (I ∩ I0) ∪ F1 ∪ · · · ∪ Fn,  where Ft ⊆ Et and |Ft| ≤ kt for all t ∈ {1, . . . , n}.

Thus, we can define

It := (I ∩ I0) ∪ F1 ∪ · · · ∪ Ft,

and hence It \ It−1 = Ft for all t ∈ {1, . . . , n}. We conclude that |It \ It−1| ≤ kt for all t. Moreover, we can compute the sets Ft (and thus the sets It) in polynomial time. To this end we just need to assign the elements of I to the corresponding sets Et, which can be done by finding a maximum cardinality matching in an appropriately defined bipartite graph.

Lemma 2.29. For any solution I1, . . . , In of the Robust Multi-Stage Matroid problem we have that In ∈ I ∩ I′.

Proof. Notice that a simple inductive argument shows that

In ⊆ I0 ∪ (I1 \ I0) ∪ · · · ∪ (In \ In−1).

Thus, we can assign each element of In ∩ (It \ It−1) to Et and the elements of In ∩ I0 to I0. Because |It \ It−1| ≤ kt, this implies that In ∈ I′. The claim follows.

The last two lemmas imply that in the full information case our problem can be seen as an instance of the (offline) Weighted Matroid Intersection problem.

Theorem 2.30. Consider an arbitrary instance of the Robust Multi-Stage Matroid problem. There exists a polynomial time algorithm that computes a robust sequence of independent sets I1, . . . , In maximizing w(In).

Proof. The previous two lemmas imply that computing an optimal robust sequence of independent sets I1, . . . , In is equivalent to finding a set I ∈ I ∩ I′ maximizing w(I). This is possible in polynomial time with the weighted matroid intersection algorithm [Edm70] (see also [Sch03, Chapter 41]).

Relation to Submodular Function Maximization over a Partition Matroid

As mentioned in Section 2.1, we recently found out that Theorem 2.23 follows from a result by Fisher, Nemhauser, and Wolsey [FNW78b].
We now give a brief description of their problem and the connection to our setting. Let M_P = (E, I_P) be a partition matroid, that is, there exists a partition E = ∪_{i=1}^m A_i and non-negative numbers m_i for each i such that I_P := {I ⊆ E : |I ∩ A_i| ≤ m_i for all i}. In other words, M_P is a transversal matroid where the family of sets A = (A_1, ..., A_m) covers E and is pairwise disjoint. Consider a submodular function R : 2^E → R_{≥0} that is also non-decreasing, that is, R(X) ≤ R(Y) for all X ⊆ Y ⊆ E. Fisher et al. proposed a greedy algorithm for the following problem:

max R(I) s.t. I ∈ I_P.

Their algorithm, which we call Algorithm FNW, works as follows. First initialize I := ∅. For each i = 1, ..., m, repeat the following step m_i times: set I := I + f, where f is an element maximizing R(I + f) − R(I) over all f ∈ A_i.

Theorem 2.31 ([FNW78b]). Algorithm FNW is a 2-approximation algorithm.

To see that this result implies Theorem 2.23, we first modify the Robust Multi-Stage Matroid problem as follows: in each iteration t we only allow the solution to insert elements of E_t \ E_{t-1}, that is, we can only insert newly appearing elements into the solution I_t. Note that this extra restriction yields an equivalent problem, since we can simply take duplicates of elements in E_{t-1} and assume the duplicates belong to E_t \ E_{t-1}. Define now A_s := E_s \ E_{s-1} for all s ≥ 1 and A_0 := E_0, and notice that these sets are pairwise disjoint and thus define a partition with E_t = ∪_{s=0}^t A_s. We now consider the partition matroid M_P = (E, I_P) defined as I_P := {I ⊆ E : |I ∩ A_s| ≤ k_s for all s ≤ t}. Additionally, let R(·) denote the weighted rank of the matroid M, where M is the matroid given by the input of the Robust Matroid problem. By Lemma 2.5 the function R is submodular, and it is clearly non-decreasing. With the same observations as in the last section, we obtain that our problem is equivalent to Expression (2.3).
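The greedy rule of Algorithm FNW admits a compact sketch. The following is our own illustrative rendering, assuming only an oracle `R` for a monotone submodular function given as a Python callable on sets:

```python
def fnw_greedy(blocks, mults, R):
    """Algorithm FNW: for the i-th block of the partition matroid, greedily
    add (up to) mults[i] elements of largest marginal gain R(I+f) - R(I)."""
    I = frozenset()
    for block, m in zip(blocks, mults):
        avail = sorted(set(block))  # deterministic tie-breaking
        for _ in range(min(m, len(avail))):
            f = max(avail, key=lambda e: R(I | {e}) - R(I))
            I |= {f}
            avail.remove(f)
    return I
```

As a small check, one can take R to be the weighted rank of a uniform matroid of rank 2 (the sum of the two largest weights in the set), which is monotone and submodular, exactly the kind of function R to which Theorem 2.31 is applied above.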
Moreover, Algorithm Robust-Greedy mimics Algorithm FNW, since in each iteration t we pick a swap maximizing the gain among all swaps inserting an element of A_t. As shown in Lemma 2.6, this is equivalent to choosing f maximizing R(I + f) − R(I). We conclude that Theorem 2.23 follows from Theorem 2.31.

Robust Multi-Stage Matroid Intersection

In this section we study the Robust Multi-Stage Matroid problem under several matroid constraints. As described above, this setting applies to a large number of combinatorial optimization problems. We first consider the problem for the intersection of two matroids. The same proofs can be directly applied to the intersection of an arbitrary number of matroids, with a loss in the competitive ratio of the algorithms. More details about this extension are given in Section 2.4.4. Additionally, in that section we optimize one of the parameters of our algorithm. This yields marginally better competitive ratios, even in the setting with two matroids.

(We say that an element e′ is a duplicate of e ∈ E if {e, e′} is dependent and I + e ∈ I if and only if I + e′ ∈ I for all I ⊆ E with e, e′ ∉ I.)

Problem Definition

We define an instance of the Robust Multi-Stage Matroid Intersection problem as follows. Let E be a set of elements and consider a non-negative weight function w : E → R_{≥0}. Consider also two matroids M = (E, I) and N = (E, J) that are unknown to the online algorithm. We are given an initial set of elements E_0 and an initial solution I_0 ⊆ E_0 so that I_0 ∈ I ∩ J. In each stage t ∈ {1, ..., n}, a subset E_t ⊆ E is revealed so that E_{t-1} ⊆ E_t. Moreover, for each stage t we are given a value k_t denoting how many elements can be added to our solution in stage t. We are looking for a sequence of sets I_1, ..., I_n so that for each t ∈ {1, ..., n}

• I_t ⊆ E_t belongs to I ∩ J, and
• |I_t \ I_{t-1}| ≤ k_t.

A sequence I_1, ..., I_n satisfying these two properties for all t ∈ {1, . . .
, n} is said to be robust. Let I*_1, ..., I*_n be the robust sequence that maximizes w(I*_n). For a given α ≥ 1, an online algorithm is said to be α-competitive against robust solutions if for any instance it computes a sequence I_1, ..., I_n satisfying w(I_n) ≥ w(I*_n)/α. In the next section we formalize the notion of swap sequences in the context of two matroids and discuss some of their basic properties.

Swap Sequences for the Intersection of Two Matroids

In what follows we extend the concept of a swap sequence, introduced in Section 2.3.2, to the matroid intersection setting. Assume that we are given two matroids M = (E, I), N = (E, J) and a weight function w : E → R_{≥0}. Without loss of generality, we assume that E contains a dummy element ε that is a loop for both matroids and has zero weight. Moreover, we remove any other element of zero weight in E. This clearly does not change the optimal solutions. Given a set I ⊆ E and three elements f, g, h ∈ E, we extend the definition of I − g + f (Section 2.2) to the matroid intersection setting as follows:

I − g − h + f := (I − g + f) − h + f.

We note that we will only use this definition in the following cases: (1) f = g = h, and then I − g − h + f = I; (2) f ≠ h, f ≠ g, and hence I − g − h + f = (I \ {g, h}) ∪ {f}. For a set I ∈ I ∩ J and e ∈ E we denote by C_M(I, e) the circuit for matroid M as given by Definition 2.7 (Section 2.3.2). Similarly, C_N(I, e) is the corresponding circuit for matroid N.

Definition 2.32 (Generalized Swap). Given a set I ∈ I ∩ J, a triple of elements (f, g, h) ∈ E × E × E is a generalized swap (or simply a swap if it is clear from the context) for I if

• g ∈ C_M(I, f) and h ∈ C_N(I, f), and
• g = f if and only if h = f.

We say that the swap (f, g, h) applied to I yields the set I − g − h + f. Moreover, the gain of the swap is defined as ∆(f, g, h) = w(f) − w(g) − w(h).
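The set update and the gain of Definition 2.32 are easy to transcribe directly; the following minimal sketch (our own, with the trivial case f = g = h special-cased as in the text) makes the bookkeeping explicit:

```python
def apply_swap(I, f, g, h):
    """Return I - g - h + f; the trivial swap f = g = h leaves I unchanged."""
    if f == g == h:
        return set(I)
    return (set(I) - {g, h}) | {f}

def gain(w, f, g, h):
    """Gain of the swap: Delta(f, g, h) = w(f) - w(g) - w(h)."""
    return w[f] - w[g] - w[h]
```

Note that when g = h ≠ f only one element is actually removed, so the true weight change of the solution can exceed the gain; this is the inequality w(I − g − h + f) − w(I) ≥ ∆(f, g, h) used in the analysis.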
[Figure 2.1: Example showing that an online algorithm cannot determine the optimal way of removing elements. On the left a graphic matroid corresponding to M is shown, and the right picture depicts the graphic matroid N; the edge weights are w(h_1) = w(g_1) = 1, w(f_1) = 2 + ε, and w(f_2) = 2.]

It is not hard to check that if I ∈ I ∩ J and (f, g, h) is a swap for I, then I − g − h + f ∈ I ∩ J and I − g − h + f = I − h − g + f. Notice that the condition "g = f if and only if h = f" is necessary to guarantee that I − g − h + f ∈ I ∩ J. Otherwise, if for example g = f and h ≠ f, then I − g − h + f equals I − h + f, which might not belong to I if h ∉ C_M(I, f). Additionally, we remark that the change of weight of the solution I when applying the swap (f, g, h) might not be equal to ∆(f, g, h) (this happens if g = h). However, it does hold that

w(I − g − h + f) − w(I) ≥ ∆(f, g, h),

and this inequality is all we need to show our results.

Definition 2.33 (Generalized Swap Sequence). Consider a sequence S = (f_t, g_t, h_t)_{t=1}^k and a set I ∈ I ∩ J. Iteratively define I_0 = I and I_t(S) = I_{t-1}(S) − g_t − h_t + f_t for all t ∈ {1, ..., k}. We say that S is a generalized swap sequence (or simply a swap sequence) if (f_t, g_t, h_t) is a generalized swap for I_{t-1}(S) for all t ∈ {1, ..., k}.

Given a swap sequence S = (f_t, g_t, h_t)_{t=1}^k for I, we denote by S_M := (f_t, g_t)_{t=1}^k and S_N := (f_t, h_t)_{t=1}^k the two corresponding sequences for matroids M and N, respectively. We remark that S_M and S_N may not be swap sequences for the corresponding single matroid problem. Indeed, notice that I_t(S_M) is a superset of I_t(S), since the sequence S also removes the elements h_1, ..., h_t, and thus I_t(S_M) may contain circuits for M. However, the swap sequences that our online algorithm constructs will have the property that S_M and S_N are swap sequences for matroids M and N, respectively.
Recall that in the single matroid case, given a sequence of elements f_1, ..., f_k to be inserted, removing elements greedily maximizes the total gain of a swap sequence. That is, choosing a sequence S = (f_t, g_t)_{t=1}^k that is greedy-removing is best possible given the elements f_1, ..., f_k. Moreover, several of the properties derived in Section 2.3.2 need as hypothesis that the considered sequences are greedy-removing. For the matroid intersection problem it is not necessarily best possible to greedily choose the elements to be removed. Furthermore, no online algorithm can determine these elements so that w(I_k(S)) is maximized. This fact is shown in the following simple example. Consider the two graphic matroids depicted in Figure 2.1, and assume that I_0 = E_0 = {g_1, h_1}. The weights of the edges are w(h_1) = w(g_1) = 1, w(f_1) = 2 + ε, and w(f_2) = 2. When edge f_1 appears, an online algorithm must decide whether f_1 is to be inserted, and in that case it must apply the swap (f_1, g_1, h_1) (otherwise it can apply (f_1, f_1, f_1) and leave the solution unchanged). However, since this swap has positive gain, the algorithm must apply it to maintain an optimal solution in case no more elements arrive. When element f_2 appears, the online algorithm will not add f_2, since this would decrease the total weight. Thus, the total weight of the solution given by the online algorithm is 2 + ε. On the other hand, the optimal swap sequence is ((f_1, f_1, f_1), (f_2, g_1, g_1)), which yields a solution of weight 3. This example shows that we cannot determine an optimal way of choosing which elements to remove online. It is then necessary to determine a rule for choosing which elements to remove, so that the total weight of the independent set after applying the swap sequence is large enough when compared to the optimal way of removing elements. For this we will consider a swap only if its gain is relatively large.
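The arithmetic of the Figure 2.1 example can be checked directly. Here ε is instantiated as a small positive constant purely for illustration:

```python
# Weights from the Figure 2.1 example; EPSILON stands in for the small eps.
EPSILON = 0.01
w = {'g1': 1, 'h1': 1, 'f1': 2 + EPSILON, 'f2': 2}

# Online: when f1 arrives, the positive-gain swap (f1, g1, h1) must be
# applied (in case nothing else arrives), leaving the solution {f1}.
# Adding f2 afterwards would decrease the weight, so it is rejected.
online = w['f1']

# Offline optimum: leave f1 out, then swap f2 for g1 only, keeping {h1, f2}.
optimal = w['h1'] + w['f2']

assert online < optimal  # 2 + eps < 3
```

The gap shows no online rule for choosing the removed elements can always match the optimal swap sequence, which motivates the lazy-removing rule introduced next.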
This will guarantee that this swap was really valuable and that future losses implied by performing these swaps are bounded. Moreover, we would like the obtained sequence S to satisfy that S_M and S_N are greedy-removing swap sequences, so that we can apply the knowledge gained in the single matroid case. For this we consider the following definition. We notice that a similar idea is used by Feigenbaum et al. [FKM+04] for the maximum weight matching problem in the semi-streaming model.

Definition 2.34 (Lazy-Removing Swap Sequences). Consider a generalized swap sequence S = (f_t, g_t, h_t)_{t=1}^k for a set I. Let ḡ_t be an element of minimum weight in C_M(I_{t-1}(S_M), f_t) and h̄_t an element of minimum weight in C_N(I_{t-1}(S_N), f_t). We say that S is lazy-removing if for all t ∈ {1, ..., k} we have that:

• if w(f_t) ≥ 2 · (w(ḡ_t) + w(h̄_t)), then
g_t ∈ arg min{w(e) : e ∈ C_M(I_{t-1}(S_M), f_t)}, and
h_t ∈ arg min{w(e) : e ∈ C_N(I_{t-1}(S_N), f_t)};
• otherwise, g_t = h_t = f_t.

Given elements f_1, ..., f_k, it is easy to construct a sequence S = (f_t, g_t, h_t)_{t=1}^k that is lazy-removing: for each t compute ḡ_t and h̄_t; if w(f_t) ≥ 2 · (w(ḡ_t) + w(h̄_t)), then set g_t := ḡ_t and h_t := h̄_t, otherwise g_t = h_t = f_t. We notice that the sequence generated is a generalized swap sequence. Indeed, we only need to guarantee that for all t ∈ {1, ..., k}, f_t = g_t if and only if h_t = f_t. Assume by contradiction that for some t this does not happen, that is, g_t ≠ f_t and h_t = f_t (the case g_t = f_t and h_t ≠ f_t is symmetric). Then we know that w(f_t) ≥ 2 · (w(g_t) + w(h_t)), and since f_t = h_t we conclude that 0 ≥ 2w(g_t) + w(h_t). Since we are assuming that no element but ε has zero weight, we conclude that g_t = h_t = f_t = ε, which contradicts our assumption. We have shown the following.

Observation 2.35. Consider a set I ∈ I ∩ J and elements f ∈ E, g ∈ C_M(I, f) and h ∈ C_N(I, f). If w(f) ≥ 2(w(g) + w(h)), then (f, g, h) is a generalized swap for set I.
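The lazy-removing rule of Definition 2.34, combined with the greedy choice the algorithm will make later, can be sketched as follows. This is our own illustration, not the thesis's pseudocode: the circuit oracles `circuit_M` and `circuit_N` (returning the circuit of I + f, or {EPS} when I + f is independent) and the dummy element are stand-ins for the abstractions assumed in the text.

```python
EPS = 'eps'  # dummy zero-weight loop assumed to be present in E

def best_lazy_swap(I, candidates, circuit_M, circuit_N, w):
    """For each candidate f, take g and h of minimum weight in the two
    circuits of I + f; keep (f, g, h) only if w(f) >= 2(w(g) + w(h)),
    and return a qualifying swap of maximum gain w(f) - w(g) - w(h).
    The trivial swap (EPS, EPS, EPS) is always available."""
    best, best_gain = (EPS, EPS, EPS), 0
    for f in sorted(candidates):
        g = min(sorted(circuit_M(I, f)), key=lambda e: w[e])
        h = min(sorted(circuit_N(I, f)), key=lambda e: w[e])
        if w[f] >= 2 * (w[g] + w[h]):
            d = w[f] - w[g] - w[h]
            if d > best_gain:
                best, best_gain = (f, g, h), d
    return best
```

Elements whose weight falls below the 2(w(g) + w(h)) threshold are skipped, which is exactly the "mistake" charged against the optimum in the analysis below.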
Thus, given elements f_1, ..., f_n, it is possible to construct online a generalized swap sequence S = (f_t, g_t, h_t)_{t=1}^k that is lazy-removing.

Let us remark that in the definition of lazy-removing, the element ḡ_t belongs to the circuit C_M(I_{t-1}(S_M), f_t) and not necessarily to C_M(I_{t-1}(S), f_t) (similarly for h̄_t). In the case where w(f_t) ≥ 2 · (w(ḡ_t) + w(h̄_t)), also g_t belongs to C_M(I_{t-1}(S_M), f_t) instead of C_M(I_{t-1}(S), f_t). This implies that we are removing more elements than necessary to guarantee that I_t(S) is in I ∩ J. However, taking g_t and h_t this way guarantees that S_M and S_N are swap sequences for their respective matroids.

Observation 2.36. Let S be a generalized lazy-removing swap sequence. Then S_M and S_N are swap sequences for matroids M and N, respectively.

Strictly speaking, the sequences S_M and S_N are not greedy-removing. Indeed, for S_M there may be a swap (f_t, g_t) where f_t = g_t and f_t is not of minimum weight in C_M(I_{t-1}(S), f_t). Similarly for S_N. However, such a swap does not modify the solution, and therefore we can skip it without changing the overall effect of the sequence. With this observation it is easy to check that all results in Sections 2.3.2 and 2.3.3 also hold for the sequences S_M and S_N when applied appropriately. This observation will be useful for the analysis of our online algorithm, because it will allow us to reuse the properties already derived for the single matroid problem.

Lazy-removing swap sequences have the important property of being constant competitive in the following sense. Consider a given set of elements f_1, ..., f_k to be added, and let S be a lazy-removing sequence that inserts these elements. Then w(I_k(S)) ≥ w(I_k(S*))/7, where S* is a generalized sequence of the form (f_t, g*_t, h*_t)_{t=1}^k maximizing w(I_k(S*)). The overall idea to show this is as follows.
Notice that S can make two types of mistakes: (1) either it does not add an element f_t that S* inserts (and thus it takes the swap (f_t, f_t, f_t)), or (2) it removes an element that belongs to I_k(S*). Since a lazy-removing sequence only removes elements of relatively small weight, we will be able to show that the total weight of the removed elements is upper bounded by w(I_k(S)). On the other hand, an element f_t that is never added to the solution satisfies w(f_t) ≤ 2(w(ḡ_t) + w(h̄_t)) (where ḡ_t and h̄_t were introduced in the definition of lazy-removing). This will allow us to charge the cost of these elements to elements of I_k(S*). We devote the rest of this section to showing what was just described. The proof will be divided into showing two main properties, corresponding to the two types of mistakes just described.

We start by showing that the total weight of the elements removed by a lazy-removing sequence S is never more than w(I_k(S)). To make this statement more precise, consider the following definition. Let S = (f_t, g_t, h_t)_{t=1}^k be a lazy-removing swap sequence. We distinguish swaps of two types. For this, consider the set

R := {t ∈ {1, ..., k} : g_t ≠ f_t and h_t ≠ f_t}.

In other words, the swaps (f_t, g_t, h_t) with t ∈ R are the swaps that do not leave the solution unchanged. We call swaps with indices in R non-trivial. The rest of the swaps (i.e., swaps of the form (f_t, f_t, f_t)) are said to be trivial.

Lemma 2.37. Let S = (f_t, g_t, h_t)_{t=1}^k be a lazy-removing sequence for a set I, and let R be the set of indices of non-trivial swaps in S. Then

w({g_t, h_t}_{t∈R}) ≤ w({f_t}_{t∈R} ∩ I_k(S)).

Proof. Let us define R_t = R ∩ {1, ..., t}. To show the inequality of the lemma we use a charging argument. For each t ∈ {1, . . .
, k}, we will construct a function Φ_t : {g_s, h_s}_{s∈R_t} → {f_s}_{s∈R_t} ∩ I_t(S) such that w(Φ_t⁻¹({f_s})) ≤ w(f_s) for all s ∈ R_t with f_s ∈ I_t(S); that is, we charge the weight of each element in {g_s, h_s}_{s∈R_t} to some element in {f_s}_{s∈R_t} ∩ I_t(S). Notice that this immediately implies the lemma, since

w({g_s, h_s}_{s∈R}) = w({g_s, h_s}_{s∈R_k}) = Σ_{s∈R_k : f_s∈I_k(S)} w(Φ_k⁻¹({f_s})) ≤ Σ_{s∈R_k : f_s∈I_k(S)} w(f_s) = w(I_k(S) ∩ {f_t}_{t∈R_k}) = w(I_k(S) ∩ {f_t}_{t∈R}).

We show the existence of the functions Φ_t by induction. We start with the existence of Φ_1. If the first swap of S is trivial, then the domain of Φ_1 is empty and thus the claim follows. Otherwise, we have that w(f_1) ≥ 2(w(g_1) + w(h_1)) ≥ w(g_1) + w(h_1), and thus the claim follows by defining Φ_1(g_1) = Φ_1(h_1) = f_1. Assume now by induction hypothesis that we have constructed the function Φ_{t-1}. We show how to construct Φ_t by modifying Φ_{t-1}. We can assume that (f_t, g_t, h_t) is non-trivial, otherwise we can choose Φ_t = Φ_{t-1}. First note that Φ_t must assign g_t and h_t to some element in {f_s}_{s∈R_t} ∩ I_t(S). Moreover, if g_t and h_t belong to {f_s}_{s∈R_{t-1}} ∩ I_{t-1}(S), then all the elements in Φ_{t-1}⁻¹({g_t}) (respectively, Φ_{t-1}⁻¹({h_t})) must be assigned to other elements by Φ_t. In this case we assign all these elements to f_t. More precisely, we define Φ_t(e) = f_t for all e ∈ Φ_{t-1}⁻¹({g_t}) ∪ Φ_{t-1}⁻¹({h_t}) ∪ {g_t, h_t}, and Φ_t(e) = Φ_{t-1}(e) for the rest of the elements in {g_s, h_s}_{s∈R_t}. To check that the weight assigned to f_t is at most w(f_t), notice that

w(Φ_t⁻¹({f_t})) = w(Φ_{t-1}⁻¹({g_t})) + w(Φ_{t-1}⁻¹({h_t})) + w(g_t) + w(h_t) ≤ 2(w(g_t) + w(h_t)) ≤ w(f_t),

where the first inequality follows by the induction hypothesis and the second since S is lazy-removing and the swap (f_t, g_t, h_t) is non-trivial. This completes the construction of Φ_t for all t ∈ {1, ..., k}, and thus the lemma follows by the argumentation above.
In what follows we show that the total weight of the elements not added by a lazy-removing swap sequence, that is, elements f_t with t ∉ R, is bounded. For this we extend to the online case an important tool often used to analyze algorithms for matroid intersection. Consider a set I ∈ I ∩ J that is locally optimal, that is, any generalized swap for I has non-positive gain. Then I is a 2-approximate solution for the (offline) Maximum Weight Matroid Intersection problem. This property was proven by Skutella and Reichel [RS08] and later extended by Lee, Sviridenko, and Vondrák [LSV10] to the problem of maximizing a submodular function under matroid constraints. We will extend this technique to our setting, and therefore we first shortly present the classic results.

The main technique to show such a property is a charging argument: if an element f belongs to the optimal solution but not to I, then we can charge the weight of f to the weight of two elements of I, each belonging to one of the two circuits in I + f. The factor 2 follows by showing that no element of I has to pay for more than two elements of the optimum. To show that this is indeed possible we need the following lemma.

Lemma 2.38 (Corollary 39.12a in [Sch03]). Consider a matroid M = (E, I), and let B_1 and B_2 be two bases of M. Then there exists a bijection Ψ : B_1 \ B_2 → B_2 \ B_1 so that Ψ(e) ∈ C(B_2, e) for all e ∈ B_1 \ B_2.

Using this result we show that the charging argument described above can indeed be applied. In the next lemma, I represents any independent set, F is the set of elements that can be charged to the elements of I (and thus F = E if I is locally optimal), and J plays the role of an optimal solution.

Lemma 2.39. Given two matroids M = (E, I) and N = (E, J), consider a set I ∈ I ∩ J. For any e ∈ E, define g(e) and h(e) to be minimum weight elements in C_M(I, e) and C_N(I, e), respectively.
We define the set F = {e : w(e) ≤ w(g(e)) + w(h(e))}. For any set J ⊆ F with J ∈ I ∩ J it holds that w(J \ I) ≤ 2 · w(I \ J).

Proof idea. Using the previous lemma we can construct two functions Ψ_M and Ψ_N so that for each e ∈ J \ I, Ψ_M(e) ∈ C_M(I, e) and Ψ_N(e) ∈ C_N(I, e). Since J ⊆ F, we have w(e) ≤ w(Ψ_M(e)) + w(Ψ_N(e)). Summing over all e ∈ J \ I and recalling that Ψ_M and Ψ_N are injective, we get that w(J \ I) ≤ 2 · w(I \ J).

Notice that the last lemma directly implies that any locally optimal solution I is 2-approximate. Indeed, assume that ∆(f, g, h) ≤ 0 for all generalized swaps (f, g, h) for I, and thus E = F. For an optimal solution I*, the set I* \ I is a subset of F, and thus w(I* \ I) ≤ 2w(I \ I*). Therefore w(I*) ≤ 2w(I), and thus I is a 2-approximate solution.

We now extend the charging argument of the last lemma to the online case. For this consider the following definition. Let S = (f_t, g_t, h_t)_{t=1}^k be a lazy-removing swap sequence for I. For a given element e ∈ E, let g_t(e) and h_t(e) be minimum weight elements in C_M(I_{t-1}(S_M), e) and C_N(I_{t-1}(S_N), e), respectively. We now consider the set of all elements e whose weight is relatively small in comparison to w(g_t(e)) and w(h_t(e)) for some t. These elements can then be charged to elements of I_t(S) for some stage t. More precisely, consider

F = {e ∈ E : w(e) ≤ 2 · (w(g_t(e)) + w(h_t(e))) for some t = 1, ..., k}.

The next lemma, which is an extension of Lemma 2.39 to the online case, shows that the weight of any independent set that is a subset of F is bounded. Notice that if f_t ∈ F, then f_t is an element that a lazy-removing sequence could have added but chose to perform a trivial swap instead. For our analysis later we will only need to consider elements of F that are equal to some f_t, but we state the lemma in general nonetheless.

Lemma 2.40.
Let S = (f_t, g_t, h_t)_{t=1}^k be a generalized lazy-removing sequence for I with set of non-trivial swaps R, and let F be the set defined above. For any J ⊆ F \ {g_t, h_t}_{t∈R} so that J ∈ I ∩ J, we have that

w(J \ I_k(S)) ≤ 4 · w(I_k(S) \ J) + 2 · w(I_k(S)).

Let us fix a set J as in the statement of the previous lemma, that is, J is independent for both matroids and J ⊆ F \ {g_t, h_t}_{t∈R}. To show the lemma we charge the elements of J \ I_k(S) to elements of I_k(S). For that we use Lemma 2.38 as follows. Extend the set J to a basis B_1 of M. Since S_M is a swap sequence for I on matroid M, we can also extend I_k(S_M) to a basis B_2 of M. Thus, Lemma 2.38 implies that there exists a bijection Ψ_M : B_1 \ B_2 → B_2 \ B_1 satisfying Ψ_M(e) ∈ C_M(B_2, e) for all e ∈ B_1 \ B_2. An analogous construction gives a function Ψ_N for matroid N. The following property will help us translate the charging of an element e ∈ F from g_t(e) and h_t(e) (which may not be in I_k(S)) to Ψ_M(e) and Ψ_N(e). We state the lemma only for Ψ_M.

Lemma 2.41. Consider the set J and the function Ψ_M as just defined. For any e ∈ J and t ∈ {1, ..., k}, let g_t(e) be a minimum weight element in the circuit C_M(I_{t-1}(S_M), e). Then we have that either

• w(g_t(e)) ≤ w(Ψ_M(e)) and Ψ_M(e) ∈ I_k(S_M), or
• w(g_t(e)) = 0.

Proof. Consider an element e ∈ J \ I_k(S_M), and recall that Ψ_M : B_1 \ B_2 → B_2 \ B_1, where J ⊆ B_1 and I_k(S_M) ⊆ B_2. To show the claim we distinguish two cases.

1. If I_k(S_M) + e is not in I, then C_M(I_k(S_M), e) = C_M(B_2, e), and thus Ψ_M(e) ∈ I_k(S_M). Therefore we conclude that Ψ_M(e) ∈ C_M(I_k(S_M), e) and hence w(g_k(e)) ≤ w(Ψ_M(e)). Recalling that S_M is greedy-removing when omitting swaps not in R, Lemma 2.14 implies that w(g_t(e)) ≤ w(g_k(e)) ≤ w(Ψ_M(e)). Thus, the claim holds in this case.

2. If I_k(S_M) + e belongs to I, then C_M(I_k(S_M), e) = {ε}. Again, Lemma 2.14 implies that w(g_t(e)) ≤ w(ε) = 0.

The lemma follows.

With this result we are ready to show Lemma 2.40.
Proof (Lemma 2.40). Recall that J ⊆ F. Thus, by the definition of F, for all e ∈ J there exists t_e ∈ {1, ..., k} such that w(e) ≤ 2 · (w(g_{t_e}(e)) + w(h_{t_e}(e))). To simplify notation we write g_{t_e} = g_{t_e}(e) and h_{t_e} = h_{t_e}(e). Therefore we have the following upper bound:

w(J \ I_k(S)) ≤ Σ_{e∈J\I_k(S)} 2 · (w(g_{t_e}) + w(h_{t_e})).   (2.5)

We now bound the right-hand side of this inequality by using the previous lemma.

Claim: The following bound holds:

Σ_{e∈J\I_k(S)} w(g_{t_e}) ≤ w(I_k(S_M) \ J).   (2.6)

To show the claim, first notice that I_k(S_M) \ {h_t}_{t∈R} = I_k(S), and since by the definition of J we have J ∩ {h_t}_{t∈R} = ∅, then

J \ I_k(S) = J \ I_k(S_M), and thus Σ_{e∈J\I_k(S)} w(g_{t_e}) = Σ_{e∈J\I_k(S_M)} w(g_{t_e}).

Recall that Ψ_M : B_1 \ B_2 → B_2 \ B_1 is a bijection, where J ⊆ B_1 and I_k(S_M) ⊆ B_2. Lemma 2.41 implies the following bound:

Σ_{e∈J\I_k(S)} w(g_{t_e}) = Σ_{e∈J\I_k(S_M)} w(g_{t_e}) ≤ Σ_{e∈J\I_k(S_M)} w(Ψ_M(e)) ≤ w(I_k(S_M) \ B_1) ≤ w(I_k(S_M) \ J).

This shows the claim. Analogously, for matroid N we obtain

Σ_{e∈J\I_k(S)} w(h_{t_e}) ≤ w(I_k(S_N) \ J).   (2.7)

Combining Inequalities (2.5), (2.6), and (2.7), we conclude that

w(J \ I_k(S)) ≤ 2 · (w(I_k(S_M) \ J) + w(I_k(S_N) \ J)) ≤ 2 · (2 · w(I_k(S) \ J) + w({g_t, h_t}_{t∈R})),

where the last inequality follows since I_k(S_M) ⊆ I_k(S) ∪ {h_t}_{t∈R} and I_k(S_N) ⊆ I_k(S) ∪ {g_t}_{t∈R}. The lemma follows since, by Lemma 2.37, w({g_t, h_t}_{t∈R}) ≤ w(I_k(S)).

Finally, with Lemmas 2.37 and 2.40 we show that lazy-removing sequences yield solutions that are close to optimal given the elements to be added. This justifies the definition of lazy-removing sequences. More precisely, we can restrict ourselves to lazy-removing sequences while decreasing the objective function by only a constant factor.

Theorem 2.42. Let S = (f_t, g_t, h_t)_{t=1}^k and S* = (f*_t, g*_t, h*_t)_{t=1}^k be two generalized swap sequences for I ∈ I ∩ J such that f_t = f*_t for all t ∈ {1, ..., k}.
If S is lazy-removing, then w(I_k(S)) ≥ w(I_k(S*))/7.

Proof. Let R be the set of indices of non-trivial swaps of S. Note that the elements of I_k(S*) \ I_k(S) can be classified into two types: (1) elements that are not in I_k(S) because they were removed by the sequence S, i.e., elements in F_1 = {g_t, h_t}_{t∈R}, and (2) elements of I_k(S*) that were never introduced into the solution by S, i.e., the set {f_t}_{t∉R} ∩ I_k(S*). For this second case it is enough to consider the elements in F_2 = [{f_t}_{t∉R} ∩ I_k(S*)] \ F_1. With the previous observation we have that I_k(S*) \ I_k(S) ⊆ F_1 ∪ (F_2 \ I_k(S)). Thus,

w(I_k(S*) \ I_k(S)) ≤ w(F_1) + w(F_2 \ I_k(S)).

Lemmas 2.37 and 2.40 imply that

w(I_k(S*) \ I_k(S)) ≤ 4w(I_k(S) \ I_k(S*)) + 3w(I_k(S)).

Adding w(I_k(S*) ∩ I_k(S)) to both sides of the last inequality yields

w(I_k(S*)) ≤ w(I_k(S)) + 3w(I_k(S) \ I_k(S*)) + 3w(I_k(S)) ≤ 7w(I_k(S)).

Competitive Analysis for Matroid Intersection

Similarly to the single matroid case, we first consider a special type of instance where we are allowed to insert at most one element per stage into the solution.

Definition 2.43 (Regular Instances). An instance of the Robust Multi-Stage Matroid Intersection problem is said to be regular if k_t = 1 for all t ∈ {1, ..., n}.

We first propose an online algorithm for the Robust Multi-Stage Matroid Intersection problem in the case of regular instances. Notice that, without loss of generality, any robust solution to this problem corresponds to a generalized swap sequence S = (f_t, g_t, h_t)_{t=1}^n for the initial set I_0 (this follows by the same argument as in the proof of Lemma 2.18). Moreover, the robust sequence of independent sets corresponding to this solution is I_1(S), ..., I_n(S). Now we present our algorithm for regular instances. The algorithm chooses generalized swaps greedily so as to obtain a lazy-removing sequence.

Algorithm Robust-Lazy-Intersection (Regular Instance)

For any stage t ∈ {1, .
. . , n}, assume that we have constructed a swap sequence S = (f_s, g_s, h_s)_{s=1}^{t-1}. We append a new swap to S as follows.

1. Consider the set T of all generalized swaps (f, g, h) ∈ E_t × E_t × E_t for I_{t-1}(S) satisfying
(i) element g is of minimum weight in C_M(I_{t-1}(S_M), f), element h is of minimum weight in C_N(I_{t-1}(S_N), f), and
(ii) w(f) ≥ 2 · (w(g) + w(h)).
2. Let (f_t, g_t, h_t) be any swap of maximum gain in T.
3. Append (f_t, g_t, h_t) to S and return I_t := I_t(S).

We notice that the algorithm is well-defined since T is never empty; indeed, (ε, ε, ε) ∈ T at every point of the algorithm. To analyze this algorithm and show that it has a constant competitive factor we follow a similar technique as in Section 2.3.3. For this, notice that the sequence computed by the algorithm is lazy-removing. Similarly as before, we say that a swap sequence S = (f_t, g_t, h_t)_{t=1}^n is robust feasible (or simply feasible) if f_t ∈ E_t for all t ∈ {1, ..., n}. By Theorem 2.42 we can compare the solution given by the algorithm to a simpler solution, namely, the feasible lazy-removing swap sequence S that maximizes w(I_n(S)). By doing so we only lose a factor 7 in the competitive factor. We will make one further simplification to the optimal solution. Recall that in general w(I_n(S*)) ≥ w(I_0(S*)) + ∆(S*). We will compare against the lazy-removing feasible swap sequence S* maximizing w(I_0(S*)) + ∆(S*) instead of w(I_n(S*)). We first show that this can only increase the competitive ratio by an extra factor of two. In the lemma we also show that any element introduced by S* equals ε or is never removed.

Lemma 2.44. Let S̄ be a robust feasible swap sequence maximizing w(I_n(S̄)). There exists a robust feasible swap sequence S* = (f*_t, g*_t, h*_t)_{t=1}^n that is lazy-removing and satisfies

• f*_t ∈ I_n(S*) ∪ {ε} for all t ∈ {1, ..., n}, and
• w(I_0) + ∆(S*) ≥ w(I_n(S̄))/14.

Proof.
Let S* = (f*_t, g*_t, h*_t)_{t=1}^n be any feasible swap sequence that maximizes w(I_0) + ∆(S*) among all lazy-removing sequences. We first modify S* to make it satisfy the first property of the lemma. Our modifications do not change w(I_0) + ∆(S*) and yield a new sequence S* that is still lazy-removing. We start by replacing trivial swaps by (ε, ε, ε), which cannot decrease ∆(S*). After this, the swap sequences S*_M and S*_N are greedy-removing for their respective matroids. Similarly, for any element f*_t ∉ I_n(S*) we exchange the swap (f*_t, g*_t, h*_t) for (ε, ε, ε) and greedily update the elements g*_s and h*_s for s ≥ t + 1. This is equivalent to omitting the swap (f*_t, g*_t, h*_t), and thus Lemma 2.14 implies that the weights of g*_s and h*_s for s ≥ t + 1 cannot increase. We conclude that any non-trivial swap (f*_s, g*_s, h*_s) for s ≥ t + 1 still satisfies w(f*_s) ≥ 2(w(g*_s) + w(h*_s)), and thus S* is still lazy-removing. For the same reason ∆(S*) cannot decrease. Thus, the first property of the lemma follows.

We now show the second property. Let S′ = (f′_t, g′_t, h′_t)_{t=1}^n be the feasible lazy-removing sequence maximizing w(I_n(S′)). Notice that by Theorem 2.42 we have that w(I_n(S′)) ≥ w(I_n(S̄))/7. Now we bound w(I_0) + ∆(S′) in terms of w(I_n(S′)). To this end we first modify S′ so that any trivial swap (f′_t, g′_t, h′_t) equals (ε, ε, ε). Again, this does not modify the overall solution I_n(S′). Thus, we have that w(f′_t) ≥ 2(w(g′_t) + w(h′_t)) for each t ∈ {1, ..., n}, and therefore

∆(f′_t, g′_t, h′_t) ≥ w(f′_t)/2 ≥ (w(I_t(S′)) − w(I_{t-1}(S′)))/2.

Summing the last expression over all t ∈ {1, ..., n} implies that w(I_0) + ∆(S′) ≥ w(I_n(S′))/2. Combining the inequalities above, we obtain

w(I_0) + ∆(S*) ≥ w(I_0) + ∆(S′) ≥ w(I_n(S′))/2 ≥ w(I_n(S̄))/14.

The next lemma follows the same lines as Lemma 2.19: given a swap sequence, exchanging the first swap for another one decreases the total gain of the swap sequence only in a limited way. Iterating this argument will help us show the bound on the competitive ratio of Robust-Lazy-Intersection.

Lemma 2.45. Consider a set I ∈ I ∩ J and elements f, e_1, ..., e_k. Assume we are given two swap sequences for the set I,

S_M = (e_t, d_t)_{t=1}^k and S_N = (e_t, c_t)_{t=1}^k,

that are greedy-removing swap sequences for matroids M and N, respectively. Consider also the modified swap sequences for the set I,

S′_M = ((f, g), (e_2, d′_2), ..., (e_k, d′_k)) and S′_N := ((f, h), (e_2, c′_2), ..., (e_k, c′_k)),

where (d′_2, ..., d′_k) and (c′_2, ..., c′_k) are chosen so that these two sequences are greedy-removing for matroids M and N, respectively. Then,

∆(f, g, h) + Σ_{t=2}^k ∆(e_t, d′_t, c′_t) ≥ [Σ_{t=1}^k ∆(e_t, d_t, c_t)] − ∆(e_1, d_1, c_1) − w(f).

Proof. By Lemma 2.19 we have that ∆(S′_M) ≥ ∆(S_M) − ∆(e_1, d_1). Adding −w(h) − Σ_{t=2}^k w(c′_t) to both sides of this inequality we obtain

∆(f, g, h) + Σ_{t=2}^k ∆(e_t, d′_t, c′_t) ≥ [Σ_{t=2}^k ∆(e_t, d_t, c_t)] − w(h) + Σ_{t=2}^k (w(c_t) − w(c′_t)).   (2.8)

We now find a lower bound on −w(h) + Σ_{t=2}^k (w(c_t) − w(c′_t)). For this, notice that Lemma 2.19 implies that ∆(S′_N) ≥ ∆(S_N) − ∆(e_1, c_1), and thus

w(f) − w(h) + Σ_{t=2}^k (w(e_t) − w(c′_t)) ≥ Σ_{t=2}^k (w(e_t) − w(c_t)).

We conclude that

−w(h) + Σ_{t=2}^k (w(c_t) − w(c′_t)) ≥ −w(f).

Combining this last expression with Inequality (2.8) yields the result of the lemma.

We can now show the main technical result of this section.

Lemma 2.46. Let S = (f_t, g_t, h_t)_{t=1}^n be the swap sequence computed by Robust-Lazy-Intersection, and let S* = (f*_t, g*_t, h*_t)_{t=1}^n be the robust feasible swap sequence maximizing w(I_0) + ∆(S*) among all lazy-removing sequences. Then w(I_n(S)) ≥ (w(I_0) + ∆(S*))/7.

Proof. We follow the same argument as in Lemma 2.20. For each ℓ ∈ {1, . . .
, n − 1} we define the swap sequences for set I_{ℓ−1}(S),

    S_{M,ℓ} := ((f_ℓ, g_ℓ), (f_{ℓ+1}*, g_{ℓ+1}′), ..., (f_n*, g_n′)) and S_{N,ℓ} := ((f_ℓ, h_ℓ), (f_{ℓ+1}*, h_{ℓ+1}′), ..., (f_n*, h_n′)),

where g_t′ and h_t′ are defined so that S_{M,ℓ} and S_{N,ℓ} are greedy-removing sequences for matroids M and N respectively. Note that in the previous definition we are slightly abusing notation since g_t′ and h_t′ depend on ℓ and thus they should have an extra index. We omit this to simplify notation. Also for notational convenience, denote

    S_ℓ = ((f_ℓ, g_ℓ, h_ℓ), (f_{ℓ+1}*, g_{ℓ+1}′, h_{ℓ+1}′), ..., (f_n*, g_n′, h_n′)).

(We remark that S_ℓ might not be a generalized swap sequence for I_{ℓ−1}(S) since it might happen that f_t* = g_t′ and f_t* ≠ h_t′ for some t.) Similarly, define swap sequences for set I_{ℓ−1}(S),

    S_{M,ℓ}* := (f_t*, g_t**)_{t=ℓ}^n and S_{N,ℓ}* := (f_t*, h_t**)_{t=ℓ}^n,

where both sequences are greedy-removing for matroids M and N respectively. Again, we are omitting an index ℓ for g_t** and h_t**. We also define S_ℓ* = (f_t*, g_t**, h_t**)_{t=ℓ}^n. Lemma 2.45 implies that for each ℓ,

    ∆(S_ℓ) ≥ ∆(S_ℓ*) − ∆(f_ℓ*, g_ℓ**, h_ℓ**) − w(f_ℓ).

We distinguish two cases. For this consider the first swaps of S_{M,ℓ}* and S_{N,ℓ}*, swaps (f_ℓ*, g_ℓ**) and (f_ℓ*, h_ℓ**), and define the set

    R := {ℓ ∈ {1, ..., n − 1} : w(f_ℓ*) ≥ 2(w(g_ℓ**) + w(h_ℓ**))}.

For ℓ ∈ R, Observation 2.35 implies that (f_ℓ*, g_ℓ**, h_ℓ**) is a generalized swap for I_{ℓ−1}(S). Since Robust-Lazy-Intersection chooses the swaps greedily we have that for each ℓ ∈ R, ∆(f_ℓ, g_ℓ, h_ℓ) ≥ ∆(f_ℓ*, g_ℓ**, h_ℓ**), and thus

    ∆(S_ℓ) ≥ ∆(S_ℓ*) − ∆(f_ℓ, g_ℓ, h_ℓ) − w(f_ℓ).

For ℓ ∉ R we simply use the upper bound ∆(f_ℓ*, g_ℓ**, h_ℓ**) ≤ w(f_ℓ*)/2, and thus

    ∆(S_ℓ) ≥ ∆(S_ℓ*) − w(f_ℓ*)/2 − w(f_ℓ).

Denote

    δ_ℓ := w(f_ℓ*)/2 for ℓ ∉ R, and δ_ℓ := ∆(f_ℓ, g_ℓ, h_ℓ) for ℓ ∈ R.

With this we obtain

    ∆(f_ℓ, g_ℓ, h_ℓ) + ∆(S_{ℓ+1}*) = ∆(S_ℓ) ≥ ∆(S_ℓ*) − δ_ℓ − w(f_ℓ).
We can simply iterate this last inequality, obtaining that

    w(I_n(S)) ≥ w(I_{n−1}(S)) + ∆(f_n, g_n, h_n)
             ≥ w(I_{n−1}(S)) + ∆(S_n*) − δ_n − w(f_n)
             ≥ w(I_{n−2}(S)) + ∆(f_{n−1}, g_{n−1}, h_{n−1}) + ∆(S_n*) − δ_n − w(f_n)
             ⋮
             ≥ w(I_1(S)) + ∆(S_2*) − Σ_{t=2}^n (δ_t + w(f_t))
             ≥ w(I_0) + ∆(S_1*) − Σ_{t=1}^n (δ_t + w(f_t)).

Notice that we can assume that f_t* = g_t* = h_t* if and only if f_t* = g_t* = h_t* = ε (since in that case the swap (f_t*, g_t*, h_t*) leaves the solution untouched). This implies that the sequences S_M* = (f_t*, g_t*)_{t=1}^n and S_N* = (f_t*, h_t*)_{t=1}^n, derived from the generalized sequence S*, are greedy-removing for their respective matroids. Therefore, without loss of generality we can assume S_1* = S*. We obtain that

    w(I_0) + ∆(S*) ≤ Σ_{t=1}^n δ_t + Σ_{t=1}^n w(f_t) + w(I_n(S)).

To show the lemma we upper bound each of the two summations on the right-hand side of the last inequality. To bound the second summation, notice that {f_t}_{t=1}^n ⊆ I_n(S) ∪ {g_t, h_t}_{t=1}^n. Thus, by Lemma 2.37 we have that

    Σ_{t=1}^n w(f_t) ≤ 2 · w(I_n(S)).

Finally, observe that

    Σ_{t=1}^n δ_t = Σ_{t∈R} ∆(f_t, g_t, h_t) + Σ_{t∉R} w(f_t*)/2 ≤ ∆(S) + Σ_{t∉R} w(f_t*)/2 ≤ ∆(S) + 3 · w(I_n(S)) ≤ 4 · w(I_n(S)).

The second inequality follows by taking the set J = {f_t*}_{t∉R} \ {ε}, recalling that by Lemma 2.44 J ⊆ I_n(S*) and thus J ∈ I ∩ J, and applying Lemma 2.40 to J. Collecting all of our inequalities we obtain

    w(I_0) + ∆(S*) ≤ 7 · w(I_n(S)).

The lemma follows.

Theorem 2.47. For regular instances of the Robust Multi-Stage Matroid Intersection problem, algorithm Robust-Lazy-Intersection is a 98-competitive algorithm against robust solutions.

Proof. Follows directly from the last lemma and Lemma 2.44.

We finally reduce the general instances to the regular case.

Theorem 2.48. There exists a 98-competitive algorithm against robust solutions for the Robust Multi-Stage Matroid Intersection problem.

Proof.
With the same argument as in the proof of Lemma 2.22, we can show that any α-competitive algorithm for regular instances implies an algorithm with the same competitive guarantee for arbitrary instances. The result then follows from the previous theorem.

Intersection of Several Matroids

A direct generalization of the techniques shown in the previous section yields a similar analysis for the intersection of ℓ matroids. In what follows we briefly show how this can be done, focusing only on the main differences from the last section. Simultaneously, we optimize the analysis by adapting the definition of lazy-removing sequences. This yields improved bounds on the competitive factor of our algorithm, but it does not reduce the asymptotic behavior when ℓ goes to infinity.

In what follows, superindices are used to denote the different matroids while subindices denote the different stages. We consider ℓ matroids M^i = (E, I^i) for i ∈ {1, ..., ℓ} and a weight function w : E → R≥0. Let us define I := ∩_{i=1}^ℓ I^i, the set of all common independent sets of all the matroids. We assume, without loss of generality, that E contains a dummy element ε that is a loop for every matroid, and is the only element with zero weight. The Multi-Stage Matroid Intersection problem is defined as follows. Consider an initial element set E_0. Given an independent set I_0 ∈ I with I_0 ⊆ E_0, a sequence of subsets E_0 ⊆ E_1 ⊆ ... ⊆ E_n ⊆ E and a value k_t ∈ N_0 for each stage t, we are looking for an online sequence of sets I_1, ..., I_n so that for all t ∈ {1, ..., n}

• I_t ⊆ E_t belongs to I, and
• |I_t \ I_{t−1}| ≤ k_t.

In this context we are interested in online algorithms that are competitive against robust solutions. To study this problem we define swaps and swap sequences for the intersection of several matroids. In what follows we denote vectors of elements by ḡ = (g^1, ..., g^ℓ) ∈ E^ℓ, where E^ℓ = E × ...
× E denotes the ℓ-fold Cartesian product of the set E. Also, the total weight of the vector ḡ is defined as w(ḡ) := Σ_{i=1}^ℓ w(g^i). For consistency, we use superindices for the elements of a vector since they usually correspond to the different matroids M^1, ..., M^ℓ. For a set I ⊆ E, an element f and a vector of elements ḡ = (g^1, ..., g^ℓ) ∈ E^ℓ we define

    I − ḡ + f := (...((I − g^1 + f) − g^2 + f)...) − g^ℓ + f.

For a set I ∈ I and e ∈ E, let C^i(I, e) be the circuit for matroid M^i as given by Definition 2.7 (Section 2.3.2).

Definition 2.49 (Generalized Swap). Given a set I ∈ I, an element f, and a vector ḡ = (g^1, ..., g^ℓ) ∈ E^ℓ, the tuple (f, ḡ) is a generalized swap (or simply a swap if it is clear from the context) for I, if

• g^i ∈ C^i(I, f) for all i ∈ {1, ..., ℓ}, and
• for each i, g^i = f if and only if g^j = f for all j ∈ {1, ..., ℓ}.

We say that the swap (f, ḡ) applied to I yields the set I − ḡ + f. Moreover, the gain of the swap is defined as ∆(f, ḡ) = w(f) − w(ḡ).

We remark that, in general, ∆(f, ḡ) ≤ w(I − ḡ + f) − w(I), and that ḡ = (f, ..., f) implies that I − ḡ + f = I.

Definition 2.50 (Generalized Swap Sequence). Consider a sequence S = (f_t, ḡ_t)_{t=1}^k and a set I ∈ I. Iteratively define

    I_0 = I and I_t(S) = I_{t−1}(S) − ḡ_t + f_t for all t ∈ {1, ..., k}.

We say that S is a generalized swap sequence (or simply a swap sequence) if (f_t, ḡ_t) is a swap for I_{t−1}(S) for all t ∈ {1, ..., k}.

Given a swap sequence S = (f_t, ḡ_t)_{t=1}^k for I, we denote S^i = (f_t, g_t^i)_{t=1}^k for each i ∈ {1, ..., ℓ}, which are the ℓ corresponding sequences for matroids M^1, ..., M^ℓ, respectively. As before, S^i might not be a swap sequence for matroid M^i. However, all sequences S that we consider in the following satisfy this property.

We now generalize the concept of lazy-removing sequences. We note that the following definition will be parametrized by a number α.
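As a toy illustration of the swap operation defined above (my own sketch, not part of the thesis, with plain Python sets in place of matroid machinery), note that by the swap definition either no component of ḡ equals f or all of them do, so I − ḡ + f either discards all the g^i and inserts f, or leaves I untouched:

```python
def apply_swap(I, f, g_vec):
    """Apply a generalized swap (f, g_vec) to the set I. Either no component
    of g_vec equals f (remove all g^i, then add f), or all of them do
    (trivial swap: I is left untouched)."""
    if all(g == f for g in g_vec):
        return set(I)
    return (I - set(g_vec)) | {f}

def gain(f, g_vec, w):
    """Gain of the swap: w(f) minus the total weight of the removed vector."""
    return w[f] - sum(w[g] for g in g_vec)
```

For instance, swapping f into I = {a, b, c} against the vector (a, b) yields {c, f}, with gain w(f) − w(a) − w(b), while the trivial vector (f, f) returns I unchanged.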
In the previous section α = 2. Even in the case of two matroids, choosing the value of α more carefully yields an improved bound on the competitive factor of our algorithm. We chose, however, to first present the ideas in a cleaner way in that section. We give the detailed computation in this more general setting.

Definition 2.51 (α-lazy Swap Sequence). Let S = (f_t, ḡ_t)_{t=1}^k be a generalized swap sequence for set I. Let also e_t^i be an element of minimum weight in C^i(I_{t−1}(S^i), f_t) for all t and i, and denote ē_t = (e_t^1, ..., e_t^ℓ). For a given α > 1, we say that S is α-lazy if for all t ∈ {1, ..., k} we have that:

• If w(f_t) ≥ α · w(ē_t) then g_t^i ∈ arg min{w(e) : e ∈ C^i(I_{t−1}(S^i), f_t)} for all i ∈ {1, ..., ℓ};
• Otherwise, g_t^i = f_t for all i ∈ {1, ..., ℓ}.

If we are given a sequence of elements f_1, ..., f_k revealed one by one, we can easily construct a sequence S = (f_t, ḡ_t)_{t=1}^k that is α-lazy: For each t ∈ {1, ..., k} and i ∈ {1, ..., ℓ} compute e_t^i as in the definition above; if w(f_t) ≥ α · w(ē_t) then set ḡ_t := ē_t, otherwise set g_t^i := f_t for all i. It is not hard to see that the sequence generated is a generalized swap sequence. Indeed, we only need to guarantee that for all t ∈ {1, ..., k}, f_t = g_t^i for some i if and only if f_t = g_t^j for all j ∈ {1, ..., ℓ}. Assume by contradiction that there exists a t such that g_t^i ≠ f_t and g_t^j = f_t for some i ≠ j. Then, w(f_t) ≥ α · w(ḡ_t) and f_t = g_t^j imply that 0 ≥ α · w(ḡ_t) − w(g_t^j). Recall that ε is the only element with zero weight. Then, since α > 1, we obtain that g_t^i = f_t = ε for all i, which contradicts our assumption. Our discussion implies the following.

Observation 2.52. Consider a set I ∈ I and elements f ∈ E, g^i ∈ C^i(I, f) for each i ∈ {1, ..., ℓ}. For any α > 1, if

    w(f) ≥ α · w(ḡ)

then (f, ḡ) is a generalized swap for set I.
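To make the all-or-nothing rule concrete, here is a small sketch of one α-lazy step (a hypothetical toy model of my own, not the thesis's algorithm) for the intersection of two identical uniform matroids, where once |I| reaches the rank the circuit of I + f is all of I + f:

```python
def lazy_swap_step(I, f, w, alpha, rank):
    """One alpha-lazy step for the intersection of two identical uniform
    matroids of the given rank (toy model; general instances need circuit
    oracles). I: current common independent set, f: arriving element,
    w: dict of element weights. Returns the updated set."""
    if len(I) < rank:                 # f closes no circuit: simply insert it
        return I | {f}
    # In a uniform matroid the circuit of I + f is all of I + f, so the
    # cheapest removal candidate is the lightest element of I + f.
    g = min(I | {f}, key=w.get)
    h = g                             # both toy matroids coincide here
    if g != f and w[f] >= alpha * (w[g] + w[h]):
        return (I - {g}) | {f}        # non-trivial swap (f, g, h)
    return I                          # trivial swap: lazily keep the solution
```

Streaming elements a, b, c, d with weights 1, 2, 10, 3 through this rule (rank 2, α = 2) accepts the heavy element c but lazily rejects d, whose weight does not cover α times the removal cost.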
As before, we observe that for any α-lazy swap sequence S, all the sequences S^i are swap sequences for their respective matroids.

Observation 2.53. Let S be a generalized α-lazy swap sequence. Then S^i is a swap sequence for matroid M^i for all i ∈ {1, ..., ℓ}.

We also distinguish swaps of two types. A swap (f, ḡ) is trivial if g^i = f for all i ∈ {1, ..., ℓ}; otherwise we say that the swap is non-trivial. We start by showing that the total weight that an α-lazy sequence S removes is no larger than 1/(α−1) · w(I_k(S)).

Lemma 2.54. Let S = (f_t, ḡ_t)_{t=1}^k be an α-lazy sequence for set I, and let R ⊆ {1, ..., k} be the set of indices of non-trivial swaps in S. Then

    Σ_{t∈R} w(ḡ_t) ≤ 1/(α−1) · w({f_t}_{t∈R} ∩ I_k(S)).

Proof. We follow the same technique as in Lemma 2.37. Let us define R_t = R ∩ {1, ..., t}. For each t ∈ {1, ..., k}, we will construct a function Φ_t : {g_s^1, ..., g_s^ℓ}_{s∈R_t} → {f_s}_{s∈R_t} ∩ I_t(S) such that w(Φ_t^{−1}({f_s})) ≤ β_t · w(f_s) for all s ∈ R_t with f_s ∈ I_t(S). Here β_t is a parameter that will be determined in our computations.

We start with the existence of Φ_1. If the first swap of S is trivial then the domain of Φ_1 is empty and the claim follows for any value of β_1. Otherwise, it holds that w(f_1) ≥ α · w(ḡ_1), and thus the claim follows by defining Φ_1(g_1^i) = f_1 for all i and β_1 = 1/α.

As in the proof of Lemma 2.37 we can construct Φ_t from Φ_{t−1}. For t ∉ R, we can simply use Φ_t = Φ_{t−1}. Otherwise, we define Φ_t(e) = f_t for all elements e ∈ ∪_{i=1}^ℓ (Φ_{t−1}^{−1}({g_t^i}) ∪ {g_t^i}), and Φ_t(e) = Φ_{t−1}(e) for all other elements e. Thus, we obtain

    w(Φ_t^{−1}({f_t})) = Σ_{i=1}^ℓ (w(Φ_{t−1}^{−1}({g_t^i})) + w(g_t^i))
                       ≤ Σ_{i=1}^ℓ (β_{t−1} · w(g_t^i) + w(g_t^i))
                       = (β_{t−1} + 1) · w(ḡ_t)
                       ≤ (β_{t−1} + 1)/α · w(f_t),

where the last inequality follows since (f_t, ḡ_t) is non-trivial. Then we can define β_t = (β_{t−1} + 1)/α. This concludes the construction of Φ_t.
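As an aside, the recursion β_t = (β_{t−1} + 1)/α can be checked numerically; the short sketch below (illustrative only, not part of the proof) verifies that the charges never exceed the geometric-series bound 1/(α−1) that finishes the argument:

```python
def beta_sequence(alpha, steps):
    """Iterate beta_1 = 1/alpha, beta_t = (beta_{t-1} + 1)/alpha and verify
    that every term stays below the geometric series sum 1/(alpha - 1)."""
    beta = 1.0 / alpha
    history = [beta]
    for _ in range(steps - 1):
        beta = (beta + 1.0) / alpha
        history.append(beta)
    assert all(b <= 1.0 / (alpha - 1.0) + 1e-12 for b in history)
    return history
```

For α = 2 the sequence 1/2, 3/4, 7/8, ... increases monotonically towards the limit 1/(α−1) = 1, mirroring the partial sums Σ_{s=1}^t α^{−s}.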
By considering Φ_k we obtain that

    Σ_{t∈R} w(ḡ_t) ≤ β_k · w({f_t}_{t∈R} ∩ I_k(S)).

The lemma follows by noting that β_1 = 1/α and β_t = (β_{t−1} + 1)/α imply that for all t,

    β_t ≤ Σ_{s=1}^∞ 1/α^s = 1/(α−1).

We now generalize Lemma 2.40 to the intersection of ℓ matroids. Let S = (f_t, ḡ_t)_{t=1}^k be an α-lazy swap sequence for I. For a given element e ∈ E, let g_t^i(e) be a minimum weight element in C^i(I_{t−1}(S^i), e) for all i ∈ {1, ..., ℓ}, and denote ḡ_t(e) = (g_t^1(e), ..., g_t^ℓ(e)). We consider the elements e whose weight is relatively small in comparison to w(ḡ_t(e)) for some t, that is, the elements that can be charged to elements in I_t(S) at some stage t. More precisely, we define the set

    F = {e ∈ E : w(e) ≤ α · w(ḡ_t(e)) for some t ∈ {1, ..., k}}.

Lemma 2.55. Let S = (f_t, ḡ_t)_{t=1}^k be an α-lazy swap sequence for I ∈ I and let R be the set of indices of non-trivial swaps of S. If F is the set defined above, then for any J ⊆ F \ {g_t^1, ..., g_t^ℓ}_{t∈R} so that J ∈ I, we have that

    w(J \ I_k(S)) ≤ α · (ℓ · w(I_k(S) \ J) + (ℓ−1)/(α−1) · w(I_k(S))).

Consider a set J ∈ I such that J ⊆ F \ {g_t^1, ..., g_t^ℓ}_{t∈R}. To prove the lemma, we extend J to a basis B_1^i of M^i. Recall that S^i is a swap sequence for I on matroid M^i. Therefore we can also extend I_k(S^i) ∈ I^i to a basis B_2^i of M^i. By Lemma 2.38, we conclude that there exists a bijection Ψ^i : B_1^i \ B_2^i → B_2^i \ B_1^i satisfying Ψ^i(e) ∈ C^i(B_2^i, e) for all e ∈ B_1^i \ B_2^i.

Lemma 2.56. Consider the set J and the functions Ψ^i as defined above. For any e ∈ J, let g_t^i(e) be a minimum weight element in the circuit C^i(I_{t−1}(S^i), e). Then we have that either

• w(g_t^i(e)) ≤ w(Ψ^i(e)) and Ψ^i(e) ∈ I_k(S^i), or
• w(g_t^i(e)) = 0.

We skip the proof of this lemma since it is exactly the same as in the case of two matroids. With this result we can show Lemma 2.55.

Proof (Lemma 2.55). We follow the same proof technique as in the proof of Lemma 2.40.
Since J ⊆ F, the definition of F implies that for all e ∈ J there exists t_e ∈ {1, ..., k} such that

    w(e) ≤ α · w(ḡ_{t_e}(e)).

To simplify notation we denote ḡ_{t_e} = ḡ_{t_e}(e). Therefore,

    w(J \ I_k(S)) ≤ α · Σ_{e∈J\I_k(S)} w(ḡ_{t_e}).    (2.9)

To bound the right-hand side of this inequality we find an upper bound on Σ_{e∈J\I_k(S)} w(g_{t_e}^i) for each i ∈ {1, ..., ℓ}. To this end, let us denote G_R^i = {g_t^i}_{t∈R}. First notice that

    I_k(S^i) \ (∪_{j≠i} G_R^j) = I_k(S).

The definition of J implies that J ∩ G_R^i = ∅. Therefore, for all i,

    J \ I_k(S) = J \ I_k(S^i).

Since Ψ^i is injective, by Lemma 2.41 we obtain that for all i ∈ {1, ..., ℓ}

    Σ_{e∈J\I_k(S)} w(g_{t_e}^i) = Σ_{e∈J\I_k(S^i)} w(g_{t_e}^i) ≤ w(I_k(S^i) \ J).

This inequality and Inequality (2.9) imply that

    w(J \ I_k(S)) ≤ α · Σ_{i=1}^ℓ w(I_k(S^i) \ J).

By noting that I_k(S^i) ⊆ I_k(S) ∪ (∪_{j≠i} G_R^j), we obtain

    w(J \ I_k(S)) ≤ α · Σ_{i=1}^ℓ (w(I_k(S) \ J) + w(∪_{j≠i} G_R^j))
                  ≤ α · (ℓ · w(I_k(S) \ J) + (ℓ−1) · Σ_{t∈R} w(ḡ_t))
                  ≤ α · (ℓ · w(I_k(S) \ J) + (ℓ−1)/(α−1) · w(I_k(S))),

where the last inequality follows by Lemma 2.54.

With Lemmas 2.54 and 2.55 we can show that by restricting ourselves to α-lazy sequences we decrease the weight of our solutions by at most an O(ℓ) factor. We omit the proof of this result since it follows by a very similar argument as in the proof of Theorem 2.42.

Theorem 2.57. Let S = (f_t, ḡ_t)_{t=1}^k and S* = (f_t, ḡ_t*)_{t=1}^k be two generalized swap sequences for I ∈ I. If S is α-lazy, then

    w(I_k(S*)) ≤ (α²ℓ − α + 1)/(α−1) · w(I_k(S)).

Competitive Analysis for Multiple Matroids

We present our algorithm for the Robust Multi-Stage Matroid Intersection problem in the case of regular instances (that is, k_t = 1 for every stage t). We later argue that the general case can be reduced to this type of instance. In this setting, a swap sequence S = (f_t, ḡ_t)_{t=1}^n is said to be robust feasible if f_t ∈ E_t for all t ∈ {1, ..., n}. Notice that any robust sequence I_0, I_1, ...
, I_n can be transformed into a (robust feasible) generalized swap sequence S = (f_t, ḡ_t)_{t=1}^n for the initial set I_0 without decreasing the weight of the solutions (same argument as in Lemma 2.18). Therefore, for regular instances, we can work with generalized swap sequences instead of robust sequences. Our online algorithm chooses an α-lazy swap sequence greedily. The algorithm takes α > 1 as a parameter, which we will use later to optimize the procedure.

Algorithm (Regular Instance)

For any stage t ∈ {1, ..., n}, assume that we have constructed a swap sequence S = (f_s, ḡ_s)_{s=1}^{t−1}. We append a new swap to S as follows.

1. Consider the set T of all swaps (f, ḡ) with ḡ = (g^1, ..., g^ℓ) ∈ E_t × ... × E_t for I_{t−1}(S) satisfying (i) element g^i is of minimum weight in C^i(I_{t−1}(S^i), f) for all i ∈ {1, ..., ℓ}, and (ii) w(f) ≥ α · w(ḡ).
2. Let (f_t, ḡ_t) be any swap of maximum gain in T.
3. Append (f_t, ḡ_t) to S and return I_t := I_t(S).

The first step in analyzing this algorithm is showing that its solutions can be compared to an α-lazy feasible swap sequence S* that maximizes w(I_0) + ∆(S*) (instead of w(I_n(S*))). The next lemma shows that doing so can only increase our estimate of the competitive ratio by an O(ℓ) factor. In the lemma we also show that all elements introduced by S* belong either to I_n(S*) or {ε}, and thus they are never removed from the sequence.

Lemma 2.58. Let S be a robust feasible swap sequence maximizing w(I_n(S)). There exists a robust feasible swap sequence S* = (f_t*, ḡ_t*) that maximizes w(I_0) + ∆(S*) among all α-lazy swap sequences and satisfies

• f_t* ∈ I_n(S*) ∪ {ε} for all t ∈ {1, ..., n}, and
• w(I_n(S)) ≤ α/(α−1)² · (α²ℓ − α + 1) · (w(I_0) + ∆(S*)).

Proof. With the same argument as in Lemma 2.44, it is easy to construct an α-lazy swap sequence S* that satisfies the first property of the lemma.
For the second property, let S′ = (f_t′, ḡ_t′) be the feasible α-lazy sequence maximizing w(I_n(S′)). Notice that by Theorem 2.57 we have that

    w(I_n(S)) ≤ (α²ℓ − α + 1)/(α−1) · w(I_n(S′)).    (2.11)

Let us modify S′ so that any trivial swap (f_t′, ḡ_t′) equals (ε, (ε, ..., ε)). This does not modify the overall solution I_n(S′). Thus, we have that w(f_t′) ≥ α · w(ḡ_t′) for each t ∈ {1, ..., n}, and therefore every swap in S′ satisfies

    ∆(f_t′, ḡ_t′) ≥ (1 − 1/α) · w(f_t′) ≥ (α−1)/α · (w(I_t(S′)) − w(I_{t−1}(S′))).

By summing the last expression over all t ∈ {1, ..., n} we obtain that w(I_n(S′)) ≤ α/(α−1) · (w(I_0) + ∆(S′)). The last inequality and Inequality (2.11) imply the lemma.

We now show the inductive step of the analysis of Robust-Lazy-Intersection. The following lemma is a generalization of Lemma 2.45.

Lemma 2.59. Consider a set I ∈ I and elements f, e_1, ..., e_k. Assume we are given ℓ swap sequences for set I,

    S^i = (e_t, d_t^i)_{t=1}^k for all i ∈ {1, ..., ℓ},

where each S^i is a greedy-removing swap sequence for matroid M^i. Consider also the modified swap sequences for set I,

    S̄^i = ((f, g^i), (e_2, b_2^i), ..., (e_k, b_k^i)) for all i ∈ {1, ..., ℓ},

where the elements b_t^i for each t and i are chosen so that for all i ∈ {1, ..., ℓ} the sequence S̄^i is greedy-removing for M^i. Finally, denote d̄_t := (d_t^1, ..., d_t^ℓ), b̄_t := (b_t^1, ..., b_t^ℓ), S = (e_t, d̄_t)_{t=1}^k and S̄ = ((f, ḡ), (e_2, b̄_2), ..., (e_k, b̄_k)). Then the following inequality holds:

    ∆(S̄) ≥ ∆(S) − ∆(e_1, d̄_1) − (ℓ−1) · w(f).

We remark that in this last lemma the sequences S and S̄ might not be generalized swap sequences. We omit the proof of this lemma since it is a straightforward generalization of the case with two matroids (Lemma 2.45). We can now show the main technical result of this section.

Lemma 2.60.
Let S = (f_t, ḡ_t)_{t=1}^n be the swap sequence computed by Algorithm Robust-Lazy-Intersection, and S* = (f_t*, ḡ_t*)_{t=1}^n be the robust feasible swap sequence maximizing w(I_0) + ∆(S*) among all α-lazy swap sequences. Then,

    w(I_0) + ∆(S*) ≤ (α²ℓ − 1)/(α−1) · w(I_n(S)).

Proof. We follow the same argument as in the proofs of Lemmas 2.20 and 2.46. For each s ∈ {1, ..., n − 1}, consider the sequences for set I_{s−1}(S),

    S_s^i := ((f_s, g_s^i), (f_{s+1}*, d_{s+1}^i), ..., (f_n*, d_n^i)) for all i ∈ {1, ..., ℓ}.

Here, the elements d_{s+1}^i, ..., d_n^i are defined so that S_s^i is a greedy-removing sequence for matroid M^i. We remark that in this definition we omit an index to simplify notation, since element d_t^i also depends on s. Also for notational convenience, denote d̄_t := (d_t^1, ..., d_t^ℓ) and

    S_s = ((f_s, ḡ_s), (f_{s+1}*, d̄_{s+1}), ..., (f_n*, d̄_n)).

(We remark that S_s might not be a generalized swap sequence for I_{s−1}(S).) Similarly, we consider the swap sequences for set I_{s−1}(S),

    S_s^{i*} = (f_t*, h_t^i)_{t=s}^n for all i ∈ {1, ..., ℓ},

where each sequence is greedy-removing for its respective matroid M^i. Again, we are omitting an index s for h_t^i. We also define h̄_t = (h_t^1, ..., h_t^ℓ) and S_s* = (f_t*, h̄_t)_{t=s}^n. Lemma 2.59 implies that for each s,

    ∆(S_s) ≥ ∆(S_s*) − ∆(f_s*, h̄_s) − (ℓ−1) · w(f_s).    (2.12)

We distinguish two different types of iterations. To this end, for each s ∈ {1, ..., n − 1} consider the first swap of S_s^{i*}, swap (f_s*, h_s^i), and define the set

    R := {s ∈ {1, ..., n − 1} : w(f_s*) ≥ α · w(h̄_s)}.

If s ∈ R then (f_s*, h̄_s) is a generalized swap for I_{s−1}(S) by Observation 2.52. Since Algorithm Robust-Lazy-Intersection chooses the swaps greedily, for each s ∈ R we have that ∆(f_s, ḡ_s) ≥ ∆(f_s*, h̄_s). If s ∉ R we use the upper bound ∆(f_s*, h̄_s) ≤ (α−1)/α · w(f_s*). Consider the following definition:

    δ_s = (α−1)/α · w(f_s*) for s ∉ R, and δ_s = ∆(f_s, ḡ_s) for s ∈ R.
Inequality (2.12) implies that for all s ∈ {1, ..., n − 1}

    ∆(f_s, ḡ_s) + ∆(S_{s+1}*) = ∆(S_s) ≥ ∆(S_s*) − δ_s − (ℓ−1) · w(f_s).

Iterating this inequality we obtain

    w(I_0) + ∆(S*) ≤ Σ_{t=1}^n δ_t + (ℓ−1) · Σ_{t=1}^n w(f_t) + w(I_n(S)).

The theorem will follow by upper bounding each of the summations on the right-hand side of the last inequality. For this purpose, notice that {f_t}_{t=1}^n ⊆ I_n(S) ∪ {g_t^1, ..., g_t^ℓ}_{t=1}^n. Thus, by Lemma 2.54 we have that

    Σ_{t=1}^n w(f_t) ≤ (1/(α−1) + 1) · w(I_n(S)) = α/(α−1) · w(I_n(S)).

Finally, observe that Lemma 2.55 and a short computation yield

    Σ_{t=1}^n δ_t = Σ_{t∈R} ∆(f_t, ḡ_t) + Σ_{t∉R} (α−1)/α · w(f_t*) ≤ ∆(S) + (αℓ − 1) · w(I_n(S)) ≤ αℓ · w(I_n(S)).

Collecting all our inequalities we obtain

    w(I_0) + ∆(S*) ≤ (αℓ + (ℓ−1) · α/(α−1) + 1) · w(I_n(S)) = (α²ℓ − 1)/(α−1) · w(I_n(S)).

Lemma 2.61. For regular instances of the Robust Multi-Stage Matroid Intersection problem, Algorithm Robust-Lazy-Intersection has a competitive factor of

    α · (α²ℓ − 1) · (α²ℓ − α + 1)/(α−1)³ ∈ O(ℓ²)

against robust solutions.

Proof. Follows directly from the last lemma and Lemma 2.58.

We finally reduce general instances to the regular case.

Theorem 2.62. For the general case of the Robust Multi-Stage Matroid Intersection problem, there exists an algorithm with a competitive factor of

    α · (α²ℓ − 1) · (α²ℓ − α + 1)/(α−1)³ ∈ O(ℓ²)

against robust solutions.

Proof. With the same argument as in the proof of Lemma 2.22 we can restrict ourselves to regular instances. The result then follows from the previous lemma.

Given the expression for the competitive ratio in the last theorem, we can find for each value of ℓ the value of α > 1 minimizing this expression. This optimized value is shown in Figure 2.2 for some values of ℓ. We notice, however, that even if the value of α > 1 is optimized for each ℓ, the competitive factor of the algorithm is at least ℓ².
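The minimization over α just mentioned can be reproduced numerically; the following sketch (a plain golden-section search of my own, not part of the thesis) recovers the entries of Figure 2.2:

```python
def competitive_ratio(alpha, ell):
    """c(alpha, ell) = alpha (alpha^2 ell - 1)(alpha^2 ell - alpha + 1) / (alpha - 1)^3."""
    a2l = alpha * alpha * ell
    return alpha * (a2l - 1) * (a2l - alpha + 1) / (alpha - 1) ** 3

def optimal_alpha(ell, lo=1.001, hi=10.0, iters=200):
    """Golden-section search for the alpha > 1 minimizing the ratio
    (the ratio is unimodal on this interval for the values tried here)."""
    inv_phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        x1 = b - inv_phi * (b - a)
        x2 = a + inv_phi * (b - a)
        if competitive_ratio(x1, ell) < competitive_ratio(x2, ell):
            b = x2
        else:
            a = x1
    return (a + b) / 2
```

For ℓ = 2 this yields α* ≈ 2.34158 with c ≈ 93.0153, matching the first column of Figure 2.2.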
Hence, the optimization process does not help diminish its asymptotic behavior. We conjecture that for the intersection of ℓ matroids it is possible to obtain an algorithm with competitive ratio O(ℓ) (or even ℓ + 1) when compared to an optimal robust solution.

    ℓ     2         3         4         5         10
    α*    2.34158   2.39647   2.42307   2.43878   2.4697
    c     93.0153   226.116   417.087   665.929   2778.2

Figure 2.2: The optimized competitive ratio c together with the corresponding value α* where this minimum is attained.

Applications to the Maximum Traveling Salesman Problem

Since matroid intersection (in particular the intersection of an arbitrary number of matroids) is a very flexible framework, the above results can be applied to a vast number of problems. For many of these, the best robust sequence is as good as the offline optimum or only a constant factor smaller. In these cases, our algorithm is even constant competitive in comparison to the offline optimum of every stage. One important problem with this property is the directed Maximum Traveling Salesman problem (MaxTSP): Given a complete directed graph G = (V, A) and a weight function w : V × V → R>0, the goal is to compute a directed tour of maximum total weight which visits each vertex exactly once. We assume that in each stage one new vertex v is revealed, together with the weights of all the arcs that connect v with the previously known vertices. We call this problem Robust MaxTSP. This problem is closely related to the MaxHam problem introduced in Section 2.1. Any solution for MaxTSP yields a solution to MaxHam by removing one arc. Similarly, since we assume a complete graph, any solution to the MaxHam problem can be converted into a tour by adding one arc. We note that the following result also holds for the Robust MaxHam problem, which is defined analogously.

Theorem 2.63.
If k_t ≥ 6 for each stage t, there exists an online algorithm for the Robust MaxTSP problem which computes tours I_1, ..., I_n such that OPT_t ≤ w(I_t) · O(1) for each stage t ∈ {1, ..., n}, where OPT_t denotes the value of an optimal (offline) solution for iteration t.

Proof. We exploit the connection between MaxTSP and MaxHam. As mentioned above, the directed MaxHam problem can be modeled as the intersection of three matroids: one matroid being the graphic matroid associated with the graph, while the other two matroids ensure that each vertex has an in- and an out-degree of at most one. Let V_t ⊆ V be the set of vertices revealed up to stage t, and A_t ⊆ A be the set of available arcs in stage t. We denote by I ⊆ 2^A the collection of all sets which are independent in all three matroids described above. In other words, since (V, A) is complete, I is the collection of all subsets of directed Hamiltonian paths. Given k_t = 1 for each iteration t, Algorithm Robust-Lazy-Intersection computes a sequence of sets I_0′, ..., I_n′ with I_t′ ∈ I for all t. We first show that w(I_t′) is within a constant factor of OPT_t. Afterwards, we discuss how to convert each I_t′ into a tour of larger total weight.

We claim that OPT_t ≤ w(I_t′) · O(1). Indeed, consider a directed Hamiltonian tour T of (V_t, A_t) with w(T) = OPT_t. If |T| is even, taking alternating arcs in this solution yields two disjoint matchings (for the underlying undirected multigraph) whose total weight sums up to the weight of T. If |T| is odd, we obtain three matchings (one of them containing only one arc). Since the weight of the three matchings sums up to w(T), one of them has weight at least OPT_t/3. Let M_t* be a maximum weight matching for (V_t, A_t), which by our discussion satisfies w(M_t*) ≥ OPT_t/3. Notice that this solution can be constructed by adding at most one arc in each iteration. That is, there exists a sequence of matchings M_1* ⊆ ...
⊆ M_{t−1}* ⊆ M_t* so that M_s* ⊆ A_s and |M_s* \ M_{s−1}*| ≤ 1 for all s ≤ t. Moreover, M_s* ∈ I for all s ≤ t. This means that an optimal robust sequence I_1*, ..., I_t* in I with k_s = 1 for all s satisfies w(I_t*) ≥ w(M_t*) ≥ OPT_t/3. Since by Theorem 2.62 I_t′ is constant competitive against I_t*, we conclude that OPT_t ≤ w(I_t′) · O(1) for each iteration t. As shown in Figure 2.2, the constant hidden in the O-notation is approximately 3 × 226.116 = 678.348.

Based on each set I_t′ constructed by Robust-Lazy-Intersection, we construct a tour I_t such that I_t′ ⊆ I_t for each stage t. For t = 1 it is clear that I_1′ = I_1 = ∅. Suppose now that for some iteration t we have a set I_t′ computed by Robust-Lazy-Intersection and a tour I_t such that I_t′ ⊆ I_t. In iteration t + 1 the algorithm adds at most one arc f to I_t′ and removes at most three arcs g_1, g_2, g_3, yielding the set I_{t+1}′. Let f′ ∈ I_t \ I_t′ be such that I_t − f′ ∈ I. We remove f′ from I_t and add f. This closes at most one circuit in each of the three involved matroids. Since I_{t+1}′ ∈ I, it is possible to find elements g_1′, g_2′, g_3′ such that T := (I_t − f′) − g_1′ − g_2′ − g_3′ + f ∈ I and {g_1′, g_2′, g_3′} ∩ I_{t+1}′ = ∅. Since we removed at most four arcs from I_t, the set T splits into at most five connected components (note that the vertex added in iteration t + 1 could form a single connected component). Hence, by adding at most five arcs to T we can obtain a tour I_{t+1} such that I_{t+1}′ ⊆ I_{t+1}. Therefore, k_t = 6 for each stage t suffices to maintain a tour in each stage. The theorem follows.

In this chapter we studied online robust problems from the general perspective of matroid optimization. For the Robust Multi-Stage Matroid problem with a single matroid, we showed that a greedy algorithm is 2-competitive against robust solutions, and that this analysis is tight. The same holds in the case in which bases must be maintained in every iteration. Later, we studied the problem on several matroids.
We obtained the first algorithm with a constant competitive factor against robust solutions, achieving an O(ℓ²) ratio for the intersection of ℓ matroids. Applying this result to MaxTSP yields a constant-competitive algorithm, even when compared against the offline optimal solution. It is an interesting open question to improve any of these results. For the single matroid case, it would be interesting to see whether the same instances that showed the tightness of our greedy algorithm yield a general lower bound for any online robust algorithm. For an arbitrary number of matroids, any o(ℓ²)-competitive algorithm would represent a big step forward in the understanding of our setting. The author believes that such a result needs a considerable number of new ideas. Several extensions of our problem could be considered. A particularly interesting one is obtained by replacing the linear objective function with an arbitrary submodular function.

Chapter 3

Robust Multi-Stage Minimum Spanning Trees

Joint work with N. Megow, M. Skutella and A. Wiese

Consider the problem of sending a stream of data from a source to several clients on a network. This kind of setting arises naturally in applications related to multimedia distribution systems, video-conferencing, software delivery, and groupware [OP05]. One possibility to deal with this problem is to send the stream directly from the source to the clients with, for example, a shortest-path route (broadcast routing). It is not hard to see that such a simple strategy might incur inefficiencies in the use of the resources of the network; in particular, it forces the source to have an extremely large upload bandwidth. Instead, a multicast approach is more appropriate, where the source has the option of sending the data only once and the information is later replicated at intermediate nodes. In this setting the routing is usually determined by a tree that spans the source and the clients; see [PG98] and references therein.
Moreover, each edge can have an associated cost depending on the quality of the corresponding link, measured in network parameters like residual bandwidth and delay [SL91]. In graph-theoretic terminology, the described problem corresponds to finding a tree spanning a set of given nodes, usually called terminals. This corresponds to the classic Minimum Steiner Tree problem. In the case that all nodes in the graph are terminals, the problem corresponds to the Minimum Spanning Tree (MST) problem. In many applications, the set of terminals is not static: clients might join or leave the terminal set [Wax88, SL91]. This gives the problem an online dimension, but unlike many online problems, there exists the possibility of partially adapting the solution when the terminal set changes. However, a larger number of rearrangements – especially a complete reconstruction of solutions – might be unnecessary, and it may require a large amount of network resources and CPU time. Additionally, the routing assignment is normally computed distributively instead of centrally, and thus every rearrangement might momentarily increase the needed bandwidth. This means that if the network is heavily loaded, the rearrangement of connections might result in the blockage of parts of the network [IW91].

We study a fundamental problem in this setting. Consider an undirected graph whose nodes are revealed in stages, one by one. The objective is to construct in each stage a low-cost spanning tree of the revealed vertices (terminals), without any assumption on the vertices that might arrive in the future. We measure the quality of the sequence of trees with the usual competitive analysis framework by comparing solutions to the offline optimum. For the reasons mentioned above, it is desirable to control how much the tree changes along iterations. In other words, we would expect solutions to satisfy some adequate concept of robustness.
To this end we count the number of edges inserted at each iteration. We say that an algorithm needs a budget k if the number of inserted edges (or rearrangements) in each iteration is bounded by k. Similarly, the algorithm uses an amortized budget k if up to iteration t the total number of rearrangements is at most t · k. In both cases, k measures the robustness of our solutions. We call the problem just described the Robust Minimum Spanning Tree (Robust MST) problem. If we allow the use of nodes that are not terminals (that is, Steiner nodes), then the problem corresponds to the Robust Steiner Tree problem (also found in the literature as the Dynamic Steiner Tree problem). We assume that the graphs under consideration are complete and that the cost function is metric, that is, it satisfies the triangle inequality. This is not a real restriction if the graph of potential terminals and the cost function are known in advance, since then we can take the metric closure of the graph¹.

We also consider the classic Traveling Salesman problem from an online robust perspective. In this setting, the objective is to find a cycle of minimum cost that visits each node exactly once. The robustness of solutions is measured again with the concept of (amortized) budget.

We also remark that the Robust MST problem is closely related to the Robust Multi-Stage Maximum Basis problem, studied in Section 2.3.4. Indeed, the arrival of a node v can be modeled as the arrival of the edges adjacent to v. However, the MST problem differs in two key features: it is a minimization problem instead of a maximization problem, and the edge costs are metric. These two properties together require significantly different algorithmic ideas to tackle the problem (although some knowledge from the previous section is indeed used).

Related Work

Offline Spanning and Steiner Trees

In the offline setting, the MST problem has been widely studied.
It can be solved efficiently with classic greedy approaches like Kruskal's and Prim's algorithms [Sch03, Chapter 50]. On the other hand, the Minimum Steiner Tree problem is NP-hard, and it cannot be approximated within a factor of 96/95 unless P = NP [CC08]. For many years, the best known approximation guarantee was 2, which can be achieved by considering the metric closure of the graph and then computing an MST on the terminals [GP68, Vaz01]. This factor was improved in a series of algorithms with decreasing guarantees [Zel93, KZ97, PS00, RZ05] until Byrka, Grandoni, Rothvoß, and Sanità [BGRS10] obtained the currently best upper bound of 1.39.

¹ Given a cost function c on the edges of a graph G = (V, E), its metric closure is a complete graph (that is, the edge vw is in the edge set for all v, w ∈ V) where the cost of an edge e = vw is the minimum cost of a path between v and w.

Unit Budget

In the online setting, the first to analyze the Robust Steiner Tree problem were Imase and Waxman [IW91]. For the unit budget case, they show that the best possible competitive guarantee is Ω(log t) (where t is the number of iterations or stages). On the other hand, they show that a greedy algorithm that connects a new node to the other terminals through a shortest path (that is, a shortest edge in the metric closure) is O(log t)-competitive. Alon and Azar [AA93] consider the special case in which the nodes are embedded in the plane and the costs correspond to the Euclidean distance. In this setting they show a lower bound of Ω(log t / log log t) on the competitive guarantee of any online algorithm. Additionally, they give a simpler analysis showing that the greedy algorithm of Imase and Waxman [IW91] is O(log t)-competitive (in any metric space).

Different generalizations of the Robust Steiner Tree problem can be found in the literature.
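The unit-budget greedy rule just described is short enough to state in code. The following sketch (a minimal illustration; the Euclidean point set and all helper names are ours, not from the thesis) connects each arriving node to its nearest already-revealed node, and compares the resulting tree against an offline MST computed with Prim's algorithm; the ratio stays within the O(log t) guarantee.

```python
import math
import random

def greedy_online_tree(points):
    """Unit-budget greedy: connect each arriving point to its nearest
    previously revealed point (one edge insertion per iteration)."""
    tree = []
    for t in range(1, len(points)):
        s = min(range(t), key=lambda s: math.dist(points[t], points[s]))
        tree.append((s, t))
    return tree

def prim_mst_cost(points):
    """Offline optimum: Prim's algorithm on the complete Euclidean graph."""
    best = {v: math.dist(points[0], points[v]) for v in range(1, len(points))}
    total = 0.0
    while best:
        v = min(best, key=best.get)
        total += best.pop(v)
        for w in best:
            best[w] = min(best[w], math.dist(points[v], points[w]))
    return total

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(50)]
greedy = sum(math.dist(pts[i], pts[j]) for i, j in greedy_online_tree(pts))
opt = prim_mst_cost(pts)
print(greedy / opt)  # within the 2(ln t + 1) bound shown later in this chapter
```

On random instances the observed ratio is typically far below the worst-case logarithmic bound; the lower-bound construction at the end of this chapter shows why the bound is nevertheless tight.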
Westbrook and Yan [WY95a] consider the directed version of the problem, where the task is to find a subgraph containing a directed path from a given root r to every terminal. They give an Ω(t) lower bound for any randomized online algorithm facing an oblivious adversary. Subsequent work focuses on refining the competitive analysis by parametrizing the competitive guarantee by the edge asymmetry of the graph. This quantity, denoted by α, is defined as the maximum ratio between the costs of any two anti-parallel arcs. Faloutsos, Pankaj, and Sevcik [FPS02] show that the competitive guarantee of any online algorithm is at least Ω(min{α log t / log α, t}), and that a greedy algorithm is O(min{α log t, t})-competitive. The lower bound is later improved by Angelopoulos [Ang07] to Ω(min{max{α log t / log α, α log t / log log t}, t^{1−ε}}). Finally, the same author shows in [Ang08] an almost matching competitive guarantee for the greedy algorithm. For other kinds of generalizations where multiple connections of given pairs of nodes are required, see [AHB04, WY95b, FPS02].

Larger Budget

To the best of our knowledge, the literature on multiple rearrangements is significantly less abundant. Imase and Waxman [IW91] consider the Robust Minimum Steiner Tree problem where nodes might arrive at or depart from the terminal set. For this setting, they give an algorithm that is 8-competitive and performs in t iterations at most O(t^{3/2}) rearrangements. With our previous definitions, this translates into an algorithm that uses a (non-constant) amortized budget of O(t^{1/2}). It is worth noticing that for the case in which no node leaves the terminal set their algorithm achieves a competitive guarantee of 4. Moreover, for the Robust MST problem (where no node leaves the graph), their algorithm is 2-competitive.

Other Related Models

Another way of bounding the changes of the trees is by considering the number of critical iterations.
In a solution sequence of the Robust Minimum Steiner Tree problem, an iteration is said to be non-critical if the tree constructed in the iteration only adds new edges to the previous tree; it is critical otherwise. For the problem of minimizing the total distance in the tree among all pairs of terminals, Thibault and Laforest [TL07] give a 12-competitive algorithm with O(log t) critical iterations. The same authors consider the problem of minimizing the diameter of the tree² for the case in which nodes can only leave the terminal set. In this setting they give a constant competitive algorithm with O(log t) critical iterations [TL06].

Dynia, Korzeniowski, and Kutylowski [DKK07] consider an online MST problem where the cost of some edge increases or decreases by one in each iteration. The goal is to maintain a sequence of MSTs (that is, trees that are 1-competitive in terms of cost), and the objective is to minimize the number of rearrangements. They give a deterministic algorithm that is O(t²)-competitive, and show that this is best possible up to constant factors. Additionally, they present a randomized algorithm with expected competitive ratio O(t · log t).

² The diameter of a graph is the maximum distance between any pair of nodes.

Traveling Salesman

The Traveling Salesman problem (TSP) is one of the most studied problems in combinatorial optimization. The literature on this problem is vast; we only review here the settings that are most closely related to the topics of this thesis. For a more general introduction see [LLRKS85, Lap92]. In metric graphs, the prominent Christofides' algorithm [Chr76] achieves a 3/2-approximation guarantee. Despite being a more than 30-year-old result, this is the best known upper bound so far. An important special case is when the costs correspond to Euclidean distances in the plane.
For this case a PTAS is derived by Arora [Aro96], who even extends this result to any constant-dimensional Euclidean space. Mitchell [Mit99] independently discovered a PTAS for the plane only a few months later. A different setting is considered by Papadimitriou and Yannakakis [PY93], in which all the costs belong to {1, 2} (and thus they are metric). They show that there exists a 7/6-approximation algorithm. On the negative side, they show that this case is APX-hard, and therefore it does not admit a PTAS unless P = NP. The same result then holds for metric TSP. The upper bound for this special case was later improved by Berman and Karpinski [BK06] to 8/7. Another special case is graphic TSP, where costs correspond to shortest path distances in an unweighted underlying graph. If the underlying graph is cubic, Boyd, Sitters, van der Ster, and Stougie [BSvdSS11] give an algorithm with a worst case guarantee of 4/3. Very recently, Mömke and Svensson [MS11] generalized this result to subcubic graphs. In the same paper they show a 1.461-approximation algorithm for general underlying graphs.

Online variants of the Traveling Salesman problem have also been studied. Consider a salesman that travels at unit speed on a metric space. Starting from a given origin, the salesman traverses the graph with the objective of serving requests that are revealed online. The salesman can change course en route to adjust for new requests. The objective is to minimize the total traveling time until returning to the origin. This problem was first introduced by Ausiello, Feuerstein, Leonardi, Stougie, and Talamo [AFL+01], who derive a 2-competitive algorithm for a natural class of metric spaces. This algorithm is best possible; however, it needs to solve TSP subinstances to optimality and thus it does not run in polynomial time unless P = NP.
The 2-competitive algorithm was generalized by Jaillet and Wagner [JW07] in two directions: to the case in which there are precedence and capacity constraints, and to the case of m salesmen traversing the graph.

Our Contribution

Our main concern is the study of the Robust MST problem with constant amortized budget. As mentioned before, we assume metric costs and that nodes can only be added to the terminal set. For this setting, we derive a (1+ε)-competitive algorithm that needs O((1/ε) log(1/ε)) amortized budget, for any ε > 0. This result is presented in Section 3.5. Note that the competitive ratio is computed by comparing our solution to the minimum spanning tree of the terminal set, whose cost is within a factor of 2 of the cost of the minimum Steiner tree [GP68, Vaz01]. This immediately implies a (2+ε)-competitive algorithm for the Robust Steiner Tree problem with the same amortized budget. Our result significantly improves the 4-competitive algorithm with O(t^{1/2}) amortized budget given by Imase and Waxman [IW91], and constitutes the first advancement in 20 years. Moreover, we show that any (1+ε)-competitive algorithm for the Robust MST problem needs an amortized budget of Ω(1/ε), and thus our algorithm is best possible up to logarithmic factors.

Our algorithm is simple and easy to implement, but captures subtleties in the structure of the problem that allow the improved analysis. Similarly to the algorithm by Imase and Waxman [IW91], the overall idea of the algorithm is: (1) connect a new node to its closest neighbor, and (2) iteratively improve the solution by swapping pairs of edges if the ratio of their costs is sufficiently large. We refine this idea by adding two freezing rules to the algorithm that avoid doing unnecessary swaps. The first rule avoids removing edges whose cost is low enough.
The second rule is more subtle, and avoids removing an edge if the edge that it would replace can be traced back to a subgraph whose MST has cost less than ε · OPT. Thanks to the two freezing rules, we can bound the amortized budget by exploiting the cost structure of greedy edges derived by Alon and Azar [AA93].

Our result also implies that algorithms with amortized budget are significantly more powerful than their non-amortized counterparts. Indeed, we also give a simple example showing that no online algorithm can be (2−ε)-competitive for any ε > 0 if it uses (non-amortized) constant budget. It is, however, an important open question whether there exists a constant competitive algorithm that needs constant budget. In Section 3.6 we study the possibility of the existence of such an algorithm. To this end we consider the problem under full information, that is, the input sequence of vertices and the cost function are known in advance. We show that any algorithm with unit budget has a competitive ratio of Ω(log t). On the other hand, we give a polynomial time 14-competitive algorithm with budget 2. Roughly speaking, the algorithm works by dividing the iterations into phases, defined by the iterations in which the optimal cost doubles (that is, we use a doubling framework [BCR93, AAF+97, CK06, Sha07, LNRW10]). In each phase we construct a sequence of trees based on a tour that approximates the optimal solution. The fact that the node degree in a tour is 2 allows us to construct a sequence of trees with budget 2. Additionally, we propose a simple online greedy algorithm that needs a budget of 2. We state a structural condition on the behavior of optimal solutions that guarantees that this algorithm is constant competitive. We conjecture that this condition holds for every input sequence.

In Section 3.7 we consider a robust version of the Traveling Salesman problem, defined analogously to the Robust MST problem.
We show that any algorithm for the Robust MST problem can be translated to this setting by increasing the competitive ratio by a factor of 2 and the budget by a factor of 4. To show this result we consider the classic shortcutting technique for converting trees to tours [Chr76]. This technique duplicates the edges of the tree and constructs an Eulerian tour on this graph. The Hamiltonian tour is then constructed by following the Eulerian tour and skipping nodes that have already been visited. We notice that a single rearrangement of the tree modifies the Eulerian tour only slightly. However, visiting the first appearance of a node might provoke an unbounded change in the Hamiltonian tour. We can repair this problem by fixing the copies of the nodes that we visit on the corresponding Eulerian tours, thus obtaining a robust version of the shortcutting technique.

Problem Definition

In what follows we assume basic familiarity with graph terminology; see, e.g., [Die05, Sch03]. An instance of the Robust Minimum Spanning Tree (MST) problem is defined as follows. Consider a sequence of nodes v_0, v_1, ..., v_t, ... arriving online one by one. In iteration t ≥ 0 node v_t appears together with all edges v_t v_s for s ∈ {0, ..., t−1}. The cost c(e) ≥ 0 of an edge e is revealed with the edge's appearance. We assume that the edges are undirected and that the costs satisfy the triangle inequality, that is, c(vw) ≤ c(vz) + c(zw) for all nodes v, w, z. For each iteration t, the current graph is denoted by G_t = (V_t, E_t) where V_t = {v_0, ..., v_t} and E_t = V_t × V_t, that is, G_t is a complete graph. We are interested in constructing an online sequence T_0, T_1, T_2, ... where T_0 = ∅ and for each t ≥ 1 the following two properties are satisfied.

(P1) Feasibility: T_t is a spanning tree of G_t.

(P2) Robustness: |T_t \ T_{t−1}| ≤ k.
Additionally, we consider an amortized version of Property (P2), in which we bound the average difference between T_t and T_{t−1}, that is,

(P2)' Amortized robustness: Σ_{s=1}^{t} |T_s \ T_{s−1}| ≤ k · t.

An algorithm whose sequence of trees satisfies (P2) is said to need a budget of k, and for (P2)' it is said to need an amortized budget of k. For measuring the quality of our algorithms, we consider classic online competitive analysis. Let OPT_t be the cost of an MST of G_t, and for a given set of edges E denote c(E) := Σ_{e∈E} c(e). We say that an algorithm is α-competitive for some α ≥ 1, if for any input sequence the algorithm computes a sequence of trees T_0, ..., T_t, ... such that c(T_t) ≤ α · OPT_t for each t.

The main contribution of this chapter is an online (1+ε)-competitive algorithm with constant amortized budget O((1/ε) log(1/ε)). On the other hand, as we see in the next lemma, there is no (2−ε)-competitive algorithm with (non-amortized) constant budget. Thus, there exists an intrinsic difference in the power of algorithms satisfying Property (P2) or (P2)'.

Lemma 3.1. For every fixed ε > 0 and k ∈ N_0, there is no (2−ε)-competitive algorithm with budget k.

Proof. Let us fix a value n ≥ 1. Consider a complete graph with vertices v_0, ..., v_n such that c(v_t v_n) = 1 for all t ≤ n−1. All other edges v_s v_t with s, t ≤ n−1 have cost 2. Note that this graph is metric and that the minimum spanning tree of G_n is a star centered at v_n whose total cost is n. However, in any sequence of trees, tree T_{n−1} can have only edges of cost 2. Hence, tree T_n has to contain at least n − k edges of weight 2 and thus c(T_n) ≥ 2(n − k) + k = 2n − k. We conclude that the competitive ratio of the algorithm is at least (2n − k)/n = 2 − k/n, which is larger than 2 − ε for sufficiently large n.
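The instance in the proof of Lemma 3.1 is easy to check numerically. The sketch below (a minimal illustration; the Kruskal helper is ours, not part of the thesis) builds the cost function, verifies that the MST of G_n is the star of cost n while every spanning tree of G_{n−1} costs 2(n−1), and evaluates the forced ratio (2n − k)/n for a sample budget.

```python
import itertools

def kruskal_cost(n, cost):
    """Cost of an MST of the graph on {0,...,n-1} with edge costs `cost`."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    total = 0
    for u, v in sorted(cost, key=cost.get):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += cost[(u, v)]
    return total

n, k = 100, 3
# Lemma 3.1 instance: edges incident to v_n cost 1, all other edges cost 2.
cost_n = {(s, t): (1 if t == n else 2)
          for s, t in itertools.combinations(range(n + 1), 2)}
cost_prev = {e: c for e, c in cost_n.items() if e[1] != n}

assert kruskal_cost(n + 1, cost_n) == n            # star centered at v_n
assert kruskal_cost(n, cost_prev) == 2 * (n - 1)   # only cost-2 edges exist
# With budget k, T_n keeps at least n-k of the cost-2 edges of T_{n-1}:
print((2 * (n - k) + k) / n)  # -> 1.97, approaching 2 as n grows
```

Increasing n (or decreasing k) pushes the ratio arbitrarily close to 2, matching the lemma.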
More generally, note that the same construction in the proof of this lemma implies that there is no (2−ε)-competitive algorithm for k = o(n), where n is the number of iterations. Additionally, there is no optimal algorithm (that is, 1-competitive) with constant amortized budget. This can be shown with an example given by Imase and Waxman [IW91], which we review below.

Lemma 3.2. There is no optimal online algorithm for the Robust MST problem with constant amortized budget.

Proof. Consider a sequence of vertices v_0, ..., v_n, where the cost of each edge v_t v_s for s < t equals 1 − t/(2n). Note that the costs satisfy the triangle inequality since they all belong to the interval [1/2, 1] and thus they differ by at most a factor of 2. For this input sequence, the minimum spanning tree of each G_t is a star centered at v_t. Thus, if T_0, ..., T_n is a sequence of optimal solutions we have that |T_t \ T_{t−1}| = t for each t ≥ 1. This implies that Σ_{t=1}^{n} |T_t \ T_{t−1}| ∈ Ω(n²), and hence the needed amortized budget is at least Ω(n). We conclude that it is necessary to relax the competitive guarantee to (1+ε) to obtain an algorithm with constant amortized budget.

Basic Properties

Locally Optimal Solutions

In what follows we review some basic properties of spanning trees. For some α ≥ 1, we say that a spanning tree T is locally α-approximate if it satisfies the following: for any pair of edges h ∈ T, f ∉ T, if (T ∪ {f}) \ {h} is a tree then c(h) ≤ α · c(f). We remark that (T ∪ {f}) \ {h} is a tree if and only if h belongs to the unique cycle contained in T ∪ {f}. We start by showing that locally α-approximate solutions are also α-approximate. First we need a technical lemma that allows us to find a one-to-one correspondence between the edges of two trees. With this we can charge the cost of the edges of the locally α-approximate solution to edges of the optimum.

Lemma 3.3. Consider a graph G = (V, E), and let T_1 and T_2 be two spanning trees of this graph.
Then there exists a bijection Ψ : T_1 \ T_2 → T_2 \ T_1 such that each edge e ∈ T_1 \ T_2 belongs to the unique cycle contained in T_1 ∪ {Ψ(e)}.

We notice that this lemma is a special case of a more general property of matroids; see Lemma 2.38 (Chapter 2). A proof can be found in classic books on matroid theory, for example [Sch03, Corollary 39.12a]. It is also proved implicitly for the tree case by Imase and Waxman [IW91, Lemma 4]. We omit the proof.

Lemma 3.4. Let α ≥ 1. For a given graph G, consider a locally α-approximate spanning tree T. Then T is α-approximate, that is, c(T) ≤ α · OPT where OPT is the cost of a minimum spanning tree of G.

Proof. Let T* be a minimum spanning tree. By Lemma 3.3, there exists a bijection Ψ : T \ T* → T* \ T such that each edge h ∈ T \ T* belongs to the unique cycle in T ∪ {Ψ(h)}. In other words, (T ∪ {Ψ(h)}) \ {h} is a tree, and thus c(h) ≤ α · c(Ψ(h)). We conclude that

c(T) = c(T ∩ T*) + c(T \ T*) ≤ c(T ∩ T*) + α · Σ_{h ∈ T \ T*} c(Ψ(h)) ≤ c(T ∩ T*) + α · c(T* \ T) ≤ α · c(T*),

where the second-to-last inequality follows since Ψ is bijective.

In particular, for α = 1, this lemma implies that any locally optimal solution is also optimal. The proof technique of this lemma will be useful to show the competitive guarantee of our algorithm.

Shortcut Tours

We now state the classic connection between minimum spanning trees and tours. A Hamiltonian tour (or tour for short) is a cycle that visits each vertex exactly once. Given a tree T, we can easily compute a tour Q whose cost is at most 2 · c(T). To this end, consider the multigraph 2 · T obtained by duplicating each edge in T. Note that 2 · T is Eulerian (each node has even degree). It is easy to see [Die05] that the graph (V, 2 · T) then admits an Eulerian walk, that is, a sequence of nodes W = x_1, ..., x_r where x_1 = x_r, such that each edge in 2 · T is traversed exactly once.
More precisely, we have that for every e ∈ T there exist exactly two distinct indices i, j ∈ {1, ..., r−1} such that e = x_i x_{i+1} = x_j x_{j+1}. Recall that we assume a complete graph. In order to obtain a Hamiltonian tour, we can traverse the nodes in the order given by the walk, and skip the vertices already visited. In other words, we take shortcuts. By the metric property, taking shortcuts can only diminish the total cost of the solution. This yields the following algorithm, which outputs a tour Q that is said to be a shortcut tour of T.

Algorithm Tour-Shortcut
Input: A spanning tree T of graph G = (V, E).
1. Create an Eulerian walk W = x_1, ..., x_r for the graph (V, 2 · T).
2. Compute indices ℓ_1 < ℓ_2 < ... < ℓ_{|V|} where for all i ∈ {1, ..., |V|} it holds that x_j ≠ x_{ℓ_i} for each j < ℓ_i.
3. Return Q := {x_{ℓ_1} x_{ℓ_2}, x_{ℓ_2} x_{ℓ_3}, ..., x_{ℓ_{|V|−1}} x_{ℓ_{|V|}}, x_{ℓ_{|V|}} x_{ℓ_1}}.

In the following observation we prove that the cost of a shortcut tour of T is at most 2 · c(T).

Observation 3.5. If Q is the output of Algorithm Tour-Shortcut on input T then c(Q) ≤ 2 · c(T).

Proof. Following the notation of the algorithm, note that ℓ_1 = 1 and let us call ℓ_{|V|+1} := r. The metric property implies that for all i ∈ {1, ..., |V|},

c(x_{ℓ_i} x_{ℓ_{i+1}}) ≤ Σ_{j=ℓ_i}^{ℓ_{i+1}−1} c(x_j x_{j+1}).

By summing the last inequality over all i ∈ {1, ..., |V|} we obtain that c(Q) ≤ c(2 · T) = 2 · c(T).

This observation has the following important consequence.

Lemma 3.6 ([Chr76]). Given a complete graph G = (V, E) with metric costs c, let OPT be the cost of an MST, and let Q* be a minimum cost tour. Then OPT ≤ c(Q*) ≤ 2 · OPT.

Proof. The fact that OPT ≤ c(Q*) follows since removing any edge of Q* yields a spanning tree. To show that c(Q*) ≤ 2 · OPT, consider a minimum spanning tree T* and let Q be any shortcut tour of T*. Then c(Q*) ≤ c(Q) ≤ 2 · c(T*) = 2 · OPT.

This lemma also implies an important property of minimum spanning trees of a subgraph of G.
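Algorithm Tour-Shortcut and Observation 3.5 can be exercised on random instances. In the sketch below (an illustration with our own helper names and a hypothetical Euclidean instance), the Eulerian walk of 2 · T is realized implicitly by a depth-first traversal of T, so the shortcut tour simply visits the vertices in order of first appearance; the assertion checks the bound c(Q) ≤ 2 · c(T).

```python
import math
import random
from collections import defaultdict

def prim_mst(points):
    """Edge list of an MST of the complete Euclidean graph (Prim)."""
    best = {v: (math.dist(points[0], points[v]), 0) for v in range(1, len(points))}
    edges = []
    while best:
        v = min(best, key=lambda u: best[u][0])
        d, p = best.pop(v)
        edges.append((p, v))
        for w in best:
            dw = math.dist(points[v], points[w])
            if dw < best[w][0]:
                best[w] = (dw, v)
    return edges

def shortcut_tour(n, tree):
    """Tour-Shortcut: first-visit order of a DFS walk of the doubled tree."""
    adj = defaultdict(list)
    for u, v in tree:
        adj[u].append(v)
        adj[v].append(u)
    order, seen, stack = [], set(), [0]
    while stack:
        u = stack.pop()
        if u in seen:
            continue  # skipping repeated vertices = taking shortcuts
        seen.add(u)
        order.append(u)
        stack.extend(adj[u])
    return [(order[i], order[(i + 1) % n]) for i in range(n)]

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(30)]
T = prim_mst(pts)
Q = shortcut_tour(len(pts), T)
c = lambda E: sum(math.dist(pts[u], pts[v]) for u, v in E)
assert c(Q) <= 2 * c(T) + 1e-9  # Observation 3.5
print(c(Q) / c(T))
```

Since each tree edge is walked exactly twice in the doubled tree and the triangle inequality only shortens the skipped segments, the assertion holds on every metric instance, not just this random one.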
It is easy to see that given a subgraph G′ of G, a minimum spanning tree T′ of G′ might have larger cost than that of G (consider for example the graphs corresponding to iterations n−1 and n in Lemma 3.1). However, tree T′ cannot be arbitrarily more expensive than the minimum spanning tree of G. Indeed, its cost is at most twice the optimal cost in G. This property will be a useful tool to understand the cost structure of the greedy edges chosen by our algorithm.

Lemma 3.7. Let G be a complete metric graph and let OPT be the cost of a minimum spanning tree of G. If G′ is a complete subgraph of G with optimum cost OPT′, then OPT′ ≤ 2 · OPT.

Proof. Let Q be a minimum cost tour of G. Consider now the tour Q′ constructed by taking shortcuts on the nodes not in G′. That is, we construct Q′ by traversing the vertices of G′ in the same order given by Q, skipping any vertex not in G′. Similarly as in Observation 3.5, the metric property implies c(Q′) ≤ c(Q). Since OPT′ ≤ c(Q′) ≤ c(Q), the lemma follows from Lemma 3.6.

Decreasing Gains and Supermodularity

We now discuss some basic properties of minimum spanning trees and supermodularity. The question that we address now is: how much can we decrease the cost of our solution if one extra edge is available in the graph? We study this question for general graphs, which are not necessarily complete nor metric. The results will be a useful tool when analyzing online algorithms. They will help us compare the gain in cost of adding a single edge to different trees in the online sequence. The observations that we present in the following are analogous to properties presented in Chapter 2. We shortly describe the basic definitions and results with the goal of making the chapters easier to read independently.

Consider a graph G = (V, E) (not necessarily complete) and let T := {T ⊆ E : T is a spanning tree}.
For a given non-negative cost function c, we analyze the function R : 2^E → R ∪ {∞} defined as

R(X) := min{c(T) : T ⊆ X, T ∈ T}

for any X ⊆ E. We notice that if X is not spanning then R(X) = ∞. To shorten notation we define ∞ − ∞ := ∞. We show that function R is supermodular, that is,

• (Supermodularity) for any sets X, Y ⊆ E with X ⊆ Y and e ∈ E it holds that R(X) − R(X ∪ {e}) ≥ R(Y) − R(Y ∪ {e}).

Intuitively, supermodularity means that the decrease of function R obtained by adding an element to a set cannot increase if the set is enlarged. Similarly, a function R is called submodular if −R is supermodular. We remark that supermodularity can be interpreted as a discrete analogue of concavity.

Lemma 3.8. Function R is supermodular.

Proof. Let c_max := max{c(e) : e ∈ E}, and consider a new cost function c′ defined as c′(e) := c_max − c(e) for all e ∈ E. Let us denote by F the set of all forests in G. We consider the function R′ : 2^E → R_{≥0} defined as

R′(X) := max{c′(T) : T ⊆ X, T ∈ F}.

Notice that R′ is the weighted rank function of a graphic matroid and thus by Lemma 2.5, R′ is submodular. Consider two sets X, Y ⊆ E with X ⊆ Y and an element e ∈ E. If X is not spanning then R(X) = ∞ and thus the supermodularity property follows directly. We can then assume that X and Y are spanning sets. Consider any spanning set S ⊆ E. Since c′ is non-negative, the maximum in the definition of R′(S) is always attained by a spanning tree (which has cardinality |V| − 1). Thus,

R′(S) = max{c′(T) : T ⊆ S, T ∈ T} = c_max(|V| − 1) + max{−c(T) : T ⊆ S, T ∈ T} = c_max(|V| − 1) − R(S).

Using the submodularity of R′ we have that

R′(X) − R′(X ∪ {e}) ≤ R′(Y) − R′(Y ∪ {e}).

We conclude the lemma since R′(X) − R′(X ∪ {e}) = −R(X) + R(X ∪ {e}) and R′(Y) − R′(Y ∪ {e}) = −R(Y) + R(Y ∪ {e}).

We now interpret the supermodularity of R in graph-theoretic terms. To this end we first compute R(X) − R(X ∪ {e}) explicitly.

Lemma 3.9.
Consider a graph G = (V, E) and let X ⊆ E and e ∈ E. Let T be a minimum spanning tree of (V, X), and let d be a maximum cost edge in the unique cycle contained in T ∪ {e}. Then R(X) − R(X ∪ {e}) = c(d) − c(e).

Proof. Consider the tree T′ := (T ∪ {e}) \ {d}. We use Lemma 3.4 (for α = 1) to prove that this tree is a minimum spanning tree of (V, X ∪ {e}). To this end, consider any edge f ∈ X \ T′, and let h′ be a maximum cost element in the unique cycle in T′ ∪ {f}. By Lemma 3.4, it is enough to show that c(f) ≥ c(h′). Notice that this clearly holds if f = d. Then, we can assume that f ∈ X \ T. Since T is a minimum spanning tree of (V, X), we have R(T) = R(T ∪ {f}) = R(X). Then, the supermodularity of R implies that

0 = R(T) − R(T ∪ {f}) ≥ R(T ∪ {e}) − R(T ∪ {e, f}).

Also, since d is the maximum cost element in the cycle in T ∪ {e}, Lemma 3.4 implies that T′ is a minimum cost tree of (V, T ∪ {e}). Thus, R(T ∪ {e}) = c(T′). On the other hand, (T′ ∪ {f}) \ {h′} is a spanning tree of (V, T ∪ {e, f}), and therefore R(T ∪ {e, f}) ≤ c((T′ ∪ {f}) \ {h′}). Recollecting our observations we obtain that

0 ≥ R(T ∪ {e}) − R(T ∪ {e, f}) ≥ c(T′) − c((T′ ∪ {f}) \ {h′}) = −(c(f) − c(h′)).

Then c(f) ≥ c(h′), which implies that T′ is a minimum spanning tree of (V, X ∪ {e}). This implies the lemma.

This property can also be interpreted as follows. Consider a graph (V, X), and let T be a minimum spanning tree of it. If e is an edge not in X, a minimum spanning tree of (V, X ∪ {e}) can be constructed as follows. Let h be a maximum cost edge in the unique cycle of T ∪ {e}. Then (T ∪ {e}) \ {h} is a minimum spanning tree of (V, X ∪ {e}). Iterating this idea we can compute an MST for a graph (V, Y) given an MST for a graph (V, X) with X ⊆ Y. From a graph perspective the supermodularity of R can be interpreted with the following result.

Property 3.10.
Consider two spanning graphs G_1 = (V, E_1) and G_2 = (V, E_2) with E_1 ⊆ E_2, a non-negative cost function c on the edges, and an edge e with both endpoints in V. Let T_1 and T_2 be minimum spanning trees of G_1 and G_2, respectively. If d_1 ∈ E_1 is a maximum cost edge in the unique cycle in T_1 ∪ {e}, and d_2 ∈ E_2 is a maximum cost edge in the unique cycle in T_2 ∪ {e}, then c(d_1) ≥ c(d_2).

Proof. The last lemma implies that R(E_i) − R(E_i ∪ {e}) = c(d_i) − c(e) for i ∈ {1, 2}. Then, the supermodularity of R implies that c(d_1) − c(e) ≥ c(d_2) − c(e). The property follows.

An easy consequence of this fact is the following property.

Property 3.11. Consider a spanning graph G = (V, E) with a non-negative cost function c on the edges, and let e be an edge. Let T be any spanning tree of G and T* a minimum spanning tree of G. If d is a maximum cost edge in the unique cycle in T ∪ {e}, and d* is a maximum cost edge in the unique cycle in T* ∪ {e}, then c(d) ≥ c(d*).

Proof. It is enough to apply the previous property to the graphs G_1 = (V, T) and G_2 = (V, E).

The Unit Budget Case

Imase and Waxman [IW91] study the Robust Spanning Tree and Robust Steiner Tree problems with budget k = 1. They show that the greedy algorithm, which in every iteration t connects v_t to its closest neighbor in V_{t−1}, is O(log t)-competitive. They also show that this result is tight up to constant factors. We now revisit these results for the Robust MST problem. We show the upper bound for the greedy algorithm with a simpler analysis derived by Alon and Azar [AA93]. Their analysis uses an important fact about the cost structure of the greedy edges. This result, presented in the following, will also be of great importance for the analysis of our more elaborate algorithm in the next section.

Lemma 3.12 ([AA93]). For every t ≥ 1, let g_t be the edge inserted into the solution by the greedy algorithm in iteration t, so that g_t ∈ arg min{c(v_t v_s) : 0 ≤ s ≤ t − 1}. Sort the elements g_1, . . .
, g_t and relabel them as e_1, ..., e_t so that c(e_1) ≥ c(e_2) ≥ ... ≥ c(e_t). Then

c(e_j) ≤ 2 · OPT_t / j for all j ∈ {1, ..., t}.

Proof. For each j ≥ 1, let w_j be the vertex associated with edge e_j, that is, w_j = v_ℓ where e_j = g_ℓ. Also define w_0 := v_0. Let F_j be the subgraph induced by {w_0, w_1, ..., w_j}, and let OPT(F_j) be the cost of a minimum spanning tree of this subgraph. Notice that any edge w_k w_ℓ for k, ℓ ≤ j satisfies c(w_k w_ℓ) ≥ c(e_j). Indeed, if node w_ℓ is revealed after w_k, then c(w_k w_ℓ) ≥ c(e_ℓ) ≥ c(e_j). This implies that any edge in F_j has cost at least c(e_j) and thus OPT(F_j) ≥ j · c(e_j). This inequality and Lemma 3.7 imply that

c(e_j) ≤ OPT(F_j)/j ≤ 2 · OPT_t / j.

Lemma 3.13. The greedy algorithm with budget k = 1 is 2(ln(t) + 1)-competitive.

Proof. Using the notation of the last lemma, notice that the tree constructed by the greedy algorithm in iteration t corresponds to {g_1, ..., g_t} = {e_1, ..., e_t}. Thus,

c(T_t) = Σ_{j=1}^{t} c(e_j) ≤ 2 · OPT_t · Σ_{j=1}^{t} 1/j ≤ 2 · OPT_t · (1 + ∫_1^t dx/x) = 2 · OPT_t · (1 + ln t).

We finish this section by showing that it is not possible to obtain a better than Ω(log t) competitive guarantee for the unit budget case. This follows from the same lower bound presented for the Steiner tree problem in [IW91]. However, for the MST problem, essentially the same proof shows something even stronger: the lower bound holds even if the costs correspond to Euclidean distances on the real line, and if the whole input sequence of nodes is given to the algorithm in advance.

Lemma 3.14. The competitive ratio of any algorithm with budget k = 1 is in Ω(log t). This holds even if the costs correspond to Euclidean distances in R, and the sequence of node arrivals is known to the algorithm in advance.

Proof. We consider a sequence of nodes that lie in the interval [0, 1] ⊆ R. The costs of the edges correspond to the distances of the vertices on the real line. Our instance is constructed in phases.
The first phase P_0 = {v_0, v_1} contains two vertices v_0 = 0 and v_1 = 1. We now construct phase P_ℓ from P_{ℓ−1}. Assume that P_{ℓ−1} = {w_1, . . . , w_r} for some r, where w_i < w_{i+1} for all i ∈ {1, . . . , r − 1}. We define P_ℓ := {w_1, x_1, w_2, x_2, . . . , w_{r−1}, x_{r−1}, w_r} where x_i = (w_i + w_{i+1})/2. That is, in phase ℓ we add, one after another, the vertices in the middle point of every pair of consecutive vertices of the previous phase. Also, note that in phase ℓ ≥ 1 we add 2^{ℓ−1} new vertices to the instance. Since any algorithm with budget k = 1 must connect a new vertex x_i in P_ℓ to some vertex, the best that it can do is connect x_i to w_i or w_{i+1}. This operation adds a cost of 1/2^ℓ to the solution. Thus, during phase ℓ the cost of the solution increases by 1/2. Since in phase 0 the cost is increased by 1, we conclude that the total cost of the solution at the end of phase ℓ is ℓ/2 + 1. On the other hand, the cost of the offline optimum is always equal to 1 (obtained by traversing the vertices from left to right). We conclude that the competitive ratio of any online algorithm is at least ℓ/2 + 1. The fact that at the end of phase ℓ we are in iteration t = |P_ℓ| − 1 = 2^ℓ implies a lower bound of (log_2 t)/2 + 1 on the competitive ratio.

3.5. A Near-Optimal Algorithm with Amortized Constant Budget

In this section we give a (1 + ε)-competitive algorithm for the Robust MST problem with amortized budget O((1/ε)·log(1/ε)) for any ε > 0. Recall that for the case in which no node leaves the terminal set, Imase and Waxman [IW91] proposed an algorithm that achieves a constant competitive factor and needs amortized budget O(t^{1/2}). Our result can be seen as a refinement of this algorithm.
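The phase construction in the proof of Lemma 3.14 can be checked numerically. The following is a minimal sketch (the function name is illustrative, not from the text): phase 0 reveals the points 0 and 1, phase ℓ reveals the midpoint of every pair of consecutive points, and a budget-1 algorithm must pay at least the distance 1/2^ℓ for each new point.

```python
# Sanity check for the lower-bound construction: reveal midpoints phase by
# phase and charge each new point the distance to its nearest neighbor,
# which is the least any budget-1 algorithm can pay.

def simulate_phases(num_phases):
    """Return (lower bound on any budget-1 algorithm's cost, offline OPT)."""
    points = [0.0, 1.0]
    alg_cost = 1.0                           # phase 0: connect v1 = 1 to v0 = 0
    for l in range(1, num_phases + 1):
        points.sort()
        mids = [(a + b) / 2 for a, b in zip(points, points[1:])]
        alg_cost += len(mids) * (1 / 2 ** l)  # 2**(l-1) points, 1/2**l each
        points.extend(mids)
    return alg_cost, 1.0                     # OPT traverses [0, 1] left to right
```

After ℓ phases the simulated cost is ℓ/2 + 1 while the optimum stays 1, matching the (log_2 t)/2 + 1 bound since t = 2^ℓ.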
The algorithm of Imase and Waxman is a simple greedy algorithm that, for each t ≥ 1, constructs the solution tree T_t based on T_{t−1} as follows: set T_t := T_{t−1} ∪ {g_t}, where g_t is the shortest connection between v_t and any other node in V_{t−1}; for any edge f ∉ T_t find an edge h of maximum cost in the cycle contained in T_t ∪ {f}, and if c(h) ≥ 2c(f) swap edges h and f in T_t, i.e., set T_t := (T_t ∪ {f}) \ {h}; repeat this last step until there is no more pair of edges to swap. Notice that this algorithm is 2-competitive by Lemma 3.4 (and by Lemma 3.7 this implies that the algorithm is 4-competitive for the Robust Steiner Tree problem). Additionally, Imase and Waxman [IW91] show that the amortized budget of the algorithm is O(t^{1/2}) and they conjecture that the algorithm needs amortized budget 1.

We refine this algorithm so that it is (1 + O(ε))-competitive and admits a stronger analysis on its amortized budget. First, we reduce the competitive ratio by performing swaps as long as c(h) ≥ (1 + ε) · c(f) instead of c(h) ≥ 2c(f). Additionally, to decrease the budget we use two freezing rules that avoid performing unnecessary swaps. The intuition for these freezing rules is as follows. Note that if at iteration t the optimal value OPT_t is much higher than OPT_s for some s < t, then the edges in T_s – whose total cost is approximately OPT_s – are already very cheap. Thus, replacing these edges by cheaper ones would only waste rearrangements. Note that although OPT_s is not monotonically increasing in s, it cannot decrease dramatically because of Lemma 3.7. To simplify the analysis we ignore these minor decreases by considering the maximum optimum value up to t, that is, OPT^max_t := max{OPT_s : 1 ≤ s ≤ t}. We remark that OPT^max_t is non-decreasing in t and also, by Lemma 3.7, it holds that OPT_t ≤ OPT^max_t ≤ 2 · OPT_t.
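The swap step of the Imase–Waxman algorithm can be sketched as follows. This is a minimal illustration under simplifying assumptions (a dense cost dictionary on vertex pairs, strictly positive costs, a tree stored as a set of frozenset edges); the helper names are not from the thesis.

```python
from collections import defaultdict, deque

def tree_path(tree, u, v):
    """Edges on the unique u-v path in the tree, found by BFS."""
    adj = defaultdict(list)
    for e in tree:
        a, b = tuple(e)
        adj[a].append(b)
        adj[b].append(a)
    parent = {u: None}
    queue = deque([u])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in parent:
                parent[y] = x
                queue.append(y)
    path, x = [], v
    while parent[x] is not None:
        path.append(frozenset((x, parent[x])))
        x = parent[x]
    return path

def improve(tree, vertices, cost, factor=2.0):
    """Swap f in / h out while some cycle edge h has c(h) >= factor * c(f).
    Assumes positive costs, so every swap strictly decreases the tree cost."""
    changed = True
    while changed:
        changed = False
        for a in vertices:
            for b in vertices:
                f = frozenset((a, b))
                if a == b or f in tree:
                    continue
                cycle = tree_path(tree, a, b)   # cycle in tree + {f}, minus f
                h = max(cycle, key=cost.get)
                if cost[h] >= factor * cost[f]:
                    tree.remove(h)
                    tree.add(f)
                    changed = True
    return tree
```

Running the refined rule of this section amounts to calling `improve` with `factor=1 + eps` (and, in the full algorithm, additionally checking the two freezing conditions before each swap).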
With this in mind we define ℓ(t) as the largest iteration whose optimum value is negligible with respect to OPT^max_t, i.e.,

ℓ(t) := max{s ∈ {0, . . . , t} : OPT^max_s ≤ ε·OPT^max_t}.

Note that ℓ(t) is also non-decreasing, meaning that if T_s has negligible cost when compared to OPT^max_t, then the same will hold when compared to OPT^max_{t′} for t′ ≥ t.

For our first freezing rule we define sequences of edges (g_s^0, . . . , g_s^{i(s)}), where g_s^0 corresponds to the greedy edge added at iteration s (that is, an edge connecting v_s to one of its closest neighbors in V_{s−1}). At the moment where edge g_s^0 is removed from our solution we define g_s^1 as the element that replaces g_s^0. In general, g_s^i is the edge that was swapped in for edge g_s^{i−1}. In this way, the only edge in the sequence that belongs to the current solution is g_s^{i(s)}. With this construction, we freeze a sequence (g_s^0, . . . , g_s^{i(s)}) in iteration t if s ≤ ℓ(t). Note that since ℓ(·) is non-decreasing, once the sequence is frozen g_s^{i(s)} will stay indefinitely in the solution. As we will see later in detail, the weight of all elements in frozen sequences is at most c(T_{ℓ(t)}) ∈ O(ε·OPT_t).

Our second freezing rule is somewhat simpler. We skip swaps that remove edges that are too small, namely, smaller than ε·OPT^max_t /(t − ℓ(t)). Together with the previous rule, the edges that were not removed because of this rule number at most t − ℓ(t), and thus their total cost is at most ε·OPT^max_t.

The two freezing rules are crucial for bounding the amortized budget of the algorithm. We will bound the length of each sequence (g_s^0, . . . , g_s^{i(s)}) by using the fact that we only swap edges when their cost is decreased by a (1 + ε) factor, that is, c(g_s^i) ≤ c(g_s^{i−1})/(1 + ε) for each i. Thus, the length of this sequence is bounded by log_{1+ε} c(g_s^0) − log_{1+ε} c(g_s^{i(s)}). We can bound this quantity by using the cost structure of the greedy edges g_s^0 in Lemma 3.12 and lower bounding the cost of g_s^{i(s)} with our freezing rules.
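The sequence-length argument above can be checked numerically. The following is an illustrative sketch (names not from the text): every swap replaces an edge by one cheaper by at least a (1 + ε) factor, so a replacement sequence that starts at cost c0 and never drops below c_min has length at most log_{1+ε}(c0 / c_min).

```python
import math

def max_sequence_length(c0, c_min, eps):
    """Upper bound log_{1+eps}(c0 / c_min) on the number of swaps."""
    return math.log(c0 / c_min) / math.log(1 + eps)

def longest_shrink_sequence(c0, c_min, eps):
    """Worst case: shrink the cost by exactly (1 + eps) while staying >= c_min."""
    seq = [c0]
    while seq[-1] / (1 + eps) >= c_min:
        seq.append(seq[-1] / (1 + eps))
    return seq
```

In the analysis, c0 is bounded via the greedy-edge cost structure and c_min via freezing condition (C.iii), which is exactly what makes each sequence short.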
In what follows we explain our ideas more precisely. We start by stating our algorithm in more detail.

Algorithm Sequence-Freeze.
Input: A sequence of complete graphs G_t = (V_t, E_t) for t ≥ 0 revealed online with V_0 = {v_0} and V_t = V_{t−1} ∪ {v_t} for all t ≥ 1. A (non-negative) metric cost function c revealed together with the edges.
Define T_0 = ∅. For each iteration t ≥ 1 do as follows.
1. Let g_t^0 be any minimum cost element in {v_t v_s : 0 ≤ s ≤ t − 1}.
2. Initialize T_t := T_{t−1} ∪ {g_t^0} and i(t) := 0.
3. While there exists a pair of edges (f, h) ∈ (E_t \ T_t) × T_t such that (T_t ∪ {f}) \ {h} is a tree, and the following three conditions are satisfied:
(C.i) c(h) > (1 + ε) · c(f),
(C.ii) h = g_s^{i(s)} for some s ≥ ℓ(t) + 1, and
(C.iii) c(h) > ε·OPT^max_t /(t − ℓ(t)),
then set T_t := (T_t ∪ {f}) \ {h}, i(s) := i(s) + 1, g_s^{i(s)} := f.
4. Return T_t.

We remark that Conditions (C.ii) and (C.iii) correspond to the two freezing rules previously described.

Competitive Analysis

We now show that the algorithm is (1 + O(ε))-competitive. We break the proof of this fact into two lemmas. Let us fix a value ε < 1/7. We will show by induction that c(T_t) ≤ (1 + 7ε) · OPT_t for all t. Notice that this clearly holds for t = 1. We now fix a value of t ≥ 2 and assume by induction hypothesis that the approximation guarantee holds for all values up to t − 1. Consider the values i(s) for s ∈ {1, . . . , t} at the end of iteration t of the algorithm, so that

T_t = {g_1^{i(1)}, . . . , g_t^{i(t)}}.

Let us simplify notation by denoting ℓ = ℓ(t). We partition tree T_t into two disjoint subsets, T_t = T_t^old ∪ T_t^new, where T_t^old := {g_1^{i(1)}, . . . , g_ℓ^{i(ℓ)}} and T_t^new := {g_{ℓ+1}^{i(ℓ+1)}, . . . , g_t^{i(t)}}. We first bound the cost of T_t^old. Note that by our induction hypothesis we can assume that c(T_ℓ) ≤ (1 + 7ε)·OPT_ℓ.

Lemma 3.15. Let 0 < ε < 1/7 and assume that c(T_{ℓ(t)}) ≤ (1 + 7ε)·OPT_{ℓ(t)}. Then c(T_t^old) ≤ 4ε·OPT_t.

Proof.
Notice that whenever the algorithm removes an edge, it replaces it by an edge of smaller cost. Thus, for each s we have that c(g_s^0) > c(g_s^1) > . . . > c(g_s^{i(s)}). Since each element in T_ℓ must belong to a sequence g_s^0, g_s^1, . . . , g_s^{i(s)} for some s ≤ ℓ, we conclude that c(T_t^old) ≤ c(T_ℓ). Using our induction hypothesis and that ε < 1/7 we conclude that

c(T_t^old) ≤ c(T_ℓ) ≤ (1 + 7ε)·OPT_ℓ ≤ 2·OPT_ℓ ≤ 2·OPT^max_ℓ ≤ 2ε·OPT^max_t ≤ 4ε·OPT_t,

where the second-to-last inequality follows from the definition of ℓ = ℓ(t) and the last one from Lemma 3.7.

Lemma 3.16. Algorithm Sequence-Freeze is a (1 + 7ε)-competitive algorithm for any ε < 1/7.

Proof. Recall that we are considering a fixed value ε < 1/7, and that we want to show by induction that c(T_t) ≤ (1 + 7ε) · OPT_t for all t ≥ 1. Clearly this is satisfied for t = 1. For a given t ≥ 2, the induction hypothesis implies that c(T_ℓ) ≤ (1 + 7ε) · OPT_ℓ, and thus by the previous lemma we have that c(T_t^old) ≤ 4ε·OPT_t. We bound c(T_t) by using a similar technique as in Lemma 3.4. Let T* be a minimum spanning tree of G_t. By Lemma 3.3, there exists a bijection Ψ : T_t \ T* → T* \ T_t such that for all h ∈ T_t \ T*, edge h belongs to the unique cycle in T_t ∪ {Ψ(h)}, and thus (T_t ∪ {Ψ(h)}) \ {h} is a tree. Since h is not removed from T_t by the algorithm in iteration t, we conclude that for each h ∈ T_t \ T* either
(i) c(h) ≤ (1 + ε) · c(Ψ(h)),
(ii) c(h) ≤ ε·OPT^max_t /(t − ℓ), or
(iii) h = g_s^{i(s)} for some s ≤ ℓ, and thus h ∈ T_t^old.
We partition the edges in (T_t \ T*) \ T_t^old into two sets: the set F_1 of edges satisfying Property (i), and the set F_2 of edges satisfying Property (ii). Notice that since Ψ is bijective we have that

c(F_1) ≤ (1 + ε) · Σ_{h ∈ F_1} c(Ψ(h)) ≤ (1 + ε)·c(T* \ T_t).

Also, F_2 ⊆ T_t^new, which implies that |F_2| ≤ t − ℓ and therefore

c(F_2) ≤ |F_2| · ε·OPT^max_t /(t − ℓ) ≤ ε·OPT^max_t.

This implies that c(F_2) ≤ 2ε·OPT_t by Lemma 3.7.
By using our upper bounds for c(F_1) and c(F_2) together with Lemma 3.15 we conclude that

c(T_t) ≤ c(T_t ∩ T*) + c(F_1) + c(F_2) + c(T_t^old) ≤ (1 + ε) · [c(T_t ∩ T*) + c(T* \ T_t)] + 2ε·OPT_t + 4ε·OPT_t,

and thus c(T_t) ≤ (1 + 7ε)·OPT_t. The lemma follows.

Figure 3.1: Sketch of notation in the proof of Lemma 3.19. The abscissa denotes the iterations and the ordinate the value of OPT^max_(·).

Amortized Budget Bound

We also break the proof of the budget bound of the algorithm into two lemmas. We begin by stating our result.

Lemma 3.17. Let k_q := |T_q \ T_{q−1}| be the budget used by the algorithm in iteration q. Then, for every t ≥ 1,

Σ_{q=1}^t k_q ≤ D_ε · t, where D_ε = 2 + 2·(ln(2/ε²) + 1)/ln(1 + ε) ∈ O((1/ε)·log(1/ε)).

We first show that the total budget needed for iterations ℓ(t) + 1 to t is proportional to t − ℓ(ℓ(t) + 1). This will be used later to show the claim on the amortized budget by losing a factor 2 in the guarantee. To show this we need the following technical observation.

Observation 3.18. For any n ∈ N_{>0} it holds that Σ_{j=1}^n ln j ≥ n·ln(n) − n.

Proof. It is enough to notice that

Σ_{j=1}^n ln j ≥ ∫_1^n ln(x) dx = n·(ln(n) − 1) − 1·(ln(1) − 1) = n·ln(n) − n + 1.

The next lemma is the core of our analysis. In it we exploit the cost structure of the greedy edges together with our freezing rules.

Lemma 3.19. For each t ≥ 1 it holds that

Σ_{q=ℓ(t)+1}^t k_q ≤ C_ε · (t − ℓ(ℓ(t) + 1)), where C_ε := D_ε/2 = 1 + (ln(2/ε²) + 1)/ln(1 + ε).

Proof. Consider the values i(s) at the end of iteration t, and let i_0(s) be the value of i(s) at the beginning of iteration ℓ(t) + 1 (and i_0(s) := 0 for s ≥ ℓ(t) + 1). By Condition (C.ii) in the algorithm, in iterations ℓ(t) + 1 to t we only touch edges belonging to {g_s^{i_0(s)}, g_s^{i_0(s)+1}, . . . , g_s^{i(s)}} for some s ∈ {ℓ(ℓ(t) + 1) + 1, . . .
, t} (recall that ℓ(·) is a non-decreasing function). Let us denote r := ℓ(ℓ(t) + 1) (see Figure 3.1 for a sketch of the situation). Then,

Σ_{q=ℓ(t)+1}^t k_q = Σ_{s=r+1}^t (i(s) − i_0(s)) = (t − r) + Σ_{s=r+1}^t (i(s) − 1 − i_0(s)).   (3.1)

We now upper bound each term i(s) − 1 − i_0(s) for s ∈ {r + 1, . . . , t}, which corresponds to the length of the sequence (g_s^{i_0(s)+1}, . . . , g_s^{i(s)−1}). To bound the length of this sequence, note that whenever we add an edge g_s^i and remove g_s^{i−1}, then c(g_s^i) < c(g_s^{i−1})/(1 + ε). This implies that

c(g_s^{i(s)−1}) ≤ c(g_s^{i_0(s)}) / (1 + ε)^{i(s)−1−i_0(s)},

and thus

i(s) − 1 − i_0(s) ≤ log_{1+ε} c(g_s^{i_0(s)}) − log_{1+ε} c(g_s^{i(s)−1}) ≤ log_{1+ε} c(g_s^0) − log_{1+ε} c(g_s^{i(s)−1}) = (1/ln(1 + ε)) · (ln c(g_s^0) − ln c(g_s^{i(s)−1})).

Combining this expression with (3.1) we obtain that

Σ_{q=ℓ(t)+1}^t k_q ≤ (t − r) + (1/ln(1 + ε)) · Σ_{s=r+1}^t (ln c(g_s^0) − ln c(g_s^{i(s)−1})).   (3.2)

We bound this term by noting that c(g_s^0) and c(g_s^{i(s)−1}) are within a small factor of each other. To do so we upper bound c(g_s^0) with Lemma 3.12 and lower bound c(g_s^{i(s)−1}) with the freezing condition (C.iii).

First we lower bound c(g_s^{i(s)−1}). Assume that i(s) > i_0(s) (otherwise we can upper bound the term i(s) − 1 − i_0(s) by 0 in (3.1)). This implies that edge g_s^{i(s)−1} was swapped for edge g_s^{i(s)} in some iteration q* ∈ {ℓ(t) + 1, . . . , t}. By Condition (C.iii) this implies that

c(g_s^{i(s)−1}) ≥ ε · OPT^max_{q*} /(q* − ℓ(q*)) ≥ ε · OPT^max_{ℓ(t)+1} /(t − r) ≥ ε² · OPT^max_t /(t − r),

where the last inequality follows from the definition of ℓ(·) (see Figure 3.1). Thus,

Σ_{s=r+1}^t ln c(g_s^{i(s)−1}) ≥ (t − r) · ln(ε² · OPT^max_t) − (t − r) · ln(t − r).   (3.3)

We now upper bound c(g_s^0) for all s ∈ {r + 1, r + 2, . . . , t}. Recall that g_s^0 is the greedy edge for iteration s, that is, it is a closest connection between v_s and any element in {v_0, . . . , v_{s−1}}. Let us rename the edges {g_{r+1}^0, . . . , g_t^0} = {e_1, . . . , e_{t−r}} so that c(e_1) ≥ . . . ≥ c(e_{t−r}). Lemma 3.12 implies that c(e_j) ≤ 2·OPT_t / j ≤ 2·OPT^max_t / j for all j. We conclude that

Σ_{s=r+1}^t ln c(g_s^0) = Σ_{j=1}^{t−r} ln c(e_j) ≤ (t − r) · ln(2 · OPT^max_t) − Σ_{j=1}^{t−r} ln j.

Using Observation 3.18 we obtain that

Σ_{s=r+1}^t ln c(g_s^0) ≤ (t − r) · ln(2 · OPT^max_t) + (t − r) − (t − r) · ln(t − r).

Inserting this inequality and Inequality (3.3) into (3.2) we conclude that

Σ_{q=ℓ(t)+1}^t k_q ≤ (t − r) + [(t − r) · ln(2 · OPT^max_t) + (t − r) − (t − r) · ln(ε² · OPT^max_t)] / ln(1 + ε) = (t − r) · (1 + (ln(2/ε²) + 1)/ln(1 + ε)).

With this main technical lemma we are ready to bound the amortized budget.

Proof (Lemma 3.17). We show by induction that for all t ≥ 1,

Σ_{q=1}^t k_q ≤ 2·C_ε·ℓ(t) + C_ε·(t − ℓ(t)),

where C_ε is the constant from the last lemma. Notice that showing this directly implies the lemma. Clearly the inequality holds for t = 1 since k_1 ≤ 1 ≤ C_ε. Let us fix t ≥ 2, and assume that the inequality is valid for all t′ ≤ t − 1. In particular this holds for t′ = ℓ(t) ≤ t − 1. Denoting ℓ(ℓ(t)) = ℓ²(t), we have

Σ_{q=1}^{ℓ(t)} k_q ≤ 2·C_ε·ℓ²(t) + C_ε·(ℓ(t) − ℓ²(t)).

Also, the previous lemma implies that

Σ_{q=ℓ(t)+1}^t k_q ≤ C_ε·(t − ℓ(ℓ(t) + 1)) ≤ C_ε·(t − ℓ²(t)),

where the last inequality follows since ℓ(·) is non-decreasing. Summing the last two inequalities we obtain

Σ_{q=1}^t k_q ≤ 2·C_ε·ℓ²(t) + C_ε·(ℓ(t) − ℓ²(t)) + C_ε·(t − ℓ²(t)) ≤ C_ε·(t + ℓ(t)) = 2·C_ε·ℓ(t) + C_ε·(t − ℓ(t)).

This completes the induction and thus the lemma follows.

We conclude with our main theorem.

Theorem 3.20. There exists a (1 + ε)-competitive algorithm for the Robust MST problem with amortized budget

2 + 2·(ln(98/ε²) + 1)/ln(1 + ε/7) ∈ O((1/ε)·log(1/ε)).

Proof. By Lemmas 3.16 and 3.17, it is enough to redefine ε := ε/7 in Algorithm Sequence-Freeze to obtain a (1 + ε)-competitive algorithm with the claimed amortized budget.

Finally we show that the amortized budget of our algorithm is best possible up to logarithmic factors.

Theorem 3.21.
The amortized budget of any (1 + ε)-competitive algorithm for the Robust MST problem belongs to Ω(1/ε).

Proof. We use a similar construction as in Lemma 3.2. Let us fix ε > 0, and let n := ⌊ln(2)/ln(1 + 2ε)⌋. Consider an instance with n + 1 vertices v_0, . . . , v_n. The costs are chosen so that, for every iteration t, the cost of any edge incident to v_t is a (1 + 2ε) factor smaller than any other edge previously available. The precise definition is as follows: for each t ∈ {1, . . . , n} define c(v_s v_t) = c_t := (1 + 2ε)^{n−t} for any s < t. Note that our choice of n implies that c_t ∈ [1, 2] for all t ∈ {1, . . . , n}. Hence, the constructed graph is metric. Let T_0, . . . , T_n be the output of a (1 + ε)-competitive algorithm, and denote by k_t the budget used in iteration t, i.e., k_t := |T_t \ T_{t−1}|. Since up to iteration t − 1 all available edges have cost at least c_{t−1}, then

c(T_t) ≥ c_t · k_t + c_{t−1} · (t − k_t) = (c_t − c_{t−1}) · k_t + c_{t−1} · t.

On the other hand, T_t is a (1 + ε)-approximate solution, and therefore c(T_t) ≤ (1 + ε)·OPT_t = (1 + ε)·c_t · t. Combining these two inequalities, simple algebra implies that

k_t ≥ t · (c_{t−1} − (1 + ε)·c_t)/(c_{t−1} − c_t) = t · ((1 + 2ε) − (1 + ε))/((1 + 2ε) − 1) = t/2,

where the second equality follows from the definition of c_t. Recalling that n := ⌊ln(2)/ln(1 + 2ε)⌋, the theorem follows since

Σ_{t=1}^n k_t ≥ (1/2)·Σ_{t=1}^n t = n(n + 1)/4 ≥ (n/4)·ln(2)/ln(1 + 2ε) ∈ Ω(n/ε).

3.6. Towards a Constant Competitive Factor with Constant Budget

In the last section we showed a (1 + ε)-competitive algorithm with constant amortized budget. However, it is still an open question whether it is possible to achieve constant competitive algorithms with (non-amortized) constant budget, or even a budget of 2. In this section we show that such an algorithm exists in the full information scenario. That is, we assume that we are given in advance the input sequence of graphs G_0, . . .
, G_n and the metric cost function on the edges. We remark that – even in the full information scenario – any algorithm with budget 1 is not constant competitive; this follows from Lemma 3.14. It is thus interesting to determine whether there exist algorithms with budget 2 that are constant competitive, even under full information. We begin our study of the full information case by considering the problem of approximating the optimal solution at the end of the sequence. In this setting, we say that a sequence of trees T_0, . . . , T_n is α-approximate if c(T_n) ≤ α·OPT_n. Our first observation is that there exist 2-approximate sequences with budget 2. We do this by proposing a simple algorithm that computes such a sequence based on a tour for graph G_n. This algorithm has the disadvantage that in every iteration both of the edges added to the solution have to be carefully picked to guarantee that the solution in iteration n is 2-approximate. However, a more careful analysis shows that one of the edges can be chosen greedily: in iteration t, one of the edges can be chosen as the shortest connection to any vertex previously revealed. This refinement yields the same approximation guarantee, indicating that choosing a greedy edge is a good design decision for an eventual constant competitive online algorithm with constant budget. Afterwards, we use the techniques described before to derive an algorithm that is constant competitive for the full information case. Notice that a 2-approximate robust solution (that approximates c(T_n) for a given n) is not necessarily 2-competitive. However, we show that we can extend one of the 2-approximate robust algorithms to be constant competitive by embedding it into a doubling framework. Naive applications of this technique yield a competitive factor of up to 16 or even 20. Being more careful, we are able to obtain a 14-competitive algorithm. We finish this section by considering a greedy online algorithm with budget 2.
We show that this algorithm is constant competitive if a structural condition on the input sequence holds. The condition has to do with how much the optimal value of solutions changes over the iterations. We conjecture that this condition is true for every input sequence. Showing this would imply the competitiveness of the algorithm.

Approximate Robust Solutions

We now present an offline algorithm computing a sequence T_0, . . . , T_n that needs a budget of 2 and yields a solution that is 2-approximate for graph G_n. Recall that by Observation 3.5 we know that, given a minimum spanning tree T_n* for G_n, we can compute a shortcut tour with at most doubled cost. We use the ordering given by this tour to help us construct our sequence, so that T_n is a subset of the tour. More precisely, the algorithm consists of constructing a tour for G_n and then, for each iteration t, constructing T_t by traversing the nodes in V_t = {v_0, . . . , v_t} in the ordering induced by the tour, and skipping (shortcutting) the rest of the nodes.

Algorithm Tour.
Input: A sequence of complete graphs G_t = (V_t, E_t) for t ∈ {0, . . . , n} with V_0 = {v_0} and V_t = V_{t−1} ∪ {v_t} for all t. A metric cost function c on the edges. (The complete input is known in advance.)
1. Let T_n* be a minimum spanning tree of graph G_n; compute a shortcut tour Q of tree T_n* with Algorithm Tour-Shortcut.
2. Consider the closed walk induced by Q, i.e., a sequence of nodes W = x_0, . . . , x_{n+1} with x_{n+1} = x_0 such that e ∈ Q if and only if x_j x_{j+1} = e for some j ∈ {0, . . . , n}.
3. For each t ∈ {1, . . . , n}, construct tree T_t as the Hamiltonian path obtained by traversing the nodes with walk x_0, . . . , x_n and shortcutting the nodes not in {v_0, . . . , v_t}. More precisely, x_i x_j ∈ T_t for some i < j ≤ n if and only if x_i, x_j ∈ V_t and x_ℓ ∉ V_t for all ℓ ∈ {i + 1, i + 2, . . . , j − 1}.

It is easy to see that this algorithm needs a budget of 2.
Indeed, since tree T_t is constructed by using the same ordering as for T_{t−1}, it holds that T_t = (T_{t−1} ∪ {x_i x_ℓ, x_ℓ x_j}) \ {x_i x_j} where x_ℓ = v_t, i < ℓ < j and x_i x_j ∈ T_{t−1}. Moreover, we obtain that T_n = Q \ {x_n x_{n+1}} (since we do not visit node x_{n+1} in the walk when constructing any tree T_t), and thus c(T_n) ≤ c(Q) ≤ 2·c(T_n*). Notice that this last inequality follows from Observation 3.5. We conclude the following.

Lemma 3.22. There exists a sequence T_0, . . . , T_n with budget 2 such that c(T_n) ≤ 2·OPT_n.

Notice that in the sequence T_0, . . . , T_n constructed by Algorithm Tour, the pair of edges introduced in each iteration must be carefully chosen to obtain the approximation guarantee. Interestingly, we can also construct a sequence that yields a 2-approximate solution such that at each iteration one of the edges introduced is a greedy edge. This suggests that it is safe for an online algorithm to connect a new node v_t to its closest neighbor.

Theorem 3.23. Let g_t be an element in arg min{c(v_t v_s) : 0 ≤ s ≤ t − 1}. There exists a polynomial time algorithm that constructs a sequence of trees T_0, . . . , T_n with a budget of 2 such that c(T_n) ≤ 2·OPT_n and, for all t ∈ {1, . . . , n}, it holds that T_t = (T_{t−1} ∪ {g_t, f_t}) \ {h_t} for some edges f_t, h_t.

Proof. Consider the set of greedy edges E^g := {g_1, . . . , g_n}, and let Q be a tour for G_n such that c(Q) ≤ 2·OPT_n. To construct the tree sequence, we will restrict ourselves to edges in E^g ∪ Q. To simplify the analysis, we imagine each edge having a direction, always pointing from the newer to the older node. That is, edge v_i v_j is directed from v_i to v_j if and only if i > j. For a given node v and a set of edges F, we denote by δ_F^+(v) the set of all outgoing edges of v in F.

Claim: There exists a tree T ⊆ Q ∪ E^g such that c(T) ≤ c(Q) and |δ_T^+(v) \ E^g| ≤ 1 for all v ∈ V.
In other words, T contains at most one outgoing edge at every v ∈ V that is not a greedy edge. The tree T can be constructed in polynomial time.

Before proving the claim, we show that the theorem follows from it. For a given tree T and edge e ∉ T, let us define C(T, e) as the unique cycle in T ∪ {e}. Based on the tree T from the claim, we define a sequence of trees T_0, . . . , T_n iteratively as follows. Set T_0 := ∅, and for every t ∈ {1, . . . , n}:
• if δ_T^+(v_t) \ E^g = ∅, set T_t := T_{t−1} ∪ {g_t};
• otherwise, set T_t := (T_{t−1} ∪ {g_t, f_t}) \ {h_t}, where f_t is the unique element in δ_T^+(v_t) \ E^g and h_t is any element in C(T_{t−1}, f_t) ∩ T^c. Here we denote by T^c the set of edges not in T.
We notice that the last step is well defined since C(T_{t−1}, f_t) ∩ T^c ≠ ∅. Indeed, if C(T_{t−1}, f_t) ∩ T^c = ∅ then T ⊇ C(T_{t−1}, f_t) would contain a cycle. At the end we obtain a tree T_n that must be equal to T, and thus c(T_n) ≤ c(Q) ≤ 2·OPT_n. This follows since we insert each edge of T at some point of the procedure and we only remove elements in the complement of T. Hence the claim implies the theorem.

We now prove the claim. For this we start with tour Q and iteratively modify it to obtain the tree T from the claim. First we remove an arbitrary edge from Q to obtain a Hamiltonian path (and thus also a tree). Notice that this tree is close to satisfying the properties of the claim, except that it may happen that |δ_T^+(v_t) \ E^g| = 2 for some vertex v_t. If this is the case we remove an edge in δ_T^+(v_t) and replace it by the greedy edge g_t. More precisely, we modify Q as follows:
1. Initialize tree T as Q \ {e}, where e is an arbitrary edge in Q.
2. For each t = 0, . . . , n, check whether |δ_T^+(v_t) \ E^g| = 2. If this is the case, then add g_t to T and remove the unique edge in δ_T^+(v_t) ∩ C(T, g_t).
Notice that this procedure is well defined and that T is always a tree.
Indeed, at iteration t of the procedure all edges added so far to T do not touch vertex v_t, since they are outgoing edges of some vertex v_s with s < t. Thus, v_t has (at most) 2 adjacent edges in T, say d and d′, that must also belong to Q. Removing these two edges disconnects T into (at most) three connected components, one of them containing only vertex v_t. This implies that one endpoint of g_t is v_t and the other endpoint is in one of the other two connected components. Therefore, C(T, g_t) must contain g_t and either d or d′. Assume without loss of generality that d ∈ C(T, g_t). Then, adding g_t and removing d from T yields a tree. Moreover, the overall weight of the tree does not increase since c(g_t) ≤ c(d) by definition of g_t. This shows that c(T) ≤ c(Q). Finally, notice that throughout the procedure |δ_T^+(v) \ E^g| is never increased for any vertex v, since the only elements we add to T belong to E^g. This proves the claim and thus the theorem follows.

A Constant Competitive Algorithm under Full Information

We now show how to turn Algorithm Tour into a constant competitive algorithm that uses a budget of 2. This is stronger than the previous result since before we could only ensure that the solution is 2-approximate in the last iteration n. To achieve this goal, we embed Algorithm Tour into a doubling framework to guarantee the competitiveness in all iterations. The doubling framework is a common technique for online and incremental algorithms, and has been successfully used for a variety of problems: see, e.g., [BCR93, AAF+97, CK06, Sha07, LNRW10]. In the doubling framework we classify iterations into phases. The phases are defined iteratively, and a new phase starts at the iteration in which the optimal value of the instance doubles. Loosely speaking, within each phase we use Algorithm Tour to solve the subinstance defined by the phase.
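The phase decomposition of the doubling framework can be sketched as follows. This is an illustrative fragment, assuming a hypothetical list `opt` with `opt[t] = OPT_t` for t = 1..n (index 0 unused); the function name and representation are not from the thesis.

```python
# A phase ends right before the first iteration whose optimum value is at
# least twice the optimum at the phase's first iteration.

def doubling_phases(opt):
    """Partition iterations 1..n into phases (l_i, u_i)."""
    n = len(opt) - 1
    starts = [1]                        # l_1 = 1
    for t in range(2, n + 1):
        if opt[t] >= 2 * opt[starts[-1]]:
            starts.append(t)            # l_i: optimum value doubled
    phases = [(starts[i], starts[i + 1] - 1) for i in range(len(starts) - 1)]
    phases.append((starts[-1], n))      # u_r = n
    return phases
```

Within phase (l_i, u_i) the optimum never doubles, which is what lets the algorithm reuse one precomputed tour Q_i for all iterations of the phase.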
The fact that the cost of the optimal solution doubles from phase to phase will allow us to bound the cost of our solutions at each iteration. More precisely, we define phases P_1, . . . , P_r, where P_i is of the form {ℓ_i, ℓ_i + 1, . . . , u_i}. That is, ℓ_i is the first and u_i is the last iteration of P_i. Set ℓ_1 := 1. For i ≥ 2, we define ℓ_i iteratively: ℓ_i is the smallest integer such that OPT_{ℓ_i} ≥ 2 · OPT_{ℓ_{i−1}} and ℓ_i ≥ ℓ_{i−1} (if there is no such ℓ_i then r := i − 1 and ℓ_i is left undefined). Then, define u_i := ℓ_{i+1} − 1 for all i ∈ {1, . . . , r − 1} and u_r := n. Notice that with this definition the phases are pairwise disjoint and ∪_{i=1}^r P_i = {1, . . . , n}.

Our algorithm works iteratively over the phases. Assume that we have constructed a solution up to the last iteration of phase P_{i−1}. We construct solutions T_t for t ∈ P_i as follows. For the first iteration of P_i, that is, t = ℓ_i, we define T_t greedily. More precisely, we first initialize tree T_t by taking tree T_{t−1} = T_{u_{i−1}} and connecting vertex v_t with a shortest possible edge. Since our budget still allows us to add one extra edge, we swap a pair of edges. This pair is chosen greedily, so that the cost of the solution is diminished as much as possible. For the remaining iterations, i.e., t ∈ P_i \ {ℓ_i}, we first construct a Hamiltonian tour Q_i for graph G_{u_i} by shortcutting a minimum spanning tree (Algorithm Tour-Shortcut). Taking further shortcuts of Q_i, we obtain a Hamiltonian tour for the graph induced by {v_{ℓ_i}, v_{ℓ_i+1}, . . . , v_t}. Removing an arbitrary edge in this tour yields a tree, which we call X_t. Notice that the construction guarantees that the cost of X_t is at most c(Q_i) ≤ 2·OPT_{u_i}. Finally, the tree T_t is defined as the union of tree T_{ℓ_i} and X_t.

Now we present our algorithm in full detail. In general, we will use t to denote iterations of the online sequence and i for the index of the different phases.

Algorithm Doubling-Tour.
Input: A sequence of complete graphs G_t = (V_t, E_t) for t ∈ {0, . . .
, n} with V_0 = {v_0} and V_t = V_{t−1} ∪ {v_t} for all t. A metric cost function c on the edges. (The complete input is known in advance.)
1. Define phases P_1, . . . , P_r as explained above, where P_i = {ℓ_i, ℓ_i + 1, . . . , u_i}.
2. For all i ∈ {1, . . . , r}:
(a) compute a minimum spanning tree T*_{u_i} of graph G_{u_i} with, e.g., Kruskal's algorithm [Kru56], and
(b) construct a shortcut tour Q_i from T*_{u_i} as explained in Section 3.3 (Algorithm Tour-Shortcut).
3. Set T_0 := ∅ and T_1 := {v_0 v_1}.
4. For each t ∈ {2, . . . , n}:
(a) Let i ∈ {1, . . . , r} be such that t ∈ P_i.
(b) If t = ℓ_i, then let g_i ∈ arg min{c(v_{ℓ_i} v_s) : 0 ≤ s ≤ ℓ_i − 1}. Consider the tree R_t := T_{t−1} ∪ {g_i}. Find, if any, a pair of edges (f_i, h_i) ∈ (E_t \ R_t) × R_t such that (R_t ∪ {f_i}) \ {h_i} is a tree and c(h_i) − c(f_i) is maximized among all such pairs (if there is no such pair, or if the maximum c(h_i) − c(f_i) < 0, set f_i = h_i := e for any e ∉ R_t). Return T_t := (R_t ∪ {f_i}) \ {h_i}.
(c) If t ≠ ℓ_i, construct a Hamiltonian path X_t for vertices {v_{ℓ_i}, v_{ℓ_i+1}, . . . , v_t} by taking shortcuts of Q_i as in Algorithm Tour, and return T_t := T_{ℓ_i} ∪ X_t.

It is easy to see that T_t is a spanning tree for each t ∈ {0, . . . , n}. Indeed, assume inductively that T_s is a spanning tree for graph G_s for all s ≤ t − 1. If t = ℓ_i then it is clear from the construction of T_t that it is a spanning tree. If t ≠ ℓ_i, then T_{ℓ_i} spans all vertices in {v_s : 0 ≤ s ≤ ℓ_i} and X_t is a tree that spans all vertices in {v_s : ℓ_i ≤ s ≤ t}. Thus T_t = T_{ℓ_i} ∪ X_t is a spanning tree of G_t. We now argue that the algorithm needs a budget of 2.

Lemma 3.24. The budget of Algorithm Doubling-Tour is at most 2.

Proof. If t = ℓ_i for some i, it is clear that the needed budget is 2 since T_t = (T_{t−1} ∪ {g_i, f_i}) \ {h_i}.
For t ∈ P_i with t ≠ ℓ_i, notice that the sets X_t and X_{t−1} are Hamiltonian paths for the sets {v_s : ℓ_i ≤ s ≤ t} and {v_s : ℓ_i ≤ s ≤ t − 1}, respectively, and that both traverse the vertices in the same order. This implies, by the same observation as in Lemma 3.22, that |X_t \ X_{t−1}| ≤ 2.

In what follows we show how to bound the competitive ratio of the algorithm. We first compute how much the cost of the solution increases during a phase.

Lemma 3.25. For each phase P_i with i ≥ 2, it holds that c(T_{ℓ_i}) ≤ c(T_{ℓ_{i−1}}) + (5/2) · OPT_{ℓ_i}.

Before proving this lemma, we show how to use it to derive the competitive ratio of the algorithm. To this end we exploit the fact that the optimal value doubles in each phase.

Theorem 3.26. Algorithm Doubling-Tour is a 14-competitive algorithm.

Proof. Consider first an iteration t = ℓ_i for some i ≥ 2. Then, by iterating the last lemma and noting that c(T_{ℓ_1}) = c(T_1) = OPT_1 = OPT_{ℓ_1}, we obtain that

c(T_{ℓ_i}) ≤ c(T_{ℓ_{i−1}}) + (5/2)·OPT_{ℓ_i} ≤ c(T_{ℓ_{i−2}}) + (5/2)·(OPT_{ℓ_{i−1}} + OPT_{ℓ_i}) ≤ . . . ≤ c(T_{ℓ_1}) + (5/2)·Σ_{j=2}^i OPT_{ℓ_j} ≤ (5/2)·Σ_{j=1}^i OPT_{ℓ_j}.

By the definition of ℓ_j, OPT_{ℓ_j} ≤ (1/2)·OPT_{ℓ_{j+1}} ≤ . . . ≤ (1/2)^{i−j}·OPT_{ℓ_i}, and thus

c(T_{ℓ_i}) ≤ (5/2)·OPT_{ℓ_i}·Σ_{j=1}^i (1/2)^{i−j} ≤ 5·OPT_{ℓ_i}.

This implies that the approximation ratio for an iteration t = ℓ_i is at most 5. Consider now an iteration t ∈ P_i with t ≠ ℓ_i. Then, c(T_t) = c(X_t) + c(T_{ℓ_i}), and recall that X_t is obtained by shortcutting Q_i, which implies that c(X_t) ≤ c(Q_i) ≤ 2·OPT_{u_i}. We conclude that

c(T_t) ≤ 2·OPT_{u_i} + 5·OPT_{ℓ_i} ≤ 14·OPT_t,

where the last inequality follows from Lemma 3.7.

It remains to show Lemma 3.25. To this end we need the following technical lemma, which will help us bound the cost added by the algorithm between iterations u_i and ℓ_{i+1} = u_i + 1.

Lemma 3.27. Let G = (V, E) be a complete graph, and let OPT be the value of a minimum spanning tree for it.
Let G′ be a complete graph with one more vertex than G, G′ = (V ∪ {v′}, E′), and denote by OPT′ its optimal value. For any tree T and any edge g′ ∈ arg min{c(v v′) : v ∈ V}, there exist edges f, h ∈ E′ such that T′ = (T ∪ {g′, f}) \ {h} is a tree and
c(T′) − c(T) ≤ max{OPT′ − OPT, OPT/2}.
Proof. Let T* be a minimum spanning tree of G. We consider two cases, depending on the number of edges in T* with cost at least c(g′).
Case 1. Tree T* contains 2 or more edges with cost larger than c(g′). In this case OPT > 2 · c(g′). By choosing f = h for any f ∉ T ∪ {g′}, we obtain that T′ = (T ∪ {g′, f}) \ {h} = T ∪ {g′}. Therefore c(T′) = c(T) + c(g′) ≤ c(T) + OPT/2, and thus the lemma follows in this case.
Case 2. Tree T* contains at most 1 edge with cost larger than c(g′). To analyze this case we show that there exists an MST T** for graph G′ with at most one edge not in T* ∪ {g′}. More precisely, we show the following claim.
Claim: There exists an MST T** for G′ such that T** \ T* ⊆ {g′, f′} for some f′ = v v′ with v ∈ V.
To show the claim we first notice that we can compute T** by using Lemma 3.9 iteratively. This lemma implies a procedure which, given an MST T for a connected graph G = (V, E) and an edge f ∉ E, computes an MST T′ for the graph (V, E ∪ {f}). Indeed, the procedure consists in considering a maximum cost edge h in the unique cycle contained in T ∪ {f} and setting T′ := (T ∪ {f}) \ {h}. Iterating this idea we can compute the tree T**. Indeed, notice that T* ∪ {g′} is an MST for the graph (V ∪ {v′}, E ∪ {g′}). Thus, by making available each edge of the form v v′ with v ∈ V, one by one, our previous discussion yields a procedure that produces a tree T** which is optimal for (V ∪ {v′}, E ∪ {g′} ∪ {v v′ : v ∈ V}) = G′.
More precisely, we can compute an MST T** for G′ based on T* as follows: set T** := T* ∪ {g′}; for every edge f′ = v v′ with v ∈ V, iteratively find an edge h′ with maximum cost in the unique cycle in T** ∪ {f′}, and update T** to (T** ∪ {f′}) \ {h′}. Additionally, if f′ is a maximum cost edge in the cycle in T** ∪ {f′}, we give priority to f′ and set h′ := f′.
We show that the tree T** computed by this algorithm satisfies the claim. We notice that the procedure removes at most one edge h′ of T* from the solution (in all remaining iterations f′ = h′, and thus T** is left untouched). Indeed, because we give priority to f′, whenever the procedure chooses h′ ≠ f′ with h′ ∈ T*, then c(g′) ≤ c(f′) < c(h′), and thus c(h′) > c(g′). Since by hypothesis T* contains at most one edge with cost larger than c(g′), the procedure removes at most one edge h′ of T*. This shows the claim.
We conclude that there exist edges f′, h′ such that OPT′ − OPT = c(g′) + c(f′) − c(h′). Let h be an edge of maximum cost in the unique cycle contained in T ∪ {g′, f′}. We define T′ := (T ∪ {g′, f′}) \ {h} and show the lemma for this tree. Indeed, since T* ∪ {g′} is an MST for the graph (V ∪ {v′}, E ∪ {g′}) and T ∪ {g′} is a spanning tree of the same graph, by Property 3.11 we have that c(h) ≥ c(h′). We conclude that
c(T′) − c(T) = c(g′) + c(f′) − c(h) ≤ c(g′) + c(f′) − c(h′) = OPT′ − OPT.
We are now ready to show Lemma 3.25.
Proof (Lemma 3.25). Let t = ℓ_i, and thus u_{i−1} = t − 1. Notice that T_t = (T_{t−1} ∪ {g_i, f_i}) \ {h_i}, where g_i, f_i and h_i were defined by the algorithm. Also recall that T_{t−1} = T_{ℓ_{i−1}} ∪ X_{t−1}.
Therefore,
c(T_t) = c(T_{t−1}) + (c(T_t) − c(T_{t−1}))
= c(T_{ℓ_{i−1}}) + c(X_{t−1}) + (c(T_t) − c(T_{t−1}))
≤ c(T_{ℓ_{i−1}}) + 2 · OPT_{t−1} + (c(T_t) − c(T_{t−1})),
where the last inequality follows since X_{t−1} is a Hamiltonian path obtained by traversing the vertices in the ordering given by Q_{i−1}, and thus c(X_{t−1}) ≤ c(Q_{i−1}) ≤ 2 · OPT_{u_{i−1}} = 2 · OPT_{t−1}. Remark that g_i is chosen as the shortest connection between v_t and any previous node, and that f_i and h_i are chosen greedily. Thus, the previous lemma applied to the trees T_{t−1} and T_t implies that
c(T_t) − c(T_{t−1}) ≤ max{OPT_t − OPT_{t−1}, OPT_{t−1}/2}.
Note that by the definition of ℓ_i = t we have that OPT_{t−1} ≤ OPT_t. Combining our two previous inequalities with this fact we obtain that
c(T_{ℓ_i}) = c(T_t) ≤ c(T_{ℓ_{i−1}}) + 2 · OPT_{t−1} + max{OPT_t − OPT_{t−1}, OPT_{t−1}/2}
≤ c(T_{ℓ_{i−1}}) + OPT_{t−1} + max{OPT_t, (3/2) · OPT_{t−1}}
≤ c(T_{ℓ_{i−1}}) + OPT_t + max{OPT_t, (3/2) · OPT_t}
= c(T_{ℓ_{i−1}}) + (5/2) · OPT_{ℓ_i}.
On the Competitiveness of a Greedy Algorithm with Budget 2
In this section we come back to the online model, and study the robust MST problem with a (non-amortized) budget per iteration. We study a simple greedy algorithm for this problem, and propose a condition that implies that this algorithm is constant competitive. We also conjecture that this condition holds for every input. The algorithm is as follows.
Input: A sequence of complete graphs G_t = (V_t, E_t) for t ≥ 1 revealed online with V_0 = {v_0} and V_t = V_{t−1} ∪ {v_t} for all t ≥ 1. A metric cost function c revealed together with the edges.
Define T_0 := ∅. For each iteration t ≥ 1 do as follows.
1. Define g_t as any edge of minimum cost connecting v_t to a node in V_{t−1}.
2. If there exists a pair of edges f_t, h_t satisfying that
• (T_{t−1} ∪ {g_t, f_t}) \ {h_t} is a tree,
• f_t is adjacent to v_t, and
• c(f_t) ≤ c(h_t)/2,
then choose such a pair that maximizes c(h_t) − c(f_t) and return T_t := (T_{t−1} ∪ {g_t, f_t}) \ {h_t}.
3. If there is no such pair of edges, return T_t := T_{t−1} ∪ {g_t}.
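One iteration of this greedy rule can be sketched in code. The following Python fragment is an illustration only (the function names and the edge representation are mine, not from the text): edges are frozensets of endpoints, c is the metric cost function, and the swap candidate h_t is searched on the unique cycle closed by f_t in T_{t−1} ∪ {g_t}.

```python
def tree_path(edges, a, b):
    """Return the edge list of the unique a-b path in a tree given as a set of
    frozenset edges (DFS from a, then walk parent pointers back from b)."""
    adj = {}
    for e in edges:
        u, v = tuple(e)
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    parent, stack = {a: None}, [a]
    while stack:
        u = stack.pop()
        for w in adj.get(u, []):
            if w not in parent:
                parent[w] = u
                stack.append(w)
    path, u = [], b
    while parent[u] is not None:
        path.append(frozenset((u, parent[u])))
        u = parent[u]
    return path

def greedy_step(tree, vertices, vt, c):
    """One iteration of the budget-2 greedy: attach vt via a cheapest edge g_t,
    then try one swap (f_t, h_t) with c(f_t) <= c(h_t)/2 maximizing the gain."""
    gt = min((frozenset((vt, u)) for u in vertices), key=c)
    base = tree | {gt}
    best, best_gain = None, 0.0
    for u in vertices:                      # candidate edges f_t adjacent to vt
        ft = frozenset((vt, u))
        if ft in base:
            continue
        for ht in tree_path(base, vt, u):   # h_t must lie on the cycle closed by f_t
            if c(ft) <= c(ht) / 2 and c(ht) - c(ft) > best_gain:
                best, best_gain = (ft, ht), c(ht) - c(ft)
    if best:
        ft, ht = best
        return (base | {ft}) - {ht}
    return base
```

For instance, with points on a line at 0, 10 and a new point at 5 (Euclidean cost), the swap fires: the new vertex replaces the long edge of cost 10 by two edges of cost 5 each.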
Clearly, this algorithm needs a budget of 2. However, it is not clear whether it is constant competitive. We propose a conjecture that we believe is true for every input, and we show that the conjecture implies that the algorithm is constant competitive. To state the conjecture we need the following definition.
Definition 3.28. Given a complete graph G = (V, E) and a non-negative cost function c on the edges, we say that the graph is 2-metric if for every cycle C ⊆ E it holds that c(e) ≤ 2 · c(C \ {e}) for all e ∈ C.
Notice that every metric graph is 2-metric, because for any cycle C and e ∈ C it holds that c(e) ≤ c(C \ {e}). However, the converse is not true.
In what follows, for a given real number x we define x⁺ := max{x, 0} and x⁻ := max{−x, 0}. Note that x = x⁺ − x⁻. Also, denote ∆OPT_t := OPT_t − OPT_{t−1}.
Conjecture 3.29. There exists a constant α ≥ 1 satisfying the following. Consider any input sequence G_0, G_1, …, G_n of the Robust Minimum Spanning Tree problem, with a cost function c′ on the edges such that G_t is 2-metric for all t ≥ 0. If OPT_t denotes the optimal cost of the tree in iteration t for cost function c′, then for all t ≥ 1 it holds that
Σ_{s=1}^{t} (∆OPT_s)⁻ ≤ α · OPT_t.
We do not know how to show that the conjecture holds in general. However, we can show that it holds if OPT_t, as a function of t, is unimodal. That is, there exists t* such that OPT_t is non-decreasing for t ≤ t* and non-increasing for t ≥ t*. Note that in this case Σ_{s=1}^{t} (∆OPT_s)⁻ = OPT_{t*} − OPT_t. If the cost function c′ is metric, then OPT_{t*} ≤ 2 · OPT_t by Lemma 3.7, and thus the conjecture is true for α = 1. If c′ is not metric but only 2-metric, the same argument as in the proof of Lemma 3.7 can be used to show that OPT_{t*} ≤ 4 · OPT_t. Therefore the conjecture holds for α = 3. We remark that the unimodal case is, in some sense, an ideal situation for the conjecture.
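The unimodal case can also be checked numerically. The sketch below (illustrative Python, with my own naming) computes the negative variation Σ_s (∆OPT_s)⁻ of a cost sequence; for a unimodal sequence this quantity, evaluated at the final iteration, equals the peak value minus the final value, which is what the α bound above exploits.

```python
def neg_variation(opt):
    """Sum of the negative parts of the increments, Σ_s (ΔOPT_s)⁻."""
    return sum(max(opt[s - 1] - opt[s], 0) for s in range(1, len(opt)))

# A unimodal cost sequence: non-decreasing up to its peak, then non-increasing.
# Here the negative variation equals OPT_{t*} - OPT_t at the last iteration.
unimodal = [1, 3, 8, 12, 9, 7, 4]
```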
Intuitively, the case where OPT_t oscillates indefinitely is a hard case for proving the conjecture.
We show that if the conjecture is true, then Algorithm Greedy is 2(α + 1)-competitive. To this end we modify the cost function as follows. Let T_0, T_1, …, T_t, … be the output of Algorithm Greedy when run on an input with cost function c (which is by definition metric). We classify iterations into two types. Let I ⊆ ℕ₀ be the subset of iterations such that t ∈ I if and only if T_t = T_{t−1} ∪ {g_t}. Thus, for all t ∉ I we have that T_t = (T_{t−1} ∪ {g_t, f_t}) \ {h_t} and c(f_t) ≤ c(h_t)/2. Note that for an iteration t ∉ I, the cost of T_t is at most the cost of T_{t−1}.
Observation 3.30. Set ∆T_t := c(T_t) − c(T_{t−1}). Then for all t ∉ I,
∆T_t = c(g_t) + c(f_t) − c(h_t) ≤ 2 · c(f_t) − c(h_t) ≤ 0.
For any iteration t, we can decompose the cost of T_t as c(T_t) = Σ_{s=1}^{t} ∆T_s, and by our observation c(T_t) ≤ Σ_{s ∈ I, s ≤ t} ∆T_s.
To bound the values ∆T_t for t ∈ I, we use the fact that in these iterations the algorithm did not find any pair of edges f_t, h_t to swap such that c(f_t) ≤ c(h_t)/2. As we will see, this implies the same for the optimal solution. In particular, if we double the cost of all edges of the form v_t v_s with t ∈ I, s ≤ t − 1 and v_t v_s ≠ g_t, then the optimal solution sees no gain in inserting any edge other than g_t in such an iteration. More precisely, consider the set
E_I := {v_t v_s : t ∈ I, s ≤ t − 1} \ {g_t : t ∈ I}.
We now modify the objective function as follows. Set
c′(e) := 2 · c(e) for all e ∈ E_I,
c′(e) := c(e) for all e ∉ E_I.
Let us denote by OPT′_t the cost of a minimum spanning tree of graph G_t with cost function c′, and let T**_t be a corresponding minimum spanning tree. Notice that since we only change the costs by at most a factor of 2, arguing with costs c′ can only affect our competitive factor by a factor of 2.
In particular, if Tt∗ is a minimum spanning tree for Gt for cost function c, then OPT0t = c0 (Tt∗∗ ) ≤ c0 (Tt∗ ) ≤ 2 · c(Tt∗ ) = 2 · OPTt . Also, since c is metric, graph Gt with cost function c0 is 2-metric. Chapter 3. Robust Multi-Stage Minimum Spanning Trees The next lemma shows that for any t ∈ I, the only new element in Tt∗∗ is the greedy ∗∗ ∪ {gt }, and therefore edge gt , that is, Tt∗∗ = Tt−1 ∗∗ c0 (Tt∗∗ ) − c0 (Tt−1 ) = c0 (gt ) = c(gt ) = c(Tt ) − c(Tt−1 ) = ∆Tt . Lemma 3.31. For all t there exists a minimum spanning tree Tt∗∗ for graph Gt = (Vt , Et ) ∗∗ ∪ {gt } if t ∈ I, where with cost function c0 such that Tt∗∗ ∩ EI = ∅. In particular Tt∗∗ = Tt−1 gt is a shortest connection between vt and any vertex in Vt−1 . Proof. We show the lemma by induction. Clearly the claimed tree T1∗∗ exists. Assume that ∗∗ there exists tree Tt−1 as in the statement of the lemma. We show that we can construct ∗∗ tree Tt . For this we distinguish two cases. Case 1: t ∈ I. ∗∗ For this case we show that Tt∗∗ := Tt−1 ∪{gt } is an MST for costs c0 , which clearly satisfies ∗∗ the claimed property since Tt−1 ∩ EI = ∅ by induction hypothesis. To show that Tt∗∗ is a minimum spanning tree, let e be any edge in Et \ Tt∗∗ . Consider the unique cycle C(Tt∗∗ , e) in graph (Vt , Tt∗∗ ∪ {e}). By Lemma 3.4, it is enough to show that if f maximizes c0 (f ) in C(Tt∗∗ , e), then c0 (f ) ≤ c0 (e). We do so by using the fact that t ∈ I, and thus the algorithm cannot find a pair of edges ft , ht to swap in the solution such that c(ft ) ≤ c(ht )/2. ∗∗ ∪ {e}, otherwise To show that c0 (f ) ≤ c0 (e), first notice that this holds if C(Tt∗∗ , e) ⊆ Tt−1 ∗∗ we obtain a contradiction to the optimality of Tt−1 . We can then assume that gt ∈ C(Tt∗∗ , e), and therefore e = vt vs ∈ EI for some s ≤ t − 1. By definition of c0 , this implies that c0 (e) = 2 · c(e). Let fˆ be the edge maximizing c0 (fˆ) in C(Tt , e). Also, since t ∈ I the definition of I implies that, c(fˆ) c(e) ≥ . 2 Claim: c0 (fˆ) ≥ c0 (f ). 
Before showing the claim, we prove that it implies the lemma for this case. Indeed, by definition of our algorithm, Tt ∩ EI = ∅, and thus c(fˆ) = c0 (fˆ). Hence, c0 (e) = 2 · c(e) ≥ c(fˆ) = c0 (fˆ) ≥ c0 (f ), which implies that Tt∗∗ is a minimum spanning tree for cost c0 , and thus the lemma follows for this case if we show the claim. ∗∗ To show the claim, consider the graph (Vt , Et−1 ∪{gt }). Since Tt−1 is a minimum spanning 0 ∗∗ tree for (Vt−1 , Et−1 ) with cost c , then Tt is a minimum spanning tree for costs c0 in (Vt , Et−1 ∪ {gt }). Since also Tt is a spanning tree for this graph, Proposition 3.11 implies that c0 (fˆ) ≥ c0 (f ). The claim follows. Case 2: t 6∈ I. We define Tt∗∗ as a minimum spanning tree for graph (Vt , Et \ EI ) with costs c0 (which coincide with c in this graph), and show that it is also an optimal solution for graph (Vt , Et ). ∗∗ Consider tree Tt−1 ∪ {gt }, that is a minimum spanning tree for graph (Vt , Et−1 ∪ {gt }). Consider an edge e ∈ EI ∩ Et , and let C(Tt∗∗ , e) be the unique cycle in Tt∗∗ ∪ {e}. It is 3.6. Towards a Constant Competitive Factor with Constant Budget important to notice that since t 6∈ I, and thus EI ∩ Et ⊆ Et−1 . Our proof follows the ∗∗ ∩ EI = ∅ by hypothesis, then there is no gain in adding an element argument that, since Tt−1 ∗∗ e in EI ∩ Et to Tt . This is proven by using Property 3.10. More precisely, to show the optimality of Tt∗∗ , consider e ∈ Et ∩ EI . It is enough to consider an edge f maximizing c0 (f ) in C(Tt∗∗ , e) and show that c0 (f ) ≤ c0 (e). To this end, ∗∗ let fˆ be an edge maximizing c0 (fˆ) in C(Tt−1 ∪ {gt }, e). Noting that by induction hypothesis ∗∗ ∗∗ is an optimal solution for (Vt , (Et−1 ∪ {gt }) \ EI ) for cost c0 . Since ∩ EI = ∅, then Tt−1 Tt−1 (Et−1 ∪ {gt }) \ EI ⊆ Et \ EI , Property 3.10 implies that c0 (fˆ) ≥ c0 (f ). ∗∗ ∪ {gt } is a minimum spanning tree for graph (Vt , Et−1 ∪ {gt }), then Moreover, since Tt−1 0 0 ˆ c (e) ≥ c (f ). 
Therefore c′(e) ≥ c′(f̂) ≥ c′(f), from which we conclude that T**_t ⊆ E_t \ E_I is a minimum spanning tree for (V_t, E_t). The lemma follows.
Lemma 3.32. If Conjecture 3.29 holds, then Algorithm Greedy is 2 · (α + 1)-competitive.
Proof. Let us denote I_t := I ∩ {1, …, t}. By Observation 3.30 we have that
c(T_t) = Σ_{s=1}^{t} ∆T_s ≤ Σ_{s ∈ I_t} ∆T_s.
Also, by definition of I we have that T_s = T_{s−1} ∪ {g_s} for all s ∈ I. Thus, for s ∈ I_t it holds that c(T_s) − c(T_{s−1}) = c(g_s). Since by Lemma 3.31 also T**_s = T**_{s−1} ∪ {g_s} for all s ∈ I_t, we conclude that ∆T_s = c(T**_s) − c(T**_{s−1}). Let us also denote ∆T**_s := c(T**_s) − c(T**_{s−1}). We conclude that
c(T_t) ≤ Σ_{s ∈ I_t} ∆T**_s ≤ Σ_{s=1}^{t} (∆T**_s)⁺ = c(T**_t) + Σ_{s=1}^{t} (∆T**_s)⁻.
Note that Lemma 3.31 implies that c′(T**_s) = c(T**_s) for all s. Thus, Conjecture 3.29 implies that
Σ_{s=1}^{t} (∆T**_s)⁻ = Σ_{s=1}^{t} (c′(T**_s) − c′(T**_{s−1}))⁻ ≤ α · OPT′_t.
Hence, we conclude that c(T_t) ≤ OPT′_t + α · OPT′_t ≤ 2(α + 1) · OPT_t, where the last inequality follows since OPT′_t ≤ 2 · OPT_t.
Applications to the Traveling Salesman Problem
In this section we consider a robust version of the emblematic Traveling Salesman problem (TSP) in metric graphs. As in the Robust MST setting, the input is a sequence of undirected graphs G_0, G_1, …, G_t, … such that G_t = (V_t, E_t) is a complete metric graph and V_t = {v_0, …, v_t}. The objective is to construct a sequence of tours Q_2, Q_3, …, Q_t, … such that Q_t is a Hamiltonian tour of G_t for all t ≥ 2 (note that graphs G_0 and G_1 do not admit Hamiltonian tours since they have fewer than 3 nodes). We measure robustness in the same way as for minimum spanning trees: we say that a sequence of tours needs a budget of k if |Q_t \ Q_{t−1}| ≤ k for all t ≥ 3. Similarly, the sequence needs an amortized budget of k if Σ_{s=3}^{t} |Q_s \ Q_{s−1}| ≤ k · t for all t ≥ 3. We call this problem the Robust Traveling Salesman problem (Robust TSP).
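The budget notions for tour sequences are easy to state operationally. The following Python helpers are illustrative only (names are mine): each tour is represented as a set of undirected edges (frozensets), and the per-iteration budget is simply the number of new edges introduced.

```python
def budgets(tours):
    """Per-iteration numbers of new edges, |Q_t \\ Q_{t-1}|, for consecutive tours."""
    return [len(tours[t] - tours[t - 1]) for t in range(1, len(tours))]

def needs_budget(tours, k):
    """A sequence needs budget k if every iteration introduces at most k new edges."""
    return all(b <= k for b in budgets(tours))
```

For example, inserting a new vertex between two adjacent tour vertices removes one edge and adds two, so that single update has budget 2.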
In what follows we show how to transfer our results for the Robust MST problem to the Robust TSP. In Section 3.3 we have already discussed the shortcutting technique to convert a spanning tree into a tour, which increases the cost of the solution by at most a factor 2. Unfortunately, applying this technique directly to a sequence of trees might provoke an unbounded increase in the needed budget. We give a concrete example of this phenomenon below. However, we can robustify the shortcutting technique. This will help us show the following theorem.
Theorem 3.33. Consider a sequence of complete metric graphs G_0, …, G_t, … where G_t = (V_t, E_t) and V_t = {v_0, …, v_t}. Assume that in each iteration t we are given a spanning tree T_t for graph G_t. Then there exists an online algorithm that computes a sequence of tours Q_0, Q_1, …, Q_t, … such that c(Q_t) ≤ 2 · c(T_t) and |Q_t \ Q_{t−1}| ≤ 4 · |T_t \ T_{t−1}| for all t ≥ 1.
This theorem immediately implies that most of our results for the Robust MST problem translate directly to the TSP setting, increasing the competitive ratio and the budget (respectively, amortized budget) only by a constant factor. In particular, Robust TSP admits an online (2 + ε)-competitive algorithm with amortized budget O((1/ε) · log(1/ε)).
In the following we explain the intuition of our approach. Consider two spanning trees R and R′ where R′ = (R ∪ {f}) \ {g} for some edges f ∉ R and g ∈ R. In some sense, tree R′ is obtained by the smallest possible modification of R. Thus, we should at least be able to update a tour Q obtained from R to a tour Q′ obtained from R′ such that |Q′ \ Q| ≤ 4. More precisely, we will show that there exist tours Q and Q′ such that c(Q) ≤ 2 · c(R), c(Q′) ≤ 2 · c(R′) and |Q′ \ Q| ≤ 4. Later, we will use this result to prove the theorem above. Recall that given any tree T we denote by 2 · T the multi-graph obtained by duplicating each edge in T.
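The shortcutting technique recalled above (double the tree, take a Eulerian walk, visit the first copy of every node) can be sketched as follows. This is an illustration with my own naming, not the text's algorithm verbatim; a DFS that records every descent into a subtree and every return to the parent is one standard way to obtain a Eulerian walk of 2 · T.

```python
def euler_walk(adj, start):
    """Eulerian walk of the doubled tree 2*T: a DFS that records each entry into
    a subtree and each return to the parent, so every tree edge appears twice."""
    walk = [start]
    def dfs(u, parent):
        for w in adj[u]:
            if w != parent:
                walk.append(w)
                dfs(w, u)
                walk.append(u)
    dfs(start, None)
    return walk

def shortcut(walk):
    """Visit only the first copy of each node, then close the tour at the start."""
    seen, order = set(), []
    for x in walk:
        if x not in seen:
            seen.add(x)
            order.append(x)
    n = len(order)
    return order, {frozenset((order[i], order[(i + 1) % n])) for i in range(n)}
```

On a star with center 0 and leaves 1, 2, 3, the walk is 0, 1, 0, 2, 0, 3, 0 and shortcutting yields the tour 0, 1, 2, 3, 0; by the triangle inequality each shortcut edge costs at most the walk segment it replaces, giving c(Q) ≤ 2 · c(T).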
As in Section 3.3, recall that the graph induced by 2 · R is Eulerian (each node has even degree). Therefore we can construct a Eulerian walk W for it, that is, a sequence of nodes W = x1 , x2 , . . . , xr where x1 = xr and each edge in 2 · R is traversed exactly once by the walk. By visiting the first copy of a node in W (except for u which we visit twice) we obtain a tour Q. However, applying the same strategy to tree R0 might yield a tour Q0 where |Q0 \ Q| is unbounded. This might happen even if the walk W 0 for 2 · R0 is constructed as similar as possible to W . Consider the example shown in Figure 3.2. In the left-hand-side of this figure, we depict a tree R (in solid lines). In the following we distinguish different appearances of a node in walk W . To this end we use bars to denote different copies of nodes, for example v, v¯ and 3.7. Applications to the Traveling Salesman Problem (a) A tree R (solid lines) and a possible tour obtained by taking shortcuts (dotted lines). (b) A tree R0 = (R ∪ {uv4 }) \ {uv1 } (solid lines) and a possible tour obtained by taking shortcuts (dotted lines). Figure 3.2: Example of trees R and R0 and possible tours obtained by taking shortcuts. A trivial generalization of this example shows that visiting the first copy of each node in the walk might yield arbitrarily different tours. v are copies of the same node. We remark however, that we do not make distinctions on edges, e. g., we consider vw to be equal to vw. Consider one possible Eulerian walk for 2 · R, e. g., W = u, v1 , w1 , v1 , v2 , w2 , v2 , v3 , w3 , v3 , v4 , w4 , v4 , v3 , v2 , v1 , u. Visiting the first appearance of each node in W yields the tour depicted with dotted lines in Figure 3.2a. In Figure 3.2b we show a tree R0 (solid lines) that is of the form (R ∪ {f }) \ {g}. Again, we can pick any Eulerian walk W 0 for 2 · R0 and visit the first appearance of each node, obtaining a tour Q0 . 
However, it is easy to observe that for any W 0 this strategy yields a tour Q0 that does not contain any of the edges of the form wi vi+1 ∈ Q. This means that |Q0 \ Q| ≥ 3. With a trivial generalization of this example we obtain an instance for which |Q0 \ Q| is unbounded. It is worth noticing that in this example, changing the starting vertex of the Eulerian tour from u to v1 would yield a tour Q0 that differ to Q in only one edge. However, a simple extension of this example, depicted in Figure 3.3, shows that changing the first node of the Eulerian tour is in general not enough. The first step to address this problem is by choosing a walk W 0 for 2 · R0 that is “similar” to W . To this end, let us decompose W as W = u, v1 , w1 , v1 , v2 , w2 , v2 , v3 , w3 , v3 , v4 , w4 , v4 , v3 , v2 , v1 , u, {z } | {z } | W1 so that W = u, v1 , W1 , W2 , u (we use a comma to denote concatenation of walks). Base on this decomposition we define W 0 := u, W2 , W1 , v4 , u, where v4 is a new copy of v4 . Clearly W 0 is a Eulerian walk for 2 · R0 . The tour obtained by visiting the first appearance of each node in W 0 is depicted with dotted lines in Figure 3.2b. As argued before, visiting the first Chapter 3. Robust Multi-Stage Minimum Spanning Trees (a) A tree R (solid lines) and a possible tour obtained by taking shortcuts (dotted lines) to a Eulerian Tour of the form v40 , v1 , w1 , v1 , v2 , . . .. (b) A tree R0 = (R ∪ {v10 v4 }) \ {v40 v1 } (solid lines) and a possible tour (dotted lines) obtained by taking shortcuts to a Eulerian Tour of the form v1 , w1 , v1 , v2 , . . .. Figure 3.3 copy of a node in W and W 0 give very different tours. We fix this problem by not visiting the first copy of each node in W 0 . Rather, we remember the copy of each node that we visit in W to construct Q, and then visit the same copy when constructing Q0 . 
In the following we write in boldface the copies of nodes that we visit when constructing tour Q, W = u , v1 , w1 , v1 , v2 , w2 , v2 , v3 , w3 , v3 , v4 , w4 , v4 , v3 , v2 , v1 , u. | {z } | {z } W1 If when traversing W 0 we visit the same copies of nodes as in W , that is, we also choose to visit the following nodes in boldface, W 0 = u , v4 , v3 , v2 , v1 , w1 , v1 , v2 , w2 , v2 , v3 , w3 , v3 , v4 , w4 , v4 , u, | {z } | {z } W2 we also obtain a Hamiltonian tour Q0 . We remark that the copy of node v1 in W visited when constructing Q does not appear in W 0 . Thus, we had chosen to visit one of the other copies left, namely v1 . In this simple example, this strategy yields a tour Q0 equal to Q. We remark that this is not going to be true in general, but rather Q and Q0 will differ in only a few edges. Additionally, since we take shortcuts to construct Q0 we have that c(Q0 ) ≤ c(2 · R0 ) = 2 · c(R0 ). Now that we have given the intuition of our approach we present it in more detail. To show Theorem 3.33, consider two consecutive trees Tt−1 and Tt from the statement of the theorem. In general, we can think that tree Tt is obtained from Tt−1 by a series of local 3.7. Applications to the Traveling Salesman Problem changes. Here, a local change means an operation that adds an edge and removes another one if it is necessary. This is formalized in the next observation. Observation 3.34. Consider two spanning trees Tt−1 and Tt for graphs Gt−1 and Gt , respectively. Let us also denote ` := |Tt \ Tt−1 |. Then there exists a sequence of trees R1 , R2 . . . , R` satisfying the following. • Tree R1 equals Tt−1 ∪ {f1 } for some f1 ∈ Tt \ Tt−1 adjacent to vt . • For all i ∈ {2, . . . , `} there exist elements fi ∈ Tt \ Tt−1 and gi ∈ Tt−1 \ Tt such that Ri = (Ri−1 ∪ {fi }) \ {gi }. • Tree R` equals Tt . Proof. Assume that Tt \ Tt−1 = {f1 , . . . , f` }, where f1 is any edge adjacent to node vt . We construct trees R1 , . . . 
, R` with the following procedure: Set R1 = Tt−1 ∪ {f1 }; For each i ∈ {2, . . . , `} let gi be any edge in C(Ri−1 , fi ) ∩ (Tt−1 \ Tt ), where C(Ri−1 , fi ) is the unique cycle in set Ri−1 ∪ {fi }; Set Ri := (Ri−1 ∪ {fi }) \ {gi }. We remark that this procedure is well defined and that edge gi must exist for every i, otherwise tree Tt would contain a cycle. Also, note that R` = Tt , since in every iteration we add an edge in Tt \ Tt−1 and remove an edge in Tt−1 \ Tt . With this observation, it is enough to find an algorithm that is robust against a local change of the tree. More precisely, assume that we have constructed a tour Qt−1 based on Tt−1 . To construct tour Qt we consider trees R1 , . . . , R` as in the previous observation. Then we derive a procedure for updating tour Qt−1 to a tour Q1 , where Q1 is a tour constructed based on tree R1 such that |Q1 \ Qt−1 | ≤ 4. After, for each i ∈ {2, . . . , `} we must update tour Qi−1 to a tour Qi such that |Qi \ Qi−1 | ≤ 4. In each of these steps we construct tour Qi by an appropriate way of shortcutting tour Ri so that c(Qi ) ≤ 2 · c(Ri ). By defining Qt := Q` we will obtain that |Qt \ Qt−1 | ≤ 4 · ` = 4 · |Tt \ Tt−1 |, implying Theorem 3.33. Based on this we consider two cases. The first one corresponds to updating tour Qt−1 to 1 Q . This is a somewhat easier case since R1 and Tt−1 only differ by one edge, namely, edge f1 . The second case corresponds to updating tour Qi−1 to tour Qi , which is a more involved operation since Ri is obtained from Ri−1 by swapping a pair of edges. We now focus on the second case. The first case will then follow easily by a similar argument. Given a graph G = (V, E), let us consider two spanning trees R and R0 where R0 = (R ∪ {f }) \ {g} for some edges f 6∈ R and g ∈ R. Assume that we have already computed a walk W = x1 , x2 , . . . , xr corresponding to 2 · R. Consider the set {x1 , . . . 
, xr }, and recall that we consider all the copies of nodes in W as distinct, so that there are no repeated elements in this set. Additionally, assume that we have computed a function I that assigns to each element in {x1 , . . . , xr } a number in {0, 1} such that for each node v ∈ V exactly one copy xi of v satisfies I(xi ) = 1. In case that I(xi ) = 1, we say that I selects xi . This function indicates whether a copy of a node is visited by our Hamiltonian tour or not (with our visual representation in Equations 3.4 and 3.5, we would write x in boldface if and only if I(x) = 1 for any copy of a node x appearing in W ). The computed tour Q is defined with the following algorithm. Chapter 3. Robust Multi-Stage Minimum Spanning Trees C1 145 C2 x1 v Figure 3.4: Sketch of walk W and its decomposition. Input: A tree R, a Eulerian walk W = x1 , . . . , xr for 2 · R and a function I that assigns each element in the set {x1 , . . . , xr } a number in {0, 1} such that exactly one copy xi of v in {x1 , . . . , xr } satisfies that I(xi ) = 1. 1. Create a walk of the form x`1 , x`2 , . . . , x`|V | where `i < `j for all i < j and I(x`i ) = 1 for all i. 2. Return Q := {x`1 x`2 , x`2 x`3 , . . . , x`|V |−1 x`|V | , x`|V | x`1 }. We first observe that, independently of which function I we choose, the constructed tour has cost at most twice the cost of the original tree R. Observation 3.35. By using any valid function I as input of Algorithm Robust-TourShortcut, the returned output Q satisfies that c(Q) ≤ 2 · c(R). We omit the proof of this observation since it follows by the same argument as the proof of Observation 3.5. We now study how to update the Eulerian walk W to obtain an appropriate walk W 0 for the new tree R0 . Recall that R0 = (R ∪ {f }) \ {g} for some edges f, g, and assume that f = st and g = vw for some nodes v, w, s, t. Also, note that the set R \ {g} induces two connected components, which we denote by C1 and C2 . 
Assume without loss of generality that x1 , v, s ∈ C1 and t, w ∈ C2 . Therefore, the walk W starts in C1 , then travels to C2 by using a copy of edge vw, traverses all edges in 2 · R2 restricted to C2 , and then returns to C1 by using the second copy of vw (see Figure 3.4). We conclude that W is of the form W = W1 , v, w, W2 , w, v, W10 , where W1 and W10 only touch nodes in C1 and W2 is a closed Eulerian walk for 2 · R restricted to C2 . This implies that W2 must visit t. Also, notice that either W1 or W10 (or both) visits node s. We can assume without loss of generality that W10 visits s, otherwise we redefine W 3.7. Applications to the Traveling Salesman Problem as xr , xr−1 , . . . , x1 (notice that inverting the order of W but maintaining function I yields the same Hamiltonian tour). Therefore we can decompose W as W = W (x1 , v), v, W (w, t), W (t, w), w, W (v, s), W (s, x1 ), where walk W (x, y) = x` , x`+1 , . . . , xu satisfies that x` is a copy of x and xu+1 is a copy of y. Based on this decomposition we construct a Eulerian tour for 2 · R0 , W 0 := W (x1 , v), W (v, s), s, W (t, w), W (w, t), t, W (s, x1 ). We remark that s and t are new copies of s and t, respectively, which are different to any copy of these nodes appearing in W . For these copies we define I(s) = I(t) = 0. Also, notice that W contains two elements (i. e., copies of nodes) that are not appearing in W 0 , namely v and w. Thus, if function I originally selects one of these elements then, for constructing Q0 , function I must be updated so that it selects another copy of v, respectively w. In this case we set I(x) = 1 where x is the first element appearing in W (v, s) (which is a copy of v) and W (w, t) (which is a copy of w), respectively. Summarizing, we use the following algorithm to construct a walk W 0 based on W and I. This algorithm takes as an input the original function I (that we assumed was used to construct W by Algorithm Robust-Tour-Shortcut) and updates it accordingly. 
Algorithm Input: A tree R of graph G = (V, E); A Eulerian tour W = x1 , . . . , xr (with x1 = xr ) for graph (V, 2 · R), where 2 · R is a set of edges obtained by duplicating every edge in R; A function I that assigns to each element in the multiset {x1 , . . . , xr } a number in {0, 1} such that for each v ∈ V exactly one copy xi of v satisfies that I(xi ) = 1; A tree R0 = (R ∪ {f }) \ {h} for some edges f = st 6∈ R and g = vw ∈ R. 1. Decompose W in walks such that W = W (x1 , v), v, W (w, t), W (t, w), w, W (v, s), W (s, x1 ). Each sequence of nodes W (x, y) = x` , x`+1 , . . . , xu satisfies that x ` is a copy of x and xu+1 is a copy of y. If the decomposition is not possible then redefine W := xr , xr−1 , . . . , x2 , x1 and repeat the step. 2. Return a new walk W 0 of the form W 0 := W (x1 , v), W (v, s), s, W (t, w), W (w, t), t, W (s, x1 ), where t and s are new copies of nodes t and s, respectively. 3. Set I(t) = I(s) = 0. If I(v) = 1 then set I(x) = 1 where x is the first node visited by W (v, s) (and thus it is a copy of v). Similarly, if I(w) = 1 then set I(x) = 1 where x is the first node visited by W (w, t) (and thus it is a copy of w). Chapter 3. Robust Multi-Stage Minimum Spanning Trees Now that we have constructed a walk W 0 , we derive Q0 by taking shortcuts in the same way as when constructing Q. That is, we define Q0 as the output of Algorithm RobustTour-Shortcut on input W 0 and I (where I has been updated by Algorithm RobustWalk-Update when constructing W 0 ). We remark that our construction ensures that when constructing Q and Q0 , the same copy of nodes are taken inside each walk W (x1 , v), W (v, s), W (t, w), W (w, t) and W (s, xr ) (except maybe for the first node in W (v, s) and W (w, t)). Therefore, the edges in Q and Q0 picked when traversing each of these 5 walks are (mostly) the same. This observation is the key to show the following theorem, which summarizes all our previous discussion. Theorem 3.36. 
Consider two spanning trees R and R0 where R0 = (R ∪ {f }) \ {g} for some edges f 6∈ R and g ∈ R. Assume that Q is the output of Algorithm Robust-TourShortcut on input R, W and I, where W is a walk for 2 · R and I is chosen arbitrarily. Let W 0 be the output of Algorithm Robust-Walk-Update. Then, if Q0 is the output of Algorithm Robust-Tour-Shortcut on input R0 , W 0 and I (where I was updated by Algorithm Robust-Walk-Update), then c(Q) ≤ 2 · c(R), c(Q0 ) ≤ 2 · c(R0 ) and |Q0 \ Q| ≤ 4. Proof. We just need to show that |Q0 \ Q| ≤ 4. Consider any walk X ∈ {W (x1 , v), W (v, s), W (t, w), W (w, t), W (s, x1 )}, where X = xh , xh+1 , . . . , xu for some positive integers h, u. Consider the copies of nodes visited by X that are selected by function I, that is, the set {x`1 , . . . , x`q } such that h ≤ `1 < `2 < . . . < `q ≤ u, and given i ∈ {h, . . . , u} we have that I(xi ) = 1 if and only if i = `j for some j. Notice that by construction of Q and Q0 , for each i ∈ {h + 1, . . . , u} each edge of the form x`i x`i+1 belongs to Q and Q0 . If X equals W (v, s) or W (w, t), this might not happen for i = h if the value of I(x), for x being the first element in X, was modified to 1 in Algorithm RobustWalk-Update (Step 3). Therefore we will have to consider these cases separately. With this observation we simply visit the vertices in the order given by W 0 and see which edges belong to Q0 \ Q. We have four different cases, corresponding to the moments in which walk W 0 traverses from W (x1 , v) to W (v, s), from W (v, s) to W (t, w), from W (t, w) to W (w, t), and from W (w, t) to W (s, x1 ). 1. The first case is further divided into two subcases. Let x be the first element in W (v, s) (that is a copy of v). For the first subcase, assume that I(x) is left unchanged by Algorithm Robust-Walk-Update. Then there can be an edge in Q0 \ Q connecting the last node in W (x1 , v) selected by I and the first node in W (v, s) selected by I. 
For the second subcase we assume that the algorithm updates I by setting I(x) = 1. Recall that W is of the form W = W(x1, v), v, W(w, t), W(t, w), w, W(v, s), W(s, x1), and that I(x) is updated if and only if v (the element after W(x1, v) in W) was selected by I. Since W′ := W(x1, v), W(v, s), s̄, W(t, w), W(w, t), t̄, W(s, x1) and the algorithm sets I(x) = 1, both Q and Q′ contain the edge that connects the last element in W(x1, v) selected by I and v. Thus, in this subcase we must only account for an edge in Q′ \ Q that connects the first node in W(v, s) selected by I (which is a copy of v) and the second node in W(v, s) selected by I. Since these subcases cannot happen simultaneously, they account for at most one edge in Q′ \ Q.
2. By the discussion above, the second occurrence (in the worst case) of an edge in Q′ \ Q can only be the edge that connects the last node in W(v, s) selected by I and the first node in W(t, w) selected by I (note that s̄ is by definition not selected by I).
3. Analogously to the first case, we also distinguish two subcases depending on whether the value of I(x), for x being the first element in W(w, t), is set to 1 by the algorithm. By the same argumentation, the two subcases together account for one edge in Q′ \ Q.
4. Finally, the last possible occurrence of an edge in Q′ \ Q corresponds to the edge that connects the last node in W(w, t) selected by I and the first node in W(s, x1) selected by I (again, t̄ is not selected by I).
We conclude that |Q′ \ Q| ≤ 4.

3.7. Applications to the Traveling Salesman Problem

To show Theorem 3.33, we can iterate the algorithmic ideas just presented for each pair of trees R^i and R^{i+1} obtained in Observation 3.34. As mentioned before, we still need to determine how to construct a tour based on R^1 = Tt−1 ∪ {f1} given a tour Qt−1 for the tree Tt−1. We shortly explain how to deal with this case.
Assume that we have a tour Qt−1 constructed as the output of Algorithm Robust-Tour-Shortcut on input Tt−1, Wt−1 and I. We use a similar (but simpler) approach as before. Recall that f1 = vvt where v ∈ Vt−1, and thus vt is a leaf of R^1. Thus, if Wt−1 is of the form Wt−1 = W̄, v, Ŵ, we can define an Eulerian walk W^1 := W̄, v, vt, v̄, Ŵ for 2·R^1, where v̄ is a new copy of v. The function I is updated so that I(v̄) = 0 and I(vt) = 1. With this construction it is easy to observe that applying Algorithm Robust-Tour-Shortcut on input R^1 and the updated function I yields a tour Q^1 such that |Q^1 \ Qt−1| ≤ 2.

We have presented all arguments for proving Theorem 3.33. For the sake of completeness we present in detail the algorithm claimed in this theorem. In the algorithm we use lower indices to indicate iterations in our input sequence (e.g., for trees T1, …, Tt), and upper indices to indicate iterations corresponding to the sequence of trees R^1, …, R^ℓ as given by Observation 3.34.

Algorithm
Input: A sequence of complete metric graphs G0, …, Gt, … where Gt = (Vt, Et) and Vt = {v0, …, vt}; a spanning tree Tt of graph Gt for each t ≥ 0.
1. Set W1 := v0, v1, v̄0; set I(v0) := 1, I(v1) := 1 and I(v̄0) := 0 (where v̄0 is the closing copy of v0); and set T1 := {v0v1}.
2. For each t ≥ 2 do the following.
(a) Use the algorithm presented in Observation 3.34 to create a sequence of spanning trees R^1, …, R^ℓ for graph Gt satisfying the following.
• Tree R^1 equals Tt−1 ∪ {f1} for some f1 = vvt ∈ Tt \ Tt−1 with v ∈ Vt−1.
• Tree R^ℓ equals Tt.
• For all i ∈ {2, …, ℓ} there exist elements fi ∈ Tt \ Tt−1 and gi ∈ Tt−1 \ Tt such that R^i = (R^{i−1} ∪ {fi}) \ {gi}.
(b) Define an Eulerian walk W^1 for 2·R^1 as follows.
• Assume that f1 = vvt for some v ∈ Vt−1. Then there exist two walks W̄ and Ŵ such that Wt−1 = W̄, v, Ŵ.
• Set W^1 := W̄, v, vt, v̄, Ŵ, where v̄ is a new copy of v.
(c) Set I(vt) := 1 and I(v̄) := 0.
(d) Define tour Q^1 as the output of Algorithm Robust-Tour-Shortcut on input W^1 and I.
(e) For all i ∈ {2, …, ℓ} do:
i. Define W^i as the output of Algorithm Robust-Walk-Update on input R = R^{i−1}, W = W^{i−1}, I, and R′ = R^i (note that here I gets updated by the algorithm);
ii. Define Q^i as the output of Algorithm Robust-Tour-Shortcut on input W^i and I.
(f) Set Wt := W^ℓ and Qt := Q^ℓ.

We now show that this algorithm fulfills the claim in Theorem 3.33.

Proof (Theorem 3.33). Fix an iteration t ≥ 2. Notice that Qt = Q^ℓ is the output of Algorithm Robust-Tour-Shortcut on input W^ℓ, where W^ℓ is an Eulerian tour of 2·R^ℓ = 2·Tt. Thus, by Observation 3.35, we have that c(Qt) ≤ 2·c(Tt). Recall that |Tt \ Tt−1| = ℓ. To bound |Qt \ Qt−1|, first notice that |Q^i \ Q^{i−1}| ≤ 4 for all i ∈ {2, …, ℓ}. This follows from Theorem 3.36. Also, we have that |Q^1 \ Qt−1| ≤ 2. To see this, consider the walk Wt−1 = W̄, v, Ŵ. Notice that the walks W^1 and Wt−1 visit the nodes in W̄ and Ŵ in the same order. Also, the tour Qt−1 for Wt−1 and the tour Q^1 for W^1 visit the same copies of nodes inside W̄ and Ŵ. This implies that |Q^1 \ Qt−1| ≤ 2. With this we conclude that

|Qt \ Qt−1| = |Q^1 \ Qt−1| + Σ_{i=2}^{ℓ} |Q^i \ Q^{i−1}| ≤ 4·ℓ = 4·|Tt \ Tt−1|.

3.8. Conclusions

The theorem just shown, together with Theorem 3.20, directly implies the following result.

Theorem 3.37. The Robust Traveling Salesman problem on metric graphs admits a (2 + ε)-competitive algorithm with amortized budget O((1/ε) log(1/ε)) for any ε > 0.

In this chapter we studied the Robust MST problem. We presented the first algorithm with a constant competitive guarantee and constant amortized budget. Moreover, the algorithm computes arbitrarily close to optimal solutions, that is, it is (1 + ε)-competitive, and needs amortized budget O((1/ε) log(1/ε)). We also showed that this last bound is best possible up to logarithmic factors over all algorithms with the same competitive ratio.
Subsequently, we considered the problem in the non-amortized setting. For the full information case, we gave two algorithms with budget 2 that are 2-approximate at the end of the input sequence. The first approach, Algorithm Tour, is based on computing a tour of the whole graph. This algorithm has to carefully choose the two edges to insert in each iteration. The second algorithm is a refinement of Algorithm Tour, and has the advantage that one of the two inserted edges is chosen greedily. Additionally, we embedded Algorithm Tour into a doubling framework, obtaining a 14-competitive algorithm if the whole input sequence is known in advance. Our last result for the Robust MST problem is a greedy algorithm that needs a budget of 2. We introduced a conjecture that, if true, would imply that this algorithm is constant competitive. Finally, we considered the robust version of the Traveling Salesman problem. We showed that all the results for the Robust MST problem can be transferred to this setting by only losing a factor of 2 in the competitive guarantee and a factor of 4 in the budget.

Our study leaves several open questions and indicates different directions for future research. For the Robust MST problem, the first open problem is to close the O(log(1/ε)) gap on the amortized budget for (1 + ε)-competitive algorithms. Even more interesting would be to adapt our algorithm to the case in which nodes may leave the terminal set. There are several technical difficulties in achieving such a result. In particular, we would need to define appropriate unfreezing rules: if a sequence of edges is frozen and the value of OPTt decreases considerably (by more than a constant factor), then we need to unfreeze the edge sequence to maintain a good approximation guarantee. An even more involved technical problem is to adapt the analysis of the amortized budget of the algorithm.
Indeed, notice that our analysis bounds the length of each edge sequence by upper bounding the cost of the greedy edge that starts the sequence. This upper bound is in terms of OPTt (or OPTt^max). The problem lies in the fact that each greedy edge corresponds to a node, and a node that is removed from the graph does not contribute to OPTt. Therefore we cannot bound its corresponding greedy edge in the same manner as before.

We also leave several open questions for the non-amortized scenario. In the full information case, we embedded Algorithm Tour into a doubling framework to obtain a 14-competitive algorithm. However, it is not clear that we can use the refinement of this algorithm (presented in Theorem 3.23) to derive a similar result. In other words, it is still open to determine whether there exists a constant competitive solution that inserts a greedy edge in every iteration. The difficulty in doing this lies in the following fact. Notice that the output sequence of the algorithm in Theorem 3.23 only satisfies c(Tn) ≤ 2·OPTn, but the ratio between c(Tt) and OPTn might be arbitrarily large for t ≤ n − 1. This occurs, e.g., in the example given in Lemma 3.14. On the other hand, Algorithm Tour returns a sequence of trees satisfying c(Tt) ≤ 2·OPTn for all t ∈ {0, …, n}. This last fact is essential to prove the competitive guarantee of Algorithm Doubling-Tour.

The main question that we left open in this chapter is whether Conjecture 3.29 holds in general. This would imply that there exists an online algorithm with budget 2 that is constant competitive. For the Traveling Salesman problem, it would be interesting to determine whether the competitive guarantee of our results can be decreased. Notice that our algorithm is obtained by taking a special shortcut tour of a tree, which can be seen as using part of Christofides' algorithm.
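The tree-doubling-and-shortcutting step referred to here (duplicate the tree's edges, take an Eulerian walk, keep one copy of each node) can be sketched in Python. This is an illustrative sketch, not code from the thesis: the function names are hypothetical, and the "keep the first copy of each node" rule is one valid choice of the indicator function I.

```python
# Illustrative sketch (assumed names): shortcut an Eulerian walk of a doubled tree.

def euler_walk(tree_edges, root):
    """Closed walk that traverses every tree edge exactly twice (DFS order)."""
    adj = {}
    for u, v in tree_edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    walk, seen = [root], {root}

    def dfs(u):
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                walk.append(v)
                dfs(v)
                walk.append(u)  # walk back over the duplicated copy of edge uv

    dfs(root)
    return walk

def shortcut_tour(walk):
    """Keep the first copy of each node (one valid choice of I), close the cycle."""
    tour, kept = [], set()
    for x in walk:
        if x not in kept:
            kept.add(x)
            tour.append(x)
    tour.append(tour[0])
    return tour

def cost(seq, d):
    """Total length of a walk under a distance function d."""
    return sum(d(u, v) for u, v in zip(seq, seq[1:]))
```

In any metric, each shortcut replaces a subwalk by a single edge of no larger cost, so the resulting tour Q satisfies c(Q) ≤ c(walk) = 2·c(R), which is the factor 2 appearing in Observation 3.35 and Theorem 3.36.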
However, the approximation guarantee of 3/2 achieved by Christofides is based on computing a minimum cost matching on the nodes of odd degree in the tree. To apply the full power of this algorithm we would need to study a robust version of the Minimum Cost Matching problem in metric graphs. Obtaining a (1 + ε)-competitive algorithm with constant amortized budget for matchings would translate into a (3/2 + ε)-competitive guarantee with constant amortized budget for the Robust TSP problem.

Bibliography

[AA93] N. Alon and Y. Azar. On-line Steiner trees in the Euclidean plane. Discrete and Computational Geometry, 10:113–121, 1993.
[AAF+97] J. Aspnes, Y. Azar, A. Fiat, S. Plotkin, and O. Waarts. On-line routing of virtual circuits with applications to load balancing and machine scheduling. Journal of the ACM, 44:486–504, 1997.
[AAPW01] B. Awerbuch, Y. Azar, S. Plotkin, and O. Waarts. Competitive routing of virtual circuits with unknown duration. Journal of Computer and System Sciences, 62:385–397, 2001.
[AAWY98] N. Alon, Y. Azar, G. J. Woeginger, and T. Yadid. Approximation schemes for scheduling on parallel machines. Journal of Scheduling, 1:55–66, 1998.
[AE98] Y. Azar and L. Epstein. On-line machine covering. Journal of Scheduling, 1:67–77, 1998.
[AFL+01] G. Ausiello, E. Feuerstein, S. Leonardi, L. Stougie, and M. Talamo. Algorithms for the on-line travelling salesman. Algorithmica, 29:560–581, 2001.
M. Andrews, M. Goemans, and L. Zhang. Improved bounds for on-line load balancing. Algorithmica, 23:278–301, 1999.
B. Awerbuch, Y. He, and Y. Bartal. On-line generalized Steiner problem. Theoretical Computer Science, 324:313–324, 2004.
S. Albers. Better bounds for online scheduling. SIAM Journal on Computing, 29:459–473, 1999.
S. Albers. On randomized online scheduling. In Proceedings of the 34th Annual ACM Symposium on Theory of Computing (STOC 2002), pages 134–143, 2002.
S. Albers. Online algorithms: a survey. Mathematical Programming, 97:3–26, 2003.
S. Angelopoulos.
Improved bounds for the online Steiner tree problem in graphs of bounded edge-asymmetry. In Proceedings of the 18th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2007), pages 248–257, 2007.
S. Angelopoulos. A near-tight bound for the online Steiner tree problem in graphs of bounded asymmetry. In D. Halperin and K. Mehlhorn, editors, Algorithms — ESA 2008, volume 5193 of Lecture Notes in Computer Science, pages 76–87. Springer, 2008.
S. Arora. Polynomial time approximation schemes for Euclidean traveling salesman and other geometric problems. Journal of the ACM, 45:2–11, 1996.
A. Avidor, J. Sgall, and Y. Azar. Ancient and new algorithms for load balancing in the ℓp norm. Algorithmica, 29:422–441, 2001.
Y. Azar. On-line load balancing. In A. Fiat and G. J. Woeginger, editors, Online Algorithms: The State of the Art, volume 1442 of Lecture Notes in Computer Science, pages 178–195. Springer, 1998.
R. A. Baeza-Yates, J. C. Culberson, and G. J. E. Rawlins. Searching in the plane. Information and Computation, 106:234–252, 1993.
[BDG+09] M. Babaioff, M. Dinitz, A. Gupta, N. Immorlica, and K. Talwar. Secretary problems: weights and discounts. In Proceedings of the 20th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2009), pages 1245–1254, 2009.
A. Borodin and R. El-Yaniv. Online Computation and Competitive Analysis. Cambridge University Press, New York, 1998.
A. Ben-Tal, L. El Ghaoui, and A. Nemirovski. Robust Optimization. Princeton University Press, Princeton and Oxford, 2009.
Y. Bartal, A. Fiat, H. Karloff, and R. Vohra. New algorithms for an ancient scheduling problem. Journal of Computer and System Sciences, 51:359–366, 1995.
[BFP+73] M. Blum, R. W. Floyd, V. Pratt, R. L. Rivest, and R. E. Tarjan. Time bounds for selection. Journal of Computer and System Sciences, 7:448–461, 1973.
J. Byrka, F. Grandoni, T. Rothvoß, and L. Sanità. An improved LP-based approximation for Steiner tree.
In Proceedings of the 42nd ACM Symposium on Theory of Computing (STOC 2010), pages 583–592, 2010.
M. Babaioff, N. Immorlica, and R. Kleinberg. Matroids, secretary problems, and online mechanisms. In Proceedings of the 18th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2007), pages 434–443, 2007.
P. Berman and M. Karpinski. 8/7-approximation algorithm for (1,2)-TSP. In Proceedings of the 17th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2006), pages 641–648, 2006.
A. Balakrishnan, T. L. Magnanti, and P. Mirchandani. Designing hierarchical survivable networks. Operations Research, 46:116–136, 1998.
[BSvdSS11] S. Boyd, R. Sitters, S. van der Ster, and L. Stougie. TSP on cubic and subcubic graphs. In O. Günlük and G. Woeginger, editors, Integer Programming and Combinatorial Optimization (IPCO 2011), volume 6655 of Lecture Notes in Computer Science, pages 65–77. Springer, 2011.
[CC76] R. A. Cody and E. G. Coffman. Record allocation for minimizing expected retrieval costs on drum-like storage devices. Journal of the ACM, 23:103–115, 1976.
M. Chlebík and J. Chlebíková. The Steiner tree problem on graphs: inapproximability results. Theoretical Computer Science, 406:207–214, 2008.
W. Cook, A. M. H. Gerards, A. Schrijver, and É. Tardos. Sensitivity theorems in integer linear programming. Mathematical Programming, 34:251–264, 1986.
N. Christofides. Worst-case analysis of a new heuristic for the travelling salesman problem. Technical Report 388, Graduate School of Industrial Administration, CMU, 1976.
M. Chrobak and C. Kenyon-Mathieu. Competitiveness via doubling. SIGACT News, 37:115–126, 2006.
[CvVW94] B. Chen, A. van Vliet, and G. J. Woeginger. Lower bounds for randomized online scheduling. Information Processing Letters, 51:219–222, 1994.
[CW75] A. K. Chandra and C. K. Wong. Worst-case analysis of a placement algorithm related to storage allocation. SIAM Journal on Computing, 4:249–263, 1975.
R. Diestel. Graph Theory.
Springer, Heidelberg, 4th edition, 2005.
M. Dynia, M. Korzeniowski, and J. Kutylowski. Competitive maintenance of minimum spanning trees in dynamic graphs. In J. van Leeuwen, G. F. Italiano, W. van der Hoek, C. Meinel, H. Sack, and F. Plášil, editors, SOFSEM 2007: Theory and Practice of Computer Science, volume 4362 of Lecture Notes in Computer Science, pages 260–271. Springer, 2007.
N. Dimitrov and C. Plaxton. Competitive weighted matching in transversal matroids. In L. Aceto, I. Damgård, L. Goldberg, M. Halldórsson, A. Ingólfsdóttir, and I. Walukiewicz, editors, Automata, Languages and Programming (ICALP 2008), volume 5125 of Lecture Notes in Computer Science, pages 397–408. Springer, 2008.
J. Edmonds. Submodular functions, matroids, and certain polyhedra. In R. K. Guy, E. Milner, and N. Sauer, editors, Combinatorial Structures and Their Applications, pages 69–87. Gordon and Breach.
J. Edmonds. Matroids and the greedy algorithm. Mathematical Programming, 1:127–136, 1971.
L. Epstein and A. Levin. A robust APTAS for the classical bin packing problem. Mathematical Programming, 119:33–49, 2009.
L. Epstein and A. Levin. Robust algorithms for preemptive scheduling. In C. Briand and M. M. Halldórsson, editors, Algorithms — ESA 2011, volume 6942 of Lecture Notes in Computer Science, pages 567–578. Springer, 2011.
L. Epstein, A. Levin, J. Mestre, and D. Segev. Improved approximation guarantees for weighted matching in the semi-streaming model. SIAM Journal on Discrete Mathematics, 25:1251–1265, 2011.
L. Epstein, A. Levin, and R. van Stee. Max-min online allocations with a reordering buffer. SIAM Journal on Discrete Mathematics, 25:1230–1250, 2011.
L. Epstein and J. Sgall. Approximation schemes for scheduling on uniformly related and identical parallel machines. Algorithmica, 39:43–57, 2004.
D. K. Friesen and B. L. Deuermeyer. Analysis of greedy solutions for a replacement part sequencing problem. Mathematics of Operations Research, 6:74–87, 1981.
[FKM+04] J. Feigenbaum, S.
Kannan, A. McGregor, S. Suri, and J. Zhang. On graph problems in a semi-streaming model. In J. Díaz, J. Karhumäki, A. Lepistö, and D. Sannella, editors, Automata, Languages and Programming (ICALP 2004), volume 3142 of Lecture Notes in Computer Science, pages 207–216, 2004.
[FKM10] R. Fujita, Y. Kobayashi, and K. Makino. Robust matchings and matroid intersections. In M. de Berg and U. Meyer, editors, Algorithms — ESA 2010, volume 6347 of Lecture Notes in Computer Science, pages 123–134. Springer, 2010.
M. L. Fisher, G. L. Nemhauser, and L. A. Wolsey. An analysis of approximations for maximizing submodular set functions—I. Mathematical Programming, 14:265–294, 1978.
[FNW78b] M. L. Fisher, G. L. Nemhauser, and L. A. Wolsey. An analysis of approximations for maximizing submodular set functions—II. Mathematical Programming Studies, 8:73–87, 1978.
[FPS02] M. Faloutsos, R. Pankaj, and K. C. Sevcik. The effect of asymmetry on the online multicast routing problem. International Journal of Foundations of Computer Science, 13:889–910, 2002.
R. Fleischer and M. Wahl. Online scheduling revisited. Journal of Scheduling, 3:343–353, 2000.
H. N. Gabow. A matroid approach to finding edge connectivity and packing arborescences. In Proceedings of the 23rd Annual ACM Symposium on Theory of Computing (STOC 1991), pages 112–122, 1991.
D. Gale. Optimal assignments in an ordered set: an application of matroid theory. Journal of Combinatorial Theory, 4:176–180, 1968.
M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, San Francisco, 1979.
M. X. Goemans. Minimum bounded degree spanning trees. In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2006), pages 273–282, 2006.
E. N. Gilbert and H. O. Pollak. Steiner minimal trees. SIAM Journal on Applied Mathematics, pages 1–29, 1968.
R. L. Graham. Bounds for certain multiprocessing anomalies.
Bell System Technical Journal, 45:1563–1581, 1966.
R. L. Graham. Bounds on multiprocessing timing anomalies. SIAM Journal on Applied Mathematics, 17:416–429, 1969.
H. N. Gabow and R. E. Tarjan. Efficient algorithms for a family of matroid intersection problems. Journal of Algorithms, 5:80–131, 1984.
L. A. Hall. Approximation algorithms for scheduling. In D. S. Hochbaum, editor, Approximation Algorithms for NP-Hard Problems. PWS Publishing Company, 1997.
N. J. A. Harvey, D. R. Karger, and K. Murota. Deterministic network coding by matrix completion. In Proceedings of the 16th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2005), pages 489–498, 2005.
R. Hassin and A. Levin. An efficient polynomial time approximation scheme for the constrained minimum spanning tree problem using matroid intersection. SIAM Journal on Computing, 33:261–268, 2004.
R. Hassin and S. Rubinstein. Robust matchings. SIAM Journal on Discrete Mathematics, 15:530–537, 2002.
D. S. Hochbaum and D. B. Shmoys. Using dual approximation algorithms for scheduling problems: theoretical and practical results. Journal of the ACM, 34:144–162, 1987.
M. Imase and B. M. Waxman. Dynamic Steiner tree problem. SIAM Journal on Discrete Mathematics, 4:369–384, 1991.
K. Jansen. An EPTAS for scheduling jobs on uniform processors: using an MILP relaxation with a constant number of integral variables. SIAM Journal on Discrete Mathematics, 24:457–485, 2010.
P. Jaillet and M. R. Wagner. Generalized online routing: new competitive ratios, resource augmentation and asymptotic analysis. Operations Research, 56:745–757, 2007.
S. Khuller, S. G. Mitchell, and V. V. Vazirani. On-line algorithms for weighted bipartite matching and stable marriages. Theoretical Computer Science, 127:255–267, 1994.
B. Kalyanasundaram and K. Pruhs. Online weighted matching. Journal of Algorithms, 14:478–488, 1993.
N. Korula and M. Pál. Algorithms for secretary problems on graphs and hypergraphs. In S. Albers, A.
Marchetti-Spaccamela, Y. Matias, S. Nikoletseas, and W. Thomas, editors, Automata, Languages and Programming (ICALP 2009), volume 5556 of Lecture Notes in Computer Science, pages 508–520. Springer, 2009.
D. R. Karger, S. J. Phillips, and E. Torng. A better algorithm for an ancient scheduling problem. Journal of Algorithms, 20:400–430, 1996.
J. B. Kruskal. On the shortest spanning subtree of a graph and the traveling salesman problem. Proceedings of the American Mathematical Society, 7:48–50, 1956.
D. Karger, C. Stein, and J. Wein. Scheduling algorithms. In M. J. Atallah, editor, Algorithms and Theory of Computation Handbook, Applied Algorithms and Data Structures Series. CRC Press, 1998.
M. Karpinski and A. Zelikovsky. New approximation algorithms for the Steiner tree problems. Journal of Combinatorial Optimization, 1:47–65, 1997.
G. Laporte. The traveling salesman problem: an overview of exact and approximate algorithms. European Journal of Operational Research, 59:231–247, 1992.
H. W. Lenstra. Integer programming with a fixed number of variables. Mathematics of Operations Research, 8:538–548, 1983.
[LLRKS85] E. L. Lawler, J. K. Lenstra, A. H. G. Rinnooy Kan, and D. B. Shmoys. The Traveling Salesman Problem. John Wiley and Sons, Chichester, 1985.
[LNRW10] G. Lin, C. Nagarajan, R. Rajaraman, and D. P. Williamson. A general approach for incremental approximation and hierarchical clustering. SIAM Journal on Computing, 39:3633–3669, 2010.
[LSV10] J. Lee, M. Sviridenko, and J. Vondrák. Submodular maximization over multiple matroids via generalized exchange properties. Mathematics of Operations Research, 35:795–806, 2010.
A. McGregor. Finding graph matchings in data streams. In C. Chekuri, K. Jansen, J. D. P. Rolim, and L. Trevisan, editors, Approximation, Randomization and Combinatorial Optimization. Algorithms and Techniques (APPROX-RANDOM 2005), volume 3624 of Lecture Notes in Computer Science, pages 170–181. Springer, 2005.
D. Michail.
Minimum cycle basis: algorithms and applications. PhD thesis, Saarland University, 2006.
J. S. B. Mitchell. Guillotine subdivisions approximate polygonal subdivisions: a simple polynomial-time approximation scheme for geometric TSP, k-MST, and related problems. SIAM Journal on Computing, 28:1298–1309, 1999.
A. Meyerson, A. Nanavati, and L. Poplawski. Randomized online algorithms for minimum metric bipartite matching. In Proceedings of the 17th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2006), pages 954–959, 2006.
T. Mömke and O. Svensson. Approximating graphic TSP by matchings. In Proceedings of the 52nd Annual IEEE Symposium on Foundations of Computer Science (FOCS 2011), pages 560–569, 2011.
S. Muthukrishnan. Data streams: algorithms and applications. Foundations and Trends in Theoretical Computer Science, 1:117–236, 2005.
C. A. S. Oliveira and P. M. Pardalos. A survey of combinatorial optimization problems in multicast routing. Computers and Operations Research, 32:1953–1981, 2005.
J. G. Oxley. Matroid Theory. Oxford University Press, New York, 1993.
J.-J. Pansiot and D. Grad. On routes and multicast trees in the internet. ACM SIGCOMM Computer Communication Review, 28:41–50, 1998.
H. Prömel and A. Steger. A new approximation algorithm for the Steiner tree problem with performance ratio 5/3. Journal of Algorithms, 36:89–101, 2000.
K. Pruhs, J. Sgall, and E. Torng. Online scheduling. In J. Y.-T. Leung, editor, Handbook of Scheduling: Algorithms, Models, and Performance Analysis, Computer and Information Science Series. Chapman and Hall, 2004.
C. Papadimitriou and M. Yannakakis. The traveling salesman problem with distances one and two. Mathematics of Operations Research, 18:1–11, 1993.
R. Rado. Note on independence functions. Proceedings of the London Mathematical Society, 7:300–320, 1957.
J. F. Rudin III and R. Chandrasekaran. Improved bounds for the online scheduling problem. SIAM Journal on Computing, 32:717–735, 2003.
J. Reichel and M. Skutella.
Evolutionary algorithms and matroid optimization problems. Algorithmica, 57:187–206, 2008.
G. Robins and A. Zelikovsky. Tighter bounds for graph Steiner tree approximation. SIAM Journal on Discrete Mathematics, 19:122–134, 2005.
A. Schrijver. Combinatorial Optimization. Springer, Berlin, 2003.
J. Sgall. A lower bound for randomized on-line multiprocessor scheduling. Information Processing Letters, 63:51–55, 1997.
J. Sgall. On-line scheduling — a survey. In A. Fiat and G. J. Woeginger, editors, Online Algorithms: The State of the Art, volume 1442 of Lecture Notes in Computer Science, pages 196–231. Springer, 1998.
A. M. Sharp. Incremental algorithms: solving problems in a changing world. PhD thesis, Cornell University, 2007.
N. Subramanian and S. Liu. Centralized multi-point routing in wide area networks. In Proceedings of the 1991 Symposium on Applied Computing (SAC 1991), pages 46–52. IEEE, 1991.
J. A. Soto. Matroid secretary problem in the random assignment model. In Proceedings of the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2011), pages 1275–1284, 2011.
P. Sanders, N. Sivadasan, and M. Skutella. Online scheduling with bounded migration. Mathematics of Operations Research, 34:481–498, 2009.
M. Skutella and J. Verschae. A robust PTAS for machine covering and packing. In M. de Berg and U. Meyer, editors, Algorithms — ESA 2010, volume 6346 of Lecture Notes in Computer Science, pages 36–47. Springer, 2010.
N. Thibault and C. Laforest. An optimal rebuilding strategy for a decremental tree problem. In P. Flocchini and L. Gasieniec, editors, Structural Information and Communication Complexity (SIROCCO 2006), volume 4056 of Lecture Notes in Computer Science, pages 157–170. Springer, 2006.
N. Thibault and C. Laforest. An optimal rebuilding strategy for an incremental tree problem. Journal of Interconnection Networks, 8:75–99, 2007.
V. V. Vazirani. Approximation Algorithms. Springer, Berlin, 2001.
B. M. Waxman. Routing of multipoint connections.
IEEE Journal on Selected Areas in Communications, 6:1617–1622, 1988.
J. Westbrook. Load balancing for response time. Journal of Algorithms, 35:1–16, 2000.
G. J. Woeginger. A polynomial-time approximation scheme for maximizing the minimum machine completion time. Operations Research Letters, 20:149–154, 1997.
J. Westbrook and D. Yan. Linear bounds for on-line Steiner problems. Information Processing Letters, 55:59–63, 1995.
J. Westbrook and D. C. K. Yan. The performance of greedy algorithms for the on-line Steiner tree and related problems. Mathematical Systems Theory, 28:451–468, 1995.
A. Z. Zelikovsky. An 11/6-approximation algorithm for the network Steiner problem. Algorithmica, 9:463–470, 1993.
M. Zelke. Weighted matching in the semi-streaming model. Algorithmica, 2010. ISSN 0178-4617, 1432-0541.

Appendix A Computing Lower Bounds in Linear Time

In this section we refine Algorithm Stable-Average, presented in Chapter 1, Section 1.3.1, to run in linear time. The general idea is to use a binary search approach. For the description of the algorithm we use the following notation. Consider a scheduling instance (J, M). Let Q := {q1, …, qr} be the set of all different processing times, and for each q ∈ Q, denote J≤q := {j ∈ J : pj ≤ q}. Also, we define

f(q) := p(J≤q) / (m − |J \ J≤q|).

This value is the average machine load in an instance where we remove the jobs in J \ J≤q (that is, jobs larger than q) and also |J \ J≤q| machines. Notice that f(q) can be equal to ∞ if m = |J \ J≤q|. Nonetheless, we are only interested in values of f(q) when |J \ J≤q| < m, and thus f is non-negative. With the notation just introduced, Algorithm Stable-Average can be described as follows.

Input: An arbitrary scheduling instance (J, M) with set of processing times Q := {q1, …, qr}.
1. Order the processing times so that qi1 ≤ qi2 ≤ … ≤ qir.
2. For each k = r, …, 1, check whether qik ≤ f(qik). If it does, define p∗ := qik and return f(p∗).
Otherwise, keep iterating.

It is not hard to see that this algorithm returns the same value A = f(p∗) as Algorithm Stable-Average. To avoid ordering the processing times, and therefore reduce the running time of the algorithm, we instead use a binary search approach. Equivalently to this algorithm, we can define p∗ as the largest processing time of a job such that p∗ ≤ f(p∗) and q > f(q) for all q ∈ Q with q > p∗. This suggests the following binary search approach.

Algorithm Fast-Stable-Average
Input: An arbitrary scheduling instance (J, M) with set of processing times Q = {q1, …, qr}.
1. Initialize ℓ ∈ Q and u ∈ Q as the smallest and largest values in Q, respectively.
2. While ℓ ≠ u:
(a) Compute the median q̄ of the set {q : ℓ ≤ q ≤ u}. Note: for a given set of n numbers, we say that its median is the ⌊(n + 1)/2⌋-th largest number in the set.
(b) If q̄ = ℓ then go to Step 3.
(c) Compute f(q̄).
(d) If q̄ > f(q̄) > 0, then set u := q̄. Else, set ℓ := q̄.
3. Set p′ := ℓ, and return f(p′).

We first discuss the correctness of the algorithm, i.e., that p′ = p∗, and afterwards we prove that it can be implemented to run in linear time. For the analysis we assume that q1 < q2 < … < qr. Notice that we have not used this fact during the algorithm itself. The crucial observation is to note that if qi = p∗ or qi = p′, the following three properties are satisfied: (i) qi ≤ f(qi), (ii) qi+1 > f(qi+1), (iii) 0 ≤ f(qi) < ∞. Therefore, the correctness of the algorithm follows from the next lemma.

Lemma A.1. There exists a unique value q ∈ Q satisfying Properties (i), (ii) and (iii).

To show this lemma we first need the following technical result. Figure A.1 depicts the claim of the next lemma. Loosely speaking, the lemma says that in the range in which 0 ≤ f < ∞, f(q) is non-increasing if and only if q ≤ f(q).

Lemma A.2. For each qi ∈ Q such that |J \ J≤qi−1| < m, we have that qi ≤ f(qi) if and only if f(qi) ≤ f(qi−1).
Proof. To simplify the computations we introduce the following notation. Let Ji := {j ∈ J : pj = qi}, si := |Ji|, and mi := m − |J \ J≤qi|. Notice that mi − si > 0, since mi − si = m − |J \ J≤qi−1| is positive by hypothesis. Recalling that f(qi) = p(J≤qi)/mi, that p(J≤qi−1) = p(J≤qi) − p(Ji), and that p(Ji) = si · qi, we have the following sequence of equivalences:

f(qi−1) ≥ f(qi)
⇔ p(J≤qi−1) / (mi − si) ≥ p(J≤qi) / mi
⇔ p(J≤qi) / (mi − si) ≥ p(Ji) / (mi − si) + p(J≤qi) / mi
⇔ p(J≤qi) · (1/(mi − si) − 1/mi) ≥ p(Ji) / (mi − si)
⇔ p(J≤qi) · si / mi ≥ p(Ji) = si · qi
⇔ f(qi) ≥ qi.

Figure A.1: Behavior of function f. (Plot of f(q) against q with the diagonal f(q) = q; the values p∗, qr−1, qr and the range {q : |J \ J≤q| < m} are marked.)

Proof of Lemma A.1. Assume by contradiction that there exist qs, qt ∈ Q satisfying Properties (i), (ii) and (iii) with qs < qt. By Property (iii), we have that 0 ≤ f(qs) < ∞, and thus 0 ≤ f(qt−1) < ∞. This implies that |J \ J≤qt−1| < m, and thus we can apply the previous lemma to qt. Because of Property (i), we have that qt ≤ f(qt), and thus the previous lemma implies that f(qt−1) ≥ f(qt) ≥ qt > qt−1. Applying Lemma A.2 once again for qt−1, we obtain that f(qt−2) ≥ f(qt−1) ≥ qt−1 > qt−2. Iterating this argument, we obtain that f(qs+1) > qs+1, which contradicts Property (ii) for qs.

We have proved the following theorem.

Theorem A.3. Algorithms Stable-Average and Fast-Stable-Average return the same output.

Finally, we must show that Fast-Stable-Average finishes in linear time.

Theorem A.4. The running time of Algorithm Fast-Stable-Average is O(n).

Proof. We show that the k-th iteration of Step 2 of the algorithm can be implemented to run in O(n/2^k) time. Summing over all iterations proves the claim. Indeed, since in every iteration we reduce the set {q : ℓ ≤ q ≤ u} at least by half, this set has at most n/2^k elements in the k-th iteration. We conclude that Step 2(a) takes time O(n/2^k), since it is possible to compute the median in linear time [BFP+73].
For computing Step (2c), notice that we have already computed f(ℓ) in previous iterations, and thus we assume that we store its value. Similarly, we can assume that we have stored the sets {j ∈ J : p_j ≤ ℓ}, {j ∈ J : ℓ < p_j ≤ u} and {j ∈ J : p_j > u}. With the correct data structure we can easily compute the sets {j ∈ J : ℓ < p_j ≤ q̄} and {j ∈ J : q̄ < p_j ≤ u} in O(u − ℓ) ⊆ O(n/2^k) time. This implies that

    P := Σ_{j : ℓ < p_j ≤ q̄} p_j,    |J \ J_{≤ℓ}|,    and    |J \ J_{≤q̄}|

can be found in O(n/2^k) time. The theorem follows by noting that

    f(q̄) = ( f(ℓ) · (m − |J \ J_{≤ℓ}|) + P ) / ( m − |J \ J_{≤q̄}| ).
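As a concrete illustration of what the algorithm computes, here is a minimal Python sketch. It is not the thesis's pseudocode: it uses the plain linear scan over sorted processing times (O(r·n) rather than the O(n) binary search above), and it assumes the definition f(q) = p(J_{≤q}) / (m − |J \ J_{≤q}|), which is inferred from the final formula of the proof.

```python
def stable_average(jobs, m):
    """Return f(p*), where p* is the largest processing time q with
    q <= f(q), and f(q) = (sum of p_j with p_j <= q) / (m - #{j : p_j > q}).

    Assumed reconstruction of Stable-Average; not the thesis's own code.
    """
    for q in sorted(set(jobs), reverse=True):
        denom = m - sum(1 for p in jobs if p > q)   # m - |J \ J_{<=q}|
        if denom > 0:
            f_q = sum(p for p in jobs if p <= q) / denom
            if q <= f_q:                            # Property (i) holds
                return f_q
    return 0.0
```

For instance, with jobs [1, 1, 10] on m = 2 machines, q = 10 fails (f(10) = 6 < 10), and q = 1 succeeds with f(1) = 2: the long job occupies one machine, and the remaining load averages over the other.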
Optimal static mutation strength distributions for the (1+λ) evolutionary algorithm on OneMax Most evolutionary algorithms have parameters, which allow a great flexibility in controlling their behavior and adapting them to new problems. To achieve the best performance, it is often needed to control some of the parameters during optimization, which gave rise to various parameter control methods. In recent works, however, similar advantages have been shown, and even proven, for sampling parameter values from certain, often heavy-tailed, fixed distributions. This produced a family of algorithms currently known as "fast evolution strategies" and "fast genetic algorithms". However, only little is known so far about the influence of these distributions on the performance of evolutionary algorithms, and about the relationships between (dynamic) parameter control and (static) parameter sampling. We contribute to the body of knowledge by presenting an algorithm that computes the optimal static distributions, which describe the mutation operator used in the well-known simple (1+λ) evolutionary algorithm on a classic benchmark problem OneMax. We show that, for large enough population sizes, such optimal distributions may be surprisingly complicated and counter-intuitive. We investigate certain properties of these distributions, and also evaluate the performance regrets of the (1 + λ) evolutionary algorithm using standard mutation operators. 
Original language: English
Title: GECCO '21
Subtitle: Proceedings of the Genetic and Evolutionary Computation Conference
Editors: Francisco Chicano
Publisher: Association for Computing Machinery
Pages: 660-668
Number of pages: 9
ISBN (Print): 978-1-4503-8350-9
Digital Object Identifiers (DOIs)
Status: Published - 26 Jun 2021
Published externally: Yes
Event: GECCO 2021 - Genetic and Evolutionary Computation Conference, Lille, France, 10 Jul 2021 - 14 Jul 2021
DimensionCoordinate.squeeze(*args, **kwargs)

Remove size one axes from the data array. By default all size one axes are removed, but particular size one axes may be selected for removal. Corresponding axes are also removed from the bounds data array, if present.

Parameters:
axes: (sequence of) int — The positions of the size one axes to be removed. By default all size one axes are removed. Each axis is identified by its original integer position. Negative integers counting from the last position are allowed. Parameter example: axes=[2, 0]
inplace: bool, optional — If True then do the operation in-place and return None.
i: deprecated at version 3.0.0 — Use the inplace parameter instead.

Returns:
DimensionCoordinate or None — The new construct with removed data axes. If the operation was in-place then None is returned.

Examples:
>>> f.shape
(1, 73, 1, 96)
>>> f.squeeze().shape
(73, 96)
>>> f.squeeze(0).shape
(73, 1, 96)
>>> g = f.squeeze([-3, 2])
>>> g.shape
(73, 96)
>>> f.bounds.shape
(1, 73, 1, 96, 4)
>>> g.bounds.shape
(73, 96, 4)
Network Flows and Maximum Flow Algorithms

Network flows are used to model and solve various optimization problems in computer science, such as finding the maximum flow in a transportation network or the minimum cut in a network. In this article, we will explore the concept of network flows and delve into the algorithms that can be used to find the maximum flow in a given network.

Introduction to Network Flows

A network flow is a directed graph in which each edge has a capacity, representing the maximum amount of flow that can pass through it. The goal is to find the maximum amount of flow that can be sent from a source node to a sink node, subject to the capacity constraints. In the residual graph used by the algorithms below, flow already sent along an edge can be cancelled by sending flow back along a corresponding reverse edge.

Network flow problems can be represented using an adjacency matrix or an adjacency list. The source node is typically denoted as 's', and the sink node as 't'. Each edge is represented by a tuple (u, v, c, f), where 'u' and 'v' are the nodes connected by the edge, 'c' is the capacity, and 'f' is the current flow through the edge.

Ford-Fulkerson Algorithm

The Ford-Fulkerson algorithm is one of the classic maximum flow algorithms. It uses a depth-first search (DFS) to find augmenting paths from the source to the sink, increasing the flow along these paths until no more augmenting paths exist. The basic idea behind the algorithm is to repeatedly find an augmenting path from 's' to 't' and update the flow by adding the maximum possible flow along the path. This process continues until no more augmenting paths can be found, indicating that the maximum flow has been reached.

Edmonds-Karp Algorithm

The Edmonds-Karp algorithm is a variant of the Ford-Fulkerson algorithm that uses breadth-first search (BFS) instead of DFS to find augmenting paths.
Due to the use of BFS, it guarantees that the first augmenting path found is of minimum length, which improves the algorithm's efficiency. The algorithm starts by initializing the flow to zero and repeatedly finds augmenting paths using BFS. It computes the maximum possible flow along each path and updates the flow value. Once no more augmenting paths can be found, the algorithm terminates, giving the maximum flow in the network. Push-Relabel Algorithms Push-relabel algorithms provide another approach to solve the maximum flow problem. They maintain a preflow, where each node has an excess or deficit, and try to incrementally convert it into a valid flow by pushing the flow through edges or relabeling nodes. There are several variants of push-relabel algorithms, such as the Relabel-to-Front algorithm and the highest-label-first algorithm. These algorithms have a time complexity of O(V^3), where V is the number of nodes in the network. Network flows and maximum flow algorithms play a vital role in solving various optimization problems. Understanding the concepts and algorithms related to network flows is crucial for competitive programmers, as these problems frequently appear in coding competitions and interviews. In this article, we covered the basics of network flows, introduced the Ford-Fulkerson algorithm, the Edmonds-Karp algorithm, and briefly mentioned push-relabel algorithms. These algorithms provide efficient ways to find the maximum flow in a given network, contributing to the optimization of various real-world problems. • Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (2009). Introduction to Algorithms (3rd ed.). The MIT Press.
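The BFS-based augmenting-path search described above can be sketched as follows. This is a minimal illustration, not code from the article; the dict-of-dicts graph representation and all names are my own.

```python
from collections import deque, defaultdict

def edmonds_karp(graph, s, t):
    """Max flow from s to t. graph: {u: {v: capacity}} with non-negative capacities."""
    # Build the residual graph, adding zero-capacity reverse edges.
    residual = defaultdict(dict)
    for u in graph:
        for v, c in graph[u].items():
            residual[u][v] = residual[u].get(v, 0) + c
            residual[v].setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:          # no augmenting path left: done
            return flow
        # Find the bottleneck capacity along the path.
        v, bottleneck = t, float('inf')
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            v = u
        # Augment: push flow forward, add residual capacity backward.
        v = t
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck
```

On a small network such as s→a (3), s→b (2), a→b (1), a→t (2), b→t (3), the algorithm finds the maximum flow of 5.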
November 2018 LSAT Question 14 Explanation

Which one of the following is a month in which the team must work at Krona mine?

Possible problem with deduction. Hello, why can't H be possible for the month of November? We know that at least one H must go between G in August and K in October. If we put the second H in November, we still have two months at G and two months at K but only one H left. There needs to be at least one H month between the KK and GG months, and at least one H month that breaks up the GGG pattern if May/July/August or May/June/August are all G mine months. Please see below for a more complete explanation of the setup:

The game involves assigning an engineering team to work for either Grayson mine (G), Krona mine (K) or company headquarters (H) over a period of 9 months, March through November, with 3 months spent at each of the mines and 3 months at headquarters.

___ ___ ___ ___ ___ ___ ___ ___ ___
M A M J J A S O N

The following rules apply:

(1) The team must work at least one month at headquarters between any two months working at different mines. This rule tells us that any time the team switches mines, there must be at least one month, possibly more, when they work at headquarters; so the pattern could be G H K, or G HH K, or K H G, but never KG or GK, because at least one H must always come in between.

(2) The team cannot work at the same mine for more than two months in a row. This rule tells us that at most we can have two repeating letters in a row for G and K: we could have GG or KK but not GGG or KKK. Combined with rule (1), we can also infer that we can never have HHH or HH, because then it would be impossible to have a month at headquarters between the months of work at either of the mines; so at most we could have one month at headquarters in a row. The rule also allows us to infer that we must have at least two months in a row at each mine; otherwise, we will not be able to comply with rule (1).
To illustrate why, consider this scenario: G H K H G H K ? ? We have two months left and still have 1 month left to assign to mine K and 1 month to mine G, let's say month 8 is mine K G H K H G H K K ? but then month 9 cannot be G because of rule (1), so the only plausible scenario is where we have one each of GG and KK sequences, for example: G G H K K H G H K Another interesting inference is that even though we cannot have GGG or KKK sequence, the rules only tell us that H is required every time the team switches mine but it leaves open the possibility that H could split 3 months at the same mine as well, for example: GG H G KK H K H Let's keep this in mind for later. (3) The team must work at Grayson mine in August. (4) The team must work at Krona mine in October. ___ ___ ___ ___ ___ G ___ K ___ M A M J J A S O N These two rules tell us that since the team switches mines between August and October, they must work in headquarters in September per rule (1). We can also infer that they must work in mine K in November, otherwise, we would not be able to comply with rules (1) and (2) because there are only 2 months left to assign to H and 4 months to both mines. ___ ___ ___ ___ ___ G H K K M A M J J A S O N The rest of the months need to be filled with G G H H and K in some order, and we can infer that H cannot be #1 because if H is #1 there is no valid combination possible as it would necessarily violate either rule (1) or rule (2): H K H GGG H KK (rule 2) H G H K G... (rule 1) So the only possible combinations are: (1) G G H K H G H K K M A M J J A S O N (2) K H G G H G H K K M A M J J A S O N (3) G H K H G G H K K M A M J J A S O N (4) K H G H G G H K K M A M J J A S O N Notice that scenario (4) takes advantage of our inference in rule (2) that H can be used to split G H GG pattern. For this game it is important to spend time on the setup because the rules allow us to determine every possible scenario - there are only four. 
If you have made all the above inferences, you would be able to answer all the questions just by looking at these four scenarios without spending any additional time. Let me know if you have any further questions.
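The claim that the rules allow exactly four scenarios can be checked mechanically. The brute-force sketch below is my own illustration, not part of the original explanation; it enumerates every assignment of G/K/H to the nine months and keeps those satisfying the stated rules.

```python
from itertools import permutations

def valid(s):
    # Rules (3) and (4): Grayson in August (6th month), Krona in October (8th).
    if s[5] != 'G' or s[7] != 'K':
        return False
    # Rule (2): never more than two consecutive months at the same mine.
    if 'GGG' in s or 'KKK' in s:
        return False
    # Rule (1): any two months at *different* mines must have at least one
    # headquarters month somewhere between them.
    for i in range(9):
        for j in range(i + 1, 9):
            if {s[i], s[j]} == {'G', 'K'} and 'H' not in s[i + 1:j]:
                return False
    return True

# All distinct orderings of three G's, three K's and three H's (1680 of them).
schedules = sorted({''.join(p) for p in permutations('GGGKKKHHH')})
solutions = [s for s in schedules if valid(s)]
```

Running this confirms that exactly the four scenarios listed above survive, and that October and November are K in all of them.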
Opening Range Breakout Trading Strategy Design and Implementation - Helping you Master EasyLanguage

The goal of this research is to find various set-ups and exit strategies that could be used for trading opening range breakouts. The time frames we will be looking at are the 10-min, 15-min, and 30-min opening ranges. We will focus our attention on the very liquid futures markets; in particular, we will analyze the S&P 500 futures. We encourage you as the reader to participate in the discussion and share your knowledge and/or ideas about opening range trading systems.

Our research is focused on a popular trading principle called the opening range breakout. We define that range as the first n minutes of a trading day. Isn't the electronic futures market trading almost 24 hours a day? Yes, but we use the NYSE opening time, 9:30 a.m. ET. The logic behind this is that when the NYSE opens we have the highest trading volume, especially during the first 15 minutes of trading. What makes the opening range an important trading concept is, as we have said, the volume, together with the fact that traders act in response to recent news. The fact that important economic news is often announced at 10:00 a.m. makes it even more significant: the trading crowd takes positions before the economic news is announced, not at the time it is announced. Some analysts even claim that about 35% of the time, the high or the low of the day occurs within the first 30 minutes of trading. Our analysis will show whether this argument holds true. The aim is to identify possible set-ups and exits that can help us improve the opening range breakout trading system. The set-ups tested include volume spikes, time, and volatility. The tested entry and exit strategies will be analyzed and explained.

How about price and volume? A very important indicator is volume, so we must analyze volume too and add it to our trading arsenal.
The reason you should use it is that high volume is an indication of high commitment to a position. The opposite is also true: if the price increases on low volume, the price is likely to retrace. For a better understanding of the trading volume on a particular day we will use a Stochastic Volume Index indicator, which compares today's volume with the volume of the last couple of trading days. The current value is expressed as a percentage between the lowest and highest that volume has been over the previous X number of bars. The values run from 0 (at the lowest) to 100 (at the highest). We calculate the value using the close of each bar. When done correctly it should look like this:

Looking at the sample graph above you can already see why the opening range is so important. The highest volume is between 9:30 a.m. and 10:30 a.m. Thereafter it dries up, and we have one more spike right before the close of the trading session, from 15:45 until 16:00. Every day shows the same pattern. To analyze the opening range breakout we have to understand the dynamics behind it. It is the time frame with high volatility and high volume, because traders have had time to analyze the previous trading day's price movements. If the trend direction of a trading day is determined by the first two bars of the opening range, wouldn't it be useful to compare the last trading day with the opening of the current trading day to see if we have an opening gap? The assumption is that a gap-up day should further increase our confidence when trading the opening range. A gap-up tells us traders are already going long and we had a lot of unfilled orders the previous day which are executed at the opening. This further underpins a bullish trend. Let's see if this assumption is correct. The winning percentage is low, but the Payout Ratio of 3.50 looks promising. Do you remember what we tried to prove?
"35% of the time, the high or the low of the day occurs within the first 30 minutes of trading and it dictates the direction of the trend for the rest of the day." Looking at our test results, we are quite close: our results indicate that about 30% of the time the opening range dictates the trend direction for the rest of the day. Obviously the results can be improved by fine-tuning our exit technique. For now we went with the exit rule "exit at market close" for the sake of simplicity. When we analyze the trading results by trading day of the month, we have the strongest gains during the first week of the month. Why? What we know from the world of investment banking (in Europe) is that insurance companies and funds have a cash inflow during the last days of the month, and they invest the new cash within the first five trading days of the subsequent month. This is just our interpretation. If you have another clue or a better explanation, we are curious to read your comments.

What if we test only the breakout without the gap-up day? The strategy rules are the same as above, with the exception that we do not need a gap-up day to enter the trade. The overall percentage comes down to 25.7%, which is not a big difference. The return percentage of 49.6% is lower, which can be explained by the fact that we entered 98 more trades than with the previous set-up, and the overall volatility on trading days with no gap is slightly lower. The Kelly Pct. is 3.03%, which is way too low. Adding the market opening gap as an additional set-up for our trade entry improves the Kelly Pct. to 7.90%. When money management rules are applied to the trading strategy, the final result of our alpha increases tremendously.
Yes, the combinations of set-ups can be endless, and the process of successful system design is tiring. This is why you should use the above steps when developing your trading strategy, and when trying to find a good set-up you should apply some common sense. We cannot test everything for you, but we will give you some input on how you can come up with some useful set-ups and what you should test:

• Volume during a specific time of the day
• Volatility during a specific time of the day
• Correlations with other markets – ETFs? (sector weighting of the S&P 500: what produces the biggest moves, which sectors have the largest impact on the S&P 500)
• Intermarket Analysis – T-Bond
• Speed of price change
• Previous trading day
• Trading day of the week
• Trading week of the month

As we have already shown, yesterday's data (the close of the trading day and the next day's open) has an effect on the trend direction and the volatility of the following trading day. Opening range breakout systems are influenced by yesterday's price moves: trading imbalances can occur, which influences the breakout range of the next trading day. We used yesterday's data simply by comparing it to the subsequent day to see if there is any price gap and to what extent it influences the trend direction. As mentioned before, it is an important piece of the puzzle. Big volume underpins the strength of the trend; it confirms the price action. We do not always have a clear pattern, which is why you should compare the current volume relative to the previous days' volume. Make use of our indicator and try to find the optimal set-up for the market you are planning to trade. For example, take the opening range breakout entry if the volume is 10% greater than the average volume of the last X bars. In addition, volume can also be used to identify support and resistance points.

Exit: ATR Ratchet

For the following strategy test we will implement a more sophisticated exit technique. The exit technique was originally developed for a fund managed by Tan LeBeau LLC. The exit strategy is based on the Average True Range.
The idea behind it is to pick a logical starting point and then add units of ATR to that starting point, producing a trailing stop that moves consistently higher and adapts to changes in volatility. The advantage of the ATR Ratchet is that it exits the position quickly and adjusts to changes in volatility, enabling us to lock in profit faster than with other trailing stop methods. An example of the strategy: after the trade has reached a profit target of at least one ATR or more, we pick a recent low point, such as the lowest low of the last 15 bars. Then we add some small unit of ATR (0.10 ATR, for example) to that low point for each bar in the trade. If we have been in the trade for 15 bars, we multiply 0.10 ATR by 15 and add the resulting 1.5 ATR to the starting point. After 30 bars in the trade we would be adding 3 ATRs to the lowest low of the last 15 bars. This exit should be used only after a minimum level of profitability is reached, since the stop moves very rapidly. The ATR Ratchet begins slowly and moves up steadily each bar, because we are adding one small unit of ATR for each bar in the trade. The starting point from which the stop is calculated (the 15-bar low in our example) also moves up as long as the market is headed in the right direction. So now we have a constantly increasing number of units of ATR being added to a constantly rising 15-bar low. Each time the 15-bar low increases, the ATR Ratchet moves higher, so we typically have a small but steady increase in the stop, followed by much larger jumps as the 15-bar low moves higher. It is important to emphasize that we are adding to our acceleration on each bar, on top of an upward-moving starting point, which produces a unique dual-acceleration feature for this exit: a rising stop that is accelerated by both time and price. When the trade makes a good profit run, the ATR Ratchet moves up very fast.
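The mechanics described above can be condensed into a few lines. This is a minimal sketch under the description given here (starting point = lowest low of the trailing window, plus a fixed ATR fraction per bar in the trade); the function and parameter names are my own, not EasyLanguage code from the article.

```python
def atr_ratchet_stop(lows, atr, bars_in_trade, lookback=15, step=0.10):
    """Trailing stop for a long trade:
    (lowest low of the last `lookback` bars) + `step` ATR accumulated
    for every bar spent in the trade."""
    start = min(lows[-lookback:])          # rising starting point
    return start + step * atr * bars_in_trade
```

Recomputing this on every bar gives the dual acceleration described in the text: the `start` term rises with price, while the `step * atr * bars_in_trade` term rises with time.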
For example, 5% or 10% of one 15-bar average true range, multiplied by the number of bars the trade has been open, will move the stop up much faster than you might expect. A feature of the ATR Ratchet is that you can start it at any point: a support level, the trade entry, or a low point such as the lowest low of the last X bars. If you want to keep things simple, start the ATR Ratchet at something like 2 ATRs below the entry price, which makes the starting point fixed; in that case the stop would move up only as a result of accumulating additional time. As a rule of thumb, this exit technique should only be used after reaching a certain amount of profit. The length we use to average the ranges is crucial: if we want the ATR to be highly responsive in an intraday trading strategy, we should use a short averaging length. There are many possible variations here. The best approach is to code the ATR Ratchet and plot it on a chart to get a feeling for its behavior; this will enable you to find the best possible variables.

Parameter Optimization

Before we move on to strategy design, we want to optimize the indicators first. A caveat here: avoid over-optimization. When first looking at a solid set-up, we take the values that give solid results with no jumps in the data. For example, after optimizing our Stochastic Volume Index indicator we get the following hypothetical results. Now, when we look at the results, which values for our indicator should we choose? No.10, where for the given set-up we get +110% in gains?

No. | Index Level | LookBack | Gain
 1  |  20 |  10 | -10%
 2  |  30 |  20 | +30%
 3  |  40 |  30 | +40%
 4  |  50 |  40 | +50%
 5  |  60 |  50 | +47%
 6  |  70 |  60 | +45%
 7  |  80 |  70 | +53%
 8  |  90 |  80 | +23%
 9  | 100 |  90 | -15%
10  | 110 |  94 | +110%
11  | 120 | 100 | +50%
12  | 130 | 100 | -80%

NO! No.10 is the wrong answer. A small change in the Index Level and LookBack period gives us a large deviation in gains, ranging from -15% to +110%. This can be random, and it pushes us right in the direction we do not want to go. Now take a look again.
What about the results from No.3 to No.7? There we have a small deviation, ranging from +40% to +53%, with no sudden jumps in the data set. We choose a value somewhere in between, which means No.5. Is this the best possible answer? Again, finding the single best one is over-optimization. Our goal is to find a solid and consistent set-up with consistent results: we want to find the right cluster of possible variables, not an exact variable. Sometimes it is not as easy as in the table above. In those cases, start simple: just look at the chart and get a feeling for how the indicator behaves at different levels. Trading is not the exact science some traders want it to be, and sometimes you need to follow your intuition and trust your experience. Here are the results after optimizing our Stochastic Volume Index indicator. As you can see, it is not that easy, due to the larger amount of data we have. We use an Index value between 0 and 20 for the breakout and a lookback period of 86 bars (10-min bar period). Our next step is fine-tuning the strategy and using the ATR Ratchet for the exit technique.

Strategy Optimization

Before we move on to actual backtesting and strategy optimization, let's summarize our trading rules so far. We are looking for a breakout during the opening range, a volume index level between 0 and 20, and a sudden price jump above the opening range for a bullish set-up. We will also test the strategy for long entry on gap-up days. After defining those strategy rules we can implement the ATR Ratchet and further improve our results.

Final Strategy Results Long Only:

Further Research

There are many directions that can be taken for further development of this trading concept. Optimizing parameters such as the ATR Ratchet is definitely an exit technique to be further studied, and it leaves room for improvement.
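The "pick a stable cluster, not the single best value" rule from the Parameter Optimization section can be mechanized. The sketch below is one possible heuristic of my own, not the article's method: it prefers the parameter whose neighborhood shows the smallest spread of gains, using the hypothetical optimization table as input.

```python
def pick_stable_param(results, k=1):
    """results: list of (param_value, gain) sorted by param_value.
    Return the parameter whose k-neighborhood has the smallest spread of
    gains, i.e. a flat cluster rather than an isolated spike."""
    best_param, best_spread = None, float('inf')
    for i in range(k, len(results) - k):
        gains = [g for _, g in results[i - k:i + k + 1]]
        spread = max(gains) - min(gains)
        if spread < best_spread:
            best_param, best_spread = results[i][0], spread
    return best_param
```

Applied to the hypothetical table, this heuristic lands on the Index Level of row No.5, in line with the cluster argument, rather than the +110% outlier of row No.10.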
It also shows that the exit of a trading strategy is more important than the entry rules, as our test results including the ATR Ratchet have shown. The bar period chosen for the opening range breakout and the trailing exit methods used can make a large difference. Further optimizations can be done by selecting the best entries and exits for certain days of the week and levels of volatility (VIX).

— by Nehemia Markovits from the website Milton Financial Market Research Institute.

Check out "market profile."

I am a bit uncertain of how you are defining the Stochastic Volume Index Indicator. You could calculate it as: Raw Stochastic Volume Index = (MyVolume – LowestV) / (HighestV – LowestV), where MyVolume is the volume of the current 10-minute bar, LowestV is the lowest volume of the last x bars, and HighestV is the highest volume of the last x bars. The bars are 10-minute price bars. Then you could get (Stochastic Volume Index Indicator) = Smoothed Raw Stochastic Volume Index. Is that what you are suggesting?

Hmmm.. $68k drawdown with average trade $1300 = not tradeable.

(Current Close – Lowest Low) / (Highest High – Lowest Low) * 100, where Lowest Low = the lowest low for the user-defined look-back period and Highest High = the highest high for the user-defined look-back period. Make the look-back period adjustable depending on current levels of volatility (hv, iv).

how can i get the code for this strategy
• I don't think it is available. But I may be wrong. Can anyone confirm?
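The Stochastic Volume Index formula discussed in the article and its comments can be sketched in a few lines of pandas. This is my own minimal reconstruction; the function and parameter names are assumptions, and the choice of a neutral 50 for flat windows is mine.

```python
import numpy as np
import pandas as pd

def stochastic_volume_index(volume: pd.Series, lookback: int = 86) -> pd.Series:
    """Scale each bar's volume to 0-100 relative to the lowest/highest
    volume over the trailing `lookback` bars (current bar included)."""
    lo = volume.rolling(lookback, min_periods=1).min()
    hi = volume.rolling(lookback, min_periods=1).max()
    rng = (hi - lo).replace(0, np.nan)            # avoid division by zero
    return (100 * (volume - lo) / rng).fillna(50.0)  # flat window -> neutral 50
```

The 86-bar lookback matches the value the article settled on for 10-minute bars; smoothing the raw index, as one commenter suggests, could be added with a short moving average on the result.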
Can you impute categorical variables?
In the case of categorical variables, mode imputation distorts the relation of the most frequent label with other variables within the dataset, and may lead to an over-representation of the most frequent label if the number of missing values is large.

Which is an appropriate way of imputing a categorical variable?
One approach to imputing categorical features is to replace missing values with the most common class. You can do this by taking the index of the most common feature given by Pandas' value_counts.

How do you fill missing values for categorical variables?
There are various ways to handle missing values of categorical variables:
1. Ignore observations with missing values if we are dealing with large data sets and few records have missing values.
2. Ignore the variable, if it is not significant.
3. Develop a model to predict the missing values.
4. Treat missing data as just another category.

How do you impute categorical columns?
Step 1: Find which category occurs most often in each column using mode().
Step 2: Replace all NaN values in that column with that category.
Step 3: Drop the original columns and keep the newly imputed ones.

How do you impute missing categorical data in R?
How to Impute Missing Values in R:
df <- tibble(id = seq(1, 10),
             ColumnA = c(10, 9, 8, 7, NA, NA, 20, 15, 12, NA),
             ColumnB = factor(c("A","B","A","A","","B","A","B","","A")),
             ColumnC = factor(c("","BB","CC","BB","BB","CC","AA","BB","","AA")),
             ColumnD = c(NA, 20, 18, 22, 18, 17, 19, NA, 17, 23))

Why is mode imputation a problem for categorical variables?
It distorts the distribution of the dataset. In the case of categorical variables, mode imputation distorts the relation of the most frequent label with other variables within the dataset, and may lead to an over-representation of the most frequent label if the number of missing values is large.

Can KNN be used for categorical variables?
KNN is an algorithm that is useful for matching a point with its closest k neighbors in a multi-dimensional space. It can be used for data that are continuous, discrete, ordinal and categorical, which makes it particularly useful for dealing with all kinds of missing data.

How do you encode a categorical variable in Python?
Another approach is to encode categorical values with a technique called "label encoding", which allows you to convert each value in a column to a number. Numerical labels are always between 0 and n_categories-1. You can do label encoding via the .cat accessor.

How do you handle missing values for categorical variables in R?
Dealing with missing data using R:
1. colSums(is.na(data_frame))
2. sum(is.na(data_frame$column_name))
3. Missing values can be treated using the following methods:
4. Mean/Mode/Median imputation: imputation is a method to fill in the missing values with estimated ones.

How do I fill missing categorical data in pandas?
You can use df = df.fillna(df['Label'].value_counts().index[0]) to fill NaNs with the most frequent value from one column.

What is a categorical imputer?
The CategoricalImputer() replaces missing data in categorical variables with an arbitrary value or with the most frequent category. The CategoricalVariableImputer() imputes by default only categorical variables (type 'object' or 'categorical').

How do I assess missing data in R?
In R, missing values are coded by the symbol NA. To identify missings in your dataset, the function is is.na(). When you import a dataset from other statistical applications, the missing values might be coded with a number, for example 99. In order to let R know that this is a missing value, you need to recode it.

How is mode imputation used in a categorical variable?
Mode imputation (or mode substitution) replaces missing values of a categorical variable with the mode of the non-missing cases of that variable.

How to encode and impute categorical features fast?
Based on the information we have, here is our situation: categorical data with text that needs to be encoded: sex, embarked, class, who, adult_male, embark_town, alive, alone, deck1, and class1; categorical data that has null values: age, embarked, embark_town, deck1. What should you watch out for when KNN imputes categorical features? The imputed values need to be rounded, because KNN will produce floats. This means that the fare column will be rounded as well, so be sure to leave out any features you do not want rounded. The process imputes all data (including continuous data), so take care of any continuous nulls upfront. How is mode imputation used to impute missing data? As above: it replaces missing values of a categorical variable with the mode of the non-missing cases of that variable, and it is quite easy to do.
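The workflow described in this section — fill missing categories with the mode, label-encode, then let KNN fill remaining numeric gaps and round the result — can be sketched with pandas and scikit-learn. The column names and values below are invented for illustration, and KNNImputer is one possible implementation of the KNN approach mentioned above:

```python
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

# Toy frame; column names and values are made up for illustration.
df = pd.DataFrame({
    "embark_town": ["Southampton", "Cherbourg", "Southampton", None, "Southampton"],
    "fare": [7.25, 71.28, 8.05, 8.46, np.nan],
})

# 1) Mode imputation: fill missing categories with the most frequent label.
#    value_counts() sorts by frequency, so index[0] is the mode.
most_common = df["embark_town"].value_counts().index[0]
df["embark_town"] = df["embark_town"].fillna(most_common)

# 2) Label encoding: map each category to an integer code (0..n_categories-1).
df["town_code"] = df["embark_town"].astype("category").cat.codes

# 3) KNN imputation for the remaining numeric nulls. KNN averages neighbor
#    values and therefore produces floats, so round any encoded-category
#    columns back to valid integer codes afterwards.
imputer = KNNImputer(n_neighbors=2)
imputed = imputer.fit_transform(df[["town_code", "fare"]])
imputed[:, 0] = np.round(imputed[:, 0])

print(most_common)               # "Southampton"
print(df["town_code"].tolist())  # integer codes, one per row
```

Note that step 3 imputes every column it is given, so any column you do not want touched (or rounded) should be left out of the matrix passed to `fit_transform`.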
Baccarat formula 1324: good stuff for people with little capital

For those who do not want to spend a lot of money on gambling, or who want to enjoy playing baccarat for a long time, there is no need to waste time hunting for baccarat formulas. Just read this article to the end and you will find the answer, because the baccarat formula presented here has been vouched for by gamblers around the world: use it and you have a better chance to win and to profit. How to use it, and when, is explained below.

Baccarat formula 1324: is this formula solid or not? Just seeing the number 1324, many people may wonder what this formula is and whether it was made up at random. Anyone who has played baccarat for a long time knows that playing with a money-management formula gives very different results from playing without one, so players have come up with many formulas. One of the most popular formulas in the world is the 1324 baccarat formula; some would call it a strategy, but it is best described as a staking (money-walk) formula. The 1324 formula has been in use since 2006 and has remained popular to this day, more than ten years later, without needing any modification. It is designed for bets that are close to even-money, like a coin toss, which matches baccarat: the banker wins with probability 0.458597 and the player wins with probability 0.446247. The formula comes with an iron rule: never bet on the tie, since the tie occurs with probability only 0.095156, and tie bets put the player at a large disadvantage.
Across most online casinos, bets on the banker's side carry a house disadvantage of only 1.0579%, and bets on the player's side a disadvantage of only 1.2351%. But wait: even though the player keeps well over 95% of the expected value, this is no guarantee that in 100 bets we will win 90 times; otherwise the casinos would already be bankrupt. Anyone who has studied the house edge knows that a favorable run is only effective over a short period; over a long time, the casino gets its money back in the end. Gambling statistics play out over thousands of hands. Therefore, we should also look at the statistics of the card layout at the baccarat table we intend to play, in order to make the right decision about which table to choose. For all these reasons, the 1324 formula was developed as a way to win at the online casino.

How to use the baccarat 1324 formula? This money-walk formula is not complicated. No matter how long we play, we divide the betting into a cycle of 4 rounds (4 steps), with the following betting principle:
• Round 1: bet 1 unit.
• If Round 1 wins, in Round 2 bet 3 units.
• If Round 2 wins, in Round 3 bet 2 units.
• If Round 3 also wins, in Round 4, the last round of the cycle, bet 4 units.
• In any round, if a loss occurs, go back and start Round 1 again.
• If you are lucky enough to win all 4 rounds in a row, go back and start Round 1 again; at this point you have collected a profit of 10 units (winning banker bets pay only 0.95 per unit because of commission).
To make the formula easier to understand, here is an example.
Round 1: bet 1 unit (assume 1 investment unit equals 10 baht).
• If you win, you receive a profit of 10 baht from this round.
• Retained profit after winning: 0 baht, because it is the first round (the 10 baht won is staked in the next round).
• If you lose, you lose only your 10 baht stake.
Round 2: bet 3 units. In this round you stake 2 more units of capital plus the 1 unit of profit from Round 1, for a total of 3 units, or 30 baht.
• If you win, you receive a profit of 30 baht from this round.
• Retained profit after winning: 40 baht (the profit from Rounds 1 and 2).
• If you lose, you lose 2 units of capital plus the 1 unit of profit from Round 1 that was staked in this round, which means no profit is left.
Round 3: bet 2 units. In this round you play with profit only, because you now have 40 baht of profit in hand.
• If you win, you receive a profit of 20 baht from this round.
• Retained profit after winning: 60 baht.
• If you lose, you lose only the 20 baht of profit used as the stake; none of your own money is lost.
• Retained profit after a losing bet: 20 baht.
Round 4: bet 4 units. After three lucky hands, you again stake only profit.
• If you win, you receive a profit of 40 baht from this round.
• Retained profit after winning: 100 baht.
• If you lose, you lose 40 baht of profit; as before, none of your own money.
• Retained profit after a losing bet: 20 baht.
How much is the risk with the 1324 baccarat formula? Betting always carries risk, but the 1324 money-walk formula keeps that risk very low. From the example, only in Rounds 1 and 2 do you risk your own money, and the chance of winning 2 consecutive hands is about 25%, which is reasonable odds for gambling. Once you get through to Round 3, you keep some profit no matter what: whether you lose in Round 3 or Round 4, you still profit at least 2 units.
In addition, anyone who wants to go all-in on the final hand can use the 1326 formula instead. The method is the same as 1324; the only difference is that in the final round you bet 6 units. If you lose that hand, no profit is left, but if you win, the profit is that much larger.

How much money do you need to use this money-walk formula? One thing novice gamblers always worry about is how much capital is needed to make it worthwhile. For the 1324 baccarat money-walk formula there is no minimum: just divide your available funds into investment units of equal value. For example, with 1,000 baht at a table with a 50 baht minimum bet, you might set 1 unit = 50 baht to match the table minimum, which gives you 20 investment units to bet with according to the formula. If you are interested, you can register with us.
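The staking cycle described above can be turned into a short simulation. This is an illustrative sketch only: it assumes an even-money game with no commission (real baccarat pays 0.95 on winning banker bets and the odds are slightly below 50%), and the 10-baht unit matches the article's example.

```python
def play_1324(outcomes, unit=10):
    """Run the 1-3-2-4 staking cycle over a sequence of hand outcomes.

    `outcomes` is an iterable of booleans (True = win). Bets 1, 3, 2, 4
    units on consecutive wins; a loss, or completing the cycle, resets
    to the 1-unit bet. Returns net profit in currency units.
    """
    stakes = [1, 3, 2, 4]
    step = 0
    profit = 0
    for won in outcomes:
        bet = stakes[step] * unit
        if won:
            profit += bet
            step = (step + 1) % 4   # advance; reset after the 4th win
        else:
            profit -= bet
            step = 0                # any loss restarts the cycle
    return profit

# Four straight wins bank the full 10-unit cycle profit:
print(play_1324([True, True, True, True]))   # 100 (baht, at 10 baht/unit)
# A loss on the 3rd hand still leaves 2 units of profit:
print(play_1324([True, True, False]))        # 20
```

A full four-win cycle nets 1 + 3 + 2 + 4 = 10 units, and a loss in the 3rd or 4th round still leaves 2 units of profit, which is exactly the property the article emphasizes.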
Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024) Cite as Arnab Chatterjee, Amin Coja-Oghlan, Noela Müller, Connor Riddlesden, Maurice Rolvien, Pavel Zakharov, and Haodong Zhu. The Number of Random 2-SAT Solutions Is Asymptotically Log-Normal. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 39:1-39:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024) author = {Chatterjee, Arnab and Coja-Oghlan, Amin and M\"{u}ller, Noela and Riddlesden, Connor and Rolvien, Maurice and Zakharov, Pavel and Zhu, Haodong}, title = {{The Number of Random 2-SAT Solutions Is Asymptotically Log-Normal}}, booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)}, pages = {39:1--39:15}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-348-5}, ISSN = {1868-8969}, year = {2024}, volume = {317}, editor = {Kumar, Amit and Ron-Zewi, Noga}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.39}, URN = {urn:nbn:de:0030-drops-210329}, doi = {10.4230/LIPIcs.APPROX/RANDOM.2024.39}, annote = {Keywords: satisfiability problem, 2-SAT, random satisfiability, central limit theorem}
Journal: ONDOKUZ MAYIS ÜNİVERSİTESİ EĞİTİM FAKÜLTESİ DERGİSİ, Cilt: 2013/2, Sayı: 32. Added: 2.01.2014.

Article: İLKÖĞRETİM (6-8) MATEMATİK DERSİ ÖĞRETİM PROGRAMINDAKİ YENİ ALT ÖĞRENME ALANLARINA İLİŞKİN ÖĞRETMEN GÖRÜŞLERİ (MATHEMATICS TEACHERS' OPINIONS ABOUT NEW SUB LEARNING DOMAINS IN ELEMENTARY MATHEMATICS (6-8) CURRICULUM)

Article summary (translated from Turkish): Developments in the field of education make it necessary to revise curricula from time to time. The elementary mathematics curriculum was renewed in 2005 with the aim of raising individuals who value mathematics, can think mathematically, can use the language of mathematics, and can solve problems well. During this renewal, some new topics and concepts were added to the content. The purpose of this study is to determine mathematics teachers' opinions about the appropriateness of including the new topics that entered the elementary (grades 6-8) mathematics curriculum, and about the teachers' pedagogical content knowledge competencies on these topics. A descriptive survey model was used. The study group consists of 27 mathematics teachers working in primary schools in the central district of Tekirdağ province. Data were collected with a questionnaire prepared by the researchers, consisting of 9 open-ended questions on the new topics and 17 closed-ended questions on competencies. Percentages and frequencies were used in analyzing the data. According to the results, teachers generally expressed positive opinions about the new topics and found themselves competent in them. Based on the findings, suggestions were developed for teaching the new topics in the mathematics curriculum.

Purpose: There are mathematical behaviors at all levels, from preschool education to higher education programs. These behaviors appear in the mathematics curriculum as objectives.
The elementary mathematics (grades 6-8) curriculum was last developed in 2005 and applied gradually, starting from the 6th grade, in the 2006-2007 academic year. A thematic approach was taken in organizing the content of the new mathematics curriculum, and learning domains and sub learning domains were determined. In the program development studies, some subjects were removed and some new subjects were added. Patterns and relations in integers, translation, tessellations, structural drawings, transformation geometry, fractals, geometric movements, histograms, kinds of probability, standard deviation, combination, perspective drawings, intersections of objects, polyhedral objects, and symmetries of geometric objects are among the new subjects in the mathematics (grades 6-8) curriculum. Patterns and relations in integers and special number patterns are in the Patterns and Relations learning domain in algebra; translation, reflection, rotation, geometric movements and symmetries of geometric objects, tessellations and fractals, structural drawings, intersections of objects, and polyhedral objects are in the Geometry learning domain; histograms, kinds of probability, standard deviation, and combination are in the Probability and Statistics learning domain. The purpose of this study was to determine mathematics teachers' opinions and qualifications regarding the new sub learning domains in the elementary mathematics (grades 6-8) curriculum. The descriptive survey method was used in the study. The study group consists of 27 mathematics teachers from primary schools in Tekirdağ. Data were collected with a questionnaire developed by the researchers, containing 9 open-ended and 17 closed-ended questions. The open-ended questions were used to determine whether, in the teachers' view, the new sub learning domains were suitable for these grades; the closed-ended questions were used to determine the teachers' views of their own qualifications in the new sub learning domains.
The mathematics teachers were asked to indicate whether they were qualified in the new sub learning domains; four choices were offered: "Completely qualified", "Qualified", "Partially qualified", and "None qualified". Frequencies and percentages were used to analyze the data. Results: This study has shown that mathematics teachers generally have positive opinions about the new sub learning domains in the elementary mathematics (grades 6-8) curriculum. However, some teachers thought that standard deviation, in the Measures of Central Tendency sub learning domain, and perspective drawings, in the Projection sub learning domain, were too difficult for these grades. According to the results of the study, mathematics teachers generally considered themselves qualified in the new sub learning domains, but some felt partially qualified or not qualified for structural drawings, polyhedral objects, perspective drawings, standard deviation, special number patterns, fractals, and similar topics. Discussion: The results of this study indicate that, according to the mathematics teachers, the new sub learning domains in the elementary mathematics curriculum are generally appropriate for students. However, there are some topics about which a few teachers hold negative opinions, with stated reasons, while the majority of teachers are positive. For example, the difference between a histogram and a bar graph has not yet been understood by some mathematics teachers. Another example is that the topics of the patterns and tessellations sub learning domain are found unnecessary by some teachers. In general, teachers with more than twenty years of experience tend to have negative opinions about the new curriculum. One possible reason is that these teachers, having taught for many years under a traditional understanding of education, have difficulty adapting to a new approach.
Another difficulty may be the use of tools and technologies, since the new topics in the curriculum require visualization. The results show that the majority of the teachers feel "completely qualified" or "qualified", while a small portion feel "partially qualified". Nevertheless, some teachers in the study group found themselves unqualified on topics such as structural drawings, polyhedral objects, perspective drawings, standard deviation calculations, special number patterns, and fractals. Conclusion: This study has shown that teachers generally have positive opinions about the new sub learning domains in the elementary mathematics (grades 6-8) curriculum and think that they have the pedagogical content knowledge to teach these domains. The following suggestions can be offered to the Ministry of Education and to researchers: the Ministry of Education should organize in-service training on understanding and teaching the new sub learning domains in the mathematics curriculum; in addition, mathematics course hours could be increased, and mathematics classrooms equipped with all mathematics materials could be arranged; the contents of courses in education faculties should be reviewed with respect to teaching the new subjects; and mathematics teachers' opinions and qualifications regarding the other new subjects in the mathematics curricula of other grades should be determined.
The Probability Workbook Category Archives: Conditioning An ant crawls along a coordinate grid. The ant starts at \((0,0)\). At each step, the ant either moves up one unit (with probability 1/2) or to the right 1 unit (with probability 1/2). After 5 steps the ant has • a 5/32 chance of being at the coordinate \((4,1)\), • a 10/32 chance of being at the coordinate \((3,2)\), and • a 17/32 chance of being at one of the coordinates \((2,3), (1,4), (5,0), (0,5)\) What is the probability that the ant is at position \((4,2)\) after 6 steps? Let \(A_n\) be the event that in \(n\) flips of a fair coin, there are never 2 consecutive tails. Suppose we know the following probabilities. 1. \(\mathbf{P}(A_{19})\approx 0.021\) 2. \(\mathbf{P}(A_{20})\approx 0.017\) Evaluate \(\mathbf{P}(A_{21})\) About 60% of the world’s population has brown eyes. About 20% of the world’s population has brown hair. Given that a person has brown eyes, they have a 10% chance of also having brown hair. Given that a randomly selected person does not have brown eyes what is the probability that they also do not have brown hair? Over his career, Shaquille O’Neal made about 53% of his free throws. Assume his probability of making a single free throw is 53%. Suppose Shaq shot a round of 20 free throws and you’re told he made 15 of them. 1. What is the likelihood he made the first free throw, given that he made 15? 2. What is the likelihood he made at least 1 out of his first 5 free throws, given that he made 15? You have a pair of fair dice and a pair of loaded dice. But you forgot which pair is which. You do remember that when you bought the loaded dice, the company that makes them claimed the dice would land on a sum of 7 approximately 1/3 of the time. 1. You choose one of the pairs at random and roll it once. You get a sum of 7. What is the likelihood that you picked the loaded dice? 2. You choose one of the pairs at random and roll the pair three times. You get exactly one sum of 7. 
What is the likelihood that you picked the loaded dice? Each week you get multiple attempts to take a two-question quiz. For each attempt, two questions are pulled at random from a bank of 100 questions. For a single attempt, the two questions are distinct. 1. If you attempt the quiz 5 times, what is the probability that within those 5 attempts, you’ve seen at least one question two or more times? 2. How many times do you need to attempt the quiz to have a greater than 50% chance of seeing at least one question two or more times? Assume that each monkey has a strong preference between red, green, and blue M&M’s. Further, assume that the possible orderings of the preferences are equally distributed in the population. That is to say that each of the 6 possible orderings (R>G>B, R>B>G, B>R>G, B>G>R, G>B>R, or G>R>B) is found with equal frequency in the population. Lastly, assume that when presented with two M&M’s of different colors, they always eat the M&M with the color they prefer. In an experiment, a random monkey is chosen from the population and presented with a red and a green M&M. In the first round, the monkey eats one of them based on its personal preference between the colors. The remaining M&M is left on the table and a blue M&M is added, so that there are again two M&M’s on the table. In the second round, the monkey again chooses to eat one of the M&M’s based on its color preference. 1. What is the chance that the red M&M is not eaten in the first round? 2. What is the chance that the green M&M is not eaten in the first round? 3. What is the chance that the blue M&M is not eaten in the second round? [Mattingly 2022] Suppose that I have two coins in my pocket: one ordinary, fair coin and one coin which has heads on both sides. I pick a random coin out of my pocket, throw it, and it comes up heads. 1. What is the probability that I have thrown the fair coin ? 2.
If I throw the same coin again, and heads comes up again, what is the probability that I have thrown the fair coin ? 3. If, instead of throwing the same coin again, I reach into my pocket and throw the second coin, and it comes up heads, what is the chance that the first coin is the fair coin ? [Modified version of Meester, ex 1.7.35] Let \(A\) and \(B\) be two events with positive probability. When does \(\mathbf{P}(A|B)=\mathbf{P}(B|A)\) ? Suppose that there are two boxes, labeled odd and even. The odd box contains three balls numbered 1, 3, 5 and the even box contains two balls labeled 2, 4. One of the boxes is picked randomly by tossing a fair coin. 1. What is the probability that a 3 is chosen ? 2. What is the probability that a number less than or equal to 2 is chosen ? 3. The above procedure produces a distribution on \(\{1,2,3,4,5\}\); how does it compare to picking a number uniformly (with equal probability) ? [Pitman p. 37, example 5] At the London station there are three pay phones which accept 20p coins. One never works, another always works, while the third works with probability 1/2. On my way to London for the day, I wish to identify the reliable phone, so that I can use it on my return. The station is empty and I have just three 20p coins. I try one phone and it doesn’t work. I try another twice in succession and it works both times. What is the probability that this second phone is the reliable one ? [Suhov and Kelbert, p. 10, problem 1.9] In a certain population of people, 5% have a disease. Bob’s roadside clinic uses a test for the disease which has a 97% chance of (correctly) returning positive if one has the disease and a 25% chance of (incorrectly) returning positive if one doesn’t have the disease. If a random person is given the test, what is the chance that the result is positive ? Now let \(\alpha\) be the chance the test returns a positive if one doesn’t have the disease. (Leave the chance that the test correctly returns a positive if one has the disease at 97%.)
For what value of \(\alpha\) is the chance the test is correct equal to 5% for a randomly chosen person ? Consider the following model: \(X_1,\dots,X_n \stackrel{iid}{\sim} f(x), \quad Y_i = \theta X_i + \varepsilon_i, \quad \varepsilon_i \stackrel{iid}{\sim} \mbox{N}(0,\sigma^2).\) 1. Compute \({\mathbf E}(Y \mid X)\) 2. Compute \({\mathbf E}(\varepsilon \mid X)\) 3. Compute \({\mathbf E}(\varepsilon)\) 4. Show \(\theta = \frac{{\mathbf E}(XY)}{{\mathbf E}(X^2)}\) Let \(X\) be the number of patients in a clinical trial with a successful outcome. Let \(P\) be the probability of success for an individual patient. We assume before the trial begins that \(P\) is uniform on \([0,1]\). Compute 1. \(f(P \mid X)\) 2. \({\mathbf E}(P \mid X)\) 3. \({\mathbf{Var}}(P \mid X)\) An urn contains 1 black and 2 white balls. One ball is drawn at random and its color noted. The ball is replaced in the urn, together with an additional ball of its color. There are now four balls in the urn. Again, one ball is drawn at random from the urn, then replaced along with an additional ball of its color. The process continues in this way. 1. Let \(B_n\) be the number of black balls in the urn just before the \(n\)th ball is drawn. (Thus \(B_1 = 1\).) For \(n \geq 1\), find \(\mathbf{E}(B_{n+1} \mid B_{n})\). 2. For \(n \geq 1\), find \(\mathbf{E}(B_{n})\). [Hint: use induction based on the previous answer and the fact that \(\mathbf{E}(B_1) = 1\).] 3. For \(n \geq 1\), what is the expected proportion of black balls in the urn just before the \(n\)th ball is drawn ? [From Pitman p. 408, #6] Consider the following hierarchical random variable: 1. \(\lambda \sim \mbox{Geometric}(p)\) 2. \(Y \mid \lambda \sim \mbox{Poisson}(\lambda)\) Compute \(\mathbf{E}(Y)\). Consider the following mixture distribution. 1. Draw \(X \sim \mbox{Ber}(p=.3)\) 2. If \(X=1\) then \(Y \sim \mbox{Geometric}(p_1)\) 3. If \(X=0\) then \(Y \sim \mbox{Bin}(n,p_2)\) What is \(\mathbf{E}(Y)\) ?
(*) What is \(\mathbf{E}(Y \mid X)\) ? Stark’s Pond contains 10 trout and 5 bluegill fish. Kyle catches a random number of fish (call the number \(X\)), where \(X \sim \text{Unif}(\{1,\ldots,4\})\). Once caught, that fish is removed from the pond and cannot be caught again. Each new fish comes uniformly from the remaining fish. (a) What is the chance that Kyle catches all trout? (b) Suppose all the fish that Kyle caught were trout. Given this information, what is the probability that he caught exactly 5 fish? [Author Mark Huber. Licensed under Creative Commons.] Consider two draws, with replacement, from a box containing 1 red ball and 3 blue balls. Let \(X\) be the number of red balls. Let \(Y\) be 1 if the two balls are the same color and 0 otherwise. Let \(Z_i\) be the random variable which returns 1 if the \(i\)-th ball is red. 1. What is the sample space ? 2. Write down the algebra of all events on this sample space. 3. What is the algebra of events generated by \(X\) ? 4. What is the algebra of events generated by \(Y\) ? 5. What is the algebra of events generated by \(Z_1\) ? 6. What is the algebra of events generated by \(Z_2\) ? 7. Which random variables are determined by another of the random variables ? Why ? How is this reflected in the algebras ? 8. (*) Which pair of random variables are independent ? How is this reflected in the algebras ? The following is a hierarchical model. 1. \(\lambda \sim \mbox{Uniform}[1,2]\) 2. \(Y \mid \lambda \sim \mbox{Poisson}(\lambda)\) What is \(\mathbf{E}(Y)\) ? A digital communications system consists of a transmitter and a receiver. During each short transmission interval the transmitter sends a signal which is to be interpreted as a zero, or it sends a different signal which is to be interpreted as a one. At the end of each interval, the receiver makes its best guess at what was transmitted.
Consider the events: \(T_0 = \{\mbox{Transmitter sends } 0\}, \quad T_1 = \{\mbox{Transmitter sends } 1\}\) and \(R_0 = \{\mbox{Receiver perceives } 0\}, \quad R_1 = \{\mbox{Receiver perceives } 1\}\). Assume that \(\mathbf{P}(R_0 \mid T_0)=.99\), \(\mathbf{P}(R_1 \mid T_1)=.98\) and \(\mathbf{P}(T_1)=.5\). 1. Compute the probability of a transmission error given \(R_1\). 2. Compute the overall probability of a transmission error. 3. Repeat 1. and 2. for \(\mathbf{P}(T_1)=.8\). [Pitman page 54, problem 4] A club contains 100 members; 51 are Democrats (or caucus with Democrats) and 49 are Republicans. A committee of 10 members is chosen at random. 1. Compute the probability of \(n\) Republicans on the committee, for \(n=1,\dots,10\). 2. Find the probability that the committee members are all of the same party. 3. Suppose you didn’t know how many Democrats there were in the club. You observe that the committee of \(10\) members consists of \(k=7\) Democrats. Compute \(\mathbf{P}(M \mid k=7)\), where \(M\) is the number of Democrats in the club. An insurance company has 50% urban and 50% rural customers. Every year each urban customer has an accident with probability \(\mu\) and each rural customer has an accident with probability \(\lambda\). Assume that the chance of an accident is independent from year to year and from customer to customer. This is another way to say that, conditioned on being urban or rural, the chance of having an accident each year is independent. A customer is randomly chosen. Let \(A_n\) be the event that this customer has an accident in year \(n\). Let \(U\) denote the event that this customer is urban and \(R\) the event that the customer is rural. 1. Find \(\mathbf{P}(A_2|A_1)\). 2. Are \(A_1\) and \(A_2\) independent in general ? Are there any conditions under which they are, if not in general ? 3. Show that \(\mathbf{P}(A_2|A_1) \geq \mathbf{P}(A_2)\).
To answer this question it is useful to know that for any positive \(a\) and \(b\), one has \((a+b)^2 < 2(a^2 +b^2)\) as long as \(a \neq b\). In the case \(a = b\), one has of course \((a+b)^2 = 2(a^2 +b^2)\). To prove this inequality, first show that \((a+b)^2 + (a-b)^2 = 2(a^2 +b^2)\) and then use the fact that \((a-b)^2 > 0\). 4. Find the probability that a driver has an accident in the 3rd year given that they had one in the 1st and 2nd years. 5. Find the probability that a driver has an accident in the \(n\)-th year given that they had one in all of the previous years. What is the limit as \(n \rightarrow \infty\) ? 6. Find the probability that a driver is an urban driver given that they had an accident in two successive years. Mathematicians and politicians throughout history have dueled. Alexander Hamilton and Aaron Burr dueled. The French mathematician Evariste Galois died in a duel. Consider two individuals, (H) and (B), dueling. In each round they simultaneously shoot at each other, and the probability of a fatal shot is \(0 < p < 1\). 1) What is the probability that both are fatally injured in the same round ? 2) What is the probability that (B) will be fatally injured before (H) ?
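The duel in the last problem lends itself to a Monte Carlo sanity check. This is only an illustrative sketch, not a solution from the workbook, and the value p = 0.3 is an arbitrary choice:

```python
import random

def duel(p, rng):
    """One duel: both shooters fire simultaneously each round, each shot
    fatal with probability p. Returns 'both', 'B', or 'H' for who falls
    first ('B' means B is fatally injured before H)."""
    while True:
        h_hit = rng.random() < p   # H is fatally hit this round
        b_hit = rng.random() < p   # B is fatally hit this round
        if h_hit and b_hit:
            return "both"
        if b_hit:
            return "B"
        if h_hit:
            return "H"

rng = random.Random(1)
trials = 100_000
results = [duel(0.3, rng) for _ in range(trials)]

# Fraction of duels ending with both fatally hit in the same round:
print(results.count("both") / trials)
# Fraction where (B) is fatally injured before (H):
print(results.count("B") / trials)
```

By symmetry the "B" and "H" frequencies should come out nearly equal, which is a quick internal consistency check on the simulation.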
Max(Utility) from Variety & Taste

If a decision process is designed according to the classical utility maximization model, then the choice of an option is explained by its having the highest utility among the considered options. Consequently, decision governance over such a decision process needs to influence (i) which options are considered, (ii) how options are compared against preferences, and (iii) how preferences are defined. This text is part of the series on the design of decision governance. Decision governance consists of guidelines, rules, and processes designed to improve how people make decisions. It can help ensure that the right information is used, that this information is correctly analyzed, that participants in decision making understand it, and that they use it before they make a decision. Find all texts on decision governance here.

What is the topic of this text? If a decision process is designed according to the classical utility maximization model, then: one, how does it explain decisions? And two, how can that process be influenced through governance?

Why is this topic relevant for decision governance? The classical utility maximization model is widely used in economics education. It is the basic model of decision making in microeconomics, and the simplest model capturing the idea that the option to select is the option which is most desirable among those considered. Utility is the name given to the measure of desirability. The reason the classical utility maximization model is interesting for decision governance is that decision governance commonly requires that options be identified and described, that criteria be defined, and that options be compared against criteria; this is usually followed by the calculation of a single number for each option, as a synthesis of all information about how the option ranks against the various criteria. That single number, when it has no unit, is called utility.
Sometimes, a decision process may not explicitly use utility as a measure, but have one metric it ultimately compares options over, such as, for example, the net present value in a currency of interest (in which case the NPV is a proxy for utility). If you understand how a decision is explained using the classical utility maximization model, then you know the factors which determine the decision. This allows you to design decision governance which influences these factors, to steer decision making to better options.
Background: What is the classical utility maximization model?
In the classical utility maximization model, we have the following:
1. One decision maker
2. Options that the decision maker chooses from
3. Criteria that the decision maker uses to compare options
4. Preferences that the decision maker has over each criterion
5. Relative importance that criteria have for the decision maker
6. The assumption that the decision maker will choose the most desirable option, whereby desirability takes into account preferences and importance of criteria
(Note that the model can be described in different ways, including as a problem of choosing between combinations of goods, or quantities of goods under a budget constraint. The version I'm using here is the expected utility model that ignores uncertainty, because this version is more common in my experience; I have seen many business cases, for example, which are structured in terms of options, criteria, ranking over criteria to reflect preference, relative importance of criteria, and then, a single rank which aggregates preference and criteria importance.)
It is important to understand the differences and relationships between the concepts of criteria, preferences, and criterion importance:
• A criterion is something that we use to compare options.
Generally speaking, it is a variable we use to describe all options, such that each option can be given a value for that variable, and different values influence desirability differently. Color, weight, length, width, and so on can be such variables, but they will be criteria only if the desirability of one option over another depends on differences over that variable: color will be a criterion if it matters to me which color is associated with each option, and it will not be one if I am indifferent to the color of options. Criteria can be as simple as those I mentioned, but they can be more complex, i.e., it may be unclear what values they can have, and how these values are mapped to options. For example, the aesthetic preference of judges in a competition for architectural designs or best films.
• A preference is a relation over values of a criterion that reflects the relative desirability of these values. If I am choosing a car and I am considering only cars that are red, white, or black, then my preference for car color needs to say how I relate the three colors in terms of which is more desirable than the other; for example, I prefer red to black, black to white, and, therefore, red to white.
• You can see by now that if you have more than one criterion when comparing options, then the only way to have a single rank over options is to also have a preference over criteria: for example, if I'm choosing a car, and I have only two criteria, color and acceleration, then I also need to say which of the two influences desirability of options more. Let's say that acceleration is more important, which would lead me to choose, for example, a white car only if it had the best acceleration over all others I considered, in the example where I prefer red to black to white cars.
Example: Decision making with the classical utility maximization model
Consider the following example.
In The Iliad by Homer, Achilles, the Greek hero, decides to withdraw from battle after a dispute with Agamemnon, the leader of the Greek forces during the Trojan War. Agamemnon, having been forced to return his war prize, Chryseis, to appease the gods, demands Achilles’ prize, Briseis, as compensation. In response, Achilles, feeling dishonored and enraged by Agamemnon’s actions, chooses to withdraw himself and his troops from the fighting, despite the consequences for the Greek army. If we were to explain the decision using the simplified classical utility maximization model, then that explanation could look as follows. 1. Identifying Criteria: Achilles would begin by determining the criteria he will use to evaluate his options. These criteria reflect his preferences and could include: □ Preservation of personal honor. □ Contribution to the success of the Greek army. □ Personal safety and survival. □ The potential impact on his long-term reputation. 2. Identifying Options: Achilles would then identify the possible actions he could take. In his case, the primary options are as follows. Each option would be described in terms of its likely outcomes, such as the effect on Achilles’ honor, the Greek army’s success, and the risks to his life. □ Withdraw from battle, which would preserve his honor but potentially harm the Greek war effort. □ Continue fighting despite Agamemnon’s insult, maintaining military support but compromising his personal pride. □ Negotiate with Agamemnon to seek a compromise that might preserve both honor and military effectiveness. 3. Ranking Options on Each Criterion: Achilles would rank each option according to how well it satisfies each criterion. For example: □ Withdrawal may rank highest for preserving honor but lowest for contributing to military success. □ Continuing to fight might rank highest for military success but lower for personal honor. □ Negotiation could receive moderate rankings across multiple criteria. 4. 
Assigning Weights to Criteria: Achilles would assign weights to each criterion, reflecting how important each is to him. Hypothetical weights are below. The weights over criteria would indicate the relative importance of each factor in influencing the decision. □ Honor might be assigned the highest weight if it is his most valued objective. □ Military success could be next, followed by personal safety and long-term reputation. 5. Selecting the Best Option: Using the rankings and weights, Achilles would calculate the overall score for each option. This score is a weighted average, combining how well each option meets the criteria with the relative importance of each criterion. He would then choose the option with the highest overall score. If preserving honor is weighted most heavily, and withdrawal ranks highest for honor, Achilles might decide to withdraw. Alternatively, if military success carries significant weight, continuing to fight might emerge as the best choice. Table 1: Summary of hypothetical options Achilles had, the criteria they are compared against, and the values and rationale for scores of each option on each criterion. If criteria are equally important, the best option would be to withdraw from battle.
• Preservation of Honor: Withdraw from Battle 5 (preserves honor by rejecting Agamemnon's insult); Continue Fighting 2 (honor is compromised by accepting Agamemnon's terms); Negotiate with Agamemnon 4 (honor is partially preserved through compromise).
• Greek Army Success: Withdraw 1 (risks Greek army defeat without Achilles' participation); Continue 5 (supports the Greek army, contributing to military success); Negotiate 3 (moderate benefit to the Greek army if an agreement is reached).
• Personal Safety: Withdraw 4 (ensures personal safety as Achilles is not in battle); Continue 3 (poses some personal risk but keeps Achilles in a strong position); Negotiate 3 (personal safety is balanced since Achilles avoids direct conflict).
• Long-term Reputation: Withdraw 3 (mixed impact on reputation; honor is upheld, but military retreat could be criticized); Continue 4 (positively impacts reputation as Achilles stays a key figure in the war); Negotiate 4 (could enhance reputation if a favorable compromise is reached).
What is the explanation of a decision in this decision model? In the classical utility maximization model, the explanation of a decision follows a simple pattern: the reason an option is chosen is because it is the one that was assessed as having the highest utility of all considered options. This is a weak explanation, in that we are appealing to something that synthesizes a lot of other information. The explanation begs other questions. Why is the chosen option the most desirable? And then, What criteria did the decision maker use? What preferences do they have? What options did they consider? It follows that the factors which determine the decision according to the classical utility maximization model are the options, preferences, criteria, and the relative importance of criteria. The figure below shows the structure of the probable explanation of a decision when we assume that the classical utility maximization model is a good representation of how the decision was made. For the background on this diagram, see the text here.
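The weighted scoring described in step 5 can be sketched in a few lines of Python, using the hypothetical scores from Table 1. The weights below are my own illustrative choice (honor weighted most heavily, as the narrative suggests), not values given in the text:

```python
# Hypothetical scores from Table 1: option -> {criterion: score}
scores = {
    "Withdraw from battle":     {"honor": 5, "army": 1, "safety": 4, "reputation": 3},
    "Continue fighting":        {"honor": 2, "army": 5, "safety": 3, "reputation": 4},
    "Negotiate with Agamemnon": {"honor": 4, "army": 3, "safety": 3, "reputation": 4},
}

# Illustrative weights (my assumption): honor matters most to Achilles.
weights = {"honor": 0.5, "army": 0.2, "safety": 0.2, "reputation": 0.1}

def utility(option_scores, weights):
    """Weighted sum of criterion scores: the option's overall utility."""
    return sum(weights[c] * s for c, s in option_scores.items())

utilities = {opt: utility(s, weights) for opt, s in scores.items()}
best = max(utilities, key=utilities.get)
print(best)  # with honor-heavy weights, withdrawal has the highest utility
```

Changing the weights changes the winner, which is exactly the sense in which governance over criteria importance steers the decision.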
How to influence this decision process through governance? As noted above, we need to consider guidelines and rules that influence: • Options, the variety and specificity • Criteria • Preferences • Criteria importance Influencing Options There are various ways to influence options that are considered during decision making. In general, we can do the following. 1. Decision Framing: The way a decision problem is framed impacts the range of options considered. For example, Tversky and Kahneman’s prospect theory shows that framing choices by emphasizing gains or losses can influence risk preferences. When decision-makers perceive the stakes as a loss, they are more likely to consider riskier options than when they see them as potential gains (Kahneman & Tversky, 1979). 2. Decision Situation: The culture and norms within the decision situation shape what is perceived as an option, and what is unlikely to be perceived as such. In a firm, a culture that encourages creativity, openness to diverse perspectives, and risk-taking typically results in a wider array of options being explored. Conversely, hierarchical or risk-averse cultures may limit the consideration of innovative or unconventional alternatives (Schein, 2010). 3. Structured Decision-Making Processes: Use of methods like decision analysis, scenario planning, or multi-criteria decision analysis can influence the range of options. Methods can make it mandatory to consider specific kinds of options, or to vary particular parameters to generate new options (Goodwin & Wright, 2014). 4. Mitigating Cognitive Biases: Decision-makers are often subject to cognitive biases like anchoring, availability bias, and confirmation bias, which can restrict the set of options they consider. Addressing these biases through techniques like “devil’s advocacy” or structured debate helps in challenging initial assumptions and broadening the range of potential alternatives (Bazerman & Moore, 2013). 
Encouraging teams to explore counterarguments or consider alternatives they initially dismissed can reduce the impact of biases. 5. Diverse Decision-Making Teams: Research highlights the importance of team diversity in expanding the range of options considered. Teams composed of individuals with varied backgrounds, expertise, and cognitive styles tend to generate more innovative ideas and are less likely to overlook unconventional solutions (Page, 2007). Diversity fosters critical thinking and prevents groupthink, where a homogenous group might converge too quickly on a narrow set of options. Influencing Criteria and Preferences Preferences and criteria go hand in hand: if there is no criterion to reflect a decision maker’s preference, then those preferences do not matter, and vice versa, if the decision maker wants to satisfy specific preferences they have, they will insist on introducing criteria which allow them to do so. Influencing preferences requires understanding how they are formed, and consequently, what shapes them. Factors that influence criteria and preferences are the following. 1. Psychological Factors: Preferences are often influenced by cognitive biases, heuristics, and emotional states. Behavioral economics research has demonstrated that individuals’ preferences can deviate from rational expectations due to factors like loss aversion, overconfidence, or framing effects. For example, Kahneman and Tversky’s prospect theory shows that individuals are more sensitive to potential losses than to equivalent gains, shaping their risk preferences (Kahneman & Tversky, 1979). 2. Social Norms and Cultural Context: Social preferences, such as concerns for fairness, reciprocity, and status, can lead decision makers to value outcomes that align with societal expectations. Fehr and Schmidt (1999) found that individuals often make decisions based not just on self-interest but on considerations of fairness, driven by social norms. 
Cultural influences also shape values and behaviors, such as preferences for cooperation versus competition, which differ across societies (Akerlof & Kranton, 2000). 3. Institutional and Market Structures: Institutions, including market systems, laws, and organizational frameworks, also influence decision makers' preferences. Exposure to different market structures can shape individuals' preferences for certain behaviors, such as competition or collaboration. For instance, capitalist economies may foster preferences for individual achievement, while cooperative market structures might encourage collective decision-making (Bowles, 1998). 4. Learning and Experience: Preferences evolve based on individual experiences, information, and knowledge. Experienced and observed outcomes of past decisions shape future preferences. For example, a person who experiences financial losses in risky investments may develop a preference for safer investments in the future (Bisin & Verdier, 2011). 5. Time Preferences and Discounting: Individuals value present versus future outcomes differently. Time preferences, captured in models of discounting, reflect whether decision makers prefer immediate rewards or are willing to wait for future gains. For example, in some decision situations, decision makers may heavily discount future rewards, preferring immediate outcomes (Laibson, 1997). 6. Genetic and Biological Factors: Studies in neuroeconomics indicate that certain preferences, such as risk tolerance and time preferences, may have a biological basis influenced by brain activity (Camerer, Loewenstein, & Prelec, 2005). These factors contribute to the complexity of how preferences are formed. To influence preferences through decision governance, we need to introduce mechanisms which shape preference factors. We will revisit in more detail how these factors can be influenced through decision governance in other texts. References and Further Reading • Bazerman, M. H., & Moore, D. A. (2013).
Judgment in Managerial Decision Making. Wiley. • Goodwin, P., & Wright, G. (2014). Decision Analysis for Management Judgment. Wiley. • Page, S. E. (2007). The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies. Princeton University Press. • Schein, E. H. (2010). Organizational Culture and Leadership. Wiley. • Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291. • Fehr, E., & Schmidt, K. M. (1999). A theory of fairness, competition, and cooperation. The Quarterly Journal of Economics, 114(3), 817–868. • Bowles, S. (1998). Endogenous preferences: The cultural consequences of markets and other economic institutions. Journal of Economic Literature, 36(1), 75–111. • Laibson, D. (1997). Golden eggs and hyperbolic discounting. The Quarterly Journal of Economics, 112(2), 443–477. • Bisin, A., & Verdier, T. (2011). The economics of cultural transmission and socialization. Handbook of Social Economics, 1A, 339-416. • Akerlof, G. A., & Kranton, R. E. (2000). Economics and identity. The Quarterly Journal of Economics, 115(3), 715–753. • Camerer, C. F., Loewenstein, G., & Prelec, D. (2005). Neuroeconomics: How neuroscience can inform economics. Journal of Economic Literature, 43(1), 9-64.
Arc Length and Radian Measure Worksheets
This section helps us further explore geometric circles. We learn how to measure specific portions of circles. Most of this is based on the concept of circumference, which is the full length of the circle's curvature. We measure this curvature in units of degrees, but we also explore the use of radian measures rooted in this topic. We also show you how to transition between these two forms of measure. These worksheets and lessons teach students how to determine the length of the arc of a circle and its measure in radians. Aligned Standard: HSG-C.B.5
Homework Sheets: A number of different conversion strategies are set for you over the course of this series.
Practice Worksheets: Many teachers write in and let us know that these were very helpful for them.
Math Skill Quizzes: How would you go about using these as a daily warm up? It's a good idea!
How to Calculate the Length of an Arc
An arc is a segment of a circle's circumference, that is, any fraction of the circumference lying between two points on the circle. Arc length is the length of that curved segment. An arc measure gives the central angle that the arc subtends at the center of the circle. Angles can be measured in radians or degrees. Degree measures are the most popular to use, based on the convention that a complete circle encompasses 360 degrees. You can convert degrees to radians by multiplying the measure in degrees by π/180. Given the arc length and the radius, we use the following formula to find the arc measure: arc measure = (arc length)/radius = s/r. Let's understand it better with an example. Suppose our arc length is 3 cm and our radius is 4 cm.
Write down the formula first: arc measure = s/r, so arc measure = 3/4. This is in radians; we can express it in degrees by multiplying by 180/π: (3/4)(180/π) ≈ 42.97, or about 43 degrees. What is the Radian Measure of an Angle? Angles, in general, measure the amount of rotation required to get from one side of the angle to the other. Radian measure looks at this angle as a central angle of a circle, with the vertex of the angle positioned at the center. One radian is the angle subtended by an arc whose length equals the radius of the circle. The circumference of a circle can be calculated using the formula 2πr, so a full turn sweeps out an angle of 2πr/r = 2π radians. We can relate that to measuring in degrees because 360° = 2π radians.
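The worked conversion above can be checked in a few lines of Python (the function names here are mine, for illustration):

```python
import math

def arc_measure_radians(arc_length, radius):
    """Central angle in radians subtended by an arc: theta = s / r."""
    return arc_length / radius

def radians_to_degrees(theta):
    """Convert radians to degrees by multiplying by 180 / pi."""
    return theta * 180 / math.pi

theta = arc_measure_radians(3, 4)    # 0.75 rad, for s = 3 cm and r = 4 cm
degrees = radians_to_degrees(theta)  # about 42.97 degrees
print(round(degrees))                # prints 43
```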
What are Aggregation Functions on Snowflake?
1 Answer
Aggregate functions operate on values across rows to perform mathematical calculations such as sum, average, counting, minimum/maximum values, standard deviation, and estimation, as well as some non-mathematical operations. An aggregate function takes multiple rows (actually, zero, one, or more rows) as input and produces a single output. In contrast, scalar functions take one row as input and produce one row (one value) as output. An aggregate function always returns exactly one row, ***even when the input contains zero rows***. Typically, if the input contained zero rows, the output is NULL. However, an aggregate function could return 0, an empty string, or some other value when passed zero rows. Snowflake provides a variety of aggregation functions that allow you to perform calculations and summarizations on data. Here are some commonly used aggregation functions in Snowflake:
1. SUM: Calculates the sum of a numeric column. Example: **`SUM(sales_amount)`** calculates the total sales amount.
2. AVG: Calculates the average (mean) of a numeric column. Example: **`AVG(product_rating)`** calculates the average rating of products.
3. MIN: Returns the minimum value in a column. Example: **`MIN(order_date)`** returns the earliest order date.
4. MAX: Returns the maximum value in a column. Example: **`MAX(order_date)`** returns the latest order date.
5. COUNT: Counts the number of non-null values in a column. Example: **`COUNT(customer_id)`** counts the number of rows with a non-null customer ID.
6. GROUP BY: Groups rows based on one or more columns and performs aggregations on each group. Example: **`SELECT category, SUM(sales_amount) FROM sales_table GROUP BY category`** calculates the total sales amount for each category.
7. DISTINCT: Returns the unique values in a column.
Example: **`SELECT DISTINCT product_name FROM products`** retrieves the unique product names.
8. COUNT DISTINCT: Counts the number of unique values in a column. Example: **`COUNT(DISTINCT customer_id)`** counts the number of distinct customer IDs.
9. GROUPING SETS: Performs multiple groupings in a single query, generating subtotals and grand totals. Example: **`SELECT category, city, SUM(sales_amount) FROM sales_table GROUP BY GROUPING SETS ((category), (city), ())`** calculates subtotals by category, by city, and the grand total.
10. HAVING: Filters groups based on aggregate conditions. Example: **`SELECT category, SUM(sales_amount) FROM sales_table GROUP BY category HAVING SUM(sales_amount) > 10000`** retrieves categories with a total sales amount greater than 10,000.
These are just a few examples of the aggregation functions available in Snowflake. Snowflake also supports functions like STDDEV, VARIANCE, MEDIAN, FIRST_VALUE, LAST_VALUE, and more for advanced statistical and windowing calculations. The Snowflake documentation provides a comprehensive list of aggregation functions with detailed explanations and usage examples.
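A quick way to see the NULL-handling rules described above (aggregates skip NULLs; most return NULL over zero non-null inputs, while COUNT returns 0) is to model them in Python, with `None` standing in for SQL NULL. This is a simplified sketch of the semantics, not Snowflake's implementation:

```python
def sql_sum(values):
    """SUM skips NULLs; over zero non-null inputs it returns NULL (None)."""
    vals = [v for v in values if v is not None]
    return sum(vals) if vals else None

def sql_count(values):
    """COUNT(col) counts non-null values; over zero rows it returns 0, not NULL."""
    return sum(1 for v in values if v is not None)

sales = [100, None, 250, 50]
print(sql_sum(sales))    # 400: the NULL row is skipped
print(sql_count(sales))  # 3: non-null values only
print(sql_sum([]))       # None: an aggregate over zero rows
print(sql_count([]))     # 0: COUNT is the exception
```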
Box-and-whisker plots
Basics on the topic
A box-and-whisker plot is a quick way of showing the variability of a data set. It displays the range and distribution of data along the number line. To make a box-and-whisker plot, start by ordering the data from least to greatest. Next, inspect the ordered data set to determine five critical values: minimum, Q1, median, Q3, and maximum, and plot them above a number line. The minimum and maximum values are the least and greatest values. The median, or middle value, splits the set of data into two equal-numbered groups. The first quartile, Q1, is the median of the lower half of the data set. The third quartile, Q3, is the median of the upper half of the data set. The box is created by drawing vertical line segments through Q1, the median, and Q3, and drawing two horizontal line segments connecting the endpoints from Q1 to Q3, passing through the median. The first whisker is created by drawing a horizontal line connecting the minimum and Q1, while the second whisker is created by drawing a horizontal line connecting Q3 with the maximum. A good measure of the spread of data is the interquartile range (IQR), the difference between Q3 and Q1. This gives us the width of the box, as well. A small width means more consistent data values, since it indicates less variation in the data, i.e., that data values are closer together. (Aligned topic: summarize and describe distributions.)
Transcript
Deep in the mountains lies a martial arts school at 1 Foot Fist Way that focuses on breaking wooden planks. All of the students must break as many wooden planks as they can in one strike. Each student records his number and presents it to his master at the end of the week. The most consistently good student does not have to clean the school for one week. But how can the students' master tell which of his students are most consistently the best? By using box-and-whisker plots, of course!
Using Box-and-Whisker Plots
If we want to put the students' data into a box-and-whisker plot, we need to have the numbers in order. Looking at student 1's record for the week, he has yet to order his list. After ordering the lists, we need to find 5 critical values: the minimum, the first quartile (also known as Q1), the median, the third quartile (also known as Q3), and the maximum.
Student 1
Heeding his teacher's instructions, student 1 orders his list. The minimum, or the smallest number, for student 1 is 1. Student 1's maximum, or the largest number, is 9. Next, let's find the median. The median is the middle number in the data set. Because we have an even number of data points, there are two middle numbers. When this happens, you should take the average of the two middle numbers. In this case, the average of 4 and 4 is 4. So now we know the minimum is 1, the median is 4, and the maximum is 9. To find each quartile, we must split the data into halves. Q1 is the median of the first half of the data: 1, 2, 2, 3, 4. The middle number of this portion of the data is 2, so Q1 is 2. Q3 is the median of the second half of the data: 4, 6, 7, 8, 9. The middle number is 7, so Q3 is 7. Now that we have all 5 values, we can draw the box-and-whisker plot on our number line. Always plot the minimum, Q1, median, Q3, and maximum values. The box part of the box-and-whisker plot is drawn with a vertical line through both the Q1 and Q3 values. These are then connected to form our box. Finally, we also need to draw a vertical line in the box to represent the median. The interquartile range, or IQR, is obtained by subtracting Q1 from Q3. In the case of student 1 this is 7 minus 2, or 5. The whiskers are then drawn to connect the box to the minimum and maximum values.
Student 2
Now let's look at student 2. First let's put the numbers in order. Now we need to find the 5 critical points again. Here, the minimum is 1 and the maximum is 8. Now let's find the median.
Again, we have an even number of data points; this means we will have two middle values. The two middle values are 5 and 5, which, when averaged, gives us 5. Now, we can split the data into halves in order to find Q1 and Q3. The first half of the data is 1, 1, 2, 3, 5. So Q1 is 2, because 2 is the median of the first half of the data. The second half of the data is 5, 6, 6, 8, 8. 6 is the median value in this portion of the data, so Q3 is 6. Now that we have our five points, we can make a box-and-whisker plot. We draw a box from Q1 to Q3. Then, we make the whiskers by drawing lines from each end of the box to connect the minimum and maximum values.
Student 3
Let's put the data from student 3 in order. This diligent disciple has already completed his box-and-whisker plot! Let's check to see if all the parts are there. The minimum is 0, the maximum is 9... This time, even though the two middle numbers are different, we still just need to take the average. So, our median is the average of 3 and 4, or 3.5. Q1 is 1, Q3 is 7.
Comparing graphs
Points are plotted, box and whiskers drawn. Now we can compare the graphs and figure out which student is the most consistent. All three of these box-and-whisker plots are pretty similar, but they do have a couple of differences. The box part of the plot for student 2 is the shortest. This means that his data points are closer together; another way to say this is that student 2 has less variation in his data. You may also notice that some of the critical values are different between the three plots. One critical point that varies the most between the three graphs is the median. Student 1 has a median of 4, student 2 has a median of 5, and student 3 has a median of 3.5. So even though student 1 and student 3 have the greatest maximums at 9, their medians are smaller than student 2's. Finally, the IQR values will show the teacher how consistent each student was.
Student 1 has an IQR of 5, student 2 has an IQR of 4, and student 3's IQR is 6. So it's confirmed that student 2 is the most consistent. Before the teacher gets around to announcing the best student, the students clamor for him to show them how it's really done. Ahem. Deep in the mountains lies a martial arts school at 1/3 Foot Fist Way that focuses on breaking wooden planks, roads, trees, mountains.
Box-and-whisker plots exercise
Would you like to apply the knowledge you've learned? You can review and practice it with the tasks for the video Box-and-whisker plots.
• Explain how to create a box-and-whisker plot.
Here you see the list of scores of student one and the sorted version of this list below it. Above both lists you can see the resulting box-and-whisker plot. The median of a sorted list divides this list in two halves with the same number of data points. The interquartile range $IQR=Q_3-Q_1$ is a measure for variation. Here you see a complete box-and-whisker plot. How can we create such a plot? Well, for a given set of data, we first have to order the data points, namely the numbers. Next, we have to determine some critical values:
□ The minimum
□ The maximum
□ The median
□ The first quartile $Q_1$
□ The third quartile $Q_3$
Let's have a look at the ordered data list: The minimum of this list is the lowest or the most left value, $1$, and the maximum the highest or the most right value, $9$. We draw those values in a graph with a horizontal axis labeled from $1$ to $10$. The median of an odd data list is the middle of the list. If the number of data points in the list is even, we choose the average of both middle data points. The list above has an even number of elements. So the median is the average of $4$ and $4$, which is $4$. We also draw this value in the graph above. The first quartile is the median of the first half of the list. Here it's $2$. The third quartile is the median of the second half of the list, $7$.
The interquartile range is given by $Q_3-Q_1= 7-2=5$. We draw $Q_1$ as well as $Q_3$ in the graph above. But we haven't finished yet:
□ We still need to draw a box from $Q_1$ to $Q_3$.
□ Last, we connect the minimum, $Q_1$, $Q_3$, and the maximum with whiskers.
• Find the right box-and-whisker plot.
Remember to first order the list. The minimum is the most left and the maximum the most right value of the ordered list. The median of an even numbered list is the average of the two middle data points. The median of an odd numbered list is the middle data point of the list. Either way, the median divides the list in two lists of the same size. $Q_1$ is the median of the first half and $Q_3$ is the median of the second half of a list of data. To create the box-and-whisker plot we want, we have to do the following:
1. Sort the list: $1,1,2,3,5,5,6,6,8,8$.
2. The minimum is $1$ and the maximum $8$ (the most left and most right values, respectively).
3. The median is the average of $5$ and $5$, which is $5$. Here we have to determine the average of the two middle values because the number of elements in the list is even.
4. The median of the first half of the list, $1,1,2,3,5$, is $Q_1=2$, and the median of the second half of the list, $5,6,6,8,8$, is $Q_3=6$.
5. We draw a box from $Q_1$ to $Q_3$.
6. Lastly, we connect the minimum $1$ and $Q_1=2$ as well as $Q_3=6$ and the maximum $8$ with whiskers.
The resulting plot is shown beside.
• Compare the different data sets.
The interquartile range $IQR=Q_3-Q_1$ is a measure for variance. The smaller the variance, the higher the consistency. The length of the box is the $IQR$. Looking at the box-and-whisker plots for each student, we can compare the students to figure out which student is the most consistent. The box of student $1$ is bigger than that of student $2$. This is a measure of variation. Or, in other words, student $2$ is more consistent.
The minimum values are the same, and the maximum of student $2$ is $1$ more than that of student $1$. Also, the median of student $2$ is higher than that of student $1$. So we can conclude, using the box-and-whisker plots, that student $2$ is the more consistent one. • Find the data set(s) corresponding to the box-and-whisker plot pictured. Order each set so that you can find the critical values. First, check the minimum and maximum of the set. Check each data set: it's possible that two different data sets lead to the same box-and-whisker plot. First, exclude any list with either a minimum or a maximum different than the minimum or maximum of the box-and-whisker plot pictured. This means that we exclude the fourth list, as it does not have a minimum of $1$, and the first list, as it does not have a maximum of $6$. Next let's have a look at the median: the median of the fourth list is $5$ and for all the remaining lists it's $4$. So we calculate $Q_1$: □ $Q_1=2$ for the third list □ $Q_1=1$ for the last list Those lists can also be excluded, as $Q_1=3$ for the box-and-whisker plot pictured. There are two lists left; calculating $Q_3$ for those lists, we get: □ The second list has the third quartile $Q_3=5$. □ The fifth list has the third quartile $Q_3=5$. So both of these lists correspond to the box-and-whisker plot pictured. • Label the values in a box-and-whisker plot. The minimum is the lowest and the maximum the highest value of a data list. The median lies in the middle of a sorted data list. It divides the list into two halves of the same size. The quartiles are the medians of the halves of a list: $Q_1$ ($Q_3$) is the median of the first (second) half of the list. minimum $\le$ $Q_1$ $\le$ median $\le$ $Q_3$ $\le$ maximum Here you see the solution pictured. The data list as well as the sorted data list is already given. The minimum, $1$, is the leftmost and the maximum, $8$, the rightmost value.
The median lies in the middle of the list. The given list has an even number of entries, so the median is the average of the two middle data points, $4$ and $4$. Thus the median is $4$. The quartiles are medians as well, each time for lists with an odd number of entries: □ $Q_1=2$ is the median of the first half $1,1,2,3,5$. □ $Q_3=6$ is the median of the second half $5,6,6,8,8$. • Determine the interquartile range $(IQR)$. The interquartile range $IQR=Q_3-Q_1$ is a measure of the spread of a data set. First, sort the list and determine the median. If the list has an odd number of entries, then the median is the middle data point of the list. If the list has an even number of entries, then the median is the average of both middle data points. The first quartile $Q_1$ is the median of the first half of a data set, and the third quartile $Q_3$ is the median of the second half of a data set. The interquartile range can also be a rational number. To determine the interquartile range you first have to establish the first and the third quartile. For this you need the median of the list. The median of an odd-numbered list is the middle data point of the list. If the list is even-numbered, the median is the average of both middle data points. Let's start with $3,~3,~4,~7,~10,~11,~8,~2,~5,~4,~9$. 1. The sorted list is $2,~3,~3,~4,~4,~5,~7,~8,~9,~10,~11$. 2. The median is the middle of this list: $5$. 3. The median of the first half $2,~3,~3,~4,~4$ is $Q_1=3$, and the median of the second half $7,~8,~9,~10,~11$ is $Q_3=9$. 4. So $IQR=9-3=6$. Now let's consider $5,~3,~4,~3,~12,~7,~8,~4$. 1. The sorted list is $3,~3,~4,~4,~5,~7,~8,~12$. 2. The median is the average of the two middle data points $4$ and $5$, namely $\frac{4+5}2=\frac92=4.5$. 3. The median of the first half $3,~3,~4,~4$ is $Q_1=\frac{3+4}2=\frac72=3.5$, and the median of the second half $5,~7,~8,~12$ is $Q_3=\frac{7+8}2=\frac{15}2=7.5$. 4. So $IQR=7.5-3.5=4$. Finally, we have $7,~5,~8,~5,~7,~6,~12,~3,~5$. 1.
The sorted list is $3,~5,~5,~5,~6,~7,~7,~8,~12$. 2. The median is the middle of this list: $6$. 3. The median of the first half $3,~5,~5,~5$ is $Q_1=5$, and the median of the second half $7,~7,~8,~12$ is $Q_3=\frac{7+8}2=\frac{15}2=7.5$. 4. So $IQR=7.5-5=2.5$.
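The median-of-halves convention used in these solutions (the median itself is excluded from both halves when the list length is odd) is easy to check in code. The following Python sketch is illustrative and not part of the original lesson; it reproduces the three IQR results worked out above:

```python
def median(sorted_list):
    """Median of an already-sorted list: the middle value if the length is
    odd, the average of the two middle values if it is even."""
    m = len(sorted_list)
    mid = m // 2
    return sorted_list[mid] if m % 2 else (sorted_list[mid - 1] + sorted_list[mid]) / 2

def iqr(data):
    """IQR = Q3 - Q1, where Q1/Q3 are the medians of the lower/upper half.
    When the length is odd, the overall median is excluded from both halves."""
    s = sorted(data)
    half = len(s) // 2
    q1 = median(s[:half])            # first half
    q3 = median(s[half + len(s) % 2:])  # second half (skip median if odd length)
    return q3 - q1

print(iqr([3, 3, 4, 7, 10, 11, 8, 2, 5, 4, 9]))  # 6
print(iqr([5, 3, 4, 3, 12, 7, 8, 4]))            # 4.0
print(iqr([7, 5, 8, 5, 7, 6, 12, 3, 5]))         # 2.5
```

Note that other quartile conventions exist (some include the median in both halves, or interpolate), so library functions may give slightly different values than this lesson's method.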
OpenLab #4: Advice for the future (Due Thursday, 5/22/14, at the start of class).  Imagine that you are invited to speak on the first day of MAT 2680, to give advice to entering students.  Write at least three sentences responding to one or two of the following, describing what you would tell them. 1. What do you wish that you had been told at the start of this class, to help you succeed? 2. Choose one topic in the course that is especially challenging. Identify it, and give advice to students trying to master that topic. 3. What is the most important prior knowledge (not taught in the class) that you need in order to succeed?  Why is it important? Extra Credit. Respond to someone else's comment.  Do you agree? disagree? Have anything to add? 63 responses to “OpenLab #4: Advice for the future” 1. Advice to students that are entering MAT2680 : Differential Equations. From my knowledge and perspective, having completed the course, Differential Equations is a practical view on calculus (differentiation and integration). It applies derivatives and integrals to real world situations and problems, especially physics. If you are a student who enjoyed calculus and has a deep interest in mathematics, then Differential Equations is certainly one of the best classes you will study. Taylor Polynomials, Hooke’s Law (from calc. based physics), Method of Partial Fractions, and basically most topics from integration have been applied in solving for y – this is the purpose of differential equations. From my experience, my professor explained that the equation is describing some real world phenomena or even a real world problem which involves derivatives and higher derivatives with respect to time. So be sure to brush up on calculus 1 and most importantly calculus 2. Brush up on the different integration techniques. It is wise to keep calculus notes with you, to study from, and to have as reference while in class and for studying purposes.
For students who do not have an interest in math, and did badly on calculus (1 & 2) but have to take this class because your degree depends on it, it would be wise to brush up on your integration and differentiation techniques. Get extra help, extra tutoring, or find someone who is willing to help (GO TO PROFESSOR’S OFFICE HOURS!!!!) This class requires time and effort; math is not something you can only read to understand, you have to actually practice a lot a lot a lot of examples. This is what I do: practice doesn’t make perfect, but practice gets you perfect scores and good grades, so dedicate time. GOOD LUCK! -Rachel Rackal □ I agree, especially keeping calculus notes is very important since you will be expected to know the general derivatives and integrals of certain trig functions and logarithms. □ I completely agree with you, this class is great. It helps you apply math in the real world. Practice is very important if you want to obtain good grades. □ I agree 100%. The first few days of class I was completely lost because I had forgotten most of my Calc 1 material. Brushing up on how to find derivatives and integration is key to understanding how to work most of these problems. You have to relearn u-substitution and integration by parts to solve some of the DE problems given to you 2. Thanks for leaving the first response, Rachel (you were quick!). Good advice all around. – Prof. Reitz 3. If it was the first day of the differential equations class, the advice I would give to the students would be to really brush up on your integration techniques and to try and grasp the concept of differential equations instead of just knowing how to solve the question. Don't be afraid to ask questions; I'm sure more than one of you will have the same issues.
Webwork is extremely helpful and should not be left until the last second, because you don't want to be up at the crack of dawn struggling over why your answers to Euler's equation are completely off. And lastly, don't stress yourself out when test time comes around; just keep calm or your paranoia will make you forget everything. □ I agree with the homeworks as they have helped me extremely since pre calculus. I would also add, if you are afraid of asking questions during class time, try to meet with the professor during office hours. It helps greatly. 4. In terms of prior knowledge, I would say review the Calculus 2 material! Differential Equations heavily involves derivatives, integrals, and more recently Taylor Series, so by keeping Calculus 2 ready and fresh on the first day of class, it is more likely that Differential Equations will be much easier. If you want to make it even easier, don’t take a break from math for one semester like I did (biggest mistake). In addition to that, constantly ask questions. If you don’t, there’s a good chance that the answer to that question could’ve helped you solve a problem on Webwork or on an exam. There’s an even higher chance that another student has the same question but doesn’t want to raise their hand, so in a way you’re doing them a favor too. Finally, don’t save anything till the last minute, no matter what class you take; this never works! □ I agree with the “constantly ask questions” part, especially when taking differentials with professor Reitz; take advantage of the 6-7 seconds of silence after the “any questions???” is asked; think about anything you might have had difficulties with, and within the 6-7 seconds you most probably will come up with a nice question about it. 5. Choose one topic in the course that is especially challenging.
Identify it, and give advice to students trying to master that topic; One topic that I believe is important to take into consideration is “book keeping.” Differential equations tend to be fairly lengthy problems, and in order for the class notes to make sense I suggest using all the tricks in the book; here are two I came up with: #1 [Use a multicolored pen/pencil to take notes, which helps immensely when trying to review the notes and differentiate between the intricate steps taken to solve a problem] #2 [Use a smart phone or recorder to record the audio of the lecture while making time stamps on the side of the notes for easy reference] 6. I would strongly suggest that students make sure calculus 2 is fresh on their minds. Differential equations touches on a few topics from calculus and calculus 2. Derivatives and antiderivatives are going to become your best friend. Make sure you know how to do integration by parts; this is one topic from calculus 2 that comes up repeatedly. □ I agree with him, everybody should know calculus very well in order to understand Differential Equations. I would love to add one thing: not only for differential equations, but in general, try to remember all the math methods you learn and use them when you need them. ☆ I totally agree with you Mohammed Ahmed. I recommend students refresh their minds on topics such as integration, derivatives, and anti-derivatives to avoid an unpleasant struggle with the course □ I agree, because even though I knew my derivatives and anti-derivatives, it gave me trouble, which made me make little errors that I should have noticed. It wasn’t bad once you practice and get the hang of it. □ I agree with making sure that calculus 2 is fresh in your mind, because I took it over 2 semesters ago and what I really struggled with this semester is trying to relearn those methods that I needed to reuse from cal 2.
□ I agree with him, everybody should know calculus very well in order to understand Differential Equations, and all integration methods for sure. 7. At the beginning I would have liked to have been told that the first lectures would be the hardest and that I should pay attention a lot more. A challenging topic would be the identification of types of differential equations and how to choose the methods for solving them. My advice would be, well, pay attention to the methods; you must be determined to use more than just 1 or 2 hours a week for studying this topic. Important prior information would be integration techniques from calc II and even some basic math skills like being able to tell if you should put a negative sign or positive sign. Pay attention to the signs in your work XD I have personally messed up on that several times…. 8. Choose one topic in the course that is especially challenging. Identify it, and give advice to students trying to master that topic. -Nonhomogeneous second order differential equations are especially challenging. It will look confusing and difficult, but if you group everything up with a common factor, it’ll look neat and orderly and you’ll get an idea of what to do. What is the most important prior knowledge (not taught in the class) that you need in order to succeed? Why is it important? -Remember almost everything in calculus 2: integration, partial fractions, and Taylor series. You will have an easier time understanding and doing the problems in Differential Equations. Remember how to do derivatives from calculus one also. 9. One thing that is very important for this class is concentration, and revising the previous stuff that you have learned from other math classes. Taking the differential equations class with Prof. Reitz is really amazing; the way he explains the topics is really understandable, and he always helps to remind us of previous stuff.
I would tell everyone to review calculus notes while taking differential equations, which helps to solve problems. Paying attention in class is the most important thing, which we never learn from class; we have to learn it ourselves. And paying attention to the subject makes all the hard methods easy. □ I agree that paying attention in class will greatly ease your journey through the course. Revision of prior knowledge will assist as well. The professor does also do a good job explaining most topics. □ I agree that you need to pay a lot of attention in the class, as well as have the knowledge from previous math courses. Professor Reitz is really amazing at explaining the topics so you can understand, and always helpful. 10. I would say do all of the homework questions and redo all the cal 2 homework so everything will be in your mind ready to use. □ I agree know cal 2 is very important also you need to have good algebra skills. ☆ Knowing * 11. One topic that gave me the most problems this semester was the first order linear equation. It requires you to have the basic knowledge of calculus 2. If calculus 2 wasn’t your strong area, then finding the derivative or even integration by parts would make it harder for you to understand a problem, let alone solve it on your own. I would say review your notes for the first few topics so one can have a better understanding of the class. Doing the webwork homework also gives you a better understanding because you're doing the problem over and over. □ Sabeeya’s comment has been the most useful so far because it covers what we have done so far throughout the semester, the first order linear equation. Therefore, I can relate to this comment. To the other comments not so much, because they are too advanced; they speak about things I have never seen (partial fractions, Euler’s, and yada yada yada), but I get it, I have to get ready for those topics because I’ve seen everybody is getting stumped.
Based on this advice I want to go back to calculus II and learn integration by parts again, since my notes are not useful. Right now I'm struggling so hard on webwork doing separable and first order linear equations. 17 tries and counting; all I can say is GG. PD omw to Prof. Reitz’s office hours. ☆ Hi Gabriel, Thanks for your reply! But it looks like you left it on the OpenLab site for the 2014 class. Could you copy/paste it to the current site (so I make sure to include it in my grading)? Prof. Reitz 12. One thing I wish I had been told at the beginning of the class is to review calculus 2 notes, because this course uses a lot of those concepts and techniques. The most difficult topic in this course is finding the approximation of a point using the numerical methods. To succeed in this course a person has to practice, practice, and practice, and do all the problems required. Also, if you are taking this course with Prof Reitz then you're in the right place, because you are actually going to learn the material. Prof Reitz, you're the best. 13. At the onset of the Differential Equations class, I believe it is important to know that the first part of the class [Solving First Order D.E’s] is the trickiest part of the class, at least for me; once you get past this, the remainder of the class is relatively easy. Try not to miss any days, and always do the homework problems. One thing that would be essential to review beforehand would be your algebra skills; the integration isn’t necessarily the hardest part of solving the problem, but it is the messiness of the algebra that can be confusing. If you review your algebra, you make the class easier for yourself. Of course don’t forget to review derivatives and integration as well. □ I think the hardest topic in general was when Euler’s Method Approximation first appeared. 14.
1) What You Need to Do: Focus, commit to this class. It’s not easy, but it will be if you give it time. Don’t be afraid to ask questions, and don’t be upset when you’ve done the problem six times and still don’t understand why it’s wrong (you probably forgot to distribute, or it’s a minor error). If you follow the work and pay attention in class you should have no problem in this class. Yes, you do need to brush up on Calc 2 (Taylors/Partial Fractions), but don’t think you're at less of an advantage if you did not do well in Calc 2. 2) What Could Be Challenging: Euler’s/Backwards Euler Method. Personally I had problems with Euler’s, as its method is, well, a bit ambiguous. You won’t like practicing it, but if you do, you will master it. I personally spent about 4 hours collectively studying solely Euler’s/Backward’s Euler Method. If you have any questions, don’t hesitate to ask Pr. Reitz; he is extraordinarily good the little mistakes that make all the difference. 3) What You Need to Know: Partial Fractions. You will need to know how to do partial fractions well; it will sneak up on you at the end of the semester. Integration IS A MUST for basically half the class. Lastly, be good at keeping your work organized on paper (trust me, your work will get messy and there will be a lot of it), and be wary of your sign/coefficient distributions. □ ” …Pr. Reitz he is extraordinarily *good at finding* the little mistakes that make all the difference.” ☆ I have to agree with you Alexis. Euler’s method was painstaking, especially before we had those nice formulas. And the only way to understand it is through practice. I also have to agree that Pr. Reitz is amazing when it comes to teaching and explaining the material. 15. One important thing to remember is to study!! If math is something that comes easy to you, then that’s great, but don’t forget to study still. Always refresh yourself by reviewing your previous courses. In Differentials you’ll need to know calculus and a lot of calculus 2.
Don’t forget to ask for help and to do all of your assignments. Assignments give you the opportunity to work on more examples on your own, and they give you the chance to test what you learned in class. □ I strongly agree with you Karen. Students must do all homework, and if possible extra homework will be a plus. Previous knowledge of calculus is very important. 16. I would advise students to focus a lot on the first topics taught in Differential Equations, especially integrating factors. Students must know how to take derivatives and integrals for this class. There are many problems in this class where integration and derivatives of trigonometric functions need to be performed, so I advise all students to know trigonometric integrals and derivatives before taking differential equations. Finally, it's very important to do all the homework on time, since falling behind with homework will affect your capability to learn the following topics. 17. 1. What do you wish that you had been told at the start of this class, to help you succeed? From my experience, I think the course work was well organized with the webwork homework. The professor's style of teaching was very great from the beginning to the end of the semester. From my perspective, everything that needed to be known to succeed was properly instructed. The only issue was the room where the course was lectured; the room needs to be changed because most students couldn't properly read the board. 2. Choose one topic in the course that is especially challenging. Identify it, and give advice to students trying to master that topic. I think the first part of the course, which focuses on solving partial, homogeneous, integrating factor, and exact equations, was challenging. I would advise students not to slack on homework, to properly understand that part of the course. 3. What is the most important prior knowledge (not taught in the class) that you need in order to succeed?
To succeed in this class or any other class, you need to come on time, and try not to miss a class, to be able to get the full experience of the course and not miss homework. As a student you should always be on your good behavior to pass a course, and if you are having trouble doing well, seek help, because there are always people willing to help you succeed 18. It would have been helpful to have taken Differential Equations right after Calculus II; as with any math course, they tend to follow one another, meaning that you will most likely need to have knowledge of the previous course. That is not to say you just need knowledge of Calculus II; previous math classes play a part as well. Topics I found challenging are Euler Methods and Taylor Series, just because they are long, and you have to keep track of everything. Best to practice them till you get used to them. Slight mistakes can throw everything off; similar to how Calculus is, you have to be good at book keeping. Take good notes and understand your notes; it helps to take side notes as well. Partial fractions sneak up towards the end, as does Taylor Series, as it’s covered in Calculus II towards the end of the semester and usually is just breezed over. MUST KNOW INTEGRALS AND DERIVATIVES, as they are a major factor of the course. Review Calculus material; Professor Reitz does take the time to explain previous course material, but the class time is short so he can’t spend as much time on it. Professor Reitz is great at helping, and he gives you the 5 seconds to think if you have additional questions. 19. I would say differential equations may not be a hard subject if you pay attention in class and finish all of your assigned homework (important). Coming into class without practicing is useless. So, Pay attention + Practice will make learning much easier. “Non-homogeneous” problems are challenging because you have to find the coefficients by doing many steps. So, you have to track what part of the problem you are working on.
You need “Partial Decomposition” knowledge. Although this topic has been taught in Cal-2, that's not enough for understanding, so I recommend watching a YouTube video (Patric) before you come to the class. Lastly, if you don't lose confidence, then you may succeed. 20. I would tell people that this class really isn’t very difficult, but that the topics covered are incredibly important, and there are quite a few of them. I would also recommend brushing up on their calc2 materials (especially the partial fractions stuff, which I still struggle with), and to occasionally work on old homework and class problems. It’s very easy to learn a topic, move on, and forget it after the test. Don’t do that! You’ll save yourself a lot of time and trouble in the long run. □ I agree with everything you said, especially how important it is to remember everything taught throughout the semester and not to forget the information after each exam, since it will all be on the final again. □ One hundred percent! I agree, it’s incredibly easy to finish the final exam for a class and then completely forget all the material learned. It’s best to hold on to old notes and homeworks so you can refresh your mind in preparation for 2680! 21. One topic which gave me trouble is all the Euler’s methods. If I have to say something about all the Euler’s methods, it is that one has to give real importance to the basic Euler’s method, which is easy compared to the others but is the base for all the other methods, like Backward, Improved, and Runge-Kutta, which are coming your way. If you cannot get a grasp of the basic Euler’s method, you will probably struggle with the other methods, which make up a large chunk of the differential equations class. 22. I wish I had been told to pay extra close attention in the beginning of the semester. It was definitely the hardest part of the class and took me by surprise.
If I had to give advice to students taking this class, it would be to do every single homework on time and to try your best to understand the problems. I feel like without doing homework it’s next to impossible to pass this class. You can’t understand math without doing practice outside of class, so I would have to say doing homework is an absolute must. The most important prior knowledge that one might need in order to succeed in this class would definitely be Calc I and II, because without knowing how to take derivatives and how to integrate you simply won’t be able to do these problems. All in all I’d say this class is tough but very doable if you put some time and effort into it; and having a great instructor like professor Reitz definitely helps a lot! □ I have been paying close attention since the beginning, and it is still pretty difficult to stay on track. Even though some of the homeworks took me by surprise, I ended up doing them with a group of friends, learning as we solved the problems. Most of the homework problems we had to learn as we went; it was pretty time consuming and frustrating. But the moment we understood it and correctly solved it, it felt like an accomplishment. ☆ Hi William, Thanks for the comment – but it looks like you left it in my previous class’ OpenLab site (from Spring 2014) by mistake. If you could post it in the current semester’s OpenLab site that would be great. Prof. Reitz 23. I wish I had been told to remember everything in calculus 2, because a lot of it came back in differential equations. One of the topics I found challenging was integrating factors, and to master it you just have to remember the formula for mu by heart.
I think the most important prior knowledge was knowing Taylor series, because it branched out to a lot of important topics such as Euler’s method and the Laplace transform. □ I agree with you on that. I thought I'd be done with Taylor series after cal 2, but it came back again in differential equations. I was able to be better at doing Taylor series problems b/c of cal 2. 24. I think the most important prior knowledge that’s needed for this course is calculus 1 and 2. I say calc 1 and 2 because almost all of the methods we studied involved integrating or taking derivatives. A good understanding of basic algebra and trigonometry is also essential, because most of the problems are arguably 30% differential equations related and 70% algebra related. □ I totally agree with you. This course entails a lot of integration. Without knowledge of derivatives and integration it will definitely be a challenge to succeed. 25. 1. What do you wish that you had been told at the start of this class, to help you succeed? I wish that before leaving Cal 2 I had been told to go over everything learned in class before entering differential equations. 4. What is the most important prior knowledge (not taught in the class) that you need in order to succeed?  Why is it important? One of the biggest pieces of prior knowledge you must master is calculus 1 and calculus 2. Many different techniques taught here in Differentials involve almost everything you learn in Cal 1 & 2, from the very basic derivatives all the way to Taylor series. 26. 1) I wish I had reviewed the calculus 1-2 topics that I learned before at the beginning of the semester. 2) I had problems with exact and separable problems, but doing more practice problems and tutoring helped big time. 3) You need to be good at integration and differentiation or you will struggle in the class. I actually found differential equations easier than cal 1-2. □ I agree, sometimes I would procrastinate and have too much to study. 27.
The most important prior knowledge (not taught in the class) that you need in order to succeed is partial fraction decomposition, different methods of integration, and algebra. You need to know these topics in order to apply them to topics taught in Differential Equations. Make sure to review previous notes from Calculus 2. Knowledge of these topics makes it easier to understand what is taught. Although Prof. Reitz gives a brief review of the topics, it will be more beneficial to be versed in these topics prior to this course. 28. What is the most important prior knowledge (not taught in the class) that you need in order to succeed? Why is it important? Can't stress it enough: review your cal1&2 basics. Derivatives and Integrals! Simple as that; for most of the material you learn in 2680, you will need to have a good understanding of prior knowledge from other mathematics classes. Studying Taylor Series wouldn’t hurt either; a really important chunk of 2680 is Taylor Series, so reviewing that wouldn’t negatively affect you. □ 100% agree, better to go over old stuff 15-30 minutes a day; it will have a lasting effect on how the course can go, and it will definitely be for the better. 29. The most important thing that I found I needed to do was make sure I made time to study the class work after class. Also make sure you do the Webwork problems; you sometimes tend to not do the HW and only focus on webwork problems, so not doing them will hurt you in the long run. The most helpful thing I found was watching videos to supplement the class lectures, with another voice explaining the same topics. Finding someone you like on youtube will make learning a lot easier, for learning concepts, not solving actual problems. 30. What I found helpful was doing a ton of problems. It made me more able to solve the problems. Also you should take notes and never commit anything to memory. 31.
From all the responses, the advice that is most relevant to me personally is that it's very important to review or become familiar with material from calculus 1 and calculus 2, for example, integrals, derivatives, anti-derivatives, and trig identities. I also agree with Gin Pena from your previous differential equations class when he states "If you want to make it even easier, don't take a break from math for one semester like I did (biggest mistake)". The reason why I agree is that since I took calculus 1 and calculus 2 two semesters ago, I do not remember the material very well. This makes it very difficult to solve problems that require prior knowledge. Based on this advice, the changes I can make right now to help me with this course are to review/brush up on calculus 1 and calculus 2. In addition, I can do practice problems to make sure I'm refreshed and fully understand. Lastly, if I have any difficulties I can just ask questions in class, email the professor, go to his office hours, or even ask a classmate for help. □ Whoops – thanks for your comment, Carolina, but it looks like you left it on the OpenLab site for last year’s class. Could you copy/paste it to the current site (so I make sure to include it in my grading)? Prof. Reitz 32. Based on reading these comments from people that took the course, I picked up a few things. One that really stayed on my mind was to study up and review calculus and some topics like derivatives and integrals. One of the previous students stated don’t be afraid to ask questions. I am a person who always asks questions and tries to understand the material to the best of my ability. Another piece of advice I will take from the students that took this course is that I will spend more time on studying and reviewing the topics and materials discussed in class. I will study and keep studying calculus even though I took calculus 2 last semester.
□ Hi John, thanks for the comment – but it looks like you left it in my previous class's OpenLab site (from Spring 2014) by mistake. If you could post it in the current semester's OpenLab site that would be great. Prof. Reitz

This entry was posted in Assignments.
Interferometric Synthetic Aperture Microscopy

Optical coherence tomography (OCT) was a technique developed to create a synthetic aperture time-of-flight interferogram. The underlying OCT assumption is that the interrogating field is extremely high gain (i.e. a single ray). A high gain beam can be achieved through a low-NA (numerical aperture) interrogation system. This system, however, will limit resolution and SNR relative to a converging beam. To achieve high quality image reconstructions with a high-NA interrogation system, an accurate description of the beam at all planes of interest must be incorporated into the sensing model. Since the beam exiting the interrogating lens can be well-approximated by a Gaussian beam, the beam can be thought of as a band-limited spherical wave. The dispersion relation for the return from a Gaussian beam is q² + β² = (2k)², where q and β are the sampled spatial frequencies of the interrogated object in the transverse (q) and axial (β) directions. (q = √(u² + v²), the ℓ2 norm of the spatial frequency vector in the transverse direction.) The inverse problem is solved by resampling the measured OCT data in (x, y, k) to (u, v, β) according to the dispersion relationship and deconvolving out the Gaussian bandpass with a Wiener (Gaussian prior) or sparsity-inducing (Laplacian) filter. The dispersion relation and a simulation demonstrating the performance benefits of ISAM processing with respect to OCT processing are shown in the figures below. The left image shows the data color-coded in lines of constant β as measured in lines of constant k. The right image shows the data resampled into lines of constant β. The left image shows a reconstruction of an array of point scatterers under the OCT assumption. The center image shows the out-of-focus resolution improvement for ISAM.
The right image is the interrogating

Research in ISAM continues at DISP, and future projects will consist of improvements in reconstruction accuracy, time, and adaptive acquisition strategies. More information about ISAM at THz can be found in M. Heimbeck, D. Marks, D. Brady, and H. Everitt, "Terahertz interferometric synthetic aperture tomography for confocal imaging systems," Opt. Lett. 37, 1316-1318 (2012).
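The resampling and deconvolution steps described above can be sketched numerically. The following Python sketch is purely illustrative (it is not the DISP implementation): the function names, grids, and noise-to-signal ratio are all hypothetical, and only the dispersion relation q² + β² = (2k)² is taken from the text.

```python
import numpy as np

# Illustrative sketch of the ISAM inverse problem: resample OCT spectral
# data S(q, k) onto a uniform axial-frequency grid beta using the dispersion
# relation q^2 + beta^2 = (2k)^2, i.e. k = sqrt(beta^2 + q^2) / 2, then
# deconvolve the Gaussian bandpass with a regularized (Wiener-style) filter.
def isam_resample(S, q_axis, k_axis, beta_axis):
    """S has shape (len(q_axis), len(k_axis)); returns (len(q_axis), len(beta_axis))."""
    out = np.zeros((len(q_axis), len(beta_axis)))
    for i, q in enumerate(q_axis):
        # the k value at which each requested beta sample was actually measured
        k_needed = np.sqrt(beta_axis ** 2 + q ** 2) / 2.0
        # linear interpolation along the measured k axis; outside the band -> 0
        out[i] = np.interp(k_needed, k_axis, S[i], left=0.0, right=0.0)
    return out

def wiener_deconvolve(S, H, nsr=0.01):
    """Divide out the (Gaussian) bandpass H with Wiener-style regularization."""
    return S * np.conj(H) / (np.abs(H) ** 2 + nsr)
```

A real pipeline would apply `isam_resample` to each (x, y) spectrum after Fourier transforming to (u, v, k), which is where the dispersion-relation regridding takes place.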
Evaluating the Completeness of Mathematical Semantics in the DIKWP Model: Mapping to Natural Language Semantics

Yucong Duan
International Standardization Committee of Networked DIKWP for Artificial Intelligence Evaluation (DIKWP-SC)
World Artificial Consciousness CIC (WAC)
World Conference on Artificial Consciousness (WCAC)
(Email: duanyucong@hotmail.com)

The Data-Information-Knowledge-Wisdom-Purpose (DIKWP) model serves as a comprehensive framework for understanding cognitive processes and facilitating effective communication between humans and artificial intelligence (AI) systems. Central to this model are its components—Data (D), Information (I), Knowledge (K), Wisdom (W), and Purpose (P)—each endowed with distinct semantic roles. This document critically evaluates the completeness of the mathematical semantics of these DIKWP components by rigorously mapping them to their natural language semantics as defined in standard cognitive and philosophical contexts. The analysis identifies areas where mathematical representations align with, diverge from, or fall short of capturing the nuanced meanings inherent in natural language, offering recommendations to bridge any identified gaps.

1. Introduction

Effective communication and cognitive processing within the DIKWP framework hinge on accurately representing and transforming Data, Information, Knowledge, Wisdom, and Purpose. While mathematical semantics provide a structured and formalized approach to modeling these components, ensuring their completeness in reflecting natural language semantics is paramount for robust human-AI interaction.

This evaluation assesses whether the current mathematical definitions of DIKWP components fully encapsulate their natural language counterparts, as per the standard semantics provided. By doing so, it seeks to ensure that the model can effectively handle the complexities and subtleties of human cognition and communication.

2.
Standard Semantics of DIKWP Components

2.1 Data (D)

Conceptualization: Data in the DIKWP model represents raw, unprocessed facts or observations. It is characterized by specific semantic attributes S = {f_1, f_2, …, f_n}, which enable categorization and recognition within cognitive frameworks. Data is not merely objective recordings but is subjectively interpreted through semantic matching and conceptual confirmation by cognitive entities (humans or AI systems).

Key Aspects:
• Semantic Attributes: Define shared characteristics that allow data categorization (e.g., color, size).
• Semantic Correspondence: Ensures data aligns with cognitive entities' semantic spaces.
• Subjectivity: Data interpretation is influenced by cognitive entities' pre-existing knowledge and context.

2.2 Information (I)

Conceptualization: Information is processed Data that is organized and structured to provide context and meaning. It involves the association of Data semantics with specific cognitive purposes, enabling meaningful interpretation and application.

Key Aspects:
• Contextualization: Organizes raw Data into meaningful structures.
• Semantic Association: Links Data with cognitive purposes to generate relevant insights.
• Dynamic Generation: Information semantics are generated through Purpose-driven processing.

2.3 Knowledge (K)

Conceptualization: Knowledge is further processed Information that is contextualized and understood to form insights. It involves abstraction and generalization, creating structured semantic networks that capture relationships and rules.

Key Aspects:
• Abstraction: Generalizes Information to form broader understanding.
• Semantic Networks: Structured relationships between concepts.
• Dynamic Evolution: Knowledge evolves through continuous cognitive processing and validation.

2.4 Wisdom (W)

Conceptualization: Wisdom encompasses ethical, social, and value-driven insights derived from Knowledge.
It guides decision-making processes, integrating multiple facets of DIKWP content to achieve optimal outcomes aligned with core values and purposes.

Key Aspects:
• Ethical Integration: Incorporates moral and ethical considerations.
• Value Alignment: Ensures decisions align with fundamental values and purposes.
• Holistic Guidance: Provides comprehensive oversight in decision-making.

2.5 Purpose (P)

Conceptualization: Purpose defines the objectives and goals driving cognitive processes. It represents the intent behind data collection, information processing, knowledge generation, and wisdom.

Key Aspects:
• Goal Orientation: Directs cognitive activities towards specific outcomes.
• Dynamic Transformation: Facilitates the transition from current states to desired states.
• Teleological Framework: Embodies the underlying motivations and intentions in cognitive processing.

3. Mathematical Semantics of DIKWP Components

3.1 Data (D)

Mathematical Representation: D = {d | d shares S}
• S = {f_1, f_2, …, f_n} are semantic attributes.
• d represents a specific data instance.
• Alignment with Standard Semantics: Captures the essence of Data as entities sharing specific semantic features.
• Subjectivity Handling: The current representation does not explicitly model the subjective interpretation by cognitive entities.
• Semantic Correspondence: The mathematical definition emphasizes shared attributes but lacks mechanisms for semantic matching and conceptual confirmation.

3.2 Information (I)

Mathematical Representation: I = f_I(D) ⊆ 𝕀
• Alignment with Standard Semantics: Represents the transformation of Data into structured Information.
• Contextualization: The function f_I implicitly handles organization and contextualization.
• Dynamic Generation: Lacks explicit modeling of Purpose-driven processing and semantic association.
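Read literally, f_I and f_K are abstract set-level maps. As a purely illustrative sketch (the attribute-set encoding and all names below are mine, not the paper's, and f_I here already takes Purpose as a parameter, which is the enhancement the paper recommends in section 6.2), the chain D → I → K might look like:

```python
# Hypothetical sketch of the abstract transformations I = f_I(D) and
# K = f_K(I): data as attribute sets, information as data filtered for
# purpose-relevance, knowledge as abstracted attribute statistics.
def f_I(data, purpose):
    """Organize raw data into information relevant to a purpose (attribute set)."""
    return [d for d in data if purpose & d]

def f_K(information):
    """Abstract information into knowledge: attribute co-occurrence counts."""
    counts = {}
    for record in information:
        for attr in record:
            counts[attr] = counts.get(attr, 0) + 1
    return counts
```

The point of the sketch is only the shape of the pipeline: Purpose filters which Data become Information, and Knowledge is an abstraction over the resulting Information.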
3.3 Knowledge (K)

Mathematical Representation: K = f_K(I) ⊆ 𝕂
• Alignment with Standard Semantics: Captures abstraction and generalization from Information to Knowledge.
• Semantic Networks: The current representation does not explicitly model the relationships and structured networks inherent in Knowledge.
• Dynamic Evolution: Lacks mechanisms to represent the continuous cognitive processing and validation.

3.4 Wisdom (W)

Mathematical Representation: W: {D, I, K, W, P} → D*
• Alignment with Standard Semantics: Incorporates ethical and value-driven decision-making.
• Holistic Integration: The function aggregates all DIKWP components to produce decisions.
• Ethical and Value Considerations: Lacks formal modeling of ethical frameworks and value alignment processes.

3.5 Purpose (P)

Mathematical Representation: P = (Input, Output)
• Input: Semantic contents related to Data, Information, Knowledge, Wisdom, or Purpose.
• Output: Desired outcomes or goals achieved through processing.
• Alignment with Standard Semantics: Defines the goal-oriented nature of cognitive processes.
• Dynamic Transformation: Captures the transition from current states to desired outcomes.
• Teleological Framework: Lacks formal representation of underlying motivations and intent in cognitive activities.

4. Completeness Assessment

4.1 Criteria for Completeness

To evaluate the completeness of the mathematical semantics, the following criteria are applied:
1. Exhaustiveness: All relevant cognitive and semantic dimensions are captured.
2. Non-Redundancy: Each component's semantics are distinct without unnecessary overlap.
3. Interoperability: Components interact coherently, reflecting real-world cognitive processes.
4. Alignment with Communication Deficiencies: The semantics support identification and remediation of the 9-No Problems.
5.
Mapping to Natural Language Semantics: Mathematical representations fully encapsulate the nuanced meanings inherent in natural language definitions.

4.2 Evaluation by Component

4.2.1 Data (D)
• Shared Attributes: Effectively captures the idea of Data as entities sharing specific semantic features.
• Categorization: Facilitates categorization based on semantic attributes.
• Subjectivity and Semantic Matching: The mathematical representation does not explicitly model the subjective interpretation and semantic matching processes that cognitive entities employ to recognize and categorize Data.
• Semantic Correspondence with Consciousness Space: Lacks formal mechanisms to represent the correspondence between conceptual and semantic spaces as described in the standard semantics.

Recommendation: Incorporate functions or relations that model the semantic matching and confirmation processes, possibly through fuzzy logic or probabilistic models to account for subjectivity.

4.2.2 Information (I)
• Transformation Function: Captures the organization of Data into Information through f_I.
• Contextualization: Implicitly handles the structuring and contextual relevance of Information.
• Purpose-Driven Processing: Does not explicitly model how Purpose influences the transformation of Data into Information.
• Semantic Association: Lacks formal representation of the association between Data semantics and cognitive purposes.

Recommendation: Enhance the transformation function f_I to incorporate Purpose as a parameter, reflecting how goals influence the organization and contextualization of Data into Information.

4.2.3 Knowledge (K)
• Abstraction: Represents the abstraction of Information into Knowledge.
• Generalization: Captures the generalization aspect through f_K.
• Semantic Networks: Does not model the structured relationships and networks that define Knowledge semantics.
• Dynamic Evolution: Lacks representation of continuous cognitive processing and validation mechanisms that evolve Knowledge over time.

Recommendation: Integrate graph-based structures or semantic networks within the mathematical model of Knowledge to represent relationships between concepts. Additionally, incorporate temporal dynamics to model the evolution of Knowledge.

4.2.4 Wisdom (W)
• Comprehensive Integration: Aggregates all DIKWP components to inform decision-making.
• Ethical and Value-Driven: Acknowledges the role of ethics and values in decision processes.
• Formal Ethical Frameworks: Does not formally represent ethical considerations or value alignment processes.
• Decision Optimization: The model oversimplifies Wisdom as a function producing optimal decisions without detailing the underlying cognitive processes.

Recommendation: Incorporate formal ethical frameworks or utility functions that model value alignment and ethical decision-making processes within the mathematical representation of Wisdom.

4.2.5 Purpose (P)
• Goal Orientation: Clearly defines the input-output relationship driving cognitive processes.
• Dynamic Transformation: Models the transition from inputs to desired outputs.
• Underlying Motivations: Does not formally capture the underlying motivations and intentions that guide Purpose-driven processing.
• Teleological Aspects: Lacks representation of the teleological nature of Purpose, i.e., the intrinsic reasons behind goals.

Recommendation: Expand the mathematical representation of Purpose to include motivational factors, possibly through multi-dimensional vectors or additional parameters that capture intent and underlying motivations.

5.
Mapping to Natural Language Semantics

5.1 Semantic Correspondence

Objective: Ensure that the mathematical semantics of DIKWP components fully correspond to their natural language definitions, capturing nuances such as subjectivity, contextuality, and dynamic evolution.

5.1.1 Data (D)
Natural Language Semantics: Data is subjectively interpreted, categorized based on shared semantic attributes, and involves semantic matching and confirmation processes.
Mathematical Semantics Alignment: The current representation captures shared attributes but lacks mechanisms for subjective interpretation and semantic matching.
Enhancement Needed: Introduce probabilistic or fuzzy logic elements to model semantic matching and subjective categorization, reflecting the cognitive entity's interpretative processes.

5.1.2 Information (I)
Natural Language Semantics: Information involves organizing Data to provide context, driven by specific Purposes, and involves dynamic generation of meaningful insights.
Mathematical Semantics Alignment: Captures organization and structuring through transformation functions but omits the explicit role of Purpose in driving these processes.
Enhancement Needed: Modify transformation functions to include Purpose as a guiding parameter, thereby aligning with the natural language semantics of context-driven information processing.

5.1.3 Knowledge (K)
Natural Language Semantics: Knowledge is abstracted and generalized Information, forming structured semantic networks and evolving through continuous cognitive processes.
Mathematical Semantics Alignment: Represents abstraction and generalization but does not model semantic networks or dynamic evolution.
Enhancement Needed: Incorporate graph-based structures to represent semantic networks and introduce temporal dynamics to model the evolution and validation of Knowledge.
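The graph-based structures suggested for Knowledge above can be sketched minimally. This is a hypothetical illustration (the class and method names are mine, not the paper's): Knowledge as K = (N, E), with concepts as nodes and labeled semantic relations as directed edges.

```python
# Hypothetical minimal sketch of Knowledge as a semantic network K = (N, E):
# nodes are concepts, labeled directed edges are semantic relations.
class SemanticNetwork:
    def __init__(self):
        self.edges = {}  # concept -> list of (relation, target concept)

    def add_relation(self, src, relation, dst):
        self.edges.setdefault(src, []).append((relation, dst))
        self.edges.setdefault(dst, [])  # register the target as a node too

    def related(self, concept):
        """All concepts reachable from `concept`, i.e. its transitive context."""
        seen, stack = set(), [concept]
        while stack:
            for _, nxt in self.edges.get(stack.pop(), []):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen
```

A fuller realization would attach weights or timestamps to edges, which is one way to carry the temporal-dynamics enhancement into the same structure.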
5.1.4 Wisdom (W)
Natural Language Semantics: Wisdom integrates ethical and value-driven insights from Knowledge to guide decision-making, balancing multiple factors beyond technical efficiency.
Mathematical Semantics Alignment: Aggregates DIKWP components but lacks formal representation of ethical frameworks and detailed decision optimization processes.
Enhancement Needed: Embed formal ethical frameworks or utility functions within Wisdom's mathematical representation to capture value alignment and ethical decision-making intricacies.

5.1.5 Purpose (P)
Natural Language Semantics: Purpose defines objectives and goals, driven by underlying motivations and intentions, guiding the transition from current states to desired outcomes.
Mathematical Semantics Alignment: Models the input-output relationship but does not formally capture motivations and intentions.
Enhancement Needed: Expand the mathematical model to include parameters or structures that represent motivations and intentions, thereby capturing the teleological aspects of Purpose.

5.2 Handling Subjectivity and Contextuality

Challenge: Natural language semantics inherently involve subjectivity and contextuality, aspects that are challenging to encapsulate in purely mathematical models.
Approach: Incorporate probabilistic models, fuzzy logic, and multi-dimensional vectors to represent degrees of belief, ambiguity, and context-dependent interpretations within the mathematical model.
• Data Subjectivity: Use fuzzy sets to represent data categorization, allowing for partial memberships based on semantic similarity.
• Information Contextuality: Utilize contextual parameters within transformation functions to adjust information generation based on Purpose and context.

5.3 Dynamic and Evolving Semantics

Challenge: Cognitive processes are dynamic, with semantics evolving over time through continuous learning and adaptation.
Approach: Integrate temporal dynamics and feedback mechanisms within the mathematical semantics to model the evolution of Information, Knowledge, and Wisdom.
• Knowledge Evolution: Use temporal graphs or dynamic networks to represent the evolving relationships between Knowledge concepts.
• Feedback Loops: Implement iterative functions where outputs influence future inputs, allowing for continuous refinement and adaptation.

6. Recommendations for Enhancing Mathematical Semantics

6.1 Incorporate Subjectivity and Semantic Matching
• Fuzzy Logic: Utilize fuzzy sets to allow for partial memberships in Data categorization, reflecting the subjective interpretation.
• Probabilistic Models: Apply Bayesian inference to model the likelihood of semantic matches and confirmations.
Benefit: Captures the cognitive entity's subjective processes in recognizing and categorizing Data, aligning mathematical semantics with natural language interpretations.

6.2 Embed Purpose in Transformation Functions
• Parameterized Functions: Modify f_I and f_K to accept Purpose as an additional parameter: I = f_I(D, P) and K = f_K(I, P).
Benefit: Ensures that the generation of Information and Knowledge is explicitly guided by Purpose, reflecting the context-driven nature of natural language semantics.

6.3 Model Semantic Networks and Relationships in Knowledge
• Graph Theory: Represent Knowledge as semantic networks using graph structures where nodes are concepts and edges represent relationships: K = (N, E), where N is the set of concepts and E is the set of semantic relationships.
Benefit: Accurately models the structured relationships and interdependencies within Knowledge, aligning with natural language's relational semantics.

6.4 Formalize Ethical Frameworks within Wisdom
• Utility Functions: Define utility functions that incorporate ethical and value-driven parameters: W = Utility(K, V), where V represents value parameters.
Benefit: Provides a formal mechanism to integrate ethics and values into decision-making processes, enhancing the alignment of mathematical semantics with natural language's ethical considerations.

6.5 Expand Purpose to Include Motivations and Intentions
• Multi-Dimensional Vectors: Represent Purpose as vectors encompassing motivations, intentions, and goals: P = (Input, Output, M, I), where M represents motivations and I represents intentions.
Benefit: Captures the teleological aspects of Purpose, reflecting the underlying motivations and intentions that drive cognitive processes as described in natural language semantics.

6.6 Integrate Temporal Dynamics and Feedback Mechanisms
• Dynamic Systems: Model the evolution of Information, Knowledge, and Wisdom over time using differential equations or state-space models: I(t+1) = f_I(D(t), P(t)), K(t+1) = f_K(I(t), P(t)), P(t+1) = f_P(D*(t)).
Benefit: Reflects the dynamic and evolving nature of cognitive processes, enabling the model to adapt and refine over time in alignment with natural language's depiction of cognitive evolution.

7. Conclusion

The DIKWP model's mathematical semantics for its components—Data (D), Information (I), Knowledge (K), Wisdom (W), and Purpose (P)—provide a foundational structure for modeling cognitive processes. However, to achieve semantic completeness in mapping to natural language semantics, several enhancements are necessary:
1. Subjectivity and Semantic Matching: Incorporating fuzzy logic and probabilistic models to capture subjective interpretations.
2. Purpose Integration: Embedding Purpose directly into transformation functions to reflect context-driven Information and Knowledge generation.
3. Semantic Networks: Utilizing graph-based representations to model structured Knowledge relationships.
4. Ethical Frameworks: Formalizing ethical considerations within Wisdom to align with value-driven decision-making.
5.
Motivations and Intentions: Expanding Purpose to include motivations and intentions, capturing teleological aspects.
6. Temporal Dynamics and Feedback: Integrating dynamic systems and feedback mechanisms to model the evolving nature of cognitive processes.

By implementing these recommendations, the DIKWP model can more accurately and comprehensively reflect the nuanced meanings and dynamic interactions inherent in natural language semantics, thereby enhancing its effectiveness in facilitating robust human-AI collaborations.

9. Acknowledgments

The author extends gratitude to Prof.
Yucong Duan for his pioneering work on the DIKWP model and foundational theories in information science. Appreciation is also given to colleagues in mathematics, information theory, cognitive science, linguistics, and psychology for their invaluable feedback and insights.

10. Author Information

Correspondence and requests for materials should be addressed to [Author's Name and Contact Information].

Keywords: DIKWP Model, Mathematical Semantics, Semantic Completeness, Communication Deficiencies, Data, Information, Knowledge, Wisdom, Purpose, Information Theory, Cognitive Processes, Human-AI Interaction, Set Theory, Fuzzy Logic, Probabilistic Models, Semantic Networks, Ethical Frameworks
similar triangles

Use Geogebra to determine if triangles are similar when they have two pairs of congruent angles. The exploration sheet and this Geogebra applet will start you on your way. The investigation sheet is available for download. AA Similarity Exploration

Similar Polygon Exploration
Explore the relationship between the ratios of sides of similar polygons. Determine how the perimeters of similar polygons relate to one another. Next explore the relationship between the areas of dilations. Remember the scale factor? How does it factor in? Choose between a coordinate geometry exploration or a Geogebra exploration. Similar Polygons Explorations

Just How Tall Was That Snowman?
You have to love people who make lemonade from lemons. A recent photo circulating on Facebook made me envious, for just a fraction of a second, of the people in Boston who are dealing with the historic snowfalls of the 2014-2015 winter. Snow Pit, South Boston, MA. 2015. Courtesy Bill McKay. The [...]
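The snowman estimate rests on the basic similar-triangles proportion: if a reference object of known height appears at roughly the same distance in the photo, corresponding lengths share one scale factor. A quick illustrative computation (the numbers below are made up, not from the actual photo):

```python
# Illustrative similar-triangles estimate (hypothetical numbers): a person of
# known height and the snowman appear side by side in the same photo, so
# real height / photo height is the same ratio for both.
def estimate_height(ref_real, ref_photo, target_photo):
    """Estimate the target's real height from photo measurements."""
    scale = ref_real / ref_photo  # real units per photo unit
    return target_photo * scale

# e.g. a 1.8 m person measuring 3 cm in the photo, snowman measuring 10 cm
snowman = estimate_height(1.8, 3.0, 10.0)  # about 6.0 m
```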
realizability topos

One thing I find interesting is that your hypotheses automatically imply that the category is extensive (since it is a topos), even though that isn't explicitly assumed. I don't suppose there is a more direct proof that those hypotheses imply extensivity, without going through the proof that it is a realizability topos? I'm also intrigued by the analogy between your object $D$ and the "bound" of a bounded geometric morphism. I don't suppose you have any more to say about that?

Mike: locally cartesian closure is necessary to reconstruct the application from the computable functions. The idea is that the application in the PCA is the realizer of evaluation in the (w)(l)ccc sense. Technically, application is reconstructed via 'functional completeness' of DCOs, see Theorem 5.11.

Okay, thanks, I think I understand.

It's the first time I've ever noticed it (maybe because most articles have more than one bibliographic entry, and I guess you'd rather not apply that trick in case there are several references).

I put vertical whitespace at the bottom of entries whenever I am linking to references-items in the entry from elsewhere. Because if there is not a screen worth of entry below any anchor point, then linking to it is practically impossible, since the user will see not the intended target, but whatever is one screen height above the bottom.

I have a question about the nLab article realizability topos: why is all that space at the bottom? It seems to have been put there for a reason.

I have a question about the article: on a quick skim I wasn't able to puzzle out exactly where local cartesian closure is needed.

I added "finite-limit preserving" to "idempotent monad" since I think that's an important property of $\nabla\Gamma$ and the setting where concepts like separated, closed, and dense behave well. I wonder if there is a preferred terminology for "finite-limit preserving idempotent monad" on the nLab.
I know that Johnstone calls them cartesian reflectors, and that they are called localizations e.g. in the paper that Urs mentioned earlier.

I usually just call them lex idempotent monads. I would have thought that "reflector" would refer to a left adjoint part, and I think I'd say the same for "localization".

Oh, of course, since the global sections are literally maps out of the terminal object here. I have edited the axioms to make that come out more transparently, please check again here.

I like closed as well, and anyway $\nabla\Gamma$ is an idempotent monad so it fits exactly. I might make a revision of the article where I change it. Actually I wrote uniform instead of prone in earlier versions since there is a relation with "uniform objects" in realizability toposes, but then I realized that uniform should be more general (I think it should be defined as left orthogonal to discrete), which is why I changed it. Concerning the paper you cited, I realized that it's relevant but I didn't read it entirely since it's quite long. I'm not very familiar with your work on modal homotopy types, but from what I gathered it seems to be related to chains $\Pi\dashv\Delta\dashv\Gamma\dashv\nabla$ of adjunctions, and similar ideas are relevant in realizability as well.

Ah, thanks for fixing those glitches. Sorry. Regarding "closed": for idempotent monads at least I picked this up from
• Carboni, Janelidze, Kelly, and Paré, "On localization and stabilization for factorization systems", Appl. Categ.
Structures 5 (1997), 1–58, and then it played a central role in the discussion of modal homotopy types and cohesion, notably in Mike's formalization. I like it better than "prone" because it seems to be a much more general concept. On the other hand, it is a bit of an abuse of terminology if the monad is not idempotent. But the real reason I put it into the entry was that it allowed me to quickly link to an existing definition without having to type it afresh. If you have time and energy, you should please feel invited to edit the entry further and feel free to change whatever I jotted down there.

Urs, I see that you wrote "closed" instead of "prone" – actually I like that since it generalizes "closed mono" in the context of localizations/universal closure operations/cartesian reflectors or whatever you want to call them. Are there other authors who use "closed" in this generalized sense?

I corrected two minor mistakes in the article – a mixture of nabla and Delta, and discrete objects are only right orthogonal to closed regular epis, not arbitrary closed maps. Concerning the confusion of nabla and Delta: this confusion has a long history. The situation is that Gamma has a right adjoint for realizability toposes that one wants to call nabla in analogy to the situation in Grothendieck toposes. But this right adjoint is the "constant objects functor" which exists for every tripos-induced topos, and for "localic" triposes the constant objects functor is actually left adjoint to Gamma. For now I put nabla everywhere, but if you prefer Delta I'm fine with that as well.

True, I have fixed it now.

Thanks; I still need to read that article. The global sections is not a geometric morphism for a realizability topos, though, is it? I mean, it is in the other direction, but in the statement wouldn't it be more correct to say "the global sections functor has a right adjoint"?

I'm surprised that he doesn't need to assume extensivity.
Added a pointer to • Jonas Frey, A Characterization of Realizability Toposes (arXiv:1404.6997) and added the statement of the theorem it proves, here. I decided it would be a good idea to split off realizability topos into a separate entry (it had been tucked away under partial combinatory algebra). I’ve only just begun, mainly to get down the connection with COSHEP. A good (free, online) reference is Menni’s thesis. Mike, you raise two central issues here on which I have a whole lot to say, but I don’t know exactly how to start. So I just do it in no particular order. On extensivity: there is an analogue with presheaf toposes. Presheaf toposes are extensive since colimits are “freely added”. Realizability toposes are generalized presheaf toposes in a sense, so it makes sense that they are extensive. However, one has to be careful here. The way realizability toposes can be understood as generalized presheaf toposes is via fibrations – concretely the gluing fibration of $\mathbf{RT}(A)$ along the constant objects functor is the cocompletion of the family fibration of $A$ from finite-limit prestacks to fibered pretoposes (in the sense of my thesis). The analogy to presheaf toposes is that if $A$ is a meet-semilattice, then the gluing fibration of the presheaf topos of $A$ along $\Delta:\mathbf{Set}\to\mathbf{Psh}(A)$ is the same kind of cocompletion of the family fibration of $A$. The subtlety is that the fibrational cocompletion only adds coproducts in the sense of left adjoints to change of fiber, and these only coincide with ordinary colimits in the glued category if the gluing fibration coincides with the family fibration, which is precisely when the constant objects functor has a right adjoint. So the “fibered pretopos completion” might result in a fibration whose fibers don’t have any colimits.
In fact, fibered pretoposes are precisely the gluing fibrations of functors $\mathrm{Set}\to\mathbf{X}$ with exact $\mathbf{X}$, so “fibrational extensivity” comes totally for free – see also Moens’ theorem in “Fibred categories a la Benabou” by Streicher. This doesn’t answer your question, since I think you are interested in finite colimits in the glued category. One way I understand this is via second order logic: in second order logic, disjunction and existential quantification can be defined in terms of the other connectives by universally quantifying over $\Omega$, and this is very related to the fact that we get coproducts for free in elementary toposes. Extensivity is then a manifestation of commutation laws of intuitionistic logic. In this connection it is interesting to look at related constructions that are predicative, and where one can’t perform the second order constructions. A precise instance of this is given by realizability over typed pcas. I think realizability categories over typed pcas do not generally have coproducts, but I don’t have a counterexample. On $D$: yes, that’s the bound. It is helpful to compare to presheaves on meet-semilattices in the presentation sketched above. In general, a bound for presheaf toposes is given by the coproduct of all representables. In our case, the mono $D\to\nabla\Gamma D$ can be viewed as the internal family of all representables in the gluing fibration along $\nabla:\mathbf{Set}\to\mathcal{E}$, and $D\to 1$ is its internal coproduct. In all the above it becomes apparent that $\nabla:\mathbf{Set}\to\mathcal{E}$ should be called $\Delta$ in a fibrational context, since it takes the role of the functor along which we glue. If we call the constant objects functor $\Delta$, then $\Gamma$, being its left adjoint, deserves to be called $\Pi_0$, and we can view the situation as a generalization of ‘totally connected geometric morphisms’.
From this point of view, $D\to\nabla\Gamma D$ can be viewed as a decomposition of $D$ into connected components. Thanks! I should go read your thesis. I recall now that we had a bit of discussion back here. Can you complete the analogy “presheaf topos : realizability topos :: sheaf topos : ?” On the completion of the scheme

presheaf topos : realizability topos
sheaf topos : ?

My basic point of view is that tripos-induced toposes are a generalization of localic toposes. In the following I will focus on this generalized localic case (although the question of finding a common framework for realizability and non-localic Grothendieck toposes is interesting as well). Thus I modify your question to

localic presheaf topos : realizability topos
localic topos : ?

The best candidate for (?) I have is “tripos-induced topos $\mathcal{E}$ such that $\Gamma\dashv\Delta$”. I call this condition “realizability-like” at one point in my thesis. I can show that whenever $\mathcal{E}$ is a tripos-induced topos, then there exists a realizability-like $\mathcal{D}$ such that $\mathcal{E}$ is localic over $\mathcal{D}$, i.e. there exists a localic geometric morphism from $\mathcal{E}$ to $\mathcal{D}$. This can be viewed as a decomposition result for constant objects functors, opposite to “Pitts iteration” (the result that constant objects functors coming from triposes on different bases compose). However, there are some problems with this result. Firstly, the composition is not unique even up to equivalence, and secondly I have the feeling that in order to have a well behaved theory of triposes on arbitrary bases (generalizing the theory of localic geometric morphisms), one should consider “enriched triposes” or “fibered triposes”. This reminds me that I wanted to look into your enriched indexed categories. I hope I find time for that soon. Are there examples of tripos-induced toposes that are neither localic nor realizability?
Enriched indexed categories are on the reading list of the Kan extension seminar, so there will be a cafe post about them sometime soonish. Could there be a homotopified version – a realizability $(\infty, 1)$-topos? Mike: if you mean the restrictive definition of “realizability-like” by $\Gamma\dashv\Delta$, then even most realizability constructions are not “realizability-like”: relative realizability, modified realizability, Krivine realizability and the Dialectica construction don’t fulfill this criterion, I think. So maybe “realizability-like” is not such a good name for the concept. On the other hand, extensional realizability is “realizability-like”, but not “presheaf-like”. The definitions of relative, modified, and extensional realizability triposes and toposes can be found in Jaap van Oosten’s book “Realizability: an introduction to its categorical side”. David: good question, I don’t know enough about $\infty$-toposes. I don’t really see how to add higher dimensional aspects to the logical/operational intuition about realizability, but maybe one could try to give an $(\infty,1)$-analogue of the characterization of realizability toposes? A thought on the syntactic approach: in homotopy type theory, the link between higher categories and syntax is given by identity types. Realizability, however, is intrinsically untyped, so it is not obvious how to make the connection. But there are lambda-terms that behave like symmetries, e.g. by permuting a list of arguments. Maybe one could use them as invertible 2-cells in an appropriate context? Regarding the $\infty$-version: I would just look for the obvious (if any) homotopy-theoretic analogs of your axioms and let the result answer the question of what $\infty$-realizability is. For some of the axioms this is obvious: 1. lcc goes to locally cartesian closed (infinity,1)-category; 2. exactness goes to groupoid objects in an (infinity,1)-category are effective; 3.
the faithful adjoint goes to a faithful adjoint (infinity,1)-functor. I am not sure what to do about the projective objects though. There are two approaches:

• Consider the cubical set model in the effective topos. This should be an elementary higher topos in the sense of Joyal.
• van Oosten’s model structure on the effective topos

@Jonas: Hmm, in that case I find the characterization theorem somewhat less interesting. I would have hoped for a characterization theorem that includes everything that people call “realizability”. I think the right construction of a realizability $(\infty,1)$-topos would be to define $\infty$-exact completion and then apply that in place of the 1-exact completion. Awodey-Bauer have worked on something similar in the (2,1)-case. Cubical/simplicial sets in a realizability topos would be taking the 1-exact completion and then after that the $\infty$-exact completion, which probably isn’t right since exact completion (in the sense of ex/lex completion) isn’t idempotent. Van Oosten’s model structure is intriguing, but my impression is that it’s seeing something different, more akin to the “geometric” homotopy theory in a cohesive topos. Sorry to disappoint you, Mike. To give you all the bad news at once, let me also tell you that the characterization result in the form given in the article depends on the axiom of choice. I write that at the end of the introduction. Realizability toposes over PCAs are only exact completions in the presence of choice, since this is needed to show that partitioned assemblies are projective. Without choice, you have to replace the usual projectivity by a kind of “fibrational projectivity”, relative to the gluing fibration along the constant objects functor. This is done in my thesis.
Concerning a characterization of “everything that people call realizability” – the problem is that the field is simply not very clearly delimited, and I cannot imagine logicians reaching a consensus on the “essence” of realizability any time soon. First of all we have to give up toposes, since typed realizability interpretations give predicative models. In relatively recent work, the logician Jean-Louis Krivine argues that realizability should be a generalization of forcing, and introduces a notion of “realizability algebra” that comprises all complete boolean algebras. Krivine is only interested in classical logic, but if we drop this restriction then, following his philosophy, it would be reasonable to include all locales and localic toposes. At this point, a natural candidate for a general notion of realizability is “all triposes” in my opinion. In his 1982 thesis, Andy Pitts gave a characterization of all tripos-induced toposes $\mathcal{E}$ together with their constant objects functors $\Delta:\mathrm{Set}\to\mathcal{E}$. The explicit inclusion of the constant objects functor in the characterization seems necessary, and a way to understand this is to observe that the constant objects functor is equivalent to a fibering of the topos over $\mathrm{Set}$ via Moens’ theorem. One might say that “realizability toposes are fibered toposes”. The fact that we can avoid mentioning the constant objects functor explicitly in the characterization of realizability over PCAs is kind of a coincidence – it works because the constant objects functor is adjoint to the global sections functor in this case. Summarizing, one could take the point of view that your question for a characterization of “all of realizability” has already been answered by Pitts (at least in the impredicative case). More on this in Section 1.2 of my thesis. [And a remark on the homotopy issue: van Oosten talks about a notion of homotopy in realizability toposes (over PCAs), but I think he doesn’t claim it is an actual model structure.]
@Mike, could you recall for me the statement about exact completion that you have in mind? So I understand that $RT(A)$ is the exact completion of $PAss(A)$ (partitioned assemblies). But for using this for an $\infty$-categorical generalization we would then need to know the $\infty$-version of $PAss(A)$. Is it clear what that should be? I know almost nothing about realizability, so I may well be missing something really basic here. But I observe that to the extent that Jonas’s axioms really are analogous to Giraud’s axioms, as he suggests they are, then since the $\infty$-version of Giraud’s axioms does give Grothendieck $\infty$-toposes, it would seem really natural to ask for the $\infty$-version of Jonas’s axioms. (“Frey’s axioms”, I should say, for Google :-) Actually I have a more general question (and maybe too general a question): given that we now have an axiomatic formulation of what was previously defined only very explicitly in components, does looking at the axioms make any of you want to streamline them further and then declare the result to be a new, more general definition of realizability? One that is justified not (just) by concrete constructions, but by abstract properties? I’m not saying it’s not natural to look for the $\infty$-analogue of Jonas’ axioms, but we should also look for the construction that such axioms would characterize. And the fact that the axioms don’t capture “the whole theory of realizability” suggests to me that it’s more important to focus on the constructions. As for an $\infty$-version of $PAss(A)$, exact completion is analogous to the category of sheaves on a site (in a very precise sense), and when we pass to $\infty$-sheaves we don’t always need to make the 1-site into an $\infty$-site; it often suffices to consider $\infty$-sheaves on the same 1-site. So while it’s also natural to wonder about an $\infty$-version of $PAss(A)$, I don’t think it’s necessary.
Maybe after the semester is over I’ll have time to read your thesis. In the absence of AC, can you consider an RT to be one of my generalized exact completions relative to a unary site, e.g. with covers induced by the surjections of sets? The appearance of fibrations is not all that surprising to me, e.g. the version of Giraud’s theorem for a general base topos also requires a fibered category as input. There also ought to be a version of generalized exact completion for a fibered category, e.g. arity classes are naturally described as families of morphisms in the base category. I seem to remember speculating about this at some point on the cafe, but I didn’t see it in the obvious post that I linked to above. Maybe it was somewhere else. Reading Mike’s and Urs’ comments (25, 27), I realize that I should have been more careful when writing the abstract. I brought up Giraud’s theorem since Johnstone pointed out the lack of such a result for realizability when comparing Grothendieck toposes to realizability toposes. As a first approximation the comparison is ok, simply in the sense that they give characterizations of classes of categories that were previously only described by constructions. However, on closer inspection there are differences – most importantly, realizability toposes shouldn’t be viewed in analogy to sheaf toposes, but to localic presheaf toposes on meet-semilattices. In particular, the central role of projective objects is in analogy to Marta Bunge’s characterization of presheaf toposes as toposes having a generating family of indecomposable projectives (“Internal presheaf toposes”, 1973). I’ll try to make this clearer in a future revision. My result characterizes a very particular class of toposes, and I agree with Mike that one should study more general classes. In my post from yesterday I suggested the class of all “tripos-induced toposes”, and I really think that this is a good framework.
As I said, a “Giraud style characterization” of fibered tripos-induced toposes was in a sense already given by Pitts in his thesis – his characterization of constant objects functors can be translated into such a characterization via Moens’ theorem. Let me spell this out explicitly. Pitts’ characterization of constant objects functors: A finite-limit preserving functor $\Delta:\mathrm{Set}\to\mathcal{E}$ into a topos is a constant objects functor into a tripos-induced topos, iff

1. $\Delta$ is bounded by $1$ (every $A$ in $\mathcal{E}$ is a subquotient of some $\Delta I$), and
2. the indexed poset $\mathrm{sub}_{\mathcal{E}}\circ\Delta$ has a generic predicate ($\mathrm{sub}_{\mathcal{E}}$ is the subobject fibration of $\mathcal{E}$).

This translates into the following characterization of indexed/fibered toposes via Moens’ theorem: An indexed category $\mathcal{E}:\mathbf{Set}^{\mathrm{op}}\to\mathbf{Cat}$ is (the indexed category corresponding to) the gluing fibration of a tripos-induced topos along the constant objects functor, iff

1. all fibers are toposes;
2. $\mathcal{E}$ has internal sums, i.e. all change-of-fiber maps $f^*:\mathcal{E}(I)\to\mathcal{E}(J)$ have left adjoints $\Sigma_f$ subject to the Beck-Chevalley condition;
3. every $X\in\mathcal{E}(1)$ is a subquotient of some $\Sigma_I 1$;
4. there exists a generic family of subterminals, i.e. a mono $m:U\to 1$ in some $\mathcal{E}(A)$ such that every subterminal in any of the fibers of $\mathcal{E}$ can be obtained as a reindexing of $m$.

[All this works only with choice; without choice we need some regularity and pre-stack conditions.] It is illuminating to compare the characterization of constant objects functors to the characterization of localic geometric morphisms. A localic geometric morphism into $\mathbf{Set}$ can simply be characterized as a finite limit preserving functor $\Delta:\mathbf{Set}\to\mathcal{E}$ into a topos which has a right adjoint.
Now the existence of the right adjoint is equivalent to local smallness of the gluing fibration in a fibrational sense (see Streicher’s notes). Thus, tripos-induced toposes are generally not locally small fibrationally. This explains why we need condition 4 explicitly above – if the fibration were locally small, then it would also be well-powered and the existence of a generic family of subterminals would come for free. @Mike in 30: Yes, you can use unary topologies in the absence of choice, as Wouter Stekelenburg pointed out here. In this case the unary covers are the epicartesian maps, i.e. cartesian arrows over surjective functions in the fibration $\mathbf{PAsm}\to\mathbf{Set}$. What a great thread this is. I had been thinking about one of the topics here, namely the possibility of higher effective/realizability toposes, and whether directly adjoining higher inductive types to the semantics of an effective topos would successfully promote it to a higher setting. This is desirable if we want to discuss, for example, notions of computability adapted to homotopy type theory. There’s now the work on cubical assemblies and on [path categories]. There’s some more discussion on the HoTT nlab here, but that page needs updating. Added a sketch of the construction of the realizability tripos over a PCA. diff, v21, current This seems to be described also at partial combinatory algebra. Added the fact that a realizability topos is also an ex/reg completion of the non-partitioned assemblies, and that the ex/lex completion property depends on choice in Set. diff, v22, current Re #36: yes, and also at tripos. I just didn’t like the look of the one subsection just saying “see tripos”; it didn’t seem very friendly to the reader. The natural place to put the construction would be on a page like realizability tripos, but it seems natural to include at least the basic definition within the text of all three pages.
I suppose we could eliminate the redundancy by making it a snippet that gets !included into all three pages, but I’m not sure that’s worth the effort. Nah, it wasn’t criticism of what you did; I just didn’t know whether you’d seen it. It’s all fine. The extent function for an assembly must take values in nonempty subsets. diff, v23, current
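One of the comments above mentions adding a sketch of the realizability tripos over a PCA; for reference, here is a hedged summary of the standard construction (the notation $\mathrm{P}$, $\varphi$, $\psi$ is mine, not taken from the thread):

```latex
% Predicates on a set I: functions into the powerset of the PCA A
\mathrm{P}(I) \;=\; \mathcal{P}(A)^{I}

% Entailment: \varphi \vdash_I \psi holds iff a single element of A
% uniformly realizes the implication at every index i
\varphi \vdash_I \psi
  \;\iff\;
  \exists\, a \in A.\ \forall i \in I.\ \forall b \in \varphi(i).\
    a \cdot b \downarrow \ \wedge\ a \cdot b \in \psi(i)
```

The uniformity of the realizer $a$ across all of $I$ is what distinguishes this from a pointwise (localic) construction.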
Entropy - its practical use | Spirax Sarco

Example 2.16.2 Consider the steam conditions in Example 2.16.1, with steam passing through a control valve with an orifice area of 1 cm². Calculate the maximum flow of steam under these conditions. The downstream steam is at 6 bar a, with a dryness fraction of 0.871 8. The specific volume of dry saturated steam at 6 bar a (vg) is 0.315 6 m³/kg. The specific volume of saturated steam at 6 bar a and a dryness fraction of 0.871 8 is therefore 0.315 6 m³/kg × 0.871 8 = 0.275 1 m³/kg. The heat drop in Example 2.16.1 was 86.95 kJ/kg; consequently the velocity can be calculated using Equation 2.16.3. The calculations in Example 2.16.2 could be carried out for a whole series of reduced pressures and, if done, would reveal that the flow of saturated steam through a fixed opening increases quite quickly at first as the downstream pressure is lowered. The increases in flow become progressively smaller with equal increments of pressure drop and, with saturated steam, these increases actually become zero when the downstream pressure is 58% of the absolute upstream pressure. (If the steam is initially superheated, CPD will occur at just below 55% of the absolute upstream pressure.) This is known as the ‘critical flow’ condition, and the pressure drop at this point is referred to as the critical pressure drop (CPD). After this point has been reached, any further reduction of downstream pressure will not give any further increase in mass flow through the opening. In fact, if the curves of steam velocity (u) and sonic velocity (s) were drawn for saturated steam in a convergent nozzle (Figure 2.16.2), it would be found that the curves intersect at the critical pressure. P1 is the upstream pressure, and P is the pressure at the throat.
The explanation of this, first put forward by Professor Osborne Reynolds (1842 - 1912) of Owens College, Manchester, UK, is as follows: Consider steam flowing through a tube or nozzle with a velocity u, and let s be the speed of sound (sonic velocity) in the steam at any given point, s being a function of the pressure and density of the steam. Then the velocity with which a disturbance such as, for example, a sudden change of pressure P, will be transmitted back through the flowing steam will be s - u. Referring to Figure 2.16.2, let the final pressure P at the nozzle outlet be 0.8 of its inlet pressure P1. Here, as the sonic velocity s is greater than the steam velocity u, s - u is clearly positive. Any change in the pressure P would produce a change in the rate of mass flow. When the pressure P has been reduced to the critical value of 0.58 P1, s - u becomes zero, and any further reduction of pressure after the throat has no effect on the pressure at the throat or the rate of mass flow. When the pressure drop across the valve seat is greater than critical pressure drop, the critical velocity at the throat can be calculated from the heat drop in the steam from the upstream condition to the critical pressure drop condition, using Equation 2.16.5.
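The arithmetic of Example 2.16.2 can be checked numerically. Equation 2.16.3 itself is not reproduced in this excerpt, so the sketch below uses the usual steady-flow energy form u = √(2·Δh) with Δh in J/kg, which reproduces the example's figures; the function names are illustrative, not from the article.

```python
from math import sqrt

def nozzle_velocity(heat_drop_kj_per_kg):
    """Steam velocity (m/s) from an isentropic heat drop given in kJ/kg."""
    return sqrt(2.0 * heat_drop_kj_per_kg * 1000.0)

def mass_flow(area_m2, velocity_m_s, specific_volume_m3_kg):
    """Mass flow (kg/s) through an opening: m_dot = A * u / v."""
    return area_m2 * velocity_m_s / specific_volume_m3_kg

# Figures from Example 2.16.2
heat_drop = 86.95             # kJ/kg, from Example 2.16.1
v_wet = 0.3156 * 0.8718       # m3/kg: dry-saturated volume x dryness fraction
area = 1e-4                   # m2 (the 1 cm2 orifice)

u = nozzle_velocity(heat_drop)      # roughly 417 m/s
m_dot = mass_flow(area, u, v_wet)   # kg/s; multiply by 3600 for kg/h
```

With these inputs the maximum flow comes out at roughly 0.15 kg/s (about 546 kg/h), illustrating the critical-flow condition discussed above.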
For 2 x n Cases, Proportional Fitting Problem Reduces to a Single Equation

Technical Report: UTEP-CS-24-36

In many practical situations, for each of two classifications, we know the probabilities that a randomly selected object belongs to different categories. For example, we know what proportion of people are below 20 years old, what proportion is between 20 and 30, etc., and we also know what proportion of people earns less than 10K, between 10K and 20K, etc. In such situations, we are often interested in the proportion of people who fall into a given pair of categories under the two classifications. For example, we are interested in the proportion of people whose age is between 20 and 30 and whose income is between 10K and 20K. If we do not have detailed records of all the objects, we select a small sample and count how many objects from this sample belong to each pair of categories. The resulting proportions are a good first-approximation estimate for the desired proportion. However, for a random sample, the proportions of each category are, in general, somewhat different from the proportions in the overall population. Thus, the first-approximation estimates need to be adjusted, so that they fit with the overall-population values. The problem of finding proper adjustments is known as the proportional fitting problem. There exist many efficient iterative algorithms for solving this problem, but it is still desirable to find classes for which even faster algorithms are possible. In this paper, we show that for the case when one of the classifications has only two categories, the proportional fitting problem can be reduced to solving a polynomial equation of order equal to the number n of categories of the second classification. So, for n = 2, 3, 4, explicit formulas for solving quadratic, cubic, and quartic equations lead to explicit solutions for the proportional fitting problem. For n > 4, fast algorithms for solving polynomial equations lead to fast algorithms for solving the proportional fitting problem.
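The abstract contrasts its closed-form reduction with the existing iterative algorithms. The paper's polynomial equation is not reproduced here, but a minimal sketch of the classical iterative proportional fitting loop (the baseline those faster methods improve on) looks like this; the 2×3 sample counts and margins are made-up illustration data:

```python
def ipf(seed, row_targets, col_targets, iters=100):
    """Scale a seed matrix so its margins match the target row/column sums."""
    x = [row[:] for row in seed]
    for _ in range(iters):
        # scale rows to match the target row sums
        for i, r in enumerate(row_targets):
            s = sum(x[i])
            if s > 0:
                x[i] = [v * r / s for v in x[i]]
        # scale columns to match the target column sums
        for j, c in enumerate(col_targets):
            s = sum(x[i][j] for i in range(len(x)))
            if s > 0:
                for i in range(len(x)):
                    x[i][j] *= c / s
    return x

# 2 x 3 case: sample counts adjusted to known population margins
sample = [[10, 20, 30], [30, 20, 10]]
rows, cols = [0.5, 0.5], [0.3, 0.3, 0.4]
fitted = ipf(sample, rows, cols)
```

Each pass alternately rescales rows and columns; for a positive seed matrix the margins converge geometrically, which is exactly the iteration the paper's single-equation solution avoids in the 2 × n case.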
How to calculate the corrugated board on the roof: Detailed information

One of the most important steps in installing corrugated metal roofing is figuring out how much corrugated board you’ll need. You’ll save time and money by purchasing the correct quantity of materials thanks to this process. This simple guide will assist you in figuring out how much corrugated board you’ll need for your roof. Measuring your roof’s dimensions is the first step. To begin, measure the width and length of each section of the roof where the corrugated metal sheets are going to be installed. If the shape of your roof is complicated, divide it into easier shapes like squares or rectangles. This will greatly simplify the area calculation. After obtaining the measurements, multiply the length by the width of each section to determine its total area. One section of your roof, for example, would measure 20 feet long by 10 feet wide, or 200 square feet (20 x 10 = 200). For every area of your roof, perform this computation once more. Next, take your roof’s pitch or slope into consideration. The majority of roofs are slightly angled to aid in water runoff. If possible, measure the pitch angle or make a visual estimate. Pitch is important for precise measurements because it determines how the corrugated sheets fit and overlap. Now think about the area that each corrugated metal sheet covers. Manufacturers usually specify the area that one sheet is intended to cover. A typical corrugated metal sheet, for instance, might be 32 square feet in size. To find the number of sheets you’ll need, divide this number by the total area of each roof section. In order to account for waste or cutting adjustments, add a buffer to your calculations at the end. In order to make sure you have enough material to cover overlaps, corners, and any measurement errors, it is advisable to add an additional 10% to 15%.
This buffer guarantees that supplies won’t run out in the middle of the project. You can determine with confidence how much corrugated board you’ll need for your roof by following these steps: measure precisely, take the pitch into account, calculate coverage per sheet, and add a buffer. This planning guarantees a more seamless installation process and aids in cost containment.

What technical characteristics should be taken into account during calculations

The following indicators of the profiled sheet affect the calculation. Thickness. Three types of sheet are currently sold: wall, roofing, and universal. The standards prescribe that the thinnest steel may be used for wall profiles, while the roofing profile uses the thickest; the universal type lies in between. But this classification is very loose — unscrupulous manufacturers try to deceive consumers by making products from thinner steel. It is impossible to check the thickness of a sheet without laboratory equipment, since the metal is covered with several layers of other materials. When calculating, it is therefore better to err on the safe side: increase the overlap length and reduce the batten spacing. The thickness of the corrugated board is a crucial factor when selecting material. The dimensions of the corrugated board’s wave also matter: the more intricate the roof, the higher the proportion of corrugated board that ends up as waste. “There are easy steps to follow when calculating corrugated roofing sheets for your roof to make sure you have the correct quantity of material for the job. You can calculate the number of sheets required without wasting material by taking precise measurements of the roof’s dimensions and taking overlaps into consideration. This guide helps homeowners and do-it-yourself enthusiasts alike confidently plan and carry out their roofing projects by breaking the process down into simple, doable steps.”
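The step-by-step estimate described above (area × pitch factor ÷ sheet coverage, plus a 10–15% buffer) can be condensed into a few lines. The sheet coverage and buffer values are the illustrative figures from the text; the pitch factor is an assumed slope correction, not a standard:

```python
import math

def sheets_needed(length_ft, width_ft, pitch_factor=1.0,
                  sheet_coverage_sqft=32.0, waste_buffer=0.10):
    """Estimate corrugated sheets for one rectangular roof section.

    pitch_factor -- slope correction (>1.0 for pitched roofs)
    waste_buffer -- extra fraction for cuts/overlaps (10-15% suggested)
    """
    plan_area = length_ft * width_ft              # step 1: length x width
    sloped_area = plan_area * pitch_factor        # step 2: account for pitch
    sheets = sloped_area / sheet_coverage_sqft    # step 3: divide by coverage
    return math.ceil(sheets * (1.0 + waste_buffer))  # step 4: add buffer

# Example from the text: a 20 ft x 10 ft section (200 sq ft)
n = sheets_needed(20, 10, pitch_factor=1.0, waste_buffer=0.10)
```

For the 200 sq ft example section this gives 200 / 32 × 1.10 ≈ 6.9, rounded up to 7 sheets; repeat per section and sum the results.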
How to use a calculator

The quickest and most straightforward way to estimate the corrugated board for a roof is to use a specialized calculator. Before starting construction, you must choose the type of roof and the sheet specifications. The calculator needs the following source data:

• the effective (working) size of the sheet, not the overall size;
• the linear dimensions of rectangular slopes and their number;
• the linear dimensions of triangular slopes (hips) and their number;
• the sizes of trapezoidal slopes and their number.

To compute, you must know the roof’s precise measurements. A straightforward program lets the calculator work out and sum the areas of the individual geometric figures. Experienced roofers rarely use such computations: too many unanticipated factors affect the final result, and no program can account for every possible roof configuration. Use such a calculator only for indicative values, and even then with caution. It ignores the length of the sheets (which affects the number of joints), the angle of the slopes (which affects the overlap width), and so on. Treat the calculator’s output as approximate.

A practical example of calculating roofing material

The number of profiled sheets can be determined with trigonometric formulas. However, not every builder remembers what cos or arctan is, where to find them, or what to do with them afterwards. We therefore offer the most straightforward method — all you need is the material covered in first-grade arithmetic lessons. Consider the gable roof, which is the most common and basic type.
The amount of corrugated board and metal tiles can be calculated in the same way; these roofing materials have the same laying technologies, extra elements, and fixing techniques, as well as the same performance characteristics. There are minor variations in price and appearance, but these do not affect the calculation. To help with comprehension, the instructions work through a concrete roof size with a specific metal-tile profile and its rafter system. Step 1. Find out the size of the sheets. The manufacturer indicates both the overall and the working width of the sheets. The working width accounts for the wave overlap during installation. In our case, the overall width is 119 cm and the working width 110 cm. The wider wave always lies on top, which ensures a tight fit: the upper wave completely covers the lower one, so snow or rain cannot penetrate the under-roof space. Corrugated board has no such asymmetric waves — its ribs are parallel and all of the same height — which slightly simplifies the calculation: this parameter need not be considered when choosing the ridge element and wind strips. For a profiled sheet you can buy strips with turned-down edges that close the small gap between the roof and the battens. Step 2. Determine the rafter system’s dimensions. In our case, the distance between the two extreme rafter legs is 11 m 67 cm, and between the building’s walls 10 m 60 cm. Given the size of the house, the rafter system must be designed in compliance with all regulations and provide the required rigidity and load-bearing capacity. Important. To reduce waste of corrugated board and simplify the roofing work, it is recommended to fit the dimensions of the battens to the size of the profiled sheet. This not only avoids having to trim sheets, but also improves the appearance of the roof.
It will be symmetrical, and the screws will sit at equal spacing. Another problem with cut sheets: the cut metal edges quickly oxidize. The rust itself is not critical, and the metal does not deteriorate quickly; the problem is that rust washed down by rainwater stains the roof coating with ugly red streaks. They are impossible to remove, because the iron oxides eat deeply into the surface of the coating.
Step 3. Calculate the number of sheets of metal profile. In our case the roof needs 11 sheets: ten at the working width plus the last at its full width, 10 × 110 cm + 119 cm = 12 m 19 cm along the ridge. During the calculations it turned out that 11 sheets at the working width alone would fall short, so the coverage is extended to reach 12 m 19 cm at the ridge. In this case the gable overhang will be 12 m 19 cm − 10 m 60 cm = 1 m 59 cm, or about 79.5 cm per side. If such a distance seems large, it can be reduced to acceptable values in multiples of the wave width of the roofing material. After choosing the optimal option, the length of the roof along the ridge came to 12 m 1 cm. Taking the sheet sizes into account, the lathing for the roof should measure 12 m 1 cm along the ridge and 7 m 97.5 cm along the slope.
Estimating the roof's dimensions
Step 4. Transfer the ridge length to the opposite lower side of the roof (the cornice), marking off 7 m 97.5 cm down the slope. Carefully check that the corners are perfectly square: if the first row is uneven, all subsequent rows will be stepped. It is extremely difficult to straighten the sheets by slightly shifting them, and manufacturers advise against doing so at all. If the rows are not aligned, several problems follow.
1. The gaps at the sheet overlaps increase, creating a risk of wind-driven snow or rain getting in.
To eliminate the problem it is necessary to increase the overlap length, and this changes all the calculations and increases the consumption of expensive roofing material.
2. At the bends the sheets touch each other not across a surface but along a line. This concentrates significant loads along those lines, which the thickness of the profiled sheets is not designed for. As a result the coating can bend in these areas, the protective layers are damaged, and oxidation accelerates. Later, leaks requiring roof repair appear at such points.
To verify that the lathing's corners are square, measure the diagonals of the rectangle; they must be equal. If they differ, keep the ridge side fixed and shift only the cornice section of the lathing in the required direction.
The precise slope sizes
Sensible guidance. Speak with a manufacturer's representative before placing an order. The sheet lengths must be coordinated, and depending on production capabilities and delivery constraints they may need to be adjusted. When buying a finished metal profile in standard sizes, remember that the overlap varies with the slope angle.
Step 5. Compute the extra components. Order them together with the roofing. Most producers make components in standard lengths of one and two meters. Installing a gable roof will require these extra components, for example the cornice strip (drip edge).
This completes the preliminary roof calculations. The cost can be estimated by multiplying the number of elements by their prices. However, it is not that simple: the cost of the roof includes more than just the coating, so you must factor in the entire construction estimate and perform the most thorough computation possible.
Volume calculations for ordering
To calculate the amount of corrugated board needed for your roof:
1. Measure the length and width of your roof in meters.
2. Multiply the length by the width to get the area of the roof in square meters.
3. Check the size of the corrugated boards available and calculate how many you need to cover the roof.
4. Consider buying a little extra to account for cutting and any mistakes.
5. Add about 10% to your total for overlapping and trimming.
Every roofing project needs to account for the quantity of corrugated board required. Doing so guarantees you purchase the right amount of material and prevents unforeseen expenses or shortages. First measure the width and length of each roof section where the corrugated board will be installed; accurate measurements are essential. Next, multiply length by width to find the area of each section. For instance, a section 20 feet long and 10 feet wide has an area of 200 square feet (20 × 10 = 200). Repeat this for every section of the roof that the corrugated board will cover. Then add the section areas to get the total area to be covered; this total determines how much corrugated board to buy. Add a small amount (usually about 10%) to account for waste from cutting and installation. Since corrugated boards are typically sold in standard sizes, check the sizes available and determine how many boards you need for your total area. Larger boards may mean fewer pieces, but you will still need to trim them to fit your roof precisely. Finally, consider your roof's pitch and configuration: a complicated shape or many angles may require you to adjust the calculations.
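The ordering steps above can be sketched as a small Python helper. This is a hypothetical illustration, not part of any roofing calculator: the function name, the section sizes, the board size, and the 10% waste factor are assumptions taken from the worked description above.

```python
import math

# Hypothetical sketch of the ordering steps above: sum the section areas,
# add ~10% for overlap and trimming, then divide by one board's
# effective (working) area to get a board count.
def boards_needed(sections, board_area_m2, waste_factor=0.10):
    """sections: list of (length_m, width_m) tuples, one per roof section."""
    total_area = sum(length * width for length, width in sections)
    with_waste = total_area * (1 + waste_factor)
    return math.ceil(with_waste / board_area_m2)

# Example: two 8 m x 6 m gable slopes covered by boards with a
# 1.00 m x 2.00 m working size (2.0 m2 each).
print(boards_needed([(8, 6), (8, 6)], board_area_m2=2.0))  # → 53
```

Rounding up with `math.ceil` matters: you cannot buy a fraction of a sheet, so any remainder costs one extra board.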
To guarantee a seamless and effective roofing project, confirm all measurements and computations one last time before buying supplies.
Lumpsum Calculator - Expense.club
About Lumpsum Calculator
A lumpsum calculator is a financial tool used to estimate the value an investor would receive by withdrawing an entire investment at once rather than over a period of time. This type of calculator is commonly used to estimate the value of a retirement account, savings account, or other investment portfolio. It typically requires inputs such as the current value of the investment, the expected rate of return, and the number of years until the investment is to be withdrawn. The calculator then computes the lump sum that would be received at the time of withdrawal, which can help in planning for retirement or other long-term financial goals.
Lumpsum Interest Calculation
Formula for lumpsum interest calculation: A = P (1 + R/N) ^ NT, where
A: estimated returns
P: present value of the investment
R: estimated rate of return
T: tenure (years)
N: number of compounding periods in a year
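A minimal sketch of the formula above in Python. The function name and the example figures are ours, not taken from the calculator itself.

```python
# Hypothetical sketch of the lumpsum formula A = P * (1 + R/N) ** (N * T).
def lumpsum_value(principal, annual_rate, years, periods_per_year=1):
    """Future value of a one-time investment under compound interest."""
    return principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)

# Example: 100,000 invested at 8% a year, compounded annually for 10 years.
value = lumpsum_value(100_000, 0.08, 10)  # about 215,892
```

Note that more frequent compounding (a larger N) yields a slightly larger result for the same nominal rate.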
Proportional hazards regression — details_proportional_hazards_glmnet Proportional hazards regression For this engine, there is a single mode: censored regression Tuning Parameters This model has 2 tuning parameters: • penalty: Amount of Regularization (type: double, default: see below) • mixture: Proportion of Lasso Penalty (type: double, default: 1.0) The penalty parameter has no default and requires a single numeric value. For more details about this, and the glmnet model in general, see glmnet-details. As for mixture: • mixture = 1 specifies a pure lasso model, • mixture = 0 specifies a ridge regression model, and • 0 < mixture < 1 specifies an elastic net model, interpolating lasso and ridge. Translation from parsnip to the original package The censored extension package is required to fit this model. ## Proportional Hazards Model Specification (censored regression) ## Main Arguments: ## penalty = 0 ## mixture = double(1) ## Computational engine: glmnet ## Model fit template: ## censored::coxnet_train(formula = missing_arg(), data = missing_arg(), ## weights = missing_arg(), alpha = double(1)) Preprocessing requirements Factor/categorical predictors need to be converted to numeric values (e.g., dummy or indicator variables) for this engine. When using the formula method via fit(), parsnip will convert factor columns to indicators. Predictors should have the same scale. One way to achieve this is to center and scale each so that each predictor has mean zero and a variance of one. By default, glmnet::glmnet() uses the argument standardize = TRUE to center and scale the data. Other details The model does not fit an intercept. The model formula (which is required) can include special terms, such as survival::strata(). This allows the baseline hazard to differ between groups contained in the function. (To learn more about using special terms in formulas with tidymodels, see ?model_formula.) 
The column used inside strata() is treated as qualitative no matter its type. This is different than the syntax offered by the glmnet::glmnet() package (i.e., glmnet::stratifySurv()) which is not recommended here. For example, in this model, the numeric column rx is used to estimate two different baseline hazards for each value of the column: mod <- proportional_hazards(penalty = 0.01) %>% set_engine("glmnet", nlambda = 5) %>% fit(Surv(futime, fustat) ~ age + ecog.ps + strata(rx), data = ovarian) pred_data <- data.frame(age = c(50, 50), ecog.ps = c(1, 1), rx = c(1, 2)) # Different survival probabilities for different values of 'rx' predict(mod, pred_data, type = "survival", time = 500) %>% bind_cols(pred_data) %>% ## # A tibble: 2 x 5 ## .eval_time .pred_survival age ecog.ps rx ## <dbl> <dbl> <dbl> <dbl> <dbl> ## 1 500 0.666 50 1 1 ## 2 500 0.769 50 1 2 Note that columns used in the strata() function will also be estimated in the regular portion of the model (i.e., within the linear predictor). Predictions of type "time" are predictions of the mean survival time. Linear predictor values Since risk regression and parametric survival models are modeling different characteristics (e.g. relative hazard versus event time), their linear predictors will be going in opposite directions. For example, for parametric models, the linear predictor increases with time. For proportional hazards models the linear predictor decreases with time (since hazard is increasing). As such, the linear predictors for these two quantities will have opposite signs. tidymodels does not treat different models differently when computing performance metrics. To standardize across model types, the default for proportional hazards models is to have increasing values with time. As a result, the sign of the linear predictor will be the opposite of the value produced by the predict() method in the engine package. 
This behavior can be changed by using the increasing argument when calling predict() on a model object. Case weights This model can utilize case weights during model fitting. To use them, see the documentation in case_weights and the examples on tidymodels.org. The fit() and fit_xy() arguments have arguments called case_weights that expect vectors of case weights. Saving fitted model objects This model object contains data that are not required to make predictions. When saving the model for the purpose of prediction, the size of the saved object might be substantially reduced by using functions from the butcher package. References • Simon N, Friedman J, Hastie T, Tibshirani R. 2011. “Regularization Paths for Cox’s Proportional Hazards Model via Coordinate Descent.” Journal of Statistical Software, Articles 39 (5): 1–13. • Hastie T, Tibshirani R, Wainwright M. 2015. Statistical Learning with Sparsity. CRC Press. • Kuhn M, Johnson K. 2013. Applied Predictive Modeling. Springer.
Simplifying numerical fractions
Simplifying numerical fractions is a very useful skill, and knowing how to make fractions simpler can make a world of difference. If you are solving an equation that contains fractions with large numerators or denominators, reducing those fractions to lowest terms will make finding the result much easier. To simplify a fraction, the numerator and denominator must share a common divisor. For example, the fraction 20/40 can be simplified because both the numerator and the denominator can be divided by the same number (in this case, 20), which gives 1/2. Dividing both the numerator and the denominator by the same number is essential, since it is the only way to keep their ratio, and hence the value of the fraction, unchanged. If you want to simplify the fraction as much as possible, the number you divide by should be the greatest common factor of the numerator and the denominator. For practice, you can use the worksheets below. They should provide you with enough practice material to help you master this skill as soon as possible.
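The procedure described above, dividing both parts by their greatest common factor, can be sketched in a few lines of Python. The helper name is ours; this is illustrative only.

```python
from math import gcd

# Minimal sketch: reduce a fraction to lowest terms by dividing both the
# numerator and the denominator by their greatest common factor.
def simplify(numerator, denominator):
    g = gcd(numerator, denominator)
    return numerator // g, denominator // g

# Example from the text: 20/40 reduces to 1/2.
print(simplify(20, 40))  # → (1, 2)
```

Because the greatest common factor is used, one division is enough; dividing by a smaller common factor would leave a fraction that can be reduced further.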
Simplifying numerical fractions exams for teachers
Proper fractions
• Simplifying proper fractions – easy (460.6 kB, 3713 downloads, September 3, 2019)
• Simplifying proper fractions – medium (460.6 kB, 2960 downloads, September 3, 2019)
• Simplifying proper fractions – hard (461.8 kB, 3005 downloads, September 3, 2019)
Mixed numbers
• Simplifying mixed numbers – easy (459.8 kB, 3363 downloads, September 3, 2019)
• Simplifying mixed numbers – medium (460.1 kB, 2579 downloads, September 3, 2019)
• Simplifying mixed numbers – hard (460.2 kB, 2772 downloads, September 3, 2019)
Improper fractions
• Simplifying improper fractions – easy (460.5 kB, 2791 downloads, September 3, 2019)
• Simplifying improper fractions – medium (460.5 kB, 2711 downloads, September 3, 2019)
• Simplifying improper fractions – hard (460.7 kB, 2543 downloads, September 3, 2019)
• Simplifying improper fractions into mixed numbers – easy (460.7 kB, 2511 downloads, September 3, 2019)
• Simplifying improper fractions into mixed numbers – medium (459.9 kB, 2232 downloads, September 3, 2019)
• Simplifying improper fractions into mixed numbers – hard (460 kB, 2703 downloads, September 3, 2019)
Simplifying numerical fractions worksheets for students
• Simplifying proper fractions (503.3 kB, 3007 downloads, September 3, 2019)
• Simplifying mixed numbers (509.2 kB, 2758 downloads, September 3, 2019)
• Simplifying improper fractions into mixed numbers (506.3 kB, 2425 downloads, September 3, 2019)
• Simplifying improper fractions (509.5 kB, 2436 downloads, September 3, 2019)
25 Picture Books for Math Problem Solving
25 Marvelous Math Picture Books for Kids
Preschool, elementary & middle school kids will LOVE these math picture books that teach early and advanced mathematical concepts! So here’s the issue at our house — we’ve got one child who is a math whiz and another who wouldn’t mind if math was wiped off the face of the earth. The one thing they both have in common is neither enjoys any math homework: “It’s boring, Mom!” “Yes, I know. Worksheets aren’t always the best way to enjoy math. But you still need to do your homework.” This is how it goes for so many things in life, right? Practice isn’t always as much fun as ‘the real thing’! So since math is important in life, I try to introduce them to non-school, non-worksheet ways to gain a better understanding of math concepts. As a parent, I have to find ways for them to practice, enjoy it and find it relevant to their life. Otherwise, I will be forever helping them to figure out if they have enough allowance money to buy the souvenir they want when we’re on vacation!
Math Picture Books for Preschool thru Middle School
Luckily, both of my kids adore books. And even better — there are some AWESOME books that integrate math into the story. Authors can find such entertaining ways to weave a boring subject into a good story 😉 This week’s Discover & Explore topic is Math so I thought it was the perfect time to share some of our best-loved math stories. The ones that even kids who hate math will love to read. I’ve included affiliate links to each of the books on our list so you can review them further to see if they would be a good option for making math more fun with your kids or students. And as you read along, challenge your kids to solve the problems before they turn the page —
Math Picture Books for Preschool & Elementary Kids
Learning to identify numbers and then counting items are the earliest introductions to math for most kids.
First let me say, you don’t need a math-focused book to practice these concepts. You can count the items in any book you read, and many stories will include page numbers or integrate numbers in their text (such as the number of months in a year or the number of eggs in a basket). Books with a main theme that focuses on counting are a nice addition to reading time because they let children practice their numbers through repetition. Kids will count (sometimes forward, sometimes backward) as they read the story.
Richard Scarry’s Best Counting Book Ever
Doggies – A Sandra Boynton Board Book
How Do Dinosaurs Count To Ten? – a hilarious look at counting
Emily’s First 100 Days of School
Over in the Ocean: In a Coral Reef
And for more advanced counting — if you’re looking for ideas for counting big numbers, be sure to read our 100 Ways to Count to 100 post — great for 100th Day of School activities or estimation jars! Another favorite series that ties math to nature are the Math in Nature books. These are beautifully illustrated and do a wonderful job of integrating the natural environment with numbers. Try Counting on Fall and Sorting Through Spring.
Math Picture Books for Middle School
Yes! You can (and should) use math picture books when teaching advanced math concepts even to middle school kids! (Don’t let them fool you — anyone loves to be read to 😉
Why use picture books to teach mathematical concepts? First, because it’s a different way to approach the topic — fewer numbers and more words will really help some kids to better grasp a concept. Second, it’s a great way to mix things up instead of constantly repeating & practicing math problems. And third, there’s pictures! That’s a hard thing to come by when teaching math in most areas, so it’s a welcome departure from the normal math textbooks and worksheets that many kids are used to seeing by this age.
There are a few great authors and series that I like to use when discussing concepts such as multiplication, fractions, geometry, etc. Once your kids discover an author or series they enjoy, you’ll quickly find a variety of books that discuss the various mathematical ideas. Greg Tang has written quite a few books that integrate math concepts. One of the things we love about his stories is the math is presented in a variety of word problems, so kids quickly begin to see that math doesn’t always look like an equation. A few that we enjoy include:
Math Potatoes: Mind-stretching Brain Food
Math-terpieces: The Art of Problem-Solving
Math For All Seasons: Mind-Stretching Math Riddles
Loreen Leedy and David Adler are two more authors who can make math fun and relevant. They create real-life stories and examples around concepts such as measurement and fractions. And thank goodness, because I can’t tell you how many times I’ve heard “I’ll never need to know how to do this in real life.” Yea, right 😉 Try Working With Fractions and Fraction Action. For measurement, kids will enjoy Perimeter, Area, and Volume and Measuring Penny.
Cindy Neuschwander has a unique take on math concepts. In her ‘Sir Cumference’ series, she shares a variety of math concepts through the adventures of favorite medieval characters. For example, Sir Cumference and All the King’s Tens: A Math Adventure and Sir Cumference and the Dragon of Pi.
One excellent series for transitioning from math learning in elementary grades to middle school is the MathStart books. These are leveled stories, with Level 1 introducing some of the basic ideas of math, Level 2 sharing early concepts such as addition & subtraction, and Level 3 diving into more complex topics like estimation, scale and graphs. A few that we recommend from this series include:
Monster Musical Chairs (MathStart 1)
Lemonade for Sale (MathStart 3)
More Math Ideas to Inspire Kids
Measurement: Draw a Life-Size Whale
CS 5350/6350: Machine Learning Homework 6 solved
1 Naive Bayes Classification
In the class, we saw how we can build a naive Bayes classifier for discrete variables. In this question, you will explore the case when features are not discrete. Suppose instances, represented by x, are d dimensional real vectors and the labels, represented by y, are either 0 or 1. Recall from class that the naive Bayes classifier assumes that all features are conditionally independent of each other, given the class label. That is,
p(x|y) = ∏_{j=1}^{d} p(x_j | y)
Now, each x_j is a real valued feature. Suppose we assume that these are drawn from a class-specific normal distribution. That is,
1. When y = 0, each x_j is drawn from a normal distribution with mean μ_{0,j} and standard deviation σ_{0,j}, and
2. When y = 1, each x_j is drawn from a normal distribution with mean μ_{1,j} and standard deviation σ_{1,j}.
Now, suppose we have a training set S = {(x_i, y_i)} with m examples and we wish to learn the parameters of the classifier, namely the prior p = P(y = 1) and the μ’s and the σ’s. For brevity, let the symbol θ denote all these parameters together.
(a) Write down P(S|θ), the likelihood of the data in terms of the parameters. Write down the log-likelihood.
(b) What is the prior probability p? You can derive this by taking the derivative of the log-likelihood with respect to the prior and setting it to zero.
(c) By taking the derivative of the log-likelihood with respect to the μ_j’s and the σ_j’s, derive expressions for the μ_j’s and the σ_j’s.
2 Logistic Regression
We looked at maximum a posteriori learning of the logistic regression classifier in class. In particular, we showed that learning the classifier is equivalent to the following optimization:
min_w Σ_{i=1}^{m} log(1 + exp(−y_i wᵀx_i)) + (1/2) wᵀw
In this question, you will derive the stochastic gradient descent algorithm for the logistic regression classifier.
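As a non-authoritative illustration of what the requested stochastic gradient routine can look like (a sketch only, not the graded derivation), the following assumes the ½ wᵀw regularizer, under which the single-example gradient is −y_i x_i / (1 + exp(y_i wᵀx_i)) + w:

```python
import numpy as np

# Hedged sketch of SGD for regularized logistic regression.
# Single-example objective: log(1 + exp(-y_i w.x_i)) + 0.5 * w.w
# Its gradient:            -y_i * x_i / (1 + exp(y_i * w.x_i)) + w
def sgd_logistic(X, y, lr=0.1, epochs=200, seed=0):
    rng = np.random.default_rng(seed)
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):  # visit examples in random order
            grad = -y[i] * X[i] / (1.0 + np.exp(y[i] * (X[i] @ w))) + w
            w -= lr * grad                 # single-example update
    return w
```

On linearly separable toy data with labels in {−1, +1}, the learned weight vector separates the classes by sign.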
(a) What is the derivative of the function log(1 + exp(−y_i wᵀx_i)) with respect to the weight vector w?
(b) The innermost step in the SGD algorithm is the gradient update, where we use a single example instead of the entire dataset to compute the gradient. Write down the objective when the entire dataset is composed of a single example, say (x_i, y_i). Derive the gradient of this objective with respect to the weight vector.
(c) Write down the pseudo code for the stochastic gradient algorithm using the gradient from part (b) above.
3 The EM algorithm
The two local newspapers The Times and The Gazette publish n articles every day. The article length in the newspapers is distributed according to the exponential distribution with parameter λ. That is, for a non-negative integer x:
P(wordcount = x | λ) = λ e^{−λx}
with parameters λ_T, λ_G for the Times and the Gazette, respectively. (Note: Technically, using the exponential distribution is not correct here because the exponential distribution applies to real valued random variables, whereas here the word counts can only be integers. However, for simplicity, we will use the exponential distribution instead of, say, a Poisson.)
(a) Given an issue of one of the newspapers (x_1, . . . , x_n), where x_i denotes the length of the i-th article, what is the most likely value of λ?
(b) Assume now that you are given a collection of m issues {(x_1, . . . , x_n)} but you do not know which issue is a Times and which is a Gazette issue. Assume that the probability that an issue is generated from the Times is η; in other words, the probability that an issue is generated from the Gazette is 1 − η. Explain the generative model that governs the generation of this data collection. In doing so, name the parameters that are required in order to fully specify the model.
(c) Assume that you are given the parameters of the model described above. How would you use it to cluster issues into two groups, the Times issues and the Gazette issues?
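For the clustering question in part (c), a hedged sketch: given η, λ_T, λ_G, assign each issue to the paper with the larger log-posterior. Since the issue likelihood factorizes over articles as ∏_j λ e^{−λ x_j}, only n and the total length per issue matter. The function name and array layout are ours.

```python
import numpy as np

# Hedged sketch for clustering with known parameters: each row of X is one
# issue of n article lengths. Compare log P(paper) + log P(issue | paper).
def cluster_issues(X, eta, lamT, lamG):
    X = np.asarray(X, dtype=float)
    m, n = X.shape
    S = X.sum(axis=1)                                  # total length per issue
    log_post_T = np.log(eta) + n * np.log(lamT) - lamT * S
    log_post_G = np.log(1 - eta) + n * np.log(lamG) - lamG * S
    return np.where(log_post_T >= log_post_G, "Times", "Gazette")
```

Issues with short articles are pulled toward the larger rate parameter, since a larger λ means a smaller mean length 1/λ.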
(d) Given the collection of m issues without labels of which newspaper they came from, derive the update rule of the EM algorithm. Show all of your work.
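A hedged sketch of the EM iteration part (d) asks for, not the graded derivation: the E-step computes each issue's responsibility via Bayes' rule, and the M-step reuses the single-paper MLE (λ = n divided by mean total length), weighted by responsibilities. Initial values and names are our assumptions.

```python
import numpy as np

# Hedged EM sketch for the two-exponential mixture: each row of X is one
# issue of n article lengths; eta = P(Times), lamT/lamG are the rates.
def em_exponential_mixture(X, iters=200, eta=0.5, lamT=1.0, lamG=0.1):
    X = np.asarray(X, dtype=float)
    m, n = X.shape
    S = X.sum(axis=1)                          # total length per issue
    for _ in range(iters):
        # E-step: responsibility r_i = P(Times | issue i) via Bayes' rule
        logT = np.log(eta) + n * np.log(lamT) - lamT * S
        logG = np.log(1 - eta) + n * np.log(lamG) - lamG * S
        r = 1.0 / (1.0 + np.exp(logG - logT))
        # M-step: weighted versions of the single-paper MLE lambda = n / mean(S)
        eta = r.mean()
        lamT = (r.sum() * n) / (r @ S)
        lamG = ((1 - r).sum() * n) / ((1 - r) @ S)
    return eta, lamT, lamG
```

With well-separated rates, the responsibilities saturate near 0 or 1 within a few iterations and the estimates settle close to the generating parameters.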
Multiplication Warm Up Worksheets | Times Tables Worksheets | Order of Operations Worksheets
You may have heard of an order of operations worksheet, but just what is it? In this post, we’ll talk about what it is, why it’s important, and how to use math multiplication sheets. Hopefully, this information will be helpful for you. Your students deserve a fun, reliable way to review the most important concepts in mathematics. In addition, worksheets are a great way for students to practice new skills and review old ones.
What is the Order of Operations Worksheet?
An order of operations worksheet is a type of math worksheet that requires students to perform arithmetic operations. These worksheets are divided into three main sections: subtraction, addition, and multiplication. They also include the evaluation of parentheses and exponents. Students who are still learning how to do these tasks will find this type of worksheet useful. The primary purpose of an order of operations worksheet is to help students learn the correct way to solve math equations. If a student does not yet understand the concept of order of operations, they can review it by referring to an explanation page. Furthermore, an order of operations worksheet can be split into several categories, based on its difficulty. Another important purpose of an order of operations worksheet is to teach students how to perform PEMDAS operations. These worksheets begin with basic problems covering the fundamental rules and build up to more complex problems involving all of the rules. These worksheets are a great way to introduce young learners to the excitement of solving algebraic equations.
Why is Order of Operations Important?
One of the most important things you can learn in mathematics is the order of operations. The order of operations ensures that the math problems you solve are consistent. An order of operations worksheet is an excellent way to teach students the proper way to solve math equations. Before students begin using this worksheet, they may need to review concepts related to the order of operations. To do this, they should review the concept page for order of operations, which gives an overview of the basic idea. An order of operations worksheet can help students develop their skills in addition and subtraction. Teachers can use Prodigy as an easy way to differentiate practice and deliver engaging content. Prodigy’s worksheets are a great way to help students learn about the order of operations. Teachers can begin with the basic concepts of multiplication, addition, and division to help students build their understanding of parentheses.
Math Sheets Multiplication
Math multiplication sheets provide an excellent resource for young learners. These worksheets can be easily tailored to specific needs. They come in three levels of difficulty. The first level is easy, requiring students to practice the DMAS method on expressions containing four or more integers or three operators. The second level requires students to use the PEMDAS method to simplify expressions using outer and inner parentheses, brackets, and curly braces. The math multiplication sheets can be downloaded for free and printed out. They can then be used to practice addition, division, subtraction, and multiplication. Students can also use these worksheets to review the order of operations and the use of exponents.
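Since the worksheets drill PEMDAS, it may help to note that Python's own arithmetic follows the same order of operations, which makes it a handy way to check answers. The numbers below are our own example, not from any worksheet.

```python
# A quick PEMDAS check in Python: exponents bind tighter than
# multiplication, which binds tighter than addition, and
# parentheses override everything.
no_parens = 2 + 3 * 4 ** 2      # exponent first: 2 + 3 * 16 = 50
with_parens = (2 + 3) * 4 ** 2  # parentheses first: 5 * 16 = 80
print(no_parens, with_parens)   # → 50 80
```

The same expression yields two different values depending only on grouping, which is exactly why a fixed order of operations matters.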
Some Interesting End of Season Numbers - StateFans Nation
During various bubble discussions, we talked about whether there was significance to being a 20-win team in a power conference. VaWolf posited that it was correlation, not causation – and the final tally this year proves him correct. Nine such teams are in the NIT: Michigan, FSU, WVU, Alabama, Oklahoma St., Kansas St., Clemson, Ole Miss, Syracuse. Interesting tidbits from the final Sagarin ratings:
– Final rating: Drexel 74, NC State 75
– Worst NCAAT snubs: Clemson (26), Missouri State (27)
– Weakest at-large selections: Texas Tech (64), Old Dominion (68)
– Teams with more Top 25 wins than NC State’s 5: UCLA (8), UNC (7), Georgetown (6), Oregon (6), Virginia (6).
Inspiring Drawing Tutorials

Draw And Label Line EF

A line has arrows on both ends, but a line segment does not: the line in between two points would continue travelling outside of those points, regardless of where they are placed. There are many different ways to label angles; for the angle at vertex B you can say \(A\hat{B}C\) or \(C\hat{B}A\).

Terms & labels in geometry (4th grade, unit 11): identify points, lines, line segments, rays, and angles; identify parallel and perpendicular lines; draw, label, and describe line segments, rays, lines, parallel lines, and perpendicular lines; and draw, label, and define an angle as two rays sharing a common endpoint (the vertex), then measure it.

How to construct a line segment? A ruler, or a combination of a ruler and compass, is used to construct a line segment of a specific length. Draw a line of any length and mark the starting point of the line segment as point A. Take a ruler, place the pointer of the compass at A, and open the compass to, say, 6 cm; then, without changing the compass width, place the compass on the line to mark the segment's other endpoint. A ruler and a pencil can also be used: to construct a line segment connecting two points, line up a straightedge with the two points and trace. Constructing a new line segment congruent to another involves creating an arc; Gabriela should choose option c), with the compass point on G, construct an arc that intersects EF. Because the original line and the line created by the intersection are both diagonals of the constructed rhombus, they are perpendicular bisectors of each other (a property of rhombuses).

Exercises (EF Common Core Standard 4.G.A.1 — draw and identify lines and angles, and classify shapes by properties of their lines and angles; practice and homework lesson 10.3, exercise 8). Look at the examples below:

Use a straight edge to draw and label line segments AB, CD, and EF as modeled on the board.

Draw and label line EF and line segment EF. Explain how your drawings of line EF and line segment EF are different.

Draw and label ray SR. In this problem, we are asked to draw an example of ray EF — is it a line, a line segment, or a ray, how do we name it, and how do we label its points?

Lines, rays, and line segments: 1) 2) LM 3) CD 4) YZ 5) ST 6) TU 7) Draw a ray with the point G and the endpoint.

(a) Copy the line AB. (b) Draw a line parallel to AB. Then (a) copy the line CD and (b) draw a line parallel to CD.

Estimate to draw point X halfway up line segment AB.

Use the line segments to connect all possible pairs of the points.

Find the angle formed by the rays DE and DF, and the angle formed by the rays CA and CE.
CSS Transform Rotation (2D) rotate() Rotates an element around a fixed point on the 2D plane. The rotate() CSS function defines a transformation that rotates an element around a fixed point on the 2D plane, without deforming it. The amount of rotation created by rotate() is specified by an angle value expressed in degrees, gradians, radians, or turns. If positive, the movement will be clockwise; if negative, it will be counter-clockwise. (A rotation by 180° is called point reflection.) The axis of rotation passes through an origin, defined by the transform-origin CSS property. — More info: developer.mozilla.org/en-US/docs/Web/CSS/transform-function/rotate Rotation (3D) rotateX() Rotates an element around the horizontal axis. The rotateX() CSS function defines a transformation that rotates an element around the abscissa (horizontal axis) without deforming it. The amount of rotation created by rotateX() is specified by an angle value expressed in degrees, gradians, radians, or turns. If positive, the movement will be clockwise; if negative, it will be counter-clockwise. The axis of rotation passes through an origin, defined by the transform-origin CSS property. rotateX(a) is equivalent to rotate3d(1, 0, 0, a). — More info : developer.mozilla.org/en-US/docs/Web/CSS/transform-function/rotateX rotateY() Rotates an element around the vertical axis. The rotateY() CSS function defines a transformation that rotates an element around the ordinate (vertical axis) without deforming it. The amount of rotation created by rotateY() is specified by an angle value expressed in degrees, gradians, radians, or turns. If positive, the movement will be clockwise; if negative, it will be counter-clockwise. The axis of rotation passes through an origin, defined by the transform-origin CSS property. rotateY(a) is equivalent to rotate3d(0, 1, 0, a). — More info : developer.mozilla.org/en-US/docs/Web/CSS/transform-function/rotateY rotateZ() Rotates an element around the z-axis. 
The rotateZ() CSS function defines a transformation that rotates an element around the z-axis without deforming it. The amount of rotation created by rotateZ() is specified by an angle value expressed in degrees, gradians, radians, or turns. If positive, the movement will be clockwise; if negative, it will be counter-clockwise. The axis of rotation passes through an origin, defined by the transform-origin CSS property. rotateZ(a) is equivalent to rotate(a) or rotate3d(0, 0, 1, a). — More Info : https://developer.mozilla.org/en-US/docs/Web/CSS/transform-function/rotateZ perspective() Sets the distance between the user and the z=0 plane. The perspective() CSS function defines a transformation that sets the distance between the user and the z=0 plane. The perspective distance used by perspective() is specified by a length value (a number followed by a length unit: em, rem, px, pt, mm…), which represents the distance between the user and the z=0 plane. Smaller values produce a more pronounced perspective effect; the value must be positive. — More Info : https://developer.mozilla.org/en-US/docs/Web/CSS/transform-function/perspective Transform Origin transform-origin Sets the origin for an element's transformations. The transformation origin is the point around which a transformation is applied. For example, the transformation origin of the rotate() function is the center of rotation. The transform-origin property may be specified using one, two, or three values, where each value represents an offset. Offsets that are not explicitly defined are reset to their corresponding initial values. If two or more values are defined and either no value is a keyword, or the only used keyword is center, then the first value represents the horizontal offset and the second represents the vertical offset.
• One-value syntax: The value must be a length, a percentage, or one of the keywords left, center, right, top, and bottom.
• Two-value syntax: One value must be a length, a percentage, or one of the keywords left, center, and right. The other value must be a length, a percentage, or one of the keywords top, center, and bottom.
• Three-value syntax: The first two values are the same as for the two-value syntax. The third value must be a length. It always represents the Z offset.
— More Info : https://developer.mozilla.org/en-US/docs/Web/CSS/transform-origin Scaling (Resizing) scale() Scales an element up or down on the 2D plane. The scale() CSS function defines a transformation that resizes an element on the 2D plane. Because the amount of scaling is defined by a vector, it can resize the horizontal and vertical dimensions at different scales. This scaling transformation is characterized by a two-dimensional vector. Its coordinates define how much scaling is done in each direction. If both coordinates are equal, the scaling is uniform (isotropic) and the aspect ratio of the element is preserved (this is a homothetic transformation). When a coordinate value is outside the [-1, 1] range, the element grows along that dimension; when inside, it shrinks. If it is negative, the result is a point reflection in that dimension. A value of 1 has no effect. The scale() function only scales in 2D. To scale in 3D, use scale3d() instead. The scale() function is specified with either one or two values, which represent the amount of scaling to be applied in each direction. scale(sx, sy) - sx : A number representing the abscissa of the scaling vector. - sy : A number representing the ordinate of the scaling vector. If not defined, its default value is sx, resulting in a uniform scaling that preserves the element's aspect ratio. — More Info : https://developer.mozilla.org/en-US/docs/Web/CSS/transform-function/scale scaleX() Scales an element up or down horizontally. The scaleX() CSS function defines a transformation that resizes an element along the x-axis (horizontally).
It modifies the abscissa of each element point by a constant factor, except when the scale factor is 1, in which case the function is the identity transform. The scaling is not isotropic, and the angles of the element are not conserved. scaleX(-1) defines an axial symmetry, with a vertical axis passing through the origin (as specified by the transform-origin property). scaleX(sx) is equivalent to scale(sx, 1) or scale3d(sx, 1, 1). — More Info : https://developer.mozilla.org/en-US/docs/Web/CSS/transform-function/scaleX scaleY() Scales an element up or down vertically. The scaleY() CSS function defines a transformation that resizes an element along the y-axis (vertically). It modifies the ordinate of each element point by a constant factor, except when the scale factor is 1, in which case the function is the identity transform. The scaling is not isotropic, and the angles of the element are not conserved. scaleY(-1) defines an axial symmetry, with a horizontal axis passing through the origin (as specified by the transform-origin property). scaleY(sy) is equivalent to scale(1, sy) or scale3d(1, sy, 1). — More Info : https://developer.mozilla.org/en-US/docs/Web/CSS/transform-function/scaleY Translation (Moving) translate() Translates an element on the 2D plane. The translate() CSS function repositions an element in the horizontal and/or vertical directions. This transformation is characterized by a two-dimensional vector. Its coordinates define how much the element moves in each direction. The translate() function is specified as either one or two values. translate(tx, ty) - tx : Is a length value representing the abscissa (x-coordinate) of the translating vector. - ty : Is a length value representing the ordinate of the translating vector (or y-coordinate). If unspecified, its default value is 0. For example, translate(2) is equivalent to translate(2, 0). 
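To make the interaction between translate() and transform-origin concrete: applying a transform about an origin is equivalent to translating to the origin, transforming, and translating back. The sketch below is my own illustration in Python (the function name `rotate_about` is not part of CSS), using the same rotation convention as rotate(), where the page's y-axis points down so positive angles appear clockwise:

```python
import math

def rotate_about(point, angle_deg, origin=(0.0, 0.0)):
    """Rotate `point` by `angle_deg` around `origin`, mimicking
    transform: rotate(angle) with transform-origin at `origin`.
    Equivalent to translate(o) . rotate(a) . translate(-o)."""
    ox, oy = origin
    x, y = point[0] - ox, point[1] - oy      # translate(-o)
    a = math.radians(angle_deg)
    rx = x * math.cos(a) - y * math.sin(a)   # rotate(a)
    ry = x * math.sin(a) + y * math.cos(a)
    return (rx + ox, ry + oy)                # translate(o)

# rotate_about((2, 1), 180, origin=(1, 1)) -> approximately (0.0, 1.0)
```

With the default origin this is plain rotate(); changing `origin` reproduces the effect of moving transform-origin.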
— More Info : https://developer.mozilla.org/en-US/docs/Web/CSS/transform-function/translate translateX() Translates an element horizontally. The translateX() CSS function repositions an element horizontally on the 2D plane. Syntax : translateX(t). (t is a length value representing the abscissa of the translating vector.) translateX(tx) is equivalent to translate(tx, 0) or translate3d(tx, 0, 0). — More Info : https://developer.mozilla.org/en-US/docs/Web/CSS/transform-function/translateX translateY() Translates an element vertically. The translateY() CSS function repositions an element vertically on the 2D plane. translateY(ty) is equivalent to translate(0, ty) or translate3d(0, ty, 0). Syntax : translateY(t). (t is a length value representing the ordinate of the translating vector.) — More Info : https://developer.mozilla.org/en-US/docs/Web/CSS/transform-function/translateY Skewing (Distortion) skew() Skews an element on the 2D plane. The skew() CSS function defines a transformation that skews an element on the 2D plane. This transformation is a shear mapping (transvection) that distorts each point within an element by a certain angle in the horizontal and vertical directions. The coordinates of each point are modified by a value proportionate to the specified angle and the distance to the origin; thus, the farther from the origin a point is, the greater will be the value added to it. The skew() function is specified with either one or two values, which represent the amount of skewing to be applied in each direction. skew(ax, ay) - ax : Is an angle value expressed in degrees, gradians, radians, or turns; representing the angle to use to distort the element along the abscissa. - ay : Is an angle value expressed in degrees, gradians, radians, or turns; representing the angle to use to distort the element along the ordinate. If not defined, its default value is 0, resulting in a purely horizontal skewing.
— More Info : https://developer.mozilla.org/en-US/docs/Web/CSS/transform-function/skew skewX() Skews an element in the horizontal direction. The skewX() CSS function defines a transformation that skews an element in the horizontal direction on the 2D plane. This transformation is a shear mapping (transvection) that distorts each point within an element by a certain angle in the horizontal direction. The abscissa coordinate of each point is modified by a value proportionate to the specified angle and the distance to the origin; thus, the farther from the origin a point is, the greater will be the value added to it. skewX(a) is equivalent to skew(a). — More Info : https://developer.mozilla.org/en-US/docs/Web/CSS/transform-function/skewX skewY() Skews an element in the vertical direction. The skewY() CSS function defines a transformation that skews an element in the vertical direction on the 2D plane. This transformation is a shear mapping (transvection) that distorts each point within an element by a certain angle in the vertical direction. The ordinate coordinate of each point is modified by a value proportionate to the specified angle and the distance to the origin; thus, the farther from the origin a point is, the greater will be the value added to it. — More Info : https://developer.mozilla.org/en-US/docs/Web/CSS/transform-function/skewY
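To see the shear mapping described above in numbers, here is a small Python sketch of skew() (the function name `css_skew` is my own, not part of CSS): each coordinate is shifted in proportion to the other coordinate and the tangent of the given angle, so points farther from the origin move farther.

```python
import math

def css_skew(point, ax_deg, ay_deg=0.0):
    """skew(ax, ay) as a shear mapping about the origin:
    x' = x + tan(ax) * y,  y' = y + tan(ay) * x.
    With ay left at 0 this behaves like skewX(ax)."""
    x, y = point
    return (x + math.tan(math.radians(ax_deg)) * y,
            y + math.tan(math.radians(ay_deg)) * x)

# skewX(45deg) moves a point at y = 10 horizontally by tan(45deg) * 10:
# css_skew((0, 10), 45) -> approximately (10.0, 10.0)
```

A point on the x-axis (y = 0) is unmoved by skewX, which matches the intuition that the origin row stays fixed under a horizontal shear.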
Finding the xy's between two points

Connect the dots in the correct order and draw happy animals!

Well, if A.x and A.y are the coordinates for point A, and same logic for B, with let's say course being a variable between 0 and 1, you have lerp(A.X, B.X, course) returning an X value, and lerp(A.Y, B.Y, course) will return the Y value on the same line corresponding to that X. As long as course is between 0 and 1, the point will be between them.

EDIT: if you prefer the actual equation of the line (AB), you can have it: Y = (A.Y-B.Y)/(A.X-B.X)*X + init. Init being found with A.Y = (A.Y-B.Y)/(A.X-B.X)*A.X + init, so init = A.Y - A.X*(A.Y-B.Y)/(A.X-B.X). In the end, if I am correct, Y = (X-A.X)*(A.Y-B.Y)/(A.X-B.X) + A.Y. Depends on what you want to do exactly after that... if it is verifying if a point C is between A and B, you just verify that A.X<C.X<B.X and that C.Y = (C.X-A.X)*(A.Y-B.Y)/(A.X-B.X) + A.Y. Would this help?

Just copied this, but you can find a lot of various methods. But this looked pretty good. Find the slope m = (y1-y2)/(x1-x2) and make a loop over x where y = mx+c. Now get (x, y) for a range of x. Your two points: (x1, y1) (x2, y2): m = (y1 - y2) / (x1 - x2); c = y1 - x1 * m; Then, for any given x: y = mx + c;

This looks interesting (a little complicated). If it is on a straight line, can't you find the midpoint, then the midpoint between point A and the midpoint, and the midpoint and point B, and repeat, etc.?

I've used lerp before with distance(). It seemed a little overly complicated, but then again so does this. Like I don't know what init means, and I'm not sure how to get the x component out of that. Or should I say overly simplified?

I may have overcomplicated it; it is the same as dutoit's (init was a value, which dutoit calls c).

linear Bézier curve is what you need: dx = (1-t) * x1 + t * x2; dy = (1-t) * y1 + t * y2; t is a coefficient from 0 to 1.
For that you could use Bresenham's line algorithm for a nice result, but it isn't simple. You could use lerp to do it like this: https://dl.dropboxusercontent.com/u/542 ... _step.capx but you'll have to handle the case when deltay is more than deltax and do the loop in that direction.

Edit: actually this would do it:

variable delta = max(abs(x1-x0), abs(y1-y0))/32
repeat delta times
--- create sprite at (round(lerp(x0, x1, loopindex/delta)/32)*32, round(lerp(y0, y1, loopindex/delta)/32)*32)
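In plain code, the lerp-in-a-loop idea from the thread looks like this — a minimal Python sketch (the names `lerp` and `points_between` are mine, standing in for Construct's built-in lerp and the sprite-creation loop):

```python
def lerp(a, b, t):
    """Linear interpolation: returns a when t == 0 and b when t == 1."""
    return (1 - t) * a + t * b

def points_between(x0, y0, x1, y1, steps):
    """Evenly spaced points on the segment from (x0, y0) to (x1, y1),
    including both endpoints. Works for vertical lines too, since no
    slope is ever computed -- one advantage of lerp over y = mx + c."""
    return [(lerp(x0, x1, i / steps), lerp(y0, y1, i / steps))
            for i in range(steps + 1)]

# e.g. points_between(0, 0, 10, 20, 2) -> [(0.0, 0.0), (5.0, 10.0), (10.0, 20.0)]
```

Snapping each point to a 32-pixel grid, as in the Construct expression above, would just be `round(x / 32) * 32` applied to both coordinates.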
Digital-to-Digital Conversion (Dutta Tech)

As we have already told you, data can be in either analog or digital form. So let us learn how we can represent digital data in the form of digital signals. Three techniques are used for this conversion:

● Line Coding
● Block Coding
● Scrambling

Line Coding

Line coding is the process used to convert digital data to digital signals. Assume that the data is in the form of text, numbers, audio, or video and that it is stored in the computer as a series of bits. Line coding converts a series of bits into a digital signal. Digital data is encoded into digital signals on the sender side, and digital data is regenerated on the receiver side by decoding the digital signal. Line coding and decoding are depicted in the diagram above. Five different types of line coding techniques can be found:

● Unipolar
● Polar
● Bipolar
● Multilevel
● Multi Transition

Unipolar Method

All of the signal levels are on one side of the time axis in this line coding method, either above or below it. The unipolar scheme is a Non-Return-to-Zero (NRZ) scheme in which positive voltage represents bit 1 and zero voltage represents bit 0. Only one voltage level is used in the unipolar scheme. Because the signal does not return to zero in the middle of the bit, it is dubbed NRZ.

Unipolar NRZ Scheme

This coding system is both cheap and easy to implement.

Polar Method

The voltages are on both sides of the time axis in this line coding method. Consider the following example: the voltage level for 0 can be positive, whereas the voltage level for 1 can be negative. As a result, we use two levels of voltage amplitude in polar NRZ encoding. Polar NRZ comes in two versions:

● NRZ-L (NRZ-Level): the value of the bit is determined by the voltage level itself. The signal's level is determined by the bit's value.
● NRZ-I (NRZ-Invert): the value of the bit is determined by a change (or lack of change) in the voltage level.
If there is no change, the bit will be 0; if there is a change, the bit will be 1. In the diagram above, there will be no inversion if the next bit is 0; inversion, however, will occur if the next bit is 1.

Bipolar Method

There are three voltage levels in the bipolar method: positive, zero, and negative. One data element's voltage level is set to 0, while the other data element alternates between positive and negative voltage levels. The following are two examples of bipolar encoding:

● AMI (Alternate Mark Inversion): the name simply means that alternate 1s are inverted. Binary 0 is represented by a neutral zero voltage, while binary 1s are represented by alternating positive and negative voltages. The figure shows bipolar AMI encoding.
● Pseudoternary: in this case, bit 1 is represented by zero voltage, while bit 0 is represented by alternating positive and negative voltages. The figure shows the bipolar pseudoternary scheme.

Multilevel Method

The multilevel coding method is also known as mBnL, where m is the length of the binary pattern, B denotes binary data, n is the length of the signal pattern, and L indicates the number of levels in the signaling. This scheme is available in three separate methods:

● 4D-PAM5
● 2B1Q
● 8B6T

Multi Transition (MLT-3)

This technique employs three levels (+V, 0, -V) and three transition rules to move between them:

● If the next bit is 0, there is no transition.
● If the next bit is 1 and the current level is not 0, the next level will be 0.
● If the next bit is 1 and the current level is 0, the next level is the nonzero level opposite to the last nonzero level.

This approach does not provide self-synchronization for long runs of 0s.

Block Coding

The fundamental issue with line coding is its lack of redundancy. Block codes are used to manipulate a block of bits: they employ a predetermined algorithm to take a group of bits and combine them with a coded component to form a larger block.
This larger block is examined at the receiver, which determines the authenticity of the received sequence. Block coding thus converts an m-bit block into an n-bit block, where n > m; mB/nB encoding is another name for this block coding technique. It overcomes the disadvantages of line coding and produces superior results. The following are common types of block coding:

● 4B/5B
● 8B/10B

Scrambling

By introducing scrambling, we can modify line coding to handle long runs of zeros. It's worth noting that scrambling, as opposed to block coding, is done during the encoding process: the system inserts the required pulses based on the scrambling rules. The two most frequent scrambling techniques are listed below.

● B8ZS (Bipolar with 8-Zero Substitution): the sequence 000VB0VB replaces eight consecutive zero-level voltages. In this sequence, V stands for violation — a nonzero voltage that violates the AMI encoding rule — and B stands for bipolar — a nonzero voltage that conforms to the AMI rule. Two examples of the B8ZS scrambling technique are shown in the diagram below.
● HDB3 (High-Density Bipolar 3-Zero): this approach is more conservative than B8ZS, as it substitutes a sequence of 000V or B00V for four consecutive zero-level voltages. The main reason for using two separate substitutions is to keep the number of nonzero pulses even after each substitution. Two rules are followed for this purpose:

1. If the total number of nonzero pulses after the last substitution is odd, we employ the 000V substitution pattern, which makes the total number of nonzero pulses even.
2. If the number of nonzero pulses after the last substitution is even, we employ the B00V substitution pattern, which keeps the total number of nonzero pulses even.

The diagram below depicts several scenarios of the HDB3 scrambling technique.
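The bipolar AMI and MLT-3 rules described above are simple enough to express directly in code. Here is a minimal Python sketch (my own illustration, not from the original article), mapping a list of bits to a list of signal levels (+1, 0, -1):

```python
def ami_encode(bits):
    """Bipolar AMI: bit 0 -> zero voltage; successive 1s alternate
    between +1 and -1, so the signal has no DC component."""
    levels, last = [], -1
    for b in bits:
        if b == 0:
            levels.append(0)
        else:
            last = -last          # alternate mark inversion
            levels.append(last)
    return levels

def mlt3_encode(bits):
    """MLT-3: no transition on 0; on a 1, move to 0 if the current
    level is nonzero, otherwise to the opposite of the last nonzero
    level -- cycling through 0, +1, 0, -1."""
    levels, level, last_nonzero = [], 0, -1
    for b in bits:
        if b == 1:
            if level != 0:
                level = 0
            else:
                last_nonzero = -last_nonzero
                level = last_nonzero
        levels.append(level)
    return levels

# e.g. ami_encode([1, 0, 1, 1]) -> [1, 0, -1, 1]
# e.g. mlt3_encode([1, 1, 1, 1]) -> [1, 0, -1, 0]
```

Note how both outputs make the weakness mentioned above visible: a long run of 0 bits produces an unbroken run of zero levels, which is why scrambling schemes such as B8ZS and HDB3 exist.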