Start with two arrays of strings, A and B, each with its elements in alphabetical order and without duplicates. Return a new array containing the first N elements from merging the two arrays. The result array should be in alphabetical order and without duplicates. A and B will each have a length of N or more. The best "linear" solution makes a single pass over A and B, taking advantage of the fact that they are in alphabetical order, copying elements directly to the new array.

mergeTwo(["a", "c", "z"], ["b", "f", "z"], 3) → ["a", "b", "c"]
mergeTwo(["a", "c", "z"], ["c", "f", "z"], 3) → ["a", "c", "f"]
mergeTwo(["f", "g", "z"], ["c", "f", "g"], 3) → ["c", "f", "g"]
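The single-pass merge described above can be sketched in Python (a sketch only; the site's expected signature is `mergeTwo`, the snake_case name here is just for illustration):

```python
def merge_two(a, b, n):
    """Single-pass merge of two sorted, duplicate-free lists.

    Walks both lists with one index each, always copying the smaller
    head element; when the heads are equal, copies one and advances
    both indices, so the result stays duplicate-free.
    """
    result, i, j = [], 0, 0
    while len(result) < n:
        if a[i] < b[j]:
            result.append(a[i]); i += 1
        elif b[j] < a[i]:
            result.append(b[j]); j += 1
        else:  # equal heads: copy once, advance both
            result.append(a[i]); i += 1; j += 1
    return result

print(merge_two(["a", "c", "z"], ["b", "f", "z"], 3))  # → ['a', 'b', 'c']
```

Because each array alone has at least N distinct elements, the indices can never run off the end before N elements are collected.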
Close encounters of the mathematical kind

Are we alone in the universe? It's a question that's been asked by people all over the world as they look up at the stars and wonder if anyone is peering back. As an astronomer, Dr. Frank Drake spends more time gazing into the heavens than most, using optical and radio telescopes. In 1961 he came up with an equation to help us think about the possibility of extraterrestrial life, and founded a branch of astronomy called SETI, the Search for Extra-Terrestrial Intelligence.

Drake's dilemma

The Drake equation, as it is now known, looks like this:

N = R* × f[p] × n[e] × f[l] × f[i] × f[c] × L

N is the number of alien civilisations in our galaxy with which we might one day be able to communicate
R* is the number of new stars that form in the galaxy each year
f[p] is the fraction of those stars that have orbiting planets
n[e] is the number of planets orbiting a star that can potentially support life
f[l] is the fraction of those planets on which life actually emerges
f[i] is the fraction of life-bearing planets that develop intelligent life
f[c] is the fraction of civilisations that have the technology to communicate across the stars
L is the length of time a civilisation communicates for

It's hard to place a figure on any of these variables, because we only know of one intelligent civilisation – us! We can use what we know about the solar system to come up with some estimates, but the value of N can vary widely depending on which numbers we choose. If you'd like to try to find your own value for N, this interactive tool will let you experiment. Of all the variables in the Drake equation, we probably have the best idea about R*. Astronomers observing changes in the Milky Way galaxy reckon that about seven new stars are born each year. Next is f[p], which is a little trickier. We've only discovered around 400 planets outside of our solar system, which isn't much compared to the 100 billion stars of the Milky Way!
Estimates for f[p] range from 20 to 60%, with 50% seen as the most likely.

What's your number?

From here on things get a bit more uncertain. We know that in our solar system n[e] is at least 1, because Earth supports life. It could be more, because some scientists think other planets like Mars or Venus also have the potential to support life. A common value of n[e] is 2. Just because a planet can support life, it doesn't mean life will actually evolve, or that it will be intelligent and able to communicate. Suggested values are f[l] = 0.33, f[i] = 0.01, and f[c] = 0.01, but we've got no real evidence backing up these numbers. Likewise, we don't know how long a civilisation can communicate for. We've only had radio telescopes for less than a hundred years, but you would hope the human race will be around much longer than that! A common choice for L is 10,000 years.

With all the variables in place, we can multiply them together to find N = 7 × 0.5 × 2 × 0.33 × 0.01 × 0.01 × 10,000 = 2.31. In other words, these particular values for the Drake equation suggest there are roughly two alien civilisations somewhere in the galaxy waiting for us to pick up the phone.

The high uncertainty involved means that any value for N could be very wrong, though. When multiplying estimated quantities, their relative (percentage) errors add, and with seven variables that combined error soon gets quite large. Critics of the Drake equation say this makes it impossible to calculate a meaningful value for N. In a way they're right, but the Drake equation isn't really designed to give us an answer about the existence of extraterrestrial life. Its real significance is in helping us ask the right questions. The terms in the equation bring together astronomy, biology, and even sociology to produce a model of life among the stars. By asking the question "are we alone in the universe?" in the language of maths, we learn more about the universe around us, and our place within it.
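The multiplication above is simple enough to script if you want to experiment with your own estimates (a minimal sketch; the function and argument names are mine, not from the article):

```python
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """N = R* × f[p] × n[e] × f[l] × f[i] × f[c] × L."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# The article's example values:
n = drake(7, 0.5, 2, 0.33, 0.01, 0.01, 10_000)
print(round(n, 2))  # → 2.31
```

Swapping in your own guesses for any factor shows quickly how sensitive N is: halving f[i] alone halves the final answer.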
How to gamble with demons (and make money doing it) — Raposa

A demon comes to you one night and offers you a simple dice game. You wager your wealth and receive a 50% increase if you roll a 6, a 5% bump if you roll 2-5, or lose 50% if you roll a 1. You get 300 rolls and compound your wealth with each roll of the die. Do you play?

Most people would jump at this opportunity. They'd look at the average of these payoffs and think they have a 3.3% edge. If they do the math on that, they'd conclude that the expected value comes out to the easiest 1,700,000% (1.033^300) return on their wealth they'll ever see! This is a demon, though, so what's the catch? In fact, talk to any experienced gambler, card player, or trader, and they'd tell you that even with that edge, betting the house on each roll of the die is a fool's errand.

This is the most important chapter in Mark Spitznagel's book, Safe Haven: Investing for Financial Storms. Not because he shares the "secret sauce" of his tail-hedging fund (spoiler alert: he doesn't), but because he gives you the tools to approach risk mitigation through the example of a dice game and three ways we can play it. We walk through each of the examples with code, plus add a few wrinkles not found in the book, to help clarify how Spitznagel considers risk mitigation in his portfolio. If you don't want to code, but just want to learn how to trade, you can check out the free demo of our no-code algorithmic trading platform here.

Reset Your Expectations

Just like the average family with 2.3 kids doesn't exist, your average return doesn't either. In fact, you only get this 3.3% return in a special case that Mark Spitznagel terms playing with Schrödinger's Demon. In physicist Erwin Schrödinger's famous thought experiment about a quantum-entangled cat, the cat is in a superposition, rendering it both alive and dead at the same time.
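The naive player's back-of-the-envelope calculation is easy to reproduce (a quick sketch; `payoffs` holds the gross per-roll wealth multipliers from the game above):

```python
import numpy as np

# gross wealth multipliers: lose 50% on a 1, gain 5% on 2-5, gain 50% on a 6
payoffs = np.array([0.5, 1.05, 1.05, 1.05, 1.05, 1.5])

edge = payoffs.mean() - 1          # the arithmetic "edge" per roll
print(f"arithmetic edge per roll: {edge:.1%}")  # prints 3.3%

naive = payoffs.mean() ** 300      # the "average" payoff compounded 300 times
# on the order of the 1,700,000%-style figure quoted above
print(f"naive expected multiple: {naive:,.0f}x")
```

As the rest of the post shows, this arithmetic mean is exactly the wrong number to compound.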
In the dice game, looking at the arithmetic average is like assuming every roll gives you the payoff from all the outcomes simultaneously. So yes, the average has a great payoff, but only in this multiverse world where you get all of the results at once does it actually mean anything to the individual dice player. In reality, you get to play with what Spitznagel terms Nietzsche's Demon: you get one chance at life and this dice game, so you'd better make it count! (I'm not going to go into why he uses Nietzsche, Schrödinger, and demons; we'll be plagiarizing enough of the book in this blog post, so go ahead and get the book here for the humorous and entertaining vignettes that motivate the math; it's worth the read!) This requires a different mathematical technique to see if you should play.

Simulating the Roll of a Die

Because I'm slow (or skeptical) I didn't really "get" the power of Spitznagel's point until I recreated the examples myself. The examples aren't complicated; they're deceptively simple, which makes them great tools to learn from. Moreover, we can run our own Monte Carlo simulation in Python in just a few lines of code! Of course, you don't have to run the code yourself (although doing so will let you ask new questions), because we provide detailed explanations along the way. Enough blabber. Let's import some basic packages and write a function to simulate our game with Nietzsche's Demon of the dice.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

np.random.seed(0)  # any fixed seed works; it just makes the runs reproducible
```

A quick side note: it's always recommended to set your random seed when doing any kind of stochastic simulation like this so that the results are reproducible. I don't always follow my own advice, but I did remember to insert that simple line above so that you can run these exact experiments for yourself. With that, we've got a 6-sided die with the -50%, 50%, and 5% payoffs we stated above. We'll run 10,000 simulations of 300 rolls each.
That's 3 million simulated rolls, which can easily be handled using NumPy and some vectorized functions for clean, concise code.

```python
def NietszcheDice(cash: float = 0,
                  returns: list = [0.5, 1.05, 1.05, 1.05, 1.05, 1.5],
                  rolls: int = 300,
                  samples: int = 10000):
    bet = 1 - cash
    adj_returns = cash + bet * np.asarray(returns)
    roll_sims = np.random.choice(adj_returns,
                                 size=(rolls, samples)).reshape(-1, rolls)
    return roll_sims.cumprod(axis=1)
```

You can pass a different list of returns, or increase or decrease the number of sampled simulations or the number of rolls. There's also a bit of foreshadowing going on in the extra `cash` argument; we'll explain it shortly, but for now it's set to 0 and doesn't affect anything. We can get our 10,000 simulated trajectories of our wealth with:

```python
n_traj = NietszcheDice()
```

Each trajectory corresponds to a simulated series of 300 rolls. The last column of the trajectory array is our ending wealth fraction, which we can analyze to see how we fared. Now, let's write one more helper function to analyze all of this data. We want to see what the relevant quantiles of our trajectories are, so we'll write a quick function to handle that:

```python
def getQuantilePath(trajectories: np.array, q: float = 0.5):
    quantile = np.quantile(trajectories[:, -1], q=q)
    path = trajectories[np.abs(quantile - trajectories[:, -1]).argmin()]
    return quantile, path
```

This function takes our trajectory array and finds the trajectory whose ending wealth most closely matches the requested quantile. So, if we're looking for the median (the 50th percentile), we pass q=0.5 and get back the median value and the particular path it took, for easy visualization. Now, let's plot the results of our simulation (I'll be using this exact same plotting code throughout the post, so I won't repeat it for each plot).
```python
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']

perc50, path50 = getQuantilePath(n_traj)
perc95, path95 = getQuantilePath(n_traj, q=0.95)
perc5, path5 = getQuantilePath(n_traj, q=0.05)
path_avg = n_traj.mean(axis=0)

fig = plt.figure(figsize=(15, 8))
gs = fig.add_gridspec(1, 2, width_ratios=(3, 1))
ax = fig.add_subplot(gs[0])
ax_hist = fig.add_subplot(gs[1])
ax.plot(path50, label='Median')
ax.plot(path95, label=r'$95^{th}$ Percentile')
ax.plot(path5, label=r'$5^{th}$ Percentile')
ax.plot(path_avg, label='Mean', linestyle=':')
# shade the 5th-95th percentile band (this call was garbled in the original)
ax.fill_between(np.arange(path50.shape[0]), path5, path95,
                alpha=0.3, color=colors[4])
ax.set_title('Playing Dice with your Wealth')
ax.set_ylabel('Ending Wealth')
ax.legend()

growth = (np.power(n_traj[:, -1], 1/300) - 1) * 100
growth_med = (np.power(path50[-1], 1/300) - 1) * 100
growth_avg = (np.power(path_avg[-1], 1/300) - 1) * 100
ax_hist.hist(growth, orientation='horizontal', bins=50,
             color=colors[4], alpha=0.3)
ax_hist.axhline(0, label='Break Even', color='k', linestyle=':')
ax_hist.axhline(growth_med, label='Median', color=colors[0])
ax_hist.axhline(growth_avg, label='Mean', color=colors[3])
ax_hist.set_ylabel('Compound Growth Rate (%)')
ax_hist.legend()
plt.show()
```

The game with the 3.3% edge doesn't look so good if you imagine playing it 10,000 times! The average case (the dotted orange line) increases steadily to give you that great payoff, but if you look at the histogram on the right, you'll see that very few people are so lucky as to be "average." In fact, only 0.9% of outcomes are average or better. This is because the returns are heavily skewed, so the average is much higher than the median, which is where you're much more likely to end up. Looking at the median, we see we end up with a 99% loss. That's right: betting everything with a fat, enviable 3.3% edge is going to leave us with 1% of our wealth in most cases.
Of course, if you read our previous post, you'd know that you could take a shortcut and calculate all of this with the geometric average to get the median return and our compound growth rate. (In the median 300-roll sequence, each of the six faces comes up 50 times, hence the `i**50` terms.)

```python
returns = [0.5, 1.05, 1.05, 1.05, 1.05, 1.5]
print(f"Median Outcome: {(np.prod([i**50 for i in returns]) - 1)*100:.2f}%")
x = (np.power(np.prod([i**50 for i in returns]), 1/300) - 1) * 100
print(f"Compound Growth Rate of Median: {x:.2f}%")
```

Median Outcome: -99.02%
Compound Growth Rate of Median: -1.53%

What if we could change the rules just a smidge? How about instead of betting all of our wealth on each roll of the dice, we bet 40% of our wealth and hold 60% back in cash? What would that do?

Positioning to Win

Let's use that extra argument in our NietszcheDice() function to keep 60% of our wealth in cash and re-run the experiment above.

```python
# With cash in reserve
cash = 0.6
cash_traj = NietszcheDice(cash)
```

This 60/40 portfolio of sorts greatly increases our returns! It may seem counterintuitive: we've reduced our arithmetic average from 3.3% down to 1.3%, but we've boosted our geometric mean up to 0.64%. Over 300 rolls, this translates into a median outcome of a 582% increase in our overall wealth. Nothing was done here apart from reducing our bet size. This should highlight the critical importance of risk management and position sizing when trading: take on too much risk and you're likely to blow up your account, as we saw in the first example. We reduced the mean and increased the median, but we also increased our odds of making money. In the first example, 79% of our trajectories wound up losing money and only 1% were above average. In this case, only 18% lost money and 16% were above the average. We have also reduced our variance: our 5th and 95th percentile cases are much closer to one another. Of course, the extremely high wealth achieved by the extremely lucky in the all-in case has been reduced, but this strikes me as a worthwhile trade-off.
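Those 60/40 figures can also be verified analytically from the adjusted per-roll payoffs, with no simulation at all (a quick independent sketch; variable names are mine):

```python
import numpy as np

returns = np.array([0.5, 1.05, 1.05, 1.05, 1.05, 1.5])
cash = 0.6
adj = cash + (1 - cash) * returns      # per-roll payoffs with 60% held back

am = adj.mean() - 1                    # arithmetic mean return per roll
gm = np.prod(adj) ** (1 / 6) - 1       # geometric (compound) growth per roll
median_outcome = (1 + gm) ** 300 - 1   # median wealth change over 300 rolls

print(f"arithmetic mean: {am:.1%}")              # prints 1.3%
print(f"geometric mean:  {gm:.2%}")              # prints 0.64%
print(f"median outcome:  {median_outcome:.0%}")  # prints 582%
```

Holding cash shrinks the spread of the per-roll payoffs, which is exactly what pulls the geometric mean up even as the arithmetic mean falls.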
The Optimal Bet Size

So we see that reducing our bet size to 40% boosts our returns. But is this the best we can do? To test this, we can run our NietszcheDice Monte Carlo simulation for any bet size we want and compare the results. We'll generate the data and see what maximizes our compound growth rate. The code to run this is given below. Note that I'm running each fraction 10 times and averaging the median values to smooth the curves a bit. If you have a slow computer, this may take a few minutes to run.

```python
# Optimal tradeoff
cash_frac = np.linspace(0, 1, 101)[::-1]
N = 10  # multiple runs to smooth out the values
vals5 = np.zeros((len(cash_frac), N))
vals50 = vals5.copy()
vals95 = vals5.copy()
for i in range(N):
    for j, f in enumerate(cash_frac):
        traj = NietszcheDice(f)
        perc5, _ = getQuantilePath(traj, 0.05)
        perc50, _ = getQuantilePath(traj, 0.5)
        perc95, _ = getQuantilePath(traj, 0.95)
        vals5[j, i] += perc5
        vals50[j, i] += perc50
        vals95[j, i] += perc95

vals5_smooth = vals5.mean(axis=1)
vals50_smooth = vals50.mean(axis=1)
vals95_smooth = vals95.mean(axis=1)

# Plot the results
plt.figure(figsize=(12, 8))
plt.plot(vals5_smooth, label=r'$5^{th}$ Percentile')
plt.plot(vals50_smooth, label=r'$50^{th}$ Percentile')
plt.plot(vals95_smooth, label=r'$95^{th}$ Percentile')
plt.scatter(vals5_smooth.argmax(), vals5_smooth.max(), marker='*', s=200)
plt.scatter(vals50_smooth.argmax(), vals50_smooth.max(), marker='*', s=200)
plt.scatter(vals95_smooth.argmax(), vals95_smooth.max(), marker='*', s=200)
plt.xlabel('Percentage of Wealth Wagered')
plt.ylabel('Ending Wealth')
plt.title('Optimal Bet Size')
plt.legend()
plt.show()
```

It turns out that the 40% wager we chose is roughly optimal (Spitznagel set it up this way), as shown by the star on the 50th percentile curve in the plot above.
Note that this is the bet that maximizes the 50th percentile; if we were more risk averse, we could instead optimize for the 5th percentile and go into 94% cash while betting only 6% of our wealth (that reduces our expectation to 5.8% total growth in our wealth from the 582% increase we saw when maximizing the median, but you get the idea). Note too that if you increase your bet much beyond that 40% level, your median wealth starts to fall, and the drop-off gets steeper the further off base you are.

The bet that maximizes the median can be calculated using the Kelly Criterion, which provides a precise formula for our position size. We're not going to go through the derivation here, but for the curious, the Kelly growth rate for multiple discrete outcomes is:

$$g^* = \max_x \left[ \prod_{i=1}^N (1 + w_i x)^{p_i} - 1 \right]$$

This means we're choosing x, our bet size, to maximize g*. Here, w_i is the net payoff of each outcome (e.g. -50%, 5%, and 50%) and p_i is the probability of each outcome (we've got a die, so 1/6 for each). The easiest way to find the bet size that maximizes our returns is to try every fraction and see where g* is highest. If we do that, we get the following plot:

```python
def discreteKellyCriterion(x: float, returns: list, probs: list):
    return np.prod([(1 + b * x)**p for b, p in zip(returns, probs)]) - 1

probs = np.repeat(1/6, 6)
net_returns = [-0.5, 0.05, 0.05, 0.05, 0.05, 0.5]  # net, not gross, payoffs
g = np.array([discreteKellyCriterion(f, net_returns, probs)
              for f in cash_frac])
g *= 100

plt.figure(figsize=(12, 8))
plt.plot(cash_frac, g)
plt.xlabel('Fraction Bet')
plt.ylabel('Compound Growth Rate (%)')
plt.title('Optimal Bet Size According to the Kelly Criterion')
plt.show()
```

The maximum growth rates for the simulated and theoretical results are very close.
```python
# Simulated and theoretical are very close
g_sim = (np.power(vals50_smooth, 1/300) - 1) * 100
print(f"Simulated Max Growth Rate: {g_sim.max():.2f}")
print(f"Theoretical Max Growth Rate: {g.max():.2f}")
```

Simulated Max Growth Rate: 0.62
Theoretical Max Growth Rate: 0.64

And the theoretical optimum comes out to a 64/36 cash/bet split. Spitznagel doesn't stop with the power of the Kelly Criterion, though; he continues by offering a tantalizing example of what's possible if you can insure yourself against a loss.

Playing Dice with Insurance

In our previous post, we showed how paying for insurance can increase the returns of our 17th-century merchant who ran the risk of pirates. The same idea can be applied to our dice game, as well as to your portfolio. Spitznagel asks us to play the same dice game, except now we can insure against the loss from rolling a 1. If we roll a 1, we get a 500% return on our insurance allocation; otherwise we lose our premium. If we allocate 9% of our capital to insurance on every roll and wager the other 91%, our arithmetic average drops to 3% versus the 3.3% edge we saw in the original game.

```python
# Playing Dice with Insurance
f = 0.09
insurance = np.array([6, 0, 0, 0, 0, 0])
returns = np.array([0.5, 1.05, 1.05, 1.05, 1.05, 1.5])
ins_rets = f * insurance + (1 - f) * returns
print(f'Mean Returns with Insurance {(ins_rets.mean() - 1) * 100:.1f}%')
```

Mean Returns with Insurance 3.0%

You probably see where this is going, so I'll cut to the chase: our geometric mean (the one that does all of that valuable compounding) rises to 2.1% with this trade-off.

```python
ins_gm = (np.power(np.prod(np.power(ins_rets, 50)), 1/300) - 1) * 100
print(f'Geometric Mean with Insurance {ins_gm:.1f}%')
```

Geometric Mean with Insurance 2.1%

To simulate this, we'll have to modify our NietszcheDice() function slightly to accommodate both return profiles.
```python
def NietszcheDiceIns(ins_frac: float = 0,
                     dice_returns: list = [0.5, 1.05, 1.05, 1.05, 1.05, 1.5],
                     ins_returns: list = [6, 0, 0, 0, 0, 0],
                     rolls: int = 300,
                     samples: int = 10000):
    bet = 1 - ins_frac
    # note: use the function's own arguments, not the globals f and returns
    adj_returns = ins_frac * np.asarray(ins_returns) \
        + bet * np.asarray(dice_returns)
    roll_sims = np.random.choice(adj_returns,
                                 size=(rolls, samples)).reshape(-1, rolls)
    return roll_sims.cumprod(axis=1)
```

Now, we can simulate our 10,000 trajectories and see how this model performs.

```python
# With insurance
ins_frac = 0.09
ins_traj = NietszcheDiceIns(ins_frac)
```

Adding this dice insurance does remarkable things for our payoff profile! We've eliminated those 50% losses entirely at the cost of reducing our winning payoffs by 9%. It certainly seems like a worthy trade-off, because we've drastically cut our probability of losing money (only 0.2% of runs finished in the red) and boosted our compound growth rate. What is most striking is that we went from losing money on 1 roll in 6 to losing money on 5 rolls in 6 with the insurance, yet improved our payoff. This is precisely the kind of asset Spitznagel is referring to when he writes about safe havens: we have mitigated our risk in a cost-effective manner, because adding it to our portfolio raised our CAGR. As we did for the others, let's look at the optimality curve for the insurance investment. Buying too much insurance (moving farther left on the plot) is disastrous for your portfolio: the payoff is too small and too rare to make up for the premiums, and your wealth quickly plummets into an abyss you're not going to climb out of. Let's zoom in on those optimal sizes. What's nice about the insurance case is that you don't need much of it to provide big returns. As Spitznagel puts it, insurance is "like a pinch of salt - just a pinch becomes the most important ingredient to the dish, whereas more than a pinch ruins it." You can see that impact in the plot. The optimal point is much narrower here, and thus unforgiving, so don't overdo it!
Not only is the amount important, but what you pay matters too. With the 500% return, the amount you get back exactly matches what you're expected to pay into the insurance via your premium. For the insurance company, the arithmetic average is what matters: they can spread their bets across multiple realizations (like you could with Schrödinger's Demon). Due to competition, you'd expect their edge to be low, a fraction of a percent, but this example sets it at 0; essentially, the insurance is perfectly priced, and you're not likely to find insurance that cheap to support your gambling. So, what do these payoffs look like if we reduce the insurance payout, and how much should we pay for insurance before we fall off our compounding-wealth cliff? We can investigate by computing our returns for each fraction of the portfolio allocated to insurance at different payout levels and taking the geometric mean. We'll calculate all of these, then plot and examine the results.

```python
# What to pay for insurance?
plt.figure(figsize=(12, 8))
fracs = np.linspace(0.8, 1, 21)
for i in reversed(range(3, 7)):
    growth = []
    ins_rets = np.array([i, 0, 0, 0, 0, 0])
    for f in fracs:
        rets = (1 - f) * ins_rets + f * np.asarray(returns)
        g = (np.power(np.prod(np.power(rets, 50)), 1/300) - 1) * 100
        growth.append(g)
    plt.plot(fracs * 100, growth, label=f'{(i-1)*100}% Insurance Return')
    plt.scatter(fracs[np.argmax(growth)] * 100, max(growth),
                marker='*', s=200)
plt.ylabel('Growth Rate (%)')
plt.xlabel('Percentage Wagered')
plt.title('Returns for Different Insurance Payoffs')
plt.legend()
plt.show()
```

The best we could hope for (unless there was systematic mispricing of insurance in the market, which has happened before) is that perfect price where we get a 500% return (the upper red line). We'd expect to be somewhere below that. If we only get a 400% return every time we hit a 1, our compound growth rate drops from 2.1% to 0.55%, and we need to reduce our allocation to insurance from 9% to 7%.
If insurance gets much more expensive than that, we move into negative expectations, but at least in the case of a 300% payout we do better with some (5%) insurance allocation than with none. Below that, it's better to go without insurance altogether than to pay for it.

Cash, Insurance, and your Optimal Wager

I got curious, so I decided to go one step further and look at the trade-off between our three asset classes to see what gives the best results in each of these cases. To do this, I calculated the geometric mean for different combinations of cash and insurance to see if this portfolio would boost returns at all. Here's the (ugly) code to do it:

```python
# Cash and Insurance
fig, ax = plt.subplots(1, 3, figsize=(20, 8), sharey=True)
for n, j in enumerate(reversed(range(4, 7))):
    ins_rets = np.array([j, 0, 0, 0, 0, 0])
    cash_frac = np.linspace(0, 1, 101)
    ins_frac = np.linspace(0, 0.2, 21)
    growth = np.zeros((len(ins_frac), len(cash_frac)))
    for i, f in enumerate(ins_frac):
        _growth = np.zeros(len(cash_frac))
        for k, c in enumerate(cash_frac):
            if f + c > 1:
                continue  # can't allocate more than 100% of wealth
            rets = c + f * ins_rets + (1 - c - f) * np.asarray(returns)
            g = (np.power(np.prod(np.power(rets, 50)), 1/300) - 1) * 100
            _growth[k] += g
        growth[i] += _growth
    m = np.where(growth == growth.max())
    X, Y = np.meshgrid(ins_frac, cash_frac)
    cont = ax[n].contour(X * 100, Y * 100, growth.T, cmap=plt.cm.plasma,
                         levels=[-1, 0, 0.25, 0.5, 1, 2, 2.5])
    ax[n].set_xlabel('Insurance Allocation (%)')
    ax[n].set_title(f'Insurance Payoff = {(ins_rets[0] - 1) * 100}%')
    # Some ugly code to make sure that the labels don't overlap
    _x_loc = 15
    if j == 6:
        _x_loc = 6
    ax[n].annotate(r'$g^* = {:.2f}$%'.format(growth.max()),
                   xy=(_x_loc, 20), size=10)
    ax[n].annotate(f'Cash = {cash_frac[m[1]][0] * 100:.0f}%',
                   xy=(_x_loc, 15), size=10)
    ax[n].annotate(f'Insur. = {ins_frac[m[0]][0] * 100:.0f}%',
                   xy=(_x_loc, 10), size=10)

ax[0].set_ylabel('Cash Allocation (%)')
cbar = fig.colorbar(cont, ax=ax.ravel().tolist(), shrink=0.95,
                    drawedges=True)
fig.suptitle('Expected Growth Rate for Cash and Insurance Allocation')
plt.show()
```

With cheap insurance, the best bet is just to play the optimal insurance game. If it gets a bit more expensive, we go to 40% cash plus a pinch of insurance (3%). If the insurance gets too expensive, we drop it altogether and just take the Kelly bet we found above.

We've got Options!

It's common to come across discussions claiming that "tail risk hedging doesn't pay." From this simple example, though, you can see that most of the simplistic models behind those conclusions devote a naive, fixed percentage of the portfolio to option strategies, cash, treasuries, or other safe-haven strategies. For example, in this post, we see that their insurance strategy included a variety of out-of-the-money puts (table below). That's fine, but from the previous examples we saw that the price you pay for the insurance is crucial, and there's no indication that was taken into consideration. Likely, they had multiple periods where they were overpaying for insurance by simply allocating 10% (the precise amount isn't clear) to out-of-the-money puts. A more robust strategy would be dynamic, shifting to cash or treasuries as option prices increase and back to options as they decrease. In investing, it's never as simple as "tail hedging doesn't work" or any other blanket statement that gets thrown around. Our favorite to push back on is "retail traders always lose money." Yes, a lot do, and we don't dispute that, but most aren't reading about proper risk management and systematic investing either!
We believe that with good principles in place, you can make money trading, and that the best way to do it is through a well-tested, algorithmic approach that takes the emotions out of trading. Traditionally, this has been open only to those with the money to pay hedge funds to manage their wealth, or to those with lots of math and coding skill (and copious amounts of free time) to develop and test strategies themselves. We're building a no-code trading platform to let average investors research quantitative strategies and deploy them to trade in the markets on their behalf. We don't think algorithmic and quantitative trading should be the domain of the few. If you're interested, check out our free demo and join us as we democratize quantitative investing!
Does OpenSim report state equations?

Hi everyone. I was wondering if I can get the state equations of a model, either explicit or implicit. It would be great if I could have the state equations in this form:

dx/dt = f(x,u)

where u is the vector of excitations. To be more clear: these are the same equations that are integrated forward in time when using the Forward Dynamics tool.

Re: Does OpenSim report state equations?

Hi again. It seems that I was not clear enough! I'll try to reword my question, and I hope this time it will be crystal clear; let's hope someone knows the answer (fingers crossed!). Let's assume I know the states and excitations at a specific time t=t0; in other words, X(t=t0) and U(t=t0) are known. Is there a function which returns the Xdot vector at that specific time t=t0? In other words, is there a function that reports Xdot(t=t0)? I searched the Doxygen documentation and came up with these three functions, but I am not sure if they report the values I am looking for:

State::getQDot() ----> for position variables q
State::getUDot() ----> for velocity variables u
State::getZDot() ----> for auxiliary variables z (muscle fiber length and muscle activation)

Are these three functions reporting the values that I need? And if yes, how can I input the X and U vectors (states and excitations) into these functions?

Re: Does OpenSim report state equations?

Hi, Sina. Your question is clear, and the answer is yes, but it is somewhat trickier than you might be thinking, because you have to make use of the OpenSim API to establish the correspondence between entities in the model and the individual states and activations. Are you hoping to use the plant model in Matlab? If you provide a little more information about what you are trying to do, we may be able to find a relevant example.

Re: Does OpenSim report state equations?

Hi Sherm, and thank you for the reply.
I am using the OpenSim API to solve an optimal control problem, and I am using an open-source optimizer written in C++ (everything is happening in C++). I will try my best to avoid a super-technical question! I am solving a constrained problem, which means I want to find the optimum X and U which minimize this error function:

Err = F( X(t=t0), U(t=t0) ) - Xdot(t=t0)

To avoid technical terms, I will not explain the F function here. Xdot(t=t0) is described in my previous post. In other words, I have two methods to calculate Xdot at time t=t0:

1- using the functions getQDot(), getUDot(), and getZDot()
2- using the F function.

Both methods take the same inputs (X(t=t0) and U(t=t0)), and I am trying to minimize the error between their outputs. My question is: how can I change the default values of X and U to X(t=t0) and U(t=t0) before calling getQDot(), or how can I input X(t=t0) and U(t=t0) into getQDot()?

Re: Does OpenSim report state equations?

Hi, Sina. I'm confused now. Your error equation err = f(x,u) - xdot(t) doesn't specify arguments for xdot except time. Does that mean you are content to let OpenSim choose the states and activations? Or did you intend that OpenSim would calculate xdot based on your chosen values of x and u? Can you be precise about the unknowns, inputs, and outputs of the two functions you are comparing? There is no need to avoid a technical discussion if that would make things clearer. If what you mean is err = f(t,x,u) - xdot(t,x,u) with the same t, x, and u on each side, then the error would be minimized by f being identical to OpenSim's xdot calculation. I'm having trouble seeing that as an optimization problem involving a search for an optimum x and u! Alternatively, if you have perhaps a simplified model in f, with its own states x and activations u that you would like to find, then it isn't clear what the inputs to OpenSim are, nor how the state derivatives could be matched up.
So I am sure I don't actually understand what you're trying to do!

Re: Does OpenSim report state equations?

Hi Sherm. In general, an optimal control problem is to find the optimum control u* which causes the system:

ydot(t) = f (y(t), u(t), t)    (eq. 1)

to follow the optimum trajectory y* that minimizes/maximizes a cost function. In (eq. 1), y is the state vector, u is the control vector and t is the independent variable (usually time). In biomechanical studies the state vector (y) includes positions, velocities, muscle fibre lengths and muscle activations, while the controls are the muscle excitations. Two well known methods to solve these kinds of problems are Direct Shooting (DS) and Direct Collocation (DC). In the DS method, the controls are guessed and the state differential equations (eq. 1) are integrated explicitly. Anderson and Pandy used DS to find the optimal solution for normal gait. In the DC method, both controls and states are guessed and the state differential equations (eq. 1) are defined as constraint functions. In other words this method tries to find the optimum y and u which are consistent with each other. I am trying to use the DC method. As I guess both y and u in this method, I can calculate the left hand side of (eq. 1) based on my guess. I want to input these guesses to OpenSim to calculate the right hand side of (eq. 1). If the error between these two values is less than a threshold, I consider the guessed y and u consistent. After this brief introduction, I have two questions:

1- Based on your previous post, SimTK::State::getYDot() should return the right hand side of (eq. 1). I assume I should use SimTK::State::setY(const Vector &y) to input the guessed states. The question is: how can I input the guessed u (guessed excitations)?

2- As I was not sure what the output of SimTK::State::getYDot() is, I tested it on the TugofWar model.
Though the model has 16 states (6 general coordinates + 6 general velocities + 2 muscle fibre lengths + 2 muscle activations), the output of this function is a vector with 17 components, which is weird.

Re: Does OpenSim report state equations?

Hi, Sina. It's the correspondence between the model and the state and input variables that makes this tricky. There may also be constraint equations that must be satisfied by the states -- you would have to include constraint errors in your cost function or project any guessed states to the constraint manifold prior to evaluating the cost. Luckily I managed to get hold of Ajay Seth (who is on vacation) -- he said he has an approach he uses for this in OpenSim and will respond.

Re: Does OpenSim report state equations?

P.S. I'm not certain but I think the reason you got 17 states for the tug of war is that general orientation requires a quaternion to avoid singularities. That is, a rigid body in space requires 7 generalized coordinates and 6 generalized speeds, plus a constraint on the quaternion length.

Re: Does OpenSim report state equations?

Hi again, I reckon it shouldn't be that tricky, because the forward dynamic tool needs the same ydot vector in order to integrate the state equations forward in time. Something like y(t+h) = y(t) + h*ydot( t, y(t), u(t) ), which is a first-order Euler method. Of course the FD tool uses more accurate methods, but all of them will need to calculate ydot( t, y(t), u(t) ) at some point.

Re: Does OpenSim report state equations?

There are multiple ways you can set the individual state variables and the controls and evaluate the model dynamics to yield the system dynamics in first order form: ydot = f(y,u). I use the model component methods like setActivation and setFiberLength to set the state variables for muscles. You can do the same thing for Coordinates. The model provides the context, and components and their states are all accessible by name.
If you don't care about context and want to solve the system of equations directly, you can use Vector& y = state.updY() to get a writable vector. If you want more granularity you can get the vector of coordinates using updQ(), mobilities using updU(), and additional states (typically activations and fiber lengths of muscles) with updZ(). Note that updating these will invalidate the system to different stages (Q: position and above, U: velocity and above, Z: dynamics and above). You knew this already it seems. The question then is how to update the controls? The controls are not state variables, but they are part of the system and they are owned by the Model (which is also a model component). The model has similar methods to the State access above, such as Vector& controls = model.updControls(state), that provide a writable controls vector. Note that if you update the controls, the dynamics stage is invalidated, since the dynamics of the system are (typically) dependent on the controls. After you have updated the model state variables and controls you must realize the system to the acceleration stage to access Ydot from the state, e.g. model.getMultibodySystem().realize(state, SimTK::Stage::Acceleration). Note, if the system has kinematic constraints, the SimTK integrators employ a projection technique to ensure that the coordinates and speeds remain on the constraint manifold. If you are solving the functional simultaneously across multiple fixed time intervals (as in direct collocation) you need to enforce the kinematic constraints as well. As Sherm suggested, the easiest way is to include the constraint error in your cost function. You can get the Q and U errors from the state as well (once you are realized to the Velocity stage or above). Then you should be all set. Thank you for your patience.
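To make the pattern discussed in this thread concrete, here is a toy Python stand-in (illustrative only — this is not OpenSim code, and the plant below is a made-up damped oscillator; in the real API the derivative would come from State::getYDot() after setting the state and controls and realizing to the Acceleration stage):

```python
# Toy stand-in for the workflow above: a 2-state linear plant ydot = f(y, u).
def f(y, u):
    """State derivative of a damped oscillator driven by a control u."""
    return [y[1], -2.0 * y[0] - 0.5 * y[1] + u[0]]

# Direct-collocation style consistency check: guess y and u, then compare
# the model's ydot against the "plant" ydot evaluated at the same (y, u).
y_guess = [1.0, 0.0]
u_guess = [0.5]
ydot_plant = f(y_guess, u_guess)   # plays the role of getYDot() here
err = [a - b for a, b in zip(f(y_guess, u_guess), ydot_plant)]
# err == [0.0, 0.0] when the guessed (y, u) are consistent with the plant

# First-order (forward) Euler step, as written in the thread:
#   y(t+h) = y(t) + h * ydot(t, y(t), u(t))
h = 0.01
y_next = [yi + h * di for yi, di in zip(y_guess, f(y_guess, u_guess))]
```

In a collocation transcription, the residual `err` (one per collocation point) is what gets handed to the optimizer as an equality constraint, rather than being integrated forward as in direct shooting.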
Post-hoc tests from class: Probabilistic Decision-Making

Post-hoc tests are statistical analyses conducted after an initial test, such as ANOVA, to determine which specific group means are significantly different from each other. These tests are essential when the overall ANOVA indicates significant differences among groups, as they help identify the precise locations of these differences while controlling for Type I error across multiple comparisons.

5 Must Know Facts For Your Next Test

1. Post-hoc tests are necessary only when the ANOVA test shows statistically significant results, indicating that at least one group mean differs from others.
2. These tests help control for the increased risk of Type I error that occurs when multiple comparisons are made.
3. Different types of post-hoc tests exist, including Tukey's HSD, Bonferroni correction, and Scheffé's method, each with its strengths and weaknesses.
4. Post-hoc tests can be sensitive to the number of groups and sample sizes, affecting the choice of which test to use.
5. It is crucial to interpret the results of post-hoc tests in the context of the original research question and the overall experimental design.

Review Questions

• How do post-hoc tests enhance the interpretation of results obtained from ANOVA?

Post-hoc tests enhance the interpretation of ANOVA results by identifying which specific group means differ after finding a significant overall effect. While ANOVA indicates that there are differences among groups, it does not specify where those differences lie. By applying post-hoc tests, researchers can pinpoint exact pairs of groups that show statistically significant differences, providing a clearer understanding of how the groups compare.
• Discuss the importance of controlling Type I error in post-hoc testing and how different tests achieve this.

Controlling Type I error in post-hoc testing is crucial because conducting multiple comparisons increases the chance of falsely rejecting the null hypothesis. Different post-hoc tests employ various strategies to maintain this control. For example, Tukey's HSD adjusts for multiple comparisons by determining a critical range for mean differences based on sample sizes. In contrast, Bonferroni correction applies a more conservative approach by adjusting the significance level based on the number of comparisons being made, thus reducing the likelihood of Type I errors.

• Evaluate how the choice of post-hoc test might influence research conclusions in an experiment with unequal sample sizes across groups.

The choice of post-hoc test can significantly influence research conclusions, especially in experiments with unequal sample sizes across groups. Some tests, like Tukey's HSD, are robust to violations of homogeneity and can handle unequal variances well. However, other tests might be overly conservative or liberal depending on sample size disparities. For instance, using Bonferroni correction in such cases might lead to an overly cautious interpretation, potentially missing real differences between groups. Therefore, selecting an appropriate post-hoc test is essential to draw valid conclusions from experimental data.
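As a concrete sketch of the Bonferroni correction described above (plain Python with hypothetical p-values; Tukey's HSD needs the raw group data and is usually done with a statistics package):

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction: compare each p-value to alpha/m, or equivalently
    multiply each p-value by the number of comparisons m (capped at 1)."""
    m = len(p_values)
    adjusted = [min(1.0, p * m) for p in p_values]
    reject = [p_adj <= alpha for p_adj in adjusted]
    return adjusted, reject

# Hypothetical p-values from 4 pairwise comparisons after a significant ANOVA
p_vals = [0.010, 0.020, 0.400, 0.005]
adjusted, reject = bonferroni(p_vals)
# Only the 1st and 4th comparisons survive the correction at alpha = 0.05,
# even though three of the raw p-values were below 0.05.
```

This shows the conservatism discussed above: the second comparison (raw p = 0.02) is no longer significant once the family-wise adjustment is applied.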
The projected shell model analysis is carried out using the triaxial Nilsson+BCS basis. It is demonstrated that, for an accurate description of the moments of inertia in the transitional region, it is necessary to take the triaxiality into account and perform the three-dimensional angular-momentum projection from the triaxial Nilsson+BCS intrinsic wavefunction. Comment: 9 pages, 2 figures

Information extraction is the task of automatically picking up information of interest from an unconstrained text. Information of interest is usually extracted in two steps. First, sentence level processing locates relevant pieces of information scattered throughout the text; second, discourse processing merges coreferential information to generate the output. In the first step, pieces of information are locally identified without recognizing any relationships among them. A key word search or simple pattern search can achieve this purpose. The second step requires deeper knowledge in order to understand relationships among separately identified pieces of information. Previous information extraction systems focused on the first step, partly because they were not required to link up each piece of information with other pieces. To link the extracted pieces of information and map them onto a structured output format, complex discourse processing is essential. This paper reports on a Japanese information extraction system that merges information using a pattern matcher and discourse processor. Evaluation results show a high level of system performance which approaches human performance. Comment: See http://www.jair.org/ for any accompanying file

Varied signature splitting phenomena in odd proton rare earth nuclei are investigated. Signature splitting as functions of $K$ and $j$ in the angular momentum projection theory is explicitly shown and compared with those of the particle rotor model. The observed deviations from these rules are due to the band mixings.
The recently measured $^{169}$Ta high spin data are taken as a typical example where fruitful information about signature effects can be extracted. Six bands, two of which have not yet been observed, were calculated and discussed in detail in this paper. The experimentally unknown band head energies are given.

Within the framework of on-line learning, we study the generalization error of an ensemble learning machine learning from a linear teacher perceptron. The generalization error achieved by an ensemble of linear perceptrons having homogeneous or inhomogeneous initial weight vectors is precisely calculated at the thermodynamic limit of a large number of input elements and shows rich behavior. Our main findings are as follows. For learning with homogeneous initial weight vectors, the generalization error using an infinite number of linear student perceptrons is equal to only half that of a single linear perceptron, and converges with that of the infinite case with O(1/K) for a finite number of K linear perceptrons. For learning with inhomogeneous initial weight vectors, it is advantageous to use an approach of weighted averaging over the output of the linear perceptrons, and we show the conditions under which the optimal weights are constant during the learning process. The optimal weights depend only on the correlation of the initial weight vectors. Comment: 14 pages, 3 figures, submitted to Physical Review

We construct a free field realization of vertex operators of the dilute A_L models along with the Felder complex. For L=3, we also study an E_8 structure in terms of the deformed Virasoro currents. Comment: (AMS-)LaTeX(2e), 43 pages

We fabricated Co nano-rings incorporated in the vertical pseudo-spin-valve nanopillar structures with deep submicron lateral sizes. It is shown that the current-perpendicular-to-plane giant magnetoresistance can be used to characterize a very small magnetic nano-ring effectively.
Both the onion state and the flux-closure vortex state are observed. The Co nano-rings can be switched between the onion states as well as between onion and vortex states not only by the external field but also by the perpendicularly injected dc current.

We propose an optimization method of mutual learning which converges into the identical state of optimum ensemble learning within the framework of on-line learning, and have analyzed its asymptotic property through the statistical mechanics method. The proposed model consists of two learning steps: two students independently learn from a teacher, and then the students learn from each other through the mutual learning. In mutual learning, students learn from each other and the generalization error is improved even if the teacher has not taken part in the mutual learning. However, in the case of different initial overlaps (direction cosine) between teacher and students, a student with a larger initial overlap tends to have a larger generalization error than before the mutual learning. To overcome this problem, our proposed optimization method of mutual learning optimizes the step sizes of two students to minimize the asymptotic property of the generalization error. Consequently, the optimized mutual learning converges to a generalization error identical to that of the optimal ensemble learning. In addition, we show the relationship between the optimum step size of the mutual learning and the integration mechanism of the ensemble learning. Comment: 13 pages, 3 figures, submitted to Journal of Physical Society of Japan
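The ensemble-learning abstracts above rely on the basic fact that averaging independent students reduces error. As a much cruder illustration of that intuition only — this toy Monte Carlo is hypothetical and does not reproduce the papers' on-line-learning calculations (where the gain saturates at a factor of two) — the squared error of an average of K independent unbiased estimates shrinks with K:

```python
import random

random.seed(0)  # deterministic toy experiment

def noisy_estimate():
    """One 'student': true value 0.0 plus unit-variance Gaussian noise."""
    return random.gauss(0.0, 1.0)

def ensemble_error(K, trials=2000):
    """Mean squared error of the average of K independent students."""
    errs = [(sum(noisy_estimate() for _ in range(K)) / K) ** 2
            for _ in range(trials)]
    return sum(errs) / len(errs)

e1, e10 = ensemble_error(1), ensemble_error(10)
# For independent students the squared error shrinks roughly as 1/K,
# so e10 is close to e1 / 10 here.
```

The interesting content of the papers is precisely where this naive picture breaks down: students trained on the same examples are correlated, which caps the achievable improvement.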
SG-PALM: a Fast Physically Interpretable Tensor Graphical Model

Proceedings of the 38th International Conference on Machine Learning, PMLR 139:10783-10793, 2021.

We propose a new graphical model inference procedure, called SG-PALM, for learning conditional dependency structure of high-dimensional tensor-variate data. Unlike most other tensor graphical models the proposed model is interpretable and computationally scalable to high dimension. Physical interpretability follows from the Sylvester generative (SG) model on which SG-PALM is based: the model is exact for any observation process that is a solution of a partial differential equation of Poisson type. Scalability follows from the fast proximal alternating linearized minimization (PALM) procedure that SG-PALM uses during training. We establish that SG-PALM converges linearly (i.e., geometric convergence rate) to a global optimum of its objective function. We demonstrate scalability and accuracy of SG-PALM for an important but challenging climate prediction problem: spatio-temporal forecasting of solar flares from multimodal imaging data.
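The abstract does not spell out the PALM procedure, but the "proximal ... linearized" ingredient such schemes alternate over blocks is the classic proximal gradient step. Here is a minimal, generic sketch on a one-dimensional lasso-type problem — this illustrates the building block only, not SG-PALM itself:

```python
def soft_threshold(x, t):
    """Proximal operator of t*|.|: shrink x toward zero by t."""
    return max(abs(x) - t, 0.0) * (1.0 if x >= 0 else -1.0)

def proximal_gradient(b, lam, step=0.5, iters=200):
    """Minimize 0.5*(x - b)**2 + lam*|x| by iterating
    x <- prox_{step*lam}(x - step * grad_smooth(x)).
    The closed-form minimizer is soft_threshold(b, lam)."""
    x = 0.0
    for _ in range(iters):
        grad = x - b                       # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x

x_star = proximal_gradient(b=2.0, lam=0.5)   # converges to 1.5
```

PALM-type methods apply this kind of linearize-then-prox update cyclically to each block of variables (in SG-PALM's case, the Sylvester factors), which is where the stated linear convergence analysis lives.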
Harish-Chandra FRS[1] (Devanāgarī: हरिश्चन्द्र, Harish Chandra Mehrotra; 11 October 1923 – 16 October 1983) was an Indian mathematician, who did fundamental work in representation theory, especially harmonic analysis on semisimple Lie groups.[2][3][4]

Harish-Chandra was born in Kanpur (then Cawnpore), British India. He was educated at B.N.S.D. College, Kanpur, and at the University of Allahabad. After receiving his master's degree in Physics in 1943, he moved to the Indian Institute of Science, Bangalore for further studies in theoretical physics and worked with Homi J. Bhabha. In 1945, he moved to the University of Cambridge and worked as a research student under Paul Dirac. While at Cambridge, he attended lectures by Wolfgang Pauli, and during one of them pointed out a mistake in Pauli's work. The two were to become lifelong friends. During this time he became increasingly interested in mathematics. At Cambridge he obtained his PhD in 1947. When Dirac visited the Institute for Advanced Study, Princeton, U.S.A. in 1947/48, he brought Harish-Chandra as his assistant. It was at this stage that Harish-Chandra decided to change over from physics to mathematics. He was a faculty member at the Institute for Advanced Study, Princeton, New Jersey from 1963. From 1968 until his death in 1983, he was IBM von Neumann Professor in the School of Mathematics at the Institute for Advanced Study, Princeton, New Jersey. He died of a heart attack while on an evening walk on October 16, 1983, during a conference in Princeton in honour of Armand Borel's 60th birthday. A similar conference for his 60th birthday, scheduled for the following year, instead became a memorial conference. He is survived by his wife, Lalitha (Lily), and his daughters Premala (Premi) and Devaki.

Work in mathematics

He was influenced by the mathematicians Hermann Weyl and Claude Chevalley. From 1950 to 1963 he was at Columbia University and worked on representations of semisimple Lie groups.
During this period he established as his special area the study of the discrete series representations of semisimple Lie groups, which are analogues of the Peter–Weyl theory in the non-compact case. He is also known for work with Armand Borel on the theory of arithmetic groups, and for papers on finite group analogues. He enunciated a philosophy of cusp forms, a precursor of the Langlands program.

Honors and awards

He was a member of the National Academy of Sciences of the U.S. and a Fellow of the Royal Society.[1] He was the recipient of the Cole Prize of the American Mathematical Society in 1954. The Indian National Science Academy honoured him with the Srinivasa Ramanujan Medal in 1974. The mathematics department of B.N.S.D. College, Kanpur celebrates his birthday every year in different forms, which includes lectures from students and professors from various colleges and institutes, and students' visits to Harish-Chandra Research Institute. The Indian Government named the Harish-Chandra Research Institute, an institute dedicated to Theoretical Physics and Mathematics, after him.

Robert Langlands wrote in a biographical article of Harish-Chandra: "He was considered for the Fields Medal in 1958, but a forceful member of the selection committee in whose eyes Thom was a Bourbakist was determined not to have two. So Harish-Chandra, whom he also placed in the Bourbaki camp, was set aside."

See also

Harish-Chandra's c-function
Harish-Chandra's character formula for discrete series representations
Harish-Chandra homomorphism
Harish-Chandra isomorphism, identifying the center of the universal enveloping algebra with polynomial invariants of the Weyl group
Harish-Chandra module
Harish-Chandra's regularity theorem, implying that the character of an irreducible representation is locally integrable
Harish-Chandra's Schwartz space
Harish-Chandra transform
Harish-Chandra's Ξ function

References

1. Langlands, Robert P. (1985). "Harish-Chandra. 11 October 1923 – 16 October 1983".
Biographical Memoirs of Fellows of the Royal Society 31: 198–193. doi:10.1098/rsbm.1985.0008. JSTOR 769925.
2. Harish-Chandra at the Mathematics Genealogy Project.
3. O'Connor, John J.; Robertson, Edmund F., "Harish-Chandra", MacTutor History of Mathematics archive, University of St Andrews.
4. Varadarajan, V. S. (1984). "Harish-Chandra (1923–1983)". The Mathematical Intelligencer 6 (3): 9–5. doi:10.1007/BF03024122.

Harish-Chandra (1968), Mars, J. G. M., ed., Automorphic forms on semisimple Lie groups, Lecture Notes in Mathematics, 62, Berlin, New York: Springer-Verlag, doi:10.1007/BFb0098434, ISBN 978-3-540-04232-7, MR0232893
Harish-Chandra (1970), Dijk, G. van, ed., Harmonic analysis on reductive p-adic groups, Lecture Notes in Mathematics, 162, Berlin, New York: Springer-Verlag, doi:10.1007/BFb0061269, ISBN 978-3-540-05189-3, MR0414797
Harish-Chandra (1984), Varadarajan, V. S., ed., Collected papers. Vol. I. 1944–1954, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90782-6, MR726025
Harish-Chandra (1984), Varadarajan, V. S., ed., Collected papers. Vol. II. 1955–1958, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90782-6, MR726025
Harish-Chandra (1984), Varadarajan, V. S., ed., Collected papers. Vol. III. 1959–1968, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90782-6, MR726025
Harish-Chandra (1984), Varadarajan, V. S., ed., Collected papers. Vol. IV. 1970–1983, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90782-6, MR726025
Harish-Chandra (1999), DeBacker, Stephen; Sally, Paul J., eds., Admissible invariant distributions on reductive p-adic groups, University Lecture Series, 16, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-2025-4, MR1702257
Doran, Robert S.; Varadarajan, V. S., eds.
(2000), "The mathematical legacy of Harish-Chandra", Proceedings of the AMS Special Session on Representation Theory and Noncommutative Harmonic Analysis, held in memory of Harish-Chandra on the occasion of the 75th anniversary of his birth, in Baltimore, MD, January 9–10, 1998, Proceedings of Symposia in Pure Mathematics, 68, Providence, R.I.: American Mathematical Society, pp. xii+551, ISBN 978-0-8218-1197-9, MR1767886
Srivastava, R. S. L. (1986), "About Harish Chandra", Gaṇita Bhãrati. Indian Society for History of Mathematics. Bulletin 8 (1): 42–43, ISSN 0970-0307, MR888666
Varadarajan, V. S. (2008), "Harish-Chandra", Complete Dictionary of Scientific Biography
Liouville-type equations for the n-particle distribution functions of an open system

Delle Site, L. and Klein, R. (2020) Liouville-type equations for the n-particle distribution functions of an open system. Journal of Mathematical Physics, 61 (8).

Official URL: https://doi.org/10.1063/5.0008262

In this work we derive a mathematical model for an open system that exchanges particles and momentum with a reservoir from their joint Hamiltonian dynamics. The complexity of this many-particle problem is addressed by introducing a countable set of n-particle phase space distribution functions just for the open subsystem, while accounting for the reservoir only in terms of statistical expectations. From the Liouville equation for the full system we derive a set of coupled Liouville-type equations for the n-particle distributions by marginalization with respect to reservoir states. The resulting equation hierarchy describes the external momentum forcing of the open system by the reservoir across its boundaries, and it covers the effects of particle exchanges, which induce probability transfers between the n- and (n+1)-particle distributions. Similarities and differences with the Bergmann-Lebowitz model of open systems (P.G. Bergmann, J.L. Lebowitz, Phys. Rev., 99:578--587 (1955)) are discussed in the context of the implementation of these guiding principles in a computational scheme for molecular simulations.
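For orientation, the full-system Liouville equation from which the hierarchy is obtained by marginalization has the standard Hamiltonian form (generic notation for a phase-space density $\rho(q,p,t)$; the paper's own conventions may differ):

```latex
\frac{\partial \rho}{\partial t}
  = \{H, \rho\}
  = \sum_i \left(
      \frac{\partial H}{\partial q_i}\frac{\partial \rho}{\partial p_i}
    - \frac{\partial H}{\partial p_i}\frac{\partial \rho}{\partial q_i}
    \right),
```

which expresses conservation of probability along Hamiltonian trajectories ($d\rho/dt = 0$). Integrating this equation over the reservoir degrees of freedom is what produces the coupled equations for the subsystem's n-particle distributions described in the abstract.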
Edward Olney

Opportunities for formal education on the frontier were sparse, and Olney was largely self-taught. Calloway [2] tells about Edward hiring a neighbor boy to drive the team of oxen on the Olney farm so that he could attend school for six weeks in order to master Day's Algebra. During this time he also ran an arithmetic school at home in the evenings in order to earn the money to pay for his substitute driver. At age 19, Olney began his career as a teacher in the local elementary schools, while studying mathematics, natural science, and languages on his own. Cajori [1] reports that "though he had never studied Latin, he began teaching it and kept ahead of the class because he 'had more application'." In 1848 Olney was hired as a teacher in the district school at Perrysburg, Ohio. The following year he was named principal of the grammar department in the new Union School. Over the next five years he would become the school's superintendent, marry Miss Sarah Huntington (a teacher at the school), and receive an honorary A. M. degree from Madison University (now Colgate University) in Hamilton, New York. Today there is an Olney School in Lake Township, Wood County, named after him [3]. In 1853 Olney was appointed Professor of Mathematics at Kalamazoo College, Michigan, where he remained for ten years and established the first mathematics curriculum at that institution. He inspired his colleagues and students alike with "his high Christian aims; his generous, self-sacrificing spirit; his thoroughness in government and discipline; and the inspiration which attended him." [2] Although he insisted that his students recite using exact and correct language, he always tried to simplify the explanations of concepts and processes and make them more understandable. Kalamazoo College later conferred the honorary degree of LL.D. upon him. In 1863 Olney was named Professor of Mathematics at the University of Michigan, succeeding George P.
Williams, whose title was then changed to Professor of Physics. In those days the freshmen at Michigan were taught by inexperienced instructors, but once a week they had to recite for Professor Olney. His reputation for being a stern disciplinarian and a stickler for correct details earned him the nickname "Old Toughy." Nevertheless, he took great pains to see that the poorer students obtained help in making up their deficiencies. According to a former student, G. C. Comstock [1], "He was not a harsh man, and although the students stood in awe of him, I think that he was generally liked by them." While he was at Michigan, Professor Olney began writing a series of successful mathematics textbooks for use in both grammar schools and colleges. In many places these displaced the works of such highly regarded authors as Charles Davies and Elias Loomis. Among the titles are: Elements of Arithmetic for Intermediate, Grammar, and Common Schools (1877), A University Algebra (1873), Elementary Geometry (1883), Elements of Trigonometry (1870), and A General Geometry and Calculus (1871). Olney's treatment of calculus was criticized for using infinitesimal methods, but praised for giving "the elegant method, discovered by Prof. James C. Watson [Professor of Astronomy at Michigan], of demonstrating the rule for differentiating a logarithm without the use of series." [1] It is said that Olney preferred geometry to analysis, and when teaching calculus, he would attempt to translate analytical expressions into their geometrical equivalents. This, along with his own struggles in self-education, contributed to his great success as a teacher and textbook author. Throughout his career, Professor Olney was a strong supporter of the Baptist churches in Kalamazoo and Ann Arbor. His strong religious convictions led to a concern for his students' personal and spiritual welfare, as well as their mathematical achievements.
Edward Olney died on January 16, 1887, after suffering for three years from the effects of a stroke.

Article by David E. Kullman, Miami University

* told to us by people who had attended Olney School

1. Cajori, Florian, The Teaching and History of Mathematics in the United States. Washington, Bureau of Education Circular No. 3, 1890, pp. 248-253.
2. Calloway, Jean M., History of the Department of Mathematics and Computer Science at Kalamazoo College, 2000, pp. 6-7. Available: www.kzoo.edu/aluminfo/math.pdf
3. Justus, Judith P., "A History of Perrysburg's Schools, Part 2", Perrysburg Messenger Journal, Feb. 27, 2002, p. 9.
Parallel Resistor-Inductor Circuits

Let's take the same components for our series example circuit and connect them in parallel (figure below).

Figure: parallel R-L circuit.

Because the power source has the same frequency as the series example circuit, and the resistor and inductor both have the same values of resistance and inductance, respectively, they must also have the same values of impedance. So, we can begin our analysis table with the same "given" values.

The only difference in our analysis technique this time is that we will apply the rules of parallel circuits instead of the rules for series circuits. The approach is fundamentally the same as for DC. We know that voltage is shared uniformly by all components in a parallel circuit, so we can transfer the figure of total voltage (10 volts ∠ 0^o) to all components columns. Now we can apply Ohm's Law (I=E/Z) vertically to two columns of the table, calculating current through the resistor and current through the inductor.

Just as with DC circuits, branch currents in a parallel AC circuit add to form the total current (Kirchhoff's Current Law still holds true for AC as it did for DC). Finally, total impedance can be calculated by using Ohm's Law (Z=E/I) vertically in the "Total" column. Incidentally, parallel impedance can also be calculated by using a reciprocal formula identical to that used in calculating parallel resistances. The only problem with using this formula is that it typically involves a lot of calculator keystrokes to carry out. And if you're determined to run through a formula like this "longhand," be prepared for a very large amount of work! But, just as with DC circuits, we often have multiple options in calculating the quantities in our analysis tables, and this example is no different.
No matter which way you calculate total impedance (Ohm's Law or the reciprocal formula), you will arrive at the same figure:

• Impedances (Z) are managed just like resistances (R) in parallel circuit analysis: parallel impedances diminish to form the total impedance, using the reciprocal formula. Just be sure to perform all calculations in complex (not scalar) form! Z[Total] = 1/(1/Z[1] + 1/Z[2] + . . . 1/Z[n])
• Ohm's Law for AC circuits: E = IZ ; I = E/Z ; Z = E/I
• When resistors and inductors are mixed together in parallel circuits (just as in series circuits), the total impedance will have a phase angle somewhere between 0^o and +90^o. The circuit current will have a phase angle somewhere between 0^o and -90^o.
• Parallel AC circuits exhibit the same fundamental properties as parallel DC circuits: voltage is uniform throughout the circuit, branch currents add to form the total current, and impedances diminish (through the reciprocal formula) to form the total impedance.

Lessons In Electric Circuits copyright (C) 2000-2020 Tony R. Kuphaldt, under the terms and conditions of the CC BY License. See the Design Science License (Appendix 3) for details regarding copying and distribution. Revised July 25, 2007
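These rules can be checked numerically with Python's built-in complex numbers. The component values below (10 V source, 5 Ω resistor, 10 mH inductor, 60 Hz supply) are illustrative assumptions for the sketch, not necessarily the textbook's series-example figures:

```python
import cmath
import math

# Assumed example values: 10 V at 0 degrees, R = 5 ohms, L = 10 mH, f = 60 Hz
E = cmath.rect(10, 0)               # source voltage as a complex number
R = complex(5, 0)                   # resistor impedance: purely real
ZL = 1j * 2 * math.pi * 60 * 10e-3  # inductor impedance j*omega*L, about j3.77 ohms

# Voltage is uniform across parallel branches, so Ohm's Law (I = E/Z)
# applies to each branch column separately:
I_R = E / R
I_L = E / ZL

# Kirchhoff's Current Law: branch currents add to form the total current.
I_total = I_R + I_L

# Total impedance two ways: Ohm's Law, and the reciprocal formula.
Z_from_ohm = E / I_total
Z_reciprocal = 1 / (1 / R + 1 / ZL)

print(abs(Z_reciprocal), math.degrees(cmath.phase(Z_reciprocal)))
```

Both methods give the same complex impedance, and its phase angle falls between 0° and +90°, as the summary above states for a parallel R-L combination.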
{"url":"https://www.circuitbread.com/textbooks/lessons-in-electric-circuits-volume-ii-ac/reactance-and-impedance-inductive/parallel-resistor-inductor-circuits","timestamp":"2024-11-09T19:08:35Z","content_type":"text/html","content_length":"930618","record_id":"<urn:uuid:6a649aa0-ff57-415e-b2a0-506a691026b5>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00047.warc.gz"}
Self-consistent field approach to the many-electron problem

The self-consistent field method in which a many-electron system is described by a time-dependent interaction of a single electron with a self-consistent electromagnetic field is shown to be equivalent for many purposes to the treatment given by Sawada and Brout. Starting with the correct many-electron Hamiltonian, it is found, when the approximations characteristic of the Sawada-Brout scheme are made, that the equation of motion for the pair creation operators is the same as that for the one-particle density matrix in the self-consistent field framework. These approximations are seen to correspond to (1) factorization of the two-particle density matrix, and (2) linearization with respect to off-diagonal components of the one-particle density matrix. The complex, frequency-dependent dielectric constant is obtained straightforwardly from the self-consistent field approach both for a free-electron gas and a real solid. It is found to be the same as that obtained by Nozières and Pines in the random phase approximation. The resulting plasma dispersion relation for the solid in the limit of long wavelengths is discussed.
{"url":"https://collaborate.princeton.edu/en/publications/self-consistent-field-approach-to-the-many-electron-problem","timestamp":"2024-11-05T12:52:37Z","content_type":"text/html","content_length":"49344","record_id":"<urn:uuid:4129422a-4969-414f-88b1-d7de312fd561>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00793.warc.gz"}
Chapter 15 Probability Ex 15.1

Question 1. Complete the following statements:
(i) Probability of an event E + Probability of the event ‘not E’ = ………
(ii) The probability of an event that cannot happen is ……… Such an event is called ………
(iii) The probability of an event that is certain to happen is ……… Such an event is called ………
(iv) The sum of the probabilities of all the elementary events of an experiment is ………
(v) The probability of an event is greater than or equal to ……… and less than or equal to ………
(i) Probability of an event E + Probability of the event ‘not E’ = 1.
(ii) The probability of an event that cannot happen is 0. Such an event is called an impossible event.
(iii) The probability of an event that is certain to happen is 1. Such an event is called a sure event.
(iv) The sum of the probabilities of all the elementary events of an experiment is 1.
(v) The probability of an event is greater than or equal to 0 and less than or equal to 1.

Question 2. Which of the following experiments have equally likely outcomes? Explain.
(i) A driver attempts to start a car. The car starts or does not start.
(ii) A player attempts to shoot a basketball. She/he shoots or misses the shot.
(iii) A trial is made to answer a true-false question. The answer is right or wrong.
(iv) A baby is born. It is a boy or a girl.
(i) The outcomes are not equally likely, because the car normally starts; only when there is some defect does the car fail to start.
(ii) The outcomes are not equally likely because the outcome depends on the training of the player.
(iii) The outcome in the trial of a true-false question is either true or false. Hence, the two outcomes are equally likely.
(iv) A baby can be either a boy or a girl, and both outcomes are equally likely.

Question 3. Why is tossing a coin considered to be a fair way of deciding which team should get the ball at the beginning of a football game?
When we toss a coin, the outcomes head and tail are equally likely.
So, the result of an individual coin toss is completely unpredictable. Question 4. Which of the following cannot be the probability of an event? (A) 2/3 (B) -1.5 (C) 15% (D) 0.7 We know that probability of an event cannot be less than 0 and greater than 1. Correct option is (B). Question 5. If P (E) = 0.05, what is the probability of ‘not E’? We have, P (E) + P (not E) = 1 Given: P(E) = 0.05 P (not E) = 1 – 0.05 = 0.95 Question 6. A bag contains lemon flavoured candies only. Malini takes out one candy without looking into the bag. What is the probability that she takes out (i) an orange flavoured candy? (ii) a lemon flavoured candy? (i) A bag contains only lemon flavoured candies. P (an orange flavoured candy) = 0 (ii) P (a lemon flavoured candy) = 1 Question 7. It is given that in a group of 3 students, the probability of 2 students not having the same birthday is 0.992. What is the probability that the 2 students have the same birthday? We have, P (E) + P (not E) = 1 ⇒ P (E) + 0.992 = 1 ⇒ P (E) = 1 – 0.992 = 0.008 Question 8. A bag contains 3 red balls and 5 black balls. A ball is drawn at random from the bag. What is the probability that the ball drawn is (i) red? (ii) not red? Number of red balls = 3 Number of black balls = 5 Total number of balls = 3 + 5 = 8 Question 9. A box contains 5 red marbles, 8 white marbles and 4 green marbles. One marble is taken out of the box at random. What is the probability that the marble taken out will be (i) red? (ii) white? (iii) not green? Question 10. A piggy bank contains hundred 50 p coins, fifty ₹ 1 coins, twenty ₹ 2 coins and ten ₹ 5 coins. If it is equally likely that one of the coins will fall out when the bank is turned upside down, what is the probability that the coin (i) will be a 50 p coin? (ii) will not be a ₹ 5 coin? Number of 50 p coins = 100 Number of ₹ 1 coins = 50 Number of ₹ 2 coins = 20 Number of ₹ 5 coins = 10 Total number of coins = 180 Question 11. Gopi buys a fish from a shop for his aquarium. 
The shopkeeper takes out one fish at random from a tank containing 5 male fish and 8 female fish (see figure). What is the probability that the fish taken out is a male fish?
Number of male fish = 5
Number of female fish = 8
Total number of fish = 5 + 8 = 13
P (a male fish) = 5/13

Question 12. A game of chance consists of spinning an arrow which comes to rest pointing at one of the numbers 1, 2, 3, 4, 5, 6, 7, 8 (see figure), and these are equally likely outcomes. What is the probability that it will point at (i) 8? (ii) an odd number? (iii) a number greater than 2? (iv) a number less than 9?

Question 13. A die is thrown once. Find the probability of getting (i) a prime number (ii) a number lying between 2 and 6 (iii) an odd number

Question 14. One card is drawn from a well-shuffled deck of 52 cards. Find the probability of getting (i) a king of red colour (ii) a face card (iii) a red face card (iv) the jack of hearts (v) a spade (vi) the queen of diamonds

Question 15. Five cards – the ten, jack, queen, king and ace of diamonds – are well shuffled with their faces downwards. One card is then picked up at random. (i) What is the probability that the card is the queen? (ii) If the queen is drawn and put aside, what is the probability that the second card picked up is (a) an ace? (b) a queen?

Question 16. 12 defective pens are accidentally mixed with 132 good ones. It is not possible to just look at a pen and tell whether or not it is defective. One pen is taken out at random from this lot. Determine the probability that the pen taken out is a good one.

Question 17. (i) A lot of 20 bulbs contains 4 defective ones. One bulb is drawn at random from the lot. What is the probability that this bulb is defective? (ii) Suppose the bulb drawn in (i) is not defective and is not replaced. Now one bulb is drawn at random from the rest. What is the probability that this bulb is not defective?

Question 18. A box contains 90 discs which are numbered from 1 to 90.
If one disc is drawn at random from the box, find the probability that it bears (i) a two-digit number. (ii) a perfect square number. (iii) a number divisible by 5.

Question 19. A child has a die whose six faces show the letters as given below: The die is thrown once. What is the probability of getting (i) A? (ii) D?

Question 20. Suppose you drop a die at random on the rectangular region shown in figure. What is the probability that it will land inside the circle with diameter 1 m?

Question 21. A lot consists of 144 ball pens of which 20 are defective and the others are good. Nuri will buy a pen if it is good, but will not buy if it is defective. The shopkeeper draws one pen at random and gives it to her. What is the probability that (i) she will buy it? (ii) she will not buy it?
Total number of ball pens = 144
Number of defective pens = 20
Number of good pens = 144 – 20 = 124

Question 22. Two dice, one blue and one grey, are thrown at the same time. Now (i) Complete the following table: (ii) A student argues that there are 11 possible outcomes 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 and 12. Therefore, each of them has a probability 1/11. Do you agree with this argument? Justify your answer.

Question 23. A game consists of tossing a one rupee coin 3 times and noting its outcome each time. Hanif wins if all the tosses give the same result, i.e. three heads or three tails, and loses otherwise. Calculate the probability that Hanif will lose the game.

Question 24. A die is thrown twice. What is the probability that (i) 5 will not come up either time? (ii) 5 will come up at least once? [Hint: Throwing a die twice and throwing two dice simultaneously are treated as the same experiment.]
Total outcomes = 36
Number of outcomes in favour of 5: (1, 5) (2, 5) (3, 5) (4, 5) (5, 5) (6, 5) (5, 1) (5, 2) (5, 3) (5, 4) (5, 6) = 11
(i) P (5 will not come up either time) = 25/36
(ii) P (5 will come up at least once) = 11/36

Question 25.
Which of the following arguments are correct and which are not correct? Give reasons for your answer.
(i) If two coins are tossed simultaneously there are three possible outcomes: two heads, two tails, or one of each. Therefore, for each of these outcomes, the probability is 1/3.
(ii) If a die is thrown, there are two possible outcomes: an odd number or an even number. Therefore, the probability of getting an odd number is 1/2.
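The coin-toss and dice answers above (Questions 23 and 24) can be double-checked by brute-force enumeration of the sample space; this sketch is not part of the textbook solution:

```python
from fractions import Fraction
from itertools import product

# Question 23: a coin tossed 3 times; Hanif loses unless all three
# tosses match (HHH or TTT).
tosses = list(product("HT", repeat=3))
p_lose = Fraction(sum(1 for t in tosses if len(set(t)) > 1), len(tosses))
print(p_lose)        # 3/4

# Question 24: a die thrown twice; count outcomes containing no 5.
rolls = list(product(range(1, 7), repeat=2))
p_no_five = Fraction(sum(1 for r in rolls if 5 not in r), len(rolls))
print(p_no_five)     # 25/36
print(1 - p_no_five) # 11/36
```

The enumeration confirms the 6/8 = 3/4 losing probability and the 25/36 and 11/36 answers above.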
{"url":"https://rajboardexam.in/chapter-15-probability-ex-15-1-2/","timestamp":"2024-11-13T10:55:44Z","content_type":"text/html","content_length":"105305","record_id":"<urn:uuid:083d8833-954b-4b34-95ec-65f27b7a5fa2>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00671.warc.gz"}
2.14: Empirical and Molecular Formulas
Stoichiometry is a section of chemistry that involves using relationships between reactants and/or products in a chemical reaction to determine desired quantitative data. In Greek, stoikhein means element and metron means measure, so stoichiometry literally translated means the measure of elements. In order to use stoichiometry to run calculations about chemical reactions, it is important to first understand the relationships that exist between products and reactants and why they exist, which requires understanding how to balance reactions.

In chemistry, chemical reactions are frequently written as an equation, using chemical symbols. The reactants are displayed on the left side of the equation and the products are shown on the right, separated by either a single or double arrow that signifies the direction of the reaction. The significance of single and double arrows is important when discussing solubility constants, but we will not go into detail about it in this module. To balance an equation, it is necessary that there are the same number of atoms on the left side of the equation as on the right. One can do this by adjusting the coefficients.

Reactants to Products

A chemical equation is like a recipe for a reaction, so it displays all the ingredients or terms of a chemical reaction.
It includes the elements, molecules, or ions in the reactants and in the products, as well as their states, and the proportion for how much of each particle reacts or is formed relative to one another, through the stoichiometric coefficient. The following equation demonstrates the typical format of a chemical equation:

\[\ce{2 Na(s) + 2HCl(aq) \rightarrow 2NaCl(aq) + H2(g)} \nonumber \]

In the above equation, the elements present in the reaction are represented by their chemical symbols. Based on the Law of Conservation of Mass, which states that matter is neither created nor destroyed in a chemical reaction, every chemical reaction has the same elements in its reactants and products, though the elements they are paired up with often change in a reaction. In this reaction, sodium (\(Na\)), hydrogen (\(H\)), and chlorine (\(Cl\)) are the elements present in the reactants, so based on the law of conservation of mass, they are also present on the product side of the equation. Displaying each element is important when using the chemical equation to convert between elements.

Stoichiometric Coefficients

In a balanced reaction, both sides of the equation have the same number of each element. The stoichiometric coefficient is the number written in front of atoms, ions, and molecules in a chemical reaction to balance the number of each element on both the reactant and product sides of the equation. Though the stoichiometric coefficients can be fractions, whole numbers are frequently used and often preferred. These stoichiometric coefficients are useful since they establish the mole ratio between reactants and products. In the balanced equation:

\[\ce{2 Na(s) + 2HCl(aq) \rightarrow 2NaCl(aq) + H2(g)} \nonumber \]

we can determine that 2 moles of \(HCl\) will react with 2 moles of \(Na_{(s)}\) to form 2 moles of \(NaCl_{(aq)}\) and 1 mole of \(H_{2(g)}\).
If we know how many moles of \(Na\) reacted, we can use the ratio of 2 moles of \(NaCl\) to 2 moles of \(Na\) to determine how many moles of \(NaCl\) were produced, or we can use the ratio of 1 mole of \(H_2\) to 2 moles of \(Na\) to convert to \(H_2\). This is known as the coefficient factor. The balanced equation makes it possible to convert information about the change in one reactant or product to quantitative data about another reactant or product. Understanding this is essential to solving stoichiometric problems.

Example: Lead (IV) hydroxide and sulfuric acid react as shown below. Balance the reaction.

\[\ce{Pb(OH)4 + H2SO4 \rightarrow Pb(SO4)2 +H2O} \nonumber \]

Start by counting the number of atoms of each element.

Element | Reactant (# of atoms) | Product (# of atoms)
Pb | 1 | 1
O | 8 | 9
H | 6 | 2
S | 1 | 2

The reaction is not balanced; the reaction has 16 reactant atoms and only 14 product atoms and does not obey the conservation of mass principle. Stoichiometric coefficients must be added to make the equation balanced. In this example, there is only one sulfur atom present on the reactant side, so a coefficient of 2 should be added in front of \(H_2SO_4\) to have an equal number of sulfur atoms on both sides of the equation. Since there are 12 oxygen atoms on the reactant side and only 9 on the product side, a coefficient of 4 should be added in front of \(H_2O\), where there is a deficiency of oxygen. Count the number of elements now present on either side of the equation. Since the numbers are the same, the equation is now balanced.

\[\ce{ Pb(OH)4 + 2 H2SO4 \rightarrow Pb(SO4)2 + 4H2O} \nonumber \]

Element | Reactant (# of atoms) | Product (# of atoms)
Pb | 1 | 1
O | 12 | 12
H | 8 | 8
S | 2 | 2

Balancing reactions involves finding least common multiples between numbers of elements present on both sides of the equation. In general, when applying coefficients, add coefficients to the molecules or unpaired elements last.

A balanced equation ultimately has to satisfy two conditions.
1.
The numbers of each element on the left and right side of the equation must be equal.
2. The charge on both sides of the equation must be equal. It is especially important to pay attention to charge when balancing redox reactions.

Stoichiometry and Balanced Equations

In stoichiometry, balanced equations make it possible to compare different elements through the stoichiometric factor discussed earlier. This is the mole ratio between two factors in a chemical reaction found through the ratio of stoichiometric coefficients. Here is a real-world example to show how stoichiometric factors are useful.

Example: There are 12 party invitations and 20 stamps. Each party invitation needs 2 stamps to be sent. How many party invitations can be sent? The equation for this can be written as

\[\ce{I + 2S \rightarrow IS2}\nonumber \]

where
• \(I\) represents invitations,
• \(S\) represents stamps, and
• \(IS_2\) represents the sent party invitations consisting of one invitation and two stamps.

Based on this, we have the ratio of 2 stamps for 1 sent invitation, based on the balanced equation.

In this example, are all the reactants (stamps and invitations) used up? No, and this is normally the case with chemical reactions. There is often an excess of one of the reactants. The limiting reagent, the one that runs out first, prevents the reaction from continuing and determines the maximum amount of product that can be formed. What is the limiting reagent in this example? Stamps, because there were only enough stamps to send out 10 complete invitations, whereas there were enough invitations for 12.

Aside from just looking at the problem, the problem can be solved using stoichiometric factors.

12 I x (1 IS[2]/1 I) = 12 IS[2] possible
20 S x (1 IS[2]/2 S) = 10 IS[2] possible

When there is no limiting reagent because the ratio of all the reactants caused them to run out at the same time, it is known as stoichiometric proportions.
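The invitation-and-stamp bookkeeping generalizes directly: divide each reactant's amount by its stoichiometric coefficient, and the smallest quotient identifies the limiting reagent and caps the product. A minimal sketch (the dictionary-based representation is illustrative, not from the text):

```python
def limiting_reagent(coeffs, amounts):
    """Return (limiting reagent, max product units) for one product
    whose stoichiometric coefficient is 1."""
    # The reactant with the smallest amount/coefficient quotient runs
    # out first and determines the maximum amount of product formed.
    limiting = min(coeffs, key=lambda r: amounts[r] / coeffs[r])
    return limiting, amounts[limiting] / coeffs[limiting]

# I + 2S -> IS2 with 12 invitations and 20 stamps:
print(limiting_reagent({"I": 1, "S": 2}, {"I": 12, "S": 20}))
# stamps limit the sent invitations to 10
```

For the example above it reports the stamps as limiting, matching the 10-invitation answer.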
Types of Reactions

There are 6 basic types of reactions.
• Combustion: the formation of CO[2] and H[2]O from the reaction of a chemical with O[2].
• Combination (synthesis): the addition of 2 or more simple reactants to form a complex product.
• Decomposition: a complex reactant is broken down into simpler products.
• Single displacement: an element from one reactant switches places with an element of the other, forming two new products.
• Double displacement: two elements from one reactant switch places with two elements of the other, forming two new products.
• Acid-base: an acid and a base react to form a salt and water.

Molar Mass

Before applying stoichiometric factors to chemical equations, you need to understand molar mass. Molar mass is a useful chemical ratio between mass and moles. The atomic mass of each individual element as listed in the periodic table establishes this relationship for atoms or ions. For compounds or molecules, you take the sum of the atomic mass times the number of each atom in order to determine the molar mass.

Example: What is the molar mass of H[2]O?

\[\text{Molar mass} = 2 \times (1.00794\; g/mol) + 1 \times (15.9994\; g/mol) = 18.01528\; g/mol \nonumber \]

Using molar mass and coefficient factors, it is possible to convert mass of reactants to mass of products or vice versa.

Example: Propane (\(\ce{C_3H_8}\)) burns in this reaction:

\[\ce{C_3H_8 + 5O_2 \rightarrow 4H_2O + 3CO_2} \nonumber \]

If 200 g of propane is burned, how many g of \(H_2O\) is produced?

Steps to getting this answer: since you cannot calculate directly from grams of reactant to grams of product, you must convert from grams of \(C_3H_8\) to moles of \(C_3H_8\), then from moles of \(C_3H_8\) to moles of \(H_2O\), and then convert from moles of \(H_2O\) to grams of \(H_2O\).
• Step 1: 200 g \(C_3H_8\) is equal to 4.54 mol \(C_3H_8\).
• Step 2: Since there is a ratio of 4:1 \(H_2O\) to \(C_3H_8\), for every 4.54 mol \(C_3H_8\) there are 18.18 mol \(H_2O\).
• Step 3: Convert 18.18 mol \(H_2O\) to g \(H_2O\). 18.18 mol \(H_2O\) is equal to 327.27 g \(H_2O\).

Variation in Stoichiometric Equations

Almost every quantitative relationship can be converted into a ratio that can be useful in data analysis. Density (\(\rho\)) is calculated as mass/volume. This ratio can be useful in determining the volume of a solution given the mass, or useful in finding the mass given the volume. In the latter case, the inverse relationship would be used.

Volume x (Mass/Volume) = Mass
Mass x (Volume/Mass) = Volume

Percent Mass

Percents establish a relationship as well. A percent mass states how many grams of a mixture are of a certain element or molecule. The percent X% states that of every 100 grams of a mixture, X grams are of the stated element or compound. This is useful in determining the mass of a desired substance in a molecule.

Example: A substance is 5% carbon by mass. If the total mass of the substance is 10 grams, what is the mass of carbon in the sample? How many moles of carbon are there?

10 g sample x (5 g carbon/100 g sample) = 0.5 g carbon
0.5 g carbon x (1 mol carbon/12.011 g carbon) = 0.0416 mol carbon

Molarity

Molarity (moles/L) establishes a relationship between moles and liters. Given volume and molarity, it is possible to calculate moles, or to use moles and molarity to calculate volume. This is useful in chemical equations and dilutions.

Example: How much 5 M stock solution is needed to prepare 100 mL of 2 M solution?

100 mL of dilute solution (1 L/1000 mL)(2 mol/1 L solution)(1 L stock solution/5 mol solution)(1000 mL stock solution/1 L stock solution) = 40 mL stock solution.

These ratios of molarity, density, and mass percent are useful in complex examples ahead.
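The dilution above is the relationship C1·V1 = C2·V2 in disguise; solving for the stock volume reproduces the 40 mL answer. A sketch, with a helper name of my own choosing:

```python
def stock_volume_mL(c_stock, c_dilute, v_dilute_mL):
    """Volume of stock solution (mL) needed to prepare a dilution.

    Follows conservation of moles: c_stock * v_stock = c_dilute * v_dilute.
    """
    return c_dilute * v_dilute_mL / c_stock

# 100 mL of 2 M solution prepared from a 5 M stock:
print(stock_volume_mL(5.0, 2.0, 100.0))  # 40.0 mL
```

The same one-liner answers practice problem 3 below (10.1 M stock, 200 mL of 5 M target).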
Determining Empirical Formulas

An empirical formula can be determined through chemical stoichiometry by determining which elements are present in the molecule and in what ratio. The ratio of elements is determined by comparing the number of moles of each element present.

Example: 1.000 gram of an organic molecule burns completely in the presence of excess oxygen. It yields 0.0333 mol of CO[2] and 0.599 g of H[2]O. What is the empirical formula of the organic molecule?

This is a combustion reaction. The problem requires that you know that organic molecules consist of some combination of carbon, hydrogen, and oxygen elements. With that in mind, write the chemical equation out, replacing unknown numbers with variables. Do not worry about coefficients here.

\[ \ce{C_xH_yO_z(g) + O_2(g) \rightarrow CO_2(g) + H_2O(g)} \nonumber \]

Since all the moles of C and H in CO[2] and H[2]O, respectively, have to have come from the 1 gram sample of unknown, start by calculating how many moles of each element were present in the unknown:

0.0333 mol CO[2] (1 mol C/1 mol CO[2]) = 0.0333 mol C in unknown
0.599 g H[2]O (1 mol H[2]O/18.01528 g H[2]O)(2 mol H/1 mol H[2]O) = 0.0665 mol H in unknown

Calculate the final moles of oxygen by taking the sum of the moles of oxygen in CO[2] and H[2]O. This will give you the number of moles of oxygen from both the unknown organic molecule and the O[2], so you must subtract the moles of oxygen contributed by the O[2].

Moles of oxygen in CO[2]: 0.0333 mol CO[2] (2 mol O/1 mol CO[2]) = 0.0666 mol O
Moles of oxygen in H[2]O: 0.599 g H[2]O (1 mol H[2]O/18.01528 g H[2]O)(1 mol O/1 mol H[2]O) = 0.0332 mol O

Using the Law of Conservation of Mass, we know that the mass before a reaction must equal the mass after a reaction. With this we can use the difference of the final mass of products and the initial mass of the unknown organic molecule to determine the mass of the O[2] reactant.
0.0333 mol CO[2] (44.0098 g CO[2]/1 mol CO[2]) = 1.466 g CO[2]
1.466 g CO[2] + 0.599 g H[2]O - 1.000 g unknown organic = 1.065 g O[2]

Moles of oxygen in O[2]: 1.065 g O[2] (1 mol O[2]/31.9988 g O[2])(2 mol O/1 mol O[2]) = 0.0666 mol O
Moles of oxygen in unknown: (0.0666 mol O + 0.0332 mol O) - 0.0666 mol O = 0.0332 mol O

Construct a mole ratio for C, H, and O in the unknown and divide by the smallest number.

(1/0.0332)(0.0333 mol C : 0.0665 mol H : 0.0332 mol O) => 1 mol C : 2 mol H : 1 mol O

From this ratio, the empirical formula is calculated to be CH[2]O.

Determining Molecular Formulas

To determine a molecular formula, first determine the empirical formula for the compound as shown in the section above and then determine the molecular mass experimentally. Next, divide the molecular mass by the molar mass of the empirical formula (calculated by finding the sum of the atomic masses of all the elements in the empirical formula). Multiply the subscripts of the empirical formula by this answer to get the molecular formula. In the example above, it was determined that the unknown molecule had an empirical formula of CH[2]O.

1. Find the molar mass of the empirical formula CH[2]O.
12.011 g C + (1.008 g H)(2 H) + 15.999 g O = 30.026 g/mol CH[2]O
2. Determine the molecular mass experimentally. For our compound, it is 120.056 g/mol.
3. Divide the experimentally determined molecular mass by the mass of the empirical formula.
(120.056 g/mol) / (30.026 g/mol) = 3.9984
4. Since 3.9984 is very close to four, it is possible to safely round up and assume that there was a slight error in the experimentally determined molecular mass. If the answer is not close to a whole number, there was either an error in the calculation of the empirical formula or a large error in the determination of the molecular mass.
5. Multiply the ratio from step 4 by the subscripts of the empirical formula to get the molecular formula.
CH[2]O * 4 = ? C: 1 * 4 = 4; H: 2 * 4 = 8; O: 1 * 4 = 4
CH[2]O * 4 = C[4]H[8]O[4]
6.
Check your result by calculating the molar mass of the molecular formula and comparing it to the experimentally determined mass.

molar mass of C[4]H[8]O[4] = 120.104 g/mol
experimentally determined mass = 120.056 g/mol
% error = | theoretical - experimental | / theoretical * 100%
% error = | 120.104 g/mol - 120.056 g/mol | / 120.104 g/mol * 100%
% error = 0.040 %

Example: An amateur welder melts down two metals to make an alloy that is 45% copper by mass and 55% iron(II) by mass. The alloy's density is 3.15 g/L. One liter of alloy completely fills a mold of volume 1000 cm^3. He accidentally breaks off a 1.203 cm^3 piece of the homogeneous mixture and sweeps it outside, where it reacts with acid rain over years. Assuming the acid reacts with all the iron(II) and not with the copper, how many grams of H[2](g) are released into the atmosphere because of the amateur's carelessness? (Note that the situation is fiction.)

Step 1: Write a balanced equation after determining the products and reactants. In this situation, since we assume copper does not react, the reactants are only H^+(aq) and Fe(s). The given product is H[2](g) and, based on knowledge of redox reactions, the other product must be Fe^2+(aq).

\[\ce{Fe(s) + 2H^{+}(aq) \rightarrow H2(g) + Fe^{2+}(aq)} \nonumber \]

Step 2: Write down all the given information.
Alloy density = 3.15 g alloy/1 L alloy
45% copper = 45 g Cu(s)/100 g alloy
55% iron(II) = 55 g Fe(s)/100 g alloy
1 liter alloy = 1000 cm^3 alloy
alloy sample = 1.203 cm^3 alloy

Step 3: Answer the question of what is being asked. The question asks how much H[2](g) was produced. You are expected to solve for the amount of product formed.

Step 4: Start with the compound you know the most about and use given ratios to convert it to the desired compound. Convert the given amount of alloy reactant to solve for the moles of Fe(s) reacted.
1.203 cm^3 alloy (1 liter alloy / 1000 cm^3 alloy)(3.15 g alloy / 1 liter alloy)(55 g Fe(s) / 100 g alloy)(1 mol Fe(s) / 55.8 g Fe(s)) = 3.74 x 10^-5 mol Fe(s)

Make sure all the units cancel out to give you moles of \(\ce{Fe(s)}\). The above conversion involves using multiple stoichiometric relationships from density, percent mass, and molar mass.

The balanced equation must now be used to convert moles of Fe(s) to moles of H[2](g). Remember that the balanced equation's coefficients state the stoichiometric factor, or mole ratio, of reactants and products.

3.74 x 10^-5 mol Fe(s) (1 mol H[2](g) / 1 mol Fe(s)) = 3.74 x 10^-5 mol H[2](g)

Step 5: Check units. The question asks for how many grams of H[2](g) were released, so the moles of H[2](g) must still be converted to grams using the molar mass of H[2](g). Since there are two H in each H[2], its molar mass is twice that of a single H atom.

molar mass = 2(1.00794 g/mol) = 2.01588 g/mol
3.74 x 10^-5 mol H[2](g) (2.01588 g H[2](g) / 1 mol H[2](g)) = 7.53 x 10^-5 g H[2](g) released

Stoichiometry and balanced equations make it possible to use one piece of information to calculate another. There are countless ways stoichiometry can be used in chemistry and everyday life. Try and see if you can use what you learned to solve the following problems.

1) Why are the following equations not considered balanced?
1. \(H_2O_{(l)} \rightarrow H_{2(g)} + O_{2(g)}\)
2. \(Zn_{(s)} + Au^+_{(aq)} \rightarrow Zn^{2+}_{(aq)} + Ag_{(s)}\)

2) Hydrochloric acid reacts with a solid chunk of aluminum to produce hydrogen gas and aluminum ions. Write the balanced chemical equation for this reaction.

3) Given a 10.1 M stock solution, how many mL must be added to water to produce 200 mL of 5 M solution?

4) If 0.502 g of methane gas react with 0.27 g of oxygen to produce carbon dioxide and water, what is the limiting reagent and how many moles of water are produced?
The unbalanced equation is provided:

\[\ce{CH4(g) + O2(g) \rightarrow CO2(g) + H2O(l)} \nonumber \]

5) A 0.777 g sample of an organic compound is burned completely. It produces 1.42 g CO[2] and 0.388 g H[2]O. Knowing that all the carbon and hydrogen atoms in CO[2] and H[2]O came from the 0.777 g sample, what is the empirical formula of the organic compound?

Contributors and Attributions
• Joseph Nijmeh (UCD), Mark Tye (DVC)
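The combustion-analysis and molecular-formula arithmetic worked through above can be cross-checked with a short script. This is a minimal sketch, assuming a compound containing only C, H, and O and using conservation of mass to find the oxygen in the sample directly; the function names are illustrative, not from any chemistry library.

```python
# Atomic and molar masses used in the worked examples above (g/mol).
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}
M_CO2 = 44.010
M_H2O = 18.015

def empirical_formula_CHO(sample_g, co2_g, h2o_g):
    """Empirical C:H:O ratio for a CHO compound burned completely in O2."""
    mol_C = co2_g / M_CO2                  # 1 mol C per mol CO2
    mol_H = 2 * h2o_g / M_H2O              # 2 mol H per mol H2O
    # Whatever sample mass is not carbon or hydrogen must be oxygen.
    mass_O = sample_g - mol_C * ATOMIC_MASS["C"] - mol_H * ATOMIC_MASS["H"]
    mol_O = mass_O / ATOMIC_MASS["O"]
    smallest = min(m for m in (mol_C, mol_H, mol_O) if m > 1e-9)
    return tuple(round(m / smallest) for m in (mol_C, mol_H, mol_O))

def molecular_multiplier(empirical_mass, molecular_mass):
    """How many empirical-formula units fit in the measured molar mass."""
    return round(molecular_mass / empirical_mass)

# The worked example: 1.000 g sample -> 1.466 g CO2 and 0.599 g H2O.
print(empirical_formula_CHO(1.000, 1.466, 0.599))   # (1, 2, 1), i.e. CH2O
print(molecular_multiplier(30.026, 120.056))         # 4, i.e. C4H8O4
```

The same helper also answers practice problem 5 above by plugging in 0.777 g, 1.42 g, and 0.388 g.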
Peltier Tech Excel Charts and Programming Blog - Peltier Tech

Main Topics
Cornerstone Articles (Important and Popular Posts)
- Animated Charts
- Axis Labels
- Axis Scales
- Axis – Multi Tier Category Labels
- Chart Events
- Combination Charts
- Conditional Formatting of Charts
- Custom Chart Types
- Data Labels
- Dynamic Arrays, LET, and LAMBDA
- Dynamic Array Charts
- Dynamic Charts
- Error Bars
- Floating Bars
- Gantt Charts
- Interactive Charts
- Marimekko Charts
- Moving Averages
- Panel Charts
- Pivot Tables and Charts
- SERIES Formula
- Slope Charts
- Statistical Process Control
- Trendlines and Regression
In the mathematical study of abstract algebra, a ring is an algebraic structure with operations generalizing the arithmetic operations of addition and multiplication. By means of this generalization, theorems from the algebra of arithmetic are extended to non-numerical objects like polynomials, series and functions.

The above text is a snippet from Wikipedia: Ring (mathematics) and as such is available under the Creative Commons Attribution/Share-Alike License.

1. A circumscribing object, (roughly) circular and hollow, looking like an annual ring, earring, finger ring, etc.
2. A circular group of people or objects. a ring of mushrooms growing in the wood
3. A round piece of (precious) metal worn around the finger or through the ear, nose, etc.
4. A bird band, a round piece of metal put around a bird's leg used for identification and studies of migration.
5. A piece of food in the shape of a ring. onion rings
6. A place where some sports or exhibitions take place; notably a circular or comparable arena, such as a boxing ring or a circus ring; hence the field of a political contest.
7. An exclusive group of people, usually involving some unethical or illegal practices. a crime ring; a prostitution ring
8. A planar geometrical figure included between two concentric circles.
9. A burner on a kitchen stove.
10. A formation of various pieces of material orbiting around a planet.
11. A diacritical mark in the shape of a hollow circle placed above or under the letter; a kroužek.
12. An old English measure of corn equal to the coomb or half a quarter.
13. A large circular prehistoric stone construction.
14. A hierarchical level of privilege in a computer system, usually at hardware level, used to protect data and functionality (also protection ring).
15. In a jack plug, the connector between the tip and the sleeve.
16.
An instrument, formerly used for taking the sun's altitude, consisting of a brass ring suspended by a swivel, with a hole at one side through which a solar ray entering indicated the altitude on the graduated inner surface opposite.
17. A flexible band partly or wholly encircling the spore cases of ferns.

Noun (etymology 2)
1. The resonant sound of a bell, or a sound resembling it. The church bell's ring could be heard the length of the valley. The ring of hammer on anvil filled the air.
2. A pleasant or correct sound. The name has a nice ring to it.
3. A telephone call. I’ll give you a ring when the plane lands.
4. Any loud sound; the sound of numerous voices; a sound continued, repeated, or reverberated.
5. A chime, or set of bells harmonically tuned.

Noun (etymology 3)
1. An algebraic structure which consists of a set with two binary operations, an additive operation and a multiplicative operation, such that the set is an abelian group under the additive operation, a monoid under the multiplicative operation, and such that the multiplicative operation is distributive with respect to the additive operation. The set of integers, \(\mathbb{Z}\), is the prototypical ring.
2. An algebraic structure as above, but only required to be a semigroup under the multiplicative operation, that is, there need not be a multiplicative identity element. The definition of ring without unity allows, for instance, the set \(2\mathbb{Z}\) of even integers to be a ring.

1. To surround or enclose. The inner city was ringed with dingy industrial areas.
2. To make an incision around; to girdle. They ringed the trees to make the clearing easier next year.
3. To attach a ring to, especially for identification. Only ringed hogs may forage in the commons. We managed to ring 22 birds this morning.
4. To surround or fit with a ring, or as if with a ring. to ring a pig's snout
5. To rise in the air spirally.

Verb (etymology 2)
1. Of a bell, to produce sound.
The bells were ringing in the town.
2. To make (a bell) produce sound. The deliveryman rang the doorbell to drop off a parcel.
3. To produce the sound of a bell or a similar sound. Whose mobile phone is ringing?
4. Of something spoken or written, to appear to be, to seem, to sound. That does not ring true.
5. To telephone (someone). I will ring you when we arrive.
6. To resound, reverberate, echo.
7. To produce music with bells.
8. To repeat often, loudly, or earnestly.

The above text is a snippet from Wiktionary: ring and as such is available under the Creative Commons Attribution/Share-Alike License.
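The algebraic sense of "ring" defined above (noun, etymology 3) can be exercised concretely. Below is a small illustrative script, not from any library, that brute-force checks the ring axioms for the finite ring Z/6Z (the integers modulo 6):

```python
# Check the ring axioms for Z/6Z: an abelian group under addition,
# a monoid under multiplication, and distributivity of * over +.
n = 6
R = range(n)
add = lambda a, b: (a + b) % n
mul = lambda a, b: (a * b) % n

# Abelian group under addition: associative, commutative, identity 0, inverses.
assert all(add(add(a, b), c) == add(a, add(b, c)) for a in R for b in R for c in R)
assert all(add(a, b) == add(b, a) for a in R for b in R)
assert all(add(a, 0) == a for a in R)
assert all(any(add(a, b) == 0 for b in R) for a in R)

# Monoid under multiplication: associative, with identity element 1.
assert all(mul(mul(a, b), c) == mul(a, mul(b, c)) for a in R for b in R for c in R)
assert all(mul(a, 1) == a for a in R)

# Multiplication distributes over addition.
assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c)) for a in R for b in R for c in R)

print("Z/6Z satisfies the ring axioms")
```

Note that Z/6Z is a ring but not a field: 2 * 3 = 0 (mod 6), so it has zero divisors.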
What Are The Miners Working On, Actually
Licho, 2023-10-02

In my previous writing, I often argue that the unit of Ergon represents work - that the miners are rewarded proportionally to the work they do. This work seems to be so essential, it raises the question: what do they actually do that other people need? How is that a contribution to society?

There is a common misconception that Proof Of Work (POW) is wasteful. It appears in public comments, in the press and on TV. It only shows that it's not clear what the miners are actually burning their electricity for. It seems pointless; the common explanation is that they are solving some hard mathematical puzzles, playing a lottery.

Let us start from the beginning. There is a mathematical operation, a so-called one-way function. It is like a set of dice that, when thrown at a specific angle and speed, will draw a certain result. It is just mathematics, so throwing the dice at the same angle and speed twice will yield the same result. What is important: knowing the result, you can't find out what the initial conditions were - you can't learn the angle and speed.

Now if you put a username as the angle (a→1, b→2, c→3) and a password as the speed of the dice and throw them, the resulting dice numbers are the hash of the user data. By the way, this is indeed how credentials are securely stored in web services. They don't throw dice, though; there are cryptographic functions that do it. When you log in, they will just hash what you've sent and compare the results, never revealing the password.

For example:
user: abba (=1221)
password: 123
throwing angle: 12,21°
throwing speed: 1.23 m/s
hash: ⚅, ⚃, ⚁, ⚃

Ok, so what do the miners do, actually? A hash is a mathematically objective truth. Miners are manufacturing the mathematically objective truth, and they carry it over onto the record of history by putting the history and the timestamp as parameters of the one-way function and hashing it.

How much of the truth is needed?
This kind of truth is easy to counterfeit: change the history record and hash it. How do we tell which record is real? Therefore, we need a stronger truth. You can request the truth to be special, as if you wished a dice throw to yield one die drawing a "⚀". This is a special result; it requires modifying the parameters a bit, maybe adding some numbers after the comma, so we don't ruin the message.

For example:
user: abba (=1221)
password: 123
throwing angle: 12,211234123°
throwing speed: 1.2333457456742 m/s
hash: ⚀, ⚃, ⚁, ⚃

This is a special hash. It's harder to find one like this; it requires many more throwing attempts with different tails. We can say this truth is stronger. Harder to counterfeit.

It's hardly a lottery. The search is a bit random - that's why they are called miners, they are searching the available space - but mining is not just gambling on a lottery. They are searching for an objective, mathematical truth, strong enough to confirm the history. If at least half of all the miners' work is dedicated to confirming the reality, the honest ones ought to produce a stronger cumulative truth than the fakers. This is the essence of Proof Of Work. The work that miners are doing is not wasted; they are manufacturing the truth to carry it over onto the history record.

How do you measure the strength of the truth?

We measure the strength of the truth by the average number of attempts required to find it. If you wish two of the dice to draw a "⚀", it will be of higher difficulty than drawing one. Naturally, if one does some number of attempts per second, twice the difficulty will result in twice the solving time. The strength required by the network is adjusted for healthy communication - so that the blocks of records are roughly evenly spaced, with enough time in between them for everyone to communicate.
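The dice analogy maps directly onto real hash functions. The sketch below - illustrative, not Ergon's actual mining code - searches for a "tail" (a nonce) that makes the SHA-256 hash of a record start with a given number of zero bits; the difficulty is the average number of attempts needed:

```python
import hashlib

def hash_bits(data: bytes) -> str:
    """SHA-256 digest rendered as a 256-character bit string."""
    return bin(int.from_bytes(hashlib.sha256(data).digest(), "big"))[2:].zfill(256)

def mine(record: bytes, difficulty_bits: int) -> int:
    """Try tails 0, 1, 2, ... until the hash has the required leading zeros."""
    nonce = 0
    while not hash_bits(record + str(nonce).encode()).startswith("0" * difficulty_bits):
        nonce += 1
    return nonce

record = b"history: abba paid 1 ergon to licho"
nonce = mine(record, 12)   # on average about 2**12 = 4096 attempts
print(f"nonce {nonce} makes the hash of the record start with 12 zero bits")
```

Anyone can re-hash the record with the found nonce in a single attempt and verify the claim, while finding such a nonce for a tampered record would require redoing all the attempts - which is why the strongest cumulative chain of such hashes is taken as the real history.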
Simplified payment verification

Because one can judge the strength of the truth independently, and can cryptographically prove that a transaction is part of the history without knowing the entirety of it, Simplified Payment Verification (SPV) is possible. Essentially, one only needs the header of a block and a small chunk of data to be confident their transaction was confirmed. Faking an SPV proof is exactly as difficult as faking the entire history. It is possible because of how objective the mathematical truth manufactured by the miners is.

Other consensus algorithms

Proof of Stake (POS), usually presented as an alternative to POW, is not concerned with the objective mathematical truth. It is a mechanism for deciding who's got the right to confirm the chunk of history this time. It's a very different question. And indeed the results are different: for instance, to join a POS consensus you first need to be enrolled in the system by those already present - by having a transaction sending you enough coins confirmed by them, so you can put the coins at stake (if you cheat, you lose the stake). The freedom to join the consensus is closely tied to the cryptocurrency supply distribution - a new participant has to buy the coins from someone to be allowed to join. Light SPV wallets are not possible: to have any certainty about the consensus, you need to verify all the transfers. You have to know who is allowed to confirm reality. In most cases, the light wallets simply trust a third party. I don't want to evaluate the pros and cons or assess any attack vectors here, just to acknowledge that there is a qualitative difference between the consensus algorithms. They answer a different question, solve a different problem.
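Returning to SPV: the "small chunk of data" is, in practice, a Merkle proof - a short list of sibling hashes linking one transaction up to the root hash stored in the block header. A minimal sketch (illustrative, not any particular chain's exact format):

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    """Pairwise-hash the leaves upward until a single root remains."""
    level = [h(tx) for tx in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes needed to recompute the root from one leaf."""
    level, proof = [h(tx) for tx in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(tx, proof, root):
    node = h(tx)
    for sibling, sibling_is_right in proof:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == root

txs = [b"tx-a", b"tx-b", b"tx-c", b"tx-d"]
root = merkle_root(txs)                       # this is all the header must carry
assert verify(b"tx-c", merkle_proof(txs, 2), root)
assert not verify(b"tx-x", merkle_proof(txs, 2), root)   # fakes fail
```

The proof grows only logarithmically with the number of transactions, which is why a light client holding just block headers can still check its own payments.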
It's powered by electricity, which in principle can be renewable - electrification of transport happens for the same reason. Additionally, at this point, it is completely unstoppable. There is no headquarters. It's like digging sand: it can be made illegal, but you get a shovel and you dig sand in your backyard. The same goes for doing proofs of work, although in most cases you'd need something more like a small excavator.
Building a Lightweight State of Charge Algorithm: Understanding & Implementing an Extended Kalman Filter

Why do we need to calculate state of charge?

The ability to identify where energy is available in the network and where energy is needed is a key step toward intelligent energy distribution, and in order to accomplish this we need an accurate measurement of battery state of charge. Battery state of charge (SoC) is the proportion of capacity available relative to the battery's full capacity. SoC data enables our networks to actively redistribute power when there is risk of blackout and maintain more balanced levels of charge, ultimately forming the foundation of the future grid optimizations and predictive models in our tech pipeline. SoC data plays a vital role in allowing energy to be distributed intelligently in our mesh-grid networks.

Why is this a difficult problem?

Initially, I was surprised to find so much academic work on SoC estimation, with methods varying greatly in degrees of complexity and of success. Every piece of technology we own confidently reports SoC, and we accept it as truth. But what about that time I called an Uber to the airport with my phone at "40%" only for it to die immediately, or that old laptop that would hang on at "1%" for hours? In fact, it is challenging to estimate SoC because it cannot be measured directly from a battery under load (or recently under load), and so we need to design an algorithm to extract SoC from the measurements available.

Sometimes charging electronics gets more complicated than we want it to be…

What are the options?

Fundamentally, SoC is estimated from battery current and voltage measurements. There are three main ways to do this: 1. Coulomb counting, 2. model-based approach, and 3. data-driven approach. The methods differ in their performance and complexity, making them each a good fit for particular applications.

Coulomb counting

Coulomb counting is the simplest option.
It counts up the current entering and exiting the battery over the course of time. While this is effective for short time intervals, small errors rapidly accumulate, leading to an SoC estimate that diverges from the truth over time, as seen in the figure below. As a result, frequent recalibration to known SoC levels is required in order for a Coulomb counter to perform reliably. Additionally, the algorithm requires the initial SoC to be set precisely, and it cannot recover from data lapses until it is recalibrated.

Coulomb counting can easily diverge from the true SoC between calibration points (full charge or discharge), even when the magnitude of error at each time step is very small.

Model-based approaches

There are two main routes under the model-based approach. The first is using an equivalent circuit model (ECM), most commonly the Thévenin model, to describe the electrical behavior of the battery. While this model does not perfectly describe the discharge behavior of the battery (especially at high output levels), it is a useful approximation and the resulting set of equations can be solved quickly and inexpensively.

The Thévenin ECM is a tool that allows us to express the battery's terminal voltage as a function of its open-circuit voltage (OCV) and voltage losses. Note that z(t) is the SoC at time t, and OCV is a function of SoC alone.

If greater accuracy is desired, the thermal-electrical battery model is experimentally accurate and derived from first principles. However, the resulting set of equations is much more computationally expensive to solve, and the improvement in accuracy compared to an ECM is marginal.

Data-driven approach

Finally, the data-driven approach uses machine learning models to map battery time-series data to SoC estimates. While this approach can be effective, it requires a large amount of labeled experimental data in order to train the model, and it can be challenging to ensure that the training data covers all use cases.
Additionally, the model can be quite computationally expensive to run in real time. Determining the best algorithm for our system Our SoC algorithm must be both lightweight and accurate, especially when data could be spotty and we may see a wide range of user behavior and battery characteristics. Coulomb counting can only recalibrate at full charge, so during days of bad weather it’s likely that the algorithm would lose significant accuracy. On the other hand, a data-driven algorithm is too computationally expensive to run real time on our microcontroller (MCU), and it would take too long to collect the training data. From the remaining two model-based options, we choose the ECM approach, as simplicity and conservation of resources on our MCU are the priority over a slight tradeoff in accuracy. The Algorithm This article is about our implementation of a Kalman filter, which is essentially a tool used to predict a value and then update it based on a measurement. But why do we need to make predictions in this roundabout manner? Couldn’t we instead use the ECM voltage equation (1) to determine the SoC from the OCV? After all, OCV is a function of SoC, as shown in the mapping below. An OCV mapping from the literature is shown above. Note its nonlinearity as well as the flatness of the curve in the mid-range of SoC. It turns out that we could do this, but the resulting SoC estimate would not be very accurate. This is due to approximations made in the ECM, noisy sensor data, and also the flatness of the OCV mapping in the 30-70% SoC region. The voltage hardly changes in this region, making it very difficult to extract an accurate SoC value from an OCV value. Even if the ECM voltage equation was a near perfect approximation (which it is not), it would be difficult to estimate SoC with confidence from an OCV value alone. 
Therefore, the intention of the algorithm is to estimate SoC with the help of both the ECM and Coulomb counting, recursively updating and correcting our prediction with each new sensor value. In this way, the algorithm is essentially recalibrating the Coulomb counter at each time step. This adaptive structure of the Kalman filter makes the algorithm resilient to unpredictable or noisy data, gaps in data, or inaccurately initialized parameters. Key steps in the algorithm With each new sensor value received, several steps are taken to update the SoC estimate. First, we predict the new SoC value using the previous SoC value and measured current, as well as the time between updates. This is done using a Coulomb counter, as shown below. Note that Q represents the battery’s capacity. The Coulomb counter updates SoC (z) by accumulating the current entering or exiting the battery. The inaccuracies in this algorithm are mitigated with the corrective process in the Kalman Filter, effectively recalibrating the SoC at each time step. Next, we feed this predicted SoC value into the Thévenin voltage equation (3), which defines a relationship between open circuit voltage (OCV), battery current, and terminal voltage (voltage of the battery while loaded) at time step k. This allows us to solve for the predicted battery terminal voltage, v[k], using our SoC prediction from the previous step. The discrete form of the ECM voltage output equation (3). Note the only difference from eqn (1) is that time k is a discrete valued time step rather than a continuous time value. Now that we have a predicted battery voltage, we can use the battery voltage measurement as a point of reference. The Kalman filter enables us to analyze the error between the predicted and measured battery voltage and corrects the estimated SoC appropriately. This correction step sets the Kalman filter apart from other algorithms and provides an elegant solution to a difficult problem. 
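Putting the pieces together, the predict/correct loop described above can be sketched in a few lines. Everything numeric here is illustrative - a made-up linear OCV curve, an assumed capacity and series resistance - not our production parameters or a full EKF with hysteresis:

```python
Q = 3600.0            # capacity in ampere-seconds (1 Ah) -- assumed
R0 = 0.05             # Thevenin series resistance in ohms -- assumed
OCV0, OCV_SLOPE = 3.0, 1.2   # toy linear OCV curve: OCV(z) = 3.0 + 1.2 * z

def ocv(z):
    return OCV0 + OCV_SLOPE * z

def ekf_step(z, P, current, v_meas, dt, q_proc=1e-7, r_meas=1e-4):
    # Predict: Coulomb counting (current > 0 means discharge).
    z_pred = z - dt * current / Q
    P_pred = P + q_proc
    # Correct: compare the predicted terminal voltage with the measurement.
    v_pred = ocv(z_pred) - R0 * current
    H = OCV_SLOPE                     # dOCV/dz, the EKF linearization
    K = P_pred * H / (H * P_pred * H + r_meas)
    z_new = z_pred + K * (v_meas - v_pred)
    P_new = (1.0 - K * H) * P_pred
    return z_new, P_new

# Simulate a 1 A discharge where the true SoC starts at 0.9 but the
# filter is initialized badly at 0.5; the correction step pulls it back.
z_true, z_est, P = 0.9, 0.5, 0.1
for _ in range(200):
    current, dt = 1.0, 1.0
    z_true -= dt * current / Q
    v_meas = ocv(z_true) - R0 * current   # noiseless measurement for the sketch
    z_est, P = ekf_step(z_est, P, current, v_meas, dt)

print(f"true SoC {z_true:.3f}, estimated {z_est:.3f}")
```

With a real (flat, nonlinear) OCV curve, H would be recomputed at each step from the local slope, and the correction would accordingly carry less weight in the flat 30-70% region.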
Essentially, the Kalman filter uses the uncertainties in predicted and measured voltages to determine how much of an SoC correction is required. If there is large uncertainty in the predicted voltage, then the SoC will be largely determined by the measured voltage and vice versa. Conditions necessary to apply a Kalman filter To apply the Kalman filter, two assumptions must be satisfied: 1. The errors in battery current and voltage measurements are independent and Gaussian 2. Both the SoC update equation (2) and ECM voltage equation (3) must be linear We can safely assume the first, but not the second. As shown earlier, the OCV → SoC mapping is nonlinear, making the ECM voltage equation nonlinear. So the final piece of the algorithm is to make a linear approximation of this equation, using the Extended Kalman Filter (EKF). This approximation is done using a first-order Taylor series expansion, which yields a linear approximation about a single point. Using this approximation, the EKF follows the same steps as the linear Kalman filter in order to simply compare the predicted voltage value to the measured voltage, calculate an error term, and correct the SoC accordingly. The next step is to observe its performance and determine how to optimize the algorithm going forward. Testing the algorithm In order to tune the Kalman filter and evaluate its accuracy, we completely discharged a battery using current pulses of varying intensity. When we calculated the final SoC (using the final resting voltage as our OCV) and summed up the total load drawn from the battery over the course of the experiment, the battery capacity turned out to be ~80% of the rated capacity. The Kalman filter’s feedback loop is adaptive, but it is still difficult to overcome such a large discrepancy in a parameter as critical as battery capacity. 
We see that with the battery capacity set at the rated capacity, the Kalman SoC estimate slowly diverges from the true SoC until it is able to sync up at ~15% SoC as the battery’s voltage dissipates. With battery capacity set at the rated capacity, the Kalman filter’s adaptivity enables it to perform far better than Coulomb counting. While it is encouraging that the SoC syncs up with the true value eventually and performs significantly better than a Coulomb counter (5% root mean squared error compared to over 11%), we would like to improve our accuracy further. In order to do this, we adjust the battery capacity to line up more closely with reality. Using 85% of the rated battery capacity, we find that the Kalman SoC lines up almost exactly with the expected SoC. After decreasing the battery capacity to 85% of the rated capacity, we were able to estimate SoC with less than 1% RMSE for a single discharge with the Kalman filter. Interestingly, when we decrease the battery capacity to 80% of its rating, the Coulomb counter tracks the expected SoC almost exactly, but the Kalman SoC slightly underestimates until it syncs up at the end of the discharge. This is likely a consequence of not including a hysteresis term in the ECM voltage equation, which causes the Kalman filter to slightly overestimate the battery voltage, leading to a negative SoC correction. In the future, we plan to add hysteresis to our ECM in order to further optimize performance. After decreasing the battery capacity to 80% of the rated capacity, the Coulomb SoC tracks the true SoC almost exactly and the Kalman diverges slightly. Ultimately, both Coulomb counting and the Kalman filter perform well when the battery capacity is accurate. The difference is that the Kalman filter can handle a wide range of parameter errors and, in the case of a data outage, adapts and quickly re-converges to the true SoC value while the Coulomb counter would be lost until it reaches a recalibration point. 
Next Steps At this stage we are confident enough in our lab results and are currently deploying this algorithm to the field, where battery usage and health will vary widely, and we will collect more data to inform future improvements. Already we plan to build an adaptive battery capacity into the Kalman filter, so we can stop relying on the inaccurate rated capacities, improve our SoC accuracy, and track battery health across our networks. Stay tuned for future updates on how this algorithm progresses and also how this algorithm is used to support new product features, coming soon!
This function selects an appropriate bandwidth bw for the kernel estimator of the pair correlation function of a point process intensity computed by pcf.ppp (homogeneous case) or pcfinhom (inhomogeneous case).

With cv.method="leastSQ", the bandwidth \(h\) is chosen to minimise an unbiased estimate of the integrated mean-square error criterion \(M(h)\) defined in equation (4) in Guan (2007a):

$$ M(h) = \frac{\mbox{MSE}(\sigma)}{\lambda^2} - g(0) $$

The code implements the fast algorithm of Jalilian and Waagepetersen (2018).

With cv.method="compLik", the bandwidth \(h\) is chosen to maximise a likelihood cross-validation criterion \(CV(h)\) defined in equation (6) of Guan (2007b).

The result is a numerical value giving the selected bandwidth.
Interaction Effects in Linear and Generalized Linear Models
Examples and Applications Using Stata
January 2019 | 608 pages | SAGE Publications, Inc

“This book is remarkable in its accessible treatment of interaction effects. Although this concept can be challenging for students (even those with some background in statistics), this book presents the material in a very accessible manner, with plenty of examples to help the reader understand how to interpret their results.”
–Nicole Kalaf-Hughes, Bowling Green State University

Offering a clear set of workable examples with data and explanations, Interaction Effects in Linear and Generalized Linear Models is a comprehensive and accessible text that provides a unified approach to interpreting interaction effects. The book develops the statistical basis for the general principles of interpretive tools and applies them to a variety of examples, introduces the ICALC Toolkit for Stata, and offers a series of start-to-finish application examples to show students how to interpret interaction effects for a variety of different techniques of analysis, beginning with OLS regression. The author’s website provides a downloadable toolkit of Stata® routines to produce the calculations, tables, and graphics for each interpretive tool discussed. Also available are the Stata® dataset files to run the examples in the book.

Series Editor’s Introduction
About the Author
1. Introduction and Background
Overview: Why Should You Read This Book?
The Logic of Interaction Effects in Linear Regression Models
The Logic of Interaction Effects in GLMs
Diagnostic Testing and Consequences of Model Misspecification
Roadmap for the Rest of the Book
2. Basics of Interpreting the Focal Variable’s Effect in the Modeling Component
Mathematical (Geometric) Foundation for GFI
GFI Basics: Algebraic Regrouping, Point Estimates, and Sign Changes
3. The Varying Significance of the Focal Variable’s Effect
Test Statistics and Significance Levels
JN Mathematically Derived Significance Region
Empirically Defined Significance Region
Confidence Bounds and Error Bar Plots
Summary and Recommendations
4. Linear (Identity Link) Models: Using the Predicted Outcome for Interpretation
Options for Display and Reference Values
Reference Values for the Other Predictors (Z)
Constructing Tables of Predicted Outcome Values
Charts and Plots of the Expected Value of the Outcome
5. Nonidentity Link Functions: Challenges of Interpreting Interactions in Nonlinear Models
Mathematically Defining the Confounded Sources of Nonlinearity
Revisiting Options for Display and Reference Values
Summary and Recommendations
Derivations and Calculations
6. ICALC Toolkit: Syntax, Options, and Examples
INTSPEC: Syntax and Options
GFI Tool: Syntax and Options
SIGREG Tool: Syntax and Options
EFFDISP Tool: Syntax and Options
OUTDISP Tool: Syntax and Options
7. Linear Regression Model Applications
8. Logistic Regression and Probit Applications
One-Moderator Example (Nominal by Nominal)
Three-Way Interaction Example (Interval by Interval by Nominal)
9. Multinomial Logistic Regression Applications
One-Moderator Example (Interval by Interval)
Two-Moderator Example (Interval by Two Nominal)
10. Ordinal Regression Models
One-Moderator Example (Interval by Nominal)
Two-Moderator Interaction Example (Nominal by Two Interval)
11. Count Models
One-Moderator Example (Interval by Nominal)
Three-Way Interaction Example (Interval by Interval by Nominal)
12. Extensions and Final Thoughts
Final Thoughts: Dos, Don’ts, and Cautions
Appendix: Data for Examples
Chapter 2: One-Moderator Example
Chapter 2: Two-Moderator Mixed Example
Chapter 2: Two-Moderator Interval Example
Chapter 2: Three-Way Interaction Example
Chapter 3: One-Moderator Example
Chapter 3: Two-Moderator Example
Chapter 3: Three-Way Interaction Example
Chapter 4: Tables One-Moderator Example and Figures Example 3
Chapter 4: Tables Two-Moderator Example
Chapter 4: Figures Examples 1 and 2
Chapter 4: Figures Example 4
Chapter 4: Tables Three-Way Interaction Example and Figures Example 5
Chapter 5: Examples 1 and 2
Chapter 6: One-Moderator Example
Chapter 6: Two-Moderator Example
Chapter 6: Three-Way Interaction Example
Chapter 7: One-Moderator Example
Chapter 7: Two-Moderator Example
Chapter 8: One-Moderator Example
Chapter 8: Three-Way Interaction Example
Chapter 9: One-Moderator Example
Chapter 9: Two-Moderator Example
Chapter 10: One-Moderator Example
Chapter 10: Two-Moderator Example
Chapter 11: One-Moderator Example
Chapter 11: Three-Way Interaction Example
Chapter 12: Polynomial Example
Chapter 12: Heckman Example
Chapter 12: Survival Analysis Example

“Interaction Effects in Linear and Generalized Linear Models provides an intuitive approach that benefits both new users of Stata getting acquainted with these statistical models as well as experienced students looking for a refresher. The topic of interactions is greatly important given that many of our main theories in the social and behavioral sciences rely on moderating effects of variables.
This book does a terrific job of guiding the reader through the various statistical commands available in Stata and explaining the results and taking the reader through different considerations in graphically presenting their results.”
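The central idea the book is organized around, that in an interaction model the focal variable's effect depends on the value of a moderator, can be illustrated with a few lines of Python. This is an illustrative sketch, not material from the book; the coefficients are hypothetical.

```python
# Illustrative sketch (not from the book): in a model
#   y = b0 + b1*x + b2*z + b3*(x*z),
# the effect of the focal variable x is not b1 alone but b1 + b3*z,
# so it must be evaluated at chosen values of the moderator z.

def focal_effect(b1, b3, z):
    """Marginal effect of x on y at moderator value z."""
    return b1 + b3 * z

# Hypothetical coefficients, for demonstration only.
b1, b3 = 2.0, -0.5
for z in (0, 2, 4):
    print(f"z = {z}: effect of x = {focal_effect(b1, b3, z)}")
```

Note how the effect of x shrinks as z grows and even reaches zero, which is exactly why interaction effects must be reported at several moderator values rather than as a single coefficient.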
Double Tetrahelix Here is a figure I came up with after studying the tetrahelix. This must have been some time before November 2, 1981, since that's when I started to analyze its geometry mathematically. It is fairly easy to construct with equal-diameter spheres or toothpicks. The tetrahelix is based on a figure of two tangent spheres which form the center of an arc of four other tangent spheres. In toothpick terminology, this is three face-bonded tetrahedra sharing a common edge. I wondered what I would get based on a figure with four face-bonded tetrahedra and this is what came out. I call it a double tetrahelix since it can be interpreted as two tetrahelices bonded together. They share two common strands of spheres which are tangent all the way along the strand. Each helix has an unshared strand of spheres which has small gaps in it. The gap corresponds to Buckminster Fuller's "Unzipping Angle" (2π - 5⋅acos(1/3), or about 7.35610°) which is discussed in Section 934.00 of Synergetics. There and in the previous section (933.00) discussing the tetrahelix, Fuller draws a tantalizing correspondence between DNA and the tetrahelix, though I've never seen a double tetrahelix model constructed by him. It is an interesting and natural conjecture, but as yet I haven't seen a biochemist structurally verify this or describe any correspondence between the two. Certainly biochemists have shown a lot of interest in geodesics and tensegrity. This would be another good place for them to dip their oars into Fuller's work. The "Double Tetrahelix" figure available in the Tensegrity Viewer is a "toothpick" version of this model. The green backbones of that figure have slightly more than unit-length segments reflecting the gap, while all the other segments are unit length. There is also a VRML model which tries to show the correspondence between the sphere model and a toothpick model.
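The unzipping angle quoted above can be checked directly from its closed form. A minimal sketch in Python:

```python
import math

# Fuller's "unzipping angle" as given in the text: 2*pi - 5*acos(1/3).
# Five regular tetrahedra sharing a common edge leave this small gap.
gap = 2 * math.pi - 5 * math.acos(1 / 3)
print(math.degrees(gap))  # about 7.35610 degrees, matching the figure quoted above
```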
What weight is 1 oz in grams?

Which is more, 1 oz or 1 g?
If you’re wondering how an ounce relates to a gram, it turns out that 1 ounce has a much larger mass than 1 gram: 1 ounce is approximately equal to 28.35 grams.

What weight is 1 oz in grams?
28.35 g

What does 1 oz mean?
oz (abbreviation of ounce), noun. A unit of weight equal to one sixteenth of a pound (avoirdupois); 1 ounce is equal to 437.5 grains, or 28.349 grams. Also, a unit equal to one twelfth of a troy or apothecaries’ pound; 1 troy ounce is equal to 480 grains, or 31.103 grams.

What’s 1 oz in cups?
How many cups per fluid ounce? 1 fluid ounce equals 0.125 cups, which is the fluid ounce to cup conversion factor.

How many 200-gram weights make 1000 grams?
1000 g ÷ 200 g = 5 (and 1000 g = 1 kg, so 200 g = 0.2 kg).

How many grams of nitrogen are in a diet consisting of 100 grams of protein?
Protein contains about 16% nitrogen. Dividing 100% by 16% gives the conversion factor 6.25, so 100 g of protein contains about 100 ÷ 6.25 = 16 g of nitrogen. Taking into account the amount of protein or amino acids in the diet, consumers can use this factor to determine the amount of nitrogen in a given amount of protein.

Why are grams called grams?
By mass, one gram corresponds to one thousandth of a liter (one cubic centimeter) of water at 4 degrees Celsius. The word “gram” comes from the Late Latin “gramma”, meaning a small weight, via the French “gramme”. The symbol for grams is g.

What amount in grams of quicklime can be obtained from 25 grams of CaCO3 on calcination?
Full step-by-step answer: decomposition of 25 g of calcium carbonate gives 14 g of calcium oxide, which is what we call quicklime (100 g of CaCO3 yields 56 g of CaO, so 25 g yields 14 g).

How many grams of 80% pure marble stone on calcination can give 14 grams of quicklime?
14 g of quicklime requires 25 g of CaCO3. Since the marble is 80% pure, 100 g of marble contains only 80 g of CaCO3, so the marble needed is (25 ÷ 80) × 100 = 31.25 g.
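The conversions above can be bundled into a small helper. This is an illustrative sketch using the standard defined factors (28.349523125 g per avoirdupois ounce, 31.1034768 g per troy ounce), which round to the figures quoted on this page.

```python
# Conversion factors behind the answers above.
GRAMS_PER_AVOIRDUPOIS_OZ = 28.349523125  # exact by definition
GRAMS_PER_TROY_OZ = 31.1034768           # exact by definition
CUPS_PER_FLUID_OZ = 0.125                # US customary

def oz_to_g(oz, troy=False):
    """Convert ounces to grams (avoirdupois by default, troy if requested)."""
    return oz * (GRAMS_PER_TROY_OZ if troy else GRAMS_PER_AVOIRDUPOIS_OZ)

print(round(oz_to_g(1), 2))             # 28.35
print(round(oz_to_g(1, troy=True), 2))  # 31.1
# Protein-to-nitrogen factor from the text: 100% / 16% = 6.25
print(100 / 16)                         # 6.25
```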
INRIAlign toolbox for fMRI realignment in SPM99 The INRIAlign toolbox enhances the standard SPM realignment routine (see topic: spm_realign_ui in SPM99 documentation). In the latter, rigid registration is achieved by minimization of the sum of squared intensity differences (SSD) between two images. As noted by several SPM users, SSD based registration may be biased by a variety of image artifacts and also by activated areas. To get around this problem, INRIAlign reduces the influence of large intensity differences by weighting errors using a non-quadratic, slowly-increasing function (rho function). This is basically the principle of an M-estimator. When launching INRIAlign, the user may select a specific rho function as well as an associated relative cut-off distance (which is needed by most of the rho functions). By default, the rho function is that of Geman-McClure while the relative cut-off distance is set to 2.5. Apart from this distinction, the method is very similar to spm_realign and uses the same editable default parameters. Most of the implementation has been directly adapted from the code written by J. Author: Alexis Roche, INRIA Sophia Antipolis, EPIDAURE Group, Now working at the CEA anatomo-fonctional neuro-imaging unit, Frederic Joliot Hospital, Orsay, France • L. Freire, A. Roche and J.-Fr. Mangin. What is the best similarity measure for motion correction in fMRI? IEEE Transactions in Medical Imaging 21, p. 470-484, 2002. • L. Freire and J.-F. Mangin. Motion correction algorithms may create spurious brain activations in the absence of subject motion. Neuroimage 14(3), p. 709-722, september 2001. • P.J. Rousseeuw and A.M. Leroy. Robust Regression and Outlier Detection. Wiley Series in Probability and Mathematical Statistics. 1987. Matlab routines Xavier Pennec Last modified: Thu Oct 14 18:47:06 MEST 2004
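The M-estimator idea behind INRIAlign can be sketched in a few lines. This is an illustration, not the toolbox's actual code: it assumes the common parameterization of the Geman-McClure rho, rho(r) = r^2 / (c^2 + r^2); the toolbox's exact scaling of the relative cut-off distance may differ.

```python
# Sketch of robust registration cost (not the actual INRIAlign code).
# Instead of summing squared intensity differences (SSD), large residuals
# are down-weighted by a slowly increasing rho function.

def rho_geman_mcclure(r, c=2.5):
    # One common form of the Geman-McClure rho with cut-off c (assumed here).
    return r * r / (c * c + r * r)

def robust_cost(residuals, c=2.5):
    return sum(rho_geman_mcclure(r, c) for r in residuals)

# A single outlier (e.g. an activated voxel) dominates the SSD cost
# but barely moves the robust cost.
inliers = [0.1, -0.2, 0.05]
ssd = sum(r * r for r in inliers + [50.0])
robust = robust_cost(inliers + [50.0])
print(ssd, robust)  # SSD jumps above 2500; robust cost stays near 1
```

This bounded influence of large residuals is what makes the registration less biased by artifacts and activated areas.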
A refined propensity account for GRW theory

Published in: Foundations of Physics, Springer, 2021, vol. 51, no. 2, p. 20. English.

Spontaneous collapse theories of quantum mechanics turn the usual Schrödinger equation into a stochastic dynamical law. In particular, in this paper I will focus on the GRW theory. Two philosophical issues that can be raised about GRW concern (a) the ontology of the theory, in particular the nature of the wave function and its role within the theory, and (b) the interpretation of the objective probabilities involved in the dynamics of the theory. During the last years, it has been claimed that we can take advantage of dispositional properties in order to develop an ontology for GRW theory, and also in order to ground the objective probabilities which are postulated by it. However, in this paper I will argue that the dispositional interpretations which have been discussed in the literature so far are either flawed or, at best, incomplete. If we want to endorse a dispositional interpretation of GRW theory, we thus need an extended account which specifies the precise nature of those properties and which also makes clear how they can correctly ground all the probabilities postulated by the theory. Thus, after having introduced several different kinds of probabilistic dispositions, I will try to fill the gap in the literature by proposing a novel and complete dispositional account of GRW, based on what I call spontaneous weighted multi-track propensities. I claim that such an account can satisfy both of our desiderata.

File: Lorenzetti_fop_2021.pdf
edHelper.com - Logarithm Word Problems 3. Paul is a scientist who measured the intensity of an earthquake to be 121,000 times the reference intensity. If Paul needs to report a Richter scale reading to Albert, a newspaper reporter, what number should Paul tell Albert? 4. An earthquake was reported to have a Richter number of 6. How does the intensity of the earthquake approximately compare with the reference intensity?
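Both problems reduce to the Richter relation R = log10(I / I0), where I0 is the reference intensity. A worked sketch (the worksheet itself gives no solutions; these are computed here):

```python
import math

# Problem 3: intensity 121,000 times the reference intensity.
R = math.log10(121_000)
print(round(R, 1))  # 5.1 is the Richter reading Paul should report

# Problem 4: a Richter number of 6 means I / I0 = 10**6.
print(10 ** 6)  # about one million times the reference intensity
```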
Software Engineering Iterative Improvement The greedy strategy, considered in the preceding chapter, constructs a solution to an optimization problem piece by piece, always adding a locally optimal piece to a partially constructed solution. In this chapter, we discuss a different approach to designing algorithms for optimization problems. It starts with some feasible solution (a solution that satisfies all the constraints of the problem) and proceeds to improve it by repeated applications of some simple step. This step typically involves a small, localized change yielding a feasible solution with an improved value of the objective function. When no such change improves the value of the objective function, the algorithm returns the last feasible solution as optimal and stops. There can be several obstacles to the successful implementation of this idea. First, we need an initial feasible solution. For some problems, we can always start with a trivial solution or use an approximate solution obtained by some other (e.g., greedy) algorithm. But for others, finding an initial solution may require as much effort as solving the problem after a feasible solution has been identified. Second, it is not always clear what changes should be allowed in a feasible solution so that we can check efficiently whether the current solution is locally optimal and, if not, replace it with a better one. Third—and this is the most fundamental difficulty— is an issue of local versus global extremum (maximum or minimum). Think about the problem of finding the highest point in a hilly area with no map on a foggy day. A logical thing to do would be to start walking “up the hill” from the point you are at until it becomes impossible to do so because no direction would lead up. 
You will have reached a local highest point, but because of limited visibility, there will be no simple way to tell whether the point is the highest (the global maximum you are after) in the entire area. Fortunately, there are important problems that can be solved by iterative-improvement algorithms. The most important of them is linear programming. We have already encountered this topic in Section 6.6. Here, in Section 10.1, we introduce the simplex method, the classic algorithm for linear programming. Discovered by the U.S. mathematician George B. Dantzig in 1947, this algorithm has proved to be one of the most consequential achievements in the history of algorithmics. In Section 10.2, we consider the important problem of maximizing the amount of flow that can be sent through a network with links of limited capacities. This problem is a special case of linear programming. However, its special structure makes it possible to solve the problem by algorithms that are more efficient than the simplex method. We outline the classic iterative-improvement algorithm for this problem, discovered by the American mathematicians L. R. Ford, Jr., and D. R. Fulkerson in the 1950s. The last two sections of the chapter deal with bipartite matching. This is the problem of finding an optimal pairing of elements taken from two disjoint sets. Examples include matching workers and jobs, high school graduates and colleges, and men and women for marriage. Section 10.3 deals with the problem of maximizing the number of matched pairs; Section 10.4 is concerned with the stability of a matching. We also discuss several iterative-improvement algorithms in Section 12.3, where we consider approximation algorithms for the traveling salesman and knapsack problems. Other examples of iterative-improvement algorithms can be found in the algorithms textbook by Moret and Shapiro [Mor91], books on continuous and discrete optimization (e.g., [Nem89]), and the literature on heuristic search (e.g., [Mic10]).
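The generic iterative-improvement scheme described at the start of the chapter (start feasible, repeatedly apply a small improving change, stop at a local optimum) can be sketched as follows. This is an illustrative local-search skeleton, not an algorithm from the chapter; the neighborhood and objective are toy examples.

```python
# Generic iterative improvement (local search): start from a feasible
# solution and repeatedly move to a neighboring solution with a better
# objective value, stopping when no neighbor improves on the current one.

def iterative_improvement(initial, neighbors, objective):
    current = initial
    improved = True
    while improved:
        improved = False
        for candidate in neighbors(current):
            if objective(candidate) > objective(current):
                current = candidate   # accept the improving local change
                improved = True
                break
    return current                    # a local (not necessarily global) optimum

# Toy example: maximize f(x) = -(x - 7)**2 over the integers, moving by +/-1.
best = iterative_improvement(
    0,
    neighbors=lambda x: [x - 1, x + 1],
    objective=lambda x: -(x - 7) ** 2,
)
print(best)  # 7
```

For a concave toy objective like this one the local optimum is also global; the chapter's hill-in-the-fog discussion is precisely about problems where that is not the case.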
Saddle Making at Spokane Falls Community College Verlane DeGrange Spokane Falls Community College MEASURING AND SELLING PRICE For cutting straps Many times a saddlemaker must decide how much to charge for a strap cut from a piece of leather or side of leather. Since a side of leather is an irregular shape, at first glance it may difficult to decide where to begin. This method shows how to organize this odd shape and make money from your investment. Follow these steps and always remember: never cut any leather until you have done all the computations first and are absolutely certain of your computations. 1. Lay out side of leather on flat surface. Inspect both flesh and grain sides for flaws, butcher cuts, and holes. Make note of where the flesh side flaws and cuts are and mark lightly on grain side with modeling tool or pencil. 2. Have invoice from supplier on workbench and compare the square footage listed with actual footage received. Leather is always marked with a whole number and a smaller number following, such as 22 3 indicating the number of square feet are 22 and three-quarter feet. Usually the invoice matches the item shipped. 3. Be sure to note: the total price including shipping at the bottom of the invoice is the total amount you’ve paid for the leather. This will be the cost you will use in # 9 to calculate your selling price. 4. Lay long 9 foot straightedge along the top edge to “square up” the side of leather. Be sure to leave as little waste on the part you plan to cut off, while still cutting off any clamp marks, holes, or nicks. 5. Do not mark anything yet. However, measure with a measuring tape the distance from the line you plan to cut off to the belly portion of the side. This distance usually is about 20”-22” depending on the depth of the side from top to bottom. You determine where the last cut for straps on the belly will be by the softness or how much the leather “breaks over”. On a younger animal, the break over point may be only 18”. 6. 
Write on the invoice what this number is. 7. To find out what YOUR cost is per strap that is 1” wide x full length of side, divide the depth of hide into total cost. This will vary somewhat depending on the original depth. 8. Let’s assume the cost for the side with the shipping included is $150.00. Thus $150.00 ÷ 20” = $7.50 per inch width is your actual cost out of pocket for every 1” width strap you cut from this side in the quality zone of leather. Let’s step aside and learn about calculating prices before proceeding. To calculate your selling price, you need to sell the leather at a 40% profit margin. There is a difference between margin and mark-up that you need to be aware of. Since the wholesale price is the smaller amount, you must “mark-up” an item more than 40% to get the true selling price. This can be illustrated by this example: If you are selling an item for $10 and you are working on a 40% margin, to find your actual cost when only the selling price is known, subtract 40% of the selling price from $10.00. Thus $10.00 – (40% of $10.00) = $6.00. To arrive at the selling price based on the wholesale cost (your cost), you need to mark-up the item 67% of the wholesale cost. Again: $6.00 + (67% of $6.00) = $10.00. If you try to arrive at the selling price by adding only 40% to your cost, you’ll fall short of a profit that allows you to stay in business. Let’s do the math again this time and you’ll see why you’re losing profits: $6.00 + (40% of $6.00) = $8.40. You’re literally being robbed of $1.60, which is approximately 16% of the selling price. You wouldn’t pay a customer 16% to do business with you, but with faulty math reasoning that’s exactly what you are doing! You need a 40% margin (translate: 67% mark-up) to stay in business according to the SBA. On some items you’ll need even more to remain profitable, but this is a general guideline for a business. Now, back to our original problem: what should you sell a 1” wide strap for when you’ve paid $7.50 per inch? 9.
Do the math this time using actual numbers: $7.50 + (67% of $7.50) = $12.53. To make the number a bit more in line with pricing ease, call your selling price $12.50 for a 1” wide strap that runs the length of the side of leather. 10. Let’s do another problem based on this concept: what should the selling price be of a strap that is only 5/8” wide? Now in a more condensed form, let’s run through the math again:
· Your cost per inch: $7.50 per inch in width
· 5/8” = .625” as a decimal (for simplicity on a calculator)
· Your cost for a 5/8” strap = $4.69; stated mathematically it is (.625 x $7.50 = $4.69). For ease of handling, let’s round off your cost to $4.70
· Mark-up $4.70 by 67% for your profit. Thus $4.70 + (67% of $4.70) = $7.849 is the actual selling price.
· For ease of handling, round off the selling price to $7.85
Using this method, you’ll make the right amount of money selling this side of leather if you sell 20-1” wide straps and still have the belly left over for small items. If you took the original cost of the leather at $150.00 and marked up the entire side 67%, you’d arrive at the selling price of $250.00 for the whole thing. By cutting straps, you get a greater profitability from a given space on the side (the prime area) and still have a bit left over for small projects. Thus strap work is usually more profitable and efficient than cutting large irregular pieces.
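The margin-versus-markup arithmetic above generalizes to one formula: a margin m on the selling price corresponds to a markup of m / (1 - m) on cost, i.e. price = cost / (1 - m). A small sketch (illustrative; the article rounds intermediate amounts, so its prices differ by a few cents):

```python
# Margin vs. markup, exactly: a margin m of the selling price means
#   price = cost / (1 - m), equivalent to a markup of m / (1 - m) on cost.

def selling_price(cost, margin=0.40):
    return cost / (1 - margin)

def markup_for_margin(margin):
    return margin / (1 - margin)

print(round(markup_for_margin(0.40), 4))       # 0.6667, i.e. the ~67% markup
print(round(selling_price(7.50), 2))           # 12.5 for the 1-inch strap
print(round(selling_price(0.625 * 7.50), 2))   # 7.81 for the 5/8-inch strap
# (the article rounds the 5/8-inch cost up to $4.70 first, giving $7.85)
```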
Convolutional Neural Networks: An Introduction

Convolutional Neural Networks (CNN) are used for the majority of applications in computer vision. You can find them almost everywhere. They are used for image and video classification and regression, object detection, image segmentation, and even playing Atari games. Understanding the convolution layer is critical in building successful vision models. In this walkthrough, we’ll walk you through the idea of convolution and explain the concept of channels, padding, stride, and receptive field. Curious about machine learning image recognition and object detection? Read YOLO Algorithm and YOLO Object Detection: An Introduction.

We can represent pictures as a matrix or set of matrices with pixel values. A color image (RGB) transformed into a tensor has three channels corresponding to the Red, Green, and Blue channels, with pixel values between 0 and 255. The size of a tensor is Channel × Width × Height. From the example below, we can see it’s 3 x 128 x 128.

Convolution is an operation of applying a kernel (a small matrix, e.g., 3×3, with weights) over an image grid and computing the dot product. The animation shows convolving a 3×3 kernel over a 4×4 input resulting in a 2×2 output. We can generalize this into:
• An input of size W (assuming height and width are the same and equal W)
• Kernel of size K
• Output of size (W – K) + 1.

Input, output channels, and kernels

The number of input channels in the first layer should equal the number of channels in the input data. The user can define the number of output channels, and it’s a hyperparameter to set. The output channels from one layer become the input channels for the next layer.
We can convert an input, a 3-dimensional tensor, of size n_in × input_height × input_width into an output tensor of size n_out × output_height × output_width by applying a set of n_out 3-dimensional kernels of size n_in × kernel_height × kernel_width. After each filter application, we receive an output of size 1 × output_height × output_width. We can stack these n_out tensors together to get the output of final size n_out × output_height × output_width. It’s a lot to process, so re-read the last couple of paragraphs multiple times if needed. Also, feel free to refer to the image below for further clarification.

The kernel values, also called filters, are parameters and are learned by the neural network. We can represent a kernel as a weight matrix with a couple of parameter types:
• Value of 0 – untrainable
• Tied – having the same value, but trainable

If you want a more in-depth overview of the mathematics behind this, refer to this explanation by Matthew Kleinsmith. Using convolutions instead of fully connected layers has two benefits: the network trains faster and is less prone to overfitting.

There’s a problem of shrinking dimensions – which means every layer of a neural net would have a smaller feature space. Also, the network loses information about the image corners and edges. We don’t want that. The solution to this issue is to introduce padding, which adds a frame of pixels around the image (usually 0-valued pixels). We add padding of size P, which results in the output size of (W – K) + 1 + 2P. It is common to set padding to (K – 1) / 2, so both input and output are of the same dimensions. In practice, we almost always use an odd-size kernel.

Receptive field

Now let’s see what happens if we stack three convolution layers on top of each other. We apply a kernel 3×3 to the (input) image of size 7×7. An orange square in the input matrix’s top left corner is a receptive field for cell (2, 2) in the first layer.
It is defined as an area of an image that is involved in the calculation of a layer. We start with a receptive field of size 3×3, and with each convolution, the receptive field is increased by K – 1 (K = kernel size). So in the final layer, we end up with a receptive field of size 7×7 (going from 3×3 to 5×5 to 7×7). This means that larger and larger areas of an initial input image are used to calculate the features by going deeper into the network. Sliding 1 pixel when moving the kernel means that we would need many layers to build big enough receptive fields to build complex features. One way to approach this problem is to introduce a stride. Adding a stride to a layer means skipping pixels when applying the kernel. We could move over two pixels after each kernel application. This is called stride-2 convolution. After stride-2 convolution and padding, the output size can be calculated as (W – K + 2P) / S + 1, where S is a stride size. In the example above, we start with 5×5 input, apply 3×3 kernel, add 1 padding, and stride 2, so we end up with the output of size 3×3. As a result, we decreased the size of activations. We already know that more sophisticated features are calculated by going deep in the network, so we don’t want to reduce the number of calculations. When we downsample the activations by adding a stride to the layer, we need to increase the number of output channels (depth of the output) to retain the calculation complexity. For example, stride-2 is halving the output size, so we need to double the output channels. The deeper in the network, the more output channels we will have. Neural networks are tough to understand at first, with convolutions being one of the most challenging topics in the field. Still, image data is everywhere, and knowing how to work with images can give a competitive advantage to both yourself and your company. 
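Before wrapping up, the output-size formula above can be checked with a few lines of pure Python. This is a naive illustrative implementation (technically a cross-correlation, as in most deep-learning libraries), assuming a square single-channel input.

```python
# Output size from the article's formula: (W - K + 2P) / S + 1.
def out_size(w, k, p=0, s=1):
    return (w - k + 2 * p) // s + 1

# Naive 2-D convolution (cross-correlation) with padding and stride.
def conv2d(image, kernel, p=0, s=1):
    k, w = len(kernel), len(image)
    padded = [[0.0] * (w + 2 * p) for _ in range(w + 2 * p)]
    for i in range(w):
        for j in range(w):
            padded[i + p][j + p] = image[i][j]
    n = out_size(w, k, p, s)
    out = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            out[i][j] = sum(
                padded[i * s + a][j * s + b] * kernel[a][b]
                for a in range(k) for b in range(k)
            )
    return out

print(out_size(4, 3))            # 2: the 4x4 input, 3x3 kernel example
print(out_size(5, 3, p=1, s=2))  # 3: the stride-2, padding-1 example
```

Libraries like PyTorch and TensorFlow perform this same computation, just vectorized and across channels.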
Thankfully, we have many resources to learn these complex topics, and the subject gets easier after a couple of repetitions. This article covered the essentials needed to move forward with the practical part. Modern libraries like TensorFlow and PyTorch won’t require you to code out things like padding and stride manually, but knowing what’s going on under the hood will make the debugging process that much easier.

The article Convolutional Neural Networks: An Introduction comes from Appsilon Data Science | End to End Data Science Solutions.
priority queue - OpenGenus IQ: Learn Algorithms, DL, System Design

Cartesian tree sorting, also called the Levcopoulos–Petersson algorithm, is an adaptive sorting algorithm, i.e., it performs much better on partially sorted data. It needs two data structures: a Cartesian tree and a priority queue. The algorithm here uses a min-heap Cartesian tree to produce a sorted sequence.
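The two-structure scheme described above can be sketched in Python: build a min-heap Cartesian tree in linear time with a stack, then repeatedly pop the minimum node from a priority queue, emitting its children. This is an illustrative sketch of the algorithm, not the OpenGenus implementation.

```python
import heapq
from itertools import count

def cartesian_tree_sort(seq):
    # Build a min-heap Cartesian tree in O(n) with a stack.
    # Each node is [value, left_child, right_child].
    nodes = [[v, None, None] for v in seq]
    stack = []
    for node in nodes:
        last = None
        while stack and stack[-1][0] > node[0]:
            last = stack.pop()
        node[1] = last               # last popped node becomes the left child
        if stack:
            stack[-1][2] = node      # node becomes right child of stack top
        stack.append(node)
    root = stack[0] if stack else None

    # Pop minima from a priority queue, pushing each popped node's children.
    # On nearly sorted input the queue stays small, which is the source of
    # the algorithm's adaptivity.
    out, tie = [], count()           # tie-breaker avoids comparing node lists
    pq = [(root[0], next(tie), root)] if root else []
    while pq:
        val, _, node = heapq.heappop(pq)
        out.append(val)
        for child in (node[1], node[2]):
            if child is not None:
                heapq.heappush(pq, (child[0], next(tie), child))
    return out

print(cartesian_tree_sort([3, 1, 2, 5, 4]))  # [1, 2, 3, 4, 5]
```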
Probabilities | AnalystPrep - FRM Part 1 Study Notes and Study Materials After completing this reading, you should be able to: • Describe an event and an event space. • Describe independent events and mutually exclusive events. • Explain the difference between independent events and conditionally independent events. • Calculate the probability of an event for a discrete probability function. • Define and calculate a conditional probability. • Distinguish between conditional and unconditional probabilities. • Explain and apply Bayes’ rule. Probability is the foundation of statistics, risk management, and econometrics. Probability quantifies the likelihood that some event will occur. For instance, we could be interested in the probability that there will be a defaulter in a prime mortgage facility. Sample Space, Event Space, and Events Sample Space (Ω) A sample space is defined as a collection of all possible occurrences of an experiment. The outcomes are dependent on the problem being studied. For example, when modeling returns from a portfolio, the sample space is a set of real numbers. As another example, assume we want to model defaults in loan payment; we know that there can only be two outcomes: either the firm defaults or it doesn’t. As such, the sample space is Ω = {Default, No Default}. To give yet another example, the sample space when a fair six-sided die is tossed is made of six different outcomes: Ω = {1, 2, 3, 4, 5, 6} Events (ω) An event is a set of outcomes (which may contain more than one element). For example, suppose we tossed a die. A “6” would constitute an event. If we toss two dice simultaneously, a {6, 2} would constitute an event. An event that contains only one outcome is termed an elementary event. Event Space (F) The event space refers to the set of all possible outcomes and combinations of outcomes. For example, consider a scenario where we toss two fair coins simultaneously. 
The following would constitute our event space: {HH, HT, TH, TT} Note: If the coins are fair, the probability of a head, P(H), equals the probability of a tail, P(T). The probability of an event refers to the likelihood of that particular event occurring. For example, the probability of a Head when we toss a coin is 0.5, and so is the probability of a Tail. According to frequentist interpretation, the term probability stands for the number of times an event occurs if a set of independent experiments is performed. But this is what we call the frequentist interpretation because it defines an event’s probability as the limit of its relative frequency in many trials. It is just a conceptual explanation; in finance, we deal with actual, non-experimental events such as the return earned on a stock. Independent and Mutually Exclusive Events Mutually Exclusive Events Two events, A and B, are said to be mutually exclusive if the occurrence of A rules out the occurrence of B, and vice versa. For example, a car cannot turn left and turn right at the same time. Mutually exclusive events are such that one event precludes the occurrence of all the other events. Thus, if you roll a dice and a 4 comes up, that particular event precludes all the other events, i.e., 1,2,3,5 and 6. In other words, rolling a 1 and a 5 are mutually exclusive events: they cannot occur simultaneously. Furthermore, there is no way a single investment can have more than one arithmetic mean return. Thus, arithmetic returns of, say, 20% and 17% constitute mutually exclusive events. Independent Events Two events, A and B, are independent if the fact that A occurs does not affect the probability of B occurring. When two events are independent, this simply means that both events can happen at the same time. In other words, the probability of one event happening does not depend on whether the other event occurs or not. 
For example, we can define A as the event that it rains in New York on March 15 and B as the event that it rains in Frankfurt on March 15. In this instance, both events can happen simultaneously or not. Another example would be defining event A as getting tails on the first coin toss and B as getting tails on the second coin toss. Landing on tails on the first toss will not affect the probability of getting tails on the second toss. The intersection of two events, say A and B, is the set of outcomes occurring in both A and B. It is denoted P(A∩B). For independent events, $$P(A∩B)=P(A \text{ and } B)=P(A)×P(B)$$ Independence can be extended to n independent events: Let \(A_1,A_2,…, A_n\) be independent events; then: $$P(A_1∩A_2∩…∩ A_n )=P(A_1)×P(A_2 )×…×P(A_n )$$ For mutually exclusive events, $$P(A∩B)=P(A \text{ and } B)=0$$ This is because the occurrence of A rules out the occurrence of B. Remember that a car cannot turn left and turn right at the same time! The union of two events, say A and B, is the set of outcomes occurring in at least one of the two events, A or B. It is denoted P(A∪B). To determine the likelihood that either of two mutually exclusive events occurs, we sum their individual probabilities. The following is the statistical notation: $$ P\left( A\cup B \right) =P(A \text{ or } B)=P\left( A \right) +P\left( B \right) $$ For any two events A and B, mutually exclusive or not, the probability that at least one of them occurs is given by: $$P(A \cup B)=P(A \text{ or } B) = P(A)+P(B)-P(A \cap B)$$ The Complement of a Set Another important concept in probability is the complement of a set, denoted A^c (where A is any event), which is the set of outcomes that are not in A.
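These rules can be verified numerically on a toy example. The short Python sketch below (using hypothetical events on a fair six-sided die) checks the intersection, union, mutual-exclusivity, and complement identities with exact fractions:

```python
from fractions import Fraction

# Sample space for one roll of a fair six-sided die
omega = {1, 2, 3, 4, 5, 6}

def prob(event):
    """P(event) under equally likely outcomes, as an exact fraction."""
    return Fraction(len(event), len(omega))

A = {2, 4, 6}     # "roll is even"
B = {1, 2, 3, 4}  # "roll is at most 4"
C = {1, 3, 5}     # "roll is odd": mutually exclusive with A

# Addition rule for events that may overlap
assert prob(A | B) == prob(A) + prob(B) - prob(A & B)

# Mutually exclusive events: P(A and C) = 0, so the rule reduces to a plain sum
assert prob(A & C) == 0
assert prob(A | C) == prob(A) + prob(C)

# A and B happen to be independent here: P(A and B) = P(A) x P(B)
assert prob(A & B) == prob(A) * prob(B)

# Complement rule: P(A) + P(A^c) = 1
assert prob(A) + prob(omega - A) == 1

print(prob(A), prob(B), prob(A & B))  # 1/2 2/3 1/3
```

The events A, B, and C here are arbitrary illustrations; any subsets of the sample space obey the same identities.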
Together, A and A^c exhaust the sample space, and since the sample space has probability 1 (the first axiom of probability), this implies that: $$P(A ∪ A^c )=P(A)+P(A^c )=1$$ Conditional Probability Until now, we’ve only looked at unconditional probabilities. An unconditional probability (also known as a marginal probability) is simply the probability that an event occurs without considering any other preceding events. In other words, unconditional probabilities are not conditioned on the occurrence of any other events; they are ‘stand-alone’ probabilities. Conditional probability is the probability of one event occurring given some relationship to one or more other events. Our interest lies in the probability of an event A given that another event B has already occurred. Here’s what you should ask yourself: “What is the probability of one event occurring if another event has already taken place?” We read P(A | B) as “the probability of A given B,” and it is given by: $$P(A│B)=\frac{P(A\cap B)}{P(B)}$$ The bar sandwiched between A and B simply indicates “given.” Bayes’ Theorem Bayes’ theorem describes the probability of an event based on prior knowledge of conditions that might be related to the event. For two events A and B, Bayes’ theorem states that: $$ P\left( A|B \right) =\frac { P\left( B|A \right) \times P\left( A \right) }{ P\left( B \right) } $$ Applying Bayes’ Theorem Suppose we hold two bonds, A and B. Each bond has a default probability of 10% over the following year. We are also told that there is a 6% chance that both bonds will default, an 86% chance that neither will default, and a 14% chance that at least one of the bonds will default. All of this information can be summarized in a probability matrix. Often, there is a high correlation between bond defaults. This can be attributed to the sensitivity of bond issuers to broad economic conditions.
The 6% chance that both bonds default is higher than the 1% chance that would apply had the default events been independent (10% × 10% = 1%). The features of the probability matrix can also be expressed in terms of conditional probabilities. For example, the likelihood that bond \(A\) will default given that \(B\) has defaulted is computed as: $$ P\left( A|B \right) =\frac { P\left[ A\cap B \right] }{ P\left[ B \right] } =\frac { 6\% }{ 10\% } =60\% $$ This means that in 60% of the scenarios in which bond \(B\) defaults, bond \(A\) will also default. The above equation is often written as: $$ P\left[ A\cap B \right] =P\left( A|B \right) \times P\left[ B\right] \quad \quad \quad \quad I $$ $$ P\left[ A\cap B \right] =P\left( B|A \right) \times P\left[ A \right] \quad \quad \quad \quad II $$ Setting the right-hand sides of equations \(I\) and \(II\) equal and rearranging gives Bayes’ theorem: $$ \Rightarrow P\left( A|B \right) =\frac { P\left( B|A \right) \times P\left[ A \right] }{ P\left[ B \right] } $$ When presented with new data, Bayes’ theorem can be applied to update beliefs. To understand how the theorem provides a framework for exactly how beliefs should be updated, consider the following example. Example: Applying Bayes’ Theorem Suppose that an analyst, after evaluating historical data, groups fund managers into two categories: star and non-star managers. A star manager beats the market in any particular year with 75% probability, while a non-star manager is equally likely to beat the market or underperform it. Furthermore, each manager’s performance is independent from year to year. Only 16% of managers within a given cohort become stars. Three years ago, a new manager was added to the analyst’s portfolio, and he has beaten the market every single year since.
Determine the probability that the manager was a star when he was first added to the portfolio. What is the probability that he is a star at present? And what is the probability that he beats the market in the following year, given that he has beaten it in the past three years? We first summarize the data by introducing some notation. The probability that a manager beats the market given that he is a star is: $$ P\left( B|S \right) =0.75=\frac { 3 }{ 4 } $$ The probability that a non-star manager beats the market is: $$ P\left( B|\bar { S } \right) =0.5=\frac { 1 }{ 2 } $$ The probability that the new manager was a star at the time he was added to the analyst’s portfolio is simply the unconditional probability that any manager is a star: $$ P\left[ S \right] =0.16=\frac { 4 }{ 25 } $$ To evaluate the likelihood that he is a star at present, we compute the probability that he is a star given that he has beaten the market for three consecutive years, \(P\left( S|3B \right) \), using Bayes’ theorem: $$ P\left( S|3B \right) =\frac { P\left( 3B|S \right) \times P\left[ S \right] }{ P\left[ 3B \right] } $$ $$ P\left( 3B|S \right) ={ \left( \frac { 3 }{ 4 } \right) }^{ 3 }=\frac { 27 }{ 64 } $$ The denominator is the unconditional probability that a manager beats the market three years in a row:
$$ P\left[ 3B \right] =P\left( 3B|S \right) \times P\left[ S \right] +P\left( 3B|\bar { S } \right) \times P\left[ \bar { S } \right] $$ $$ P\left[ 3B \right] ={ \left( \frac { 3 }{ 4 } \right) }^{ 3 }\times \frac { 4 }{ 25 } +{ \left( \frac { 1 }{ 2 } \right) }^{ 3 }\times \frac { 21 }{ 25 } =\frac { 69 }{ 400 } $$ $$ P\left( S|3B \right) =\frac { \left( \frac { 27 }{ 64 } \right) \left( \frac { 4 }{ 25 } \right) }{ \left( \frac { 69 }{ 400 } \right) } =\frac { 9 }{ 23 } \approx 39\% $$ Therefore, there is a 39% chance that the manager is a star after beating the market for three consecutive years. This is our new belief, and it is a significant improvement on our old belief of 16%. Finally, we compute the manager’s chances of beating the market the following year. This is the sum of the probability of a star beating the market and the probability of a non-star beating the market, weighted by the new beliefs: $$ P\left[ B \right] =P\left( B|S \right) \times P\left[ S \right] +P\left( B|\bar { S } \right) \times P\left[ \bar { S } \right] $$ $$ P\left[ B \right] =\frac { 3 }{ 4 } \times \frac { 9 }{ 23 } +\frac { 1 }{ 2 } \times \frac { 14 }{ 23 } =\frac { 55 }{ 92 } \approx 60\% $$ Recall the structure of Bayes’ theorem: $$ P\left( S|3B \right) =\frac { P\left( 3B|S \right) \times P\left[ S \right] }{ P\left[ 3B \right] } $$ The left-hand side is the posterior; in the numerator, the first factor is the likelihood and the second is the prior. Question 1 The probability that the Eurozone economy will grow this year is 18%, and the probability that the European Central Bank (ECB) will loosen its monetary policy is 52%. Assume that the joint probability that the Eurozone economy will grow and the ECB will loosen its monetary policy is 45%. What is the probability that either the Eurozone economy will grow or the ECB will loosen its monetary policy? A. 42.12% B. 25% C. 11% D. 17% The correct answer is B.
The addition rule of probability is used to solve this question: P(E) = 0.18 (the probability that the Eurozone economy will grow is 18%) P(M) = 0.52 (the probability that the ECB will loosen its monetary policy is 52%) P(E ∩ M) = 0.45 (the joint probability that the Eurozone economy will grow and the ECB will loosen its monetary policy is 45%) The probability that either the Eurozone economy will grow or the central bank will loosen its monetary policy is: P(E or M) = P(E) + P(M) – P(E ∩ M) = 0.18 + 0.52 – 0.45 = 0.25 Question 2 A mathematician has given you the following conditional probabilities: $$\begin{array}{l|l} P(O \mid T) = 0.62 & \text{Conditional probability of reaching the office} \\ & \text{if the train arrives on time} \\\hline P(O \mid T^c) = 0.47 & \text{Conditional probability of reaching the office} \\ & \text{if the train does not arrive on time} \\ \hline P(T) = 0.65 & \text{Unconditional probability of} \\ & \text{the train arriving on time} \\ \hline P(O) = \text{?} & \text{Unconditional probability} \\ & \text{of reaching the office} \\ \end{array}$$ What is the unconditional probability of reaching the office, P(O)? A. 0.4325 B. 0.5675 C. 0.3856 D. 0.5244 The correct answer is B. This question can be solved using the total probability rule. If P(T) = 0.65 (the unconditional probability of the train arriving on time is 0.65), then the unconditional probability of the train not arriving on time is P(T^c) = 1 – P(T) = 1 – 0.65 = 0.35. Now, we can solve for $$\begin{align*}P(O)&= P(O \mid T) \times P(T) + P(O \mid T^c) \times P(T^c)\\& = 0.62 \times 0.65 + 0.47 \times 0.35 \\&= 0.5675\end{align*}$$ Note: P(O) is the unconditional probability of reaching the office. It is simply the sum of: 1. the probability of reaching the office if the train arrives on time, multiplied by the probability of the train arriving on time, and 2.
reaching the office if the train does not arrive on time, multiplied by the train not arriving on time (or, given the information, one minus the probability of the train arriving on time) Question 3 Suppose you are an equity analyst for the XYZ investment bank. You use historical data to categorize the managers as excellent or average. Excellent managers outperform the market 70% of the time, and average managers outperform the market only 40% of the time. Furthermore, 20% of all fund managers are excellent managers and 80% are simply average. The probability of a manager outperforming the market in any given year is independent of their performance in any other year. A new fund manager started three years ago and outperformed the market all three years. What’s the probability that the manager is excellent? A. 29.53% B. 12.56% C. 57.26% D. 30.21% The correct answer is C. The best way to visualize this problem is to start off with a probability matrix: $$\small{ \begin{array}{l|c|c} \textbf{Kind of manager} & \textbf{Probability} & \textbf{Probability of beating market} \\ \hline \text{Excellent} & {0.2} & {0.7}\\ \hline \text{Average} & {0.8} & {0.4}\\ \end{array}}$$ Let E be the event of an excellent manager, and A represent the event of an average manager. P(E) = 0.2 and P(A) = 0.8 Further, let O be the event of outperforming the market. We know that: P(O|E) = 0.7 and P(O|A) = 0.4 We want P(E|O): $$\begin{align*}P\left( E|O \right)&=\frac {P\left( O|E \right) \times P(E)}{ P\left( O|E \right) \times P(E) + P\left( O|A \right) \times P(A) } \\&=\frac {\left( 0.7^{ 3 }\right) \times 0.2}{ \left( 0.7^{ 3 }\right) \times 0.2 + \left( 0.4^{ 3 }\right) \times 0.8 }\\& = 57.26 \% \end{align*}$$ Note: The probabilities are raised to the power of three because the manager outperformed the market in three consecutive, independent years.
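The arithmetic in Question 3 is easy to verify programmatically; the short Python sketch below reproduces the Bayes' rule computation using only the numbers given in the question:

```python
# Bayes' rule check for Question 3: P(excellent | outperformed 3 years in a row)
p_E, p_A = 0.2, 0.8        # priors: excellent vs. average manager
p_O_E, p_O_A = 0.7, 0.4    # P(outperform in one year | manager type)

# Yearly performance is independent, so three wins in a row cubes each rate
like_E = p_O_E ** 3        # P(3 wins | excellent)
like_A = p_O_A ** 3        # P(3 wins | average)

posterior = (like_E * p_E) / (like_E * p_E + like_A * p_A)
print(round(posterior, 4))  # 0.5726
```

The same three lines, with the priors and per-year rates swapped for 4/25, 3/4, and 1/2, reproduce the 9/23 ≈ 39% posterior in the star-manager example above.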
{"url":"https://analystprep.com/study-notes/frm/part-1/quantitative-analysis/probabilities/","timestamp":"2024-11-09T10:41:07Z","content_type":"text/html","content_length":"175845","record_id":"<urn:uuid:11cd09a3-27aa-490f-a87c-c68c3c8bacff>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00700.warc.gz"}
Complex Math Equations | Articulate - Community Forum Discussion Hi! I'm creating an Advanced Functions course via Articulate Storyline. Is there a way that I can allow students to type complex mathematical equations in slides or maybe at least insert a part where they could draw the equation? • Hi RandomEgg! Glad to see the community has been helping you! I just wanted to share that we're tracking a feature request for support of mathematical equations in Storyline. I've included your voice in the request and will update this discussion if it makes it onto our Feature Roadmap. • Hey RandomEgg, I created a project similar to this years ago for a different employer, so I'm afraid I don't have a file to share; but what I did was create a sort of equation builder. All of the fancy symbols were available at the bottom of the page as drag and drop items, and then had to be assembled into place. I could imagine a situation where you are using text entry references to adjust values, if any are in play. I hope that helps you get on your way.
{"url":"https://community.articulate.com/discussions/discuss/complex-math-equations/1202979?topicRepliesSort=postTimeDesc&autoScroll=true","timestamp":"2024-11-04T15:33:06Z","content_type":"text/html","content_length":"316339","record_id":"<urn:uuid:7f95eb72-7806-4857-8c8a-017e80fceed9>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00042.warc.gz"}
IF auto generated number in Row matches anywhere in 2 sheet columns then checkbox checked I am trying to input a formula that automatically checks a checkbox when an auto generated number from a specific row in one sheet, is found anywhere in a column from 2 separate sheets. I've tried to make it work by referencing just one of the two sheets but it is behaving strangely. The formula I tried is =IF(MATCH([Lead ID]1, {Construction/Enhancements Estimate Workshe Range 1}, 0), 1, 0) If I drag the formula down the checkbox rows and populate the referenced sheet with matching Lead ID's I get a checked box on my first row and INVALID DATA TYPE on any others that should match. Thanks in advance for any help you're able to give. • The MATCH function actually produces a number based on where within a grid (range) the data is found. Your current formula is basically =IF(##, 1, 0) You don't really have a logical statement. What you would ideally have would be along the lines of =IF(## > 0, 1, 0) When we replace ## with your MATCH formula it is basically saying that if the data is found in any position within the grid (range), check the box. So here would be a minor adjustment to your current formula that should work. Let me know if it does. =IF(MATCH([Lead ID]1, {Construction/Enhancements Estimate Workshe Range 1}, 0) > 0, 1, 0) • Hi Paul, That worked! If it isn't a match it returns #NOMATCH rather than leaving the checkbox unchecked. I can live with that but is there a way to preserve the unchecked checkbox? Also, fairly new to formulas and wondering how I then add the other sheet to the MATCH range. Thank you so much. • =IF(MATCH([Lead ID]1, {Construction/Enhancements Estimate Workshe Range 1}, 0) > 0, 1, 0) To avoid the #NO MATCH error, we wrap the MATCH function in an IFERROR statement to get it to return the number 0. Since that is not greater than 0, it will leave the box unchecked. 
=IF(IFERROR(MATCH([Lead ID]1, {Construction/Enhancements Estimate Workshe Range 1}, 0), 0) > 0, 1, 0) To account for a second sheet, we would simply use an OR statement and duplicate the MATCH function with the exception of the range (we will wrap that one in an IFERROR as well). You will need to follow the appropriate cross sheet referencing steps for the second sheet range. =IF(OR(IFERROR(MATCH([Lead ID]1, {Construction/Enhancements Estimate Workshe Range 1}, 0), 0) > 0, IFERROR(MATCH([Lead ID]1, {Second Sheet Range 1}, 0), 0) > 0), 1, 0) How does this work out for you? • Paul, you're amazing. Thank you very much for such a thorough response. It works like a charm and I understand how it's all functioning. Is it just me or is this stuff a tonne of fun? Thanks again, • Happy to help! Is this stuff fun...? Give it time. Hahahaha • I have one more addition that I'm trying to make to this formula. I'm trying to also add- IF column = "Declined" and a date column has any date in it = checkbox checked. This is my current attempt. =IF(OR(IFERROR(MATCH([Lead ID]2, {Construction/Enhancements Estimate Workshe Range 1}, 0), 0) > 0, IFERROR(MATCH([Lead ID]2, {Maintenance Estimate Worksheet Range 1}, 0), 0) > 0), 1, IF(AND (IFERROR(MATCH("Declined", [Dave approval]2:[Wes Shelley approval]2, 0), 0), [Connected with on:]2 > 0), 1, 0)) Any help is much appreciated. • So you want the previous solution's criteria OR the word "Decline" in x-range OR a date in y-range? It could be any one of those with equal importance meaning it won't necessarily look in a specific order, it just needs to find any one of those things? • It would be the previous solutions criteria OR the word "Declined" in x range AND a date in y range. • Sorry. I just want to make sure we get this right... Previous Solution Date and "Declined" together • Perfect. Now for (hopefully) the last question... Date and Declined... Are they on the same sheet as the formula, or are they going to be a cross sheet reference? 
If x-sheet reference... Is there a way to uniquely identify the corresponding row (lead id/row id/etc)? • Date and Declined both exist on the same sheet as the formula on the row that the formula is also on. Thanks so much Paul. • Phew. This makes things a lot easier. Hahaha. Give this a whirl... =IF(OR(OR(IFERROR(MATCH([Lead ID]1, {Construction/Enhancements Estimate Workshe Range 1}, 0), 0) > 0, IFERROR(MATCH([Lead ID]1, {Second Sheet Range 1}, 0), 0) > 0), AND(ISDATE([Date Column]@row), [Other Column Name]@row = "Declined")), 1) Basically what we are doing is =IF(OR(Previous Solution's Criteria, AND(Date, "Declined")), check the box) • It seems I'm getting an invalid operation error. =IF(OR(IFERROR(MATCH([Lead ID]1, {Construction/Enhancements Estimate Workshe Range 1}, 0), 0) > 0, IFERROR(MATCH([Lead ID]1, {Maintenance Estimate Worksheet Range 1}, 0), 0) > 0, AND(ISDATE ([Connected with on:]@row), [Dave approval]1:[Wes Shelley approval]@row = "Declined")), 1) • I've tracked it down to the problem being [Dave approval]@row:[Wes Shelley approval]@row = "Declined" portion.
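As a side note for readers following along, the combined condition the thread is building can be mirrored in plain Python to sanity-check the logic (the lead IDs, column values, and sheet contents below are hypothetical):

```python
from datetime import date

def should_check(lead_id, construction_ids, maintenance_ids, approval, connected_on):
    """Mirror of the Smartsheet logic: check the box when the lead ID appears
    in either referenced sheet, OR the approval is "Declined" and a
    connected-on date has been entered."""
    found_in_refs = lead_id in construction_ids or lead_id in maintenance_ids
    declined_with_date = approval == "Declined" and connected_on is not None
    return found_in_refs or declined_with_date

# Hypothetical data standing in for the referenced sheet columns
construction = ["L-101", "L-205"]
maintenance = ["L-300"]

print(should_check("L-205", construction, maintenance, "Pending", None))              # True
print(should_check("L-999", construction, maintenance, "Declined", date(2020, 5, 1))) # True
print(should_check("L-999", construction, maintenance, "Declined", None))             # False
```

This is only a model of the intended behavior, not Smartsheet syntax; it is useful for checking edge cases before wiring up the cross-sheet references.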
{"url":"https://community.smartsheet.com/discussion/47616/if-auto-generated-number-in-row-matches-anywhere-in-2-sheet-columns-then-checkbox-checked","timestamp":"2024-11-09T04:23:39Z","content_type":"text/html","content_length":"443856","record_id":"<urn:uuid:104fe919-f2dd-4415-b3a0-7f610d891e7e>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00004.warc.gz"}
Export Reviews, Discussions, Author Feedback and Meta-Reviews Submitted by Assigned_Reviewer_17 Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http: The authors propose to formalize "the notion that the ranking function depends only on the object features, and not the order in which the documents are presented." This is a good idea, but the proposed notion of exchangeability is too strict in my opinion: we can capture the intended notion without the strict equality in eqn 1 and 2. We just want the order of the scores to be preserved, not their exact values. In terms of clarity, there are sections that are quite unclear, as pointed out below. You should define symmetric function in the proof of Thm 3.2. You should define p in Def 3.9. You probably don't mean to use p on both sides of the equality. In Thm 3.11, it's not clear what values \theta takes. In Thm 3.10, it's not clear what is meant by the integral over \theta, when later it says that \theta is a random variable. Q2: Please summarize your review in 1-2 sentences The idea is good, but this paper is not clear enough in its main result (Thm 3.11). After author feedback: I realize that 3.10 is not the main result: I meant 3.11. The same criticism remains unaddressed in your rebuttal: what do you mean by the integral over \theta, when you say elsewhere that \theta is a real-valued random variable. Submitted by Assigned_Reviewer_30 Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http: The paper considers settings where the goal is to learn a permutation-valued function, such as in subset ranking/information retrieval applications. 
Specifically, it focuses on settings where one learns a vector-valued function that assigns a real-valued score to each document and then sorts the objects according to these scores. E.g. if there are m objects to rank, represented as m feature vectors x_1, …, x_m, then one frequently learns a weight vector w and sorts the objects according to scores w.x_1, …, w.x_m. The paper refers to this as a "pointwise" approach, since the score assigned to x_i does not depend on the feature vectors for the remaining objects x_j, and advocates learning directly a function that collectively maps m feature vectors x_1,…,x_m to m scores. The primary contribution is in developing a mathematical characterization of classes of such "collective scoring" functions that satisfy a natural symmetry/exchangeability property, which simply says that if we exchange two feature vectors x_i and x_j, then the scores s_i and s_j assigned by the function to the corresponding objects should also be exchanged accordingly. The characterization involves tools from tensor analysis when there is a finite set of possible feature vectors and De Finetti-like theorems in the general case. The approach is interesting overall, but my main concern is that there is no clear evidence in the paper for why it is useful. In particular, the experiments could have compared methods that learn standard linear "pointwise" functions with methods that learn functions from exchangeable function classes as proposed (keeping other parameters, such as loss function minimized, constant); this would have shown clearly what sorts of benefits might be offered by learning functions in the proposed form of function classes. Instead, the paper contains experiments which start with a baseline (linear?) function learned by some standard method, incorporate this baseline function in the "exchangeable" function class, and then re-learn a scoring function from this class.
The experiments are "brute-force" in style (lots of data sets, baseline linear learning methods/loss functions), without much clear insight into what is being tested or what the precise benefits of the proposed approach are. Small comment: The title is too broad for this work (both "representation theory" and "ranking functions" have other meanings in different mathematical contexts); a better title would be something like "Symmetric (or exchangeable) permutation-valued functions with applications in …" Q2: Please summarize your review in 1-2 sentences The paper considers learning from more general classes of permutation-valued functions than what are generally used in current subset ranking/IR applications. Interesting approach, but needs further development and validation; in particular experiments need to be more clearly thought through. Submitted by Assigned_Reviewer_42 Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http: This paper considers the problem of learning rankings from features. They develop listwise ranking functions and show that under an assumption of exchangeability a nice formulation of the loss functions can be given. The authors then develop some representation theory for rank functions followed by examples of rank functions that satisfy the theoretical requirements. The authors close with some empirical results. I thought this paper was a very clever use of De Finetti's theorem. I really liked Theorem 3.2. The empirical results are reasonable. The basic idea in this paper is very good, and one complaint could be that once you realize the idea, the rest is obvious; but since no one had before, this makes the paper very good. Q2: Please summarize your review in 1-2 sentences This paper considers the problem of learning rankings from features.
They develop listwise ranking functions and show that under an assumption of exchangeability a nice formulation of the loss functions can be given. Submitted by Assigned_Reviewer_43 Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http: The paper derives a representation theorem for the class of "exchangeable" listwise scoring functions employed in learning to rank problems. The results show that for finite instance spaces, such functions are expressible as a particular combination of similarity functions over pairs of instances. For compact spaces, a similar result holds for the case of lists of size two; for general lists, the result holds under additional assumptions by appeal to de Finetti's representation theorem. The issue of designing listwise loss functions has been well studied, but the design of listwise scoring functions less so. As noted in the paper, some works (e.g. [4]) involve scoring each instance within the list with a single function, albeit one that is optimised in a listwise sense. The results in the paper are interesting, and while largely intuitive not a-priori obvious. The stated goal is to give a recipe with which to guide the design of listwise ranking functions. I think the observation that this can be done by designing similarity functions on pairs of instances, and suitably combining them, is a good first step towards this goal. All results rely on a symmetry assumption on the listwise ranking function. This seems reasonable, and minimally holds e.g. for the pointwise case. The results for the compact case in 3.2.2 additionally rely on the scoring function being an unnormalised probability density over the instances. It is less clear to me how reasonable this is. The requirement of nonnegativity of the scoring function could be handled by an appropriate link function (e.g. exp(.)
as done in the experiments). But this would require suitable interplay with the surrogate loss being optimised to ensure convexity, something that may deserve some comment. The experiments show that it is possible to post-process the outputs of a pointwise ranker (even one trained to optimise a listwise loss) using the representation theorem. The experiments are the weakest part of the paper, and likely more could be done to make a convincing case for the value of the recipe (e.g. try a wider class of pairwise functions, have a comparison of pairwise "as-is" versus pointwise to check the perils of overfitting, et cetera). But good performance in this regime is still interesting, as (to my knowledge) improving upon methods trained to optimise listwise losses is non-trivial. So I think they are indicative of the promise of the representation theorem in guiding the design of scoring functions. - The statement of Theorem C.1 apparently holds for m > 2 instances in the list. This seems to not agree with the text; perhaps the point is that existence of a multilinear symmetric decomposition would imply the result? If so the appeal to Theorem 3.8 in the proof seems misleading. It does seem like you could have such a corollary to 3.11, so perhaps that should be stated. - Even if a simple consequence of 3.10, I think a proof of Theorem 3.11 should be included in the appendix. The statement of the theorem alone leaves some questions: * There is an implicit "for every m" needed? In which case, f should actually be defined for every possible m in X^m? * In 3.10 the "g" function is (I presume) the marginal, derived appropriately from the joint p. Does a similar restriction exist here in terms of the function g? * The requirement of an infinite sequence in deFinetti must of course carry over to the theorem, but it is a little hard to understand what exactly it is requiring of the ranking function and/or the input space. 
The comment about the result requiring "infinitely many objects for which every leading finite subsequence is exchangeable" could be expanded. Isn't this implicit when one assumes that f is symmetric in its arguments, and hence exchangeable regardless of the provided inputs (which could be one of infinitely many possible values, since X is infinite)? * What does it mean for theta to be a "random distribution" in the prelude to the theorem? - The extension of 3.11 to the partially symmetric case seems non-obvious, and minimally requires a careful statement of the infinite sequence assumption, and the appropriate boundedness of the integral of f. I would recommend making this explicit. - I like that the paper builds up from the simple case of finite instance spaces to a more general theory. - Line 211, for three documents f_i : X^3 \to R, so f_i is a tensor? Or do you group the last two elements into one? - Remark II pg 5, tensor completion seems reasonable for finite spaces and suitably large training sets, but in practice it seems likely that one would face cold-start problems (no observations for a particular "row") due to limited coverage of the instance space. - Line 268, is the corresponding decomposition theorem referred to Theorem C.1? - It might be better to include Theorem C.1 in the body and defer 3.8 to the appendix if space is an issue. - Appendix F is unclear. What are r, s? How exactly is the surrogate loss used for optimisation? - The paper is generally well written. There were some small typos worth addressing: * Def 2.1, \to instead of \mapsto in definition of f_j. * Missing period line 169. * Period instead of comma line 186. * Theorem 3.8 proof, missing boldface in x, y. * Theorem 3.11, we need f : X^m \to R_+? * Fix capitalisation in ref [3]. - Other comments: * Why the use of braces in { x_{\i} }? * Include summation indices in Prop 3.5, 3.6, Thm 3.8? * Def 3.4, use \pi rather than \sigma for consistency? 
* Theorem C.1 proof should come after 3.8. * Notation clash in (14), S for training set and S_k for similarity function? Q2: Please summarize your review in 1-2 sentences The paper derives a representation theorem for the class of "exchangeable" listwise scoring functions employed in learning to rank problems. The theoretical results are interesting, and while largely intuitive not a-priori obvious. The experimental results, while far from conclusive, are indicative of the promise of the approach. Q1:Author rebuttal: Please respond to any concerns raised in the reviews. There are no constraints on how you want to argue your case, except for the fact that your text should be limited to a maximum of 6000 characters. Note however, that reviewers and area chairs are busy and may not read long vague rebuttals. It is in your own interest to be concise and to the point. The reviewers have widely varying opinions on our paper. While AR42 has given it a very positive review, AR17 and AR30 have given it significantly worse reviews. We believe this is because AR17 has a misunderstanding regarding which result is the principal novelty of the paper. AR30 raises a point regarding our experiments, but as we explain below, our experiments encapsulate the condition (s)he proposes. We thank the reviewers for their valuable suggestions, and believe that they recognize the novelty of our results modulo the above misunderstandings; we are thus very hopeful that they would increase their scores in light of our response below. AR17's main objection is that "the paper is not clear enough in its main result (Thm 3.10)". We would like to clarify that Theorem 3.10 is not our main result, but is De Finetti's theorem, a well-known result from the literature. AR17 also points out that details such as what values \theta takes in Theorem 3.10 are described only briefly. 
Since it was a description of the known result of De Finetti's theorem, it was necessarily terse due to space constraints, and we referred the reader to [2] for additional details. We thank the reviewer for their general suggestion to make the paper clearer, but feel their judgement that the paper be rejected on these grounds is a bit too harsh. Please also see the other two reviews for a summary of our contributions, which agree that the paper presents substantially novel and interesting theoretical developments. We are hopeful that the reviewer will modify their review in light of the above explanation.

AR30 believes our experiments are weak. We would like to clarify our insight into what is being tested:

* The goal of our experiments is to measure the improvement that can be obtained by reranking the output of *state-of-the-art* pointwise ranking procedures using our listwise ranking functions. Our baseline functions are not just linear, but are *state-of-the-art* pointwise functions, such as MART, RankBoost etc. (described in lines 365-367).

* The reviewer proposes that we compare linear pointwise functions with a method that directly learns exchangeable function classes. Our reranking experiments are very much in the spirit of what the reviewer proposes, except that our exchangeable ranking function is learnt via a two-stage procedure, first setting its "pointwise component" b() equal to the learnt pointwise ranking function (that is being re-ranked), and then fitting the pairwise terms. What the reviewer is proposing is to also learn the pointwise component from scratch. Incidentally, we do have experiments with this comparison, and will be glad to add these to the appendix. Here, our exchangeable ranking function again handily beats the pointwise ranking function.
But statistically this is not that interesting a comparison: note that our function class is strictly more general than pointwise ranking functions (it includes the latter, which can be seen by setting the pairwise term coefficients to zero); accordingly, statistically, it is not at all surprising for our approach to beat the linear pointwise ranking functions. Incidentally, an earlier version of this paper, submitted to another conference, had precisely the experiments suggested by the reviewer: the reviews there were unanimous that this was an extremely theoretically interesting paper, except that the experiments were not statistically surprising, and that it could have been accepted if it had had a re-ranking experiment (as in our current submission) instead! (We note that due to space constraints, we cannot include both these kinds of experiments.)

* Our experiments certainly do involve many datasets and base pointwise ranking functions, but this was with the aim of being exhaustive in showing that across such settings, our proposed reranking method shows improvements over the base pointwise functions.

In light of the above explanation, and the reviewer's own assessment that the paper presents interesting and important theoretical developments, we are sincerely hopeful they would modify their scores. We thank the reviewer for their kind comments.
Clique problem - (Combinatorial Optimization) - Vocab, Definition, Explanations | Fiveable

The clique problem involves finding a subset of vertices in a graph that form a complete subgraph, known as a clique, meaning that every two vertices in the subset are directly connected by an edge. This problem is significant in graph theory and is fundamental to the study of NP-completeness, as determining whether a clique of a specified size exists in a given graph is a classic example of an NP-complete problem.

5 Must Know Facts For Your Next Test

1. The clique problem is NP-complete, meaning that there is no known polynomial-time algorithm that can solve all instances of this problem efficiently.
2. The size of the largest clique in a graph can be determined using various algorithms, but they often require exponential time for larger graphs.
3. The clique problem can be applied in many real-world scenarios, such as social network analysis, where cliques represent groups of individuals who all know each other.
4. Finding all cliques in a graph is even more complex than just finding the largest one, making it an area of active research within combinatorial optimization.
5. There are several approaches to approximate solutions for the clique problem, including greedy algorithms and heuristic methods that aim to find near-optimal solutions more quickly.

Review Questions

• How does the clique problem illustrate the concept of NP-completeness, and what are its implications for solving similar problems?

The clique problem serves as a quintessential example of NP-completeness because it exemplifies problems for which no polynomial-time solution is known. It highlights the difficulty of finding solutions for many combinatorial problems, as they can be transformed into instances of the clique problem.
This relationship indicates that if an efficient solution were found for the clique problem, it would likely lead to efficient solutions for all problems classified as NP-complete.

• What are some common methods used to approach the clique problem, and how do they compare in terms of efficiency and accuracy?

Common methods for tackling the clique problem include exact algorithms like backtracking and branch-and-bound, which provide accurate solutions but can be computationally intensive. Approximation algorithms and heuristic methods offer faster performance by providing near-optimal solutions, though they may not guarantee finding the largest clique. The trade-off between accuracy and efficiency makes choosing an approach context-dependent, depending on whether exactness or speed is prioritized.

• Evaluate the significance of the clique problem in real-world applications and how understanding its complexity can benefit various fields.

The significance of the clique problem extends beyond theoretical computer science into practical applications in fields such as social network analysis, bioinformatics, and telecommunications. In these domains, cliques can represent tightly-knit groups or interactions among elements. Understanding its complexity helps researchers develop better algorithms tailored to specific situations, leading to more effective data analysis and decision-making processes. This comprehension aids industries in optimizing resources and understanding underlying structures within complex networks.
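To make the brute-force difficulty concrete, here is a minimal sketch (ours, not from the article) that tests vertex subsets for the clique property; the adjacency-set graph representation is an illustrative choice:

```python
from itertools import combinations

def is_clique(adj, nodes):
    """True if every pair of the given vertices is joined by an edge."""
    return all(b in adj[a] for a, b in combinations(nodes, 2))

def max_clique(adj):
    """Find a maximum clique by exhaustive search.

    Checking subsets from largest to smallest takes exponential time in
    the number of vertices, as expected for an NP-complete problem.
    """
    vertices = list(adj)
    for k in range(len(vertices), 0, -1):
        for subset in combinations(vertices, k):
            if is_clique(adj, subset):
                return list(subset)
    return []
```

On the 4-vertex graph {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2}, 4: {1}}, the search returns the triangle {1, 2, 3}.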
tcl::mathop is a Tcl 8.5 namespace that contains commands equivalent to the [expr] operators. These commands are all bytecode compiled. Changing or adding to these commands does not change the expr operators themselves.

See Also

additional discussion
a closely-related feature that exposes math functions, as opposed to math operators, as Tcl commands

AMG: The following table lists all commands (operators) in this namespace and defines their behavior by showing their expression equivalents. Parentheses are used to indicate associativity.

 Operation or test        Cmd   0 args   1 arg    2 args    3 or more arguments
 Bitwise negation         ~     err      ~a       err       err
 Logical negation         !     err      !a       err       err
 Arithmetic negation      -     err      -a
 Addition                 +     0        a        a+b       ((a+b)+c)+...
 Multiplication           *     1        a        a*b       ((a*b)*c)*...
 Shift left               <<    err      err      a<<b      err
 Exponentiation           **    1        a        a**b      a**(b**(c**...))
 Subtraction              -                       a-b       ((a-b)-c)-...
 Division                 /     err      1./a     a/b       ((a/b)/c)/...
 Remainder                %     err      err      a%b       err
 Arithmetic shift right   >>    err      err      a>>b      err
 Bitwise and              &     -1       a        a&b       ((a&b)&c)&...
 Bitwise inclusive or     |     0        a        a|b       ((a|b)|c)|...
 Bitwise exclusive or     ^     0        a        a^b       ((a^b)^c)^...
 Numeric equality         ==    1        1        a==b      ((a==b)&&(b==c))&&...
 String equality          eq    1        1        a eq b    ((a eq b)&&(b eq c))&&...
 Numeric inequality       !=    err      err      a!=b      err
 String inequality        ne    err      err      a ne b    err
 List membership          in    err      err      a in b    err
 List non-membership      ni    err      err      a ni b    err
 Strict increasing order  <     1        1        a<b       ((a<b)&&(b<c))&&...
 Increasing order         <=    1        1        a<=b      ((a<=b)&&(b<=c))&&...
 Strict decreasing order  >     1        1        a>b       ((a>b)&&(b>c))&&...
 Decreasing order         >=    1        1        a>=b      ((a>=b)&&(b>=c))&&...

The - command is something of a changeling. In its unary form, it negates its first (and only) argument. For higher arity, it negates every argument except its first and returns the sum. While this may seem counterintuitive, it does match the behavior of the - expr operator.

Exponentiation (**) is right-associative.

Short-circuit logic (&&, ||) isn't supported because it cannot be implemented in Tcl.
Tcl always eagerly determines the value of each argument before passing it to the command. For this, just keep using expr:

 % expr {1==1 && [puts hello] eq ""}
 % expr {1==0 && [puts hello] eq ""}
 % expr {1==0 || [puts hello] eq ""}
 % expr {1==1 || [puts hello] eq ""}

Shift right is arithmetic, not bitwise, meaning that the sign bit is unchanged by the operation. For two's-complement machines, arithmetic and bitwise shift left produce the same results.

A simple example, because I couldn't immediately work out the incantation to make mathops visible. Note that functions and operators come from different namespaces:

 namespace import ::tcl::mathop::*
 namespace import ::tcl::mathfunc::*
 puts [* [sqrt 49] [+ 1 2 3]]

 namespace path {tcl::mathop tcl::mathfunc}
 puts [* [sqrt 49] [+ 1 2 3]]

There are better examples of the use of mathop scattered through the wiki; somebody should add links to some of the ones less obvious to non-lispers!

AMG: You can also use [namespace path ::tcl::mathop] to add to the namespace search path.

RS 2008-09-17: Should you only want to import the * multiplication operator, escape it with backslash, to distinguish it from "*" meaning "all":

 % namespace import tcl::mathop::\\*
 % + 2 3
 invalid command name "+"
 % * 2 3
Sum of column when moving rows

When I move rows to a destination sheet, I need to have a running sum of one particular column as each row is added. I've tried doing a sum of children (but the moved row comes in at the wrong position) or just an auto sum at the top of the sheet, but when new rows are added, they don't get summed. Is there an easy way to accomplish this?

Best Answer

=SUM([Column 2]:[Column 2])

This will only work if you put the formula in a different column. In my example, the formula is in Column 1 and adds up all the values in Column 2.

Alternately, you could put the same formula in the sheet summary and you wouldn't have to add the new column.
Refining the Peak Oil Rosy Scenario Part 5: Preliminary nonlinear logistic modeling of the USA oil production curve

Historic Source Data

My primary source data for yearly oil production and consumption will be BP's "Statistical Review of World Energy 2010" ("Statistical Review"), which provides historical yearly production and consumption data for most countries from 1965 on in an EXCEL work file. One problem here for the USA is that 1965 is pretty "late" in the production curve; recall that Hubbert predicted a peak in production in 1965 and the actual peak occurred in 1970. Fortunately, the EIA's Table 5.1, entitled "Petroleum Overview," provides yearly production and consumption data back to 1949 for the USA. Moreover, the columns of data under the headings "Field Production Total" and "Petroleum Products Supplied" are in close agreement with the production and consumption data for the USA, as reported in BP's Statistical Review. I am not aware of any earlier historic data that is online.

I already have shown the USA's production data from Table 5.1 in Part 2 of this series, but here it is again in Figure 13, in a slightly different format. Since I am interested in determining yearly changes in the exponential rate constant "a," the production data, originally presented as millions of barrels per day, has been converted to billions of barrels per year.

A preliminary NLLS analysis of USA production data

As an initial exploration of using the NLLS process described in Part 4, I selected the entire data base of dQ/dt versus t (years) from 1949 to 2009. The red solid line in Figure 13 shows the best-fit curve obtained for the source data points from 1949 to 2009 (I used seed values of 250, 35 and 0.07, respectively, for Q∞, Qo and "a"). The best-fit values for the parameters in the nonlinear logistic equation are as follows:

Q∞ = 298 bbls; Qo = 53 bbls; "a" = 0.0511 yr^-1

Notice that Qo is large because it is an estimate of the oil accumulation through 1948.
Next, I repeated the NLLS fitting to progressively smaller data sets, in 10-year steps: 1949 to 1999, 1949 to 1989, 1949 to 1979, and 1949 to 1969 (all using the best fit from the full data set as the seed values). These results are shown in Figure 14, with different colored curves corresponding to the different data ranges as defined in the legend. Also included for reference are the source data and the best-fit curve using the full data set.

The best fits to the progressively smaller data sets are pretty similar to the best fit using the full data set until we get to the 1949-1969 range; I extended the predicted curve in the figure beyond the data range so that the trend was clear to see. The curve fit to the 1949-1969 data actually looks pretty good; after all, the NLLS procedure can do nothing other than estimate the best fit to the data presented to it. The estimated parameter values for the 1949-1969 range, however, depart wildly from the estimates made from the larger time ranges:

 Range   1949-2009   1949-99   1949-89   1949-79   1949-69
 a       0.0511      0.0562    0.0579    0.0656    0.0274
 Qo      53.19       45.51     43.35     36.70     80.04
 Q∞      298.30      275.13    267.67    233.44    1222033

The implications of this are clear: I cannot expect the nonlinear logistic equation to give reasonable predictions of Q∞ unless either:

1) The data on the growth side of the production curve is sufficiently scatter-free that the roll-over towards the plateau in production (dQ/dt) can be discerned by the NLLS procedure. That is clearly not the case for the USA production data in the 1949-1969 range. Other than taking moving averages, there is nothing I can do to smooth the data. However, just looking at the data in the 1949-1969 range suggests that a moving average would not likely have helped much here.

2) Enough data is included from the decline side of the production curve that the plateau in production can be discerned by the NLLS procedure.
For instance, for the 1949-1979 data set, including about ten years' worth of data past the actual year of maximum production (1970) gives predictions of Q∞, Qo and "a" that are in reasonably good agreement with the predictions made using the data set over a larger time span. Performing (2) would still allow the remainder of the decline curve (i.e., 1980-2010) to be analyzed using NLLS with Q∞ fixed, as described in the procedure developed at the end of Part 4.

There is a broader problem with this type of approach, however. I expect that there may be production curves for at least some countries for which the plateau in production has not yet occurred. For such data sets, the method outlined in (2) would not be available. So (2) is not a general solution that can be relied on. Perhaps there is a third option:

3) Estimate Q∞ and "a" on the growth side of the production curve using Hubbert's logistic linear equation and then apply NLLS to analyze the decline side.

Ahaaa...back to modeling....
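As a quick check on what fitted logistic parameters imply, the logistic form has a closed-form peak: dQ/dt is maximized when Q = Q∞/2, at time t* = ln((Q∞ − Qo)/Qo)/a, with peak rate aQ∞/4. Here is a minimal Python sketch (ours, not from the original post; the parameter values are the 1949-2009 best fit quoted above, with t measured in years since 1949):

```python
import math

def logistic_Q(t, a, Qinf, Q0):
    """Cumulative production Q(t) under the logistic model (t in years since 1949)."""
    A = (Qinf - Q0) / Q0
    return Qinf / (1.0 + A * math.exp(-a * t))

def production_rate(t, a, Qinf, Q0):
    """Yearly production dQ/dt = a * Q * (1 - Q/Qinf) implied by the model."""
    Q = logistic_Q(t, a, Qinf, Q0)
    return a * Q * (1.0 - Q / Qinf)

# Best-fit parameters for the full 1949-2009 USA data set (from the text above).
a, Qinf, Q0 = 0.0511, 298.0, 53.0

# The peak occurs when Q = Qinf/2:
t_peak = math.log((Qinf - Q0) / Q0) / a
peak_rate = a * Qinf / 4.0
print(1949 + t_peak, peak_rate)
```

Interestingly, this full-data fit places the model's peak near 1979 at roughly 3.8 billion barrels per year, rather than at the actual 1970 peak, consistent with the post's point that the simple logistic is only an approximation.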
In their book Empirical Model Building and Response Surfaces (John Wiley, 1987), Box and Draper described an experiment with three factors. The data in the following table are a variation of the original experiment from their book. Suppose that these data were collected in a semiconductor manufacturing process.

(a) The response y1 is the average of three readings on resistivity for a single wafer. Fit a quadratic model to this response.

(b) The response y2 is the standard deviation of the three resistivity measurements. Fit a linear model to this response.

(c) Where would you recommend that we set x1, x2, and x3 if the objective is to hold mean resistivity at 500 and minimize the standard deviation?
An Introduction to Statistical Machine Learning: The Crucial Role of Statistics in Modern Technology

Discover the powerful fusion of statistics and machine learning. Explore how statistical techniques underpin machine learning models, enabling data-driven decision-making.

Jun 30, 2023 · 11 min read

More than seventy years have passed since the renowned American mathematician Samuel S. Wilks famously stated, "Statistical thinking will one day be as necessary for efficient citizenship as the ability to read and write," paraphrasing H. G. Wells' book, Mankind in the Making. While this statement may have been somewhat exaggerated, its underlying message about the significance of statistics remains relevant in today's information age.

(Image Source: The American Statistical Association)

With the rapid progress of technology and unprecedented innovation, machine learning and Generative AI have taken center stage. These advancements have profoundly influenced our personal lives and facilitated data-driven decision-making on a larger scale. However, amidst all the buzz surrounding these cutting-edge technologies, statistics continue to play a crucial role. Statistical inference remains the bedrock of numerous technological breakthroughs, particularly within the realm of machine learning. It is inseparable from the very essence of data, which is the foundation for all the exciting new technologies surrounding us. Let's unmask the magic of statistical machine learning together!

What is Statistical Machine Learning?

As intuitive as it sounds from its name, statistical machine learning involves using statistical techniques to develop models that can learn from data and make predictions or decisions. You might have heard technical terms such as supervised, unsupervised, and semi-supervised learning; they all rely on a solid statistical foundation.
In essence, statistical machine learning merges the computational efficiency and adaptability of machine learning algorithms with statistical inference and modeling capabilities. By employing statistical methods, we can extract significant patterns, relationships, and insights from intricate datasets, thereby promoting the effectiveness of machine learning algorithms.

The Role of Statistics in Machine Learning

Statistics constitutes the backbone of machine learning, providing the tools and techniques to analyze and interpret data. Essentially, statistics provides the theoretical framework upon which machine learning algorithms are built.

Statistics is the science that allows us to collect, analyze, interpret, present, and organize data. It provides a robust set of tools for understanding patterns and trends, and for making inferences and predictions based on data. When we're dealing with large datasets, statistics helps us understand and summarize the data, allowing us to make sense of complex phenomena.

Machine learning, on the other hand, is a powerful tool that allows computers to learn from and make decisions or predictions based on data. The ultimate goal of machine learning is to create models that can adapt and improve over time, as well as generalize from specific examples to broader cases. This is where the beauty of the fusion between statistics and machine learning comes to light. The principles of statistics are the very pillars that uphold the structure of machine learning.

• Constructing machine learning models. Statistics provides the methodologies and principles for creating models in machine learning. For instance, the linear regression model leverages the statistical method of least squares to estimate the coefficients.

• Interpreting results. Statistical concepts allow us to interpret the results generated by machine learning models.
Measures such as p-value, confidence intervals, R-squared, and others provide us with a statistical perspective on the machine learning model's performance.

• Validating models. Statistical techniques are essential for validating and refining machine learning models. For instance, techniques like hypothesis testing, cross-validation, and bootstrapping help us quantify the performance of models and avoid problems like overfitting.

• Underpinning advanced techniques. Even some of the more complex machine learning algorithms, such as neural networks, have statistical principles at their core. The optimization techniques, like gradient descent, used to train these models are based on statistical theory.

As a result, a solid understanding of statistics not only allows us to better construct and validate machine learning models, but also enables us to interpret their outputs in a meaningful and useful way.

Let's take a look at some of the key statistical concepts that are tightly related to machine learning. You can learn more about these concepts in our Statistics Fundamentals with Python skill track.

Probability

Probability theory is of utmost importance in machine learning as it provides the foundation for modeling uncertainty and making probabilistic predictions. How could we quantify the likelihood of different outcomes, events, or simply numerical values? Probability helps with that! In addition, probability distributions are especially important in machine learning and make all the magic happen. Some commonly used distributions include Gaussian (Normal), Bernoulli, Poisson, and Exponential distributions. We have a handy probability cheat sheet to act as a quick reference for probability.

Descriptive Statistics

Descriptive statistics enable us to understand the characteristics and properties of datasets. They help us summarize and visualize data, identify patterns, detect outliers, and gain initial insights that inform subsequent modeling and analysis.
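As a small illustration (ours, not from the article; the sample values are made up), Python's standard statistics module covers the basic descriptive measures discussed in this section and the next:

```python
import statistics

sample = [4, 8, 15, 16, 23, 42]  # hypothetical measurements

mean = statistics.mean(sample)          # central tendency
median = statistics.median(sample)
variance = statistics.variance(sample)  # sample variance (spread)
stdev = statistics.stdev(sample)        # its square root

print(mean, median, variance)
```

Note that median and mean can differ noticeably on skewed data like this, which is exactly why both are worth reporting.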
Our descriptive statistics cheat sheet can help you learn these concepts.

Measure of Central Tendency

The mean, median, and mode provide valuable insights into the central or representative values of a dataset. In machine learning, they aid in data preprocessing by assisting with the imputation of missing values and identifying potential outliers. During feature engineering, they also come in handy in capturing the typical or most frequent values that impact model performance.

Variance and Standard Deviation

Variance and standard deviation quantify the spread or dispersion of data points around the central tendency. They serve as indicators of data consistency and variability in machine learning. These measures are useful for feature selection or dimensionality reduction, identifying features with limited predictive power. Additionally, they aid in assessing model performance by analyzing the variability of predictions or residuals, facilitating the evaluation and comparison of different algorithms.

Measure of Spread

Range, interquartile range, and percentiles are measures of spread that offer insights into the distribution of data values. They are particularly valuable in outlier detection, as they help identify and address outliers that can greatly influence model training and predictions. In cases where data needs to be transformed or normalized for better algorithm performance, these measures can provide guidance.

Sampling

Machine learning models are trained based on sampled data. If the samples are not carefully selected, the reliability of our models becomes uncertain. Ideally, we aim to choose representative subsets of data from larger populations. Employing proper sampling techniques also ensures that machine learning models are trained on diverse and unbiased data, promoting ethical and responsible use of technology. Check out our Sampling in Python course to learn more about this powerful skill.
Estimation

Estimation techniques are crucial in machine learning for determining unknown population parameters based on sample data. They allow us to estimate model parameters, evaluate model performance, and make predictions about unseen data. The most common estimation method used in machine learning is Maximum Likelihood (ML) estimation, which finds the estimator of an unknown parameter by maximizing the likelihood function.

Hypothesis Testing

Hypothesis testing provides a systematic approach to evaluating the significance of relationships or differences in machine learning tasks. It enables us to assess the validity of assumptions, compare models, and make statistically significant decisions based on the available evidence.

Cross-Validation

Cross-Validation (CV) is a statistical technique used in machine learning to assess the performance and generalization error of an algorithm. Its primary purpose is to prevent overfitting, a phenomenon where the model performs well on the training data but fails to generalize to unseen data. By dividing the dataset into multiple subsets and iteratively training and evaluating the model on different combinations, CV provides a more reliable estimate of the algorithm's performance on unseen data.

Popular Statistical Machine Learning Techniques

These complex statistical concepts are the first steps toward effective machine learning algorithms. Let's now explore some of the most popular machine learning models and see how statistics helped achieve their remarkable capabilities.

Linear Regression

Linear regression is a term commonly encountered in the statistical literature, but it is more than just that. It is also seen as a supervised learning algorithm that captures the connection between a dependent variable and independent variables. Statistics assist in estimating coefficients, conducting hypothesis tests, and evaluating the significance of the relationships, providing valuable insights and a deeper understanding of the data.
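As a from-scratch illustration of the least-squares estimation mentioned above (ours, not from the article), the one-variable case has a closed form: the slope is the covariance of x and y divided by the variance of x, and the fitted line passes through the means:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = b0 + b1 * x (closed-form solution)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b1 = sxy / sxx       # slope: Cov(x, y) / Var(x)
    b0 = my - b1 * mx    # intercept: line passes through (mean x, mean y)
    return b0, b1
```

On points lying exactly on y = 1 + 2x, the fit recovers b0 = 1 and b1 = 2; with noisy data it returns the coefficients minimizing the sum of squared residuals.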
Explore the topic in more depth with our essentials of linear regression in Python tutorial or our Introduction to Regression in R course.

Logistic Regression

Just like linear regression, logistic regression is a statistical classification algorithm that estimates the probability of categorical outcomes based on independent variables. By applying a logistic function, it predicts the occurrence of a particular class. We have a full explanation of the topic in our understanding logistic regression in Python tutorial.

(Logistic and linear regression)

Decision Trees

Decision trees are versatile algorithms that use statistics to split data based on features, creating a tree-like structure for classification or regression. They are intuitive, interpretable, and handle categorical and numerical data. Statistics-based measurements, such as Gini impurity or information gain, are often incorporated to guide the splits throughout the tree construction process. You can learn about decision tree classification in Python in a separate tutorial, or explore decision trees in machine learning using R.

Random Forest

Random Forest is an ensemble learning method that improves prediction accuracy by combining multiple decision trees. It employs sampling to randomly select subsets of features and data for building the trees. The predictions of these individual trees are then aggregated to make the final prediction. This algorithm is a powerful choice as it introduces diversity and reduces overfitting. The incorporation of diversity allows for a more robust and comprehensive model that captures a wide range of data patterns, and the reduction of overfitting ensures that the model generalizes well to unseen data, making it a reliable and accurate tool for predictive analytics. We have a separate tutorial on random forest classification, which covers how and when to use this technique in machine learning.
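The Gini impurity mentioned for decision trees has a one-line definition, 1 − Σ p_k², where p_k is the fraction of samples in class k; tree builders choose the split that lowers it most. A tiny sketch (ours, not from the article):

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a collection of class labels: 1 - sum(p_k^2)."""
    n = len(labels)
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())
```

A pure node such as [1, 1, 1] scores 0, while an evenly mixed two-class node such as [0, 0, 1, 1] scores 0.5, the worst case for two classes.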
(An example of random forest classification)

Support Vector Machines (SVM)

SVM is a powerful algorithm that can be used for classification and regression tasks. It uses statistical principles to create a boundary between different groups of data points, making it easier to tell them apart. By optimizing this boundary, SVM reduces the chances of making mistakes and improves overall accuracy. We have tutorials exploring support vector machines with scikit-learn in Python, as well as SVMs in R.

K-Nearest Neighbors (KNN)

KNN is a simple yet effective algorithm used for classifying data points based on the majority vote of their nearest neighbors. It is suitable for both classification and regression problems and does not require training. In KNN, statistical measures are utilized to determine the proximity between data points, helping to identify the nearest neighbors. The majority vote of the nearest neighbors is then used to classify or predict the target variable. Again, you can explore the concept of KNNs in more detail with our K-Nearest Neighbors Classification with scikit-learn tutorial.

Final Thoughts

As we navigate through the exciting era of advancing technology and data-driven decision-making, gaining a solid understanding of statistics becomes invaluable for enhancing our machine learning skills. By delving into the fundamentals of statistics, we unlock the gateway to unleashing the true potential of machine learning. Whether you're a newbie or a seasoned pro, kickstart your learning journey today with the Statistics Fundamentals with Python and Machine Learning Fundamentals with Python tracks to learn more about the fascinating field of statistical machine learning!

1. All of Statistics (A Concise Course in Statistical Inference) by Larry Wasserman
2. The Elements of Statistical Learning by Jerome H.
Friedman, Robert Tibshirani, and Trevor Hastie Supervised Machine Learning Discover what supervised machine learning is, how it compares to unsupervised machine learning and how some essential supervised machine learning algorithms work Data Science and Ecology The intersection of data science and ecology and the adoption of techniques such as machine learning in academic research. See MoreSee More
{"url":"https://www.datacamp.com/tutorial/unveiling-the-magic-of-statistical-machine-learning","timestamp":"2024-11-03T22:40:38Z","content_type":"text/html","content_length":"315781","record_id":"<urn:uuid:f9fedead-0b3e-4ce3-b5f2-1500d4300069>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00615.warc.gz"}
On how mathematics can improve your pizza experience - Paul's Pizza

September 5th, 2022

Read through! This is not a boring lecture. On the contrary, it may be the most valuable application of algebra and geometry ever to exist. Mathematics is supposed to help us understand the world and solve problems. And for you and us, the world revolves around pizza. After reading this enlightening article, you will be the wisest person among your friends and relatives. People will entrust you with the responsibility of placing every order with your pizza delivery Detroit. You will become a pizza deity, and the original house of pizza menu will have a pie named after you. What a way to transcend into the history of humankind! Grab your calculator and keep reading.

Math for ordering pizza

The first problem to solve comes at the time of ordering: how many pizzas should we get? The answer is in the pi within the pie. When your teacher told you that the area of a circle is found by squaring the radius and multiplying it by pi, you may have wondered what use that could have in your life. Today, we are giving meaning to your entire school time: this very formula helps you stand for the fact that you should always go large. Math tells us that a large pizza contains four times as much pie as a single medium one. This is twice as much as two mediums. Since a large never costs twice as much, it is definitely the most cost-effective choice. Math also tells us that we would need four New York style pizzas to match a Chicago deep dish pizza of the same diameter to have the same amount of food. You can pile up the thin pizzas to get an equal volume to the thick one. But if a large New York style pizza is twice as wide as a small Chicago pie, then they would both be the same amount of food.

Math for cutting pizza

One of the most favorite theorems in math, possibly because it involves this popular dish, is called the Pizza Theorem.
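The "always go large" arithmetic above is just the area formula at work. Assuming a 16-inch large and an 8-inch medium (example sizes, not from the article), a few lines of Python make the point:

```python
import math

def pizza_area(diameter):
    """Area of a round pizza from its diameter: pi * r^2."""
    return math.pi * (diameter / 2) ** 2

large = pizza_area(16)   # one 16-inch large
medium = pizza_area(8)   # one 8-inch medium
print(large / medium)    # doubling the diameter quadruples the area -> 4.0
```

So one large of twice the diameter holds as much pie as four mediums, which is why it only needs to cost less than four mediums (let alone two) to win.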
This theorem arose from the need to maximize the number of cut slices, a problem as relevant in math as it is in life. It is named after pizza because it mimics its traditional slicing technique. Its basic principle states that if a circular pizza is divided into multiple slices by making cuts at equal angles from an arbitrary point, the sums of the areas of alternate slices are equal. This means that if two people share a pizza sliced in this way by taking alternating slices, they both will get an equal amount of pizza. Next time, you could try cutting your pizza like this and see what happens:

The theorem shows that the sum of the white slices equals the sum of the gray ones. Furthermore, if the pizza has one or more toppings, each covering a circular region even if it is not concentric, and each cut crosses every region, then every person will receive equal shares of toppings and crust.

Math for eating pizza

We do not wish to start an argument regarding a proper way of eating pizza. But if you are one of those who use cutlery, it may be because you are not familiar with Carl Friedrich Gauss's Theorema Egregium. This theorem, which translates from the Latin meaning 'remarkable theorem', solves the problem of floppy slices dangling limply from your hand because of the excess of toppings. The Gaussian curvature is a very abstract concept which applies to very common things. For example, it links to the impossibility of accurately depicting a world map on paper because of a distortion that occurs when flattening a sphere. In the same way, flat things always retain a trace of their original flatness when you try to give them volume, like a sheet of paper which is unavoidably crinkled when you wrap a round object. A pizza slice on a plate is flat. In order to eat it without the toppings falling off, you must keep one direction of the slice in its original flat state.
Fold the pizza slice sideways, forcing it to become stiff in the direction that points towards your mouth. This natural curvature is also found in the pizza box. The wrinkles in corrugated cardboard, apart from providing aeration, keep the material thin and lightweight yet stiff enough to resist crushing.

Now that you have been educated in the most essential aspects of mathematics, visit Paul's Pizza or order our pizza delivery Detroit to put these theories into practice.
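The Pizza Theorem's alternating-slice claim is easy to sanity-check numerically. The sketch below is my own (not from the article): it makes 4 equiangular cuts through an arbitrary off-centre point of a unit disc, giving 8 slices, and uses Monte Carlo sampling to compare the two alternating groups.

```python
import math
import random

random.seed(1)
px, py = 0.3, 0.1   # arbitrary cut point inside the unit disc
cuts = 4            # 4 lines through the point -> 8 slices at 45-degree angles
even = odd = 0
for _ in range(200_000):
    # Sample a point uniformly in the unit disc by rejection
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            break
    # Which slice does the point fall in, measured around the cut point?
    angle = math.atan2(y - py, x - px) % (2 * math.pi)
    slice_index = int(angle // (math.pi / cuts))
    if slice_index % 2 == 0:
        even += 1
    else:
        odd += 1
print(even / (even + odd))   # should be close to 0.5
```

Even though the cut point is well off-centre, the alternating slices come out with essentially equal total area, which is exactly what the theorem promises.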
{"url":"https://www.paulspizza.net/on-how-mathematics-can-improve-your-pizza-experience/","timestamp":"2024-11-02T15:42:45Z","content_type":"text/html","content_length":"36008","record_id":"<urn:uuid:0ccf41e0-3f22-4530-a717-75664bd0ad1d>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00258.warc.gz"}
4 Types of Learning in Machine Learning Explained – TechTarget

Machine learning is a broad field that uses automated training techniques to discover better algorithms. The term was coined by Arthur Samuel at IBM in 1959. A subset of AI, machine learning encompasses many techniques. An example is deep learning, an approach which relies on artificial neural networks to learn. There are many other kinds of machine learning techniques commonly used in practice, including some that are used to train deep learning algorithms. Practitioners often choose from four main types of machine learning models based on their respective suitability to the way the data is prepared.

Choosing the right machine learning model type

Selecting the type of machine learning model is a mix of art and science. It's important to use an experimental and iterative process to determine the most valuable approach in terms of performance, accuracy, reliability and explainability. Each model type has its strengths and weaknesses. The right choice will depend on factors such as the provenance of your data and the class of algorithms suited to the problem you're looking to solve. Machine learning practitioners are likely to combine multiple machine learning types and various algorithms within those types to achieve the best results.

Data scientists, for example, might analyze a data set using unsupervised techniques to achieve a basic understanding of relationships within a data set — for example, how the sale of a product correlates with its position on a store's shelf. Once that relationship is confirmed, practitioners might use supervised techniques with labels that describe a product's shelf location. Semi-supervised techniques could automatically compute shelf location labels.
After the machine learning model is deployed, reinforcement learning could fine-tune the model's predictions based on actual sales. A deep understanding of the data is essential because it serves as a project's blueprint, said David Guarrera, EY America's generative AI leader. The performance of a new machine learning model depends on the nature of the data, the specific problem and what's required to solve it. Neural networks, for example, might be best for image recognition tasks, while decision trees could be more suitable for a different type of classification problem. "It's often about finding the right tool for the right job in the context of machine learning and about fitting to the budget and computational constraints of the project," Guarrera explained.

The four main types of machine learning and their most common algorithms.

Here's a deeper look at the four main types of machine learning models.

1. Supervised learning model

Supervised learning models work with data that has been previously labeled. The downside is that someone or some process needs to apply these labels. Applying labels after the fact requires a lot of time and effort. In some cases, these labels can be generated automatically as part of an automation process, such as capturing the location of products in a store. Classification and regression are the most common types of supervised learning algorithms.

• Classification algorithms decide the category of an entity, object or event as represented in the data. The simplest classification algorithms answer binary questions such as yes/no, sales/not-sales or cat/not-cat. More complicated algorithms lump things into multiple categories like cat, dog or mouse. Popular classification algorithms include decision trees, logistic regression, random forest and support vector machines.
• Regression algorithms identify relationships within multiple variables represented in a data set. This approach is useful when analyzing how a specific variable such as product sales correlates with changing variables like price, temperature, day of week or shelf location. Popular regression algorithms include linear regression, multivariate regression, decision tree and least absolute shrinkage and selection operator (lasso) regression.

Common use cases are classifying images of objects into categories, predicting sales trends, categorizing loan applications and applying predictive maintenance to estimate failure rates.

2. Unsupervised learning model

Unsupervised learning models automate the process of discerning patterns present within a data set. These patterns are particularly helpful in exploratory data analysis to determine the best way to frame a data science problem. Clustering and dimensional reduction are two common unsupervised learning algorithmic types.

• Clustering algorithms help group similar sets of data together based on various criteria. Practitioners can segment data into different groups to identify patterns within each group.
• Dimension reduction algorithms explore ways to compact multiple variables efficiently for a specific problem. These algorithms include approaches to feature selection and projection. Feature selection helps prioritize characteristics that are more relevant to a given question. Feature projection explores ways to find deeper relationships among multiple variables that can be quantified into new intermediate variables that are more appropriate for the problem at hand.

Common clustering and dimension reduction use cases include grouping inventory based on sales data, associating sales data with a product's store shelf location, categorizing customer personas and identifying features in images.

3. Semi-supervised learning model

Semi-supervised learning models characterize processes that use unsupervised learning algorithms to automatically generate labels for data that can be consumed by supervised techniques.
Several approaches can be used to apply these labels, including the following:

• Clustering techniques label data that looks similar to labels generated by humans.
• Self-supervised learning techniques train algorithms to solve a pretext task that correctly applies labels.
• Multi-instance techniques find ways to generate labels for a collection of examples with specific characteristics.

4. Reinforcement learning model

Reinforcement learning models are often used to improve models after they have been deployed. They can also be used in an interactive training process, such as teaching an algorithm to play games in response to feedback about individual moves or to determine wins and losses in a round of games like chess or Go. The core technique requires establishing a set of actions, parameters and end values that are tuned through trial and error. At each step, the algorithm makes a prediction, move or decision. The result is compared to results in a game or real-world scenario. A penalty or reward is sent back to refine the algorithm over time. The most common reinforcement learning algorithms use various neural networks. In self-driving applications, for example, an algorithm's training might be based on how it responds to data recorded from cars or synthetic data that represents what the car's sensors might see at night.

Machine learning model vs. machine learning algorithm

The terms machine learning model and machine learning algorithm are sometimes conflated to mean the same thing. But from a data science perspective, they're very different. Machine learning algorithms are used in training machine learning models. Machine learning algorithms are the brains of the models, explained Brian Steele, AI strategy consultant at Curate Partners. The algorithms contain code that's used to form predictions for the models. The data the algorithms are trained on often determines the types of outputs the models create.
The data acts as a source of information, or inputs, for the algorithm to learn from, so the models can create understandable and relevant outputs. Put another way, an algorithm is a set of procedures that describes how to do something, and a machine learning model is a mathematical representation of a real-world problem trained on machine learning algorithms, said Anantha Sekar, UK Geo presales at Tata Consultancy Services. "So, the machine learning model is a specific instance," he said, "while machine learning algorithms are a suite of procedures on how to train machine learning models." The algorithm shapes and influences what the model does. The model considers the what of the problem, while the algorithm provides the how for getting the model to perform as desired. Data is the third relevant entity because the algorithm uses the training data to train the machine learning model. In practice, therefore, a machine learning outcome depends on the model, the algorithms and the training data.

Additional popular types of machine learning algorithms

There are hundreds of types of machine learning algorithms, making it difficult to select the best approach for a given problem. Furthermore, one algorithm can sometimes be used to solve different types of problems such as classification and regression. "Algorithms are the underlying blueprints for constructing machine learning models," Guarrera said. These algorithms define the rules and techniques used to learn from the data. They contain not only the logic for pre-processing and preparing data, but also the trained and learned patterns that can be used to make predictions and decisions based on new data. As data scientists navigate the machine learning algorithm landscape to determine the most important areas to focus on, it's important to consider metrics that represent utility, breadth of applicability, efficiency and reliability, advised Michael Shehab, principal and labs technology and innovation leader at PwC.
He also emphasized an algorithm's ability to support a wide breadth of problems instead of just solving a single task. Some algorithms are more sample efficient and require less training data to arrive at a well-performing model, while others are more compute efficient at training and inference time and don't require the compute resources needed to operate them. "There is no singular best machine learning algorithm," Shehab said. "The right option for any company is one that has been carefully selected through rigid experimentation and evaluation to best meet the criteria defined by the problem."

Some of the more popular algorithms and the models they work with include the following:

• Artificial neural networks train a network of interconnected neurons, each of which runs a particular inference algorithm that translates inputs into outputs fed to nodes in subsequent layers of the network. Models: unsupervised, semi-supervised and reinforcement.
• Decision trees evaluate a data point through a set of tests on a variable to arrive at a result. They're commonly used for classification and regression. Model: supervised.
• K-means clustering automates the process of finding groups in a data set in which the number of groups is represented by the variable K. Once these groups are identified, it assigns each data point to one of these groups. Model: unsupervised.
• Linear regression finds a relationship between continuous variables. Model: supervised.
• Logistic regression estimates the probability of a data point being in one category by identifying the best formula for splitting events into two categories. It's commonly used for classification. Model: supervised.
• Naive Bayes uses Bayes' theorem to classify categories based on statistical probabilities showing the relationship of patterns between variables in the data set. Model: supervised.
• Nearest neighbors algorithms look at multiple data points around a given data point to determine its category. Model: supervised.
• Random forests organize an ensemble of separate algorithms to generate a decision tree that can be applied to classification problems. Model: supervised.
• Support vector machines use labeled data to train a model that assigns new data points to various categories. Model: supervised.

Best practices for training machine learning models

Data scientists will each develop their own approach to training machine learning models. Training generally starts with preparing the data, identifying the use case, selecting training algorithms and analyzing the results. Following is a set of best practices developed by Shehab for PwC:

• Start simple. Model training should begin with the simplest approach. Complexity can then be added in the form of model features, feature sophistication and advanced learning algorithms. The simpler model serves as a basis for determining if the performance obtained through the added complexity will be worth the additional investment in time and technical costs.
• Create a consistent model development process. Given its highly iterative nature, a consistent development process should be supported with tools that provide comprehensive experiment tracking so data scientists can more readily pinpoint where their models can be improved.
• Identify the right problem to solve. Look for improperly defined objectives, wrong areas of focus and unrealistic expectations, all of which are often responsible for a model's poor performance or failure to produce tangible value. Building a model requires solid grounding to properly assess its development.
• Understand the historical data. The model is only as good as the data it will be trained on, so start with a firm understanding of how that data behaves, the overall quality and completeness of the data, important trends or elements of the data set related to the task at hand and any biases that may be present.
• Ensure accuracy. To avoid introducing bias, providing the model with inappropriate feedback or reinforcing the wrong behavior, carefully set measurable benchmarks for model performance. A machine learning algorithm learns through feedback from an objective or outcome set in the training data. If the calculations that generate the feedback aren't carefully defined and aligned to the expected values, the result could be a poor or non-functioning model.
• Focus on explainability. Data scientists who focus on why a model performs the way it does will produce better models. This approach requires more comprehensive model validation and testing. Explainability also provides insights into a model's underperformance, hypotheses of how to enhance performance and a global view of how a model functions to help develop trust among consumers.
• Continue training. Model training is an ongoing process over the life of the model, including the production stage, so it can be continuously improved.

Is there a best machine learning model?

In general, there is no one best machine learning model. "Different models work best for each problem or use case," Sekar said. Insights derived from experimenting with the data, he added, may lead to a different model. The patterns of data can also change over time. A model that works well in development might have to be replaced with a different model. A specific model can be regarded as the best only for a specific use case or data set at a certain point in time, Sekar said. The use case can add more nuance. Some uses, for example, may require high accuracy while others demand higher confidence. It's also important to consider environmental constraints in model deployment, such as memory, power and performance requirements. Other use cases may have explainability requirements that could drive decisions toward a different type of model.
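The "no single best model" point is easy to demonstrate with the experimental, hold-out-based selection the article describes. This self-contained sketch (my own toy example with made-up 1-D data and illustrative model names, not code from the article) scores two very different classifiers on the same held-out test set and picks whichever does better:

```python
import random

random.seed(42)
# Made-up 1-D data: class 0 clustered near 0.0, class 1 near 1.0
data = [(random.gauss(0, 0.3), 0) for _ in range(60)] + \
       [(random.gauss(1, 0.3), 1) for _ in range(60)]
random.shuffle(data)
train, test = data[:80], data[80:]

def threshold_clf(x):
    # "Model A": a fixed decision threshold at the midpoint
    return int(x > 0.5)

def one_nn_clf(x):
    # "Model B": 1-nearest-neighbour lookup against the training set
    return min(train, key=lambda t: abs(t[0] - x))[1]

def accuracy(clf):
    return sum(clf(x) == y for x, y in test) / len(test)

scores = {"threshold": accuracy(threshold_clf), "1-NN": accuracy(one_nn_clf)}
best = max(scores, key=scores.get)
print(scores, "-> picked:", best)
```

Which model wins depends entirely on the data that happens to be drawn, which is the point: the choice comes from measurement on held-out data, not from the algorithm's reputation.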
Data scientists also need to consider the operational aspects of models after deployment when prioritizing one type of model over another. These considerations may include how the raw data is transformed for processing, fine-tuning processes, prompt engineering and the need to mitigate AI hallucinations. "Choosing the best model for a given situation," Sekar advised, "is a complex task with many business and technical aspects to be considered."

George Lawton is a journalist based in London. Over the last 30 years, he has written more than 3,000 stories about computers, communications, knowledge management, business, health and other areas that interest him.
{"url":"https://aivisionplace.com/4-types-of-learning-in-machine-learning-explained-techtarget/","timestamp":"2024-11-14T00:55:55Z","content_type":"text/html","content_length":"175051","record_id":"<urn:uuid:0a49bf36-0c2e-4674-abef-83b36082148d>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00289.warc.gz"}
“Search Insert Position” Problem

Problem: Search Insert Position
O(log n) | Binary Search

Given a sorted array of distinct integers and a target value, return the index if the target is found. If not, return the index where it would be if it were inserted in order.

Output: 2
Output: 1

The solution to the problem is Binary Search, which is given as a hint with the problem statement. If you don't know about Binary Search, then check this article first. But there is a little variation to make in Binary Search for this problem. When the target value is not found, check whether the latest mid value of the array is smaller than the target value. If it is smaller, the target belongs after that index. And if the latest mid value is not smaller, simply return the mid index, as the target value will be placed at that index.

Code:
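The original code block did not survive extraction, so here is a sketch of the binary-search variation described above (my own implementation of the stated approach):

```python
def search_insert(nums, target):
    """Return the index of target in sorted nums, or the index where
    it would be inserted to keep nums sorted."""
    lo, hi = 0, len(nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if nums[mid] == target:
            return mid
        if nums[mid] < target:
            lo = mid + 1   # target belongs after the latest mid
        else:
            hi = mid - 1   # target belongs at or before the latest mid
    # Not found: lo has converged on the insert position either way
    return lo

print(search_insert([1, 3, 5, 6], 5))   # → 2
print(search_insert([1, 3, 5, 6], 2))   # → 1
```

The two printed results match the article's example outputs of 2 and 1.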
{"url":"https://abdullahafzal-11779.medium.com/search-insert-position-problem-aed06be4b979?source=user_profile_page---------4-------------e320f618ddba---------------","timestamp":"2024-11-12T01:46:17Z","content_type":"text/html","content_length":"100818","record_id":"<urn:uuid:ad3ae4c7-8387-43c8-8874-07612d76ad3e>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00544.warc.gz"}
Level Order Traversal in a Binary Tree | DigitalOcean Level Order Traversal is one of the methods for traversing across a Binary Tree. In this article, we shall look at how we can implement this algorithm in C/C++. But before that, let us have our concepts covered. A Binary Tree is a data structure where every node has at-most two children. The topmost node is called the Root node. Binary Tree There are 4 common ways of traversing the nodes of a Binary Tree, namely: • In order Traversal • Pre Order Traversal • Post Order Traversal • Level Order Traversal Let’s understand what a level in a Binary Tree means. A level is the number of parent nodes corresponding to a given a node of the tree. It is basically the number of ancestors from that node until the root node. So, for the root node (topmost node), it’s level is 0, since it has no parents. If it has children, both of them will have a level of 1, since it has only one ancestor until the root node, which is the root node itself. Binary Tree Level We also need to understand the notion of height in a Binary Tree. This is simply the length of the path from the root to the deepest node in the tree. In this case, the height will be the length from the deepest node (40 or 50, since they have the maximum level) to the root. So the height of the tree is 2. Now that we have our concepts covered, let’s understand how we can implement Level Order Traversal. A Level Order Traversal is a traversal which always traverses based on the level of the tree. So, this traversal first traverses the nodes corresponding to Level 0, and then Level 1, and so on, from the root node. In the example Binary Tree above, the level order traversal will be: (Root) 10 -> 20 -> 30 -> 40 -> 50 To do this, we need to do 2 things. 1. We must first find the height of the tree 2. We need to find a way to print the nodes corresponding to every level. We will find the height of the tree first. To do this, the logic is simple. 
Since the height of the tree is defined as the largest path from the root to a leaf, we can recursively compute the heights of the left and right sub-trees and take the larger of the two. The height of the tree is then simply the height of the taller sub-tree + 1.

C-style Pseudo Code:

// Find height of a tree, defined by the root node
int tree_height(Node* root) {
    if (root == NULL)
        return 0;
    else {
        // Find the height of left, right subtrees
        left_height = tree_height(root->left);
        right_height = tree_height(root->right);

        // Find max(subtree_height) + 1 to get the height of the tree
        return max(left_height, right_height) + 1;
    }
}

Now that we have the height, we must print nodes for every level. To do this, we will use a for loop to iterate all levels until the height, and print nodes at every level.

void print_tree_level_order(Node* root) {
    int height = tree_height(root);
    for (int i = 0; i < height; i++) {
        // Print the ith level
        print_level(root, i);
    }
}

Observe that we need another function to print the ith level of the tree. Here again, we have a similar logic. But this time, after printing the root node, we change the root node to its left and right children and print both sub-trees. This will continue until we reach a leaf node, that is, when the auxiliary root will be NULL at the next step (since leaf_node->left = NULL and leaf_node->right = NULL).

void print_level(Node* root, int level_no) {
    // Prints the nodes in the tree
    // having a level = level_no
    // We have an auxiliary root node
    // for printing the root of every
    // sub-tree
    if (!root)
        return;
    if (level_no == 0) {
        // We are at the top of a sub-tree
        // So print the auxiliary root node
        printf("%d -> ", root->value);
    }
    else {
        // Make the auxiliary root node to
        // be the left and right nodes for
        // the sub-trees and decrease level by 1, since
        // you are moving from top to bottom
        print_level(root->left, level_no - 1);
        print_level(root->right, level_no - 1);
    }
}

Now, we have finally completed the Level Order Traversal!
I will provide the complete program below, which also has a section to construct the Binary Tree using insertion. While this is originally a C program, the same can be compiled on C++ as well.

/*
    Code for https://journaldev.com
    File Name: level_order.c
    Purpose: Find the Level Order Traversal of a Binary Tree
    @author Vijay Ramachandran
    @date 28/01/2020
*/

#include <stdio.h>
#include <stdlib.h>

typedef struct Node Node;

// Define the Tree Node here
struct Node {
    int value;
    // Pointers to the left and right children
    Node* left, *right;
};

Node* init_tree(int data) {
    // Creates the tree and returns the
    // root node
    Node* root = (Node*) malloc(sizeof(Node));
    root->left = root->right = NULL;
    root->value = data;
    return root;
}

Node* create_node(int data) {
    // Creates a new node
    Node* node = (Node*) malloc(sizeof(Node));
    node->value = data;
    node->left = node->right = NULL;
    return node;
}

void free_tree(Node* root) {
    // Deallocates memory corresponding
    // to every node in the tree.
    Node* temp = root;
    if (!temp)
        return;
    if (!temp->left && !temp->right) {
        free(temp);
        return;
    }
    free_tree(temp->left);
    free_tree(temp->right);
    free(temp);
}

int tree_height(Node* root) {
    // Get the height of the tree
    if (!root)
        return 0;
    else {
        // Find the height of both subtrees
        // and use the larger one
        int left_height = tree_height(root->left);
        int right_height = tree_height(root->right);
        if (left_height >= right_height)
            return left_height + 1;
        return right_height + 1;
    }
}

void print_level(Node* root, int level_no) {
    // Prints the nodes in the tree
    // having a level = level_no
    // We have an auxiliary root node
    // for printing the root of every
    // subtree
    if (!root)
        return;
    if (level_no == 0) {
        // We are at the top of a subtree
        // So print the auxiliary root node
        printf("%d -> ", root->value);
    }
    else {
        // Make the auxiliary root node to
        // be the left and right nodes for
        // the subtrees and decrease level by 1, since
        // you are moving from top to bottom
        print_level(root->left, level_no - 1);
        print_level(root->right, level_no - 1);
    }
}

void print_tree_level_order(Node* root) {
    if (!root)
        return;
    int height = tree_height(root);
    for (int i = 0; i < height; i++) {
        printf("Level %d: ", i);
        print_level(root, i);
        printf("\n");
    }
    printf("\n-----Complete Level Order Traversal:-----\n");
    for (int i = 0; i < height; i++) {
        print_level(root, i);
    }
    printf("\n");
}

int main() {
    // Program to demonstrate Level Order Traversal
    // Create the root node having a value of 10
    Node* root = init_tree(10);
    // Insert nodes onto the tree
    root->left = create_node(20);
    root->right = create_node(30);
    root->left->left = create_node(40);
    root->left->right = create_node(50);
    // Level Order Traversal
    print_tree_level_order(root);
    // Free the tree!
    free_tree(root);
    return 0;
}

Level 0: 10 ->
Level 1: 20 -> 30 ->
Level 2: 40 -> 50 ->

-----Complete Level Order Traversal:-----
10 -> 20 -> 30 -> 40 -> 50 ->

You can also download this through a Github gist that I created for this purpose. (Contains code for insertion as well)

Hopefully you have a better understanding of how Level Order Traversal can be implemented in C/C++. If you have any questions, feel free to ask them in the comments section below!
Numbers (Values) | Intro to Computer Science (ICS3U/C)

Numbers (Values)

Math is the basis of computer science. We looked at strings first, but that's only because we're used to typing words and sentences. Also, we learned about output. What about numeric variables?

Dynamically-typed (or loosely-typed) languages like Javascript take the guesswork out of memory management. These languages treat a number as just a value, with no specific numeric type, so we don't need to worry about declaring a double vs. an integer. Statically-typed (or strongly-typed) languages like C++ do not have this active memory management. It is up to the programmer to know and declare the exact type of variable (memory space) required. This is because a number without decimals (integer) takes less memory than a number with decimals (double or float).

Any sort of mathematics can be done in most programming languages. It is up to the programmer to learn the built-in mathematics as well as to create any new required computations. Here are the basic mathematical commands available. Assume x has a value of 19. (let x = 19;)

Code   Description
x + 3  Add. Returns 22
x - 4  Subtract. Returns 15
x * 2  Multiply. Returns 38
x / 4  Divide. Returns 4.75
x % 4  Modulo. Returns the remainder after a division. In this case, returns 3, since 19 = 4 * 4 + 3. If there is no remainder (e.g. 10 % 5), modulo returns 0.

Computers and programming languages do their best to follow proper order of operations for mathematics. That being said, the interpreter or compiler can only interpret your code as best as possible. For this reason, it's important to utilize brackets (or parentheses) ( ) properly! For example, 1 + 8 / 2 is quite different from (1 + 8) / 2.

Assignment Operators

Assigning a numeric value to a variable is straight-forward. Any math that is done must also be stored, either in a new variable or back into the current one. The list below is an incomplete list of the ways you can assign a value to memory.
Code                  Description
x = 3                 Assign the value 3 to variable x.
++x or x++            Increment the value of x by one before or after the current value is utilized.
--x or x--            Decrement the value of x by one before or after the current value is utilized.
x = x + 2, x = x - 2,
x = x * 2, x = x / 2  Add, subtract, multiply, or divide some value, in this case 2, with x. This assigns x the new value.
x += 2, x -= 2,
x *= 2, x /= 2        These also add, subtract, multiply, or divide any value with x. They are a shortcut for the lines above.

Left-Hand Assignment

There is a standard in programming that the variable or item being used to contain data is on the left of an operator, and the mathematics or item(s) being assigned to that variable is on the right.

Important: x = 5 will assign 5 to the variable x, while 5 = x will throw an exception because it is not possible to store x into the value 5. This becomes important when comparing two items (see below). You cannot use a single = to compare; it assigns.

Comparison Operators

In order to make decisions, we must be able to compare values. Comparisons typically return true or false. Below is an incomplete list of the ways you can compare two or more values.

Code      Description                     Notes
==        Equal in value                  5 == "5" is true
===       Equal in value and type         5 === "5" is false, since they are not equal in type
!=        Not equal in value              5 != "5" is false, since they are equal in value
!==       Not equal in value or type      5 !== "5" is true, since they are not equal in type
< or >    Less than or greater than       Read from left-to-right
<= or >=  Less than or equal to,
          greater than or equal to        Read from left-to-right
&&        Logical operator and            true && false is false
||        Logical operator or             true || false is true
!         Logical operator not            !true is false; !(5 < 1) is true

Greater / Less Than

It is easy to confuse the terminology or logic with the operators <, >, <=, and >=. You should read them from left-to-right.
Here are some examples:

4 < 10 is read "four is less than ten"
100 > 6 is read "one hundred is greater than six"
someVariable >= 0 is read "some variable is greater than or equal to 0"
x <= y is read "x is less than or equal to y"

When a variable is created in memory without a value, it has a name, but the variable itself is undefined.

let x = 3; // Has the value 3
let y;     // Has no value; it is undefined
console.log(x + "\n" + y)
/* Output:
3
undefined
*/

If a variable is defined but its value cannot be represented as a number, Javascript returns NaN, which represents "Not a Number". You can check whether a value is not a number:

let myNumber = 3;
let myString = "Hello";
isNaN(myNumber); // false
isNaN(myString); // true
/* Remember, to print those results, use console.log() and to store them, you need a variable:
let result = isNaN(myString); */
Methodology of the Residential Property Price Index (RPPI)

Archived Content: Information identified as archived is provided for reference, research or recordkeeping purposes. It is not subject to the Government of Canada Web Standards and has not been altered or updated since it was archived.

Release date: November 14, 2019

Following the global financial crisis in 2008, the G-20 identified real estate price indices as an important financial soundness indicator. Linked to these efforts, residential property price indices form a core set of data necessary for financial stability analysis under a new tier of the IMF's Special Data Dissemination Standard, known as SDDS Plus. In order to meet these new data requirements, and to improve the relevance of housing price statistics, the 2016 Federal Budget mandated Statistics Canada to develop an official Residential Property Price Index (RPPI). The RPPI is a quarterly index covering prices for new and resale residential housing in the Calgary, Montreal, Ottawa, Toronto, Vancouver, and Victoria census metropolitan areas (CMA), as well as a 6 CMA composite, starting in quarter 1 2017. The index is a composite of three separate indices produced at Statistics Canada. New housing is covered by the New Housing Price Index (NHPI) and the New Condominium Apartment Price Index (NCAPI), and resale housing is covered by the Resale Residential Property Price Index (RRPPI). These three indices are aggregated together to form the RPPI. This document outlines the methodological details behind the NHPI, the NCAPI, and the RRPPI, as well as how these three indices are aggregated to form the RPPI. Sections 1 and 2 cover the NHPI and NCAPI, which are standard survey-based price indices; Section 3 outlines the RRPPI, which uses a more complex repeat-sales methodology.
As housing is a fairly heterogeneous good, constructing a price index with a constant-quality interpretation is an important methodological consideration for all three indices. Section 4 details how the NHPI, NCAPI, and RRPPI are aggregated to form the RPPI.

1 The New Housing Price Index (NHPI)

The NHPI measures the change over time in builders' selling prices of newly built houses (single/semi-detached and row) in 27 CMAs. For the purpose of constructing the RPPI, the NHPI covers new housing in Calgary, Montreal, Ottawa, Toronto, Vancouver, and Victoria. The NHPI is a monthly index, starting in January 1981. To produce a constant-quality price index, the NHPI uses a matched-model approach—wherein prices for the same house models are compared over time—along with explicit quality adjustments. Data are collected monthly from builders as part of a survey using an electronic questionnaire. Unlike the NCAPI and the RRPPI, the NHPI is its own index series that is distinct from the RPPI. The RPPI simply uses the city-level NHPI values to capture price changes for new housing. Consequently, this section focuses on the methodological details of the NHPI as they pertain to the RPPI.

1.1 Concepts and definitions

Table 1.1 defines key concepts used for constructing the NHPI, at least for its use in the RPPI.

Table 1.1 Concepts and definitions for the NHPI

Price: Either the transaction price or the list price for a model of a house as reported by the builder in a given month, exclusive of any sales tax. This is the price received by the builder, and excludes any additional fees paid by the buyer.
Model: The particular floor plan and features of a house.
Sample: See section 1.2.
Target: All new residential houses (single/semi-detached and row) available for sale or sold in Calgary, Montreal, Ottawa, Toronto, Vancouver, and Victoria in a given month.
Index base: The period for which the index equals 100. The base period for the NHPI is December 2016 = 100.

1.2 Data

Data for the NHPI are collected from a survey of home builders. The sampling frame for the NHPI is Statistics Canada's Building Permits Survey, and the survey uses a multi-stage sample design in which representative models are selected into the sample at each stage. The first stage of sampling involves contacting the top 15% of developers within a CMA, based on the value of their building permits, to determine if they are in scope for the survey. This helps ensure that large tract builders that develop an entire subdivision are included in the sample. Once a builder is identified as in scope, they select the development they are building with the most lots available for sale within a CMA, and up to three of the top selling house models in this development. This helps to ensure that the same models can be followed over time within the same development, and that these models are broadly representative of market activity for new housing. An electronic questionnaire is used to collect price information for these models each month. If a model does not sell in a particular month, the builder is asked for a list price. The sample is periodically refreshed as developments sell out, and builders enter and exit the market. The data collected from developers are manually reviewed for consistency and completeness, and certain records may be edited or removed based on judgement.

1.3 Index calculation

The NHPI is a fairly straightforward matched-model index. Prices are stratified by CMA, builder, and model to produce a price relative for each model that each builder reports in the survey.
The value of any promotions or upgrades is subtracted from the price of a model prior to calculating a price relative. Provided that house models do not change over time, this collection of price relatives has a constant-quality interpretation. The price relatives for each model are then aggregated to the CMA level using a Jevons index. Although the NHPI is calculated monthly, the three index values within a quarter are averaged to produce a quarterly index for the RPPI. To make the index calculation explicit, let $p_{mbt}$ be the price of model $m$ by builder $b$ at time $t$.
These model prices are used to calculate a price relative between period $t-1$ and period $t$, $p_{mbt} / p_{mb,t-1}$, for each model that each builder reports in the survey.
To produce a CMA-level index, the price relatives for all models by all builders are aggregated with a Jevons index

$$I_t^{t-1} = \prod_{b=1}^{B_t} \prod_{m=1}^{M_{bt}} \left( \frac{p_{mbt}}{p_{mb,t-1}} \right)^{1 / \sum_{b=1}^{B_t} M_{bt}},$$

where $M_{bt}$ is the number of models produced by builder $b$, and $B_t$ is the number of builders. This index is then chained with the previous period's index value $I_{t-1}$ to produce an index $I_t = I_t^{t-1} \cdot I_{t-1}$ running from the base period to period $t$. Finally, the quarterly CMA-level index is simply the average of the three index values within that quarter.
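The aggregation just described (month-over-month price relatives combined into a Jevons index, chained to the base period, then averaged within the quarter) can be sketched in a few lines of Python. This is only an illustrative sketch with invented price relatives; the function and variable names are mine, not Statistics Canada's production code.

```python
from math import prod

def jevons(relatives):
    """Unweighted geometric mean of a list of price relatives."""
    return prod(relatives) ** (1.0 / len(relatives))

# Hypothetical month-over-month relatives p_mbt / p_mb,t-1 for every
# (builder, model) pair in one CMA, for three consecutive months.
monthly_relatives = [
    [1.010, 0.995, 1.020, 1.000],  # month q
    [1.005, 1.015, 0.990, 1.010],  # month q+1
    [1.000, 1.008, 1.012, 0.998],  # month q+2
]

# Chain each monthly Jevons index onto the running index level,
# starting from a base value of 100 (I_t = I_t^{t-1} * I_{t-1}).
index_level = 100.0
monthly_index = []
for relatives in monthly_relatives:
    index_level *= jevons(relatives)
    monthly_index.append(index_level)

# The quarterly index is the average of the three monthly values.
quarterly_index = sum(monthly_index) / 3
print([round(v, 2) for v in monthly_index], round(quarterly_index, 2))
```

In production, the relatives would of course come from the survey microdata described in section 1.2 rather than a hard-coded list.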
For the quarter starting in month $q$, the index is

$$I_q = \frac{1}{3} \sum_{t=q}^{q+2} I_t.$$

The resulting collection of quarterly indices at the CMA level captures the new house side of the RPPI.

1.3.1 Model replacement

When a house model is no longer for sale, or no longer representative, and is replaced by another model in the sample, a back price for the replacement model is imputed in the first period that it appears in the sample. This allows a new model to be used in the matched-model index calculation immediately. The imputation is done with a linear regression (hedonic) model that relates house prices to observed characteristics (see de Haan and Diewert (2013, chapter 5) for more details). A separate model is calculated for each of the six cities. No imputation is made when a new model is added to the sample without replacing an old model, nor when a new builder is added to the sample.
Letting $p_{mbt}$ be the price of model $m$ by builder $b$ in period $t$, the regression model is based on a structural model for house prices

$$\log(p_{mbt}) = \alpha + x_{mbt}\beta + z_{mbt}\gamma + d_b + d_t + \log(\epsilon_{mbt}),$$

where $x_{mbt}$ is a (row) vector of model characteristics, $z_{mbt}$ is a vector of location characteristics, $d_b$ and $d_t$ are builder- and time-specific intercepts, respectively, and $\epsilon_{mbt}$ is an error term. Housing characteristics include the log of lot size and house size (square footage), and dummies for the number of garages, number of bathrooms, and number of bedrooms. Location characteristics include dummies for the property's forward sortation area (first three digits of the postal code). These characteristic data are collected from builders during the sampling process. The regression model is estimated using a five-year rolling window of data collected for the NHPI. Estimation is done with a robust M-estimator, using the bi-square loss function (see Amemiya (1985, section 2.3) or Wooldridge (2010, chapter 12) for more detail about M-estimation). Under the assumptions of the classical linear regression model, this approach to estimation is more robust to outlying price observations than the usual OLS estimator. When a new house model is introduced into the sample, the characteristics for the new model and the characteristics for the old model are used to calculate a pair of fitted prices from the regression model. The fitted price for the old model is then subtracted from the fitted price for the new model, and this difference is added to the price for the old model to impute the back price for the new model. This effectively accounts for the difference between the characteristics of the old model and the new model, giving an imputation for what the price of the new model would have been in the previous period.
That is, plugging the characteristics for a new model $n$ into the hedonic model produces a fitted price $\widehat{\log(p_n)}$, and plugging the characteristics of the old model $o$ into the hedonic model produces a fitted price $\widehat{\log(p_o)}$.
The difference between these fitted prices, $\widehat{\log(p_n)} - \widehat{\log(p_o)}$, is then added to the price for the old model, $\log(p_o)$, to produce a back price for the new model, $\exp\left(\log(p_o) + \widehat{\log(p_n)} - \widehat{\log(p_o)}\right)$.
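As a numerical sketch of this back-price imputation (all prices and fitted values below are invented for illustration; only the arithmetic follows the formulas in the text):

```python
from math import exp, log

# Observed price of the old model in the previous period (hypothetical).
p_old = 450_000.0

# Hypothetical fitted log prices from the hedonic regression for the
# new and the old model, evaluated at their characteristics.
fitted_log_new = log(480_000.0)
fitted_log_old = log(440_000.0)

# Back price: add the fitted (log) quality difference between the new
# and old models to the log price of the old model.
back_price = exp(log(p_old) + fitted_log_new - fitted_log_old)

# The observed price of the new model in the current period then gives
# the imputed price relative used in the index calculation.
p_new = 500_000.0
imputed_relative = p_new / back_price

print(round(back_price, 2), round(imputed_relative, 4))
```

Because the model is in logs, the imputation scales the old price by the ratio of the fitted price levels, which is why the back price here exceeds the old price when the new model is fitted as higher quality.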
The imputed price relative for the new model is then

$$\frac{p_n}{\exp\left(\log(p_o) + \widehat{\log(p_n)} - \widehat{\log(p_o)}\right)},$$

and this is used directly in the index calculation.

2 The New Condominium Apartment Price Index (NCAPI)

The NCAPI measures changes over time in builders' selling prices of newly built, apartment-style units in condominium buildings in Calgary, Montreal, Ottawa, Toronto, Vancouver, and Victoria. This is a quarterly index, starting in quarter 1 2017, composed of 6 sub-indices (one for each city). Each sub-index is computed using a unit-value approach, wherein the price of a unit is standardized by its square footage to give a price per square foot. Explicit quality adjustments are made prior to calculating these unit prices in order to produce a constant-quality index. Data for the NCAPI are collected monthly from a survey of builders using an electronic questionnaire.

2.1 Concepts and definitions

Table 2.1 defines key concepts used for constructing the NCAPI. An important aspect of the new condo market is that condo units often sell during the presale phase of a building, prior to construction beginning.
Prices during the presale phase give an indicator of prices for new condo units, but may not reflect a transfer from the buyer to the seller if, for example, the builder is not able to sell enough units to finance construction of the building.

Table 2.1 Concepts and definitions for the NCAPI

Price: Either the transaction price or the list price for a unit as reported by the builder in a given month, exclusive of any sales tax. This is the price received by the builder, and excludes any additional fees paid by the buyer.
Unit value: The price of a unit standardized by its square footage, giving a price per square foot.
Unit type: The number of bedrooms in an apartment, with or without a den, in one of the following categories: one bedroom, one bedroom+den, two bedroom, two bedroom+den and three bedroom.
Presale: The period in which units can be purchased prior to construction beginning.
Sample: See section 2.2.1.
Target: All new residential low rise/high rise apartment condo units available for sale or sold in Calgary, Montreal, Ottawa, Toronto, Vancouver, and Victoria in a given month.
Index base: The period for which the index equals 100. The base period for the NCAPI is 2017 = 100.

2.2 Data

2.2.1 Sampling

Data for the NCAPI are collected from a survey of condo builders. The frame is compiled from multiple sources including zoning and planning applications received from municipalities, building permits, builder associations, new home buyer insurance companies and governmental/non-profit home buyer protection services, advertisements, and various internet sources that provide information on upcoming buildings. The NCAPI uses a multi-stage sample design in which units are selected into the sample at each stage.
The first stage of sampling involves contacting developers in the survey frame to determine if they are in scope for the survey. To ensure that the same building can be followed through time, if a developer is in scope they are asked to report up to four buildings they are developing in which less than 70% of at least one of the target unit types have been sold. The second stage of sampling involves selecting one of these buildings into the sample. An electronic questionnaire is then used to collect price information from developers for up to three units of each type in a building each month. Developers also report any premia applied to a unit (e.g., the value of a parking spot, or a better orientation within the building), and are asked for a list price if no units of a particular type sold that month. The same premium information is also collected for list prices. The sample is periodically refreshed as buildings sell out and builders enter and exit the market.

2.2.2 Cleaning and filtering

The data collected from developers are manually reviewed for consistency and completeness, and certain records may be edited or removed based on judgement. In addition to this manual cleaning, any price relatives (see section 2.3) greater than or equal to 3 absolute deviations from the median are not included in the index calculation. As the NCAPI is based on average transaction/list prices, this is a standard filter to remove outliers that can exert a large influence on averages (e.g., Rousseeuw and Hubert, 2011). In order to adequately clean the data, the NCAPI has a one-quarter revision. This is due in part to the small sample size in most months.

2.3 Index calculation

The index calculation for the NCAPI is fairly straightforward, and is similar in spirit to the NHPI. First, any premia are subtracted from the price of a unit to arrive at a quality-adjusted price for a "no-frills" reference unit.
The quality-adjusted price is then standardized by the square footage of a unit to arrive at a quality-adjusted unit price. Units are stratified by CMA, building, and unit type, and an unweighted geometric index is calculated for each stratum, giving a price relative for each stratum. The combination of stratification and explicit quality adjustment means that the same type of unit within each building is compared over time, giving these price relatives a constant-quality interpretation.^Note These stratum-specific price relatives are then aggregated to the CMA level using a Jevons index. The NCAPI is calculated monthly, and the three index values within a quarter are averaged to produce a quarterly index.

To make the index calculation explicit, let $p_{usbt}$ be the price of unit $u$ of type $s$ in building $b$ at time $t$, let $\Delta_{usbt}$ be the value of the premia for this unit, and let $a_{usbt}$ be its square footage. The quality-adjusted unit price is calculated as

$\rho_{usbt} = \dfrac{p_{usbt} - \Delta_{usbt}}{a_{usbt}}.$

These unit prices are used in a geometric index to produce a collection of strata-level indices between period $t-1$ and period $t$,

$I_{sbt}^{t-1} = \dfrac{\prod_{u=1}^{U_{sbt}} \left( \rho_{usbt} \right)^{1/U_{sbt}}}{\prod_{u=1}^{U_{sbt-1}} \left( \rho_{usbt-1} \right)^{1/U_{sbt-1}}},$

where $U_{sbt}$ is the number of units sold of type $s$ in building $b$ at time $t$.
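The quality adjustment and stratum-level relative described above can be sketched in a few lines of Python. This is an illustrative sketch with hypothetical unit data, not the production system:

```python
from math import prod

def quality_adjusted_unit_prices(units):
    """units: list of (price, premia, square_feet) tuples for one
    stratum (a unit type within a building) in one period."""
    return [(p - delta) / a for p, delta, a in units]

def stratum_relative(units_t, units_t1):
    """Unweighted geometric price relative for one stratum between
    period t-1 and period t, matching the stratum index above."""
    rho_t = quality_adjusted_unit_prices(units_t)
    rho_t1 = quality_adjusted_unit_prices(units_t1)

    def geo_mean(xs):
        return prod(xs) ** (1.0 / len(xs))

    return geo_mean(rho_t) / geo_mean(rho_t1)

# Hypothetical one-bedroom units: (price, premia, square feet).
prev = [(400_000, 10_000, 650), (420_000, 20_000, 700)]
curr = [(430_000, 10_000, 650), (445_000, 25_000, 700)]
rel = stratum_relative(curr, prev)
```

Subtracting the premia first means that a unit selling with a parking spot in one month and without one the next still contributes a like-for-like price comparison.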
To produce a CMA-level index, the within-CMA relatives for each unit type in each building are aggregated with a Jevons index

$I_{t}^{t-1} = \prod_{b=1}^{B_t} \prod_{s=1}^{S_{bt}} \left( I_{sbt}^{t-1} \right)^{1 / \sum_{b=1}^{B_t} S_{bt}},$

where $S_{bt}$ is the number of unit types in building $b$ and $B_t$ is the number of buildings. These period-over-period indices are chained with the previous period's index value to give the current-period index value

$I_t = I_t^{t-1} \cdot I_{t-1},$

where $I_{t-1}$ is the index that runs from the base period to period $t-1$. If a new building is introduced into the sample in a period, there is no attempt to impute back prices for the units in this building. This means that a building is not included in the index calculation in the first period that it is introduced into the sample.
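The Jevons aggregation and chaining steps can be sketched as follows; a minimal Python example with hypothetical stratum relatives:

```python
from math import prod

def jevons_cma_relative(strata_relatives):
    """Aggregate stratum-level relatives to a CMA-level
    period-over-period relative with equal (Jevons) weights.

    strata_relatives: {building: [relative for each unit type]},
    so the exponent is 1 over the total number of strata."""
    all_rels = [r for rels in strata_relatives.values() for r in rels]
    n = len(all_rels)  # = sum of S_bt over buildings
    return prod(r ** (1.0 / n) for r in all_rels)

def chain(period_relatives, base=100.0):
    """Chain period-over-period relatives into index levels,
    as in I_t = I_t^{t-1} * I_{t-1}."""
    levels, level = [], base
    for rel in period_relatives:
        level *= rel
        levels.append(level)
    return levels

# Hypothetical stratum relatives for one month in one CMA.
rels = {"building A": [1.02, 1.01], "building B": [0.99]}
monthly = jevons_cma_relative(rels)
levels = chain([monthly, 1.01])
```

Because a new building contributes no relative in its first sampled period, it simply does not appear in `strata_relatives` until a period-over-period comparison is possible.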
Finally, the quarterly CMA-level index is simply the average of the three index values within that quarter. For the quarter starting in month $q$, the index is

$I_q = \frac{1}{3} \sum_{t=q}^{q+2} I_t.$

The resulting collection of quarterly indices at the CMA level captures the new condo side of the RPPI.

3 The Resale Residential Property Price Index (RRPPI)

The RRPPI measures the change in transaction prices over time for resale houses and condominium apartments in Calgary, Montreal, Ottawa, Toronto, Vancouver, and Victoria. This is a quarterly index, starting in quarter 1 2017, composed of 12 sub-indices—one for each property type (house and condo) in each of the six cities. Each sub-index is computed using the repeat-sales method, an internationally accepted method for constructing a constant-quality price index as outlined in Eurostat's Handbook on Residential Property Prices Indices (IMF, 2015). The data collection, ingestion, editing, and calculation are done in partnership with Teranet and National Bank.^Note

3.1 Concepts and definitions

Table 3.1 defines key concepts used for constructing the RRPPI.
Note that the concept for the date of sale of a property is the closing date, at which time the property is transferred from the seller to the buyer and subsequently recorded in the land registry. The closing date is later than the date at which a buyer and seller agree on a transaction price for the property.

Table 3.1 Concepts and definitions for the RRPPI

Price: Final transaction price at the closing date for the sale of a property, as recorded in the provincial land registry.
Sales date: The closing date for the sale of a property.
Sales pair: Prices and sales dates for consecutive sales of the same physical property.
Sample: All residential single/semi-detached houses, row houses, and apartment condos in Calgary, Montreal, Ottawa, Toronto, Vancouver, and Victoria that sold at least twice since January 1, 1998 and appear in the land registry databases.
Target: All residential single/semi-detached houses, row houses, and apartment condos in Calgary, Montreal, Ottawa, Toronto, Vancouver, and Victoria, eligible for resale, that actually sold between January 1, 1998 and the current period.
Index base: The period for which the index equals 100. The base period for the RRPPI is 2017 = 100.

3.2 Data

3.2.1 Data sources

Property transaction data for the RRPPI come from the provincial land registry offices in Alberta, British Columbia, Ontario, and Quebec, from 1998 to the current period. As each property sale in Canada is registered in its respective provincial land registry office, these data capture all property transactions over this period. The RRPPI includes only transactions for residential single/semi-detached houses, row houses, and condominium apartments in the Calgary, Montreal, Ottawa, Toronto, Vancouver, and Victoria CMAs.
These data are collected and processed by Teranet and National Bank. Transaction data from each provincial land registry are provided on a monthly basis. These transactions are then matched to Teranet's property database to create a sales history for each property. Sales pairs are created for each property that has sold twice, capturing the transaction prices and the closing dates for both sales for that property; sales pairs are created for consecutive sales for properties that have sold three or more times. Properties that have sold only once (e.g., newly built properties) are excluded. Table 3.2 gives a fictitious example of the resulting sales-pair data.

Table 3.2 Example of sales pair data

Address | Property Type | Sales Date | Sales Price | Previous Sales Date | Previous Sales Price
123 Fake St. | Condo | 08/01/2018 | 250,000 | 01/02/2014 | 200,000
321 False Dr. | House | 18/01/2018 | 500,000 | 04/06/2005 | 400,000
321 False Dr. | House | 04/06/2005 | 400,000 | 15/12/1999 | 350,000

3.2.2 Collection delay

Although the land registry data are received from the provincial land registries every month, there is a delay between when sales are recorded in the land registries and when these data are received by Teranet and National Bank. This delay is particularly severe for British Columbia. Table 3.3 gives an example of the cumulative proportion of sales received per province at the end of each month, for a fixed month M. Due to this collection delay, the RRPPI has a revision of one quarter to ensure that sufficient data are collected to produce reliable index values.

Table 3.3 Average portion of sales received per province each month
Province | Period M | Period M+1 | Period M+2 | Period M+3 | Period M+4 | Period M+5 (percent)
Alberta | 92 | 100 | 100 | 100 | 100 | 100
British Columbia | 43 | 94 | 97 | 99 | 99 | 100
Ontario | 90 | 95 | 97 | 100 | 100 | 100
Quebec | 83 | 83 | 83 | 83 | 83 | 88

3.2.3 Cleaning and filtering

Data for the RRPPI come from administrative sources—and are therefore fairly clean—although some filtering is required to remove property transactions that are not appropriate for constructing the RRPPI, as well as outliers that can have a large influence on the index. This includes removing sales pairs for which one of the transactions may not be at arm's length (e.g., a bequest) or may be a distress sale, or for which the price movement between sales is so extreme as to suggest that the quality of the property may have changed (e.g., due to renovations). These filters are applied to each CMA and property type separately, and are summarized in Table 3.4 in the order in which they are applied. Prior to these filters being applied, a series of filters are used to remove transactions that may be part of a builder split or a developer block transaction (i.e., a bundled sale of multiple properties), as these types of transactions fall outside the scope of the RRPPI. Groups of five or more properties within the same Forward Sortation Area (first three digits of a property's postal code), sold on the same date, and for the same price are treated as a block/split transaction. The transaction for each property in the group is removed when this is the most recent transaction for each property.
Transactions in a group can return if the subsequent sale price for at least 75% of the properties in the group is at least 75% of the block/split transaction price, and, for each subsequent sale for each property in the group, there is at most one other property in the same Forward Sortation Area that sold for the same price on the same date. This allows block/split transactions to be used if the price for these transactions is close to the subsequent selling price for most of the properties in the block/split transaction.

Table 3.4 Data filters for sales pairs in the RRPPI

Filter: Transaction price less than or equal to 10,000 dollars.
Rationale: These transactions may not be arm's-length transactions (e.g., bequest).

Filter: Holding period less than 6 months.
Rationale: These transactions can be distress sales or speculative transactions (de Haan and Diewert, 2013, section 6.11), or flipped properties for which there is a large change in the quality of the property (e.g., Jansen et al., 2008; S&P Dow Jones, 2018).

Filter: Annualized return greater than or equal to 3 median absolute deviations from the median.
Rationale: There may be a change in the quality of a property that gives rise to an unusually large price change between transactions, or a data entry error for one of the transaction prices. As the RRPPI is based on average transaction prices, this also removes outliers that can exert a large influence on averages (e.g., Rousseeuw and Hubert, 2011).

3.3 Index calculation

The repeat-sales method offers a means to construct a constant-quality price index, exploiting multiple sales for the same property over time to control for time-invariant differences in quality between properties.
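The sales-pair construction and the Table 3.4 filters can be sketched as follows. This is a stdlib-only Python sketch with fictitious data; the six-month cutoff in days and the exact scaling of the median-absolute-deviation filter are assumptions, not the production specification:

```python
import math
import statistics
from datetime import date

def make_sales_pairs(sales_history):
    """sales_history: list of (sale_date, price) for one property,
    sorted by date. Consecutive sales form pairs; a property that
    sold only once yields no pairs."""
    return [(p0, d0, p1, d1)
            for (d0, p0), (d1, p1) in zip(sales_history, sales_history[1:])]

def annualized_log_return(p0, d0, p1, d1):
    """Continuously compounded annual return between two sales."""
    years = (d1 - d0).days / 365.25
    return math.log(p1 / p0) / years

def filter_sales_pairs(pairs, min_price=10_000, min_hold_days=183, k=3.0):
    """Apply the three Table 3.4 filters to (p0, d0, p1, d1) pairs."""
    # Filters 1 and 2: price floor and minimum holding period.
    kept = [pair for pair in pairs
            if pair[0] > min_price and pair[2] > min_price
            and (pair[3] - pair[1]).days >= min_hold_days]
    # Filter 3: drop pairs whose annualized return is k or more
    # median absolute deviations from the median return.
    rets = [annualized_log_return(*pair) for pair in kept]
    med = statistics.median(rets)
    mad = statistics.median(abs(r - med) for r in rets)
    if mad == 0:
        return kept
    return [pair for pair, r in zip(kept, rets) if abs(r - med) / mad < k]

# The 321 False Dr. history from Table 3.2 gives two pairs.
history = [(date(1999, 12, 15), 350_000), (date(2005, 6, 4), 400_000),
           (date(2018, 1, 18), 500_000)]
two = make_sales_pairs(history)

pairs = [
    (200_000, date(2014, 2, 1), 250_000, date(2018, 1, 8)),  # ordinary
    (5_000, date(2010, 1, 1), 300_000, date(2015, 1, 1)),    # price floor
    (400_000, date(2018, 1, 1), 430_000, date(2018, 3, 1)),  # short hold
    (350_000, date(2012, 1, 1), 380_000, date(2017, 1, 1)),  # ordinary
    (300_000, date(2010, 1, 1), 310_000, date(2016, 1, 1)),  # ordinary
    (100_000, date(2010, 1, 1), 900_000, date(2012, 1, 1)),  # extreme
]
clean = filter_sales_pairs(pairs)
```

Note that the outlier filter needs the whole sample to locate the median, which is why it runs after the pair-level filters.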
Other approaches for constructing a constant-quality index (e.g., hedonics or stratification) require property characteristics, such as the age of the property, and these are not available in the land registry data. See Hansen (2009) for a comparison of the different approaches for constructing a property price index. In practice there are a number of methodological choices to make when implementing a repeat-sales index. This section outlines the repeat-sales method and highlights the particular flavour of the repeat-sales index used to construct the RRPPI. See Wang and Zorn (1997) and de Haan and Diewert (2013, chapter 6) for an overview of the repeat-sales method, and Jansen et al. (2008) for an empirical comparison of different repeat-sales indices. Due to the smaller number of transactions for condos, the condo sub-index is calculated for each quarter. For houses the index is calculated monthly, with the resulting index values averaged over each quarter to produce a quarterly index.

3.3.1 The repeat-sales method

There are two broad classes of repeat-sales price indices—the Jevons-like geometric repeat-sales index (GRS index) proposed by Bailey et al. (1963) and the Laspeyres-like arithmetic repeat-sales index (ARS index) proposed by Shiller (1991).^Note The GRS and ARS indices often show similar price movements over time (e.g., Shiller, 1991). The RRPPI uses the arithmetic repeat-sales index outlined in Shiller (1991, section II), similar to that used by S&P Dow Jones (2018). In addition to the geometric and arithmetic versions of the repeat-sales index, there are various weighting schemes that can be used to weight the price relatives in the index calculation (e.g., Case and Shiller, 1987; Abraham and Schauman, 1991; Calhoun, 1996). These are inverse-variance weights designed to correct for differences in the variance of transaction prices for properties with different holding periods, which can complicate constructing confidence intervals for the index.
While weights directly affect the index values, in practice these weights have at most a marginal impact on the index (e.g., Goetzmann, 1992; Hansen, 2009), especially with large samples. The weighted indices, however, rely on more assumptions than their unweighted counterparts, and cannot be computed if the weights cannot be calculated. Previous studies have also found that the unweighted indices are not inferior to the weighted indices (de Haan and Diewert, 2013, section 6.14). Consequently, as confidence intervals are not reported for the RRPPI, inverse-variance weights are not used to compute the RRPPI.

3.3.2 The GRS and ARS indices

Historically the GRS index came before the ARS index, starting with the seminal paper by Bailey et al. (1963), and it is easier to understand the ARS index by first developing the GRS index. Letting time periods be indexed by $t \in \{0, 1, \ldots, T\}$ and properties be indexed by $i \in \{1, 2, \ldots, N\}$, the starting point for the GRS index is a structural (hedonic) model of property prices

$\log(p_{it}) = \log(P_t) + x_{it}\theta + \log(\epsilon_{it}),$

where $p_{it}$ is the transaction price of property $i$ at time $t$, $P_t$ is a common city-level price reflecting aggregate price movements, $x_{it}$ is a (row) vector of property characteristics (e.g., number of bedrooms) for property $i$ at time $t$, $\theta$ is a vector of implicit (hedonic) prices, and $\epsilon_{it}$ is an error term.^Note This is simply a time-dummy hedonic model in which properties can sell more than once (e.g., de Haan and Diewert, 2013, chapter 5). In the context of this model, the constant-quality (geometric) price index in period $\tau$ with base period 0, denoted by $I_\tau^G$, is $I_\tau^G \equiv P_\tau / P_0$. Importantly, $P_t$ is not random—it is a parameter that governs the joint distribution of property prices.
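Estimating a GRS index amounts to regressing log price ratios of sales pairs on period dummies (+1 in the period of the second sale, −1 in the period of the first, nothing for the base period) and exponentiating the coefficients. A stdlib-only Python sketch with hypothetical sales pairs; this illustrates the geometric (Bailey et al.) version, not the arithmetic version actually used for the RRPPI:

```python
from math import exp, log

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][-1] - sum(M[r][k] * x[k]
                               for k in range(r + 1, n))) / M[r][r]
    return x

def grs_index(pairs, T):
    """GRS index levels for periods 1..T (base period 0 = 100).

    pairs: (first_period, second_period, log_price_ratio). OLS on the
    +1/-1 period dummies recovers log(P_t / P_0); the normal equations
    D'D beta = D'y are built directly, skipping the base-period dummy."""
    DtD = [[0.0] * T for _ in range(T)]
    Dty = [0.0] * T
    for f, s, y in pairs:
        cols = [(s - 1, 1.0)]
        if f >= 1:
            cols.append((f - 1, -1.0))
        for j, vj in cols:
            Dty[j] += vj * y
            for k, vk in cols:
                DtD[j][k] += vj * vk
    beta = solve(DtD, Dty)
    return [100.0 * exp(b) for b in beta]

# Three hypothetical sales pairs over periods 0..2.
pairs = [(0, 1, log(1.05)), (0, 2, log(1.12)), (1, 2, log(1.07))]
index = grs_index(pairs, T=2)
```

With only three pairs the estimate splits the disagreement between the direct 0-to-2 ratio (1.12) and the chained 0-to-1-to-2 path (1.05 × 1.07), which is exactly what least squares does here.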
Under the assumption that property characteristics do not change over time (i.e., $x_{it} = x_{i}$ for all $t$) and that each property sells twice, the first-difference transformation can be used to deliver
$\log\left(\frac{p_{is(i)}}{p_{if(i)}}\right) = \log\left(\frac{P_{s(i)}}{P_{f(i)}}\right) + \log\left(\frac{\epsilon_{is(i)}}{\epsilon_{if(i)}}\right) = \sum_{t=1}^{T} D_{it} \log\left(\frac{P_{t}}{P_{0}}\right) + \log\left(\frac{\epsilon_{is(i)}}{\epsilon_{if(i)}}\right),$
where $s(i)$ gives the time of the second sale for property $i$, $f(i)$ gives the time of the first sale for property $i$, and $D_{it}$ is a dummy variable that takes the value 1 if a property sells for the second time in period $t$ (i.e., $s(i) = t$), -1 if the property sells for the first time in period $t$ (i.e., $f(i) = t$), and 0 otherwise. The assumption that property characteristics do not change over time means that the percent change in a property's price follows the aggregate percent change in property prices, up to an additive error. Properties that sell three or more times can be incorporated in the first-difference transformation by treating consecutive pairs of sales as distinct properties. Under the assumption that the error terms are strictly exogenous, so that $E[\log(\epsilon_{it}) \mid D_{i1}, D_{i2}, \ldots, D_{iT}] = 0$, a standard assumption in panel-data applications (e.g., Wooldridge, 2010, chapter 10), the assumption that property characteristics do not change over time allows for the price index to be identified from the linear regression
$\log\left(\frac{p_{is(i)}}{p_{if(i)}}\right) = \sum_{t=1}^{T} D_{it} \gamma_{t} + \log\left(\frac{\epsilon_{is(i)}}{\epsilon_{if(i)}}\right),$
so that $I_{\tau}^{G} \equiv P_{\tau} / P_{0} = \exp(\gamma_{\tau})$. The first-difference transformation turns a structural model that depends on property characteristics into an estimating equation that depends only on the time when a property sells.^Note It is instructive to derive the form of the GRS index as an index number to make the link with the ARS index.
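To make the estimating equation concrete, the sketch below runs the time-dummy regression on three hypothetical sales pairs over two periods (all prices are invented for illustration) and recovers the GRS index as $\exp(\gamma_{t})$; it is a minimal illustration, not the production estimation routine.

```python
import math

# Hypothetical repeat-sales data: each record is
# (first-sale period, first price, second-sale period, second price).
sales = [
    (1, 110.0, 2, 121.0),  # house a
    (0, 100.0, 2, 125.0),  # house b
    (0, 100.0, 1, 108.0),  # house c
]

T = 2  # periods 1..T; period 0 is the base

# Build the dummy matrix D (one row per sales pair) and the
# response y = log(second price / first price).
D, y = [], []
for f, pf, s, ps in sales:
    row = [0.0] * T
    if s > 0:
        row[s - 1] = 1.0   # +1 in the period of the second sale
    if f > 0:
        row[f - 1] = -1.0  # -1 in the period of the first sale
    D.append(row)
    y.append(math.log(ps / pf))

# OLS normal equations (D'D) gamma = D'y, solved explicitly for the 2x2 case.
dtd = [[sum(D[i][r] * D[i][c] for i in range(len(D))) for c in range(T)] for r in range(T)]
dty = [sum(D[i][r] * y[i] for i in range(len(D))) for r in range(T)]
det = dtd[0][0] * dtd[1][1] - dtd[0][1] * dtd[1][0]
gamma = [
    (dtd[1][1] * dty[0] - dtd[0][1] * dty[1]) / det,
    (dtd[0][0] * dty[1] - dtd[1][0] * dty[0]) / det,
]

index = [math.exp(g) for g in gamma]  # I_t^G = exp(gamma_t)
print([round(v, 4) for v in index])
```

With more than two periods, the normal equations would be solved with a general linear solver rather than the explicit 2x2 inverse used here.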
Letting $N_{f}(\tau)$ be the set of properties that sell for the first time in period $\tau$, $N_{s}(\tau)$ be the set of properties that sell for the second time in period $\tau$, and $N(\tau) = |N_{f}(\tau)| + |N_{s}(\tau)|$ (the number of properties that sell in period $\tau$), it can be shown that
$I_{\tau}^{G} = \prod_{i \in N_{f}(\tau)} \left(\frac{p_{i\tau}}{p_{is(i)} / I_{s(i)}^{G}}\right)^{\frac{1}{N(\tau)}} \prod_{i \in N_{s}(\tau)} \left(\frac{p_{i\tau}}{p_{if(i)} / I_{f(i)}^{G}}\right)^{\frac{1}{N(\tau)}}.$
The GRS index is simply a matched-model Jevons index with a twist. Rather than use only property transactions that occur in period 0 and period $\tau$, the index itself is used to extrapolate prices across time for all properties that sell in period $\tau$ by deflating prices for sales that do not occur in the base period using that period's index.
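The equivalence between the regression-based GRS index and the index-number form above can be checked numerically. The sketch below uses the same three-house sales pattern as the worked example in section 3.3.3 (house $a$ sells in periods 1 and 2, $b$ in 0 and 2, $c$ in 0 and 1), with invented prices.

```python
import math

# Hypothetical prices for the three-house, three-period example.
p_a1, p_a2 = 110.0, 121.0
p_b0, p_b2 = 100.0, 125.0
p_c0, p_c1 = 100.0, 108.0

# For this dataset the time-dummy OLS normal equations reduce to:
#   2*log(I1) - log(I2) = log(p_a1/p_a2) + log(p_c1/p_c0)
#   2*log(I2) - log(I1) = log(p_a2/p_a1) + log(p_b2/p_b0)
r1 = math.log(p_a1 / p_a2) + math.log(p_c1 / p_c0)
r2 = math.log(p_a2 / p_a1) + math.log(p_b2 / p_b0)
log_I1 = (2 * r1 + r2) / 3
log_I2 = (2 * r2 + r1) / 3
I1, I2 = math.exp(log_I1), math.exp(log_I2)

# Index-number form: a geometric mean of deflated price relatives, N(tau) = 2.
rhs1 = math.sqrt((p_a1 / (p_a2 / I2)) * (p_c1 / p_c0))
rhs2 = math.sqrt((p_a2 / (p_a1 / I1)) * (p_b2 / p_b0))
assert abs(I1 - rhs1) < 1e-9 and abs(I2 - rhs2) < 1e-9
print(round(I1, 4), round(I2, 4))
```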
This allows all properties that sell in period $\tau$ to be used in the index calculation, whether that property sells in period 0 or not. As an alternative to a geometric index, Shiller (1991) proposes the ARS index, denoted by $I_{\tau}^{A}$, that simply replaces the geometric averages in the GRS index with arithmetic ones:
$I_{\tau}^{A} = \frac{\sum_{i \in N_{f}(\tau)} p_{i\tau} + \sum_{i \in N_{s}(\tau)} p_{i\tau}}{\sum_{i \in N_{f}(\tau)} \frac{p_{is(i)}}{I_{s(i)}^{A}} + \sum_{i \in N_{s}(\tau)} \frac{p_{if(i)}}{I_{f(i)}^{A}}}.$
Price relatives are formed in the same way as for the GRS index, except that a Laspeyres index is now used to combine price relatives, rather than a Jevons index. This is the index used to calculate the RRPPI. Computing the ARS index requires solving a system of equations to calculate the index in each period. As with the GRS index, the ARS index can be computed as a linear regression, although now with a set of instrumental variables; this provides a convenient way to calculate the index and determine its statistical properties.
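Because the ARS index for each period depends on the index in other periods, one simple way to compute it is to iterate the defining equations to a fixed point. The sketch below is an illustrative solver on hypothetical data, not the official estimation routine.

```python
# Fixed-point solution of the simultaneous ARS equations for a toy dataset.
# Records: (first-sale period, first price, second-sale period, second price);
# the prices are invented for illustration.
sales = [
    (1, 110.0, 2, 121.0),  # house a
    (0, 100.0, 2, 125.0),  # house b
    (0, 100.0, 1, 108.0),  # house c
]

T = 2
I = [1.0] * (T + 1)  # I[0] is the base period, fixed at 1

for _ in range(200):  # iterate the defining equations to a fixed point
    for tau in range(1, T + 1):
        num = den = 0.0
        for f, pf, s, ps in sales:
            if f == tau:      # first sale in tau: deflate the *second* sale price
                num += pf
                den += ps / I[s]
            elif s == tau:    # second sale in tau: deflate the *first* sale price
                num += ps
                den += pf / I[f]
        I[tau] = num / den

print([round(v, 4) for v in I])
```

Each pass recomputes every period's index from the current values of the others; for data like these the updates contract quickly, so a few dozen iterations reach machine precision.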
Letting
$Y_{i} = \begin{cases} p_{if(i)} & \text{if } f(i) = 0 \\ 0 & \text{if } f(i) > 0 \end{cases} \qquad \text{and} \qquad X_{it} = \begin{cases} -p_{if(i)} & \text{if } f(i) = t \\ p_{is(i)} & \text{if } s(i) = t \\ 0 & \text{otherwise} \end{cases},$
the ARS index is the reciprocal of the instrumental variables (IV) estimator for the regression
$Y_{i} = \sum_{t=1}^{T} X_{it} \beta_{t} + v_{i},$
with $D_{it}$ as an instrument for $X_{it}$. Letting $X_{i} = (X_{i1}, X_{i2}, \ldots, X_{iT})$ and $D_{i} = (D_{i1}, D_{i2}, \ldots, D_{iT})$, the entire series of ARS indices from period 1 to period $T$ is computed as
$(I_{1}^{A}, I_{2}^{A}, \ldots, I_{T}^{A})' = \mathrm{diag}\left(\left[\sum_{i=1}^{N} D_{i}' X_{i}\right]^{-1} \sum_{i=1}^{N} D_{i}' Y_{i}\right)^{-1}.$
The validity of the IV estimator rests on the stipulation that the index to calculate is an arithmetic index (Shiller, 1991, p. 115). Given a sample of repeat-sales transactions for properties over $T$ periods, the IV estimator is consistent under fairly weak conditions on the sampling process (e.g., White, 2001, theorem 3.15; Wooldridge, 2010, theorems 5.1 and 8.1), so that the estimator for the ARS index converges in probability to the population ARS index (i.e., it is unbiased in large samples).
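The closed-form expression amounts to accumulating $\sum_{i} D_{i}' X_{i}$ and $\sum_{i} D_{i}' Y_{i}$, solving the resulting $T \times T$ linear system for $\hat{\beta}$, and taking reciprocals. A self-contained sketch with hypothetical data:

```python
# ARS index as the reciprocal of an IV estimator: accumulate the moment
# matrices, solve (sum D_i'X_i) beta = (sum D_i'Y_i), then invert elementwise.
# Data are invented: (first-sale period, first price, second-sale period, second price).
sales = [
    (1, 110.0, 2, 121.0),
    (0, 100.0, 2, 125.0),
    (0, 100.0, 1, 108.0),
]
T = 2

A = [[0.0] * T for _ in range(T)]  # accumulates sum_i D_i' X_i
b = [0.0] * T                      # accumulates sum_i D_i' Y_i
for f, pf, s, ps in sales:
    D = [0.0] * T
    X = [0.0] * T
    if s > 0:
        D[s - 1] = 1.0
        X[s - 1] = ps      # X_it = p_is(i) when s(i) = t
    if f > 0:
        D[f - 1] = -1.0
        X[f - 1] = -pf     # X_it = -p_if(i) when f(i) = t
    Y = pf if f == 0 else 0.0
    for r in range(T):
        b[r] += D[r] * Y
        for c in range(T):
            A[r][c] += D[r] * X[c]

# Solve A beta = b with partial-pivot Gaussian elimination.
for k in range(T):
    p = max(range(k, T), key=lambda r: abs(A[r][k]))
    A[k], A[p] = A[p], A[k]
    b[k], b[p] = b[p], b[k]
    for r in range(k + 1, T):
        m = A[r][k] / A[k][k]
        for c in range(k, T):
            A[r][c] -= m * A[k][c]
        b[r] -= m * b[k]
beta = [0.0] * T
for k in reversed(range(T)):
    beta[k] = (b[k] - sum(A[k][c] * beta[c] for c in range(k + 1, T))) / A[k][k]

ars = [1.0 / bt for bt in beta]  # I_t^A = 1 / beta_t
print([round(v, 4) for v in ars])
```

For the toy data used here, the resulting indices satisfy the period-by-period defining equations of the ARS index.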
3.3.3 Worked example of the ARS index

The simplest non-trivial example of a repeat-sales index has 3 periods (an initial period 0 that serves as the base period, followed by periods 1 and 2) and three houses, labelled as $a$, $b$, and $c$. House $a$ sells for the first time in period 1 and for the second time in period 2; house $b$ sells for the first time in period 0 and for the second time in period 2; and house $c$ sells for the first time in period 0 and for the second time in period 1. Table 3.5 summarizes these data.

Table 3.5 Sales pair data

House   Sales Date   Sales Price   Previous Sales Date   Previous Sales Price
$a$     2            $p_{a2}$      1                     $p_{a1}$
$b$     2            $p_{b2}$      0                     $p_{b0}$
$c$     1            $p_{c1}$      0                     $p_{c0}$

With these data, the ARS index is
$I_{1}^{A} = \frac{p_{a1} + p_{c1}}{\frac{p_{a2}}{I_{2}^{A}} + p_{c0}} = \frac{p_{a1}}{\frac{p_{a2}}{I_{2}^{A}}} \cdot \frac{\frac{p_{a2}}{I_{2}^{A}}}{\frac{p_{a2}}{I_{2}^{A}} + p_{c0}} + \frac{p_{c1}}{p_{c0}} \cdot \frac{p_{c0}}{\frac{p_{a2}}{I_{2}^{A}} + p_{c0}}$
$I_{2}^{A} = \frac{p_{a2} + p_{b2}}{\frac{p_{a1}}{I_{1}^{A}} + p_{b0}} = \frac{p_{a2}}{\frac{p_{a1}}{I_{1}^{A}}} \cdot \frac{\frac{p_{a1}}{I_{1}^{A}}}{\frac{p_{a1}}{I_{1}^{A}} + p_{b0}} + \frac{p_{b2}}{p_{b0}} \cdot \frac{p_{b0}}{\frac{p_{a1}}{I_{1}^{A}} + p_{b0}}.$
This is like a pure matched-model Laspeyres index, except that house $a$ can be included in the index calculation by deflating its price to get a pseudo period 0 price.^Note Doing this, however, means that the index is defined by a system of equations (one for each time period) that must be solved to get the index for a given period. The ARS index is defined simultaneously for each period. To get a closed-form solution for the ARS index, note that
$D \equiv \left[\begin{array}{c} D_{a} \\ D_{b} \\ D_{c} \end{array}\right] = \left[\begin{array}{cc} -1 & 1 \\ 0 & 1 \\ 1 & 0 \end{array}\right],$ $X \equiv \left[\begin{array}{c} X_{a} \\ X_{b} \\ X_{c} \end{array}\right] = \left[\begin{array}{cc} -p_{a1} & p_{a2} \\ 0 & p_{b2} \\ p_{c1} & 0 \end{array}\right],$ and $Y \equiv \left[\begin{array}{c} Y_{a} \\ Y_{b} \\ Y_{c} \end{array}\right] = \left[\begin{array}{c} 0 \\ p_{b0} \\ p_{c0} \end{array}\right].$
The ARS index comes from the IV estimator for the linear regression $Y = X\beta + v$ with $D$ as an instrumental variable.
The moment (orthogonality) condition for the IV estimator, $\hat{\beta}$, is
$D'X \cdot \hat{\beta} = D'Y$, or
$\left[\begin{array}{cc} p_{a1} + p_{c1} & -p_{a2} \\ -p_{a1} & p_{a2} + p_{b2} \end{array}\right] \left[\begin{array}{c} \hat{\beta}_{1} \\ \hat{\beta}_{2} \end{array}\right] = \left[\begin{array}{c} p_{c0} \\ p_{b0} \end{array}\right],$
the solution to which is
$\left[\begin{array}{c} \hat{\beta}_{1} \\ \hat{\beta}_{2} \end{array}\right] = \frac{1}{\left(p_{a1} + p_{c1}\right)\left(p_{a2} + p_{b2}\right) - p_{a1} p_{a2}} \left[\begin{array}{cc} p_{a2} + p_{b2} & p_{a2} \\ p_{a1} & p_{a1} + p_{c1} \end{array}\right] \left[\begin{array}{c} p_{c0} \\ p_{b0} \end{array}\right].$
The ARS index for period $t$ is simply $1 / \hat{\beta}_{t}$, and thus
$I_{1}^{A} = \frac{\left(p_{a1} + p_{c1}\right)\left(p_{a2} + p_{b2}\right) - p_{a1} p_{a2}}{\left(p_{a2} + p_{b2}\right) p_{c0} + p_{a2} p_{b0}} \qquad \text{and} \qquad I_{2}^{A} = \frac{\left(p_{a1} + p_{c1}\right)\left(p_{a2} + p_{b2}\right) - p_{a1} p_{a2}}{p_{a1} p_{c0} + \left(p_{a1} + p_{c1}\right) p_{b0}}.$
Despite the conceptual simplicity of the ARS as a matched-model index, it nonetheless has a fairly complex non-linear structure.

3.3.4 Representativeness of the target population

The target population for the RRPPI is all properties that are eligible for resale and have actually sold since January 1998. In practice, sales-pair data are only available for properties that sell two or more times over this period; properties that sell only once are missing from the sample. This is a sample selection problem (repeat-sale properties may not be representative of all transacted properties), and the resulting repeat-sales index may not capture the price movement for the target population of all transacted properties. Producing a representative index rests on the assumption that there are no systematic differences in latent selling prices and holding periods between properties that transact only once and those that transact twice or more. (See Wooldridge (2010, theorem 19.1) for precise conditions under which sample selection can be ignored with an IV estimator.)
Previous studies have found some evidence to support this assumption (see de Haan and Diewert, 2013, section 6.17). As the RRPPI focuses on resale properties, properties that sell only once because they are newly built do not contribute to a selected sample. The only divergence between the target population and the available sample of transactions is the set of properties that sold prior to January 1998 and only once since then. These properties are not used to calculate the RRPPI, but fall in the scope of the target population as they are both eligible for resale and actually sold after January 1998. This discrepancy between the target population and the sample will disappear over time.

3.3.5 Inverse-variance weights

Case and Shiller (1987) argue that the variance of transaction prices for sales pairs increases with the holding period for a property, in which case the error term in the regression for the GRS index can be heteroskedastic. This means that the usual OLS standard errors for the GRS index are inconsistent, and the OLS estimator is no longer minimum variance; the same applies to the IV estimator for the ARS index. If the relationship between holding period and variance in transaction prices is known, the generalized least squares (GLS) and generalized instrumental variables (GIV) estimators, using inverse-variance weights, are more efficient alternatives to their unweighted counterparts, and provide a consistent estimator for their standard errors (White, 2001, theorem 4.62; Wooldridge, 2010, theorem 8.5). Heteroskedasticity is not particularly problematic for the RRPPI; as with most national price indices, standard errors are not reported for the RRPPI, and there is a sufficiently large sample that asymptotic efficiency is not a concern (Wang and Zorn, 1997, section 4.4). Using inverse-variance weights, however, modifies the index values.
This is problematic as the GLS and GIV estimators require stronger assumptions than the usual OLS and IV estimators (e.g., the relationship between variance and holding period must be known), and failure of these assumptions can undermine the usefulness of these estimators (e.g., Angrist and Pischke, 2009, section 3.4.1; Wooldridge, 2010, section 4.2.3). There is also no guarantee that inverse-variance weights can be calculated at any point in time (e.g., Calhoun, 1996), and since the weights affect the index values, the index cannot be calculated if the weights fail. Consequently, the RRPPI does not use inverse-variance weights.

3.4 Revision

3.4.1 Accounting for revision in the repeat sales model

A disadvantage of any repeat-sales index is that it is subject to perpetual revision. Computing the index for one period requires computing the index for all periods and, as new data become available, this will change the index values for previous periods. The RRPPI avoids revision by using a movement splice to update the index when new periods of data become available. With this approach, the price movement of the series computed with the most recent data is chained together with the last index value of the original series, thereby avoiding revision of the original series. This method of successively chaining together indices is used with hedonic price indices to avoid this same type of revision (e.g., de Haan and Diewert, 2013, section 5.18).

To fix notation, let $I_0^S,\dots,I_T^S$ be a series of repeat-sale price indices running from period 0 to period $T$, calculated using the first $S\le T$ periods of data. This series can be updated with a movement splice as follows. First, with $T+1$ periods of data available, calculate the series of indices $I_0^{T+1},\dots,I_{T+1}^{T+1}$; that is, recalculate the entire series using all available data.
To then update the original series of indices that runs until period $T$, simply calculate the index value in period $T+1$ as $I_T^S \cdot I_{T+1}^{T+1}/I_T^{T+1}$, and append this value to the original series. Thus, the original series of indices becomes

$$I_0^S, I_1^S, \dots, I_T^S, \; I_T^S \cdot \frac{I_{T+1}^{T+1}}{I_T^{T+1}}.$$

The impact of any drift in the index from this type of splicing can easily be evaluated over time by comparing the index calculated using all of the data to the spliced index, and this is part of the quality assurance work done when producing the RRPPI. Provided that the historical index series is relatively stable over time, there should be minimal drift from splicing.

3.4.2 Accounting for revision due to collection delay

The RRPPI has a one-quarter revision to account for the delay of incoming data from the land registries. This revision means that the index is computed twice for each period. For example, when computing the 2018 quarter 1 index, the index is first computed in quarter 2 of 2018 using all of the data received in quarter 1 of 2018, and is then computed again in quarter 3 of 2018 once the majority of the quarter 1 2018 data has been received from the land registries in quarter 2 of 2018. This revision means that the index must be spliced with two different index series.
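The movement splice described in 3.4.1 can be sketched as a small function (the function name and sample index values are illustrative, not from the source):

```python
def movement_splice(published, recalculated):
    """Extend a published index series by one period with a movement splice.

    `published` holds the series I_0^S, ..., I_T^S; `recalculated` holds the
    full series recomputed with the latest data, I_0^{T+1}, ..., I_{T+1}^{T+1}.
    Only the most recent movement of the recalculated series is used, so the
    published history is never revised.
    """
    movement = recalculated[-1] / recalculated[-2]  # I_{T+1}^{T+1} / I_T^{T+1}
    return published + [published[-1] * movement]

# Illustrative (made-up) index values
published = [100.0, 101.0, 103.0]
recalculated = [100.0, 101.5, 103.5, 105.0]
updated = movement_splice(published, recalculated)
```

Note that `updated` keeps the three published values untouched and appends only the spliced period-3 value.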
Using the notation above, the preliminary index is calculated as

$$I_0^S, I_1^S, \dots, I_T^S, \; I_T^S \cdot \frac{I_{T+1}^{T+1}}{I_T^{T+1}},$$

and the revised index is calculated as

$$I_0^S, I_1^S, \dots, I_T^S, \; I_T^S \cdot \frac{I_{T+1}^{T+2}}{I_T^{T+2}}.$$

This approach to splicing allows for a one-quarter revision to the index, so that additional data can be collected from the land registries, while avoiding perpetual revision of the repeat-sales index.

4 The Residential Property Price Index (RPPI)

The RPPI aggregates the CMA-level indices from the NHPI, NCAPI, and RRPPI to produce a price index for residential properties in Calgary, Montreal, Ottawa, Toronto, Vancouver, Victoria, and a six-CMA composite. The target population for the RPPI is the union of the target populations for each of the three component indices. Each of the four indices (new house, new condo, resale house, resale condo) is aggregated with a Young index, with sales weights capturing the value share of new versus resale properties, and houses versus condo apartments, sold in each CMA. The RPPI is a quarterly index, as both the NCAPI and the RRPPI are quarterly, starting in quarter 1 of 2017. To keep in line with the NCAPI and RRPPI, the RPPI has a one-quarter revision.

The weights for the RPPI are derived from the Canada Mortgage and Housing Corporation's Market Absorption Survey and the inventory of repeat-sales transactions from Teranet and National Bank. These sources capture the value of all new and repeat-sales transactions, respectively, for residential single/semi-detached houses, row houses, and low-rise/high-rise apartment condos; consequently, the aggregate values are comparable and can be used to produce a value share for new versus resale properties, as well as for houses versus condo apartments. The weight reference period is the three calendar years prior to the current year of the index, and these weights are updated annually.
To avoid overlap with the revision period, the weights are updated in quarter 2 of the year.

References

Abraham, J. M. and Schauman, W. S. (1991). New evidence on home prices for Freddie Mac repeat sales. Real Estate Economics, 19(3): 333-352.
Amemiya, T. (1985). Advanced Econometrics. Harvard University Press.
Angrist, J. and Pischke, J.-S. (2009). Mostly Harmless Econometrics. Princeton University Press.
Bailey, M., Muth, R., and Nourse, H. (1963). A regression method for real estate price index construction. Journal of the American Statistical Association, 58(304): 933-942.
Calhoun, C. (1996). OFHEO House Price Indexes: HPI Technical Description. Office of Federal Housing Enterprise Oversight. Retrieved from http://www.ofheo.gov/Media/Archive/house/hpi_tech.pdf.
Case, K. and Shiller, R. (1987). Prices of single-family homes since 1970: New indexes for four cities. New England Economic Review: 45-56.
de Haan, J. and Diewert, W. E. (Eds.). (2013). Handbook on Residential Property Prices Indices (RPPIs). Eurostat.
Goetzmann, W. (1992). The accuracy of real estate indices: Repeat sale estimators. Journal of Real Estate Finance and Economics, 5(1): 5-53.
Hansen, J. (2009). Australian house prices: A comparison of hedonic and repeat-sales measures. Economic Record, 85(269): 132-145.
IMF. (2015). The Special Data Dissemination Standard Plus: Guide for Adherents and Users. Retrieved from https://www.imf.org/external/pubs/ft/sdds/guide/plus/2015/sddsplus15.pdf.
Jansen, S., de Vries, P., Coolen, H., Lamain, C., and Boelhouwer, P. (2008). Developing a house price index for The Netherlands: A practical application of weighted repeat sales. Journal of Real Estate Finance and Economics, 37(2): 163-186.
Rousseeuw, P. J. and Hubert, M. (2011). Robust statistics for outlier detection. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 1(1): 73-79.
Shiller, R. (1991). Arithmetic repeat sales price estimators. Journal of Housing Economics, 1(1): 110-126.
S&P Dow Jones.
(April 2018). S&P CoreLogic Case-Shiller Home Price Indices Methodology. Retrieved from https://us.spindices.com/index-family/real-estate/sp-corelogic-case-shiller.
Wang, F. and Zorn, P. (1997). Estimating house price growth with repeat sales data: What's the aim of the game? Journal of Housing Economics, 6: 93-118.
White, H. (2001). Asymptotic Theory for Econometricians (revised edition). Emerald Group Publishing.
Wooldridge, J. (2010). Econometric Analysis of Cross Section and Panel Data (2nd edition). MIT Press.
undefined-inverse ( m n r -- * ) - Factor Documentation

m: an integer
n: an integer
r: a rank-kind

Word description: Throws an error.

Error description: multiplicative-inverse was used with a non-square matrix of rank r whose dimensions are m x n. It is not generally possible to find the inverse of a non-square matrix.
Marlene has a credit card that uses the adjusted balance method. For the first 10 days of one of her 30-day billing cycles, her balance was $570. She then made a purchase for $120, so her balance jumped to $690, and it remained that amount for the next 10 days. Marlene then made a payment of $250, so her balance for the last 10 days of the billing cycle was $440. If her credit card's APR is 15%, which of these expressions could be used to calculate the amount Marlene was charged in interest for the billing cycle?
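Under the adjusted balance method, the finance charge is computed on the previous balance less any payments and credits made during the cycle; new purchases are not included. A sketch of that calculation for Marlene's numbers, assuming a monthly periodic rate of APR/12:

```python
APR = 0.15
monthly_rate = APR / 12           # periodic rate for a monthly billing cycle

previous_balance = 570.0          # balance at the start of the cycle
payments_and_credits = 250.0      # the $250 payment; the $120 purchase is excluded

# Adjusted balance = previous balance minus payments/credits
adjusted_balance = previous_balance - payments_and_credits  # $320
interest = monthly_rate * adjusted_balance                  # (0.15/12) * $320
print(f"${interest:.2f}")  # → $4.00
```

So the matching expression has the form (0.15/12)($320).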
Algebra 1 Chapter 9 - Quadratic Functions and Equations - 9-1 Quadratic Graphs and Their Properties - Practice and Problem-Solving Exercises - Page 538 15

The graph and data table are shown below

Work Step by Step

To graph the given equation, we calculate y for several x-values and plot the resulting points. Using x = -3, 0, 3 in $y= -\frac{1}{3}x^{2}$:

x = -3: $y= -\frac{1}{3}(-3)^{2} = -\frac{1}{3}(9) = -3$

x = 0: $y= -\frac{1}{3}(0)^{2} = 0$

x = 3: $y= -\frac{1}{3}(3)^{2} = -\frac{1}{3}(9) = -3$

This gives the points (-3, -3), (0, 0), and (3, -3). We plot the points and connect them with a smooth curve to get the final graph.
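The same table of values can be produced programmatically; a small Python sketch (not part of the textbook solution):

```python
def f(x):
    """y = -(1/3) * x^2"""
    return -(x ** 2) / 3

# The x-values used to sketch the parabola
points = [(x, f(x)) for x in (-3, 0, 3)]
print(points)  # [(-3, -3.0), (0, 0.0), (3, -3.0)]
```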
Conditional Remix & Share Permitted CC BY-NC-SA

Students will bee able to celebrate the 100th day of school by bringing in a project representing it. They will also fill out a chart that has 100 blank squares for them to fill in. (Julie Dameron)

Resources to mark the 100th day of school with math activities. Challenge students to generate 100 different ways to represent the number 100. Students will easily generate 99 + 1 and 50 + 50, but encourage them to think out of the box. Challenge them to include examples from all of the NCTM Standards strands: number sense, numerical operations, geometry, measurement, algebra, patterns, data analysis, probability, discrete math. Create a class list to record the best entries. Some teachers write 100 in big bubble numeral style and then record the entries inside the numerals. (Terry Kawas)

Conditional Remix & Share Permitted CC BY-NC-SA

Students will explore multi-digit numbers and the relationship between ones, tens and hundreds; a digit in one place is 10x the digit in the place to its right. Students will use their bodies to represent digits in multi-digit numbers up to the hundredths place and compare these numbers using <, =, >. Students will use their bodies as multi-digit numbers to add and subtract.
Only Sharing Permitted CC BY-NC-ND

Title: 10 for the Win!
Grade: Kindergarten
Overall Goal: To have students be able to count by multiples of 10 and comprehend the idea of a sequence of steps involved in a process.

Standards: 5d - Students understand how automation works and use algorithmic thinking to develop a sequence of steps to create and test automated solutions. K.NS.1 - Count to at least 100 by ones and tens and count on by one from any number.
Learning Objective: Students will be able to program the beebots to go the correct distance. Students will be able to count to 100 by tens.
Assessment: The students will have to use the beebots to move forward the correct amount of steps. The students will have the squares the beebot travels represent sets of 10.
Key Terms & Definitions:
- Sequence: a certain order in which steps flow
- Skip counting: skipping numbers while counting; counting by multiples
- Number line: a line which shows numbers in order, often marked at intervals
- Program: provide a machine with coded instructions to perform a task

Lesson Introduction (Hook, Grabber): Students will paint hands and stamp them on paper! Each set of hands will represent a set of 10. We will do this all the way up to 100. This paper will be hung in the front of the classroom as a reminder of multiples of 10.

Lesson Main:
- After hanging up our poster with the hands displaying multiples of 10, the teacher would count with the class by 10's all the way up to 100, while referring to the poster so they can follow along.
- We will also pass out a number line to the students that highlights 10's so they have a reference if they struggle.
- We will make a number line and write multiples of 10 along the side. We will measure out the space between numbers so that it is equal to the length the Beebot travels for each time the button is pushed. For example, if the student wanted to get to 30, they would have to know that you count up by saying "10, 20, 30" and they would need to press the forward button on the Beebot 3 times. Each press of the button is a multiple of 10.
- For this activity, the teacher will break up the students into small groups and they will work together. They will draw a card which will have a multiple of 10 on it ranging from 10-100. The students will have to decide how many 10's it takes to count up to that number, as well as how many times they will need to program the Beebot to reach the answer on the number line.

Lesson Ending: For the lesson ending, we will regroup as a class and talk about how we felt the Beebot activity went. Then we will count together by 10's up to 100 again to reiterate what we have been learning.
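The press-counting step above amounts to dividing the target by 10; a tiny sketch (the function name is ours, not from the lesson):

```python
def presses_for_target(target, step=10):
    """Forward presses the Beebot needs when each square on the number
    line represents one step of `step` (a multiple of 10)."""
    if target % step != 0:
        raise ValueError("target must be a multiple of the step size")
    return target // step

# Drawing the card "30" means counting "10, 20, 30": three presses
print(presses_for_target(30))  # → 3
```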
Lastly, we will pass out a worksheet to the students which we have included a link to under our resources, and have them complete it individually. This will give us an idea of the students' understanding of this concept and can be used for our assessment.

Assessment Rubric:

Indicator | Great | Average | Poor
Hand Cut-outs | Student participated in the tracing and cutting out of hands. | Student partially participated in the tracing and cutting out of hands. | Student failed to participate in the tracing and cutting out of hands.
Beebot activity | Student was able to successfully move the Beebot to the correct answer. | Student was able to move the Beebot, but not to the correct answer. | Student was unable to move the Beebot and was unable to correctly answer.
Worksheet | Student was able to correctly fill out the entire worksheet. | Student was able to fill out 70% of the worksheet. | Student was unable to fill out at least 70% of the worksheet.

Resources / Artifacts:
- Number line for students: https://www.helpingwithmath.com/printables/others/lin0301number11.htm
- Website which has handprint idea on it: https://www.theclassroomkey.com/2016/02/big-list-skip-counting-activities.html
- Lesson assessment used in the lesson ending: https://www.pinterest.com/pin/287597126178910688

Differentiation:
- Differentiation for ability levels: If a student really struggled with math skills, we could place them in a group with stronger math students. We could also offer an alternative activity for the Beebot timeline where we made the timeline go up by smaller multiples. For the worksheet, they could receive a longer amount of time to work on it and have directions read to them/receive help as needed.
- Differentiation for access & resources: If the school had limited resources and did not have access to these robots, they could use other tools like toy cars or something they could use to roll to the spots on the timeline. The game could be altered to fit a large variety of resources.
The worksheet we used was found online but a similar version could be created by the teacher.

Anticipated Difficulties: Some students might struggle with the concept of skip counting. It may be hard at first for them to remember the multiples of 10. Hopefully by making a poster and providing them with their own number line for reference, this will eliminate some potential difficulties the students may have. (Carmen Blackley)

Read the Fine Print / Educational Use

In this video segment from Cyberchase, the CyberSquad replaces a piece of track to get the Madre Bonita Express to the Mother's Day harvest. (U.S. Department of Education)

Unrestricted Use / Public Domain

Students will examine and interpret a population chart published in 1898, depicting changes in the makeup of the United States across time in three categories, "foreign stock," "native stock," and "colored," as well as an 1893 political cartoon about immigration. Students will also explain the causes and effects of population change in the late 19th century.

This is a task from the Illustrative Mathematics website that is one part of a complete illustration of the standard to which it is aligned. Each task has at least one solution and some commentary that addresses important aspects of the task and its potential use. Here are the first few lines of the commentary for this task: First pose the question: Here are four triangles. What do all of these triangles have in common? What makes them different from the figures that are no... (Illustrative Mathematics)
What happens to current and voltage in series and parallel circuits?

Many circuits can be analyzed as a combination of series and parallel circuits, along with other configurations. In a series circuit, the current that flows through each of the components is the same, and the voltage across the circuit is the sum of the individual voltage drops across each component.

How does voltage compare in a series and a parallel circuit?

In a series circuit, the current through each of the components is the same, and the voltage across the circuit is the sum of the voltages across each component. In a parallel circuit, the voltage across each of the components is the same, and the total current is the sum of the currents through each component.

What is the difference between a parallel and a series circuit (BBC Bitesize)?

There are two types of circuit we can make, called series and parallel. The components in a circuit are joined by wires. If there are no branches then it's a series circuit. If there are branches it's a parallel circuit.

Is voltage the same in series or parallel?

Voltage is the same across each component of a parallel circuit. The sum of the currents through each path is equal to the total current that flows from the source.

Why is voltage divided in a series circuit?

The sum of the voltages across components in series is equal to the voltage of the supply. The voltages across each of the components in series are in the same proportion as their resistances. This means that if two identical components are connected in series, the supply voltage divides equally across them.

What happens to voltage in series?

Voltage applied to a series circuit is equal to the sum of the individual voltage drops. The voltage drop across a resistor in a series circuit is directly proportional to the size of the resistor. If the circuit is broken at any point, no current will flow.

Why is voltage different in a series circuit?
The total voltage in a series circuit is equal to the sum of all the individual voltage drops in the circuit. As current passes through each resistor in a series circuit, it establishes a difference in potential across each individual resistance.

How do current and voltage behave in a parallel circuit?

Voltage: Voltage is equal across all components in a parallel circuit. Current: The total circuit current is equal to the sum of the individual branch currents. Resistance: Individual resistances diminish to equal a smaller total resistance rather than add to make the total.

What is the voltage in a series circuit?

"Voltage applied to a series circuit is equal to the sum of the individual voltage drops." This simply means that the voltage drops have to add up to the voltage coming from the battery or batteries. 6V + 6V = 12V.

Is voltage the same in series?

The supply voltage is shared between components in a series circuit. The sum of the voltages across components in series is equal to the voltage of the supply. The voltages across each of the components in series are in the same proportion as their resistances.

Is current the same in series or parallel?

The current in a series circuit is the same throughout the circuit. On the other hand, a parallel circuit is a circuit with more than one path through which current flows. In a parallel circuit, the components sit on separate branches, so the current is not the same throughout the circuit.

Why is voltage different in series but the same in parallel?

In series, resistors are connected between different pairs of nodes, and the voltage drop across each resistor means there can be a potential difference between nodes. In parallel, the resistors are all connected between the same two nodes, so each resistor sees the same voltage.
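The rules above can be checked with a small numeric example (the supply voltage and resistor values are chosen for illustration):

```python
# Two resistors on a 12 V supply, first in series, then in parallel.
V = 12.0
R1, R2 = 4.0, 2.0

# Series: one path, so the current is common and the voltage drops add up.
I_series = V / (R1 + R2)              # 12 / 6 = 2 A through both resistors
V1, V2 = I_series * R1, I_series * R2
assert V1 + V2 == V                   # 8 V + 4 V = 12 V (drops sum to supply)

# Parallel: each branch sees the full supply voltage; branch currents add.
I1, I2 = V / R1, V / R2               # 3 A and 6 A
I_total = I1 + I2                     # 9 A drawn from the supply
R_equiv = V / I_total                 # 12 / 9 ≈ 1.33 Ω, less than either branch
```

Note also that the series voltage drops (8 V and 4 V) are in the same 2:1 proportion as the resistances, exactly as the voltage-divider rule states.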
[Haskell-cafe] Sequence of lifting transformation operators Alexey Vagarenko vagarenko at gmail.com Tue Jan 31 05:56:11 UTC 2017 What is the order of unwrapping? `return $ !a + !b` doesn't equals `return $ !b + !a` right? 2017-01-31 9:37 GMT+05:00 Taeer Bar-Yam <taeer at necsi.edu>: > This is (IMO) very similar in use-case to Idris' bang-notation. I'll give > a brief > summary of what that is and then explain the pros/cons that I see between > them. > In Idris the do notation has the added notation that > do return $ !a + !b > would desugar to > do > a' <- a > b' <- b > return $ a' + b' > So !a unwraps a higher up and then uses the unwrapped version. > Thus if you want to apply a function to apply a function to some wrapped > and > some unwrapped values: > do return $ f !a b !c !d > Pros/Cons: > - Idris notation is (IMO) more visually appealing. > - In particular, it puts the information about which arguments are lifted > next > to the arguments themselves, which matches our intuition about what's > going on > - While it matches our intuition, it does *not* match what's actually > going on, > so that's a con. > - Idris notation can lift things more than once: > do return $ f !!a !b !!!!c > - Idris notation is syntactic sugar, not a first-class operator > - So that means no currying, no passing it in as an argument, etc. (though > with lambdas this is not as bad as it otherwise would be) > - Idris notation is for monads, so it would not work for things that are > applicative but not monads (though I'm not entirely sure what falls into > this > category) > What do you y'all think? Do they operate in different enough spaces that > they > should both exist (like applicatives and moands), or is one clearly better? 
> --Taeer
What do Caly and pineapples have in common? If you answered they are both from Hawaii, sorry but pineapples are from South America! Ok, another clue: What do Caly, pineapples and spiral galaxies have in common? …The answer is that they all exhibit a geometric pattern called a golden spiral (or Fibonacci spiral). For plants, what is special about this arrangement is that it leads to efficient packing of seeds, as seen in pine cones and in the florets of sunflowers. The other reason this pattern is very important for plants is that it arises from seeds, florets or leaves growing at intervals determined by the golden angle, which involves an irrational number (i.e., a number that can't be written as a ratio of two integers, like pi); as a result, leaves growing around a stem in this pattern will rarely grow directly above older leaves below. Consequently, this golden spiral pattern leads to a very efficient way to minimize light competition among leaves on the same plant! Looking at the positions of the leaves and the leaf scars along Caly’s stem, you can see the spiral pattern. This is quite a big deal for most of Caly’s family, as nearly all genera and species of Hawaiian Lobelioids have a whorl leaf arrangement. Even Brighamia insignis, Caly’s famous cousin, has this clear spiral leaf arrangement! Now, if like us you think these Fibonacci spirals in plants are awesome, allow us to indulge in the most complex (and delicious) one we know of: the fractal-three-dimensional-Fibonacci spiral, aka, the romanesco broccoli! In fact, based on the golden angle alone we can replicate this beautiful plant pattern with a few lines of math and code… Who knew plants had to know so much math to grow?? ps- For plant/math/coding nerds, the r code to generate the 3d graph is attached here.
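The golden-angle packing described above is easy to reproduce. Below is a minimal Python sketch of Vogel's phyllotaxis model (the post's attached R code is not shown here, so this is an independent illustration, not a translation of it): seed k sits at angle k times the golden angle, at a radius proportional to sqrt(k).

```python
import math

# Vogel's model of phyllotaxis: the k-th seed sits at angle k * golden_angle
# and radius proportional to sqrt(k), giving the sunflower-head spiral.
GOLDEN_ANGLE = math.pi * (3 - math.sqrt(5))  # ~137.5 degrees, in radians

def seed_positions(n):
    """Return (x, y) positions for n seeds packed by the golden angle."""
    points = []
    for k in range(n):
        theta = k * GOLDEN_ANGLE
        r = math.sqrt(k)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

pts = seed_positions(500)
print(math.degrees(GOLDEN_ANGLE))  # ~137.5078 degrees
```

Plotting the (x, y) pairs with any scatter tool reproduces the familiar spiral packing.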
Relevant cash flows for DCF Taxation (example 4) – ACCA Financial Management (FM)

Reader Interactions

1. Hi, does the spreadsheet in the CBE exam have the XNPV formula? And if so, what if we calculate the NPV in the spreadsheet and get a slightly different (more exact) result? Or are we expected to use the tables for the calculation of the NPV and not use formulas? Thank you for the informative lecture btw :)

2. I think the NPV in the answer in the notes is wrong. If I put all the values in Excel and calculate it with the formula it gives me 7,101.83 = 7,102. Can you please confirm this, because the notes are showing a different value? Thanks for the lectures.
   □ You are correct – thank you. I will have it corrected 🙂

3. Hi Sir, please I need a little clarification. In calculating the tax savings, why isn't the corporation tax percentage calculated on the 7,500 but on the capital allowance in order to get the
   □ Because it is only the capital allowances (the tax allowable depreciation) that reduce the taxable profit each year and therefore save tax. This is a tax rule from Paper TX.

4. Hi, I have a doubt. In calculating the third year's balancing charge, since we are selling the machine at the end of the year, don't we need to calculate the NBV of the machine at the end of the year? If so, the balancing charge would go up. Am I right?
   □ The rule I work through in the lecture is the correct tax rule, in that there is no writing down allowance in the last year, just the balancing charge or allowance. If you do put a writing down allowance in the final year, the balancing charge or allowance will change, but the net effect will end up being exactly the same, so it doesn't really matter in the exam.

5. Hello John, thanks for these beneficial lectures. While solving with the BPP practice and revision kit to prepare evaluations / calculate NPV, I noticed that when we are dealing with depreciation, they are not including the depreciation expense in the calculations (and this is important to calculate the correct income tax). If they say it is included in variable/fixed costs, then it (the depreciation expense) should be added back to the net cash flow as this is a non-cash item. I am really confused; am I missing anything here? Ref.: Example No. 163 Uftin Co (December 2014, amended; page 60). Thanks in advance.
   □ Please ask this question in the Ask the Tutor Forum and not as a comment on a lecture.

6. Hi John, I am trying to watch the lecture but it is saying it is not available. Can you please advise how I can watch it? Thanks
   □ The lecture is working fine. If you are still having problems then please ask in the 'technical problems' forum and admin will try and help you: https://opentuition.com/forum/

7. Thank you very much for the lectures John – very clear explanations
   □ Thank you for your comment 🙂

8. Wasn't the net cash flow in year 3 supposed to be 13,463? 8,000 – 2,100 + 563 + 6,000 + 1,000
   ☆ Yes – my mistake (but the printed answer in the notes is correct 🙂)

9. Hi, why do we have to take 30% of the capital allowance? Is the whole of the capital allowance not allowed?
   □ The whole amount is allowed, but with tax at 30% the tax saving is 30%.

10. Thank you Sir for the great lecture. I was reading it in the Study Text but I got the impression it was difficult to remember. Now, after watching the lecture, it all makes sense. Thanks to your lectures I have already scored 82 in PM; now I hope to pass FM 🙂
   □ Thank you for your comment, and congratulations on passing Paper PM with such a good mark 🙂

11. Hi Sir, when do we use the post-tax cost of borrowing instead of pre-tax?
   □ We always use the post-tax cost of borrowing when calculating the WACC for the purpose of appraising projects. (The pre-tax cost is really the rate of return demanded by investors, and that is relevant when calculating the market value of debt borrowing.)

12. Hi Sir, if it so happens that there is no scrap value, will there be a balancing charge/allowance? Or do we just count the final year as tax allowable depreciation?
   □ The rule does not change, which means that there will be a balancing allowance in the final year of the amount of the tax written down value.

13. Hi Sir, wouldn't it make more sense to take cash flow minus depreciation and get the taxable profit, then charge tax on that?
   □ By all means do that if you want, but it then means either showing the tax calculation as separate workings or remembering to add back the depreciation after calculating the tax, because the depreciation is not a cash flow.
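The capital-allowance mechanics discussed in this thread (25% reducing-balance writing down allowances, no WDA in the year of sale, and a final balancing allowance or charge) can be sketched in a few lines. The numbers below (cost 10,000, sold for 3,000 at the end of year 3, tax at 30%) are invented purely for illustration; they are not example 4 from the notes.

```python
# Illustrative sketch of the tax rule discussed above: 25% reducing-balance
# writing down allowances, no WDA in the year of sale, and a balancing
# allowance/charge so total allowances equal cost minus sale proceeds.
# Cost 10,000, sold for 3,000 at end of year 3, tax 30% -- made-up numbers.

def capital_allowances(cost, proceeds, years, wda_rate=0.25, tax_rate=0.30):
    twv = cost                       # tax written down value
    allowances = []
    for year in range(1, years + 1):
        if year < years:
            wda = twv * wda_rate     # writing down allowance
            twv -= wda
            allowances.append(wda)
        else:
            # final year: balancing allowance (+) or charge (-), no WDA
            allowances.append(twv - proceeds)
    tax_savings = [round(a * tax_rate, 2) for a in allowances]
    return allowances, tax_savings

allowances, savings = capital_allowances(10_000, 3_000, 3)
print(allowances)       # [2500.0, 1875.0, 2625.0]
print(sum(allowances))  # 7000.0 = cost - proceeds
```

Total allowances equal cost minus sale proceeds (7,000), so total tax saved is 30% of that (2,100) however the final year is presented, which is why the tutor says the net effect is the same either way.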
ISL In-Situ Copper Leaching – 911Metallurgist

Research continues into ways to improve the recovery of minerals from in situ leaching operations. Such operations are amenable to recovering low grade ore, reducing many of the typical mining operation costs, and presenting environmental benefits. Hydraulic properties and characteristics of copper ore are investigated to assess the impact of permeability on flow capacity during in-place copper leaching operations. The Lakeshore deposit at Casa Grande Mine provided an excellent test example for this investigation. The Lakeshore copper oxide ore is a heterogeneous, unsaturated, and fractured medium in which the permeability is dominated by a network of macrofractures, defining blocks of rock of various sizes. Upon saturating the macrofractures, these act as a network of distributed sources and sinks for flow of solution into and out of the rock matrix. Since copper is disseminated throughout the rock matrix, understanding and exploiting permeability conditions are essential for successful leaching operations in both the matrix and fracture system. This study provides insight into fluid flow behavior within the rock blocks, following saturation of macrofractures. In general, it is difficult to quantify unsaturated flow in a heterogeneous rock mass. Moreover, few direct measurements of permeability and flow characteristics have been made. One problem hindering direct measurement is the excessive time involved (months or years). Another stems from limitations in test equipment. The approach used to assess transient migration of fluids at the Casa Grande mine site first estimated hydraulic properties indirectly, using models that account for the dependence of saturation on water pressure head (also known as matric potential, negative capillary pressure, and negative suction head).
Empirical relationships associated with retention properties of the porous medium are then combined with theoretical models of unsaturated flow to approximate permeability and hydraulic conductivity relationships. In turn, this information and appropriate boundary conditions are combined for input into a two-dimensional, unsaturated, finite-element flow code called TWOD. Results are used for the development and optimization of a site-specific hydrologic design for in situ leaching in heterogeneous, fractured, and initially unsaturated copper oxide ore deposits.

Copper Oxide Ore Moisture Retention Properties

Understanding the transient nature of fluid migration associated with the leaching of an unsaturated copper ore deposit requires characterization of its moisture retentive properties (or moisture capacity). Insight into the copper ore moisture capacity can be gained through knowledge of the size and distribution of its interstices. In an unsaturated medium, the hydraulic pore radii and/or fracture apertures are changed by variations in the pressure head, which, in turn, causes changes in the saturation, S, or moisture content. The graphical expression is called the moisture retention, capillary pressure, or characteristic curve. The slope at any given point on the moisture retention curve represents the specific moisture capacity of the medium. Before the late 1970’s, methods for determining moisture retention curves in porous media were classified into two main categories. One method involved the removal of fluid (drainage), or introduction (imbibition) of fluid in a core using a high-displacement pressure, porous diaphragm. The second method removed fluid from a sample by subjecting it to centrifugal forces. Both the diaphragm and centrifugal approaches, however, have limitations.
Conventional testing is limited to a maximum of about 200 psi (about 140 cm of equivalent water pressure head). Since the air entry value for rock typically exceeds hundreds of centimeters of pressure head, this application is not expected to yield a complete retention curve. Compounding this problem is the time, several days to several weeks, usually required to achieve steady-state equilibrium for a given pressure increment. Hence, the test duration is likely to be at least a month. Although the centrifuge method offers a distinct advantage over the diaphragm method by arriving at saturation equilibrium in a comparatively short time, the method has the disadvantages that data reduction to arrive at the water retention curve is tedious, the cost is comparatively high, and only the drainage portion of the moisture retention curve can be obtained. Recent advances in property testing, however, have led to the development of miniature thermocouple psychrometer and mercury porosimetry techniques for characterizing the moisture retention properties of porous media. Unfortunately, thermocouple psychrometer testing has a major disadvantage in that information is deficient for those pores associated with water pressure head values between 0 and -20 m. Conversely, the mercury porosimetry approach has been demonstrated to yield reliable results to pressure head values equal to or greater than -3,500 m. Considering the foregoing, the mercury porosimetry method was selected for application in this study over alternative approaches for the following reasons. First, the experimental data can be obtained in a matter of tens of minutes, since steady-state equilibrium is achieved in seconds. Second, pressures between 1.2 and 60,000 psi (equivalent to pressure head values of -0.844 to -42,000 m) could be employed, making it useful for the assessment of copper ore samples. Third, the expense incurred is one-third less than that for a conventional centrifuge test.
The mercury porosimetry derived saturation curves, moreover, generally agree well with other methods, such as the thermocouple psychrometer tests, for equivalent water pressures to about 5,000 m in rock.

Mercury Porosimetry

The following describes the methodology used to obtain data concerning the functional relationship between saturation and pressure head for the Casa Grande copper ore, and comments on assumptions and limitations incorporated in the analysis of the mercury porosimetry data. Mercury porosimetry provides an indirect method for obtaining the moisture retention curve. The technique is based on the principle that mercury behaves as a non-wetting fluid in a mercury-air filled void. Consequently, it does not penetrate the openings (i.e. pores and/or cracks) unless pressure is applied. The pressure applied to the mercury, P_Hg, compensates for the pressure difference over the mercury meniscus in the porous body, and it is given by

P_Hg = -δP_c = σ_Hg (1/r_1 + 1/r_2)    (1)

where δP_c is the capillary pressure and σ_Hg is the surface tension of the mercury surface. Since the principal radii of curvature of the meniscus (r_1, r_2) are not known a priori, the equation is written

P = σ C    (2)

where C is the curvature of the meniscus, and P is pressure. The curvature is dependent on the contact angle, and on the geometry of the pore space. For cylindrical capillaries, the expression is

C = 2 cos θ / r_c    (3)

where θ is the contact angle, and r_c the radius of the capillary tube. For a meniscus existing between two flat plates, as in a microfracture,

C = cos θ / r_f    (4)

where r_f is the half width aperture, assuming a planar microfracture. Combining equations 2 and 3 yields

P = (2 σ cos θ) / r_c    (5)

for capillary tubes. Similarly, combining equations 2 and 4 yields

P = (σ cos θ) / r_f    (6)

for microfractures.
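Equations 5 and 6 can be sketched in a few lines, showing the factor-of-two difference between a cylindrical capillary and a planar fracture of the same characteristic dimension. CGS units (dyne/cm, cm) are an assumption of this sketch.

```python
import math

# Sketch of equations 5 and 6: pressure needed to force mercury into a
# cylindrical capillary of radius r versus a planar fracture of half-width r.
SIGMA_HG = 480.0                 # mercury surface tension, dyne/cm (quoted above)
THETA_HG = math.radians(135.0)   # mercury contact angle (quoted above)

def capillary_pressure(r_cm):
    """Equation 5: P = 2*sigma*cos(theta)/r for a cylindrical capillary."""
    return 2.0 * SIGMA_HG * math.cos(THETA_HG) / r_cm

def fracture_pressure(r_cm):
    """Equation 6: P = sigma*cos(theta)/r for a planar fracture."""
    return SIGMA_HG * math.cos(THETA_HG) / r_cm

r = 1e-4  # 1 micrometre, expressed in cm
ratio = capillary_pressure(r) / fracture_pressure(r)
print(ratio)  # 2.0 -- a fracture needs half the pressure of a capillary
```

Because cos(135°) is negative, the raw values are negative; it is the magnitude that the applied mercury pressure must overcome.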
While only half the pressure is required to inject fluid into a fracture of a given width, compared with a capillary of the same diameter, the volume injected at a given pressure is presumably much greater. Although there are few cylindrical pores in porous media, equation 5 is almost universally used to calculate pore-size distribution from moisture retention data. Even without microfractures, this expression implies that there is a linear dependence of pressure on cos θ. Mercury porosimetry assumes that the contact angle of mercury is larger than 90° (nonwetting phase). A graphical portrayal of hydrostatic equilibrium between two liquid phases, i.e. mercury and air, or water and air, is shown in figure 1. A problem in these contact determinations is that the solid surface is assumed to be smooth and constant. In reality, however, most solids have a rough pore and/or fracture surface. The roughness can be expected to increase the effective contact angle for nonwetting fluids. Furthermore, if the pore space does not have a circular cross section, it is expected that the contact angle will depend on the curvature in micropores. It is also likely that the contact angle will display hysteresis, depending on whether the meniscus is retreating or advancing. Sources may also act to change the tension between the wetting and nonwetting fluids present. The foregoing represent a few of the conditions where the ideal assumptions in the mercury porosimetry method differ from the actual physical conditions in the rock. Other considerations in the interpretation of results when using mercury porosimetry are that the sample is small (usually 2.54 cm diameter by 2.54 cm length), and that it is possible to mechanically damage the sample if there are a significant number of closed pores.

Sample and Petrographic Description

Representative samples of copper ore were obtained from drillholes at the 336 m level (mean sea level) of the Cyprus Casa Grande mine.
These samples were used to assess the spatial variability in hydraulic properties between these boreholes. Six 2.54-cm-diameter by 2.54-cm-length samples were prepared from each drillcore for saturated hydraulic conductivity and mercury porosimetry testing. Larger 2.54- by 5.08-cm samples were also prepared for determination of porosity, Mohr-Coulomb strength parameters (cohesion, normal, and shear strength), Young’s modulus, and Poisson’s ratio. These properties are summarized and shown in table 1. Thin sections were prepared for determining the predominant mineralogy, texture, and porosity of the copper ore. Examination of these sections was aided by using a standard axioplan microscope at 3X magnification with a full-wave gypsum filter. The typical mineral assemblage consisted of a fine-grained, rounded quartz mass, with phenocrysts of altered plagioclase feldspar (now kaolinite), biotite, hematite, and quartz. By using an X-ray electron microprobe, elemental distribution maps were prepared. These maps demonstrate that aluminum was present among the primary elements associated with plagioclase feldspars. This suggested that a portion of the feldspars was altered to kaolinitic clay. Copper was found to be associated with both the altered plagioclase feldspar and biotite minerals. The mineralization suggests that preexisting microfractures once served as preferential pathways for the primary deposition of copper and quartz. These ancient microfractures now appear as solid red stringers, or meanders, throughout the sample. The quartz, however, showed no association with either the mica or clay minerals. Discontinuities are evident in all the copper ore samples, as a result of two distinct porosity systems. The bimodal distribution of pores comprises both microfractures and micropores. The distribution of microfractures appears as dark black meanders (fig. 2).
These tensional microfractures are believed to represent a distinct phase, since they transect fractures previously filled with copper and quartz. These microfractures provide a network for the lixiviant to access copper hosted minerals, increasing the relative surface area for contact. The microfractures appear to have a similar aperture range (5-15 µm) in both the feldspar and biotite minerals, while smaller apertures exist in those cracks transecting the quartz mass. Examination of back scattered X-ray images at 3,000X magnification also revealed the existence of micropores at grain boundaries (fig. 3).

Mercury Porosimetry Testing

A total of six mercury intrusion tests were performed on 2.54- by 2.54-cm copper oxide ore samples, three from each drillhole. Prior to testing, all the samples were oven dried at 100° C for a minimum of 48 h. The samples were placed in the porosimeter, evacuated, and filled with mercury. The mercury level is indicated by a contact sensor with a digital readout.

Estimation of Pressure Head and Saturation

The mercury intrusion data were adjusted to account for the differences between the properties of mercury and water for use in the unsaturated fluid flow model. Assuming that capillary bundle theory holds, the equivalent water pressure head, Ψ_w, is calculated as follows:

Ψ_w = (P_Hg σ_w cos θ_w) / (γ_w σ_Hg cos θ_Hg)    (7)

where P_Hg = pressure of mercury, psi; σ = surface tension between fluid and copper ore, dyne/cm; θ = contact angle between fluid and copper ore; γ = unit weight of water; w = subscript denoting the water phase; and Hg = subscript denoting the mercury phase. The values of surface tension and contact angle were estimated to be 72 dynes/cm and 15°, and 480 dyne/cm and 135°, for water and mercury, respectively.
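Equation 7 can be sketched as follows, using the surface tensions and contact angles quoted above. The unit handling (psi to Pa, N/m³ for the unit weight of water) is an assumption of this sketch, not taken from the article.

```python
import math

# Sketch of equation 7: converting a mercury intrusion pressure to an
# equivalent water pressure head. Constants quoted above: 72 dyne/cm and
# 15 deg for water; 480 dyne/cm and 135 deg for mercury.
SIGMA_W, THETA_W = 72.0, math.radians(15.0)
SIGMA_HG, THETA_HG = 480.0, math.radians(135.0)
GAMMA_W = 9810.0      # unit weight of water, N/m^3 (assumed here)
PSI_TO_PA = 6894.76   # pressure conversion (assumed here)

def water_head_m(p_hg_psi):
    """Equivalent water pressure head (m) for a mercury pressure in psi."""
    scale = (SIGMA_W * math.cos(THETA_W)) / (SIGMA_HG * math.cos(THETA_HG))
    return p_hg_psi * PSI_TO_PA * scale / GAMMA_W

# cos(135 deg) < 0, so positive mercury pressures map to negative
# (unsaturated) water pressure heads, as the sign convention above requires.
print(water_head_m(1.2))
```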
This relative mercury saturation, S_Hg, can be determined by dividing each of the cumulative mercury intrusion, or extrusion, volumes by the total intruded volume:

S_Hg(i) = [Σ(j=1 to i) V_Hg(j)] / [Σ(j=1 to N) V_Hg(j)]    (8)

where V_Hg = volume of mercury, N = total number of volume increments, and i = ith increment. The relative water saturation can now be determined by subtracting subsequent mercury saturation values from unity. This relationship is given by

S_w(i) = 1 − S_Hg(i)    (9)

A water saturation of 100 pct occurs at a pressure head of zero gage magnitude. This represents a convenient reference datum where increasing positive pressures are indicative of the saturated solution domain. The point of saturation, between zero gage pressure and that where saturation begins to diminish, is interpreted as the zone of capillarity. In this fringe zone, the material is completely saturated, but bound under tensional forces. The capillary fringe extends a distance away from the saturated plume equivalent to a pressure head (air entry value), where saturation begins to diminish (below 100 pct). At increasingly negative values of water pressure head, the saturation diminishes in a nonlinear fashion; hence, this is denoted as the unsaturated zone. The water saturation as a function of pressure head curve is most often referred to as the moisture retention (or material characteristic) curve. When referring to unsaturated media, the water pressure head sometimes is called the matric potential, or suction head. The former two are plotted as negative values, while the latter is by convention a sequence of positive values. In this report, the term “water pressure head” is used. Mercury saturation occurs at a maximum positive pressure, whereas water saturation occurs at a maximum negative pressure. This phenomenon reflects the fundamental difference between a wetting and a nonwetting fluid.
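The saturation bookkeeping described above — each cumulative intruded volume divided by the total, then Sw = 1 − SHg — can be sketched as follows. The volume increments are made-up illustrative data.

```python
# Sketch of the cumulative-saturation calculation: relative mercury
# saturation from cumulative intruded volumes, and the complementary
# water saturation. Volume increments below are illustrative only.
def saturations(volume_increments):
    total = sum(volume_increments)
    s_hg, cumulative = [], 0.0
    for v in volume_increments:
        cumulative += v
        s_hg.append(cumulative / total)   # cumulative / total intruded volume
    s_w = [1.0 - s for s in s_hg]         # water saturation = 1 - S_Hg
    return s_hg, s_w

s_hg, s_w = saturations([0.05, 0.10, 0.25, 0.40, 0.20])
print(s_hg[-1], s_w[-1])  # last point: fully intruded, zero water saturation
```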
In the two-phase mercury-air system, the mercury is the nonwetting phase (air is wetting), while in the two-phase, air-water system, air is the nonwetting phase (water is wetting). Figure 4 depicts the equivalent water retention drainage curves derived from mercury intrusion data for the copper oxide ore obtained from coreholes A and B. The two curves display similar moisture retention properties, typified by a similar shape over the same range of pressure head. Conformity of these retention curves implies structural homogeneity over the distance spanned by these coreholes. Hence, any one sample would suffice for the analysis. The shape of a drainage curve often reflects the relative homogeneity of the pore-size distribution. If the pressure head remains constant over a large water saturation interval, the related pore dimension is considered homogeneous. Conversely, a variation in pressure head as a function of saturation implies that pore dimensions are relatively heterogeneous. Figure 5 depicts the equivalent moisture retention drainage curve for copper oxide sample 2BI. Two prominent pore distributions are obvious; the first reflects microfractures at low, negative pressure head values associated with relative saturation of 0.8095 to 1, while the other reflects micropores at pressure head values associated with saturations from 0 to 0.8095. These results are consistent with interpretations based on the axioplan and scanning electron microscope analyses observed earlier. To better assess the range of pressure head values over which the saturation varies, the saturation is plotted against the log of pressure head (fig. 6). Micropores in the saturation range between 0.8095 to 0 are associated with pressure head values ranging from -150 to -10,000 m; while saturation of microfractures between 0.8095 and 1 correspond to pressure head values of -150 to -0.1 m, respectively. 
Test results beyond a water pressure head of about -3,500 m should be considered inaccurate because of the inability of the mercury intrusion method to penetrate micropores less than about 0.002 nm. This limitation is further reinforced because residual saturation does not, as expected, reach an asymptotic value when slightly above zero (field capacity). Hydraulic radii, reflecting the pore-size distributions seen while draining (or drying), are calculated and displayed in figure 7 as a function of effective saturation for each of the systems observed, based on the expressions

r(Ψ) = (2 σ cos θ) / (Ψ γ)    (10)

S_e(Ψ) = [S(Ψ) − S_r(Ψ)] / [S_s(Ψ) − S_r(Ψ)]    (11)

where r is the hydraulic radius; σ, θ, γ, and Ψ are as previously defined; and S_e, S, S_r, and S_s represent the effective, relative, residual, and saturated saturations, respectively. In the mercury porosimetry testing, S_s represents 100 pct saturation (S = 1). When the residual saturation is equal to zero, the relative saturation is equivalent to the effective saturation. The microfractures appear to have the broadest pore-size distribution (greatest slope), with hydraulic half width apertures ranging from 0.15 to 33 µm. The most uniform hydraulic half width aperture distribution exists between 1 and 4 µm, which is consistent with that observed using the microscope. The hydraulic radii associated with the micropore distribution range from about 0.0015 to 0.05 µm; however, the distribution is uniform between 0.0025 and 0.0075 µm.

Estimating Hydraulic Properties

To be able to study the behavior of injection and recovery of leach solutions in an unsaturated setting, knowledge of permeability, or preferably the hydraulic conductivity (also known as the coefficient of permeability, or capillary conductivity), and its relationship to moisture content is needed. The most direct way to obtain this relationship is by experiment.
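The hydraulic radius and effective saturation relationships above (equations 10 and 11) can be sketched as follows; CGS units for the water-air system are an assumption of this sketch.

```python
import math

# Sketch of equations 10 and 11: hydraulic radius from pressure head, and
# effective saturation from relative/residual/saturated values.
SIGMA = 72.0               # water surface tension, dyne/cm (quoted earlier)
THETA = math.radians(15.0) # water contact angle (quoted earlier)
GAMMA = 980.0              # unit weight of water, dyne/cm^3 (~rho*g in CGS)

def hydraulic_radius_cm(psi_cm):
    """Equation 10 (magnitude form): r = 2*sigma*cos(theta) / (|psi|*gamma)."""
    return 2.0 * SIGMA * math.cos(THETA) / (abs(psi_cm) * GAMMA)

def effective_saturation(s, s_r, s_s=1.0):
    """Equation 11: Se = (S - Sr) / (Ss - Sr)."""
    return (s - s_r) / (s_s - s_r)

# The microfracture system above has Sr = 0.8095, so equation 11 maps
# S in [0.8095, 1] onto Se in [0, 1].
print(effective_saturation(0.8095, 0.8095))  # 0.0
print(effective_saturation(1.0, 0.8095))     # 1.0
```

Larger suctions correspond to smaller hydraulic radii, which is the monotone trend figure 7 displays.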
Reliable estimates of unsaturated hydraulic conductivity are difficult to obtain, mainly because of the difficulty in obtaining representative samples and the time and expense involved. An estimate of the permeability function can be obtained, however, from the moisture retention properties already discussed, i.e., saturation as a function of pressure head. Although fluid flow phenomena in partially saturated media normally involve two-phase flow of a wetting and nonwetting fluid, usually air and water, the flow of air is seldom of major significance. Hence, the present investigation is restricted to the permeability of the water phase only. Several unsaturated permeability models have been developed that are based on probability functions describing the relationship between the measured moisture content and water pressure head. Using the Brooks and Corey, and Brutsaert, power functions, it was found that their two dependent parameters did not yield reasonable results for fitting the moisture retention curves associated with Casa Grande ore. On the other hand, the closed form expression developed by Van Genuchten seemed to provide reasonable results, requiring three dependent coefficients to be estimated. The methodology for establishing these constants is outlined in the following section.

Moisture Retention Curve Estimation

The Van Genuchten procedure requires plotting the effective saturation as a function of log water pressure head. By noting the saturation point, S½, located halfway between the maximum effective saturation and the residual saturation point, and its corresponding water pressure value, Ψ½, the slope, Sp, can be evaluated.
The slope is necessary to be able to determine the m coefficient, given by

m = 1 − exp(−0.8 S_p),  for 0 < S_p ≤ 1    (12)

Next, the n coefficient is calculated (Mualem theory) using

n = 1/(1 − m)    (13)

At this point, the m and n coefficients can be combined with an appropriate expression, equation 17, to give the relative permeability, Kr, and hydraulic conductivity, K, as a function of saturation. The modeled coefficients can then be compared with those derived from the laboratory curves. To obtain a theoretical retention curve for comparison, the α parameter must be calculated using

α = (1/Ψ) [S_e^(−1/m) − 1]^(1/n)    (14)

At this point, the three parameters, n, m, and α, can be substituted into the equation

S_e(Ψ) = [1 + (αΨ)^n]^(−m)    (15)

to yield a theoretical retention curve based on the chosen model.

Estimation of Fluid Permeability Function

The equation used to determine hydraulic conductivity, K(Se), as a function of effective saturation is

K(S_e) = K_s K_r(S_e)    (16)

where K_s represents the saturated permeability, and the relative permeability K_r(S_e) is described by

K_r(S_e) = S_e^(1/2) [1 − (1 − S_e^(1/m))^m]²    (17)

A complete derivation of equations 12 through 17 is given by Van Genuchten. A plot of the nonlinear relative microfracture permeability distribution is presented in figure 8. This permeability function was calculated using equation 17 and the Van Genuchten constants: α = 0.30 1/m, m = 0.525, and n = 2.105. It is noteworthy that permeability is nonlinear and observed to decrease one order of magnitude with a 20-pct reduction in saturation. Figure 9 depicts the saturation plotted as a log function of permeability. By inspection, the permeability diminishes with decreasing saturation, ultimately spanning six orders of magnitude. The maximum rate of permeability decrease for the Casa Grande ore occurs at saturations below 45 pct.
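These Van Genuchten relationships can be sketched and checked against the fitted microfracture constants quoted above (α = 0.30 1/m, m = 0.525, n = 2.105). The Mualem-based closed form of the relative permeability is assumed here to be the one the study used.

```python
# Sketch of the Van Genuchten relationships, checked against the quoted
# microfracture constants: alpha = 0.30 1/m, m = 0.525, n = 2.105.
def n_from_m(m):
    """Mualem theory: n = 1 / (1 - m)."""
    return 1.0 / (1.0 - m)

def retention(psi, alpha, n, m):
    """Effective saturation Se as a function of pressure head magnitude."""
    return (1.0 + (alpha * abs(psi)) ** n) ** (-m)

def relative_permeability(se, m):
    """Mualem-based form: Kr = Se^0.5 * (1 - (1 - Se^(1/m))^m)^2."""
    return se ** 0.5 * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

m, alpha = 0.525, 0.30
n = n_from_m(m)
print(round(n, 3))                    # 2.105 -- matches the quoted constant
print(relative_permeability(1.0, m))  # 1.0 at full saturation
```

The consistency check is reassuring: with m = 0.525, the Mualem relation reproduces the quoted n = 2.105 almost exactly.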
Using the range of saturated permeability values, 10⁻² to 10⁻¹ mD, corresponding conductivity values can be obtained from the product of the relative permeability and the saturated permeability. Assuming a best case, the range of permeability would be 10⁻¹ mD (Se = Ss = 1) to about 10⁻⁶ mD (Se = 0, Sr = 0.8095). While the absolute permeability for the micropores cannot be firmly established, it appears to represent the end of the microfracture continuum at about 10⁻⁶ mD. Therefore, fluid in micropores can be assumed to be immobile for the purposes of this study. Hence, the remaining portion of this study investigates only the fluid movement in the microfractures. One obvious conclusion from the foregoing is that a decrease in saturation results in decreasing permeability in the medium, thereby inhibiting fluid flow. From an environmental viewpoint, leach mining in an unsaturated setting presents a tractable scheme under which lixiviants may be better contained. From a production standpoint, however, permeability enhancement may require high-pressure injection, hydrodilation, hydrofracing, and/or in situ rubblizing. For either objective, site-specific modeling is required to assess injection-recovery pressure requirements. Unsaturated porous media typically display hysteretic behavior. The two branches of the hysteretic loop (drainage and wetting) therefore see a different average effective pore-size distribution, depending on whether the sample is being saturated or desaturated. Two main causes of the hysteretic behavior are a change in contact angle and the so-called “ink bottle effect.” The former results from asperities existing along the pore wall, while the latter occurs from changes in pore radii along a given flow tube. Since the drainage branch exhibits higher moisture content than the wetting branch at any given pressure head, the porous media become more retentive during the drying cycle.
Hence, the average effective pore-size distribution during a drainage experiment shifts toward smaller values of pore radius than that for the wetting branch. This may, in part, explain why the half-width apertures calculated from a drainage experiment are slightly less than those observed when using the microscope approach. For modeling fluid flow from injection into unsaturated copper ore, knowledge of the wetting boundary curve is required. Since a mercury extraction curve was not obtained during porosimetry testing, an equivalent wetting boundary curve was not originally determined. The similarity hypothesis, however, can be used to establish two models that provide the wetting branch saturation curve, Sw(Ψ), based on knowledge of the drainage branch saturation curve, Sd(Ψ). Of the two models, the Mualem model II more accurately reproduced results comparable with laboratory analysis. For this reason, the Mualem model II, given by the quadratic relationship

Sd(Ψ) = Sw(Ψ) [2 – Sw(Ψ)]……………………………………………………………(19)

was used to derive the wetting branch saturation curve. The dimensionless hysteresis curves for the hysteretic model II, and for the case when hysteresis is not present, are depicted in figure 10. When there is no hysteresis effect, a linear relationship exists and flow, either drainage or wetting, follows the same path. The Mualem model II, however, produces the expected departure from the straight-line flow path. Since hysteresis is normally present in rock, it is incorporated in the finite-element, unsaturated flow model. Figure 11 compares corresponding hysteretic loops for the microfractures. Using the Van Genuchten approach, both the drainage and wetting branch constants could be calculated. These are summarized in table 2.

Computer Modeling of Underground in Place Leaching

Mathematical Modeling

Modeling the movement of water in a partially saturated porous medium has been discussed by many authors.
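Because Mualem model II is quadratic in Sw, it can be inverted in closed form: Sd = Sw(2 − Sw) gives Sw(Ψ) = 1 − √(1 − Sd(Ψ)) on [0, 1]. A minimal sketch of deriving a wetting branch from drainage data (the drainage saturations below are illustrative, not site measurements):

```python
import math

def wetting_from_drainage(sd):
    """Invert Mualem model II, Sd = Sw * (2 - Sw), taking the root in [0, 1]."""
    return 1.0 - math.sqrt(1.0 - sd)

# Illustrative drainage-branch effective saturations (not Casa Grande data).
drainage = [1.0, 0.96, 0.84, 0.64, 0.36, 0.0]
wetting = [wetting_from_drainage(sd) for sd in drainage]

# The wetting branch is drier than the drainage branch at the same head,
# which is the hysteresis the text describes.
for sd, sw in zip(drainage, wetting):
    assert sw <= sd + 1e-12
    assert abs(sw * (2.0 - sw) - sd) < 1e-12  # round-trip through equation 19

print([round(sw, 3) for sw in wetting])  # [1.0, 0.8, 0.6, 0.4, 0.2, 0.0]
```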
A review of the state of the art can be found in a recent text edited by Evans. Basically, the movement of water must consider the movement of liquid water, water vapor, air, and the medium itself. The simplified approach taken here invokes the assumption of single-phase flow: the water is considered mobile, while the air is assumed immobile. An equation describing partially saturated and unsaturated flow of water in porous media, neglecting air-water interactions, can be derived from the principle of conservation of mass and the constitutive relationship given by Darcy's law. Variably saturated flow in a vertical plane, for anisotropic media, is given by

∂/∂x [Kx(Ψ) ∂Ψ/∂x] + ∂/∂z [Kz(Ψ) (∂Ψ/∂z + 1)] = C(Ψ) ∂Ψ/∂t……………………(20)

where C(Ψ) is the specific moisture capacity. Under isotropic conditions, this expression simplifies to the familiar Richards equation. The nonlinear nature of the equation limits the availability of analytic solutions. As a consequence, the computer code TWOD was developed using the Galerkin finite-element approach. While the code has no provision for handling deformable material, heterogeneity, anisotropy, and hysteresis can all be incorporated using this flow simulator.

Finite-Element Mesh and Flow Boundary Conditions

Fluid flow is generally a three-dimensional phenomenon; however, a two-dimensional representation of the Casa Grande tool crib operation can be approximated by considering radial flow in the region perpendicular to a well. That is, the surface that exists along the contact between the fanoglomerate (cemented gravel) and copper oxide ore is considered. This surface undulates in cross section (fig. 12), but can be characterized in a two-dimensional plan view by projecting each well onto this plane. This approach appears justified, since only the interval spanning the copper oxide ore is perforated. Figure 13 gives a cross-sectional representation of the total well field for the hanging wall. Wells related to a given fan pattern are depicted by connecting each group of wells with a dashed line.
The corresponding finite-element (FE) grid for the hanging wall cross section is given in figure 14. The FE solution domain is represented by a 100- by 100-m mesh with 512 simplex triangular elements and 300 corresponding node points. The x-direction represents the horizontal distance along the tool crib drift, while the y-direction denotes the vertical distance along the hanging wall. The lower, left-hand corner represents the 0, 0 m mesh coordinate, and the upper, right-hand corner represents the 100, 100 m coordinate. The distance between any horizontal or vertical pair of node points is 6.25 m. The complete solution domain was chosen to be large enough that there would be no interaction between the surface and sides of the solution domain; if interactions existed, the calculations of both pressure and flux would be in error. To properly describe flow during the in situ leaching process, the governing equation must be supplemented with a set of suitable boundary conditions. In general, the boundary conditions employed for the site-specific simulations included no flow at the upper and side boundaries, a unit hydraulic gradient along the lower boundary, and a point source or sink at the interior of the mesh (node point 145). The physical problem outlined involves a mixture of boundary conditions, of both Dirichlet and Neumann types. Mathematically, these boundary conditions are expressed as follows:

-K(Ψ) ∇(Ψ + z)·n = 0 on the upper boundary………………………………………(21)

-K(Ψ) ∇(Ψ + z)·n = 0 on the side boundaries……………………………………..(22)

∂Ψ/∂z = 0 (unit hydraulic gradient) along the lower boundary……………………(23)

Q(t) = Q₀ at the interior source-sink node (node point 145)………………………(24)

A schematic depicting these boundary conditions with respect to the finite-element mesh is shown in figure 15. To solve the transient flow problem, an initial value of capillary pressure head must be specified in addition to those outlined above. While this condition might be manifested in a steady-state or transient flow condition, these simulations used a uniform static water pressure.
The initial condition for specified pressure head is represented by

Ψ(x, z, 0) = Ψ₀(x, z)…………………………………………………………..(25)

Inverse Modeling

Since the Casa Grande tool cribsite is known to have macroscale fractures linking various wells, it is necessary to identify a tight well. A tight well, defined as one that had minimum observed flow rates, is used to represent fluid flow through the rock matrix. Since the Casa Grande site has had extensive perturbations in pressure conditions over a 1-year period, an additional requirement is to select an interval from the onset of leaching to the first perturbation in well conditions. Figure 16 depicts the flow rate and corresponding pressure history recorded for the first 100-day period in well 758. The mean flow rate and injection pressure are calculated to be 0.315 gpm and 528 psi, respectively. These values are used during subsequent inversion of the FE model to determine the appropriate saturated permeability and ambient degree of saturation. The first inversion involved calibrating the model by adjusting the saturated permeability until the computed initial flux matched that observed in the field. Figure 17 gives flow rate (flux) for a range of saturated permeability values. The maximum permeability of 0.1 mD resulted in a maximum flow rate (1.92 gpm), while the minimum value of 0.01 mD resulted in a minimum flow rate (0.19 gpm). A saturated permeability of 0.016 mD matched the flow rate (0.315 gpm) observed in well 758; consequently, it was used in subsequent forward model simulations. This agrees with the value of 0.0177 mD obtained by Schmidt, who used a geostatistical analysis to estimate permeability under assumed saturated conditions. The second simulation involved estimating the ambient degree of saturation by direct inversion. The ambient degree of saturation is defined as the saturation that existed prior to in situ leaching.
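Since Darcy flux scales linearly with permeability under fixed head conditions, the calibrated Ks can be sanity-checked by simple proportional scaling of the figure-17 endpoints (a first-order approximation only; the actual inversion iterates the nonlinear FE model):

```python
# Calibration endpoints read from figure 17 (flow rate in gpm at a given Ks).
ks_max = 0.1          # mD, the maximum permeability tried
q_at_ks_max = 1.92    # gpm, the flow rate computed at Ks = 0.1 mD
q_observed = 0.315    # gpm, mean flow rate in well 758 over the first 100 days

# If flux scales ~linearly with saturated permeability (Darcy's law),
# the calibrated Ks is simply Ks_max * q_obs / q_max.
ks_calibrated = ks_max * q_observed / q_at_ks_max
print(round(ks_calibrated, 4))  # ~0.0164 mD, close to the reported 0.016 mD
```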
Estimating the ambient degree of saturation was accomplished by changing the initial pressure condition until the transient pressure response matched the results observed in well 758, i.e., where the injection pressure was equal to 371 m at the 100th day. The initial pressure condition (zero time) was decreased incrementally, beginning with 0 m, until transient pressure conditions were matched (fig. 18). The -1 m initial pressure condition corresponds to an ambient saturation of 95 pct. Since laboratory moisture determinations were not available, it was not possible to verify this ambient degree of saturation. An alternative approach was used, however, to qualitatively check the calculated minimum saturation, based on draining the initially saturated domain. From the 1,000-year simulation based on the Van Genuchten microfracture drainage constants, a permeability of 0.016 mD corresponds to a minimum saturation of about 88 pct. The gravity effect is clearly evident from the moisture tongue in the center of figure 19, demonstrating that, at least qualitatively, the derived ambient degree of saturation appears believable. The effect that localized drifts, i.e., the tool crib and other drifts at the 900 (274 m) and 1100 (335 m) levels, have on fluid distribution in the rock mass situated between them is shown in figure 12. This analysis was necessary to assess whether the boundary conditions associated with these drifts would need to be integrated into the forward model. Figure 20 gives results for a 25-year simulation approximating the time frame since these drifts were created. The Van Genuchten constants again reflect the drainage curve, with a saturated permeability of 0.016 mD and an ambient degree of saturation of 95 pct. The net effect of imposing the drifts is essentially to increase the moisture gradient in a region of about two and a half times the radius of each drift. The regional ambient degree of saturation, however, is largely unaffected.
Forward Modeling

The primary purpose of transient variably-saturated flow modeling is to provide insight into the hydrologic design of in situ leaching operations in deposits where copper mineralization is disseminated in segmented blocks of ore. Forward (predictive) modeling is performed with reference to a single injection or recovery well drilled into a homogeneous block of ore, or equivalently, with reference to a single linear fracture that acts as a source or sink for a single block of ore. Well 758 is seen as an example of a well in the tool crib pattern that is drilled entirely within a single ore block. The in situ permeability estimate obtained from well 758 (0.016 mD) matches that obtained from core. The values of ambient degree of saturation (95 pct) and saturated permeability (0.016 mD) derived during the inversion process, together with the Van Genuchten constants previously described, were the parameter specifications used to simulate fluid flow in forward modeling. The effect of injection pressure on the transient buildup of a saturated plume in an ore block is shown in figure 21. For early analysis of plume development, the solution domain was reduced to 25 by 25 m, corresponding to an internodal distance of 1.56 m, while using the same mesh (fig. 14) and boundary conditions (fig. 15). Flow simulations showed that an injection pressure of 1,500 psi (currently the practical limit at the tool cribsite), sustained for a 100-day period, is required before a circular block of ore with a radius of 5 m becomes completely saturated. While lesser pressures could be employed, the waiting period for solution penetration would be correspondingly longer. Pressures at or below 500 psi would effectively contain solution within a 2-m radius around the injection source. Forward modeling demonstrates that it is possible to penetrate and saturate ore blocks with leach solution, allowing contact between leach solution and disseminated copper minerals.
The hydraulic head, pressure head, and fluid distributions for an injection source (such as well 758) after a 1-year period, while subjected to an injection pressure of 1,500 psi, are given in figure 22. The symmetry of the pressure plume in these figures reflects the assumption of homogeneous and isotropic conditions within an ore block. The steepness of the cone is attributed to the large moisture gradient existing about the source. Converting the injection source after a 1-year period to a recovery sink, with a zero gage pressure or a negative (suction) pressure, affects the head and fluid distribution, as shown in figures 23 and 24. In each case, the pressure plume around the sink is reduced. However, since the zero gage pressure would not be less than or equal to the air entry value (about -4 m) of the medium, only 1 pct of the injected solution can be recovered by the sink; the bulk of the solution remains captive in the rock by capillary tension. Because of this capillary effect, a dramatic increase in solution recovery, to about 95 pct, can be achieved by inducing a suction head of -10 m at the sink, as shown in figure 25.

Hydrologic Design

Forward modeling shows that solution penetration into ore blocks can be achieved, despite the reduced permeability associated with unsaturated conditions, if well injection pressures are of sufficient magnitude. From an operational standpoint, if the target ore zone were a single homogeneous ore block, a flux rate of 0.05 m/d would be unacceptably low. At the closest spacing, wells in the tool crib pattern are 5 m apart.
The observed breakthrough times for flow of solution between injection and recovery wells at the tool cribsite range from 1 to 100 h after the start of injection, indicating that the network of macrofractures that transect the target ore zone is, initially at least, the dominant flow path through the ore. A hydrologic design suggested by the results of forward modeling is a variant of the push-pull test method. Injection in a core region of the tool crib wells at pressures of 1,500 psi is maintained for a 30- to 60-day period. The injection interval serves to saturate the ore blocks and build pressure in the saturated macrofracture network. Following the 30- to 60-day interval, wells are selectively converted from injection to recovery mode. Suction pressure is applied, if practical; otherwise, the lowest possible head condition should be imposed. To ensure that there is ample residence time to impregnate the injected leach solution with copper, and that sorption of copper and/or precipitation of gangue minerals does not occur, flow rates during recovery can be regulated by shutting in the well to some degree. The advantage of applying backpressure in the well is to prevent the established pressure plume from dissipating too rapidly. The concentration of copper and gangue minerals in solution would need to be continuously monitored to adjust the recovery flow rate. A hydrologic design involving rubblizing of a mineralized zone could be expected to improve fracture permeability in the ore zone by orders of magnitude. Rubblizing conducted in a manner similar to that used for in situ oil shale retorting is just one possibility. A factor of central importance for determining the desired degree of rubblizing (or the average block size) is the penetration rate of leach solution into unsaturated ore blocks.
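A back-of-the-envelope comparison of matrix flux against the observed breakthrough supports the conclusion that the macrofracture network carries the early flow:

```python
# Travel time across the 5-m well spacing at the modeled matrix flux rate.
well_spacing_m = 5.0
matrix_flux_m_per_day = 0.05

matrix_travel_h = 24.0 * well_spacing_m / matrix_flux_m_per_day
print(matrix_travel_h)  # ~2400 h through the rock matrix alone

# Observed breakthrough is 1 to 100 h: even the slowest observed arrival is
# roughly 24 times faster than matrix flow, so a macrofracture pathway must
# be carrying the early solution.
print(matrix_travel_h / 100.0)  # ~24
```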
Forward modeling suggests that maintenance of injection pressures of 500 psi or below would effectively confine solutions to a rubblized zone in which the average block diameter is 4 m or less. Flow in blocks larger than this, on the periphery of the rubblized zone, would be unlikely without large-scale fractures, under the permeability constraints imposed by a partially saturated setting. The undisturbed (and unsaturated) rock would therefore serve to confine the solution plume within the rubblized zone, providing a safeguard against environmental contamination.

Conclusions

The Casa Grande copper oxide deposit is characterized as a variably-saturated trimodal porosity system. A network of macrofractures segments the copper ore into blocks which have a bimodal distribution of microfractures and micropores. Once the macrofractures are saturated, they act as a source-sink to these ore blocks. Moisture retention characteristics indicate that a microfracture-micropore permeability continuum exists in the rock blocks. The microfracture permeability diminishes nonlinearly with decreasing saturation, spanning six orders of magnitude; hence, fluid flow in the saturated micropores is essentially immobile. Since the microfractures provide internal access to copper-hosted minerals, the unsaturated condition (ambient saturation of 95 pct) will be the dominant control on flow capacity. Forward modeling demonstrated that solution penetration into ore blocks can be achieved, despite the reduced permeability associated with unsaturated conditions, if well injection pressures are of sufficient magnitude and time permits. Post injection-recovery simulations demonstrated, however, that the bulk of the solution injected at 1,500 psi for 1 year (roughly 98 pct) is held captive in the rock by capillary tension when using the conventional air-lifting technique.
Conversely, inducing a suction head of -10 m (air entry value, -4 m) for a 2-week period immediately following the injection period increases the capacity to recover leach solution to roughly 95 pct. By specifying proper time and pressure constraints, forward modeling can be used to optimize a mine design by determining the maximum block diameter to be derived through rubblizing. Upon rubblizing a zone to an average block diameter of 4 m, and sustaining an injection pressure of 500 psi for a 100-day period, leach solution could be distributed effectively at the Casa Grande copper oxide tool cribsite, while inhibiting flow outward into the surrounding rock mass.
Does Standard Deviation Have Units? (5 Key Ideas To Know)

Standard deviation is a measure of spread that helps us to interpret data. Together with the mean, standard deviation can tell us a lot about the data, and the units can also help us to understand the data. So, does standard deviation have units?

Standard deviation has units that are the same as the units for the data values. Standard deviation can have square units if the data values have square units (for example, an area in square feet). Variance (standard deviation squared) has units that are the square of the units for the data values. Of course, standard deviation will always have the same units as the mean, since both are measured in the units for the data values.

In this article, we'll talk about the units for standard deviation and how they are determined. We'll also compare units for standard deviation, mean, variance, and the data values themselves. Let's get started.

Does Standard Deviation Have Units?

Standard deviation has units. No matter what quantity we are measuring in our data (height, weight, length, width, time, day length, etc.), the calculated standard deviation will have units. The units can be linear (such as for lengths), squared (such as for areas), or cubed (such as for volumes). They can also be rates (such as for speeds).

What Is Standard Deviation Measured In?

Standard deviation is measured in the same units as the data points that were used to calculate the standard deviation. The units are the same for both sample standard deviation and population standard deviation (you can learn about the difference between them here).

Example 1: Standard Deviation Units For Height

If you have a set of data points that measure height in feet, then the standard deviation would also be given in terms of feet.
If you are measuring height in inches, then the standard deviation will also have units of inches. If you converted all of the data values to inches (multiply feet by 12 to get inches), then the standard deviation would be given in terms of inches. If you convert the units for your data values, make sure to also report the standard deviation in the correct converted units!

Example 2: Standard Deviation Units For Area

If you have a set of data points that measure the area of property (lots) in acres, then the standard deviation would also be given in terms of acres. To measure the area of a lot, we could instead use square feet for our data values. In that case, the standard deviation would also be given in units of square feet. If you converted all of the data values to square feet (multiply acres by 43,560 to get square feet), then the standard deviation would be given in terms of square feet.

Example 3: Standard Deviation Units For Volume

If you have a set of data points that measure the volume of a pool in cubic feet, then the standard deviation would also be given in terms of cubic feet. The units of measurement for a pool could be in gallons. In that case, the standard deviation would also be given in gallons. If you converted all of the data values to gallons (multiply cubic feet by 7.48 to get gallons), then the standard deviation would be given in terms of gallons.

Example 4: Standard Deviation Units For Speed (Or Velocity)

If you have a set of data points that measure the speed of an object (like a car) in meters per second, then the standard deviation would also be given in terms of meters per second. If we measure the speed of cars in meters per second, then the standard deviation of speeds would also be given in units of meters per second. If you converted all of the data values to miles per hour (multiply meters per second by 2.237 to get miles per hour), then the standard deviation would be given in terms of miles per hour.
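All of these conversion examples follow one rule: multiplying every data value by a constant multiplies the standard deviation by the same constant. A quick sketch (the heights in feet are made-up illustrative values):

```python
import statistics

heights_ft = [5.0, 5.5, 6.0, 6.5]          # illustrative heights, in feet
heights_in = [h * 12 for h in heights_ft]  # the same heights, in inches

sd_ft = statistics.stdev(heights_ft)  # sample standard deviation, in feet
sd_in = statistics.stdev(heights_in)  # sample standard deviation, in inches

print(round(sd_ft, 3))  # 0.645 (feet)
print(round(sd_in, 3))  # 7.746 (inches), which is 12 times sd_ft
```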
Example 5: Standard Deviation Units For Acceleration

If you have a set of data points that measure the acceleration of an object (like a car) in meters per second squared (meters/second^2), then the standard deviation would also be given in terms of meters per second squared (meters/second^2). If you converted all of the data values to miles per hour squared (multiply meters per second squared by 8053 to get miles per hour squared), then the standard deviation would be given in terms of miles per hour squared (miles/hour^2).

Is Standard Deviation In The Same Units As The Mean?

Standard deviation is in the same units as the mean for a given data set. As a result, the units "match up" in a sense. Since the units for mean and standard deviation are the same, it makes sense to talk about quantities like the mean plus or minus one standard deviation.

In the process of finding standard deviation, the units for the data points are squared (when we take squared differences). However, we later take the square root of the sum of squared differences, which returns us to the original units. The formulas for standard deviation below suggest how the "squared" and "square root" (radical) cancel each other out.

The formula for population standard deviation is σ = √( Σ(x_i − μ)² / N ).

The formula for sample standard deviation is s = √( Σ(x_i − x̄)² / (n − 1) ).

You can also see how this works in the steps below. To find the sample standard deviation, take the following steps:

• 1. Calculate the mean of the sample (add up all the values and divide by the number of values).
• 2. Calculate the difference between the sample mean and each data point.
• 3. Square the differences from Step 2.
• 4. Sum the squared differences from Step 3.
• 5. Divide the sum from Step 4 by n – 1 (the sample size minus one).
• 6. Take the square root of the quotient from Step 5.

To find the population standard deviation, the process is very similar to the one we used for finding the sample standard deviation. Here are the steps:

• 1. Calculate the mean of the population (add up all the values and divide by the number of values).
• 2. Calculate the difference between the population mean and each data point.
• 3. Square the differences from Step 2.
• 4. Sum the squared differences from Step 3.
• 5. Divide the sum from Step 4 by N (the population size).
• 6. Take the square root of the quotient from Step 5.

Is Standard Deviation Measured In Squared Units?

The units for standard deviation are only squared when the units for the mean (and the original data values) are also squared. For example, if your data points are measurements of the area of lots in a city, then the units will be square feet. Since the data points have square units, the mean and standard deviation will also have square units. The area of a lot in a city would be measured in square feet, and so the standard deviation is also given in units of square feet.

Does Variance Have Units?

Variance has units that are the square of the units for standard deviation. This applies both to "pure" units (like feet or pounds) and to rates (miles per hour) or areas (square feet). The table below gives some examples of how the units for variance compare to the units for standard deviation.

Example 1: Variance Units For Height

If you have a set of data points that measure height in feet, then the variance would be given in terms of feet squared (square feet). If you converted all of the data values to inches (multiply feet by 12 to get inches), then the variance would be given in terms of square inches (each square foot is 12^2 = 144 square inches). If you convert the units for your data values, make sure to also report the variance in the correct converted units!

Example 2: Variance Units For Area

If you have a set of data points that measure the area of a garden in square yards (yard^2), then the variance would be given in terms of yard^4.
If you converted all of the data values to square feet (multiply square yards by 3^2 = 9 to get square feet), then the variance would be given in terms of feet^4.

Example 3: Variance Units For Volume

If you have a set of data points that measure the volume of a pool in cubic feet (feet^3), then the variance would be given in terms of feet^6. If you converted all of the data values to gallons (multiply cubic feet by 7.48 to get gallons), then the variance would be given in terms of gallons^2.

Example 4: Variance Units For Speed (Or Velocity)

If you have a set of data points that measure the speed of an object (like a car) in meters per second, then the variance would be given in terms of meters squared per second squared (meters^2/second^2). If you converted all of the data values to miles per hour (multiply meters per second by 2.237 to get miles per hour), then the variance would be given in terms of miles squared per hour squared (miles^2/hour^2).

Example 5: Variance Units For Acceleration

If you have a set of data points that measure the acceleration of an object (like a car) in meters per second squared (meters/second^2), then the variance would be given in terms of meters squared per second to the fourth (meters^2/second^4). If you converted all of the data values to miles per hour squared (multiply meters per second squared by 8053 to get miles per hour squared), then the variance would be given in terms of miles squared per hour to the fourth (miles^2/hour^4).

Does Range Have Units?

Range does have units, and they are the same units as the data values (and the same units as the mean and standard deviation). For example, if you measure the weight of dogs in pounds, then the range would be given in pounds also. If your measurements, in pounds, were {25, 27, 33, 34, 40, 40, 41, 42, 50, 100}, then the range would be 100 – 25 = 75 pounds. If we measure the weights of dogs in pounds, then the standard deviation would also be given in units of pounds.
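The unit behavior in these examples can be checked numerically: converting data by a factor c multiplies the variance by c², but the range only by c. A sketch using the dog-weight data from the range example (the pounds-to-kilograms factor is approximate):

```python
import statistics

weights_lb = [25, 27, 33, 34, 40, 40, 41, 42, 50, 100]

range_lb = max(weights_lb) - min(weights_lb)
print(range_lb)  # 75, in pounds, matching the example above

var_lb = statistics.pvariance(weights_lb)  # population variance, pounds^2

c = 0.4536                                 # approximate kilograms per pound
weights_kg = [w * c for w in weights_lb]
var_kg = statistics.pvariance(weights_kg)
range_kg = max(weights_kg) - min(weights_kg)

# Variance picks up the conversion factor squared (pounds^2 -> kg^2),
# while range scales linearly and keeps the data's own units.
print(abs(var_kg - var_lb * c ** 2) < 1e-9)  # True
print(abs(range_kg - range_lb * c) < 1e-9)   # True
```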
Now you know all about the units for standard deviation and how they relate to the units for mean, variance, range, and the data values themselves. You can learn about the difference between standard deviation and standard error here. You can learn more about how to interpret standard deviation here. You can learn about when standard deviation is a percentage here. I hope you found this article helpful. If so, please share it with someone who can use the information. Don’t forget to subscribe to my YouTube channel & get updates on new math videos!
7 Digit Multiplication Worksheets

Math, specifically multiplication, forms the foundation of many academic disciplines and real-world applications. Yet, for many students, mastering multiplication can pose a challenge. To address this obstacle, educators and parents have embraced an effective tool: 7 Digit Multiplication Worksheets.

Introduction to 7 Digit Multiplication Worksheets

This page includes long multiplication worksheets for students who have mastered the basic multiplication facts and are learning to multiply 2-, 3-, 4-, and more-digit numbers. Sometimes referred to as long multiplication or multi-digit multiplication, the questions on these worksheets require students to have mastered the multiplication facts from 0 to 9. For multiplication facts with 7s, students multiply 7 times numbers between 1 and 12; the first worksheet is a table of all multiplication facts 1-12 with seven as a factor (the 7 times table), with Worksheet 1 offering 49 questions and Worksheets 2 through 5 offering up to 100 questions each.

Value of the Multiplication Method

Understanding multiplication is essential, laying a strong foundation for advanced mathematical concepts. 7 Digit Multiplication Worksheets provide structured and targeted practice, cultivating a deeper understanding of this essential math operation.
Advancement of 7 Digit Multiplication Worksheets

Multiplication worksheets now include timed math fact drills, fill-in multiplication tables, multi-digit multiplication, multiplication with decimals, and much more, along with multiplication facts worksheets such as times tables, five-minute frenzies, and worksheets for assessment or practice. In mixed-fact layouts, the remaining rows include each of the facts once, but the target digit is randomly placed on the top or the bottom, and the facts are randomly mixed on each row. When you're done, be sure to check out the unique spiral and bullseye multiplication worksheets.

From traditional pen-and-paper exercises to digital interactive formats, 7 Digit Multiplication Worksheets have evolved, catering to diverse learning styles and preferences.

Types of 7 Digit Multiplication Worksheets

Standard Multiplication Sheets: basic exercises focusing on multiplication tables, helping learners build a strong arithmetic base.

Word Problem Worksheets: real-life scenarios integrated into problems, improving critical thinking and application skills.

Timed Multiplication Drills: tests designed to improve speed and accuracy, aiding quick mental math.
Advantages of Using 7 Digit Multiplication Worksheets

Some sites let you build your own multiplication worksheets in seconds: choose a topic (multi-digit worksheets, single-digit worksheets, 5-minute drill worksheets) and click the Create Worksheet button to create worksheets for various levels and topics. A multiplication puzzle match (7s and 8s only) offers another format: give each student one puzzle piece, where half of the students have a multiplication fact and the other half have the answers, and have them find the partner with the matching piece (3rd through 5th grades).

Improved Mathematical Skills: consistent practice sharpens multiplication proficiency, boosting overall math abilities.

Enhanced Problem-Solving Abilities: word problems in worksheets develop logical reasoning and strategy application.

Self-Paced Learning Advantages: worksheets accommodate individual learning paces, promoting a comfortable and flexible learning environment.

How to Produce Engaging 7 Digit Multiplication Worksheets

Incorporating Visuals and Colors: vibrant visuals and colors capture attention, making worksheets visually appealing and engaging.

Including Real-Life Scenarios: relating multiplication to everyday situations adds relevance and practicality to exercises.

Tailoring Worksheets to Different Skill Levels: customizing worksheets based on varying proficiency levels ensures inclusive learning.

Interactive and Online Multiplication Resources

Digital Multiplication Tools and Games: technology-based resources provide interactive learning experiences, making multiplication engaging and enjoyable.
Interactive Websites and Applications: online platforms offer diverse and easily accessible multiplication practice, supplementing conventional worksheets.

Tailoring Worksheets for Different Learning Styles

Visual Learners: visual aids and diagrams support comprehension for learners inclined toward visual learning. Auditory Learners: verbal multiplication problems or mnemonics cater to students who grasp concepts through auditory means. Kinesthetic Learners: hands-on activities and manipulatives support kinesthetic learners in understanding multiplication.

Tips for Effective Use in Learning

Consistency in Practice: regular practice reinforces multiplication skills, promoting retention and fluency. Balancing Repetition and Variety: a mix of repetitive exercises and varied problem formats maintains interest and understanding. Providing Constructive Feedback: feedback helps identify areas for improvement, encouraging ongoing progress.

Challenges in Multiplication Practice and Solutions

Motivation and Engagement: boring drills can lead to disinterest; creative approaches can reignite motivation. Overcoming Fear of Math: negative attitudes toward mathematics can hinder progress; creating a positive learning environment is essential.

Impact of 7 Digit Multiplication Worksheets on Academic Performance

Studies and Research Findings: research suggests a positive correlation between regular worksheet use and improved math performance.

Conclusion

7 Digit Multiplication Worksheets are versatile tools that foster mathematical proficiency while accommodating diverse learning styles. From basic drills to interactive online resources, these worksheets not only strengthen multiplication skills but also promote critical thinking and problem-solving abilities.
Frequently Asked Questions (FAQs)

Are 7 Digit Multiplication Worksheets suitable for all age groups? Yes, worksheets can be tailored to different ages and skill levels, making them adaptable for a range of learners.

How often should students practice with 7 Digit Multiplication Worksheets? Consistent practice is key; regular sessions, ideally a few times a week, can yield substantial improvement.

Can worksheets alone improve math skills? Worksheets are a valuable tool but should be supplemented with varied learning methods for well-rounded skill growth.

Are there online platforms offering free 7 Digit Multiplication Worksheets? Yes, many educational websites offer free access to a wide variety of 7 Digit Multiplication Worksheets.

How can parents support their children's multiplication practice at home? Encouraging consistent practice, offering help, and creating a positive learning environment are all valuable steps.
Class 7 maths chapter 1 is about integers. It covers the key concepts related to integers in detail. You will find elaborate explanations and solutions for the chapter in this article that will help you understand the topic easily. The NCERT solutions for class 7 maths chapter 1 help students improve their scores by understanding the topic fully. They also give students access to many practice questions with detailed solutions, including fill-in-the-blanks, match-the-following, and true-or-false questions. All the questions in this chapter are grouped by concept and sorted by difficulty level. The NCERT solutions for class 7 maths chapter 1 integers will teach you all about the addition and subtraction of integers, their properties, the commutativity and associativity of integers, and the distributive laws. NCERT solutions class 7 maths chapter 1 (Integers) In this chapter, you will learn about positive and negative numbers. Positive numbers, negative numbers, and zero together form the set of integers; all whole numbers are integers, and zero is an integer as well. Let us take a look at the most important topics of Chapter 1, Integers. They are listed as follows: □ Introduction of Integers □ Division of Integers □ Properties of Addition and Subtraction of Integers □ Multiplication of a Positive and Negative Integer □ Multiplication of two Negative Integers □ Multiplication of Integers □ Properties of Multiplication of Integers □ Properties of Division of Integers The details of the different topics of NCERT Solutions Class 7 maths chapter 1 are as follows:
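The properties listed above (commutativity, associativity, distributivity, and the sign rules for multiplication) can be checked numerically. Here is a small illustrative sketch in Python; the specific numbers are chosen for demonstration and are not from the NCERT text:

```python
# Illustrative check of integer properties from Class 7 Chapter 1 (Integers).
a, b, c = -7, 4, -3

# Addition is commutative and associative; subtraction is not commutative.
assert a + b == b + a
assert (a + b) + c == a + (b + c)
assert a - b != b - a

# Distributive law: a * (b + c) = a*b + a*c
assert a * (b + c) == a * b + a * c

# Sign rules for multiplication.
assert (-5) * 3 == -15        # positive times negative is negative
assert (-5) * (-3) == 15      # negative times negative is positive

print("all integer properties hold for a, b, c =", a, b, c)
```

Trying other triples of integers in place of (-7, 4, -3) gives the same results, which is exactly what the properties claim.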
THE RADIAL VELOCITY EQUATION: THE CENTER OF MASS FRAME OF REFERENCE

The general two-body equation for the center of mass is:

R = (m1·r1 + m2·r2) / (m1 + m2)

where m1 is the mass of the first body (which, in this derivation, is the star) and m2 is the mass of the second body.

The Radial Velocity Equation in the Search for Exoplanets (the Doppler Spectroscopy or Wobble Method). "Raffiniert ist der Herr Gott, aber Boshaft ist er nicht (God is subtle, but he is not malicious)": carved over the fireplace in the Common Room of Princeton University's Fine Hall, with relativity equations as a motif imprinted into the leaded glass windows.

If the stellar lines are displaced by Δλ from their laboratory values λ, then the radial velocity v is given simply by (18.7.1):

v / c = Δλ / λ

Note that this formula, in which c is the speed of light, is valid only if v << c.

For a turbulent round jet, a fair estimate of the radial velocity is given by (9.101):

Ur / Um = 0.047 η (1 − 0.414 η²) / (1 + 0.414 η²)²

Therefore, Ur is positive near the jet centerline, in order to balance the decrease of the axial velocity in the core region of the jet.

The Radial Velocity method was the first successful means of exoplanet detection, and has had a high success rate for identifying exoplanets in nearby systems (Proxima b and TRAPPIST-1's seven planets) and beyond.

Radial Acceleration: Formula, Derivation, Units. An object in motion can undergo a change in its speed; the measure of the rate of change of its speed together with direction, with respect to time, is called acceleration. The radial velocity of an object with respect to a given point is the rate of change of the distance between the object and the point.
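The Doppler relation v/c = Δλ/λ translates directly into code. A minimal sketch in Python; the Hα rest wavelength and the measured shift below are illustrative values, not from the text:

```python
# Radial velocity from a Doppler shift: v = c * (delta_lambda / lambda),
# valid only in the non-relativistic regime v << c.
C = 299_792_458.0  # speed of light, m/s

def radial_velocity(rest_wavelength_nm: float, observed_wavelength_nm: float) -> float:
    """Return the line-of-sight velocity in m/s; positive means receding."""
    delta = observed_wavelength_nm - rest_wavelength_nm
    return C * delta / rest_wavelength_nm

# Example: H-alpha at rest wavelength 656.281 nm observed at 656.325 nm
# (illustrative numbers for a star being tugged by a companion).
v = radial_velocity(656.281, 656.325)
print(f"radial velocity ≈ {v / 1000:.1f} km/s (receding)")
```

For these numbers v comes out to roughly 20 km/s, a plausible stellar radial velocity; a blueshift (observed wavelength shorter than rest) would give a negative value, i.e. motion towards the observer.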
Once radial data arrives at an HFRNet node, it is available for integration with other radial velocity measurements from neighboring sites through surface current mapping. HFRNet's primary operational product is the generation of near-real-time velocities (RTV): ocean surface currents mapped from radial component measurements.

For a circular orbit, the velocity has to be just right so that the distance to the center of the Earth is always the same; the standard circular-orbit formula is v = sqrt(GM/r), where v is the orbital velocity of an object in m/s. The orbital velocity formula contains a constant, G, called the universal gravitational constant, whose value is 6.673 × 10⁻¹¹ N·m²/kg². The radius of the Earth is 6.38 × 10⁶ m.

Radial velocity is the component of velocity along the line of sight to the observer. Objects with a negative radial velocity are travelling towards the observer, whereas those with a positive radial velocity are moving away.
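Using the constants quoted above (G = 6.673 × 10⁻¹¹ N·m²/kg² and Earth's radius 6.38 × 10⁶ m), the circular-orbit speed v = √(GM/r) can be sketched as follows; Earth's mass is an added assumption, not given in the text:

```python
import math

# Circular orbital velocity: v = sqrt(G * M / r)
G = 6.673e-11        # universal gravitational constant, N·m²/kg² (from the text)
R_EARTH = 6.38e6     # radius of the Earth, m (from the text)
M_EARTH = 5.972e24   # mass of the Earth, kg (assumed value, not in the text)

def orbital_velocity(mass_kg: float, radius_m: float) -> float:
    """Speed of a circular orbit of the given radius around the given mass, m/s."""
    return math.sqrt(G * mass_kg / radius_m)

v = orbital_velocity(M_EARTH, R_EARTH)
print(f"circular orbit at Earth's surface: v ≈ {v / 1000:.2f} km/s")
```

The result is close to 7.9 km/s, the familiar minimum speed for a circular orbit skimming Earth's surface; a larger radius gives a slower orbit, since v falls off as 1/√r.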
Astronomers using the radial velocity technique measure the line-of-sight component of the space velocity vector of a star (hence the term "radial", i.e. the velocity component along the radius between observer and target). The radial velocity of a star can be determined in absolute terms.

The formula for angular velocity is given by v cos(β) / R, where v is the speed and R the distance from the origin or observer. One component of a star's motion is its radial velocity: taking the Earth as the origin, it is the component of the velocity vector carrying the star towards or away from the Earth itself.

The star moves, ever so slightly, in a small circle or ellipse, responding to the gravitational tug of its smaller companion. The radial velocity of a star is measured by the Doppler effect its motion produces in its spectrum, and unlike the tangential velocity or proper motion, which may take decades or millennia to measure, it is more or less instantly determined by measuring the wavelengths of absorption lines in the star's spectrum.
Mathematics: Analysis and Approaches What will I learn? The aims of all DP mathematics courses are to enable students to: • Develop a curiosity and enjoyment of mathematics, and appreciate its elegance and power; • Develop an understanding of the concepts, principles and nature of mathematics; • Communicate mathematics clearly, concisely and confidently in a variety of contexts; • Develop logical and creative thinking, and patience and persistence in problem solving to instil confidence in using mathematics; • Employ and refine their powers of abstraction and generalisation; • Take action to apply and transfer skills to alternative situations, to other areas of knowledge and to future developments in their local and global communities. This course recognizes the need for analytical expertise in a world where innovation is increasingly dependent on a deep understanding of mathematics. This course includes topics that are both traditionally part of a pre-university mathematics course (for example, functions, trigonometry, calculus) as well as topics that are amenable to investigation, conjecture and proof, for instance the study of sequences and series and proof by induction. The course allows the use of technology, as fluency in relevant mathematical software and hand-held technology is important regardless of choice of course. However, there is a strong emphasis on the ability to construct, communicate and justify correct mathematical arguments. There will be a recognition that the development of mathematical thinking is important for a student. Students who choose this subject at HL should be comfortable in the manipulation of algebraic expressions and enjoy the recognition of patterns and understand the mathematical generalisation of these patterns. Students who wish to take Mathematics: Analysis and Approaches at HL will have strong algebraic skills and the ability to understand simple proof. 
They will be students who enjoy spending time with problems and get pleasure and satisfaction from solving challenging problems. What is the structure of the course? The course is structured around these major areas of mathematics: • Number and Algebra • Functions • Geometry and Trigonometry • Statistics and Probability • Calculus How will I be assessed? Mathematical Exploration (Coursework): internal assessment in mathematics is an individual exploration. This is a piece of written work that involves investigating an area of mathematics, usually 12-20 pages long. 20% weighting. Paper 1: no technology allowed. Section A: compulsory short-response questions based on the syllabus. Section B: compulsory extended-response questions based on the syllabus. 30% weighting. Paper 2: technology allowed. Section A: compulsory short-response questions based on the syllabus. Section B: compulsory extended-response questions based on the syllabus. 30% weighting. Paper 3: technology allowed. Two compulsory extended-response problem-solving questions. 20% weighting. Frequently Asked Questions Which CAS opportunities are available? There are many CAS projects that require mathematical skills, but in addition to such opportunities that you might explore you may seek to: • Join the maths club, take part in the UKMT Maths Challenges and support younger year groups in their preparation • Support students with preparation for the GCSE Maths Exam • Attend university-style public lectures on interesting areas of maths and science Which opportunities for further study are available? The Mathematics IB prepares you for any university course that requires a deep understanding of mathematics, such as courses in Mathematical Sciences, Physics, Engineering and Economics (where there is a focus on mathematical analysis), as well as any other course that requires a higher qualification in mathematics. Is there anything else I need to know?
The study of mathematics can be one of the most challenging academic experiences a student can take on, but it is also one of the most rewarding and useful subjects to study due to its applicability in such a wide array of academic disciplines at university. Be prepared to study hard in your own time in order to understand some of the most challenging mathematics you have ever encountered. You will require a graphical calculator for this course. Curriculum map Module 1 Topics / Units Functions 1 & 2; including introduction to complex numbers Core Declarative Knowledge What should students know? • General form of straight lines, their gradients and intercepts • Parallel and perpendicular lines • Different methods to solve a system of linear equations in up to 3 unknowns • Gaussian Elimination • Function notation in all forms • Domains and ranges of functions • Inverse and composite functions, and their characteristics • Self-inverse functions • Transformations of graphs and composite transformations of graphs • Graphs of the modulus of a function, the square of a function and the inverse graph of a function • Polynomial functions and their graphs • Zeros and factors of polynomial functions; roots of polynomial equations • Factor and remainder theorems • Sum and product of roots of a polynomial equation • Quadratic equations and the different forms in which to express them • The characteristics of a parabola • The discriminant of the quadratic formula and how to interpret its results • Rational functions, their graphs and their asymptotes • Partial fractions • The fundamental theorem of algebra • What a complex number and its complex conjugate are • The real and imaginary parts of a complex number • From the discriminant, the need to find the complex solutions of a quadratic • The difference between a reducible quadratic and an irreducible quadratic • The sum of two squares factorisation Core Procedural Knowledge What should students be able to do?
• Find the equations of straight lines, their gradients and intercepts • Find the equations of parallel and perpendicular lines • Solve a system of linear equations in up to 3 unknowns • Complete a Gaussian reduction to echelon form and interpret the result • Use function notation • State the domains and ranges of functions, including after a transformation • Find inverse and composite functions, and interpret their characteristics • Recognise self-inverse functions • Sketch transformations of graphs and composite transformations of graphs • Sketch graphs of the modulus of a function, the square of a function and the inverse graph of a function • Sketch the graphs of polynomial functions • Find the zeros and factors of polynomial functions; roots of polynomial equations • Use the factor and remainder theorems to solve problems • Use the sum and product of roots of a polynomial equation • Manipulate quadratic equations and the different forms in which to express them • Use the characteristics of a parabola to find the line of symmetry and vertex • Use the discriminant of the quadratic formula and interpret its results • Sketch rational functions and their asymptotes • Manipulate partial fractions • Use the fundamental theorem of algebra to solve problems • Recognise a complex number and its complex conjugate • Find real and imaginary parts of a complex number and equate them • Find the solutions to quadratics and other polynomials without real solutions • Carry out the sum of two squares factorisation Links to TOK • Does studying the graph of a function contain the same level of mathematical rigour as studying the function algebraically? • What are the advantages and disadvantages of having different forms and symbolic language in mathematics? • How does language shape knowledge? For example, do the words “imaginary” and “complex” make the concepts more difficult than if they had different names?
• Could we ever reach a point where everything important in a mathematical sense is known? • Reflect on the creation of complex numbers before their applications were known. Links to Assessment Functions 1 Assessment Module 2 Topics / Units Functions 2; Sequences & Series; Exponentials & Logarithms Core Declarative Knowledge What should students know? • Rational functions, their graphs and their asymptotes • Partial fractions • The fundamental theorem of algebra • Properties of arithmetic and geometric sequences • Sigma notation • Sum of arithmetic and geometric sequences, both finite and infinite • Binomial theorem • Counting principles, including permutations and combinations • How series apply to compound interest calculations • Exponential functions and their graphs • Concepts of exponential growth, decay and their applications • The nature and significance of the number e • Logarithmic functions and their graphs • Properties and laws of logarithms Core Procedural Knowledge What should students be able to do? • Sketch rational functions and their asymptotes • Manipulate partial fractions • Use the fundamental theorem of algebra to solve problems • Use sigma notation • Calculate the sum of arithmetic and geometric sequences, both finite and infinite • Employ the binomial theorem and Pascal’s Triangle • Compute permutations and combinations • Calculate compound interest and solve related problems • Sketch exponential functions • Calculate exponential growth, decay and solve problems related to their applications • Work with e • Solve problems using logarithmic functions and their graphs • Calculate using logarithms Links to TOK • What counts as understanding in mathematics? Is it more than just getting the right answer? • Why might it be said that e^iπ + 1 = 0 is beautiful? What is the place of beauty and elegance in mathematics? What about the place of creativity?
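The Module 2 link between geometric sequences and compound interest can be made concrete with a short sketch; the principal, rate and term below are illustrative values, not from the syllabus:

```python
# Compound interest as a geometric sequence: A_n = P * (1 + r)^n.
P = 1000.0   # principal (illustrative)
r = 0.05     # annual interest rate (illustrative)
n = 10       # years

# Each year's balance is the previous balance times the common ratio (1 + r),
# so the balances form a geometric sequence with first term P.
balances = [P * (1 + r) ** k for k in range(n + 1)]

# The closed form agrees with iterating the recurrence A_{k+1} = A_k * (1 + r).
a = P
for _ in range(n):
    a *= 1 + r
assert abs(a - balances[-1]) < 1e-9

print(f"balance after {n} years: {balances[-1]:.2f}")
```

For these values the final balance is about 1628.89, i.e. the (n+1)-th term of a geometric sequence with first term 1000 and common ratio 1.05.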
Module 3 Topics / Units Exponentials and Logarithms; Proof; Trigonometric functions and equations Core Declarative Knowledge What should students know? • Exponential functions and their graphs • Concepts of exponential growth, decay and their applications • The nature and significance of the number e • Logarithmic functions and their graphs • Properties and laws of logarithms • The language of logic and proof • How to form statements, negations and compound statements • The differences between direct proofs and proofs with contrapositives, contradictions or counterexamples • Proof by induction • How to measure angles in radians • The unit circle and its links with sine, cosine and tangent • Exact trig. values • Pythagorean identities and double angle identities for sine and cosine • Amplitudes and periods of trigonometric functions • Composite functions of sine and cosine • Reciprocal trigonometric ratios of sec, csc and cot • Pythagorean identities for tan, sec, csc and cot • Compound angle identities • Double angle identity for tan Core Procedural Knowledge What should students be able to do? • Sketch exponential functions • Calculate exponential growth, decay and solve problems related to their applications • Work with e • Solve problems using logarithmic functions and their graphs • Calculate using logarithms • Recognise tautologies and contradictions • Use the notation of logic • Understand modus ponens, or the law of detachment • Carry out a direct, contrapositive, contradiction or induction proof • Convert degrees to radians • Plot trigonometric functions • Evaluate all trigonometric functions • Prove the Pythagorean identities • Find the compound angle identities • Use reciprocal identities • Recognise symmetrical/translation/odd/even identities Links to TOK • Is mathematics invented or discovered? For instance, consider the number e or logarithms: did they already exist before we defined them?
• How have seminal advances, such as the development of logarithms, changed the way in which mathematicians understand the world and the nature of mathematics? • What is the role of the mathematical community in determining the validity of a mathematical proof? • Do proofs provide us with completely certain knowledge? • What is the difference between the inductive method in science and proof by induction in mathematics? Module 4 Topics / Units Differential Calculus 1 Core Declarative Knowledge What should students know? • The concept of the limit and its notation • Differentiation by first principles • The gradient function and how to find it • The relationship between the gradient function and rate of change • The derivatives of polynomials and trigonometric functions • When a function is increasing and decreasing • Local minima, maxima and points of inflection • The applications to displacement, velocity and acceleration • The relationship between tangents and normals • Composite functions • Products and quotients • Derivatives of exponentials • Implicit differentiation • How to optimise • L’Hôpital’s rule Core Procedural Knowledge What should students be able to do? • Differentiate a polynomial • Find the gradient of a function at a given point • Find the derivative of a trigonometric function • Determine whether a function is increasing or decreasing • Find local minima, maxima and points of inflection • Find the equations of tangents and normals • Differentiate composite functions • Use the chain rule, product rule and quotient rule • Find the derivative of an exponential • Find higher derivatives • Recognise the need for implicit differentiation • Apply L’Hôpital’s rule Links to TOK • What value does the knowledge of limits have? • Is infinitesimal behaviour applicable to real life? • Is intuition a valid way of knowing in mathematics?
• The seemingly abstract concept of calculus allows us to create mathematical models that permit human feats such as getting a man on the Moon. • What does this tell us about the links between mathematical models and reality? • How can you justify a rise in tax on plastic containers, e.g. plastic bags, plastic bottles, etc., using optimisation? Module 5 Topics / Units Differential Calculus 1; Integral Calculus 1 Core Declarative Knowledge What should students know? • Derivatives of exponentials • Implicit differentiation • How to optimise • L’Hôpital’s rule • Integration as antidifferentiation of functions • The general form for definite integrals • Applications of integration (area under curves) • Applications to kinematic problems involving displacement, velocity and acceleration • How to apply integration to different types of functions • How and when to apply different ways to integrate (inspection, partial fractions, substitution, parts including repeated by parts) • Applications to volumes of revolution Core Procedural Knowledge What should students be able to do? • Find higher derivatives • Determine whether a function is increasing or decreasing • Find local minima, maxima and points of inflection • Find the equations of tangents and normals • Recognise the need for implicit differentiation • Apply L’Hôpital’s rule • Calculate and apply definite integrals • Find the area under curves and between curves and the x-axis • Use boundary conditions • Solve problems in kinematics • Integrate polynomial, trigonometric, inverse trigonometric and exponential functions • Integrate by inspection, with partial fractions, by parts and repeated by parts • Find volumes of revolution about the x-axis/y-axis Links to TOK • Does personal experience play a role in the formation of knowledge claims in mathematics? • Does it play a different role in mathematics compared to other areas of knowledge?
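The Module 4 idea of differentiation by first principles can be illustrated numerically: the difference quotient (f(x+h) − f(x))/h approaches the derivative as h shrinks. A short sketch; the function and point are chosen for illustration:

```python
# Differentiation from first principles: f'(x) = lim_{h->0} (f(x+h) - f(x)) / h.
def difference_quotient(f, x, h):
    return (f(x + h) - f(x)) / h

f = lambda x: x ** 3   # example function; the exact derivative is 3x^2
x = 2.0                # so the exact gradient at x = 2 is 12

for h in (0.1, 0.01, 0.001):
    print(f"h = {h:>6}: quotient = {difference_quotient(f, x, h):.6f}")
# The quotients approach 12 as h shrinks towards 0.
```

Expanding (x + h)³ shows why: the quotient equals 3x² + 3xh + h², and the last two terms vanish in the limit h → 0.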
Module 6 Topics / Units Integral Calculus 1; Introduction to the IA Core Declarative Knowledge What should students know? • Integration as antidifferentiation of functions • The general form for definite integrals • Applications of integration (area under curves) • Applications to kinematic problems involving displacement, velocity and acceleration • How to apply integration to different types of functions • How and when to apply different ways to integrate (inspection, partial fractions, substitution, parts including repeated by parts) • Applications to volumes of revolution • Students to explore ideas for their IA and research possible project ideas. The specific purposes of the exploration are to: □ develop students’ personal insight into the nature of mathematics and to develop their ability to ask their own questions about mathematics □ provide opportunities for students to complete a piece of mathematical work over an extended period of time □ enable students to experience the satisfaction of applying mathematical processes independently □ provide students with the opportunity to experience for themselves the beauty, power and usefulness of mathematics □ encourage students, where appropriate, to discover, use and appreciate the power of technology as a mathematical tool □ enable students to develop the qualities of patience and persistence, and to reflect on the significance of their work □ provide opportunities for students to show, with confidence, how they have developed mathematically. Core Procedural Knowledge What should students be able to do? • Calculate and apply definite integrals • Find the area under curves and between curves and the x-axis • Use boundary conditions • Solve problems in kinematics • Integrate polynomial, trigonometric, inverse trigonometric and exponential functions • Integrate by inspection, with partial fractions, by parts and repeated by parts • Find volumes of revolution about the x-axis/y-axis • Decide on an IA title.
Links to TOK • Can a mathematical statement be true before it has been proven?
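Definite integrals and areas under curves, central to Modules 5 and 6, can also be checked numerically, much as a GDC would. A minimal trapezium-rule sketch; the integrand and interval are illustrative:

```python
# Trapezium-rule approximation of a definite integral, as a numerical check
# of antidifferentiation: the integral of 3x^2 on [0, 1] is [x^3] from 0 to 1 = 1.
def trapezium(f, a, b, n):
    """Approximate the integral of f on [a, b] using n trapezia."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

approx = trapezium(lambda x: 3 * x ** 2, 0.0, 1.0, 1000)
print(f"trapezium estimate: {approx:.6f}  (exact value: 1)")
```

With 1000 strips the estimate agrees with the exact antiderivative value to about six decimal places, and the error shrinks roughly with the square of the strip width.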
Description and core concepts

Weft is a decentralized multi-asset lending dApp that allows users to lend and borrow various digital assets. Weft loans are over-collateralized to mitigate the risk taken by lenders, meaning that borrowers deposit more collateral than the value of the loan they take. Weft also supports delegated lending, which enables users to delegate their borrowing power to other users, allowing them to take loans without depositing collateral.

Deposit Units

Weft uses mechanics similar to the Pool Units introduced by the Radix team in their native Pools for tracking shares of pooled liquidity. When a user deposits assets into the pool, they receive pool units (which we call Deposit Units) that represent their proportion of the pool and entitle them to a share of the interest collected from borrowers. A deposit-to-unit ratio is calculated at each deposit and determines how many assets a user can withdraw from the pool at any time.

Calculation of Deposit Units:
• Initial scenario: the pool currently has 150 assets, which corresponds to 90 deposit units.
• Your deposit: you are adding 100 assets. To determine how many deposit units this is worth, we calculate:
(Your deposit x Existing deposit units) / Assets in the pool before your deposit
Plugging in the numbers: (100 x 90) / 150 = 60 units
• So you receive 60 deposit units for your contribution. After your deposit, the pool contains a total of 250 assets, which corresponds to 150 deposit units (the initial 90 units plus the 60 units you just added).

Redemption of Deposit Units:
• Pool’s new state: the pool has grown to 340 assets (due to earned interest).
• Your share is calculated as (Your deposit units x Total pool assets) / Total deposit units
Plugging in the numbers: 60 x 340 / 150 = 136 assets
• Thus, by redeeming your 60 deposit units, you would receive 136 assets.

Is this fair?
Let’s check: I got 36 more assets than I deposited, which is 40% of the interest earned by the pool while my assets were in it. 40% is exactly the fraction of the pool that I owned when I joined (100 / 250), so it is fair. If the pool is empty, the initial deposit-to-unit ratio is one, meaning that each asset I deposit equals one unit, as I am the only depositor.

Loan Units

Similarly, borrowed amounts are tracked using the Loan Unit concept. The protocol keeps track of the total amount of assets borrowed from the pool, and when a user borrows assets, they receive loan units that reflect their share of the debt obligation in the total amount borrowed from the lending pool, interest included. The loan unit and the loan-to-unit ratio (analogous to the deposit-to-unit ratio) are used to calculate how many assets a user has to repay to the pool.

Calculation of Loan Units:
• Initial scenario: the current amount borrowed from the pool, including interest to pay back, is 0 assets, which corresponds to 0 loan units.
• Alice wants to take 100 assets as a loan: as she is the first to borrow, she receives 100 loan units.
• Loan state after 1 year: let us take 5% as the interest rate. As Alice took 100 assets, the new amount borrowed, including interest to pay back, is 100 plus 5% interest, making it 105 assets. The total amount of loan units is still 100 (Alice’s loan units).
• Bob wants to take 84 assets as a loan: Bob’s loan units are calculated as follows:
(Loan amount x Existing loan units) / Total borrowed assets with interest
Plugging in the numbers: 84 x 100 / 105 = 80 loan units
So the total loan units are now 180 and the total borrowed amount is 189 (105 plus 84).
• Loan state after 1 more year: we will keep the 5% interest rate for the sake of simplicity. The total borrowed amount is now 189 + 5% = 198.45, and the total loan units are still 180.
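The deposit- and loan-unit arithmetic worked through above can be sketched directly. The function names are illustrative, not Weft's actual API, and the repayment figures anticipate the check in the next section:

```python
def units_for_deposit(amount, total_units, total_assets):
    """Units minted for a new deposit; the first depositor gets 1 unit per asset."""
    if total_units == 0:
        return amount
    return amount * total_units / total_assets

def assets_for_units(units, total_assets, total_units):
    """Assets redeemable (or, for loan units, repayable) for a given unit balance."""
    return units * total_assets / total_units

# Deposit example from the text: the pool holds 150 assets / 90 units, we add 100.
minted = units_for_deposit(100, 90, 150)        # 60 units
# After interest the pool grows to 340 assets over 150 units.
redeemed = assets_for_units(60, 340, 150)       # 136 assets

# Loan example: Alice holds 100 of 180 loan units, Bob 80; total debt is 198.45.
alice_owes = assets_for_units(100, 198.45, 180) # 110.25 assets
bob_owes = assets_for_units(80, 198.45, 180)    # 88.2 assets
```

The same two formulas cover both sides of the protocol: only the interpretation of "total assets" (pool balance vs. outstanding debt with interest) changes.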
Redemption of Loan Units:

Now how do we know the amount Alice and Bob have to repay?
(Your loan units x Total borrowed assets with interest) / Total loan units
Plugging in the numbers:
Alice: 100 x 198.45 / 180 = 110.25 assets
Bob: 80 x 198.45 / 180 = 88.2 assets

Does that make sense? Let's check. Alice borrowed 100 assets for 2 years at a 5% interest rate, so she had to repay 100 x 1.05 after the first year and 100 x 1.05 x 1.05 after the second year. That is exactly 110.25. Bob borrowed 84 assets for 1 year at 5% interest, so he had to repay 84 x 1.05. That is exactly 88.2.

So we can effectively track each listed asset's state by tracking the total amount borrowed, the loan units of each borrower, and the corresponding total loan units.

Representing Units

As mentioned above, Deposit and Loan Units remain constant over time until the user's next interaction. That gives two ways to represent them:
• The first is minting a fungible asset in an amount equal to the units; we chose this option for deposit units.
• The second is storing the value in a persistent data structure, like NFT metadata, with strong enforcement of how this data can be changed. We chose this option for loan units.

This approach of using fungible tokens for deposit units and NFTs for collaterals and loan units is innovative. This "dual structure" ensures that deposits and loans are accurately and fairly represented, capturing all necessary information:
• Fungible Deposit Units: easily tradable, sellable, or usable in other DeFi protocols.
• Non-Fungible Loan Units: capable of capturing the nuances and unique terms of each mix of collateral and loan.

This structure offers flexibility, accuracy, and several possibilities to build DeFi services on top of Weft.

Loan collateralization approach

On Weft, collaterals are deposit units.
So locking collateral means depositing previously acquired deposit units in a pool and getting back an NFT with metadata reflecting the amount of deposit units locked as collateral. In the case of a direct deposit of assets as collateral, Weft is designed to perform the deposit under the hood and then lock the obtained deposit units using the process described above. This approach gives the ability to earn interest on collaterals even while they are locked.

Liquidation threshold

As mentioned in the introduction, Weft is an over-collateralized lending dApp. The liquidation threshold is the predefined upper limit for the ratio between the value of a loan and the corresponding collateral. Beyond this threshold, the loan becomes categorized as under-collateralized, signifying that the collateral's value is insufficient to adequately cover the outstanding loan amount.

For better risk management, Weft does not use a single value for the liquidation threshold. Instead, it defines four levels based on the relationship between the collateral and the borrowed assets: same asset category ID, specific asset pair based on resource addresses, specific asset category ID pair, and a default value. This fine-grained approach allows for more features and flexibility in the risk management framework.

Another key aspect of this approach is the specificity of borrowing power.
Weft borrowing power does not depend only on the available collateral, but also on the borrowed asset.

Interest Option and Strategy

An interest strategy is a function that takes a listed asset's state as input and returns an interest rate as output. An interest option is a defined type of interest available for the user to choose, like stable and variable interest rates. Each listed asset has a set of interest strategies mapped to its supported interest options.
{"url":"https://docs.weft.finance/description","timestamp":"2024-11-10T18:09:54Z","content_type":"text/html","content_length":"29014","record_id":"<urn:uuid:896b7da9-d02d-41ab-9060-7422d84bb522>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00172.warc.gz"}
Benford, F (1938). The law of anomalous numbers. Proceedings of the American Philosophical Society 78(4), pp. 551–572.
Durtschi, C, Hillison, W and Pacini, C (2004). The effective use of Benford's law to assist in detecting fraud in accounting data. Journal of Forensic Accounting V, pp. 17–34. ISSN:1524-5586.
Farhadi, N (2021). Can we rely on COVID-19 data? An assessment of data from over 200 countries worldwide. Science Progress 104(2). DOI:10.1177/00368504211021232.
Farhadi, N and Lahooti, H (2021). Are COVID-19 Data Reliable? A Quantitative Analysis of Pandemic Data from 182 Countries. COVID 1, pp. 137–152. DOI:10.3390/covid1010013.
Grammatikos, T and Papanikolaou, NI (2021). Applying Benford's law to detect accounting data manipulation in the banking industry. Journal of Financial Services Research 59, pp. 115–142.
Idrovo, AJ and Manrique-Hernández, EF (2020). Data Quality of Chinese Surveillance of COVID-19: Objective Analysis Based on WHO's Situation Reports. Asia Pacific Journal of Public Health 32(4), pp. 165–167. DOI:10.1177/1010539520927265.
Isea, R (2020). How Valid are the Reported Cases of People Infected with Covid-19 in the World? International Journal of Coronaviruses 1(2), pp. 53–56. DOI:10.14302/
Koch, C and Okamura, K (2020). Benford's Law and COVID-19 Reporting. Posted on SSRN April 28, 2020; last accessed November 17, 2020. Published in Economics Letters 196, 109973.
Lee, K-B, Han, S and Jeong, Y (2020). COVID-19, flattening the curve, and Benford's law. Physica A: Statistical Mechanics and its Applications 559, 125090. DOI:10.1016/j.physa.2020.125090.
Newcomb, S (1881). Note on the frequency of use of the different digits in natural numbers. American Journal of Mathematics 4(1), pp. 39–40. ISSN:0002-9327. DOI:10.2307/2369148.
Roukema, BF (2014). A first-digit anomaly in the 2009 Iranian presidential election. Journal of Applied Statistics 41(1), pp. 164–199. DOI:10.1080/02664763.2013.838664.
Sambridge, M and Jackson, A (2020). National COVID numbers — Benford's law looks for errors. Nature 581(7809), p. 384. DOI:10.1038/d41586-020-01565-5.
Wei, A and Vellwock, AE (2020). Is COVID-19 data reliable? A statistical analysis with Benford's Law. Preprint, posted September 2020. DOI:10.13140/RG.2.2.31321.75365/1.
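For reference, the first-digit distribution these papers test data against is Benford's law, P(d) = log10(1 + 1/d) for leading digits d = 1..9. A minimal check of the expected frequencies:

```python
import math

def benford_prob(d):
    """Benford's expected relative frequency of leading digit d (1-9)."""
    return math.log10(1 + 1 / d)

probs = {d: benford_prob(d) for d in range(1, 10)}
# Digit 1 leads about 30.1% of the time; digit 9 only about 4.6%.
# The nine probabilities telescope to log10(10) = 1, i.e. they sum to one.
```

Fraud- and error-detection studies compare observed first-digit frequencies in a dataset against these expected values, typically with a chi-square or similar goodness-of-fit statistic.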
{"url":"https://benfordonline.net/references/down/2472","timestamp":"2024-11-11T03:23:35Z","content_type":"application/xhtml+xml","content_length":"19576","record_id":"<urn:uuid:ef8f4c6c-1052-411b-a88b-db39c275557f>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00694.warc.gz"}
Dietmar Stockl, QC Reality Check, Part Four
Guest Essay

Part four of an ongoing series on Internal Quality Control (IQC) by Dietmar Stockl. Dr. Stockl looks again at QC data examples from the real world of the laboratory, this time examining the case where a laboratory purposely uses the wrong standard deviation.

Internal Quality Control (IQC) – A reality check
Part IV: Purposely working with the “wrong standard deviation”
• Part IV: Purposely working with the “wrong standard deviation”
• Part V: How variable/stable do I want it?
• Part VI: How stable can I get it?
• Part VII: What’s going on?
1 MAY 2011

We have several immunoassays that regularly have instabilities with no clinical consequences. Therefore, we use the mid-term standard deviation for calculating control limits. There is significant lot-to-lot variation in several immunoassays which, however, is also not considered clinically relevant. Lot durations are relatively short (1 to 3 months); therefore, we do not consider the establishment of lot-specific SDs and target values cost-effective. We calculate control limits with standard deviations that account for lot variations.

Short-term CV (1 lot) = 1.5%: 3s limits, short broken line.
Mid-term CV (several lots) = 3.1%: 2.58s limits, long broken line.

Stable SD and target value (see Part III) are the heart of QC. If compromises are made here, one should be aware of their consequences. Do not forget: what you see in your IQC is typically reflected in your patient data. If IQC differs, your patients differ! Lot-to-lot variations will affect your patient data!

QC Problem with purposely (or instinctively) “wrong SD”

IQC results of an inflammatory protein (µg/mL), measured with a manual ELISA assay.

Problem description
“We are violating the 10X rule - which CLIA is concerned about.”
“The violation might be because of the same vial being used over time.”
The chart

The data
The target value of 16.83 was established earlier under stable conditions. The standard deviation of 2.53 was chosen to accommodate “typical instabilities” and lot-to-lot variations; however, the SD of a stable period (day 33 to 68) is only 0.9! IQC is done with a 3s rule, and additionally with a 10Xbar rule. There are 2 violations of the 10Xbar rule: one at the beginning and one at the end.

Reason for the violation
The reason is that the “stable” SD (SDstable) is much smaller than the SD used for the definition of the control limits (SDrule). Consequently, even medium-sized shifts/drifts will lead to violations of all rules that work with a mean (average rules) or with a location relative to the target (e.g., the 10Xbar rule).

The solution
If SDstable << SDrule, average rules and X-bar rules are not a good choice for controlling the process. Note: one should have a good justification for such wide limits; in principle, the QC limits are 8.4s limits when SDstable is used for the calculation.
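The closing remark is easy to verify: 3s control limits built from SDrule = 2.53 correspond to 3 x 2.53 / 0.9, roughly 8.4 stable-period standard deviations. A quick check (the function name is ours, not from the essay):

```python
def effective_sigma(k, sd_rule, sd_stable):
    """Express k * sd_rule control limits as a multiple of the stable-period SD."""
    return k * sd_rule / sd_stable

print(round(effective_sigma(3, 2.53, 0.9), 1))  # 8.4
```

In other words, a shift has to reach more than eight stable-period SDs before a single point breaches the 3s limits, which is why only the location-sensitive 10Xbar rule flagged anything.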
{"url":"https://westgard.com/essays/guest-essay/dietmar-qc-4.html","timestamp":"2024-11-11T16:45:01Z","content_type":"text/html","content_length":"75451","record_id":"<urn:uuid:362b9c12-7a90-4a1a-9bf6-002932bed0c4>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00462.warc.gz"}
How Is the Typical Price Calculated?

The Typical Price is a concept used in technical analysis to measure the average price movement of a security over a given period. It is calculated by taking the average of the high, low, and closing prices for a specific time frame. To calculate the Typical Price, you add the high, low, and closing prices together and then divide the total by three. This gives you the average price at which the security traded during that particular period.

The Typical Price is often used in conjunction with other technical indicators to analyze market trends, identify potential support and resistance levels, and predict future price movements. By averaging the high, low, and closing prices, it provides a more balanced view of the security's price behavior, smoothing extreme values that may skew the analysis.

Traders and analysts plot the Typical Price on a chart to visually analyze its movements over time. This helps in identifying patterns, trends, and potential turning points in the market. By observing the Typical Price alongside other technical indicators, such as moving averages or trend lines, traders can make informed decisions about buying or selling a security. It is important to note that the Typical Price is just one of many indicators used in technical analysis. Traders often combine it with other tools and strategies to gain a comprehensive understanding of market dynamics and make informed trading decisions.

What is the formula for calculating Typical Price?
The formula for calculating the Typical Price is:
Typical Price = (High + Low + Close) / 3

How does the calculation of Typical Price vary across different financial markets?
The calculation of Typical Price can vary across different financial markets based on the type of asset being traded and the specific market conventions.
However, the underlying concept of Typical Price remains the same, which is to provide a representation of the average price level of an asset over a specific period.

In stock markets, the Typical Price is often calculated by taking the average of the high, low, and closing prices for a given period. This provides a sense of the average price at which the stock was traded during that period.

In commodities markets, the calculation of Typical Price may vary based on the specific commodity being traded. For example, in oil markets, the Typical Price might be calculated by averaging the daily high, low, and settlement prices. In gold markets, it might be calculated using the daily opening, high, low, and closing prices.

In forex (foreign exchange) markets, the calculation of Typical Price is typically based on the average of the bid and ask prices for a given currency pair. This provides an indication of the average price at which the currency pair can be bought or sold.

Overall, the specific calculation of Typical Price may vary across financial markets, but the objective is to provide an average price representation that can be used for analysis and decision-making.

How to customize the calculation of Typical Price based on specific strategies?
Customizing the calculation of Typical Price based on specific strategies involves defining the methodology for determining the typical price in a way that aligns with the strategy being employed. Here are steps to customize the calculation of Typical Price:
1. Understand the strategy: Begin by gaining a deep understanding of the specific trading or investment strategy being used. This could be a trend-following, mean-reversion, breakout, or any other strategy that guides decision-making.
2. Determine the relevance of Typical Price: Assess whether the Typical Price calculation is suitable for the chosen strategy.
The Typical Price is often used in technical analysis and can be used as a representation of the underlying security's price action. However, other strategies may require different calculations. 3. Define the elements of the Typical Price: The Typical Price is typically calculated as the average of the high, low, and closing prices of a security over a specific period. However, your strategy may require modifying this calculation. For example, you may opt to use only the closing prices or add additional indicators to create a modified Typical Price. 4. Tailor the calculation methodology: Customize the calculation of the Typical Price based on the specific strategy. This may involve tweaking the formula or incorporating additional indicators, filters, or conditions. For example, a mean-reversion strategy may use the difference between the current price and the moving average as part of the Typical Price calculation. 5. Backtest and validate: It is crucial to backtest the strategy using historical data to ensure that the modified Typical Price calculation aligns with the desired outcomes. Evaluate the performance based on different variations of the Typical Price calculation to understand its impact on the overall trading or investment strategy. 6. Monitor and adapt: Once the strategy is implemented with the customized Typical Price calculation, closely monitor its performance. Regularly evaluate the results and make adjustments as needed based on market conditions or the discovery of potential improvements. Remember, customizing the calculation of Typical Price should be driven by the specific strategy being employed and must be evaluated based on its effectiveness in achieving the desired outcomes. What are the advantages of using logarithmic scales when analyzing Typical Price? There are several advantages of using logarithmic scales when analyzing Typical Price: 1. 
Representation of percentage changes: Logarithmic scales can effectively represent percentage changes in a more equal and proportional manner. This is especially beneficial when analyzing assets or securities that have experienced significant price movements over time. By using logarithmic scales, the plotted values will be equally spaced based on their percentage changes rather than their absolute changes. 2. Visualization of multiplicative relationships: Logarithmic scales allow for the visualization of multiplicative relationships between prices. This is particularly useful when analyzing trends and patterns, as it helps to identify relative changes in prices rather than absolute changes. 3. Clarity in long-term trends: Logarithmic scales provide better clarity when analyzing long-term trends, especially when dealing with exponential growth or decay. By compressing large price ranges on the y-axis, logarithmic scales make it easier to observe long-term patterns and trends that may not be apparent on a linear scale. 4. Comparability of different assets: Logarithmic scales make it easier to compare and analyze the relative performance of different assets or securities. By aligning percentage changes on a uniform scale, it becomes simpler to identify relative strength or weakness between various assets, even when they are priced differently. 5. Highlighting smaller changes: Logarithmic scales are particularly effective in highlighting smaller changes in price for assets with low prices or lower volatility. On a linear scale, these smaller movements might not be easily visible, but logarithmic scales magnify these changes, making them more apparent and aiding in the analysis. How to interpret the deviation of Typical Price from its moving averages? The deviation of the Typical Price from its moving averages can provide insights into the current trend and momentum of a security or asset. Here are a few ways to interpret this deviation: 1. 
Upward Deviation: When the Typical Price is consistently above its moving averages, it suggests an uptrend or bullish momentum. This indicates that the buying pressure is strong and the asset's price is consistently higher than its average. Traders and investors might interpret this as a signal to consider buying or holding the asset. 2. Downward Deviation: Conversely, when the Typical Price consistently falls below its moving averages, it indicates a downtrend or bearish momentum. This suggests that selling pressure is dominant, and the asset's price is consistently lower than its average. Traders and investors may interpret this as a signal to consider selling or avoiding the asset. 3. Fluctuations around Moving Averages: If the Typical Price deviates above and below its moving averages with no consistent pattern, it suggests a sideways or range-bound market. This means that the asset's price is fluctuating within a specific range and there is no clear trend or momentum. Traders and investors might interpret this as a signal to consider range-bound trading strategies or wait for a breakout before taking any significant positions. 4. Magnitude of Deviation: The magnitude of the deviation can also provide valuable information. A larger deviation indicates a stronger trend or momentum, while a smaller deviation suggests weaker or less significant price movements. Traders and investors can use this information to gauge the strength or weakness of the current trend and make appropriate trading decisions. It is important to note that individual traders may have their own interpretation methods and may combine the deviation of the Typical Price with other technical indicators or analysis techniques for a more comprehensive understanding of the market. How to use Typical Price to assess market volatility? Typical Price is a technical indicator commonly used to assess market volatility. 
It is calculated by taking the average of the high, low, and closing prices for a given period. Here's how you can use the Typical Price to assess market volatility: 1. Choose a time frame: Determine the period for which you want to assess market volatility. This can range from intraday to daily, weekly, or longer time frames. 2. Calculate the Typical Price: Add the high, low, and closing prices for each period, and divide the sum by 3. This will give you the Typical Price for that specific period. 3. Plot the Typical Price: Plot the Typical Price on a chart along with other relevant indicators or price data. 4. Observe the movement: Analyze the movement of the Typical Price. Higher values indicate higher volatility, while lower values indicate lower volatility. 5. Compare with other indicators: Compare the Typical Price with other volatility indicators or technical tools, such as Bollinger Bands or Average True Range, to get a more comprehensive understanding of market volatility. 6. Identify trends: Look for patterns or trends in the Typical Price movement. Volatility can often be associated with certain market conditions, such as periods of high uncertainty, economic events, or news releases. 7. Take caution during high volatility: High volatility can be an indication of potential market instability or risk. It may be prudent to adjust your trading strategies or risk management approaches during such periods. Remember that Typical Price should be used in conjunction with other technical indicators and analysis tools for a more reliable assessment of market volatility. It is also important to consider other fundamental factors and news events that can influence market movements. How to identify anomalies or outliers in Typical Price calculations? To identify anomalies or outliers in Typical Price calculations, you can follow these steps: 1. 
Calculate the Typical Price: The Typical Price is calculated as the average of the high, low, and closing prices for a given period. So, for each data point, add the high, low, and closing prices and divide the sum by 3. 2. Calculate the Typical Price range: Determine the range of Typical Prices for the data set. This can be done by finding the maximum and minimum Typical Prices. 3. Calculate the Typical Price standard deviation: Calculate the standard deviation of the Typical Prices. This helps measure how much the Typical Prices vary from the average. The higher the standard deviation, the more variation there is in the data. 4. Identify outliers: Once you have the Typical Price range and standard deviation, you can identify outliers. Outliers are data points that significantly deviate from the average Typical Price, indicating an anomaly. Typically, outliers are considered as data points that fall outside a range of two or three standard deviations from the mean. However, the specific range can be adjusted depending on the dataset and the industry. 5. Visualize the data: You can plot the Typical Prices on a graph to visually identify any outliers. Outliers will appear as points located far away from the majority of the data points. 6. Investigate the outliers: Once you've identified the outliers, examine them more closely. Determine if they are due to genuine anomalies or errors in data recording. Look for any relevant external factors that may have influenced the anomalies. 7. Take action as required: If the outliers are genuine anomalies, consider their impact on any analysis or decision-making. It may be necessary to exclude them from calculations or adjust your approach accordingly. However, if the outliers are errors, you should correct them or exclude them from your analysis. It's important to note that the identification of outliers depends on various factors such as the dataset, industry norms, and specific analysis requirements. 
Thus, the specific methodology for identifying anomalies in Typical Price calculations may vary in different contexts.
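The calculation and the standard-deviation outlier screen described above can be sketched together. The bar data is invented for illustration, and the two-standard-deviation threshold follows the text:

```python
import statistics

def typical_price(high, low, close):
    """Typical Price = (High + Low + Close) / 3."""
    return (high + low + close) / 3

# Hypothetical daily bars as (high, low, close); the ninth bar is an anomaly.
bars = [
    (12, 10, 11), (13, 11, 12), (12, 10, 11), (13, 11, 12),
    (12, 10, 11), (13, 11, 12), (12, 10, 11), (13, 11, 12),
    (30, 28, 29),
]
tps = [typical_price(h, l, c) for h, l, c in bars]

mean = statistics.mean(tps)
sd = statistics.pstdev(tps)
# Flag Typical Prices more than 2 standard deviations from the mean.
outliers = [tp for tp in tps if abs(tp - mean) > 2 * sd]
```

Note that a single large outlier inflates the standard deviation itself, so with very few data points a 2-sigma screen can miss the anomaly; more observations (or a robust spread measure) make the screen more reliable.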
{"url":"https://elvanco.com/blog/how-to-read-typical-price-are-calculated","timestamp":"2024-11-15T00:21:11Z","content_type":"text/html","content_length":"468989","record_id":"<urn:uuid:0d86fbae-beec-426b-b1e9-797d04a2df13>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00079.warc.gz"}
Lévy insurance risk process with Poissonian taxation

The idea of taxation in a risk process was first introduced by Albrecher, H. & Hipp, C. Lundberg’s risk process with tax. Blätter der DGVFM 28(1), 13–28, who suggested that a certain proportion of the insurer’s income is paid immediately as tax whenever the surplus process is at its running maximum. In this paper, a spectrally negative Lévy insurance risk model under taxation is studied. Motivated by the concept of randomized observations proposed by Albrecher, H., Cheung, E.C.K. & Thonhauser, S. Randomized observation periods for the compound Poisson risk model: Dividends. ASTIN Bulletin 41(2), 645–672, we assume that the insurer’s surplus level is only observed at a sequence of Poisson arrival times, at which the event of ruin is checked and tax may be collected by the tax authority. In particular, if the observed (pre-tax) level exceeds the maximum of the previously observed (post-tax) values, then a fraction of the excess will be paid as tax. Analytic expressions for the Gerber–Shiu expected discounted penalty function and the expected discounted tax payments until ruin are derived. The Cramér-Lundberg asymptotic formula is shown to hold true for the Gerber–Shiu function, and it differs from the case without tax by a multiplicative constant. Delayed start of tax payments will be discussed as well. We also take a look at the case where solvency is monitored continuously (while tax is still paid at Poissonian time points), as many of the above results can be derived in a similar manner. Some numerical examples will be given at the end.
{"url":"https://scholar.xjtlu.edu.cn:443/en/publications/l%C3%A9vy-insurance-risk-process-with-poissonian-taxation","timestamp":"2024-11-07T16:22:16Z","content_type":"text/html","content_length":"53779","record_id":"<urn:uuid:4f881552-09b3-44e9-a5a0-3f6aa0082690>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00132.warc.gz"}
Concat: Excel Formulae Explained - ExcelAdept

Key Takeaway:
• CONCAT is a formula in Excel that allows users to combine text, numbers, and dates into a single cell. This can be useful for creating lists, address labels, and other types of data entries.
• The syntax for CONCAT is simple, and beginners can quickly learn how to use it. Users can specify the range of cells they want to combine, and CONCAT will automatically concatenate the values in those cells.
• One of the most powerful features of CONCAT is its ability to combine multiple cells. Users can specify multiple cell ranges, and the function will automatically combine all values in those cells.
• Advanced users can also use CONCAT with IF statements and other formulas to create more complex data entries. By using IF statements, users can create conditionally formatted entries based on specific criteria.
• Some tips for using CONCAT effectively include using cell references instead of hard-coding values, and using concatenation characters to separate values in the concatenated cell. Users should also be aware of the maximum cell limit when using CONCAT to avoid errors.
• In conclusion, CONCAT is a powerful formula in Excel for combining data entries, and is easy to use even for beginners. With its ability to combine multiple cells and work with other formulas, it can save users a lot of time and effort in data entry tasks.

Are you struggling with Excel formulae? CONCAT can simplify your workload! In this article, you will learn the basics of the CONCAT function and how it can be used to manipulate data in Excel.

Syntax and usage of CONCAT

The CONCAT formula in Excel is used to merge two or more strings into a single cell. It is written as =CONCAT(text1, [text2],...[text_n]). The formula allows the user to combine any number of text strings by separating them with a comma. It is particularly useful for combining data from multiple cells into a single cell.
The CONCATENATE function can also be used for the same purpose, but it is now considered outdated and replaced by the shorter CONCAT formula. One unique aspect of the CONCAT formula is that it automatically ignores blank cells, so there is no need to worry about extra spaces or errors appearing in the final result. Additionally, the formula allows for concatenation with or without a custom delimiter. The delimiter can be added by placing it in quotes within the formula, such as =CONCAT(text1, "-", [text2]), which would insert a hyphen between text1 and text2. Interestingly, the CONCAT function was introduced in Excel 2016 alongside the TEXTJOIN function, which additionally lets you specify a delimiter to insert between every item. CONCAT itself replaces the older CONCATENATE function, allowing for a more streamlined process for combining text strings.

Using CONCAT with text, numbers, and dates

When working with data, combining text, numbers, and dates can help provide meaningful insights. CONCAT is an Excel formula that can be used to combine such data. Here's a guide on how to use CONCAT:

1. Open Excel and click on an empty cell where you want to display the combined data.
2. Type "=CONCAT(" followed by the first text or number you want to combine.
3. Add a comma after the first text or number and type in the second text or number you want to combine.
4. Repeat step 3 until you have listed all the data you want to combine, separating each with a comma.
5. Close the CONCAT formula with a closing parenthesis.
6. Press enter to see the combined data in the selected cell.

Combined values can also include special characters such as spaces, hyphens, and slashes. It's important to note that when combining dates, the date must be converted to a text format first. A useful suggestion when using CONCAT is to include separators between the combined values, such as commas or spaces, to make the result easier to read.
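For readers who like to see the logic spelled out, here is a hypothetical Python model (not Excel itself) of the behavior described above — joining text, numbers, and dates, skipping blanks, and converting dates to text first:

```python
# Hypothetical Python sketch (not Excel) of the CONCAT behavior described
# above: values are joined left to right, blank cells contribute nothing,
# and dates must be converted to text before joining.
from datetime import date

def concat(*values):
    """Join values into one string, skipping blanks, like Excel's CONCAT."""
    parts = []
    for v in values:
        if v is None or v == "":          # blank cells are ignored
            continue
        if isinstance(v, date):           # dates are converted to text first
            v = v.isoformat()
        parts.append(str(v))
    return "".join(parts)

# Combining text, a number, and a date with separator characters:
print(concat("Order-", 42, " / ", date(2024, 1, 15)))
# Order-42 / 2024-01-15
```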
Additionally, it's good practice to keep a copy of the original data in case you need to modify or edit the combined data in the future.

Combining multiple cells with CONCATENATE

When working with Excel, there may be times when you need to combine multiple cells into one. This can be easily achieved using the CONCATENATE function. Here's how:

1. First, select the cell where you want the combined data to appear.
2. Type the CONCATENATE function and open the parentheses.
3. Select the first cell that you want to combine and type a comma.
4. Select the next cell that you want to combine. If you want to add a space or other character between the cell values, you can add it in quotations after the comma.
5. Close the parentheses and press enter.

Using this function, you can quickly and easily combine multiple cells of data into one. It is important to note that the CONCATENATE function is not the only way to combine data in Excel. There are also other functions such as TEXTJOIN and CONCAT that may be more suitable for certain situations.

Pro tip: To avoid typing out the CONCATENATE function each time, you can use the ampersand (&) symbol to achieve the same result. Simply type the first cell value, followed by an ampersand, and then the next cell value (with any desired character in between), and continue with each cell as desired.

Using CONCAT with IF and other formulas

Incorporating CONCAT with IF and other formulas is an essential part of Excel. The combination can help you manipulate data more efficiently and accurately, and it comes in handy in a wide range of applications. Using the CONCAT formula with IF statements, VLOOKUP, and SUMIF functions can significantly increase the effectiveness of data manipulation. CONCAT can combine data from different columns, while IF or VLOOKUP can be used to apply a specific condition or criterion, ensuring accuracy and consistency.
SUMIF is useful for summing data based on a specific condition. Apart from the above, using CONCAT with the LEFT, RIGHT, MID, and INDEX formulas can be quite handy. This can help users extract only the required data and concatenate it with other data.

Do not miss out on this opportunity to learn the essentials that can revolutionize the way you work with data. Start incorporating CONCAT with other formulas now, and stay ahead of the competition. Make sure your data is always accurate and up-to-date to make informed decisions.

Tips and tricks for using CONCAT effectively

When working with CONCAT formulae in Excel, it is important to have a good understanding of the various tips and tricks that can help you optimize your work. Here are three essential points to keep in mind:

1. Batch Concatenation: To avoid errors when working with large volumes of data, it is recommended to use the CONCAT function, which can concatenate entire cell ranges at once, rather than listing every cell individually as the older CONCATENATE function requires.
2. Using Delimiters: When joining cells with text strings in CONCAT, it is always best to use a delimiter or separator, like a comma, a hyphen, or a slash. This makes it easier to separate the various items joined together using CONCAT.
3. Using Dynamic References: One unique feature of CONCAT formulae is that they can work with dynamic cell references. This means that you can use it to join dynamic ranges that expand or contract as new data is entered into the spreadsheet.

Here's a fascinating fact: the CONCAT formula was first introduced by Microsoft in 2016 as part of the Excel 2016 update. This formula replaces the previous CONCATENATE function that has been used by Excel users for many years.
By following these tips and techniques, you will be able to use CONCAT formulae effectively and efficiently for all your Excel spreadsheet work. Happy Excel-ing! Five Facts About CONCAT: Excel Formulae Explained: • ✅ CONCAT is an Excel formula that allows you to combine multiple cells of text or numbers into one cell. (Source: Excel Easy) • ✅ The CONCAT formula was introduced in Excel 2016 and later versions. (Source: Exceljet) • ✅ CONCATENATE was a similar formula that CONCAT replaced in Excel. (Source: Ablebits) • ✅ CONCAT can also be used with arrays and range references. (Source: Microsoft Support) • ✅ CONCATENATE is still supported in older versions of Excel for backward compatibility. (Source: Excel Campus) FAQs about Concat: Excel Formulae Explained What is CONCAT in Excel? CONCAT is a function in Excel used to combine or merge multiple strings or text values into a single string. CONCAT stands for concatenate. How to use CONCAT in Excel? To use CONCAT, you need to select a cell where you want to display the result, enter the formula starting with = and follow it with CONCAT, and then list the text strings or cell references that you want to combine within parentheses and separated by commas. What is the syntax for CONCAT in Excel? The syntax for CONCAT in Excel is =CONCAT(string1, [string2], [string3], [string4], …). What is the difference between CONCAT and CONCATENATE in Excel? There is no difference between CONCAT and CONCATENATE in Excel. CONCATENATE was the function used in older versions of Excel, but CONCAT was introduced in later versions for simplicity. What is the maximum number of arguments that can be used in CONCAT in Excel? The maximum number of arguments that can be used in CONCAT in Excel is 255. Can CONCAT in Excel be used for numbers or dates? Technically, CONCAT in Excel can be used for numbers or dates, but the result will be a text string. 
Therefore, if the combined values need to keep behaving as numbers or dates, format them deliberately (for example with Excel's TEXT function) before concatenating.
{"url":"https://exceladept.com/concat-excel-formulae-explained/","timestamp":"2024-11-06T13:59:54Z","content_type":"text/html","content_length":"65895","record_id":"<urn:uuid:ed228567-f646-4168-a51d-24e81f306011>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00445.warc.gz"}
Calculus III

[ARCHIVED CATALOG] 2023-2024 Undergraduate Catalog

MATH 0201 - Calculus III
Credits: 4

A continuation of Calculus II. Topics include vector functions and calculus of curves in space, differential calculus of multivariate functions, integral calculus of multivariate functions, polar, spherical and cylindrical coordinates, parametric equations, Cartesian coordinates, line and surface integrals.

Prerequisites: MATH 0106 and MATH 0218.

© 2024 Westfield State University.
{"url":"https://catalog.westfield.ma.edu/preview_course.php?catoid=40&coid=56507","timestamp":"2024-11-12T12:42:30Z","content_type":"text/html","content_length":"8254","record_id":"<urn:uuid:b28a94e1-11df-490c-ac40-26eb4a69c446>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00774.warc.gz"}
Re: Create Sample with Specific Distribution What is the best way to take a sample from a population while controlling its characteristics? Is it possible to take a sample while specifying a min/max/mean/std dev? It's easy enough to filter something down so that you end up with the necessary averages for whatever characteristics you're looking for, but I need the distributions to look a specific way too. Any thoughts on the best way to approach this? 07-02-2014 11:33 PM
{"url":"https://communities.sas.com/t5/SAS-Procedures/Create-Sample-with-Specific-Distribution/m-p/164607/highlight/true","timestamp":"2024-11-02T21:05:38Z","content_type":"text/html","content_length":"206985","record_id":"<urn:uuid:06536faa-28d6-4e76-ad5a-9ef1a89df2ab>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00077.warc.gz"}
FAQ - Shakashaka - online puzzle game

Shakashaka (Proof of Quilt) is a logic puzzle with simple rules and challenging solutions. Shakashaka is played on a rectangular grid. The grid has both black cells and white cells in it. The objective is to place black triangles in the white cells in such a way that they form white rectangular (or square) areas.

- The triangles are right-angled and occupy half of a white square divided diagonally.
- You can place triangles only in white cells.
- The numbers in the black cells indicate how many triangles are adjacent, vertically and horizontally.
- The white rectangles can be either straight or rotated at 45°.
{"url":"https://www.puzzle-shakashaka.com/faq.php","timestamp":"2024-11-05T09:28:28Z","content_type":"text/html","content_length":"21903","record_id":"<urn:uuid:51248ecf-fa7d-4424-8ae5-44c3d9beae54>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00816.warc.gz"}
4.3.1 FFT and IFFT

FFT (Fast Fourier Transform) is able to convert a signal from the time domain to the frequency domain. IFFT (Inverse FFT) converts a signal from the frequency domain to the time domain. The FFT of a non-periodic signal will cause the resulting frequency spectrum to suffer from leakage. Origin provides several windows for performing FFT to suppress leakage.

What You Will Learn

In this tutorial, you will learn how to:
• Perform FFT on a signal with different windows.
• Recover the original signal from the spectrum.
• Perform FFT on a graph by using the FFT gadget.

FFT Gadget

Origin's FFT gadget places a rectangle object on a signal plot, allowing you to perform FFT on the data contained in the rectangle. This is convenient for quickly observing the FFT effect on the data. The following tutorial shows how to use the FFT gadget on the signal plot.

1. Start with a new workbook, and import the data <Origin Installation Directory>\Samples\Signal Processing\Chirp Signal.dat.
2. Highlight column B and make a line plot by clicking menu item Plot>2D: Line: Line.
3. With the plot active, select menu Gadgets: FFT... to start the FFT gadget.
4. Using the default settings, click OK to add the rectangular Region of Interest (ROI) to the graph.
5. Note that the FFTPREVIEW graph is created showing the FFT results for the selected data.
6. You can move the rectangle left and right to cover different portions of the data. You can also change the width of the rectangle to cover different numbers of data points. Repositioning or resizing the ROI changes the FFTPREVIEW graph.

FFT

In this example, we are going to change the window for suppressing the spectrum leakage.

1. Use the same data as the FFT Gadget subsection above.
2. Highlight column B, then select menu Analysis: Signal Processing: FFT: FFT.... This opens the FFT: fft1 dialog box.
3. In the dialog, check the Auto Preview box at the bottom to preview the result in the right panel.
Change the Window to Blackman, but keep the remaining default settings. In the right panel, we can see a sharp, narrow peak in the Amplitude spectrum. The Blackman window has suppressed the spectrum leakage very well.
4. Click OK to generate the resulting data and graphs.

IFFT

This example will show how to recover the signal from the results of doing an FFT. To do so, the settings for FFT and IFFT need to be the same: the Spectrum Type needs to be Two-sided and the Window needs to be set to Rectangle.

1. Start with your FFT results, above, and click on any of the green locks. Select Change Parameters... from the menu to open the dialog box again.
2. As mentioned above, the Window needs to be set to Rectangle, and the Spectrum Type should be Two-sided, so edit these two settings.
3. Click OK and the results will be modified.
4. Go to the FFTResultData1 worksheet. We can see one column is Complex, one column is Real, and one is Imaginary. Here, we can use the Complex column (the Real and Imaginary columns can also be used). Highlight it and select the menu item Analysis: Signal Processing: FFT: IFFT... to open the FFT: ifft1 dialog. (Note: if using the Real and Imaginary columns, the first line for Input should be the Real column, and the Imaginary box should point to the Imaginary column.) Check the Auto Preview check box at the bottom to preview the result in the right panel.
5. Keep the default settings and click the OK button.
6. Now, we can make a comparison between the IFFT result (in the IFFTResultData1 worksheet) and the original signal data. As the image below shows, they are almost the same.
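For readers who want to reproduce this workflow outside Origin, here is a minimal NumPy sketch (my own, not part of the tutorial; the chirp and sampling rate are made up) showing an FFT with a Blackman window and exact signal recovery via IFFT:

```python
# A minimal NumPy sketch (not Origin) of the workflow above: FFT of a chirp
# with a Blackman window to suppress leakage, then signal recovery via IFFT.
import numpy as np

fs = 100.0                                      # assumed sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)                     # 2 seconds of samples
signal = np.sin(2 * np.pi * (5 + 10 * t) * t)   # a simple chirp

# FFT with a Blackman window (reduces spectral leakage in the amplitude plot)
window = np.blackman(len(signal))
spectrum_windowed = np.fft.fft(signal * window)
freqs = np.fft.fftfreq(len(signal), d=1 / fs)

# Recovery: as the tutorial notes, use a rectangular window (i.e. no window)
# and the two-sided spectrum so that IFFT inverts FFT exactly.
spectrum = np.fft.fft(signal)
recovered = np.fft.ifft(spectrum).real

print(np.allclose(recovered, signal))           # True: IFFT recovers the signal
```

Note that recovery only works when no tapering window was applied before the forward transform, which is exactly why the tutorial switches the Window back to Rectangle before running IFFT.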
{"url":"https://cloud.originlab.com/doc/Tutorials/FFT-and-IFFT","timestamp":"2024-11-05T01:01:17Z","content_type":"text/html","content_length":"122079","record_id":"<urn:uuid:f37cd002-b2ae-4d4e-86ef-8cedd6f1e427>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00069.warc.gz"}
Knapsack problem

1 Introduction

The knapsack problem or rucksack problem is a problem in combinatorial optimization. Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.

1.1 Definitions

The most common problem is the 0-1 knapsack problem, which restricts the number x_{i} of copies of each kind of item to zero or one. Given a set of n items numbered from 1 up to n, each with a weight w_{i} and a value v_{i}, along with a maximum weight capacity W,

\begin{split} & \max\sum_{i=1}^n v_ix_i\\ \text{s.t. } & \sum_{i=1}^n w_ix_i\le W,\;x_i \in \{0,1\}\\ \end{split}

The decision problem form of the knapsack problem (Can a value of at least V be achieved without exceeding the weight W?) is NP-complete, thus there is no known algorithm that is both correct and fast (polynomial-time) in all cases.

2 Approaches

2.1 Exact solutions

2.1.1 Full search

As for other discrete tasks, the knapsack problem can be solved by exhaustively enumerating all possible solutions. Suppose there are n items that can be packed in a knapsack. It is necessary to determine the maximum value of the cargo whose weight does not exceed W. For each item there are 2 options: the item is either put in the knapsack or not. Enumeration of all possible options therefore has time complexity O(2^n), which allows the method to be used only for a small number of items. As the number of items grows, the task becomes unsolvable by this method in acceptable time.

2.1.2 Dynamic programming algorithm

The dynamic programming solution for the 0/1 knapsack problem runs in pseudo-polynomial time. Assume w_{1},\,w_{2},\,\ldots ,\,w_{n}, W are strictly positive integers.
Define m(i,w) to be the maximum value that can be attained with weight less than or equal to w using items up to i (the first i items). We can define m(i,w) recursively as follows:

m(i, w) = \begin{cases} 0 & i = 0 \\ m(i-1,w) & w_{i} > w \\ \max\big(m(i-1,w),\; m(i-1,w-w_{i})+v_{i}\big) & w_{i} \le w \end{cases}

The solution can then be found by calculating m(n,W). To do this efficiently, we can use a table to store previous computations. This solution will therefore run in O(nW) time and O(nW) space.

2.2 Approximation algorithms

2.2.1 Greedy algorithm

To solve the problem with the greedy algorithm, sort the items by their specific value (that is, the ratio of the value of an item to its weight) and put the items with the highest specific value into the knapsack first. The running time of this algorithm is the sum of the sorting time and the packing time. Sorting the items takes O(N \log N); packing then takes O(N). The total complexity is O(N \log N), or O(N) if the data is already sorted.

It should be understood that the greedy algorithm can produce an answer arbitrarily far from optimal. For example, if one item has a weight of 1 and a value of 2, and another has a weight of W and a value of W, then the greedy algorithm will achieve a total value of 2, while the optimal answer is W.

2.2.2 Probabilistic algorithm

This is a modification of the greedy algorithm. The decision to include the item with index j in the knapsack is made with probability \frac{\lambda_j}{\sum_{i=1}^n\lambda_i}, where \lambda_i is the ratio of the value of an item to its weight. The algorithm is run several times and the best solution found is kept. As the number of runs grows, the probability that the best solution found is optimal tends to 1. With m runs, the total complexity is O(mN\log N) operations.
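The dynamic-programming approach described above can be sketched in Python; note that the recurrence adds the item's value v_i (not its weight) when the item is taken:

```python
# A sketch of the O(nW) dynamic-programming solution for 0/1 knapsack, using
# the recurrence m(i, w) = max(m(i-1, w), m(i-1, w - w_i) + v_i).
def knapsack(values, weights, W):
    n = len(values)
    # m[i][w] = best value using the first i items with capacity w
    m = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            if weights[i - 1] > w:                  # item i does not fit
                m[i][w] = m[i - 1][w]
            else:                                    # skip it or take it
                m[i][w] = max(m[i - 1][w],
                              m[i - 1][w - weights[i - 1]] + values[i - 1])
    return m[n][W]

# The greedy counterexample from the text (W = 10): value/weight ratios
# favour the small item, but the optimal answer takes the big one.
print(knapsack([2, 10], [1, 10], 10))   # 10, while greedy would return 2
```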
Computational experiment

The above algorithms were implemented in the Python programming language; their source codes are available at the link below. The initial data for the task, namely the values v_{i} and weights w_{i} of the items, were randomly generated from the ranges v_{i}\in [0, 100] and w_{i}\in [100, 200]. The maximum weight capacity W was generated randomly from the interval W \in [0.5\sum w_i, 0.75\sum w_i].

3 Results

Dynamic programming algorithm vs probabilistic algorithm:

Probabilistic algorithm in the case when the number of items is fixed (150 items):
{"url":"https://fmin.xyz/docs/applications/knapsack_problem.html","timestamp":"2024-11-03T22:11:02Z","content_type":"application/xhtml+xml","content_length":"60849","record_id":"<urn:uuid:72bf9d78-66e5-4b89-a77c-234eafffb7ac>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00353.warc.gz"}
A Comprehensive Guide to Understanding ABS (Absolute Value) in Google Sheets - THINK Accounting Are you tired of dealing with negative numbers in your Google Sheets? Does the thought of trying to work with absolute values make your head spin? Don't fret, because in this comprehensive guide, we'll dive deep into the world of ABS (Absolute Value) in Google Sheets and help you make sense of it all. Understanding ABS (Absolute Value) When it comes to understanding the ABS function in Google Sheets, it's important to explore its syntax and practical applications. The ABS function is used to return the absolute value of a number, and it's as simple as using the formula ABS(number). Whether you're dealing with negative numbers or need to calculate the sum of multiple absolute values, ABS has got you covered. Exploring the Syntax of ABS Let's start by understanding the syntax of ABS. In Google Sheets, the ABS function is used to return the absolute value of a number. It's pretty straightforward - ABS(number) is all you need to remember. You simply replace "number" with the cell reference or the actual number you want to find the absolute value of. Easy peasy! But wait, there's more! Did you know that you can also use ABS with multiple arguments? That's right! You can calculate the sum of the absolute values of multiple numbers all at once. So, the next time you find yourself drowning in a sea of negative numbers, just remember ABS to the rescue! Practical Examples of ABS in Action Now that we're familiar with the syntax, let's dive into some practical examples of how ABS can be a life-saver in your Google Sheets adventures. Imagine you're keeping track of your expenses, and you want to find the total amount of money you've spent, regardless of whether it was positive or negative. Simply use the ABS function to get the absolute value of each expense, and voila! You'll have the grand total without any pesky negatives standing in your way. 
ABS can also be handy when dealing with percentages. Let's say you want to find the absolute change in value between two percentages. ABS makes it a breeze to disregard any negative signs and focus on the magnitude of the change. Helpful Tips & Tricks for Using ABS Now that you've got the hang of ABS, it's time to level up your skills with these helpful tips and tricks: • Remember that ABS always returns a positive value. It's like your very own positivity generator! • Use ABS in combination with other formulas for even more powerful calculations. ABS + SUM = unstoppable! • Don't be afraid to experiment! Google Sheets is a playground, so go ahead and try out different scenarios to become an ABS wizard. Avoiding Common Mistakes When Working with ABS As with any new skill, it's important to be aware of common pitfalls and avoid them like the plague. Here are some mistakes you'll want to steer clear of: 1. Forgetting to reference the correct cell or input the right number. Double-check your formulas before hitting that enter key! 2. Using ABS unnecessarily. Remember, ABS is your superhero ally, but you don't always need it. Use it wisely and sparingly. 3. Getting caught up in a negative mindset. ABS is all about embracing the positive, so leave those negative vibes behind! Troubleshooting: Why Isn't My ABS Function Working? Uh-oh, it seems like you've hit a roadblock. If your ABS function isn't working as expected, fear not! We've got some troubleshooting tips for you: 1. Check your cell formatting. Make sure your cells are set to the correct format for numbers. 2. Verify your formula syntax. Double-check that you've entered the ABS function correctly with the appropriate arguments. 3. Make sure you're referencing the correct cells. Sometimes, a simple typo can throw everything off. Exploring ABS and Its Relationship with Other Formulas Now that you're an ABS expert, let's take a moment to explore its relationship with other formulas in Google Sheets. 
ABS plays well with others, offering endless possibilities for powerful calculations. For example, combine ABS with SUM to find the sum of the absolute values of a range of cells. Imagine you have a dataset that includes both positive and negative numbers. By using ABS in conjunction with SUM, you can easily calculate the total magnitude of those numbers, regardless of their sign. This can be particularly useful when analyzing financial data, where positive and negative values often represent gains and losses. But ABS doesn't stop there. It also pairs beautifully with MAX and MIN. By combining ABS with MAX, you can determine the largest absolute value within a range of cells. This can be handy when you want to identify the most extreme deviation from a mean or when working with data that fluctuates significantly. On the other hand, using ABS with MIN allows you to find the smallest absolute value in a range, which can be useful when searching for the least significant difference between values. The versatility of ABS extends beyond mathematical calculations. It even teams up with conditional formatting to enhance your data visualization game. By applying ABS in conditional formatting rules, you can highlight cells based on certain criteria. For instance, you can use ABS to highlight cells with absolute values above a certain threshold, making it easier to identify outliers or values that require attention. This not only adds a visually appealing touch to your spreadsheet but also helps draw attention to important data points. So go forth, armed with the knowledge of ABS, and conquer your Google Sheets adventures with absolute confidence! Whether you're crunching numbers, analyzing trends, or presenting data, ABS will be your trusty companion, enabling you to unlock new insights and make informed decisions.
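These spreadsheet patterns are easy to prototype outside Sheets too; here is a tiny Python sketch (with hypothetical sample data, not from the article) mirroring ABS + SUM, ABS + MAX, and ABS + MIN:

```python
# A hypothetical Python sketch of the spreadsheet patterns above: ABS on its
# own, and ABS combined with SUM / MAX / MIN over a range of values.
expenses = [-120.50, 75.00, -33.25, 410.00]   # made-up sample data

total_magnitude = sum(abs(x) for x in expenses)    # like ABS + SUM
largest_swing = max(abs(x) for x in expenses)      # like ABS + MAX
smallest_swing = min(abs(x) for x in expenses)     # like ABS + MIN

print(total_magnitude)   # 638.75
print(largest_swing)     # 410.0
print(smallest_swing)    # 33.25
```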
Being in the finance world for over two decades, I've seen it all - from the highs of bull markets to the 'oh no!' moments of financial crashes. But here's the twist: I believe finance should be fun (yes, you read that right, fun!). As a dad, I've mastered the art of explaining complex things, like why the sky is blue or why budgeting is cool, in ways that even a five-year-old would get (or at least pretend to). I bring this same approach to THINK, where I break down financial jargon into something you can actually enjoy reading - and maybe even laugh at! So, whether you're trying to navigate the world of investments or just figure out how to make an Excel budget that doesn’t make you snooze, I’m here to guide you with practical advice, sprinkled with dad jokes and a healthy dose of real-world experience. Let's make finance fun together!
{"url":"https://www.think-accounting.com/formulas/a-comprehensive-guide-to-understanding-abs-absolute-value-in-google-sheets/","timestamp":"2024-11-08T20:13:22Z","content_type":"text/html","content_length":"98995","record_id":"<urn:uuid:83a26c0b-a764-42c9-9b0e-651967f97477>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00584.warc.gz"}
Third grade maths work sheets third grade maths work sheets Related topics: solve problems with percentages algebra 1 algebra worksheet and exam generator intermediate algebra answer book 2 step linear equations worksheets algebra 1 how to solve equations by graphing coordinate graph pictures printables phoenix calculator game ti-83 hint Squaring A Fraction how to solve slopes in algebra Author Message aopiesee Posted: Tuesday 09th of May 18:47 Hello all, I just began my third grade maths work sheets class. Boy! This thing is really difficult ! I just never seem to understand the point behind any concept. The result? My rankings suffer. Is there any expert who can lend me a helping hand? From: Finland Back to top espinxh Posted: Thursday 11th of May 08:59 Well of course there is. If you are confident about learning third grade maths work sheets, then Algebrator can be of great help to you. It is made in such a manner that almost anyone can use it. You don’t need to be a computer professional in order to operate the program. From: Norway Back to top Jot Posted: Friday 12th of May 08:28 Hi there. Algebrator is really fantastic! It’s been months since I tried this software and it worked like magic! Math problems that I used to spend solving for hours just take me 4-5 minutes to solve now. Just enter the problem in the program and it will take care of the solving and the best thing is that it shows the whole solution so you don’t have to figure out how did the software come to that answer. From: Ubik Back to top Mov Posted: Sunday 14th of May 08:43 I am a regular user of Algebrator. It not only helps me get my homework faster, the detailed explanations offered makes understanding the concepts easier. I strongly advise using it to help improve problem solving skills. Back to top vvriom017 Posted: Tuesday 16th of May 08:24 Hai guys and girls , Thank you very much for all your answers. 
I shall surely give Algebrator at https://softmath.com/algebra-policy.html a try and would keep you updated with my experience. The only thing I am particular about is the fact that the program should offer required aid on Algebra 1 which in turn would help me to complete my assignment before the deadline . From: Canada Back to top molbheus2matlih Posted: Wednesday 17th of May 20:18 You can download it from (softwareLinks) by paying a nominal fee. Good luck with your homework and let me know if your problems got solved. From: France Back to top
{"url":"https://softmath.com/algebra-software-3/third-grade-maths-work-sheets.html","timestamp":"2024-11-01T23:59:13Z","content_type":"text/html","content_length":"42549","record_id":"<urn:uuid:1b3bf934-baf1-4f74-a103-35c0db28c2d5>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00886.warc.gz"}
Virtual and Real Reserves in Uniswap v3

This is a great question: what are the real reserves and the virtual reserves?

The liquidity in Uniswap v3 is allocated within a custom price range.

x: the amount of token x
y: the amount of token y
L²: liquidity
P = y/x (this is the price of x)

Some boring math. Substituting x = y/P into x · y = L² gives (y/P) · y = L², so:

y = L · sqrt(P) [this is y]
x = L / sqrt(P) [this is x]

Virtual Reserves

We understand that liquidity is allocated within a custom price range [Pa, Pb]. The tokens the position actually holds are called the "real reserves". The real reserves are only part of the virtual reserves, because the position is bounded by the price range: x runs out at the maximum price Pb (where y is at its maximum), and y runs out at the minimum price Pa (where x is at its maximum).

real reserves

x − x(Pb) = real reserves of x
y − y(Pa) = real reserves of y

x(Pb) = L / sqrt(Pb) is the x amount at price Pb.
y(Pa) = L · sqrt(Pa) is the y amount at price Pa.

Equivalently:

x virtual reserves = x real reserves + x(Pb)
y virtual reserves = y real reserves + y(Pa)

Dr. Engin YILMAZ
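The formulas above can be checked numerically; here is a short Python sketch (my own, with made-up values for L, P, Pa, Pb):

```python
# A sketch (following the formulas above, not from the article) computing
# virtual and real reserves for a Uniswap v3 position with liquidity L
# over the price range [Pa, Pb], at current price P.
import math

def reserves(L, P, Pa, Pb):
    """Return (x_virtual, y_virtual, x_real, y_real) at current price P."""
    x_virtual = L / math.sqrt(P)
    y_virtual = L * math.sqrt(P)
    x_real = x_virtual - L / math.sqrt(Pb)   # x - x(Pb)
    y_real = y_virtual - L * math.sqrt(Pa)   # y - y(Pa)
    return x_virtual, y_virtual, x_real, y_real

# Made-up example values: L = 1000, current price 4, range [1, 16].
xv, yv, xr, yr = reserves(L=1000, P=4.0, Pa=1.0, Pb=16.0)
print(xv, yv)   # 500.0 2000.0  (note xv * yv = L^2 = 1,000,000)
print(xr, yr)   # 250.0 1000.0
```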
{"url":"https://veridelisi.medium.com/virtual-and-real-reserves-in-uniswap-v3-b7f4afae1118?responsesOpen=true&sortBy=REVERSE_CHRON&source=author_recirc-----d1e5631c77fe----0---------------------ed695527_aad2_455b_936e_e0a611cf1a48-------","timestamp":"2024-11-07T06:32:52Z","content_type":"text/html","content_length":"105875","record_id":"<urn:uuid:1bdeff62-43dd-42a1-8858-e606f09ed8af>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00559.warc.gz"}
Scientific inquiry into the wave nature of light

Planck cautiously insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself.^[12] In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery.^[13] However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material. He won the 1921 Nobel Prize in Physics for this work. Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle (later called the photon), with a discrete quantum of energy that was dependent on its frequency.^[14]

[Photo: The 1927 Solvay Conference in Brussels]

The foundations of quantum mechanics were established during the first half of the 20th century by Max Planck, Niels Bohr, Werner Heisenberg, Louis de Broglie, Arthur Compton, Albert Einstein, Erwin Schrödinger, Max Born, John von Neumann, Paul Dirac, Enrico Fermi, Wolfgang Pauli, Max von Laue, Freeman Dyson, David Hilbert, Wilhelm Wien, Satyendra Nath Bose, Arnold Sommerfeld, and others. The Copenhagen interpretation of Niels Bohr became widely accepted.

In the mid-1920s, developments in quantum mechanics led to its becoming the standard formulation for atomic physics. In the summer of 1925, Bohr and Heisenberg published results that closed the old quantum theory. Out of deference to their particle-like behavior in certain processes and measurements, light quanta came to be called photons (1926). In 1926 Erwin Schrödinger suggested a partial differential equation for the wave functions of particles like electrons.
And when effectively restricted to a finite region, this equation allowed only certain modes, corresponding to discrete quantum states – whose properties turned out to be exactly the same as implied by matrix mechanics.^[15] From Einstein’s simple postulation was born a flurry of debating, theorizing, and testing. Thus, the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927. It was found that subatomic particles and electromagnetic waves are neither simply particle nor wave but have certain properties of each. This originated the concept of wave–particle duality. By 1930, quantum mechanics had been further unified and formalized by the work of David Hilbert, Paul Dirac and John von Neumann^[16] with greater emphasis on measurement, the statistical nature of our knowledge of reality, and philosophical speculation about the ‘observer’. It has since permeated many disciplines, including quantum chemistry, quantum electronics, quantum optics, and quantum information science. Its speculative modern developments include string theory and quantum gravity theories. It also provides a useful framework for many features of the modern periodic table of elements, and describes the behaviors of atoms during chemical bonding and the flow of electrons in computer semiconductors, and therefore plays a crucial role in many modern technologies. While quantum mechanics was constructed to describe the world of the very small, it is also needed to explain some macroscopic phenomena such as superconductors^[17] and superfluids.^[18] The word quantum derives from the Latin, meaning “how great” or “how much”.^[19] In quantum mechanics, it refers to a discrete unit assigned to certain physical quantities such as the energy of an atom at rest (see Figure 1).
The discovery that particles are discrete packets of energy with wave-like properties led to the branch of physics dealing with atomic and subatomic systems which is today called quantum mechanics. It underlies the mathematical framework of many fields of physics and chemistry, including condensed matter physics, solid-state physics, atomic physics, molecular physics, computational physics, computational chemistry, quantum chemistry, particle physics, nuclear chemistry, and nuclear physics.^[20] Some fundamental aspects of the theory are still actively studied.^[21]
{"url":"https://themepush.com/demo-mediumish/scientific-inquiry-into-the-wave-nature-of-light/","timestamp":"2024-11-14T16:46:49Z","content_type":"text/html","content_length":"142640","record_id":"<urn:uuid:075ff9d7-c588-460c-9a0a-933d3e400fdf>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00391.warc.gz"}
A Computable Universe - Roger Penrose On Nature As Computation
Written by Mike James
Sunday, 03 June 2012

A Computable Universe is a collection of papers on the nature of computation and computation in nature. Is it time that computation took its place as the theory of everything?

Update: The introduction to the book is now available as a free pdf download - see More Information.

Following on from the recent ten-year anniversary of the publication of Stephen Wolfram's A New Kind of Science, we have another "big book" on how computation might fit in with the other sciences: A Computable Universe. And notice there is no question mark at the end of the title. The publication of a collection of papers on the nature of computation and computation in nature is part of the Turing Year celebrations. Edited by Hector Zenil and published by World Science, A Computable Universe is dedicated to the memory of Alan M Turing on the 100th anniversary of his birth. The foreword by Roger Penrose is available as a pdf for you to read.

The papers, many of which are available online, discuss the foundations of computation in relation to nature. They take information and computation to be key to understanding and explaining the basic structure underpinning physical reality and focus on two main questions: What is computation? How does nature compute?

The articles range from the historical perspective of Dorian Swade's "Origins of Digital Computing" and overviews of computation through computation in biology, e.g. "Bacteria, Turing machines and Hyperbolic Cellular Automata", on to physics, e.g. "The Computable Universe Hypothesis", and so on. There is also a section on quantum computation which is perhaps the most speculative of all.
Notable in this section is "What is Computation and (How) Does Nature Compute", which comes from a round table discussion, and "The Universe as Quantum Computer" by Seth Lloyd. It also includes a new edition of Konrad Zuse's "Calculating Space". Like any multidisciplinary volume, the range is from the almost crackpot to the completely opaque. Reading it should be an adventure.

Roger Penrose's preface is worth recommending even to the general reader. It starts by going over the movement of physics as a predictive computation device from Newton to Einstein and quantum mechanics, a move apparently taking us from the continuum to the discrete. Then we have a summary of computational theory itself in the form of the Turing-Church thesis. Soon though, quantum mechanics raises its head and we confront the problem of measurement. Penrose then argues that a possible solution is that gravity is responsible for state collapse. The connection with computation is that this theory would have to be non-computable in some way to allow for non-locality. The reason put forward is the Gödel incompleteness theorems, which are taken to mean that human thought must in some sense embody some sort of non-computability principle. In short, Gödel's theorems imply that human thought is not the result of a computational process. From here we have a recap of the idea that human thought escapes computation by being quantum mechanical.

If you are a computer scientist or a physicist then there is much here that you are simply not going to agree with. However, it is well argued and you could spend some time trying to see the flaws in the reasoning, if any. It seems obvious that computation is just the continuation of theorizing by other means. The problem is that the connection between the digital and the seemingly unavoidable discretization at the Planck length and time just doesn't seem to want to fit together.
Some new ideas are needed and in the meantime we just wait for a new Turing-like polymath who can see the connections. To pre-order a copy of A Computable Universe click on the Amazon link in the sidebar.

More Information
Foreword by Sir Roger Penrose
Introducing the Computable Universe

Further Reading
A New Kind of Science Is Ten
The Universe as a Computer
Konrad Zuse
Alan Turing Year

Last Updated ( Wednesday, 04 July 2012 )
{"url":"https://www.i-programmer.info/news/112-theory/4313-a-computable-universe-roger-penrose-on-nature-as-computation.html","timestamp":"2024-11-06T14:00:05Z","content_type":"text/html","content_length":"37110","record_id":"<urn:uuid:de43157e-6ca1-4c27-b82a-6e415effda77>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00058.warc.gz"}
On the approximation of bounded functions by trigonometric polynomials in Hausdorff metric

For citation: Sadekova E. H. On the approximation of bounded functions by trigonometric polynomials in Hausdorff metric. Izvestiya of Saratov University. Mathematics. Mechanics. Informatics, 2023, vol. 23, iss. 2, pp. 169-182. DOI: 10.18500/1816-9791-2023-23-2-169-182, EDN: JKUQAS

The article discusses a method for constructing a spline function to obtain order-exact estimates for the approximation of bounded functions by trigonometric polynomials in the Hausdorff metric. The introduction provides a brief history of the approximation of continuous and bounded functions in the uniform metric and the Hausdorff metric. Section 1 contains the main definitions and necessary facts, and formulates the main result. An estimate for the indicated approximations is obtained from Jackson's inequality for uniform approximations. In Section 2 auxiliary statements are proved. So, for an arbitrary $2\pi$-periodic bounded function, a spline function is constructed. Then, estimates are obtained for the best approximation, variation, and modulus of continuity of the given spline function. Section 3 contains proofs of the main results and final comments.

1. Jackson D. Ueber die Genauigkeit der Annäherung stetiger Funktionen durch ganze rationale Funktionen gegebenen Grades und trigonometrische Summen gegebener Ordnung. Inaugural–Dissertation. Gottingen, 1911. 99 p. (in German). 2. Daugavet I. K. Vvedenie v teoriyu priblizheniy funktsiy [Introduction to the Theory of Approximation of Functions]. Leningrad, Leningrad State University Publ., 1977. 184 p. (in Russian). 3. Sendov B. Approximation of functions with algebraic completeness with respect to a Hausdorff type metric. Annuaire de l’Universite de Sofia. Faculte des sciences physiques et mathematiques. Sofia, Nauka i izkustvo, 1962, vol. 55, pp.
1–39 (in Bulgarian). 4. Dolzhenko E. P., Sevast’yanov E. A. Approximations of functions in the Hausdorff metric by piecewise monotonic (in particular, rational) functions. Mathematics of the USSR-Sbornik, 1976, vol. 30, iss. 4, pp. 449–477. https://doi.org/10.1070/SM1976v030n04ABEH002283 5. Veselinov V. M. Approximation of functions by means of trigonometric polynomials with respect to a metric of Hausdorff type. Mathematica (Cluj), 1967, vol. 9, iss. 1, pp. 185–199 (in Russian). 6. Dolzhenko E. P., Sevast’yanov E. A. On the dependence of properties of functions on their degree of approximation by polynomials. Mathematics of the USSR-Izvestiya, 1978, vol. 12, iss. 2, pp. 255–288. https://doi.org/10.1070/IM1978v012n02ABEH001853 7. Sendov B. Kh., Popov V. A. The exact asymptotic behavior of the best approximation by algebraic and trigonometric polynomials in the Hausdorff metric. Mathematics of the USSR-Sbornik, 1972, vol. 18, iss. 1, pp. 139–149. https://doi.org/10.1070/SM1972v018n01ABEH001621 8. Sendov B. Kh., Popov V. A. On a generalization of Jackson’s theorem for best approximation. Journal of Approximation Theory, 1973, vol. 9, iss. 2, pp. 102–111. https://doi.org/10.1016/0021-9045 9. Boyanov T. P. The exact asymptotics of the best Hausdorff approximation of classes of functions with a given modulus of continuity. Serdika Bulgarian Mathematical Journal, 1980, vol. 6, pp. 84–97 (in Russian).
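For context, the Jackson inequality invoked in the abstract bounds the best uniform approximation by the modulus of continuity. The classical statement has the form below (given here from the standard literature, not quoted from the paper):

```latex
E_n(f) \le c\,\omega\!\left(f, \frac{1}{n}\right),
```

where $E_n(f)$ is the best uniform approximation of a continuous $2\pi$-periodic function $f$ by trigonometric polynomials of order at most $n$, $\omega$ is the modulus of continuity, and $c$ is an absolute constant.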
{"url":"https://mmi.sgu.ru/en/articles/on-the-approximation-of-bounded-functions-by-trigonometric-polynomials-in-hausdorff-metric","timestamp":"2024-11-12T03:06:33Z","content_type":"application/xhtml+xml","content_length":"40903","record_id":"<urn:uuid:7f3f8fab-6178-467a-b3bd-212cfc65eeae>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00706.warc.gz"}
Maryna Viazovska – Feature Column

What is the $E_8$ lattice that appears in Viazovska's proof? What makes it special? How do you use it to pack spheres?

Eight-dimensional spheres and the exceptional $E_8$
Ursula Whitcher
Mathematical Reviews (AMS)

In Helsinki this summer, Ukrainian mathematician Maryna Viazovska was awarded a Fields Medal "for the proof that…
{"url":"https://mathvoices.ams.org/featurecolumn/tag/maryna-viazovska/","timestamp":"2024-11-05T15:22:46Z","content_type":"text/html","content_length":"62340","record_id":"<urn:uuid:01d4c8c3-c6b0-4437-bfa5-cfd649e65d50>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00817.warc.gz"}
Nanometers to Fingerbreadth

Nanometers to Fingerbreadth Converter

Enter Nanometers

Switch to Fingerbreadth to Nanometers Converter

How to use this Nanometers to Fingerbreadth Converter

Follow these steps to convert given length from the units of Nanometers to the units of Fingerbreadth.
1. Enter the input Nanometers value in the text field.
2. The calculator converts the given Nanometers into Fingerbreadth in real time, using the conversion formula, and displays it under the Fingerbreadth label. You do not need to click any button. If the input changes, the Fingerbreadth value is re-calculated, just like that.
3. You may copy the resulting Fingerbreadth value using the Copy button.
4. To view a detailed step by step calculation of the conversion, click on the View Calculation button.
5. You can also reset the input by clicking on the Reset button present below the input field.

What is the Formula to convert Nanometers to Fingerbreadth?

The formula to convert given length from Nanometers to Fingerbreadth is:
Length[(Fingerbreadth)] = Length[(Nanometers)] / 19050000.0000762
Substitute the given value of length in nanometers, i.e., Length[(Nanometers)], in the above formula and simplify the right-hand side. The resulting value is the length in fingerbreadth, i.e., Length[(Fingerbreadth)].
Calculation will be done after you enter a valid input.

Consider that the latest smartphone screen has a pixel size of 500 nanometers. Convert this pixel size from nanometers to Fingerbreadth.
The length in nanometers is: Length[(Nanometers)] = 500
The formula to convert length from nanometers to fingerbreadth is:
Length[(Fingerbreadth)] = Length[(Nanometers)] / 19050000.0000762
Substitute the given length Length[(Nanometers)] = 500 in the above formula.
Length[(Fingerbreadth)] = 500 / 19050000.0000762
Length[(Fingerbreadth)] = 0.00002624671916
Final Answer: Therefore, 500 nm is equal to 0.00002624671916 fingerbreadth.
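The formula above is straightforward to capture in code. A minimal Python sketch (the function names are my own, not part of the site's tool):

```python
# Conversion factor used by this page:
# 1 fingerbreadth = 19050000.0000762 nanometers (about 0.01905 m).
NM_PER_FINGERBREADTH = 19050000.0000762

def nm_to_fingerbreadth(nm):
    """Length[fingerbreadth] = Length[nanometers] / 19050000.0000762"""
    return nm / NM_PER_FINGERBREADTH

def fingerbreadth_to_nm(fb):
    """Inverse conversion: Length[nanometers] = Length[fingerbreadth] * factor."""
    return fb * NM_PER_FINGERBREADTH

# The 500 nm pixel-size example worked above:
print(nm_to_fingerbreadth(500))
```

Running it on the worked example reproduces 0.00002624671916 fingerbreadth to the precision shown in the text.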
Consider that an advanced semiconductor has a feature size of 50 nanometers. Convert this size from nanometers to Fingerbreadth.
The length in nanometers is: Length[(Nanometers)] = 50
The formula to convert length from nanometers to fingerbreadth is:
Length[(Fingerbreadth)] = Length[(Nanometers)] / 19050000.0000762
Substitute the given length Length[(Nanometers)] = 50 in the above formula.
Length[(Fingerbreadth)] = 50 / 19050000.0000762
Length[(Fingerbreadth)] = 0.000002624671916
Final Answer: Therefore, 50 nm is equal to 0.000002624671916 fingerbreadth.

Nanometers to Fingerbreadth Conversion Table

The following table gives some of the most used conversions from Nanometers to Fingerbreadth.

Nanometers (nm) | Fingerbreadth (fingerbreadth)
0 nm | 0 fingerbreadth
1 nm | 5.249e-8 fingerbreadth
2 nm | 1.0499e-7 fingerbreadth
3 nm | 1.5748e-7 fingerbreadth
4 nm | 2.0997e-7 fingerbreadth
5 nm | 2.6247e-7 fingerbreadth
6 nm | 3.1496e-7 fingerbreadth
7 nm | 3.6745e-7 fingerbreadth
8 nm | 4.1995e-7 fingerbreadth
9 nm | 4.7244e-7 fingerbreadth
10 nm | 5.2493e-7 fingerbreadth
20 nm | 0.00000104987 fingerbreadth
50 nm | 0.00000262467 fingerbreadth
100 nm | 0.00000524934 fingerbreadth
1000 nm | 0.00005249344 fingerbreadth
10000 nm | 0.00052493438 fingerbreadth
100000 nm | 0.00524934383 fingerbreadth

A nanometer (nm) is a unit of length in the International System of Units (SI). One nanometer is equivalent to 0.000000001 meters or approximately 0.00000003937 inches. The nanometer is defined as one-billionth of a meter, making it an extremely precise measurement for very small distances. Nanometers are used worldwide to measure length and distance in various fields, including science, engineering, and technology. They are especially important in fields that require precise measurements at the atomic and molecular scale, such as nanotechnology, semiconductor fabrication, and materials science.
A fingerbreadth is a historical unit of length based on the width of a person's finger. One fingerbreadth is approximately equivalent to 0.75 inch, or about 0.01905 meters, which is consistent with the conversion factor used above (1 fingerbreadth = 19050000 nanometers). The fingerbreadth is defined as the width of a finger at its widest point, typically used for practical measurements in various contexts such as textiles and small dimensions. Fingerbreadths were used in historical measurement systems to provide a simple and accessible means of measuring smaller lengths and dimensions. While not commonly used today, the unit offers insight into traditional measurement practices and standards.

Frequently Asked Questions (FAQs)

1. What is the formula for converting Nanometers to Fingerbreadth in Length?
The formula to convert Nanometers to Fingerbreadth in Length is: Nanometers / 19050000.0000762
2. Is this tool free or paid?
This Length conversion tool, which converts Nanometers to Fingerbreadth, is completely free to use.
3. How do I convert Length from Nanometers to Fingerbreadth?
To convert Length from Nanometers to Fingerbreadth, you can use the following formula: Nanometers / 19050000.0000762
For example, if you have a value in Nanometers, you substitute that value in place of Nanometers in the above formula, and solve the mathematical expression to get the equivalent value in Fingerbreadth.
{"url":"https://convertonline.org/unit/?convert=nanometers-fingerbreadth","timestamp":"2024-11-10T21:41:01Z","content_type":"text/html","content_length":"91900","record_id":"<urn:uuid:db1ed971-0dca-4118-842d-9d565811849f>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00001.warc.gz"}
Linear Search - Algorithm - TO THE INNOVATION

Linear Search / Algorithm / Algorithms, searching
Last updated on September 4th, 2023 at 09:41 pm

Here, we discuss Linear Search, its implementation in C, its time and space complexity, and its applications.

What is Linear Search?

Linear Search is the simplest searching algorithm. It searches for an element in the list in sequential order. Linear search is used on a collection of items. We start at one end and check every element until the desired element is found.

Sample Code in C:

//Linear Search in C
#include <stdio.h>

int linearsearch(int arr[], int n, int x)
{
    // going through the array sequentially
    for (int i = 0; i < n; i++)
        if (arr[i] == x)
            return i;
    return -1;
}

int main()
{
    int data[] = {3, 15, 8, 1, 38, -2, 7};
    int x = 1;
    int n = sizeof(data) / sizeof(data[0]);
    int result = linearsearch(data, n, x);
    if (result == -1)
        printf("Element not found");
    else
        printf("Element found at index : %d", result);
    return 0;
}

Time and Space Complexity

Time Complexity
Worst Case O(n)
Best Case O(1)
Average Case O(n)

Space Complexity
Worst Case O(1)

1. For searching operations in smaller arrays (<100 items).
{"url":"https://totheinnovation.com/linear-search/","timestamp":"2024-11-02T10:55:22Z","content_type":"text/html","content_length":"191571","record_id":"<urn:uuid:f96a7e77-2ad1-4b3a-af8b-8cb5b5930916>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00600.warc.gz"}
Coenzyme Q - cytochrome c reductase

Coenzyme Q - cytochrome c reductase complex, sometimes called the cytochrome bc[1] complex, and at other times Complex III, is the third complex in the electron transfer chain (PDB 1KYO, EC 1.10.2.2). It is a transmembrane protein, and it catalyzes the reduction of cytochrome c by accepting reducing equivalents from Coenzyme Q:

CoQH[2] + 2 Fe^+3-cytochrome c → CoQ + 2 Fe^+2-cytochrome c

In the process, protons are translocated across the mitochondrial membrane. Therefore, the bc[1] complex is a proton pump.

Compared to the other major proton-pumping subunits of the electron transport chain, the number of subunits found can be small, as small as three polypeptide chains. This number does increase, and as many as eleven subunits can be found in higher animals. The major prosthetic groups in the complex are a pair of cytochromes, the b cytochrome and the c[1] cytochrome, and a two-iron, two-sulfur iron-sulfur cluster. More information can be found on the Cytochrome bc[1] complex page.
{"url":"http://www.fact-index.com/c/co/coenzyme_q___cytochrome_c_reductase.html","timestamp":"2024-11-10T21:03:18Z","content_type":"text/html","content_length":"5128","record_id":"<urn:uuid:36f85fc5-202a-440d-b762-d69a252fb61a>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00738.warc.gz"}
Entropy and degeneracy: the equation no one tells you about but everyone uses

A useful equation: $S=R \ln(g)$

Open any P-Chem textbook and you'll find this expression for the entropy (and often a reference to the fact it is inscribed on Boltzmann's tombstone):$$S=k\ln(W)$$ $W$ is the multiplicity of the system, i.e. the number of (microscopic) arrangements producing the same (macroscopic) state, and is given by$$W=\frac{N!}{N_1!N_2! N_3!...N_g!}$$Here $N$ is the number of molecules and $N_i$ is the number of molecules with a particular microscopic arrangement $i$, of which there are $g$ different kinds. Confused? Believe me you are not the only one, and most scientists never use this form of the equation anyway. Instead they usually assume that all these microscopic arrangements have the same energy or are degenerate (same thing). This means that each macroscopic arrangement is equally likely and $N_1=N_2=...=N/g$. This simplifies the expression for the multiplicity, $$W=g^N$$and entropy$$S=Nk\ln(g)$$significantly, and for a mole of molecules we have $$S=R\ln(g)$$This formula relates the entropy to $g$, the number of microscopic arrangements with the same energy.

A simple example

Let's say two molecules, A and B, bind to a receptor R through a single hydrogen bond (indicated in the figure) with the same strength. If you mix equal amounts of A, B, and R you will get more R-A than R-B at equilibrium even though the hydrogen bond strength is the same in the two complexes. This is because molecule A can bind in four different ways while B can only bind one way, i.e. the R-A complex has a degeneracy of four ($g=4$) and the R-B complex has a degeneracy of one ($g=1$). Put another way, the R-A complex is more likely because it has a higher entropy ($S=R\ln(4)$) than the R-B complex ($S=R\ln(1)$).
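The numbers in this example can be checked with a few lines of code. A small Python sketch of my own (not from the post); it simply restates $S=R\ln(g)$ and the resulting equilibrium preference:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def conformational_entropy(g):
    """Molar conformational entropy S = R ln(g) for g degenerate binding modes."""
    return R * math.log(g)

S_RA = conformational_entropy(4)  # R-A complex: four equivalent binding modes
S_RB = conformational_entropy(1)  # R-B complex: one binding mode

# All else being equal, the equilibrium ratio of the two complexes is
# exp(Delta S / R) = g_A / g_B = 4 in favor of R-A.
ratio = math.exp((S_RA - S_RB) / R)
```

The $e^{\Delta S/R}$ factor says the mixture ends up with four times as much R-A as R-B, purely from the degeneracy of the binding modes.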
Ifs, ands, or buts

Of course this is a simplified picture where we only focus on conformational entropy and ignore contributions from translation, rotation and vibration, not only in the complexes but also for the free molecules. Also it is quite unlikely that the hydrogen bond strength for two molecules will be identical, or that a molecule will be perfectly symmetrical so that the four binding modes are perfectly degenerate. In general $S=R\ln(g)$ will give you an estimate of the maximum possible value of the conformational entropy. See for example this interesting blog post on a paper where the authors rationalize the measured difference in binding entropy in terms of conformation. As I point out in the comments section, the conformational entropy difference ($S=R \ln(2)$) is smaller than the measured entropy difference, so there must be other - more important - contributions to the entropy change.

If $N_1=N_2=...=N/g$ then $$W=\frac{N!}{(N/g)!^g}$$For large $N$ we can use Stirling's approximation, $x!\approx (x/e)^x$:$$W=\frac{(N/e)^N}{(N/ge)^{(N/g)g}}=\left(\frac{N/e}{N/ge}\right)^N=g^N$$

Other posts on statistical mechanics

This work is licensed under a Creative Commons Attribution 3.0 Unported License.

8 comments:

mzh said... Is it fair to call 'W' the partition function of the system?

Short answer: no. Long answer: Another expression for the entropy is $S=k\ln(Q)+\frac{kT}{Q}\left(\frac{\partial Q}{\partial T}\right)_V$ where $Q$ is the partition function. $Q=W$ only if $Q$ is independent of $T$. This happens only if all energy levels are degenerate.

Yes, and you wrote "...they usually assume that all these microscopic arrangements have the same energy or are degenerate (same thing)." so you're actually implying that Q=W, not?

For that special case, yes.

Just want to comment on this paragraph: «If you mix equal amounts of A, B, and R you will get more R-A than R-B at equilibrium even though the hydrogen bond strength is the same in the two complexes.
This is because molecule A can bind in four different ways while B can only bind one way, i.e. the R-A complex has a degeneracy of four (g=4) and the R-B complex has a degeneracy of one (g=1). Put another way, the R-A complex is more likely because it has a higher entropy (S=Rln(4)) than the R-B complex (S=Rln(1)).» Isn't the entropy of complexation in case A higher because the rotational entropy of the unbound molecule A is smaller, rather than the entropy of the complex itself being higher? Shouldn't the degeneracy be 8 (A) and 2 (B), instead of 4 (A) and 1 (B)? For molecule A, 4 with the molecule facing up and 4 with the molecule facing down. And for molecule B, one with the molecule facing up and one with the molecule facing down.

Good questions! Let me take the last question first since it is a bit easier. If I understood your question correctly you're talking about the extra degree of freedom due to rotation around the horizontal axis. In that case it depends on whether you view the model as being 2-dimensional or 3-dimensional. I view it as 2D where this degree of freedom is not allowed. However, if you view it as 3D (i.e. flat molecules in a 3D world) then you are correct. Now to your first question. Yes, you can view it like that if the molecule is perfectly symmetric. Then this effect is included in the rotational entropy of free A via the symmetry number (see this post). However, if molecule A is even *slightly* asymmetric then the effect enters as the conformational entropy of the complex. So I think in the symmetric case the effect can also be ascribed to a conformational entropy of the complex. One could argue that the entropy of A can be measured to settle this issue. However, I don't think that is so straightforward - even conceptually. The entropy at temperature T is measured relative to absolute zero where the entropy is zero by definition.
However, this state cannot be reached, and even at very, very low T symmetric molecules will have so-called residual entropy related to their symmetry which will be hard - if not impossible - to measure accurately. But I am not sure. Bottom line: I think the answer to your question is: ultimately what is measured is an entropy *change* and the higher entropy change for A can be explained *either* as a lower conformational entropy of the complex or a higher rotational entropy of the free ligand. Both are formally correct but I would argue the former is more general as it also works for non-symmetric molecules.

Very nice reply. Just some further points: «However, if molecule A is even *slightly* asymmetric then the effect enters as the conformational entropy of the complex.» Is this a «law» according to Quantum Mechanics? Because, if you apply the continuous symmetry arguments of Avnir, a molecule which is «slightly» asymmetric (a flexible molecule) will have a «slight» degree of asymmetry. In this case, a flexible A, for example, would have S = R ln(3.9). «Both are formally correct but I would argue the former is more general as it also works for non-symmetric molecules.» I do not fully agree with the reason here, because if for some reason the complex has a higher entropy because the ligand has more possibilities of binding, it means that the ligand itself has some symmetry operation somewhere and it can still be present on the rotational entropy of the ligand.

It is true that if you use Avnir's method you will get a non-zero entropy contribution like $S = R \ln(3.9)$, but it won't be exactly the same as the conformational entropy. The conformational entropy is $S=-R\sum_i^4 p_i\ln (p_i)$ where $p_i=e^{-(G^\circ_i-G^\circ)/RT}$ and $G^\circ=-RT\ln\sum_i^4 e^{-G^\circ_i/RT}$. $G^\circ$ is the free energy of the R-A complex, which has four different binding modes. If A has four-fold symmetry then $G^\circ_1=G^\circ_2=G^\circ_3=G^\circ_4$, $p_i=\frac{1}{4}$ and $S=R\ln(4)$.
If you include the symmetry number when computing the entropy of A then this contribution to the free energy is taken care of that way. If A is formally $C_1$ then the four binding free energies will be different and the conformational entropy will be a value between $R\ln(4)$ and $R\ln(1)$. This is also true for Avnir's method but, based on his equations, I don't see why the entropies computed in these two ways would be the same. However, Avnir's method might yield a good approximation to the conformational entropy, I don't know. Notice also that the binding free energies can in principle be similar completely by accident and have nothing to do with symmetry.
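The Boltzmann-weighted conformational entropy in the last comment can be sketched numerically. This is my own illustration; the temperature and the "slightly asymmetric" mode free energies are made-up values, not from the post:

```python
import math

R = 8.314        # gas constant, J/(mol K)
RT = R * 298.15  # thermal energy at an assumed 298.15 K, J/mol

def conf_entropy(G_modes):
    """S = -R sum_i p_i ln(p_i) over binding modes with free energies G_i,
    where p_i = exp(-(G_i - G0)/RT) and G0 = -RT ln sum_i exp(-G_i/RT)."""
    G0 = -RT * math.log(sum(math.exp(-Gi / RT) for Gi in G_modes))
    p = [math.exp(-(Gi - G0) / RT) for Gi in G_modes]
    return -R * sum(pi * math.log(pi) for pi in p)

# Four perfectly degenerate binding modes recover S = R ln(4).
S_sym = conf_entropy([0.0, 0.0, 0.0, 0.0])

# Hypothetical, slightly different mode energies (J/mol) give a value
# between R ln(1) = 0 and R ln(4).
S_asym = conf_entropy([0.0, 500.0, 1000.0, 1500.0])
```

With identical mode energies the sum collapses to $R\ln(4)$, and any spread in the $G^\circ_i$ pulls the entropy down toward $R\ln(1)=0$, as the comment describes.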
{"url":"https://proteinsandwavefunctions.blogspot.com/2012/11/entropy-and-degeneracy-equation-no-one.html","timestamp":"2024-11-14T07:49:30Z","content_type":"application/xhtml+xml","content_length":"148738","record_id":"<urn:uuid:8bd37249-6342-4328-b71b-f4bbce0798d7>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00410.warc.gz"}
Gradient domain high dynamic range compression

This command implements a tone mapping operator as described in:

Gradient Domain High Dynamic Range Compression
R. Fattal, D. Lischinski, and M. Werman
In ACM Transactions on Graphics, 31(3), p. 249, 2002.

With respect to the original paper, this program provides an additional parameter which limits the amplification of noise. The noise is often starkly amplified because of division by zero in one of the equations in the paper. Extension contributed by Przemyslaw Bazarnik.

At the core of the programme is a Poisson PDE which, as suggested in the original paper, is solved using a Full Multigrid Algorithm. However, this is an iterative solver which seems to lose accuracy when applied to higher resolution images, resulting in halo effects and surreal looking images. For that reason a second solver has been implemented using the discrete cosine transform as the underlying method; it is considerably more accurate, mainly because it is a direct solver. This solver is the preferred method and is used by default. The old multigrid solver can be selected with the --multigrid (-m) option.

--alpha <val>
Set alpha parameter. This parameter is deprecated, as setting a <val> other than 1.0 has only the effect of a global gamma adjustment of the luminance channel, which can be directly specified using the --gamma option. See the paper for the definition of alpha. It can be shown, although not mentioned in the paper, that setting alpha other than 1.0 has the same effect as setting gamma = alpha^(k*(1-beta)), where beta is the value as specified by --beta and k is the number of levels of the Gaussian Pyramid (see paper for details), which depends on the image pixel size (smallest k so that 2^(k+detail_level) >= min(width,height)/MSIZE, MSIZE see source code, e.g. 8 or 32).

--beta <val>
Set beta parameter. <val> sets the strength of gradient (local contrast) modification. Suggested range is 0.8 to 0.96, default is 0.9 (see paper for details).
Value of 1 does not change contrasts, values above 1 reverse the effect: local contrast is stretched and details are attenuated. Values below 0.5 lead to very strong amplification of small contrast, so consider using the --noise parameter to prevent noise.

--gamma <val>
Set luminance gamma adjustment. This can be described as a global contrast enhancement and is applied after the local enhancement specified by the --beta parameter is performed. Gamma adjustment or correction is defined by a power law, in this case L_out(x,y) = L_in(x,y)^gamma, where L_in(x,y)=exp(I(x,y)) is the luminance value after the local contrast enhancement (I is the solution of the Poisson PDE). The suggested range for <val> is 0.6 to 1.0, default is 0.8.

--saturation <val>
Amount of color saturation. Suggested range is 0.4 to 0.8. Default value: 0.8.

--noise <val>
Reduces the gradient amplification value for gradients close to 0 and reduces noise as a result. <val> defines the gradient value (luminance difference of adjacent pixels) which is treated as noise. Suggested range is 0.0 to the value of alpha. Default value calculated based on alpha: 0.001*alpha.

--detail-level <val>
Specifies up to which detail level the local contrast enhancement should be performed. It basically means that local contrast levels within small squares of pixel size 2^<val> are not changed. In the implementation this corresponds to removing the <val> finest levels of the Gaussian Pyramid as described in the paper, i.e. the paper only considers <val>=0. Suggested values are 1, 2 or 3; 3 for high resolution images. The default is 3 for --fftsolver, and 0 if the original multi-level solver is used (to be consistent with the paper).

--white-point <val>
Specifies the percentage of pixels which are allowed to be overexposed and therefore blown out. This can be useful for example when there is a very bright object in the image like the sun and details of it do not need to be resolved. As a result the overall image will look brighter the greater <val> is. Default is 0.5.
Same as --white-point but for under-exposed pixels. Default is 0.1.

Enable the use of the multigrid solver as suggested by the original paper. For accuracy the default fft solver is generally recommended, especially for high-resolution images; the user will benefit by obtaining photo-realistic rather than surreal-looking images. The fft solver is also faster, despite the fact that it is O(n*log n) with n=width*height, as compared to O(n) for the multigrid solver. The speed improvement is thanks to the very efficient fftw3 library which is used to calculate the discrete cosine transform.

Print additional information during program execution.

Print the list of command line options.

pfsin memorial.hdr | pfstmo_fattal02 -v -t | pfsout memorial.png

Tone map the image (using the fft solver) and save it in png format.

pfsin memorial.hdr | pfstmo_fattal02 -v -t -b 0.85 -g 0.7 -w 2.0 \
| pfsout memorial.png

Tone map the image (using the fft solver) with stronger contrast modification than the default, i.e. beta=0.85, gamma=0.7 and white point 2.0%.

pfsin memorial.hdr | pfstmo_fattal02 -v | pfsout memorial.png

Tone map the image (old style) and save it in png format.

Known Issues

For stronger local contrast enhancements (beta<0.9) the fft solver (--fftsolver) might produce slightly dark image corners. This can be mitigated using bigger values for the --noise parameter. With a value of --detail-level greater than 0, the internal implementation could be made much more efficient, as only a reduced-size PDE would need to be solved, greatly improving speed.

Please report bugs and comments on the implementation to the pfstools discussion group (http://groups.google.com/group/pfstools). For bugs specific to the FFT solver email Tino Kluge
PCA and utility. 19 good questions, nicely answered!

What do you essentially do with a factor analysis on a test? You cluster different items into a few clusters/factors/dimensions to make the results more interpretable.

How do you perform a factor analysis?
• Go to analyse, dimension reduction, factor
• put all the questions into variables
• extraction; select principal axis factoring for the method, check scree plot (after looking at the scree plot, rerun with a fixed number of factors)
• rotation; select direct oblimin
• options; select sorted by size and (optionally) suppress small coefficients below .3, knowing that some values will then not be shown.

What does the table of total variance explained tell us? The number of factors SPSS extracted, based on each factor's eigenvalue being greater than 1.

In a scree plot, where is the point of inflection? The dot from which the line becomes horizontal.

How can you tell the number of factors from a scree plot? The number of dots to the left of the point of inflection.

What to do after you've determined the number of factors you have? Rerun the analysis, this time indicating that you want that number of factors.

What can you see in the pattern matrix?
• You can see which questions form a cluster together
• which questions belong to which factor
• what the overarching theme of a factor is.

What are the advantages and disadvantages of the classical test theory?
Advantages:
• Intuitive and easy to apply
□ It is in SPSS and it is easy to do in Excel
□ No large sample sizes/many items needed
Disadvantages:
• Focus on the test, not on the items
• Test properties depend on the population
□ E.g., reliability and difficulty of a test
• Person properties depend on the test
□ I.e., the sum score is higher if the test is easy and lower if the test is difficult

What do you do when using the item response theory? You specify a measurement model or a formula, which mathematically links the item scores to the construct as a latent variable.

What are the assumptions of the item response theory?
• Uni-dimensionality; you only measure one construct
• local independence; the latent trait explains all item correlations
• monotonicity; the higher the trait, the higher the expected score.

How can you tell the item difficulty from an item characteristic curve? The number on the x-axis that the middle of the curve is on.

How can you tell the item discrimination from the item characteristic curve? The steepness of the curve: if the curve is very steep, then a small increase in the latent trait will result in a big difference in the probability of a correct answer.

From the results of an item response analysis, how do you know which items are weak? When the discrimination is negative.

At which point of the item characteristic curve is the most information? Where the slope of the curve is maximal.

What is a scale information function? An information function of the whole test instead of just one item.

How can you tell that items are unfair towards certain groups? If you suspect a group has a disadvantage on an item, check the item curves for both groups; if they're different, there is a disadvantage.

What is computerized adaptive testing?
• Instead of giving everyone the same items and the whole range of difficulty,
• computerized adaptive testing adjusts the difficulty of the next item based on the answer to the last item.
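The difficulty and discrimination described above correspond, in the commonly used two-parameter logistic (2PL) model (an assumption here; the summary does not name a specific model), to the location and steepness of a logistic curve:

```python
import math

def icc(theta, a, b):
    """2PL item characteristic curve: probability of a correct
    answer given latent trait theta, discrimination a, difficulty b."""
    return 1 / (1 + math.exp(-a * (theta - b)))

# Difficulty b is the trait level where the curve crosses P = 0.5:
assert icc(theta=0.7, a=1.5, b=0.7) == 0.5

# Higher discrimination means a steeper curve: the same small step in
# the trait gives a bigger change in the probability of a correct answer.
flat  = icc(0.1, a=0.5, b=0.0) - icc(-0.1, a=0.5, b=0.0)
steep = icc(0.1, a=2.0, b=0.0) - icc(-0.1, a=2.0, b=0.0)
assert steep > flat
```

A negative `a` flips the curve, which is exactly why negative discrimination marks a weak item.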
How does the computer pick new questions in computerized adaptive testing?
• Based on the information from the last question, an estimate is calculated of the extent to which the trait is present
• the next question is picked to have the highest information at that estimated trait level.

What are the advantages and disadvantages of item response theory?
Advantages:
• Population and test independent
• Focus on items
Disadvantages:
• Statistically complex
• Needs large samples
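The selection step can be sketched with the 2PL information function I(theta) = a^2 * P * (1 - P), a standard IRT result that the summary does not spell out; the item bank below is hypothetical:

```python
import math

def p_correct(theta, a, b):
    """2PL probability of a correct answer."""
    return 1 / (1 + math.exp(-a * (theta - b)))

def information(theta, a, b):
    """Fisher information of a 2PL item: I = a^2 * P * (1 - P)."""
    p = p_correct(theta, a, b)
    return a**2 * p * (1 - p)

def next_item(theta_hat, bank):
    """Pick the item with the most information at the current estimate:
    the core computerized-adaptive-testing selection rule."""
    return max(bank, key=lambda item: information(theta_hat, *item))

# Hypothetical bank of (discrimination, difficulty) pairs:
bank = [(1.0, -1.0), (1.0, 0.0), (1.0, 1.0)]

# After a correct answer the trait estimate rises, so a harder item
# becomes the most informative next question, and vice versa:
assert next_item(0.9, bank) == (1.0, 1.0)
assert next_item(-0.9, bank) == (1.0, -1.0)
```

For equal discriminations, information peaks where P = 0.5, so the rule amounts to matching item difficulty to the current trait estimate.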
Math 4 Wisdom. "Mathematics for Wisdom" by Andrius Kulikauskas. | Research / UniversalHyperbolicGeometry

Investigation: Interpret and systematize the key formulas of universal hyperbolic geometry.

Universal hyperbolic geometry

Write down all of the results of Universal Hyperbolic Geometry. Organize them and interpret them in terms of
• cross-ratio
• symmetric functions of variables
• matrices and symmetric functions of eigenvalues
• Lie theory (Lie bracket, Jacobi identity...)
• Which formulas are symmetric in their variables, and which are not symmetric, and why?
• Look at Wildberger's three bilinear forms in chromogeometry.

Understand and interpret
• Understand the dot product, especially in three dimensions. Why are u and v perpendicular iff their dot product is zero?
• Understand the equations for a line and a plane. How should I understand the minus sign -nz?
• Understand how projection occurs on the plane z=1.
• What does it mean to calculate the spread and quadrance as cross-ratios?
• In what sense does the anharmonic ratio -1, 1/2, 2 collapse the trigonometric functions from six to three?
• Understand Pythagoras's theorem.
□ How does it come from the definition of the cross-ratio?
□ And what is a geometric interpretation?
□ Does the general Pythagoras's theorem work for an ellipse (and hyperbola) rather than just a circle?
□ How does the law of cosines interpolate between linear addition {$a+b$} and quadratic addition {$a^2+b^2$}?
□ Relate the motion of the line in the law of cosines to polarity.
• What is the role of concurrency and collinearity in all of the formulas?
• In what sense does the number i relate spread (and {$\sin^2$} and the trigonometric functions) and quadrance (and {$\sinh^2$} and the hyperbolic functions)?
• Understand the relationship between the Lie bracket and dual points and lines.
• {$1-q_1$}
• {$q_1+q_2+q_3$}

Formulate analogues of the deep ideas behind geometry and trigonometry, such as: A right triangle is half a rectangle.
A triangle is the sum of two right triangles. Four times a right triangle is the difference of two squares. In hyperbolic geometry, the quadrance {$q$} (between points) and spread {$S$} (between lines) are dual. Each is equal to the cross-ratio. Quadrance Spread Duality Theorem. If {$a_1=A_1^{\perp}$} and {$a_2=A_2^{\perp}$}, then {$R(a_1,b_2;a_2,b_1)=q(a_1,a_2)=S(A_1,A_2)$}. Where {$a_1,a_2$} are points within a circle and {$b_1,b_2$} are points on their respective polars. In the plane, two vectors indicate the same line if the area they span is zero. {$x_1:y_1 = x_2:y_2 \iff \frac{x_1}{y_1} = \frac{x_2}{y_2} \iff x_1y_2 - x_2y_1 = 0$} In three-dimensional space, two vectors indicate the same line if, in each of the three planes, the area they span is zero. {$x_1:y_1:z_1 = x_2:y_2:z_2 \iff \frac{x_1}{y_1} = \frac{x_2}{y_2} \wedge \frac{x_1}{z_1} = \frac{x_2}{z_2} \wedge \frac{y_1}{z_1} = \frac{y_2}{z_2} \iff x_1y_2 - x_2y_1 = x_1z_2 - x_2z_1 = y_1z_2 - y_2z_1 = 0$} Given point a = [x:y:z] and line L = (l:m:n). a is on L (L passes through a) when {$xl+my=nz$} or when {$xl+my-nz=0$} or when {$a\circ L=0$}. a and L are dual ({$a^{\perp}=L$} and {$a=L^{\perp}$}) when [x:y:z]=[l:m:n] or (x:y:z)=(l:m:n). a is null when it lies on its dual line, thus when {$x^2+y^2=z^2$}, (when {$a\circ a^{\perp}=0$}), and likewise, L is null when it passes through its dual point. Points {$a_1=[x_1:y_1:z_1]$} and {$a_2=[x_2:y_2:z_2]$} are perpendicular when {$x_1x_2+y_1y_2=z_1z_2$} or when {$x_1x_2+y_1y_2-z_1z_2=0$} or when {$a_1\circ a_2=0$}. Similarly with lines. We have laws for: • {$q_1q_2=S_1S_2$} Spread law. • {$q_1q_2q_3$} Triple quad law. • {$q_1q_2S_3=q_1S_2q_3=S_1q_2q_3$} Cross law. These terms give the square of the area of the parallelogram whose sides have quadrances {$q_1$} and {$q_2$} separated by spread {$S_3$}. • {$S_1S_2q_3=S_1q_2S_3=q_1S_2S_3$} Cross dual law. • {$S_1S_2S_3$} Triple Spread law. 
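The triple quad law named above can be checked with exact arithmetic. One sketch, using the projective formula q(a,b) = 1 - (a∘b)^2 / ((a∘a)(b∘b)) for the quadrance, built from the bilinear form x1x2 + y1y2 - z1z2 defined above (the formula is taken from Wildberger's UHG papers as an assumption; this page defines quadrance via the cross-ratio, and the two agree):

```python
from fractions import Fraction as F

def bil(a, b):
    """The bilinear form a∘b = x1*x2 + y1*y2 - z1*z2 from above."""
    return a[0]*b[0] + a[1]*b[1] - a[2]*b[2]

def quadrance(a, b):
    """Projective quadrance q(a,b) = 1 - (a∘b)^2 / ((a∘a)(b∘b))."""
    return 1 - bil(a, b)**2 / F(bil(a, a) * bil(b, b))

# Three collinear points (all on the line y = 0, taking z = 1):
a1, a2, a3 = (0, 0, 1), (F(1, 2), 0, 1), (F(1, 3), 0, 1)
q1, q2, q3 = quadrance(a2, a3), quadrance(a1, a3), quadrance(a1, a2)

# Triple quad law: (q1+q2+q3)^2 = 2(q1^2+q2^2+q3^2) + 4*q1*q2*q3
lhs = (q1 + q2 + q3)**2
rhs = 2 * (q1**2 + q2**2 + q3**2) + 4 * q1 * q2 * q3
assert lhs == rhs == F(1, 4)
```

Note that the quadrances come out negative for these interior points; the law holds exactly all the same, which is part of the appeal of working with quadrance rather than distance.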
The squares of the six trigonometric functions find expression as the six possible cross-ratios: {$q=\sin^2\theta$}, {$1-q=\cos^2\theta$}, {$1/q=\csc^2\theta$}, {$1/(1-q)=\sec^2\theta$}, {$q/(1-q)=\tan^2\theta$}, {$(1-q)/q=\cot^2\theta$}. Similarly, the six hyperbolic functions express the quadrance.

Symmetric formulas

Spread law. {$\frac{S_1}{q_1}=\frac{S_2}{q_2}=\frac{S_3}{q_3}$}

Triple quad formula. If {$a_1, a_2, a_3$} collinear, then {$(q_1+q_2+q_3)^2=2(q_1^2+q_2^2+q_3^2)+4q_1q_2q_3$}, which is {$p_1^2=2p_2+4e_3$}.

Triple spread formula. If {$A_1, A_2, A_3$} concurrent, then {$(S_1+S_2+S_3)^2=2(S_1^2+S_2^2+S_3^2)+4S_1S_2S_3$}, which is {$p_1^2=2p_2+4e_3$}.

These formulas can be rewritten: {$q_1q_2q_3=((q_1+q_2+q_3)^2-2(q_1^2+q_2^2+q_3^2))/4$}, in other words, {$e_3=(p_1^2-2p_2)/4$}.

Nonsymmetric formulas

Pythagoras's theorem. If {$a_1a_3\perp a_2a_3$} then {$q_3=q_1+q_2-q_1q_2$}. More elegantly: {$(1-q_3)=(1-q_1)(1-q_2)$}.

Pythagoras's dual theorem. If {$A_1A_3\perp A_2A_3$} then {$S_3=S_1+S_2-S_1S_2$}. More elegantly: {$(1-S_3)=(1-S_1)(1-S_2)$}. Alternatively, {$|\textrm{cos}\,\theta_3|=|\textrm{cos}\,\theta_1||\textrm{cos}\,\theta_2|$}.

Cross law. {$(q_1q_2S_3-(q_1+q_2+q_3)+2)^2=4(1-q_1)(1-q_2)(1-q_3)$} {$q_1q_2S_3=q_1S_2q_3=S_1q_2q_3= q_1+q_2+q_3 -2 \pm 2\sqrt{(1-q_1)(1-q_2)(1-q_3)}$}

Cross dual law. {$(S_1S_2q_3-(S_1+S_2+S_3)+2)^2=4(1-S_1)(1-S_2)(1-S_3)$} {$S_1S_2q_3=S_1q_2S_3=q_1S_2S_3= S_1+S_2+S_3 -2 \pm 2\sqrt{(1-S_1)(1-S_2)(1-S_3)}$} Note that {$\sqrt{1-S_i}=|\textrm{cos}\,\theta_i|$}

The altitudes of a triangle meet at a point (the orthocenter).

Affine geometry

Lines are parallel:
• All {$\binom{n}{2}$} slopes the same.
• Collinearity: Triple quad formula

Affine combinations. Vector is the relation between parallel lines.

Projective geometry

Space is divided into "infinity" and "finity".

Conformal geometry

Lines are perpendicular: Pythagorean theorem, inner product.

Quadratic interpolations: the law of cosines {$A^2 - 2AB\cos\theta + B^2 = C^2$} is an interpolation of {$(A-B)^2 = C^2$}, {$A^2 + B^2 = C^2$}, {$(A+B)^2 = C^2$}.

Consider triangles more generally.
Or consider the function {$A^2 + B^2 - C^2$}, which is 0 when A and B are perpendicular, and a nonzero {$2AB\cos\theta$} otherwise.

Inner product defines the equation of a line and whether points lie on it: {$(a_1, a_2, a_3)\cdot(x_1, x_2, 1) = 0$}.

Symplectic geometry

Oriented area, volume given by determinant.
• Quadrea: 4 x determinant squared. Determinant is positive three cycle and negative three cycle.
• Quadrea: Archimedes function (and when zero, we have collinearity by the Triple quad formula).
• Quadrea: Can repeatedly factor the Archimedes function as {$a^2-b^2 = (a+b)(a-b)$} to get Heron's law for the area in terms of the lengths of the edges.

What if we interpret the line as {$ax + by + cz = 0$} where {$z=1$}?

Why is the {$s_1s_2s_3$} term of the third power and not the second power?

• The circle maps every point to a line and vice versa.
• Quadrance (distance squared) is more correct than distance because quadrance makes positive distance and negative distance equivalent. Spread (the square of the sine of the angle) is more correct than angle because the value of the spread is the same for all angles at an intersection, which is to say, for both theta and pi minus theta. In this way, quadrance and spread eliminate false distinctions and the problems they cause.
• How Chromogeometry transcends Klein's Erlangen Program for Planar Geometries | N J Wildberger
• Try to express projective geometry (or universal hyperbolic geometry) in terms of matrices and thus symmetric functions. What then is algebraic geometry and how do polynomials get involved? What is analytic geometry and in what sense does it go beyond matrices? How do all of these hit up against the limits of matrices and the amount of symmetry in its internal folding? N J Wildberger.
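The quadrea bullets can be checked on a concrete triangle: "4 x determinant squared" and the Archimedes function built from the three quadrances agree, and both equal 16 times the squared area (a quick arithmetic check, using the ordinary Euclidean quadrance):

```python
# Edge vectors of a concrete triangle; det = twice the signed area.
A1, A2, A3 = (0, 0), (4, 0), (0, 3)           # a 3-4-5 right triangle
u = (A2[0] - A1[0], A2[1] - A1[1])
v = (A3[0] - A1[0], A3[1] - A1[1])
det = u[0] * v[1] - u[1] * v[0]
quadrea = 4 * det**2                           # "4 x determinant squared"

def Q(P, R):
    """Euclidean quadrance (squared distance)."""
    return (R[0] - P[0])**2 + (R[1] - P[1])**2

Q1, Q2, Q3 = Q(A2, A3), Q(A1, A3), Q(A1, A2)
archimedes = (Q1 + Q2 + Q3)**2 - 2 * (Q1**2 + Q2**2 + Q3**2)

# Quadrea = Archimedes function = 16 * area^2 (the area is 6 here).
assert quadrea == archimedes == 576
```

The identity follows by expanding: (Q1+Q2+Q3)^2 - 2(Q1^2+Q2^2+Q3^2) = 2(Q1Q2+Q2Q3+Q3Q1) - (Q1^2+Q2^2+Q3^2), which is Heron's formula for 16 times the squared area written in quadrances.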
Chromogeometry

Symmetric bilinear forms given vectors {$A_1=[x_1,y_1]$} and {$A_2=[x_2,y_2]$}
• parallel if {$x_1y_2-x_2y_1=0$}
• blue (Euclidean) {$[x_1,y_1]\cdot_{b}[x_2,y_2]=x_1x_2+y_1y_2$}
• red (relativistic) {$[x_1,y_1]\cdot_{r}[x_2,y_2]=x_1x_2-y_1y_2$}
• green (relativistic) {$[x_1,y_1]\cdot_{g}[x_2,y_2]=x_1y_2+x_2y_1$}
• For any two points {$A_1 ≡ [x_1, y_1]$} and {$A_2 ≡ [x_2, y_2]$} there is a unique line {$l ≡ A_1A_2$} which passes through them both.
• The quadrance between the points {$A_1$} and {$A_2$} is the number {$Q(A_1, A_2) ≡ (A_2 − A_1) · (A_2 − A_1)$}. It is the distance squared.
• The spread between the non-null lines {$A_1A_2$} and {$B_1B_2$} is the number {$$s(A_1A_2, B_1B_2) ≡ 1 − \frac{((A_2 − A_1) · (B_2 − B_1))^2}{Q(A_1, A_2) Q(B_1, B_2)}$$}. This is {$\textrm{sin}^2\theta$}. It is independent of the choice of points lying on the two lines. Two non-null lines are perpendicular precisely when the spread between them is 1.

Five main laws of planar rational trigonometry
• Cross Law. {$(Q_1 + Q_2 − Q_3)^2 = 4Q_1Q_2 (1 − s_3)$} This is the square of the law of cosines: {$c^{2}=a^{2}+b^{2}-2ab\cos \gamma $}. It relates the angles and the lengths of sides of a triangle. Two special cases:
□ Triple Quad Formula. The points {$A_1$}, {$A_2$} and {$A_3$} are collinear precisely when the quadrances {$Q_1 ≡ Q(A_2, A_3)$}, {$Q_2 ≡ Q(A_1, A_3)$} and {$Q_3 ≡ Q(A_1, A_2)$} satisfy {$(Q_1 + Q_2 + Q_3)^2 = 2 (Q_1^2 + Q_2^2 + Q_3^2)$}.
□ Pythagoras's Theorem. For {$A_1$}, {$A_2$} and {$A_3$} three distinct points, {$A_1A_3$} is perpendicular to {$A_2A_3$} precisely when the quadrances {$Q_1 ≡ Q(A_2, A_3)$}, {$Q_2 ≡ Q(A_1, A_3)$} and {$Q_3 ≡ Q(A_1, A_2)$} satisfy {$Q_1 + Q_2 = Q_3$}.
• Spread Law. For a triangle, {$s_1/Q_1=s_2/Q_2=s_3/Q_3$}. This is the law of sines, and relates angles and lengths of sides for a triangle.
• Triple spread formula. {$(s_1 + s_2 + s_3)^2 = 2 (s_1^2 + s_2^2 + s_3^2) + 4s_1s_2s_3$}.
Relates the angles of a triangle and generalizes the formula for the sum of the angles.
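The laws above can be verified exactly on a concrete triangle, using the quadrance and spread definitions just given (a 3-4-5 right triangle, with exact rational arithmetic):

```python
from fractions import Fraction as F

def Q(A, B):
    """Quadrance: squared distance."""
    return (B[0] - A[0])**2 + (B[1] - A[1])**2

def spread(A, B, C):
    """Spread at vertex A between lines AB and AC, from the
    definition s = 1 - ((B-A).(C-A))^2 / (Q(A,B) Q(A,C))."""
    u = (B[0] - A[0], B[1] - A[1])
    v = (C[0] - A[0], C[1] - A[1])
    dot = u[0] * v[0] + u[1] * v[1]
    return 1 - F(dot**2, Q(A, B) * Q(A, C))

A1, A2, A3 = (0, 0), (4, 0), (0, 3)              # a 3-4-5 right triangle
Q1, Q2, Q3 = Q(A2, A3), Q(A1, A3), Q(A1, A2)     # side opposite each vertex
s1, s2, s3 = spread(A1, A2, A3), spread(A2, A1, A3), spread(A3, A1, A2)

assert s1 / Q1 == s2 / Q2 == s3 / Q3                                  # Spread Law
assert (Q1 + Q2 - Q3)**2 == 4 * Q1 * Q2 * (1 - s3)                    # Cross Law
assert (s1 + s2 + s3)**2 == 2*(s1**2 + s2**2 + s3**2) + 4*s1*s2*s3    # Triple Spread
```

Here Q1, Q2, Q3 come out as 25, 9, 16 and the spreads as 1, 9/25, 16/25, so all three checks hold as exact equalities rather than floating-point approximations.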
Chapter 10 - Treatment Effects | The Effect

For Whom the Effect Holds

For most of this book, and indeed in the title, we have stuck to the fiction that there is such a thing as the effect. As though a treatment could possibly have a single effect - the same impact on literally everybody! That might be plausible in, say, physics. But in social science, everything affects everyone differently.145 If we'd just all become frictionless spheres, social science would be way easier. Downhill travel, too. To give a very simple example, consider a drug designed to reduce the rate of cervical cancer. This drug might be very effective. Perhaps it reduces the rate of cervical cancer by half… for people with a cervix. For people without a cervix, we can be pretty certain that the drug has absolutely no effect on the rate of cervical cancer.146 Ugh, no fair. So at the very least, the drug has two effects - one for people with a cervix, and one for people without. But we don't need to stop there. Even if we just focus on people with a cervix, maybe the drug is highly effective for some people and not very effective for others. Something to do with body chemistry, or age, or dietary habits, who knows? The point is we might have a whole bunch of effects. Whenever we have a treatment effect that varies across a population (i.e., all the time), we can call that a heterogeneous treatment effect. We can actually think of each individual as having their own treatment effect. Maybe the drug reduces the cancer rate by 1% for you and 0% for me and .343% for the woman who lives next door to me. That's not even to say anything of the variation in the effect outside what you can see in the sample! The true effect of treatment is likely to vary from country to country, from year to year, and so on.147 The ability of a study with a given sample to produce an estimate that works in other settings is called external validity.
Just because, say, monetary stimulus improved the employment rate last time doesn’t mean it will this time, at least not to the same degree. Everything varies, even effects. We’re all unique, with different circumstances, lives, physiologies, and responses to the world. Why would we start with an assumption that any two of us would be affected in exactly the same way? It’s more of a convenience than anything. So what can we make of the idea that we have heterogeneous treatment effects? One thing we can try to do is to estimate those heterogeneous treatment effects. Instead of just estimating one effect, we can estimate a distribution of effects and try to predict, for a given person with a given set of attributes, what their effect might be. This is a valid goal, and it is something that people try to do. This idea is behind concepts you might have heard of like “personalized medicine.” It’s also one thing that machine-learning types tend to focus on when they get into the area of causal inference.148 If this kind of thing interests you, I recommend reading Chapter 21 and also go looking for anything and everything that the duo of Susan Athey and Guido Imbens have worked on together. However, in addition to being a valid goal and the subject of some extremely cool work, it also gets highly technical very quickly. So in this chapter, we will instead focus on the other thing we can do with the concept of heterogeneous treatment effects: ask “if effects are so heterogeneous, then what exactly are we identifying anyway?” After all, we’ve established in the rest of this book that we can identify causal effects if we perform the right set of adjustments. But whose causal effects are those? How can we tell? It turns out that, if we’ve done our methods right, what we get is some sort of average of the individual treatment effects. 
However, it’s often not the kind of average where everyone gets counted Different Averages What we have is the concept that each person has their own treatment effect. That means that we can think of there as being a distribution of treatment effects. This works just like any other distribution of a variable, like back in Chapter 3. The only difference is that we don’t actually observe the treatment effects in our data. And like any typical distribution, we can describe features of it, like the mean. The mean of the treatment effect distribution is called, for reasons that should be pretty obvious, the average treatment effect. The average treatment effect, often referred to as the ATE, is in many cases what we’d like to estimate. It has an obvious interpretation - if you impose the treatment on everyone, then this is the change the average individual will see. If the average treatment effect of taking up gardening as a hobby is an increase of 100 calories eaten per day, then if everyone takes up gardening, some people will see an increase of less than 100 calories, some will see more, but on average it will be 100 calories extra per person. However, estimating the average treatment effect is not always feasible, or in some cases even desirable. Let’s use the cervical cancer drug as an example. In truth, the drug will reduce Terry’s chances of cervical cancer by 2 percentage points and Angela’s by 1 percentage point, but Andrew and Mark don’t have cervices so it will reduce their chances by 0.149 Aside on statistical terminology: when talking about changes in a rate, like the rate or probability of cervical cancer, a percent change is proportional, as percent changes always are, but a percentage point change is the change in the rate itself. So if a 2% chance rises to 3%, that’s a \(((.03/.02)-1 = )\) 50% increase, or a \((.03 - .02 = )\) 1 percentage point increase. 
Beware percentage increases from low starting probabilities—they tend to be huge even for small actual changes. The average treatment effect is \((.02 + .01 + 0 + 0)/4 = .0075\), or .75 percentage points. Now, despite your repeated pleas to the drug company, they refuse to test the drug on people without cervices, since they're pretty darn sure it won't do anything. They get a whole bunch of people like Terry and Angela and run a randomized experiment of the drug. They find that the drug reduces the chances of cervical cancer by, on average, \((.02 + .01)/2 = .015\), or 1.5 percentage points. That's not a wrong answer - in fact, their 1.5 is probably a better answer than your .75 for the research question we probably have in mind here - but it's definitely not the average treatment effect among the population.150 It is the average treatment effect among their sample, but we certainly wouldn't want to take that effect and assume it works for Andrew or Mark. So if it's not the population average treatment effect, what is it? We will want to keep in our back pocket some ideas of other kinds of treatment effect averages we might go for or might identify. There are lots and lots and lots of different kinds of treatment effect averages,151 I even have one of my own! It's called SLATE, and it's not very widely used but it's super duper cool and the way it works is hey where are you going? but only a few important ones we really need to worry about. They fall into two main categories: (1) treatment effect averages where we only count the treatment effects of some people but not others, i.e., treatment effect averages conditional on something, and (2) treatment effect averages where we count everyone, but we count some individuals more than others.152 Technically, (1) is just a special case of (2) where some people count 100% and other people count 0%. But conceptually it's easier to keep them separate.
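The arithmetic of the drug example can be reproduced directly (effects in percentage-point reductions, as above):

```python
from statistics import mean

# Individual treatment effects, in percentage-point reductions:
effects = {"Terry": 2, "Angela": 1, "Andrew": 0, "Mark": 0}

# Population average treatment effect, counting everyone:
ate = mean(effects.values())                               # .75

# The drug company's experiment only samples people like Terry and
# Angela, so its average is conditional on having a cervix:
ate_sample = mean([effects["Terry"], effects["Angela"]])   # 1.5

assert ate == 0.75 and ate_sample == 1.5
```

Both numbers are correct averages; they just answer different questions, which is the point of the section that follows.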
What happens when we isolate the average effect for just a certain group of people? And how might we do it? To answer this question, let's make some fake data. This will be handy because it will allow us to see what is usually invisible - what the treatment effect is for each person. Once we have our fake data, we will be able to: (a) discuss how we can take an average of just some of the people, and (b) give an example of how we could design a study to get that average.

Table 10.1: Fake Data for Four Individuals

Name     Gender  Untreated Outcome  Treated Outcome  Treatment Effect
Alfred   Male    1                  2                1
Brianna  Female  1                  5                4
Chizue   Female  2                  5                3
Diego    Male    2                  4                2

We can see from Table 10.1 that these four individuals have different treatment effects. Keep in mind that this table is full of counterfactuals - we can't possibly see someone both treated and untreated. The table just describes what we would see under treatment or no treatment. If nobody were treated, then Alfred and Brianna would have an outcome of 1, and Chizue and Diego would have an outcome of 2. But with treatment, Alfred jumps by 1, Brianna by 4, Chizue by 3, and Diego by 2. The average treatment effect is \((1 + 4 + 3 + 2)/4 = 2.5\). One common way we get an average effect for only a certain group is to literally pick a certain group. Notice in Table 10.1 that we have men and women. Let's say we run an experiment but only recruit men in our experiment for whatever reason.153 Perhaps we are a labor economist from the 1980s, or a biologist using mice from the… very recent past. So we get a bunch of guys like Alfred and a bunch of guys like Diego and we randomly assign them to get treatment or not. Our data ends up looking like Table 10.2.

Table 10.2: Men-Only Experiment

Name     Treated    Outcome
Alfreds  Treated    2
Alfreds  Untreated  1
Diegos   Treated    4
Diegos   Untreated  2

Then, using Table 10.2, we calculate the effect.
We find that the treated people on average had an outcome of \((2+4)/2 = 3\), and the untreated had \((1+2)/2 = 1.5\) and conclude that the treatment has an effect of \(3-1.5 = 1.5\). This is the exact same as the average of Alfred's and Diego's treatment effect, \((1 + 2)/2 = 1.5\). So we have an average treatment effect among men, or an average treatment effect conditional on being a man. Again, this isn't a wrong answer. It just represents only a certain group and not the whole population. It's only a wrong answer if we think it applies to everyone. Another common way in which the average effect is taken among just one group is based on who gets treated. Based on the research design and estimation method, we might end up with the average treatment on the treated (ATT) or the average treatment on the untreated (ATUT), which averages the treatment effects among those who actually got treated (or not). To see how this works, imagine that we can't randomize anything ourselves, but we happen to observe that Alfred and Chizue get treated, but Brianna and Diego did not. We do our due diligence of drawing out the diagram and notice that the outcome is completely unrelated to the probability that they're treated or not.154 Knowing the secret counterfactuals that we do, we can see that the average outcome if treatment had never happened is exactly \((1 + 2)/2 = 1.5\) for both the treated and untreated groups. In other words, there are no back doors between treatment and outcome. The differences arise only because of treatment. So we're identified! Great.

Table 10.3: Assigning Alfred and Chizue to Treatment

Name     Treated    Outcome
Alfred   Treated    2
Brianna  Untreated  1
Chizue   Treated    5
Diego    Untreated  2

What do we get in our actual data? We can see in Table 10.3 that we get an average of \((2 + 5)/2 = 3.5\) among the treated people, and \((1 + 2)/2 = 1.5\) among the untreated people, giving us an effect of \(3.5 - 1.5 = 2\).
It’s no coincidence that this is the average of Alfred’s and Chizue’s treatment effects, \((1 + 3)/2 = 2\). In other words, we’ve taken the average treatment effect among just the people who actually got treated. ATT!155 You can imagine how the ATT might crop up a lot. After all, we only see people getting treated if they’re… actually treated. So you can see how we might pick up their treatment effects and get an ATT. It’s almost hard to imagine how we could get anything else. How can we possibly ever get the average treatment effect, rather than the ATT, if we can’t see what the untreated people are like when treated? Well, it comes down to setting up conditions where we can expect that the treatment effect is the same in treated and untreated groups. In this example, they clearly aren’t. But if we truly randomized over a large group of people, there’s no reason to believe the treated and untreated groups would have different effect distributions, so we’d have an ATE. It’s a bit harder to imagine how we might get the average treatment effect among the untreated (ATUT). And indeed this one doesn’t show up as often. But one way it works is that you take what you know about how treatment varies, and what predicts who has big or small treatment effects, and then use that to predict what sort of effect the untreated group would see. For example, say we get a sample of 1,000 Alfreds and 1,000 Briannas, where 400 Alfreds and 600 Briannas have been assigned to treatment on a basically random basis, leaving 600 Alfreds and 400 Briannas untreated. The average outcome for treated people will be \((400\times2 + 600\times5)/1000 = 3.8\), and for untreated people will be 1. However, we can run our analysis an extra two times, once just on Alfreds and once just on Briannas, and find that the average treatment effect conditional on being Alfred appears to be 1, and the average treatment effect conditional on being Brianna appears to be 4. 
Since we know that there are 600 untreated Alfreds and 400 untreated Briannas, we can work out that the average treatment on the untreated is \((600\times1 + 400\times4)/1000 = 2.2\). ATUT! The distinction between ATT and ATUT, and knowing which one we’re getting, is an important one in nearly all social science contexts. This is because, in a lot of real-world cases, people are choosing for themselves whether to get treatment or not. This means that treated and untreated people are often different in quite a few ways (people who choose to do stuff are generally quite different from those who don’t), and we might expect the treatment effect to be different for them too. Borrowing the example from the start of the chapter, if people could choose for themselves whether to take the cervical cancer drug, who would choose to take it? People with a cervix! The drug is more effective for them than for people without a cervix, so the ATT and ATUT aren’t the same, and that’s not something we can avoid - the drug being more effective for them is why they chose to take it. One other way in which a treatment effect can focus on just a particular group is with the marginal treatment effect. The marginal treatment effect is the treatment effect of a person who is just on the margin of either being treated or not treated. This is a handy concept if the question you’re trying to answer is “should we treat more people?” I won’t go too much into the marginal treatment effect here, as actually getting one can be a bit tricky. But it’s good to know the idea is out there. Instead of focusing our average just on a group of people, what if we include everyone, but perhaps weight some people more than others? We can generically think of these as being called “weighted average treatment effects.” In general, a weighted average is a lot like a mean. Let’s go back to the average treatment effect - that was just a mean. 
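The conditional averages worked out in this section can be reproduced from the fake data (a sketch in plain Python; the numbers are exactly those of Tables 10.1 to 10.3 and the 1,000-Alfreds example):

```python
from statistics import mean

# Table 10.1: name -> (untreated outcome, treated outcome)
outcomes = {"Alfred": (1, 2), "Brianna": (1, 5), "Chizue": (2, 5), "Diego": (2, 4)}
effect = {n: t - u for n, (u, t) in outcomes.items()}

ate = mean(effect.values())                            # 2.5, everyone counted
ate_men = mean([effect["Alfred"], effect["Diego"]])    # 1.5, the men-only experiment

# ATT: Alfred and Chizue treated (Table 10.3). With no back doors, the
# simple treated-vs-untreated difference recovers their average effect.
att = (mean([outcomes["Alfred"][1], outcomes["Chizue"][1]])
       - mean([outcomes["Brianna"][0], outcomes["Diego"][0]]))   # 2.0

# ATUT: weight the per-type effects by how many of each type went
# untreated (600 Alfreds, 400 Briannas), as in the 1,000-and-1,000 example.
atut = (600 * effect["Alfred"] + 400 * effect["Brianna"]) / 1000  # 2.2

assert (ate, ate_men, att, atut) == (2.5, 1.5, 2.0, 2.2)
```

Four different, all-correct averages from the same four people: which one an estimator delivers depends on who gets treated and how the comparison is built.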
The mean of 1, 2, 3, and 4 is \((1+2+3+4)/4 = 2.5\), as you’ll recall from our fake data, reproduced below in Table 10.4. Now, let’s not change that calculation, but just recognize that \(1 = 1\times1\), \(2 = 2\times1\), and so on.

Table 10.4: Fake Data for Four Individuals

Name      Gender   Untreated Outcome   Treated Outcome   Treatment Effect
Alfred    Male     1                   2                 1
Brianna   Female   1                   5                 4
Chizue    Female   2                   5                 3
Diego     Male     2                   4                 2

Substituting in \(1\times1\) for \(1\), \(2\times1\) for \(2\), and so on, our calculation for the mean is now \((1\times1 + 2\times1 + 3\times1 + 4\times1)/(1+1+1+1) = 2.5\). Here, everyone’s number is getting multiplied by 1, and that’s the same 1 for everybody. This is a weighted average where everyone gets the same weight (1). But what if people got different numbers besides 1?

Continuing with the same fake-data example, let’s say for some reason that we think Brianna should count twice as much as everyone else, and Diego should count half as much. Now our weighted average treatment effect is \((1\times1 + 4\times2 + 3\times1 + 2\times.5)/(1+2+1+.5) = 2.89\).

There are some applications where we get to pick what these weights are and apply them intentionally - survey/sample weights, for example, as discussed in Chapter 13. In the context of treatment effects, though, we rarely get to pick what the weights are. Instead, there’s something about the design that weights some people more than others. A common way this shows up is as variance-weighted average treatment effects.

Statistics is all about variation. And the relationship between \(Y\) and \(X\) is a lot easier to see if \(X\) moves around a whole lot. If you don’t see a lot of change in \(X\), then it’s hard to tell whether changes in \(Y\) are related to changes in \(X\) because, well, what changes in \(X\) are we supposed to look for exactly? What’s the relationship between living on Earth and your upper-body strength?
Statistics can’t help there, because pretty much everybody we can sample lives on Earth. We don’t see a lot of people living elsewhere, so we can’t observe how it makes them different to live elsewhere. As a result, if some kinds of people have a lot of variation in treatment while others don’t, our estimate may weight the treatment effect of those with variation in treatment more heavily, simply because we can see them both with and without treatment a lot.

Let’s say that we get a sample of 1,000 Briannas and 1,000 Diegos. For whatever reason, half of all Briannas have ended up getting treatment, but 90% of Diegos have. So our data looks like Table 10.5.

Table 10.5: Briannas and Diegos get Treatment at Different Rates

Name      N     Treated     Outcome
Brianna   500   Treated     5
Brianna   500   Untreated   1
Diego     900   Treated     4
Diego     100   Untreated   2

Now, we can’t just compare the treated and untreated groups because we have a back door. “Being a Brianna/Being a Diego” is related both to whether you’re treated, and to the outcome (notice that their outcomes would be different if nobody got treated). So we want to close that back door. One way we can do that is by subtracting out mean differences between Brianna and Diego, both for the outcome and the treatment. When we do this, and reevaluate the treatment effect, we get an effect of 3.47.

The math to get here gets a little sticky, although you can refer to the Conditional Conditional Means section of Chapter 4, or to Chapter 13. But basically, we subtract Brianna’s outcome average of 3 from her outcomes, giving treated Briannas a 2 outcome and untreated Briannas a \(-2\) outcome, and her 50% treatment from her treatments, giving treated Briannas a “.5 treatment” and untreated Briannas a “\(-.5\) treatment”. Similarly, treated/untreated Diegos get \(.2/-1.8\) for outcome and \(.1/-.9\) for treatment. Fitting a straight line on what we have left tells us that a one-unit change in treatment gets a 3.47 change in outcome.
This is closer to Brianna’s treatment effect of 4 than to Diego’s treatment effect of 2. We’re weighting Brianna more heavily. Specifically, we are weighting them both by the variance in their treatment, and Brianna has more variance. The variance in treatment among Briannas is \(.5\times.5 = .25\) (the variance of a binary variable is always (probability it’s 1)\(\times\)(probability it’s 0) - that’s worth remembering). The variance in treatment among Diegos is \(.9\times.1 = .09\). The weighted average, then, is \((.25\times4 + .09\times2)/(.25+.09) = 3.47\).

Our estimate of 3.47 is closer to Brianna’s effect (4) than Diego’s (2) because we see a lot of her both treated and untreated, whereas Diego is mostly treated. Less variation in treatment means we can see the effect of that variation less. Note also that Diego counts less even though we see a lot of treated Diegos - this isn’t the average treatment on the treated. We know we’re getting a variance-weighted average treatment effect rather than the average treatment on the treated, because if we were getting ATT, we’d be closer to Diego and farther from Brianna.

Weighted average treatment effects pop up a lot whenever we start closing back doors. When we close back doors, we shut out certain forms of variation in the treatment. The people who really count are the ones who have a lot of variation left after we do that.

Variance-weighted treatment effects aren’t the only kind of weighted average treatment effect. For example, if you close back doors by selecting a sample where the treated and untreated groups have similar values of variables on back door paths (i.e., picking untreated observations to match the treated observations), you end up with distribution-weighted average treatment effects, where individuals with really common values of the variables you’re matching on are weighted more heavily. Another form of weighted treatment effects that pops up often is based on how responsive treatment is.
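Both the hand-picked weights from earlier (Brianna counted twice, Diego half) and the variance weights here are instances of the same weighted-average formula. A quick sketch, using only the numbers given in the text:

```python
def weighted_average(values, weights):
    """Sum of weight * value, divided by the sum of the weights."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Treatment effects from the fake data: Alfred 1, Brianna 4, Chizue 3, Diego 2.
effects = [1, 4, 3, 2]
plain_mean = weighted_average(effects, [1, 1, 1, 1])    # equal weights: 2.5
handpicked = weighted_average(effects, [1, 2, 1, 0.5])  # Brianna x2, Diego x0.5

# Variance weights for the Brianna/Diego example: the variance of a
# binary treatment is P(treated) * P(untreated).
w_brianna = 0.5 * 0.5  # 0.25
w_diego = 0.9 * 0.1    # 0.09
vwate = weighted_average([4, 2], [w_brianna, w_diego])

print(round(handpicked, 2), round(vwate, 2))  # 2.89 3.47
```

Same function both times; only the weights change, which is the whole point of the “weighted average treatment effect” family.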
In Chapter 9 we discussed the different ways that we can isolate just part of the variation in treatment. We either focus just on the part of the data in which treatment is determined exogenously (like running an experiment, and only including data from the experiment in your analysis) or use some source of exogenous variation to predict treatment, and then use those predictions instead of your actual data on treatment.

Of course, heterogeneous treatment effects don’t only apply to the effect of treatment on an outcome. They can also apply to the effect of exogenous variation on treatment. For example, suppose you’re running a random experiment about diet where the treatment is having to eat 100 fewer calories per day than you normally would, and the outcome is your weight. Some people have pretty good willpower and control over their diet. If you tell them to eat less, they can do that. If you tell them to keep doing what they normally do, they can do that too. Other people have less willpower (or less interest in satisfying a researcher - or it’s the middle of a pandemic and the Hot Cheetos are right there in the pantry). They might only eat 90 fewer calories per day when told to eat 100 less. Or 50. Or 5. Or 0. Maybe a few people will be disappointed by being assigned to the “continue as normal” treatment and will cut their calories anyway.

So for some people, being assigned to treatment makes them eat 100 fewer calories. For some people it’s 90, or 50, 0, or 10 more calories, or whatever. Heterogeneous treatment effects, but this time for the effect of treatment assignment on treatment, rather than the effect of treatment on outcome. Naturally, if we limit our data to just the people in our experiment and look at the impact of the experiment, it’s going to give us strange results.
When this happens - we have exogenous variation, but not everybody follows it, we limit our data to just the people in our experiment, and we look at the relationship between treatment assignment and the outcome - what we get is called the intent-to-treat estimate. More broadly, we get intent-to-treat when we have exogenous variation of some sort driving treatment, and we look directly at the relationship between that exogenous variation and the outcome. Intent-to-treat is the effect of assigning treatment, although not the effect of treatment itself, since not everybody follows their assignment.

Intent-to-treat gives us the average treatment effect of assignment, which is usually not what we want (unless we’re going to use that same assignment in the real world - if I’m using “a policy that forces insurers to cover therapy” to understand the effect of therapy on depression, maybe I do want to know the effect of that policy, rather than the effect of therapy itself, since I have more control as a policymaker over that policy than I do over therapy).

What does it give us for the effect of treatment? It’s not exactly a weighted average treatment effect at that point. It does weight each person’s treatment effect by the proportion of their treatment effect they received (in most cases, this is just “actually got the treatment” or “didn’t,” so it’s just 0 and 1). So if you got enough treatment to get 50% of its effects, you get a weight of .5. This weighting makes a lot of sense - if you get the full treatment, we see the full effect of your treatment when we start adding up differences. If you don’t get the treatment you were assigned to, we still include you in our addition, but it couldn’t have had an effect so you get a 0. All of this applies even if treatment isn’t 0/1!
In those cases the weights are “how much more treatment you got.” The thing that makes it not exactly a weighted treatment effect is that instead of dividing by the sum of the weights, you divide by the number of individuals. In a weighted average treatment effect, a weight of 0 (you didn’t respond to assignment at all) wouldn’t affect the weighted average treatment effect. But in intent-to-treat, someone with a weight of 0 has no effect on the numerator, but they do affect the denominator, bringing the effect closer to 0.

Table 10.6: Fake Data for Four Individuals

Name      Gender   Untreated Outcome   Treated Outcome   Treatment Effect
Alfred    Male     1                   2                 1
Brianna   Female   1                   5                 4
Chizue    Female   2                   5                 3
Diego     Male     2                   4                 2

Returning to our fake data once more, if we recruited two Chizues and two Diegos and assigned one of each to treatment, but Chizue went along with assignment while Diego decided never to receive treatment, then in the treatment-assigned group we’d see Chizue’s 5 and Diego’s 2 (since Diego was never actually treated), and in the treatment-not-assigned group we’d see Chizue’s 2 and Diego’s 2. The calculated effect would be \((3.5 - 2) = 1.5\). This is also \((3\times 1 + 3\times1 + 2\times0 + 2\times0)/(1+1 + 1 + 1) = 1.5\), or the effect of the two Chizues weighted by 1 (since they receive full treatment when assigned) plus the effect of the two Diegos weighted by 0 (since they never receive any treatment), divided by the number of people (4).

What if we take the other approach to finding front doors, where we use some source of exogenous variation to predict treatment, and then use those predictions instead of your actual data on treatment? This turns out to do something very similar to the intent-to-treat. However, because this approach doesn’t just say “were you assigned treatment or not?” but rather “how much more treatment do we think you got due to assignment?” we can now replace that “number of people” denominator with a “how much more treatment was there?” denominator.
Since “how much more treatment” was also our weight in the numerator, we’re back to an actual weighted average treatment effect. Specifically, the weights are how much additional treatment each individual would get if assigned to treatment. We call this one the local average treatment effect (LATE).

For example, let’s go back to Chizue and Diego, and Diego not going along with his treatment assignment. We look at assignment and at treatment, and notice that being assigned to treatment only seems to increase treatment rates by 50% (in the not-assigned group, nobody is treated; in the assigned group, 50% are treated). Based on that prediction, we expect to see only half of the treatment effect, and we can get back to the full treatment effect by dividing by .5. This gives us an effect estimate of \((3.5 - 2)/.5 = 3\). We can also get this 3 from \((3\times 1 + 3\times1 + 2\times0 + 2\times0)/(1+1 + 0 + 0) = 3\), which is the 3 effect of the two Chizues, each with a weight of 1 (since assignment increases their treatment from 0 to 1), and the 2 effect of the two Diegos, each with a weight of 0 (since assignment doesn’t affect their treatment).

In other words, the LATE is a weighted average treatment effect where you get weighted more heavily the more strongly you respond to exogenous variation. It is common in an econometrics class to hear that the LATE is “the average treatment effect among those who respond to assignment” and you might hear those who respond called “compliers.” However, this is a simplification. If one person responds fully to assignment and another only has half a response, the LATE will not average them equally, even though both are compliers. It will weight the full-response person twice as much as the half-response person.

This is kind of a strange concept - why would we want to weight people who respond to irrelevant exogenous variation more strongly? Well, maybe we don’t.
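The intent-to-treat and LATE calculations from the Chizue/Diego example differ only in their denominators, which is easy to see when the two are computed side by side. A sketch, with the people and weights taken straight from the example:

```python
# (treatment effect, share of the assigned treatment actually received)
# Two Chizues comply fully (weight 1); two Diegos never take treatment (weight 0).
people = [(3, 1), (3, 1), (2, 0), (2, 0)]

numerator = sum(effect * weight for effect, weight in people)

# Intent-to-treat divides by the NUMBER of people...
itt = numerator / len(people)

# ...while LATE divides by the SUM of the weights,
# making it a true weighted average treatment effect.
late = numerator / sum(weight for _, weight in people)

print(itt, late)  # 1.5 3.0
```

Equivalently, the LATE is the intent-to-treat scaled back up by how much assignment moved treatment: \(1.5/.5 = 3\).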
But the LATE still looms large because it happens to be the weighted average treatment effect that pops up in a lot of research designs. Maybe not what you want, but what you get. And along those lines, what do you get? How do we know, for a given research design, which of these treatment effect estimates we will end up with?

I Just Want an ATE, It Would Make Me Feel Great, What Do I Get?

By this point we know that there are far more ways to get a single representative treatment effect than just averaging them (to get the average treatment effect). We can get the treatment effect just for certain groups, we can weight some individuals more heavily than others, we can weight people based on how the treatment was assigned.

Now, usually (not always), what we want is the average treatment effect - the effect we’d see on average if we took a single individual and applied the treatment to them. Why might we not always want this? It depends what question we’re trying to answer. If we want to know “what was the actual effect of this historical policy?” then we might want to know what effect treatment had on the people it actually treated (ATT). If we want to know “what would be the effect if we treated more people?” we might want the treatment on the untreated (ATUT) or the marginal treatment effect. If we want to know “is this more effective for men or women?” we would want some conditional treatment effects. And so on.

The reason we bring up most of those other treatment effects at all is that we don’t always get what we want! The treatment effect you get isn’t necessarily a choice you make. It’s a consequence of the research design you have - and the estimator you use. While there are limits, for any given research design there are generally different ways of estimating the effect, which may give different treatment effect averages. Some of those estimators are specifically designed to give the ATE for research designs that don’t normally produce it.
And since there aren’t usually multiple available research designs that you can use to answer a given question, you’re often stuck with the treatment effect average you get. So for a given research design, which one do we get? The treatment effect we get is almost entirely determined by the source of treatment variation we use. That’s pretty much it. Ask where the variation in your treatment is coming from (after removing any variation you choose to remove by controlling for things, etc.), and you’ll have a pretty good idea whose treatment effects you are averaging, and who is being weighted more heavily.

We’ve already discussed one example of this. If we perform a randomized experiment, then we will be ignoring everyone who isn’t in our experiment. The only treatment variation we are allowing is among the people in our sample - any variation outside our sample is ignored. If our sample isn’t representative of the broader population (and thus doesn’t have the same average treatment effect as the broader population), then we will be getting the average treatment effect conditional on being in our sample, a conditional average treatment effect.

Let’s take another example. Let’s say we’re interested in the effect of being sent to traffic school on your future driving performance. Let’s also say that we know there are only two reasons anyone goes to traffic school: making a terrible driving mistake, or having someone else make a terrible driving mistake that you are somehow punished for. This gives us the diagram in Figure 10.1. Recognizing the clear TrafficSchool \(\leftarrow\) YourBadDriving \(\rightarrow\) YourFutureDriving back door, we decide to identify the effect by measuring and controlling for your own bad driving. This will identify the effect, but it will also shut out any variation in TrafficSchool that’s driven by YourBadDriving. So imagine two people, Rodney and Richard.
Rodney has a 50% chance of not going to TrafficSchool, a 10% chance of going because of someone else’s bad driving, and a 40% chance of going because of his own bad driving. Richard has a 50% chance of not going to TrafficSchool, a 30% chance of going because of someone else’s bad driving, and a 20% chance of going because of his own bad driving. We’re tossing out that 40% for Rodney and 20% for Richard chances of going because of their own bad driving. There’s only a 10% chance that Rodney goes to TrafficSchool for the reason we still allow to count, and similarly a 30% chance for Richard. That means there’s more remaining variation in treatment for Richard than for Rodney, so Richard’s treatment effect will be weighted more heavily than Rodney’s will. A weighted average treatment effect!

Following this logic - which treatment variation do we allow to count - will tell us almost every time which treatment effect we’re about to get. We can go a little bit further and apply this logic ahead of time to develop some rules of thumb. These are just shortcuts to applying that same logic, but they’re often easier to think about, and they work most of the time.

Rule of thumb 1: If you have true randomization in a representative sample and don’t need to do any adjustment, you have an average treatment effect (ATE).

Rule of thumb 2: If you have true randomization only within a certain group, and you isolate that group so you can take advantage of that randomization, you have a conditional average treatment effect.

Rule of thumb 3: If you know that some variation in treatment is connected to back doors and so you close those back doors, using only the remaining variation, you have a weighted average treatment effect - variance-weighted if you’re subtracting out explained variation, or weighted by how representative the observations are if you’re picking a subsample of the data or picking control observations by matching them with treated observations.
Rule of thumb 4: If you are identifying your effect by assuming that some untreated group is what the treated group would look like if they hadn’t been treated, then we have the average treatment on the treated (ATT).

Rule of thumb 5: If part of the variation in treatment is driven by an exogenous variable, and you isolate just the part driven by that exogenous variable, then you have a local average treatment effect (LATE).

These rules of thumb are, of course, rules of thumb and not true all the time. One important caveat here, already mentioned in a sidenote earlier in the chapter, is that research design alone isn’t the only thing that determines which treatment effect average you get. The way you estimate the effect matters too. To give a basic example, take variance-weighted treatment effects that weight the effect by the variance of treatment. We can estimate the variance of treatment. What if we just… run our analysis while adding sample weights equal to the inverse of that variance? The variances will cancel out and we’ll have an average treatment effect. The research design is a good place to start but the estimator matters. Part 2 of the book will talk about alternate ways of estimating particular research designs that get you different treatment effect averages.

Who Cares?

It seems almost beside the point to worry too much about which kind of treatment effect average we get, doesn’t it? After all, we’ve gone to all the work of identifying the effect in the first place. And each of these is an average of the actual treatment effects. Why should it matter? We should care because we’re interested in understanding causal relationships in the world!
The reason for paying attention to treatment effect averages (and which ones we are getting) is very clear if the reason we care about causal effects is that we want to know what will happen if we intervene. Think way back to when we were defining causality back in Chapter 6 - one way we talked about it was in the form of intervention. If we were to intervene to change the value of \(X\), and \(Y\) changes as a result, then \(X\) causes \(Y\). This approach to causality is one reason why we care about getting causal effects in the first place. It’s useful. If we know that \(X\) causes \(Y\), then if we want to improve \(Y\), we can change \(X\). If aspirin reduces headaches, and you have a headache, then take an aspirin. We know what will happen because we’ve established the causal relationship.

Bringing in treatment effect averages changes considerably what we can infer about what will happen based on the estimates we get in our analysis. For example, let’s say we suspect that the presence of lead in drinking water has led to increased crime - which it might well do! See, for example, Reyes, Jessica Wolpaw. 2007. “Environmental Policy as Social Policy? The Impact of Childhood Lead Exposure on Crime.” The BE Journal of Economic Analysis & Policy 7 (1). If we find evidence that lead in the drinking water does cause crime to rise, what would we use that information to do? Probably get the lead out of the drinking water, right?

However, what if it doesn’t reduce crime for everyone? Let’s say we found a number of localities that won government grants, awarded at random, to clean up the lead in their water. But among the localities that applied for the grants, there was no change in crime rates that followed. Perhaps their crime rates were already very low, or only localities with lead levels already too low to have an effect were the ones who applied for the grant. In the case of this study, we got an average treatment effect conditional on being in the study.
That conditional average treatment effect misrepresented the average treatment effect that we would get if we reduced lead levels in everyone’s drinking water. If we don’t pay attention to which treatment effect average we’re getting, we might erroneously think that the effect is zero for everyone and decide not to bother removing lead from the water.

This can go the other way too, where we estimate an average treatment effect but don’t want that. For example, imagine you develop a new (and you think better) vaccine for the measles. You study your new vaccine with an experiment in the United States. And because you want to get a really representative average effect, you do a very careful job randomly recruiting everyone into your study, sampling people from all walks of life completely at random. For simplicity, let’s assume nobody refuses being in your study. This approach - selecting people completely at random and nobody opting out of the study - will give us an average treatment effect (at least among people in the United States).

Then you get the results back and you’re shocked! The vaccine reduces the chances of measles, but only by a few tenths of a percent. Well, that’s probably because in the United States, north of 90% of people already have a measles vaccine, so your vaccine won’t do much extra for them. What you wanted was the average treatment effect conditional on not already having had a measles vaccine (strangely, this does not count as an average treatment on the untreated).

In general, what you want is to think about what intervention would look like, whether it will be in the form of a policy that could be considered (changing how vaccinations occur, reducing lead for everyone, etc.) or in understanding how the world works (wages are going up for group X; what impact should we expect this will have on home ownership among group X?). Once we know what intervention looks like, we want a treatment effect average that will match it.
Planning to apply treatment to everyone, or at random? The average treatment effect is what you want. Just to a particular group? The conditional average treatment effect for that group. Wanting to expand an already-popular treatment to more people? Probably want the average treatment on the untreated or a marginal treatment effect. Planning to continue a policy that people opt into? Average treatment on the treated! Understanding not just the overall effect, but who that effect is for, really fills in the gaps on making information from causal inference useful.
RTI Calculator | Overland Weekly

Explain it like I'm 5

Think of the RTI score like a test to see how well a car can climb over big bumps or hills without lifting its wheels off the ground. It’s like a game where we drive the car up a sloped ramp until one of the wheels just starts to lift off. We then measure how far the car went up the ramp. The RTI score is like a special score that tells us how good the car is at this game. It's calculated by comparing how far the car went up the ramp to how long the car is from front to back (that's the wheelbase), and then we multiply this by a special number (1000) to make the score easy to understand. Cars with a higher score are really good at climbing over bumps without losing touch with the ground, which is super important for off-road driving.

Technical Formula

The RTI (Ramp Travel Index) score is a quantitative assessment of a vehicle's axle articulation. This measure is pivotal in the realm of off-road performance, providing an empirical gauge of a vehicle's ability to maintain contact with the ground on uneven terrain. The calculation commences by driving the vehicle up a ramp at a specified angle until one wheel loses contact with the ground, marking the critical point of articulation. The RTI score is computed using the formula: (Distance Travelled Up The Ramp / Wheelbase) × 1000. This formula ensures a normalized score that allows for cross-vehicle comparisons, irrespective of variations in wheelbase lengths. The resultant RTI score is inversely proportional to the ramp angle – a lower ramp angle typically yields a higher RTI score, signifying superior suspension flexibility.

Super Nerdy Calculations

The mathematical foundation for converting RTI scores between different ramp angles hinges on principles of trigonometry and the constant nature of the suspension's articulation point.
The initial step involves determining the vertical height at which a wheel lifts off the ground when a vehicle ascends a ramp at a known angle. This vertical height remains constant regardless of the ramp angle and is a key to the conversion process.

Given an original RTI score (calculated at a specific ramp angle), we first decode this score to find the vertical height of articulation. The RTI score is defined as (Distance Travelled Up The Ramp / Wheelbase) × 1000. By rearranging this formula, we can express the distance travelled up the ramp as (RTI Score × Wheelbase) / 1000. Applying trigonometric principles, this distance up the ramp can be related to the vertical height using the sine function: Height = Distance Travelled × sin(Ramp Angle).

To convert this score to a different ramp angle, we maintain the constant vertical height and apply it to the new angle. By rearranging the sine function, we find the new distance travelled up the ramp as New Distance = Height / sin(New Ramp Angle). Subsequently, the new RTI score is recalculated using the original formula with the new distance travelled. This process underscores the fundamental relationship between the ramp angle and the vehicle's articulation, emphasizing the angle's impact on the vehicle's perceived suspension capability.

This meticulous calculation allows for an apples-to-apples comparison of vehicle suspension performance across various ramp angles, providing a robust tool for enthusiasts and engineers alike in assessing and benchmarking off-road vehicle suspensions.
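The formulas above translate directly into code. A minimal sketch (the example numbers are hypothetical, not from the page):

```python
import math

def rti_score(distance_up_ramp, wheelbase):
    """RTI = (distance travelled up the ramp / wheelbase) * 1000."""
    return distance_up_ramp / wheelbase * 1000

def convert_rti(rti, old_angle_deg, new_angle_deg):
    """Restate an RTI measured on one ramp angle for another angle, holding
    the vertical lift-off height constant. Since Height = Distance * sin(angle)
    and RTI is proportional to Distance, the wheelbase cancels out entirely."""
    return rti * math.sin(math.radians(old_angle_deg)) / math.sin(math.radians(new_angle_deg))

# Hypothetical vehicle: 110 in wheelbase, travels 66 in up a 20-degree ramp.
rti_20 = rti_score(66, 110)           # 600.0
rti_30 = convert_rti(rti_20, 20, 30)  # same articulation restated for a 30-degree ramp
print(round(rti_20), round(rti_30))   # 600 410
```

Note that the conversion reduces to multiplying by sin(old angle) / sin(new angle), which matches the page's observation that a steeper ramp yields a lower score for the same articulation.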
Tektronix OM1106 Optical Modulation Analysis Software

QAM measurements on the OMA software.

In addition to the numerical measurements provided on individual plots, the Measurements tab provides a summary of all numeric measurements, including statistics.

The Measurements tab: a summary of all measurements in one place

Make measurements faster

The Tektronix OM1106 software is designed to collect data from the oscilloscope and move it into the MATLAB workspace with extreme speed to provide the maximum data refresh rate. The data is then processed in MATLAB to extract and display the resulting measurements.

Take control with tight MATLAB integration

Since 100% of the data processing occurs in MATLAB, test engineers can easily probe into the processing to understand each step along the way. R&D labs can also take advantage of the tight MATLAB integration by writing their own MATLAB algorithms for new techniques under development.

Use the optimum algorithm

Don’t worry about which algorithm to use. When you select a signal type in the Tektronix OM1106 software (for example, PM-QPSK), the application applies the optimal algorithm for that signal type to the acquired data. Each signal type has a specially designed signal processing approach optimized for that signal. This means that you get results right away.

Don’t get stymied by laser phase noise

Signal processing algorithms designed for electrical wireless signals don’t always work well with the much noisier sources used for complex optical modulation signals. Our robust signal processing methods tolerate enough phase noise to make it possible to test signals that would traditionally be measured by differential or direct detection such as DQPSK.

Find the right BER

Q-plots are a great way to get a handle on your data signal quality. Numerous BER measurements versus decision threshold are made on the signal after each data acquisition. Plotting BER versus decision threshold shows the noise properties of the signal.
Gaussian noise will produce a straight line on the Q-plot. The optimum decision threshold and extrapolated BER are also calculated. This gives you two BER values: the actual counted errors divided by the number of bits counted, as well as the extrapolated BER for use when the BER is too low to measure quickly.

Constellation diagrams

Once the laser phase and frequency fluctuations are removed, the resulting electric field can be plotted in the complex plane. When only the values at the symbol centers are plotted, this is called a Constellation Diagram. When continuous traces are also shown in the complex plane, this is often called a Phase Diagram. Since the continuous traces can be turned on or off, we refer to both as the Constellation Diagram. The scatter of the symbol points indicates how close the modulation is to ideal. The symbol points spread out due to additive noise, transmitter eye closure, or fiber impairments. The scatter can be measured by symbol standard deviation, error vector magnitude, or mask violations.

Constellation diagram.

Constellation measurements

Measurements made on constellation diagrams are available on the “fly-out” panel associated with each graphic window. The measurements available for constellations are described below.

Elongation: The ratio of the Q modulation amplitude to the I modulation amplitude is a measure of how well balanced the modulation is for the I and Q branches of a particular polarization’s signal.

Real Bias: Expressed as a percent, this says how much the constellation is shifted left or right. Real (In-phase) bias other than zero is usually a sign that the In-phase Tributary of the transmitter modulator is not being driven symmetrically at eye center.

Imag Bias: Expressed as a percent, this says how much the constellation is shifted up or down. Imaginary (Quadrature) bias other than zero is usually a sign that the Quadrature Tributary of the transmitter modulator is not being driven symmetrically at eye center.

Magnitude: The mean value of the magnitude of all symbols with units given on the plot. This can be used to find the relative sizes of the two Polarization Signals.

Phase Angle: The transmitter I-Q phase bias. It should normally be 90 degrees.

StdDev by Quadrant: The standard deviation of symbol point distance from the mean symbol in units given on the plot. This is displayed for BPSK and QPSK.

EVM (%): The RMS distance of each symbol point from the ideal symbol point divided by the magnitude of the ideal symbol, expressed as a percent.

EVM Tab: The separate EVM tab provides the EVM% by constellation group. The numbers are arranged to correspond to the symbol arrangement. This is ideal for setting Transmitter modulator bias. For example, if the left side groups have higher EVM than the right side, adjust the In-phase Transmitter modulator bias to drive the negative rail harder.

Mask Tab: The separate Mask tab provides the number of mask violations by constellation group. The numbers are arranged to correspond to the symbol arrangement. The mask threshold is set in the Engine window and can be used for pass/fail transmitter testing.

Quadrature Error: The deviation of the transmitter IQ phase from 90 degrees.

IQ Offset: The ratio between the carrier leakage power and the signal power in dB. This metric is impacted by Quadrature Error, Real and Imaginary bias.

IQ Imbalance: The ratio of the real and imaginary constellation size in dB. It is related to the linear measure, Elongation.

Color features

The Color Grade feature provides an infinite persistence plot where the frequency of occurrence of a point on the plot is indicated by its color.
This mode helps reveal patterns not readily apparent in monochrome. Note that the lower constellation groups of the example below have higher EVM than the top groups. In most cases this indicates that the quadrature modulator bias was too far toward the positive rail. This is not evident from the crossing points which are approximately correct. In this case an improperly biased modulator is concealing an improperly biased driver amp. Color Grade Constellation. Color Grade with fine traces. Color Key Constellation Points is a special feature that works when not in Color Grade. In this case the symbol color is determined by the value of the previous symbol. If the prior symbol was in Quadrant 1 (upper right) then the current symbol is colored Yellow. If the prior symbol was in Quadrant 2 (upper left) then the current symbol is colored Magenta. If the prior symbol was in Quadrant 3 (lower left) then the current symbol is colored Light Blue (Cyan). If the prior symbol was in Quadrant 4 (lower right) then the current symbol is colored Solid Blue. This helps reveal pattern dependence. The following figure shows that pattern dependence is to blame for the poor EVM on the other groups. In QPSK modulation, the modulator nonlinearity would normally mask this type of pattern dependence due to RF cable loss, but here the improper modulator bias allows that to be transferred to the optical signal. Field eye diagram. 
Field eye measurements

│ Measurement │ Description │
│ Q (dB) │ Computed from 20 × Log10 of the linear decision threshold Q-factor of the eye │
│ Eye Height │ The distance from the mean 1-level to the mean 0-level (units of plot) │
│ Rail0 Std Dev │ The standard deviation of the 0-level as determined from the decision threshold Q-factor measurement │
│ Rail1 Std Dev │ The standard deviation of the 1-level as determined from the decision threshold Q-factor measurement │

In the case of multilevel signals, the above measurements are listed in the order of the corresponding eye openings in the plot. The top row values correspond to the top-most eye opening. The above functions involving Q-factor use the decision threshold method described in the paper by Bergano^1. When the number of bit errors in the measurement interval is small, as is often the case, the Q-factor derived from the bit error rate may not be an accurate measure of the signal quality. However, the decision threshold Q-factor is accurate because it is based on all the signal values, not just those that cross a defined boundary.

^1 N.S. Bergano, F.W. Kerfoot, C.R. Davidson, "Margin measurements in optical amplifier systems," IEEE Phot. Tech. Lett., 5, no. 3, pp. 304-306 (1993).

Additional measurements available for nonoffset formats

│ Measurement │ Description │
│ Overshoot │ The fractional overshoot of the signal. One value is reported for the tributary, and for a multilevel (QAM) signal it is the average of all the overshoots │
│ Undershoot │ The fractional undershoot of the signal (overshoot of the negative-going transition) │
│ Risetime │ The 10-90% rise time of the signal. One value is reported for the tributary, and for a multilevel (QAM) signal it is the average of all the rise times │
│ Falltime │ The 90-10% fall time of the signal │
│ Skew │ The time relative to the center of the power eye of the midpoint between the crossing points for a particular tributary │
│ Crossing Point │ The fractional vertical position at the crossing of the rising and falling edges │

Measurements versus Time

In addition to the eye diagram, it is often important to view signals versus time. For example, it is instructive to see what the field values were doing in the vicinity of a bit error. All of the plots that display symbol-center values will indicate that a symbol is in error by coloring the point red (assuming that the data is synchronized to the indicated pattern). The Measurement versus Time plot is particularly useful to distinguish errors due to noise, pattern dependence, or pattern errors.
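The EVM (%) entry in the constellation measurements table can be written out directly. Below is a minimal sketch, not Tektronix's implementation: symbols are complex field samples, and the reference magnitude is taken as the RMS of the ideal symbols (which reduces to the single ideal magnitude for formats like QPSK, where all ideal symbols have the same magnitude):

```python
import math

def evm_percent(measured, ideal):
    # RMS distance of each measured symbol from its ideal symbol,
    # divided by the reference (RMS ideal) magnitude, as a percent
    errs = [abs(m - i) ** 2 for m, i in zip(measured, ideal)]
    rms_err = math.sqrt(sum(errs) / len(errs))
    ref = math.sqrt(sum(abs(i) ** 2 for i in ideal) / len(ideal))
    return 100.0 * rms_err / ref

# Toy QPSK constellation: a 10% radial error gives roughly 10% EVM
ideal_qpsk = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]
print(evm_percent([1.1 * s for s in ideal_qpsk], ideal_qpsk))  # approx 10.0
```

This is only the scalar definition; the per-group EVM shown on the EVM tab would apply the same calculation to each constellation group separately.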
This function computes the sum of heat kernels associated with each of the source points, evaluating them at each query location. The window for evaluation of the heat kernel must be a rectangle. The heat kernel in any region can be expressed as an infinite sum of terms associated with the eigenfunctions of the Laplacian. The heat kernel in a rectangle is the product of heat kernels for one-dimensional intervals on the horizontal and vertical axes. This function uses hotrod to compute the one-dimensional heat kernels, truncating the infinite sum to the first nmax terms, and then calculates the two-dimensional heat kernel from each source point to each query location. If squared=TRUE these values are squared. Finally the values are summed over all source points to obtain a single value for each query location.
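In pseudo-code terms, the computation can be sketched as follows. This is an illustrative Python translation, not the R implementation: it assumes Dirichlet (absorbing) boundary conditions and unit diffusivity for the one-dimensional eigenfunction expansion, whereas the boundary conditions and parameterization used by hotrod may differ.

```python
import math

def heat1d(x, y, t, length, nmax=100):
    # Truncated eigenfunction expansion of the 1D heat kernel on [0, length],
    # keeping the first nmax terms (Dirichlet boundaries assumed here)
    s = 0.0
    for n in range(1, nmax + 1):
        k = n * math.pi / length
        s += math.sin(k * x) * math.sin(k * y) * math.exp(-k * k * t)
    return 2.0 / length * s

def hotbox_value(sources, query, t, width, height, nmax=100, squared=False):
    # Sum of 2D heat kernels from each source point to one query location;
    # the 2D kernel in a rectangle factors into a product of two 1D kernels
    total = 0.0
    qx, qy = query
    for sx, sy in sources:
        k = heat1d(sx, qx, t, width, nmax) * heat1d(sy, qy, t, height, nmax)
        total += k * k if squared else k
    return total
```

As in the description, each query location gets one value: the (optionally squared) kernel summed over all source points.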
Jump in Base Fed Rate would have continued for July... What would the Fed funds rate be as if we were in normal times? Basically, what would the Fed funds rate be if we used a rule instead of the discretionary policy being used by the Fed? I use my Effective Demand rule using the CPI (less food & energy) data released just this morning. My ED rule has an advantage over other rules that use GDP numbers because they have to wait 3 months for the GDP numbers. I do not. So here is the monthly update of where the Fed's base nominal interest rate would be as if we were in normal times... The 3rd quarter of 2014 so far is starting out by reinforcing the jump up in the ED "rule" rate seen in 2nd quarter 2014. The ED rule is giving the same rate of 3.5% for July 2014. Let me show you how this is calculated. Here is the Effective Demand rule... Effective Demand Fed Rate Rule = z*(TFUR^2 + LSA^2) – (1 – z)*(TFUR + LSA) + inflation target + 1.5*(current inflation – inflation target) z = (2*LSA + NR)/(2*(LSA^2 + LSA)) TFUR = Total Factor Utilization Rate, (capacity utilization * (1 – unemployment rate)), 74.3% for July 2014. LSA = Effective Labor Share Anchor is currently 74.5. NR = Natural real rate of interest is assumed to be 1.8% currently. Inflation target = 2.0% Current inflation (CPI less food & energy) = 1.855% in July 2014. 1.5 coefficient = To give the Fed rate leverage when inflation gets off target. Fed rate would change 1.5x more than inflation is off target. We first determine the z coefficient... z = (2 * 74.5% + 1.8%)/(2*(74.5%^2 + 74.5%)) = 58.00% Then we determine the TFUR for July 2014... TFUR = capacity utilization * (1 - unemployment rate) TFUR = 79.2% * (1 - 6.2%) = 74.3% Now we use the Effective Demand rule to determine the base nominal rate... 58%*(74.3%^2 + 74.5%^2) - (1 - 58%)*(74.3% + 74.5%) + 2.0% + 1.5*(1.855% - 2.0%) = 3.5% The ED rule worked very well for decades before the crisis. 
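The worked arithmetic above can be packaged into a short function. This is a sketch; the function name and defaults are mine, not the author's:

```python
def ed_rule_rate(cap_util, unemp, lsa, natural_rate=0.018,
                 target=0.02, inflation=0.01855):
    # Effective Demand rule as laid out above; inputs are decimal
    # fractions, the return value is the rate in percent
    tfur = cap_util * (1 - unemp)                    # total factor utilization
    z = (2 * lsa + natural_rate) / (2 * (lsa ** 2 + lsa))
    rate = (z * (tfur ** 2 + lsa ** 2)
            - (1 - z) * (tfur + lsa)
            + target
            + 1.5 * (inflation - target))
    return rate * 100

# July 2014 inputs from the post: 79.2% capacity utilization,
# 6.2% unemployment, 74.5% labor share anchor
july_2014 = ed_rule_rate(0.792, 0.062, 0.745)
print(july_2014)  # approx 3.49, which rounds to the post's 3.5%
```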
We get an idea of the twilight zone in which monetary policy now finds itself. The Fed just does not understand the unusual shift in Effective Demand which has occurred in the last decade.
Linear Regression with Pandas Dataframe | Saturn Cloud Blog

Linear Regression with Pandas Dataframe

As a data scientist or software engineer, you are likely to work with large amounts of data and need to extract insights from it. One of the most common tasks in data science is to predict a continuous variable based on one or more features. Linear regression is a popular and powerful tool for this purpose, and with the help of pandas, it becomes even easier to perform linear regression on your data.

What is Linear Regression?

Linear regression is a statistical method used to model the relationship between a dependent variable and one or more independent variables. The goal of linear regression is to find the best line that fits the data in a way that minimizes the error between the predicted values and the actual values. In its simplest form, linear regression can be represented by the formula:

y = mx + b

where y is the dependent variable, x is the independent variable, m is the slope of the line, and b is the y-intercept.

How to Perform Linear Regression with Pandas Dataframe

Performing linear regression with pandas is a simple process that can be broken down into four steps:

1. Load the data into a pandas dataframe
2. Prepare the data for linear regression by separating the dependent variable and the independent variable(s)
3. Create a linear regression model using the sklearn library
4. Train the model and evaluate its performance

Step 1: Load the Data into a Pandas Dataframe

Start by loading your data into a pandas dataframe.
The read_csv function is handy for reading CSV files and creating a dataframe.

import pandas as pd

data = pd.read_csv("D:\SamNewLocation\Desktop\data.csv", delimiter=';')

Make sure to replace "D:\SamNewLocation\Desktop\data.csv" with the actual path to your CSV file. If your CSV file is in the same directory as your script or notebook, you can simply specify the file name without the full path:

data = pd.read_csv("data.csv")

OUTPUT: a dataframe with two columns, x and y.

Step 2: Prepare the Data for Linear Regression

Prepare the data by separating the dependent variable and independent variable(s). For example, let's assume we want to predict the 'y' variable based on the 'x' variable.

x = data[['x']]
y = data['y']

Step 3: Create a Linear Regression Model using sklearn

Now that we have our data separated, we can create a linear regression model using the sklearn library. sklearn is a popular machine learning library that provides tools for data preprocessing, model selection, and evaluation.

from sklearn.linear_model import LinearRegression

model = LinearRegression()

Step 4: Train the Model and Evaluate its Performance

Train the model using the fit method and evaluate its performance using the score method, which returns the R-squared value.

# Train the model
model.fit(x, y)

# Evaluate the model
r2_score = model.score(x, y)
print(f"R-squared value: {r2_score}")

OUTPUT:

R-squared value: 0.6000000000000001

The R-squared value measures how well the linear regression model fits the data, ranging from 0 to 1, where 1 indicates a perfect fit.

In conclusion, linear regression is a powerful tool for predicting continuous variables. By following these four simple steps, you can easily perform linear regression on your data using pandas and sklearn. Whether you are a data scientist or a software engineer, mastering linear regression is a valuable skill that will enhance your effectiveness as a data analyst.
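For intuition about what LinearRegression's fit and score compute in the one-feature case, here is a from-scratch sketch of simple least squares. It is illustrative only; use sklearn in practice:

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = m*x + b, plus the R-squared value:
    # slope = covariance(x, y) / variance(x), intercept from the means
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    m = sxy / sxx
    b = mean_y - m * mean_x
    # R-squared: 1 minus (residual sum of squares / total sum of squares)
    ss_res = sum((y - (m * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return m, b, 1 - ss_res / ss_tot

# A perfect line y = 2x + 1 should give slope 2, intercept 1, R-squared 1
print(fit_line([1, 2, 3, 4], [3, 5, 7, 9]))
```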
indomain / 1

Instantiates an integer IC variable to an element of its domain.

Var: An integer IC variable or an integer

Simple predicate for instantiating an integer IC variable to an element of its domain. It starts with the smallest element, and upon backtracking tries successive elements until the entire domain has been explored, at which point the predicate fails. If Var is already a ground integer, then this predicate simply succeeds exactly once without leaving a choicepoint.

See Also: labeling / 1, :: / 2, ic_symbolic : indomain / 1, gfd : indomain / 1, sd : indomain / 1, fd : indomain / 1
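The described behavior (smallest element first, with backtracking trying successive values) can be mimicked in Python with generators. This is a loose sketch of the semantics, not ECLiPSe code, and the names are illustrative:

```python
def indomain(domain):
    # Yield each domain value in ascending order; each yield plays the role
    # of one solution of indomain/1, and resuming the generator plays the
    # role of backtracking into the next value
    yield from sorted(domain)

def label(domains, constraint, assignment=()):
    # Depth-first labeling over a tuple of domains: instantiate each
    # variable via indomain and check the constraint on full assignments
    if len(assignment) == len(domains):
        if constraint(assignment):
            yield assignment
        return
    for v in indomain(domains[len(assignment)]):
        yield from label(domains, constraint, assignment + (v,))

# X, Y each in 1..3 with X + Y = 4: values are tried smallest-first
solutions = list(label(((1, 2, 3), (1, 2, 3)), lambda a: a[0] + a[1] == 4))
print(solutions)  # [(1, 3), (2, 2), (3, 1)]
```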
2D Lists and Grids Guide

What are 2D Lists?

2D lists (or "nested lists") are simply lists that contain other lists as their elements. These are not only beneficial for storing complex sets of data, but also for representing things like grids and matrices.

2D Lists as Grids

There are two ways that one can organize a 2D list to make a grid: as a single outer column where each inner list represents a complete row, or as a single outer row where each inner list represents a complete column. Each of these approaches has its own pros and cons.

(Figure omitted: two 6 x 4 letter grids showing that the same expression, list[3][2], selects a different letter depending on which orientation is used.)

The approach where each inner list is a complete row may at first seem like the more natural or intuitive strategy. It certainly makes it easier to transform a text file into a grid, since each row can be read in one by one from the file and immediately transformed into the corresponding list. However, this approach also has a major drawback: the final grid must be accessed in reverse-coordinate order, using grid[y_index][x_index] notation. For example, let's say I am creating the letter grid shown above, which has a width of 6 and a height of 4. If I wish to access the bottom-right corner (the letter X), I would need to use the syntax grid[3][5]. This is because the first coordinate accesses the column position, since the outer list is a column, and the second coordinate accesses the row position, since the inner lists are rows.
TechSmart's Approach Because the dual-index notation for accessing grid positions is very common, grids in our TechSmart courses are generally organized in this second fashion: as a single outer list representing a row and each inner list representing a complete column. This allows students to use the same notation they are used to in graphical programs: the horizontal (x) location is specified first, followed by the vertical (y) location. However, Python is flexible: depending on the need of your specific program, it's entirely possible to organize your grids whichever way you choose.
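The two organizations described above can be sketched side by side in Python:

```python
# Build a 6-wide, 4-tall grid as a single outer row of complete columns,
# so cells are addressed in coordinate order: grid[x][y]
WIDTH, HEIGHT = 6, 4

# Fill with placeholder strings like "x,y" so the addressing is visible
grid = [[f"{x},{y}" for y in range((HEIGHT))] for x in range(WIDTH)]

# The bottom-right corner of a 6 x 4 grid is x = 5, y = 3
assert grid[5][3] == "5,3"

# The row-major alternative: a single outer column of complete rows,
# which must be accessed in reverse-coordinate order, grid_rows[y][x]
grid_rows = [[f"{x},{y}" for x in range(WIDTH)] for y in range(HEIGHT)]
assert grid_rows[3][5] == "5,3"
```

Both layouts store the same cells; only the index order at the access site differs.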
DbSchema Tutorial | SQL INTERSECT OPERATOR

SQL (Structured Query Language) is a powerful tool for managing and manipulating relational databases. One of the lesser-known but highly useful operators in SQL is the INTERSECT operator. This article dives deep into the INTERSECT operator, providing a detailed explanation, its differences from other operators, and practical examples.

What is the SQL INTERSECT Operator?

The INTERSECT operator returns rows that are common between two or more result sets. Think of it as a filter that only lets through records that appear in all of the specified queries.

SELECT column1, column2, ... FROM table1
INTERSECT
SELECT column1, column2, ... FROM table2;

INTERSECT vs INNER JOIN

Though they can sometimes provide similar results, INTERSECT and INNER JOIN are fundamentally different.

| Feature | INTERSECT | INNER JOIN |
| Purpose | Finds common rows between datasets. | Combines rows based on a condition. |
| Column Requirement | Columns must be of the same data type. | Columns can be different. |
| Result | A single set of columns with common data. | Multiple columns from both tables. |
| Duplication | Automatically removes duplicates from the result. | Can produce duplicate rows. |

Detailed Examples with Results

Finding Common Rows in the Same Table

Consider a sample table named Students:

| ID | Name | Age |
| 1 | Alice | 20 |
| 2 | Bob | 22 |
| 3 | Carol | 22 |
| 4 | Dave | 23 |

To find an age shared between two groups of students:

SELECT Age FROM Students WHERE ID IN (2, 4)
INTERSECT
SELECT Age FROM Students WHERE ID IN (1, 3);

Only the age 22 is common between the two queries.

INTERSECT with BETWEEN Operator

Using the same Students table:

SELECT Age FROM Students WHERE Age BETWEEN 20 AND 22
INTERSECT
SELECT Age FROM Students WHERE Age BETWEEN 21 AND 23;

The age 22 falls within both specified age ranges.

INTERSECT with IN Operator

SELECT Age FROM Students WHERE Age IN (20, 22)
INTERSECT
SELECT Age FROM Students WHERE Age IN (22, 23);

Once again, only the age 22 is common in both queries.
INTERSECT with LIKE Operator

SELECT Name FROM Students WHERE Name LIKE 'A%'
INTERSECT
SELECT Name FROM Students WHERE Name LIKE 'Al%';

The name Alice matches both LIKE patterns.

INTERSECT with WHERE Clause

SELECT Name FROM Students WHERE Age > 20
INTERSECT
SELECT Name FROM Students WHERE Age < 23;

Both Bob and Carol fit the age criteria defined in both queries.

SQL Intersect with 3 or More Tables

Suppose we have an additional table Teachers:

| ID | Name | Age |
| 1 | Evan | 22 |
| 2 | Felicia | 25 |
| 3 | Gary | 22 |
| 4 | Helen | 26 |

And another table Staff:

| ID | Name | Age |
| 1 | Ian | 22 |
| 2 | Jane | 28 |
| 3 | Kyle | 27 |
| 4 | Laura | 22 |

Now, to find the common ages among these three tables:

SELECT Age FROM Students
INTERSECT
SELECT Age FROM Teachers
INTERSECT
SELECT Age FROM Staff;

The age 22 is common across all three tables.

SQL Intersect With Multiple Expressions

We'll fetch both name and age:

SELECT Name, Age FROM Students
INTERSECT
SELECT Name, Age FROM Teachers;

There are no common name and age pairs between the two tables.

SQL Intersect Using ORDER BY Clause

(SELECT Name FROM Students)
INTERSECT
(SELECT Name FROM Teachers)
ORDER BY Name;

No common names exist between the two tables.

Common Mistakes and Pitfalls:

1. Column Misalignment: Ensure that the order and data type of columns in both SELECT statements match.
2. Over-reliance on INTERSECT: Sometimes, a well-constructed JOIN or WHERE EXISTS might be more efficient than using INTERSECT.

FAQs:

1. How is INTERSECT different from UNION?
INTERSECT returns only the common rows between result sets. UNION combines the result sets and returns all distinct rows.
2. How does INTERSECT handle NULL values?
In the context of INTERSECT, two NULL values are considered equal.

Practice Questions:

1. Retrieve common names from Students, Teachers, and Staff tables.
2. From the Students table, find students whose names start with A and B, and intersect those results.
3. Using a hypothetical Products table, find products that have a price range intersecting between $10-$50 and $40-$80.
4.
From the Students table, intersect results of students aged 22 with those whose names end with an e. In conclusion, the INTERSECT operator is an incredibly useful tool to retrieve common data between result sets. It’s essential to understand its functionality and know when to use it for effective database querying. Happy Querying !!
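The article's three-table example can be reproduced with SQLite, which supports INTERSECT, from Python's standard library:

```python
import sqlite3

# In-memory database with the sample Students and Teachers tables
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Students (ID INTEGER, Name TEXT, Age INTEGER);
    INSERT INTO Students VALUES
        (1,'Alice',20),(2,'Bob',22),(3,'Carol',22),(4,'Dave',23);
    CREATE TABLE Teachers (ID INTEGER, Name TEXT, Age INTEGER);
    INSERT INTO Teachers VALUES
        (1,'Evan',22),(2,'Felicia',25),(3,'Gary',22),(4,'Helen',26);
""")

rows = con.execute("""
    SELECT Age FROM Students
    INTERSECT
    SELECT Age FROM Teachers
""").fetchall()

print(rows)  # [(22,)] - only age 22 is common, and duplicates are removed
```

Note that even though 22 appears twice in each table, INTERSECT returns it once, matching the deduplication behavior described above.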
The Mathematical Patterns around Us “How can it be that mathematics, being after all a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality?” Albert Einstein I have always been pretty sure that mathematics and science are the best languages to explain the natural phenomena. While for many mathematics is too abstract, for me, it is the beautiful language of the universe. There is no upper limit to the numerical abilities of humans. First, we discovered fire, to get warm. Then, we needed light, so we invented electricity. When we needed to talk to someone 10,000 miles away, we invented the internet. Behind all these inventions, was mathematics. Of course, the universe cannot speak or think. However, we, the people, can read the universe. There are many scientifically and mathematically inclined people who can read the universe and find answers and then describe them. We, the normal people, can also use our imagination as an apparatus to read the universe and nature. If we can read, hence, something is written. In order to write, a language is always needed. So, the universe should have a language. The letters are circles, triangles, hexagons, etc. Everything in life has mathematical patterns. Think of the wild animals with stripes or patterns for the purposes of camouflage. But why does a leopard or cheetah or tiger have a particular design? The Enigma codebreaker, Alan Turing, had a mathematical theory about leopard’s spots. Turing suggested in his paper “The Chemical Basis of Morphogenesis” (published in 1952) “a mathematical schema for the formation of the patterns found in animals and plants.” This was 60 years ago [1]. Stars have patterns. Astrologists have been looking at the outer space searching for patterns to better understand life. Whatever it may be that they find, it is always about mathematics. Seasons have patterns. They come and go. 
And they influence nature: the climate changes, animals migrate north or south, rain comes, snow melts, the earth changes color, etc.… Of course, seasons cannot make these miracles. They can only have mathematical patterns. Einstein had pondered for years on how mathematics works so perfectly. He knew that mathematics is the bridge or the language that connects humans with the universe. And being a connection between us and the universe makes mathematics the greatest achievement of mankind. If you take a closer look at the patterns of our world, you will witness the language of mathematics. Let me give you some specific examples. Fibonacci, the golden ratio, spiral, cabbage… Our universe is filled with spiral designs. Spirals can be found in the shapes of the DNA double helix, flowers, elephant tusks, sunflowers, hurricanes, draining water, animal horns, a nautilus shell, a snail shell, a pinecone, a cabbage, a fingerprint, algae, galaxies... the list goes on and on. Tons of lifeless and living things have spiral designs. And they are not random spirals. They have something in common: the golden ratio! And surprisingly, “there is a strong case that this so-called ‘Golden Ratio’ (1.61803...) can be related not only to aspects of mathematics but also to physics, chemistry, biology and the topology of space-time” [2]. All these spirals in nature tell us there are numbers all around us. Let’s observe the numbers of petals on some flowers. When you count the number of petals of the flowers in your garden, you will get the numbers 3, 5, 8, 13, 21, 34, or 55. These numbers are not random numbers. These are very unique numbers; they are part of a sequence developed by Fibonacci, a 13th century mathematician, by adding up the last two numbers starting from 1: 1+1= 2, 1+2= 3, 2+3= 5, 3+5=8 … 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, … F[n]= F[n-1]+F[n-2], F[1]=1, F[2]=1 But, why are those Fibonacci numbers so important? 
The key is the relationship between the progression of growth and the proportion. There is a harmonic proportion hidden in the Fibonacci sequence. A fact: If you divide one number in the sequence by the previous number, the ratios approach phi: For example: 5/3 = 1.6666; 13/8 = 1.6250; 377/233 = 1.61802575; 317811/196418 = 1.61803399 Definition: In mathematics, two quantities are in the golden ratio if their ratio is the same as the ratio of their sum to the larger of the two quantities [3]. These numbers can be demonstrated with the spiral of the florets in a sunflower. The florets in a sunflower head also form two spirals. If you count the clockwise and counterclockwise spirals that reach the outer edge, you’ll usually find a pair of numbers from the sequence: 34 and 55. If it is a very large sunflower, you will get 89 and 144 [4]. These spirals are not only in sunflowers. You can see them if you look at a pine cone or a daisy. If you mark the spirals and count them, you will always get a number from the Fibonacci sequence. And if you count in the other direction, this time you will find an adjacent Fibonacci number. The nautilus shell, the golden mean What makes the nautilus shell so special for mathematicians? Having the Golden Mean. But how do we know that the nautilus shell has the Golden Mean? First of all, we will start with drawing a small, one unit square. Then we will draw another square which is larger than the previous one. We add each new square in a counterclockwise direction. The length of each square has a value from the Fibonacci sequence: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55 … Then we can draw spirals, starting with the smallest one, outward through the largest one. Then the Golden Mean will appear. Flowers, plants, or objects have no idea about mathematics, yet they manifest the best of mathematical patterns.
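The convergence of successive Fibonacci ratios to phi is easy to verify numerically. A short sketch:

```python
import math

# The golden ratio, phi = (1 + sqrt(5)) / 2 = 1.61803...
phi = (1 + math.sqrt(5)) / 2

# Build the first 30 Fibonacci numbers: each term is the sum of the
# previous two, starting from 1, 1
fib = [1, 1]
while len(fib) < 30:
    fib.append(fib[-1] + fib[-2])

# Ratios of consecutive terms: 1/1, 2/1, 3/2, 5/3, 8/5, ...
ratios = [fib[i] / fib[i - 1] for i in range(1, len(fib))]

# Early ratios oscillate around phi; late ones are indistinguishable from it
print(ratios[3], ratios[-1])
```

The fourth ratio is 5/3 = 1.666..., exactly as in the article's list, while the last one already agrees with phi to better than nine decimal places.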
This marvelous mathematical art has been placed in their nature for us not to be fascinated only but also to explore the mysteries behind it.
constraint_to_row: Constraint to row in bain: Bayes Factors for Informative Hypotheses

Evaluate a constraint, which has been formatted as an equation, with all terms moved to the left hand side, and return a single row of a (in)equality constraint matrix.

hyp: Character. An (in)equality constraint formatted as equation, with all terms on the left hand side.
How To Copy A Formula In Excel - ManyCoders

Key Takeaway:

• Understanding relative and absolute cell references is crucial when copying formulas in Excel. By using the $ symbol in your cell references, you can ensure that the formula is copied correctly and the right values are used.
• The fill handle is a quick and easy method to copy formulas in Excel. Simply click and drag the fill handle to the cells you want to copy the formula to, and Excel will automatically adjust the cell references.
• Copying formulas to other worksheets in Excel requires you to understand the 3D reference function. By using brackets and specifying the worksheet name, you can copy formulas across multiple worksheets.

Struggling to copy a formula in Excel? You're not alone! If you're looking for an easy and fast way to apply a formula to multiple cells, this article is for you. Learn how to copy and paste a formula quickly and easily.

How to Copy Formulas in Excel

Do you ever spend ages inputting the same formula in Microsoft Excel? I did this frequently until I learned the simple trick of copying formulas. In this article, we'll look at the significance of understanding relative and absolute cell references when copying formulas. Moreover, we'll study two techniques, the Fill Handle and the Drag and Drop Method, which can save you time. Let's explore copying formulas in Excel!

The Importance of Understanding Relative and Absolute Cell References

Understanding relative and absolute cell references is key to using Excel properly. It helps you copy formulas accurately and avert errors. The six-step guide below explains why these references matter.

1. Cell referencing means identifying a single cell or range of cells in a sheet.
2. The default reference type used by Excel is relative. This means that when copying a formula to another cell, it adjusts based on its position.
3. Absolute references are fixed and do not alter when copied.
4. Mixed references let one part of the reference remain fixed while the other stays relative.
5. Knowing which reference type to use prevents errors in copied formulas, such as wrong totals or missed calculations.
6. Lastly, the F4 key allows simple switching between reference types during formula entry.

It's important to understand that mastering cell references is an ongoing process that needs practice and application. Relative references may work at first, but not comprehending absolute and mixed references can lead to important details being forgotten, causing errors later. Understanding relative and absolute cell references is essential, as they provide strong building blocks for making accurate formulas in Excel spreadsheets. Not being able to use them correctly will result in ineffective processes, wasting time and missing opportunities. The next section talks about using the fill handle method, a rapid alternative for copying formulas.

Using the Fill Handle: The Quick and Easy Method

To copy formulas quickly in Excel and save time, you can use the fill handle. Here's a 5-step guide:

1. Select the cell with the formula.
2. Hover the mouse over the bottom-right corner until the cursor changes to a small crosshair.
3. Click and hold the left mouse button.
4. Drag the mouse down or across to fill adjacent cells with the copied formula.
5. Release the left mouse button.

Double-check that all formulas are correct, then adjust if needed. This method is helpful for large amounts of data, as manually copying formulas can be tedious and time-consuming. When I first used Excel, I didn't know this trick. I spent hours manually copying each formula, which left me frustrated and unproductive. After discovering the fill handle, my work became much more efficient.
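The adjustment rule the fill handle applies can be simulated in a few lines of Python. This is purely an illustration of the relative/absolute behaviour described above; the function name and the simplified single-cell grammar are our own invention, not anything Excel exposes:

```python
import re

def shift_ref(ref, d_rows, d_cols):
    """Shift one cell reference the way Excel does when the formula
    holding it is copied d_rows down and d_cols to the right.
    $-anchored parts (absolute) stay put; unanchored parts (relative) move."""
    m = re.fullmatch(r"(\$?)([A-Z]+)(\$?)([0-9]+)", ref)
    if m is None:
        raise ValueError("expected a reference like B2, $B2, B$2 or $B$2")
    col_abs, col, row_abs, row = m.groups()
    if not col_abs:                     # relative column: shift the letters
        n = 0
        for ch in col:                  # letters -> column number (A = 1)
            n = n * 26 + ord(ch) - ord("A") + 1
        n += d_cols
        col = ""
        while n > 0:                    # column number -> letters
            n, r = divmod(n - 1, 26)
            col = chr(ord("A") + r) + col
    if not row_abs:                     # relative row: shift the number
        row = str(int(row) + d_rows)
    return col_abs + col + row_abs + row

print(shift_ref("B2", 1, 0))    # relative: one row down -> B3
print(shift_ref("$B$2", 3, 3))  # absolute: unchanged -> $B$2
print(shift_ref("B$2", 0, 1))   # mixed: column moves, row is pinned -> C$2
```

Pressing F4 while editing a formula cycles a reference through exactly these four anchoring patterns (B2, $B$2, B$2, $B2).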
Next we'll look at the Drag and Drop Method for copying formulas in Excel.

Using the Drag and Drop Method

Using the drag and drop method is a great way to copy formulas in Excel quickly. It's easy, just follow these five steps:

1. Select the cell with the formula.
2. Hover your mouse over the bottom-right corner of the cell until it turns into a black cross.
3. Click and hold your mouse button and drag your cursor across the range of cells you want the formula in.
4. Release your mouse button when you reach the last cell.
5. The formula will now be copied into all the cells.

However, note that this method only works for copying formulas within one worksheet. Make sure to check your formulas before finalizing the spreadsheet, and be careful not to overwrite any existing data.

Pro Tip: Hold down the Ctrl key while selecting cells to exclude them from the selection.

Now, let's move on to copying formulas to other worksheets in Excel.

Copying Formulas to Other Worksheets in Excel

Managing data in Excel? Time is money! Copying formulas to other worksheets can make your workflow smoother, but it can also be confusing and frustrating. Here we go over techniques for copying formulas in Excel. First: the 3D reference function and what you need to know. Next: a step-by-step guide on how to copy formulas. Lastly: best practices for copying formulas to multiple worksheets, to save time, improve productivity, and avoid common mistakes.

Understanding the 3D Reference Function: What You Need to Know

The 3D reference function is a must-have in Excel. It allows you to do calculations between multiple worksheets, and knowing how to use it makes it easier to organize data. Here are the steps:

1. Select the cell where you want your formula.
2. Enter your formula, including the worksheet name and an exclamation point (e.g. Sheet2!A1).
3. Press Enter, and your formula will reference data from the other worksheet.

Using this function helps reduce data input errors and makes tasks like inventory tracking or sales reporting more efficient. A financial analyst was able to complete a task in minutes instead of hours after learning and using the 3D reference function. Next, let's explore another important Excel skill, copying formulas across worksheets, with a step-by-step guide.

Copying a Formula Across Worksheets: Step-by-Step Guide

Copying formulas across worksheets is an invaluable skill when it comes to Excel. Here's a step-by-step guide on how to do it:

1. Open the Excel workbook and select the cell(s) containing your formula(s).
2. Press Ctrl+C or right-click and select "Copy".
3. Go to the destination worksheet where you want to paste the formula.
4. Select the cell range where you want the formula.
5. Right-click and choose "Paste" or press Ctrl+V.
6. Select "Formulas" under Paste Options.

Did you know? Copying a formula can be a real time-saver, especially with large datasets. Note that the copies are independent: later changes to the source formula are not carried over to the copied cells, so double-check all formulas post-copying. Plus, Excel has over one billion users worldwide, making it one of Microsoft's most popular products. In the next section, we'll discuss best practices for copying a formula to multiple worksheets.

Copying a Formula to Multiple Worksheets: Best Practices

Copying formulas across multiple worksheets in Excel can save you time and effort. Here's a 4-step guide to help you do it quickly and easily:

1. Select the cell containing the formula you want to copy.
2. Press Ctrl+C or right-click and select "Copy".
3. Move to the first sheet where you want to paste the formula.
4. Select the cell where you want the formula to appear, and press Ctrl+V or right-click and select "Paste".

Repeat these steps for each worksheet where you want to apply the same formula.
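The "3D" idea, the same cell addressed on several sheets at once as in =SUM(Sheet1:Sheet3!B2), can be sketched as a toy model in Python. The sheet names and values below are invented for illustration; real workbooks would be read with a spreadsheet library, not plain dicts:

```python
# Toy workbook: each worksheet is just a mapping of cell address -> value.
workbook = {
    "Jan": {"B2": 120},
    "Feb": {"B2": 150},
    "Mar": {"B2": 90},
}

def sum_across_sheets(wb, sheet_span, cell):
    """Mimic a 3-D reference like =SUM(Jan:Mar!B2): total the same
    cell taken from every sheet in the span."""
    return sum(wb[name][cell] for name in sheet_span)

print(sum_across_sheets(workbook, ["Jan", "Feb", "Mar"], "B2"))  # 360
```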
There is also a "Fill Across Worksheets" feature in Excel to make this process even faster. It allows you to copy a particular cell or range of cells from one sheet and paste it into corresponding cells in multiple sheets at once.

It is best to use relative cell references when creating formulas. This way, when you copy your formulas to other worksheets, they adjust according to their position in each worksheet. Before proceeding with any work-related tasks, double-check your formulas to prevent any potential errors due to incorrect referencing or formatting.

My colleague once spent hours manually copying a complex formula into dozens of different sheets. She could have saved so much time if someone had shown her how to do it all at once. Stay tuned for our next topic: how to copy formulas with multiple cells in Excel!

Copying Formulas with Multiple Cells in Excel

Do you work in Excel? Then you know how valuable formulas are. But what if you need to use one formula on multiple cells? That's where copying formulas comes in! In this section, we'll look at the different ways to copy formulas to many cells: copy and paste, the fill function, and the range function. With these tools, handling large amounts of data is easy!

Copy and Paste Method: A Comprehensive Guide

The copy and paste method is a helpful tool in Excel. It saves you a lot of time by letting you duplicate content quickly. Follow these four simple steps to get started:

1. Select the cells containing the content you want to copy.
2. Press "Ctrl + C" on your keyboard, or right-click on the selected cells and select "Copy."
3. Move your cursor to where you want to paste the copied cells. Right-click and select "Paste," or press "Ctrl + V" on your keyboard.
4. Check everything is correct, then save your work.

Using this method is easy and helps you maintain consistency across your spreadsheet.
If you have problems copying or pasting cells, check that all the necessary cells are selected; click and drag to select them. You can also use this method to copy formulas. Just select the cell or group of cells containing the formula you want to copy instead of the individual content.

If your spreadsheet has many columns, using "Ctrl + C" and "Ctrl + V" over and over can be tedious, so try these shortcuts:

• To copy without formatting, press "Ctrl + Shift + C"
• To paste without formatting (text only), press "Ctrl + Alt + V," followed by the letter T

You can also decide whether to keep cell references when copying formulas by changing Excel's settings. Go to File > Options > Formulas > Working with Formulas, then tick "Use Relative...". Now, let's talk about another great feature in Excel for copying formulas: the Fill function.

Using the Fill Function to Copy Formulas: Tips and Tricks

Want to save time copying formulas? Use the Fill function! It can copy numbers, text, conditional formatting rules, and patterns across multiple cells or down columns. AutoSum and Average use similar fill capabilities. But remember to check parameters such as number formats, currency symbols, and date formats before applying the formulas, to ensure accuracy.

Copying Formulas with the Range Function: Expert Advice

Copying formulas in Excel is a task many of us do, but it can be slow and error-prone if you don't do it right. Here's a 4-step guide on using the Fill command to copy a formula across a range:

1. Select the cell or range of cells that have the formula you want to copy.
2. Click on the Home tab and find the Editing group.
3. Look for the Fill icon and click the arrow next to it.
4. Choose "Down," "Right," "Up," or "Left" from the options.

Excel adjusts the cell references automatically when you paste a formula into a new cell or range. This saves time when you copy complex formulas or copy across multiple worksheets.
You are also more accurate when copying a formula with this method, since you only enter data once. If you want other efficient methods, try keyboard shortcuts like Ctrl + C and Ctrl + V, click-and-drag, or selecting entire tables with Ctrl + A. Mastering these tips takes some practice, but once you do, you'll be working with Excel sheets faster. Now, let's look into how to copy formulas with errors in Excel.

How to Copy Formulas with Errors in Excel

Mastering Excel can make our lives easier at work or school. It's about learning the tricks of the trade, and one of those skills is formula replication. In this section we'll focus on copying formulas with errors. First, we'll look at the importance of understanding the error-checking feature in Excel, which can help us spot mistakes in our workbooks. Then, we'll walk through a step-by-step guide on fixing formulas using the error-checking feature. Finally, we'll cover some best practices for copying formulas with error checking enabled.

The Importance of Understanding the Error Checking Feature

Understanding the Error Checking feature is crucial for using Excel properly, especially when dealing with formulas which may contain errors. Without reviewing your formulas for errors, you could make wrong calculations, which can lead to serious issues. Learning about Excel's Error Checking feature helps you spot and fix formula errors quickly. Follow these 6 steps to get the most out of it:

1. Right-click a cell with an error in the worksheet.
2. Select "Show Calculation Steps" from the pop-up menu.
3. The evaluation window appears; review it to find any possible problems.
4. Click through each step and analyze it for errors or mistakes.
5. To evaluate the next step while still checking the same element or formula part, press F9.
6. Continue checking all stages until you find an issue that needs attention.

These steps make it easier to search all formulae in a worksheet quickly and accurately detect errors using Microsoft's built-in "Formula Auditing" tools. Knowing the Error Checking feature makes it simpler to troubleshoot your formulas when they are not giving the expected results. Other things to consider include avoiding incorrect data entry and broken links; using Excel's sophisticated tools like this one can help protect against some potential issues. As Investopedia noted of Microsoft Office 365 in 2021: "Excel has automatic error checking capabilities to detect common mistakes in spreadsheets."

Fixing Formulas with the Error Checking Feature: Step-by-Step Guide

Need to fix formula errors in Excel? Here's a 3-step guide:

1. Click the cell with an error.
2. Click the "Error Checking" button next to it.
3. Select an option from the drop-down menu and follow the steps.

This makes fixing errors quick and easy, and Microsoft provides extra information and resources. Error checking is one of the most effective methods: it not only helps you find and fix mistakes quickly, but also gives insights into how your spreadsheet works. According to a McKinsey & Company study, workers spend 28% of their workweek on email; streamline your workflow with tools like error checking and save time each week. Next up: best practices for copying formulas with error checking enabled.

Copying Formulas with Error Checking Enabled: Best Practices

Make sure error checking is enabled when copying formulas in Excel. Use the Copy and Paste buttons, or Ctrl + C and Ctrl + V on your keyboard. Review any errors that appear and make corrections. It's still possible to make mistakes even with error checking enabled, so watch out for common errors such as dividing by zero or referencing cells that no longer exist. Test formulas on a smaller scale before applying them to larger sets of data.
I once made a mistake by not enabling error checking when copying formulas in Excel. I didn't notice an error in one of the cells until after I had sent out the report. Since then, I always make sure error checking is enabled when copying formulas. Now let's take a look at how to copy formulas with conditional formatting in Excel.

How to Copy Formulas with Conditional Formatting in Excel

Working with spreadsheets in Excel? Learn the skill of copying formulas; it's a must-have, and conditional formatting can be useful to achieve certain looks. We'll explore the ins and outs of this task, from understanding the basics, to tips and tricks, to applying conditional formatting after copying formulas. Let's master this Excel skill!

Understanding Conditional Formatting: What You Need to Know

Conditional formatting is a tool that formats cells based on certain criteria. You can use it to highlight certain types of data, like duplicates or values above/below a set threshold. Here are the steps to understand it better:

1. Open a blank worksheet in Excel.
2. Look for the 'Conditional Formatting' button in the Home tab's Styles group.
3. Click the drop-down menu and select your criteria from the options, e.g. 'Highlight Cells Rules', 'Top/Bottom Rules', 'Data Bars', etc.
4. Input your parameters according to the data you want to highlight. Example: for duplicate values, select "Duplicate Values" from the drop-down list under "Highlight Cells Rules".
5. Customize your format using built-in formats, or create your own custom formats in the "Format Cells" dialogue box.
6. Click OK to apply the formatting pattern across your Excel sheet.

It's important to make full use of conditional formatting's potential. It helps visualise patterns in complex datasets quickly and with less room for error, so that better decisions can be made. Next, we will explore tips and tricks to copy formulas with conditional formatting to speed up productivity.
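The two rule types mentioned above, a value threshold and "Duplicate Values", reduce to simple predicates over a column. Here is a toy Python model (the sample numbers are made up; this only mimics which cells a rule would flag, not the formatting itself):

```python
from collections import Counter

def greater_than_rule(values, threshold):
    """Toy 'Highlight Cells Rules > Greater Than': flag cells above the threshold."""
    return [v > threshold for v in values]

def duplicate_values_rule(values):
    """Toy 'Highlight Cells Rules > Duplicate Values': flag values seen more than once."""
    counts = Counter(values)
    return [counts[v] > 1 for v in values]

column = [10, 25, 10, 40, 7]
print(greater_than_rule(column, 20))   # [False, True, False, True, False]
print(duplicate_values_rule(column))   # [True, False, True, False, False]
```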
Copying Formulas with Conditional Formatting: Tips and Tricks

Master copying formulas with conditional formatting in Excel! Here are some tips:

1. Select the cell with the formula.
2. Spot the fill handle in the bottom-right corner.
3. Click and drag it over the cells you want to copy the formula to.
4. Release the mouse button to apply the formula.
5. Check the formatting in each cell to make sure the conditional formatting rules were preserved.

Double-click on the fill handle to auto-apply the formula down a long column of data. Use relative referencing for formulas that should adjust automatically when copied. For complex formulas or special characters, use 'Paste Special' to select which elements to preserve.

Applying Conditional Formatting to Copied Formulas: Expert Advice

Here's a detailed guide on applying conditional formatting to copied formulas:

1. Select the cell with the formula and formatting.
2. Copy the cell with Ctrl+C, or by right-clicking and choosing Copy.
3. Highlight the range of cells where you want to paste the formula/formatting.
4. In the Home tab, select Paste Special, then choose Formulas.

When copying formulas, be sure the formatting is adjusted properly. Complex nested functions or embedded ranges require extra care. Relative references might cause odd behavior when pasting elsewhere; to avoid this, use absolute references like $A$1 instead of A1. Using named ranges is a good tip for successfully applying conditional formatting to copied formulas. Named ranges are easier to manage than chunky selections like A3:C25, and they let you reference the same cells throughout the spreadsheet.

Five Facts About How to Copy a Formula in Excel:

• ✅ Copying a formula in Excel can be done using the "Fill Handle" or the "Copy and Paste" method. (Source: Microsoft Excel Support)
• ✅ The Fill Handle can be found at the bottom right corner of a selected cell.
(Source: Excel Easy)
• ✅ Excel also allows for relative and absolute referencing in formulas. (Source: Investopedia)
• ✅ To copy a formula with absolute references, use the shortcut key F4. (Source: Excel Campus)
• ✅ You can also use the "Paste Special" option to copy formulas without formatting or to transpose data. (Source: ExcelJet)

FAQs about How To Copy A Formula In Excel

How to copy a formula in Excel?
To copy a formula in Excel, follow these simple steps: Select the cell containing the formula you want to copy, then place your cursor at the lower-right corner of the cell. When your cursor turns into a crosshair, click and drag it over the cells where you want the formula to be copied.
Myrtle, Charon, Jane in NC, and other math people that are into "Math as a subject" What do you look for in curriculum other than proofs? Is your goal to make your students really struggle through the process of learning? Is it that they discover some of these connections and concepts on their own? Or do you try to clarify and shovel in as much knowledge and logic as their little brains can take, sort of smoothing the way for them? I've got all of these math books coming so that my dh and I can give our kids the option to study math as a subject in and of itself, but never having done it myself, I am sort of confused on the approach. I usually like to do things in the most efficient way possible simply because I have a 2 year old terror that thinks that people can fly. And another question: am I right in thinking that the way the US currently studies math is akin to studying writing without ever studying grammar? I'm a little slow at putting new ideas together sometimes, so thanks for the indulgence and the help. Hi Kimber, Frankly I did not put much thought into this issue until my son was at the point in the American progression at which he would take Algebra. I collected a stack of books from library sales and charity shops, then spent a few hours looking at them. Ugh. I did not like a single one. So I ordered a copy of the book that I had used for Algebra, an antique Dolciani. My husband and I then compared this book with others in the stack. It was more "mathy". Yeah, there were some proofs. Yeah, there were more interesting problems. But what ultimately sold us was the use of proper mathematical notation, the statement of axioms, and the interesting material in the "Extra for Experts" sections.
It too is a Dolciani book, but it is co-authored by Beckenbach, who co-authored the book Modern Introductory Analysis with Dolciani. The latter, a book that I cherish, led me to become a math major. So perhaps it is more nostalgia than mathematics which leads to my book choices? One wonders. I can say that I have taught from a variety of precalculus texts, none of which I have liked. Having my son use the aforementioned Analysis text is a no brainer for me: I view it as a precalculus text which will prepare him for either engineering Calculus or an honors Calculus for Math majors. Back to questions asked in your post: To me mathematics is more than a series of algorithms. It is a way of thinking. I have my son read his math book before tackling his problems. (This is a skill that most kids in college lack. They use their math texts only for the problem sets but rarely read them!) If my son is stumped on something, I provide clues of varying levels. Sometimes I might offer the first step of a proof or suggest a path that he should follow. With word problems, I may walk him through the set of variables and picture demonstrating the problem by asking a series of leading questions. I do want him to scratch his head, learn to read the section, look back at previous material, and use the index. This process does indeed involve discovery--sometimes shoveling of concepts, I suppose. But I do not believe that students should bang their heads on walls for too long. Head scratching and a bang or two is fine. More than that leads to frustration! 
Math Sequence:
Modern Algebra (Book One) by Allen and Pearson
Geometry by Moise and Downs
Modern Algebra (Book Two) by Allen and Pearson
Principles of Mathematics by Oakley and Allendoerfer
Abstract Algebra by I N Herstein
Principles of Mathematical Analysis by Walter Rudin

Logic/Philosophy Sequence:
First Course in Mathematical Logic by Patrick Suppes
Introduction to Logic by Patrick Suppes
(The second book of logic should/must be done by the time you start Herstein.)

Now let's look at why this is the answer. Well, if it really does end in Herstein and Rudin, then there is no doubt about it -- it is definitely "math for math majors". Both of those books are along the lines of honors senior college courses, so if you can do that, you win. The shakiest part of it all is the lack of very much matrix algebra, actually, which is largely taken for granted in various parts of both Herstein and Rudin. However, Myrtle, for instance, just got done doing a litany of mathematical induction problems out of Allendoerfer, and I showed her the very same problems right there in Herstein. So, that New Math is no joke. And, now that I am seeing it really work over time on both Myrtle and on our oldest, I am about ready to drop Gelfand and everything else just to do that. What's our pedagogical approach? We memorize and do problems. We memorize axioms. We memorize "scripts" for proving things. After they memorize a fair amount, it gives them a framework within which to figure things out. They rarely can just figure everything out on the fly the first time around. And, usually they need the other pieces that they haven't figured out yet in what they are doing for them to be able to do it at all. So, they just memorize that part and keep moving. By "keep moving", I do not mean move on to the next topic. I just mean keep trying to do what they are doing -- to do problem after problem until it starts to click with them.
In the end, there is one and only one thing that matters -- what problems can you do. Period. Don't kid yourself about any single other thing mattering, here, other than that. They have to do the problems themselves without coaching. So, that's what you do and how you do it. Of course, it is possible to trade out some of these books for true alternatives. You might be able to trade out Allendoerfer for Dolciani's Modern Introductory Analysis. You might be able to switch out Herstein for Gallian or Rudin for Bartle. If you switch out real analysis for a calculus text, you lose. If you switch out a new math text for non new math or a program not sufficiently hard on logic and set theory, you will not be able to touch books like Herstein or Rudin in a hundred million years (which, again, means that you lose). Also, just to be clear, it is not even remotely disputable what the subjects of modern mathematics are. The three main subjects, as any mathematician should tell you, are abstract algebra (e.g. groups, rings, fields), topology, and analysis (e.g. rigorous calculus). Topology is usually chapter 2 of analysis (as it literally is for Rudin). That's why it is sometimes given short shrift and not done all on its own. But, if you are looking for the book on that, it is probably the one by Munkres. (Again, there are alternatives like Armstrong and others -- for these three, any math department or mathematician can easily recommend a book. But, make sure they do not know what your true intentions are. Just say you want an introduction to the subject at the senior college level and there are tons of books.) No one does this inside or outside of America. Not the Russians nor the Japanese nor the Singaporeans. There is probably something to the fact that you will basically have to both tutor someone through it and really kind of "make" them do it in a way you just can't do with the general public. So, everyone else, at best, does a really good engineering math program.
There are some pockets of Russian Math Circles or something else with a few very special and very motivated students that do something else, but even then it usually isn't the systematic training in math as its own subject. It is normally just some really good "real math"-type problems and the chance to talk with real (in many cases first rate) practicing mathematicians, both of which are invaluable in their own right. And finally, there is probably one realistic alternative to this: Euclidean Geometry. It is "real math", but it really is profoundly detached from modern math. Basically, everyone does analytic geometry in reality. However, the axiomatics of it and the antiquarian nature of it as well as its historical status make it something kind of special. So, in particular, these crazy ideas of "I'll just do Euclid with my kids" are actually not that crazy at all. After spending some time on the matter, my real recommendation is to go find a book from the 19th century that literally goes through the Elements and gives exercises. I'm kind of down on even Kiselev nowadays. Solomonovich is just Kiselev on steroids, and to really be able to do all the stuff he touches on, it takes a lot more than is there, I think. In other words, I just don't think the student is walking away having mastered things like proof by mathematical induction or even the idea of the Method of Exhaustion as Eudoxus conceived it or anything like that. You're just going to have a nice conversation with a mathematician that maybe inspires you to figure out what the hell he was talking about, anyway. Birkhoff is, indeed, kind of like cheating or something. So are the SMSG axioms in the Moise and Downs book I have listed in The Answer, above. Actually, if you want to get technical, you need to use Tarski's axioms (not even Hilbert is good enough), but you are really missing the point at that point. So, just get a good 19th century text.
(I was looking at this one in Google Books by Potts, the other day, for instance.) That would give you a truly classical education right there, and if you close your eyes and concentrate, you might, just for a moment, feel like you are actually standing in Plato's Academy over 20 centuries ago. At any rate, there you have it. Mathematicians, math educators -- probably no one will tell you this. It is ludicrous to suggest the possibility of texts like Herstein or Rudin, as well as some sort of gaffe to act like they are even meaningful without years of calculus and differential equations. But, I am not exaggerating and I am not just making this up. I've even tested a lot of the most important aspects of it empirically. I guess I won't "know for sure" until I take a few kids with a variety of "special ed" issues all the way from start to finish through it all. Now, give me your obolus or I'll beat you with my oar just like I do everyone else! To me mathematics is more than a series of algorithms. It is a way of thinking. This is no accident. The real math that graduate students and mathematicians do is something very unique indeed. It is the rare case of a formal a priori subject. Philosophy is, by and large, an informal a priori subject. Math and philosophy are, thus, sister disciplines. (Not math and science.) Also, while the debate between rationalists and empiricists continues (...I guess... :rolleyes:), the fact is that most of what is important -- most of the important and hard insights -- are a priori. They may or may not be "ultimately" tied to some sort of empirical "experience". It really doesn't matter. It is the getting from one step to the next that in some hard cases is what makes the difference. That is why math is special. Not "math" -- like teaching textbooks or even Singapore or really even Gelfand's Algebra book. But, math like what R L Moore does in his graduate classes as well as the math that is in the various Bourbaki texts.
That kind of math, where the proof is more important than the theorem, the kind of math that people put up a million dollar reward for -- like the millennium problems -- that math is something that anyone should feel lucky for the opportunity to beat themselves up with. No payments yet, although I owe y'all big time. :) I've spent all of my coins on researching math texts. Now I have to go and look up every author you've mentioned. I'm starting from ground zero with this stuff. I'm going to try Dolciani through to Modern Introductory Analysis. I think, unless we have a genius in the family that I'm unaware of and maybe even if we do, that I'll dual enroll them from then on. I was blessed enough to find teacher's editions and solutions to the books up through MIA. Last question, at least before I put the littles to bed :) , when exactly do you squeeze in the Logic books? Are they targeted for a certain age or grade level? Also, I did a quick search online and the introductory text by Suppes has no solution manual. I've never had a logic course, so I think I need the answers. Are these books superior to a modern day math based logic text? I plan to finish 6th grade math and pre-algebra over the next year with my dd. She'll be almost 11 by then. Math isn't really her strong suit, at least according to her. Jane, why did you switch text for Trig? I searched and saw that a few of these are still around. Also, thank you for the response. I'm trying to teach my dd to actually start reading her text. Truthfully, we had the conversation for the first time this morning. But I know that I wasn't doing it for the reasons that you mentioned. I'm just trying to get her to follow directions. We'll be facing Algebra in a year and a half or so. So I'm trying to do a better job for high school than I did with my kids in elementary. I totally wasn't prepared. Thank God I know better. Thank you for the info. Jane, why did you switch text for Trig?
I searched and saw that a few of these are still around. Also, thank you for the response. I'm trying to teach my dd to actually start reading her text. On your second comment: In college math classes that I taught, it was such a pleasure to have the rare student at office hours not whine but ask for clarification on something in the text. Granted, learning to read a math book is made more challenging when the text is busy with lots of irrelevant sidebars. Extra fluff is contrary to mathematics as a discipline yet "fluffy" math books seem more the norm than the exception. To answer your question: I had picked up this old "Modern Trigonometry" text at a library sale and had assumed that it was the same material that was in the Algebra II/Trig text since they were both Dolciani. (My new mission in life is to acquire cheap Dolciani texts and then proselytize.) When my son was on the verge of starting trig, I took a second look and realized that it was something completely different. Trig functions are usually defined via triangular relationships on the unit circle. This book began with the trig functions as periodic, circular functions. It is a slightly different approach. Perhaps it is because Beckenbach is in the picture--I really don't know why--but there is a richness to the material that is lacking in the other book. For example when vectors are introduced in an Algebra II or Precalculus book, we see them as tool for use in physics. Any book will list properties of vectors, but this book demonstrates why the set of vectors is a commutative group under addition and a vector space when you throw in multiplication by a scalar. "So what?" you might ask. When you know that something is a particular structure, then you know how that structure should behave. 
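For anyone wondering what it buys you to know that vectors form a commutative group, the axioms being alluded to here can be written out. This is standard textbook material (not quoted from the Dolciani/Beckenbach book), sketched in LaTeX notation:

```latex
% Vectors under addition form a commutative (abelian) group:
(u + v) + w = u + (v + w)   \quad \text{(associativity)}
u + v = v + u               \quad \text{(commutativity)}
u + 0 = u                   \quad \text{(identity: the zero vector)}
u + (-u) = 0                \quad \text{(inverses)}

% Throwing in multiplication by scalars a, b upgrades this to a vector space:
a(u + v) = au + av, \qquad (a + b)u = au + bu, \qquad a(bu) = (ab)u, \qquad 1u = u
```

Once you know a structure satisfies these axioms, every theorem proved about abelian groups or vector spaces applies to it for free, which is exactly the sense in which "you know how that structure should behave."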
Most people associate mathematics with getting the right number for the answer, but, as Charon will tell you, mathematicians seek to demonstrate existence and uniqueness of solutions, as well as develop a knowledge base of generalized structures and an understanding of the spaces in which these structures operate. My goal for my son is to provide a foundation that allows him to see what mathematics really is, while also demonstrating how mathematics can be used. But neither my husband nor I wish to do the latter in the generally sloppy way that applications are taught in most math classes. I have no problem with students learning about physical applications in mathematics, but physical applications should not be the reason that we study mathematics. Judging from most modern mathematical curricula which lead to and terminate with Calculus, the study of physical change, one would think that this is the only reason that mathematics was created. That was a long winded answer! Should I have just said that I like the second book better? A true statement.
Myrtle thinks that the first book will be easy after Allen I. The second book naturally follows the first book. It is kind of redundant, actually. Theoretically, you could just do the second book ab initio. In fact, that is what that book is for -- it is just written to older students. The reason I wouldn't do it with a younger student is just because it is probably too fast and too hard to jump into from scratch like that. Plus, the redundancy is pedagogically kind of a good thing when it comes to something like logic with a view to doing math. (You can never drill on too much logic.) So, the first book is just a way to get started, and, with a lot of overlap with the first book and ending with a lot of intuitive set theory, the second book is sort of where you want to end up. After that, jumping into Herstein shouldn't faze you one little bit, especially if you have done all of Allendoerfer's mathematical induction problems as well. (The last two starred problems in Allendoerfer, for instance, are to prove that Mathematical Induction is equivalent to the Well Ordering Principle. Also, several of the problems in Allendoerfer show up again in Herstein. So, this Allendoerfer book is awesome!) At any rate, we plan to just do it on the side, concurrently with our math. And, we are going to start the first book yesterday! LOL. We can start the first book at any time, I guess. With the next child we will plan on starting it earlier, perhaps, or maybe we will just save it for concurrent with Moise and Downs. I really doubt it will take as long as three years to get through them both. And, Myrtle just said she wants to see what Allendoerfer recommends, so maybe we'll end up substituting one of those books.
(But, I must say, I do like Suppes' second book, specifically because it is written with an eye toward doing math, and because of part two, specifically the fact that part two is naive set theory.)
Nothing but Functions - Logic

Most programmers have heard of Turing machines, an oversimplified model of computation as capable (algorithmically) as any programming language. They're useful for studying the capabilities of computers in a formal, no-frills setting. However, there exists a second, lesser-known model of computation which came slightly before Turing machines and is equally powerful: lambda calculus. (Don't panic — this isn't the kind of "calculus" found in most math classes.) Anything that a Turing machine can do, so can lambda calculus. Instead of procedural things, like memory and instructions, lambda calculus models universal computation using nothing but functions. No variables, no numbers, no booleans, no conditionals, no loops, not even multiple-argument functions. Like Turing machines, lambda calculus isn't immediately practical. Without any of the features found in modern languages, even normally-simple tasks can be cumbersome. However, it's a useful model for thinking about programming, and so it's still a valuable topic for modern programmers to understand. If you like, think of it as a game — what can we build using only functions?

Let's build something in JavaScript following the rules of lambda calculus. To preserve our sanity, we'll use ECMAScript 6 Arrow Functions (see MDN or the spec itself), which are available in Firefox 22 or Chrome 45. In this syntax, writing a => b is the same as writing (function(a){ return b; }).

Here are the rules:

1. Functions can only take exactly one argument and contain exactly one statement.
2. Closures are allowed — you can refer to arguments of functions in higher lexical scopes.
3. Aliases can be created with var to help with legibility, but variables cannot be reassigned or used in a way that the original function could not, like passing it to itself.
Based on these rules, here is the most simple value, the identity function:

var IDENT = x => x;

This function returns whatever is passed to it, much like (function IDENT(x) { return x; }). We'll use a convention of all-capitals names for things that are aliases for lambda calculus functions that are following the rules. We'll use single-letter lowercase letters for arguments to be consistent with argument naming in lambda calculus.

You can also make functions that make other functions to collect more than one argument. For example:

var RETURN_FIRST_ARG = a => b => a;
var RETURN_SECOND_ARG = a => b => b;

// which is like writing, for example,
var RETURN_FIRST_ARG = function (a) {
  return function (b) {
    return a;
  };
};

// then, later
RETURN_FIRST_ARG("first")("second");  // "first"
RETURN_SECOND_ARG("first")("second"); // "second"

These functions store an argument and then immediately return another function awaiting another argument. When you call them, you pass each argument individually as a new function call. While this seems silly, it allows lambda calculus to stay as simple as possible. It also permits powerful capabilities such as partial application, wherein you pass only one argument and wait to pass the other until later. We'll rely on that more later.

Now, we have one of the often-used capabilities found in lambda calculus — selecting from multiple values. With this, we can build logic. Let's call the function that chooses its first argument "TRUE", and the function that chooses its second argument "FALSE".

var TRUE = x => y => x;
var FALSE = x => y => y;

Now, we can write things like this:

TRUE("value_if_true")("value_if_false"); // "value_if_true"
some_boolean("it's true")("it's false"); // tells us whether some_boolean is true or false

...which seems silly until you consider that we've implemented the equivalent of the ternary operator:

some_boolean ? "it's true" : "it's false"; // *also* tells us whether some_boolean is true or false

Now that we have conditional logic, we can implement other operators:

var AND = p => q => p(q)(p);
var OR = p => q => p(p)(q);

Each of these functions takes two boolean values and produces a boolean value.

AND returns q (whatever it is) if p is true, and p otherwise (which must then be false). So, if p is false, false is returned. Otherwise, q is returned: if it's true, the result is true, and if it's false, the result is false. So, both p and q must be true for the result to be true.

OR returns p if p is true (that is, if p is true, true is returned), and otherwise returns q (whatever it is). So, if p is false, q is returned: if it's true, the result is true, and if it's false, the result is false. So, both p and q must be false for the result to be false.

We can also produce NOT:

var NOT = p => x => y => p(y)(x);

NOT looks like it takes three arguments, but think of it like it only takes one, p. It produces "a function which takes two things (x and y) and returns one of them" — a boolean value, like our TRUE and FALSE. The thing it returns is the result of p(y)(x), that is, calling p but with the argument order swapped. So now, we have a function where if p is true, it picks the second value, and if it's false, it picks the first value, just like if p were the opposite of whatever it is. So, NOT is a function that inverts the order of arguments for a boolean, turning TRUE into FALSE and vice-versa.
Sometimes, we really just want to use the boolean like a ternary operator, but we want to make it clear we're doing that explicitly:

var IF = p => x => y => p(x)(y);

This function takes a boolean and calls it with the given arguments, which doesn't really do much, but it lets us write code that is a little easier to read:

IF(some_boolean)("it's true")("it's false");

We can combine some of these functions to produce another function core to most modern computers:

var NAND = p => q => NOT(AND(p)(q));

So, let's try out our new functions. First, we need a way to turn a boolean function back into something we can use in normal-land JavaScript:

function l2js_bool(bool) {
  return bool("TRUE")("FALSE");
}

This function is our lambda-to-javascript function for booleans. It expects a boolean and asks it to choose between string representations of its value. We can use it to print out the result of boolean computations:

[TRUE, FALSE].forEach(function(x){
  [TRUE, FALSE].forEach(function(y){
    console.log(l2js_bool(x), l2js_bool(y),
      l2js_bool(AND(x)(y)), l2js_bool(OR(x)(y)), l2js_bool(NAND(x)(y)));
  });
});

This gives us a nice truth table:

x     y     and   or    nand
FALSE FALSE FALSE FALSE TRUE
FALSE TRUE  FALSE TRUE  TRUE
TRUE  FALSE FALSE TRUE  TRUE
TRUE  TRUE  TRUE  TRUE  FALSE

Now, we have boolean logic using nothing but functions! In the next article, we'll build numbers and do some math. Stay tuned!
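As an exercise with these pieces (a sketch of my own, not part of the original article), we can assemble XOR out of AND, OR and NOT, and use l2js_bool to check De Morgan's law, which says NOT (p AND q) is equivalent to (NOT p) OR (NOT q):

```javascript
// Definitions restated so this snippet runs on its own.
var TRUE  = x => y => x;
var FALSE = x => y => y;
var AND  = p => q => p(q)(p);
var OR   = p => q => p(p)(q);
var NOT  = p => x => y => p(y)(x);
var NAND = p => q => NOT(AND(p)(q));

function l2js_bool(bool) { return bool("TRUE")("FALSE"); }

// XOR: true when exactly one input is true -- (p OR q) AND NOT (p AND q).
var XOR = p => q => AND(OR(p)(q))(NOT(AND(p)(q)));

console.log(l2js_bool(XOR(TRUE)(FALSE))); // TRUE
console.log(l2js_bool(XOR(TRUE)(TRUE)));  // FALSE

// De Morgan's law: NAND(p)(q) should pick the same branch as
// OR(NOT(p))(NOT(q)) for all four input combinations.
[TRUE, FALSE].forEach(function (p) {
  [TRUE, FALSE].forEach(function (q) {
    console.log(l2js_bool(NAND(p)(q)) === l2js_bool(OR(NOT(p))(NOT(q)))); // true
  });
});
```

Note that XOR is still just nested single-argument calls, so it stays within the rules above.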
Basic introduction and comparison of gear tooth profile

(a) The base circle is larger than the root circle. (b) The base circle is smaller than the root circle.

In order to further study the dividing line between the two cases in terms of the number of teeth, the standard involute parameters (pressure angle α = 20°, coefficient of addendum height h*a = 1, and coefficient of addendum clearance c* = 0.25) are substituted into the basic parameter expressions for the base circle and the root circle, and the radius curves are plotted against the number of teeth for comparison in Fig. 2. It can be seen from Fig. 2 that, as the number of teeth increases, the root circle gradually changes from smaller than the base circle to larger than the base circle. When the number of teeth is small, the base circle is larger than the root circle, which is consistent with the situation shown in Figure 1 (a); when the number of teeth z > 41, the base circle is smaller than the root circle, as shown in Figure 1 (b). In view of these two different geometric cases, gear pairs under both conditions will be selected below for the simulation calculation of tooth stiffness.

According to the description of the gear tooth profile composition, it is assumed that meshing does not begin at point B but at point C; that is, point C is the initial meshing point B2. The segment BD of the involute curve can thus be further divided into a non-meshing segment BC and a meshing segment CD. To verify this hypothesis, the meshing angle at the initial meshing point B2 is compared with that at the initial point B of the involute curve, and likewise the radii OB and OB2, using the corresponding meshing angle and radius formulas. The simulation results of these formulas are drawn in Fig. 3.
The results show that the meshing angle at the initial meshing point varies more with the number of teeth than the meshing angle at the initial point B of the involute curve, and that the difference between the two radii is small but greater than zero, first increasing and then decreasing with the number of teeth. Based on these different meshing effects, the complete gear tooth profile can be divided into three parts: the fillet transition curve AB, the non-meshing segment BC, and the meshing segment CD. Following the results of the hypothesis verification, the components of gear tooth stiffness can be analyzed and studied segment by segment. Combined with the formula, the detailed formulas for the compression stiffness Ka, bending stiffness Kb and shear stiffness Kt are derived.
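The z > 41 dividing line quoted above can be checked directly from the radius formulas (a sketch of my own; the relations below are textbook formulas for a standard involute gear with α = 20°, h*a = 1 and c* = 0.25, not equations taken from this article):

```javascript
// For a standard gear with module m and z teeth:
//   base circle radius rb = (m * z / 2) * cos(alpha)
//   root circle radius rf = m * z / 2 - (ha + c) * m,  with ha = 1, c = 0.25
// Setting rb = rf gives the dividing tooth count z = 2.5 / (1 - cos(alpha)).
const alpha = 20 * Math.PI / 180;
const m = 1; // the module cancels out of the comparison

const baseRadius = z => (m * z / 2) * Math.cos(alpha);
const rootRadius = z => m * z / 2 - (1 + 0.25) * m;

const crossover = 2.5 / (1 - Math.cos(alpha));
console.log(crossover.toFixed(2));            // "41.45"
console.log(baseRadius(41) > rootRadius(41)); // true  (Fig. 1 (a) case)
console.log(baseRadius(42) > rootRadius(42)); // false (Fig. 1 (b) case)
```

The crossover falls between 41 and 42 teeth, so the base circle exceeds the root circle up to z = 41 and is smaller from z = 42 on, matching the article's dividing line.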
Displacement, Velocity and Acceleration

What are displacement, velocity, and acceleration?

In classical mechanics, kinematics deals with the motion of a particle. It deals only with the position, velocity, acceleration, and displacement of a particle, without concern for the forces that cause the motion.

Distance and displacement are two quantities that indicate the length between two points, but they are completely different. Distance is the total length of the path covered by an object, irrespective of direction, and it is a scalar quantity. Displacement is the overall change in the position of an object or a particle. It is a vector quantity, which means it has both direction and magnitude.

Consider an object placed at A that is displaced from its position to reach B. The change in its position is its displacement, measured along a straight line. With O as the origin,

Displacement, $\vec{AB} = \vec{OB} - \vec{OA}$

It is the vector difference between the initial position and the final position. If A is the initial position and B is the final position, then the displacement is

$\Delta x = x_f - x_i$

where $\Delta x$ = displacement, $x_f$ = final position, $x_i$ = initial position. The unit of displacement is the meter (m).

The rate of change of displacement of an object with respect to time (in a given frame of reference) is called velocity. It is a vector quantity:

$v = \frac{ds}{dt}$

where ds = change in displacement, dt = change in time. The unit of velocity is m/s or $\mathrm{m\,s^{-1}}$.

Like distance and displacement, speed and velocity have distinct meanings. Speed is a scalar quantity; it refers to how fast the object is moving, or the rate at which the object covers distance. If an object moves at a fast rate, it covers a large distance in a small time.
In contrast, if the object is moving slowly (at a low speed), it covers only a small distance in the same time.

Instantaneous velocity

Instantaneous velocity is the velocity of the object at a specific instant, i.e. in the limit of a vanishingly small time interval:

$v_i = \lim_{\Delta t \to 0} \frac{\Delta s}{\Delta t}$

where $v_i$ = instantaneous velocity, $\Delta s$ = change in displacement, $\Delta t$ = time interval.

Average velocity

The total displacement of the body divided by the total time taken is called average velocity:

$v_{av} = \frac{d_2 - d_1}{t_2 - t_1}$

where $v_{av}$ = average velocity, $d_2$ and $d_1$ = final and initial position, $t_2$ and $t_1$ = final and initial time.


Acceleration is the rate of change of velocity of an object with respect to time. It is a vector quantity. If there is a change in the velocity of the object, the object is said to accelerate:

$a = \frac{dv}{dt}$

The unit of acceleration is $\mathrm{m/s^2}$ or $\mathrm{m\,s^{-2}}$. If the object's speed is rising at a constant rate, the object has constant (uniform) acceleration. If the change in velocity is evaluated over a finite interval of time, the result is the average acceleration; if it is evaluated over an infinitesimally small interval of time, it is the instantaneous acceleration.

Acceleration in projectile motion

If an object is thrown upwards, the primary force acting on it is gravity. Other, secondary forces also act on it, but their effect is small compared with gravity. The force that pulls the object downwards produces the acceleration due to gravity. When an object is thrown obliquely from the earth's surface, it follows a curved path under gravity as it falls back toward the earth. The moving object is called a projectile and the motion is called projectile motion.

Equations of motion

These are the so-called kinematic equations of motion.
They also express the relationship between displacement, velocity, and acceleration, and describe the motion of a particle moving in one, two, or three dimensions. The motion may be uniform or non-uniform, accelerating or non-accelerating. These equations relate the kinematic parameters and help to describe the motion of the object. The four basic equations (for constant acceleration) are

$v = u + at$
$s = ut + \frac{1}{2}at^2$
$v^2 = u^2 + 2as$
$s = \frac{u + v}{2}\,t$

Let us imagine a particle that undergoes a displacement s in time t. Let u be the initial velocity and v the final velocity, and let the motion be uniformly accelerated at the rate a in the given frame of reference. Using differential and integral calculus we can derive the equations.

First equation of motion

From the definition of acceleration, $a = \frac{dv}{dt}$, so $dv = a\,dt$. Integrating,

$\int_u^v dv = \int_0^t a\,dt$
$v - u = at$
$v = u + at$ (1)

Second equation of motion

The second equation is derived from the definition of velocity together with the first equation: $\frac{ds}{dt} = v = u + at$. Integrating,

$\int_0^s ds = \int_0^t (u + at)\,dt$
$s = ut + \frac{1}{2}at^2$ (2)

Third equation of motion

The third equation is derived from the acceleration, $a = \frac{dv}{dt}$. Multiplying and dividing by ds on the right side,

$a = \frac{dv}{ds}\frac{ds}{dt} = v\frac{dv}{ds}$

so $v\,dv = a\,ds$. Integrating,

$\int_u^v v\,dv = \int_0^s a\,ds$
$\frac{v^2 - u^2}{2} = as$
$v^2 = u^2 + 2as$ (3)

Displacement, velocity and acceleration in simple harmonic motion

In classical mechanics, for 1-D simple harmonic motion (SHM), the equation of motion is a second-order differential equation with constant coefficients. It follows from Hooke's law and Newton's second law for a mass placed on a spring.
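As a quick numerical cross-check of the kinematic equations just derived (illustrative values of my own, not from the text), take u = 5 m/s, a = 2 m/s², t = 3 s:

```javascript
// Verify that the constant-acceleration equations agree with each other.
const u = 5; // initial velocity, m/s
const a = 2; // acceleration, m/s^2
const t = 3; // time, s

const v = u + a * t;               // equation (1): v = u + at
const s = u * t + 0.5 * a * t * t; // equation (2): s = ut + (1/2)at^2

console.log(v); // 11
console.log(s); // 24
// Equation (3), v^2 = u^2 + 2as, now holds automatically:
console.log(v * v === u * u + 2 * a * s); // true
```

Any pair of the equations determines the third, which is why the check on the last line cannot fail for consistent inputs.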
The differential equation is

$m\frac{d^2x}{dt^2} = -kx$ (5)

where m = inertial mass of the body, x = displacement of the mass from the mean position, and k = the spring constant. Equivalently,

$\frac{d^2x}{dt^2} = -\frac{k}{m}x$ (6)

Solving this differential equation gives a sinusoidal solution,

$x(t) = c_1\cos(\omega t) + c_2\sin(\omega t)$ (7)

where $\omega = \sqrt{k/m}$ and $c_1$, $c_2$ are constants of integration. To find the values of these constants, put t = 0:

$x(0) = c_1$

so $c_1 = x_0$, the initial position of the particle. By differentiating equation (7), we can find the constant $c_2$:

$v(t) \equiv \frac{dx}{dt} = -c_1\omega\sin(\omega t) + c_2\omega\cos(\omega t)$

When t = 0,

$v(0) = \omega c_2$, so $c_2 = \frac{v_0}{\omega}$

Substituting all the values in equation (7),

$x(t) = x_0\cos(\omega t) + \frac{v_0}{\omega}\sin(\omega t)$ (8)

Another form of the solution is

$x(t) = A\cos(\omega t - \varphi)$

where $A = \sqrt{c_1^2 + c_2^2}$ and $\tan\varphi = \frac{c_2}{c_1}$, with A = amplitude, ω = angular frequency, and φ = phase angle or initial phase.

In this solution, the constants are determined from the initial conditions, with the origin taken at the equilibrium position. These two constants carry the physical information of the motion. Using the calculus method, we can find the velocity and acceleration of the oscillation. From the displacement equation, the velocity is

$v(t) = \frac{dx}{dt} = \frac{d}{dt}\big(A\cos(\omega t - \varphi)\big) = -A\omega\sin(\omega t - \varphi)$

The first derivative of the velocity is the acceleration.
Therefore, by differentiating,

$a(t) = \frac{dv}{dt} = \frac{d}{dt}\big(-A\omega\sin(\omega t - \varphi)\big) = -A\omega^2\cos(\omega t - \varphi)$

The acceleration is zero at the equilibrium position; at the extreme points, its maximum magnitude is $A\omega^2$. Thus, when a mass oscillates in SHM, the acceleration is directly proportional to the displacement and directed opposite to it.

The graph shows the relationship between displacement, velocity, and acceleration in simple harmonic motion. Whenever the velocity is maximum, the acceleration is zero. The velocity leads the displacement by a phase angle of π/2. The acceleration leads the velocity by π/2, or the displacement by π.


The formula of velocity is $v = \frac{ds}{dt}$.
The average velocity formula is $v_{av} = \frac{d_2 - d_1}{t_2 - t_1}$.
The instantaneous velocity formula is $v_i = \lim_{\Delta t \to 0}\frac{\Delta s}{\Delta t}$.
The formula of acceleration is $a = \frac{dv}{dt}$.
The equations of motion are $v = u + at$, $s = ut + \frac{1}{2}at^2$, and $v^2 = u^2 + 2as$.

Context and Applications

It is a basic and necessary topic for all undergraduates and postgraduates, particularly for bachelor's and master's degrees in science (physics).

Practice Problems

Question 1: A car is travelling from rest. It attains a velocity of $20\,\mathrm{m\,s^{-1}}$ in 10 seconds. Find the acceleration of the car.

Given data: initial velocity u = 0 m/s, final velocity v = 20 m/s, time t = 10 s.

From the first equation of motion, $v = u + at$, so $a = \frac{v - u}{t} = \frac{20 - 0}{10} = 2\,\mathrm{m\,s^{-2}}$.

Answer: The acceleration of the car is $2\,\mathrm{m\,s^{-2}}$.

Question 2: A ball is moving with a velocity of 0.3 m/s. Its velocity is decreasing at the rate of $0.05\,\mathrm{m/s^2}$. Find the velocity of the ball after 5 s.

Given data: initial velocity u = 0.3 m/s, acceleration a = −0.05 m/s², time t = 5 s.

By the first kinematic equation, $v = u + at = 0.3 - 0.05 \times 5 = 0.05\,\mathrm{m/s}$.

Answer: The velocity of the ball is 0.05 m/s.

Question 3: A truck starts from rest with a uniform acceleration of $7\,\mathrm{m/s^2}$. Find the distance travelled by the truck after 5 s. a. 75 m b.
95 m c. 87.5 m d. 75.5 m

Given data: initial velocity u = 0 m/s, acceleration a = 7 m/s², time t = 5 s.

From the second kinematic equation, $s = ut + \frac{1}{2}at^2 = 0 + \frac{1}{2}(7)(5)^2 = 87.5\,\mathrm{m}$.

The distance travelled by the truck is 87.5 m. Answer: The correct option is c.

Question 4: Average velocity is ______.
a. Total distance/time taken
b. Total displacement/total time taken
c. Distance/speed
d. Speed/time

Answer: The correct option is (b). Explanation: Average velocity is defined as the total displacement of the body divided by the total time taken.

Question 5: The motion of an object from one position to another is called ________.
a. Velocity
b. Speed
c. Displacement
d. Distance

Answer: The correct option is (c). Explanation: Displacement is defined as the movement of an object from one place to another, and it is the shortest distance between the object's initial position and final position.