Strokes-Gained: Some Questions Asked and Answered
In a recent post, it was argued that many of the findings in Mark Broadie's book, Every Shot Counts, were previously presented in a book written over 40 years ago by Alastair Cochran. Professor Broadie challenged that conclusion and made other criticisms of Cochran's work. The purpose of this note is to examine three questions: 1) Are Broadie's criticisms of Cochran's work valid? 2) What are the true origins of the strokes-gained concept? 3) Is the strokes-gained statistic either revolutionary or of value? Each question is discussed in turn.
1. Are Broadie’s Criticisms of Cochran’s work valid? – Broadie’s criticisms are in italics followed by an analysis of their validity:
Even though Cochran and Stobbs title their chapter “Long Approach Shots-Where Tournaments are Won,” they actually present no evidence that this is the case.
It is true Cochran does not write that long iron play is the ultimate determinant of who wins. That would be foolish, and Cochran is clearly not a fool. Players win for a variety of reasons as
Broadie’s own work shows. Cochran merely argues iron play is important and presents a table showing the difference in iron play between the top nine and bottom nine players (see Table 31:6 below).
Normalizing for the play of non-iron shots, Cochran concluded the top nine players had a five stroke advantage over the bottom nine players. This is clearly evidence that iron play is important in
winning. To claim that Cochran presents no evidence of the importance of iron play is clearly wrong. Perhaps Cochran can be criticized for an overly definitive chapter title. But you cannot argue
Cochran did not identify the importance of long iron play long before the publication of Every Shot Counts.
Table 31:6 Long Approaches at Birkdale: How the leaders compared with the tail-enders
Median finishing distance from hole (yards), by the distance from which the shot
is played; figures in parentheses are the number of shots observed.

                      140-160     160-180     180-200     200-220
Top nine players      9.8  (46)   9.0  (22)   13.6 (63)   14.5 (50)
Bottom nine players   12.0 (42)   10.8 (17)   13.8 (64)   17.7 (54)
Broadie saves his most strident criticism for Table 31:8 that is reproduced below:
This is the main table in the chapter, on which the title is based. It (Table 31:8) compares a 50% reduction in long approach shot errors (completely unrealistic given the 14% above) with “doubling
the accuracy of putting” (also completely unrealistic) with hitting drives 20 yards further and having them all finish in the fairway (also completely unrealistic). There are two problems with this.
First, the “strokes gained per round” from these assumptions is not explained and I’m pretty sure is not correct. Second, even if we accept those numbers, it makes no sense to conclude anything from
hypothetical, unrealistic assumptions that are not at all comparable. It would be like saying the strokes gained per round from hitting drives 3 yards further is much less than one-putting twice as
often, therefore putting is “most important.”
Cochran and Stobbs (see below) write “we are not implying these improvements are equally easy to achieve” yet they still base their conclusions (“But it would not be too far to conclude”) solely on
this one table with completely arbitrary assumptions. It is crucial to base conclusions on comparable improvements; completely arbitrary assumptions lead to conclusions that are completely arbitrary.
Table 31:8 How much the pros at Birkdale would have gained by drastic improvement in their game
                                                   Gain in strokes per round
By "doubling" accuracy of putting                            4.2
By "doubling" accuracy of short approaches                   1.7
By "doubling" accuracy of long approaches                    5.5
By 100% accuracy and an extra 20 yards on drives             2.2
Total                                                       13.6
Broadie engages in sophistry in attempting to prove his points.[5] Nowhere does Cochran maintain this table proves the primacy of iron play. In fact, this table is never cited in the text. This would
be strange if Cochran thought this to be his “main table.” Cochran does conclude “it would not be going too far to conclude that it is in full iron play as well as putting that the main difference in
caliber makes itself felt between different levels of top class professional golfers and the degree of success they have in tournaments.” [6] His conclusion is based (I assume) on the study of
difference in putting shown in Table 29:6 (Putting at Birkdale: How the leaders and the tail-enders compared with the field as a whole) and Table 31:6 shown above. To argue that Cochran based his
conclusion on Table 31:8 is a misstatement of the facts.
Cochran readily admits the improvements assumed in Table 31.8 are unrealistic.[7] Broadie complains that not much can be drawn when unrealistic assumptions are made. Yet that does not stop Broadie
from analyzing the impact of hitting the ball 20 yards further.[8] He argues this assumption is unrealistic but it is important to understand “the trade-off between distance and accuracy for course
strategy.”[9] Cochran was merely using the same literary device as Broadie and is undeserving of criticism.
The gain in strokes per round shown in Table 31:8 could represent estimates of the marginal value of improvement in each category. The marginal value of improvement must be weighed against the marginal cost of that improvement to arrive at the optimum practice plan. For the average player, Cochran concludes improving his iron play is probably the best use of his time. This is not much different from Broadie's finding "The biggest contributor to scoring? Approach shots, which contribute 40% to the total strokes gained."
Broadie complains the methodology behind the “strokes-gained” calculation is not explained. Cochran did show estimates for iron play in Table 31:7 shown below.
Table 31:7 - The benefits of improved iron approaches: an estimate of strokes gained by halving their inaccuracy (Abridged)
Hole    Length of hole remaining         Shots to get down from this distance
        after 250-yard drive (yards)     Normal standard     Improved standard
  1               243                         No long approach required
  2               177                            3.13               2.74
  9               160                            3.05               2.71
 18               200                            3.23               2.79
Total                                           43.84              38.39
To examine the benefits of improved iron approaches Cochran compares the hole scores for players with standard accuracy with a player who is 50 percent more accurate. Broadie is correct that Cochran
does not detail the method used to estimate the “shots to get down.” (I assume this was the book editor’s decision much as in Broadie’s book where any description of his simulation model is omitted.)
It is possible to speculate on possible methods and show that Cochran's estimates are not too far off, if at all. One possible method would be to first estimate the normal standard finishing distance from the hole. Cochran has established that the median finishing position of an iron shot is approximately 7.5% of the starting distance. At hole 9, for example, the average finishing position would be 12 yards. From the graph in Fig. 29.2, the average number of putts from 12 yards is 2.05. Therefore the standard number of shots to get down from 160 yards is 3.05 (2.05 + 1).
If a player improved his accuracy, his finishing distance would be 6 yards. The number of putts to get down from this distance is approximately 1.82. With improved iron play, the player would get
down in 2.82 strokes. Cochran, however, reports the player would get down in 2.71 strokes. Cochran could have used a more sophisticated method where a player with an improved standard hit more greens
and/or missed more bunkers. In any case, Cochran’s estimates in Table 31:7 appear to be in the ballpark. Contrary to Broadie’s assertion, Cochran does not base a conclusion on them.
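One possible reconstruction of that arithmetic can be sketched in a few lines of Python. Everything here is speculative: the 7.5% figure and the putts-per-distance values are the ones quoted from Cochran above, but the method itself is only a guess at what he actually did.

```python
# Hypothetical reconstruction of Cochran's "shots to get down" estimates.
# Average putts by distance from the hole (yards), read off Fig. 29.2 as
# quoted in the text; these are the only two points needed here.
avg_putts = {12: 2.05, 6: 1.82}

def shots_to_get_down(start_yards, accuracy=0.075):
    """One approach shot plus the average putts from where it finishes."""
    finish_yards = round(start_yards * accuracy)  # median finishing distance
    return 1 + avg_putts[finish_yards]

normal = shots_to_get_down(160)            # hole 9, normal standard
improved = shots_to_get_down(160, 0.0375)  # inaccuracy halved
print(normal, improved)  # 3.05 2.82 (Cochran reports 3.05 and 2.71)
```

The gap between 2.82 and Cochran's 2.71 is consistent with the suggestion above that his improved-standard model did more than shorten putts.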
In summary, Broadie’s criticisms are not valid. Cochran could have presented more detail about his work, but there does not appear to be any evidence of poor scholarship as Broadie asserts.
2. What are the True Origins of the Strokes-Gained statistic? – In Every Shot Counts, Professor Broadie takes credit for the "strokes-gained" concept of measuring player performance:
Strokes-gained is the name for this new way of measuring shot quality. It uses the same unit – strokes – to calculate the skill of the many kinds of shots taken throughout each round of golf. The origins of most new ideas can be traced to the earlier work of others, and strokes-gained is no exception. The term owes its heritage to a brilliant applied mathematician of the mid-20th century (Richard Bellman), and a grand theory he called "dynamic programming" developed at the dawn of the computer era….Using this technique, I (emphasis added) developed a way to compare golf shots and quantify a golfer's skill.[11]
I do not believe the lineage of "strokes-gained" goes back to the works of Richard Bellman. The reference to Bellman seems solely intended to give "strokes-gained" some mathematical credibility by association.
Stripped down to its bare essentials, Broadie had access to a big pile of data from which he calculated the average score to complete a hole as a function of distance. You take the average score from
Point A and subtract the average score from Point B. The difference minus one is the strokes-gained for the shot. This is not dynamic programming as Broadie implies, but rather an exercise in second
grade arithmetic. Broadie admits as much later in the book when he writes “Once you have access to the data base showing average strokes to hole out from any given distance, there’s no rocket science
involved in calculating strokes-gained, just subtraction.”
I strongly believe Broadie owes an intellectual debt to Cochran. Both Broadie and Cochran tried to identify the importance of each type of shot in isolation from the others. Only Cochran did it first. Broadie refined the methodology to make it player-specific while Cochran's results were only tournament-specific. A review of how each researcher found the strokes gained will reveal the similarities and differences in their approaches.
Putting – Broadie has estimated the average number of putting strokes it takes to hole the ball for various distances from the hole. The average number of strokes minus the number of actual strokes taken is the putting strokes gained or lost on that hole. Cochran's method is similar. Due to data limitations he compares the top nine players with the bottom nine players in an attempt to explain the difference in performance.
Cochran makes some calculations about the distribution of the lengths of first putts “assuming that their play up to the green has been the same—which, of course, it wasn’t.”
Cochran then applies the probabilities of each group to make a putt of a certain length. He found the top nine players gained about a tenth of a stroke per hole over the bottom nine players due to
putting. Cochran attempted to normalize the distance of the putt in order to isolate the value of putting. This is the basic idea behind Broadie’s strokes-gained putting.
Long Approach Shots – Broadie has estimated the average number of shots it takes to hole a ball from various distances and positions. The strokes gained from a long approach shot is calculated as

Strokes Gained = Average Number of Strokes to Hole from Starting Position − (Average Number of Strokes to Hole from Finishing Position + 1)
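Stripped to a sketch, the calculation is just subtraction. The baseline averages below are invented for illustration; they are not Broadie's actual ShotLink benchmarks.

```python
def strokes_gained(avg_from_start, avg_from_finish):
    """Strokes gained by one shot relative to the field's averages."""
    # The +1 accounts for the stroke just taken.
    return avg_from_start - (avg_from_finish + 1)

# Hypothetical: an approach from a lie averaging 3.2 strokes to hole out,
# finishing where the average is 1.6 strokes (a makeable putt).
print(round(strokes_gained(3.2, 1.6), 2))  # 0.6
```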
As discussed under the previous question, Cochran employs much the same method. Cochran writes:
“Taking all eighteen of the players concerned to have hit the average drive of the whole field, and to have played their short approaches and putted to the average standard too, the team were able to show that the difference in standard of strokes from 140 to 220 yards gave the top nine an advantage of about one and a quarter strokes per round over the bottom.”
ShotLink allows Broadie to be more precise. He can take a player’s actual drive and long approach and estimate the strokes gained. This allows Broadie to estimate the shots gained for each player.
Without the vast data of ShotLink, Cochran had to assume all players started from the average drive. He then applied the long approach accuracy estimates of Table 31:6 for each group of players to
determine where the long approach shot finished. From the finishing positions the number of strokes needed to hole the ball was estimated. The difference in the estimates of the top nine and bottom
nine players was the estimate of the strokes gained by the better players. Broadie’s and Cochran’s methods are remarkably similar in concept.
Driving - Broadie has used ShotLink data to estimate the shots needed to complete a hole from any distance and a variety of positions (fairway, rough, sand bunker). He takes the estimate of the
number of shots to complete the hole from the tee and subtracts the number of strokes to complete the hole from where the drive finishes plus one. While Broadie never presents any evidence of the
accuracy or inherent variability of these estimates, they do allow for a straightforward calculation of strokes-gained driving for each player. Cochran was hampered by his small data set. The
difference in driving between the top nine and bottom nine players was only 7 yards. Cochran, by taking into account calculations made for the other types of strokes, showed the top nine players
gained a half a stroke per round by their driving over the bottom nine. Cochran does not specify his calculations. It can be assumed that he used the same method as for the other types of shot. That
is, the longer drives led to more accurate long approaches, which led to fewer putts and a lower average score. Both researchers reached the same conclusion: when it comes to driving, length counts for more than accuracy.
Short Approaches – Both Broadie and Cochran have difficulty dealing with this shot. Under Broadie's method, the number of shots to finish a hole is a function of distance. There may be so much variability in 20-foot chip shots, for example, that Broadie's method needs more refinement before it can be implemented on the PGA Tour.
Cochran found no significant difference in the short approaches of the top nine and bottom nine players. Both groups had similar median finishing distances for their short approach shots so no
strokes were gained.
Broadie does not give Cochran credit for coining the term “strokes-gained.” Instead that honor goes to unnamed colleagues at MIT.
His only mention of Cochran comes in the acknowledgments where he writes:
When I started research into golf, I didn’t even remember that in high school I had picked up from the library the now classic book Search for the Perfect Swing. The authors Cochran and Stobbs in the
1960s were the first ones to record and analyze individual shots.[19]
(That must have been one great high school library since it acquired the book some twelve years before it was published in the United States.) Broadie does not give Cochran any credit for birthing
the strokes-gained method. Given the similarities in their research approach described above, Cochran deserves much more recognition than he is given.
3. Is the strokes-gained statistic either revolutionary or of value?
- The subtitle of Broadie's book is "Using the Revolutionary Strokes Gained Approach to Improve Your Golf Performance." A new statistic can be termed revolutionary if it changes the way the game is played like Moneyball did for baseball. Broadie presents no evidence that knowledge of this statistic has altered a player's decision-making the way Moneyball changed managerial decisions in baseball.
The strokes-gained statistic could also be revolutionary if it produced information about player performance that was previously unknown. That does not appear to be the case. Listed below are some of
the findings in Every Shot Counts:
· Players putt better when they win tournaments than when they lose them-p. 24.
· Tournaments are won by players excelling in different parts of the game-p. 17.
· Bubba Watson gained strokes because of his driving- p. 91.
· Luke Donald is a good putter and Sergio Garcia is not–p. 94.
· Tiger’s secret weapon is approach shots-p. 116.
These same findings could be found by mining other statistics published by the PGA Tour. Tiger Woods's secret is also revealed by his greens in regulation statistic (GIR). Woods was ranked first in
GIR in both 2006 and 2007. The PGA Tour also publishes a player’s GIR from various distances so a player can determine his relative strength at various approach shots. Such detailed information is
not available from Broadie’s strokes-gained statistic.
The contribution of the strokes-gained statistic is that it allows the total strokes-gained to be allocated to different shot categories. A GIR ranking can show how good an iron player you are, but
strokes-gained can estimate how much of your success is due to iron play. A study of a player’s relative rankings can also give a clear picture of his strengths and weaknesses. Strokes-gained,
however, appears to be more definitive in determining why a player wins. This is what enraptured John Paul Newport when he wrote:
Finely-focused strokes-gained analyses like these will make stat-watching more engaging in the years ahead.[21]
Wide acceptance of strokes-gained, however, is not likely to happen for the following reasons:
1. Strokes-gained is an estimate based on a model of unknown accuracy. Many of the assumptions in the strokes-gained model are clearly not true--i.e., distance is not the sole determinant of the
difficulty of an approach shot or the value of a drive. Nevertheless, Broadie takes his estimate of strokes-gained to the second decimal and never discusses the size of any possible error.
2. Alternatives to strokes-gained can be measured with precision and are easily understood. GIR and average distance to the hole, for example, are not based on any underlying model. They both can be
measured with negligible error. They are also statistics a television viewer can relate to his own game. If two players are coming down the stretch in a tournament, and the on-course reporter says
one player has gained 1.21 strokes by his approach shots and the other player has only gained 1.05 strokes, most viewers would be bewildered. The reporter could go on to explain that the difference
was due to the first player hitting shorter drives, but still hitting the greens with comparable accuracy. That is, the first player had a lower strokes-gained for his drives. It is unlikely the
television director would encourage such descriptions of the action.
3. It would be difficult to estimate strokes-gained in real time so its use would be restricted to post-mortems. Even here the value of strokes-gained is questionable. For example, Newport
demonstrates that John Senden had the best strokes-gained short game performance of the season at the Valspar Championship—he hit three very close approach shots and holed out from 23 yards. There
does not appear to be much value in this statistic. Can it be used to predict future performance? Can it guide others to the path for victory on the PGA Tour? The answer to both questions is “No.”
The statistic only demonstrated that to win on the Tour some players have to be good and lucky.
In summary, Every Shot Counts is not revolutionary. Golf will not be transformed as of the book’s publication date. As a statistic, strokes-gained has three failings: 1) the estimate of
strokes-gained is based on a model of unproven validity which leads to measurement errors of unknown size, 2) it is not easily understood and possibly of little interest to the average player, and 3)
the statistic does not contain any information that is not present in other statistics kept by the PGA Tour. These defects should deter the PGA Tour from expanding the use of the strokes-gained method to shots other than putts.
Broadie, Mark, Every Shot Counts, Gotham Books, New York, NY, 2014.
Cochran, Alastair and Stobbs, John, The Search for the Perfect Swing, Golf Society of Great Britain, London, 1968.
Broadie, Mark, email to author, April 22, 2014.
Another example of Broadie's sophistry is his defense of strokes-gained putting. Broadie writes "Before strokes gained putting stat, people used to count putts as the way of judging golfers" (p. 33). Of course, serious people did not count putts but rather average putts per green in regulation. In essence, Broadie built a straw man to convince the reader of the superiority of the strokes-gained statistic.
op. cit., p. 201.
p. 96.
The play of a hole can be described as a dynamic programming problem. "Strokes-gained," however, only gives the player a payoff value for every finishing position of his shot. Golf is not the simple problem described by Broadie on p. 30. The result of any shot is a stochastic process--i.e., there is a range of outcomes--not a deterministic one as in Broadie's example. To actually evaluate the millions of options for each shot would extend the playing time for a round into days. Instead, a player uses simple algorithms to make his way around--i.e., hit it straight, don't go for a sucker pin, hit away from trouble, etc.
Broadie also takes quotes from Bellman's autobiography (Eye of the Hurricane, World Scientific Press, Singapore, 1984) without attribution. Since there are no footnotes in Every Shot Counts, this may have been an editorial decision.
op. cit., p. 83.
op. cit., p. 191.
Newport, John Paul, "A New Golf Statistic Goes for a Test Drive," Wall Street Journal, March 21, 2014. In this column, Steve Evans of the PGA Tour identified the problem of a purely distance-based system. The same objection could be raised about the Strokes Gained-Putting statistic--i.e., not all 20 foot putts are of equal difficulty.
op. cit., p. 57.
Lewis, Michael, Moneyball, W.W. Norton, New York, NY, 2003.
2 comments:
1. I'm not sure how you can say that strokes gained doesn't contain more info than other stats. If I'm a player who hits 70% GIR, but make 10% fewer birdies than fellow players who hit 70% GIR, I can't tell why. Is it because I make fewer putts? If I make fewer putts is it because I'm a worse putter, or is my average putt length longer? If average putt length is longer, why? Is it because my approach shots aren't as good, or is it because I'm coming in from a longer distance?
Strokes gained puts a value on those questions and answers.
2. Strokes Gained falls into the same statistical genre as the "Ten Best Cities to Retire." You know it is a good place to retire but not why. Strokes Gained Putting can identify you as a great
putter, but not delineate why. Are you a great lagger of the golf ball or a killer at 10-15 feet? PGA Tour Statistics, on the other hand, can identify your strengths and weaknesses in putting
from various distances. Television commentators rarely mention Strokes Gained--probably because they can't or fear losing their audience in statistical minutia. If you find Strokes Gained useful,
however, that is what counts.
Statistics for Data Science and Machine Learning
Population vs. Sample
A population consists of the entire set of individuals or items that are the subject of a statistical study. It encompasses every member that fits the criteria of the research question.
• Characteristics:
□ Comprehensive: Includes all individuals or items of interest.
□ Parameters: Measurements that describe the entire population. Examples of parameters include:
☆ Population Mean (μ): The average of all values in the population.
☆ Population Standard Deviation (σ): A measure of the dispersion of values in the population.
Example: All students enrolled in a university.
A sample is a subset of the population selected for the purpose of analysis. It allows researchers to draw conclusions about the population without examining every individual.
• Characteristics:
□ Subset: A smaller, manageable group chosen from the population.
□ Statistics: Measurements that describe the sample. Examples of statistics include:
☆ Sample Mean (x̄): The average of all values in the sample.
☆ Sample Standard Deviation (s): A measure of the dispersion of values in the sample.
Example: A group of 200 students chosen randomly from a university’s total enrollment.
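A quick sketch in Python of the parameter/statistic distinction (the "university" data here is simulated purely for demonstration):

```python
import random

random.seed(0)
# Population: a simulated score for every student at the university.
population = [random.gauss(70, 10) for _ in range(10_000)]
mu = sum(population) / len(population)   # parameter: population mean (μ)

# Sample: 200 students chosen at random from that population.
sample = random.sample(population, 200)
x_bar = sum(sample) / len(sample)        # statistic: sample mean (x̄)

print(round(mu, 2), round(x_bar, 2))     # x̄ should land near μ
```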
Mean, Median, and Mode
The mean, or average, is a measure of central tendency that is calculated by summing all the values in a dataset and then dividing by the number of values.
Mean (x̄) = (Σx) / N
• Σx is the sum of all values in the dataset.
• N is the number of values in the dataset.
For the dataset: 2, 4, 6, 8, 10
Mean (x̄) = (2 + 4 + 6 + 8 + 10) / 5 = 30 / 5 = 6
The median is the middle value of a dataset when it is ordered from least to greatest. If the dataset has an odd number of observations, the median is the middle value. If it has an even number of
observations, the median is the average of the two middle values.
For an odd number of observations: Median = middle value
For an even number of observations: Median = (middle value 1 + middle value 2) / 2
For the dataset (odd number): 1, 3, 3, 6, 7, 8, 9
Median = 6
For the dataset (even number): 1, 2, 3, 4, 5, 6, 8, 9
Median = (4 + 5) / 2 = 9 / 2 = 4.5
The mode is the value that appears most frequently in a dataset. A dataset can have more than one mode if multiple values have the same highest frequency, or no mode if all values are unique.
Mode = value with the highest frequency
For the dataset: 1, 2, 2, 3, 4, 4, 4, 5, 5
Mode = 4
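All three measures are available in Python's standard `statistics` module; the worked examples above can be checked directly:

```python
import statistics

print(statistics.mean([2, 4, 6, 8, 10]))            # 6
print(statistics.median([1, 3, 3, 6, 7, 8, 9]))     # 6
print(statistics.median([1, 2, 3, 4, 5, 6, 8, 9]))  # 4.5
print(statistics.mode([1, 2, 2, 3, 4, 4, 4, 5, 5])) # 4
```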
Variance and Standard Deviation
Variance measures the spread of a set of numbers. It represents the average of the squared differences from the mean, providing a sense of how much the values in a dataset deviate from the mean.
For a population:
Variance (σ²) = Σ (x - μ)² / N
For a sample:
Variance (s²) = Σ (x - x̄)² / (n - 1)
• Σ is the sum of all values.
• x is each individual value.
• μ is the population mean.
• x̄ is the sample mean.
• N is the total number of values in the population.
• n is the total number of values in the sample.
For the sample dataset: 2, 4, 4, 4, 5, 5, 7, 9
1. Calculate the sample mean (x̄): x̄ = (2 + 4 + 4 + 4 + 5 + 5 + 7 + 9) / 8 = 40 / 8 = 5
2. Calculate each (x - x̄)²: (2 - 5)² = 9, (4 - 5)² = 1, (4 - 5)² = 1, (4 - 5)² = 1, (5 - 5)² = 0, (5 - 5)² = 0, (7 - 5)² = 4, (9 - 5)² = 16
3. Sum of squared differences: Σ (x - x̄)² = 9 + 1 + 1 + 1 + 0 + 0 + 4 + 16 = 32
4. Calculate the variance: s² = 32 / (8 - 1) = 32 / 7 ≈ 4.57
Standard Deviation
Standard deviation is the square root of the variance. It provides a measure of the spread of the values in a dataset in the same units as the data, making it easier to interpret.
For a population:
Standard Deviation (σ) = √(Σ (x - μ)² / N)
For a sample:
Standard Deviation (s) = √(Σ (x - x̄)² / (n - 1))
Using the variance calculated above (s² ≈ 4.57):
Standard Deviation (s) = √4.57 ≈ 2.14
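The same arithmetic via the standard `statistics` module, which uses the sample (n − 1) formulas shown above:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]
s2 = statistics.variance(data)    # sample variance: Σ(x - x̄)² / (n - 1)
s = statistics.stdev(data)        # sample standard deviation: √s²

print(round(s2, 2), round(s, 2))  # 4.57 2.14
```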
Correlation Coefficient
The correlation coefficient measures the strength and direction of the linear relationship between two variables. It ranges from -1 to 1, where:
• r = 1: Perfect positive correlation
• r = -1: Perfect negative correlation
• r = 0: No correlation
The Pearson correlation coefficient (often denoted as r) is calculated using the following formula:
r = Σ((x - x̄)(y - ȳ)) / √(Σ(x - x̄)² * Σ(y - ȳ)²)
• x and y are the individual values of the two variables.
• x̄ and ȳ are the means of the two variables.
• Σ denotes the summation over all data points.
• r > 0: Positive correlation (as one variable increases, the other tends to increase).
• r < 0: Negative correlation (as one variable increases, the other tends to decrease).
• r = 0: No linear correlation.
• The closer r is to 1 or -1, the stronger the correlation.
Consider two variables, X (hours of study) and Y (exam scores), for a group of students:
• Calculate the means (x̄ and ȳ).
• Calculate the deviations from the means (x – x̄ and y – ȳ).
• Square the deviations and sum them.
• Multiply the deviations of X and Y, sum them, and divide by the product of the square roots of the sum of squared deviations.
r ≈ 0.98
The correlation coefficient r is approximately 0.98, indicating a strong positive linear relationship between hours of study and exam scores. As hours of study increase, exam scores tend to increase
as well.
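The steps above translate directly into code. The study-hours data below is made up for illustration (the dataset behind the quoted r ≈ 0.98 is not given here):

```python
import math

def pearson_r(xs, ys):
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    # Numerator: sum of the products of paired deviations from the means.
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    # Denominator: square root of the product of the sums of squares.
    den = math.sqrt(sum((x - x_bar) ** 2 for x in xs)
                    * sum((y - y_bar) ** 2 for y in ys))
    return num / den

hours = [1, 2, 3, 4, 5]        # hypothetical hours of study (X)
scores = [55, 58, 72, 70, 86]  # hypothetical exam scores (Y)
print(round(pearson_r(hours, scores), 2))  # ≈ 0.95: strong positive
```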
Point Estimation
Point estimation is a statistical method used to estimate an unknown parameter of a population based on sample data. It involves using a single value, called a point estimate, to approximate the true
value of the parameter.
Key Concepts
• Population: The entire group of individuals, items, or events of interest in a statistical study.
• Parameter: A numerical characteristic of a population that is unknown and typically of interest in statistical analysis. Examples include the population mean, population proportion, population
variance, etc.
• Sample: A subset of the population from which data is collected.
• Point Estimate: A single value, calculated from sample data, that serves as the best guess for the true value of the population parameter. It is denoted by a specific symbol, such as “x̄” for a
point estimate of parameter “μ”.
Properties of Point Estimates
• Unbiasedness: A point estimate is unbiased if its expected value is equal to the true value of the parameter being estimated.
• Efficiency: An efficient point estimate has the smallest possible variance among all unbiased estimators of the parameter.
• Consistency: A consistent point estimate converges to the true value of the parameter as the sample size increases.
Point Estimate Symbols
• Population Mean: “μ”
• Sample Mean: “x̄”
• Population Variance: “σ²”
• Sample Variance: “s²”
• Population Standard Deviation: “σ”
• Sample Standard Deviation: “s”
Suppose we want to estimate the mean income of all households in a city. We collect a random sample of 100 households and calculate the mean income of the sample (“x̄”). We use “x̄” as our point
estimate of the population mean income (“μ”).
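The household-income example can be sketched as follows (the incomes are simulated, since no real data accompanies the example):

```python
import random

random.seed(42)
# Simulated city: 50,000 household incomes (hypothetical distribution).
city = [random.lognormvariate(11, 0.5) for _ in range(50_000)]
mu = sum(city) / len(city)         # true μ (unknown in practice)

sample = random.sample(city, 100)  # the 100 surveyed households
x_bar = sum(sample) / len(sample)  # point estimate of μ

print(f"x̄ = {x_bar:,.0f}, μ = {mu:,.0f}")  # x̄ approximates μ
```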
An estimator is a statistical function or rule used to estimate an unknown parameter of a population based on sample data. It calculates a point estimate, which serves as the best guess for the true
value of the parameter.
Types of Estimators
1. Unbiased Estimator: An estimator whose expected value is equal to the true value of the parameter being estimated.
2. Consistent Estimator: An estimator that converges to the true value of the parameter as the sample size increases.
3. Efficient Estimator: An estimator with the smallest possible variance among all unbiased estimators of the parameter.
Biased and Unbiased Estimators
Unbiased Estimator
An estimator is unbiased if its expected value is equal to the true value of the population parameter it is estimating. In other words, an unbiased estimator does not systematically overestimate or
underestimate the parameter.
Example: Sample Mean as an Unbiased Estimator of Population Mean
• The sample mean (“x̄”) is an unbiased estimator of the population mean (“μ”). This means that, on average, the sample mean will equal the population mean when taken over many samples.
Formula for Sample Mean:
x̄ = Σx / n
• Σx is the sum of all sample values.
• n is the number of sample values.
Biased Estimator
An estimator is biased if its expected value is not equal to the true value of the population parameter it is estimating. A biased estimator systematically overestimates or underestimates the parameter.
Example: Sample Variance as a Biased Estimator of Population Variance
• The sample variance calculated using the formula with “n” in the denominator (instead of “n-1”) is a biased estimator of the population variance (“σ²”). This formula tends to underestimate the
true population variance, especially for small sample sizes.
Biased Formula for Sample Variance:
s²_biased = Σ(x - x̄)² / n
• Σ(x – x̄)² is the sum of squared deviations from the sample mean.
• n is the number of sample values.
To correct this bias, we use Bessel’s correction, replacing “n” with “n-1” in the denominator, which provides an unbiased estimator of the population variance.
Unbiased Formula for Sample Variance:
s²_unbiased = Σ(x - x̄)² / (n - 1)
• Σ(x – x̄)² is the sum of squared deviations from the sample mean.
• n is the number of sample values.
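A short simulation (an illustrative sketch, not from the original text) makes the bias visible: dividing by n consistently underestimates σ², while Bessel's correction removes the systematic error. The population below is synthetic.

```python
import random

random.seed(1)
# Synthetic population; its variance plays the role of sigma^2.
population = [random.gauss(0, 5) for _ in range(100_000)]
mean_p = sum(population) / len(population)
sigma2 = sum((x - mean_p) ** 2 for x in population) / len(population)

def variance(sample, ddof):
    m = sum(sample) / len(sample)
    return sum((x - m) ** 2 for x in sample) / (len(sample) - ddof)

n = 5
biased, unbiased = [], []
for _ in range(20_000):
    s = random.sample(population, n)
    biased.append(variance(s, 0))    # divide by n       (biased)
    unbiased.append(variance(s, 1))  # divide by n - 1   (Bessel's correction)

avg_biased = sum(biased) / len(biased)
avg_unbiased = sum(unbiased) / len(unbiased)
# The n-denominator estimate underestimates sigma^2 by a factor of (n-1)/n.
print(avg_biased < avg_unbiased)
```

With n = 5 the biased average comes out near (4/5)·σ², while the corrected average stays near σ² itself.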
Hypothesis Testing
Hypothesis testing is a statistical method used to make inferences or draw conclusions about a population based on sample data. It involves making an initial assumption (the null hypothesis) and
determining whether the sample data provides sufficient evidence to reject this assumption in favor of an alternative hypothesis.
Key Concepts
• Null Hypothesis (H₀): The statement being tested, typically representing no effect or no difference. It is assumed to be true unless the data provides strong evidence against it.
• Alternative Hypothesis (H₁ or Ha): The statement we want to test for, representing an effect or a difference. It is accepted if the null hypothesis is rejected.
• Significance Level (α): The threshold for determining whether the evidence is strong enough to reject the null hypothesis. Common significance levels are 0.05, 0.01, and 0.10.
• Test Statistic: A standardized value calculated from sample data, used to determine whether to reject the null hypothesis. Different tests have different test statistics, such as the t-statistic
or z-statistic.
• p-value: The probability of obtaining a test statistic as extreme as the one observed, assuming the null hypothesis is true. If the p-value is less than or equal to the significance level, we
reject the null hypothesis.
• Type I Error (α): The error made when the null hypothesis is wrongly rejected (false positive).
• Type II Error (β): The error made when the null hypothesis is not rejected when it is false (false negative).
Steps in Hypothesis Testing
1. State the Hypotheses:
□ Null Hypothesis (H₀): Example – The population mean is equal to a specified value (μ = μ₀).
□ Alternative Hypothesis (H₁): Example – The population mean is not equal to the specified value (μ ≠ μ₀).
2. Choose the Significance Level (α):
□ Common choices are 0.05, 0.01, or 0.10.
3. Select the Appropriate Test and Calculate the Test Statistic:
□ Depending on the sample size and whether the population standard deviation is known, choose a test (e.g., z-test, t-test).
□ Calculate the test statistic using the sample data.
4. Determine the p-value or Critical Value:
□ Compare the test statistic to a critical value from statistical tables or calculate the p-value.
5. Make a Decision:
□ If the p-value ≤ α, reject the null hypothesis (H₀).
□ If the p-value > α, do not reject the null hypothesis (H₀).
6. Interpret the Results:
□ Draw conclusions based on the decision made in the previous step.
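The six steps can be walked through in code. The numbers below are made up purely for illustration, and the example uses a z-test (population standard deviation assumed known) so the p-value can be computed from the standard normal CDF with the standard library alone.

```python
import math

# Step 1-2 (hypothetical setup): H0: mu = 50 vs H1: mu != 50,
# with sigma = 4 assumed known, n = 36, and alpha = 0.05.
x_bar, mu0, sigma, n = 52.1, 50.0, 4.0, 36
alpha = 0.05

# Step 3: test statistic (z, since sigma is known).
z = (x_bar - mu0) / (sigma / math.sqrt(n))

# Step 4: two-sided p-value from the standard normal CDF.
def phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

p_value = 2.0 * (1.0 - phi(abs(z)))

# Step 5: decision rule.
reject = p_value <= alpha
print(round(z, 2), reject)
```

Step 6 is the interpretation: here the p-value falls well below α, so the sample is inconsistent with μ = 50 at the 5% level.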
Example: t-Test
The t-test is a statistical test used to determine whether there is a significant difference between the means of two groups or between a sample mean and a known population mean. It is particularly
useful when the sample size is small and the population standard deviation is unknown.
Types of t-Tests
1. One-Sample t-Test: Tests whether the mean of a single sample is significantly different from a known population mean.
2. Independent Two-Sample t-Test: Tests whether the means of two independent samples are significantly different.
3. Paired Sample t-Test: Tests whether the means of two related groups (e.g., measurements before and after treatment) are significantly different.
Key Concepts
• Null Hypothesis (H₀): The hypothesis that there is no effect or no difference. It assumes that any observed difference is due to sampling variability.
• Alternative Hypothesis (H₁ or Ha): The hypothesis that there is an effect or a difference. It suggests that the observed difference is real and not due to chance.
• Degrees of Freedom (df): The number of independent values or quantities that can vary in the analysis. It is used to determine the critical value from the t-distribution table.
• Significance Level (α): The threshold for rejecting the null hypothesis. Common significance levels are 0.05, 0.01, and 0.10.
• Test Statistic: A value calculated from the sample data that is used to make a decision about the null hypothesis.
One-Sample t-Test
Purpose: To determine if the sample mean is significantly different from a known population mean.
t = (x̄ - μ) / (s / √n)
• x̄ is the sample mean.
• μ is the population mean.
• s is the sample standard deviation.
• n is the sample size.
1. State the hypotheses.
2. Choose the significance level (α).
3. Calculate the test statistic (t).
4. Determine the critical value or p-value.
5. Make a decision and interpret the results.
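The formula and steps above can be sketched in Python. The bottle-filling data and the critical value below are hypothetical, chosen only to show the mechanics (the 2.365 cutoff is the usual two-sided α = 0.05 table value for 7 degrees of freedom).

```python
import math
import statistics

# Hypothetical data: do these fill volumes average mu0 = 500 ml?
sample = [498.2, 501.1, 499.4, 497.8, 500.9, 498.5, 499.0, 497.6]
mu0 = 500.0
n = len(sample)

x_bar = statistics.mean(sample)
s = statistics.stdev(sample)             # n - 1 in the denominator
t = (x_bar - mu0) / (s / math.sqrt(n))
df = n - 1                               # 7 degrees of freedom

# Two-sided critical value for alpha = 0.05, df = 7 (from a t-table).
t_crit = 2.365
reject = abs(t) > t_crit
print(round(t, 2), reject)
```

Here |t| stays below the critical value, so the null hypothesis is not rejected: the sample does not give strong evidence against a 500 ml mean.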
Independent Two-Sample t-Test
Purpose: To determine if the means of two independent samples are significantly different.
t = (x̄₁ - x̄₂) / √[(s₁² / n₁) + (s₂² / n₂)]
• x̄₁ and x̄₂ are the sample means.
• s₁² and s₂² are the sample variances.
• n₁ and n₂ are the sample sizes.
1. State the hypotheses.
2. Choose the significance level (α).
3. Calculate the test statistic (t).
4. Determine the degrees of freedom (df).
5. Determine the critical value or p-value.
6. Make a decision and interpret the results.
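The two-sample statistic above uses separate sample variances, so a natural companion for step 4 is the Welch–Satterthwaite degrees of freedom; the original text does not specify which df formula it intends, so that choice is an assumption here, and the group data are made up for illustration.

```python
import math
import statistics

# Hypothetical scores for two independent groups.
group1 = [12.1, 11.4, 13.0, 12.6, 11.9, 12.8]
group2 = [10.2, 11.0, 10.8, 10.1, 11.3, 10.6]

m1, m2 = statistics.mean(group1), statistics.mean(group2)
v1, v2 = statistics.variance(group1), statistics.variance(group2)  # n - 1 form
n1, n2 = len(group1), len(group2)

# Test statistic, matching the formula above (separate variances).
t = (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

# Welch-Satterthwaite approximation for the degrees of freedom.
df = (v1 / n1 + v2 / n2) ** 2 / (
    (v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1)
)
print(round(t, 2), round(df, 1))
```

A t around 5 with roughly 9 degrees of freedom is far past any common critical value, so these two group means would be judged significantly different.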
Ghestalt is the content management system under development for ghilbert.org.
The text markup language is very strongly based on Creole, with only tables and images not yet implemented, and a few extensions. All deviations are described on the WikiMarkup page.
Raph Levien, its co-developer, is particularly proud of the ghmarkup template filter:
{{ log.ldata|ghmarkup }}
The code is accessible as a darcs repository at http://raph.levien.com/garden/ghestalt/
This page (revision-1) was last changed on 08-Jan-2007 20:30
tests/web_2/instantiation_stub_test.dart - sdk.git - Git at Google
// Copyright (c) 2018, the Dart project authors. Please see the AUTHORS file
// for details. All rights reserved. Use of this source code is governed by a
// BSD-style license that can be found in the LICENSE file.

// @dart = 2.7
// dart2jsOptions=--strong

import 'package:expect/expect.dart';

// This needs one-arg instantiation.
T f1a<T>(T t) => t;

// This needs no instantiation because it is not closurized.
T f1b<T>(T t1, T t2) => t1;

class Class {
  // This needs two-arg instantiation.
  bool f2a<T, S>(T t, S s) => t == s;

  // This needs no instantiation because it is not closurized.
  bool f2b<T, S>(T t, S s1, S s2) => t == s1;
}

int method1(int i, int Function(int) f) => f(i);
bool method2(int a, int b, bool Function(int, int) f) => f(a, b);
int method3(int a, int b, int c, int Function(int, int, int) f) => f(a, b, c);

main() {
  // This needs three-arg instantiation.
  T local1<T, S, U>(T t, S s, U u) => t;

  // This would need no instantiation, but a local function is always
  // closurized, so we assume it does.
  T local2<T, S, U>(T t, S s, U u1, U u2) => t;

  Expect.equals(42, method1(42, f1a));
  Expect.equals(f1b(42, 87), 42);

  Class c = new Class();
  Expect.isFalse(method2(0, 1, c.f2a));
  Expect.isFalse(c.f2b(42, 87, 123));

  Expect.equals(0, method3(0, 1, 2, local1));
  Expect.equals(42, local2(42, 87, 123, 256));
}
How to use COUNTIF function in Excel? - Excel Academy
How to use COUNTIF function in Excel?
COUNTIF is a very useful function in Excel. It calculates how many cells in a given range meet a specified criterion.
SYNTAX of COUNTIF function
Range – required; in simple words, this argument tells Excel where you want to look for the specified Criteria;
Criteria – required; it defines what you would like to find in the cells specified in Range.
To use the COUNTIF function, start typing “=COUNTIF(” in the formula bar, and then use the fx icon to open the window with the arguments specification:
How to define COUNTIF criteria?
Criteria for the COUNTIF function have to be defined in a specific way – the same way as for the SUMIF function. In the table below you can find the most common examples of COUNTIF criteria:
Criteria Correct criteria definition
Less than 0 “<0”
Greater than 0 “>0”
Equals to 0 “=0”
Less or equals to 0 “<=0”
Greater or equals to 0 “>=0”
Different than 0 “<>0”
Blank cells “”
Not blank cells “<>”
Text in cell begins with abc “abc*”
Text in cell ends with abc “*abc”
Example 1 – basic usage of COUNTIF function
The table below contains a list of employees with information about which department they work in and how much they earn per month:
For this example, we would like to know:
How many employees work in Accounting Department?
How many employees earn more than $1,000 per month?
How many employees work in this company?
In the last example, I have used “<>” as the criteria, because I would like to find and count all non-empty cells in column A, which contains the employee names.
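Under the assumption that employee names sit in A2:A17, departments in B2:B17, and monthly salaries in C2:C17 (the original table's exact ranges are not shown, so these references are illustrative), the three questions could be answered with formulas like:

```
=COUNTIF(B2:B17,"Accounting")   → employees in the Accounting Department
=COUNTIF(C2:C17,">1000")        → employees earning more than $1,000 per month
=COUNTIF(A2:A17,"<>")           → non-blank cells in column A = total employees
```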
Here are the results I have received:
Example 2 – criteria defined in different cell
Table below contains information about sales revenue divided by product categories:
In cells A20, A21 and A22 there are 3 categories, and I would like to know how many times these categories have been listed in the table above.
The correct formula to be used is:
This formula can be dragged down for the remaining categories, but we need to remember to lock the cell references by adding $:
File with examples – download
straightedge-and-compass construction
✅ Correct GPT-4 ⚔️ Default
You are a math graduate. Explain how to build an equilateral triangle using a compass and straightedge construction.
Building an equilateral triangle using only a compass and a straightedge is a classic exercise in geometric construction. Here’s a step-by-step guide on how to do it:
1. Draw the Base Line:
□ Start by drawing a straight line segment using the straightedge. This will be one side of the equilateral triangle. Let’s call the endpoints of this segment (A) and (B).
2. Draw Arcs to Find the Third Point:
□ Place the compass' point at one endpoint of the line segment, say (A), and adjust its width to the length of the line segment (AB).
□ Without changing the compass width, draw an arc above the line segment. This arc will be part of a circle centered at (A) with radius (AB).
□ Now, move the compass to the other endpoint, (B), and draw another arc that intersects the first arc. The intersection point of these two arcs is equidistant from (A) and (B), ensuring that
(AB), (AC), and (BC) can all be the same length.
3. Draw the Remaining Sides:
□ Use the straightedge to draw a line segment from (A) to the intersection point of the arcs. This is the second side of the triangle.
□ Draw another line segment from (B) to the intersection point. This completes the equilateral triangle.
4. Finalize the Equilateral Triangle:
□ You now have an equilateral triangle (ABC) where all sides (AB), (AC), and (BC) are of equal length, and all angles are 60 degrees.
This construction is based on the principle that the set of all points a fixed distance (the length of (AB)) from a given point (either (A) or (B)) forms a circle. The intersection of the circles
centered at (A) and (B) is equidistant from (A) and (B), ensuring that all sides of the triangle are of equal length, which is the defining characteristic of an equilateral triangle.
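The closing claim can be checked numerically. The sketch below (not part of the original answer) places A and B one unit apart, computes the upper intersection of the two radius-|AB| arcs, and verifies that all three sides come out equal.

```python
import math

# Place A and B one unit apart; both arcs are circles of radius |AB| = 1.
A = (0.0, 0.0)
B = (1.0, 0.0)

# By symmetry the upper arc intersection lies at x = 1/2,
# with y = sqrt(r^2 - (1/2)^2) = sqrt(3)/2.
C = (0.5, math.sqrt(3.0) / 2.0)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

sides = (dist(A, B), dist(A, C), dist(B, C))
print(all(math.isclose(side, 1.0) for side in sides))  # → True
```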
HP Prime equivalent of OBJ-> ? - Printable Version
HP Prime equivalent of OBJ-> ? - John Colvin - 11-18-2013
I couldn't find anything in the HP Prime user manual that addresses how to perform the equivalent of OBJ-> supported by the HP 50G and earlier calculators. This is annoying because solutions to polynomial equations, for example, are returned in brackets. Is there a command or set of commands in HP Prime that can remove these elements and place them individually on the stack?
Edited: 18 Nov 2013, 5:39 a.m.
Re: HP Prime equivalent of OBJ-> ? - Didier Lachieze - 11-18-2013
You may find some answers in this thread: HP Prime SIZE and OBJ-> with matrices/vectors/lists
Re: HP Prime equivalent of OBJ-> ? - John Colvin - 11-18-2013
Thanks. At least I can write a program that can perform this operation.
How to Use a Fraction Calculator to Simplify Fractions? - The Data Scientist
How to Use a Fraction Calculator to Simplify Fractions?
Fractions are essential in mathematics; they represent parts of a whole. A fraction consists of a numerator (the top number) and a denominator (the bottom number). Simplifying fractions, however, can be the tricky part.
However, you can easily simplify fractions by using a fraction calculator, which only requires a few inputs to perform the calculation. The article below discusses this in detail, so keep reading till the end!
What is a Fraction Calculator?
The fraction calculator is an online tool developed by fraction-calculator.net that performs common fraction operations, including addition, subtraction, multiplication, division, and simplification. It is especially useful for simplifying fractions.
Why Simplify Fractions?
Fractions are easier to understand and work with when they are simplified. A fraction is simplified when the numerator (top number) and the denominator (bottom number) are the smallest possible
numbers that can represent the same value. For example, 4/8 can be simplified to 1/2.
Steps to Simplify Fractions Using a Fraction Calculator
Step 1: Find the Calculator
First, find a reliable calculator. There are many free online options available. So simply search for “Fraction Calculator” and choose one that has good reviews and an easy-to-use interface.
Step 2: Enter the Fraction
Once you have found the calculator, you need to enter the fraction you want to simplify. Most calculators will have two input fields: one for the numerator and one for the denominator.
For example: If you want to simplify the fraction 16/24, you will enter 16 in the numerator field and 24 in the denominator field.
Step 3: Choose the Simplify Option
After entering your fraction, look for the option to simplify the fraction. Click on this button to proceed.
Step 4: Review the Result
The calculator will display the simplified fraction. In our example, 16/24 will be simplified to 2/3. Double-check to ensure you entered the original fraction correctly. If everything looks good, you
now have your simplified fraction!
Understanding the Simplification Process
Finding the Greatest Common Divisor (GCD)
To simplify a fraction, the calculator finds the greatest common divisor (GCD) of the numerator and the denominator. The GCD is the largest number that divides both the numerator and the denominator.
Dividing by the GCD
Once the GCD is found, the calculator divides both the numerator and the denominator by the GCD. This gives the simplified fraction.
For example: In the fraction 16/24, the GCD of 16 and 24 is 8. Dividing both the numerator and the denominator by 8 gives us the simplified fraction 2/3.
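The same two steps can be sketched in a few lines of Python (an illustrative sketch, not the calculator's actual implementation), using the standard library's gcd:

```python
from math import gcd

def simplify(numerator, denominator):
    """Reduce a fraction to lowest terms by dividing out the GCD."""
    g = gcd(numerator, denominator)
    return numerator // g, denominator // g

print(simplify(16, 24))  # gcd(16, 24) = 8, so this prints (2, 3)
```

A fraction already in lowest terms, such as 2/3, has GCD 1 and passes through unchanged.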
Common Features of a Fraction Calculator
Addition and Subtraction
• Some calculators allow you to add or subtract fractions. To do this, enter the fractions you want to add or subtract and choose the appropriate operation. The calculator will provide the result
in simplified fractions.
Multiplication and Division
• You can also multiply or divide fractions using a calculator. Enter the fractions and select the multiplication or division option. The calculator will handle the calculations and provide a
simplified result.
Tips for Using the Calculator
Double-Check Your Input
Make sure that you enter the correct numbers in the numerator and denominator fields, because a small mistake can lead to incorrect results.
Understand the Result
Take a moment to understand the simplified fraction. If you know how the calculator gives these results then it can help you learn the simplification process.
Use a Reliable Calculator
Choose the calculator from a reputable source. This ensures the calculator you have chosen is accurate and reliable.
Practice Makes Perfect
The more you use the calculator, the more comfortable you will become with simplifying fractions. So practice with different fractions to build your confidence.
Benefits of Using this Calculator
Saves Time
Manually simplifying fractions can be time-consuming, especially when you deal with large numbers. So the calculator simplifies fractions quickly and saves you time.
Reduces Errors
Manual calculations can sometimes lead to errors. The calculator reduces the chances of errors and provides you with accurate results every time.
Easy to Use
Fraction calculators are developed to be user-friendly. Even if you are not good at math, you can easily use this calculator to simplify fractions.
Educational Tool
Using this tool helps you understand fractions better. By seeing the simplified results, you can learn about common factors and the simplification process easily.
What Does Simplify a Fraction Mean?
When a fraction is simplified, it is expressed in its simplest form. This is achieved by dividing both the numerator (top number) and denominator (bottom number) by their greatest common divisor (GCD).
Does the Calculator Show the Steps to Simplify?
Some calculators offer a step-by-step solution. These calculators will often identify the GCD and show the process of dividing both the numerator and denominator by it.
Can I Simplify Any Fraction with a Calculator?
Yes, most calculators can simplify any fraction as long as it’s entered correctly. However, some calculators might not handle mixed numbers (whole numbers with a fractional part).
What If the Fraction Is Already in Its Lowest Terms?
The calculator will usually just display the original fraction if it’s already simplified.
Mathematics
• Splitting subspaces, Krylov subspaces and polynomial matrices over finite fields
• On the distribution and applications of Ramanujan sums
• On some special classes of additive codes over finite fields
• Cuisine fusion
Collection: http://repository.iiitd.edu.in/xmlui/handle/123456789/618 (retrieved 2024-11-07T06:49:13Z)

Splitting subspaces, Krylov subspaces and polynomial matrices over finite fields
Aggarwal, Divya; Ram, Samrith (Advisor). http://repository.iiitd.edu.in/xmlui/handle/123456789/1698 (2024-09-01)
Let V be a vector space of dimension n over the finite field F_q and T be a linear operator on V. Given an integer m that divides n, an m-dimensional subspace W of V is T-splitting if V = W ⊕ TW ⊕ · · · ⊕ T^(d−1)W, where d = n/m. Let σ(m, d; T) denote the number of m-dimensional T-splitting subspaces. Determining σ(m, d; T) for an arbitrary operator T is an interesting problem. We prove that σ(m, d; T) depends only on the similarity class type of T and give an explicit formula in the special case where T is cyclic and nilpotent. Denote by σ(m, d; τ) the number of m-dimensional splitting subspaces for a linear operator of similarity class type τ over an F_q-vector space of dimension md. For fixed values of m, d and τ, we show that σ(m, d; τ) is a polynomial in q. This problem is closely related to another open problem on Krylov spaces. We discuss this connection and give explicit formulae for σ(m, d; T) in the case where the invariant factors of T satisfy certain degree conditions. A connection with another enumeration problem on polynomial matrices is also discussed. We finally present a brief review of some recent developments in the splitting subspaces problem. In particular, the connection of this problem to the theory of symmetric functions is highlighted. Lastly, we conclude with some future directions for research.

On the distribution and applications of Ramanujan sums
Goel, Shivani; Chaubey, Sneha (Advisor). http://repository.iiitd.edu.in/xmlui/handle/123456789/1652 (2024-05-01)
While studying the trigonometric series expansion of certain arithmetic functions, Ramanujan, in 1918, defined a sum of the nth power of the primitive qth roots of unity and denoted it as c_q(n). These sums are now known as Ramanujan sums. Since then, Ramanujan sums have been widely used and studied in mathematics and other areas. Most importantly, they are used in the proof of Vinogradov's theorem that every sufficiently large odd number is the sum of three primes. They are also used to simplify the computation of Arithmetic Fourier Transform (AFT), Discrete Fourier Transform (DFT), and Discrete Cosine Transform (DCT) coefficients for a special type of signal. We study Ramanujan sums in the context of the k-tuple prime conjecture. A twin prime is a prime number that is either two less or two more than another prime number. It is conjectured that there are infinitely many twin primes. Hardy and Littlewood generalized the twin prime conjecture and gave the k-tuple conjecture. Let d_1, . . . , d_k be distinct integers, and let b(p) be the number of distinct residue classes (mod p) represented by the d_i. If b(p) < p for every prime p, the k-tuple conjecture gives an asymptotic formula for the number of n ≤ x such that all the k numbers n + d_i are prime. We study a heuristic proof of the k-tuple conjecture using the convolution of Ramanujan sums. Additionally, we study questions on the distribution of Ramanujan sums. One way to study distribution is via moments of averages. Chan and Kumchev studied the first and second moments of Ramanujan sums. In this thesis, we estimate the higher moment of their averages using the theory of functions of several variables initiated by Vaidyanathaswamy. Ramanujan sums can also be generalized over number fields. A number field is an extension field K of the field of rational numbers Q such that the field extension K/Q has a finite degree. Nowak first studied the first moment for Ramanujan sums over quadratic number fields, and later it was estimated for higher degree number fields as well. For a general number field, assuming the generalized Lindelöf Hypothesis, we improve the first moment result and also study the second moment. Furthermore, unconditionally, we estimate asymptotic formulas for the second moment for quadratic, cubic, and cyclotomic number fields. Our primary tool for these results is a Perron-type formula. Finally, we obtain the second moment result for certain integral domains called Prüfer domains.

On some special classes of additive codes over finite fields
Sharma, Sandeep; Sharma, Anuradha (Advisor). http://repository.iiitd.edu.in/xmlui/handle/123456789/1632 (2024-05-01)
In this thesis, we define and study a new class of additive codes over finite fields, viz. multi-twisted (MT) additive codes, which is a generalization of constacyclic additive codes and an extension of MT (linear) codes introduced by Aydin and Halilović [5]. We study their algebraic structures by writing a canonical form decomposition of these codes using the Chinese Remainder Theorem and provide an enumeration formula for these codes. With the help of their canonical form decomposition, we also provide a trace description for all MT additive codes over finite fields. We further apply probabilistic methods to study the asymptotic properties of the rates and relative Hamming distances of a special subclass of 1-generator MT additive codes. We show that there exists an asymptotically good infinite sequence of MT additive codes of length p^α ℓ and block length p^α → ∞ over F_{q^t} with rate v/(p^η ℓt) and relative Hamming distance at least δ, where ℓ ≥ 1 and t ≥ 2 are integers, q is a prime power, F_{q^t} is the finite field of order q^t, p is an odd prime satisfying gcd(p, q) = 1, v = ord_p(q) is the multiplicative order of q modulo p, η is the largest positive integer such that p^η | (q^v − 1), and δ is a positive real number satisfying h_{q^t}(δ) < 1 − 1/(ℓt) (here h_{q^t}(·) denotes the q^t-ary entropy function). This shows that the family of MT additive codes over finite fields is asymptotically good. As special cases, we deduce that the families of constacyclic and cyclic additive codes over finite fields are asymptotically good. By placing ordinary, Hermitian and ∗ trace bilinear forms, we study the dual codes of MT additive codes over finite fields and derive necessary and sufficient conditions under which an MT additive code is (i) self-orthogonal, (ii) self-dual and (iii) an additive code with complementary dual (or an ACD code). We also derive a necessary and sufficient condition for the existence of a self-dual MT additive code over a finite field and provide enumeration formulae for all self-orthogonal, self-dual and ACD MT additive codes over finite fields with respect to the aforementioned trace bilinear forms. We further employ probabilistic methods and results from groups and geometry to study the asymptotic behavior of the rates and relative Hamming distances of self-orthogonal, self-dual and ACD MT additive codes over finite fields with respect to the aforementioned trace bilinear forms. We establish the existence of asymptotically good infinite sequences of self-orthogonal and ACD MT additive codes of length p^α ℓ and block length p^α → ∞ over F_{q^t} with relative Hamming distance at least δ and rates v/(p^η ℓt) and 2v/(p^η ℓt) with respect to the aforementioned trace bilinear forms, where δ is a positive real number satisfying h_{q^t}(δ) < 1/2 − 1/(2ℓt), p is an odd prime coprime to q, v = ord_p(q) is the multiplicative order of q modulo p, and η is the largest positive integer satisfying p^η | (q^v − 1). This shows that self-orthogonal and ACD MT additive codes over finite fields are asymptotically good. As special cases, we deduce that self-orthogonal and ACD constacyclic additive codes over finite fields are asymptotically good. We also establish the existence of asymptotically good infinite sequences of self-dual MT additive codes of length p^α ℓ and block length p^α → ∞ over F_{q^t} with relative Hamming distance at least δ and rate 1/2 with respect to the aforementioned trace bilinear forms, where δ is a positive real number satisfying h_{q^t}(δ) < 1/2 − 1/(ℓt), and p is an odd prime coprime to q such that v = ord_p(q) is even. As special cases, we deduce that self-dual cyclic and negacyclic additive codes over finite fields are asymptotically good. We also define and study a special class of self-dual MT additive codes over F_{q^t} with respect to the ordinary trace bilinear form, viz. doubly-even self-dual MT additive codes, and characterize a special class of these codes in terms of their constituents with the help of their trace description, where q is an even prime power. With the help of this characterization and using probabilistic methods and results from groups and geometry, we further study the asymptotic behaviour of their relative Hamming distances and show that doubly-even self-dual MT additive codes over F_{2^t} are asymptotically good. As a special case, we also deduce that doubly-even self-dual cyclic additive codes over F_{2^t} are asymptotically good. We next define and study a new class of additive codes over finite fields, viz. additive quasi-Abelian (QA) codes, which is a generalization of a special class of MT additive codes over finite fields and an extension of linear QA codes over finite fields. We study the algebraic structures of these codes and their dual codes with respect to ordinary, Hermitian and ∗ trace bilinear forms. We further express these codes as direct sums of additive codes over finite fields and derive necessary and sufficient conditions under which an additive QA code is (i) self-orthogonal, (ii) self-dual and (iii) ACD. We also derive necessary and sufficient conditions for the existence of a self-dual additive QA code over finite fields. Besides this, we obtain explicit enumeration formulae for all self-orthogonal, self-dual and ACD additive QA codes over finite fields. We also list several MDS and almost MDS codes belonging to the family of additive QA codes over finite fields, which shows that additive QA codes over finite fields are a promising class in which to find codes with good and optimal parameters.

Cuisine fusion
Gupta, Arsh; Makkar, Kshitij; Bagler, Ganesh (Advisor). http://repository.iiitd.edu.in/xmlui/handle/123456789/1614 (2023-11-29)
With growing diversity in personal food preferences and regional cuisine styles, personalized information systems that can transform a recipe into any selected regional cuisine style that a user might prefer would help food companies and professional chefs create new recipes. The aim of the study is to explore computational techniques that can be used to convert a recipe belonging to one cuisine (the source cuisine) into another cuisine (the target cuisine) by changing one ingredient from the original recipe. For ease of understanding, we will call the starting recipe from the source cuisine the original recipe, and the final recipe, which belongs to the target cuisine, the transformed recipe. There are two major tasks that need to be done in order to change a recipe from one cuisine to another computationally: (1) swap each ingredient of the original recipe with the ingredients present in the database, and (2) classify the recipe based on its ingredients and check which cuisine the recipe belongs to. The dataset mainly comprises the labeled corpus of Yummly recipes. We make use of different Machine Learning, Natural Language Processing and Deep Learning techniques to achieve the aim of the study. In recent years, travel and tourism have flourished and different ethnicities have started to live in the same countries. Some people feel the need for fusion cuisines. Some people also love to cook their own food and customize recipes to their needs. These types of computational models and studies in the field of Computational Gastronomy are not only research fields, but can also be used to create new recipes and drive innovation in the food industry.
Math, Grade 6, Expressions, Substituting Numbers for Letters
Make a Train
Work Time
Make a Train
• Use the Creating Trains interactive to make a train that has at least 8 cars. The table shows the types of cars that you can use to make your train.
• Write two different expressions that represent the length of your train. Use the letters specified in the table to represent the lengths of the different types of cars.
• Find the length of your train by replacing the letters in each of your expressions with the values given for each letter in the table.
• Did you make a train and write an expression for the train?
• Did you replace each variable in the expression with the appropriate car length from the table?
• Once you complete these steps, you can calculate the length of the train.
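The substitution step above can be checked with a short script. The letters and car lengths below are made-up placeholders; use the letters and values from your own table.

```python
# Hypothetical car lengths (replace with the values from your table):
b = 40  # length of one boxcar, in feet (assumed)
f = 36  # length of one flatcar, in feet (assumed)

# A train with 3 boxcars and 5 flatcars (8 cars in total).
# Two different expressions for its total length:
length_expr1 = b + b + b + f + f + f + f + f  # repeated addition
length_expr2 = 3 * b + 5 * f                  # multiplication shorthand

print(length_expr1, length_expr2)  # both expressions give the same length: 300
```

Both expressions evaluate to the same number, which is the point of the exercise: different-looking expressions can represent the same quantity.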
Seeing Through the Lens of Geometry: The Link Between Mathematics and Reality - Eye Of Unity Foundation
Geometry, often considered as a branch of mathematics, is more than just a collection of shapes and measurements. It is a powerful tool that helps us understand the world we live in and unravel the
mysteries of the universe. From the structure of molecules to the trajectory of celestial bodies, geometry provides us with a lens through which we can see the interconnectedness of mathematics and reality.
The Fundamental Concepts of Geometry
Geometry is built upon a set of fundamental concepts that form the basis of its applications in various fields. These concepts include points, lines, angles, planes, and shapes. By understanding how
these elements interact, mathematicians can create models that represent real-world phenomena.
Points, Lines, and Planes
A point is a basic building block of geometry, representing a location in space. It has no size or shape, and its position is denoted by coordinates. Lines are made up of an infinite number of points
and have no thickness or endpoints. They extend infinitely in both directions. Planes are flat surfaces that extend indefinitely in all directions, formed by an infinite number of lines.
Angles are formed when two lines meet at a point. They can be classified based on their measurements, such as acute (less than 90 degrees), obtuse (more than 90 degrees but less than 180 degrees),
right (exactly 90 degrees), or straight (exactly 180 degrees).
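These cutoffs translate directly into a small helper function (a sketch; angles above 180 degrees fall outside the four categories listed here):

```python
def classify_angle(degrees):
    """Classify an angle by its measure in degrees, per the definitions above."""
    if degrees <= 0 or degrees > 180:
        raise ValueError("expected an angle in (0, 180] degrees")
    if degrees < 90:
        return "acute"
    if degrees == 90:
        return "right"
    if degrees < 180:
        return "obtuse"
    return "straight"

print(classify_angle(45), classify_angle(90), classify_angle(135), classify_angle(180))
# acute right obtuse straight
```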
Shapes in geometry include polygons, circles, spheres, and more. Polygons are closed figures made up of line segments, while circles are a set of points equidistant from a central point. Spheres, on
the other hand, are three-dimensional shapes formed by rotating a circle around its diameter.
Applications of Geometry in the Real World
Geometry has far-reaching applications in various fields, including architecture, engineering, art, physics, and even computer graphics. It helps us understand the structure and design of buildings,
bridges, and other physical structures. Architects and engineers use geometric principles to ensure structural stability and balance in their designs.
Furthermore, geometry plays a crucial role in computer graphics, allowing the creation of realistic 3D models and simulations. By using geometric algorithms, computer-generated imagery (CGI) can
mimic real-world objects and environments, enriching our visual experiences in movies, video games, and virtual reality.
In physics, geometry is used to describe and explain the behavior of objects in space and time. From Newton’s laws of motion to Einstein’s theory of general relativity, mathematics and geometry are
essential tools for understanding the fundamental laws that govern our universe.
Geometry as a Language for Understanding Reality
Geometry can be seen as a language that enables us to interpret the world around us. It provides a framework for describing and analyzing complex phenomena in a concise and precise manner. By using
geometric models, we can visualize and comprehend intricate patterns, relationships, and symmetries that exist in nature.
For example, the Fibonacci sequence, a series of numbers where each number is the sum of the two preceding ones, can be observed in many natural phenomena such as the arrangement of leaves on a stem
or the spiral pattern of a seashell. This sequence can be understood and represented mathematically through geometric principles, revealing the underlying order and symmetry present in nature.
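The sequence, and its connection to the golden ratio that governs such spiral proportions, can be generated in a few lines:

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers (each the sum of the previous two)."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

fib = fibonacci(10)
print(fib)  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]

# Ratios of consecutive terms approach the golden ratio (~1.618),
# the proportion underlying the spiral growth seen in seashells.
ratios = [b / a for a, b in zip(fib, fib[1:])]
print(round(ratios[-1], 3))  # 1.618
```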
Geometry also helps us understand the concept of space and the relationships between objects. Euclidean geometry, developed by the ancient Greek mathematician Euclid, provides a system of axioms and
theorems that describe the properties of flat, two-dimensional space. Non-Euclidean geometries, such as hyperbolic or spherical geometry, extend these principles to curved and higher-dimensional
spaces, allowing us to explore the geometry of the universe itself.
Q: Why is geometry important?
A: Geometry is important because it allows us to understand and describe the world around us. It provides a language for interpreting complex phenomena, from the microscopic world of atoms to the
vastness of the cosmos. Geometry also plays a crucial role in practical applications such as architecture, engineering, and computer graphics.
Q: How is geometry related to mathematics?
A: Geometry is a branch of mathematics that deals with the properties and relationships of shapes, lines, angles, and spaces. It uses mathematical principles to describe and analyze the physical
world, making it an integral part of mathematics.
Q: Can geometry help in problem-solving?
A: Absolutely! Geometry provides a logical framework for problem-solving. By breaking down complex problems into simpler geometric models, we can apply mathematical reasoning to find solutions.
Geometry teaches us to think analytically, spatially, and systematically, skills that are valuable in many areas of life.
Q: Are there real-life examples of geometry in action?
A: Yes, geometry is present in numerous real-life examples. For instance, architects and engineers use geometric principles to design stable and aesthetically pleasing structures. Artists utilize
geometric patterns and symmetry to create visually appealing compositions. Additionally, GPS navigation systems rely on geometric principles to calculate distances and provide accurate directions.
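As one illustration of the spherical geometry behind such distance calculations, the haversine formula gives the great-circle distance between two latitude/longitude points on a spherical Earth (a simplification; real GPS receivers use more refined Earth models):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two points on a sphere (haversine formula)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Distance from central London (51.5074, -0.1278) to central Paris (48.8566, 2.3522):
print(haversine_km(51.5074, -0.1278, 48.8566, 2.3522))  # roughly 340-345 km
```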
Q: Can geometry be applied to other sciences?
A: Absolutely! Geometry is widely used in various scientific disciplines. In physics, for example, geometry is used to describe the motion of objects and the behavior of light. In astronomy, geometry
helps us understand the shape and formation of celestial bodies. Geometry even plays a role in biology, where it helps model the structure of molecules and DNA.
Geometry is far more than just a mathematical concept. It is a powerful tool that allows us to see the world through a different lens, revealing the hidden connections and patterns that govern our
reality. From the smallest atoms to the vastness of the cosmos, geometry provides us with a language to understand and appreciate the beauty and complexity of the universe.
Mechanics and dynamics of translocating MreB filaments on curved membranes
MreB is an actin homolog that is essential for coordinating the cell wall synthesis required for the rod shape of many bacteria. Previously we have shown that filaments of MreB bind to the curved
membranes of bacteria and translocate in directions determined by principal membrane curvatures to create and reinforce the rod shape (Hussain et al., 2018). Here, in order to understand how MreB
filament dynamics affects their cellular distribution, we model how MreB filaments bind and translocate on membranes with different geometries. We find that it is both energetically favorable and
robust for filaments to bind and orient along directions of largest membrane curvature. Furthermore, significant localization to different membrane regions results from processive MreB motion in
various geometries. These results demonstrate that the in vivo localization of MreB observed in many different experiments, including those examining negative Gaussian curvature, can arise from
translocation dynamics alone.
The role of membrane curvature in influencing the cellular location and function of proteins has been increasingly appreciated (McMahon and Gallop, 2005; Zimmerberg and Kozlov, 2006; Baumgart et al.,
2011). In cells, membrane curvature can both be induced by proteins that bind to membranes as well as recruit proteins that bind to this curvature (Peter et al., 2004; Renner and Weibel, 2011; Wang
and Wingreen, 2013; Ursell et al., 2014; Hussain et al., 2018; Wu et al., 2018). Here, we focus on MreB, an actin homolog essential for coordinating the cell wall synthesis required for the rod shape
of many bacteria (Jones et al., 2001). We examine how MreB orientation and motion along directions of principal curvature affect its localization in different cellular geometries. MreB filaments
polymerize onto membranes (Figure 1A) (Salje et al., 2011; van den Ent et al., 2014; Hussain et al., 2018), creating short filaments that move along their lengths in live cells. Viewed dynamically,
MreB filaments are seen to rotate around the rod width, a motion powered by the activity of associated cell wall synthesis enzymes (Figure 1B and Figure 1—video 1) (Garner et al., 2011;
Domínguez-Escobar et al., 2011; van Teeffelen et al., 2011; Reimold et al., 2013; Hussain et al., 2018). The orientation of MreB filaments coincides with the direction of their motion, as filaments
move along the direction in which they point (Figure 1C) (Hussain et al., 2018; Olshausen et al., 2013). However, studies examining the localization of MreB in kymographs (Ursell et al., 2014) or at
single time points (Bratton et al., 2018) have found that, in Escherichia coli, MreB filaments are enriched at regions of negative Gaussian curvature or small mean curvature (Figure 1D). This prompts
the question of how the collective motion of MreB filaments could affect, or give rise to, their enrichment. As filaments are constantly moving, their instantaneous localization to any one point is
transient: a typical filament (~250 nm long) in Bacillus subtilis moves through a 1 µm^2 region in ~30 s. To understand how the previously observed enrichment at negative Gaussian curvatures could
arise from the dynamics of filaments moving around the cell, we sought to relate the binding and motion of MreB filaments to their distribution in different geometries.
In previous work (Hussain et al., 2018), we demonstrated that MreB filaments orient and translocate along directions of largest principal curvature inside differently shaped B. subtilis cells and
liposomes. To understand how MreB filaments orient along different membrane curvatures and how their motion along this orientation affects their cellular localization, we first model the mechanics of
MreB-membrane binding and provide a quantitative description of how MreB filaments bind both in vivo and in vitro. Next, we model the curvature-dependent motion of MreB filaments in different
geometries and examine how this motion affects their distribution to different membrane regions. Strikingly, we find that the dynamics of MreB translocation alone, without requiring any intrinsic
preference of MreB filaments for curved regions of the cell, results in differential enrichment of MreB filaments at regions of negative Gaussian curvature or small mean curvature similar to those
observed in cells.
We first model how inwardly curved (Salje et al., 2011; van den Ent et al., 2014; Hussain et al., 2018) MreB filaments bind and orient on membranes. Previous theoretical studies have modeled the
binding of protein filaments to membranes and demonstrated that binding conformations can be influenced by both filament thickness and twist. In a seminal theoretical work, Wang and Wingreen modeled
twisted bundles of MreB approximately six-fold thicker than typical filaments (Wang and Wingreen, 2013). This study nicely demonstrated that the mechanics of binding alone could orient bundles and
suggested that bundle length could be limited by twist. Another elegant theoretical study by Quint et al. demonstrated that twisted filaments of varying rigidities could bind to regions of negative
Gaussian curvature in manners that are particularly energetically favorable (Quint et al., 2016). While it is intriguing to examine the effects of filament rigidity and twist on general filament
systems (discussed below), here we focus on modeling thin, inwardly curved MreB filaments with no twist. We focus on these parameters as they reflect the observations of all available in vitro
studies of membrane-associated MreB: three different cryo-electron microscopy studies have shown that membrane-bound MreB filaments are flat and untwisted, binding to the membrane on one filament
face (Figure 1A) (Salje et al., 2011; van den Ent et al., 2014; Hussain et al., 2018). These studies also suggest that inward curvature, and not twist, limits filament length: MreB filaments are
short when polymerized onto non-deforming planar-supported lipid bilayers, but become extremely long when polymerized inside deformable liposomes (Salje et al., 2011; van den Ent et al., 2014;
Hussain et al., 2018). The model we present here extends our previous work (Hussain et al., 2018) and demonstrates that thin, untwisted filaments orient robustly along directions of largest principal
curvature, thus providing a generic mechanism for orienting their motion.
We model an MreB filament as a polymer which binds linearly along its length to a membrane in an energetically favorable manner, namely via burial of hydrophobic residues on one face (Figure 2A; see
Figure 2—figure supplement 1 for additional details) (Salje et al., 2011). We assume the filament to be a curved, cylindrical, linear-elastic rod which is free to bend to maximize membrane
interaction, so that its elastic energy of deformation is $E_{\mathrm{bend}} = (\pi Y r_f^4/8) \int (\kappa - \kappa_s)^2 \, d\ell$, where $r_f$ is the filament cross-sectional radius, $Y$ is the filament Young’s modulus, $\kappa$ is the curvature of the deformed state, $\kappa_s$ is the intrinsic filament curvature, and the integration is over the filament length (Landau and Lifshitz, 1970). We will minimize the free energy change due to binding with respect to the deformed curvature, $\kappa$.
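For a filament deformed to a uniform curvature along its whole length, the integral reduces to a single product, which is easy to evaluate numerically. The parameter values below are illustrative placeholders chosen for this sketch, not the fitted values of Supplementary file 1:

```python
import math

def bending_energy(young, r_f, kappa, kappa_s, length):
    """E_bend = (pi*Y*r_f^4/8) * (kappa - kappa_s)^2 * length for a filament
    deformed to a uniform curvature kappa along its whole length."""
    return math.pi * young * r_f**4 / 8.0 * (kappa - kappa_s) ** 2 * length

# Illustrative placeholder values (NOT the paper's fitted parameters):
young = 2e9           # filament Young's modulus, Pa (assumed)
r_f = 2e-9            # cross-sectional radius, m (assumed)
kappa_s = 1 / 200e-9  # intrinsic curvature, ~(200 nm)^-1 (assumed)
kappa = 1 / 500e-9    # deformed to conform to a 0.5 um membrane radius (assumed)
length = 250e-9       # filament length, m

E = bending_energy(young, r_f, kappa, kappa_s, length)
print(E)  # ~2.8e-20 J, i.e. several k_B*T at room temperature
```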
Next, we assume an isotropic, fluid, bilayer membrane, where there are no in-plane shears and the only in-plane deformations are compressions and expansions (Safran, 2003). The membrane free energy
is given by the Helfrich form,
(1) $\mathcal{F} = \int_S \left[\frac{k_b}{2}\left(2H - H_s\right)^2 + \frac{k_t}{2}K + \gamma\right] dA - p\int_S dV,$
where $k_b$ is the bending rigidity of the membrane, $k_t$ is the saddle-splay modulus of the membrane, $H_s$ is the spontaneous curvature, $\gamma$ is the membrane surface tension, $p$ is the pressure difference across the membrane, $H$ and $K$ are the mean and Gaussian curvatures of the membrane surface, $S$, respectively, and $dA$ and $dV$ denote area and volume elements, respectively (Helfrich, 1973; Safran, 2003; Zhong-can and Helfrich, 1987; Zhong-can and Helfrich, 1989). For the small membrane deformations considered in this work, we assume that excess phospholipids can be freely added to the membrane to compensate for stretching, so that the membrane surface tension $\gamma = 0$ (Safran, 2003). Note that a nonzero surface tension would make the preferred binding orientation determined below more energetically favorable. For simplicity, we also assume $H_s = 0$ and note that the case of a nonzero $H_s$ can be considered similarly. Finally, we assume that the membrane surface
can be parameterized in the Monge gauge by a function $h = h(x,y)$, where $x$ and $y$ are real numbers and terms of quadratic order or higher in the gradient of $h$, $\nabla h$, are neglected. As shown in Appendix 1, the mechanical energy associated with membrane deformations is determined by the solution of the shape equation
(2) $\Delta^2 h = \frac{p}{k_b},$
which is similar to the equilibrium equation of a thin plate (Ventsel and Krauthammer, 2001; Timoshenko and Woinowsky-Krieger, 1959). Here $\Delta^2$ is the biharmonic operator and Equation (2) is subject
to Dirichlet boundary conditions enforcing continuity of mean curvature and surface height in a manner compatible with the deformed filament. Equation (2) can then be decomposed as two Poisson
equations, each with Dirichlet boundary conditions, and solved numerically using the finite element method (Appendix 1).
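As a toy illustration of this decomposition (using a simple finite-difference Jacobi iteration on a square patch rather than the finite element method of Appendix 1), one can solve $\Delta w = p/k_b$ first and then $\Delta h = w$, each with zero Dirichlet data:

```python
import numpy as np

def solve_poisson(f, spacing, iters=5000):
    """Solve Laplacian(u) = f on a square grid, u = 0 on the boundary,
    by Jacobi iteration with the standard 5-point stencil."""
    u = np.zeros_like(f)
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2]
                                + u[1:-1, 2:] - spacing**2 * f[1:-1, 1:-1])
    return u

# Delta^2 h = p/k_b splits into Delta w = p/k_b followed by Delta h = w.
n = 41                             # grid points per side of a unit square patch
spacing = 1.0 / (n - 1)
p_over_kb = np.full((n, n), 1.0)   # uniform load in arbitrary units (assumed)
w = solve_poisson(p_over_kb, spacing)
height = solve_poisson(w, spacing)
print(height.max())  # peak deflection; ~0.0041 for this simply-supported-like case
```

The zero-height, zero-`w` boundary data used here corresponds to a simply supported plate; the paper's actual boundary conditions instead enforce continuity of surface height and mean curvature with the deformed filament.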
The free energy change due to filament binding is determined by $E_{\mathrm{bend}}$, $\mathcal{F}$, and the solution of Equation (2). For characteristic parameter values relevant to the binding of MreB filaments to
bacterial membranes, as summarized in Supplementary file 1, our model predicts a preferred, circumferential orientation of MreB binding in a rod-shaped cell. This result arises because the intrinsic
curvature of filaments is smaller than characteristic cell radii. While the filament bends to conform to the membrane for physiological values of $p$—as shown in previous work (Hussain et al., 2018)
—and grossly deforms the membrane for small values of $p$, the preferred binding orientation is robust to changes in $p$ (Figure 2B). In fact, across a wide range of parameter values including the
filament bending rigidity ($B = \pi Y r_f^4/4$), the filament intrinsic curvature ($\kappa_s$), and the membrane pressure difference ($p$), a preferred binding orientation exists and coincides with the direction
of largest principal curvature for any membrane which is less curved than the filament. In the case of $p=0$, as discussed in Appendix 1, the prediction that MreB binding induces large membrane
deformations is consistent with cryo-electron microscopy images of MreB binding to vesicles (Figure 1A) (Salje et al., 2011; van den Ent et al., 2014). Importantly, the energetic penalties for
deviatory binding conformations are larger than the energy of thermal fluctuations across a large range of $p$, suggesting the empirically observed variation in binding orientation (Figure 1C) to be
caused by other sources of stochasticity. The energetic penalties are also decreased in wider membranes for both small and physiological values of $p$, the latter of which is consistent with the
gradual widening of the distributions of MreB trajectory angles in wider B. subtilis protoplasts (Hussain et al., 2018).
In general, the mechanics of filament binding are well described by the pressure difference across the membrane and the filament bending energy, which can be viewed as order parameters that largely
dictate whether the filament predominantly bends the membrane, bends to conform to the membrane, or both. An approximate phase diagram for MreB binding to any membrane which is less curved than the
filament can be determined (Figure 2C and Appendix 1). Below, we suppose the membrane surface to be less curved than the filament—so that it is always energetically favorable to orient along
directions of largest membrane curvature—and model filament translocation along these directions.
Dynamics of translocation
We next examine how the translocation of MreB filaments, once bound to the membrane, affects their distribution in different geometries. Inside cells, MreB filaments move along the membrane in the
direction of their orientation (Hussain et al., 2018; Olshausen et al., 2013) (Figure 1C). This directional and processive motion is driven by cell wall synthesis (Sliusarenko et al., 2010; van
Teeffelen et al., 2011; Olshausen et al., 2013), and filaments may reorient according to different membrane geometries (Hussain et al., 2018). Thus, highly-bent MreB filaments translocate along the
direction of largest curvature, a direction that minimizes the energetic cost of binding. As discussed below, this hypothesis is supported by observations that (1) MreB moves circumferentially in
live B. subtilis cells, but this motion becomes disoriented if cells become round, (2) circumferential motion is re-established when round cells are confined into rods, (3) MreB moves directionally
in bulges protruding from round cells, and (4) MreB filaments rapidly translocate out of poles in rods, reorienting when filaments reach the cylindrical bulks (Hussain et al., 2018). Assuming
translocation on a static surface, we may model the trajectories of filaments as biased random walks as follows (with more details provided in Appendix 1). The case of a dynamical surface, as
expected for MreB-directed growth, can be considered similarly. Note that a ‘biased random walk’ refers to a succession of random steps which may be processive: while the mean-squared displacement of
a filament will be approximately quadratic, and not linear, in time, the processive motion we consider is random only because the translocation direction can deviate from directions of largest
membrane curvature due to sources of stochasticity (Figure 1C). We will show that our biased random walk model of filament trajectories leads to predictions of MreB localization.
We consider the membrane as a parametric surface, $\mathbf{r} = \mathbf{r}(u,v)$, embedded in three-dimensional space ($\mathbb{R}^3$) with surface coordinates $u$ and $v$ and a filament as a point on this surface which, at any moment in time, translocates along the largest principal direction $\mathbf{d}$—that is, the direction of largest curvature of the surface. As $\mathbf{d}$ is a vector in $\mathbb{R}^3$, arbitrarily moving in the direction of $\mathbf{d}$ may move the filament off of the surface. To define the translocation consistently, we set $\eta = \cos^{-1}\frac{\mathbf{d} \cdot \mathbf{r}_\theta}{\|\mathbf{d}\| \, \|\mathbf{r}_\theta\|}$, where $\eta$ is an angular deviation from the largest principal direction on the surface introduced by possible sources of stochasticity, the modified direction corresponds to an angle $\theta$ relative to the $u$-axis in parametric coordinates, $\mathbf{r}_\theta \in \mathbb{R}^3$ is the derivative of $\mathbf{r}$ in the direction of $\theta$, and distances are defined by the surface metric. Translocating along an angle $\theta$ with respect to the $u$-axis in $(u,v)$-coordinates then ensures that the filament remains on the surface, and the direction of translocation corresponds to that on a patch of $\mathbf{r}$.
As a discrete-time flow in $(u,v)$-coordinates, and with suitable units of time so that the filament may reorient at every timestep, the 2D equation of filament motion is
(3) $X_{n+1} = X_n + \chi_n \ell_n \left(\cos\theta_n, \sin\theta_n\right),$
where $X_n$, $\ell_n$, and $\theta_n$ are the position, step size, and translocation angle, respectively, of the filament at a timestep $n$. Here $\theta_n$ is the value of $\theta$ computed at the surface point corresponding to $X_n$ and assuming $\eta \sim \mathcal{N}(0,\sigma^2)$—that is, the angular noise is normally distributed, with mean zero and a variance, $\sigma^2$, to be inferred from data—and note that the translocation noise, $\sigma$, may depend on quantities such as the principal curvatures, as discussed later. $\chi_n$ is a random sign, which accounts for the possibility of both left-handed and right-handed translocation, and may not substantially vary in $n$ if the filament does not backtrack, as is assumed for the remainder of this work. We assume that $\ell_n$ satisfies an integral equation which relates it to a constant
and finite filament step size, $L$, on the surface (Appendix 1), and note that inertia in MreB motion, as measured previously by the velocity autocorrelation (Kim et al., 2006; van Teeffelen et al.,
2011), is modeled by the finite step size. Note that, when both principal curvatures are equal, so that $\mathbf{d}$ is not well defined, we will assume steps of uniformly random angles and size $L$ on the surface. Furthermore, for finite step sizes, Equation (3) extrapolates the translocation direction in surface coordinates: while motion along the largest principal direction can be maintained irrespective of parameterization by parallel transport, we anticipate the distinction from Equation (3) to be insignificant for many geometries due to the small step sizes considered in this work. Determining the probability distribution of $X$ then determines the probability distribution of the filament on $\mathbf{r}$, and this can be done analytically and numerically for several geometries as
discussed below.
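A minimal sketch of Equation (3) on a cylinder, where the largest principal direction is circumferential, could look as follows. The cell radius is an assumed value, the handedness $\chi$ is held fixed (no backtracking), and the filament is allowed to reorient at every step:

```python
import math
import random

def simulate_trajectory(radius, step, sigma, n_steps, seed=0):
    """Biased random walk of Eq. (3) on a cylinder of the given radius:
    each step of arclength `step` follows the circumferential (largest
    principal) direction, deviated by angular noise eta ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    chi = 1                    # fixed handedness: the filament never backtracks
    z, phi = 0.0, 0.0          # axial position and circumferential angle
    path = [(z, phi)]
    for _ in range(n_steps):
        eta = rng.gauss(0.0, sigma)
        z += step * math.sin(eta)                    # axial part of the step
        phi += chi * step * math.cos(eta) / radius   # circumferential part
        path.append((z, phi))
    return path

# Roughly one hoop of motion: L = 0.2 um steps, sigma = 0.3 rad,
# and a cell radius of 0.5 um (assumed).
path = simulate_trajectory(radius=0.5, step=0.2, sigma=0.3, n_steps=16)
print(path[-1])  # processive circumferential motion with a small axial drift
```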
We next consider the dynamics of activating and deactivating filaments as follows. We suppose that a filament may be activated at a position $X$ at any timestep at a constant rate $k \geq 0$ with probability proportional to the membrane surface area, $dA(X)$, and deactivated at a constant rate $\lambda \geq 0$ which determines the filament’s processivity—that is, the mean number of steps that a
filament takes on the membrane surface before becoming inactive (Figure 3A). The case of $k$ being dependent on fields, such as mechanical strains (Wong et al., 2017), can be considered similarly but
is not necessary for the results below. An ensemble of filaments produced by such dynamics will exhibit filament numbers, $NF$, that vary in space and time, and likewise for the filament
concentration $CF(X,n)=NF(X,n)/dA(X)$. Below, we discuss characteristic parameter values relevant to MreB and show that the dynamics of Equation (3) gives rise to localization. We then examine
the model in detail and describe how localization depends on different parameters of the model.
Implications to MreB localization
Previous fluorescence microscopy measurements provide estimates for the step size ($L$), deactivation rate ($\lambda$), and translocation noise ($\sigma$) of MreB filaments in cells. We assume $L$ to be $200\,\mathrm{nm}$ as a modeling choice, but show in Appendix 1 that the results discussed below are qualitatively similar for significantly larger $L$ ($\sim 2\,\mu\mathrm{m}$). Similar experiments have estimated the persistence time of MreB filaments in E. coli as ~5 min (Ursell et al., 2014), while a characteristic translocation noise of $\sigma \approx 0.3\,\mathrm{rad}$ in B. subtilis has been found separately by (1) measuring filament trajectory angles relative to the midline and (2) measuring binding angles in confined protoplasts (Figure 1C) (Hussain et al., 2018). While we assume the values of $\lambda$ and $\sigma$ to be based on these measurements, we
examine the effects of varying $\lambda$ and $\sigma$ in the following section. Furthermore, in recent studies, rod-shaped cells have been perturbed to be in geometries other than a spherocylinder (Ursell et al.,
2014; Wong et al., 2017; Hussain et al., 2018; Renner et al., 2013; Amir et al., 2014). As the distribution of MreB filament angles gradually becomes broader as B. subtilis cells become wider (
Hussain et al., 2018), it may also be reasonable to suppose that $\sigma$ depends on the difference, $\Delta c$, of principal curvatures at the location of any MreB filament: $\sigma = \alpha(\Delta c)^{-1}$, where $\alpha \approx 0.6\,\mathrm{rad}\cdot\mu\mathrm{m}^{-1}$ is a constant of proportionality determined by experimental data. While all our results pertaining to MreB below assume this dependence so as to be consistent with data, we show in Appendix 1 that our results are similar for different dependencies of $\sigma$ on $\Delta c$.
Given the aforementioned parameters, Equation (3) leads to predictions for the statistics of the filament position ($X$) and the ensuing filament concentration ($C_F$) across different membrane geometries. For a cylindrical cell, analytical expressions for the statistics of $X$ show that translocation noise does not significantly affect the mean or variance of the circumferential displacement of a filament (Appendix 1). In contrast, the value of $\sigma \approx 0.3\,\mathrm{rad}$ corresponds to a standard deviation of approximately $0.2\,\mu\mathrm{m}$ for the axial displacement of a filament per hoop of wall
material inserted. This value is consistent with experimental measurements (Figure 3B), showing that deviations from a circumferential translocation direction can significantly contribute to wall
insertions in the axial direction and disordered wall architecture.
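A back-of-envelope check of this figure: assuming independent per-step deviations, the axial displacement per step is roughly $L\sin\eta \approx L\eta$, and one hoop comprises $2\pi r/L$ steps, so the per-hoop axial standard deviation is about $\sigma\sqrt{2\pi r L}$. The cell radius below is an assumed value:

```python
import math

# Per step of arclength L, the axial displacement is ~ L*sin(eta) with
# eta ~ N(0, sigma^2), so its standard deviation is ~ L*sigma.  One hoop
# contains 2*pi*r/L steps, so the axial SD per hoop is sigma*sqrt(2*pi*r*L).
sigma = 0.3    # rad, translocation noise (from the text)
L = 0.2        # um, filament step size (from the text)
r = 0.4        # um, cell radius -- an assumed value for B. subtilis
sd_axial = sigma * math.sqrt(2 * math.pi * r * L)
print(round(sd_axial, 2))  # ~0.21 um, consistent with the ~0.2 um quoted above
```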
MreB filaments have been observed to be depleted from the hemispherical poles of spherocylindrical cells compared to the cylindrical bulks (Kawazura et al., 2017; Ursell et al., 2014). Observations
of filament dynamics revealed a possible explanation: MreB filaments reorient rapidly in, and translocate out of, the poles and into the bulks, where motion then becomes aligned (Hussain et al., 2018
). Consistent with this observation, simulations of Equation (3) on a spherocylindrical surface show that the concentration of filaments in the bulk is enhanced (Figure 3C). The average filament
concentration is predicted to be approximately two-fold higher in the bulk than the poles, in agreement with experimental measurements in E. coli (Ursell et al., 2014). Simulations of Equation (3) on
a toroidal surface are also quantitatively consistent with prior measurements of MreB fluorescence in E. coli cells confined to donut-shaped microchambers, which have shown that MreB intensity is
increased at the inner edges by a factor of ∼1.1 relative to the midlines (Figure 3D) (Wong et al., 2017). For a spherocylinder, filament enrichment arises because the cylindrical bulk retains
filaments: oriented motion is preserved in the bulk, while disordered motion at the poles eventually becomes ordered. In contrast, filament enrichment arises in a curved cell because filaments become
uniformly distributed along circumferential hoops. The smaller arclength along the inner edge then results in a greater density of filaments.
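The inner-edge enrichment can be estimated directly from this argument: if filaments are uniform along circumferential hoops, their surface concentration scales inversely with the toroidal area element, giving an inner-edge-to-midline ratio of $R/(R-r)$. The dimensions below are assumed for illustration, not taken from Wong et al. (2017):

```python
# For a torus-shaped cell (tube radius r, centerline radius R), the membrane
# area element is proportional to (R + r*cos(phi)), with phi = pi at the inner
# edge and phi = pi/2 at the midline.  If filaments are uniform along each
# circumferential hoop, their concentration scales as 1/(R + r*cos(phi)), so
# the inner-edge-to-midline enrichment is R/(R - r).
r = 0.5   # um, tube (cell) radius -- assumed
R = 5.0   # um, centerline radius of the donut-shaped microchamber -- assumed
enrichment = R / (R - r)
print(round(enrichment, 2))  # 1.11, of the order of the ~1.1 factor measured
```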
In our previous study, we found that MreB rotation and localization at small protrusions in B. subtilis protoplasts preceded rod shape generation from these protrusions (Hussain et al., 2018). To
model the geometry observed in these experiments, we consider a cylindrical body with a protruding bulge in which filament trajectories become parallel to the cylinder long axis. Simulations of
Equation (3) on this geometry reveal that the filament concentration is larger in the bulge and that the predicted enrichment is quantitatively consistent with the MreB enhancement observed in bulged
cells, without any fitting parameters (Figure 3E and Figure 3—figure supplement 1). Similar to the case of a spherocylinder, localization arises due to the bulge attracting filaments. The dynamics of
Equation (3) therefore results in localization which contributes to de novo generation of rod shape.
Finally, previous work has examined MreB localization in E. coli cells (1) with submicron-scale shape fluctuations or (2) confined in sinusoidal chambers (Ursell et al., 2014; Shi et al., 2017;
Bratton et al., 2018). The empirically observed magnitudes of MreB enrichment at regions of negative Gaussian curvature or small mean curvature in these studies are consistent with our modeling. To
model the cell shapes observed in these experiments, we consider filament translocation on a geometry with both negative and positive Gaussian curvatures and undulations of smaller wavelengths than
the surface size (Figure 3F and Figure 3—figure supplement 2). As discussed in Appendix 1, the Gaussian and mean curvatures in this geometry are positively correlated and consistent with experimental
observations (Ursell et al., 2014). For this geometry, filament translocation results in increased values of concentration ($CF$) at regions of negative Gaussian curvature or small mean curvature (
Figure 3F). This effect arises because the principal curvatures away from these regions reorient filaments axially, instead of circumferentially, so that regions of negative Gaussian curvature or
small mean curvature attract filaments. Furthermore, the magnitude of this enhancement is consistent with the amount of MreB enrichment observed (Figure 3G and Figure 3—figure supplement 3),
demonstrating that translocation dynamics alone can negatively correlate filament concentration with Gaussian or mean curvature in cells with similar short wavelength undulations.
Dependence of localization on processivity and Gaussian curvature
As we anticipate our model to be applicable to general filament systems, we now explore the response of the filament concentration ($CF$) to (1) different parameter values and (2) other geometries.
We fix the filament step size ($L$) and take the translocation noise (σ) and the deactivation rate (λ) to be constants that are varied over a broad range. We show in Appendix 1 that, for any
value of processivity and zero translocation noise, $CF$ is uniform over the surface of an ellipsoid, as is generally the case for any surface when the processivity is small (or, equivalently, λ is
large). In contrast, in the case of small λ corresponding to large processivity—a limiting case that is relevant to MreB—and over a range of σ, $CF$ is larger at the inner edge of a torus, at the
inner edge of a helix, and at the tips of an ellipsoid (Figure 4A and Figure 4—figure supplements 1 and 2). As discussed above, localization occurs geometrically in these cases due to the filament
number ($NF$) becoming uniform over the surface and spatial variations in the surface area element. The magnitude of the localization can be quantitatively predicted by geometric parameters alone
(Appendix 1). The mechanism underlying localization is different for a spherocylinder or a bulged cylinder, for which surface regions attract filaments. Nevertheless, a nonzero processivity is
required for localization even in geometries which attract filaments (Figure 4B and Figure 4—figure supplements 1 and 2).
Since filament enrichment depends on both processivity and geometry, we wondered if the localization of processive filaments always correlates with the Gaussian or mean curvatures, regardless of
overall geometry. Although Figure 3G demonstrates that filament enrichment correlates with negative Gaussian curvature or small mean curvature in a specific, undulating geometry, this correlation is
reversed in bulged cylinders (Figure 3E). Furthermore, Figure 4C illustrates a surface of zero Gaussian curvature exhibiting regions which attract filaments, as filaments change from moving
circumferentially to moving axially in such regions (see Figure 4—figure supplement 3 for additional details). Examining Equation (3) on different surfaces therefore shows that $CF$ need not depend
on Gaussian curvature at all, and the dynamics modeled in this work cannot act as a generic mechanism for sensing Gaussian curvature.
Finally, while large filament bundles or twist have not been observed in MreB filaments reconstituted in vitro (Salje et al., 2011; van den Ent et al., 2014; Hussain et al., 2018), it is possible
that general filament systems could exhibit these properties (Wang and Wingreen, 2013; Quint et al., 2016). The binding and activation of twisted filaments may also depend on membrane Gaussian
curvature, as previously demonstrated (Quint et al., 2016). We systematically explore the effects of varying filament bending rigidity, filament twist, and Gaussian curvature-dependent activation in
Appendix 1, where we show that our model predictions remain largely robust across a broad range of these parameters (Figure 4—figure supplement 4). Thus, we expect filament dynamics to contribute to
localization in different filament systems, regardless of the details of filament rigidity, twist, and other parameters of our model.
Discussion
An outstanding problem in bacterial physiology has been to understand how short and disconnected filaments distribute themselves within cells to conduct different cellular functions (Eun et al., 2015
). In this work, we have examined an aspect of this problem by modeling the direct binding of protein filaments to membranes and the curvature-based translocation of an ensemble of such filaments.
Our results provide a theoretical framework for prior work examining MreB dynamics and localization (Hussain et al., 2018; Salje et al., 2011; Wong et al., 2017; Ursell et al., 2014; Shi et al., 2017
; Bratton et al., 2018; Renner et al., 2013). Furthermore, our results are consistent with the cellular localization observed in all these works and demonstrate that filament motion alone can
correlate enrichment with Gaussian curvature in specific geometries. Our work may be extended by modeling an evolving membrane surface, as expected for MreB-directed growth, and it would be
intriguing to explore whether and how principal curvature-based translocation contributes to determining cell width.
The main contribution of this work is to show that the biological results of MreB localization, as observed in many different experiments involving a range of cell shapes (Hussain et al., 2018; Wong
et al., 2017; Ursell et al., 2014; Shi et al., 2017; Bratton et al., 2018), can arise from processivity and principal curvature-dependent motion alone. Our study therefore helps to unravel how rod
shape formation may be achieved through subcellular-scale mechanisms (Amir and van Teeffelen, 2014; Shi et al., 2018; Surovtsev and Jacobs-Wagner, 2018). More broadly, our work shows that the
localization of translocating protein filaments can vary significantly depending on membrane geometry. This paves the way for exploring similar behavior in other contexts, such as bacterial
cytokinesis and eukaryotic membrane trafficking and transport. For example, in bacterial cytokinesis, filaments of the tubulin homolog FtsZ assemble at, and treadmill around, the septum, a process
which directs the insertion of new PG and constricts the cell (Bisson-Filho et al., 2017; Yang et al., 2017). Like MreB, FtsZ filaments are curved and could orient along the largest principal
direction on membranes through bending alone (Osawa et al., 2009; Erickson et al., 2010). Treadmilling along such directions would then allow filaments to drive PG synthesis circumferentially at the septum.
Aside from MreB and FtsZ, septins, BAR-domain-containing proteins, dynamins, and endoproteins are known to exhibit similar, curvature-dependent membrane binding behaviors important for membrane
trafficking, growth, and movement in both prokaryotes and eukaryotes (Baumgart et al., 2011; Zimmerberg and Kozlov, 2006; McMahon and Gallop, 2005; Peter et al., 2004; Low and Löwe, 2006; Raiborg and
Stenmark, 2009; Teo et al., 2006; Kostelansky et al., 2007). Like MreB filaments, many such proteins sense membrane curvature through mechanical deformations of either the membrane or the protein
itself. Unlike MreB or FtsZ, these proteins do not translocate; rather, they often induce membrane curvature to facilitate downstream processes. One example is BAR-domain-containing proteins, which
scaffold higher-order assemblies of dynamin that actively constrict for vesicle scission (McMahon and Gallop, 2005). It would be interesting to apply the methods introduced here to this and other
biological systems where molecules are known to bind to membranes or sense membrane curvature. These systems are widespread and involved in pathogenesis (Baumgart et al., 2011; Frost et al., 2009),
cell division (Renner et al., 2013; Ramamurthi and Losick, 2009; Ramamurthi et al., 2009; Frost et al., 2009), intracellular trafficking (Zimmerberg and Kozlov, 2006; McMahon and Gallop, 2005;
Raiborg and Stenmark, 2009; Ford et al., 2002; Frost et al., 2009; Römer et al., 2007), and cell migration (Frost et al., 2009; Zhao et al., 2013). The mathematical model introduced in this work,
which requires minimal assumptions as to how filaments bind to and translocate on membranes, should be widely applicable to these and other broader contexts.
1.1. Model of a protein filament binding to a membrane
We consider the protein as a filament with monomeric subunits that bind to a membrane in an energetically favorable manner, such as burial of hydrophobic residues (Hussain et al., 2018). When a
filament binds to a membrane, an energetic cost $Edef(ℓb)$ is associated with deformations that deviate from the position of mechanical equilibrium, while the free energy may be lowered by an amount
$Eint(ℓb)$ due to interaction (Figure 2—figure supplement 1a). Both the deformation and interaction energies are expressed as functions of the bound filament length, $ℓb$, which is less than or
equal to the total filament length, $Lf$. We wish to minimize the free energy due to filament binding, $ΔE=Edef-Eint$. If $ΔE(ℓb)$ is negative, then it is energetically favorable for the filament
to bind to the membrane along a length $ℓb$. We estimate $Eint$ as
(S1) ${E}_{\mathrm{int}}\left({\mathrm{\ell }}_{b}\right)={\epsilon }_{\mathrm{int}}{\mathrm{\ell }}_{b},{\epsilon }_{\mathrm{int}}\equiv {N}_{\mathrm{int}}{\epsilon }_{0}/{L}_{f},$
where $Nint$ denotes the total number of membrane binding sites of the filament and $ε0$ denotes an independent and additive single binding site energy, which is given for MreB along with other
parameter values in Supplementary file 1.
We assume that the binding sites are arranged linearly along the filament, and in particular, that the filament is not twisted. In this case, it suffices to account only for filament bending, and we
may decompose the deformation energy $Edef$ into the bending energy of the filament, $Ebend$, and the deformation energy of the membrane, $Emem:Edef=Ebend+Emem$. With notation as in the main text, we
model the filament as a curved, cylindrical elastic rod with a circular cross-section of radius $rf$ and curvature $1/Rs$, so that the elastic energy density per unit length of bending the filament
from a curvature of $1/Rs$ to a curvature of $1/R$ is
(S2) $\epsilon_{\mathrm{bend}}=\frac{\pi Yr_{f}^{4}}{8}\left(\frac{1}{R}-\frac{1}{R_{s}}\right)^{2}=\frac{B}{2}\left(\frac{1}{R}-\frac{1}{R_{s}}\right)^{2},$
where $Y$ is the elastic modulus of the filament and $B=πYrf4/4$ is its flexural rigidity (Landau and Lifshitz, 1970). The resulting filament bending energy is $Ebend=εbendℓb$. For simplicity, we
have assumed the filament to be bent uniformly, but the case of a curvature which varies with position along the filament length can be considered similarly.
As stated in the main text, we assume an isotropic, fluid bilayer membrane, where there is no in-plane shear modulus and the only in-plane deformations are compressions and expansions. The membrane
free energy assumes the form of Equation (1) in the main text. The mechanical energy required to bend a membrane from a surface $S0$, with a mean curvature $H0$, to a surface $S$ is then the
difference of the corresponding free energies:
(S3) ${E}_{\mathrm{mem}}=\underset{S}{\mathrm{min}}\left[2{k}_{b}{\int }_{S}\left({H}^{2}-{H}_{0}^{2}\right)𝑑A-p\int 𝑑V\right],$
where the surface integrals of the Gaussian curvature are topological invariants by the Gauss-Bonnet theorem and therefore cancel in the difference, and the volume integral is understood to be a
difference of the volumes in the deformed and undeformed states.
Minimizing $ΔE$ requires the minimization of $Emem$ given some value $1/R$ of the deformed filament curvature. To minimize $Emem$ in Equation (S3), we assume that the surface $S$ can be
parameterized in the Monge gauge $h=h(x,y)$, where $(x,y)∈ℝ2$, and furthermore that $|∇h|≪1$: this means that the membrane surface is not excessively curved or kinked. We assume the same for the
undeformed surface $S0$, which is parameterized by a function $h0$ in the Monge gauge. In the case of binding to a cylindrical membrane, we may, for instance, take the undeformed surface to
correspond to a cylinder with radius $Rcell$:
(S4) ${h}_{0}\left(x,y\right)={R}_{\mathrm{cell}}-\sqrt{{R}_{\mathrm{cell}}^{2}-{x}^{2}},|x|<{R}_{\mathrm{cell}}.$
In the Monge gauge, the mean curvature can be expanded as $H=\frac{1}{2}\nabla^{2}h+O[(\nabla h)^{2}]$, where the big-$O$ notation signifies $|H-\frac{1}{2}\nabla^{2}h|\leq M(\nabla h)^{2}$ when $0<(\nabla h)^{2}<\delta$ for some positive numbers $\delta$ and $M$.
The membrane bending energy in Equation (S3) can then be rewritten with the Laplacian, $Δ$, as
(S5) $E_{\mathrm{mem}}=\min_{h}\mathcal{F}[h],\quad \mathcal{F}[h]=\frac{k_{b}}{2}\int_{\Omega}\left[(\Delta h)^{2}-(\Delta h_{0})^{2}\right]dx\,dy-p\int_{\Omega}(h-h_{0})\,dx\,dy,$
for some domain $Ω⊂ℝ2$ of $h$ and $h0$ not containing the domain $U$ of the filament surface (Figure 2—figure supplement 1a). Setting the first variation of $ℱ[h]$ to zero, we find that the
equilibrium membrane shape is given by the solution of the shape equation
(S6) ${\mathrm{\Delta }}^{2}h=\frac{p}{{k}_{b}},$
where $Δ2$ is the biharmonic operator. Equation (S6) is subject to the Dirichlet boundary conditions
(S7) $\begin{cases}h(x,y)=\varphi(x,y) & (x,y)\in\partial\Omega\\ \Delta h(x,y)=\psi(x,y) & (x,y)\in\partial\Omega.\end{cases}$
Here $ϕ$ and $ψ$ are indicator functions defined by their values on the boundary of $Ω$, $∂Ω$. In the case of a cylindrical membrane, for instance,
(S8) $\begin{cases}\varphi(x,y)=h_{0}(x,y) & (x,y)\in\partial\Omega-\partial U\\ \varphi(x,y)=p_{0}(x,y) & (x,y)\in\partial U,\end{cases}\qquad \begin{cases}\psi(x,y)=1/R_{\mathrm{cell}} & (x,y)\in\partial\Omega-\partial U\\ \psi(x,y)=2C_{0} & (x,y)\in\partial U,\end{cases}$
where, as above, $h0$ parameterizes the undeformed surface $S0$, $p0(y)$ is a quadratic function describing the values of the filament height at the curve $∂U$ parameterizing the binding region,
and $C0$ is the mean curvature of the filament along $∂U$. Thus, the first condition in Equation (S7) comes from imposing continuity of membrane height with respect to the filament surface, while
the second condition comes from imposing continuity of mean curvature. In live cells, we treat the periplasm as a rigid, undeformable body (Hussain et al., 2018), so that, for instance, $p0(y)≥h0
(x,y)$ for all $(x,y)∈Ω$ when $h0$ assumes the form of Equation (S4) above. With the boundary conditions of Equation (S7), Equation (S6) can be conveniently decoupled as two Poisson equations, each
with Dirichlet boundary conditions:
(S9) $\begin{cases}\Delta h(x,y)=f(x,y) & (x,y)\in\Omega\\ h(x,y)=\varphi(x,y) & (x,y)\in\partial\Omega,\end{cases}\qquad \begin{cases}\Delta f(x,y)=p/k_{b} & (x,y)\in\Omega\\ f(x,y)=\psi(x,y) & (x,y)\in\partial\Omega.\end{cases}$
Since any solution to the Poisson equation with Dirichlet boundary conditions is unique, the decomposition above yields a unique solution for $h$.
As the foregoing considerations assume a fixed $Ω$ in determining $h$, the size of $Ω$ is an additional variable that must be considered. Since shape space is infinite-dimensional, determining $h$ for
an arbitrarily large $Ω$ does not necessarily imply that the global minimum of $ΔE$ is achieved; neither does it necessarily determine the appropriate decay length of the indentation, since $h(x,y)
=h0(x,y)$ is generally not a solution to Equation (S9). It is possible that $ΔE$ may be minimized at a finite $Ω$. Due to the boundary conditions (Equation (S7) and (S8)), solutions of Equation
(S9) over finite $Ω$ are continuous, with continuous mean curvatures, and could in fact be physically plausible. This subtlety can be addressed by choosing $Ω$ so that the numerically computed value
of $ΔE$ is minimal among differently sized $Ω$, as discussed below.
1.2. Finite element solutions of the shape equation
Given values of $kb,p,Rcell,C0$, and the filament height along $∂U$, we numerically solved Equation (S6) by individually solving the decoupled equations, Equation (S9), with a two-dimensional finite
element Poisson equation solver (Figure 2—figure supplement 1b–c) (Burkardt, 2011). The value of $ΔE$ was then calculated numerically from the height function, $h$, by triangulating $h$ and
extracting the mean curvature and enclosed volume of the resulting mesh using pre-existing MATLAB (Mathworks, Natick, MA) software (Mecklai, 2004; Kroon, 2014; Suresh, 2010). To find the
energy-minimizing filament-membrane conformation, $ΔE$ was numerically computed while varying the deformed filament curvature, $1/R$, the size of $Ω$, and the size of, and the filament height in,
$U$ (the size of $U$ corresponds to the bound length of the filament; consistent with the discussion below, we find that $ℓb=Lf$ in all cases of interest). As the simulation details above assume a
perpendicular binding orientation relative to the long axis of a cylinder, for simplicity we model binding at deviatory angles by perpendicular binding to a different cell radius, $R=R_{\mathrm{cell}}/\cos\theta$ (equivalently, $\theta=\cos^{-1}(R_{\mathrm{cell}}/R)$). These implementation details were used to generate Figure 2—figure supplement 1b–c and Figure 4—figure supplement 4.
1.3. Preferred orientation of filament binding and binding phase diagram
In this section, following Hussain et al. (2018), we provide details of analytic calculations complementing the numerical calculations discussed above. For a cylindrical membrane whose radius is
larger than the radius of curvature of a filament, it is energetically favorable for the filament to bind at an angle of $θ=90∘$ relative to the long axis of the cell, as this orientation requires
minimal bending of both the protein and the membrane. For deviatory angles, $|θ−90∘|>0∘$, an effective correction to the cell radius $Rcell$ is a multiplicative factor of $1/cosθ$, which makes
binding less energetically favorable. The energetic penalty for MreB filaments binding at deviatory angles is approximately $40kT$ over a broad range of membrane pressures, depending on the
membrane radius (below and Figure 2B of the main text). In general, higher membrane pressures make it more energetically favorable for the protein filament to bend, which minimizes the amount of
volume displaced by the protein-membrane interaction (see below) (Hussain et al., 2018), while smaller membrane pressures make it more energetically favorable for the membrane to bend. Since a
filament can always bend to conform to the membrane curvature, we see that large pressure differences across the membrane may enhance the energetic preference of an MreB filament for the
perpendicular orientation.
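The orientation preference can be checked with a back-of-the-envelope script that evaluates the filament bending term alone, using Euler's relation for the normal curvature sampled at a deviation from the circumferential direction. All parameter values below are assumed stand-ins (the paper's values are in its Supplementary file 1, not reproduced here), and the membrane contribution to the ∼40 kT penalty is omitted.

```python
import numpy as np

# Filament bending energy alone, for binding at a deviation `dev` from the
# circumferential direction (dev = |theta - 90 deg| in the text's notation).
# By Euler's theorem, the normal curvature sampled on a cylinder with
# principal curvatures (1/R_cell, 0) is cos(dev)^2 / R_cell.  All values
# below are assumed, not taken from Supplementary file 1.
Y = 2e9          # filament elastic modulus (Pa), assumed
r_f = 2e-9       # filament cross-section radius (m), assumed
B = np.pi * Y * r_f**4 / 4     # flexural rigidity, B = pi Y r_f^4 / 4
R_s = 200e-9     # intrinsic radius of curvature of the filament (m), assumed
R_cell = 500e-9  # cell radius (m), assumed
L_f = 150e-9     # filament length (m), assumed
kT = 4.1e-21     # thermal energy (J)

def E_bend(dev):
    kappa = np.cos(dev)**2 / R_cell
    return 0.5 * B * (kappa - 1.0 / R_s)**2 * L_f

print(E_bend(0.0) / kT)        # circumferential binding
print(E_bend(np.pi / 2) / kT)  # axial binding: larger, since R_s < R_cell
```

Whenever the filament is more curved than the cell (R_s < R_cell), the bending cost grows monotonically with the deviation, consistent with the perpendicular preference described above.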
We may summarize our results over a wide range of parameter values with an approximate phase diagram, assuming a cylindrical membrane (Figure 2C in the main text). We use the membrane pressure $p$
and the filament Young's modulus $Y$, which sets the bending energy $Ebend$ of the filament, as order parameters. By considering only the volume and surface height displaced directly underneath
the binding region of the membrane, the deformation energy of the binding region, $U$, is (Hussain et al., 2018)
(S10) $E_{\mathrm{def},U}\approx \min_{R}\left[\frac{B}{2}\left(\frac{1}{R}-\frac{1}{R_{s}}\right)^{2}\ell_{b}+\frac{\pi bk_{b}\ell_{b}}{r_{f}}+\frac{pr_{f}\ell_{b}^{3}}{12}\left(\frac{1}{R}-\frac{1}{R_{\mathrm{cell}}}\right)\right],$
where $b$ is the fraction of the filament cross-section needed to adhere to the membrane. The critical pressure at which the deformation is dominated by filament bending can be estimated as the value
that sets $Edef,U$ to be minimal at $R=Rcell$:
(S11) ${p}^{*}\approx \frac{12B}{{\mathrm{\ell }}_{b}^{2}{r}_{f}}\left(\frac{1}{{R}_{s}}-\frac{1}{{R}_{\mathrm{cell}}}\right),$
which, for the parameter values summarized in Supplementary file 1, estimates $p∗≈20 kPa$ (Hussain et al., 2018). This value of $p*$ is smaller than estimates of the turgor pressures of both B.
subtilis and E. coli (Supplementary file 1), suggesting that in vivo, MreB filaments always bend to adhere to the inner membrane. Equation (S10) is used under this assumption to generate the curves
in Figure 2B of the main text. Furthermore, the linear dependence on $ℓb$ of the first term of Equation (S10) implies that MreB filaments bind fully along their lengths. If $p<p∗$, as is the case for
vesicles, then both the membrane and the MreB filament can deform each other in a manner that minimizes the total energy, with the membrane shape determined by Equation (S6). For a large range of
membrane pressures $0≤p≲p∗$, we find that bound MreB filaments induce membrane curvature, and for vesicles where the pressure difference across the membrane is vanishingly small, the shape equation
also predicts that MreB filaments can grossly deform the membrane (Figure 2—figure supplement 1c), a prediction consistent with experimental observations (Salje et al., 2011; van den Ent et al., 2014
). For a filament with fixed dimensions, both $Ebend$ and the expression for $p*$ in Equation (S11) are proportional to $Y$. Hence, $Ebend∝p*$ as $Y$ increases, and this relation is shown as the
diagonal line in Figure 2C of the main text.
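The order of magnitude of the critical pressure can be reproduced by evaluating Equation (S11) directly. The parameter values below are assumptions standing in for Supplementary file 1, which is not reproduced here; they are chosen only to show that plausible MreB-scale numbers land in the tens of kPa.

```python
import numpy as np

# Order-of-magnitude check of Equation (S11):
#   p* = 12 B / (l_b^2 r_f) * (1/R_s - 1/R_cell).
# All parameter values are assumed stand-ins for Supplementary file 1.
Y = 2e9          # filament elastic modulus (Pa), assumed
r_f = 2e-9       # filament radius (m), assumed
B = np.pi * Y * r_f**4 / 4     # flexural rigidity (N m^2)
l_b = 150e-9     # bound filament length (m), assumed
R_s = 200e-9     # intrinsic filament radius of curvature (m), assumed
R_cell = 500e-9  # cell radius (m), assumed

p_star = 12 * B / (l_b**2 * r_f) * (1 / R_s - 1 / R_cell)
print(p_star)    # ~2e4 Pa (tens of kPa) for these assumed values
```

The result is well below typical bacterial turgor pressures, consistent with the regime in which the filament, rather than the membrane, bends.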
Similarly, considering only the membrane curvature induced by the binding region $U$, the filament does not bind if $Ebend+E>Eint$, where $E=πbkbLf/rf$. If this inequality is satisfied, then the
interaction energy $Eint$ may be too small to justify membrane binding, which requires a combination of polymer and membrane bending. Note that, while we assume $b=1/6$ in this work, our results do
not significantly change for different $b$, as shown in Figure 4 of (Hussain et al., 2018), and the regimes delineated in this section are summarized in Figure 2C of the main text.
Finally, we note that, for the parameter values summarized in Supplementary file 1, the prediction that it is energetically favorable for MreB filaments to align along the circumferential direction
of a rod-like cell is robust in the case where the intrinsic curvature varies with position along the filament. In particular, a calculation based on Equation (S2) shows that this conclusion follows
given that the filaments are, on average, more curved than the membrane. To see this, let $κs(ℓ)$ denote the intrinsic filament curvature as a function of position along the filament length, $ℓ$,
and $κ$ denote the deformed curvature. In live cells, $κ$ does not vary as a function of $ℓ$ because it is most energetically favorable for the filament to bend completely to match the ambient
membrane curvature, as we have shown above. The total bending energy of the filament is then
(S12) ${E}_{\mathrm{bend}}=\frac{B}{2}{\int }_{0}^{{L}_{f}}{\left(\kappa -{\kappa }_{s}\left(\mathrm{\ell }\right)\right)}^{2}𝑑\mathrm{\ell }.$
When binding to an angle that deviates from the circumferential direction in a cylinder, the deformed curvature will be smaller: let $κ=κ0-κ′$, where $κ0$ is the curvature along the circumferential
direction of a cylinder and $κ′≥0$ is a constant correction to $κ0$ depending on the binding angle. Then, the difference in bending energies between binding in the direction of $κ$ as opposed the
circumferential $κ0$ direction is
(S13) $\frac{B}{2}\int_{0}^{L_{f}}(\kappa^{\prime})^{2}\,d\ell+2\kappa^{\prime}\frac{B}{2}\int_{0}^{L_{f}}\left(\kappa_{s}(\ell)-\kappa_{0}\right)d\ell,$
which is larger than zero provided the filament is, on average, more curved than the cell (that is, the second term above is non-negative). Hence, because the binding orientation is robust, our model
predictions will remain the same. Nevertheless, we note that cryo-EM experiments (Hussain et al., 2018; Salje et al., 2011; van den Ent et al., 2014) support the assumption of a uniformly bent MreB
filament, and we have therefore focused on this case in the main text.
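The sign argument around Equation (S13) is easy to verify numerically: with an illustrative position-dependent intrinsic curvature whose mean exceeds the cell's circumferential curvature, any deviation κ′ > 0 raises the bending energy. The curvature profile and units below are arbitrary.

```python
import numpy as np

# Numerical check of Equation (S13): binding with deformed curvature
# kappa_0 - kp costs more bending energy than binding circumferentially
# (curvature kappa_0) whenever the filament is on average more curved than
# the cell.  Profile and parameter values are arbitrary illustrations.
B, L_f = 1.0, 1.0
ell = np.linspace(0.0, L_f, 1001)
kappa_s = 5.0 + np.sin(7.0 * ell)   # intrinsic curvature, mean > kappa_0
kappa_0 = 2.0                       # circumferential curvature of the cell

def E_bend(kappa):
    # uniform-grid quadrature of (B/2) * integral of (kappa - kappa_s)^2
    return 0.5 * B * np.mean((kappa - kappa_s)**2) * L_f

kp = 0.5                            # deviation kappa' >= 0
dE = E_bend(kappa_0 - kp) - E_bend(kappa_0)
print(dE > 0)                       # True: circumferential binding is favored
```

The energy difference matches the closed form of Equation (S13) term by term, and it is positive precisely because the second (cross) term is non-negative here.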
In the foregoing argument, we have assumed that $κ$ does not vary as a function of $ℓ$. The case in which $κ$ varies significantly as a function of $ℓ$, which is relevant to the geometry considered
in Figure 3F of the main text, is more delicate. Note, however, that in this geometry $κ(ℓ)$ can be explicitly calculated to show that MreB still orients in the largest principal direction. We
consider, in particular,
(S14) $E_{1}(v)=\frac{B}{2}\int_{v-d}^{v+d}\left(\kappa_{1}(\ell)-\kappa_{s}\right)^{2}d\ell \quad\text{and}\quad E_{2}(v)=\frac{B}{2}\int_{v-d}^{v+d}\left(\kappa_{2}(\ell)-\kappa_{s}\right)^{2}d\ell,$
where, at an axial coordinate $v$, $E1$ is the bending energy of a (constantly curved) filament aligning in the axial direction (with principal curvature $κ1$) and $E2$ is the bending energy of a
filament aligning in the circumferential direction (with principal curvature $κ2$). Here the integration is over the axial coordinates corresponding to filament length, $[v-d,v+d]$. Figure 3—figure
supplement 2c–d shows that, in this case, binding along the greatest principal direction generally still incurs the least bending energy.
2. Curvature-based translocation of filaments
Consider a surface parameterized by $𝐫=r(u,v)⊂ℝ3$, with $(x1,x2)=(u,v)∈ℝ2$, which, for simplicity, we assume to be smooth almost everywhere so that the following quantities are well defined. The
considerations below can be readily extended to the case of a piecewise smooth surface, as discussed in the following section (§2.1). We model a curved filament as a point $𝐩$ on this surface which
translocates in a direction $𝐝∈ℝ3$. At $𝐩$, the principal direction, $𝐰$, represented in the basis of the tangent space satisfies
(S15) $S\mathbf{𝐰}=c\mathbf{𝐰},$
where $S$ is the shape operator at $𝐩$ and $c$ is one of the principal curvatures of $𝐫$, hereafter taken to be the largest. Throughout this work, we assume the sign convention that the curvature is
positive when any normal vector at $𝐩$ points towards the interior of $𝐫$, so that, for instance, the largest principal curvature is always positive for a cylinder (§2.2). When the filament
translocates in the direction of the largest principal curvature, $𝐝$ satisfies $𝐝=𝐰⋅(𝐫u,𝐫v)$, where $𝐫u=∂u𝐫$ and $𝐫v=∂v𝐫$.
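Equation (S15) can be made concrete with a short routine that assembles the shape operator S = I⁻¹II of a parametric surface from finite differences and extracts its dominant eigenvector. The torus and evaluation point below are illustrative, and the inward normal is chosen so that curvatures are positive toward the interior, matching the sign convention above.

```python
import numpy as np

# Sketch of Equation (S15): the shape operator S = I^{-1} II (Weingarten map)
# in the coordinate basis, assembled by central differences, with the largest
# principal direction as its dominant eigenvector.  Torus radii are
# illustrative.
R, r = 3.0, 1.0

def surf(u, v):
    return np.array([(R + r * np.cos(v)) * np.cos(u),
                     (R + r * np.cos(v)) * np.sin(u),
                     r * np.sin(v)])

def principal(u, v, h=1e-4):
    r_u = (surf(u + h, v) - surf(u - h, v)) / (2 * h)
    r_v = (surf(u, v + h) - surf(u, v - h)) / (2 * h)
    n = np.cross(r_v, r_u)         # inward normal: curvature positive toward
    n /= np.linalg.norm(n)         # the interior, per the sign convention
    r_uu = (surf(u + h, v) - 2 * surf(u, v) + surf(u - h, v)) / h**2
    r_vv = (surf(u, v + h) - 2 * surf(u, v) + surf(u, v - h)) / h**2
    r_uv = (surf(u + h, v + h) - surf(u + h, v - h)
            - surf(u - h, v + h) + surf(u - h, v - h)) / (4 * h**2)
    I = np.array([[r_u @ r_u, r_u @ r_v], [r_u @ r_v, r_v @ r_v]])
    II = np.array([[r_uu @ n, r_uv @ n], [r_uv @ n, r_vv @ n]])
    S = np.linalg.solve(I, II)     # shape operator in the (r_u, r_v) basis
    c, w = np.linalg.eig(S)
    i = np.argmax(c.real)          # largest principal curvature
    return c[i].real, w[:, i]

c_max, w_max = principal(0.3, 0.2)
print(c_max)   # largest principal curvature of the torus tube, approx. 1/r
```

On the torus the dominant eigenvector lies along the tube (v) direction, the circumferential hoop along which a filament of this kind would translocate.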
For convenience, we recall the following statements from the main text. We set $\eta=\cos^{-1}\left[\mathbf{d}\cdot\mathbf{r}_{\theta}/\left(||\mathbf{d}||\cdot||\mathbf{r}_{\theta}||\right)\right]$, where $\eta$ is an angular deviation from the largest principal direction on the surface
introduced by possible sources of stochasticity, the modified direction corresponds to an angle $\theta$ relative to the $u$-axis in parametric coordinates, $\mathbf{r}_{\theta}\in\mathbb{R}^{3}$ is the derivative of $\mathbf{r}$ in the
direction of $θ$, and distances are defined by the surface metric. Note that, when the largest principal direction is well defined, it is independent of the parameterization up to a sign (Lee, 2009
). After the filament orients, it may translocate in the direction of $𝐝$ for a certain distance, dependent on the filament speed, before reorienting its direction of motion. As $𝐝$ is a vector in
$ℝ3$, arbitrarily moving in the direction of $𝐝$ may, however, move the filament out of $𝐫$. To define the translocation consistently, we define it in $(u,v)$-space by taking the translocation angle
with respect to the $u$-axis to also be $θ$. Translocating along an angle $θ$ with respect to the $u$-axis in $(u,v)$-space ensures that the filament remains on the surface, and the direction of
translocation corresponds to that on the smooth patch defined locally on $𝐫$.
As a discrete-time flow in parametric space, and with suitable units of time so that the filament may reorient at every timestep, the 2D equation of filament motion is
(S16) ${X}_{n+1}={X}_{n}+{\chi }_{n}{\ell }_{n}\left(\mathrm{cos}{\theta }_{n},\mathrm{sin}{\theta }_{n}\right),$
where, in $(u,v)$-space, $Xn$, $ℓn$, and $θn$ are the position, step size, and translocation angle, respectively, of the filament at a timestep $n$. Here $θn$ is the value of $θ$ computed at the
surface point corresponding to $Xn$ and assuming $η∼𝒩(0,σ2)$—that is, the angular noise is normally-distributed, with mean zero and variance $σ2$. The strength of the noise may depend on factors such
as the energetic difference of binding along the two principal directions, and we examine cases where $σ$ depends on the difference, $Δc$, between principal curvatures below. $χn$ is a random sign,
which accounts for the possibility of both left-handed and right-handed translocation, and may not substantially vary in $n$ if filament motion is processive (namely, the filament does not
backtrack). We assume that $ℓn$ satisfies the integral equation
(S17) $L=\int_{0}^{\ell_{n}}\sqrt{g_{11}(X(\tau))\cos^{2}\theta_{n}+2g_{12}(X(\tau))\cos\theta_{n}\sin\theta_{n}+g_{22}(X(\tau))\sin^{2}\theta_{n}}\,d\tau,$
where $X(τ)=Xn+τ(cosθn,sinθn)$ and $g$ denotes the metric tensor (with $u$ and $v$ corresponding to the indices $1$ and $2$, respectively), which relates it to a constant filament step size, $L$,
on the surface. When $g$ is slowly varying, that is $g(X(τ))≈g(Xn)$, as is expected in the limit of small $ℓn$, or in particular
(S18) $\frac{\ell_{n}\hat{Y}\cdot\nabla I(X_{n})}{4I(X_{n})}\ll 1,$
where $\hat{Y}=(\cos\theta_{n},\sin\theta_{n})$, $I$ is the integrand of Equation (S17), and we discard higher-order terms in $\nabla I$ and $\ell_{n}$, Equation (S17) simplifies to a linear equation for $\ell_{n}$:
(S19) $L=\ell_{n}\sqrt{g_{11}(X_{n})\cos^{2}\theta_{n}+2g_{12}(X_{n})\cos\theta_{n}\sin\theta_{n}+g_{22}(X_{n})\sin^{2}\theta_{n}}.$
In general, we wish to determine the distribution of $X$ in $(u,v)$-space, which would determine the distribution of the filament on the surface. Below, we introduce activation and deactivation
dynamics, as discussed in the main text, and examine the statistics of $X$ on several surfaces.
2.1. Numerical solutions in the finite and continuum cases
Before discussing the implications of Equation (S16), we provide implementation details for the simulations and numerical calculations discussed below and in the main text. We numerically implemented the dynamics of Equation (S16) with Langevin simulations and verified our results with coarse-grained continuum calculations corresponding to solutions of the associated Fokker-Planck equation. For the former, individual filaments were activated and deactivated as discussed in the main text, and their trajectories were simulated directly according to Equation (S16). The final positions of all filaments were pooled together to determine the corresponding filament concentrations. For the latter, we discretized $(u,v)$-space uniformly into $M_u\times M_v$ rectangular elements, computed the transition matrix corresponding to Equation (S16) and a given set of parameter values, and multiplied it with a vector corresponding to the filament numbers, $N_F$, to model the dynamics. Deactivation effects were modeled by multiplying the vector of filament numbers after each timestep by $e^{-\lambda}$, and filament concentrations were determined by dividing the vector of filament numbers by the corresponding
surface area elements. Specific implementation details, such as geometric parameters, are summarized for each figure in this work in §3.
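The continuum scheme just described can be sketched in a few lines (hypothetical names of our choosing; the transition matrix `T` would in practice be assembled from Equation (S16) and is taken here as a placeholder):

```python
import numpy as np

def evolve(n, T, dA, lam, k_src, steps):
    """One coarse-grained run: translocate with transition matrix T,
    deactivate by the factor e^{-lam} per timestep, add newly activated
    filaments k_src, and return concentrations C_F = N_F / dA."""
    for _ in range(steps):
        n = np.exp(-lam) * (T @ n) + k_src
    return n / dA

# Sanity check with an identity transition matrix and no activation:
# numbers decay as e^{-lam * steps}, and C_F scales inversely with dA.
n0 = np.array([2.0, 4.0])
dA = np.array([1.0, 2.0])
C = evolve(n0, np.eye(2), dA, lam=0.1, k_src=np.zeros(2), steps=5)
```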
For a piecewise parametric surface, we allowed filaments to translocate between two subsurfaces while conserving the total step size, $L$, translocated at every timestep. For the piecewise geometries considered in this work, the filament coordinates were matched between subsurfaces in the natural manner, by identifying coordinates along the shared boundary.
2.2. Cylinder
Consider a cylinder parameterized by $\mathbf{r}=(a\cos u,a\sin u,av)$, where $a>0$, $u\in[0,2\pi]$, $v\in[0,z]$, and $z$ is a large enough constant so that we do not consider filament translocation out of the cylinder. The nonvanishing components of the metric tensor are $g_{11}=g_{22}=a^2$. Hence, the step sizes in parameter space are identical at all timesteps $n$ and equal to $\ell_n=\ell=L/a$, and translocation occurs mainly along the $u$-axis. Assuming processive filament motion, we recover a Pearson-like random walk in a 2D plane with periodic boundary conditions in $u$, where the translocation angle satisfies $\theta_n\sim\mathcal{N}(0,\sigma^2)$ and $\chi_n=\pm 1$ for all $n$. Correlated Pearson random walks of a similar form have been studied by Kareiva and Shigesada in the context of insect movement (Kareiva and Shigesada, 1983). Denoting by $U_N$ and $V_N$ the displacements along the $u$ and $v$ coordinates, respectively, after $N$ steps, with $U_N=\sum_{i=1}^{N}\ell\cos\theta_i$ and $V_N=\sum_{i=1}^{N}\ell\sin\theta_i$, we find that

(S20) $\langle U_N\rangle=\ell N e^{-\sigma^2/2},\quad \langle V_N\rangle=0,\quad \mathrm{Var}\left(U_N\right)=\ell^2 N e^{-\sigma^2}\left(\cosh\left(\sigma^2\right)-1\right),\quad \mathrm{Var}\left(V_N\right)=\ell^2 N e^{-\sigma^2}\sinh\left(\sigma^2\right),$
in agreement with simulations of Equation (S16) for parameter values relevant to B. subtilis (§3.1 and Figure 3B of the main text).
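The moments in Equation (S20) can be verified directly by sampling the walk, since the translocation angles on the cylinder are independent draws from $\mathcal{N}(0,\sigma^2)$. A sketch with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
ell, sigma, N, n_walks = 0.4, 0.3, 50, 100_000

# U_N = sum_i ell cos(theta_i), V_N = sum_i ell sin(theta_i),
# with independent theta_i ~ N(0, sigma^2).
theta = rng.normal(0.0, sigma, size=(n_walks, N))
U = ell * np.cos(theta).sum(axis=1)
V = ell * np.sin(theta).sum(axis=1)

# Predictions of Eq. (S20).
mean_U = ell * N * np.exp(-sigma**2 / 2)
var_U = ell**2 * N * np.exp(-sigma**2) * (np.cosh(sigma**2) - 1)
var_V = ell**2 * N * np.exp(-sigma**2) * np.sinh(sigma**2)
```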
2.3. Torus and helix
Let $\mathbf{r}=((R+a\sin u)\cos(av/R),(R+a\sin u)\sin(av/R),a\cos u)$, where $u\in[0,2\pi]$, $v\in[0,R\Phi/a]$, $\Phi$ is the axial subtended angle, and $0<a<R$. Here the nonvanishing elements of the metric tensor are $g_{11}=a^2$ and $g_{22}=a^2(1+(a/R)\sin u)^2$, and the step sizes depend on $u$. As the principal curvatures in the circumferential $u$-direction, $c_u$, and in the axial $v$-direction, $c_v$, satisfy $c_u=1/a>c_v=(R+a\sin u)^{-1}\sin u$, translocation occurs mainly in the direction of $u$. Due to the dependence of the step sizes on $u$, $\langle U_N\rangle$, $\langle V_N\rangle$, $\mathrm{Var}(U_N)$, and $\mathrm{Var}(V_N)$ may differ from those of a cylinder and are analytically difficult to calculate. We focus instead on determining the filament concentration, $C_F$, which, unlike the case of a cylinder, will be non-uniform in $X$.
For intuition, we consider the case of no translocation noise. Consider the probability $P_N(X;Y)$ of observing the filament at any point $X$ in $(u,v)$-space at a timestep $N$ given that it is initially at $Y$, and assume $p_{\mathrm{init}}(Y)$ to be proportional to $dA=a^2(1+(a/R)\sin u)\,du\,dv$. Since $dA$ depends on $u$, the probability $P_N(X)$ of observing the filament at a timestep $N$, averaged over initial positions, is not uniform. Nevertheless, considering the dynamics of activating and deactivating filaments as above, we find that, in the limit of continuous time, the expected number of filaments at a coordinate $X=(u,v)$ and time $t$ is

(S21) $N_F\left(X,t\right)=\int_{0}^{t}\left[dA\left(u-\tau\nu,v\right)+dA\left(u+\tau\nu,v\right)\right]k^{\prime}e^{-\lambda\tau}\,d\tau=2a^2k^{\prime}\left(\frac{1-e^{-\lambda t}}{\lambda}+\frac{a\lambda\sin u-a\lambda e^{-\lambda t}\sin u\cos\left(t\nu\right)+a\nu e^{-\lambda t}\sin u\sin\left(t\nu\right)}{R\left(\lambda^2+\nu^2\right)}\right),$
where $k^{\prime}=kp_{\mathrm{init}}/dA$ and $\nu$ is a speed corresponding to $U_N$ in the continuum limit. As $\lambda\to\infty$, corresponding to the limit in which filaments are instantaneously deactivated, Equation (S21) predicts a number enhancement on the outer edge of the form $N_F(X,t)=(2a^2k^{\prime}/\lambda)(1+(a/R)\sin u)$, while as $\lambda\to 0$, corresponding to the limit in which filaments persist indefinitely, Equation (S21) predicts an approximately uniform distribution $2a^2k^{\prime}\left(t+\frac{a}{R\nu}\sin(t\nu)\sin u\right)\approx 2a^2k^{\prime}t$ in the limit of large $t$. While the concentration of filaments, $C_F$, is uniform over the surface in the former case, the latter case corresponds to the ‘washing-out’ of initial activation conditions and implies a larger value of $C_F$ on the inner edge. Namely, in the formal limit $\lambda\to 0$ followed by $t\to\infty$, with $\lambda t\to 0$,

(S22) $\frac{C_F\left(X,t\right)}{t}\to\frac{2k^{\prime}}{1+\left(a/R\right)\sin u},$

which is maximized at $u=-\pi/2$, corresponding to the inner edge of the torus. The filament concentration at the inner edge becomes larger than that at the outer edge by a factor of $(1+a/R)/(1-a/R)$, which depends only on the geometry of the torus. This effect may be interpreted as a ‘geometric focusing’ caused by both the filament number, $N_F$, becoming uniform in $(u,v)$-coordinates and variations of the area element $dA$ in $u$ or $v$. Even for a range of noises, including the limit of large noise ($\sigma\to\infty$), simulations show that this description is valid and agrees with Equation (S22) (Figure 4—figure supplement 1a).
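The focusing factor can be checked numerically: in the $\lambda\to 0$ limit, Equation (S22) gives $C_F\propto 1/(1+(a/R)\sin u)$, so the ratio of inner-edge to outer-edge concentration should equal $(1+a/R)/(1-a/R)$. A short sketch (with $a=1$ and $R=10$, the values used for the toroidal microchambers discussed below):

```python
import numpy as np

a, R = 1.0, 10.0
u = np.linspace(0.0, 2 * np.pi, 721)

# lambda -> 0 limit: N_F is uniform in (u, v), so C_F ~ 1/dA (Eq. (S22)).
C_F = 1.0 / (1.0 + (a / R) * np.sin(u))

ratio = C_F.max() / C_F.min()           # inner edge (sin u = -1) vs. outer (sin u = 1)
predicted = (1 + a / R) / (1 - a / R)   # geometry-only enhancement factor
```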
While the geometric focusing effect is clearly applicable to other geometries, particularly those involving bent or sinusoidal surfaces (Takeuchi et al., 2005; Renner et al., 2013), we examine below whether it applies to ellipsoids and other geometries considered in this work. We also note that this effect is different from filaments translocating along a direction of curvature until they are ‘attracted,’ as is the case for a spherocylinder (§2.5) or a surface with small wavelength undulations (Figure 3F of the main text and §2.8). The main difference is that the dynamics corresponding to geometric focusing are recurrent and hence, due to spatial variations in the area element $dA$, lead to $C_F$ depending on geometric features as in Equation (S22), while the dynamics corresponding to an attractor are not. For instance, in the absence of noise and in the limit of large processivity, $C_F$ is nonzero over a toroidal surface and the relative localization is determined by Equation (S22), but $C_F$ vanishes at the hemispherical caps of a spherocylinder, as discussed later (§2.5).
Note that characteristic values of MreB persistence, as summarized in Supplementary file 2, suggest that MreB filaments lie in the $\lambda\approx 0$ regime. The observation of enhanced MreB concentration at the inner edge of toroidal cells is quantitatively consistent with the geometric focusing effect discussed in this section, as explained in the main text and prior work (Wong et al., 2017).
Finally, we note that similar arguments as above describe the case of a helix, which can be parameterized by

(S23) $\mathbf{r}=\left(\left(R+a\sin u\right)\cos\left(av/R\right)+\frac{\varphi a\sin\left(av/R\right)\cos u}{\sqrt{R^2+\varphi^2}},\ \left(R+a\sin u\right)\sin\left(av/R\right)-\frac{\varphi a\cos\left(av/R\right)\cos u}{\sqrt{R^2+\varphi^2}},\ \frac{\varphi av}{R}+\frac{aR\cos u}{\sqrt{R^2+\varphi^2}}\right),$

where $u\in[0,2\pi]$, $v\in[0,R\Phi/a]$, $\Phi$ determines the extended length of the helix, $0<a<R$, and $\varphi$ is the helical pitch. Here the nonvanishing elements of the metric tensor are $g_{11}=a^2$, $g_{12}=g_{21}=\varphi a^3\left(R\sqrt{R^2+\varphi^2}\right)^{-1}$, and $g_{22}=a^2\left(2(R^2+\varphi^2)^2+a^2(R^2+2\varphi^2)+aR\left(4(R^2+\varphi^2)\sin u-aR\cos 2u\right)\right)\left(2R^2(R^2+\varphi^2)\right)^{-1}$, and the step sizes again depend on $u$. As the principal curvatures in the circumferential direction, $c_u$, and in the axial direction, $c_v$, satisfy $c_u=1/a>c_v=(R^2+\varphi^2+aR\sin u)^{-1}R\sin u$, translocation occurs mainly in the direction of $u$. As in the case of a torus, we expect filaments to become uniformly distributed across circumferential hoops in the limit of infinite processivity. The predicted value of $C_F$ in this case, $C_F\propto 1/dA=\left(a^2(R^2+\varphi^2+aR\sin u)\,du\,dv\right)^{-1}R\sqrt{R^2+\varphi^2}$, is quantitatively consistent with numerical simulations and larger at the inner edge (Figure 4—figure supplement 1b–c).
2.4. Ellipsoid
Consider the surface $\mathbf{r}=(a\sin u\cos v,a\sin u\sin v,b\cos u)$, where $a,b>0$, $u\in[0,\pi]$, and $v\in[0,2\pi]$. Here the nonvanishing components of the metric tensor are $g_{11}=(a^2+b^2+(a^2-b^2)\cos(2u))/2$ and $g_{22}=a^2\sin^2 u$, and $dA=\frac{1}{\sqrt{2}}a\sin u\sqrt{a^2+b^2+(a^2-b^2)\cos(2u)}\,du\,dv$. When $b>a$, the principal curvatures in the circumferential $v$-direction, $c_v$, and in the axial $u$-direction, $c_u$, satisfy

(S24) $c_v=\frac{\sqrt{2}b}{a\sqrt{a^2+b^2+\left(a^2-b^2\right)\cos\left(2u\right)}}\ge c_u=\frac{2\sqrt{2}ab}{\left(a^2+b^2+\left(a^2-b^2\right)\cos\left(2u\right)\right)^{3/2}},$

with equality only at the poles ($u=0,\pi$). Hence, in the case of no noise, filaments always translocate in the direction of $v$; when $b<a$, the opposite is true.
As calculations involving a finite noise are analytically complex, we first consider the case of no noise. Here $U_N=0$ and $V_N^u=LN/(a\sin u)$, where $U_N$ is the displacement along the $u$-coordinate after $N$ steps and $V_N^u$ is the displacement along the $v$-coordinate after $N$ steps assuming that the filament remains on the curve of constant $u$. For an ensemble of filaments, considering similar activation and deactivation dynamics as above gives

(S25) $N_F\left(X,t\right)=\int_{0}^{t}\left[dA\left(u,v-\tau\frac{\nu}{\sin u}\right)+dA\left(u,v+\tau\frac{\nu}{\sin u}\right)\right]k^{\prime}e^{-\lambda\tau}\,d\tau$

(S26) $N_F\left(X,t\right)=\frac{\sqrt{2}k^{\prime}\left(1-e^{-\lambda t}\right)}{\lambda}a\sin u\sqrt{a^2+b^2+\left(a^2-b^2\right)\cos\left(2u\right)}$

(S27) $N_F\left(X,t\right)=\frac{2k^{\prime}\left(1-e^{-\lambda t}\right)}{\lambda}\times\frac{dA}{du\,dv},$

where $k^{\prime}$ is the same as above and $\nu$ is a constant speed corresponding to $V_N=V_N^u\sin u$ in the continuum limit. Hence, for any value of $\lambda$, $C_F$ is a constant, in agreement with numerics (Figure 4—figure supplement 2).
We next consider the case of a finite noise. We first note that the area element $dA$ vanishes at the tips, and here the resulting value of $C_F$ would diverge if $N_F$ were nonzero. To compare filament concentrations over regions with differently sized area elements, it is convenient to define the average concentration on the (sub)surface $S$ as

(S28) $\langle C_F\rangle_S=\frac{\sum_{X\in S}N_F\left(X\right)}{\sum_{X\in S}dA\left(X\right)},$

which is different from a direct averaging of $C_F$ over area elements as considered for a torus (the latter has the form of a harmonic sum). Numerical calculations accounting for an ensemble of filaments with similar activation and deactivation dynamics as above show that, intriguingly, $\langle C_F\rangle_S$ is enhanced at the ellipsoidal poles ($S=\{X: u<\pi/4 \text{ or } u>3\pi/4\}$) for a range of noises (Figure 4—figure supplement 2). Different from the case of zero noise, here the increased values of $C_F$ at the poles arise because filaments may randomly translocate to the poles, where the area element is significantly smaller than away from the poles. Similar observations also hold in the limit of large noise, $\sigma\to\infty$, and the limiting value of the ratio of $\langle C_F\rangle_S$ at the tips versus the bulk is the ratio of the corresponding areas (Figure 4—figure supplement 2).
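The large-noise limit above can be made concrete: taking $N_F$ uniform per $(u,v)$ bin and applying Equation (S28), the tip-to-bulk ratio of $\langle C_F\rangle_S$ reduces to a ratio of coordinate-space bin counts to surface areas. A sketch with illustrative values $a=1$, $b=2$ of our choosing (the area element per unit $du\,dv$ is the one given above):

```python
import numpy as np

a, b = 1.0, 2.0  # prolate ellipsoid, b > a
u = np.linspace(1e-6, np.pi - 1e-6, 4001)
du = u[1] - u[0]

# dA/(du dv) = (1/sqrt(2)) a sin(u) sqrt(a^2 + b^2 + (a^2 - b^2) cos(2u))
dA = a * np.sin(u) * np.sqrt(a**2 + b**2 + (a**2 - b**2) * np.cos(2 * u)) / np.sqrt(2)

tips = (u < np.pi / 4) | (u > 3 * np.pi / 4)

# Large-noise limit: N_F is uniform per (u, v) bin, so Eq. (S28) gives
# <C_F>_S = (number of bins in S) / (area of S).
conc_tips = tips.sum() / dA[tips].sum()
conc_bulk = (~tips).sum() / dA[~tips].sum()
```

Here `conc_tips` exceeds `conc_bulk` because the poles carry far less area per coordinate bin, consistent with the enhancement described above.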
In the case of a finite noise, localization in a torus, a helix, and an ellipsoid arises because $N_F$ becomes spatially homogeneous while the area elements vary spatially. However, in the case of zero noise, we remark on a key difference between $C_F$ on a torus and on an ellipsoid. For a torus, a spatially heterogeneous filament concentration arises because the trajectories of individual filaments are closed orbits in the circumferential direction and the long persistence of filaments ‘washes out’ the initial distribution of filament positions. For an ellipsoid, filaments do not move in the axial direction, and the resultant filament concentration remains uniform on the surface regardless of persistence. That a uniform distribution is maintained in the absence of noise on an ellipsoid is also different from the cases of a spherocylinder (§2.5) and a surface with small wavelength undulations (Figure 3F of the main text and §2.8), as discussed later. In general, $C_F$ is uniform in the case of zero noise for any surface where the direction in which $dA$ varies does not coincide with the direction of filament translocation.
2.5. Spherocylinder
Consider the surface parameterized piecewise by $\mathbf{r}_1=(a\cos u,a\sin u,av)$, where $a>0$, $u\in[0,2\pi]$, $v\in[0,z]$ for some $z>0$; $\mathbf{r}_2=(a\sin v\cos u,a\sin v\sin u,a\cos v+az)$, where $u\in[0,2\pi]$ and $v\in[0,\pi/2]$; and $\mathbf{r}_3=\mathbf{r}_2-(0,0,az)$, where $u\in[0,2\pi]$ and $v\in[\pi/2,\pi]$. In the absence of noise, filaments translocate along the $u$-axis in the cylinder and at a random angle in the hemispheres. This could allow for translocation out of the caps and into the cylindrical body. Indeed, in the absence of noise, it is clear that any filament in the hemispherical caps will eventually translocate into and remain at the cylindrical rims.
For the spherocylindrical surface described above, general calculations in the case of a finite noise are analytically complex due to the irregular geometry. In the presence of noise, however, the depletion of filaments at the hemispherical caps can be supported numerically. For parameter values relevant to MreB in E. coli and B. subtilis (Supplementary file 2), numerical calculations show that $\langle C_F\rangle_S$ is larger by a multiplicative factor of ${\sim}2.0$ when $S$ is the cylindrical bulk than when $S$ is a hemispherical endcap (Figure 3C of the main text).
2.6. Filament concentration is independent of Gaussian curvature
While filament movement depends on the principal curvatures, we wondered whether this implies that the filament concentration always depends on the Gaussian curvature. Here we show that, for general surfaces, the filament concentration is independent of Gaussian curvature under the dynamics considered in this work. Consider the parameterization $\mathbf{r}=(\sin u,(1-\cos u)\cos u,v)$, where $u\in[-\pi,\pi]$, $v\in[0,z]$, and $z$ is a large enough constant so that we do not consider filament translocation out of the surface. Although the Gaussian curvature vanishes everywhere, any cross-section of the surface has regions of both negative and positive principal curvature, and the regions of positive principal curvature represent attractors of the filament dynamics: filaments translocating into such regions rarely translocate out.
While numerical simulations show that filaments are localized at regions where the largest principal direction coincides with the axial direction (Figure 4C in the main text and Figure 4—figure supplement 3a–b), analytical calculations are difficult to undertake due to the complicated geometry. Nevertheless, we may consider a similar 1D problem of a particle moving with velocity $\nu$ along a circular ring, which is parameterized without loss of generality by $u\in[0,2\pi]$ and contains a single absorbing coordinate at $u=0$. Accounting for activation and deactivation dynamics as above, the filament number at $u=0$ at a time $t>t^*$ in the case of vanishing translocation noise ($\sigma=0$), and assuming $\nu>2\pi\lambda$ so that the flux described below is nonzero, can be written as

(S29) $N_F\left(u=0\right)=\int_{0}^{t}\left(\int_{\max\left(t_1-t^*,0\right)}^{t_1}k^{\prime}\,dA\left(u-\nu\left(t_1-t_0\right),v\right)e^{-\lambda\left(t_1-t_0\right)}\,dt_0\right)e^{-\lambda\left(t-t_1\right)}\,dt_1,$

where $k^{\prime}$ is defined above, the term in parentheses is the filament flux into $u=0$ at time $t_1$, and $t^*=2\pi/\nu$ represents the maximal time needed for any filament to become absorbed. Further assuming a constant value of $dA(u,v)=dA$ for simplicity, direct evaluation of Equation (S29) yields

(S30) $N_F\left(u=0\right)=\frac{k^{\prime}\,dA}{\lambda^2}\left(1-e^{-\frac{2\pi\lambda}{\nu}}-\frac{2\pi\lambda}{\nu}e^{-\lambda t}\right),$
which quantitatively describes the dependence of the filament number at the absorbing coordinate on the processivity, which is determined by $\lambda$, and on other parameters. Note that, as $\lambda\to 0$, corresponding to the case of infinite processivity, $N_F(u=0)$ is predicted to diverge, while as $\lambda\to\infty$, corresponding to the case of zero processivity, $N_F(u=0)\to 0$. It is straightforward to generalize Equation (S30) to the case of several absorbing points and different geometries, provided that similar simplifying assumptions can be employed as above. Importantly, these calculations can be compared to numerical calculations for the geometry considered in this section. While the value of $N_F$ at attracting regions may generally depend on the geometry of such regions, the dependence of $N_F$ on the processivity, $\lambda$, can be conveniently explored by defining the localization ratio, $\rho$, as the ratio between filament numbers at a certain value of $\lambda$ and at $\lambda/2$:

(S31) $\rho=\frac{N_F\left(u=0,\lambda\right)}{N_F\left(u=0,\lambda/2\right)}.$

Defining $\rho$ analogously for the geometry considered in this section, we find that numerical calculations of $\rho$ for this geometry across a range of $\sigma$ are consistent with the theoretical prediction based on the simplified model considered in this paragraph (Figure 4—figure supplement 3c). Hence, we conclude that Equation (S30) captures the dependence of $N_F$ on $\lambda$ for more general geometries.
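Equations (S30) and (S31) are simple to evaluate numerically; the following sketch (illustrative parameter values of our choosing) confirms that the filament number at the absorbing point grows as the processivity increases, that is, as $\lambda$ decreases:

```python
import numpy as np

def N_F(lam, nu=1.0, t=100.0, k_dA=1.0):
    """Filament number at the absorbing coordinate, Eq. (S30);
    k_dA stands for the product k' * dA."""
    x = 2 * np.pi * lam / nu
    return k_dA / lam**2 * (1 - np.exp(-x) - x * np.exp(-lam * t))

def rho(lam, **kw):
    """Localization ratio of Eq. (S31): N_F at lam versus at lam/2."""
    return N_F(lam, **kw) / N_F(lam / 2, **kw)
```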
2.7. Cylinder with a bulge
Building on work that has examined MreB motion in cells with membrane bulges (Hussain et al., 2018), here and below we consider bulges of positive Gaussian curvature on a cylindrical surface. We consider a class of cylinders with bulges parameterized piecewise by $\mathbf{r}_1=(v,\cos u,\sin u)$, where $u\in[0,2\pi]$ and $v\in[0,z]$ for some $z>0$ and $(u-\pi/2)^2+(v-z/2)^2<b$, and $\mathbf{r}_2=(b\sin v\cos u+z/2,b\sin v\sin u,sc\cos v+1)$, where $u\in[0,2\pi]$ and $v\in[0,\pi/2]$, otherwise. We suppose $b,c>0$ and $b<z$, so that the intersection area constitutes a small fraction of the cylindrical body. Here $s=1$ if the bulge protrudes outward and $s=-1$ if the bulge protrudes inward. For this class of parametric surfaces, three general cases of outward bulges ($s=1$) can be classified depending on whether $c=b$, $c<b$, or $c>b$. Upon computing the principal curvatures in each case, we find that the three cases correspond, respectively, to random translocation ($c=b$), in which case the translocation direction is random; polar translocation ($c<b$), in which case translocation occurs predominantly along the bulge $v$-coordinate; and circumferential translocation ($c>b$), in which case translocation occurs predominantly along the bulge $u$-coordinate (Figure 3—figure supplement 1a). We note that inward bulges of positive Gaussian curvature ($s=-1$) exhibit the same behavior, with polar translocation when $c>b$ and circumferential translocation when $c<b$.
Figure 4B of the main text illustrates filament dynamics for both random and circumferential translocation, where $b=c=0.5$ (spherical bulge) and $b=0.5$, $c=1$ (ellipsoidal bulge), in the cases of both zero and infinite processivity. We focus here on the latter geometry, as the motion there is consistent with experiments (Hussain et al., 2018). When there is no noise, considering similar activation and deactivation dynamics as above shows that, similar to a spherocylinder (§2.5), the bulge attracts filaments in the case of small $\lambda$, or large processivity. As simulations demonstrate, this is also true in the case of a finite noise, where we find the bulge to contain larger numbers of filaments relative to the case of a uniform distribution on the surface (Figure 4B in the main text). For characteristic parameter values relevant to B. subtilis MreB, we find that $\langle C_F\rangle_S$ is increased at the bulge (Figure 3E of the main text) and substantial localization may occur at the bulge neck, consistent with previous experiments (Figure 3—figure supplement 1b).
2.8. Cylinder with small wavelength undulations
Previous work has examined the correlation of MreB concentration with subcellular-scale shape fluctuations in E. coli cells (Ursell et al., 2014). Based on the weak correlation observed between outer and inner contour curvatures in Ursell et al. (2014), the authors determined that the experimentally observed MreB enrichment was not caused by bending modes: in a circular torus, for instance, the outer and inner contour curvatures should be strictly negatively correlated. The authors concluded that short length-scale, high-magnitude fluctuations dominate experimental observations of cell shape.
To probe such a geometry, we consider the parameterization $\mathbf{r}=((a+c\sin(Pv))\sin u,(a+c\sin(Pv))\cos u,v)$, for $u\in[0,2\pi]$ and $v\in[0,2\pi]$. Here $a$ denotes an average cylinder radius, we assume that $0<c\ll a$, and $P>0$ is a variable controlling the number of periods. For large wavelength undulations, $P$ is less than, or on the order of, unity. However, the Gaussian and mean curvatures are uncorrelated in this case, inconsistent with the positive correlation observed in Ursell et al. (2014) for cells growing in sinusoidally-shaped chambers, and the ranges of Gaussian and mean curvatures are significantly smaller than those measured for wild-type, unconfined, sinusoidally-confined, thin, and wide cells in different experiments (Ursell et al., 2014; Shi et al., 2017; Bratton et al., 2018) (Figure 3—figure supplement 2a and Figure 3G of the main text). We therefore consider a geometry with short wavelength undulations, for which $P\ge 1$ (Figure 3—figure supplement 2b). Numerical results for principal curvature-dependent translocation on this geometry, which is consistent with the predicted filament binding orientation (Figure 3—figure supplement 2c–d), are presented in Figure 3G of the main text.
2.9. Effects of principal curvature-dependent translocation noise and varying filament step size on model predictions
Prior experiments have shown that the variation in MreB trajectory direction is width-dependent, suggesting that the translocation noise may depend on the difference of principal curvatures, $\Delta c$ (Hussain et al., 2018). As discussed in the main text, we may model this observation by letting $\sigma$ vary with $\Delta c$: for simplicity, we set

(S32) $\sigma=\alpha\left(\Delta c\right)^{-1}$

(Figure 3—figure supplement 3a), but note that more complicated functional dependencies, such as a quadratic dependence of the form $\sigma=\beta(\Delta c)^{-2}$, do not significantly change our results (Figure 3—figure supplement 3b). Unless otherwise indicated (see §3), all simulations and calculations in this work pertaining to MreB assume Equation (S32), and we further note that, for the parameter values relevant to MreB translocation in E. coli and B. subtilis summarized in Supplementary file 2, taking a constant value of $\sigma=0.3$ also does not significantly change our results. Similarly, we verify that localization arises even for very large step sizes ($L=2\,\mu\mathrm{m}$), in which case MreB filaments realign infrequently (Figure 3—figure supplement 3b).
2.10. Effects of filament twist, flexural rigidity, and Gaussian curvature-dependent activation on model predictions
Recent work has shown that regions of negative Gaussian curvature can allow twisted filaments to bind with low elastic energy (Quint et al., 2016). In another study, Wang and Wingreen assumed that MreB assembles into bundles with significantly larger flexural rigidity than that of the filaments considered in this work (Wang and Wingreen, 2013). In this section, we examine how our model predictions for MreB localization in E. coli change over ranges of three parameters: (1) the intrinsic filament twist, $\omega_0$; (2) the coupling, $\gamma$, of filament activation to Gaussian curvature; and (3) the filament flexural rigidity, $B$.

We first note that $\gamma$ may vary independently of $\omega_0$. Quantitatively, the rate of filament activation may depend not only on the parameters of the twist, but also on other cellular parameters (Wong et al., 2017). We assume that the filament activation rate per unit area, $k$, varies with $\gamma$ as

(S33) $k\left(u,v\right)=k_0-\gamma G\left(u,v\right),$

where $k_0$ and $\gamma$ are constants and $G(u,v)$ is the Gaussian curvature at the parametric coordinate $(u,v)$. When $\gamma=0$, we recover our original assumption that the activation rate per unit area is constant.
We now explore the effects of twist and Gaussian curvature-dependent activation on filament concentration. For simplicity, we consider a torus (Figure 4—figure supplement 1a), for which $G=\sin u\left[a(R+a\sin u)\right]^{-1}$ (c.f. §2.3), but show later in Figure 3—figure supplement 3b that the results for the undulating geometry of Figure 3F of the main text also remain qualitatively similar. Due to twist, MreB filaments may move in a direction which deviates from the direction of largest curvature. In particular, while further work should verify the robustness of this equation for nonzero turgor pressures and intrinsic filament curvatures, Equation (12) of Quint et al. (2016),

(S34) $2B\frac{\sin^3\theta_0\cos\theta_0}{a^2}=K\frac{\cos\left(2\theta_0\right)}{2a}\left(\frac{\sin\left(2\theta_0\right)}{2a}+\omega_0\right),$

predicts the angular deviation from the largest principal direction in a cylinder, $\theta_0$, due to twist in the limit that MreB filaments are fully bound to the membrane. Here $B$ is the flexural rigidity of a filament, $K$ is the elastic twist stiffness, $a$ is the cell radius, and $\omega_0$ is the intrinsic helical twist. Thus, upon knowing the values of $k_0$, $\omega_0$, $\gamma$, and $B$, we may compute $k$ and $\theta_0$ using Equations (S33) and (S34), from which we can determine $C_F$ via simulations similar to those above. Doing so for a large parameter range which includes characteristic values of $B$ (Supplementary file 1) and theoretically hypothesized values for MreB of $K=2000\,kT\cdot\mathrm{nm}$ and $\omega_0 a=1$ to $5$ (Quint et al., 2016) reveals the final ratio of filament concentrations to be quantitatively similar in all cases (Figure 4—figure supplement 4a–f). We note, in particular, that the effect of filament twist alone is small in all cases: this is because the biasing of translocation angles due to twist, as predicted by Equation (S34), is irrelevant for the toroidal geometry, where translocating along deviatory angles still traces out hoops (c.f. §2.3). In these simulations, to illustrate the range of translocation behavior, we have assumed that binding is always energetically favorable; however, there will be an energetic cost of unwinding of the form

(S35) $E_{\mathrm{twist}}=\frac{K}{2}\int_{0}^{\ell_b}\left(\omega-\omega_0\right)^2\,dl,$

where $\omega$ is the bound filament twist (Quint et al., 2016). For the parameter values considered in this work (Supplementary file 1), binding becomes energetically unfavorable at large twists ($\omega_0 a\gtrsim 100$), for which $\theta_0\approx\pi/4$.
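Equation (S34) can be solved for $\theta_0$ numerically: for $\omega_0>0$, the difference between its two sides is negative at $\theta_0=0$ and positive at $\theta_0=\pi/4$, so bisection applies. A sketch (the values of $kT$ and the cell radius $a$ below are our own illustrative assumptions, not taken from the source):

```python
import numpy as np

def theta0_from_twist(B, K, a, omega0, tol=1e-12):
    """Solve Eq. (S34) by bisection on (0, pi/4):
    2B sin^3(t)cos(t)/a^2 = K cos(2t)/(2a) * (sin(2t)/(2a) + omega0)."""
    def f(t):
        lhs = 2 * B * np.sin(t) ** 3 * np.cos(t) / a**2
        rhs = K * np.cos(2 * t) / (2 * a) * (np.sin(2 * t) / (2 * a) + omega0)
        return lhs - rhs
    lo, hi = 0.0, np.pi / 4  # f(lo) < 0 and f(hi) > 0 for omega0 > 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# B ~ 1.65e-25 J*m (Supplementary file 1); K = 2000 kT*nm with
# kT ~ 4.11e-21 J (our assumption); a = 0.5 um (our assumption); omega0*a = 1.
B, K, a = 1.65e-25, 2000 * 4.11e-21 * 1e-9, 0.5e-6
theta0 = theta0_from_twist(B, K, a, omega0=1 / a)
```

Consistent with the statement above, the deviation stays well below $\pi/4$ for $\omega_0 a$ of order unity and increases with $\omega_0$.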
Additionally, our previous measurements of MreB fluorescence in bent E. coli cells confined to toroidal microchambers (which may be modeled as tori with geometric parameters $a=1$ and $R=10$) show MreB concentration to be enhanced at the inner edges by a factor of approximately $1.1$ (Wong et al., 2017). This suggests that the strength of Gaussian curvature coupling for MreB, if indeed such coupling does exist, is small, as shown in Figure 4—figure supplement 4g–h. As we have demonstrated previously (Wong et al., 2017), the small observed enhancement is consistent with processivity alone. Furthermore, as mentioned above, modeling characteristic parameter values of both filament twist and Gaussian curvature-dependent activation still results in localization for geometries other than a torus, such as that considered in Figure 4F of the main text (Figure 3—figure supplement 3b).
Finally, we were interested to determine whether our model predictions were robust to variation in the filament flexural rigidity, which for the parameter values in Supplementary file 1 has a value of $B\approx 1.65\times 10^{-25}\,\mathrm{J\cdot m}$. To explore this for a torus, we repeated the foregoing dynamical simulations across a range of flexural rigidities. We first considered a ten-fold smaller value of $B=1.0\times 10^{-26}\,\mathrm{J\cdot m}$, which, by Equation (S10), still predicts (1) the filament to bend to conform to the membrane and (2) the depth of the potential well corresponding to Figure 2B of the main text to be approximately $3kT$, a value that may be large enough to be robust to thermal noise and other sources of stochasticity (Figure 4—figure supplement 4k). We next considered a 100-fold larger value of $B=1.5\times 10^{-23}\,\mathrm{J\cdot m}$, approximately the flexural rigidity of a thick bundle with $r_f=10\,\mathrm{nm}$ (Wang and Wingreen, 2013). For this value of $B$ and $r_f$, no twist, and the remaining parameters summarized in Supplementary file 1, Equation (S11) estimates the critical pressure to be $p^*\approx 5\,\mathrm{atm}$, a value which is larger than characteristic estimates of turgor in E. coli. As the corresponding value of $E_{\mathrm{bend}}$ is less than $E_{\mathrm{int}}$, we anticipate that both the membrane and the bundle may bend (see Figure 2C of the main text). To show that it remains energetically favorable for thick bundles to bind to membranes and that the binding orientation along the largest principal direction is robust, we numerically solved the shape equation (Equation (S6)) and found that the free energy change due to binding, $\Delta E$, decreases with the size of the domain $\Omega$ and, for a given $\Omega$, is minimal when the membrane bends to conform to the bundle (Figure 4—figure supplement 4k). In the limit $\Omega\to U$, the Monge gauge assumption underlying Equation (S6) becomes invalid; nevertheless, $\Delta E$ tends to the analytical expression of Equation (S10) under the condition that the minimizer, $R$, is close to the intrinsic radius of curvature of the bundle, $R_s$. In both this limit and the simulations of Figure 4—figure supplement 4k, binding remains energetically favorable and the binding orientation remains robust even for thick filaments, suggesting that translocating along directions of largest principal curvature remains relevant. Figure 4—figure supplement 4f,i–j shows simulation results for both values of $B$ compared to the value ($1.65\times 10^{-25}\,\mathrm{J\cdot m}$) assumed in this work. We find that, in all cases, the model predictions remain quantitatively similar.
3. Implementation details
For convenience, here we summarize implementation details used to generate figures in this work.
3.1. Figure 3B of the main text
Here $a=1$ and $z=100$. Langevin simulations to generate trajectories were undertaken with $10^5$ activated filaments, $L=0.4$, $\sigma=0.3$, and $N$ determined by the number of steps needed to translocate one hoop. Filaments were activated at the center ($v=0$) so that none of them translocated beyond the range specified by $z$.
3.2. Figure 3C of the main text
Here $a=1$ and $z=4$. Langevin simulations to generate a representative trajectory were undertaken with a single activated filament and the parameters relevant to E. coli summarized in Supplementary
file 2, except $N=500$. Note that we use the linear relation $σ=α(Δc)^{-1}$, where $Δc$ is the difference of principal curvatures and the value of $α$ is provided in Supplementary file 2. Numerical
calculations for ensemble dynamics were undertaken with the parameters relevant to E. coli summarized in Supplementary file 2. Each subsurface is discretized into $60×60$ bins.
3.3. Figure 3D of the main text
Here $a=1$, $R=10$, and $Φ=π$. Periodic boundary conditions in $v$ are assumed. Note that we use the linear relation $σ=α(Δc)^{-1}$, where $Δc$ is the difference of principal curvatures and the value
of $α$ is provided in Supplementary file 2. Numerical calculations for ensemble dynamics were undertaken with the parameters relevant to E. coli summarized in Supplementary file 2. The surface is
discretized into $30×30$ bins.
3.4. Figure 3E of the main text
Here $z=4$, $b=0.5$, and $c=1$. Periodic boundary conditions in $v$ for the cylinder are assumed. Note that we use the linear relation $σ=α(Δc)^{-1}$, where $Δc$ is the difference of principal
curvatures and the value of $α$ is provided in Supplementary file 2. Numerical calculations for ensemble dynamics were undertaken with the parameters relevant to B. subtilis summarized in
Supplementary file 2. The bulge is discretized into $20×20$ bins and the cylinder is discretized into $40×40$ bins.
3.5. Figure 3F of the main text
Here $c=0.1$, $P=4$, and $z=π$. Periodic boundary conditions in $v$ are assumed. Note that we use the linear relation $σ=α(Δc)^{-1}$, where $Δc$ is the difference of principal curvatures and the
value of $α$ is provided in Supplementary file 2. Langevin simulations to generate a representative trajectory were undertaken with a single activated filament and the parameters relevant to E. coli
summarized in Supplementary file 2, except $N=100$. Langevin simulations and numerical calculations for ensemble dynamics were undertaken with the parameters relevant to E. coli summarized in
Supplementary file 2. In the finite case, the surface is discretized into $25×25$ bins into which individual trajectories are aggregated. In the continuum case, the surface is discretized into
$200×200$ bins.
3.6. Figure 4A of the main text
For the ellipsoid, $a=1$ and $b=2$. For the torus, $a=1$, $R=2$, and $Φ=2π$. For the helix, $a=1$, $R=2$, $Φ=2π$, $ϕ=1$, and periodic boundary conditions in $v$ are assumed. For all geometries,
Langevin simulations to generate a representative trajectory were undertaken with a single activated filament and $L=0.4$, $σ=0.3$, and $N=300$. Numerical calculations for ensemble dynamics were
undertaken with the parameters summarized in Supplementary file 2 but $σ=0.3$, $N$ large enough to correspond to a fixed point for the dynamics ($N=10^3$), and either $λ=0$ (infinite processivity) or
$λ=∞$ (zero processivity). The surfaces are discretized into $30×30$ bins.
3.7. Figure 4B of the main text
Here $z=4$, $b=0.5$, and $c=1$ or $c=0.5$. Periodic boundary conditions in $v$ for the cylinders are assumed. Langevin simulations to generate a representative trajectory were undertaken with a
single activated filament and $L=0.4$, $σ=0.3$, and $N=30$. Numerical calculations for ensemble dynamics were undertaken with the parameters summarized in Supplementary file 2 but $σ=0.3$, $N$ large
enough to correspond to a fixed point for the dynamics ($N=10^3$), and either $λ=0$ (infinite processivity) or $λ=∞$ (zero processivity). The bulges are discretized into $20×20$ bins and the cylinders
are discretized into $40×40$ bins.
3.8. Figure 4C of the main text
Here $z=4$ with periodic boundary conditions in $v$. Langevin simulations to generate a representative trajectory were undertaken with a single activated filament and $L=0.4$, $σ=0.3$, and $N=15$.
Numerical calculations for ensemble dynamics were undertaken with the parameters summarized in Supplementary file 2 but $σ=0.3$, $N$ large enough to correspond to a fixed point for the dynamics ($N=10^3$), and either $λ=0$ (infinite processivity) or $λ=∞$ (zero processivity). The surface is discretized into $60×60$ bins.
The numerical results shown in panel a are identical to those in Figure 4B of the main text, with an additional representative trajectory in the case $c=0.2$. For the simulation in the inset of panel
a, $σ=0$. The numerical result shown in panel b is identical to that in Figure 3E of the main text.
The numerical results shown in panel b are identical to those in Figure 3G of the main text, with the exception of (1) a constant value of $σ=0.3$ and (2) a quadratic dependence of $σ$ on the
difference of principal curvatures, $σ=β(Δc)^{-2}$, where the value of $β$ is provided in Supplementary file 2. A numerical result corresponding to Figure 3G of the main text, but with a step size of
$L=2μm$, is also shown. Here the same parameters summarized in Supplementary file 2 apply, except the larger value of $L$ implies the following rescaling of parameters: $L=4$, $N=6$, and $λ=0.66$.
Finally, a numerical result corresponding to Figure 3G of the main text, but for a nonzero filament twist of $|ω0a|=5$ and Gaussian curvature-dependent activation parameter of $γ/k0=1$ (see also
§2.10 and Figure 4—figure supplement 4h) is shown.
For the torus, $a=1$, $R=2$, and $Φ=2π$. For the helix, $a=1$, $R=2$, $Φ=2π$, and $ϕ=1$ unless otherwise stated, and periodic boundary conditions in $v$ are assumed. Numerical calculations for
ensemble dynamics were undertaken with the parameters summarized in Supplementary file 2 but varying $σ$, $N$ large enough to correspond to a fixed point for the dynamics ($N=10^3$), and either $λ=0$
(infinite processivity) or $λ=∞$ (zero processivity). In panel c, $σ$ is fixed at $σ=0$ while $ϕ$ varies. The surfaces are discretized into $30×30$ bins.
Here $a=1$ and $b=2$. Numerical calculations for ensemble dynamics were undertaken with the parameters summarized in Supplementary file 2 but varying $σ$, $N$ large enough to correspond to a fixed
point for the dynamics ($N=10^3$), and either $λ=0$ (infinite processivity) or $λ=∞$ (zero processivity). The ellipsoid is discretized into $30×30$ bins.
The numerical results shown in Figure 4—figure supplement 3a are identical to those in Figure 4C in the main text. The same details apply for Figure 4—figure supplement 3c, except that $σ$ and $λ$
are varied.
The numerical results shown in all panels use the parameter values relevant to E. coli as summarized in Supplementary file 2. Note that we use the linear relation $σ=α(Δc)^{-1}$, where $Δc$ is the
difference of principal curvatures and the value of $α$ is provided in Supplementary file 2. Generally, $a=1$, $R=2$, and $Φ=2π$ except for panels g and h, for which $R=10$. All simulated tori were
discretized into $30×30$ bins.
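The per-figure recipes above share a common core: filaments take steps of fixed length $L$ while the heading accumulates angular noise of scale $σ$, and the resulting trajectories are aggregated into bins. The sketch below is only a planar caricature of that pattern — it is not the code used for the figures, and it omits the surface metric and curvature-dependent dynamics that the actual model requires.

```python
import math
import random

def translocate(n_steps, L, sigma, seed=0):
    """Planar caricature of filament translocation: each step advances a
    fixed arc length L while the heading diffuses with std sigma."""
    rng = random.Random(seed)
    x = y = theta = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        theta += rng.gauss(0.0, sigma)  # angular noise accumulated per step
        x += L * math.cos(theta)
        y += L * math.sin(theta)
        path.append((x, y))
    return path

# With sigma = 0 the walk is ballistic: displacement after N steps is N * L.
print(translocate(100, 0.4, 0.0)[-1])  # → approximately (40.0, 0.0)
```

For nonzero $σ$ the heading decorrelates and the net displacement falls below the ballistic value, which is the qualitative effect the $σ$ parameter controls in the recipes above.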
All data generated or analyzed during this study are included in the manuscript and supporting files.
21. Book
Theory of Elasticity
Pergamon Press.
22. Book
Manifolds and Differential Geometry
American Mathematical Society.
38. Book
Statistical Thermodynamics of Surfaces, Interfaces, and Membranes
Westview Press.
47. Book
Theory of Plates and Shells
New York: McGraw-Hill.
51. Book
Thin Plates and Shells: Theory, Analysis, and Applications
Marcel Dekker, Inc.
Article and author information
Author details
National Science Foundation (DGE1144152)
Quantitative Biology Initiative at Harvard
National Institutes of Health (DP2AI117923-01)
Searle Scholar Fellowship
• Ethan C Garner
• Ariel Amir
Materials Research and Engineering Center at Harvard
Kavli Institute for Bionano Science and Technology at Harvard
Alfred P. Sloan Foundation
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
FW was supported by the National Science Foundation Graduate Research Fellowship under grant no. DGE1144152 and the Quantitative Biology Initiative at Harvard. ECG was supported by the National
Institutes of Health under grant no. DP2AI117923-01, the Smith Family Award, and the Searle Scholar Fellowship. AA was supported by the Materials Research and Engineering Center at Harvard, the Kavli
Institute for Bionano Science and Technology at Harvard, and the Alfred P Sloan Foundation. ECG and AA were supported by the Volkswagen Foundation. We thank Carl Wivagg, Saman Hussain, Ned Wingreen,
and Siyuan (Steven) Wang for discussions and Sven van Teeffelen, Jie Lin, and Po-Yi Ho for comments on the manuscript.
© 2019, Wong et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
1. Felix Wong
2. Ethan C Garner
3. Ariel Amir
Mechanics and dynamics of translocating MreB filaments on curved membranes
eLife 8:e40472.
Further reading
1. Microbiology and Infectious Disease
MreB is essential for rod shape in many bacteria. Membrane-associated MreB filaments move around the rod circumference, helping to insert cell wall in the radial direction to reinforce rod shape.
To understand how oriented MreB motion arises, we altered the shape of Bacillus subtilis. MreB motion is isotropic in round cells, and orientation is restored when rod shape is externally
imposed. Stationary filaments orient within protoplasts, and purified MreB tubulates liposomes in vitro, orienting within tubes. Together, this demonstrates MreB orients along the greatest
principal membrane curvature, a conclusion supported with biophysical modeling. We observed that spherical cells regenerate into rods in a local, self-reinforcing manner: rapidly propagating rods
emerge from small bulges, exhibiting oriented MreB motion. We propose that the coupling of MreB filament alignment to shape-reinforcing peptidoglycan synthesis creates a locally-acting,
self-organizing mechanism allowing the rapid establishment and stable maintenance of emergent rod shape.
1. Immunology and Inflammation
2. Microbiology and Infectious Disease
The parasitic nematode Heligmosomoides polygyrus bakeri secretes the HpARI family, which bind to IL-33, either suppressing (HpARI1 and HpARI2) or enhancing (HpARI3) responses to the cytokine. We
previously showed that HpARI2 also bound to DNA via its first complement control protein (CCP1) domain. Here, we find that HpARI1 can also bind DNA, while HpARI3 cannot. Through the production of
HpARI2/HpARI3 CCP1 domain-swapped chimeras, DNA-binding ability can be transferred, and correlates with in vivo half-life of administered proteins. We found that HpARI1 and HpARI2 (but not
HpARI3) also bind to the extracellular matrix component heparan sulphate (HS), and structural modelling showed a basic charged patch in the CCP1 domain of HpARI1 and HpARI2 (but not HpARI3)
which could facilitate these interactions. Finally, a mutant of HpARI2 was produced which lacked DNA and HS binding, and was also shown to have a short half-life in vivo. Therefore, we propose
that during infection the suppressive HpARI1 and HpARI2 proteins have long-lasting effects at the site of deposition due to DNA and/or extracellular matrix interactions, while HpARI3 has a
shorter half-life due to a lack of these interactions.
1. Microbiology and Infectious Disease
The gut microbiota is implicated in the pathogenesis of hyperuricemia (HUA) and gout. However, it remains unclear whether probiotics residing in the host gut, such as Lactobacillus, can prevent
HUA development. Herein, we isolated Lactobacillus plantarum SQ001 from the cecum of HUA geese and conducted in vitro assays on uric acid (UA) and nucleoside co-culture. Metabolomics and
genome-wide analyses revealed that this strain may promote nucleoside uptake and hydrolysis through its nucleoside hydrolase gene. The functional role of the iunH gene was confirmed via heterologous
expression and gene knockout studies. Oral administration of L. plantarum SQ001 resulted in increased abundance of Lactobacillus species and reduced serum UA levels. Furthermore, it downregulated
hepatic xanthine oxidase, a key enzyme involved in UA synthesis, as well as renal reabsorption protein GLUT9, while enhancing the expression of renal excretion protein ABCG2. Our findings suggest
that L. plantarum has potential to ameliorate gut microbial dysbiosis with HUA, thereby offering insights into its potential application as a probiotic therapy for individuals with HUA or gout.
Pedro Gonzalez will deposit $5,000 at the beginning of each year for the next 9 years.
37. The interest rate is 8 percent. What is the future value? A. $58,471. B. $62,440. C. $67,435. D. $72,435.
38. Ambrin Corp. expects to receive $2,000 per year for 10 years and $3,500 per year for the next 10 years. What is the present value of this 20-year cash flow? Use an 11% discount rate. A. $19,033
B. $27,870 C. $32,389 D. none of these
39. Dr. J. wants to buy a Dell computer which will cost $2,788 four years from today. He would like to set aside an equal amount at the end of each year in order to accumulate the amount needed. He
can earn 7% annual return. How much should he set aside? A. $ B. $ C. $ D. $
What return must his money earn so that he may receive annual benefits of $20,000 for the next 14 years?
40. Mr. Fish wants to build a house in 10 years. He estimates that the total cost will be $170,000. If he can put aside $10,000 at the end of each year, what rate of return must he earn in order to
have the amount needed? A. Between 11% and 12% B. Between 8% and 9% C. 17% D. None of these
41. The shorter the length of time between a present value and its corresponding future value, A. the lower the present value, relative to the future value. B. the higher the present value, relative
to the future value. C. the higher the interest rate used in the present-valuation. D. none of these.
42. A dollar today is worth more than a dollar to be received in the future because A. risk of nonpayment in the future. B. the dollar can be invested today and earn interest. C. inflation will
reduce purchasing power of a future dollar. D. None of these.
43. Mr. Darden is selling his house for $165,000. He bought it for $55,000 nine years ago. What is the annual return on his investment? A. 3% B. Between 14% and 16% C. 13% D. None of these
44. Increasing the number of periods will increase all of the following except A. the present value of an annuity. B. the present value of $1. C. the future value
of $1. D. the future value of an annuity.
45. Joe Nautilus has $120,000 and wants to retire. A. 12% B. Between 12% and 13% C. 14% D. Greater than 15%
46. You will deposit $2,000 today. It will grow for 6 years at 10% interest compounded semiannually. You may then withdraw the cash annually over the following 4 years. The annual interest rate is 8%. Your annual withdrawal will be: A. $2,340 B. $4,332 C. $797 D.
47. Carol Thomas will pay out $6,000 at the end of the year 2, $8,000 at the end of year 3, and receive $10,000 at the end of year 4. With an interest rate of 13 percent, what is the net value of the
payments vs. receipts in today’s dollars? A. $ 7,326. B. $10,242. C. $16,372. D. $ 4,112.
48. John Doeber borrowed $125,000 to buy a house. His loan cost was 11% and he promised to repay the loan in 15 equal annual payments. How much are the annual payments? A. $3,633 B. $9,250 C. $13,113
D. $17,383
49. John Doeber borrowed $125,000 to buy a house. His loan cost was 11% and he promised to repay the loan in 15 equal annual payments. What is the principal outstanding after the first loan payment?
A. $121,367 B. $123,088 C. $107,617 D. None of these
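As a sanity check on questions 37 and 38 above, the standard time-value-of-money formulas can be evaluated directly. The helper names below are illustrative only, and the small differences from the printed answer choices come from annuity-table rounding.

```python
def fv_annuity_due(pmt, r, n):
    """Future value of n payments made at the *beginning* of each year."""
    return pmt * ((1 + r) ** n - 1) / r * (1 + r)

def pv_annuity(pmt, r, n):
    """Present value of n payments made at the end of each year."""
    return pmt * (1 - (1 + r) ** -n) / r

# Question 37: $5,000 at the beginning of each year for 9 years at 8%.
fv = fv_annuity_due(5000, 0.08, 9)   # ≈ 67,433, closest to choice C ($67,435)

# Question 38: $2,000/yr for years 1-10, then $3,500/yr for years 11-20, at 11%.
pv = pv_annuity(2000, 0.11, 10) + pv_annuity(3500, 0.11, 10) / 1.11 ** 10
# ≈ 19,038, closest to choice A ($19,033)
```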
Square root of 4.1- do you have to use calculus?
• Thread starter Femme_physics
• Start date
In summary, calculators cannot give an exact result for problems like solving square roots. However, they can give up to n digits of the correct result.
In order to solve square root of 4.1, is there a simple arithmetic method or do you have to use calculus and use that "tangent line approximation formula"?
I wonder how calculators do it... does anybody know?
I'm assuming that by "tangent line approximation method" you mean Newton's method. What exactly is wrong with it?
I suppose if you wanted a more "elementary method", you could the following:
[tex]x_{n+1} = \frac{1}{2} \left( x_n + \frac{4.1}{x_n} \right)[/tex]
That sequence will converge to your square root, and it's an analog of Newton's method.
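The iteration above — Heron's (Babylonian) method, which is exactly Newton's method applied to f(x) = x² − 4.1 — is easy to run as a quick sketch:

```python
def babylonian_sqrt(a, x0=2.0, tol=1e-12):
    """Heron's (Babylonian) method: Newton's iteration for f(x) = x*x - a."""
    x = x0
    while abs(x * x - a) > tol:
        x = 0.5 * (x + a / x)
    return x

print(babylonian_sqrt(4.1))  # → 2.0248... after only a handful of iterations
```

Convergence is quadratic, so the number of correct digits roughly doubles with each step.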
Oh, okay, I see the next topic in my book is Newton's method. Guess I should've read that first :shy:
Okay, I read Newton's method. Both methods don't give the exact results (to all sig fig). Do calculators give the exact result? I wonder.
Question: what book are you reading?
Calculus textbook, of course! It's produced by the OpenU of my country (IL)
Dory said:
Okay, I read Newton's method. Both methods don't give the exact results (to all sig fig). Do calculators give the exact result? I wonder.
No, calculators don't give an exact result, in general. How could they, since they only display a finite number of digits? They use an approximation algorithm like the one described above.
Dory said:
Okay, I read Newton's method. Both methods don't give the exact results (to all sig fig). Do calculators give the exact result? I wonder.
[tex]\sqrt{4.1}= \frac{\sqrt{41}}{\sqrt{10}}[/tex]
Neither 41 nor 10 is a perfect square so their square roots are irrational. Further, they have no common factors so the ratio of [itex]\sqrt{41}[/itex] to [itex]\sqrt{10}[/itex] is irrational. It cannot
be written as a finite decimal expansion nor as a ratio of integers (fraction). No, calculators do not give an exact result for a problem like that- the exact result cannot be written in any "place
value" notation.
Note, however, that if the calculator displays n digits of the result, those n digits are the correct first n digits of the decimal expansion of the number. The approximative methods allow you to
calculate the correct digits up to any precision you desire.
Thanks Halls, guys. That pretty much clears it up for me :)
FAQ: Square root of 4.1- do you have to use calculus?
1. What is the square root of 4.1?
The square root of 4.1 is approximately 2.0248.
2. Do you always have to use calculus to find the square root of 4.1?
No, you do not always have to use calculus to find the square root of 4.1. There are other methods such as estimation or using a calculator.
3. Why is calculus sometimes used to find the square root of 4.1?
Calculus is often used to find the square root of 4.1 because it is a more precise method of calculation and can be applied to more complex numbers and equations.
4. Can you explain the calculus involved in finding the square root of 4.1?
To estimate the square root of 4.1 using calculus, you can linearize the function f(x) = √x around the nearby perfect square x = 4. Since f'(x) = 1/(2√x), the tangent line gives
√4.1 ≈ √4 + 0.1 · f'(4) = 2 + 0.1/4 = 2.025, which is within about 0.0002 of the true value.
5. Are there any other fields of math that can be used to find the square root of 4.1?
Yes, there are other fields of math such as algebra and geometry that can also be used to find the square root of 4.1. It ultimately depends on the complexity of the number and the desired level of precision.
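For comparison, the tangent-line (linear) approximation discussed in the thread, expanded around the nearby perfect square x = 4, can be checked directly:

```python
import math

a, x0 = 4.1, 4.0  # expand around the nearby perfect square x0 = 4
estimate = math.sqrt(x0) + (a - x0) / (2 * math.sqrt(x0))  # 2 + 0.1/4

print(estimate, math.sqrt(a))  # ≈ 2.025 vs ≈ 2.02485 (error of roughly 2e-4)
```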
Represents a mesh triangle defined by its vertex indices. More...
unsigned int V0
The first vertex index of the triangle.
unsigned int V1
The second vertex index of the triangle.
unsigned int V2
The third vertex index of the triangle.
Represents a mesh triangle defined by its vertex indices.
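The struct above belongs to a C++ API; to illustrate how such vertex-index triangles are typically used, here is a hypothetical Python analogue of an indexed mesh. The `Triangle` dataclass and the quad example below are not part of the Renga API — they only mirror its shape.

```python
from dataclasses import dataclass

@dataclass
class Triangle:
    """Python mirror of the struct: three indices into a shared vertex array."""
    V0: int
    V1: int
    V2: int

# A unit quad split into two triangles that share the diagonal (vertices 0, 2).
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
mesh = [Triangle(0, 1, 2), Triangle(0, 2, 3)]

# Resolve a triangle's corner coordinates through its indices.
corners = [vertices[i] for i in (mesh[0].V0, mesh[0].V1, mesh[0].V2)]
print(corners)  # → [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
```

Storing indices rather than coordinates lets neighbouring triangles share vertices, which is the usual reason mesh APIs expose triangles this way.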
MCQ Questions on Geometry for Competitive Exams
In a square ABCD, diagonals AC and BD intersect at O. The angle bisector of ∠CAB meets BD and BC at F and G, respectively. OF : CG is equal to:
In the given figure, PQR is a triangle and quadrilateral ABCD is inscribed in it, QD = 2 cm, QC = 5 cm, CR = 3 cm, BR = 4 cm, PB = 6 cm, PA = 5 cm and AD = 3 cm. What is the area (in cm^2) of the
quadrilateral ABCD?
In ΔABC, AC = BC and ∠ABC = 50°. The side BC is produced to D so that BC = CD. What is the value of ∠BAD?
In the given figure, ∠ONY = 50° and ∠OMY = 15°. Then the value of the ∠MON is
In the given figure, two identical circles of radius 4 cm touch each other. A and B are the centres of the two circles. If RQ is a tangent to the circle, then what is the length (in cm) of RQ?
Two equal circles intersect so that their centres and the points at which they intersect form a square of side 1 cm. The area (in sq. cm) of the portion that is common to the circles is
ABC is a triangle in which ∠ABC = 90°. BD is perpendicular to AC. Which of the following is TRUE?
I. Triangle BAD is similar to triangle CBD.
II. Triangle BAD is similar to triangle CAB.
III. Triangle CBD, is similar to triangle CAB.
ABC is a right angled triangle, right angled at A. A circle is inscribed in it. The lengths of two sides containing the right angle are 48 cm and 14 cm. The radius of the inscribed circle is:
PQRS is a cyclic quadrilateral in which PQ = 14.4 cm, QR = 12.8 cm and SR = 9.6 cm. If PR bisects QS, what is the length of PS?
In the given figure, chords PQ and RS intersect each other at point L. Find the length of RL.
Read More Section(Geometry)
Each Section contains maximum 100 MCQs question on Geometry. To get more questions visit other sections.
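One of the questions above admits a quick numerical check: for a right triangle with legs a and b and hypotenuse c, the inradius is r = (a + b − c)/2. For the stated legs of 48 cm and 14 cm:

```python
import math

a, b = 48.0, 14.0
c = math.hypot(a, b)   # hypotenuse: sqrt(48**2 + 14**2) = 50.0
r = (a + b - c) / 2    # inradius of a right triangle
print(c, r)            # → 50.0 6.0
```

So the inscribed circle has radius 6 cm.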
tf.cond | TensorFlow v2.15.0.post1
Return true_fn() if the predicate pred is true else false_fn().
tf.cond(pred, true_fn=None, false_fn=None, name=None)
def fun1(x,y):
if x > 0: # AutoGraph converts if-statement to tf.cond().
z = y+1
z = y-1
return z
fun1(tf.constant(7), tf.constant(3)).numpy()
def fun2(x,y):
pred = x > 0
true_fn = lambda: y+1
false_fn = lambda: y-1
return tf.cond(pred, true_fn, false_fn) # Use tf.cond() explicitly.
fun2(tf.constant(7), tf.constant(3)).numpy()
For more information, see tf.function and AutoGraph guide.
true_fn and false_fn both return lists of output tensors. true_fn and false_fn must have the same non-zero number and type of outputs.
Although this behavior is consistent with the dataflow model of TensorFlow, it has frequently surprised users who expected a lazier semantics. Consider the following simple program:
x, y = tf.constant(2, dtype=tf.int32), tf.constant(4, dtype=tf.int32)
z = tf.multiply(x, y)
r = tf.cond(x < y, lambda: tf.add(x, z), lambda: tf.square(y))
If x < y, the tf.add operation will be executed and the tf.square operation will not be executed. Since z is needed for at least one branch of the cond, the tf.multiply operation is always executed, unconditionally.
Note that cond calls true_fn and false_fn exactly once (inside the call to cond, and not at all during Session.run()). cond stitches together the graph fragments created during the true_fn and
false_fn calls with some additional graph nodes to ensure that the right branch gets executed depending on the value of pred.
tf.cond supports nested structures as implemented in tensorflow.python.util.nest. Both true_fn and false_fn must return the same (possibly nested) value structure of lists, tuples, and/or named
tuples. Singleton lists and tuples form the only exceptions to this: when returned by true_fn and/or false_fn, they are implicitly unpacked to single values.
pred A scalar determining whether to return the result of true_fn or false_fn.
true_fn The callable to be performed if pred is true.
false_fn The callable to be performed if pred is false.
name Optional name prefix for the returned tensors.
Tensors returned by the call to either true_fn or false_fn. If the callables return a singleton list, the element is extracted from the list.
TypeError if true_fn or false_fn is not callable.
ValueError if true_fn and false_fn do not return the same number of tensors, or return tensors of different types.
x = tf.constant(2)
y = tf.constant(5)
def f1(): return tf.multiply(x, 7)
def f2(): return tf.add(y, 3)
r = tf.cond(tf.less(x, y), f1, f2)
# r is set to f1().
# Operations in f2 (e.g., tf.add) are not executed.
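The branch-selection behaviour can be mimicked in plain Python to show why the branches are passed as callables rather than already-computed values. This stand-in is not TensorFlow and does no graph tracing; it only illustrates the call shape:

```python
# Branches are passed as callables so that only the selected one runs.
calls = []

def f1():
    calls.append("f1")
    return 2 * 7

def f2():
    calls.append("f2")
    return 5 + 3

def cond(pred, true_fn, false_fn):
    """Plain-Python stand-in for the tf.cond call shape."""
    return true_fn() if pred else false_fn()

r = cond(2 < 5, f1, f2)
print(r, calls)  # → 14 ['f1']  (f2 was never invoked)
```

Had the call been `cond(2 < 5, f1(), f2())`, both branches would have executed before `cond` was even entered — which is exactly the eager-evaluation pitfall the `z = tf.multiply(x, y)` example above warns about for tensors created outside the branch callables.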
Data Navigator for Schools
Example capital expenditure plot
This plot shows the expected total capital expenditure over time for building, IT, fixtures and fittings, and motor vehicles. There are three different dropdowns on this page, which are explained below.
Units dropdown
Units dropdown control
1. Actual value, which is the capital expenditure calculated from survey question 44. Capital Expenditure.
2. Percentage, calculated as the capital expenditure divided by total capital expenditure for each year.
3. Per pupil, which is the capital expenditure divided by the number of pupils. The total number of pupils is calculated from survey question 13. Number of pupils by boarding type as at 31 August.
Quantiles dropdown
Quantiles dropdown control
You have the option to display different quantile bands on the chart, or none at all. A quantile is a value that divides a dataset into different parts, helping you understand how data is distributed.
As an example, the 75% quantile is the point in the dataset where 75% of the benchmark data falls below that point and 25% is above it.
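For illustration, the 75% quantile of a small, made-up benchmark sample can be computed with Python's standard library (the data values below are hypothetical, not taken from the product):

```python
import statistics

benchmark = [12, 15, 17, 20, 22, 25, 28, 30]  # hypothetical benchmark values

# statistics.quantiles with n=4 returns the three quartile cut points;
# index 2 is the 75% quantile. 'inclusive' treats the sample as the population.
q25, q50, q75 = statistics.quantiles(benchmark, n=4, method="inclusive")
share_at_or_below = sum(1 for x in benchmark if x <= q75) / len(benchmark)
print(q75, share_at_or_below)  # → 25.75 0.75
```

As the definition above says, 75% of the sample falls at or below the 75% quantile.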
Benchmark statistic dropdown
Benchmark statistic dropdown control
You have the option to display the benchmark mean or benchmark median for the capital expenditure.
Methods and Applications of Algorithmic Complexity: Beyond Statistical Lossless Compression
May 20, 2022 Books
English | 2022 | ISBN: 978-3662649831 | 276 Pages | PDF, EPUB | 44 MB
This book explores a different pragmatic approach to algorithmic complexity, rooted in or motivated by the theoretical foundations of algorithmic probability, and examines the relaxation of necessary and
sufficient conditions in the pursuit of numerical applicability, with some of these approaches entailing greater risks than others in exchange for greater relevance and applicability.
Some established and also novel techniques in the field of applications of algorithmic (Kolmogorov) complexity currently coexist for the first time, ranging from the dominant ones based upon popular
statistical lossless compression algorithms (such as LZW) to newer approaches that advance, complement, and also pose their own limitations. Evidence suggesting that these different methods
complement each other for different regimes is presented, and despite their many challenges, some of these methods are better grounded in or motivated by the principles of algorithmic information.
The authors propose that the field can make greater contributions to science, causation, scientific discovery, networks, and cognition, to mention a few among many fields, instead of remaining either
as a technical curiosity of mathematical interest only or as a statistical tool when collapsed into an application of popular lossless compression algorithms. This book goes, thus, beyond popular
statistical lossless compression and introduces a different methodological approach to dealing with algorithmic complexity.
For example, graph theory and network science are classic subjects in mathematics widely investigated in the twentieth century, transforming research in many fields of science from economy to
medicine. However, it has become increasingly clear that the challenge of analyzing these networks cannot be addressed by tools relying solely on statistical methods. Therefore, model-driven
approaches are needed. Recent advances in network science suggest that algorithmic information theory could play an increasingly important role in breaking those limits imposed by traditional
statistical analysis (entropy or statistical compression) in modeling evolving complex networks or interacting networks. Further progress on this front calls for new techniques for an improved
mechanistic understanding of complex systems, thereby calling out for increased interaction between systems science, network theory, and algorithmic information theory, to which this book contributes.
An Accurate Doppler Parameters Calculation Method of Geosynchronous SAR Considering Real-Time Zero-Doppler Centroid Control
College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China
Beijing Institute of Tracking and Telecommunication Technology, Beijing 100094, China
Author to whom correspondence should be addressed.
Submission received: 25 August 2021 / Revised: 25 September 2021 / Accepted: 7 October 2021 / Published: 11 October 2021
The zero-Doppler centroid control in geosynchronous synthetic aperture radar (GEO SAR) is beneficial to reduce the imaging complexity (reduces range-azimuth coupling in received data), which can be
realized by adjusting the radar line of sight (RLS). In order to maintain the zero-Doppler centroid throughout the whole orbit of the GEO SAR satellite, the RLS needs to be adjusted in real-time. Due
to the ultra-long synthetic aperture time of GEO SAR, the RLS variation during the synthetic aperture time cannot be neglected. However, in the previous related papers, the real-time variation of RLS
during the synthetic aperture time was not taken into account in the calculation of Doppler parameters, which are closely related to the RLS, resulting in inaccurate calculation of Doppler
parameters. Considering this issue, an accurate Doppler model (the model of relative motion between satellite and ground target) of GEO SAR is proposed in this paper for the accurate calculation of
Doppler parameters (Doppler centroid and Doppler bandwidth and other parameters). Finally, simulation experiments are designed to confirm the effectiveness and necessity of the proposed model. The
results indicate that the RLS variation during the synthetic aperture time has a considerable effect on the Doppler parameters of GEO SAR, and that accounting for it yields a more stable azimuth
resolution performance (the resolution is kept near a relatively stable value at most positions of the elliptical orbit) compared with the case that does not consider the real-time zero-Doppler centroid control.
1. Introduction
Since the concept of geosynchronous synthetic aperture radar (GEO SAR) was first proposed by K. Tomiyasu in 1978, it has been regarded as a potential SAR imaging mode to provide Earth observation with a wide swath and short revisit time (24 h) []. At present, the main research orbit schemes for GEO SAR are the near-zero-inclination orbit and the inclined orbit; the near-zero-inclination orbit is mainly studied by European scholars, while the inclined orbit is studied by American and Chinese scholars. Compared with the large antenna size and higher transmission power required by the highly inclined orbit, theoretical research on the small-inclination orbit scheme had already made great progress 20 years ago []. Since 2000, with the rapid development of electronic technology, studies of inclined GEO SAR have gained momentum again. In the inclined orbit scheme, the nadir-point trajectory of the high-inclination orbit is a large figure-8 shape, which can provide a large observation area []. These unique advantages enable GEO SAR not only to provide surface coverage for approximately one-third of the globe but also to greatly improve the response speed to emergencies in the area of interest [], which attracts increasing attention from engineers and scholars.
In space-borne SAR, the Doppler centroid is of particular importance in imaging processing []. In order to avoid the complex process of Doppler centroid estimation and to improve the imaging processing accuracy, a yaw steering method was first proposed to obtain a zero-Doppler centroid under the hypothesis of circular orbits []. However, the residual Doppler central frequency can reach kilohertz when the satellite runs in a geosynchronous orbit with non-negligible orbital eccentricity, due to the Earth's rotation and the elliptical orbit []. Reference [] proposed an attitude steering method combining pitch and rotation to accurately reduce the residual Doppler center error when the look angle of the radar is out of the expected range; however, this approach is difficult to implement in GEO SAR. Based on different orderings of pitch steering and yaw steering, two new methods of zero-Doppler centroid control, called pitch-yaw steering and yaw-pitch steering, were derived [], but the yaw angle variation can reach several tens of degrees, which is impractical for the colossal platform of GEO SAR. The two-dimensional (2D) phase scanning described by look-down and squint angles is a good substitute for attitude steering, and can be accomplished with a phased-array antenna. Moreover, we fully analyze the zero-Doppler centroid attitude steering control method in this paper. Since there are no geosynchronous SAR satellites in orbit, measured data from a specific hardware platform cannot be obtained, but the accurate Doppler model can be realized through the real-time variation of the look-down angle and the squint angle throughout the orbital period.
With consideration of the zero-Doppler centroid control, the Doppler parameters of GEO SAR, including the synthetic aperture time, Doppler bandwidth, and azimuth resolution, can be calculated for system design and demonstration. However, the real-time variation of the look-down and squint angles within the synthetic aperture time has never been considered in the calculation of the Doppler parameters, which can produce incorrect calculation results and lead to poor system design []. This paper focuses on the study of an accurate Doppler model based on real-time zero-Doppler centroid control.
This paper is organized as follows. In Section 2, the real-time zero-Doppler control is introduced. In Section 3, the Doppler model considering the real-time variation of the look-down angle and squint angle within the synthetic aperture time is accurately derived. In Section 4, simulation experiments are designed to validate the effectiveness of the proposed model. Finally, Section 5 concludes the entire study, including a discussion on future research.
2. Real-Time Zero-Doppler Centroid Control
2.1. Accurate Attitude Steering Control
The geometry of the Earth-centered inertial (ECI) coordinate system (CS) and the mass-centered orbit (MCO) CS is illustrated in Figure 1; the two systems are referred to as $xyz$ and $XYZ$, respectively. Furthermore, $\mathbf{R}_s$ and $\mathbf{R}_t$ represent the position vectors of the satellite and the target, respectively; $\mathbf{R}$ denotes $\mathbf{R}_s - \mathbf{R}_t$, whose absolute value is the slant range; and the MCO CS is centered at the mass center of the satellite []. Based on the derivations in [], the calculation formula of the Doppler centroid can be presented as:

$$f_{dc} = \frac{2}{\lambda}\,\mathbf{P}\cdot\mathbf{Q} \tag{1}$$

where $\mathbf{P}$ is a function of the satellite position and velocity, and $\mathbf{Q}$ is the unit vector of the radar line of sight (RLS). Further derivation is carried out in the MCO CS, so the two vectors can be expressed as:

$$\mathbf{P} = \left( R_s(\dot{\alpha} - \omega_e\cos\alpha_i),\; -R_s\,\omega_e\sin\alpha_i\cos\alpha,\; -\dot{R}_s \right), \qquad \mathbf{Q} = \left( 0,\; -\sin\gamma_0,\; -\cos\gamma_0 \right) \tag{2}$$

where $\alpha_i$ is the orbital inclination angle, $\alpha$ is the argument of latitude, $\dot{\alpha}$ is the derivative of $\alpha$ with respect to time, $\omega_e$ is the Earth's rotational angular velocity, and $\gamma_0$ is the original look-down angle.
In order to realize the zero-Doppler centroid, the attitude steering is operated in different orders of pitch steering and yaw steering, and the yaw and pitch angles to be adjusted have different expressions for the two control methods. The pitch steering and yaw steering operators are expressed as follows:

$$M_{\text{pitch}} = \begin{bmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{bmatrix}, \qquad M_{\text{yaw}} = \begin{bmatrix} \cos\varphi & \sin\varphi & 0 \\ -\sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{3}$$

where $\theta$ is the pitch angle and $\varphi$ is the yaw angle. Attitude steering is essentially adjusting the RLS by multiplying the vector $\mathbf{Q}$ by these matrix operators. In this paper, we only consider the pitch-yaw steering attitude control; the zero-Doppler centroid controlling equation can be expressed as:

$$\mathbf{P} \cdot \left( M_{\text{yaw}} M_{\text{pitch}} \mathbf{Q} \right) = \mathbf{P} \cdot \mathbf{Q}_{\text{pitch-yaw}} = 0 \tag{4}$$

From Equation (4), the solution can be derived as:

$$\varphi = \arctan\!\left( \frac{\omega_e \sin\alpha_i \cos\alpha}{\dot{\alpha} - \omega_e \cos\alpha_i} \right), \qquad \theta = \arctan\!\left( \frac{-\dot{R}_s / R_s}{(\dot{\alpha} - \omega_e \cos\alpha_i)\cos\varphi + \omega_e \sin\alpha_i \cos\alpha \sin\varphi} \right) \tag{5}$$
Then, based on the parameters listed in Table 1, the variation curves of the yaw angle and pitch angle are drawn in Figure 2.
2.2. Attitude Steering Implementation Method
As shown in Figure 2, the variation scope of the yaw angle can reach several tens of degrees during a satellite period, which is impractical for the colossal platform of GEO SAR. Therefore, an equivalent method called antenna phase centroid scanning has been proposed as a good substitute for attitude steering; it can be accomplished with a phased-array antenna and intuitively described with look-down and squint angles. In reality, the essence of platform rotation in attitude steering is to adjust the RLS, and the unit vector of the adjusted RLS can be denoted as:

$$\mathbf{Q}' = \left( -\sin\gamma\cos\phi,\; -\sin\gamma\sin\phi,\; -\cos\gamma \right) \tag{6}$$

where $\gamma$ is the adjusted look-down angle and $\phi$ is the squint angle, both defined in the MCO CS. By solving $\mathbf{Q}_{\text{pitch-yaw}} = \mathbf{Q}'$, we obtain the solution to antenna phase centroid scanning in Equation (7). The newly solved look-down angle and squint angle are related to the original look-down angle and the orbital parameters.

$$\gamma = \arccos\left( \cos\gamma_0 \cos\theta \right), \qquad \phi = \arccos\!\left( \frac{\sin\gamma_0\sin\varphi - \cos\gamma_0\cos\varphi\sin\theta}{\sqrt{(\sin\gamma_0\sin\varphi - \cos\gamma_0\cos\varphi\sin\theta)^2 + (\sin\gamma_0\cos\varphi + \cos\gamma_0\sin\varphi\sin\theta)^2}} \right) \tag{7}$$
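The steering solution of Equation (5) and the equivalent scan angles of Equation (7) can be evaluated numerically. The sketch below is an illustration only: the orbital rate, slant-range values, and look-down angle are made-up round numbers, not the paper's simulation inputs.

```python
import math

def steering_angles(alpha, alpha_dot, Rs, Rs_dot, alpha_i, omega_e):
    """Yaw (varphi) and pitch (theta) from Equation (5), in radians."""
    phi = math.atan2(omega_e * math.sin(alpha_i) * math.cos(alpha),
                     alpha_dot - omega_e * math.cos(alpha_i))
    denom = ((alpha_dot - omega_e * math.cos(alpha_i)) * math.cos(phi)
             + omega_e * math.sin(alpha_i) * math.cos(alpha) * math.sin(phi))
    theta = math.atan2(-Rs_dot / Rs, denom)
    return phi, theta

def scan_angles(gamma0, phi_yaw, theta_pitch):
    """Equivalent look-down and squint angles from Equation (7), in radians."""
    gamma = math.acos(math.cos(gamma0) * math.cos(theta_pitch))
    num = (math.sin(gamma0) * math.sin(phi_yaw)
           - math.cos(gamma0) * math.cos(phi_yaw) * math.sin(theta_pitch))
    other = (math.sin(gamma0) * math.cos(phi_yaw)
             + math.cos(gamma0) * math.sin(phi_yaw) * math.sin(theta_pitch))
    squint = math.acos(num / math.hypot(num, other))
    return gamma, squint

# Illustrative values: 60 deg inclination, roughly geosynchronous mean
# motion, circular orbit (Rs_dot = 0), 4.6 deg original look-down angle.
omega_e = 7.2921e-5        # rad/s, Earth rotation rate
alpha_dot = 7.2722e-5      # rad/s, assumed orbital angular rate
phi, theta = steering_angles(alpha=math.radians(75), alpha_dot=alpha_dot,
                             Rs=4.2164e7, Rs_dot=0.0,
                             alpha_i=math.radians(60), omega_e=omega_e)
gamma, squint = scan_angles(math.radians(4.6), phi, theta)
```

For a circular orbit ($\dot{R}_s = 0$) the pitch angle vanishes and the adjusted look-down angle equals the original one, as Equation (7) predicts.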
Then, the variation curves of the squint angle and look-down angle over a satellite period are drawn in Figure 3. As shown in Figure 3, the squint angle varies over a larger scope than the look-down angle. Furthermore, as the eccentricity increases, the look-down angle also varies more widely.
3. Accurate Doppler Model
In this section, a more accurate calculation method for the Doppler parameters is presented, which, unlike the formulas given in previous related papers, considers the variation of the RLS within the synthetic aperture time. In the application of zero-Doppler centroid control, the RLS should be continually adjusted along the orbit to maintain the zero-Doppler centroid, not only for the elliptical orbit but also for the near-circular orbit. Due to the ultra-long synthetic aperture time of GEO SAR, the variations of the RLS, including the look-down angle and squint angle, during the synthetic aperture time are considerable and should be taken into account when the Doppler parameters are analyzed.
The fluctuations of the squint angle and look-down angle during the synthetic aperture time are analyzed. The pitch-yaw attitude steering control method is adopted, and the argument of latitude is assumed to be $\alpha$ = 75°. The RLS fluctuation within the synthetic aperture time is shown in Figure 4.
As shown in Figure 4, the fluctuation scope of the squint angle is larger and can reach tens of degrees during the synthetic aperture time. Furthermore, with the increase of the eccentricity, the variation of the look-down angle within the synthetic aperture time grows. The look-down angle and the squint angle correspond to each other: only when both take appropriate values can the Doppler centroid be zero, so even a small fluctuation of the look-down angle plays a significant role. Consequently, the variation of the Doppler parameters caused by the real-time variation of the two angles must be fully considered within the synthetic aperture time. The following is an accurate analysis of the Doppler parameters.
First, in order to illustrate the necessity of accurately calculating the Doppler parameters, the Doppler centroid results with and without considering the variation of the RLS during the synthetic aperture time are compared. The simulation results are shown in Figure 5.
As can be seen from Figure 5, without considering the variation of the RLS within the synthetic aperture time, the Doppler centroid residual can be ignored when the eccentricity is small. However, when the eccentricity reaches 0.1, a large Doppler centroid residual is generated, which causes many problems in imaging processing. In contrast, when the variation of the RLS is considered, the Doppler centroid residual is negligible even when the eccentricity is large. Therefore, the real-time variation of the RLS must be considered in order to make the Doppler model more accurate. The parameters of synthetic aperture time, azimuth bandwidth, and azimuth resolution are then analyzed using the accurate Doppler model.
The geometry of the GEO SAR is completely different from that of the low-Earth-orbit SAR (LEO SAR). Therefore, the accuracy of the classical method of calculating the synthetic aperture time is not sufficient for the GEO SAR system, and a more accurate formula is derived. The synthetic aperture time ($T$) can be accurately defined according to the 3 dB beam width ($\theta_{bw}$):

$$\langle \mathbf{R}_s(t_1), \mathbf{R}(t_1) \rangle = \theta_{bw}/2, \qquad \langle \mathbf{R}_s(t_2), \mathbf{R}(t_2) \rangle = -\theta_{bw}/2, \qquad T = t_2 - t_1 \tag{8}$$

where $\langle \cdot \rangle$ denotes the angle between two vectors, $t_1$ is the instant when the target enters the antenna beam coverage, and $t_2$ is the instant when it leaves. In order to fully illustrate the relationship between the synthetic aperture time and the various variables, it can be expressed geometrically as in Equation (9); the geometric model is shown in Figure 6.

$$\begin{cases} T_a = |t_2 - t_1| = \dfrac{R_c}{V_g} \\[6pt] R_c = R_e \left[ \arcsin\!\left( \dfrac{|R_s(t_1)| + |R_s(t_2)|}{2 R_e} \sin\dfrac{0.443\lambda}{L_a} \right) - \dfrac{0.443\lambda}{L_a} \right] \times 2 \\[6pt] V_g = \sqrt{ V_{s\text{-}g}^2 + V_{e\text{-}g}^2 - 2 V_{s\text{-}g} V_{e\text{-}g} \cos\!\left( k(\gamma,\phi)\,\alpha_i \cos\left( k(\gamma,\phi)\,\alpha \right) \right) } \\[6pt] V_{s\text{-}g} = \sqrt{\dfrac{\mu}{a^3}}\, R_e \\[6pt] V_{e\text{-}g} = R_e\,\omega_e \cos\!\left[ \arcsin\left( \sin\alpha_i \sin\alpha \right) - \cos\!\left( k(\gamma,\phi)\,\alpha_i \cos\left( k(\gamma,\phi)\,\alpha \right) \right) \arcsin\!\left( \dfrac{|R_s(t)| \sin\gamma}{R_e} \right) - \gamma \right] \end{cases} \tag{9}$$
where $R_e$ is the radius of the Earth, $R_c$ is the arc distance traced by the RLS footprint on the Earth's surface within the synthetic aperture time (determined by the Earth-central angle swept between the satellite positions at $t_1$ and $t_2$), $\mu$ is the Earth's gravitational constant, $a$ is the semi-major axis, $V_g$ is the ground velocity, which is attributed to the movement of the satellite ($V_{s\text{-}g}$) and the rotation of the Earth ($V_{e\text{-}g}$), $\lambda$ is the signal wavelength, $L_a$ is the antenna azimuth size, $t$ is the azimuth time, and $k(\gamma,\phi)$ is a coefficient determined by the look-down angle $\gamma$ and the squint angle $\phi$. It can be seen from Equation (9) that the synthetic aperture time is related to the orbital elements, wavelength, look-down angle, and squint angle. In addition, both the ground velocity and the two beam angles change during the synthetic aperture time, which is consistent with the results in Figure 4. According to Equations (8) and (9), the synthetic aperture time can not only be calculated accurately but also be analyzed intuitively for the convenience of system design.
Next, the azimuth bandwidth is analyzed, which is of great significance to the azimuth resolution. In future GEO SAR systems, the azimuth bandwidth should be calculated as accurately as possible to meet the processing accuracy requirements. A high-order Taylor expansion slant range model is adopted for GEO SAR systems []. The basic idea of the azimuth bandwidth ($B_a$) calculation is to take the difference of the azimuth frequency between the beginning and ending positions within the synthetic aperture time, as shown in Equation (10) []:

$$f(t) = -\frac{2}{\lambda}\,\frac{dR(t)}{dt} = -\frac{2}{\lambda} \sum_{n=1}^{N} n\, k_n\, t^{n-1}, \qquad B_a = f_{\max} - f_{\min} = \frac{2}{\lambda} \sum_{n=1}^{N} n\, k_n \left( t_2^{\,n-1} - t_1^{\,n-1} \right) \tag{10}$$

where $f(t)$ is the instantaneous Doppler frequency, $R(t)$ is the variable slant range within the synthetic aperture time, and $k_n\ (n = 1, 2, \dots, N)$ are the coefficients of the high-order Taylor expansion slant range model. It can be seen from Equation (10) that the calculation of the azimuth bandwidth depends on the synthetic aperture time. Therefore, to calculate the azimuth bandwidth accurately, it is necessary to consider the variation of the RLS within the synthetic aperture time.
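Equation (10) is straightforward to evaluate numerically once the Taylor coefficients of the slant range are known. The sketch below uses made-up coefficient values and aperture limits purely for illustration; only the wavelength (L-band, 1.25 GHz) comes from the paper's parameters.

```python
# Numeric sketch of Equation (10): instantaneous Doppler frequency and
# azimuth bandwidth from a Taylor-expanded slant range
#   R(t) = k0 + k1*t + k2*t^2 + ...
# Coefficients and aperture times are illustrative assumptions.

WAVELENGTH = 0.24  # m, L-band carrier (1.25 GHz)

def doppler_freq(k, t):
    """f(t) = -(2/lambda) * dR/dt, with dR/dt = sum n*k_n*t^(n-1)."""
    drdt = sum(n * kn * t ** (n - 1) for n, kn in enumerate(k, start=1))
    return -2.0 / WAVELENGTH * drdt

def azimuth_bandwidth(k, t1, t2):
    """B_a as the Doppler spread between aperture start t1 and end t2."""
    return abs(doppler_freq(k, t2) - doppler_freq(k, t1))

# k = (k1, k2, k3): linear, quadratic, cubic slant-range coefficients
k = (10.0, 0.05, 1e-6)
Ba = azimuth_bandwidth(k, t1=-60.0, t2=60.0)  # Hz
```

With a symmetric aperture the odd-order terms cancel in the difference, so here $B_a$ is set by the quadratic coefficient alone.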
Finally, another important Doppler parameter, the azimuth resolution, is analyzed accurately. The choice of azimuth resolution depends on the requirements of the radar application scenario and must be considered in system design. The expression of the azimuth resolution $\rho_a$ can be obtained from the accurate azimuth bandwidth, as shown in Equation (11):

$$\rho_a = \frac{V_g}{B_a} \tag{11}$$

The expression for the ground velocity $V_g$ has been given in Equation (9). It is easy to see from Equation (11) that the accurate calculation of the azimuth resolution is also related to the look-down angle and the squint angle, and their variation within the synthetic aperture time must be considered.
According to the above analysis, it can be concluded that the calculation of the three Doppler parameters (synthetic aperture time, azimuth bandwidth, and azimuth resolution) is directly or indirectly related to the look-down angle and the squint angle. Moreover, these parameters play a vital role in system design and SAR processing; therefore, we must calculate them accurately to meet higher-level application requirements.
4. Simulations
Here, numerical experiments are carried out to validate the accuracy of the proposed Doppler model. We compare the results with and without considering the variation of the RLS within the synthetic aperture time. Simulations of the synthetic aperture time, azimuth bandwidth, and azimuth resolution are carried out in turn, using the parameters shown in Table 1.
First, we give the comparison results of the synthetic aperture time at different look-down angles (0°, 1.5°, and 3°; 0° is simply a reference and will not be used in a real system). The result is shown in Figure 7.
It can be seen from Figure 7 that the variation of the RLS direction within the synthetic aperture time has a great influence on the calculation of the synthetic aperture time. In some parts of the orbit, ignoring the real-time variation of the RLS introduces a large error into SAR imaging. With the increase of the look-down angle, the synthetic aperture time generally becomes longer. The values calculated by the accurate Doppler model are mostly larger than those obtained when the RLS variation is ignored, except at some singularity positions.
Then, we simulate the azimuth bandwidth and compare the results with and without the variation of the RLS within the synthetic aperture time. The comparison results are shown in Figure 8. As can be seen from Figure 8, the results differ greatly depending on whether the variation of the RLS is considered, especially at some singularity positions. If the pulse repetition frequency (PRF) is designed without considering the variation of the RLS, Doppler ambiguity will arise, which brings a range of issues to the imaging.
Finally, the comparison results of the azimuth resolution are shown in Figure 9. It can be seen from Figure 9 that whether the variation of the RLS is considered has a great influence on the calculation of the azimuth resolution, and the difference becomes more remarkable with the increase of the look-down angle. In addition, when the variation of the RLS is considered, a nearly constant azimuth resolution over the full aperture is obtained, except at some singularity positions.
For the figure-8-like orbit, the observation area can be placed on the outer or inner side of the orbit by selecting left-looking or right-looking. The singularities occur when the area is located on the inner side and the initial look-down angle is small, because the platform velocity is then nearly the same as that of the target (which derives from the Earth's rotation). In this case, the zero-Doppler centroid can be abandoned, and squint mode imaging can be used to obtain stable Doppler performance, which increases flexibility.
5. Discussion
It is important to make the Doppler centroid frequency zero in GEO SAR. In this paper, the zero-Doppler centroid is realized by pitch-yaw steering attitude control, and a concrete realization method for the GEO SAR system is given. In an actual GEO SAR system, the RLS is adjusted in real time; if this real-time variation is not considered in the calculation of the system parameters, a large calculation error is introduced and the system design will not meet expectations. In order to improve the calculation accuracy, an accurate Doppler calculation model with full consideration of the real-time variation of the RLS is proposed. Finally, simulation experiments compare the results with and without considering the variation of the RLS within the synthetic aperture time at different look-down angles. The results show that the difference between the two groups of results becomes more significant as the look-down angle increases, demonstrating the necessity of the proposed accurate Doppler model. The zero-Doppler centroid is not always adopted in GEO SAR: squint mode is a quite common working strategy for wide coverage and can improve the flexibility of system design, but its range-azimuth coupling is severe and hard to compensate. We will study the squint mode imaging algorithm in future work to improve the imaging performance.
Author Contributions
All authors have made substantial contributions to this work. F.C. and Y.J. formulated the theoretical framework. F.C. designed the simulations; F.C. carried out the simulation experiments; F.C.,
D.L. and Z.D. analyzed the simulated data; F.C. wrote the manuscript; D.L., C.Y. and Z.D. reviewed and edited the manuscript; Z.D. gave insightful and enlightening suggestions for this manuscript.
All authors have read and agreed to the published version of the manuscript.
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The data presented in this study are available on request from the corresponding author.
The authors would like to thank all those who gave valuable help and suggestions to this manuscript, which were essential to the outcome of this paper.
Conflicts of Interest
The authors declare no conflict of interest.
Figure 3. Squint and look-down angles along the orbit. (a) The eccentricity is 10^−8; (b) The eccentricity is 0.01.
Figure 4. Variations of the RLS during the synthetic aperture time. (a) The eccentricity is 10^−8; (b) the eccentricity is 0.01.
Figure 5. Doppler centroid along the orbit at different eccentricities. (a,c,e) consider the variation of the RLS, and the eccentricity is 10^−8, 0.01, and 0.1, respectively. (b,d,f) without
considering the variation of the RLS compared to (a,c,e).
Figure 7. Synthetic aperture time along the orbit. (a) With the consideration of variations of the RLS; (b) without the consideration of variations of the RLS.
Figure 8. Doppler bandwidth along the orbit. (a) With the consideration of variations of the RLS; (b) without the consideration of variations of the RLS.
Figure 9. Azimuth resolution along the orbit. (a) With the consideration of variations of the RLS; (b) without the consideration of variations of the RLS.
Table 1. Simulation parameters.

| Parameter | Value | Parameter | Value |
|---|---|---|---|
| Semi-major axis | 42,164.17 km | Right ascension of ascending node | 115° |
| Eccentricity | 1 × 10^−8 | Argument of perigee | 270° |
| Orbital inclination | 60° | Incident angle | 20° |
| Carrier frequency | 1.25 GHz | Antenna size | 30 m × 30 m |
| Pulse duration | 2.5 μs | Chirp bandwidth | 30 MHz |
Chang, F.; Yu, C.; Li, D.; Ji, Y.; Dong, Z. An Accurate Doppler Parameters Calculation Method of Geosynchronous SAR Considering Real-Time Zero-Doppler Centroid Control. Remote Sens. 2021, 13, 4061. https://doi.org/10.3390/rs13204061
Tool Preview
Must have a column with a measure of time and status (0,1) at observation.
Use a column name from the file header if the data has one, or use one from the list supplied below, or use col1....colN otherwise to select the correct column
Special characters will probably be escaped so do not use them
The column names supplied for time, status and so on MUST match either this supplied list, or if none, the original file header if it exists, or col1...coln as the default of last resort.
If there are exactly 2 groups, a log-rank statistic will be generated as part of the Kaplan-Meier test.
This is a wrapper for some elementary life table analysis functions from the Lifelines package - see https://lifelines.readthedocs.io/en/latest for the full story
Given a Galaxy tabular dataset with suitable indicators for time and status at observation, this tool can perform some simple life-table analyses and produce some useful plots. Kaplan-Meier is the
default. Cox Proportional Hazards model will be tested if covariates to include are provided.
This is always performed and a survival curve is plotted. If there is an optional "group" column, the plot will show each group separately. If there are exactly two groups, a log-rank test for
difference is performed and reported
The Cox Proportional Hazards model can be tested if a comma-separated list of covariate column names is supplied on the tool form; these columns are used as the covariates. Although not usually a real problem, some diagnostics and advice about the assumption of proportional hazards are also provided as outputs - see https://lifelines.readthedocs.io/en/latest/
A big shout out to the lifelines authors - no R code needed - nice job, thanks!
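As a sketch of what the Kaplan-Meier analysis above computes, the product-limit estimator can be written in a few lines of plain Python without the lifelines package; the toy durations and status flags below are invented for illustration (status 1 = event, 0 = censored, matching the tool's convention).

```python
# Product-limit (Kaplan-Meier) estimator sketch.
# times: observation times; events: 1 = event occurred, 0 = censored.
# The data below are invented toy values, not from any real study.

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each distinct event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = at_t = 0
        while i < len(data) and data[i][0] == t:   # group ties at time t
            at_t += 1
            deaths += data[i][1]
            i += 1
        if deaths:
            surv *= 1.0 - deaths / n_at_risk       # S(t) = prod(1 - d_i/n_i)
            curve.append((t, surv))
        n_at_risk -= at_t                          # drop events and censored
    return curve

times  = [2, 3, 3, 5, 6, 8]
events = [1, 1, 0, 1, 0, 1]
curve = kaplan_meier(times, events)
```

Censored observations (status 0) reduce the at-risk count without dropping the survival curve, which is exactly the distinction the tool's required status column encodes.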
profitable tour problem
We study the version of the asymmetric prize collecting traveling salesman problem in which the objective is to find a directed tour that visits a subset of vertices such that the length of the tour plus the sum of penalties associated with the vertices not in the tour is as small as possible. In \cite{Amico}, the authors …
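On a tiny instance, the objective just described can be checked by brute force: enumerate every subset of vertices containing the depot and every directed tour over it. The asymmetric distance matrix and penalties below are invented toy numbers, purely to make the objective concrete.

```python
from itertools import permutations

# Brute-force sketch of the (asymmetric) profitable tour objective:
# minimize tour length + penalties of the skipped vertices.
D = [
    [0, 4, 9, 7],
    [5, 0, 3, 8],
    [9, 6, 0, 2],
    [6, 8, 3, 0],
]
penalty = [0, 10, 2, 4]  # vertex 0 is the depot (never skipped)

def tour_length(order):
    """Length of the directed cycle depot -> order... -> depot."""
    cost, prev = 0, 0
    for v in order:
        cost += D[prev][v]
        prev = v
    return cost + D[prev][0]

def best_profitable_tour(n):
    best = (sum(penalty[1:]), ())        # staying home: pay all penalties
    others = range(1, n)
    for perm in permutations(others):
        for k in range(1, n):            # visit the first k of this ordering
            visited = perm[:k]
            skipped = set(others) - set(visited)
            cost = tour_length(visited) + sum(penalty[v] for v in skipped)
            if cost < best[0]:
                best = (cost, visited)
    return best

cost, tour = best_profitable_tour(4)
```

This enumerates all O(n! * n) options, so it is only a specification of the objective, not an algorithm for realistic instances.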
Dakota Prep - How to Pass your Electrical Exam
Understanding how to calculate range demand is crucial for electrical professionals. These calculations are essential for properly sizing service entrance conductors and other components of
electrical systems in residential and commercial kitchens. Mastering this topic is vital for passing NEC electrical exams and ensuring safe, code-compliant installations in the field.
Example Range Demand Calculation Questions on NEC Electrical Exams
A single-family dwelling has two ranges rated at 4.5 kW each and another range rated at 7 kW. Calculate the demand load for these kitchen appliances on the ungrounded service entrance conductors
using the standard method of calculation for dwellings.
In a commercial kitchen, there is one range rated at 15 kW. What is the calculated demand load for this range?
A residential kitchen is equipped with two ranges: one rated at 14 kW and another at 18 kW. Determine the demand load for these appliances on the service entrance conductors.
How to Identify a Range Demand Calculation Question on NEC Electrical Exams
Key phrases to look out for in range demand calculation questions:
• "Household cooking appliances"
• "Range(s)", "Oven(s)", "Cooktop(s)"
• "Demand factor"
• "Service entrance conductors"
• "kW ratings" of appliances
When you spot these elements:
• Confirm it's a range demand calculation question, not a general load calculation question
• Refer to NEC section 220.55
Range Demand Calculation Articles: NEC 220.55
To correctly apply NEC Article 220.55, focus on these main elements:
• Table 220.55: Demand Factors and Loads for Household Electric Ranges, Wall-Mounted Ovens, Counter-Mounted Cooking Units, and Other Household Cooking Appliances
• Notes 1 through 5 under Table 220.55
• Column A for loads less than 3½ kW rating
• Column B for loads 3½ kW through 8¾ kW rating
• Column C for all ranges above 8¾ kW rating
To summarize, use Table 220.55 to determine the appropriate demand factor based on the ratings of individual appliances. Pay special attention to the notes, which provide guidance for specific
scenarios and adjustments to the demand factors.
Walkthrough for a NEC Electrical Exam Range Demand Calculation Question
Example 1: Mixed Range Ratings
Question: A single-family dwelling has two ranges rated at 4.5 kW each and another range rated at 7 kW. Calculate the demand load for these kitchen appliances on the ungrounded service entrance
conductors using the standard method of calculation for dwellings.
Step 1: Identify all ranges and their ratings
• Two ranges: 4.5 kW each (Table 220.55, Column B)
• One range: 7 kW (Table 220.55, Column B)
• Column B for all ranges as they fall between 3½ kW and 8¾ kW
Step 2: Calculate the total connected load
• Total connected load = (4.5 kW + 4.5 kW + 7 kW) = 16 kW
Step 3: Find the demand factor in Column B for 3 appliances
• For 16 kW with 3 appliances, the demand factor is 55%
Step 4: Calculate the demand load
• Demand load = Total connected load × Demand factor Demand load = 16 kW × 55% = 8.8 kW
Therefore, the calculated demand load for these kitchen appliances on the ungrounded service entrance conductors is 8.8 kW.
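The Column B lookup in Example 1 is easy to script. Note the demand-factor dictionary below holds only the handful of Column B entries this tutorial uses; it is a sketch, and the full, authoritative list lives in NEC Table 220.55 itself.

```python
# Sketch of the Example 1 calculation (NEC Table 220.55, Column B).
# COLUMN_B lists illustrative demand factors for small appliance counts
# only; always verify against the actual table.
COLUMN_B = {1: 0.80, 2: 0.65, 3: 0.55}  # appliance count -> demand factor

def column_b_demand(ratings_kw):
    """Demand load (kW) for ranges rated 3.5 kW through 8.75 kW."""
    assert all(3.5 <= r <= 8.75 for r in ratings_kw)
    total = sum(ratings_kw)                 # total connected load
    return total * COLUMN_B[len(ratings_kw)]

demand = column_b_demand([4.5, 4.5, 7.0])   # 16 kW x 55% = 8.8 kW
```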
Example 2: Single Range Above 12 kW
Question: In a commercial kitchen, there is one range rated at 15 kW. What is the calculated demand load for this range?
Step 1: Identify the range and its rating & determine which column of Table 220.55 to use
• Range: 15 kW (Column C, as it's above 8¾ kW)
Step 2: Find the maximum demand in Column C
• For a 15 kW range, we start with the base maximum demand value of 8 kW in Column C
Step 3: Apply Note 1 under Table 220.55
Note 1 states: "Over 12 kW through 27 kW ranges all of same rating. For ranges individually rated more than 12 kW but not more than 27 kW, the maximum demand in Column C shall be increased 5% for
each additional kilowatt of rating or major fraction thereof by which the rating of individual ranges exceeds 12 kW."
Step 4: Calculate the increase based on rating above 12 kW
Additional kW over 12 kW = 15 kW - 12 kW = 3 kW
Increase = 5% × 3 = 15%
Step 5: Apply the increase to the base demand from Column C
Final demand load = 8 kW + (8 kW × 15%) = 8 kW + 1.2 kW = 9.2 kW
Therefore, the calculated demand load for the 15 kW range is 9.2 kW.
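Example 2's Note 1 adjustment is mechanical enough to script. The base value of 8 kW for a single Column C range is taken from the walkthrough above; treat anything beyond that as an assumption to verify against Table 220.55.

```python
import math

COLUMN_C_BASE = {1: 8.0, 2: 11.0}  # kW base demands quoted in these examples

def note1_demand(rating_kw):
    """Single range over 12 kW through 27 kW (Table 220.55, Note 1)."""
    assert 12.0 < rating_kw <= 27.0
    base = COLUMN_C_BASE[1]
    # "each additional kW or major fraction thereof": >= 0.5 kW rounds up
    extra_kw = math.floor(rating_kw - 12.0 + 0.5)
    return base * (1.0 + 0.05 * extra_kw)   # +5% per kW over 12

demand = note1_demand(15.0)                  # 8 kW + 15% = 9.2 kW
```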
Example 3: Two Ranges Above 12 kW
Question: A residential kitchen is equipped with two ranges: one rated at 14 kW and another at 18 kW. Determine the demand load for these appliances on the service entrance conductors.
Step 1: Identify the ranges and their ratings to determine which column of Table 220.55 to use
• Range 1: 14 kW (Column C)
• Range 2: 18 kW (Column C)
• Column C for both ranges as they're above 8¾ kW
Step 2: Calculate the total connected load
Total connected load = 14 kW + 18 kW = 32 kW
Step 3: Apply Note 2 under Table 220.55
Note 2 states: "Over 12 kW through 27 kW ranges of unequal ratings. For ranges individually rated more than 12 kW but not more than 27 kW, an average value of rating shall be calculated by adding
together the ratings of all ranges to obtain the total connected load (32 kW) and dividing by the number of ranges (2). Then the maximum demand in Column C shall be increased 5% for each kW or major
fraction thereof by which this average value exceeds 12 kW."
Average rating = 32 kW ÷ 2 = 16 kW
Step 4: Calculate the increase based on the average rating
Increase = (16 kW - 12 kW) × 5% = 4 × 5% = 20%
Step 5: Apply the increase to the base demand from Column C
Base demand for 2 appliances from Column C = 11 kW
Final demand load = 11 kW + (11 kW × 20%) = 11 kW + 2.2 kW = 13.2 kW
Therefore, the calculated demand load for the two ranges (14 kW and 18 kW) on the service entrance conductors is 13.2 kW.
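Note 2's averaging step can be sketched the same way. This is illustrative Python, not NEC text: the Column C base for the given number of appliances must still be read from the table, so it is passed in as a parameter, and the "major fraction" rounding is the same 0.5 kW assumption as before.

```python
import math

def unequal_ranges_demand_kw(ratings_kw, column_c_base_kw):
    """Note 2: average the ratings, then increase the Column C base by 5%
    for each kW (or major fraction) the average exceeds 12 kW."""
    avg = sum(ratings_kw) / len(ratings_kw)
    excess = avg - 12.0
    if excess <= 0:
        return column_c_base_kw
    steps = math.floor(excess) + (1 if excess - math.floor(excess) >= 0.5 else 0)
    return column_c_base_kw * (1 + 0.05 * steps)

# Two ranges, 14 kW and 18 kW; Column C base for 2 appliances is 11 kW:
print(round(unequal_ranges_demand_kw([14, 18], 11), 2))  # 13.2
```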
Want to try more of these questions & pass your electrical exam on the first try?
Discover why over 10,000 students trust Dakota Prep to ace their licensing exams and boost their scores. Our platform is used by top unions and JATCs across the country, offering a comprehensive
question bank of over 3,000 expertly designed questions. Start today for free and see how Dakota Prep can help you succeed!
Useful Examples For DBAs of Single Row Numeric Functions
In the previous post, we started talking about Single Row Functions, which return a single value for each row of a queried object; basically, they operate on one row at a time. Today we will talk about Numeric Functions and see some examples! Single Row Numeric Functions can appear in the SELECT, WHERE, START WITH, CONNECT BY, and HAVING clauses of a select statement.
Numeric Functions accept a numeric input and return a numeric value. The return value is usually a NUMBER accurate to 30, 36, or 38 decimal digits, depending on the actual function.
Transcendental functions are also numeric functions. These include the exponential, logarithm, and trigonometric functions: ACOS, ASIN, COS, SIN, TAN, EXP, LN, LOG, and more. As DBAs, we do not use transcendental functions very much.
Let’s look at numeric functions that are more commonly used by DBAs: MOD, REMAINDER, ROUND, TRUNC.
The MOD function takes two input values, a and b, and returns the remainder of a divided by b. If b is 0, MOD simply returns a, since we cannot divide by 0.
Where do we use the MOD function?
Usually, we use this function to see if a number is odd or even, or if a number is divisible by another number. Here are some examples of the MOD function in the SELECT and WHERE clauses of a SQL statement:
1) Return the remainder:
select mod(3,2) as remainder from dual;
2) Check if number is odd or even
select case mod(100,2) when 0 then 'Even' else 'Odd' end as number_type
from dual;
3) Check if number is divisible by another number:
select case mod(100,3)
when 0 then 'Divisible by 3'
else 'Not divisible by 3' end as number_type from dual;
Not divisible by 3
4) Return every second row (or third row) in a result set:
select last_name, salary, row_num
from ( select last_name, salary, rownum as row_num
       from ( select last_name, salary from hr.employees order by last_name ) )
where mod(row_num,2)=0; --returns every second row
5) Return only employees with an odd employee number:
select employee_id, last_name
from hr.employees
where mod(employee_id,2)=1; --returns only odd employee numbers
Fun Fact: You can use any numeric value in MOD or REMAINDER functions, not just integers!
select mod(1.5,2) as remainder from dual;
select mod(1.5,1.2) as remainder from dual;
The REMAINDER function is similar to MOD; the difference is that REMAINDER uses ROUND in its formula, whereas MOD uses FLOOR to calculate the values.
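The FLOOR-versus-ROUND difference is easy to see outside the database. A Python sketch (for nonnegative arguments, where the floor formula is unambiguous; Python's math.remainder computes the same round-to-nearest remainder, so it should match REMAINDER for these inputs):

```python
import math

def oracle_mod(a, b):
    """MOD(a, b): a - b*FLOOR(a/b); by definition MOD(a, 0) returns a."""
    return a if b == 0 else a - b * math.floor(a / b)

print(oracle_mod(11, 4))        # 3
print(math.remainder(11, 4))    # -1.0  (11/4 rounds to 3, and 11 - 4*3 = -1)
```

The sign flip in the second result is exactly the ROUND-versus-FLOOR difference: 3 is the *nearest* multiple count, while FLOOR picks 2.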
The ROUND function, for numbers, returns the input number rounded to the given number of places to the right of the decimal point. So ROUND(a,b) returns the number a rounded to b decimal places, where b is an integer: round(5.187,1) = 5.2.
If b is not specified, it defaults to 0 and a is rounded to the nearest integer. The integer b can also be negative; then a is rounded off to the left of the decimal point.
select round(20.4591,3) as example1, round(20.4596,3) as example2 from dual;
EXAMPLE1 EXAMPLE2
-------- --------
20.459 20.46
select round(20.2591,-2) as example3, round(20.2591,-1) as example4 from dual;
EXAMPLE3 EXAMPLE4
-------- --------
       0       20
The TRUNC function, for numbers, returns the input number truncated to the given number of decimal places. So TRUNC(a,b) returns the number a truncated to b decimal places. If you omit b, or b is 0,
then a is truncated to 0 places. b can be negative as well; then a is truncated to b digits left of the decimal point.
select trunc(20.4591,2) as example1, trunc(20.4596,3) as example2 from dual;
EXAMPLE1 EXAMPLE2
-------- --------
20.45 20.459
select trunc(20.4591,-1) as example3, trunc(20.4596,-2) as example4 from dual;
EXAMPLE3 EXAMPLE4
-------- --------
      20        0
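The same ROUND/TRUNC semantics can be emulated in Python for quick experiments. This is a sketch of the behavior, not Oracle's implementation; note that Python's built-in round() rounds ties to even, so the helper below rounds half away from zero instead, as Oracle does.

```python
import math

def oracle_round(x, d=0):
    """ROUND(x, d): round half away from zero at decimal position d."""
    f = 10.0 ** d
    return math.copysign(math.floor(abs(x) * f + 0.5) / f, x)

def oracle_trunc(x, d=0):
    """TRUNC(x, d): drop digits beyond position d; negative d truncates
    to the left of the decimal point."""
    f = 10.0 ** d
    return math.trunc(x * f) / f

print(oracle_round(20.4596, 3))   # 20.46
print(oracle_round(20.2591, -1))  # 20.0
print(oracle_trunc(20.4591, 2))   # 20.45
print(oracle_trunc(20.4596, -2))  # 0.0
```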
If you enjoyed this article, and would like to learn more about databases, please sign up to my weekly email, and you will receive my FREE guide: 7 Questions to Ask When Troubleshooting Database
Performance Problems!
If you are interested in improving your Oracle Tuning skills, check out my course theultimatesqltuningformula.com. Follow the link to get Today's Special, only $13.99 CAD!
Olympiad Maths Trainer 1 - Giftedthinkers
Welcome once more to the paradise of Mathematical Olympiad where the enthusiastic young minds are challenged by the non-routine and exciting mathematical problems!
In the first two books of this new series, students are introduced to 5 different types of mathematical problems every 12 weeks. They can then apply different thinking skills to each problem type and gradually break certain mindsets in problem-solving. The remaining four books comprise 6 different types of mathematical problems in the same manner. In essence, students are exposed to stimulating and interesting mathematical problems on which they can work creatively.
Secondly, the depth of problems in the Mathematical Olympiad should not be underestimated. The series contains additional topics such as the Konigsberg Bridge Problem, the Maximum and Minimum Problem, and some others which are not covered in the first series, Maths Olympiad – Unleash the Maths Olympian in You!
ISBN: 9789812749048
[QSMS 집중강연 Representation theory 2023-02-07] Mini-course on quantum symmetric pairs I
1. Introduction to i-quantum groups
• Date : 2023-02-07 (Tue) 09:00 ~ 11:00 AM
2023-02-13 (Mon) 09:00 ~ 11:00 AM
• Speaker : Weiqiang Wang (University of Virginia)
• Abstract :
In this minicourse, we shall introduce i-quantum groups arising from quantum symmetric pairs as a generalization of Drinfeld-Jimbo quantum groups. We will introduce i-divided powers and show how they
conceptually lead to i-Serre relations and Serre presentations for (quasi-)split i-quantum groups. We will construct a new bar involution and i-canonical basis for any integrable module over a
quantum group (viewed as modules over i-quantum groups); i-divided powers are examples of i-canonical basis elements on a (modified) i-quantum group.
2. Invitation to crystal bases for quantum symmetric pairs
• Date : 2023-02-15 (Wed) 10:00 AM ~ 12:00 PM
2023-02-17 (Fri) 10:00 ~ 11:00 AM
• Speaker : Hideya Watanabe (OCAMI)
• Abstract :
The theory of crystal bases for quantum symmetric pairs, i.e., $\imath$crystal bases, which is still in progress, is an $\imath$quantum group (also known as ``quantum symmetric pair coideal subalgebra'') counterpart of the theory of crystal bases. A goal of the theory of $\imath$crystal bases is to provide a way to recover much information about the structure of representations of $\imath$quantum groups from its crystal limit, just like the theory of crystal bases for quantum groups. In these three hours of lectures, we first review the basic theory of canonical bases and crystal bases for quantum groups, and $\imath$canonical bases for $\imath$quantum groups. Then, we introduce recent progress on the theory of $\imath$crystal bases of quasi-split locally finite type. As mentioned above, the theory of $\imath$crystal bases of arbitrary type is not complete yet. Toward a next step, we discuss how the already known theory of $\imath$crystal bases could be generalized to locally finite types. It would be a great pleasure for the speaker if the audience would be interested in and develop this ongoing project.
*This seminar will be held on Zoom.
A 12-pound box sits at rest on a horizontal surface, and there is friction between the box and the surface. One side of the surface is raised slowly to create a ramp. The friction force f opposes the direction of motion and is proportional to the normal force F_N exerted by the surface on the box. The proportionality constant is called the coefficient of friction, μ. When the angle of the ramp, θ, reaches 20°, the box begins to slide. Find the value of μ.
Answer:
The coefficient of static friction is μ ≈ 0.36397.
When we have a box on a ramp of angle θ, the force of gravity acting down the incline is the product of the box's weight and the sine of the angle:

F_parallel = W sin θ

Recall as well that the component of the box's weight perpendicular to the ramp gives the normal force:

F_N = W cos θ

and the force of static friction f is given as the static coefficient of friction μ times the normal force:

f = μ F_N = μ W cos θ

When the box starts to move, the force of static friction equals the component of the gravity force along the ramp:

μ W cos θ = W sin θ

Now we use this last equation to solve for the coefficient of static friction, recalling that the angle at which the box starts moving is 20 degrees:

μ = tan θ = tan 20° ≈ 0.36397
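The final arithmetic can be verified with two lines of Python:

```python
import math

mu = math.tan(math.radians(20))  # mu = tan(theta) at the sliding threshold
print(round(mu, 5))  # 0.36397
```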
Expectations and covariance
Having known the distribution of a set of random variables $X_1, \ldots, X_N$, what one would typically be interested in for real-life applications is to be able to estimate the average values of these random variables and the correlations between them. These are computed formally using the following expressions:

$E[X_i] = \int x_i \, P(x_1, \ldots, x_N) \, dx_1 \cdots dx_N$

$\operatorname{cov}(X_i, X_j) = E\left[(X_i - E[X_i])(X_j - E[X_j])\right]$

For example, in the case of the two-dimensional normal distribution, if we are interested in finding the correlation between the variables $X_1$ and $X_2$, it can be formally computed from the joint distribution using the following formula:

$\operatorname{corr}(X_1, X_2) = \frac{E[(X_1 - \mu_1)(X_2 - \mu_2)]}{\sigma_1 \sigma_2}$
A binomial distribution is a discrete distribution that gives the probability of $k$ heads in $n$ independent trials, where each trial has one of two possible outcomes, heads or tails, with the probability of heads being $p$. Each of the trials is called a Bernoulli trial. The functional form of the binomial distribution is given by:

$P(k; n, p) = \binom{n}{k} p^k (1-p)^{n-k}$

Here, $P(k; n, p)$ denotes the probability of having $k$ heads in $n$ trials. The mean of the binomial distribution is given by $np$ and the variance by $np(1-p)$. Have a look at the following graphs:
The preceding graphs show the binomial distribution for two values of n, 100 and 1000, with p = 0.7. As you can see, when n becomes large, the binomial distribution becomes sharply peaked. It can be shown that, in the large n limit, a binomial distribution can be approximated using a normal distribution with mean np and variance np(1-p). This is a characteristic shared by many discrete distributions: in the large n limit, they can be approximated by some continuous distributions.
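The large-n approximation is easy to check numerically. The plain-Python sketch below (function names are ours) compares the binomial pmf near its mean with the normal density with matching mean and variance:

```python
import math

def binom_pmf(k, n, p):
    """Binomial probability of k heads in n trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_pdf(x, mu, var):
    """Normal density with mean mu and variance var."""
    return math.exp(-((x - mu)**2) / (2 * var)) / math.sqrt(2 * math.pi * var)

n, p = 1000, 0.7
mu, var = n * p, n * p * (1 - p)   # mean np, variance np(1-p)
for k in (690, 700, 710):
    print(k, round(binom_pmf(k, n, p), 5), round(normal_pdf(k, mu, var), 5))
```

For n = 1000 the two columns agree to about three decimal places, which is the point of the approximation.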
The Beta distribution, denoted by $\mathrm{Beta}(\alpha, \beta)$, is a function of a power of $x$ and of its reflection $(1 - x)$, and is given by:

$\mathrm{Beta}(x; \alpha, \beta) = \frac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha, \beta)}, \quad 0 \le x \le 1$

Here, $\alpha, \beta > 0$ are parameters that determine the shape of the distribution function and $B(\alpha, \beta)$ is the Beta function given by the ratio of Gamma functions: $B(\alpha, \beta) = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}$.
The Beta distribution is a very important distribution in Bayesian inference. It is the conjugate prior probability distribution (which will be defined more precisely in the next chapter) for the binomial, Bernoulli, negative binomial, and geometric distributions. It is used for modeling the random behavior of percentages and proportions. For example, the Beta distribution has been used for modeling allele frequencies in population genetics, time allocation in project management, the proportion of minerals in rocks, and heterogeneity in the probability of HIV transmission.
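Conjugacy is what makes the Beta prior convenient in practice: a Beta(α, β) prior updated with k successes in n Bernoulli trials yields a Beta(α + k, β + n − k) posterior, so the update is just addition. A minimal sketch:

```python
def beta_update(alpha, beta, successes, trials):
    """Posterior Beta hyperparameters after observing binomial data."""
    return alpha + successes, beta + (trials - successes)

# Start from Beta(2, 2) and observe 7 heads in 10 flips:
print(beta_update(2, 2, 7, 10))  # (9, 5)
```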
The Gamma distribution, denoted by $\mathrm{Gamma}(a, b)$, is another common distribution used in Bayesian inference. It is used for modeling waiting times such as survival rates. Special cases of the Gamma distribution are the well-known Exponential and Chi-Square distributions.
In Bayesian inference, the Gamma distribution is used as a conjugate prior for the inverse of the variance of a one-dimensional normal distribution or for parameters such as the rate ($\lambda$) of an exponential or Poisson distribution.
The mathematical form of a Gamma distribution is given by:

$\mathrm{Gamma}(x; a, b) = \frac{b^a x^{a-1} e^{-bx}}{\Gamma(a)}, \quad x > 0$

Here, $a$ and $b$ are the shape and rate parameters, respectively (both take values greater than zero). There is also a form in terms of the scale parameter $s = 1/b$, which is common in econometrics. Another related distribution is the Inverse-Gamma distribution, which is the distribution of the reciprocal of a variable that is distributed according to the Gamma distribution. It is mainly used in Bayesian inference as the conjugate prior distribution for the variance of a one-dimensional normal distribution.
The Dirichlet distribution is a multivariate analogue of the Beta distribution. It is commonly used in Bayesian inference as the conjugate prior distribution for the multinomial distribution and the categorical distribution. The main reason for this is that it is easy to implement inference techniques, such as Gibbs sampling, on the Dirichlet-multinomial distribution.
The Dirichlet distribution of order $K$ is defined over an open $(K-1)$-dimensional simplex as follows:

$\mathrm{Dir}(\theta_1, \ldots, \theta_K; \alpha_1, \ldots, \alpha_K) = \frac{1}{B(\alpha)} \prod_{k=1}^{K} \theta_k^{\alpha_k - 1}$

Here, $\theta_k > 0$, $\sum_{k=1}^{K} \theta_k = 1$, and $\alpha_k > 0$; the normalizing constant is the multivariate Beta function $B(\alpha) = \frac{\prod_k \Gamma(\alpha_k)}{\Gamma\left(\sum_k \alpha_k\right)}$.
The Wishart distribution is a multivariate generalization of the Gamma distribution. It is defined over symmetric non-negative-definite matrix-valued random variables. In Bayesian inference, it is used as the conjugate prior to estimate the distribution of the inverse of the covariance matrix (or precision matrix) of the normal distribution. When we discussed the Gamma distribution, we said it is used as a conjugate distribution for the inverse of the variance of the one-dimensional normal distribution.
The mathematical definition of the Wishart distribution is as follows:

$W(X; V, n) = \frac{|X|^{(n-p-1)/2} \, e^{-\operatorname{tr}(V^{-1} X)/2}}{2^{np/2} \, |V|^{n/2} \, \Gamma_p(n/2)}$

Here, $|X|$ denotes the determinant of the matrix $X$ of dimension $p \times p$, $V$ is a $p \times p$ positive-definite scale matrix, $n \ge p$ is the degrees of freedom, and $\Gamma_p$ is the multivariate Gamma function.
A special case of the Wishart distribution, when $p = 1$ and $V = 1$, corresponds to the well-known Chi-Square distribution function with $n$ degrees of freedom.
Wikipedia gives a list of more than 100 useful distributions that are commonly used by statisticians (reference 1 in the References section of this chapter). Interested readers should refer to this list.
findIPs: Detect Influential Points for Feature Rankings
Feature rankings are important in analyzing high-throughput data, particularly for bioinformatics studies. These rankings are typically obtained by calculating the marginal correlations with the
outcomes. The ordered feature list reflects the importance of features, which plays a vital role in guiding subsequent research. For instance, researchers usually focus on a small subset of important
features that are associated with research objectives. However, the feature ranking can be distorted by a single case. A case that exerts abnormal influence on the feature ranking is termed an influential point (IP). The presence of IPs renders the feature ranking unreliable, consequently affecting the subsequent analysis based on feature rankings.
The findIPs R package is specifically designed to detect IPs for feature rankings. The method utilized in this package is based on the leave-one-out strategy, which involves comparing the rank
difference between the original ranking and a new ranking that is obtained by removing a single observation (1). The new rankings are leave-one-out rankings. The rank difference obtained through this
comparison helps to determine the influence of the deleted observation.
The whole process can be divided into three steps,
Step 1, generate the original and leave-one-out rankings using a feature ranking method, such as the t-test. A dataset with n cases will result in one original ranking and n leave-one-out feature rankings.
Step 2, calculate rank changes. It is advisable to use top-prioritized weights when comparing ranks.
Step 3, calculate the cumulative rank changes for each observation. A diagnostic check is also required to identify any potential influential points.
The findIPs package can be installed from Bioconductor using the following commands:
if (!requireNamespace("BiocManager", quietly = TRUE))
    install.packages("BiocManager")
BiocManager::install("findIPs")
The findIPs package includes the miller05 microarray data (2). The data contains 236 samples and 1,000 genes. It has two types of responses: binary and survival. The binary response is classified
based on the p53 mutation: 58 cases with a p53 mutant and 193 cases with the wild-type p53 mutation. The survival response is associated with a total of 55 recorded events.
library(findIPs)
data(miller05)
X <- miller05$X
y <- miller05$y
surv <- miller05$surv
Detect IPs using getdrop1ranks() and sumRanks()
We use a simple example where features are ranked based on t.test to demonstrate the use of the findIPs package for IPs detection. We use getdrop1ranks() to derive the original ranking and leave-one-out rankings. Features are simply ranked according to the p.values of t.test. Of note, the rank criterion is the p.value if fun = "t.test". P.values are ranked in ascending order by specifying decreasing = FALSE. We select the top 100 important features in the original ranking. The function returns an object containing a vector of the original ranking (origRank) and a matrix of leave-one-out rankings (drop1Rank).
obj <- getdrop1ranks(X, y,
fun = "t.test",
decreasing = FALSE,
topN = 100)
## List of 2
## $ origRank : chr [1:100] "202580_x_at" "205240_at" "205394_at" "212494_at" ...
## $ drop1Rank: chr [1:100, 1:236] "202580_x_at" "205240_at" "205394_at" "212494_at" ...
## ..- attr(*, "dimnames")=List of 2
## .. ..$ : NULL
## .. ..$ : chr [1:236] "GSM79114" "GSM79115" "GSM79116" "GSM79118" ...
After obtaining the original ranking and leave-one-out rankings using the getdrop1ranks() function, we use the sumRanks() function to compute the distance between them. This function provides three methods for comparing ranks: unweighted, weighted Spearman, and a method with adaptive weights. The unweighted method directly compares the ranks and assumes that all ranks have equal importance. However, this is not always the case, as the top-ranked features are usually more important. The weighted Spearman and adaptive weights methods address this issue by emphasizing the importance of the top-ranked features (3). The adaptive weights method can further adjust the weights based on the distribution of rank changes in the data.
results <- sumRanks(origRank = obj$origRank,
drop1Rank = obj$drop1Rank,
topN = 100,
method = "adaptive")
## List of 6
## $ kappa : num 0.0226
## $ score : Named num [1:236] 0.2 2.476 2.01 1.648 0.254 ...
## ..- attr(*, "names")= chr [1:236] "GSM79114" "GSM79115" "GSM79116" "GSM79118" ...
## $ origRank : chr [1:100] "202580_x_at" "205240_at" "205394_at" "212494_at" ...
## $ drop1Rank : int [1:100, 1:236] 1 2 3 4 6 5 7 8 9 10 ...
## ..- attr(*, "dimnames")=List of 2
## .. ..$ : NULL
## .. ..$ : chr [1:236] "GSM79114" "GSM79115" "GSM79116" "GSM79118" ...
## $ origRankWeighted : num [1:100] 0.025 0.0244 0.0239 0.0233 0.0228 ...
## $ drop1RankWeighted: num [1:100, 1:236] 0.025 0.0244 0.0239 0.0233 0.0223 ...
## ..- attr(*, "dimnames")=List of 2
## .. ..$ : NULL
## .. ..$ : chr [1:236] "GSM79114" "GSM79115" "GSM79116" "GSM79118" ...
The outputs of sumRanks() are diverse across the selected methods. For method = “adaptive”, sumRanks() returns a list with the following elements:
1, kappa, the shape parameter of the adaptive weights method;
2, score, the accumulated weighted rank changes, reflecting the influence of each sample;
3, origRank, the original ranking;
4, drop1Rank, the leave-one-out rankings;
5, origRankWeighted, weighted original ranking;
6, drop1RankWeighted, weighted leave-one-out rankings.
However, if the method is "weightedSpearman" or "unweighted", the function will only return three elements: "score", "origRank", and "drop1Rank". The elements "kappa", "origRankWeighted", and "drop1RankWeighted" will not be returned.
Use findIPs() to detect IPs in one-step
findIPs() combines getdrop1ranks() and sumRanks() into one step. The output is identical to that using the two-step process.
results.ipsr <- findIPs(X, y,
fun = "t.test",
decreasing = FALSE,
method = "adaptive")
identical(results, results.ipsr)
## [1] TRUE
Results visualization
findIPs package offers three visualization functions: plotIPs(), plotRankScatters(), and plotAdaptiveWeights(). plotIPs() can directly utilize the output of findIPs() or sumRanks() to create a
lollipop plot that displays the influence of each case. In Figure 1, observation 68 (obs68) appears to be the most influential on the final results. However, the difference between obs68 and the other observations is not that distinct, indicating a lower possibility of the presence of an influential observation.
par(mar = c(4, 4, 2, 2))
plotIPs(results.ipsr, topn = 5, ylim = c(0, 8))
In addition to the lollipop plot, findIPs also provides a simple visualization function, plotRankScatters(), which exhibits the distribution of rank changes using a scatter plot (Figure 2). Like plotIPs(),
plotRankScatters() simply uses the output of findIPs() or sumRanks(). According to Figure 2, we can observe more rank changes in the tail side, but less changes in the head. The black points denote
the rank changes caused by the most influential case.
par(mar = c(4, 4, 2, 2))
The plotAdaptiveWeights() function visualizes the weight function when adaptive weights are used for rank comparison, that is, method = "adaptive" in findIPs() or sumRanks(). The argument kappa
refers to the shape parameter of the weight function. Here, the optimized kappa is 0.023 (Figure 3). n is the length of the feature list. We select the top 100 features, hence, n = 100. We can
observe that more weights are allocated to the top-ranked features.
par(mar = c(4, 4, 2, 2))
plotAdaptiveWeights(results.ipsr$kappa, n = 100, type = "line")
Use findIPs in survival data
For survival analysis, we offer the option to rank features using univariate Cox model by setting fun = “cox”. The features are ranked in ascending order based on their P-values.
par(mar = c(4, 4, 2, 2))
results.cox <- findIPs(X, surv,
fun = "cox",
decreasing = FALSE,
method = "adaptive")
Customize the rank criteria
In addition to the provided ranking criteria in findIPs() or sumRanks(), which includes “t.test”, “cox”, “log2fc”, and “kruskal.test”. We can also rank features based on a specified rank criterion.
To this end, we can pass a function to the fun argument. The function should take x and y as inputs and output the rank criterion, such as p-values.
As an example, we can rank features based on the p-values obtained from kruskal.test. We can either specify fun = "kruskal.test", as this test has been implemented in the package, or define our own function and pass it to getdrop1ranks(). Both methods produce the same results.
fun <- function(x, y){
  kruskal.test(x, y)$p.value
}
kruskal.test1 <- getdrop1ranks(X, y,
fun = fun,
decreasing = FALSE)
kruskal.test2 <- getdrop1ranks(X, y,
fun = "kruskal.test",
decreasing = FALSE)
identical(kruskal.test1, kruskal.test2)
## [1] TRUE
The choice of rank comparison methods
findIPs provides three rank comparison methods: unweighted, weighted Spearman, and adaptive weights. We recommend using the adaptive weights. Here, we compare the three methods.
Impact of the torso model on the inverse localization of ischemia
In this simulation study the accuracy of the inverse localization of an ischemic lesion was investigated if a patient-adjusted general torso model and four different approximate heart models were
used. Surface ECGs were simulated by a normal heart and by hearts with 18 different ischemic lesions in 7 realistic torso models. Position of each lesion represented by a single dipole was then
searched by an inverse solution. Difference QRST integral maps reflecting differences between cardioelectric fields of the ischemic and normal hearts were used. With a standard heart model the mean
error of the lesion localization was 3.4 cm. With a standard heart model shifted to an inversely estimated position this error was 3.9 cm, for an equally shifted and properly formed and rotated heart
model the error was 2.4 cm and for a heart model properly shifted, formed and rotated the error was 1.1 cm. If realistic CT or MRI-based torso model was used the lesion localization error was 0.7 cm.
From the results it can be concluded that use of adjusted standard torso model with properly positioned and formed standard heart model can lead to acceptable accuracy of the inverse localization of
an ischemic lesion.
Ischemic lesion, Body surface potential mapping, Inverse solution, Torso and heart model
Authors: Jana Lenková; Jana Švehlíková; Milan Tyšler
Affiliation: Department of Biomeasurements, Institute of Measurement Science, Slovak Academy of Sciences
Published in: Lékař a technika – Clinician and Technology, No. 4, 2013, 43, 14–17
Category: Original research
It is believed that local cardiac ischemia caused by occlusion of single coronary artery can be non-invasively assessed by solving the inverse problem of electrocardiology using multichannel surface
ECG and proper torso and heart model (its geometry and conductivity). However, accuracy and reliability of the solution can be influenced by many factors such as the number of measured ECG leads, the
noise in ECG, selected model of the cardiac generator and inverse method or the fidelity of the used geometrical torso and heart model. In this study we concentrated on the last mentioned factor. It
is obvious that an accurate, realistic model of the heart and whole torso based on CT or MR imaging would be desirable. However, in practical situations the whole torso imaging is usually not
available and approximate models have to be used instead.
In our previous study [1] we concluded that a general torso shape, patient-specifically adjusted according to the measured patient chest dimensions with properly positioned electrodes, together with
an “accurate” heart model yields an acceptable inverse solution. In this simulation study we investigated what accuracy of the inverse solution can be expected if the adjusted general torso shape is
used together with various approximate heart models created from a general heart model.
Material and Method
Normal surface ECGs and ECGs corresponding to hearts with single ischemic lesion were first simulated using a well-defined forward model. The simulated ECGs were then used as input for the inverse
solution attempting to localize the position of each simulated lesion. Adjusted general torso shape and several heart models were used and their influence on the accuracy of the lesion localization
was investigated.
Forward simulation of surface ECGs
Surface ECGs generated by a normal heart and by hearts with 18 different ischemic lesions (Fig. 1) characterized by changed repolarization with action potential shortened by 20% were simulated.
Fig. 1: Examples of the simulated ischemic lesions: small, medium and large anterior endocardial lesions (aen1, aen2, aen3), small anterior epicardial lesion (aep1), small inferior endocardial lesion
(ien1) and small posterior endocardial lesion (pen1).
The lesions were modeled as a part of a sphere or ellipsoid in three areas typical for stenosis of one of the three main coronary vessels: anterior (a-) in the region supplied by the left descending
artery (LAD), posterior (p-) in the region supplied by the left circumflex artery (LCx) and inferior (i-) in the region supplied by the right coronary artery (RCA). They were located either at the
endocardial (-en-) or epicardial (-ep-) surface of the modeled ventricular myocardium. Three lesion sizes were modeled: small (-1) occupying 0.5-1% of the ventricular volume, medium (-2) occupying
2.5-6% and large (-3) occupying 8-14%.
The surface ECGs were simulated in 7 realistic inhomogeneous torso models (6 men, 1 woman) obtained from MRI. While the original torso and lung shapes were fully preserved, the original hearts were
substituted by simplified heart models used for the simulation of the normal and pathological activation but their size, position and rotation were adjusted in agreement with the original heart in
that torso (Fig. 2). The potentials bm(t) in 62 surface sites corresponding to a 62-lead ECG system (Amsterdam 62) were computed using the boundary element method:
bm(t) = A g(t) (1)
where A is the time independent transfer matrix that represents the properties of the inhomogeneous torso model as a volume conductor and g(t) is the multiple dipole generator in particular time
instant t of the heart activation. The dipoles represent contributions of activated elements of the ventricular model.
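As a rough illustration of equation (1), the per-instant forward computation is just a matrix-vector product. The matrix below is a random placeholder, not a real boundary-element transfer matrix, and the number of dipole elements is only an assumption:

```python
import numpy as np

# Assumed dimensions: 62 surface leads; 200 active ventricular elements,
# each contributing a dipole with 3 moment components.
n_leads, n_elems = 62, 200

rng = np.random.default_rng(0)
A = rng.normal(size=(n_leads, 3 * n_elems))  # placeholder for the BEM transfer matrix
g_t = rng.normal(size=3 * n_elems)           # multiple dipole generator at one instant t

# Equation (1): potentials bm(t) at the 62 surface sites.
b_t = A @ g_t
print(b_t.shape)
```

In the real method, A encodes the inhomogeneous volume-conductor geometry and is computed once per torso model; only g(t) changes between time instants.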
Fig. 2: Seven realistic torso models (6 men, 1 woman) based on real MRI scans were used in the forward simulations. For each torso the size, position and rotation of the inserted simplified heart
model was adjusted in agreement with the original heart in that torso.
Inverse solution using different heart models
For the inverse localization of each lesion QRST integral maps were used. Their values in 62 surface points are defined as

im = ∫_QRST bm(t) dt (2)
For identification of the pathological source (the ischemic lesion) the difference QRST integral map between the integral maps generated by the normal and the pathologically changed heart was used as input:

Δim = im[p] − im[n] = A ∫_QRST (g[p](t) − g[n](t)) dt = A Δs (3)

where g[n](t) and g[p](t) are multiple dipole generators representing the normal and the pathological activation of the ventricular myocardium and Δs represents the integral multiple dipole generator characterizing only the changes of the electrical activity in the modeled lesion.
The equivalent integral generator EIG representing the original integral multiple dipole generator Δs can be then computed from equation (3) as:
EIG = A^+ Δim (4)
where A^+ is the pseudoinverse of the transfer matrix A.
This ill-posed problem can be solved in the case of local ischemic lesions, where the EIG represents only a small volume of the myocardium, by approximating it with a single dipole. In our method [2] the EIG is computed in predefined possible positions within the ventricular volume that are about 1 cm apart. The position of the lesion is then defined as the location of the EIG that best represents the input data Δim, i.e., the one for which the root mean squared difference between Δim and the map generated by the EIG is minimal.
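A minimal sketch of this scan-and-fit procedure follows. The lead fields and input map are synthetic, noise-free placeholders; in the real method the transfer matrix comes from the BEM model and the candidate positions form the ~1 cm grid inside the ventricles:

```python
import numpy as np

rng = np.random.default_rng(1)
n_leads, n_sites = 62, 50

# Placeholder lead fields: A[i] maps a dipole moment at candidate site i to the 62 leads.
A = [rng.normal(size=(n_leads, 3)) for _ in range(n_sites)]

# Build a synthetic difference integral map Δim from a dipole at site 17.
true_site, true_moment = 17, np.array([1.0, -0.5, 0.3])
d_im = A[true_site] @ true_moment

def localize(d_im, A):
    """Return the site whose best-fit single dipole reproduces d_im with minimal RMS error."""
    best_site, best_rms = None, np.inf
    for i, A_i in enumerate(A):
        moment = np.linalg.pinv(A_i) @ d_im              # eq. (4), restricted to one site
        rms = np.sqrt(np.mean((d_im - A_i @ moment) ** 2))
        if rms < best_rms:
            best_site, best_rms = i, rms
    return best_site

print(localize(d_im, A))  # recovers site 17 in this noise-free setting
```

With measurement noise the residual at the true site is no longer zero, which is one source of the localization errors reported below.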
In the inverse calculations four torso models with standard torso shapes adjusted to the particular patient torso, the same standard lung shapes and containing different heart models were attempted:
• A – general heart model in a standard vertical position defined by the position of the ECG lead V2 and with “standard” rotation,
• B – general heart model vertically shifted to the inversely estimated position by locating the site of the initial ventricular activation to the mid septal region [3],
• C – heart model vertically shifted as in B but properly formed and rotated in agreement with the model used in the forward computation,
• D – heart model properly shifted, formed and rotated in agreement with the model used in the forward computations.
Fig. 3: Example of torso-heart models used in the inverse computations (for torso 5 in Fig. 2). Four models (A, B, C, D) with patient adjusted general torso shape and approximate heart models
described in the text. The fifth model (MRI) is the realistic MRI-based model used also in the forward computations.
The results for these 4 torso and heart models were compared with the results using the MRI-based torso models used also in the forward calculations.
For each torso-heart model used in the inverse computations two parameters were evaluated:
1. Mean error of the heart model position relative to the position used in the forward computation,
2. Mean lesion localization error defined as the distance between the gravity center of the simulated lesion and the inversely estimated position of the dipole representing the lesion.
The errors of the heart position for all 5 heart models and all 7 different patient torsos were computed. In all these cases the lesion localization errors for all 18 simulated lesions were also
evaluated. The mean values of these errors are shown in Table I.
Tab. 1. Mean heart position error and lesion localization errors for the tested heart models.
The description of individual heart models is shown in the method.
As can be seen, the worst results were obtained with the unadjusted standard heart model (model A). If the heart model was vertically moved to a position estimated by inverse localization of the initial activation site (model B), the heart position error decreased substantially but the lesion localization error still remained high. Proper forming and rotation of the heart model (model C) did not change the heart position error but decreased the lesion localization error significantly. Additional improvement of the heart model position (model D) decreased the lesion localization error to an acceptable value of 1.1 cm, which is comparable with the intrinsic error of 0.7 cm of the method itself (model MRI).
For individual cases and lesions the particular values of the lesion localization errors varied over a relatively wide range. Examples of "better" and "worse" results are shown in Fig. 4 and Fig. 5.
Fig. 4: Example of one satisfactory inverse localization of a medium anterior endocardial lesion aen2. Position of the lesion and results of the inversely located representing dipoles are shown. The
found lesion locations for models A and C and for models B and MRI are the same. All locations are within or near the simulated lesion.
Fig. 5: Example of variability of inverse localization of a large posterior endocardial lesion pen3 for heart models with different accuracy of heart position and orientation. Position of the lesion and results of the inversely located representing dipoles are shown. The localization errors correspond with the results in Tab. I.
As the inverse method places the estimated dipoles only at predefined positions, the intrinsic error of the method depends on their volume density. In our study the distance between the positions was about 1 cm, which led to a reasonable intrinsic "grid error" of 0.7 cm.
As in previous studies, the results showed that the method is not optimal for larger lesions; in such cases a modified method using a cluster of dipoles instead of one dipole performs
better [4].
The torso and heart models were selected so that they simulate the conditions that can be expected in real situations. For example, if no imaging is available and only surface dimensions can be
measured, the heart model B corresponds to that situation. If the heart size and rotation can be estimated from an ultrasound examination, the heart model C is appropriate. In many cases the CT or
MRI of the heart region is available but not of the whole torso. This situation is particularly modeled by the heart model D.
If the whole-torso CT or MRI is not available, a torso model with the shape adjusted to the patient’s chest dimensions and with an approximate, properly positioned, formed and oriented heart model
can give acceptable accuracy of the inverse localization of an ischemic lesion represented by a single dipole. However, the accuracy of the localization may vary and depends on the available
information on the heart size and position.
The present study was supported by the research grant 2/0131/13 from the VEGA Grant Agency and by the grant APVV-0513-10 from the Slovak Research and Development Agency.
Jana Lenková
Jana Švehlíková
Milan Tyšler
Department of Biomeasurements
Institute of Measurement Science
Slovak Academy of Sciences
Dúbravská cesta 9, 841 04 Bratislava
Slovak Republic
[1] LENKOVA J., SVEHLIKOVA J., TYSLER M.: Individualized model of torso surface for the inverse problem of electrocardiology.
J. of Electrocardiology, 2012, vol. 45, p. 231-236, ISSN (printed) 0022-0736. ISSN (electronic) 1532-8430.
[2] TYŠLER M., KNEPPO P., TURZOVÁ M., ŠVEHLÍKOVÁ J., KARAS S., HEBLÁKOVÁ E., HÁNA K., FILIPOVÁ S.:
Non-invasive Assessment of Local Myocardium Repolarization Changes using High Resolution Surface ECG Mapping. Physiological Research, Vol. 56, Suppl 1, 2007, S133-S141, ISSN 0862-8408.
[3] ŠVEHLÍKOVÁ J., LENKOVÁ J., DRKOŠOVÁ A., FOLTÍN M., TYŠLER M.: ECG-based assessment of the heart position in standard torso model. IFMBE Proceedings, 2012, vol. 37, p. 474-477. ISSN 1680-0737.
[4] TYSLER M., SVEHLIKOVA J.: Noninvasive finding of local repolarization changes in the heart using dipole models and simplified torso geometry. J. of Electrocardiology, 2013, vol. 46 (in press). ISSN (printed) 0022-0736, ISSN (electronic) 1532-8430.
The article was published in the journal Lékař a technika.
slurry flow through a ball mill
Gold CIL – HYTS
After grinding and classification, the ore material can meet the requirements of leaching (85~95% through 200 mesh). In the classical process, the grate ball mill and the spiral classifier constitute
the first stage grinding and classification unit, and the overflow ball mill and the hydro-cyclone constitute the second stage grinding and grading unit.
Optimization of mill performance by using
The ball load and pulp slurry detection is performed, on a mill section, at every revolution. Those raw signals are sent through a wireless link to a central unit where they are processed. The four
angles are then computed and made available online to the customer supervision system via a standard OPC link or 4–20 mA electrical signals.
Cement Manufacturing Process: What is Cement made of.
Ball mills are generally used for preliminary grinding. Second, the fine grinding, in which the size of the materials is reduced to 200 mesh. This is done by grinding in Tube Mills. ... Flow Diagram
of Dry Method of Manufacturing of Cement. (i) Preparation of Slurry: ... All the moisture is driven off from the slurry as it passes through the ...
Got Hard Rock? Choose the MDX Pump
Ball and rod mill discharge: Rod mills are best in wet applications, where the slurry has up to 50% solids by mass; however, these can be used in dry applications as well. The MDX pump's hard metal
construction resists damage in the event of rod tanglings.
Flue Gas Desulfurization (FGD) Working | Thermal Power Plant
One mill slurry tank and one slurry pump is supplied for one wet ball mill. The mill slurry pump will send limestone to ball mill classifier to classify big size limestone. Then, the overflow of the
ball mill classifier shall go to the central slurry tank. The agitator is provided to keep the slurry solids in suspension during tank usage. 3.
An Effective Mixing for Lithium Ion Battery Slurries
The lower two curves in Figure 12 represent the viscosity behaviour of the cathode slurries mixed in the 3-D [ ] and ball mill [ ] mixers using the third (multi-stage) and the first (normal mixing) procedures. These two flow curves agree extremely well and both exhibit nearly Newtonian behaviour over the same shear rate range. This suggests that the ...
US3180581A - Ball mill discharge trommel - Google Patents
US3180581A (application US210975A). Authority: US (United States). Prior art keywords: trommel, discharge, annulus, mill, opening. Prior art date / Legal status (The …
SLURRY PUMPING MANUAL - pumpfundamentals
SLURRY PUMPING MANUAL iv–1 Symbols used The terms slurry and mixture in this Manual are used interchangeably to describe a mix of any loose solids, made up in any proportions and combinations of any
particle sizes and any conveying liquid.The subscript w refers to densities and specific gravities of liquids – mostly, but not exclusively, water.
Combined DEM and SPH simulation of overflow ball mill ...
In this paper we present a study of the discharge of balls and fluid slurry out of the end of an industrial scale overflow ball mill and into a trommel using a 2-way fully coupled DEM + SPH model.
The DEM sub-model is used to represent the ball charge while the SPH method is used for the fluid slurry containing the fine product.
Coal Water Slurry Ball Mill | Ball Mill for Coal Water ...
The mixed slurry of clean coal is sent to the coal water slurry ball mill for wet grinding. During the grinding process, a certain proportion of additives needs to be added. Slurry filtration. The
coal water slurry finely ground by the coal water slurry ball mill flows through the vibrating screen for slurry filtering operation.
Augustino Makokha - Academia.edu
The slurry flow and mixing behaviour displayed a marked dependence on the mill speed and slurry viscosity. The slurry mixing time in the ball charge varied directly with the slurry viscosity but
inversely with the mill rotational speed.
Lecture 11: Material balance in mineral processing
of solid and ρ is density of slurry Mass flow rate of dry solids in pulp (slurry) ... Ball mills use ~35% water for milling and in the discharge water is further added for separation in solids Most
flotation operations are performed in between 25 ... alance in mineral processing is discussed through some problems.
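The dry-solids balance mentioned in the snippet above can be illustrated with a short, hedged numeric example (the values are made up; Cw denotes the solids mass fraction of the slurry):

```python
def dry_solids_mass_flow(q_slurry, rho_slurry, cw):
    """Mass flow of dry solids in a slurry stream: M = Q_slurry * rho_slurry * Cw.

    q_slurry   -- volumetric slurry flow rate (m^3/h)
    rho_slurry -- slurry density (kg/m^3)
    cw         -- solids mass fraction of the slurry (0..1)
    """
    return q_slurry * rho_slurry * cw

# Example: 100 m^3/h of slurry at 1500 kg/m^3 with 65% solids by mass
# (roughly the ~35% water mentioned for ball-mill feeds).
print(dry_solids_mass_flow(100.0, 1500.0, 0.65))  # 97500.0 kg/h
```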
Technical Capabilities - Polycorp
steel lined mills, to improve mill availability and mill throughput. • Study of slurry flow from SAG/AG mills to reduce pulp lifter and grate wear. • Power calculations for maximum power draw for Ball Mill Liners for maximum throughput. Installation and After Sales Support • Polycorp has trained professionals to provide supervision of Mill ...
ESP and FGD System: Absorber System
The slurry drops fall rapidly in converse flow against rising gas, through which most of the pollutant contained in gas is washed into slurry and remove by reaction with slurry. While the limestone
absorbent is being consumed, SO2, SO3, HCL and HF in the flue gas are separated, the same with most of the fly ash, and byproduct gypsum (CaSO4 ...
US5333804A - Agitator mill - Google Patents
The slurry with grinding media is forced through openings 37 in shaft extension member 30 and then along screen 50. The screen size is selected to permit the ground product or slurry to flow through
the screen into a narrow channel 60 defined below screen 50 and then into the discharge tube 44 and then out through discharge outlet 19.
Slurry system - LINWOOD MICHAEL W.
A system for separating a slurry flow includes a distributor having an inlet for receiving the slurry flow and at least two outlets for redirecting the slurry flow to respective designations. A flow
s ... (one of 3 illustrated) through return lines 30 and from the ball mill to the distribution box 16 through lines 32. [0014] The distribution ...
Permanent magnetic drum separator | Henan Deya Machinery ...
The sorting process is (refer to Figure 1) after the slurry is fed into the bottom tank 3 through the feeding box 7, the ore particles enter the feed area at the bottom of the box in a loose state
under the action of the water flow from the feed water spray system 6.
(PDF) Slurry flow in mills with TCPL — An efficient pulp ...
GDM include autogenous (ag), semiautogenous (sag) and grate discharge ball mills. The ideal slurry flow in a typical grate discharge mill is schematically shown in Fig. 1. The purpose of the pulp lifters is simply to transport the slurry passing through the grate holes into the discharge ... The geometry of conventional pulp lifters is such that ...
Slurry Flow Rate Through a Mill
Slurry Flow Rate Through a Mill The factors upon which the rate of flow of the pulp through a mill depends appear not to have received extensive investigation. In an article by Anselm translated by
Pearson a method for the calculation of the time of passage of cement through a ball mill was given.
An Overview of Lime Slaking and Factors That Affect the ...
The ball mill slakers are much more expensive than paste or slurry slakers. They are available in sizes ranging from 1000lb/hr to 50 tons/hr. Figure 3 shows an attritor type vertical ball mill lime
slaker. The ball mill slakers are equipped with an external classifier, which separates slurry from the oversized grit and impurities. The oversize ...
OSTI.GOV Technical Report: IMPROVING ENERGY EFFICIENCY VIA ...
FlowMod calculates the slurry flow through the grate and pulp lifters. Based on this data the two models were fine-tuned to fit the Cortez SAG mill. In the summer of 2004 a new design of shell
lifters were presented to Cortez and in September 2004 these lifters were installed in the SAG mill.
How Slurry Transport inside a Tumbling Grinding Mills
The results from the grate-only experiments have shown that the build-up of slurry (hold-up) inside the mill starts from the shoulder of the charge, while the toe position of the slurry progressively
moves towards the toe of the charge with increasing flowrate.
11.25 Clay Processing - US EPA
mill is mixed with water and bulk loaded as a slurry for shipping. Figure 11.25-3 depicts the process flow for ball clay processing. Indirect rotary or vibrating grate dryers are used to dry ball
clay. Combustion gases from the firebox pass through an air-to-air heat exchanger to heat the drying air to a temperature of approximately 300°C (570 ...
Pharmaceutical Technology: PRACTICAL I : BALL MILLING
It is assumed to subside completely when the grain size reaches a critical value. This is a consequence of the force applied to the slurry as two milling balls approach one another, causing a slurry
flow away from the balls prior to collision, as seen in Figure 3. The smaller the particle, the more likely it is to be caught in the slurry flow.
Effect of Slurry Solids Concentration and Ball Loading on ...
Feed Flow Rate: At steady state, the volumetric flow rate of slurry through the mill is considered to be constant. So by assuming homogeneous mixing of salt and slurry inside the mill, the mean flow
rate of pulp (solids and water) through the mill can be analytically determined based on the tracer concentration profile at the mill discharge which ...
mill lining system improves P80 and increases ...
evaluating complex slurry flow in grinding mills. SPH is especially useful in evaluating pulp lifter discharge, with problems such ... the Minerals team accurately quantified slurry flows through ...
keeping 11 of standard design, this helped in relieving the load off the ball mill. • In the third stage, the number of new-design trial ...
(PDF) Simulation of overflow ball mill discharge and ...
Discharge of pebbles, finer rock, ball scats and slurry from mills and its flow through trommels, and into other processing operations has strong impacts on overflow ball mill performance.
Publications - SMC Testing
• Modelling And Simulation Techniques Applied For Optimisation Of Mine To Mill Operations And Case Studies
• Modelling The Influence On Power Draw Of The Slurry Phase In Autogenous (AG), Semi-Autogenous (SAG) And Ball Mills
• Ore Change, Ball Load And Material Flow Effects On An Energy Based SAG Mill Model
Fluid mechanics of slurry flow through the grinding media ...
The slurry transport within the ball mill greatly influences the mill holdup, residence time, breakage rate, and hence the power draw and the particle size distribution of the mill product. However,
residence-time distribution and holdup in industrial mills could not be predicted a priori.
Studies Show Snags to Mill Optimization | E & MJ
Therefore, the null hypothesis for the Eagle study was that tracers do not move through the ball mill at the same rate as coarse particles. Coarse Particles Take Twice as Long. At Eagle, the ball milling
circuit starts with a crushing plant and ends with a 12.7-mm screen. "The ball mill is 3.20 m diameter and 4.88 m long," the study said.
Image transcription text: Where In the House Does Farmer John Keep... | Transtutors
Image transcription text
Where In the House Does Farmer John Keep His Pigs? Indicate whether the statement is true or false by circling the appropriate letter (T or F). Write this letter in the box containing the exercise number. If the statement is false, explain why. Use the figures below.
1. A, C, E, and F are all points on AH. If false, why?
2. BD intersects AH at F. If false, why?
3. EF, GE, and FG are all names for the same line. If false, why?
4. CF, HC, and FG are all line segments that lie on AH. If false, why?
5. HF, HC, and HA are all names for the same ray. If false, why?
6. AH and HA are two names for the same ray. If false, why?
7. AH and CH are two names for the same ray. If false, why?
8. EG and CF are parallel lines. If false, why?
9. ∠NOR, ∠PON, and ∠O are all names for the same angle. If false, why?
10. ∠RON and ∠NRO are two names for the same
Political Centrism and Extremism: A Mathematical Analysis | SIAM
Figure 1. Campaign posters at the office of The Washington Post. Figure courtesy of Ron Cogswell and Flickr under the Attribution 2.0 Generic (CC BY 2.0) license.
Politicians adopt various strategies to appeal to voters and win elections (see Figure 1). In a two-party system, these strategies are often driven by extensive polling and careful positioning of one
candidate relative to their opponent. Nevertheless, the strategy that helps a candidate win one election might cause them to lose another. Our recent work explores a simplified mathematical model of
this political process and uncovers unexpected phenomena and mechanisms that we then compare to real-life political strategies and outcomes [2, 3].
In the U.S., some politicians (e.g., Bill Clinton) have won elections by moving to the political center, while others (e.g., Donald Trump) have succeeded by securing the vote among their bases. We
refer to these two strategies as centrism and extremism, but also recognize that not all off-center positions can reasonably be considered “extreme.”
Some voters always vote for the candidate whose views most closely match their own — regardless of how close they actually are. They vote for the least undesirable candidate even if all contenders
are undesirable to them; we call these voters loyal. Others will abstain if no candidate closely represents their views; we call these voters disloyal (without intending any moral judgment).
The median voter theorem confirms the reasonable guess that centrism is an effective tactic when most voters are loyal [1]. We would also expect extremism to be a good strategy for a candidate with a
strong off-center base of disloyal voters. Is there anything remarkable about the optimal strategy’s transition from centrism to extremism given a change in parameters, such as voter loyalty? Can we
view this transition as a bifurcation in a dynamical system? If so, what kind of bifurcation and what does it say about politics more generally? Finally, are the answers to such questions reflected
in real-life politics?
Figure 2. Possible probability density functions of voter views. Figure courtesy of the authors.
To create a simple setting, we pretend that any voter’s or candidate’s political positions can be reduced to a single number \(x\), where negative \(x\) corresponds to “left-wing” views and positive
\(x\) corresponds to “right-wing” views. We further envision a continuum of voters with views that can be described by a probability density function \(f\) (see Figure 2). Without loss of generality,
we assume that the median of \(f\) is \(0\).
Now consider two competing candidates \(L\) and \(R\) whose views lie at positions \(\ell\) and \(r\), with \(\ell < r\). To start, we assume that voter loyalty is maximal — i.e., everyone votes for
the candidate who is closest to their own position. The vote shares of \(L\) and \(R\) will thus be
\[S_L = \int_{-\infty}^{(\ell+r)/2} f(x) ~\! dx ~~~~\mbox{and} ~~~~ S_R = \int_{(\ell+r)/2}^\infty f(x) ~\! dx.\]
Candidate \(L\) wins if \(S_L> \frac{1}{2}\) and candidate \(R\) wins if \(S_R > \frac{1}{2}\). Since the median of \(f\) is \(0\), we can equivalently say that \(L\) wins if \(\frac{\ell+r}{2} > 0\)
and \(R\) wins if \(\frac{\ell+r}{2} < 0\). This result means that the candidate who is closest to the median will win, in agreement with the median voter theorem [1].
To capture voter loyalty, we now assume that only a fraction \(g(z)\) of voters still vote if the candidate closest to them is at distance \(z\), where \(g(z)\) is a decreasing function of \(z \geq 0
\) with \(g(0)=1\). As an example, consider \(g(z) = e^{- \frac{z}{\gamma}}\) with \(\gamma > 0\). Larger values of \(\gamma\) correspond to greater voter loyalty. The two candidates’ vote shares are then
\[S_L = \int_{-\infty}^{(\ell+r)/2} f(x) e^{-\frac{|\ell-x|}{\gamma}} ~\! dx ~~~~\mbox{and} ~~~~ S_R = \int_{(\ell+r)/2}^\infty f(x)e^{-\frac{|r-x|}{\gamma}} ~\! dx.\]
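These loyalty-weighted shares are easy to evaluate numerically. A sketch with a standard normal density for \(f\) (one plausible choice, not one prescribed by the article), using a simple midpoint rule:

```python
import math

def f(x):
    # Example voter density with median 0: standard normal.
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def vote_shares(ell, r, gamma, lo=-10.0, hi=10.0, n=20000):
    """S_L and S_R with exponential loyalty g(z) = exp(-z / gamma), by midpoint rule."""
    mid = (ell + r) / 2
    h = (hi - lo) / n
    s_l = s_r = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        if x < mid:
            s_l += f(x) * math.exp(-abs(ell - x) / gamma) * h
        else:
            s_r += f(x) * math.exp(-abs(r - x) / gamma) * h
    return s_l, s_r

# Symmetric candidates split the (partially abstaining) electorate evenly,
# and the shares sum to less than 1 because disloyal voters abstain.
s_l, s_r = vote_shares(-1.0, 1.0, gamma=2.0)
print(round(s_l, 4), round(s_r, 4))
```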
We assume that the candidates will adjust their positions with time, so \(\ell=\ell(t)\) and \(r=r(t)\), governed by equations of the form
\[\frac{d \ell}{dt} = \alpha \frac{\partial S_L}{\partial \ell}, ~~~~ \frac{d r}{dt} = \beta \frac{\partial S_R}{\partial r}.\tag1\]
Here, \(\alpha\) and \(\beta\) are positive constants that measure the eagerness with which \(L\) and \(R\) adjust their positions to gain votes. We next study the changes in this system’s solutions
as \(\gamma\) varies [2]. Under some circumstances—with a polarized electorate and a significant difference between \(\alpha\) and \(\beta\)—the transition from centrism to extremism involves a
saddle-node bifurcation, and optimal candidate positions can jump discontinuously as a result. In Figure 3, we assume that \(\beta=0\) and \(\alpha=1\); \(r\) thus remains fixed while \(\ell\) moves,
governed by \(\frac{d \ell}{dt} = \frac{\partial S_L}{\partial \ell}\). For \(\gamma=2\), there is a stable fixed point near \(-1\). As \(\gamma\) rises to \(4\), the fixed point moves closer to \(-0.5\). When \(\gamma\) rises even further, the fixed point collides with an unstable fixed point, and both are annihilated. From that moment on, it is in \(L\)'s best interest to move towards \(R\)'s position at \(r=1\). \(L\)'s optimal strategy hence changes abruptly and discontinuously as \(\gamma\) rises above a threshold value that is slightly higher than \(4\).
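The jump in the optimal position can be reproduced with a toy computation. Here \(f\) is a placeholder polarized (bimodal) density, not the one used in the article; since the stable equilibria of the one-sided dynamics (\(\beta = 0\)) are local maxima of \(S_L\), a grid search over \(\ell\) locates \(L\)'s best position:

```python
import math

def f(x):
    # Placeholder polarized electorate: equal normal modes at x = -1 and x = +1 (sd 0.4).
    def phi(u, m, s):
        return math.exp(-((u - m) / s) ** 2 / 2) / (s * math.sqrt(2 * math.pi))
    return 0.5 * phi(x, -1.0, 0.4) + 0.5 * phi(x, 1.0, 0.4)

def S_L(ell, r, gamma, lo=-6.0, n=1500):
    # L's vote share: integrate f(x) * exp(-|ell - x| / gamma) up to the midpoint (ell + r) / 2.
    mid = (ell + r) / 2
    h = (mid - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        total += f(x) * math.exp(-abs(ell - x) / gamma) * h
    return total

def best_position(gamma, r=1.0):
    # With beta = 0 in equation (1), L climbs the gradient of S_L; a grid search finds the optimum.
    ells = [-3.0 + 0.05 * k for k in range(79)]  # candidate positions up to ell = 0.9 < r
    return max(ells, key=lambda e: S_L(e, r, gamma))

print(best_position(0.5))   # near the left mode: extremism pays when loyalty is low
print(best_position(50.0))  # pushed toward r: with loyal voters the pull is to the center
```

For low \(\gamma\) the optimum sits near the disloyal left base; for high \(\gamma\) the best response is to keep moving toward the opponent, mirroring the transition in Figure 3.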
Figure 3. A saddle-node bifurcation that occurs in \((1)\). Figure courtesy of the authors.
Do these types of abrupt shifts occur in real life? In 2016, Donald Trump won the U.S. presidential election by not moving towards the center and instead securing the votes of his right-of-center
base. But he lost the subsequent 2020 election, perhaps because \(\gamma\) had increased; emotions were running high and many people were determined to vote. Perhaps Trump would have won if he had
moved towards the center. And perhaps the abruptness of the transition caught many people by surprise, making them susceptible to claims that the election was conducted improperly.
Our model is a suitable example for an undergraduate course on ordinary differential equations [2]. We also wrote a paper that is specifically aimed at undergraduate students and their instructors
[3], and created a web-based app that explores our model’s parameters.
Natasa Dragovic delivered a contributed presentation on this research at the 2023 SIAM Conference on Applications of Dynamical Systems, which took place in Portland, Ore., last year.
[1] Black, D. (1948). On the rationale of group decision-making. J. Polit. Econ., 56(1), 23-34.
[2] Börgers, C., Boghosian, B., Dragovic, N., & Haensch, A. (2023). A blue sky bifurcation in the dynamics of political candidates. Amer. Math. Monthly.
[3] Börgers, C., Dragovic, N., Haensch, A., Kirshtein, A., & Orr, L. (2024). ODEs and mandatory voting. CODEE J., 17, 11.
About the Authors
Professor, Tufts University
Christoph Börgers is a professor of mathematics at Tufts University. He holds a Ph.D. from the Courant Institute of Mathematical Sciences at New York University. His current research focus is on the
probabilistic analysis of interacting particles and agents.
Assistant professor, University of St. Thomas
Natasa Dragovic is an assistant professor in the Department of Mathematics at the University of St. Thomas. She holds a Ph.D. in mathematics from the University of Texas at Austin. Dragovic’s
research lies at the intersection of probability, dynamical systems, and social science, with a focus on mathematical modeling that examine changes in opinions over time.
Senior Data Scientist, Data Intensive Studies Center
Anna Haensch is a senior data scientist in the Data Intensive Studies Center at Tufts University with a secondary appointment in the Department of Mathematics. She holds a Ph.D. in mathematics from
Wesleyan University. Haensch’s research lies at the intersection of mathematics and the social sciences; she explores the many ways through which data can contribute to a safer, more sustainable, and
more equitable world.
Improving the Thermal Efficiency and Performance of Refrigeration Systems: Numerical-Experimental Analysis of Minimization of Frost Formation
DOI: 10.32604/ee.2022.019625
Improving the Thermal Efficiency and Performance of Refrigeration Systems: Numerical-Experimental Analysis of Minimization of Frost Formation
1Federal University of Technology–Parana (UTFPR), Curitiba, Brazil
2Federal University of Technology–Parana (UTFPR), Guarapuava, Brazil
3University of Campinas (UNICAMP), Campinas, Brazil
4Federal University of Technology–Parana (UTFPR), Ponta Grossa, Brazil
*Corresponding Author: Thiago Antonini Alves. Email: antonini@utfpr.edu.br
Received: 02 October 2021; Accepted: 04 April 2022
Abstract: The frost growth on cold surfaces in evaporators is an undesirable phenomenon which becomes a problem for the thermal efficiency of the refrigeration systems because the ice layer acts as a
thermal insulation, drastically reducing the rate of heat transfer in the system. Its accumulation implies an increase in energy demand and a decrease in the performance of various components
involved in the refrigeration process, reducing its efficiency and making it necessary to periodically remove the frost, resulting in expenses for the defrost process. In the present work, a
numerical-experimental analysis was performed in order to understand the formation process of porous ice in flat plates with different surface treatments and parameters. This understanding is of
utmost importance to minimize the formation of porous ice on cold surfaces and improve equipment efficiency and performance. In this context, a low-cost experimental apparatus was developed, enabling
an experimental analysis of the phenomenon under study. The environmental conditions evaluated are the temperature of the cold surface, room temperature, humidity, and air velocity. The materials of the surfaces under study are aluminum, copper, and brass with different surface finishes, designated as smooth, grooved (hydrophilic), and varnished (hydrophobic). The numerical-experimental analysis comprises measurements and simulations of the thickness, surface temperature, and growth rate of the porous ice layer as a function of elapsed time. The numerical results were in good agreement with the experimental results, indicating that the varnished surface, with hydrophobic characteristics, is the most resistant to frost formation. The results therefore showed that applying a coating significantly reduces the frost formation process, contributing to improved thermal efficiency and performance of refrigeration systems.
Keywords: Frost layer growth; frost thickness; minimization of frost; hydrophobic surface; refrigeration systems
Nomenclature
Bi Biot number
Bim Biot number to mass transfer
cp specific heat at constant pressure [J/(kg K)]
d surface diameter of the porous ice layer
D water vapor diffusion coefficient in the air
hc convection heat transfer coefficient [W/(m2 K)]
hm convection mass transfer coefficient [m/s]
hsg sublimation enthalpy [kJ/mol]
Ja Jakob number
K thermal conductivity [W/(m K)]
m˙ water vapor phase change rate [kg/s]
Nu Nusselt number
P pressure [Pa]
q heat flux [W/m2]
t time [s]
T temperature [K] [°C]
u, v air velocity [m/s]
w absolute humidity of the air [kgwater/kgdry air]
Greek Symbols
α thermal diffusivity [m2/s]
σ surface energy [J/m2]
δ ice thickness [mm]
ε porosity of volumetric fraction
θ surface contact angle [°]
μ dynamic viscosity [Pa s]
ρ specific mass [kg/m3]
σ interfacial energy [kJ/m2]
φ source term [J]
∂ partial derivative
λ factor referring to flow turbulence
τ tortuosity
υ kinematic viscosity [m2/s]
Ф relative humidity of the air [%]
Subscripts
air air
β solid phase
c cold
conv convection
eff effective
dz differential of the coordinate on the normal shaft the flat plate
γ gas phase
f frost
sat saturated
tr transition
* dimensional
The formation of porous ice is a physical phenomenon that occurs when the flow of the mixture formed by air and water vapor comes into contact with surfaces that have temperatures below 0°C. Examples
of applications where this phenomenon occurs are aircraft wings, refrigeration system evaporators, compressor rotors, cryogenic liquid transfer and storage, and many others [1]. The equipment used in
the most diverse types of refrigeration segments, whether commercial or industrial, works with evaporation temperatures close to −10°C, favoring the generation and accumulation of porous ice (frost)
on the surface of the devices used in the process of cooling [2].
When a frost layer forms on a heat exchanger, it becomes a problem for the thermal efficiency of the system because the ice layer acts as thermal insulation, drastically reducing the rate of heat transfer in the cooling system. This problem is aggravated over time: as the thickness of the ice layer increases, the airflow decreases, and according to Liu et al. [3], in some exceptional cases the layer grows to the point of completely blocking the airflow in the heat exchanger. Avoiding ice accumulation in heat exchangers is essential for the proper functioning of the system. According to Wu et al. [4], the frost layer in heat exchangers not only increases thermal resistance but also reduces airflow, causing a drop in heat exchanger performance.
Every refrigeration system is impaired if the heat exchanger does not work properly; preventing the formation of ice buildup in the heat exchanger is essential for the proper functioning of any
refrigeration system. New energy efficiency programs are increasingly tight on the efficient use of energy sources, with actions requiring manufacturers to develop new products with high performance
and low energy demand [5].
Kinsara et al. [6] investigated the possible reduction in the formation of porous ice in evaporators, dehumidifying the air through a dehumidification system using a hygroscopic substance capable of
absorbing water. They concluded that the system under investigation was somewhat unviable, as it has a high cost for smaller applications.
Cai et al. [7] analyzed the effects of surface energy on the phase change of water vapor in the first stage of the growth of porous ice, carrying out tests on two surfaces, coated with copper and
wax, respectively, to find an effective method of restricting the growth of frosts. The results indicate that the wax coating restricts the growth of the porous ice layer, presenting a lower height
and density when compared to the surface coated with copper.
Other studies have investigated the application of coatings made of hydrophobic substances (which do not dissolve in water or repel it) and hydrophilic substances (which have an affinity for and are
soluble in water), as methods to prevent the formation of porous ice.
Liu et al. [3] investigated experimentally the anti-frosting performance of a paint made up of hydrophilic and polymeric substances. The growth of the porous ice layer was monitored, with a metallic
plate as a cold surface, half of which was covered by the paint. The results showed that the start of the nucleation phase was delayed by 15 min with a reduction of at least 40% in the thickness of
the porous ice layer, proving the effectiveness of the proposed coating.
Liu et al. [8] concluded in their studies that changes in the surface energy of the cooling plate are a viable way to alter the properties of porous ice and modify the formation of crystals, which
can originate weaker or looser ice sheets, easily removed.
Kim et al. [9] investigated the characteristics of porous ice formation by comparing surfaces with hydrophilic (polyacrylate resin) and hydrophobic (fluorinated resin) coatings against an uncoated (neutral) surface. The authors concluded that the coatings delayed the formation of porous ice on both surfaces; however, for the hydrophobic surface the effect was negligible, while for the hydrophilic one a thinner and denser ice layer was obtained.
Hermes et al. [10] proposed a semiempirical model for predicting frost accretion on hydrophilic and hydrophobic substrates, validated against experimental data obtained in-house. An algebraic expression for the frost thickness as a function of time, the modified Jakob number, the Sherwood number, the humidity gradient, and the surface contact angle was devised from frost formation theory. When compared to the experimental data for the frost thickness, the proposed semiempirical model showed errors within ±15% bounds and an average predictive error of 11.7%. Since the model carries the contact angle as an independent parameter, a sensitivity analysis of the frost growth rate with respect to it is also reported.
Amini et al. [11] investigated the frost formation and flow over a fin-and-tube heat exchanger under natural convection for various conditions of relative humidity, ambient air temperature, and mean
refrigerant temperature. The results include frost deposition, steps of frost formation, and its effect on heat transfer rate for different conditions. The results show that frost is formed only on
the tip of the fins with higher thickness from top to bottom due to the small distance between the fins. Frost causes air trapping which increases the thermal resistance and reduces heat transfer in
the system.
Niroomand et al. [12] carried out an experimental study of ice formation on a plate under natural convection. Frost thickness, mass, density, and surface roughness were measured during each
test. The effect of operating conditions (plate temperature, room temperature, and relative humidity) on the properties of the ice was investigated. In this work, it was shown that the surface
roughness of the ice is related to the shape of the ice layer, porosity, and density. It has also been found that the temperature of the plate significantly affects the roughness of the frozen
surface; as the temperature of the plate decreases, the ice layer has a high average roughness and negative asymmetry, which corresponds to a low density and highly porous ice layer. The increase in
humidity and air temperature slightly affects the average surface roughness, but not the distortion of the frozen surface.
Amer et al. [13] studied the influence of contact angle on the freezing process of water droplets on cold surfaces under natural convection. It was found that the contact angle effectively affects how ice crystals grow as well as the freezing time, to which it is directly proportional. The contact angle is a macroscopic parameter that determines the microstructure that the ice will assume in the porous layer [14].
Wang et al. [15] carried out an experimental study examining the effect of ultrasonic vibration on ice formation in a convective environment. It was found that an increase in relative humidity increased the thickness of the ice and influenced its properties. The test results indicated that non-contact vibration does not affect ice formation.
A large number of studies have been carried out in recent years to characterize the phenomenon of the formation and accumulation of porous ice (frost). However, it appears that the minimization mechanisms are not yet fully known and that new attempts and assumptions must be made to assess the real effectiveness of coatings or other minimization methods. The general objective of this
study is to perform a numerical-experimental analysis of the use of coatings on flat surfaces to ascertain the minimization of the formation of the porous ice layer. This understanding is of utmost
importance to improve refrigeration equipment efficiency and performance.
The mathematical formulation of porous ice formation on flat plates is based on the models developed by Sedano [16] and Tao et al. [17], as presented in Biglia [5]. The formation of porous ice can be divided into three stages:
i) period of crystal growth;
ii) period of pore ice layer growth;
iii) period of intense growth of the porous ice layer.
Tao et al. [17] subdivide the formation process based on the last two stages as:
i) one-dimensional crystal growth;
ii) crystal branching and formation of the porous layer.
The parameter called transition time (ttr), which establishes the beginning and the end of each stage and indicates the transition between them, is used to perform the mathematical formulation of
this phenomenon. The first stage can itself be subdivided into two phases:
i) formation of ice cores;
ii) one-dimensional growth.
Each stage has a specific mathematical modeling. In the second stage, where crystal branching occurs, the ice is considered to be porous, increasing the complexity of the modeling. To solve the
equations governing this stage, the volumetric averaging technique developed by Whitaker [18] for modeling the drying process in porous media was used.
2.1.1 Modeling the First Stage of the Frost Formation Process
In the first stage of porous ice formation, nuclei are formed, with one-dimensional growth in the direction perpendicular to the surface. Although this stage is short, it remains important in the mathematical modeling of the formation process: it is a prerequisite for reaching the second stage, where its influence becomes noticeable.
In the formation process of this phenomenon, heat and mass transfer happen simultaneously; consequently, the governing equations for this stage are obtained from heat and mass balances. Eqs. (1) and
(2) represent the mathematical modeling of the first stage of the pore ice formation process, which is considered to be one-dimensional.
i) Energy Equation:
ii) Diffusion Equation:
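The bodies of Eqs. (1) and (2) did not survive extraction and are not reproduced here. For the one-dimensional heat and mass balances described, with the thermal diffusivity α and vapor diffusion coefficient D of the nomenclature, the governing equations would plausibly take the standard form below (a hedged reconstruction, not a verbatim copy of the paper's equations):

```latex
% Reconstructed sketch of the first-stage governing equations (1)-(2);
% alpha and D are as defined in the nomenclature, z normal to the plate.
\frac{\partial T}{\partial t} = \alpha \,\frac{\partial^{2} T}{\partial z^{2}}
\qquad \text{(energy)}
\qquad\qquad
\frac{\partial w}{\partial t} = D \,\frac{\partial^{2} w}{\partial z^{2}}
\qquad \text{(vapor diffusion)}
```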
2.1.2 Modeling the Second Stage of the Frost Formation Process
In modeling this stage, the model developed by Whitaker [18] for porous media is used, based on local volumetric averaging. The mathematical formulation of the phenomenon of porous ice formation in
the second stage is developed from the basic equations governing the heat and mass transport phenomenon. Eqs. (3) and (4) correspond to the model of the second stage of the porous ice formation process:
i) Ice Phase Continuity Equation (β):
ii) Gas Phase Diffusion Equation (γ):
The program was implemented in Python using the Jupyter Notebook environment, an open-source application (BSD license) that allows the creation and execution of routines containing iterative code and equations, as well as plotting (Matplotlib). The equations that model the first stage of the porous ice layer growth are solved using finite differences. In the discretization, central differences are applied to the spatial derivatives at the intermediate points, with an implicit formulation in time up to the transition time. On a fixed mesh, the property values are interpolated onto each new position of the boundary using the spline method, a polynomial interpolation technique in which the interval of interest is divided into several subintervals, each interpolated with a polynomial of low degree.
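As a sketch of the first-stage scheme just described (an illustration under assumed values, not the authors' actual code), the following solves a 1D transient conduction equation ∂T/∂t = α ∂²T/∂z² with central differences in space and an implicit backward-Euler step in time, then remaps the profile onto a grown layer. NumPy's linear interpolation stands in for the cubic-spline step the paper describes, and all physical values are illustrative.

```python
import numpy as np

def implicit_step(T, alpha, dt, dz):
    """One backward-Euler step of dT/dt = alpha * d2T/dz2 (Dirichlet BCs)."""
    n = T.size
    r = alpha * dt / dz**2
    A = np.zeros((n, n))
    A[0, 0] = A[-1, -1] = 1.0           # boundary rows: temperatures held fixed
    for j in range(1, n - 1):           # central differences, implicit in time
        A[j, j - 1] = -r
        A[j, j]     = 1.0 + 2.0 * r
        A[j, j + 1] = -r
    return np.linalg.solve(A, T)

# Illustrative setup: 1 mm frost layer, cold plate at -20 C, frost surface at -5 C.
z = np.linspace(0.0, 1.0e-3, 21)        # m
T = np.full(z.size, -5.0)
T[0] = -20.0                            # cold surface
alpha = 1.0e-7                          # m^2/s (assumed, not from the paper)
for _ in range(2000):
    T = implicit_step(T, alpha, dt=0.1, dz=z[1] - z[0])

# When the layer grows, remap the profile onto the new (thicker) fixed mesh;
# this stands in for the spline interpolation step described in the text.
z_new = np.linspace(0.0, 1.1e-3, 21)
T_new = np.interp(z_new, z, T, right=T[-1])
```

With Dirichlet boundaries the profile relaxes to the linear steady state between the two fixed temperatures, which makes the scheme easy to sanity-check.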
i) Discretization of the Energy Equation:
ii) Discretization of the Diffusion Equation:
The differential equations that model the second stage are again solved using the finite difference method: centered finite differences for the spatial derivatives and, at the intermediate points, the upwind technique for the time derivative. A fixed mesh is used, as before, and the values of the variables and properties under study are interpolated onto each new boundary position. The differential equations are solved iteratively until the difference between the values of the variables in two successive iterations is 10−5 or less.
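The stopping rule just described — iterate until two successive iterates differ by at most 10⁻⁵ — can be sketched with a simple Jacobi sweep on a steady 1D problem. This is an illustrative stand-in for the paper's coupled solver, with made-up boundary temperatures.

```python
import numpy as np

def iterate_to_tolerance(T, tol=1e-5, max_iter=100_000):
    """Jacobi iteration for the steady 1D problem T_j = (T_{j-1} + T_{j+1}) / 2,
    stopped when two successive iterates differ by at most `tol`."""
    T = T.copy()
    for k in range(max_iter):
        T_new = T.copy()
        T_new[1:-1] = 0.5 * (T[:-2] + T[2:])   # interior Jacobi update
        diff = np.max(np.abs(T_new - T))       # change between iterates
        T = T_new
        if diff <= tol:
            return T, k + 1
    raise RuntimeError("did not converge within max_iter")

T0 = np.zeros(11)
T0[0], T0[-1] = -20.0, -5.0                    # fixed boundary temperatures
T, iters = iterate_to_tolerance(T0)
```

Note that the 10⁻⁵ criterion bounds the change between iterates, not the error itself; for slowly converging iterations the remaining error can be somewhat larger than the tolerance.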
i) Discretization of the Energy Equation:
ii) Discretization of the Gas Phase (Vapor) Diffusion Equation:
\dot{m}_j^{\,n} = \left(1 - \frac{\rho_{v,j}^{\,n}}{P_1}\right)^{-1} \Big\{ \big[\tfrac{1}{12}(1-\varepsilon_\beta)_{j-1} - \gamma_{j+1/2}\big]^{n-1}\rho_{v,j-1}^{\,n} + \big[\tfrac{5}{6}(1-\varepsilon_\beta)_{j} + \gamma_{j+1/2} + \gamma_{j-1/2}\big]^{n-1}\rho_{v,j}^{\,n} + \big[\tfrac{1}{12}(1-\varepsilon_\beta)_{j+1} - \gamma_{j+1/2}\big]^{n-1}\rho_{v,j+1}^{\,n} - \delta_j^{\,n-1} \Big\} \quad (8)
iii) Discretization of the β-Phase Continuity Equation:
The flowcharts of the solution algorithms for the first and second stage of porous ice formation are illustrated in simplified form in Figs. 1a and 1b, respectively.
The experimental apparatus, shown in Fig. 2, consists of a test section containing a Peltier TEC1-12706 thermoelectric chip, a finned heat sink with Cooler Master™ Hyper T4 heat pipes, a Keysight™ 34970A data acquisition system with a Keysight™ 34901A multiplexer with 18 channels, a Keysight™ U8002A power supply, a Sony™ Cyber-Shot DSC-W530 digital camera with 14.1 MP and 90 DPI, a Polaroid™ tripod, a Dell™ notebook, and an NHS™ UPS. The test section consists of an acrylic box (casing), an aluminum support base, and a Multilaser™ axial fan. A schematic of the experimental apparatus is shown in Fig. 3.
The methodology used in the experimental procedures can be divided into 10 steps, as follows:
#1) isolating the testing environment;
#2) turning on the cooling, control and data acquisition systems;
#3) waiting the required interval of time for the environmental parameters to be in the steady conditions;
#4) fixing the sample plate to be tested in place with thermal paste;
#5) turning on the electrical components of the experimental apparatus and setting the inner air velocity to 0.5 m/s through the control and data acquisition systems;
#6) preparing and checking the measurement systems, such as the digital camera, infrared thermometer, the thermocouples and other sensors;
#7) performing the first measurement, time 0;
#8) activating the cold surface through the power source at a voltage of 11.9 V;
#9) performing the measurements at 10 min intervals over a total time of 90 min, collecting all data in a digital spreadsheet;
#10) saving the obtained data for analysis with the specifics of the sample plate being tested.
The flat surfaces used in the experiments are square plates of aluminum, copper, and brass, with a 40 mm edge and 2 mm thickness, with different surface finishes: smooth (Sample I), grooved (Sample II, hydrophilic), and varnished (Sample III, hydrophobic), as shown in Table 1 and Fig. 4. The objective of this experimental study is to analyze the formation and growth of porous ice on cold surfaces by comparing different surface treatments and materials.
To prepare the square samples, they were initially cut 40 mm wide. They were then ground with silicon carbide sandpaper of successively smaller grain size, rotating them by 90° for each subsequent sandpaper. The adopted sequence was #180, #220, #320, #400, and #600 mesh, using the maximum rotation available in the equipment, aiming to eliminate scratches and marks on the sample surfaces. After the grinding process, the samples were cleaned with water and then ethyl alcohol to leave the surface free of dust and abrasive residue.
This completed the surface treatment of the smooth surface (Sample I). The grooved plates (Sample II) were obtained by milling 1 mm × 1 mm grooves along the plate. Finally, Sample III received a varnish layer, generating the hydrophobic surface.
The experimental uncertainty analysis accounted for the uncertainties in the frost thickness, temperatures, humidity, air velocity, and time. The data collected in the experimental tests have the uncertainties shown in Table 2.
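Where the listed uncertainty sources are independent, a standard way to combine them into a single measurement uncertainty is in quadrature (root-sum-square). The component values below are hypothetical, not those of Table 2:

```python
import math

def combined_uncertainty(*components):
    """Root-sum-square combination of independent standard uncertainties."""
    return math.sqrt(sum(c * c for c in components))

# Hypothetical contributions to a frost-thickness reading (mm): scale
# calibration, pixel quantization, and image-alignment error.
u_total = combined_uncertainty(0.03, 0.04, 0.0)
```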
The experimental results presented refer to the thickness, the surface temperature, and the growth rate of the ice layer on aluminum, copper, and brass flat plates, with different surface treatments
(smooth plate, hydrophilic surface, and hydrophobic surface), in the time interval of 10 min during 90 min.
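Given thickness readings at the 10 min intervals described above, the frost growth rate can be estimated by finite differences over the measurement series; the thickness values below are made up for illustration, not the paper's data:

```python
import numpy as np

t_min = np.arange(0, 100, 10)                      # 0, 10, ..., 90 min
delta_mm = np.array([0.0, 0.4, 0.7, 0.95, 1.15,    # hypothetical frost
                     1.3, 1.42, 1.52, 1.6, 1.66])  # thickness readings (mm)
rate = np.gradient(delta_mm, t_min)                # growth rate, mm/min
```

The decaying rate reflects the insulating effect of the growing layer: as frost thickens, the surface warms and growth slows.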
For the analysis of the frost thickness, the ImageJ© software was used to process the images. The same measurement scale was set according to the specifications of the digital camera used, such as resolution and DPI (dots per inch), which determine the width and height of the image file; this enables the conversion of a length (here the frost height) from pixels into millimeters, and subsequently allows the superimposition of all images with the initial time as reference, as exemplified in Fig. 5.
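The pixel-to-millimeter conversion follows directly from the image DPI: one inch is 25.4 mm, so each pixel spans 25.4/dpi mm. A minimal sketch, using the 90 DPI figure quoted for the camera (the pixel counts are illustrative):

```python
def pixels_to_mm(pixels, dpi):
    """Convert an image length in pixels to millimeters via the DPI.
    One inch is 25.4 mm, so each pixel spans 25.4/dpi mm."""
    return pixels * 25.4 / dpi

height_mm = pixels_to_mm(90, dpi=90)   # 90 px at 90 DPI -> 25.4 mm
```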
Fig. 6 shows the experimental results obtained comparing the aluminum, brass, and copper samples, without specific surface treatment, with the environmental parameters presented in Table 3.
The results presented in Fig. 6 show good agreement, regarding the growth of the porous ice layer thickness over time, with the results published in the literature by Liu et al. [19] and Sommers et al. [20].
Fig. 7 shows the experimental results obtained comparing the aluminum, brass, and copper in the grooved sample (hydrophilic surface), with the environmental parameters presented in Table 4.
During the formation and increase of the thickness of the porous ice layer, the thermal conductivity of the flat surface material has a great influence on the convection heat transfer process that
occurs between the cold surface and the fluid flow, intensifying the exchange by increasing the thermal conductance and decreasing the thermal resistance.
Fig. 8 shows the experimental results of the hydrophobic surface for flat plates of different materials, with the environmental parameters presented in Table 5.
The increase in the thickness of the porous ice layer occurs in the same way on all surfaces, with small deviations, indicating that varnishing (a hydrophobic coating) imposes its characteristic behavior on the phenomenon regardless of the base material used, masking the substrate's thermal characteristics.
Fig. 9 compares the three surface treatments in terms of porous ice thickness for the aluminum-, copper-, and brass-based materials, respectively. It shows that regardless of the base material, the smooth and varnished samples present similar behaviors: the smooth samples keep close values, and the varnished ones show both coherence and repeatability. For the grooved samples, the observed behaviors show no consistent trend attributable to the base materials or environmental parameters, leaving no possibility of prediction; on the brass base material, for example, the porous ice layer is thicker up to 80 min than on the smooth and varnished samples.
The numerical data obtained in this study were validated, based on the numerical models proposed by Sedano [16] and Maldonado et al. [1], through the experimental data collected during the tests.
Fig. 10 shows the thickness of the porous ice layer according to the base material of the flat plate surfaces, in which it is noticed that the behavior is similar to the one obtained experimentally.
The graph compares the experimental results with those obtained numerically, on smooth surfaces of different materials and on the hydrophobic varnished surface; the largest percentage difference between the numerical and experimental results is ∼24%, at 20 min, for the copper Sample I. Some differences are noted in the curves that predict the growth of the porous ice layer
in the varnished and brass (smooth) samples, which behave differently from the others. In the case of the brass surface, this is possibly due to the value of k0,eff used (that adopted from Rohsenow et al. [21]), since several brass alloys exist. For the varnished surface, two possible factors explain the deviations: the value of k0,eff, as with the brass surface but extrapolated in this case, and the tendency toward stability observed after 70 min, noting that no experimental tests were performed beyond the fixed time of 90 min to validate the model.
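The ∼24% figure quoted above is a relative percentage difference between the numerical prediction and the measurement; a sketch of that comparison metric, with made-up thickness values:

```python
def percent_difference(numerical, experimental):
    """Relative deviation of a numerical prediction from a measurement, in %."""
    return abs(numerical - experimental) / abs(experimental) * 100.0

# Hypothetical frost thicknesses (mm) at one time instant.
dev = percent_difference(numerical=0.62, experimental=0.50)
```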
Frost growth on the heat exchanger surface can significantly alter the system performance. An experimental and numerical analysis of the porous ice formation process is of utmost importance to avoid
this physical problem and improve the thermal efficiency and performance of equipment that are subject to this problem. The present study was aimed at understanding the formation process of porous
ice in flat plates with different surface treatments and parameters. This understanding is an important tool to minimize the formation of porous ice on cold surfaces and improve equipment efficiency
and performance. A low-cost experimental apparatus was conceived and developed, showing itself capable of providing the phenomenon of porous ice formation on flat surfaces. The environmental
conditions evaluated were the temperature of the cold surface, room temperature, humidity, and air velocity. The materials of the surfaces under study were aluminum, copper, and brass with different surface finishes, designated as smooth, grooved (hydrophilic), and varnished (hydrophobic). Results showed that application of a coating allowed a significant reduction in the porous ice formation process. The results are in good agreement with the literature describing the behavior of porous ice formation and the frost growth rate. Regarding the surface treatments used, the smooth
and varnished samples showed coherent behaviors, allowing numerical predictions, a fact not seen in the grooved surfaces. The results indicate that the simple application of varnish, which provides a
hydrophobic surface, results in an effective decrease in the layer of porous ice, especially in materials with high thermal conductivity, since it transmits its characteristics and properties to the
surface in question. Finally, it is emphasized that this research brings a contribution to science as the numerical and experimental results are compatible with the existing ones in the literature,
and can serve as support for new investigations regarding the minimization of frost formation. It is essential to offer contributions on frost formation reduction, since the phenomenon generates unnecessary energy demand, damages refrigeration equipment, and consequently brings financial loss due to maintenance.
Acknowledgement: The authors acknowledge the Federal University of Technology–Parana (UTFPR), Ponta Grossa/Brazil.
Funding Statement: The authors received no specific funding for this study.
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
1. Maldonado, P. A. D., Silva, R. C. R., Salinas, C. T., Biglia, F. M., Antonini Alves, T. (2022). Experimental and numerical study of frost formation with natural convection in a triangular
arrangement of slender vertical tubes. Thermal Science and Engineering Progress, 27, 101138. DOI 10.1016/j.tsep.2021.101138. [Google Scholar] [CrossRef]
2. Ismail, K. A. R., Salinas, C., Gonçalves, M. M. (1997). Frost growth around a cylinder in a wet air stream. International Journal of Refrigeration, 20, 106–119. DOI 10.1016/S0140-7007(96)00065-5.
[Google Scholar] [CrossRef]
3. Liu, Z., Wang, H., Zhang, X., Meng, S., Ma, C. (2006). An experimental study on minimizing frost deposition on a cold surface under natural convection conditions by use of a novel anti-frosting
paint. Part I. Anti-frosting performance and comparison with the uncoated metallic surface. International Journal of Refrigeration, 29, 229–236. DOI 10.1016/j.ijrefrig.2005.05.018. [Google Scholar] [CrossRef]
4. Wu, X., Dai, W., Shan, X., Wang, W., Tang, L. (2007). Visual and theoretical analyses of the early stage of frost formation on cold surfaces. Journal of Enhanced Heat Transfer, 3, 257–268. DOI
10.1615/JEnhHeatTransf.v14.i3.70. [Google Scholar] [CrossRef]
5. Biglia, F. M. (2018). Numerical-experimental analysis of the minimization frost formation on flat plates (in Portuguese) (M.Sc. Dissertation). Department of Mechanical Engineering, Federal
University of Technology-Parana, Ponta Grossa, Brazil. [Google Scholar]
6. Kinsara, A. A., Al-Rabghi, O. M., Elsayed, M. M. (1997). Parametric study of an energy efficient air conditioning system using liquid desiccant. Applied Thermal Engineering, 18, 327–335. DOI
10.1016/S1359-4311(97)00037-9. [Google Scholar] [CrossRef]
7. Cai, L., Hou, P. X., Wang, R. H., Zhang, X. S. (2010). Effects of different characteristic surfaces at initial stage of frost growth. Journal of Central South University of Technology, 17,
413–418. DOI 10.1007/s11771-010-0061-z. [Google Scholar] [CrossRef]
8. Liu, Z., Zhang, X., Wang, H., Meng, S., Cheng, S. (2007). Influences of surface hydrophilicity on frost formation on a vertical cold plate under natural convection conditions. Experimental Thermal
and Fluid Science, 31, 789–794. DOI 10.1016/j.expthermflusci.2006.08.004. [Google Scholar] [CrossRef]
9. Kim, K., Lee, K. S. (2011). Frosting and defrosting characteristics of a fin according to surface contact angle. International Journal of Heat and Mass Transfer, 54, 2758–2764. DOI 10.1016/
j.ijheatmasstransfer.2011.02.065. [Google Scholar] [CrossRef]
10. Hermes, C. J. L., Sommers, A. D., Gebhart, C. W., Nascimento Jr., V. S. (2018). A semi-empirical model for predicting frost accretion on hydrophilic and hydrophobic surfaces. International
Journal of Refrigeration, 87, 164–171. DOI 10.1016/j.ijrefrig.2017.09.022. [Google Scholar] [CrossRef]
11. Amini, M., Yaghoubi, M., Pishevar, A. R. (2019). Analysis of frost visualization over a fin and tube heat exchanger by natural convection. Experimental Heat Transfer, 32, 36–50. DOI 10.1080/
08916152.2018.1473528. [Google Scholar] [CrossRef]
12. Niroomand, S., Fauchoux, M. T., Simonson, C. J. (2019). Experimental characterization of frost growth on a horizontal plate under natural convection. Journal of Thermal Science and Engineering
Applications, 11, 011020. DOI 10.1115/1.4040989. [Google Scholar] [CrossRef]
13. Amer, M., Wang, C. C. (2020). Experimental investigation on defrosting of a cold flat plate via ultrasonic vibration under natural convection. Applied Thermal Engineering, 179, 115729. DOI
10.1016/j.applthermaleng.2020.115729. [Google Scholar] [CrossRef]
14. Huang, L., Liu, Z., Liu, Y., Gou, Y., Wang, L. (2012). Effect of contact angle on water droplet freezing process on a cold flat surface. Experimental Thermal and Fluid Science, 40, 74–80. DOI
10.1016/j.expthermflusci.2012.02.002. [Google Scholar] [CrossRef]
15. Wang, F., Liang, C., Zhang, X. (2016). Visualization study of the effect of surface contact angle on frost melting process under different frosting conditions. International Journal of
Refrigeration, 64, 143–151. DOI 10.1016/j.ijrefrig.2016.01.008. [Google Scholar] [CrossRef]
16. Sedano, C. T. S. (1996). Ice formation in flat plate (in Portuguese) (M.Sc. Dissertation). School of Mechanical Engineering, University of Campinas, Campinas/Brazil. [Google Scholar]
17. Tao, Y. X., Besant, R. W., Rezkallah, K. S. (1993). A mathematical model for predicting the densification and growth of frost on a flat plate. International Journal of Heat and Mass Transfer, 36,
353–363. DOI 10.1016/0017-9310(93)80011-I. [Google Scholar] [CrossRef]
18. Whitaker, S. (1977). Simultaneous heat, mass, and momentum transfer in porous media: A theory of drying. Advances in Heat Transfer, 13, 119–203. DOI 10.1016/S0065-2717(08)70223-5. [Google Scholar] [CrossRef]
19. Liu, Z., Gou, Y., Wang, J., Cheng, S. (2008). Frost formation on a super-hydrophobic surface under natural convection conditions. International Journal of Heat and Mass Transfer, 51, 5975–5982.
DOI 10.1016/j.ijheatmasstransfer.2008.03.026. [Google Scholar] [CrossRef]
20. Sommers, A. D., Gebhart, C. W., Hermes, C. J. L. (2018). The role of surface wettability on natural convection frosting: Frost growth data and a new correlation for hydrophilic and hydrophobic
surfaces. International Journal of Heat and Mass Transfer, 122, 78–88. DOI 10.1016/j.ijheatmasstransfer.2018.01.074. [Google Scholar] [CrossRef]
21. Rohsenow, W. M., Hartnett, J. P., Cho, Y. I. (1998). Handbook of heat transfer. New York: McGraw-Hill. [Google Scholar]
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is
properly cited.
|
{"url":"https://www.techscience.com/energy/v119n5/48945/html","timestamp":"2024-11-06T08:28:30Z","content_type":"application/xhtml+xml","content_length":"109603","record_id":"<urn:uuid:a0ed6846-b201-4716-bfaf-b409966ed605>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00387.warc.gz"}
|
Math 4 Wisdom. "Mathematics for Wisdom" by Andrius Kulikauskas. | Research / MathDiscoverySixPairs
Math Discovery Examples
Math Discovery Six Pairs: Restructuring
Restructuring* 10 Tree of variations, 20 Adjacency graph, 21 Total order, 32 Powerset lattice, 31 Decomposition, 30 Directed graph

The structures above are graph-like geometries. They are six ways that we visualize structure. We visualize by restructuring a sequence, hierarchy or network. We don't and can't visualize such structures in isolation; rather, we visualize the restructuring of, for example, a network which has become too robust, so that we may restructure it with a hierarchy of local and global views, which we visualize as an Atlas, or with a sequence, which we visualize as a Tour that walks about the network. Here are the six visualizations, accordingly ("Hierarchy => Sequence" means "Hierarchy restructured as Sequence", etc.):
• 10 Evolution: Hierarchy => Sequence (for determining weights)
• 20 Atlas: Network => Hierarchy (for determining connections)
• 21 Canon: Sequence => Network (for determining priorities)
• 32 Chronicle: Sequence => Hierarchy (for determining solutions)
• 31 Catalog: Hierarchy => Network (for determining redundancies)
• 30 Tour: Network => Sequence (for determining paths)
I expect that they relate 0 Truth, 1 Model, 2 Implication, 3 Variable as follows ... I expect that each geometry reflects a particular way that we're thinking about a variable. I expect them to
illustrate the six qualities of signs... Consider the geometry suggested by (6 of the 8) axioms of Zermelo-Fraenkel set theory, for example, the power set axiom. These are axioms for restructuring.
☆ Graph* The concept of a graph is very simple: merely a finite collection of vertices and edges. ... Just about any situation involving "relationships" between "objects" can be recast as a
graph, where the vertices are the "objects" and we join vertices with edges if the corresponding objects are "related". pg.120, The Art and Craft of Problem Solving, Paul Zeitz, 1999,
John Wiley & Sons, Inc.
• Models of Multiplication* Six ways of thinking of multiplication: Fractal, Proportion, Tally, Box, Label, Divide out. (Andrius thinking out loud) I think that the six pairs of levels, six kinds
of variables, six Zermelo-Fraenkel axioms of set theory can be illustrated by models of multiplication as Maria Droujkova has been studying. Question: What does it mean to cancel out units as
physicists do?
• The addition rule is at work, adding exponents. Multiplying by 10 or dividing by 10 shifts the number with regard to the decimal point, although it looks like the decimal point is moving. We may
think of this as simply changing the units, the base unit.
• I think of rescaling as a product of actions that either make bigger (numerator) or make smaller (denominator). They are all multiplying against some unknown, acting upon it. I call the actions
"multiplication drops", either "magnifying drops" (say, multiplying by 10) or "shrinking drops" (dividing by 10). So these are actions x actions x (object with units). Thus magnifying and
shrinking can cancel out. Also, actions can be decomposed into component actions, into primes.
• Repeated addition is a recounting, a shift from larger units to smaller units. 3 x (23 x dollars) becomes (3 x 23) x dollars. Amount x (large unit) becomes action x (amount x small unit) becomes
(action x amount) x small unit becomes amount x small unit.
• Multiplication can give the ways of matching units, multiple units times multiple units, as in box multiplication, accounting for all possibilities. Units times units means that conditions are
satisfied, thus generating all of the solutions.
• Multiplication can be thought of as counting items that have been grouped where each group has the same number of items. For example, we can count coins by grouping together the pennies, nickels,
dimes, quarters, placing them in rows or groups of 4 or 5 or 10. Number x (Value x Unit).
• Dividing out, for example, money per person. This is like multiple units. (Number of cycles) x (Number of people x Units )
Tree of variations
Tree of variations* Model truth (can distinguish possibilities). Weighted averages, moves in games. 10 Evolution: Hierarchy => Sequence (for determining weights). Examination of cases.
☆ Axiom of pairing* Wikipedia: If x and y are sets, then there exists a set which contains x and y as elements. This relates to evolution perhaps as a notion of "counting up" or "sorting
☆ Multiplication, division by 10 moves the number* One example of fractal multiplication that comes to mind is multiplication by 10. I noticed this year that the decimal point is not
logically positioned. It should be at (under or over) the "ones" place. Then the system would be symmetric. The space to the left would be the "tens" and the space to the right would be
the "tenths". A little (video) essay could explain that when we multiply or divide by 10, it is the number that actually moves, not the decimal point. It's like the fact that the Sun is
at (or near) the center of the solar system and the Earth revolves around it, not the other way around, as it may seem. The way that the decimal point is placed is very destructive
pedagogically, it makes people (like me) think that there is something between the digits; that there is some mysterious difference between the whole numbers and the fractional numbers;
and most sadly, that (jaded) teachers don't see that there's something wrong with the system, as I remember thinking as a child; and that there's no reason to care. I suspected that the
decimal point is where it is for typographical reasons; maybe because it arose along with printing? or derived from accounting notation? My point being that an adult who appreciates what
I've just written (and more along these lines) would appreciate something very deep about math (including how to calculate 10% discounts by shifting numbers to the right). Gospel Math.
□ Algorithmic Proof* ... the argument below can easily be rewritten as an induction proof. But it is much more instructive to present a new type of argument, an algorithmic proof where we give
a general recipe for the construction of an Eulerian path. Consider first a graph with exactly two odd-degree vertices, which we will call s and f. Let us try to naively draw an Eulerian
path, starting from s. We will travel randomly ... But this doesn't quite work ... We would be stuck at vertex f, with no way to "backtrack" and traverse the other edges. In this case, let us
temporarily remove the edges that we have traveled. We are left with the subgraph ... Since the original graph was connected, the subgraph "intersected" some of the edges that we removed ...
Now let us apply the "naive" algorithm to the subgraph ... So now we can perform "reconstructive surgery" on our original path and get an Eulerian path for the entire graph. ... This method
will work in general. We may have to repeat the "remove the traveled edges and travel the subgraph" step several times ... but since the graph is finite, eventually we will finish.
pg.124-125, The Art and Craft of Problem Solving, Paul Zeitz, 1999, John Wiley & Sons, Inc.
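The splice-and-retravel recipe Zeitz describes is, in effect, Hierholzer's algorithm: walk until stuck, then splice in detours through the untraveled edges. A minimal sketch (the function name and test graph are my own, not Zeitz's):

```python
from collections import defaultdict

def eulerian_path(edges):
    """Hierholzer-style construction: walk until stuck, backtrack,
    and splice in detours through not-yet-traveled edges."""
    graph = defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
        graph[v].append(u)
    # Start at an odd-degree vertex if one exists (the 's' of the argument).
    odd = [v for v in graph if len(graph[v]) % 2 == 1]
    stack = [odd[0] if odd else next(iter(graph))]
    path = []
    while stack:
        v = stack[-1]
        if graph[v]:
            u = graph[v].pop()
            graph[u].remove(v)      # remove the traveled edge
            stack.append(u)
        else:
            path.append(stack.pop())  # stuck: record and backtrack
    return path[::-1]

print(eulerian_path([(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)]))
```

The stack plays the role of the "reconstructive surgery": whenever the naive walk gets stuck, the popped vertices form a closed detour that gets spliced into the final path.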
□ Dividing into cases* Sometimes you can reduce the number of pigeonholes by dividing into cases. pg.96, The Art and Craft of Problem Solving, Paul Zeitz, 1999, John Wiley & Sons, Inc.
□ Examination of cases* One of the methods of proof is examination of cases, for example, considering odd and even separately, as in a proof of 1+2+3+...+n = n(n+1)/2.
□ Fractal multiplication: Recopying the whole* A whole can be recopied (however many copies), then again, then again. This is like fractal multiplication, as with your five-legged starfish
whose each leg holds another five-legged starfish. It is like multiplying by powers of 10. The addition rule is at work, adding exponents as in (10**2)(10**3) = 10**5. Multiplying by 10 or
dividing by 10 shifts the number with regard to the decimal point, although it looks like the decimal point is moving. We may think of this as simply changing the units, the base unit, which
may be unknown.
☆ Repeatedly folding paper*
• If a rectangular paper is folded in half, and half again, and yet again, and so on, and they are all considered repeatedly applied transformations of the same whole, then that is fractal
multiplication, "recopying the whole", as with your paper snowflake.
• If a paper is folded once, and then again, and again, but those actions are thought as taking place separately, and especially, if I'm focused on the labelled components (rather than the
repeating whole), then it is label multiplication, "redistributing the multiple", as with your pie halves sliced in five slices each.
Adjacency graph
Adjacency graph* Imply truth (can determine connectedness) (Connectedness, coloring, triangulation of polygon) 20 Atlas: Network => Hierarchy (for determining connections)
☆ All parabolas have the same shape* I've found in teaching the quadratic equations (and searching for a use for them) that the parabola is arguably an ideal curve for learning to graph.
That's because all parabolas have one and the same shape, if you discount zooming in and out. Each parabola, if you zoom in, will look flat, and if you zoom out, will look narrow, in
exactly the same proportions. You can see this if you substitute x -> ax and y->ay and thereby you can transform x = y**2 to xa = y**2 a**2 so x = a y**2 effectively where a can be as you
like. This means that all parabolas look the same and their graphs differ only in how you move them around, up and down, left or right, negative or positive, zoom in or zoom out. Also, I
teach my students to draw two graphs, because most never realize how the parabola completely flattens out at the bottom where -1 < x < 1. It's a bit like filming a movie where you have to combine full-length shots of people with head shots. One shot won't do it. Gospel Math.
☆ Axiom of extensionality* Wikipedia: Two sets are equal (are the same set) if they have the same elements. Note that there are thus two levels of equality. Equality is a bidirectional
relationship. And the two levels are like levels of an atlas. An atlas defines, at different levels, what can be consider the "same" point or location from far away, though may be
different from close up. The "equivalence" and equivalence classes may be "turned on or off" selectively in the atlas.
□ Coloring* The use of coloring is related to parity and modular arithmetic, except that we are not constrained by the algebraic properties of integers. pg.111, The Art and Craft of Problem
Solving, Paul Zeitz, 1999, John Wiley & Sons, Inc.
□ Divisibility* Given two natural numbers a, b, their greatest common factor (a,b) ... is defined to be the largest integer which divides both a and b. ... If the GCD of two numbers is 1, we
say that the two numbers are relatively prime. ... If g divides a and g divides b, then g divides ax + by, where x and y can be any integers ... An important consequence of the division
algorithm, plus [the last fact], is that The greatest common divisor of a and b is the smallest positive linear combination of a and b. ... a great showcase for the use of the extreme
principle plus argument by contradiction. Define u to be the smallest positive linear combination and let g=(a,b) ... certainly g divides u ... suppose that u does not divide a. Then by the
division algorithm, there exists a quotient k>=1 and positive remainder r < u such that a = ku+r. But then r = a-ku is positive and less than u. This is a contradiction, because r is also a
linear combination of a and b ... Consequently, u divides a, and likewise u divides b. So u is a common divisor; thus u=g. ... This linear combination characterization of the GCD is really
quite remarkable, for at first one would think that PPF's are needed to compute the GCD of two numbers. But in fact, computing the GCD does not rely on PPF's ... but we can use [instead the
fact that if there exists x, y such that ax + by = 1, then a and b are relatively prime]. ... we do not need to assume the truth of the FTA in order to compute the GCD. The GCD is the grid
size that results on a number line from taking steps back and forth of size a and b. We are thus thinking not in terms of primes as components, but of grids (the numbers they divide) that can
be included or not in each other, and are thus organized by a lattice of conditions (of which points can be reached or not), where satisfying all conditions means including all points and
being relatively prime and writing a ratio in reduced form. pg.245 The Art and Craft of Problem Solving, Paul Zeitz, 1999, John Wiley & Sons, Inc.
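The linear-combination characterization is exactly what the extended Euclidean algorithm computes, with no prime factorization anywhere. A short sketch (function name and test values are mine):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and g == a*x + b*y,
    computed without any prime factorization."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    # Back-substitute through a = (a // b) * b + (a % b).
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(240, 46)
print(g, 240 * x + 46 * y)  # the GCD realized as a linear combination
```

If `extended_gcd(a, b)` returns 1, then `a` and `b` are relatively prime, which is the criterion the passage highlights.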
□ Graph* Just about any situation involving "relationships" and "objects" can be recast as a graph, where the vertices are the "objects" and we join vertices with edges if the corresponding
objects are "related". pg.120, The Art and Craft of Problem Solving, Paul Zeitz, 1999, John Wiley & Sons, Inc.
☆ Handshake Lemma* In any graph, the sum of the degrees of all the vertices is equal to twice the number of edges. pg.121, The Art and Craft of Problem Solving, Paul Zeitz, 1999, John Wiley
& Sons, Inc.
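The lemma holds because each edge contributes 1 to the degree of each of its two endpoints. It is easy to spot-check on any edge list (example graph mine):

```python
def degree_sum(edges):
    """Sum of vertex degrees: each edge is counted once per endpoint."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return sum(deg.values())

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
print(degree_sum(edges), 2 * len(edges))  # both equal 8
```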
☆ Sleeping mathematicians* During a certain lecture, each of five mathematicians fell asleep exactly twice. For each pair of these mathematicians, there was some moment when both were
sleeping simultaneously. Prove that, at some moment, some three of them were sleeping simultaneously. pg.120, The Art and Craft of Problem Solving, Paul Zeitz, 1999, John Wiley & Sons,
Inc.
□ Least Common Multiple* ...we define the least common multiple, or LCM, of a and b to be the least positive integer which is a multiple of both a and b. pg.245 The Art and Craft of Problem
Solving, Paul Zeitz, 1999, John Wiley & Sons, Inc.
□ Proportion multiplication: Rescaling the whole* A whole can be rescaled. This is proportion, as with your teddy bear projected on a screen. The rescalings are actions that can be composed,
magnifying and shrinking. They are all multiplying against some unknown, acting upon it, yielding actions x actions x (object with units). They can be reorganized and canceled away. I
sometimes talk to my students about "magnifying drops" (each drop multiplying by 10) and "shrinking drops" (each drop dividing by 10) and ask what happens when we add one drop after another
drop. Also, actions can be decomposed into component actions, into primes. I relate this to the adjacency graph and the Atlas view because it consists of a hierarchy of global and local views
upon a network, thus the same relation can appear at different scales.
☆ Bicycle gearing* Bicycle gears are given by the formula: (number of front teeth / number of back teeth) x circumference of wheel = distance traveled per revolution. That's an example of
the proportional scaling ("rescaling the whole", like with projecting a teddy bear ). You could imagine starting out with two gears (the gear fixed to the wheel, and the gear you are
pedaling) each the size of the back wheel. Replacing those giant gears with smaller gears is an action of magnifying or shrinking the distance traveled. You could add more gears to
further magnify or shrink that ratio. Adding teeth to the gears is changing the units (and that aspect could be considered "rescaling the multiple", as with skip counting, but that is the
correspondence of the number of teeth to the circumference of the gear, NOT the relationship between the two gears). The fact that multiplication is taking place at two different levels
makes it challenging to think about because the gear ratio is "abstract" and not concretely, directly related to the size of the wheel. See also Dmitri's game!
□ The Two Men of Tibet* Two men are located at opposite ends of a mountain range, at the same elevation. If the mountain range never drops below this starting elevation, is it possible for the
two men to walk along the mountain range and reach each other's starting place while always staying at the same elevation? ... As long as it is legal to walk backward, it is pretty easy...
But why does it work? ... define a graph G whose vertices are all ordered pairs (x,y) where x,y [are the "interesting" places] and x and y are at the same elevation. ... the vertices of G
consist of all possible legal configurations of where the two dots could be ...we shall join two vertices by an edge if it is possible to travel between the two configurations in one "step"
... if we can show that there is a path from (a,s) to (s,a) we'd be done. ... by the handshake lemma, the sum of the degrees of the vertices of this subgraph must be even. Since the only two
vertices with odd degree are (a,s) and (s,a), this subgraph must contain (s,a) as well. ... we solved this hard problem with a very simple parity analysis. Of course, we first needed the
insight of constructing a graph, and the crux move of defining the vertices and edges in a very clever way. pg.126-128, The Art and Craft of Problem Solving, Paul Zeitz, 1999, John Wiley &
Sons, Inc.
□ Triangulating, then coloring* The walls of a museum gallery form a polygon with n sides, not necessarily regular or even convex. Guards are placed at fixed locations inside the gallery.
Assuming that guards can turn their heads, but do not walk around, what is the minimum number of guards needed to assure that every inch of wall can be observed? ... A coloring reformulation
comes to the rescue: Triangulate the gallery polygon. pg. 43, 61 The Art and Craft of Problem Solving, Paul Zeitz, 1999, John Wiley & Sons, Inc.
Total order
Total order* Imply model (can order procedures) (Strong induction, decision making, total ranking, integers) 21 Canon: Sequence => Network (for determining priorities)
☆ Well-ordering theorem* Wikipedia: Axiom 9: For any set X, there is a binary relation R which well-orders X. This means R is a linear order on X such that every nonempty subset of X has a
member which is minimal under R. ... Given Zermelo-Fraenkel axioms 1-8, there are many statements provably equivalent to axiom 9, the best known of which is the axiom of choice (AC),
which goes as follows. Let X be a set whose members are all non-empty. Then there exists a function f from X to the union of the members of X, called a "choice function", such that for
all Y element of X one has f(Y) element of Y. Since the existence of a choice function when X is a finite set is easily proved from axioms 1-8, AC only matters for certain infinite sets.
AC is characterized as nonconstructive because it asserts the existence of a choice set but says nothing about how the choice set is to be "constructed." The Well-ordering theorem seems
tightly related to the concept of a total order. It's interesting that it's related to the axiom of choice.
□ If you can count it, it's an integer* Show that the product of k consecutive integers is divisible by k! ... simply observe that m(m+1)...(m+k-1)/k! = C(m+k-1, k) and binomial coefficients are
integers! The moral of the story: Keep your point of view flexible. Anything involving integers is fair game for combinatorial reasoning. pg.273 The Art and Craft of Problem Solving, Paul
Zeitz, 1999, John Wiley & Sons, Inc.
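The identity behind the observation, m(m+1)...(m+k-1) = C(m+k-1, k) * k!, is mechanical to spot-check (ranges chosen arbitrarily):

```python
from math import comb, factorial

# A product of k consecutive integers equals C(m+k-1, k) * k!,
# hence is divisible by k!.
for m in range(1, 10):
    for k in range(1, 6):
        prod = 1
        for i in range(k):
            prod *= m + i
        assert prod == comb(m + k - 1, k) * factorial(k)
        assert prod % factorial(k) == 0
print("checked for m < 10, k < 6")
```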
□ Permutations* Permutations are the ways of reordering a collection of objects, all distinct. A permutation labels a total order.
□ Strong induction* Strong induction gets its name because we use a "stronger" inductive hypothesis. After establishing the base case, we assume that for some n, all of the following are true:
P(n0), P(n0+1), P(n0+2), ... , P(n), and we use this assumption to prove that P(n+1) is true. Sometimes strong induction will work where standard induction doesn't. ... Behind the idea of
strong induction is the notion that one should stay flexible when it comes to defining hypotheses and conclusions. pg.52, 54, The Art and Craft of Problem Solving, Paul Zeitz, 1999, John
Wiley & Sons, Inc.
□ Tally multiplication: Rescaling the multiple* A multiple can be rescaled. This is like skip counting or repeated addition. Note that here the numbers added are cardinals, which is to say, we
don't care in each subgroup what order they had, it's not relevant, we're simply adding up the sums. This is a recounting, a shift from larger units to smaller units. 3 x (23 x dollars)
becomes (3 x 23) x dollars. Amount x (large unit) becomes action x (amount x small unit) becomes (action x amount) x small unit becomes amount x small unit. We can count coins by grouping
together the pennies, nickels, dimes, quarters, placing them in rows or groups of 4 or 5 or 10. Number x (Value x Unit) => (Number x Value) x Unit.
☆ Cutting a stack of cheese slices* If I have a stack of 10 slices of cheese and I slice them all in two, then:
• if I'm simply changing the units, so that I'm thinking now of 20 small slices rather than 10 large slices, so that 1 large slice = 2 small slices, then I'm skip counting, "rescaling the multiple".
• if I'm thinking of each large slice as consisting of a left piece and a right piece, (a first piece and a second piece, distinguishable or "labelled" with a child's ID 10x2), then I'm
"redistributing the multiple", as with "sets, per each".
□ Bucket elimination* Wikipedia: Bucket elimination is a satisfiability algorithm. It can be defined as a reformulation of adaptive consistency. Its definition uses buckets, which are containers for constraints; each variable has an associated bucket. A constraint always belongs to the bucket of its highest variable. The bucket elimination algorithm proceeds from the
highest to the lowest variable in turn. At each step, the constraints in the buckets of this variable x_i are considered. By definition, these constraints only involve variables that are
lower than x_i. The algorithm modifies the constraint between these lower variables (if any, otherwise it creates a new one). In particular, it enforces their values to be extendible to x_i
consistently with the constraints in the bucket of x_i. This new constraint, if any, is then placed in the appropriate bucket. Since this constraint only involves variables that are lower
than x_i, it is added to a bucket of a variable that is lower than x_i. This algorithm is equivalent to enforcing adaptive consistency. Since they both enforce consistency of a variable with
all its parents, and since no new constraint is added after a variable is considered, what results is an instance that can be solved without backtracking. Since the graph of the instance they
produce is a subgraph of the induced graph, if the induced width is bounded by a constant the generated instance is of size polynomial in the size of the original instance. As a result, if
the induced width of an instance is bounded by a constant, solving it can be done in polynomial time by the two algorithms.
Powerset lattice
Powerset lattice* Vary implication (can satisfy various conditions) 32 Chronicle: Sequence => Hierarchy (for determining solutions)
☆ Axiom of power set* Wikipedia: In mathematics, the axiom of power set is one of the Zermelo-Fraenkel axioms of axiomatic set theory. Given any set A, there is a set P(A) such that, given
any set B, B is a member of P(A) if and only if B is a subset of A.
☆ Distributing with box mathematics* We can distribute 23 x 15 using grid paper as (20 + 3) x (10 + 5). We can change the tens to hundreds or thousands or millions. Likewise we can
distribute (2X + 3) x (X + 5). Compare with two-digit multiplication. Gospel Math.
□ Box multiplication: Redistributing the set* A set can be redistributed. A set is anything which can be thought as multiple units, where each element is distinct. A set can be the rows of a
chessboard or an array. It can be the breakdown of a length, for example, 2 feet and 1/2 foot. We multiply the set by applying the distributive law and multiplying each element of the set
separately. And that multiplication can be noncommutative, which is to say, we can make sure that the element is followed by the action. Such multiplication typically looks like a set matched
with another set, yielding "box multiplication", as when we multiply (3 feet + 1/4 foot)(2 feet + 1/2 foot) and get 4 products which we then sum up. This is also the standard computation of
multiplication, for example, multiplying 2 digit numbers together. Multiplication thus gives the ways of matching units, multiple units times multiple units, as in box multiplication,
accounting for all possibilities. Units times units means that conditions are satisfied, thus generating all of the solutions. Here the units are all known, they form sets, thus there are
distinct terms in expressions with multiple units.
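The grid-paper distribution can be sketched directly: every part of one factor is matched against every part of the other, and the partial products are summed. A toy illustration (function name mine), computing 23 x 15 as (20 + 3) x (10 + 5):

```python
from itertools import product

def box_multiply(parts_a, parts_b):
    """Distribute every part of a against every part of b, as on grid paper."""
    cells = {(x, y): x * y for x, y in product(parts_a, parts_b)}
    return cells, sum(cells.values())

cells, total = box_multiply([20, 3], [10, 5])
print(cells)   # the four partial products of the box
print(total)   # 345 == 23 * 15
```

The same sketch works for the noncommutative or algebraic cases the entry mentions, since each cell keeps track of which part came from which factor.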
□ Multiply in a particular order* Simplify .... We could multiply out all the terms, but it would take a long time, and we'd probably make a mistake. We need a strategy. If this expression is
to simplify, we will probably be able to eliminate radicals. If we multiply any two terms, we can use the difference of two squares formula and get expressions which contain only one radical.
pg.166 The Art and Craft of Problem Solving, Paul Zeitz, 1999, John Wiley & Sons, Inc.
□ Polya's pattern of two loci*
☆ Illustrative example* Euclid's first problem in his Elements is: In drawing an equilateral triangle, given the first side AB, how do we draw the other two? The solution is: to draw a
circle c(A) around A of radius AB and to draw a circle c(B) around B of radius AB. The third point C of the equilateral triangle will be at a point where the two circles intersect. (There
are two such points, above and below the line segment.) Polya notes that this solution is a particular example of a general pattern of "two loci", which is to say, we can often find a
desired point by imagining it as the intersection of two curves. I note further that each curve may be thought of as a condition (X="points within a distance AB of A", Y="points within a
distance AB of B"). The solution created four regions:
• Solutions to both X and Y.
• Solutions to X.
• Solutions to Y.
• Solutions to the empty set of conditions.
The solver's thought process leveraged a deep math structure: the powerset lattice of conditions: {{X,Y}, {X}, {Y}, {}}. The solver envisaged the solution as the intersection of the two conditions. In this deep
structure, there is no reference to triangles, circles, lengths, continuity or the plane, all of which turn out to be of superficial importance. Here the crux, the mental challenge of the problem, is
expressed exactly by the powerset lattice. And, notably, that is a mathematical structure! Math is the deep structure of math!
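The four regions are exactly the powerset of the two conditions X and Y; a minimal sketch (helper name mine):

```python
from itertools import chain, combinations

def powerset(conditions):
    """All subsets of a collection of conditions, smallest first."""
    return list(chain.from_iterable(
        combinations(conditions, r) for r in range(len(conditions) + 1)))

print(powerset(["X", "Y"]))  # [(), ('X',), ('Y',), ('X', 'Y')]
```

The empty tuple corresponds to points satisfying neither condition, and `('X', 'Y')` to the sought point C satisfying both.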
□ Reimagining the monk problem* A monk climbs a mountain. He starts at 8 am and reaches the summit at noon. He spends the night on the summit. The next morning, he leaves the summit at 8am and
descends by the same route he used the day before, reaching the bottom at noon. Prove that there is a time between 8 am and noon at which the monk was at exactly the same spot on the mountain
on both days. One solution is to allow for two monks traveling, one up and the other down, so that it is clear they must meet. In this way the solution is where the two conditions are both
satisfied. pg.19, The Art and Craft of Problem Solving, Paul Zeitz, 1999, John Wiley & Sons, Inc.
□ Constraint satisfaction* Wikipedia: In artificial intelligence and operations research, constraint satisfaction is the process of finding a solution to a set of constraints that impose
conditions that the variables must satisfy. A solution is therefore a vector of variables that satisfies all constraints.
• Creative rethinking* Many creative methods have us rethink, reorganize, untangle, recombine the conditions that define a problem.
☆ The two ropes* Paul Zeitz gives the following problem that can be solved (the two ropes can be retrieved) by teasing out the conditions that the solution must satisfy and organizing them
thoughtfully. You are locked in a 50 x 50 x 50-foot room which sits on 100-foot stilts. There is an open window at the corner of the room, near the floor, with a strong hook cemented into
the floor by the window. So if you had a 100-foot rope, you could tie one end to the hook, and climb down the rope to freedom. (The stilts are not accessible from the window.) There are
two 50-foot lengths of rope, each cemented into the ceiling, about 1 foot apart, near the center of the ceiling. You are a strong, agile rope climber, good at tying knots, and you have a
sharp knife. You have no other tools (not even clothes). The rope is strong enough to hold your weight, but not if it is cut lengthwise. You can survive a fall of no more than 10 feet.
How do you get out alive? pg. 27, The Art and Craft of Problem Solving, Paul Zeitz, 1999, John Wiley & Sons, Inc.
Mind Your Decisions. Can you crack these 2 logical puzzles? Each picks a whole number from 1 to 30. A: Is your number double mine? B: I don't know. Is your number double mine? A: I don't know. Is your number half mine? B: I don't know. Is your number half mine? A: I don't know. B: I know your number.
• The way to solve this is to actually consider each of the possibilities specifically. Then we see that certain possibilities are forbidden. They would lead to an answer "no" rather than "I don't
know". If a number is odd, then it can't be double. If a number is greater than 15, then it can't be half. As the conditions grow, we find that the number is 4. This is in the spirit of
examination of cases.
Decomposition
Decomposition* Vary model (can variously combine factors) (Pigeonhole principle, partitions, factorizations, encoding, full range of outputs, principle of inclusion-exclusion) 31 Catalog: Hierarchy => Network (for determining redundancies)
☆ Axiom of union* Wikipedia: For any set F there is a set A containing every set that is a member of some member of F.
☆ Reorganizing addition to do it in your head* Adding 256 + 256 in your head is easier to do from left to right than from right to left because the intermediate response (500) is simpler
because there is nothing to carry. Gospel Math.1843
☆ Reorganizing multiplication to do it in your head* Given a problem 5,000 x 50,000, we can break it down in different ways such as 5 x 50 x 1,000 x 1,000 so that we can do it in our head.
Gospel Math.1842
□ Binomial theorem* Pascal's Triangle contains all of the binomial coefficients. ... We derived the binomial theorem above by observing that the coefficients in the multiplication of the
polynomial (x+y)**k by (x+y) obeyed the summation formula. Here is a more direct "combinatorial" approach, one where we think about how multiplication takes place in order to understand why
the coefficients are what they are. Consider the expansion... notice how we can read off the "history" of each term...Now let us think about combining like terms... pg.211 The Art and Craft
of Problem Solving, Paul Zeitz, 1999, John Wiley & Sons, Inc2203
□ Complement PIE* The combination of PIE with counting the complement is so common, it is worth noting (and verifying) the following alternative formulation of PIE [principle of
inclusion-exclusion] ... Given N items, and k sets A1, A2, ..., Ak, the number of these items which lie in none of the Aj is equal to N-S1+S2-S3+...+-Sk, where the Si is the sum of the
cardinalities of the intersections of the Aj's taken i at a time, and the final sign is plus or minus, depending on whether k is even or odd. pg.230 The Art and Craft of Problem Solving, Paul
Zeitz, 1999, John Wiley & Sons, Inc2211
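The complement form of PIE is easy to sanity-check against brute force. A sketch (Python; the items and sets are my own example: integers up to 100 and divisibility by 2, 3, 5):

```python
from itertools import combinations

def count_outside(items, sets):
    """Brute force: how many items lie in none of the sets."""
    return sum(1 for x in items if not any(x in s for s in sets))

def pie_complement(n_items, sets):
    """N - S1 + S2 - S3 + ..., where Si sums the cardinalities of the
    intersections of the sets taken i at a time."""
    total = n_items
    for i in range(1, len(sets) + 1):
        s_i = sum(len(set.intersection(*combo))
                  for combo in combinations(sets, i))
        total += (-1) ** i * s_i
    return total

items = set(range(1, 101))
A1 = {x for x in items if x % 2 == 0}
A2 = {x for x in items if x % 3 == 0}
A3 = {x for x in items if x % 5 == 0}

outside = count_outside(items, [A1, A2, A3])
via_pie = pie_complement(100, [A1, A2, A3])
# both equal 26: the integers up to 100 divisible by none of 2, 3, 5
```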
□ Conspicuous ceiling* 41 rooks are placed on a 10x10 chessboard. Prove that there must exist 5 rooks, none of which attack each other. ... When you see the number 41 juxtaposed with 10, you
know that the pigeonhole principle is lurking about, since 41 is just one more than 4x10. Once attuned to the pigeonhole principle, we cannot help but notice that [41/10] = 5, which is
encouraging, for this is the number of rooks we seek. pg.95, The Art and Craft of Problem Solving, Paul Zeitz, 1999, John Wiley & Sons, Inc. 1740
□ Factoring* Here is a simple problem with a "complete" solution, illustrating one of the most important tactics: factoring. ... Find all right triangles with integer sides such that the
perimeter and area are equal. ... xy/2 = x + y + square root of (x**2 + y**2) ... xy - 4x - 4y + 8 = 0 ... Now we do something clever: add 8 to both sides to make the left-hand side factor.
We now have (x-4)(y-4) = 8, and since the variables are integers, there are only finitely many possibilities ... The only tricky step was finding the factorization. But this wasn't really
hard, as it was clear that the original left-hand side "almost" factored. As long as you try to factor, it usually won't be hard to find the proper algebraic steps. The factor tactic is
essential for finding solutions. pg.264-265 The Art and Craft of Problem Solving, Paul Zeitz, 1999, John Wiley & Sons, Inc2225
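The factorization (x-4)(y-4) = 8 predicts exactly two such triangles, (5, 12, 13) and (6, 8, 10). A brute-force search confirms this (Python sketch; the search bound of 100 is my arbitrary choice):

```python
import math

# Search all integer legs x <= y for right triangles whose area
# equals their perimeter: x*y/2 == x + y + hypotenuse.
solutions = []
for x in range(1, 100):
    for y in range(x, 100):
        h = math.isqrt(x * x + y * y)
        if h * h == x * x + y * y and x * y == 2 * (x + y + h):
            solutions.append((x, y, h))

print(solutions)  # [(5, 12, 13), (6, 8, 10)]
```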
□ Fundamental Theorem of Arithmetic* ...every natural number greater than 1 can be factored completely into primes. ... In fact, this factorization is unique, up to the order that we write the
primes. This property of unique factorization is called the Fundamental Theorem of Arithmetic (FTA). We call the grouping of factors into primes the prime-power factorization (PPF). An ugly,
but necessary, notation is sometimes to write a number n in "generic" PPF: n = (p1**e1)(p2**e2)...(pt**et). ... If p is a prime and p divides ab, then p divides a or p divides b ... is the
key idea that we need ... Here is a simple example of FTA-style reasoning ... Prove that if a monic polynomial has a rational zero, then this zero must in fact be an integer. ... Let u/v be a
zero of this polynomial. The crux move: without loss of generality, assume that u and v are relatively prime. ... get rid of fractions, by multiplying both sides by v**n ... This gives us ...
u**n = [a multiple of v] ... we must conclude that v = 1 or -1, i.e., u/v is an integer. The assumption that u and v are relatively prime means that they are taken to be distinct as atoms.
pg.244-248 The Art and Craft of Problem Solving, Paul Zeitz, 1999, John Wiley & Sons, Inc2218
□ Halt or repeat* If there are only finitely many states as something evolves, either a state will repeat, or the evolution will eventually halt. pg.113, The Art and Craft of Problem Solving,
Paul Zeitz, 1999, John Wiley & Sons, Inc. 1749
□ Intermediate pigeonhole* If you have p pigeons and h holes, then at least one of the holes contains at least [p/h] pigeons. pg.94, The Art and Craft of Problem Solving, Paul Zeitz, 1999, John
Wiley & Sons, Inc.1638
□ Label multiplication: Redistributing the multiple* A multiple can be redistributed. This is similar to skip counting but it is counting collections, and so it is counting everything up as if
we were counting up each item in each collection. Thus it is recounting ordinals. We can think of it as a labeling process, where each group and subgroup is counted and labeled. Thus it
proceeds from whole to parts, in the opposite direction as tallying, which proceeds from parts to whole.1392
□ Partitioning* Partitioning is a tactic that deliberately focuses our attention on addition, by breaking a complex problem into several smaller and simpler pieces. [Often used in tandem with
encoding]. ... A partition of a set is a division of S into a union of mutually exclusive (pairwise disjoint) sets. ... if S has been partitioned by the Ai, we must have [the cardinality of S
equals the sum of the cardinalities of the Ai] ... This leads to a natural combinatorial tactic: Divide the thing that we want to count into mutually exclusive and easy-to-count pieces. ...
For example ... the number of subsets of S must be (n 0) + (n 1) + (n 2) + ... + (n n) ... The partitioning tactic makes a pretty utopian assumption, namely that the thing we wish to count
can be nicely broken down into pairwise disjoint subsets. Reality is often messier... pg.213-214, 225 The Art and Craft of Problem Solving, Paul Zeitz, 1999, John Wiley & Sons, Inc2207
□ Partitions* Given a positive integer n, a partition of n is a representation of n as a sum of positive integers. The order of the summands does not matter, so they are conventionally placed
in increasing order. pg.151, The Art and Craft of Problem Solving, Paul Zeitz, 1999, John Wiley & Sons, Inc.2167
□ Pigeonhole principle* If you have more pigeons than pigeonholes, and you try to stuff them into the holes, then at least one hole must contain at least two pigeons. ....sometimes also called
the Dirichlet principle. pg.92, The Art and Craft of Problem Solving, Paul Zeitz, 1999, John Wiley & Sons, Inc.1635
☆ n+1 positive integers* Show that among any n+1 positive integers, there must be two whose difference is a multiple of n. ... The penultimate step ... is realizing that the two desired
numbers must have same remainder upon division by n. Since there are only n possible remainders, we are done. pg. 93, The Art and Craft of Problem Solving, Paul Zeitz, 1999, John Wiley &
Sons, Inc.1637
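The penultimate step is constructive enough to code directly. A sketch (Python; the sample list is mine): index each number by its remainder mod n and stop at the first collision, which the pigeonhole principle guarantees.

```python
def pair_with_difference_multiple(nums, n):
    """Given at least n+1 integers, return two whose difference is a
    multiple of n. Only n remainders mod n exist, so two must collide."""
    seen = {}
    for x in nums:
        r = x % n
        if r in seen:
            return seen[r], x
        seen[r] = x
    raise ValueError("need at least n+1 numbers")

a, b = pair_with_difference_multiple([3, 17, 10, 24, 8, 31], 5)
# 3 and 8 share remainder 3 mod 5, so their difference is a multiple of 5
```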
☆ Two points a mile away* Every point on the plane is colored either red or blue. Prove that no matter how the coloring is done, there must exist two points, exactly a mile apart, which are
the same color. ... imagine the vertices of an equilateral triangle with side length one mile. There are three vertices, but only two colors available! The pigeonhole principle tells us
that two vertices must be the same color! pg.93, The Art and Craft of Problem Solving, Paul Zeitz, 1999, John Wiley & Sons, Inc.1636
□ Repeated application of pigeonhole principle* ...This rather elaborate problem was a good illustration of both the pigeonhole principle and the wishful thinking strategy, i.e., not giving up.
When you think that the problem can be attacked with the pigeonhole principle, first try to do the job neatly. Look for a way to define pigeons and holes that yields a quick solution. If that
doesn't work, don't give up! Use the pigeonhole principle once again to gain a foothold, then try to use it again. Keep extracting information! pg.95, The Art and Craft of Problem Solving,
Paul Zeitz, 1999, John Wiley & Sons, Inc. 1741
□ The Factor Tactic* Multiplication rarely simplifies things. Instead you should Factor relentlessly. pg. 163, The Art and Craft of Problem Solving, Paul Zeitz, 1999, John Wiley & Sons, Inc.
□ The principle of inclusion-exclusion* The partitioning tactic makes a pretty utopian assumption ... We shall explore a number of ways of dealing with situations where sets overlap and
overcounting is not uniform ... Sometimes the complement counting tactic fails us, because the complement is just as complicated as the original set. The principle of inclusion-exclusion
(PIE) is a systematic way to handle these situations. In simplest form, PIE states that the number of elements in the union of two sets is equal to the sum of the number of elements in the
sets minus the number of elements in their intersection. ... It is easy to see why this is true: Adding |A| and |B| overcounts the value of |A union B|. This overcounting is not uniform; we
did not count everything twice, just the elements of A intersection B. Consequently, we can correct by subtracting |A intersect B|. ... In general, we conjecture The cardinality of the union
of n sets = + (sum of the cardinalities of the sets) - (sum of the cardinalities of all possible intersections of two sets) + (sum of the cardinalities of all possible intersections of three
sets) ... + [if n is odd] or - [if n is even] (the cardinality of the intersection of all n sets). It is pretty easy to see why the formula is true in an informal way. For example, consider
the following [Venn] diagram, which illustrates the three-set situation ... It makes sense that in the general case, we would alternate between adding and subtracting ... We have reduced PIE,
then, to proving the identity ... (r 0) - (r 1) + (r 2) - ... + (-1)**r (r r) = 0. ... Just expand 0 = (1-1)**r with the binomial theorem and you immediately get [it]! ... Whenever you see
the words "at least" you should be alerted to the possibility of counting complements (1-1)**r is the binomial theorem expressing the Venn diagram of conditions acting on parity/balance.
pg.226-229 The Art and Craft of Problem Solving, Paul Zeitz, 1999, John Wiley & Sons, Inc2210
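The identity the proof reduces to — the alternating row sums of Pascal's triangle vanish, i.e. the expansion of 0 = (1-1)**r — can be checked directly, and the general PIE union formula verified against brute force (Python sketch; the example sets are mine):

```python
from itertools import combinations
from math import comb

# (r 0) - (r 1) + (r 2) - ... = 0, i.e. the expansion of 0 = (1-1)**r:
assert all(sum((-1) ** k * comb(r, k) for k in range(r + 1)) == 0
           for r in range(1, 12))

def pie_union(sets):
    """|A1 u ... u An| via alternating sums of intersection sizes."""
    total = 0
    for i in range(1, len(sets) + 1):
        total += (-1) ** (i + 1) * sum(
            len(set.intersection(*c)) for c in combinations(sets, i))
    return total

A = set(range(0, 60, 2))  # multiples of 2 below 60
B = set(range(0, 60, 3))  # multiples of 3 below 60
C = set(range(0, 60, 5))  # multiples of 5 below 60
assert pie_union([A, B, C]) == len(A | B | C)  # both give 44
```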
Directed graph
Directed graph* Vary truth (can add or remove circular behavior) (With or without cycles) 30 Tour: Network => Sequence (for determining paths) 73
☆ Axiom of regularity* Wikipedia: Every non-empty set x contains a member y such that x and y are disjoint sets. This eliminates, for example, the possibility of there being a set X which
is its own element X = {X}. (Note that the empty set has no members). This allows us to think of set inclusion relations in terms of directed graphs because then it is clear which is the
element of which, although there can be cycles. Thus the axiom of regularity may be thought to allow directed graphs to make sense. Wikipedia: The axiom of regularity is arguably the
least useful ingredient of Zermelo-Fraenkel set theory, since virtually all results in the branches of mathematics based on set theory hold even in the absence of regularity ... However,
it is used extensively in establishing results about well-ordering and the ordinals in general.1166
□ Connectivity and Cycles* By [cycle] we mean a closed path that "travels" along the edges. ... A graph is connected if every pair of vertices has a path between them. If a graph is not
connected, one can always decompose the graph into connected components. ... A connected graph that contains no cycles is called a tree. ... For trees, the number of edges is exactly one less
than the number of vertices. ... If a graph has e edges and v vertices and e>=v, then the graph must contain a cycle. pg.120-123, The Art and Craft of Problem Solving, Paul Zeitz, 1999, John
Wiley & Sons, Inc.2157
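Both facts — a tree on v vertices has v-1 edges, and e >= v forces a cycle — are easy to exercise with a union-find cycle test (Python sketch; the example graphs are mine): an edge whose endpoints are already connected closes a cycle.

```python
def has_cycle(n_vertices, edges):
    """Union-find: an undirected graph contains a cycle iff some edge
    joins two vertices that are already connected."""
    parent = list(range(n_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return True
        parent[ru] = rv
    return False

# A tree on 4 vertices has exactly 3 edges and no cycle...
assert not has_cycle(4, [(0, 1), (1, 2), (1, 3)])
# ...but once e >= v, a cycle is forced.
assert has_cycle(4, [(0, 1), (1, 2), (1, 3), (2, 3)])
```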
□ Divide out Multiplication: Redistributing the whole* A whole can be redistributed. This is long division. We can focus on cases where the remainder is zero, or we can simply keep dividing
forever. This is like children (or pirates) "dividing out" money, "counting out" money ("one for me, one for you, ..."). Most of the whole is divided out; then more of it; then more and so
on. This yields multiple units. (Number of cycles) x (Number of people x Units )1530
□ Eulerian Path* Find the conditions on a connected graph (or multigraph) for which it is possible to travel a path that traverses every edge exactly once. Such paths are called Eulerian ... A
connected graph (or multigraph) possesses an Eulerian path if and only if it has zero or exactly two odd-degree vertices. In the former case, the path is a closed path. In the latter, the
path must begin and end at the odd-degree vertices. pg.123-124, The Art and Craft of Problem Solving, Paul Zeitz, 1999, John Wiley & Sons, Inc.2158
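The Eulerian condition is a one-pass degree count. A Python sketch (example graphs are mine, including the classic Königsberg bridges multigraph):

```python
from collections import Counter

def eulerian_path_type(edges):
    """Classify a connected (multi)graph by its odd-degree vertex count."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    odd = sum(1 for d in deg.values() if d % 2 == 1)
    if odd == 0:
        return "closed Eulerian path"
    if odd == 2:
        return "open Eulerian path"
    return "no Eulerian path"

square = [(0, 1), (1, 2), (2, 3), (3, 0)]  # every degree even
assert eulerian_path_type(square) == "closed Eulerian path"

assert eulerian_path_type([(0, 1), (1, 2)]) == "open Eulerian path"

# The Konigsberg bridges multigraph has four odd-degree vertices:
konigsberg = [("A", "B"), ("A", "B"), ("A", "C"),
              ("A", "C"), ("A", "D"), ("B", "D"), ("C", "D")]
assert eulerian_path_type(konigsberg) == "no Eulerian path"
```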
□ Hamiltonian Path* The "dual" of an Eulerian path is a Hamiltonian path ... a path that visits each vertex exactly once. If the path is closed, it is called a Hamiltonian cycle. While Eulerian
paths possess a "complete theory", very little is known about Hamiltonian paths. At present, the necessary and sufficient conditions for Hamiltonian paths are unknown. This is unfortunate,
because many practical problems involve Hamiltonian paths. ... Many problems involving scheduling and optimization of network paths can be recast as searches for Hamiltonian paths. pg.125,
The Art and Craft of Problem Solving, Paul Zeitz, 1999, John Wiley & Sons, Inc.2160
□ Handshake Lemma* In any graph, the sum of the degrees of all the vertices is equal to twice the number of edges. pg.121, The Art and Craft of Problem Solving, Paul Zeitz, 1999, John Wiley &
Sons, Inc.2156
|
{"url":"https://www.math4wisdom.com/wiki/Research/MathDiscoverySixPairs","timestamp":"2024-11-14T14:08:06Z","content_type":"application/xhtml+xml","content_length":"58816","record_id":"<urn:uuid:9cfca202-e6b8-4788-a78d-f5a367326a1c>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00584.warc.gz"}
|
Effect of operating conditions on crude oil fouling through CFD simulations
A single horizontal tube in a typical shell and tube heat exchanger is considered in this study, the geometry of which is shown in Figure 3. A three-dimensional model of the heat exchanger tube is
developed in CFD to investigate the effect of various operating conditions on crude oil fouling. The basic equations which together form the model of the heat exchanger tube are explained below. The
fluid-flow in the heat exchanger domain is governed by incompressible Navier-Stokes equations for mass, momentum and energy. All the governing equations (1) – (9) together form a complete
mathematical model of the heat exchanger tube undergoing chemical reaction fouling. The equations are then solved on each node to obtain a numerical solution.
Figure 3. Heat exchanger tube
The basic CFD equations are as follows:
Continuity equation:
$\frac{\partial \rho}{\partial t}+\nabla \cdot(\rho \vec{v})=0$ (1)
Momentum equation:
$\frac{\partial(\rho \vec{v})}{\partial t}+\nabla \cdot(\rho \vec{v} \vec{v})=-\nabla p+\nabla \cdot(\overline{\tau})+\rho \vec{g}$ (2)
Energy equation:
$\frac{\partial\left(\rho C_{P} T\right)}{\partial t}+\nabla \cdot\left(\rho C_{P} \vec{u} T\right)=\nabla \cdot(k \nabla T)+H$ (3)
Turbulence model:
In order to analyze the flow fields involving turbulence, a set of Reynolds-Averaged Navier-Stokes (RANS) equations, which are developed by adapting suitable time-averaging techniques on
Navier-Stokes equations, are assembled. Several turbulence models such as k-ɛ, k-ω, Reynolds Stress Model (RSM), etc., are available within the RANS equations to approximate the influence of
turbulent fluctuations in the flow domain. In k-ɛ turbulence model, the energy in the turbulence is computed from the turbulent kinetic energy (k) and the rate of dissipation of the turbulent kinetic
energy is computed from the turbulent dissipation (ɛ). The k-ω turbulence model predicts the turbulence with turbulent kinetic energy (k) with a specific rate of dissipation (ω). RSM is a
higher-level turbulence model which is considered for predicting the complex interactions in turbulent flow fields. The most common turbulence model in the field of crude oil fouling
is the k-ɛ model [9, 28, 31, 33], which assumes that the turbulence is isotropic and requires less computational time for simulation. Therefore, in the present work, the RANS k-ɛ turbulence model is used
to analyze the fluid-flow in the heat exchanger tube.
The turbulent kinetic energy, k, is described as:
$\frac{\partial k}{\partial t}+\left(u_{i} \cdot \nabla\right) k-\nabla \cdot\left(\frac{\mu_{t}}{\sigma_{k}} \nabla k\right)=P^{k}-\varepsilon+S_{k}$ (4)
and dissipation rate, ɛ is given by
$\frac{\partial \varepsilon}{\partial t}+\left(u_{i} \cdot \nabla\right) \varepsilon-\nabla \cdot\left(\frac{\mu_{t}}{\sigma_{\varepsilon}} \nabla \varepsilon\right)=\frac{\varepsilon}{k}\left(C_{1}
P^{k}-C_{2} \varepsilon\right)+S_{\varepsilon}$ (5)
Species-transport equation:
Petroleum, asphaltenes, non-asphaltenes and coke are considered as species in the crude oil, and the transport mechanism is studied through the species-transport model available in the
commercial CFD software Ansys FLUENT version 16.2. Gravitational force is enabled for all the species, which is the main driver of species transport from the bulk to the heat transfer surface.
The species-transport equation for species i is given as:
$\frac{\partial}{\partial t}\left(\rho Y_{i}\right)+\nabla \cdot\left(\rho Y_{i} \vec{u}\right)=-\nabla \cdot \vec{J}_{i}+R_{i}+S_{i}$ (6)
Reaction kinetics equations:
The reaction of C, the foulant particles, to form D, coke is considered in this study. The foulant particles are assumed to be made of precipitated asphaltenes particles and the coking reaction is
given by:
2C→D (7)
Further, the coking reaction is assumed to take place on the heat transfer surface only. The reaction rate is given by:
$-r_{3}=k_{1}[a s]$ (8)
the reaction rate constant, $k_{1}$, is given by:
$k_{1}=A \exp \left(\frac{-E}{R T}\right)$ (9)
where, frequency factor (A) is 9.12 x 10^11 (s^-1) and activation energy (E) is 1.6 x 10^5 (J/mol).
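With the stated frequency factor and activation energy, Eq. (9) is straightforward to evaluate. The sketch below (Python; the two temperatures come from Table 1, and R = 8.314 J/mol·K is my assumed gas constant, not stated in the text) shows why coking is confined to the hot wall:

```python
import math

def arrhenius(A, E, T, R=8.314):
    """Rate constant k = A * exp(-E / (R*T)), Eq. (9)."""
    return A * math.exp(-E / (R * T))

k_wall = arrhenius(A=9.12e11, E=1.6e5, T=397.0)  # wall temperature (Table 1)
k_bulk = arrhenius(A=9.12e11, E=1.6e5, T=358.0)  # bulk temperature (Table 1)
# The rate constant is orders of magnitude larger at the hotter wall,
# consistent with confining the coking reaction to the heat transfer surface.
```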
All the governing equations of the fluid flow are then solved numerically by discretizing all the above equations of the model. The discretization process is commonly performed through three methods,
namely: (1) finite difference method, (2) finite element method and (3) finite volume method. The most popular is finite volume method and is considered in the present study. The discretized
governing equations are solved on a structured mesh generated for the heat exchanger tube as shown in Figure 4.
Figure 4. Heat exchanger tube mesh
The final mesh after the mesh dependence study consists of 0.198 million quadrilateral cells. A very fine mesh was chosen near the heat exchanger tube surface to capture the thin coke
deposition layers. The number of mesh elements required to obtain mesh-independent simulation results was determined by performing steady-state simulations on meshes of different densities.
Table 1. Operating conditions
│Description │Value/condition│
│Flow velocity (m/s) │0.14-3.7 │
│Wall temperature (K) │397 │
│Bulk temperature (K) │358 │
│Asphaltenes mass fraction │0.02 │
│Petroleum and non-asphaltenes mass fraction │0.98 │
Table 2. Crude oil species properties
│Parameters                   │Petroleum and non-asphaltenes [38]│Asphaltenes [38]│Coke [2]│
│Density (kg/m^3)             │860                               │1200            │900     │
│Specific heat (J/kg·K)       │213                               │920             │1500    │
│Thermal conductivity (W/m·K) │0.120                             │0.75            │1.5     │
The governing equations, together with the turbulence model and species-transport equations, yield a solution once precise boundary and operating conditions are imposed. The boundary
conditions commonly encountered in fluid-flow problems are inlets, solid walls, symmetric boundaries, pressure boundaries and outflow. Fluid enters the heat exchanger domain at the inlet, so
the inlet surface is specified with a velocity-inlet boundary condition, and the outlet surface with an outflow boundary condition. Both are set from the boundary conditions panel available
in the commercial CFD software Ansys FLUENT version 16.2. Initially, the wall boundary condition is specified as a no-slip, smooth-surface wall. Once the coke deposition rate and fouling
resistance were calculated, the effects of wall shear and surface roughness on coke deposition rate and fouling resistance are investigated by varying the wall boundary conditions. The
operating and boundary conditions of flow through the heat exchanger tube are detailed in Table 1. The properties of the crude oil species are given in Table 2.
The simulation methodology followed in this study is shown in flow chart in Figure 5:
Figure 5. Flow chart – simulation methodology
The governing equations for fluid flow, turbulence, heat transfer and chemical reactions are solved through the finite volume method. Governing equations solved with low-order
discretization schemes can impair the quality of CFD simulations; accuracy can be a particular problem with first-order schemes for complex fluid flows. As the simulation involves complex
crude oil fouling phenomena, the convective transport terms were discretized with a high-order differencing scheme, Quadratic Upstream Interpolation for Convective Kinematics (QUICK), and
with the second-order upwind scheme for momentum, turbulent kinetic energy and dissipation rate. Second-order upwind schemes are among the most stable discretization schemes and are widely
used for CFD simulations involving chemical reactions [9].
Once the heat exchanger tube geometry and mesh are developed, the operating and boundary conditions are specified for the mesh dependence study. The chosen mesh was used to perform the
non-reacting flow simulation with the crude oil species except asphaltenes. The simulation is iterated until the desired convergence tolerance of 1 x 10^-6 is achieved. Then, the species
transport is activated, the petroleum species are introduced into the bulk fluid and the surface reactions are activated. The reacting flow is simulated for 190 h of flow-time.
The coke particles stick to the heat transfer surface and grow in thickness over a period of time. As the thickness of the coke layer increases, the resistance to heat transfer increases. The fouling
resistance is calculated by:
$R_{f}=\frac{1}{U_{d}}-\frac{1}{U_{c}}$ (10)
The process of crude oil fouling through coking reactions is simulated at different wall boundary conditions (shear stress and surface roughness). The simulations were repeated at 0.14, 0.31 and 0.47
m/s flow velocities for Cases 2 – 5 in Table 3 in which the matrix of wall boundary conditions is given.
Table 3. Matrix of wall conditions
│Case│Shear Stress │Surface roughness │
│1 │No-slip │Smooth surface │
│2 │0.03 Pa │Smooth surface │
│3 │0.05 Pa │Smooth surface │
│4 │No-slip │0.03 mm │
│5 │No-slip │0.05 mm │
All the governing equations are numerically computed at thousands of discrete points (the computational mesh) in the heat exchanger tube. In this context, validation of the developed CFD
model and methodology is necessary to establish the accuracy of the results. Validation provides evidence that the conceptual computational model reproduces reality accurately. The CFD
model is validated by predicting heat transfer coefficients (HTCs) with crude oil as the fluid medium in the heat exchanger tube. Steady-state CFD simulations are performed and heat
transfer coefficients are evaluated at different flow velocities. The HTCs calculated from the CFD simulations are compared with existing theoretical heat transfer correlations. The heat
transfer coefficients at various crude oil velocities, calculated through the CFD simulations and via the following empirical correlations, are plotted in Figure 6.
Dittus and Boelter correlation [39]:
$N u=0.023 R e^{0.8} \operatorname{Pr}^{0.3}$ (11)
Colburn correlation [40]:
$N u=0.023 R e^{0.8} P r^{0.4}\left(\frac{\mu}{\mu_{w}}\right)^{0.14}$ (12)
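The two correlations can be compared side by side. The sketch below (Python) uses illustrative Re, Pr, and viscosity-ratio values of my own choosing, not the paper's crude oil data:

```python
def dittus_boelter(Re, Pr):
    """Nu = 0.023 Re^0.8 Pr^0.3, Eq. (11) (cooling exponent as given)."""
    return 0.023 * Re ** 0.8 * Pr ** 0.3

def colburn(Re, Pr, mu_ratio=1.0):
    """Nu = 0.023 Re^0.8 Pr^0.4 (mu/mu_w)^0.14, Eq. (12)."""
    return 0.023 * Re ** 0.8 * Pr ** 0.4 * mu_ratio ** 0.14

Re, Pr = 2.0e4, 40.0  # illustrative turbulent-flow values, not the paper's
nu_db = dittus_boelter(Re, Pr)
nu_co = colburn(Re, Pr, mu_ratio=0.8)
# h = Nu * k / D would then convert either Nusselt number into an HTC.
```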
The HTCs calculated from the simulation results agreed closely with the Colburn correlation, with a maximum deviation of 9.81 %. Therefore, the CFD code and the mathematical model can be
considered validated.
Figure 6. Heat transfer coefficients vs flow velocity
|
{"url":"https://iieta.org/journals/ijht/paper/10.18280/ijht.350440","timestamp":"2024-11-14T15:41:06Z","content_type":"text/html","content_length":"120194","record_id":"<urn:uuid:4262d6d0-c1a9-48dc-8062-3d12f59dcc79>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00533.warc.gz"}
|
What is: Student's T-Test
What is: Student’s T-Test
What is Student’s T-Test?
The Student’s T-Test is a statistical method used to determine if there is a significant difference between the means of two groups. This test is particularly useful when the sample sizes are small
and the population standard deviation is unknown. Developed by William Sealy Gosset under the pseudonym “Student,” the T-Test is widely utilized in various fields, including psychology, medicine, and
social sciences, to analyze experimental data and make inferences about population parameters.
Types of Student’s T-Test
There are three primary types of Student’s T-Test: the one-sample T-Test, the independent two-sample T-Test, and the paired sample T-Test. The one-sample T-Test compares the mean of a single sample
to a known value or population mean. The independent two-sample T-Test assesses whether the means of two independent groups differ significantly. In contrast, the paired sample T-Test evaluates the
means of two related groups, such as measurements taken before and after a treatment on the same subjects, making it essential for repeated measures designs.
Assumptions of the T-Test
For the Student’s T-Test to yield valid results, certain assumptions must be met. First, the data should be approximately normally distributed, especially for small sample sizes. Second, the samples
must be independent in the case of the independent two-sample T-Test. Third, the variances of the two groups being compared should be equal, which can be tested using Levene’s Test. If these
assumptions are violated, alternative statistical methods, such as the Mann-Whitney U test, may be more appropriate.
Calculating the T-Statistic
The T-Statistic for a one-sample T-Test is calculated as $T = \frac{\bar{X} - \mu}{s / \sqrt{n}}$, where $\bar{X}$ is the sample mean, $\mu$ is the population mean, $s$ is the sample
standard deviation, and $n$ is the sample size. For the independent two-sample T-Test, the formula is slightly modified to account for the means and standard deviations of both groups. The
calculated T-Statistic is then compared against a critical value from the T-distribution table to determine statistical significance.
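A minimal sketch of the one-sample formula (Python, standard library only; the six measurements are invented purely for illustration):

```python
import math

def one_sample_t(sample, mu):
    """T = (xbar - mu) / (s / sqrt(n)), s = sample std. dev. (n-1 divisor)."""
    n = len(sample)
    mean = sum(sample) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    return (mean - mu) / (s / math.sqrt(n))

# Six invented measurements tested against a hypothesized mean of 5.0:
t = one_sample_t([5.1, 4.9, 5.3, 5.2, 4.8, 5.0], 5.0)
# t is about 0.65 -- far below typical critical values, so there is no
# evidence here that the true mean differs from 5.0
```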
Interpreting T-Test Results
Interpreting the results of a Student’s T-Test involves examining the p-value associated with the calculated T-Statistic. A p-value less than the predetermined significance level (commonly set at
0.05) indicates that the null hypothesis, which states that there is no difference between the group means, can be rejected. In contrast, a p-value greater than 0.05 suggests insufficient evidence to
conclude that a significant difference exists. It is crucial to report both the T-Statistic and the p-value in research findings for transparency and reproducibility.
Applications of Student’s T-Test
The Student’s T-Test is widely applied in various research scenarios. In clinical trials, researchers may use it to compare the effectiveness of a new drug against a placebo. In educational research,
it can assess the impact of different teaching methods on student performance. Additionally, in market research, the T-Test can evaluate consumer preferences between two products. Its versatility
makes it an essential tool for data analysis across numerous disciplines.
Limitations of the T-Test
Despite its widespread use, the Student’s T-Test has limitations. One significant limitation is its sensitivity to outliers, which can skew results and lead to inaccurate conclusions. Moreover, the
T-Test assumes that the data is continuous and measured on an interval or ratio scale, which may not always be the case in real-world applications. Additionally, when comparing more than two groups,
the T-Test is not appropriate, and alternative methods such as ANOVA should be employed to avoid inflating the Type I error rate.
Software for Conducting T-Tests
Various statistical software packages can perform Student’s T-Tests, including R, Python (with libraries such as SciPy), SPSS, and SAS. These tools facilitate the calculation of T-Statistics and
p-values, making it easier for researchers to conduct analyses without manual computations. Additionally, many software programs provide visualizations, such as box plots, to help interpret the
results more effectively. Familiarity with these tools is essential for data scientists and analysts in today’s data-driven environment.
Conclusion on the Importance of the T-Test in Data Analysis
The Student’s T-Test remains a cornerstone of statistical analysis in research, providing valuable insights into group differences. Its ability to handle small sample sizes and unknown population
variances makes it particularly useful in many practical applications. As researchers continue to explore complex datasets, understanding and correctly applying the T-Test will remain critical for
drawing valid conclusions and advancing knowledge across various fields.
|
{"url":"https://statisticseasily.com/glossario/what-is-students-t-test/","timestamp":"2024-11-12T06:06:14Z","content_type":"text/html","content_length":"138571","record_id":"<urn:uuid:71c23abc-a701-486c-b880-e313252693fa>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00869.warc.gz"}
|
%
% (c) The GRASP/AQUA Project, Glasgow University, 1992-1998
%
\section[FloatOut]{Float bindings outwards (towards the top level)}

``Long-distance'' floating of bindings towards the top level.

\begin{code}
module FloatOut ( floatOutwards ) where
import CoreSyn
import CoreUtils
import CoreArity ( etaExpand )
import CoreMonad ( FloatOutSwitches(..) )
import DynFlags ( DynFlags, DynFlag(..) )
import ErrUtils ( dumpIfSet_dyn )
import CostCentre ( dupifyCC, CostCentre )
import Id ( Id, idType, idArity, isBottomingId )
import Type ( isUnLiftedType )
import SetLevels ( Level(..), LevelledExpr, LevelledBind,
setLevels, isTopLvl )
import UniqSupply ( UniqSupply )
import Bag
import Util
import Maybes
import UniqFM
import Outputable
import FastString
\end{code}

	-----------------
	Overall game plan
	-----------------

The Big Main Idea is:

	To float out sub-expressions that can thereby get outside
	a non-one-shot value lambda, and hence may be shared.

To achieve this we may need to do two things:

   a) Let-bind the sub-expression:

	f (g x)  ==>  let lvl = f (g x) in lvl

      Now we can float the binding for 'lvl'.

   b) More than that, we may need to abstract wrt a type variable

	\x -> ... /\a -> let v = ...a... in ....

      Here the binding for v mentions 'a' but not 'x'.  So we
      abstract wrt 'a', to give this binding for 'v':

	vp = /\a -> ...a...
	v  = vp a

      Now the binding for vp can float out unimpeded.
      I can't remember why this case seemed important enough to
      deal with, but I certainly found cases where important floats
      didn't happen if we did not abstract wrt tyvars.

With this in mind we can also achieve another goal: lambda lifting.
We can make an arbitrary (function) binding float to top level by
abstracting wrt *all* local variables, not just type variables, leaving
a binding that can be floated right to top level.  Whether or not this
happens is controlled by a flag.

Random comments
~~~~~~~~~~~~~~~

At the moment we never float a binding out to between two adjacent
lambdas.  For example:

@
	\x y -> let t = x+x in ...
===>
	\x -> let t = x+x in \y -> ...
@

Reason: this is less efficient in the case where the original lambda
is never partially applied.

But there's a case I've seen where this might not be true.  Consider:
@
elEm2 x ys
  = elem' x ys
  where
    elem' _ []     = False
    elem' x (y:ys) = x==y || elem' x ys
@
It turns out that this generates a subexpression of the form
@
	\deq x ys -> let eq = eqFromEqDict deq in ...
@
which might usefully be separated to
@
	\deq -> let eq = eqFromEqDict deq in \xy -> ...
@
Well, maybe.  We don't do this at the moment.

%************************************************************************
%*									*
\subsection[floatOutwards]{@floatOutwards@: let-floating interface function}
%*									*
%************************************************************************

\begin{code}
floatOutwards :: FloatOutSwitches
-> DynFlags
-> UniqSupply
-> [CoreBind] -> IO [CoreBind]
floatOutwards float_sws dflags us pgm
= do {
let { annotated_w_levels = setLevels float_sws pgm us ;
(fss, binds_s') = unzip (map floatTopBind annotated_w_levels)
} ;
dumpIfSet_dyn dflags Opt_D_verbose_core2core "Levels added:"
(vcat (map ppr annotated_w_levels));
let { (tlets, ntlets, lams) = get_stats (sum_stats fss) };
dumpIfSet_dyn dflags Opt_D_dump_simpl_stats "FloatOut stats:"
(hcat [ int tlets, ptext (sLit " Lets floated to top level; "),
int ntlets, ptext (sLit " Lets floated elsewhere; from "),
int lams, ptext (sLit " Lambda groups")]);
return (concat binds_s')
    }
floatTopBind :: LevelledBind -> (FloatStats, [CoreBind])
floatTopBind bind
= case (floatBind bind) of { (fs, floats) ->
(fs, bagToList (flattenFloats floats)) }
\end{code}

%************************************************************************
%* *
\subsection[FloatOut-Bind]{Floating in a binding (the business end)}
%* *
%************************************************************************

\begin{code}
floatBind :: LevelledBind -> (FloatStats, FloatBinds)
floatBind (NonRec (TB var level) rhs)
= case (floatRhs level rhs) of { (fs, rhs_floats, rhs') ->
-- A tiresome hack:
-- see Note [Bottoming floats: eta expansion] in SetLevels
let rhs'' | isBottomingId var = etaExpand (idArity var) rhs'
| otherwise = rhs'
in (fs, rhs_floats `plusFloats` unitFloat level (NonRec var rhs'')) }
floatBind (Rec pairs)
= case floatList do_pair pairs of { (fs, rhs_floats, new_pairs) ->
-- NB: the rhs floats may contain references to the
-- bound things. For example
-- f = ...(let v = ...f... in b) ...
if not (isTopLvl dest_lvl) then
-- Find which bindings float out at least one lambda beyond this one
-- These ones can't mention the binders, because they couldn't
-- be escaping a major level if so.
-- The ones that are not going further can join the letrec;
-- they may not be mutually recursive but the occurrence analyser will
-- find that out. In our example we make a Rec thus:
-- v = ...f...
-- f = ... b ...
case (partitionByMajorLevel dest_lvl rhs_floats) of { (floats', heres) ->
(fs, floats' `plusFloats` unitFloat dest_lvl
(Rec (floatsToBindPairs heres new_pairs))) }
    else
    -- For top level, no need to partition; just make them all recursive
    -- (And the partition wouldn't work because they'd all end up in floats')
    (fs, unitFloat dest_lvl
         (Rec (floatsToBindPairs (flattenFloats rhs_floats) new_pairs))) }
  where
    (((TB _ dest_lvl), _) : _) = pairs
do_pair (TB name level, rhs)
= case (floatRhs level rhs) of { (fs, rhs_floats, rhs') ->
(fs, rhs_floats, (name, rhs')) }
floatList :: (a -> (FloatStats, FloatBinds, b)) -> [a] -> (FloatStats, FloatBinds, [b])
floatList _ [] = (zeroStats, emptyFloats, [])
floatList f (a:as) = case f a of { (fs_a, binds_a, b) ->
case floatList f as of { (fs_as, binds_as, bs) ->
(fs_a `add_stats` fs_as, binds_a `plusFloats` binds_as, b:bs) }}
\end{code}

%************************************************************************
%* *
\subsection[FloatOut-Expr]{Floating in expressions}
%* *
%************************************************************************

\begin{code}
floatExpr, floatRhs, floatCaseAlt
:: Level
-> LevelledExpr
-> (FloatStats, FloatBinds, CoreExpr)
floatCaseAlt lvl arg -- Used rec rhss, and case-alternative rhss
= case (floatExpr lvl arg) of { (fsa, floats, arg') ->
case (partitionByMajorLevel lvl floats) of { (floats', heres) ->
-- Dump bindings that aren't going to escape from a lambda;
-- in particular, we must dump the ones that are bound by
-- the rec or case alternative
(fsa, floats', install heres arg') }}
floatRhs lvl arg -- Used for nested non-rec rhss, and fn args
-- See Note [Floating out of RHS]
= floatExpr lvl arg
floatExpr _ (Var v) = (zeroStats, emptyFloats, Var v)
floatExpr _ (Type ty) = (zeroStats, emptyFloats, Type ty)
floatExpr _ (Lit lit) = (zeroStats, emptyFloats, Lit lit)
floatExpr lvl (App e a)
= case (floatExpr lvl e) of { (fse, floats_e, e') ->
case (floatRhs lvl a) of { (fsa, floats_a, a') ->
(fse `add_stats` fsa, floats_e `plusFloats` floats_a, App e' a') }}
floatExpr _ lam@(Lam _ _)
= let
(bndrs_w_lvls, body) = collectBinders lam
bndrs = [b | TB b _ <- bndrs_w_lvls]
lvls = [l | TB _ l <- bndrs_w_lvls]
-- For the all-tyvar case we are prepared to pull
-- the lets out, to implement the float-out-of-big-lambda
-- transform; but otherwise we only float bindings that are
-- going to escape a value lambda.
-- In particular, for one-shot lambdas we don't float things
-- out; we get no saving by so doing.
partition_fn | all isTyCoVar bndrs = partitionByLevel
| otherwise = partitionByMajorLevel
    in
    case (floatExpr (last lvls) body) of { (fs, floats, body') ->
-- Dump any bindings which absolutely cannot go any further
case (partition_fn (head lvls) floats) of { (floats', heres) ->
    (add_to_stats fs floats', floats', mkLams bndrs (install heres body')) }}
floatExpr lvl (Note note@(SCC cc) expr)
= case (floatExpr lvl expr) of { (fs, floating_defns, expr') ->
-- Annotate bindings floated outwards past an scc expression
-- with the cc. We mark that cc as "duplicated", though.
    let
        annotated_defns = wrapCostCentre (dupifyCC cc) floating_defns
    in
    (fs, annotated_defns, Note note expr') }
floatExpr lvl (Note note expr) -- Other than SCCs
= case (floatExpr lvl expr) of { (fs, floating_defns, expr') ->
(fs, floating_defns, Note note expr') }
floatExpr lvl (Cast expr co)
= case (floatExpr lvl expr) of { (fs, floating_defns, expr') ->
(fs, floating_defns, Cast expr' co) }
floatExpr lvl (Let (NonRec (TB bndr bndr_lvl) rhs) body)
| isUnLiftedType (idType bndr) -- Treat unlifted lets just like a case
-- I.e. floatExpr for rhs, floatCaseAlt for body
= case floatExpr lvl rhs of { (_, rhs_floats, rhs') ->
case floatCaseAlt bndr_lvl body of { (fs, body_floats, body') ->
(fs, rhs_floats `plusFloats` body_floats, Let (NonRec bndr rhs') body') }}
floatExpr lvl (Let bind body)
= case (floatBind bind) of { (fsb, bind_floats) ->
case (floatExpr lvl body) of { (fse, body_floats, body') ->
case partitionByMajorLevel lvl (bind_floats `plusFloats` body_floats)
of { (floats, heres) ->
-- See Note [Avoiding unnecessary floating]
(add_stats fsb fse, floats, install heres body') } } }
floatExpr lvl (Case scrut (TB case_bndr case_lvl) ty alts)
= case floatExpr lvl scrut of { (fse, fde, scrut') ->
case floatList float_alt alts of { (fsa, fda, alts') ->
    (add_stats fse fsa, fda `plusFloats` fde, Case scrut' case_bndr ty alts') }}
  where
    -- Use floatCaseAlt for the alternatives, so that we
    -- don't gratuitously float bindings out of the RHSs
    float_alt (con, bs, rhs)
= case (floatCaseAlt case_lvl rhs) of { (fs, rhs_floats, rhs') ->
(fs, rhs_floats, (con, [b | TB b _ <- bs], rhs')) }
\end{code}

Note [Avoiding unnecessary floating]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In general we want to avoid floating a let unnecessarily, because
it might worsen strictness:
	let x = ...(let y = e in y+y)....
Here y is demanded.  If we float it outside the lazy 'x=..' then
we'd have to zap its demand info, and it may never be restored.

So at a 'let' we leave the binding right where it is unless the binding
will escape a value lambda.  That's what the partitionByMajorLevel
does in the floatExpr (Let ...) case.

Notice, though, that we must take care to drop any bindings from the
body of the let that depend on the staying-put bindings.

We used instead to do the partitionByMajorLevel on the RHS of an '=',
in floatRhs.  But that was quite tiresome.  We needed to test for
values or trivial rhss, because (in particular) we don't want to
insert new bindings between the "=" and the "\".  E.g.
	f = \x -> let ... in ...
We do not want
	f = let ... in \x -> ...
because
  (a) The simplifier will immediately float it further out, so we
      may as well do so right now; in general, keeping rhss as
      manifest values is good
  (b) If a float-in pass follows immediately, it might add yet more
      bindings just after the '='.  And some of them might (correctly)
      be strict even though the 'let f' is lazy, because f, being a
      value, gets its demand-info zapped by the simplifier.

And even all that turned out to be very fragile, and broke altogether
when profiling got in the way.  So now we do the partition right at
the (Let..) itself.

%************************************************************************
%* *
\subsection{Utility bits for floating stats}
%* *
%************************************************************************

I didn't implement this with unboxed numbers.  I don't want to be too
strict in this stuff, as it is rarely turned on.  (WDP 95/09)

\begin{code}
data FloatStats
= FlS Int -- Number of top-floats * lambda groups they've been past
Int -- Number of non-top-floats * lambda groups they've been past
Int -- Number of lambda (groups) seen
get_stats :: FloatStats -> (Int, Int, Int)
get_stats (FlS a b c) = (a, b, c)
zeroStats :: FloatStats
zeroStats = FlS 0 0 0
sum_stats :: [FloatStats] -> FloatStats
sum_stats xs = foldr add_stats zeroStats xs
add_stats :: FloatStats -> FloatStats -> FloatStats
add_stats (FlS a1 b1 c1) (FlS a2 b2 c2)
= FlS (a1 + a2) (b1 + b2) (c1 + c2)
add_to_stats :: FloatStats -> FloatBinds -> FloatStats
add_to_stats (FlS a b c) (FB tops others)
= FlS (a + lengthBag tops) (b + lengthBag (flattenMajor others)) (c + 1)
\end{code}

%************************************************************************
%* *
\subsection{Utility bits for floating}
%* *
%************************************************************************

Note [Representation of FloatBinds]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The FloatBinds type is somewhat important.  We can get very large
numbers of floating bindings, often all destined for the top level.
A typical example is
	x = [4,2,5,2,5, .... ]
Then we get lots of small expressions like (fromInteger 4), which all
get lifted to top level.

The trouble is that
  (a) we partition these floating bindings *at every binding site*
  (b) SetLevels introduces a new binding site for every float
So we had better not look at each binding at each binding site!

That is why MajorEnv is represented as a finite map.

We keep the bindings destined for the *top* level separate, because
we float them out even if they don't escape a *value* lambda; see
partitionByMajorLevel.

\begin{code}
type FloatBind = CoreBind -- INVARIANT: a FloatBind is always lifted
data FloatBinds = FB !(Bag FloatBind) -- Destined for top level
!MajorEnv -- Levels other than top
-- See Note [Representation of FloatBinds]
type MajorEnv = UniqFM MinorEnv -- Keyed by major level
type MinorEnv = UniqFM (Bag FloatBind) -- Keyed by minor level
flattenFloats :: FloatBinds -> Bag FloatBind
flattenFloats (FB tops others) = tops `unionBags` flattenMajor others
flattenMajor :: MajorEnv -> Bag FloatBind
flattenMajor = foldUFM (unionBags . flattenMinor) emptyBag
flattenMinor :: MinorEnv -> Bag FloatBind
flattenMinor = foldUFM unionBags emptyBag
emptyFloats :: FloatBinds
emptyFloats = FB emptyBag emptyUFM
unitFloat :: Level -> FloatBind -> FloatBinds
unitFloat lvl@(Level major minor) b
| isTopLvl lvl = FB (unitBag b) emptyUFM
| otherwise = FB emptyBag (unitUFM major (unitUFM minor (unitBag b)))
plusFloats :: FloatBinds -> FloatBinds -> FloatBinds
plusFloats (FB t1 b1) (FB t2 b2) = FB (t1 `unionBags` t2) (b1 `plusMajor` b2)
plusMajor :: MajorEnv -> MajorEnv -> MajorEnv
plusMajor = plusUFM_C plusMinor
plusMinor :: MinorEnv -> MinorEnv -> MinorEnv
plusMinor = plusUFM_C unionBags
floatsToBindPairs :: Bag FloatBind -> [(Id,CoreExpr)] -> [(Id,CoreExpr)]
floatsToBindPairs floats binds = foldrBag add binds floats
  where
    add (Rec pairs)         binds = pairs ++ binds
    add (NonRec binder rhs) binds = (binder,rhs) : binds
install :: Bag FloatBind -> CoreExpr -> CoreExpr
install defn_groups expr
= foldrBag install_group expr defn_groups
  where
    install_group defns body = Let defns body
partitionByMajorLevel, partitionByLevel
:: Level -- Partitioning level
-> FloatBinds -- Defns to be divided into 2 piles...
-> (FloatBinds, -- Defns with level strictly < partition level,
Bag FloatBind) -- The rest
-- ---- partitionByMajorLevel ----
-- Float it if we escape a value lambda, *or* if we get to the top level
-- If we can get to the top level, say "yes" anyway. This means that
-- x = f e
-- transforms to
-- lvl = e
-- x = f lvl
-- which is as it should be
partitionByMajorLevel (Level major _) (FB tops defns)
= (FB tops outer, heres `unionBags` flattenMajor inner)
  where
    (outer, mb_heres, inner) = splitUFM defns major
heres = case mb_heres of
Nothing -> emptyBag
Just h -> flattenMinor h
partitionByLevel (Level major minor) (FB tops defns)
= (FB tops (outer_maj `plusMajor` unitUFM major outer_min),
here_min `unionBags` flattenMinor inner_min
`unionBags` flattenMajor inner_maj)
  where
    (outer_maj, mb_here_maj, inner_maj) = splitUFM defns major
(outer_min, mb_here_min, inner_min) = case mb_here_maj of
Nothing -> (emptyUFM, Nothing, emptyUFM)
Just min_defns -> splitUFM min_defns minor
here_min = mb_here_min `orElse` emptyBag
wrapCostCentre :: CostCentre -> FloatBinds -> FloatBinds
wrapCostCentre cc (FB tops defns)
= FB (wrap_defns tops) (mapUFM (mapUFM wrap_defns) defns)
  where
    wrap_defns = mapBag wrap_one
wrap_one (NonRec binder rhs) = NonRec binder (mkSCC cc rhs)
wrap_one (Rec pairs) = Rec (mapSnd (mkSCC cc) pairs)
|
{"url":"https://downloads.haskell.org/~ghc/7.0.3/docs/html/libraries/ghc-7.0.3/src/FloatOut.html","timestamp":"2024-11-03T21:31:16Z","content_type":"text/html","content_length":"81781","record_id":"<urn:uuid:d614002a-0843-49eb-966e-4721d5a01b75>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00830.warc.gz"}
|
Exponential Equations - Definition, Solving, and Examples - Grade Potential Kent, OH
Exponential EquationsDefinition, Solving, and Examples
In arithmetic, an exponential equation occurs when the variable appears in an exponent. This can be a scary topic for students, but with a bit of instruction and practice, exponential equations can be solved easily.
This blog post will cover the definition of exponential equations, the types of exponential equations, the procedure for solving them, and examples with solutions. Let's get started!
What Is an Exponential Equation?
The primary step to figure out an exponential equation is understanding when you have one.
Exponential equations are equations that have the variable in an exponent. For instance, 2x + 1 = 0 is not an exponential equation, but 2^(x+1) = 0 is an exponential equation.
There are two primary things to keep in mind for when attempting to determine if an equation is exponential:
1. The variable is in an exponent (meaning it is raised to a power)
2. There is only one term that has the variable in it (besides the exponent)
For example, look at this equation (the superscripts here are written with carets):
y = 3^x + 3x^2 + 7
The first thing you should notice is that the variable, x, appears in an exponent. The next thing you should notice is that there is an additional term, 3x^2, that has the variable in it – and not only in an exponent. This means that this equation is NOT exponential.
On the contrary, take a look at this equation:
y = 2^x + 5
Once again, the first thing you should notice is that the variable, x, is in an exponent. The next thing you should note is that there are no other terms that include any variable in them. This means that this equation IS exponential.
You will run into exponential equations when solving problems involving compound interest, algebra, exponential growth or decay, and various other functions.
Exponential equations are essential in mathematics and play a pivotal role in solving many computational problems. Therefore, it is important to fully understand what exponential equations are and how they can be used as you move ahead in your math studies.
Kinds of Exponential Equations
Variables appear in the exponent of an exponential equation, and exponential equations are remarkably easy to find in daily life. There are three primary kinds of exponential equations that we can solve:
1) Equations with identical bases on both sides. This is the simplest kind to solve, as we can simply set the two exponents equal to each other and solve for the unknown variable.
2) Equations with dissimilar bases on both sides that can be made the same using the rules of exponents. We will look at some examples below; once the bases have been made equal, you can follow the same steps as in the first kind.
3) Equations with different bases on each side that cannot be made the same. These are the trickiest to solve, but it is possible using the product rule for exponents: by raising both sides to the same power, we can combine the factors on each side so the bases match.
Once we have done this, we can set the two new expressions equal to each other and solve for the unknown variable. This blog does not cover logarithmic solutions, but we will tell you where to get help at the very end of this article.
How to Solve Exponential Equations
Now that we know the definition and kinds of exponential equations, we can learn to solve any such equation by following these easy steps.
Steps for Solving Exponential Equations
We have three steps that we are required to follow to work on exponential equations.
First, we must identify the base and the exponent variables in the equation.
Second, we must rewrite the exponential equation so that all terms have a common base. Then we can solve them using standard algebraic rules.
Lastly, we solve for the unknown variable. Once we have solved for the variable, we can plug this value back into the original equation to check the solution.
Examples of How to Work on Exponential Equations
Let's take a look at some examples to see how these steps work in practice.
Let’s start, we will work on the following example:
7^(y + 1) = 7^(3y)
We can observe that both bases are the same. Thus, all we need to do is set the exponents equal and solve using algebra: y + 1 = 3y, so 2y = 1 and y = 1/2.
Now, we substitute the value of y into the given equation to verify that the solution is correct:
7^(1/2 + 1) = 7^(3(1/2)), i.e. 7^(3/2) = 7^(3/2)
Let's follow this up with a more complicated problem. Let's solve this expression:
As you can see, the two sides of the equation do not share the same base. However, both sides are powers of two. In essence, the solution involves rewriting both the 4 and the 256 as powers of two (4 = 2^2 and 256 = 2^8), so we can replace the terms as follows:
Now we solve this expression to arrive at the final answer:
Apply algebra to solve for x in the exponents, as we did in the previous example.
We can check our answer by substituting 9 for x in the initial equation.
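As a quick numerical sanity check, the first worked example (which we read as 7^(y+1) = 7^(3y), with solution y = 1/2 from equating the exponents) can be verified in a few lines of Python; the caret-style reading of the equation is our assumption:

```python
# Check the first worked example: 7^(y+1) = 7^(3y).
# Equating exponents gives y + 1 = 3y, i.e. y = 1/2.
y = 0.5
lhs = 7 ** (y + 1)   # 7^(3/2)
rhs = 7 ** (3 * y)   # 7^(3/2)
assert abs(lhs - rhs) < 1e-9   # both sides ≈ 18.52
```

The same substitution trick works for any equation of this kind: once both sides share a base, checking a candidate solution is just evaluating both exponents.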
Keep practicing with examples and problems online; if you use the properties of exponents, you will master these techniques and be able to solve most exponential equations with ease.
Better Your Algebra Skills with Grade Potential
Solving exponential equations can be tough without guidance. Even though this guide covers the fundamentals, you still may encounter questions or word problems that stump you. Or perhaps you need some further guidance as logarithms come into play.
If you feel the same, consider signing up for a tutoring session with Grade Potential. One of our professional tutors can help you improve your skills and confidence, so you can give your next exam a grade-A effort!
|
{"url":"https://www.kentinhometutors.com/blog/exponential-equations-definition-solving-and-examples","timestamp":"2024-11-05T03:50:51Z","content_type":"text/html","content_length":"76777","record_id":"<urn:uuid:f5633d9c-89dc-41df-91bf-11be623e7f07>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00467.warc.gz"}
|
De Sitter Universe - Tru Physics
The de Sitter universe is a solution to Einstein’s field equations of general relativity, representing a cosmological model. Named after the Dutch astronomer Willem de Sitter, it describes an empty
universe with a positive cosmological constant, implying a constant positive curvature of space. This universe exhibits exponential expansion, a concept that is key to the theory of cosmic inflation.
Cosmological Constant and de Sitter Universe
The cosmological constant, denoted by Λ, represents a constant energy density of empty space. In the de Sitter model, Λ is the only source term in Einstein's field equations, and it drives the accelerated expansion.
Exponential Expansion
In the de Sitter universe, the scale factor, which describes the expansion of the universe, grows exponentially with time. This is represented mathematically as a(t) = a₀ e^(Ht), where the Hubble parameter H = c√(Λ/3) is constant.
De Sitter Space and Inflation
The concept of a de Sitter universe plays a critical role in inflationary cosmology, which posits a brief period of rapid, exponential expansion of the universe shortly after the Big Bang. During
inflation, the universe is approximately a de Sitter space. The inflationary period helps to explain several cosmological observations, such as the flatness and isotropy of the observable universe.
De Sitter Universe and Dark Energy
The cosmological constant in the de Sitter model is closely related to the concept of dark energy, a mysterious form of energy that permeates all of space and tends to accelerate the expansion of the universe. In the current cosmological consensus model (the ΛCDM model), dark energy is described by just such a cosmological constant.
The de Sitter universe offers a simplified, yet profound model for understanding the large-scale dynamics of our universe. The mathematical elegance of the de Sitter universe provides valuable
insights into the nature of cosmic expansion, the phenomenon of cosmic inflation, and the role of dark energy in accelerating the expansion of the universe.
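The relation between Λ and the constant expansion rate can be made concrete with a small back-of-the-envelope calculation. This is only a sketch: the value of Λ used below is an assumed, observationally motivated figure (roughly 1.1×10⁻⁵² m⁻²), not a number taken from the article above:

```python
import math

c = 2.998e8      # speed of light, m/s
Lam = 1.1e-52    # assumed cosmological constant, m^-2 (illustrative value)

H = c * math.sqrt(Lam / 3.0)        # constant Hubble rate of a de Sitter phase, 1/s
Mpc = 3.0857e22                      # metres per megaparsec
H_kms_Mpc = H * Mpc / 1e3            # same rate in the conventional km/s/Mpc units

t_efold_Gyr = (1.0 / H) / 3.156e16   # time for one e-fold of expansion, in Gyr
print(f"H ≈ {H_kms_Mpc:.0f} km/s/Mpc, one e-fold ≈ {t_efold_Gyr:.0f} Gyr")
```

With this Λ, H comes out in the mid-50s of km/s/Mpc and an e-fold takes on the order of 17 Gyr, which is why a pure cosmological constant only dominates the late-time expansion.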
Do you prefer video lectures over reading a webpage? Follow us on YouTube to stay updated with the latest video content!
Want to study more? Visit our Index here!
Have something to add? Leave a comment!
|
{"url":"https://tru-physics.org/2023/05/16/de-sitter-universe/","timestamp":"2024-11-04T13:39:54Z","content_type":"text/html","content_length":"126531","record_id":"<urn:uuid:7fd7d614-9344-4a1e-a2a2-099862966f30>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00588.warc.gz"}
|
Applications of variational principles in computing rotational flows
Ecer and Akay (1983) have developed a variational formulation of rotational flow for Euler equations. The present paper provides a summary of these developments. The considered variational
formulation provides a transformation of a type considered by Clebsch (1859). In this transformation, a new set of variables replaces the more commonly used primitive variables u(i), rho and p. Here,
u(i) denotes the velocity components, while rho is the density, and p the pressure. The employed transformation produces a natural uncoupling of the equations when written in a quasi-linear form.
After obtaining the governing equations in terms of the 'Clebsch variables', a solution scheme developed for calculating steady flows is discussed. Attention is given to numerical solutions of Euler
equations based on the derived variational principles, and a study of inviscid, separated flows is conducted.
IN: Advances in computational transonics (A86-20926 08-02). Swansea
Pub Date:
□ Flow Characteristics;
□ Inviscid Flow;
□ Rotating Fluids;
□ Separated Flow;
□ Transonic Flow;
□ Variational Principles;
□ Channel Flow;
□ Corner Flow;
□ Differential Equations;
□ Euler Equations Of Motion;
□ Lagrange Multipliers;
□ Steady Flow;
□ Unsteady Flow;
□ Fluid Mechanics and Heat Transfer
|
{"url":"https://ui.adsabs.harvard.edu/abs/1985act..book..777E/abstract","timestamp":"2024-11-02T18:26:14Z","content_type":"text/html","content_length":"36341","record_id":"<urn:uuid:ab20ca97-8769-430f-9826-c365981c20be>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00145.warc.gz"}
|
Location in space -- Coordinates
Further Reading
While you've all used graphs and coordinate systems in your math classes, to describe motion we have to take the additional step of tying the coordinate system to the physical world. The two-axis
graph in math (an "x-y plot") is a mathematical structure that allows us to use the tools of both geometry and algebra.
But in tying a graph to the physical world, we are doing more. We are making a mathematical model of something in the physical world — something non-trivial, both in making sense of the physics
concepts and in deciding how much of the math we get to legitimately use. Let's go through the process carefully.
We've talked about how we can assign a number to a length using an operational definition by comparing a standard to the length we want to quantify. This isn't good enough for describing location.
Consider the following story.
Where was he?
This is clearly silly. If he paints the "X" on his boat, it moves with him and it won't help him find the place on the next day. To be able to find it again he needs a marker that is a fixed
reference that he can use as a starting point to find the places he wants to find. He needs:
• a starting point,
• a direction to go in, and
• a distance to go along that direction.
These are what we need to set up a specification of position that communicates where something is. We call the way we do it a spatial coordinate system.
Creating a spatial coordinate system
A spatial coordinate system is a very particular kind of graph; it is one in which the points on the graph are meant to correspond to the points in real space — like a map.
In general, to specify a position, since we live in three-dimensional (3D) space, we will need 3 numbers. For example, if I want to tell you to meet me at my favorite Chinese restaurant in Washington,
DC, I can tell you it's at the corner of 7th Street and H Street NW on the 3rd floor. 7th Street gives you an east-west location, H Street gives you a north-south location, and 3rd floor tells you
the vertical location. (And "NW" tells you which quadrant of the city, since DC doesn't choose to use negative numbers like we do.)
But for most of the examples we'll deal with in this class, we'll restrict our motions to one or two dimensions (1D or 2D) so we can use a plane.
1. Choose a reference point (origin).
2. Choose two axes (called here x and y and taken to be perpendicular to each other)
3. Choose a length scale for measuring distances (here taken to be the same in both directions - the "m" on the graph stands for "meters").
In a spatial coordinate system, a curve might represent a path an object follows. Since an object can go anywhere, the curve can go back and forth, cross itself, and do lots of other things that
graphs in a math class don't usually do. (In math the term coordinate system by itself is often used to represent the axes on any kind of graph and we will also do that.)
(Most of the graphs we will draw in this class won't be like this but will be more abstract and need interpretation. See Kinematic graphs.)
Conventions for spatial coordinate systems
There are a number of conventions that we will apply in this class for creating spatial coordinate systems.
• The two axes cross at the origin.
□ Sometimes in non-spatial coordinate systems the origin is not shown. This is called a suppressed zero and might be used to magnify the variation in a curve. (But it is often done for the
purpose of misleading the viewer into thinking an effect is more important than it really is.)
• The positive direction of the axis is indicated with an arrowhead.
□ The other direction is negative.
• The axes are labeled including specifying the unit in which the axis is measured.
□ Because we are mapping something physical, as always, units are crucial.
These conventions will turn out to be really important since we will be making many different kinds of graphs and things can get very confusing when they are not followed.
Location vs displacement and length
Three concepts associated with the measurement of location are often confused: position, displacement, and length. If we are specifying something's location by giving its position along a line, we might give its coordinate, $x$. If an object moves from point $x_1$ to a point $x_2$, it has moved a distance $\Delta x = x_2 - x_1$, a displacement. If we are specifying the size of an object, we might write that its length is the difference of the positions of its endpoints: $L = x_2 - x_1$. All three, $x$, $\Delta x$, and $L$, have dimensions [$x$] = [$\Delta x$] = [$L$] = L, and all are measured in the same units. But they mean very different things. This can be very confusing, since equations describing motion given in high school physics classes often write position, $x$, when what is really meant is displacement, $\Delta x$. You can get away with this if you always start your displacements at the position $x = 0$ (and similarly choose your start time as $t = 0$), but our problems will not be that simple and we will not be able to get away with it. You will have to be careful to separate these concepts. (See the page Values, change, and rates of change.)
Joe Redish and Wolfgang Losert 9/2/12
Article 308
Last Modified: April 9, 2019
|
{"url":"https://www.compadre.org/nexusph/course/view.cfm?ID=308","timestamp":"2024-11-05T02:59:25Z","content_type":"text/html","content_length":"19374","record_id":"<urn:uuid:f96c12ae-9270-4dd9-9abe-74a1f7244c33>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00284.warc.gz"}
|
Rated Contest 2 P1 - An Odd Ski Trip
Submit solution
Points: 5 (partial)
Time limit: 2.0s
Memory limit: 256M
Daniel is planning a ski trip with his friends for New Year's! But first, he must decide which ski resort to stay at... with a twist. Daniel loves odd numbers, so he will only go on ski hills with an
odd height. In addition, Daniel's favorite number is , so he wants the total height of all ski hills to be divisible by .
Can you help Daniel design a ski resort with hills, each with a different odd height? Each height must be in the range — we don't want Daniel and his friends to die if a hill is too tall!
Subtask 1 [50%]
is odd.
Subtask 2 [50%]
Input Specification
The first and only line of input contains the integers and .
Output Specification
Output integers on a single line, the heights of the hills. Output -1 if there exists no such solution.
Sample Input
Sample Output
One possible solution is the following:
Sample Explanation
Each hill height is a unique odd integer between , and the sum of all hill heights is , which is divisible by .
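Since the concrete values of the hill count, the divisor, and the height cap were not given above, here is a hedged sketch of one constructive strategy for the odd-divisor subtask: take the first n odd numbers, then raise the largest by an even amount chosen so the total becomes divisible by k. The function name, and the assumptions that k is odd and that the adjusted height stays within the (unstated) range, are ours, not the problem setters':

```python
def ski_hills(n, k):
    """Return n distinct odd heights whose sum is divisible by k (assumes k odd)."""
    hills = list(range(1, 2 * n, 2))   # 1, 3, ..., 2n-1; their sum is n*n
    r = (-sum(hills)) % k              # residue still needed (mod k)
    # A height may only change by an even amount to stay odd. Since k is odd,
    # 2 is invertible mod k, so solve 2t ≡ r (mod k) for t (Python 3.8+ pow).
    t = (r * pow(2, -1, k)) % k
    hills[-1] += 2 * t                 # the largest hill remains the unique max
    return hills

print(ski_hills(5, 7))   # [1, 3, 5, 7, 19]; sum 35 is divisible by 7
```

The adjustment never breaks distinctness (only the maximum grows), and the sum becomes n² + 2t ≡ n² + r ≡ 0 (mod k) by construction.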
There are no comments at the moment.
Secular Perturbations of Planetary Orbits and their Representation as Series
The long term changes of the orbital elements of the planets are described by secular perturbation theories. After a short historical discussion, the secular perturbation equations are derived by
means of the formalism of the Lie series transformations. To solve the classical problem of the long term changes in the major semiaxes second order effects have to be computed. As for the long term
changes in the eccentricities and inclinations, they can be computed by means of higher degree theories. However the time span over which the latter apply cannot be increased at will. This is because of the divergence of the perturbative series, a fundamental property of a non-integrable system such as the N-body problem. Numerical integrations are therefore an essential tool both to assess the
reliability of any analytic theory and to provide data on the fundamental frequencies of the secular system and on the occurrence of secular resonances. Examples are taken from the LONGSTOP
integrations of the outer planets for 100 million years.
Mean orbital elements are obtained from their instantaneous, osculating counterparts by removal of the short periodic perturbations. They can be computed by means of different theories,
analytical or numerical, depending on the problem and accuracy required. The most advanced contemporary analytical theory (Knežević 1988) accounts only for the perturbing effects due to Jupiter
and Saturn, to the first order in their masses and to degree four in eccentricity and inclination. Nevertheless, the mean elements obtained by means of this theory are of satisfactory accuracy for the majority of the asteroids in the main belt (Knežević et al. 1988), for the purpose of producing large catalogues of mean and proper elements, to identify asteroid families, to assess their
age, to study the dynamical structure of the asteroid belt and chaotic phenomena of diffusion over very long time spans. In the vicinity of the main mean motion resonances, however, especially
2:1 mean motion resonance with Jupiter, these mean elements are of somewhat degraded accuracy.
Project LONGSTOP was set up to investigate the long term dynamics of the outer solar system over timescales comparable to its age. This was done by means of numerical integrations on a CRAY-1S
computer. Comparison with analytic theories required the use of filtering procedures and Fourier analysis. The 6-body point-mass newtonian problem, plus a gaussian ring model for the effect of
the inner planets, turned out to be a good approximation to the real system; general relativity corrections can be easily introduced although they are not yet critical over 100 Myr. Long term
variations in shape and orientation of planetary orbits from numerical integrations over 9.3 Myr suggested that analytic theories must be improved in order to be valid for such a timespan.
Variations in the major semiaxes of Uranus and Neptune with a 1.119 Myr period have been found in the data; they could be recovered also analytically once the amplifying effect of the 2/1
quasi-resonance in mean motion between Uranus and Neptune was taken into account. The 100 Myr integration LONGSTOP 1B revealed the presence of a very small divisor with 31 Myr period. In relation
with this small divisor, and with others which could not be identified with combinations of up to 8 fundamental frequencies, there appeared to be an accumulation of spectral lines of comparable
amplitude in some regions of the spectrum. This was not the case when the output of the 9.3 Myr integration LONGSTOP 1A was analyzed; it suggests that 100 Myr might be long enough a timespan
already to reveal the presence of non regular regions of motion in the phase space.
Though the concept of the Lie transform dates back to more than a century ago, it is only in about the last thirty years that this concept has been introduced into perturbative theories and then
applied on a vast scale in various fields of physics. As we recall in the course of this chapter, the field where the concept of the Lie transform was introduced for the first time is celestial
mechanics and, incredibly, this concept is the only development of perturbation theory which cannot in some way be made to date back to Poincaré. Equally surprising is that the “old” canonical
perturbation theory, in spite of the awkwardness involved by the use of a generating function with “mixed” variables, has ruled up until now, never falling into discredit, not even as a
consequence of exaggerations like that of Delaunay, who had to calculate no less than 505 successive canonical transformations. We think that the record for absurdity has been set in plasma
physics, where somebody established the practice of quantizing classical systems, applying quantum perturbation theory (which provides more practical rules) and then letting the Planck constant
h → 0 in the result. That was the situation until three decades ago. For these reasons, we thought it right to follow, in our exposition, wherever it has been possible, the chronological order in
which the various contributions have appeared, at the end showing how the Lie transform method is substantially the right method for implementing KAM techniques. In this chapter, as in the
preceding ones, we have tried to isolate what appeared to us to be the fundamental concepts and to insist on them, instead of dwelling upon the exposition of complicated examples or involved
formulae for calculations. For the latter, the reader will find all the necessary information in the bibliographical notes.
We analyze the dynamics of a driven, damped pendulum as used in mechanical clocks. We derive equations for the amplitude and phase of the oscillation, on time scales longer than the pendulum
period. The equations are first order ODEs and permit fast simulations of the joint effects of circular and escapement errors, friction, and other disturbances for long times. The equations
contain two averages of the driving torque over a period, so that the results are not very sensitive to the fine structure of the driving. We adopt a constant-torque escapement and study the
stationary pendulum rate as a function of driving torque and friction. We also study the reaction of the pendulum to a sudden change in the driving torque, and to stationary noisy driving. The
equations for the amplitude and phase are shown to describe the pendulum dynamics quite well on time scales of one period and longer. Our emphasis is on a clear exposition of the physics.
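The amplitude/phase picture described above can be checked against a direct integration. The following is a minimal sketch, not the paper's own code: it assumes the standard damped pendulum model theta'' = -gamma*theta' - omega0^2*sin(theta) + torque, integrates it with fixed-step RK4, and records the successive swing maxima, which trace out the slowly decaying amplitude envelope:

```python
import math

def pendulum_step(theta, omega, dt, gamma, omega0, torque):
    """One RK4 step of theta'' = -gamma*theta' - omega0**2*sin(theta) + torque."""
    def f(th, om):
        return om, -gamma * om - omega0 ** 2 * math.sin(th) + torque
    k1 = f(theta, omega)
    k2 = f(theta + 0.5 * dt * k1[0], omega + 0.5 * dt * k1[1])
    k3 = f(theta + 0.5 * dt * k2[0], omega + 0.5 * dt * k2[1])
    k4 = f(theta + dt * k3[0], omega + dt * k3[1])
    theta += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    omega += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return theta, omega

def swing_maxima(theta0, gamma, omega0, t_end, dt=0.001):
    """Integrate the undriven pendulum and record successive positive swing maxima."""
    theta, omega, prev_omega = theta0, 0.0, 0.0
    maxima = []
    for _ in range(int(t_end / dt)):
        theta, omega = pendulum_step(theta, omega, dt, gamma, omega0, 0.0)
        if prev_omega > 0.0 >= omega:  # omega changes sign downward: amplitude maximum
            maxima.append(theta)
        prev_omega = omega
    return maxima
```

With weak damping the recorded maxima decay roughly as exp(-gamma*t/2), which is exactly the kind of slow envelope the amplitude/phase equations describe on time scales longer than a period.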
The purpose of this series of lecture notes is to give an outline of the basic tools required to show the occurrence of chaotic motions in the simplest non-integrable problems in Celestial Mechanics, such as the circular restricted planar 3-body problem. No formal proofs will be given here; they can be found in the references given for each section. Section 1 describes the linear and local theory of ordinary differential equations in the neighbourhood of a fixed point; the problems arising in the embedding of invariant stable and unstable manifolds are also discussed. Section 2 is about periodic orbits; the subjects discussed include variational equations, surfaces of section, the continuation of periodic orbits in the restricted 3-body problem, and bifurcation of hyperbolic periodic orbits from resonant periodic orbits. Section 3 covers fundamental models of resonance, the global behaviour of separatrices, and their intersections; all this makes it possible to give at least an outline of the proof of the fundamental result (presented by Poincaré in his book Les méthodes nouvelles de la mécanique céleste) by which homoclinic points must necessarily occur in the restricted problem. A short conclusion underscores the fact (shown in a rigorous way much later) that this in turn implies that chaos in the strongest possible sense occurs in the restricted problem, and is an essential feature of every non-integrable system, even very simple ones with only two degrees of freedom. I apologize for reporting here my lectures in a very short format, almost without comments in between the formulas and statements of the main results; my understanding of the purpose of these notes is that they should serve as a reminder of the existence of many subjects to be studied, rather than a complete presentation, which could not be contained in this format.
A simple model is presented for the coupled dynamics of the orbit-rotation-climate system of Mars. Changes in the orientation of the spin pole, relative to the orbit pole, influence the
spatiotemporal pattern of incident radiation and thus drive climatic mass transport into and out of the polar regions on a variety of timescales. Changes in the mass distribution occur from
direct climatic forcing and compensating viscous flow in the interior. The net change in mass distribution influences the rate of spin axis precession and thereby influences obliquity. The rate
of secular obliquity drift depends on several poorly known parameters, including the magnitudes and response times of volatile inventories and viscosity structure within Mars. Even relatively
modest secular obliquity drift can lead to trapping in nearby resonances. The dissipative nature of the coupled dynamical system makes reconstruction of past evolution much more difficult than
for a purely inertial system. The long-term obliquity history of Mars is dominated by climate.
The long term evolution of the orbits of the asteroids is studied by means of proper elements, which are quasi-integrals of the motion. After a short review of the classical theories for secular
perturbations, this paper presents the state of the art for the computation of proper elements. The recent theories have been extended to higher degree in the eccentricities and inclinations, and
to the second order in the perturbing masses; they use new iterative algorithms to compute secular perturbations with fixed initial conditions but variable frequencies. This makes it possible to compute proper elements stable over time spans of several million years, within a range of oscillations small enough to allow the identification of asteroid families; the same iterative algorithm can
also be used to automatically detect secular resonances, that is to map the dynamical structure of the main asteroid belt. However the proper element theories approximate the true solution of the
N-body problem with a conditionally periodic solution of a truncated problem, while the orbits of most asteroids are not conditionally periodic, but chaotic; positive Lyapounov exponents have
been detected for a large number of real asteroids. The phenomenon of stable chaos occurs whenever the range of oscillation of the proper elements, as computed by state of the art theories,
remains small for time spans of millions of years, while the Lyapounov time (in which the orbits diverge by a factor exp(1)) is much shorter, e.g. a few thousand years. This can be explained only by a theory which accounts correctly for the degeneracy of the unperturbed 2-body problem used as a first approximation. The two stages of computation of mean and proper elements are each subject to the phenomena of resonance and chaos; stable chaos occurs when a weak resonance affects the computation of mean elements, but the solution of the secular perturbation equations is regular.
The problem of calculating the proper elements of asteroids for the purpose of family identification is examined analytically. The derivation of the Lie-series theoretical model of Yuasa (1973)
is reviewed; the selection of a coordinate system and of data for the motion of the major planets is explained; and an indirect technique for estimating the accuracy of the elements calculated is
described. Results for 158 Koronis, 221 Eos, 24 Themis, and 8 Flora are presented in extensive tables and graphs and discussed in detail. While the elements for Flora are found to be inaccurate,
calling into question the membership or even the reality of its family of asteroids, those for Koronis, Eos, and Themis are more accurate than those previously available, although not yet at the
0.001 level required for reliable family classification.
A new theory for the calculation of proper elements, taking into account terms of degree four in the eccentricities and inclinations, and also terms of order two in the mass of Jupiter, has been
derived and programmed in a self contained code. It has many advantages with respect to the previous ones. Being fully analytical, it defines an explicit algorithm applicable to any chosen set of
orbits. Unlike first order theories, it takes into account the effect of shallow resonances upon the secular frequencies; this effect is quite substantial, e.g. for Themis. Short periodic effects
are corrected for by a rigorous procedure. Unlike linear theories, it accounts for the effects of higher degree terms and can thus be applied to asteroids with low to moderate eccentricity and
inclination; secular resonances resulting from the combination of up to four secular frequencies can be accounted for. The new theory is self-checking: the proper elements being computed with an
iterative algorithm, the behaviour of the iteration can be used to define a quality code. The amount of computation required for a single set of osculating elements, although not negligible, is
such that the method can be systematically applied on long lists of osculating orbital elements, taken either from catalogues of observed objects or from the output of orbit computations. As a
result, this theory has been used to derive proper elements for 4100 numbered asteroids, and to test the accuracy by means of numerical integrations. These results are discussed both from a
quantitative point of view, to derive an a posteriori accuracy of the proper elements sets, and from a qualitative one, by comparison with the higher degree secular resonance theory.
Planetary and satellite theories have been historically and are presently intimately related to the available computing capabilities, the accuracy of observational data, and the requirements of
the astronomical community. Thus, the development of computers made it possible to replace planetary and lunar general theories with numerical integrations, or special perturbation methods. In
turn, the availability of inexpensive small computers and high-speed computers with inexpensive memory stimulated the requirement to change from numerical integration back to general theories, or
representative ephemerides, where the ephemerides could be calculated for a given date rather than using a table look-up process. In parallel with this progression, the observational accuracy has
improved such that general theories cannot presently achieve the accuracy of the observations, and, in turn, it appears that in some cases the models and methods of numerical integration also
need to be improved for the accuracies of the observations. Planetary and lunar theories were originally developed to be able to predict phenomena, and provide what are now considered low
accuracy ephemerides of the bodies. This proceeded to the requirement for high accuracy ephemerides, and the progression of accuracy improvement has led to the discoveries of the variable
rotation of the Earth, several planets, and a satellite. By means of mapping techniques, it is now possible to integrate a model of the motion of the entire solar system back for the history of
the solar system. The challenges for the future are: Can general planetary and lunar theories with an acceptable number of terms achieve the accuracies of observations? How can numerical
integrations more accurately represent the true motions of the solar system? Can regularly available observations be improved in accuracy? What are the meanings and interpretations of stability
and chaos with respect to the motions of the bodies of our solar system? There has been a parallel progress and development of problems in dealing with the motions of artificial satellites. The
large number of bodies of various sizes in the limited space around the Earth, subject to the additional forces of drag, radiation pressure, and Earth zonal and tesseral forces, require more
accurate theories, improved observational accuracies, and improved prediction capabilities, so that potential collisions may be avoided. This must be accomplished by efficient use of computer resources.
The variations in the obliquity of Mars are considered to be the likely source of major climatic variations on that planet. This paper explores the range of uncertainty in the obliquity history
of Mars associated with the present uncertainty in the axial precession rate, applying three different analytic techniques. It is shown that, within the observationally allowed range of axial
precession rates, there are some intervals where the obliquity history of Mars is only weakly dependent on the precession rate, and other intervals where the obliquity is very sensitively
dependent on the precession rate. A very wide range of obliquity histories are possible, including some which involve resonance passages within the relatively recent past. It is estimated that
obliquities as high as 51.4 deg or as low as 0.2 deg may have occurred within the last ten million years.
Equations of motion for the general four-body problem are derived in terms of Jacobian coordinates. A reduction to the three-body problem yields an approach for studying the perturbations
experienced by a binary about whose center of mass a third mass revolves. A stability criterion for the binary and the revolving mass is proposed. This stability criterion can be related to a
similar criterion developed by Zare (1977). Known triple star systems, data from the solar system, and numerical experiments conducted for triple mass problems are analyzed through use of the
stability criterion. Difficulties in applying the stability criterion to the Saturn-Titan-Hyperion and sun-Neptune-Pluto systems also receive attention.
The secular system of the 8 main planets of the solar system has been computed up to order 2 and degree 5. It was numerically integrated over 30 million years and Fourier analysed to obtain a
solution similar to the results of analytical theories.
In Poincaré's “Méthodes Nouvelles de la Mécanique Céleste”, pp. 127–135, there is given a formal solution of the problem of three bodies in motion under their mutual gravitational attractions, in
the case where one of the bodies (the “primary”) has mass appreciably larger than the others, and where the motion of the others relative to the primary is nearly circular, and nearly coplanar.
We may call this the “planetary case” of the general gravitational problem of three bodies. The solution is derived using Von Zeipel type transformations, in a number of stages, making use of
infinite series in powers of the ratios of the smaller masses to that of the primary, and, in the treatment of the secular variations, powers of quantities of the order of the orbital
eccentricities and orbital inclinations. In the present treatment, the simple extension to n planets is made, and the transformations employing power series are introduced making use of the Lie
series method introduced by Hori (Pub. Astron. Soc., Japan, Vol. 18, pp. 287-295, 1966) which gives explicit expressions for the transformed variables in terms of the untransformed, and
vice-versa. The short-period terms are removed by a single transformation at the outset, so that the elements of the matrix defining the linear transformation employed in the secular variation
theory are functions of the constant transformed major axes, and so do not themselves possess short-period terms as those in Poincare’s solution do.
After a presentation of Lyapunov characteristic exponents (LCEs), we recall their basic properties and numerical methods of computation. We review some numerical computations concerned with LCEs, mainly those concerning the dimensions of invariant manifolds and chaotic attractors.
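The standard numerical recipe for the largest LCE of a map is to average the logarithm of the tangent-map stretch along an orbit. A hedged illustration on the logistic map (chosen for brevity, rather than a celestial-mechanics flow):

```python
import math

def lyapunov_logistic(r, n_iter=100_000, n_transient=1_000, x0=0.1):
    """Estimate the largest Lyapunov exponent of x -> r*x*(1-x)
    by averaging log|f'(x)| = log|r*(1 - 2x)| along the orbit."""
    x = x0
    for _ in range(n_transient):       # let the orbit settle onto the attractor
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)))
    return total / n_iter
```

At r = 4 the estimate converges to ln 2 (a positive exponent, i.e. chaos), while in a periodic window such as r = 3.2 it comes out negative; for a continuous flow one would instead integrate the variational equations and renormalize, but the averaging principle is the same.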
A theory of general perturbations based on Lie series is presented together with its predecessor, the von Zeipel method. Some drawbacks of the von Zeipel method are shown together with the
motivation for an improved theory by a pedagogical purpose. It is shown that perturbation theories based on canonical transformations and averaging principles yield the same results (through the
second order) irrespective of the implicit or explicit type of the transformations. The motion of a near earth satellite in an orbit with small eccentricity and inclination, and that with small
angular momentum are presented as two examples of the use of the theory.
Results are reported from long-term numerical integrations of the motion of the outer planets, undertaken as part of the Longstop project (Nobili, 1987). The theoretical basis and numerical
implementation of the computations are reviewed, and the results are presented graphically and analyzed in detail. In the 9.5-Myr computations of Longstop 1A, the dynamical structure of the outer
solar system is shown to be significantly affected by the secular frequency g5-g7 (1.119 Myr), involving the pericenters of Jupiter and Uranus. The accumulation of spectral lines observed in the
long-period spectrum of the outer solar system is tentatively attributed to a secular small divisor of period about 31 Myr, found in the 100-Myr integration Longstop 1B.
For the ‘planetary case’ of the gravitational n-body problem in three dimensions, a sequence of Lie series contact transformations is used to construct asymptotic series representations for the
canonical parameters of the instantaneous orbits in a Jacobi formulation. The series contain only periodic terms, the frequencies being linear combinations of those of the planetary orbits and
those of the secular variations of the apses and nodes, and the series are in powers of the masses of the planets in terms of that of the primary, and of a quantity of the order of the excursions
of the eccentricities and inclinations of the orbits. The treatment avoids singularities for circular and coplanar orbits. It follows that the major axes are given by series of periodic terms
only, to all orders in the planetary masses.
Modern theories of dynamical systems have very clearly demonstrated the unexpected fact that systems governed by the equations of Newtonian dynamics do not necessarily exhibit the
'predictability' property. Indeed, very recent researches have shown that in wide classes of very simple systems satisfying those equations predictability is impossible beyond a certain definite
time horizon.
CONTENTS
Introduction: § 1. Results. § 2. Preliminary results from mechanics. § 3. Preliminary results from mathematics. § 4. The simplest problem of stability. § 5. Contents of the paper.
Chapter I. Theory of perturbations: § 1. Integrable and non-integrable problems of dynamics. § 2. The classical theory of perturbations. § 3. Small denominators. § 4. Newton's method. § 5. Proper degeneracy. § 6. Remark 1. § 7. Remark 2. § 8. Application to the problem of proper degeneracy. § 9. Limiting degeneracy. Birkhoff's transformation. § 10. Stability of positions of equilibrium of Hamiltonian systems.
Chapter II. Adiabatic invariants: § 1. The concept of an adiabatic invariant. § 2. Perpetual adiabatic invariance of action with a slow periodic variation of the Hamiltonian. § 3. Adiabatic invariants of conservative systems. § 4. Magnetic traps. § 5. The many-dimensional case.
Chapter III. The stability of planetary motions: § 1. Picture of the motion. § 2. Jacobi, Delaunay and Poincaré variables. § 3. Birkhoff's transformation. § 4. Calculation of the asymptotic behaviour of the coefficients in the expansion of $\bar{\bar F}_1$. § 5. The many-body problem.
Chapter IV. The fundamental theorem: § 1. Fundamental theorem. § 2. Inductive theorem. § 3. Inductive lemma. § 4. Fundamental lemma. § 5. Lemma on averaging over rapid variables. § 6. Proof of the fundamental lemma. § 7. Proof of the inductive lemma. § 8. Proof of the inductive theorem. § 9. Lemma on the non-degeneracy of diffeomorphisms. § 10. Averaging over rapid variables. § 11. Polar coordinates. § 12. The applicability of the inductive theorem. § 13. Passage to the limit. § 14. Proof of the fundamental theorem.
Chapter V. Technical lemmas: § 1. Domains of type D. § 2. Arithmetic lemmas. § 3. Analytic lemmas. § 4. Geometric lemmas. § 5. Convergence lemmas. § 6. Notation.
Chapter VI. Appendix: § 1. Integrable systems. § 2. Unsolved problems. § 3. Neighbourhood of an invariant manifold. § 4. Intermixing. § 5. Smoothing techniques.
References
The 'LONGSTOP' project, which investigates solar system dynamical stability over timescales that are comparable to its lifetime through numerical integrations, has concentrated on obtaining a
compression of the output which will be suitable for comparison with analytical theories. Attention is presently given to output decimation and digital filtering, which facilitate the study of
long term changes in the orbital elements as deduced from the output of a numerical integration, and to Fourier analysis techniques, by means of which prominent lines in the dynamical spectrum
can be identified when theoretical constraints on the allowed combination frequencies are taken into account. In this way, a synthetic secular perturbation theory is built from the numerical
experiments for comparison with available analytical theories.
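The filtering-plus-decimation step can be sketched as follows. This is a generic boxcar (moving-average) low-pass followed by downsampling, not the LONGSTOP filter itself, which used carefully designed digital filters to suppress short-period terms before Fourier analysis:

```python
import math

def lowpass_decimate(samples, window, step):
    """Boxcar (moving-average) low-pass of half-width window//2,
    then decimation: keep every step-th filtered sample."""
    half = window // 2
    out = []
    for i in range(half, len(samples) - half, step):
        out.append(sum(samples[i - half:i + half + 1]) / (2 * half + 1))
    return out
```

Applied to a signal that mixes a slow secular-style oscillation with a fast short-period term, the moving average strongly attenuates the fast component while leaving the slow one nearly intact, after which the decimated output is short enough for spectral analysis.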
The short-periodic perturbations of asteroid elements are calculated by means of the fourth-degree analytic theory of Yuasa (1973) and, for comparison, by means of a combination of numerical
integration and filtering techniques. The accuracy and reliability of the obtained mean elements are analysed from the point of view of their applicability as parameters for asteroid
classification into families. Special attention has been paid to the influence of near resonances and to the problem of the deterioration of the accuracy for high eccentricity and high
inclination objects. It has been found that in most cases a simple and fast analytic procedure can provide data accurate enough for a reliable classification.
The author has computed the differential system giving the secular evolution of the 8 main planets of the solar system up to the order 2 with respect to the masses and degree 5 in eccentricity
and inclination, including lunar and relativistic contributions. This secular system is numerically integrated over 30 million years. A modified Fourier analysis is performed to obtain a solution for the secular evolution of the orbits in quasi-periodic form. Comparison with Bretagnon's ephemeris VSOP82 makes it possible to derive uncertainties for the determination of the main frequencies of
the secular system. Comparisons are made with the results of long term numerical integrations of Applegate et al. (1986) and Carpino et al. (1986) and with the analytical theory of Bretagnon
(1974, 1984). The solutions of the outer solar system appear to be more stable than the solutions of the inner solar system.
Within the LONGSTOP (Long-term Gravitational Stability Test of the Outer Planets) research project, numerical integrations of the orbits of the outer planets show, for the first time, dynamical
features in the behavior of the semimajor axes of these planets over time scales of the order of millions of years. The most interesting one is an oscillation, in antiphase, of the semimajor axes
of Uranus and Neptune revealing an almost exact exchange of energy of the two planets with one another. The period of the oscillation is 1,119,416 years, the same as the period of the exchange
in angular momentum between Jupiter and Uranus.
Long term numerical integrations of planetary orbits designed to study the stability of the Solar System over timescales comparable to its age have become very promising thanks to the
availability of very powerful computers and to a substantial improvement in methods of investigating the stability of hierarchical dynamical systems. The stability of such numerical integrations
relies on the ability to control all possible sources of error. Among the errors caused by the inadequacy of the physical model are those due to the fact that Newton's theory of gravitation is
used instead of general relativity. It is shown that the secular advance of perihelia predicted by general relativity can be simulated exactly by a 1/r-squared perturbing potential with almost
negligible additional cost in computer time.
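The equivalence invoked above can be seen from a standard first-order result of classical perturbation theory (the coefficient matching below is a sketch consistent with the abstract, not a quotation from the paper): a perturbing potential proportional to 1/r² produces a uniform apsidal advance, and its strength can be tuned to the general-relativistic rate.

```latex
% Apsidal advance of a Kepler orbit under the perturbation
% \delta V(r) = -\alpha / r^2: the centrifugal term absorbs the
% perturbation, L^2 \to L^2 - 2m\alpha, giving per orbit (to first
% order in \alpha)
\[
  \Delta\varpi = \frac{2\pi m \alpha}{L^2},
  \qquad L^2 = G M m^2 a (1 - e^2).
\]
% Equating this to the general-relativistic advance per orbit,
\[
  \Delta\varpi_{\mathrm{GR}} = \frac{6\pi G M}{c^2 a (1 - e^2)}
  \quad\Longrightarrow\quad
  \alpha = \frac{3 G^2 M^2 m}{c^2},
  \qquad
  \delta V(r) = -\frac{3 G^2 M^2 m}{c^2 r^2}.
\]
```

Because the matched α is independent of the orbital elements, a single 1/r² term per planet reproduces the secular perihelion drift exactly, which is why the extra cost in the integration is almost negligible.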
One of the oldest problems of celestial mechanics is that of the long-term behaviour of the semimajor axes a of the planetary orbits. Analytical theories [1,2] predict periodic variations in a, some
of which may have very long periods, but these terms have never been computed. We have now performed a 9.3-Myr numerical integration of the orbits of the outer planets, using a pure newtonian
point mass model. An accurate integrator and an effective low-pass filtering of the output allow us to detect high-order variations in the energies, and hence also in a, with periods ranging from
tens of thousands to millions of years. The most interesting feature is an energy exchange between Uranus and Neptune with a period of 1,119,000 years, the same as the period of the libration
between the perihelia of Jupiter and Uranus [3,4]. The mechanism involves Jupiter and also Saturn; moreover, their energy shows puzzling longer-term trends. The energy of Pluto changes mostly with
periods close to that of the 3:2 libration in mean motion with Neptune. Its spectrum in this region shows a very complicated structure; however, we have found no indication of chaotic behaviour.
Five outer planets are numerically integrated over five million years in the Newtonian frame. The argument of Pluto's perihelion librates about 90 degrees with an amplitude of about 23 degrees.
The period of the libration depends on the mass of Pluto: 4.0×10^6 years for M_Pluto = 2.78×10^-6 M_Sun and 3.8×10^6 years for M_Pluto = 7.69×10^-9 M_Sun, which is the newly determined mass. The motion of Neptune's perihelion is more sensitive to the mass of Pluto. For M_Pluto = 7.69×10^-9 M_Sun, the perihelion of Neptune does circulate counter-clockwise, and for M_Pluto = 2.78×10^-6 M_Sun it does not circulate and Neptune's eccentricity does not have a minimum. With initial conditions which do not lie in the resonance region between Neptune and Pluto, a close approach between them takes place frequently and the orbit of Pluto becomes unstable and irregular.
The analytical stability criterion applicable to coplanar hierarchical three-body systems described in the first paper of this series, Walker et al. (1980), is modified to give an exact representation of Hill-type stability in all such cases. The dependence of the stability on all orbital parameters (in the coplanar case) is taken into account. The criterion for stability is now dependent upon the participating masses, the elements of the initial osculating Keplerian orbits of the system (viz. the orbits of m_2 about m_1 and m_3 about the mass-centre of the (m_1, m_2) system) and the positions within these orbits. The behaviour of the stability of such systems is demonstrated (both analytically and numerically) with respect to certain of the parameters involved, to consider effects not dealt with in the above-mentioned paper. In particular two interesting real cases of triple systems in the Solar System are discussed, namely Sun-Jupiter-Saturn and Earth-Moon-Sun. The results of the present paper are compared with those of past authors who considered the same systems. Finally some general features arising out of our analysis are discussed.
Hierarchical stability of the outer Solar System is monitored through its 3-body subsystems by using numerically computed ephemerides for 5 × 10^6 yr. It is found that the stability parameters of Sun-Jupiter-Saturn and Sun-Uranus-Neptune oscillate in anti-phase in 1.1 × 10^6 yr. The mechanism responsible for this locking is a secular resonance between Uranus' perihelion and Jupiter's aphelion: the difference between the two librates within 70° with the same period of 1.1 × 10^6 yr.
We propose a canonical transformation reducing the averaged planar planetary problem near resonance to a one degree of freedom problem when the perturbation is truncated at the first order in the eccentricities. This reducing transformation leads to a very simple explanation of the puzzling behaviour of the Apocentric Librators, a class of asteroids identified by Franklin et al. (1975). An exploration of the phase space of the averaged problem with the use of the mapping technique shows that the alternation of two libration mechanisms is a common feature for initial conditions near, but not inside, the deep resonance region.
Generalized Jacobian coordinates can be used to decompose an N-body dynamical system into N−1 2-body systems coupled by perturbations. Hierarchical stability is defined as the property of preserving the hierarchical arrangement of these 2-body subsystems in such a way that orbit crossing is avoided. For N=3 hierarchical stability can be ensured for an arbitrary span of time depending on the integral z = c^2 h (angular momentum squared times energy): if it is smaller than a critical value, defined by the L_2 collinear equilibrium configuration, then the three possible hierarchical arrangements correspond to three disconnected subsets of the invariant manifold in the phase space (and in the configuration space as well; see Milani and Nobili, 1983a). The same
definitions can be extended, with the Jacobian formalism, to an arbitrary hierarchical arrangement of N bodies, with critical values $\tilde z_{23}$ and $\tilde z_{34}$ such that both the subsystems are initially hierarchically stable. Then the hierarchical arrangement of the 4 bodies cannot be broken until either $z_{23}$ or $z_{34}$ is changed by an amount $\tilde z_{ij} - z_{ij}(0)$; that is, the whole system is hierarchically stable for a time span not shorter than the minimum between $\Delta t_{23} = (\tilde z_{23} - z_{23}(0))/\dot z_{23}$ and $\Delta t_{34} = (\tilde z_{34} - z_{34}(0))/\dot z_{34}$. To estimate how long this stability time is, two main steps are required. First the perturbing potentials have to be developed in series; the relevant small parameters, the $\epsilon_{ij}$, are some combinations of mass ratios and length ratios (we limit ourselves to the planar case). To assess the long term behaviour of the system, we can
neglect the short-periodic perturbations and discuss only the long-periodic and the secular perturbations. By using a Poisson bracket formalism, a generalization of Lagrange's theorem for semimajor axes and a generalization of the classical first order theories for eccentricities and pericenters, we prove that the $z_{ij}$ do not undergo any secular perturbation, because of the interaction with the other subsystem, at the first order in the $\epsilon_{ik}$. After the long-periodic perturbations have been accounted for, and apart from the small divisors problems that could arise both from ordinary and secular resonances, only the second order terms have to be considered in the computation of $\Delta t_{23}$, $\Delta t_{34}$. A full second order perturbative theory is beyond the scope of this paper; however an order-of-magnitude lower estimate of the $\Delta t_{ij}$ can be obtained with the very pessimistic assumption that essentially all the second order terms affect the $z_{ij}$ in a secular way. The same method could be applied also to $N \ge 5$ body systems. Since almost every N-body system existing in nature is strongly hierarchical, the product of two $\epsilon_{ij}$ is very small for almost all the real astronomical problems. As an example, the hierarchical stability of the 4-body system Sun, Mercury, Venus, and Jupiter is investigated; this system turns out to be stable for at least 110 million years. Although this hierarchical stability time is 10 times less than the real age of the Solar System, taking into account that many pessimistic assumptions have been made we can conclude that the stability of the Solar System is no longer a forbidden problem for Celestial Mechanics.
In this paper a proof is given of Kolmogorov’s theorem on the existence of invariant tori in nearly integrable Hamiltonian systems. The scheme of proof is that of Kolmogorov, the only difference
being in the way canonical transformations near the identity are defined. Precisely, use is made of the Lie method, which avoids any inversion and thus any use of the implicit-function theorem.
This technical fact eliminates a spurious ingredient and simplifies the establishment of a central estimate.
The accuracy and reliability of the proper orbital elements used to define asteroid families are investigated by simulating numerically the dynamical evolution of families assumed to arise from
the “explosion” of a parent object. The orbits of the simulated family asteroids have then been integrated in the frame of the elliptic restricted three-body problem Sun-Jupiter-asteroid, for
times of the order of the circulation periods of perihelia and nodes. By filtering out short-periodic perturbations, we have monitored the behavior of the proper eccentricities and inclinations,
computed according to the linear secular perturbation theory. Significant long-period variations have been found especially for families having nonnegligible eccentricities and/or inclinations
(like the Eos family), and strong disturbances due to the proximity of mean motion commensurabilities with Jupiter have been evidenced (for instance, in the case of the Themis family). These
phenomena can cause a significant “noise” on the proper eccentricities and inclinations, probably affecting in some cases the derived family memberships. They can also give rise to a spurious
anisotropy in the fragment ejection velocity fields computed from the dispersion in proper elements observed in each family, and this could explain the puzzling anisotropies of this kind actually
found in real families by D. Brouwer (1951, Astron. J. 56, 9–32) and by V. Zappalà, P. Farinella, Z. Knežević, and P. Paolicchi (1984, Icarus 59, 261–285).
A special-purpose computer is used to integrate the orbits of the outer five planets for more than 100 Myr into the future and more than 100 Myr into the past. The strongest features in the
Fourier transforms of the orbital elements of the Jovian planets can be identified with the frequencies predicted by linear secular theory. Many of the weaker features in the Fourier spectra are
identified as linear combinations of the basic frequencies. Serious differences are noted between the present measurements and the predictions of Bretagnon (1974). The amplitude of the 3.796 Myr
period libration of Pluto's longitude of perihelion is modulated with a period of 34 Myr. Very long periods, on the order of 137 Myr, are also seen. The orbit of Pluto is stable for the duration
of the integration; the maximum Liapunov characteristic exponent is less than 10^-6.8 yr^-1.
The General Uranus Satellite Theory GUST (Laskar, 1986) is used for the construction of an analytical ephemeris for the Uranian satellites. The theory is fitted against earth-based observations
from 1911 to 1986, and all radio and optical data obtained during Voyager encounter with Uranus. Earth-based observations alone allow the determination of masses which are within 15 percent of
the values determined by the Uranus flyby. The analysis of all the observations confirms the values of the masses obtained during the encounter (Stone and Miner, 1986) and gives a complete set of
dynamical parameters for the analytical theory. An analytical ephemeris, GUST86, with an estimated precision of about 100 km with respect to Uranus is obtained.
We have designed and built the Orrery, a special computer for high-speed high-precision orbital mechanics computations. On the problems the Orrery was designed to solve, it achieves approximately 10 Mflops in about 1 ft^3 of space while consuming 150 W of power. The specialized parallel architecture of the Orrery, which is well matched to orbital mechanics problems, is the key to obtaining such high performance. In this paper we discuss the design, construction, and programming of the Orrery. Copyright © 1985 by The Institute of Electrical and Electronics Engineers, Inc.
• The recently recognized failure of predictability in Newtonian dynamics
• Memoir on the secular variation of the elements of the orbits of the eight principal planets
Mathematical Visualization
[Question to my readers: is there a way to get a compact enumeration in this blog? If I write a list using the list menu item in the wordpress editor, the entries are v e r y f a r a p a r t.]
This post picks up where the last one left off. Our task here is to present the essentials of projective geometric algebra and use it to obtain a metric-neutral formulation of the solution to the
problem discussed in the previous two posts.
Projective space to the rescue. To obtain the projective models of the metric planes of interest, begin with the vector space $\mathbb{R}^3$ and projectivize it to produce the real projective plane $\mathbf{P}^2$. Define an exterior algebra $\bigwedge{\mathbf{P}^2}$ by defining the points of $\mathbf{P}^2$ to be the 1-vectors. Take the scalar field $\mathbb{R}$ as the 0-vectors. Create the higher grades using the anti-symmetric wedge product $\wedge$. $\mathbf{x} \wedge \mathbf{y}$ is a 2-vector, the joining line of the arguments. The wedge of three linearly independent points $\mathbf{x} \wedge \mathbf{y} \wedge \mathbf{z}$ is the projective plane itself, and is called a pseudo-scalar.
Duality to the rescue. We can do the same with the dual algebra. Then the 1-vectors are lines and the wedge operation is the meet of two lines. By a little bit of abstract nonsense, we can in fact restrict our attention to one of these algebras and “import” the other wedge product when we need it (by using Poincaré duality; the details are in my thesis). For our purposes we write the join operator as $\vee$ and the meet operator as $\wedge$ (note this choice is motivated to be consistent with the related set-theoretic operations union $\cup$ and intersection $\cap$). For a good intro to exterior algebras see this wikipedia article.
Adding a metric. To obtain metric relations, we have to combine an inner product with this outer product. To do so we define three different signatures for our inner product: (+++), (++-), and (++0), which correspond to the elliptic, hyperbolic, and euclidean plane, resp. Write the inner product of two 1-vectors with respect to the chosen inner product as $\mathbf{m}\cdot \mathbf{n}$. We begin with the dual exterior algebra (where 1-vectors are lines) and define a geometric product on the 1-vectors via: \[ \mathbf{m}\mathbf{n} := \mathbf{m}\cdot \mathbf{n} + \mathbf{m}\wedge \mathbf{n}\] Note that this product is the sum of a 0-vector (a scalar) and a 2-vector. This product can be extended to all grades just by writing every k-vector as a sum of products of orthogonal basis
1-vectors (which is always possible), and reducing the products whenever the square of a basis 1-vector occurs (since the square of a 1-vector is a scalar). In this way one obtains an associative
algebra, called the Clifford or geometric algebra, characterized by its signature. This is the projective geometric algebra we have been promising to describe. We denote the three algebras of
interest as $\mathcal{C}_\kappa$ where $\kappa$ takes on the three values of $\{-1,0,1\}$ (hyperbolic, euclidean, elliptic, resp.) To be precise, we work with an orthonormal basis for the 1-vectors
$\{\mathbf{e_0}, \mathbf{e_1},\mathbf{e_2}\}$ with the metric relations given by $ \mathbf{e_0}^2 = \kappa$, $ \mathbf{e_1}^2= \mathbf{e_2}^2 = 1$.
Why (++0) is euclidean: In $\mathbb{R}^2$, consider two lines $m_{1} : a_{1}x+b_{1}y+c_{1}=0$ and $m_{2} : a_{2}x+b_{2}y+c_{2}=0$. Assuming WLOG $a_{i}^{2}+b_{i}^{2} = 1$, then $\cos{\alpha} = a_{1}a_{2}+b_{1}b_{2}$, where $\alpha$ is the angle between the two lines: changing the $c$ coefficient translates the line but does not change the angle it makes to other lines. That means the $c$ coefficient makes no difference when we measure the angle between two lines. Thus $(++0)$ is the correct choice for the inner product for euclidean planar geometry.
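The translation-invariance claim is easy to check numerically. A minimal Python sketch (the coefficient-triple representation and the function name are our own choices, not from the post):

```python
import math

# A euclidean line a*x + b*y + c = 0, stored as a normalized triple (a, b, c)
# with a^2 + b^2 = 1. The angle between two lines uses only (a, b) -- exactly
# the part of the coefficients seen by the degenerate (++0) inner product.
def angle_between(l1, l2):
    a1, b1, _ = l1
    a2, b2, _ = l2
    return math.acos(a1 * a2 + b1 * b2)

m1 = (1.0, 0.0, 2.0)                                      # the line x + 2 = 0
m2 = (math.cos(math.pi / 3), math.sin(math.pi / 3), -5.0)
m2_shifted = (m2[0], m2[1], 7.0)                          # same line, translated

print(angle_between(m1, m2))            # pi/3, up to rounding
print(angle_between(m1, m2_shifted))    # identical: translation changes nothing
```

Changing the third coefficient never enters `angle_between`, which is the computational content of putting a 0 in the signature.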
Why we have to use the dual algebra: If you’re wondering why we started with the dual exterior algebra, the answer is: because that’s how God made the world we live in. Euclidean lines are less degenerate than euclidean points. If you try to start with the standard exterior algebra (where 1-vectors are points) you cannot obtain euclidean geometry; you end up with dual euclidean geometry, in which points are less degenerate than lines, which does not at all match up with the space we live in. (It’s a fascinating space in its own right and I hope to post something on it in the near future.)
Normalizing. It’s useful to work with normalized points and lines. To obtain this normalization, notice that $\mathbf{m}^2$ for a 1-vector or 2-vector is a scalar. Define the norm $\|\mathbf{m}\| := \sqrt{| \mathbf{m}^2 |}$. Then when $\| \mathbf{m} \| \neq 0$, $\dfrac{\mathbf{m}}{\| \mathbf{m} \|}$ has unit norm. For the three metric geometries we are working with, the proper points and lines always can be normalized to have square -1 and +1, resp. (In fact, that can serve as a definition of what it means to be proper.)
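For a euclidean line $\mathbf{m} = a\mathbf{e_1} + b\mathbf{e_2} + c\mathbf{e_0}$, the cross terms anticommute away and $\mathbf{e_0}^2 = 0$, so $\mathbf{m}^2 = a^2 + b^2$. A tiny sketch of the resulting normalization (the triple representation is chosen for illustration):

```python
import math

# m = a*e1 + b*e2 + c*e0 with e0^2 = 0: m^2 = a^2 + b^2, so the norm
# ignores c entirely and normalization scales all three coefficients.
def normalize_line(a, b, c):
    n = math.sqrt(a * a + b * b)      # ||m|| = sqrt(|m^2|)
    if n == 0.0:
        raise ValueError("ideal line: cannot be normalized this way")
    return (a / n, b / n, c / n)

print(normalize_line(3.0, 4.0, 10.0))   # (0.6, 0.8, 2.0)
```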
Implementing the construction. Here are the steps of the construction, translated into $\mathcal{C}_\kappa$. The elements in the above diagram have been embedded in the natural way into the
Clifford algebra as 1- or 2-vectors. We assume that all points and lines are normalized in the formulas which follow, since it simplifies the formulas. (Normalizing ideal points is a tricky issue
which we skip over in this abbreviated account. See my thesis.) For example, this allows us to find midpoints and angle bisectors by simply adding together the two arguments. In the formulas
below, $\mathbf{X}$ is an arbitrary point or line. $\mathbf{R_x}$ is the geometric reflection in the line $\mathbf{x}$. $\mathbf{R_C}$ is the desired rotation around the point $\mathbf{C}$. The
exceptional configurations have not been taken into account in this description.
\[
\begin{aligned}
\mathbf{M} &:= \mathbf{m} \wedge \mathbf{m'} &&\text{intersection of the two lines} \\
\mathbf{a} &:= \mathbf{A} \vee \mathbf{A'} &&\text{joining line of the two points} \\
\mathbf{A_m} &:= \mathbf{A} + \mathbf{A'} &&\text{midpoint of the two points} \\
\mathbf{r} &:= \mathbf{A_m} \mathbf{a} &&\text{perpendicular bisector of } \mathbf{AA'} \\
\mathbf{c} &:= \mathbf{m} + \mathbf{m'} &&\text{angle bisector of the two lines} \\
\mathbf{C} &:= \mathbf{r} \wedge \mathbf{c} &&\text{center of rotation} \\
\mathbf{s} &:= \mathbf{A} \vee \mathbf{C} &&\text{one reflection line} \\
\mathbf{R_s}(\mathbf{X}) &:= \mathbf{s} \mathbf{X} \mathbf{s} &&\text{reflection in line } \mathbf{s} \\
\mathbf{R_C}(\mathbf{X}) &:= \mathbf{r} (\mathbf{s} \mathbf{X} \mathbf{s}) \mathbf{r} &&\text{desired rotation: product of two reflections}
\end{aligned}
\]
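To make these steps concrete, here is a small, self-contained Python sketch of the euclidean algebra (signature (++0)), with multivectors stored as dicts from basis-index tuples to coefficients. The embedding of points and lines and the sign table for the duality $J$ are conventions chosen for this sketch (the post does not spell them out); with them, the first construction steps — meet, join, midpoint, and the perpendicular bisector $\mathbf{r} := \mathbf{A_m}\mathbf{a}$ — can be verified numerically:

```python
# Basis: e0, e1, e2 with e0^2 = 0, e1^2 = e2^2 = 1 (euclidean signature ++0).
METRIC = {0: 0.0, 1: 1.0, 2: 1.0}

def simplify(idx):
    """Sort a basis-index sequence (tracking the swap sign) and contract
    repeated indices via the metric. Returns (sign, blade)."""
    idx, sign = list(idx), 1.0
    for i in range(len(idx)):                    # bubble sort, counting swaps
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    out, k = [], 0
    while k < len(idx):                          # e_i e_i -> metric scalar
        if k + 1 < len(idx) and idx[k] == idx[k + 1]:
            sign *= METRIC[idx[k]]
            k += 2
        else:
            out.append(idx[k])
            k += 1
    return sign, tuple(out)

def gp(x, y):
    """Geometric product of multivectors stored as {blade: coeff} dicts."""
    r = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            s, b = simplify(ba + bb)
            r[b] = r.get(b, 0.0) + s * ca * cb
    return {b: c for b, c in r.items() if abs(c) > 1e-12}

def wedge(x, y):
    """Outer product: like gp, but any repeated index kills the term."""
    r = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            if set(ba) & set(bb):
                continue
            s, b = simplify(ba + bb)
            r[b] = r.get(b, 0.0) + s * ca * cb
    return {b: c for b, c in r.items() if abs(c) > 1e-12}

# Poincare duality J (blade -> signed complement, so that b ^ J(b) = e012);
# the join is imported from the dual algebra:  A v B = J(J(A) ^ J(B)).
J = {(): ((0, 1, 2), 1), (0,): ((1, 2), 1), (1,): ((0, 2), -1),
     (2,): ((0, 1), 1), (0, 1): ((2,), 1), (0, 2): ((1,), -1),
     (1, 2): ((0,), 1), (0, 1, 2): ((), 1)}

def dual(x):
    return {J[b][0]: J[b][1] * c for b, c in x.items()}

def join(x, y):
    return dual(wedge(dual(x), dual(y)))

def line(a, b, c):   # the line a*x + b*y + c = 0 as a 1-vector
    return {(1,): a, (2,): b, (0,): c}

def point(x, y):     # the euclidean point (x, y) as a normalized 2-vector
    return {(1, 2): 1.0, (0, 1): y, (0, 2): -x}

def coords(P):       # read (x, y) back off a (non-ideal) 2-vector
    w = P[(1, 2)]
    return (-P.get((0, 2), 0.0) / w, P.get((0, 1), 0.0) / w)

# Meet: M = m ^ m' is the intersection point of two lines.
M = wedge(line(1, 0, -1), line(0, 1, -2))         # x = 1 meets y = 2
print(coords(M))                                  # (1.0, 2.0)

# Join and perpendicular bisector for A = (0,0), A' = (2,2).
A, A2 = point(0, 0), point(2, 2)
a = join(A, A2)
print(a[(1,)], a[(2,)], a.get((0,), 0.0))         # -2.0 2.0 0.0: the line y = x
Am = {b: A.get(b, 0.0) + A2.get(b, 0.0) for b in set(A) | set(A2)}
r = gp(Am, a)                                     # r := A_m a
print(r[(1,)], r[(2,)], r[(0,)])                  # 4.0 4.0 -8.0: x + y - 2 = 0
```

Note that the grade-3 part of $\mathbf{A_m}\mathbf{a}$ cancels here because $\mathbf{A_m}$ lies on $\mathbf{a}$, leaving exactly the perpendicular line through the midpoint.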
References: my thesis, a euclidean extract
Exercise. Express the exceptional configurations described in the previous post, in terms of the geometric algebra description above.
Remarks on isometries as rotors instead of matrices. I’d like to close this post by discussing the representation of isometries in the context of PGA (projective geometric algebra). Note from the
above formulae that a reflection in a proper line can be written as a “sandwich operator” with the line as the “bread” (appearing on left and right) and the object to be reflected as the meat (in the
middle). This should be familiar to readers who know about the representation of quaternions as sandwich operations. Concatenating an even number of such reflections leads to direct isometries, as
the expression for the rotation $\mathbf{R_C}$ above also shows.
Terminology. A product of 1-vectors is called a versor; a product of proper 1-vectors is a proper versor. A proper versor is then an isometry of the geometry, and any such isometry may be written as a versor. A product of an even number of 1-vectors is called an even versor; every even versor represents a direct isometry, and (with a few exceptions) vice-versa. An even versor is also called a rotor, due to the obvious connection to rotational motion. The set of proper rotors, normalized to have norm 1, forms a group, the spin group of the algebra, written Spin. The full set of proper versors also forms a group, the pin group, written Pin.
Notation. In order to express the sandwich operation with a k-versor succinctly, we write the versor as $\mathbf{g} := \mathbf{m}_1 \mathbf{m}_2 … \mathbf{m}_k$, where each $\mathbf{m}_i$ is a
1-vector. Then the sandwich operation can be written as $\mathbf{g} \mathbf{X} \widetilde{\mathbf{g}}$ where $\widetilde{\mathbf{g}} = \mathbf{m}_k … \mathbf{m}_2 \mathbf{m}_1$ is the reversal of $\
mathbf{g}$ and is obtained by reversing the order of the products in the definition of the element.
Advantages of this approach. The existence of a versor representation for isometries in the metric space (or plane, as here), means that one no longer needs to rely on linear algebra for this
representation. No more matrices! Seriously, the advantages of the versor approach are worth pointing out. We restrict our attention to the two dimensional case we have been discussing to make
things easy to understand; but the observant reader can easily extrapolate to higher dimensions. In the first place, the representation is grade-independent, meaning the meat of the sandwich can be
any grade; the result of the sandwich operation will represent the isometry applied to the meat. Compare this to linear algebra, where the transformation matrix for an isometry applied to a point is
not the same as that applied to a plane (one is the adjoint of the other). A further advantage of the versor formulation is that the isometry can be read off from its versor representation. For
example, for a 1-versor, the isometry is the reflection in the line which the versor represents. For a rotor, the isometry is a rotation whose center is given by the grade-2 part of the versor,
through an angle equal to $2\arccos(s)$ where $s$ is the grade-0 part (scalar) of the rotor. (We are assuming the rotor has been normalized to have unit norm). If you have ever tried to read off
from a 3×3 matrix the center and angle of a rotation, you’ll appreciate how convenient this second advantage is.
The final advantage of the versor representation is that every rotor has an exponential representation (just as a unit quaternion does). Since $\mathbf{P}^2=-1$ for a proper point $\mathbf{P}$, the
expression $e^{t\mathbf{P}}$ can be evaluated just as if $\mathbf{P}$ were the complex unit $i$, and one obtains $\cos{t} + \sin{t}\mathbf{P}$ as the result. In fact, when you activate the time
slider of the webstart associated to this post (see this post), the interpolated isometry is calculated using the exponential representation of the motion described here (although of course when it
comes to drawing the rotated line, it has to be converted into a matrix and shipped over to the graphics card, which is still living in the old world of vectors and matrices.)
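The exponential formula can be checked numerically with a small, self-contained dictionary-based sketch of the euclidean algebra (++0). The basis and point-embedding conventions below are our own; with them the sandwich $R\mathbf{X}\widetilde{R}$ for $R = e^{t\mathbf{P}}$ comes out as a rotation by the angle $2t$ about $\mathbf{P}$ (the sense of rotation depends on these conventions):

```python
import math

METRIC = {0: 0.0, 1: 1.0, 2: 1.0}   # e0^2 = 0: euclidean signature (++0)

def simplify(idx):
    """Sort basis indices (tracking sign), contract repeats via the metric."""
    idx, sign = list(idx), 1.0
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    out, k = [], 0
    while k < len(idx):
        if k + 1 < len(idx) and idx[k] == idx[k + 1]:
            sign *= METRIC[idx[k]]
            k += 2
        else:
            out.append(idx[k])
            k += 1
    return sign, tuple(out)

def gp(x, y):                         # geometric product on {blade: coeff} dicts
    r = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            s, b = simplify(ba + bb)
            r[b] = r.get(b, 0.0) + s * ca * cb
    return {b: c for b, c in r.items() if abs(c) > 1e-12}

def rev(x):                           # reversal: grade k picks up (-1)^(k(k-1)/2)
    return {b: c * (-1.0) ** (len(b) * (len(b) - 1) // 2) for b, c in x.items()}

def point(x, y):                      # normalized euclidean point: P P = -1
    return {(1, 2): 1.0, (0, 1): y, (0, 2): -x}

def exp_point(t, P):                  # P^2 = -1, so e^{tP} = cos t + sin t * P
    R = {b: math.sin(t) * c for b, c in P.items()}
    R[()] = R.get((), 0.0) + math.cos(t)
    return R

def coords(P):
    w = P[(1, 2)]
    return (-P.get((0, 2), 0.0) / w, P.get((0, 1), 0.0) / w)

P = point(1.0, 0.0)                   # center of rotation
assert gp(P, P) == {(): -1.0}         # a normalized point squares to -1

R = exp_point(math.pi / 4, P)         # rotor for a rotation by pi/2 about (1, 0)
X = point(2.0, 0.0)
Y = gp(gp(R, X), rev(R))              # sandwich R X ~R
print(coords(Y))                      # approximately (1.0, -1.0)
```

The point $(2,0)$ lands a quarter turn around $(1,0)$, and varying $t$ continuously interpolates the motion, just as the time slider in the webstart does.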
The following figure shows 20 equal steps in the interpolation of the isometry obtained in this way. The interpolated lines are drawn as transparent objects to reduce the clutter in the image.
You can see that the envelope of the moving line is a circle.
If you’re still with me, I hope that you have gotten a taste of how projective geometric algebra can be a powerful, practical, and elegant tool for doing geometry in a metric-neutral way.
Introduction to projective geometric algebra, II
This post builds on the previous one, but returns to the original problem of finding the axis of a hyperbolic isometry. We show that by using the projective model of hyperbolic geometry one can
directly adapt the euclidean solution to obtain the hyperbolic solution also.
The hyperbolic case. The webstart, as explained above, also includes an option to switch the metric to elliptic or hyperbolic. We want now to discuss the latter option. Then the figure includes the
unit circle, the boundary of the hyperbolic plane, as the following figure shows:
Does the description of the euclidean construction carry over to the hyperbolic case pictured above? Clearly there are some constraints on the configuration if there is to be a solution. For
example, the points $\mathbf{A}$ and $\mathbf{A’}$ must be hyperbolic points, and the lines $\mathbf{m}$ and $\mathbf{m’}$ must also be hyperbolic. But their intersection $\mathbf{M}$ does not need
to be hyperbolic; it can be ideal (lie on the unit circle) or also hyper-ideal (lie outside). Assuming these conditions are met, as they are in the above figure, how does the construction proceed?
The definition of the perpendicular bisector $\mathbf{r}$ is valid. But what is the angle bisector $\mathbf{c}$ of an angle which, as in this case, lies outside the hyperbolic disk? As in the
euclidean case, the traditional approach to hyperbolic geometry does not define this. However, in the projective model one can define it meaningfully. To do so, one must invoke an involutive
correlation $\mathbf{\Pi}$ (a fancy name for a transformation which switches points and lines, and whose square is the identity) on the projective plane known as the polarity on the metric quadric.
This transformation sends a point $\mathbf{P}$ to the line $\mathbf{P^\perp}$ consisting of points whose inner product with $\mathbf{P}$ is 0, the orthogonal complement with respect to the
hyperbolic metric.
The result is that one obtains a second model of the hyperbolic plane in the exterior of the unit disk, in which the roles of point and line have been reversed. Then, for example, the point $\mathbf
{M}$ when it lies outside the hyperbolic plane, is polar to the common orthogonal line of $\mathbf{m}$ and $\mathbf{m’}$ inside the hyperbolic disk, and the angle bisector of these two lines is
polar to the midpoint of this common orthogonal segment. Proceeding in this way, one can show that the desired metric properties can be “mirrored” from the exterior of the hyperbolic plane to the interior, and hence that the construction given above also solves the given problem for the hyperbolic plane. Some of the steps may look unfamiliar, since they may involve polar entities, but the metric relations remain unchanged. A full discussion of this phenomenon lies beyond the scope of this post, but hopefully you can get a sense of how the polarity operator in effect produces two hyperbolic planes such that every metric relationship in one is mirrored (with points and lines reversed) in the other.
We leave the same positive conclusion for the elliptic plane to the never-tiring reader. Here the situation is simpler, since there are no real ideal points: every point of the projective plane is also a point of the elliptic plane, any configuration of the initial data is valid, and the construction carried out in the euclidean case goes through without difficulties.
Having shown that the projective model of these three metric planes provides a reliable basis for solving the construction problem posed in the first post, and doing so in a metric-neutral way, the
next post will provide the promised introduction to projective geometric algebra, by giving a (metric-neutral) algebraic formulation of the construction which has formed the content of this and the
previous post.
Introduction to projective geometric algebra, I
At a student seminar at the TU Berlin in fall 2013, the following problem in plane geometry was posed, “Given a point lying on a line, and a second point lying on a second line, find the unique
direct isometry mapping the first point/line pair to the second.” The original context was the hyperbolic plane, in thinking about the problem I realized I could solve it in a metric-neutral way,
and this blog post and its sequels present that solution. I chose the title since the solution is based on a nifty tool called projective geometric algebra (PGA), and I can’t think of a better
introduction to PGA than working out a concrete example like this one. Buckle your seat belts and enjoy the ride!
What I want to do in this post:
1. Demonstrate an interactive application for playing with this problem.
2. Derive a solution for the problem in the euclidean plane.
3. Discuss some exceptional configurations and show how they can be handled within the context of the projective model of euclidean geometry in a seamless fashion.
4. Indicate how the approach leads to a “cool tool”: projective geometric algebra, a subject for a future post.
I’ve also implemented this result in a jReality webstart application. If you start this application up, you should see a picture like the following:
Directions for using the webstart. To begin with, you may need to display the control panel to the left of the graphics window. To do this, select the menu item “Window->Left slot” (and deselect “Window->right slot” if you wish). This is necessary due to heightened security in Java 7, which prevents correct reading of the built-in property file (which should set these window slots automatically).
If you’re using the webstart application, you can drag any of the three points $\mathbf{A}$, $\mathbf{A’}$ and $\mathbf{M}$ and the diagram adjusts accordingly. Use the slider labeled time to see
how the rotation acts, also to confirm that the rotation actually does what is claimed. Furthermore, to reverse the orientation of either line $\mathbf{m}$ or $\mathbf{m’}$, click on the line; the
orientation arrow on the line should flip and the angle bisector $\mathbf{c}$ switches accordingly to the supplementary angle. Finally, you can switch the metric used using the combo box on the left
inspector panel to choose the hyperbolic or elliptic metric also. The given data remains the same, but the construction is carried out using this metric instead of the euclidean one. This “metric
neutral” capability will be more fully discussed in a later blog.
Return to the geometric problem. Here we want to focus on the challenge of finding a euclidean isometry which moves the point $\mathbf{A}$ to the point $\mathbf{A’}$, and the line $\mathbf{m}$ to the
line $\mathbf{m’}$. (The two lines should be thought of as oriented, so that the orientation is preserved by the isometry.) We claim that the solution is a rotation around point $\mathbf{C}$, the
intersection of line $\mathbf{r}$ (cyan) and line $\mathbf{c}$ (green). $\mathbf{r}$ is the perpendicular bisector of the segment $\mathbf{AA’}$, while $\mathbf{c}$ is the angle bisector of the
angle formed by $\mathbf{m}$ and $\mathbf{m’}$.
Indeed, the center $\mathbf{C}$ of a rotation moving $\mathbf{A}$ to $\mathbf{A’}$ must lie at the same distance from both points, which is the defining condition of $\mathbf{r}$. Similarly, since the desired rotation maps $\mathbf{m}$ to $\mathbf{m’}$, the closest point of $\mathbf{m}$ to $\mathbf{C}$ is mapped to the closest point of $\mathbf{m’}$ to $\mathbf{C}$, hence $\mathbf{C}$ lies at the same distance from both lines. This condition is satisfied exactly by points on the angle bisector $\mathbf{c}$ of the two lines. (Here one must be careful to specify which angle bisector one means. This depends on the relative orientation of the two lines, and is easy to determine.) Hence the center of the desired rotation must lie on $\mathbf{r}$ and on $\mathbf{c}$, so it is the intersection of these two lines, as shown in the diagram.
Once the center has been found, it’s not hard to express the desired rotation as the composition of two reflections in lines passing through $\mathbf{C}$: first in the line $\mathbf{s}$ (red), the line joining $\mathbf{C}$ and $\mathbf{A}$, followed by reflection in line $\mathbf{r}$ (cyan). Why does this produce the desired rotation? Clearly reflection in $\mathbf{s}$ fixes $\mathbf{A}$, while reflection in $\mathbf{r}$, the perpendicular bisector of $\mathbf{AA’}$, maps $\mathbf{A}$ to $\mathbf{A’}$. Since $\mathbf{C}$ is fixed by the composition and $\mathbf{A}$ is carried to $\mathbf{A’}$, it must be the rotation we are looking for. Writing the reflection in line $\mathbf{s}$ as $\mathbf{R_s}$, the rotation is then the composition $\mathbf{R_C} := \mathbf{R_r}\circ \mathbf{R_s}$.
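To make the two-reflection recipe concrete, here is a small Python sketch (my own illustration, not code from the post; the specific points are made-up). It takes the simplest case, where the center $\mathbf{C}$ is the origin, $\mathbf{s}$ is the line through $\mathbf{C}$ and $\mathbf{A}$, and $\mathbf{r}$ is the perpendicular bisector of $\mathbf{AA’}$, and checks that reflecting first in $\mathbf{s}$ and then in $\mathbf{r}$ carries $\mathbf{A}$ to $\mathbf{A’}$ and acts on every other point as a single rotation.

```python
import math

def reflect(p, theta):
    """Reflect point p across the line through the origin at angle theta."""
    x, y = p
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return (c * x + s * y, s * x - c * y)

A = (1.0, 0.0)
A_prime = (0.0, 1.0)

# s: the line through C = (0,0) and A, i.e. the x-axis (angle 0)
# r: the perpendicular bisector of AA', i.e. the line y = x (angle pi/4)
after_s = reflect(A, 0.0)                # reflection in s fixes A
after_r = reflect(after_s, math.pi / 4)  # reflection in r sends A to A'
print(after_r)  # approximately (0, 1), i.e. A'

# The composition is a rotation about C by twice the angle between s and r:
rotated = reflect(reflect((2.0, 3.0), 0.0), math.pi / 4)
print(rotated)  # approximately (-3, 2), a 90-degree rotation of (2, 3)
```

The rotation angle is twice the angle between the two mirror lines, which is why any pair of lines through $\mathbf{C}$ with the right relative angle produces the same rotation.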
Exceptional configurations. It’s interesting to review the construction for possible exceptional configurations. For example, $\mathbf{m}$ and $\mathbf{m’}$ may be parallel; then their intersection
$\mathbf{M}$ is a so-called point at infinity or, more neutrally, ideal point. What is the angle bisector $\mathbf{c}$ in this case? If the two lines have different orientations (that is,
translate one to the other and compare the orientations), then the angle bisector $\mathbf{c}$ is the mid-line, parallel to both, and the construction continues as before. If not, then $\mathbf{c}$
is the line at infinity itself (!), and the construction continues as before.
Or, $\mathbf{c}$ and $\mathbf{r}$ may be parallel; then their intersection $\mathbf{C}$ is a so-called point at infinity or ideal point, and $\mathbf{s}$, since it goes through $\mathbf{C}$ too, is
also parallel to these lines. Hence the product of the two reflections is not a rotation but a translation. A bit poetically, one can say, a euclidean translation is a rotation around a point at
infinity — by a vanishing angle!
A further exceptional configuration can occur if the lines $\mathbf{r}$ and $\mathbf{c}$ are the same line. Then they don’t have a well-defined intersection. It’s not hard to see that this can
happen only when $\mathbf{A}$ and $\mathbf{A’}$ lie the same distance from $\mathbf{M}$, so I can rotate $\mathbf{A}$ into $\mathbf{A’}$ by a rotation around $\mathbf{M}$. Now, if the orientations
of $\mathbf{m}$ and $\mathbf{m’}$ are preserved when I perform this rotation, then it is the desired rotation, and I can construct it by setting $\mathbf{C}$ to $\mathbf{M}$ and continuing as before;
if not, the desired rotation center is found by constructing the perpendicular line $\mathbf{a}$ to $\mathbf{m}$ at $\mathbf{A}$; then the intersection of $\mathbf{a}$ and $\mathbf{c}$ is the desired
rotation center $\mathbf{C}$ and the construction can continue as before.
Are there other exceptional configurations? Please post as comment any other ones you find!
The path to geometric algebra. The observant reader will not have missed noting that the correct solution of the exceptional configurations described above proceeded a bit magically. In particular,
finding the correct solution for the first two configurations involved the ideal points and line of the euclidean plane. These are concepts which are not necessarily associated with euclidean
geometry. That they can be used is due to the discovery by Arthur Cayley and Felix Klein in the 1860’s that projective space can be converted into a metric space in many ways by selecting a
quadratic form, the so-called Absolute of the metric space. Details lie outside the scope of this blog, see my thesis, chapter 4. This Cayley-Klein construction can be directly applied when the
quadratic form is non-degenerate, and produces most importantly elliptic (or spherical) and hyperbolic space of any dimension. It also works for euclidean space, but requires a degenerate quadratic
form (some subspaces consist of points with vanishing norm).
For this post, the relevant object is the projective model of the euclidean plane. Then the points of vanishing norm form a line, the so-called ideal line of the euclidean plane. Parallel lines meet
in points of this line. This circumstance allowed us to handle the first two exceptional configurations above, where we sought the intersection of two parallel lines. In almost every euclidean
construction or proof there arise such configurations, which have to be handled separately if one is relying on “traditional” euclidean geometry, but which in the context of “projective” euclidean geometry can be handled uniformly. For this reason, the projective model of euclidean geometry has clear advantages over the traditional approach. In order to carry this out in a rigorous fashion, it’s useful to translate things into the correct algebraic setting.
The traditional way of proving that the construction outlined above is correct relies on converting the steps into expressions in vector analysis of the plane. This approach avoids mention of ideal
points and line, and is limited to the euclidean setting. Is there a better algebraic tool for the job? In fact, there is a much more comprehensive algebraic structure — which includes vector
analysis as a small sub-algebra — such that every step of the above construction can be expressed by a single compact expression. Furthermore, even though the steps of the construction are made based
on the euclidean metric, the resulting expressions also provide a correct representation of the same construction when the underlying metric is elliptic or hyperbolic. The differences that arise
express naturally the differences between these metrics. For example, in elliptic space there are no parallel lines. This “metric neutrality” is an expression of the Cayley-Klein construction
underlying this approach. The webstart application allows the user to use these other metrics and confirm that the result is, in fact, metric neutral.
The same comments made regarding the projective model of the euclidean plane apply also to the hyperbolic plane. Instead of having just one line of ideal elements, in the hyperbolic case there are a
circle’s worth of ideal points and lines, and beyond them a second model of hyperbolic geometry, the polar hyperbolic plane, described briefly above. They are not part of the traditional definition
of the hyperbolic plane, but can be integrated in an effortless way into the standard model of the hyperbolic plane with the advantage that all projective elements have a significance in hyperbolic
geometry, not just the points of the unit disk. This geometry can also be integrated into the algebraic structure mentioned above.
This algebraic structure is called projective geometric algebra. I’ll share the details of this algebraic “translation” of our construction problem here in a subsequent post on this blog.
11-13.02 Final week: GA, project presentations, …
May 1, 2014: I’m finally getting the projects available as webstarts on the homepage for the course. Please let me know if you encounter problems running them.
Breaking news: Those of you who attended the last few weeks of lectures know that one of my hobby-horses is geometric algebra, projective GA to be exact. As part of my missionary work I’ve worked up
a proposal for a tutorial on the subject at an upcoming international conference. You can see it here. Feedback on content and appearance welcome.
This is the last week of classes! It will be a little different than usual, since two group projects will be presented, one on Tuesday and one on Thursday, both at 10:15. From 11:00 Mr. Gunn will
continue lecturing on geometric algebra and will also make some final remarks on technical details of project completion. The projects being presented are:
• Tuesday: “Packing circles on a cylinder” by Michael Gubik and Nick Bremer
• Thursday: “The Shadow Problem for 2-Surfaces” by Tatiana Grandon
Details of project completion. Before doing anything else, review the project guidelines given here. They are from last year’s course, but they continue to apply to this year’s projects. In order
to simplify the publication of the projects (via Java webstart technology), I need your help. First, make sure you have organized your source code correctly (see point 3 below). Furthermore, to
generate an entry in the course website where the project is pictured and described, I need to get two small files from each team. I have added some files to the teacher’s git repository in order
to streamline this process, see points 4+ below. As you complete your project, but no later than the first week of April, please make sure you have carried out the following directions. If you
finish early, please let me know! Then I can check that everything works and if not, give you a chance to correct things. It will be much more difficult to complete the grading process if the project
is only ready on the day of the presentation! Thanks in advance for your cooperation!
1. Note: In this discussion, when you are asked to create files or packages involving the string “nnmm“, it should be replaced with the two 2-digit student numbers for the team members, for example
“1723” or “0208”.
2. Update to the teacher’s git repository. Your project must run using the latest teacher’s repository code! If you are having problems updating to this repository (as wingthor does), there is now
an alternative as follows:
1. Instead of having you work directly with the source classes for the template package(s), I’ve made a jar file named mv13-template.jar and checked it in with the other jar files in the lib
folder of the repository.
2. There are two ways to access this jar file. The first is preferred since it allows you to check for updates more easily.
1. In eclipse: In the git repository browser, fetch the teacher’s repository (it should be listed under “Remotes“). Back in package explorer, select the project and look at the history
view. You should see near the top of the list an entry for the teacher’s repository you just fetched. Look for a recent commit for the teacher’s repository with the comment “Add jar file
for the template package”, or “Update jar file for the template package.” Use right mouse (context menu) and select “Cherry pick”, this should just bring over the desired jar file to the
lib folder.
2. If you want to get this jar file directly without using eclipse or git, use this link. Download the file and copy it into the lib folder of your git project.
3. Once the jar file is in the lib folder of your project, add it to your classpath in Eclipse (using Project properties->Java build path->Libraries).
1. Then, either delete the template package (and all subpackages) in the Package Explorer, or configure the build path (in the tab labeled “Order and Export”) so that mv13-template.jar comes
before the src folder of the project.
4. If this approach works for you, you can delete the template package completely from your git project, and push the result to your remote repository, so that I only fetch the code you have
written and not the template stuff.
5. To view the source code of the classes in the jar remember:
1. In Package Explorer, navigate to “Referenced Libraries”.
2. Find “mv13-template.jar” and open it via the little triangle to its left side.
3. The classes you see can be opened and looked at (though not written) in the same way as .java files can.
3. Setup your package structure as follows — please follow these directions!
1. Make sure that your project application is a subclass of template.Assignment. This is important to guarantee that the webstarts function correctly. Furthermore, if there are problems in the
Assignment class, your application will be fixed when I fix this class.
2. Make sure that all the code that you have written that is needed to run your application is contained in a single Java package, named something reasonable like studentnnmm.project. (You are
allowed to have subpackages in this package.) This is important, since I’ll create a jar file for your webstart from this package/folder.
3. Your on-line documentation: should be a html file in the same folder as your project, whose name includes the string “nnmm” — otherwise there may be name collisions when I collect these html
files into one place. Please try to include nnmm in all filenames which are associated to your on-line documentation.
1. If your html file does not load any other local files, it can be left in the same directory as the .java class.
2. If however it refers to other local files (such as a screen shot of your project), then please create a subfolder named html in your project folder and put all the html-related files into
this folder. Then I know which files I have to collect! Remember to change the return value for the getDocumentationFile() to reflect this change.
3. Example: if you have a doc file named doc007.html and it refers to a screenshot ss007.png, put both into a folder named html and change getDocumentationFile() to return “html/doc007.html”.
4. If you use extra jar files in your project, be sure you check them into your repository (you can put them into the lib folder) and that they are included in the .classpath in your repository.
(This will be the case if you have your eclipse build path set up to include this jar.) I’ll have to identify these jars and copy them to the webstart site.
4. Run your application and create a representative image (256 x 192) of your application. Use the “File->Export->Image…” menu item to do this.
□ In the file dialog you will probably need to unclick the constrained shape icon to allow these two dimensions to be entered separately from each other. Save the image with the name
projectPic-nnmm.png in the project folder you created above.
5. Copy the file projectDesc.txt from template.project to your own project folder (created above) and edit it as follows:
1. Replace the string “nnmm” by your student numbers where it occurs.
2. Replace the string “Twirly objects based on spherical point groups” with the title of your project.
3. Change the descriptive paragraph at the end to be a description of your project with approximately the same number of characters/words.
4. Notice there is also a commented-out text for your presentation slides. If you want your presentation slides to be available on the course web-site, please email them to me and I will
activate this link in the documentation.
6. Presentation slides can also be checked into your git repository. If I find a file named studentsnnmmSlides.pdf then I will stash it away and link it to the project documentation on the website,
so that visitors can also look at the slides of your presentation. (Again “nnmm” should be replaced by your 2-digit student numbers.)
7. Commit the new files you’ve created and then push to your remote git repository.
8. Send me an email to notify me that you’ve completed this phase.
9. When I next fetch from your respository, I’ll receive the small image and the html fragment. I will generate a jar file from your project package and collect these in a single place where I can
then invoke them to run the jar files as webstarts. I will use the image and text to construct the list of student projects for the web-site. I’ll construct a jnlp file to start your application
as a webstart. There is currently one example on the course web-site here, you can consult it to see what I need all these pieces for.
10. Deadline: Teams presenting in April are expected to have this all done by 24 hours after your presentation. Please understand that I cannot complete the grading process until the above
directions have been satisfactorily completed. I’m going on vacation on Friday April 11 and I want to leave my desk “clean” when I go!
04-06.14 More geometric algebra
Note: the April project presentations have been shifted to run from Monday, April 7 to Wednesday, April 9. (No one had signed up for Thursday and there was interest in Monday.)
If you’re wondering what I’m expecting in the project, take a look (again) at EvaluationFormMVWS13 based on last year’s projects.
On Tuesday February 4 Mr. Gunn continued to develop the geometric algebra he began the previous week. In particular he focused on sandwich operators in the geometric algebra of $\mathbb{R}^3$. He
reviewed the sandwiches with a 1-vector as bread, and showed they are rotations of 180 degrees around the vector. (These are called half-turns in English or Umwendungen in German). He then looked
into the product of two unit 1-vectors $\mathbf{u}$ and $\mathbf{v}$, which gives rise to a sandwich, rotating around the common perpendicular of $\mathbf{u}$ and $\mathbf{v}$, through twice the
angle separating the vectors. Such an element is called a rotor, and has the form \[\cos{\frac{\alpha}{2}} + \sin{\frac{\alpha}{2}}\, \mathbf{U}\] where $\mathbf{U} := \dfrac{\mathbf{u} \wedge \mathbf{v}}{\sin{\frac{\alpha}{2}}}$ is the normalized form of the plane spanned by the 1-vectors. There is again an exponential form for such rotors: \[\cos{\frac{\alpha}{2}} + \sin{\frac{\alpha}{2}}\, \mathbf{U} = e^{\frac{\alpha}{2}\mathbf{U}}\]
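The even subalgebra these rotors live in is isomorphic to the quaternions, so the sandwich can be checked numerically with plain quaternion arithmetic. The following Python sketch is my own translation (not course code) of the rotor $e^{\frac{\alpha}{2}\mathbf{U}}$ acting on a vector:

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotor(axis, alpha):
    """cos(alpha/2) + sin(alpha/2) U, with U the unit bivector dual to `axis`."""
    n = math.sqrt(sum(c * c for c in axis))
    h = alpha / 2
    return (math.cos(h),) + tuple(math.sin(h) * c / n for c in axis)

def sandwich(R, v):
    """Apply R v R^{-1}; for a unit rotor the inverse is the conjugate."""
    Rc = (R[0], -R[1], -R[2], -R[3])
    w, x, y, z = qmul(qmul(R, (0.0,) + tuple(v)), Rc)
    return (x, y, z)

# Rotate (1, 0, 0) by 90 degrees about the z-axis:
rotated = sandwich(rotor((0.0, 0.0, 1.0), math.pi / 2), (1.0, 0.0, 0.0))
print(rotated)  # approximately (0, 1, 0)
```

Note how the half-angle in the rotor produces a rotation through the full angle, exactly as in the two-reflection picture.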
Mr. Gunn then gave an overview of the steps remaining to obtain the geometric algebra of the Euclidean plane $\mathbf{E}^2$ from the geometric algebra of the euclidean vector space $\mathbb{R}^3$:
1. Replace the standard exterior algebra based on the join operator $\vee$ with the dual exterior algebra based on the meet operator $\wedge$.
2. Projectivize the vector space structure of the exterior algebra to obtain an exterior algebra describing the subspace structure of projective space $P(\mathbb{R}^3)$ instead of the vector space $\mathbb{R}^3$ itself.
3. Replace the standard inner product $+++$ with the “slightly” degenerate inner product $++0$.
Notationally: instead of the Clifford algebra $Cl(\mathbb{R}_{3,0,0})$ we need the Clifford algebra $P(Cl(\mathbb{R}^*_{2,0,1}))$. Here, the triple $(3,0,0)$ represents an inner product with signature $+++$ while $(2,1,0)$ represents one with signature $++-$ and $(2,0,1)$ represents signature $++0$. The latter is important for the euclidean plane. To motivate this fact Mr. Gunn turned to a speedy review of the geometry of lines in the plane.
Review of lines in the plane. A line whose equation is $ax+by+c=0$ is represented by the vector $\mathbf{v} := [a,b,c]$ or any positive multiple. The oppositely oriented line is obtained by $-\mathbf{v}$ or any negative multiple. The direction vector of the line is $(-b, a)$; its normal vector is $(a, b)$. I can normalize line equations so that $a^2+b^2=1$ (except the triple $(0,0,c)$, which represents the ideal line of the euclidean plane). For two such normalized euclidean lines, it’s then easy to see that $\mathbf{u} \cdot \mathbf{v} := a_u a_v + b_u b_v$ is the cosine of the angle between the two lines. Notice that this inner product does not involve the $c$-coordinate. Its signature is $(2,0,1)$. The important point is that the angle between two lines is totally determined by this inner product, even though it is “degenerate”.
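A quick numerical check of this claim (my own sketch, not from the post): normalize two line vectors, apply the signature-$(2,0,1)$ inner product, and observe that the $c$-coordinate plays no role in the resulting angle.

```python
import math

def normalize_line(a, b, c):
    """Scale (a, b, c) so that a^2 + b^2 = 1 (fails for the ideal line (0,0,c))."""
    n = math.hypot(a, b)
    if n == 0:
        raise ValueError("(0, 0, c) is the ideal line and cannot be normalized")
    return (a / n, b / n, c / n)

def line_dot(u, v):
    """Signature (2,0,1) inner product: the c-coordinate is ignored."""
    return u[0] * v[0] + u[1] * v[1]

u = normalize_line(1, 0, -5)  # the line x = 5
v = normalize_line(1, 1, 3)   # the line x + y + 3 = 0
angle = math.acos(line_dot(u, v))
print(math.degrees(angle))  # 45.0 (up to floating point)
```

Translating either line (changing only its $c$-coordinate) leaves the angle unchanged, which is exactly the degeneracy of the $++0$ signature.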
Best Subsets Regression Essentials in R
The best subsets regression is a model selection approach that consists of testing all possible combinations of the predictor variables, and then selecting the best model according to some statistical criteria.
In this chapter, we’ll describe how to compute best subsets regression using R.
Loading required R packages
• tidyverse for easy data manipulation and visualization
• caret for easy machine learning workflow
• leaps, for computing best subsets regression
Example of data
We’ll use the built-in R swiss data set, introduced in Chapter @ref(regression-analysis), for predicting fertility score on the basis of socio-economic indicators.
# Load the data
data("swiss")
# Inspect the data
sample_n(swiss, 3)
Computing best subsets regression
The R function regsubsets() [leaps package] can be used to identify different best models of different sizes. You need to specify the option nvmax, which represents the maximum number of predictors
to incorporate in the model. For example, if nvmax = 5, the function will return up to the best 5-variables model, that is, it returns the best 1-variable model, the best 2-variables model, …, the
best 5-variables model.
In our example, we have only 5 predictor variables in the data. So, we’ll use nvmax = 5.
models <- regsubsets(Fertility~., data = swiss, nvmax = 5)
summary(models)
## Subset selection object
## Call: regsubsets.formula(Fertility ~ ., data = swiss, nvmax = 5)
## 5 Variables (and intercept)
## Forced in Forced out
## Agriculture FALSE FALSE
## Examination FALSE FALSE
## Education FALSE FALSE
## Catholic FALSE FALSE
## Infant.Mortality FALSE FALSE
## 1 subsets of each size up to 5
## Selection Algorithm: exhaustive
## Agriculture Examination Education Catholic Infant.Mortality
## 1 ( 1 ) " " " " "*" " " " "
## 2 ( 1 ) " " " " "*" "*" " "
## 3 ( 1 ) " " " " "*" "*" "*"
## 4 ( 1 ) "*" " " "*" "*" "*"
## 5 ( 1 ) "*" "*" "*" "*" "*"
The function summary() reports the best set of variables for each model size. From the output above, an asterisk specifies that a given variable is included in the corresponding model.
For example, it can be seen that the best 2-variables model contains only Education and Catholic variables (Fertility ~ Education + Catholic). The best three-variable model is (Fertility ~ Education + Catholic + Infant.Mortality), and so forth.
A natural question is: which of these best models should we finally choose for our predictive analytics?
Choosing the optimal model
To answer this question, you need some statistical metrics or strategies to compare the overall performance of the models and to choose the best one. You need to estimate the prediction error of each model and to select the one with the lowest prediction error.
Model selection criteria: Adjusted R2, Cp and BIC
The summary() function returns some metrics - Adjusted R2, Cp and BIC (see Chapter @ref(regression-model-accuracy-metrics)) - allowing us to identify the best overall model, where best is defined as the model that maximizes the adjusted R2 and minimizes the prediction error (RSS, Cp and BIC).
The adjusted R2 represents the proportion of variation in the outcome that is explained by the variation in predictor values. The higher the adjusted R2, the better the model.
The best model, according to each of these metrics, can be extracted as follows:
res.sum <- summary(models)
data.frame(
  Adj.R2 = which.max(res.sum$adjr2),
  CP = which.min(res.sum$cp),
  BIC = which.min(res.sum$bic)
)
## Adj.R2 CP BIC
## 1 5 4 4
There is no single correct solution to model selection, each of these criteria will lead to slightly different models. Remember that, “All models are wrong, some models are useful”.
Here, adjusted R2 tells us that the best model is the one with all the 5 predictor variables. However, using the BIC and Cp criteria, we should go for the model with 4 variables.
So, we have different “best” models depending on which metrics we consider. We need additional strategies.
Note also that the adjusted R2, BIC and Cp are calculated on the training data that were used to fit the model. This means that model selection using these metrics is possibly subject to overfitting and may not perform as well when applied to new data.
A more rigorous approach is to select a model based on the prediction error computed on new test data using k-fold cross-validation techniques (Chapter @ref(cross-validation)).
K-fold cross-validation
The k-fold cross-validation consists of first dividing the data into k subsets, also known as k folds, where k is generally set to 5 or 10. Each subset serves successively as the test data set (10% of the data when k = 10) and the remaining subsets (the other 90%) as training data. The average cross-validation error is computed as the model prediction error.
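caret handles the splitting automatically, but the fold logic itself is simple. As a language-neutral illustration (in Python, not part of the R workflow in this chapter), splitting n observations into k folds looks like this:

```python
def kfold_indices(n, k):
    """Split indices 0..n-1 into k consecutive folds of near-equal size."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = kfold_indices(10, 5)
print(folds)  # [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]

# Each fold serves once as the test set; the rest is the training set.
for test in folds:
    train = [j for f in folds if f is not test for j in f]
    # fit the model on `train`, evaluate on `test`, then average the k errors
```

In practice the indices are shuffled before splitting; this sketch keeps them ordered for readability.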
The k-fold cross-validation can be easily computed using the function train() [caret package] (Chapter @ref(cross-validation)).
Here, we’ll follow the procedure below:
1. Extract the different model formulas from the models object
2. Train a linear model on the formula using k-fold cross-validation (with k= 5) and compute the prediction error of each model
We start by defining two helper functions:
1. get_model_formula(), which provides easy access to the formulas of the models returned by the function regsubsets(). Copy and paste the following code in your R console:
# id: model id
# object: regsubsets object
# data: data used to fit regsubsets
# outcome: outcome variable
get_model_formula <- function(id, object, outcome){
# get models data
models <- summary(object)$which[id,-1]
# Get outcome variable
#form <- as.formula(object$call[[2]])
#outcome <- all.vars(form)[1]
# Get model predictors
predictors <- names(which(models == TRUE))
predictors <- paste(predictors, collapse = "+")
# Build model formula
as.formula(paste0(outcome, "~", predictors))
}
For example to have the best 3-variable model formula, type this:
get_model_formula(3, models, "Fertility")
## Fertility ~ Education + Catholic + Infant.Mortality
2. get_cv_error(), to get the cross-validation (CV) error for a given model:
get_cv_error <- function(model.formula, data){
train.control <- trainControl(method = "cv", number = 5)
cv <- train(model.formula, data = data, method = "lm",
trControl = train.control)
cv$results$RMSE
}
Finally, use the above defined helper functions to compute the prediction error of the different best models returned by the regsubsets() function:
# Compute cross-validation error
model.ids <- 1:5
cv.errors <- map(model.ids, get_model_formula, models, "Fertility") %>%
map(get_cv_error, data = swiss) %>%
unlist()
cv.errors
## [1] 9.42 8.45 7.93 7.68 7.92
# Select the model that minimizes the CV error
which.min(cv.errors)
## [1] 4
It can be seen that the model with 4 variables is the best model. It has the lowest prediction error. The regression coefficients of this model can be extracted as follows:
coef(models, 4)
## (Intercept) Agriculture Education Catholic
## 62.101 -0.155 -0.980 0.125
## Infant.Mortality
## 1.078
This chapter describes the best subsets regression approach for choosing the best linear regression model that explains our data.
Note that this method is computationally expensive and becomes infeasible for large data sets with many variables. A better alternative is provided by the stepwise regression method. See Chapter
locally finitely presentable category
Is there an example of a category which is
- locally finitely presentable
- Barr-exact (aka effective regular)
but is not the category of algebras of a Lawvere algebraic theory?
Added to references
• Maru Sarazola, An introduction to locally finitely presentable categories, (pdf)
diff, v25, current
Added mention of Gabriel–Ulmer duality.
diff, v24, current
I’m making some edits to locally finitely presentable category, and removing some old query boxes. A punchline was extracted, I believe, from the first query box. The second I don’t think is too
important (it looks like John misunderstood).
+–{: .query} Mike: Do people really call finitely presentable objects “finitary”? I’ve only seen that word applied to functors (those that preserve filtered colimits). Toby: I have heard
’finite’; see compact object. Mike: Yes, I’ve heard ’finite’ too. =–
+– {: .query} Toby: In the list of equivalent conditions above, does this essentially algebraic theory also have to be finitary?; that is, if it's an algebraic theory, then it's a Lawvere theory?
Mike: Yes, it certainly has to be finitary. Possibly the standard meaning of “essentially algebraic” implies finitarity, though, I don’t know. Toby: I wouldn't use ’algebraic’ that way; see
algebraic theory. John Baez: How come the first sentence of this paper seems to suggest that the category of models of any essentially algebraic theory is locally finitely presentable? The
characterization below, which I did not write, seems to agree. Here there is no restriction that the theory be finitary. Does this contradict what Mike is saying, or am I just confused?
Mike: The syntactic category of a non-finitary essentially algebraic theory is not a category with finite limits but a category with $\kappa$-limits where $\kappa$ is the arity of the theory. A
finitary theory can have infinitely many sorts and operations; what makes it finitary is that each operation only takes finitely many inputs, hence can be characterized by an arrow whose domain
is a finite limit. I think this makes the first sentence of that paper completely consistent with what I’m saying. =–
Almost any presheaf topos would do. (A topos is Barr-exact, and in the presheaf case $[C^{op}, Set]$, this is equivalent to the category of lex functors $D \to Set$ where $D^{op}$ is the
finite-colimit completion of $C$, if I have my variances straight.)
In additive context
• Henning Krause, Functors on locally finitely presented additive categories, Colloq. Math. 75:1 (1998) pdf
diff, v26, current
If $V$ is a locally finitely presentable symmetric monoidal closed category then there is a bijection between exact localizations of the $V$-category of $V$-enriched presheaves on a $V$-category $C$
and enriched Grothendieck topologies on $C$:
• Francis Borceux, Carmen Quinteiro, A theory of enriched sheaves, Cahiers Topologie Géom. Différentielle Catég. 37 (1996), no. 2, 145–162 numdam
diff, v26, current
• P. Gabriel, F. Ulmer, Lokal präsentierbare Kategorien, Springer Lect. Notes in Math. 221 1971 Zbl0225.18004 MR327863
• Jiří Adámek, Jiří Rosicky, Locally presentable and accessible categories, Cambridge University Press 1994.
diff, v26, current
Added a link to locally strongly finitely presentable category.
diff, v28, current
Made explicit the characterisation in terms of ind-objects.
diff, v29, current
How to Find the Domain and Range of a Function
In mathematics, understanding the domain and range of a function is crucial for solving equations, graphing functions, and analyzing their behavior. The domain represents the set of input values for
which the function is defined, while the range represents the set of output values produced by the function. Whether you’re a student learning calculus or someone interested in mathematics, this
article will guide you through the process of finding the domain and range of a function.
What Is a Function?
A function is a mathematical relationship that assigns each element from one set (called the domain) to a unique element in another set (called the codomain or range). In simpler terms, it takes an
input (usually denoted as “x”) and produces an output (usually denoted as “f(x)”). The output of a function depends on the input, and each input corresponds to one and only one output.
Finding the Domain of a Function
The domain of a function defines the set of all permissible input values for that function. In other words, it answers the question: “What values of ‘x’ can I plug into the function?”
Look for Restrictions: Start by examining the function for any potential restrictions. Common restrictions include:
Division by zero: Identify any denominators in the function, and determine when they equal zero. Exclude those values from the domain.
Square roots: If the function contains square roots or any even roots, the value inside the root (the “radicand”) must be non-negative. So, set the radicand greater than or equal to zero and solve
for ‘x’.
Logarithms: If the function involves logarithms, the argument of the logarithm must be greater than zero.
Consider Rational Expressions: For functions with rational expressions (fractions), look for values of ‘x’ that make the denominator equal to zero. These values must be excluded from the domain to
avoid division by zero.
Check for Square Roots and Even Roots: For functions containing square roots or even roots, ensure that the radicand is non-negative. Set the radicand greater than or equal to zero and solve for ‘x’
to find the valid domain.
Determine Inequality Constraints: If the function is subject to specific inequalities (e.g., “x > 0” or “x < 5”), these conditions should be taken into account when determining the domain.
Consider Piecewise Functions: In piecewise functions, each piece may have its own domain restrictions. Find the domain for each piece separately.
Combine All Valid Input Ranges: Finally, combine all the valid input ranges you found in the previous steps. The domain of the function is the intersection of all these ranges.
Finding the Range of a Function
The range of a function represents the set of all possible output values that the function can produce. To find the range of a function, follow these steps:
Analyze the Behavior: Start by analyzing the behavior of the function, particularly its graph. Visualize the graph and observe its highest and lowest points, horizontal asymptotes, and any intervals
where the function is increasing or decreasing.
Use Calculus: If you’re dealing with more complex functions, you can use calculus to find the range. Determine the derivative of the function and find critical points where the derivative is zero or
undefined. These critical points are potential extrema of the function.
Consider Asymptotes: If the function has horizontal asymptotes, take into account how the function approaches these values as ‘x’ approaches positive or negative infinity.
Test Values: Choose test values within the domain of the function and evaluate the function at these points to determine the corresponding output values. These test points can provide insight into
the range.
Combine All Possible Output Values: Combine all the possible output values you found using the above methods. The range of the function is the set of all these output values.
Example: Let’s find the domain and range of the function f(x) = 1/x.
Domain: The domain of this function excludes x = 0, as division by zero is undefined. So, the domain is all real numbers except x = 0, often expressed as “x ∈ ℝ, x ≠ 0.”
Range: The range is all real numbers except zero. As ‘x’ approaches positive or negative infinity, the function approaches zero but never reaches it, while every nonzero value y is attained at x = 1/y. This can be expressed as “f(x) ∈ ℝ, f(x) ≠ 0.”
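The domain restrictions discussed above can also be checked numerically. The sketch below is illustrative only (the helper name safe_eval is my own invention, not a standard function): it returns None for inputs outside the domain, covering the division-by-zero and negative-radicand cases.

```python
import math

def safe_eval(f, x):
    """Evaluate f(x); return None when x lies outside the domain."""
    try:
        y = f(x)
        return y if math.isfinite(y) else None
    except (ZeroDivisionError, ValueError):
        return None

f = lambda x: 1 / x             # domain excludes x = 0
g = lambda x: math.sqrt(x - 3)  # radicand x - 3 must be non-negative

print(safe_eval(f, 2.0))  # 0.5
print(safe_eval(f, 0.0))  # None (division by zero)
print(safe_eval(g, 1.0))  # None (negative radicand)
print(safe_eval(g, 4.0))  # 1.0
```

Note that sampling like this can only suggest the domain and range; the algebraic analysis in the steps above is what actually proves them.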
Determining the domain and range of a function is an essential part of understanding its behavior and applications in mathematics. By analyzing the function for potential restrictions, inequalities,
and asymptotic behavior, you can effectively find the valid domain and range. Whether you’re working with simple linear functions or complex equations, following these steps will help you identify
the input and output values that make up the domain and range of a function.
Online Tutors for Pre-Algebra, Algebra 1 and 2 for Grade 6 to 12
online algebra tutors
Online Tutors for Pre-Algebra, Algebra 1 & 2
When your child opens the algebra book and asks a great question on exponents (or scientific notation), you might struggle to remember the procedure for finding x.
But the good news is that Guru At Home has your back when it comes to algebra.
Our expert online tutors for maths focus on what your child requires to master, teach, and motivate them through clear instructions and engaging lessons of algebra I and 2.
Whether it’s Pre-Algebra, Algebra 1, or Algebra 2, our one-on-one live sessions will provide an excellent foundation for your child’s future algebra learning. In addition, we’ll ensure your
child masters every algebraic concept before proceeding to the next grade.
Best Pre-Algebra Tutors - Guru At Home
Pre-Algebra is fundamental to modern mathematics, but it’s often an intimidating subject that’s difficult to master. If you’re struggling with pre-algebra, you can’t just take any
tutor, because you can’t be sure they’re qualified or competent. You need Guru At Home’s online Pre-Algebra tutors, who are experienced, qualified, and have proven results.
Pre-Algebra Topics Covered in Our Private Tutoring Classes
• Factors and multiples: Pre-algebra
• Patterns: Pre-algebra
• Ratios and rates: Pre-algebra
• Percentages: Pre-algebra
• Exponents intro and order of operations: Pre-algebra
• Variables & expressions: Pre-algebra
• Equations & inequalities introduction: Pre-algebra
• Percent & rational number word problems: Pre-algebra
• Proportional relationships: Pre-algebra
• One-step and two-step equations & inequalities: Pre-algebra
• Roots, exponents, & scientific notation: Pre-algebra
• Multi-step equations: Pre-algebra
• Two-variable equations: Pre-algebra
• Functions and linear models: Pre-algebra
• Systems of equations
Best Algebra 1 Tutoring Classes - Guru At Home
Great math algebra 1 teachers develop a passion for teaching others, and students typically come to them prepared, eager, and fully invested in seeking help. However, teachers often find themselves
struggling to help students overcome their algebra 1 hurdles. Here comes the role of our experienced algebra 1 tutors.
Algebra 1 Topics Covered in Our Private Tutoring Classes
• Algebra foundations: Algebra 1
• Solving equations & inequalities: Algebra 1
• Working with units: Algebra 1
• Linear equations & graphs: Algebra 1
• Forms of linear equations: Algebra 1
• Systems of equations: Algebra 1
• Inequalities (systems & graphs): Algebra 1
• Functions: Algebra 1
• Sequences: Algebra 1
• Absolute value & piecewise functions: Algebra 1
• Exponents & radicals: Algebra 1
• Exponential growth & decay: Algebra 1
• Quadratics: Multiplying & factoring: Algebra 1
• Quadratic functions & equations: Algebra 1
• Irrational numbers: Algebra 1
• Creativity in algebra
Online Tutoring Lessons For Algebra 2 - Guru At Home
Success in Algebra 2 classes is critical to success in high school mathematics, and the key to that success is having the best tutors. Your teacher alone can’t give that to you, so you need to
find a private tutor, but it’s hard to find a great one. Worry not: Guru At Home’s online Algebra 2 tutors have you covered.
Algebra 2 Topics Covered in Our Private Tutoring Classes
• Polynomial arithmetic: Algebra 2
• Complex numbers: Algebra 2
• Polynomial factorization: Algebra 2
• Polynomial division: Algebra 2
• Polynomial graphs: Algebra 2
• Rational exponents and radicals: Algebra 2
• Exponential models: Algebra 2
• Logarithms: Algebra 2
• Transformations of functions: Algebra 2
• Equations: Algebra 2
• Trigonometry: Algebra 2
• Modeling
Online Algebra Tutors for Grade 6th to Grade 8th
At Guru At Home’s math tutoring website, you will find online algebra tutors for grade 6th to 8th. Our primary target is to provide online algebra tutors for the students who are unable to get enough
help from schools.
Online Algebra Tutors for Grade 9th to Grade 10th
Guru At Home is a leading tutoring platform trusted by students & parents. Our online math tutors are experts in teaching algebra for grade 9th to grade 10th. In our one on one algebra classes our
tutors use the real time problem solving approach.
Online Algebra Tutors for Grade 11th to Grade 12th
In grades 11 and 12, Mathematics is one of the toughest subjects to study, especially for students who are not strong in it. With our online algebra tutors, you will gain a competitive edge
over your classmates and score good marks in algebra.
Algebra is a field of mathematics that deals with symbols and the arithmetic operations performed on them. These symbols don’t have fixed values and are known as variables. In real-life problems, we frequently encounter specific values that keep shifting, and there is an ongoing need to represent these fluctuating values. In algebra, such values are usually represented by symbols like x and y, or the letters z, p, or q; these symbols are referred to as variables.
Furthermore, to determine their values, these symbols can be manipulated through the arithmetic operations of addition, subtraction, multiplication, and division.
Concepts that are Associated with Algebra
Algebra is broken down into various subjects to aid in-depth analysis. Important topics of algebra include algebraic equations and expressions, sequences and series, exponents,
logarithms, and sets.
Basic Algebra includes simple rules and operations with numbers, like:
• Addition
• Subtraction
• Multiplication
• Division
• Techniques for solving equations
• Variables
• Functions
• Polynomials
• Algebraic Expressions
The sky’s the limit as far as the complexity of these concepts in “Algebra” is concerned. Newer concepts and ideas are added as the level of education increases. The concepts discussed
are helpful and simple to comprehend if they are taught correctly.
Why do students find difficulty in Algebra?
Algebra is not simply maths with letters that represent numbers. It’s a different type of thinking. Arithmetic is a subject that many find challenging to master. However, the majority achieve success
to different degrees, and it takes a lot of practice.
This is because the primary components of arithmetic numbers are a natural part of the world around us when we measure things, count objects, purchase items or make them, use the phone, visit the
bank, and review baseball scores and so on.
Algebra is a way of thinking logically about numbers, not just working with them. Surprisingly, one of the main reasons students struggle in Algebra is that they are unable or unwilling to write out the steps of solving a problem on paper.
Poor handwriting can ruin an ambitious Algebra student who cannot return and look over the things they have recorded.
In most cases, students are enticed by the temptation to solve problems in their heads instead of recording all the steps, which can haunt them regardless of how skilled in math they are.
Why Do Students Need Guru At Home's Online Tutoring Services?
24 Hour Availability
Online tutoring provides a huge opportunity for students: they can get help from tutors 24/7, work at their own pace, and study in the way
that suits them best. This type of tutoring works especially well with younger students, who are active and often need frequent reminders.
Online Tutoring is Flexible
Online tutoring is seen as a viable alternative to having tutors visit students at home. A student’s schedule can be flexible, allowing them to receive tutoring online at a time that works best for them.
There are times at which everyone needs help with school work. While some parents may turn to homeschooling their kids, others look to online tutoring for help. This is especially useful when a
student is struggling in a particular subject or just needs help catching up on work that they feel is below their grade level.
Online Tutoring is Convenient
Whether it’s learning a new language, solving math problems, or perfecting writing mechanics, tutoring online is a convenient alternative to in-person learning. Students who live far from school, in
remote areas, or who face other challenges find online tutoring to be beneficial because they can tutor at their own convenience—even from other countries. Online tutoring is also beneficial for
students who seek short-term goals, such as preparing for a specific test, or who just want to study at their own pace.
Online Algebra Tutoring | Best Online Maths Tutor – Guru At Home
Improve your scores in Algebra by getting help with your homework 24/7 from the best maths tutors. Our maths tutors are specialists in Algebra and are ready to help you solve your specific Algebra problems.
We ensure that all Algebra skill levels and concepts are covered in our tutoring sessions right from the beginning, and by the end, your child will no longer struggle with algebra.
You can find the right Algebra tutor online when you’re doing homework or studying, whether after classes, after practice, or even on the weekend. We can help Algebra students of all grades and
levels, including the Pre-Algebra, Algebra I, and Algebra II–get help with Algebra concepts, such as:
• Algebraic equations
• Algebra word problems
• Algebra formulas
• Polynomial factoring
• Linear inequalities
• The graphing of equations
• Algebraic expressions
Our Maths online tutors can also assist you to locate Algebra worksheets and practice questions to help prepare for Algebra exams. You’ll receive individualized, one-on-one assistance from our
tutors, who are experts.
Let us know about the issue, and we’ll help you find the most suitable teacher for your needs. Your Algebra tutor will be working in our online class and utilizing the interactive whiteboard to work
on Algebra problems, review Algebra homework, and revise Algebra formulas.
Students who utilize guruathome.com receive better marks, are more confident and can finish their assignments on time.
Get better grades in your Algebra class by attending algebra online courses by the maths experts and getting One-to-One Algebra Help.
Any Algebra 1 student who wants to achieve an A grade must master these concepts and skills:
• Arithmetic
• Order of Operations
• Integers
• Working with Variables
• Memorizing Formulas
• The Organizing of problems on paper
The following fundamental ideas are covered during Algebra 1:
• Simplifying
• Equations and Inequalities
• Word Problems
• Functions and graphing
• Linear Equations
• Systems of Equations
• Polynomials and Exponents
• Factoring
• Rational Expressions
• Radicals
If you’re looking for ways to get through Algebra 1, the key is getting individualized instruction. In the past, this meant costly private tutoring; today, however, it is affordable. Online
algebra tuition is now available at Guru At Home via videos and guided exercises that include audio explanations.
Algebra 1 takes about 6 to 12 months to master. The exact length of time depends on the student’s existing math knowledge, their natural aptitude for learning math, and how much time they allocate for
assistance each day.
Our price range is affordable and it starts from
How to Create a Configurable Filter Using a Kaiser Window
This article explains how to create a windowed-sinc filter with a Kaiser (or Kaiser-Bessel) window. The windowed-sinc filters in previous articles such as How to Create a Simple Low-Pass Filter
typically had two parameters, the cutoff frequency \(f_c\) and the transition bandwidth (or rolloff) \(b\). With a Kaiser window, there is a third input parameter \(\delta\), the ripple. All three
parameters are illustrated in Figure 1.
For the specific case of the Kaiser window, the same \(\delta\) is used in the passband and in the stopband. The impulse response of the underlying sinc filter is, as in How to Create a Simple
Low-Pass Filter, still defined as
\[h[n]=\mathrm{sinc}(2f_c n),\qquad \mathrm{sinc}(x)=\frac{\sin(\pi x)}{\pi x},\]
where \(f_c\) is the cutoff frequency, and where \(n\) runs from \(-\infty\) to \(+\infty\). Remember that this range of \(n\) is actually the reason that a window is necessary. The impulse response
of the ideal filter is infinite, and needs to be “shortened” in a way that is better than simple truncation.
The main advantage of the Kaiser window is that it is more configurable than, e.g., the Blackman window. The acceptable ripple of the filter can be specified, while it is fixed (but known) for the
basic window types.
Kaiser Window
The definition of the Kaiser window is given by
\[w[n]=\frac{I_0\!\left(\beta\sqrt{1-\left(\frac{2n}{N-1}-1\right)^2}\right)}{I_0(\beta)},\]
with \(N\) the length of the filter, \(n\) running from zero to \(N-1\), and \(\beta\) a parameter that can be chosen freely (see below). \(I_0(\cdot)\) is the zeroth-order modified Bessel function
of the first kind. Especially with the Bessel function, this expression looks more complicated than the expressions for windows such as Blackman. However, Bessel functions have many applications and
are directly available in Python and MATLAB.
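If a library routine is not at hand, \(I_0\) can also be computed directly from its rapidly converging power series \(I_0(x)=\sum_{k\ge 0}\frac{(x/2)^{2k}}{(k!)^2}\). A minimal pure-Python sketch (the function name bessel_i0 is my own; truncating at 25 terms is ample for the \(\beta\) values used here):

```python
import math

def bessel_i0(x, terms=25):
    """Zeroth-order modified Bessel function of the first kind,
    from its power series: I0(x) = sum_{k>=0} (x/2)^(2k) / (k!)^2."""
    return sum((x / 2.0) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

print(round(bessel_i0(0.0), 6))  # 1.0
print(round(bessel_i0(1.0), 6))  # 1.266066
```

In practice you would of course just call np.i0 (or scipy.special.i0), but the series makes clear there is nothing mysterious inside.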
By varying the length \(N\) and the parameter \(\beta\), the properties of the final filter, which is the product of \(h[n]\) (shifted to the range \([0,N-1]\)) and \(w[n]\), can be determined. This
results in the three mentioned tunable parameters, the cutoff frequency \(f_c\), the transition bandwidth \(b\), and the ripple \(\delta\). Traditionally, \(\delta\) is seen as the tunable parameter,
and a new parameter \(A\) is then computed as
\[A=-20\log_{10}(\delta).\]
Personally, I would suggest that you take this \(A\) to be the tunable parameter, since it is simply the attenuation of the filter in the stopband in dB. If the ripple in the passband is very
important, you might not be able to do that. However, for a typical filter that is used to remove a range of frequencies more or less completely, you’ll need to make \(A\) relatively large, resulting
in a small ripple in the passband anyway.
Kaiser then empirically, through numerical experimentation, determined which value of \(\beta\) to use to end up with a given value for \(A\), as
\[\beta=\begin{cases}
0.1102(A-8.7),&A\gt 50\\[0.3em]
0.5842(A-21)^{0.4}+0.07886(A-21),&21\leq A\leq 50\\[0.3em]
0,&A\lt 21
\end{cases}\]
The final parameter that you need is then the filter length \(N\), also determined empirically by Kaiser as
\[N=\dfrac{A-8}{2.285\cdot2\pi b}+1.\]
I’ve added the term \(+1\) because the formula of Kaiser estimates the filter order, which is one less than the filter length. Additionally, you often still want the filter to have an odd length, so
that its delay is an integer number of samples, so you might have to do another \(+1\) if the original estimate is even…
And that’s it. You can then use these parameters \(\beta\) and \(N\) in the expression for \(w[n]\) above to create a Kaiser window with the given transition bandwidth \(b\) and attenuation in the
stopband \(A\).
For the parameters \(f_c=0.25\), \(b=0.1\), and \(A=40\), the result is given in Figure 2. In this case, \(N=25\) and \(\beta=3.395\).
For the parameters \(f_c=0.125\), \(b=0.05\), and \(A=60\), the result is given in Figure 3. In this case, \(N=75\) and \(\beta=5.653\).
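As a sanity check, the two design formulas above reproduce the parameters quoted for Figures 2 and 3. A small sketch of just the parameter-estimation step (the function name kaiser_params is my own):

```python
import math

def kaiser_params(A, b):
    """Estimate Kaiser window length N (forced odd) and shape parameter
    beta from stopband attenuation A [dB] and transition bandwidth b
    (as a fraction of the sampling rate), using Kaiser's formulas."""
    N = math.ceil((A - 8) / (2.285 * 2 * math.pi * b)) + 1
    if N % 2 == 0:
        N += 1  # odd length, so the delay is an integer number of samples
    if A > 50:
        beta = 0.1102 * (A - 8.7)
    elif A >= 21:
        beta = 0.5842 * (A - 21) ** 0.4 + 0.07886 * (A - 21)
    else:
        beta = 0.0
    return N, beta

print(kaiser_params(40, 0.1))   # N = 25, beta ≈ 3.395 (Figure 2)
print(kaiser_params(60, 0.05))  # N = 75, beta ≈ 5.653 (Figure 3)
```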
To create other types of filters such as high-pass or band-pass, the standard techniques can be used, for example as described in How to Create a Simple High-Pass Filter and How to Create Simple
Band-Pass and Band-Reject Filters. Figure 4 shows an example of a high-pass filter that was created through the technique described in Spectral Reversal to Create a High-Pass Filter, from the same
parameters that were used for Figure 3.
Python Code
In Python, all these formulas can be implemented concisely.
from __future__ import division
import numpy as np
fc = 0.25 # Cutoff frequency as a fraction of the sampling rate (in (0, 0.5)).
b = 0.1 # Transition band, as a fraction of the sampling rate (in (0, 0.5)).
A = 40 # Attenuation in the stopband [dB].
N = int(np.ceil((A - 8) / (2.285 * 2 * np.pi * b))) + 1
if not N % 2: N += 1 # Make sure that N is odd.
if A > 50:
    beta = 0.1102 * (A - 8.7)
elif A >= 21:
    beta = 0.5842 * (A - 21) ** 0.4 + 0.07886 * (A - 21)
else:
    beta = 0
n = np.arange(N)
# Compute sinc filter.
h = np.sinc(2 * fc * (n - (N - 1) / 2))
# Compute Kaiser window.
w = np.i0(beta * np.sqrt(1 - (2 * n / (N - 1) - 1) ** 2)) / np.i0(beta)
# Multiply sinc filter by window.
h = h * w
# Normalize to get unity gain.
h = h / np.sum(h)
Applying the filter \(h\) to a signal \(s\) by convolving both sequences can then be as simple as writing the single line:
s = np.convolve(s, h)
In the Python script above, I compute everything in full to show you exactly what happens, but, in practice, shortcuts are available. For example, the Kaiser window can be computed with w = np.kaiser(N, beta).
Filter Design Tool
This article is complemented with a Filter Design tool. With respect to Kaiser windows, it allows building low-pass, high-pass, band-pass, and band-reject filters. Try it now!
Degeneracy of the Quantum Harmonic Oscillator
May 17, 2019
Note: This post uses MathJax for rendering, so I would recommend going to the site for the best experience.
I just love being able to find neat ways to solve problems. In particular, there’s something about a combinatorial problem that is so satisfying when solved. The problem may initially look difficult,
but a slight shift in perspective can bring the solution right into focus. This is the case with this problem, which is why I’m sharing it with you today. Don’t worry if you don’t know any of the
quantum mechanics that goes in here. The ingredients themselves aren’t important to the solution of the problem.
Energy of the quantum harmonic oscillator
If you have taken a quantum mechanics class, there’s a good chance you studied this system. The quantum harmonic oscillator is one that can be solved exactly, and allows one to learn some interesting
properties about quantum mechanical systems. Briefly, the idea is that the system has a potential that is proportional to the position squared (like a regular oscillator). In the quantum mechanical
case, the aspect we often seek to find are the energy levels of the system. This is what is of interest in our problem here.
In one dimension, the energy is given by the relation $E_n = \left(n+1/2 \right) \hbar \omega$, where n is an integer greater or equal to zero, and the terms outside of the parentheses are constant
(Planck’s constant and the angular frequency, respectively). However, what’s nice is that this extends into any number of dimensions in a straightforward way. If we want to look at the harmonic
oscillator in three dimensions, the energy is then given by:
\[E_{n_x,n_y,n_z} = \left(n_x + n_y + n_z +3/2 \right) \hbar \omega.\]
In other words, there’s a n value for each dimension. We can even consider the harmonic oscillator in N dimensions, and the energy would change in the same way. We would just add a new n index, and
throw in an extra factor of 1/2. Furthermore, it’s important to know that each combination of n’s gives a different physical system.
What you might notice from the three-dimensional case is that there are different combinations of $n_x$, $n_y$, and $n_z$ that give rise to the same total energy. For example, we can note that the
combinations (1,0,0), (0,1,0), and (0,0,1) all give the same total energy. This is called degeneracy, and it means that a system can be in multiple, distinct states (which are denoted by those
integers) but yield the same energy. In this essay, we are interested in finding the number of degenerate states of the system.
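To make the bookkeeping concrete, here is a tiny sketch of the energy relation in units of $\hbar \omega$ (the function name qho_energy is mine), showing that the three distinct states above really do share one energy:

```python
def qho_energy(ns):
    """Energy of an N-dimensional quantum harmonic oscillator, in units
    of hbar*omega, for quantum numbers ns = (n_1, ..., n_N):
    E = sum(ns) + N/2."""
    return sum(ns) + len(ns) / 2

print(qho_energy((1, 0, 0)))  # 2.5
print(qho_energy((0, 1, 0)))  # 2.5
print(qho_energy((0, 0, 1)))  # 2.5
```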
The counting problem
Here’s the question. Given a certain value of n (which in the three-dimensional case is $n = n_x + n_y + n_z$), how many different combinations of those three numbers can you make to get the same
energy? If we want to be more general, for a given n and N (the number of dimensions), what is the degeneracy?
If we do a few examples, we will see that the degeneracy in three dimensions is one (no degeneracy) for n = 0, three for n = 1, six for n = 2, and ten for n = 3.
I don’t know if you’re seeing a pattern here, but it’s not super clear to me. I definitely don’t see how to generalize this to any n, let alone for more dimensions. As such, we’re going to look at
this in a whole different way.
The method I’m going to discuss is one I found here, and is called the “stars and bars” method. It’s a beautiful technique that captures exactly what this problem is asking.
We start by thinking about how we can represent this problem. For a given n, we want to find a way to split this number into separate parts. Say we want to split the number into four parts. Then, we
would need to introduce three splits in the number n so that there are four “pieces”.
How many parts do we want for our particular problem? Well, it depends on the dimension we’re working in! For example, if the dimension is three, we want to split n into three parts. This means we
need to “cut” the number twice.
Words don’t describe this as well as an explicit visual example. Let’s pretend we have n = 5, and we are working in three dimensions. We will represent the number five by circles, and the splitting
will be done using vertical bars. Then, here’s one way we can “cut” n (this particular arrangement represents the split 5 = 2 + 2 + 1):
○ ○ | ○ ○ | ○
As you can see, this is just another way to word our original question. What’s neat though is that the construction doesn’t actually “know” about the way the number n is being split. In other words,
all we’ve done is introduce a second kind of object into the mix (the vertical bars). We recognize those vertical bars as dividing the number into three pieces, but the mathematics doesn’t care.
If you’ve taken some discrete mathematics, you may know where we’re going with this. We’ve reduced the question to finding the number of ways we can arrange objects. This is a common combinatorial
problem, and one that is well-studied. The answer is ripe for the taking.
For our scenario, how many objects do we have in all? Well, there are n objects, and we have to also include the number of bars. But the number of bars is just N-1. Therefore, the total number of
objects is N-1+n. Then, from those objects, we choose the positions of the N-1 bars. Therefore, we get the usual combinations formula. If we label the degeneracy as $g_n$, we get:
\[g_n = {N-1+n \choose N-1}.\]
In particular, we can now solve this question for three dimensions. Substituting N=3, we get:
\[g_n = {2+n \choose 2} = \frac{(2+n)!}{2! n!} =\frac{(n+1)(n+2)}{2} .\]
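The formula can be cross-checked by brute-force enumeration of the states. A quick sketch (function names are my own):

```python
from itertools import product
from math import comb  # Python 3.8+

def degeneracy(n, N):
    """Stars and bars: number of ways to write n as an ordered sum of N
    non-negative integers, C(N - 1 + n, N - 1)."""
    return comb(N - 1 + n, N - 1)

def brute_force_3d(n):
    """Count all triples (nx, ny, nz) with nx + ny + nz = n directly."""
    return sum(1 for t in product(range(n + 1), repeat=3) if sum(t) == n)

for n in range(4):
    print(n, degeneracy(n, 3), brute_force_3d(n))
# Both columns give 1, 3, 6, 10 for n = 0, 1, 2, 3, as counted above.
```

The enumeration agrees with $(n+1)(n+2)/2$ in three dimensions, which is a nice confirmation that the stars-and-bars argument counts exactly the right thing.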
All I can say is that this is slick. I remember first trying to solve this on my own, and getting stuck. Even when I was given the answer, it didn’t feel satisfying to me. I knew that there had to be
some better explanation for the degeneracy. I felt like there should be some combinatorial argument for the degeneracy, and it turned out that I was right! I hope that this argument helps clear
things up for other students who were wondering about the formula and how to get it. In my mind, this is one of the clearest ways to get it.
Poke-a-Dot: An Alphabet Eye Spy
Australian Curriculum: Description
Foundation Year – Establish understanding of the language and processes of counting by naming numbers in sequences, initially to and from 20, moving from any starting point (ACMNA001)
Teaching ideas
Poke-a-Dot:Who’s in the Ocean?
Australian Curriculum: Description
Foundation Year – Establish understanding of the language and processes of counting by naming numbers in sequences, initially to and from 20, moving from any starting point (ACMNA001)
Teaching ideas
Poke-a-Dot: 10 Little Monkeys
Australian Curriculum: Description
Foundation Year – Establish understanding of the language and processes of counting by naming numbers in sequences, initially to and from 20, moving from any starting point (ACMNA001)
Teaching ideas
Poke-a-Dot: Old MacDonald’s Farm
Australian Curriculum: Description
Foundation Year – Establish understanding of the language and processes of counting by naming numbers in sequences, initially to and from 20, moving from any starting point (ACMNA001)
Teaching ideas
Mathterpieces: The Art of Problem-Solving
Maths Concepts
Australian Curriculum: Description
Foundation Year – Subitise small collections of objects (ACMNA003);
Year 1 – Represent and solve simple addition and subtraction problems using a range of strategies including counting on, partitioning, and rearranging parts (ACMNA015);
Year 2 – Describe, continue, and create number patterns resulting from performing addition or subtraction (ACMNA035).
Teaching ideas
Students investigate other situations involving subitising and counting on with and without distractors in routine and non-routine situations. Students could write the number sentences represented by
art in this book (representational to abstract).
Adam Spencer’s Mind-Boggling Maths, Outrageous Puzzles, Enormous Super-Cool Games Book of Numbers and heaps of other fun stuff!
Australian Curriculum: Description
Prep-Sort, describe and name familiar two-dimensional shapes and three-dimensional objects in the environment (ACMMG009); YR1-Recognise and classify familiar two-dimensional shapes and
three-dimensional objects using obvious features (ACMMG022); YR2-Describe the features of three-dimensional objects (ACMMG043); YR3-Identify symmetry in the environment (ACMMG066); YR4-Investigate
number sequences involving multiples of 3, 4, 6, 7, 8, and 9 (ACMNA074); YR5-Use efficient mental and written strategies and apply appropriate digital technologies to solve problems (ACMNA291);
YR6-Construct simple prisms and pyramids (ACMMG140); YR7-Draw different views of prisms and solids formed from combinations of prisms (ACMMG161); YR8-Solve a range of problems involving rates and
ratios, with and without digital technologies (ACMNA188); YR9-Express numbers in scientific notation (ACMNA210)
Teaching ideas
There is real-world application in this book, and cross-curricular links are strong too: science, humanities, music, HPE, etc.
Where is the Green Sheep?
Maths Concepts
Australian Curriculum: Description
Sort and classify familiar objects and explain the basis for these classifications. Copy, continue and create patterns with objects and drawings (ACMNA005); 1-Investigate and describe number patterns
formed by skip-counting and patterns with objects (ACMNA018)
Teaching ideas
Subitising small numbers, patterns
Wild Fibonacci: Nature’s Secret Code Revealed
Maths Concepts
Australian Curriculum: Description
Describe, continue, and create number patterns resulting from performing addition or subtraction (ACMNA060); 4-Find unknown quantities in number sentences involving addition and subtraction and
identify equivalent number sentences involving addition and subtraction (ACMNA083)
Teaching ideas
Puzzle collaboration to create some summary notes for the maths workbook. Like the clip titled Fibonacci Numbers: Identifying Patterns on Teaching Channel
Maths Concepts
Australian Curriculum: Description
Sort and classify familiar objects and explain the basis for these classifications. Copy, continue and create patterns with objects and drawings (ACMNA005); Investigate and describe number patterns
formed by skip-counting and patterns with objects (ACMNA018)
Teaching ideas
Other activities that involve recognising patterns of objects.
Australian Curriculum Year Level
Year 1
Maths Concepts
Australian Curriculum: Description
Recognise, describe and order Australian coins according to their value (ACMNA017); Investigate and describe number patterns formed by skip-counting and patterns with objects (ACMNA018)
Teaching ideas
Explore a more reasonable price for a cap in today's times. Explore cap patterns
|
{"url":"http://learningyou.com.au/recommended-books/strand/algebra/","timestamp":"2024-11-05T21:39:20Z","content_type":"application/xhtml+xml","content_length":"120246","record_id":"<urn:uuid:83642fda-2fd0-46e9-b39c-7d3459afb3f1>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00657.warc.gz"}
|
Hexadecimal notation
This quick introduction to hex was originally to be part of another article, but became quite long and therefore I decided to separate it. Even though it doesn’t have anything to do with Longhorn in
particular, I believe it will be worth the read.
A bit of binary to begin with
At its core, a computer can only distinguish between two values: a value is either high or low, 1 or 0. Such a two-valued digit is called binary, or simply a “bit”. Not many interesting calculations can be carried out using only 1 and 0, so to let computers perform calculations as in mathematics, an encoding scheme is needed to map numbers onto orderings of 1’s and 0’s.
An example of a binary value is 110100. To calculate the decimal number from the given value, start at the rightmost bit and multiply it by 2^n, where n is the place of the bit, counting from 0. Next, do the same calculation for the bit directly to its left and continue doing this until no bits are left. Finally, sum all the answers. Here is an example:
2^0*0 = 0
2^1*0 = 0
2^2*1 = 4
2^3*0 = 0
2^4*1 = 16
2^5*1 = 32
---------- +
52
It’s good to know different amounts of bits have different names. For example, four bits are called a “nibble” and eight bits are referred to as one byte. The byte really is the starting point of all other multitudes of data we know: Kibibyte (2^10 bytes), Mebibyte (2^20 bytes), Gibibyte (2^30 bytes) etc. Note the usage of the multitudes described by the IEEE 1541 guideline instead of SI prefixes (Kilo, Mega, Giga and so on). As you might realize, it’s a pain to write large numbers down in binary notation. A solution to this problem is the hexadecimal notation, which will be discussed in the next few paragraphs.
Just like your decimal numeral system
Hexadecimal notation or hex for short, is a notation to simplify reading binary data. Each hex digit can have 16 different values. The value ranges from 0 through 9 and A through F. The characters A,
B, C, D, E, F represent numeric values 10 through 15 respectively. Fundamentally the hexadecimal numeral system has a lot in common with the more generally known decimal numeral system. Like decimal
numeral system, hex is a positional numeral system. The difference lies in the “base”: where decimal uses base-10, hex uses base-16. To illustrate how this works in practice, I have worked out some examples.
Decimal notation of 256:
2*10^2 + 5*10^1 + 6*10^0
It’s not very difficult to see the above is true. Because we are so used to counting base-10, it all seems very natural. The same number would be calculated base-16 this way:
1*16^2 + 0*16^1 + 0*16^0
To distinguish between decimal and hexadecimal notation, a 0x prefix is often added to hexadecimal values. This results in the above value being written as 0x100.
Another example for 123:
7*16^1 + B*16^0
It’s obvious how 7*16 can be calculated. It is, however, less intuitive how one can calculate B*16^0. As pointed out in the first paragraph, hex uses A through F to denote values 10 through 15. In this case B is actually a number! Substituting 11 for B will result in 11*16^0 which is, again, easy to calculate. The final sum:
7*16 = 112
11*1 = 11
---------- +
123
Indeed, 0x7B equals 123.
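These conversions are easy to check in code; for instance, in Python (an illustration of my own, not from the original article):

```python
# Built-in base conversions mirror the positional arithmetic worked out above
assert int("110100", 2) == 52   # the binary example: 32 + 16 + 4
assert int("7B", 16) == 123     # hex -> decimal
assert hex(256) == "0x100"      # decimal -> hex, with the customary 0x prefix
print(f"{123:X}")               # prints 7B
```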
Hex editors
When using a hex editor like HxD, hex digits are mostly grouped two by two; this is not a coincidence. A single hex digit can at most be 15, namely value 0xF. 15 written in binary form is 1111. Four bits, called a “nibble”, form exactly half a byte of data. This means that two hex digits form exactly one byte, or eight bits, of data. As many will understand, this significantly decreases the number of digits that need to be written down: each eight bits needs only two hex digits.
Reading an offset
To be able to find certain parts of data in a file, some notion of offsets is required. In short, an offset indicates where a certain part of data in a file begins and consists of both a row and a column. Offset 0x1F denotes the last column (column F, the 16th) of the second row (row 1, because numbering starts at 0). Likewise, offset 0xFF denotes the last column of row F.
The hex editor below shows a selection made in a PE header starting in at offset 0x4E with a length of 0x26 bytes. The block selected reaches from offset 0x4E to 0x73. The offset of the next value
after the selection (value 2E) is 0x74.
As seen in the image above, hexadecimal values are represented as ANSI encoded characters in a separate column next to the data. When values represent plain text it can immediately be read.
Endianness describes the order in which bytes are written down. Using decimal notation, the most significant digit is always written first (big-endian). It is, however, common to have the least significant byte upfront (little-endian) in data files. In a little-endian formatted data file, a saved offset of two bytes long might read ‘24 1D’. Since it’s written down little-endian, the most significant byte is the last one, and the actual offset will therefore be 0x1D24.
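The ‘24 1D’ example can be verified with Python's struct module (a sketch of my own, not from the original article):

```python
import struct

raw = bytes([0x24, 0x1D])           # the bytes '24 1D' as stored in the file
little, = struct.unpack("<H", raw)  # read as little-endian unsigned 16-bit
big, = struct.unpack(">H", raw)     # big-endian reading, for comparison
assert little == 0x1D24             # least significant byte came first
assert big == 0x241D
```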
|
{"url":"https://longhorn.ms/hexadecimal-system/","timestamp":"2024-11-13T08:01:40Z","content_type":"text/html","content_length":"12532","record_id":"<urn:uuid:cea33488-121b-4d9b-8fc3-245c504f31c2>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00868.warc.gz"}
|
QFT on curved spacetimes is the so-called semiclassical approximation to quantum gravity. Free quantum fields can be defined on any globally hyperbolic spacetime using the theory of general hyperbolic equations. Physical states are believed to satisfy the Hadamard property, that is, their n-point distributions should satisfy a condition that restricts their wavefront set. Using these concepts, the theory analyses perturbative and non-perturbative aspects of QFT interacting with a classical gravitational field.
|
{"url":"https://www.analysis.uni-hannover.de/de/institut/personenverzeichnis/professor-alexander-strohmaier/research","timestamp":"2024-11-11T11:05:43Z","content_type":"application/xhtml+xml","content_length":"25627","record_id":"<urn:uuid:ced2f479-c3c5-4796-867c-8175153eed0f>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00178.warc.gz"}
|
Severe high altitude aircraft turbulence on thunderstorm peripheries
Monday, 18 January 2010
Handout (370.4 kB)
Most aircraft encounters with severe or greater turbulence associated with thunderstorms occur within a storm's convective core. However, occasionally a pilot reports severe turbulence on a storm's
periphery. Such an encounter can be dangerous because the pilot, thinking the aircraft is safely away from the storm, may not have passengers and equipment secured.
The horizontal divergence at a storm's top may be so large that it alters the ambient flow, leading to conditions favorable for severe turbulence some distance from the primary convection. Unknown in
these cases is whether the altered flow increases vertical wind shear sufficient to generate the turbulence or if the altered flow creates gravity waves which further modify the local environment to
generate it.
To shed light on this question, we examined two high altitude severe turbulence encounters with data computed on high resolution numerical forecast models available at the National Centers for
Environmental Prediction. Since these models successfully predicted the actual convection, we applied the ULTURB algorithm to these two cases to examine the flow characteristics even though ULTURB
was created to predict upper level turbulence in low Rossby number situations typical of synoptic scale flows.
In the first case, near Saint Louis, Missouri, on 8 May 2009, the model data showed that a large mesoscale convective complex over Missouri's southern half produced an outflow jet greater than 130
knots at FL400 in St. Louis's vicinity. This lowered the Richardson number to less than 0.75 but not low enough to expect turbulence to develop. The Lighthill-Ford radiation computed by the ULTURB
algorithm was very large indicating the likelihood that gravity waves were emanating from the storm. Thus, the ULTURB forecast was for severe turbulence near St. Louis at FL400.
In a sidebar, for this case we examine two Lighthill-Ford radiation terms not included in ULTURB. From scaling arguments we expect these terms to be important in high Rossby number flows of
thunderstorm environments, and indeed they are about an order of magnitude larger than the synoptic scale terms in this case but in the same general location as the terms already in ULTURB. However,
because the synoptic scale Lighthill-Ford radiation was already an order of magnitude larger than in typical synoptic scale cases, the additional terms did not improve the successful ULTURB
turbulence prediction. Many more cases need to be documented to determine if these additional terms need to be included in a ULTURB algorithm for thunderstorm peripheries.
The second case, near New Orleans, Louisiana, on 2 April 2009, was a different situation. The thunderstorm complex blocked an already strong jet level flow. This blocking slowed the ambient flow instead of accelerating it as in the first case. The Richardson number in the storm-altered flow was greater than one but, with sufficiently high Lighthill-Ford radiation near New Orleans, ULTURB successfully predicted severe turbulence on the storm's periphery.
We conclude that for these two cases, while the storm-modified flow did lower the flows' Richardson numbers, it was likely the generation of gravity waves that caused the severe turbulence.
|
{"url":"https://ams.confex.com/ams/90annual/webprogram/Paper157512.html","timestamp":"2024-11-05T05:53:46Z","content_type":"text/html","content_length":"13934","record_id":"<urn:uuid:40dfc923-a035-4b7d-aa01-2b0b0f206ca5>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00138.warc.gz"}
|
© 1999-2022 Bill & Melinda Gates Foundation.
This is an online tool to explore the effects of exposure heterogeneity on HIV vaccine efficacy, given a leaky vaccine.
A potential outcome of pathogen exposure heterogeneity (i.e. variation among individuals in the risk of getting infected) is that vaccine efficacy measured from a trial (i.e. the clinical efficacy)
is lower than the biological vaccine efficacy (i.e. the per-exposure or per-contact vaccine efficacy). This distinction, between the per-exposure vaccine efficacy and the clinical efficacy of the
same vaccine, is the focus of this tool. Many authors have explored this issue, e.g. Halloran et al., 1992; White et al., 2010; O'Hagan et al., 2013; Edlefsen, 2014; Coley et al., 2016; Gomes et al.,
2016; Kahn et al., 2018.
Here we use a simple epidemic model with exposure heterogeneity to simulate an HIV vaccine trial. Our goals are to:
1. Raise awareness of the distinction between per-exposure vaccine efficacy, clinical vaccine efficacy, and population vaccine effectiveness
2. Assess if this effect might contribute to the difference between the RV144, HVTN 702, and HVTN 705 vaccine trial outcomes
3. Assist in the design or interpretation of HIV prevention trials, from this exposure heterogeneity perspective, and
4. Use this framework as a means to explore HIV infection risk; more specifically, to delineate differences among individuals in HIV risk that is due to variation in per-exposure infection
probability, HIV prevalence in a contact network, or the number of sexual contacts.
The separate tabs in this R Shiny app include:
1. Model description, showing the structure of the model and the parameters included.
2. Initial example plots, showing how the model works and what simulated epidemic and trial outputs we focus on.
3. Parameter sweeps, which allows you to compare the impact of multiple parameter values in the same plots.
4. Model fitting example, which allows you to use the model to examine specific trial results.
5. Model fitting output, which shows parameter combos that are consistent with a given trial result.
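The central effect — clinical efficacy falling below per-exposure efficacy when exposure varies — can be illustrated with a toy deterministic calculation (my own sketch; the parameter values are invented and this is not the app's actual model):

```python
def attack_rate(p, exposure_counts):
    """Fraction infected when each person faces n independent exposures,
    each with per-exposure infection probability p."""
    return sum(1 - (1 - p) ** n for n in exposure_counts) / len(exposure_counts)

# Hypothetical trial population: half low-risk (1 exposure), half high-risk (50)
exposures = [1] * 5000 + [50] * 5000
p = 0.02               # assumed per-exposure infection probability
per_exposure_ve = 0.5  # biological (per-exposure) vaccine efficacy

ar_placebo = attack_rate(p, exposures)
ar_vaccine = attack_rate(p * (1 - per_exposure_ve), exposures)  # leaky vaccine
clinical_ve = 1 - ar_vaccine / ar_placebo
print(round(clinical_ve, 2))  # prints 0.38 -- well below the per-exposure 0.5
```

Because high-risk participants in both arms are very likely to be infected regardless of vaccination, the measured (clinical) efficacy understates the per-exposure efficacy.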
|
{"url":"https://leakyvaccine.bmgf.io/","timestamp":"2024-11-13T17:51:43Z","content_type":"text/html","content_length":"50905","record_id":"<urn:uuid:0b1e4693-f73d-47d5-a41a-52c1bbdab2e1>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00804.warc.gz"}
|
Statistical Physics and Modeling of Human Mobility
Gallotti, Riccardo. Statistical Physics and Modeling of Human Mobility [Dissertation thesis], Alma Mater Studiorum Università di Bologna, Dottorato di ricerca, 25 Ciclo. DOI 10.6092/unibo/amsdottorato/5198.
In this thesis, we extend some ideas of statistical physics to describe the properties of human mobility. By using a database containing GPS measures of individual paths (position, velocity and
covered space at a spatial scale of 2 Km or a time scale of 30 sec), which includes the 2% of the private vehicles in Italy, we succeed in determining some statistical empirical laws pointing out
"universal" characteristics of human mobility. Developing simple stochastic models suggesting possible explanations of the empirical observations, we are able to indicate what are the key quantities
and cognitive features that are ruling individuals' mobility. To understand the features of individual dynamics, we have studied different aspects of urban mobility from a physical point of view. We
discuss the implications of the Benford's law emerging from the distribution of times elapsed between successive trips. We observe how the daily travel-time budget is related with many aspects of the
urban environment, and describe how the daily mobility budget is then spent. We link the scaling properties of individual mobility networks to the inhomogeneous average durations of the activities
that are performed, and those of the networks describing people's common use of space with the fractional dimension of the urban territory. We study entropy measures of individual mobility patterns,
showing that they carry almost the same information of the related mobility networks, but are also influenced by a hierarchy among the activities performed. We discover that Wardrop's principles are
violated as drivers have only incomplete information on traffic state and therefore rely on knowledge of the average travel-times. We propose an assimilation model to solve the intrinsic scattering of GPS data on the street network, permitting the real-time reconstruction of traffic state at an urban scale.
|
{"url":"http://amsdottorato.unibo.it/5198/","timestamp":"2024-11-02T11:29:09Z","content_type":"application/xhtml+xml","content_length":"41433","record_id":"<urn:uuid:94934286-7304-478e-a945-64fd7a2e2da6>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00391.warc.gz"}
|
Simplifying Algebraic Fractions
An algebraic fraction is a fraction in which the numerator or denominator, or both, include x terms.
Apart from the last term, these fractions cannot be simplified. For an algebraic fraction to be simplified, the numerator or denominator or both must factorise so that there are common factors in
numerator and denominator. These will then cancel, leaving a simpler expression.
The last example above factorises as
The following examples all show how numerator and denominator can be factorised and factors cancelled to leave a simpler expression.
It is important to realise that not all expressions simplify. The second example above only simplifies because all the terms in numerator and denominator have a common factor of 2, which then cancels.
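A worked example of the kind described (my own, since the page's original examples do not survive in this text): both numerator and denominator factorise with a common factor of (x + 1), which cancels.

```latex
\frac{x^{2}-1}{x^{2}+3x+2}
  = \frac{(x-1)(x+1)}{(x+1)(x+2)}
  = \frac{x-1}{x+2}
```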
|
{"url":"https://astarmathsandphysics.com/gcse-maths-notes/651-simplifying-algebraic-fractions.html","timestamp":"2024-11-09T20:57:43Z","content_type":"text/html","content_length":"30024","record_id":"<urn:uuid:f2c9c3f8-6b4e-4224-9fbd-a5aab3059275>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00721.warc.gz"}
|
Advice for students
I am happy to receive applications from students asking for internships, for thesis guidance and project assistant positions in scientific computing. All of my work involves computations and requires
proficiency in working with computers and coding. Here is some advice on what you can do to improve your chances of success in scientific computing.
• You need to know some theory. Here is a list of books I recommend.
• Learn to work on Unix/Linux and the command line. Use a proper text editor like Vi/Vim or Emacs.
• Learn a programming language really well, e.g., Fortran and C/C++. I would recommend learning all three of these. (Matlab is not a programming language. I don't have any projects that can be done in Matlab.)
• Learn Python, which is great for scripting, data analysis and visualization.
• Learn to use some visualization tools like gnuplot, Paraview and VisIt. Python also has good support for visualization, see also PyVista.
• Learn to use a version control system like git.
• Learn parallel programming concepts and MPI.
• Learn linear algebra libraries like Petsc and/or Trilinos.
• Learn Latex. Write your reports/cv/application using Latex. (Video)
• Put up your scientific computing project work on github, gitlab or Bitbucket.
• Take the many online courses being offered these days on scientific computing topics. Do the assignments and exams.
• Finally, send your cv/application in PDF format ONLY.
When you write to me, it is highly desirable if you can show me some proof for all/some of the above in terms of actual code, results and project reports. In your CV, mention all the courses, in
person or online, that you have done in Physics, Applied Math, Numerical Methods, Scientific Computing, etc.
To work with me
The above advice is fairly general for anybody wanting to work in scientific computing, numerical solution of PDE, finite element methods, computational fluid dynamics, etc. If you want to work with
me, it will help if you are able to satisfy some of the above requirements. In addition, you will have a better chance if you have worked with any of the following.
|
{"url":"https://cpraveen.github.io/forstudents.html","timestamp":"2024-11-15T04:47:34Z","content_type":"text/html","content_length":"8663","record_id":"<urn:uuid:0a41ecc4-dcfa-48e5-8404-94d56dd91cb4>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00744.warc.gz"}
|
Top 5 Best Poisson Distribution Calculators
In statistical analysis, the Poisson distribution plays a pivotal role, particularly when predicting the probability of a given number of events happening within a set period. The application of this
distribution is vast, ranging from business forecasting to scientific research. However, accurately calculating Poisson probabilities can be a complex task, necessitating specialized calculators.
These calculators are designed to simplify the process, making it accessible and efficient for professionals and students alike.
Today’s market offers a variety of Poisson distribution calculators, each with unique features and capabilities. Selecting the right calculator is crucial for ensuring accuracy and efficiency in
computations. This article will guide you through the top 5 available Poisson distribution calculators, highlighting their functionalities and use cases. Whether you’re a statistician, a student, or
a professional needing reliable statistical tools, this list will help you find the perfect calculator to meet your specific needs.
Top 5 Poisson Distribution Calculators
The following Top 5 Poisson distribution calculators represent the best of what is available in the market. They range from simple, user-friendly interfaces for beginners to more advanced systems
catering to the needs of seasoned statisticians. Each calculator is evaluated based on its accuracy, ease of use, unique features, and overall reliability. Whether you need a quick calculation for a
classroom assignment or a robust tool for professional data analysis, this list has something for everyone.
1. Stat Trek’s Poisson Distribution Calculator
Stat Trek's Poisson Distribution Calculator is a remarkable tool designed for ease and efficiency in computing Poisson probabilities. This online calculator stands out for its user-friendly interface, allowing users to enter values and obtain individual and cumulative Poisson probabilities easily. To use the calculator, one inputs values into two unshaded text boxes: the Poisson random variable (x) and the average rate of success (μ). Upon clicking the 'Calculate' button, it displays a range of probabilities, including P(X=x), P(X<x), P(X≤x), P(X>x), and P(X≥x).
This calculator is especially useful for students and professionals who require quick and accurate Poisson probability calculations. Its straightforward design ensures that even those with minimal statistical background can navigate and utilize its functions effectively. Additionally, Stat Trek provides resources such as frequently asked questions and sample problems, enhancing the user experience by offering educational support alongside computational capability. The ability to calculate both individual and cumulative probabilities makes it a versatile tool suitable for a wide array of applications, from academic research to professional data analysis. This combination of simplicity, comprehensive functionality, and educational support positions Stat Trek's Poisson Distribution Calculator as a top choice for anyone working with Poisson distribution.
2. University of Iowa’s Poisson Distribution Applet
The University of Iowa's Poisson Distribution Applet is a sophisticated yet user-friendly tool designed for computing probabilities associated with the Poisson distribution. This applet allows users to enter the rate (λ) in a dedicated box, and by hitting "Tab" or "Enter," it plots the probability mass function (pmf) for the given λ. To compute specific probabilities, users can select P(X=x) from a dropdown menu, enter a numeric value for x, and press "Enter." The probability P(X=x) then appears in a designated area. This applet also enables users to select P(X≤x) for left-tail probabilities (cumulative distribution function).
Additionally, the applet provides valuable insights into the Poisson distribution's properties, including the formula for the probability mass function, the mean (μ = E(X) = λ), the variance (σ² = Var(X) = λ), and the standard deviation (σ = SD(X) = √λ). This level of detail makes it an invaluable tool for both educational and professional purposes, particularly for those who require a deeper understanding of the underlying statistical concepts. Its combination of functionality, educational content, and ease of use positions it as an essential tool for anyone working with or studying the Poisson distribution.
3. Statistics Helper’s Poisson Probability Calculator
Statistics Helper's Poisson Probability Calculator is an excellent tool designed to compute Poisson probabilities with precision and ease. Users can input the average number of occurrences in a given time interval (λ) and the number of occurrences (x). The calculator offers options to determine various types of probabilities, including exactly x occurrences, less than x occurrences, at most x occurrences, more than x occurrences, and at least x occurrences.
The practicality of this calculator is evident in its straightforward approach. For example, to find the probability of exactly 6 occurrences with a λ of 4.8, users can directly input these values into the calculator. The tool employs the Poisson probability formula P(x) = e^(−λ)λ^x / x! to calculate the probability, which in this instance equals 0.13979814691511 for P(6). This level of detail and accuracy makes it an invaluable resource for both educational purposes and professional data analysis. The calculator simplifies what would be a cumbersome manual calculation process, making it accessible for users across various proficiency levels.
4. Areppim’s Poisson Distribution Probability Calculator
Areppim's Poisson Distribution Probability Calculator is designed to calculate the probability of a specific number of occurrences of an event over a continuum. This calculator is particularly useful for events like customer entries into a shop, defectives in a batch, cars arriving at a tollgate, or calls arriving at a switchboard, within a specific time interval or space.
To use this calculator, users must enter the mean (average rate of occurrence) and the variable (number of occurrences). The instructions are clear and straightforward: enter the mean in the text field next to "Mean (average rate of occurrence)," ensuring it's a natural number higher than 0, and enter the random variable in the text field next to "Variable (number of occurrences)." To correct an entry, there is a reset button. This calculator is beneficial for its simplicity and direct approach to calculating Poisson probabilities, making it a valuable tool for academic and practical applications in various fields.
5. Statology’s Poisson Distribution Calculator
Statology's Poisson Distribution Calculator is another excellent tool that finds Poisson probabilities associated with a given Poisson mean and a value for a random variable. It's designed to be user-friendly, making it suitable for both beginners and experienced users in the field of statistics. The calculator requires two primary inputs: λ (average rate of success) and x (random variable). The calculator provides various probabilities, such as P(X=x), P(X<x), P(X≤x), P(X>x), and P(X≥x). For instance, it can calculate the probability of a random variable being equal to, less than, at most, more than, or at least a certain number. This level of detail in the outputs makes the calculator a versatile and essential tool for statistical analysis, particularly for those dealing with Poisson distributions in fields like business, science, and engineering.
The choice of a Poisson distribution calculator can significantly impact the efficiency and accuracy of statistical analysis. The top 5 calculators listed here offer a range of functionalities catering to diverse needs and skill levels. From basic calculations to intricate data analysis, these tools provide the necessary support to both novices and expert statisticians. The advancement in computational tools has made complex statistical calculations more accessible, enabling a broader understanding and application of the Poisson distribution in various sectors.
In conclusion, whether you are delving into academic research or tackling real-world data challenges, the right Poisson distribution calculator is a key asset. It simplifies complex computations and
ensures precision and reliability in your results. As statistical analysis continues to evolve, these calculators will remain indispensable tools, empowering users to harness the full potential of
the Poisson distribution in their respective fields.
Recommended articles: Top 10 Cons & Disadvantages of Poisson Distribution and Poisson Distribution: 5 Use Cases with 5 Advantages
|
{"url":"https://projectmanagers.net/top-5-best-poisson-distribution-calculators/","timestamp":"2024-11-08T01:35:11Z","content_type":"text/html","content_length":"566953","record_id":"<urn:uuid:159ea63f-6f28-4f54-b94b-f86498fce0b4>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00659.warc.gz"}
|
Rounding Fractions to the Nearest Quarter
Question Video: Rounding Fractions to the Nearest Quarter
Round 17/36 to the nearest quarter.
Video Transcript
Round seventeen over thirty-six to the nearest quarter.
Now we’ve been given a number between zero and one, and we’ve got to round it to the nearest quarter. Well first, let’s imagine a scale from zero to one. Now if we mark that off in quarters, we’ve
got zero, one quarter, two quarters, three quarters, or four quarters. Well zero is just zero, one quarter is just one quarter. But two quarters, we can simplify to a half. Three quarters doesn’t
simplify, and four quarters is the same as one. So what we’ve got to do in this question is decide, which one of those seventeen thirty-sixths is nearest.
Now I’m just gonna think about how many thirty-sixths represent each of those fractions. Well no thirty-sixths is representing zero. And thirty-six thirty-sixths would represent one. And a half of
thirty-six is eighteen, so eighteen thirty-sixths represents a half. And a quarter is half of a half, so half of eighteen is nine, so nine thirty-sixths represents one quarter. And that leaves us
with three quarters. Well three times one quarter would be three quarters, so three lots of nine thirty-sixths are twenty-seven thirty-sixths.
So whereabouts on that scale would seventeen thirty-sixths sit? Well it would sit in here somewhere. It’s bigger than nine thirty-sixths, but it’s less than eighteen thirty-sixths. So which is it
nearer to, eighteen thirty-sixths, or nine thirty-sixths? Well the difference between seventeen thirty-sixths and eighteen thirty-sixths is just one thirty-sixth. But the difference between nine
thirty-sixths and seventeen thirty-sixths is eight thirty-sixths. So seventeen thirty-sixths is much closer to eighteen thirty-sixths than nine thirty-sixths.
And remember, eighteen thirty-sixths is the same as two quarters, which is equivalent to a half. So to the nearest quarter, we could say it’s two quarters, but we’d probably simplify that to a half.
So our answer is, rounded to the nearest quarter, seventeen thirty-sixths, is approximately equal to a half.
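The rounding procedure in the transcript (multiply by 4, round to the nearest whole number of quarters, then divide by 4) can be sketched in a few lines of Python; the helper name is ours, not Nagwa's:

```python
from fractions import Fraction

def nearest_quarter(frac):
    """Round a fraction to the nearest multiple of 1/4.

    Exact midpoints (like 1/8) follow Python's round(), i.e. banker's rounding.
    """
    quarters = round(frac * 4)  # how many quarters, rounded to the nearest whole number
    return Fraction(quarters, 4)

print(nearest_quarter(Fraction(17, 36)))  # 1/2
```

Here 17/36 × 4 = 17/9 ≈ 1.89 quarters, which rounds to 2 quarters, i.e. one half, matching the transcript's answer.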
Diffusion-limited Reactions – Definition & Detailed Explanation – Astrochemistry Glossary
I. What are Diffusion-limited Reactions?
Diffusion-limited reactions are chemical reactions that are limited by the rate at which reactants can come into contact with each other due to the slow diffusion of molecules. In these reactions,
the rate of reaction is determined by how quickly molecules can move through a medium to reach each other and react. This can lead to slower reaction rates compared to reactions that are not diffusion-limited.
Diffusion-limited reactions are common in systems where the concentration of reactants is low or where the reactants are moving through a medium with limited space for movement. In these cases, the
rate of reaction is limited by the diffusion of molecules rather than the intrinsic reactivity of the molecules themselves.
II. How do Diffusion-limited Reactions occur in Astrochemistry?
In astrochemistry, diffusion-limited reactions can occur in a variety of environments, including interstellar clouds, protoplanetary disks, and the atmospheres of planets and moons. These reactions
are important for understanding the chemical processes that occur in space and can provide insights into the formation and evolution of celestial bodies.
One example of a diffusion-limited reaction in astrochemistry is the formation of complex organic molecules in interstellar clouds. In these clouds, molecules are constantly moving and colliding with
each other, but the low temperatures and densities of the clouds can limit the rate at which reactions can occur. This can result in the formation of complex molecules through slow, diffusion-limited reactions.
III. What factors influence the rate of Diffusion-limited Reactions?
Several factors can influence the rate of diffusion-limited reactions, including the temperature, pressure, and density of the medium in which the reaction is occurring. Higher temperatures can
increase the rate of diffusion by increasing the kinetic energy of molecules, while higher pressures can increase the rate of collisions between molecules.
The size and shape of the molecules involved in the reaction can also affect the rate of diffusion. Larger molecules may diffuse more slowly through a medium, while molecules with a more compact
shape may diffuse more quickly. Additionally, the presence of catalysts or other substances that can facilitate the reaction can also impact the rate of diffusion-limited reactions.
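For the simplest concrete case, two spherical reactants diffusing in a solvent, the factors above combine in the classical Smoluchowski expression k = 4π(D_A + D_B)(r_A + r_B)N_A. This is standard reaction kinetics rather than anything stated in the glossary itself; a quick sketch in Python:

```python
import math

AVOGADRO = 6.02214076e23  # mol^-1

def smoluchowski_rate(d_a, d_b, r_a, r_b):
    """Smoluchowski diffusion-limited rate constant for two spherical reactants.

    d_a, d_b: diffusion coefficients in m^2/s
    r_a, r_b: reactant radii in m
    Returns k in m^3 mol^-1 s^-1 (multiply by 1000 for L mol^-1 s^-1).
    """
    return 4 * math.pi * (d_a + d_b) * (r_a + r_b) * AVOGADRO

# Illustrative values for small molecules in water: D ~ 1e-9 m^2/s, r ~ 0.2 nm
k = smoluchowski_rate(1e-9, 1e-9, 2e-10, 2e-10)
print(f"{k * 1000:.2e} L mol^-1 s^-1")  # on the order of 1e9 to 1e10
```

Faster diffusion (larger D, e.g. from higher temperature) or larger capture radius raises k, which is exactly the dependence the paragraph above describes.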
IV. What are some examples of Diffusion-limited Reactions in space?
Diffusion-limited reactions are common in space and can occur in a variety of environments. One example is the formation of complex organic molecules in the atmospheres of planets and moons. In these
environments, molecules can diffuse slowly through the atmosphere, leading to the formation of complex molecules through diffusion-limited reactions.
Another example is the formation of icy grains in protoplanetary disks. In these disks, molecules can diffuse and collide with each other, leading to the formation of icy grains through
diffusion-limited reactions. These grains can then come together to form planets and other celestial bodies.
V. How do scientists study Diffusion-limited Reactions in astrochemistry?
Scientists study diffusion-limited reactions in astrochemistry using a variety of techniques, including laboratory experiments, computer simulations, and observations of celestial bodies. Laboratory
experiments can be used to simulate the conditions of space and study how diffusion-limited reactions occur in different environments.
Computer simulations can also be used to model diffusion-limited reactions and predict how they may occur in space. By inputting data on the temperature, pressure, and density of a medium, scientists
can simulate the diffusion of molecules and predict the rate of diffusion-limited reactions.
Observations of celestial bodies can provide insights into diffusion-limited reactions in space. By studying the composition of interstellar clouds, protoplanetary disks, and other environments,
scientists can identify the presence of complex molecules formed through diffusion-limited reactions.
VI. What are the implications of Diffusion-limited Reactions for understanding the universe?
Diffusion-limited reactions are important for understanding the chemical processes that occur in space and can provide insights into the formation and evolution of celestial bodies. By studying
diffusion-limited reactions, scientists can learn more about the conditions in different environments and how molecules interact with each other.
Understanding diffusion-limited reactions can also help scientists predict the formation of complex molecules in space, which can have implications for the origins of life in the universe. By
studying the rate of diffusion-limited reactions and the factors that influence them, scientists can gain a better understanding of the chemical processes that occur in space and their role in
shaping the universe.
Factors that Impact Preservice Teachers’ Growth in Conceptual Mathematical Knowledge During a Mathematics Methods Course
Teachers’ conceptual understanding of elementary mathematics is believed to be fundamental to effective classroom level mathematics reform. This study examined preservice teachers’ change in
conceptual mathematical knowledge after taking a reform-based mathematics methods course as part of a teacher certification program, and investigated the relationship between this change and factors
such as preservice teachers’ academic background, initial levels of conceptual and procedural mathematical knowledge and values, and the number of mathematics courses taken in high school and
university. The results of this study suggest that the number of mathematics courses taken in high school may influence growth in conceptual mathematical knowledge, while preservice teachers’
subject-area background and the number of university mathematics courses taken did not appear to influence growth in conceptual mathematical knowledge as needed to teach in a reform-based manner.
This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is
properly cited.
Article Type: Research Article
INT ELECT J MATH ED, Volume 4, Issue 2, May 2009, 57-76
Publication date: 08 Aug 2009
Finding the Next Term of a Given Geometric Sequence
Find the next term of the geometric sequence −5, −5/4, −5/16, −5/64, _.
Video Transcript
Find the next term of the geometric sequence negative five, negative five-fourths, negative five sixteenths, negative five sixty-fourths, and then we will be finding that last term.
A geometric sequence is one where each term is determined by multiplying the previous term by a non-zero constant, called the common ratio. So if we begin with negative five, we would multiply by
this common ratio we don’t know, call it 𝑥, and we would get the next term, negative five-fourths. And then we would take negative five-fourths multiplied by the common ratio 𝑥 which we will find, and we
will get the next term, negative five sixteenths. And then we would take negative five sixteenths times the common ratio, we will get the next term, negative five sixty-fourths. And then lastly, we
would take negative five sixty-fourths times the common ratio to get our last term, what we’re trying to find.
So how do we find this common ratio 𝑥? And it’s the same for every single one. So we can just pick one of the sets and solve for 𝑥. Let’s go ahead and use the first one because it seems the simplest.
So to solve for 𝑥, we need to divide both sides by negative five. When you’re taking fractions and dividing, because really the negative five is a negative five over one, instead of dividing by a
fraction, you multiply by its reciprocal. So we’re multiplying by the reciprocal of negative five over one which is just where you flip it. So we’re multiplying by one over negative five.
So when we multiply fractions, we multiply the numerators and we multiply the denominators. So on the top, we get negative five and on the bottom, we get negative 20. The negatives will cancel and
then five twentieths reduces to one-fourth. So one fourth is that common ratio that we’re going to be multiplying by to every single term so we can get the next one.
Just to double-check, if we plug one-fourth in for 𝑥 back into each of these to find the next term, it does indeed work. Negative five times one-fourth is negative five-fourths; negative
five-fourths times one-fourth is negative five sixteenths, because the top stays negative five and four times four on the bottom gives you 16. And then negative five
sixteenths times one-fourth is negative five sixty-fourths. And then lastly, we can find the last term by multiplying by that one-fourth.
So the next term of the geometric sequence would be negative five two hundred and fifty-sixths.
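The whole calculation (recover the common ratio, check it is the same between every pair of neighbours, then multiply the last term by it) can be verified with exact fractions in Python:

```python
from fractions import Fraction

terms = [Fraction(-5), Fraction(-5, 4), Fraction(-5, 16), Fraction(-5, 64)]

# The common ratio is any term divided by the one before it,
# and it must be the same for every consecutive pair.
ratio = terms[1] / terms[0]
assert all(b / a == ratio for a, b in zip(terms, terms[1:]))

next_term = terms[-1] * ratio
print(ratio, next_term)  # 1/4 -5/256
```

Using `Fraction` avoids any floating-point rounding, so the result is exactly −5/256 as in the transcript.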
Design and Testing of Cooperative Motion Controller for UAV-UGV System
Mechatronics and Intelligent Transportation Systems
Volume 1, Issue 1, 2022
Pages 12-23
Received: 08-11-2022,
Revised: 09-09-2022,
Accepted: 09-25-2022,
Available online: 11-04-2022
Unmanned ground vehicles (UGVs) and quadrotor unmanned aerial vehicles (UAVs) can work together on challenges such as intelligent transportation, thanks to their complementary strengths in
perception, payload, and endurance. This study presents a cooperative control mechanism for a UAV-UGV system. To achieve collaborative trajectory tracking, a leader-follower strategy based on a
centralized control structure is first established in conjunction with the application scenario. A fuzzy robust controller is designed to control the quadrotor UAV and improve attitude stability.
Meanwhile, the UGV's controller uses the pure pursuit algorithm and a proportional integral derivative (PID) controller. To evaluate the cooperative control strategy and algorithm, a
UAV-UGV experimental platform is set up based on the QDrone and QCar, and the experimental results show the viability of the suggested scheme.
Keywords: Unmanned ground vehicles (UGVs), Unmanned aerial vehicles (UAVs), Cooperative motion controller, Fuzzy robust controller
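The abstract names a PID controller, alongside pure pursuit, for the UGV. The paper's actual gains and loop structure are not given here, so what follows is only a generic discrete PID sketch with placeholder gains and a crude first-order plant for illustration:

```python
class PID:
    """Minimal discrete PID controller.

    Illustrative only: the gains below are placeholders,
    not the values used in the paper.
    """

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: drive a toy first-order speed model toward 1.0 m/s.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.02)
speed = 0.0
for _ in range(2000):  # 40 simulated seconds
    u = pid.update(1.0, speed)
    speed += (u - speed) * 0.02  # crude plant: dv/dt = u - v
print(round(speed, 3))
```

The integral term removes the steady-state error that a pure proportional loop would leave, which is why PID loops are a common choice for speed and heading regulation on ground vehicles.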
Li, Y. X. & Zhu, X. Y. (2022). Design and Testing of Cooperative Motion Controller for UAV-UGV System. Mechatron. Intell Transp. Syst., 1(1), 12-23. https://doi.org/10.56578/mits010103
Figure 1. The UGV motion model
High dimensional data
There is a variety of computational techniques and statistical concepts that are useful for analysis of datasets for which each observation is associated with a large number of numerical variables.
In this chapter, we provide a basic introduction to these techniques and concepts by describing matrix operations in R, dimension reduction, regularization, and matrix factorization. Handwritten
digits data and movie recommendation systems serve as motivating examples.
A task that serves as motivation for this part of the book is quantifying the similarity between any two observations. For example, we might want to know how much two handwritten digits look like
each other. However, note that each observation is associated with \(28 \times 28 = 784\) pixels so we can’t simply use subtraction as we would if our data was one dimensional. Instead, we will
define observations as points in a high-dimensional space and mathematically define a distance. Many machine learning techniques, discussed in the next part of the book, require this calculation.
Additionally, this part of the book discusses dimension reduction. Here we search for data summaries that provide more manageable lower dimension versions of the data, but preserve most or all the
information we need. We again use distance between observations as a specific example: we will summarize the data into lower dimensions, but in a way that preserves distance between any two
observations. We use linear algebra as a mathematical foundation for all the techniques presented here.
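As a minimal illustration of the distance idea (the book itself works in R; this sketch uses Python, and the 784-entry vectors are toy stand-ins for the digit images):

```python
import math

def euclidean(x, y):
    """Euclidean distance between two observations viewed as points in R^n."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

# Two toy "images": 784-pixel vectors, as for the 28x28 handwritten digits.
digit_a = [0.0] * 784
digit_b = [0.0] * 783 + [3.0]
print(euclidean(digit_a, digit_b))  # 3.0
```

Simple subtraction of two 784-dimensional observations gives a vector, not a number; the distance collapses that vector into a single comparable quantity, which is what the similarity and dimension-reduction techniques described above rely on.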
Which property justifies this statement? x − 4 = x − 4. Reflexive Property of Equality, Substitution Property of Equality, Transitive Property of Equality, Symmetric Property of Equality
Methods of statistical physics and computer simulations
Method of collective variables. The Lviv school of statistical physics is well known in the physics community mainly for the method of collective variables, elaborated for the description of
collective effects in classical systems of interacting particles. An extremely fruitful idea was the separation of a basic system by means of averaging the Jacobian of the transition to collective
variables over the short-range interactions.
A modification of the method of collective variables for application to quantum systems of interacting particles is known in the literature as the method of displacements and
collective variables. Its main idea is the separation from the statistical operator of a part that characterizes the interaction of the quantum wave packets of particles. The suggested
approach proved extremely fruitful for the description of systems of interacting Bose and Fermi particles.
Theoretical description of phase transitions. Based on the method of collective variables, a method was suggested for the theoretical description of phase transitions, and of the critical
phenomena connected with them, for a broad range of problems of statistical physics. This method rests on an original scheme, suggested by I.R. Yukhnovskii, for calculating the partition function of the
three-dimensional Ising model, which makes use of the idea that specific collective variables exist in the phase space whose average values are connected with the order parameter. Namely this
opened a way to the construction of a consistent microscopic theory of phase transitions, in particular for magnetic and ferroelectric systems, the region of the liquid-gas critical point, demixing
phenomena, etc. A generalization of the grand canonical distribution was constructed.
A method was developed for calculating the free energy in the vicinity of a second-order phase transition point in an applied external field. The effect of the external field on the behaviour of
physical quantities of the system (susceptibility, specific heat, order parameter, etc.) close to the phase-transition temperature was studied, and their dependence on microscopic parameters was
established. A method was formulated for calculating the critical indices of “ordinary” and “special” phase transitions in massive field theory for three-dimensional semi-confined systems. It was
shown that surface disorder does not affect the special transition, while bulk disorder changes the critical indices of the ordinary transition. Based on an original approach, the critical indices
of the Lifshitz point with an arbitrary number of anisotropy axes were calculated, and the anisotropy index was shown to be non-classical.
Multi-density formalism. Within the concept of association in the statistical theory of liquids, a multi-density formalism was developed that permitted the methods of simple fluids to be extended
to complex liquids. In this scheme the potentials of intermolecular interaction are separated into three parts: short-range repulsion, which defines the structure of simple fluids; long-range
interactions, which have electrostatic origin and define the energetic properties of liquids; and short-range strongly attractive interactions, which are responsible for the formation of various
associative complexes and clusters. The theory is based on a diagram technique and on a combination of expansions in activity and density for the correlation functions that are used to describe
associative and non-associative interactions.
Analytical approaches in the theory of dynamic mean field. For strongly correlated electron systems – materials and compounds to which the single-electron approach cannot be applied – original
tight-binding approaches were developed that make use of the technique of Hubbard operators and diagram expansions for Green functions. Within the theory of dynamic mean field, a new
analytic scheme was developed, based both on the formalism of the auxiliary Fermi field and on the generating-functional approach. On this basis, for models of the Hubbard-model class, a
sequence of analytical approximations was constructed (such as a generalization of the Hubbard-III approximation), within which the thermodynamics and the features of the electron spectrum were
studied in the region of metal-nonmetal transitions as well as mixed-valence transitions.
A general approach was developed to the construction of spectral relations for many-time correlation functions, with special attention to the treatment of non-ergodic contributions. A
representation of many-time Green functions via spectral densities was obtained, and the inverse problem was solved. Using these spectral relations, general relations were obtained that connect the
cross-section of inelastic scattering of electromagnetic waves with many-time Green functions, with all contributions – non-resonant as well as resonant and mixed – taken into account.
Method of generalized collective modes. An approach of generalized collective modes within the statistical hydrodynamics of simple and many-component liquids was suggested for the investigation of
collective behaviour and generalized transport coefficients. Within this approach the time correlation functions are represented as a separable sum of contributions, with different weight
coefficients, from the dynamic eigenmodes of the system, which describe relaxing and propagating processes in liquids. A theory of non-hydrodynamic collective excitations in simple and
many-component liquids was constructed, which permitted for the first time a consistent description of such collective processes in liquids as structural relaxation, heat waves, and excitations of
optic-phonon type.
Based on N. Bogolyubov’s ideas on the reduced description of non-equilibrium processes and on the method of the non-equilibrium statistical operator, a methodology was developed for the consistent
description of kinetic and hydrodynamic processes in dense gases, plasma, and liquids. A formulation of non-equilibrium thermofield dynamics for quantum-field systems was suggested. A combination
of the methods of Zubarev’s non-equilibrium statistical operator and Green functions was formulated and developed for quantum spatially non-uniform electron systems, and its connection to
time-dependent density- and current-density-functional theories was shown.
A method for calculating the electron structure of transition and rare-earth metals was suggested on the basis of the formalism of completely orthogonalized plane waves (COPW). Within this
approach, the ab initio pseudopotentials are represented via a unitary transformation of the initial atomic potentials on a complete and orthogonal set of basis functions. Owing to the absence of
the overcompleteness present in the OPW set, it was possible to obtain ab initio COPW pseudopotentials free of the drawbacks that affect the majority of ab initio pseudopotentials.
New numerical methods. In the field of numerical schemes for integrating the equations of motion in classical and quantum simulations, a class of high-precision algorithms was proposed. Within the
approach of factorization of evolution operators, a complete classification and consistent derivation was performed of all explicit decomposition algorithms with up to 11 exponential operators.
Hence, in addition to the 8 previously known algorithms, 37 new schemes were obtained with precision improved by up to 6 orders in the time step. It was shown that some of the new algorithms can be
more than two orders of magnitude more efficient than well-known integrators such as Verlet, Forest-Ruth, Suzuki, Lee, etc.
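For context on the decomposition algorithms mentioned above: the basic second-order member of this family is the velocity Verlet scheme, obtained from a kick-drift-kick factorization of the evolution operator. A minimal Python sketch for a 1-D harmonic oscillator follows; the 37 higher-order schemes are not reproduced here.

```python
import math

def verlet(x, v, force, dt, steps):
    """Velocity Verlet: a kick-drift-kick splitting of the evolution operator."""
    a = force(x)
    for _ in range(steps):
        v += 0.5 * dt * a   # half kick
        x += dt * v         # drift
        a = force(x)
        v += 0.5 * dt * a   # half kick
    return x, v

# Harmonic oscillator with omega = 1: the symplectic splitting keeps the
# energy 0.5*v^2 + 0.5*x^2 close to its initial value of 0.5.
x, v = verlet(1.0, 0.0, lambda q: -q, dt=0.01, steps=1000)
energy = 0.5 * v * v + 0.5 * x * x
print(round(energy, 6))  # stays near the initial 0.5
```

Higher-order members of the family compose several such kicks and drifts with tuned coefficients, which is the structure behind the classification of decomposition schemes described above.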
Algorithms for integrating the equations of motion were also developed for polymer liquid-crystal systems within the method of molecular dynamics. In particular, the thermostat and barostat
methods were generalized within the constant-pressure, constant-temperature ensemble for systems of spherical and anisometric particles. This made it possible to describe effectively the internal
structure and intramolecular dynamics of a number of complex polymer liquid crystals, in particular brush-like and dendritic systems. The proposed algorithms were successfully applied to describe
the main mechanisms of the experimentally observed phenomenon of photo-induced deformations in azobenzene films, caused by the process of photoisomerization.
Algebra Tutors
Top Algebra Tutors serving Chennai
Mathew Paul : Chennai Algebra tutor
Certified Algebra Tutor in Chennai
...of experience in teaching in Management school . In my carrier also I used to bring the best from the team through understanding their passion and placing them in their respective role so that I
could bring the best out of them. This helped us to win more businesses and in turn profit increased
Fathima: Chennai Algebra tutor
Certified Algebra Tutor in Chennai
...from Stella Maris College and Indian Institute of Technology Madras. My academic foundation combined with teaching tenure equips me with the expertise to cultivate profound conceptual
understanding among students. I adeptly tailor the complexity of topics to suit the needs of learners, consistently assessing their comprehension levels and fostering a conducive learning environment
for all.
Christopher: Chennai Algebra tutor
Certified Algebra Tutor in Chennai
...brain. After graduating, I served a year, full time, as an Americorps member with City Year, tutoring and mentoring 7th and 8th grade students in the Englewood Community of Chicago in 1-on-1 and
small group settings. I am currently working toward a TEFL certification that would allow me to teach English as a foreign language...
Rebecca: Chennai Algebra tutor
Certified Algebra Tutor in Chennai
...pursuing a masters degree in social work at the University of Chicago's School of Social Service Administration. I am passionate about tutoring because I want everyone to be able to perform
academically at their highest possible level. I also think it is important that learning is not just for one assignment or one test, but...
Ben: Chennai Algebra tutor
Certified Algebra Tutor in Chennai
...work together to pick up the pieces that were lost or are close, but not quite there: learning the way you want to learn and the way you learn best rather than having topics taught at you. For
standardized testing: I have taken the special classes and labored through the workbooks to prepare for taking...
Max: Chennai Algebra tutor
Certified Algebra Tutor in Chennai
...as they work their way towards college. I volunteer in New Haven as a math teacher and coach at a public middle school (my team of 7th graders just won the 2016 New Haven regional championship!),
and I've been tutoring middle and high school students for over 7 years. On top of all of that,...
Kaitlin: Chennai Algebra tutor
Certified Algebra Tutor in Chennai
...them during high school. I attended a private school where I was given the opportunity to discover a large range of topics I had never thought about pursuing, such as engineering and biomedical
sciences. In college, I earned my Bachelor of Arts degree in Linguistics, while minoring in Arabic and gaining a certificate in Middle...
Adam: Chennai Algebra tutor
Certified Algebra Tutor in Chennai
...an expert on schizophrenia to map the epigenetics involved in schizophrenia onset. The application of Dr. William Walsh's research, using nutrients like zinc, SAMe, or vitamins to improve mental
illnesses is inspiring. I enjoy being able to improve the lives of people, and I like medical research because it enables me to help more people....
Molly: Chennai Algebra tutor
Certified Algebra Tutor in Chennai
...can do more than simply help students with homework. My approach to aiding them is three-fold. I will work to: 1. Review and reteach material from her school that has proven difficult. 2.
Diagnostically assess for instructional gaps and then fill in gaps to bolster a strong subject area foundation. 3. Impart test-taking skills and...
Jean: Chennai Algebra tutor
Certified Algebra Tutor in Chennai
...her, I learned that a meaningful education requires more than just knowing facts and how to find the right answers. A good teacher must help students recognize their individual learning styles,
discover any flaws in their foundation knowledge, set reasonable goals, and look for deeper meanings behind why a problem is best solved a certain...
Koissi: Chennai Algebra tutor
Certified Algebra Tutor in Chennai
...distinction, a title earned for scoring above average on 7 AP tests. As a Senior in High School while taking AP Calculus AB, I was a peer tutor in my school's Pre-Calculus course. Other tutoring
experience comes from volunteering to help my peers in their math and Computer Science classes. I am an avid programmer...
Matt: Chennai Algebra tutor
Certified Algebra Tutor in Chennai
...science. While I do not have any formal experience with teaching, I have spent much of my academic career tutoring friends and family to help them succeed in school, and I have personally improved
the grades of many of my peers through studying sessions with me by as much as 30%. I am a friendly...
Chandler: Chennai Algebra tutor
Certified Algebra Tutor in Chennai
...students with their education. There is little that is more rewarding than aiding a struggling student successfully. In high school, I was a member of the National Honors Society and relished the
opportunities that organization provided me to tutor others. I love learning, and I love sharing that with others. What better way to engage...
Kacey: Chennai Algebra tutor
Certified Algebra Tutor in Chennai
...a math tutor. I am invested in each of my students' success and work hard to help them succeed. Consequently, each of my past tutoring clients has seen significant improvement. I am a big believer
in catering to students' learning styles. I pay attention to whether they are an audio, visual, or kinesthetic learner, and...
Mark: Chennai Algebra tutor
Certified Algebra Tutor in Chennai
...my scholastic career. I love to help others, and especially help others do their best. I am always open to help someone in need of help. I have helped friends and family increase their performance
in math ever since I was in the 8th grade. I feel that any student can achieve anything if they...
Mark: Chennai Algebra tutor
Certified Algebra Tutor in Chennai
...am a recent Yale graduate with a B.S in chemical engineering. I have over 5 years of experience tutoring a wide range of subjects, and I am very passionate about math and science. My favorite part
of tutoring is instilling confidence into students and making them feel that they can understand and enjoy a subject.
Rick: Chennai Algebra tutor
Certified Algebra Tutor in Chennai
Providing guidance in science and mathematics to help students understand the real-world applications of scientific and mathematical concepts in their everyday lives. I have a B.S.E. in Aerospace
Engineering and a B.S. in Sustainable Energy Materials and Technology, and I am pursuing my M.S. in Aerospace Engineering.... Learning science and mathematics through real-world applications that
show how concepts can actually be used to solve real problems.
Javier: Chennai Algebra tutor
Certified Algebra Tutor in Chennai
...of an understatement. Math is the most beautiful thing in the universe, and I want open everybody's eyes to this deep and wonderful truth. Math is something to be enjoyed. I believe that while
knowing the formulas and procedures is important, memorizing mathematical formulas without appreciating the concepts is akin to memorizing the lines of...
Jennifer: Chennai Algebra tutor
Certified Algebra Tutor in Chennai
I am committed to providing my students with the best learning experience achievable. I strive to cater to their needs and challenge them to achieve their academic goals. ... I believe that every
student can reach their personal potential. In order to do so, both student and educator must communicate with each other to use the student's strengths to overcome any challenges the student may
Joshua: Chennai Algebra tutor
Certified Algebra Tutor in Chennai
...is that even when we are preparing for a standardized exam, we should always be relating the work to your passions, pursuits, and educational goals to make the work that much more meaningful. In
my spare time, you'll often find me learning bits of new languages for fun or writing. I am currently fluent in...
Private Online Algebra Tutoring in Chennai
Our interview process, stringent qualifications, and background screening ensure that only the best Algebra tutors in Chennai work with Varsity Tutors. To assure a successful experience, you're
paired with one of these qualified tutors by an expert director - and we stand behind that match with our money-back guarantee.
Receive personally tailored Algebra lessons from exceptional tutors in a one-on-one setting. We help you connect with online tutoring that offers flexible scheduling.
Chennai Algebra Tutoring FAQ
Varsity Tutors can get you connected with an exemplary Chennai Algebra tutor. It doesn't matter if you're in secondary school at Government Girls' High School or Loyola Matriculation Higher Secondary
School, or a college student at India Institute of Technology Madras. If you're looking to improve your Algebra skills, securing the services of an Algebra tutor in Chennai is a smart move.
There are multiple benefits to working with a Chennai Algebra tutor. The first major advantage is having an expert on the subject matter at your fingertips. When you're in a classroom setting, you
obviously have a seasoned educator guiding you through the relevant material. Once you leave that classroom, though, you're on your own. Remembering everything that's been presented to you or trying
to wrap your mind around something that didn't necessarily make sense in the moment can be headache-inducing. Sometimes you just need a little extra help. With an Algebra tutor in Chennai, you'll
have regularly scheduled one-on-one time with an experienced professional who not only knows this stuff like the back of their hand, but also has experience teaching different methods of getting to
the right answer. The end result is that you can be better equipped to tackle Algebra with confidence.
The second benefit to working with a Chennai Algebra tutor is that you have someone assisting you at your pace in a way that's tailored to your learning style. The fact of the matter is that everyone
learns differently. Some people can read something once and have it down pat. Other people require repetition to truly absorb information. With some students, talking it out and working through
different examples helps them understand a concept. The Algebra tutor in Chennai you wind up working with can take the time to understand which tactics work best for you and design a lesson plan
around that. The pace of those lessons can likewise depend on your needs. Just like people learn in different ways, they also learn at different rates on different topics. For instance, you might be
able to master polynomials in a day, but find yourself banging your head against a wall when it comes to the quadratic formula. No need to worry. Your Chennai Algebra tutor isn't tied to some generic
lesson plan. Your time together is about helping you succeed, which means time spent together can be allocated in whichever way will most benefit you.
The third benefit to working with an Algebra tutor in Chennai is that they can help you develop healthy strategies when it comes to both studying for and performing on tests on the subject. Most
people don't like taking tests. They can be stressful. If you're working with a Chennai Algebra tutor, though, they can help diminish some of that stress. Before you have a test date breathing down
your neck, they can work with you on building positive study skills like effective time management and planning as well as tactics for memory retention. In advance of the test, they can work with you
to do a deep dive on the material the test will cover and generate practice tests to get you used to the likely question structures and the pressure of time constraints. As a result, you'll be able
to go into the test with greater confidence.
It doesn't matter what your reasoning for wanting to improve your mastery of Algebra might be. If you're willing to put in the time and effort, seeking out the assistance of an Algebra tutor in
Chennai is a wise choice. Don't wait! Contact Varsity Tutors today to get started in less than 24 hours.
Your Personalized Tutoring Program and Instructor
Identify Needs
Our knowledgeable directors help you choose your tutor with your learning profile and personality in mind.
Customize Learning
Your tutor can customize your lessons and present concepts in engaging, easy-to-understand ways.
Increased Results
You can learn more efficiently and effectively because the teaching style is tailored to you.
Online Convenience
With the flexibility of online tutoring, sessions can be arranged at a time that suits you.
Top International Cities for Tutoring
How do you graph x-y=3 using the x and y intercepts? | HIX Tutor
How do you graph #x-y=3# using the x and y intercepts?
Answer 1
We can express any linear equation in two variables in intercept form x/a + y/b = 1; here x - y = 3 becomes x/3 - y/3 = 1, so the line meets the x-axis at (3, 0) and the y-axis at (0, -3). To plot the equation, mark these two points on the x- and y-axes and draw a straight line through them.
Answer 2
To graph the equation (x - y = 3) using the x and y intercepts, we first find the x-intercept by setting (y = 0) and solving for x:
(x - 0 = 3)
(x = 3)
So, the x-intercept is at the point (3, 0).
Next, we find the y-intercept by setting (x = 0) and solving for y:
(0 - y = 3)
(-y = 3)
(y = -3)
So, the y-intercept is at the point (0, -3).
Plot these two points on the graph and draw a straight line passing through both points. This line represents the graph of the equation (x - y = 3).
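The same steps can be checked programmatically. Below is a minimal Python sketch for any line of the form a·x + b·y = c; the helper name `intercepts` is illustrative, not from any library:

```python
def intercepts(a, b, c):
    """Return the x- and y-intercepts of the line a*x + b*y = c.

    The x-intercept sets y = 0 (so x = c/a); the y-intercept sets
    x = 0 (so y = c/b). An intercept is None when its coefficient is
    zero, i.e. the line is parallel to that axis.
    """
    x_int = (c / a, 0.0) if a != 0 else None
    y_int = (0.0, c / b) if b != 0 else None
    return x_int, y_int

# x - y = 3  corresponds to  a = 1, b = -1, c = 3
print(intercepts(1, -1, 3))  # ((3.0, 0.0), (0.0, -3.0))
```

This reproduces the two points found above: (3, 0) and (0, -3).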
Phone: (650)723-2081
A CV in PDF is here.
A list of publications in PDF is here.
A list of selected recent publications in PDF is here.
The Mathematics Department web page is here.
The Institute for Computational Mathematics and Engineering (ICME) web page is here.
In the past I have been interested in waves and diffusion in inhomogeneous or random media and in the mathematical analysis of multi-scale phenomena that arise in their study. Applications come
from electromagnetic wave propagation in the atmosphere, underwater sound, waves in the lithosphere, diffusion in porous media, etc. I have studied both linear and nonlinear waves and diffusion,
in both direct and inverse problems, in imaging in particular. I am now working on assessing multiple scattering effects in imaging and communication systems, including time reversal arrays, and
the use of optimization methods in imaging.
Another on-going interest is financial mathematics, the use of asymptotics for stochastic equations in analyzing complex models of financial markets and in data analysis. I am also interested in
the modeling and analysis of systemic effects in multi-agent systems and interacting markets, as well as the properties of highly diversified portfolios.
by Josselin Garnier and George Papanicolaou,
□ A book on waves in random media
by J.-P. Fouque, J. Garnier, G. Papanicolaou and K. Solna, Springer, 2007.
The Table of Contents is here. The book is also available from here .
□ A book on Stochastic Volatility
by Jean-Pierre Fouque, George Papanicolaou, K. Ronnie Sircar and K. Solna
□ Reprint of a book on Homogenization
by Alain Bensoussan, Jacques-Louis Lions and George Papanicolaou
is a reprint of the 1978 edition published by the American Mathematical Society in 2011.
The book is also available from here .
Who is Aryabhatta short note?
Aryabhatta is a renowned mathematician and astronomer of ancient India. He was born in 476 CE in Bihar. He studied at the University of Nalanda. One of his major works was Aryabhatiya, written in 499 CE.
What was the full name of Aryabhatta?
Aryabhata (Sanskrit: आर्यभट, ISO: Āryabhaṭa) or Aryabhata I (476–550 CE) was an Indian mathematician and astronomer of the classical age of Indian mathematics and Indian astronomy.
Notable works: Āryabhaṭīya, Arya-siddhanta
What is Aryabhata best known for?
Aryabhata became famous as a mathematician and astronomer. In his only surviving work, Aryabhatiya, he covered a wide range of topics, such as extracting square roots, solving quadratic equations,
and predicting eclipses.
Who was Aryabhatta one word?
Aryabhatta (476–550 CE) was one of the first and the finest mathematician-astronomers from the classical age of Indian Mathematics and Astronomy. His works include the Aryabhatiya and the Surya
Siddhanta. Aryabhatta was a famous astronomer and one of the finest Indian mathematicians of the classical age.
When was Aryabhatta born?
Aryabhata was born in 476 AD in Pataliputra.
Which book was written by Aryabhatta?
The Āryabhaṭīya of Āryabhaṭa: An Ancient Indian Work on Mathematics and Astronomy
Who launched Aryabhata?
The Aryabhata spacecraft, named after the famous Indian astronomer, was India’s first satellite; it was completely designed and fabricated in India and launched by a Soviet Kosmos-3M rocket from
Kapustin Yar on April 19, 1975.
How old was Aryabhatta when he died?
Aryabhata died at the age of 74 (476 AD–550 AD).
What did Aryabhata satellite discover?
Aryabhata weighed 360 kilograms, with instruments to explore conditions in Earth’s ionosphere, measure neutrons and gamma rays from the Sun, and perform investigations in X-ray astronomy.
Copyright (c) George Ungureanu KTH/EECS/ESY 2019-2020
License BSD-style (see the file LICENSE)
Maintainer ugeorge@kth.se
Stability experimental
Portability portable
Safe Haskell Safe
Language Haskell2010
This library is an unofficial alternative to Vector, meant for simulations of large data for which the atom-based Vector is likely to become too cumbersome. Fast Vector functions do not use atoms, but rather use Prelude functions on a wrapped newtype over a native Haskell type (in this case lists). The API mirrors the exported functions of ForSyDe.Atom.Skel.Vector and its submodules, so that switching between libraries can be done seamlessly just by replacing Vector with FastVector in the library import.
newtype Vector a Source #
In this library Vector is just a wrapper around a list.
Functor Vector Source #
Defined in ForSyDe.Atom.Skel.FastVector.Lib
Applicative Vector Source #
Defined in ForSyDe.Atom.Skel.FastVector.Lib
Foldable Vector Source #
Defined in ForSyDe.Atom.Skel.FastVector.Lib
Eq a => Eq (Vector a) Source #
Defined in ForSyDe.Atom.Skel.FastVector.Lib
Show a => Show (Vector a) Source #
Defined in ForSyDe.Atom.Skel.FastVector.Lib
farm41 :: Applicative f => (a1 -> a2 -> a3 -> a4 -> b) -> f a1 -> f a2 -> f a3 -> f a4 -> f b Source #
farm51 :: Applicative f => (a1 -> a2 -> a3 -> a4 -> a5 -> b) -> f a1 -> f a2 -> f a3 -> f a4 -> f a5 -> f b Source #
Game Theory (Part 3) - Weak Dominance, Iterated Deletion and Common Knowledge
This is part 3 of my series on game theory. The series follows the lectures of Ben Polak, which are available on the Open Yale Courses website.
In the previous entry, we introduced some of the formal notation needed for game theory and used it to give a formal definition of the concept of strict dominance. In this entry, we continue by first examining the concept of weak dominance and then exploring the iterated deletion of dominated strategies, as well as the concept of common knowledge.
1. The Hannibal Game
Hannibal Barca was a famous Carthaginian military commander and tactician. He is most renowned for marching an army, complete with war elephants, through the Pyrenees and the Alps into Northern
Italy, during the Second Punic War. Following his arrival in Italy he won a number of notable victories over the Roman army.
His choice of invasion route is widely held-up as an example of shrewd military planning. But was it really that shrewd? We can’t provide a definitive analysis here, but we can construct a simple
game theoretic model that provides some insight.
Here’s the set up: An invader is thinking of invading a country and there are two passes through which they can choose to invade. One of the passes is hard and one is easy (from the invader's
perspective). The defender must defend but only has enough troops to defend one pass.
Payoffs in this game will be measured in terms of the number of battalions that the invader will arrive with (or are captured by the defender). There is a maximum of two battalions. Suppose that if
both players choose the easy path, the defender can expect to win one battalion from the invader. Suppose that if both choose the hard pass, the defender will win both battalions. Finally, suppose
that the hard pass is so difficult that even if the invader is unimpeded, he can expect to lose a battalion.
The following is the game matrix for this game:
If you were the defender in this game, what would you choose to do? Think about it for a moment or two and then come back to me....
What did you decide? It is suggested that you should choose to defend the easy pass even though it is not a strictly dominant strategy. Why do we make this suggestion? Well consider the following:
• (i) If you defend the easy pass, then the invader is indifferent between the easy pass and the hard pass. In other words, he could choose either since they both yield the same payoff for him.
• (ii) If you defend the hard pass, then the invader definitely prefers the easy pass.
Technically, what we say is that for the invader, the easy pass weakly dominates the hard pass. Which gives us the following definition:
Weak Dominance: Player i’s strategy Si* is weakly dominated by strategy Si if:
□ Ui (Si, S-i) ≥ Ui (Si*, S-i) for all S-i and;
□ Ui (Si, S-i) > Ui (Si*, S-i) for some S-i.
Clearly, this holds true for the invaders strategy e relative to strategy h.
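The definition can be checked mechanically. Below is a minimal Python sketch; the payoff numbers are the battalion counts read off from the setup above (the invader's payoff being the number of battalions that arrive), and the function name `weakly_dominates` is illustrative:

```python
# Invader's payoff (battalions arriving), indexed by
# (invader strategy, defender strategy); 'e' = easy pass, 'h' = hard pass.
invader_payoff = {
    ('e', 'e'): 1,  # defender at the easy pass wins one battalion
    ('e', 'h'): 2,  # easy pass unimpeded: both battalions arrive
    ('h', 'e'): 1,  # hard pass unimpeded: one battalion lost to terrain
    ('h', 'h'): 0,  # defender at the hard pass wins both battalions
}

def weakly_dominates(s, s_star, opponent_strategies, payoff):
    """s weakly dominates s_star: at least as good against every opponent
    strategy, and strictly better against at least one."""
    at_least = all(payoff[(s, t)] >= payoff[(s_star, t)] for t in opponent_strategies)
    strictly = any(payoff[(s, t)] > payoff[(s_star, t)] for t in opponent_strategies)
    return at_least and strictly

print(weakly_dominates('e', 'h', ['e', 'h'], invader_payoff))  # True
```

Running it confirms that the easy pass weakly (but not strictly) dominates the hard pass for the invader: the two strategies tie when the defender guards the easy pass, and e is strictly better when the defender guards the hard pass.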
According to this model, Hannibal’s decision to invade via the Alps seems irrational. But the model is only as good as the assumptions that go into it. In our case, we simplified massively from the
original set of circumstances. In reality, it is likely that there were uncertainties about the payoffs associated with the different routes. These uncertainties could have made Hannibal’s decision
more rational.
2. The Numbers Game
One of the key ideas in game theory is dominance solvability. This is when you solve a game through the iterated deletion of dominated strategies. Here's a simple game that illustrates this:
Suppose you are in a class of 50 students (the precise number doesn’t matter) and you are all asked to play the following game. You are given a sheet of paper on which you must write a number
between 1 and 100. You are told that the average number for the class will be calculated and the person who writes the number that is closest to being 2/3 of that average will win a prize of some
kind. Assuming you would like to win, what number should you write on the sheet of paper?
This game forces you to make use of one of the lessons from part one: namely, putting yourself in other people’s shoes, imagining what they are likely to do, and then determining your own strategy in
response to your assumptions about the other player.
To solve the game, I suggest picking an expected average number (pretty much at random) and see whether writing a number that is two thirds of this average holds up to scrutiny. As follows:
• (1) If everyone were to write a number at random, then we might expect the average number in the class to be 50, thus if you wrote a number that was roughly two thirds of 50, you could expect to
win. Therefore, you should write 33 or 34.
• (2) The problem is that people don’t choose at random. If they follow the same reasoning pattern as you do, then 33-34 would be the expected average. So you should write a number that is two
thirds of this average, i.e. approx. 22.
• (3) But, of course, this reasoning process is available to all players, and if they follow it, then 22 would be the expected average. So you should write a number that is two thirds of this.
• (4) This reasoning process continues on and on until you reach the number 1.
What’s happening in this game? The answer: an iterated deletion of dominated strategies. To see this in more detail, start the analysis once again from scratch. Note that any number chosen above 67 is going to be weakly dominated by 67, so you can remove any number above 67 from the
set of viable strategies. Once you do this, any number above 45 becomes weakly dominated and so must be removed from the set of viable strategies. This process of elimination continues until you
reach the number one.
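The elimination process can be simulated directly. A small Python sketch, using floor rounding as one reasonable convention for "two thirds" over integer guesses:

```python
import math

def iterated_deletion(upper=100):
    """Repeatedly delete guesses above two thirds of the current maximum
    viable guess; return the sequence of surviving upper bounds."""
    bounds = [upper]
    while True:
        new_upper = max(1, math.floor(2 * bounds[-1] / 3))
        if new_upper == bounds[-1]:
            return bounds
        bounds.append(new_upper)

print(iterated_deletion())
# [100, 66, 44, 29, 19, 12, 8, 5, 3, 2, 1]  (only the guess 1 survives)
```

After eleven rounds of deletion, only the number 1 survives, matching the argument above.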
Of course, if you really did have to play this game, you should take into account how strategically savvy your opponents are.
3. Common Knowledge
The numbers game illustrates another important phenomenon in game theory: common knowledge. Two examples will help us to understand this phenomenon.
Consider first the following diagram. It depicts two people wearing pink hats. Person X can see that person Y is wearing a pink hat; person Y can see that person X is wearing a pink hat; but neither
knows the colour of their own hat. In this case, the fact that both are wearing pink hats is not common knowledge, it is only mutual knowledge.
This example suggests that common knowledge is a pretty subtle thing. Formally, it is defined as follows:
Common Knowledge: Proposition P is common knowledge between X and Y, iff X knows P and Y knows P, X knows that Y knows P and Y knows that X knows P, X knows that Y knows that X knows P and so on
ad infinitum.
Common knowledge is thought to underlie much of social life and can create enormous problems. This is humorously illustrated by our second example: a famous scene from the movie The Princess Bride.
What are two types of hypotheses used in a hypothesis test?
Answer and Explanation: The two types of hypotheses that are used in hypothesis testing are the null and alternative hypotheses.
What is a formal hypothesis test?
We use formal hypothesis testing to decide if a sample result is significantly different enough (beyond the regular experimental variation) to disprove a population claim or expected value.
Which of the following is an example of a nondirectional hypothesis?
A nondirectional hypothesis states that there is a relationship between a high-fat diet and weight gain, without specifying the direction of the effect. By contrast, a directional hypothesis would state that there is a positive relationship between a high-fat diet and weight gain.
What is a nondirectional alternative hypothesis?
A nondirectional hypothesis is a type of alternative hypothesis used in statistical significance testing. For a research question, two rival hypotheses are formed. The alternative hypothesis states
that an observed difference is likely to be genuine and not likely to have occurred by chance alone.
What is a one-tailed hypothesis?
A one-tailed test is a statistical hypothesis test set up to show that the sample mean would be higher or lower than the population mean, but not both.
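The one-tailed/two-tailed distinction shows up directly in the p-value. Below is a minimal Python sketch of a z-test using only the standard library; the numbers (claimed mean 100, known sigma = 15, observed sample mean 104 with n = 36) are made-up illustration data, not from any source above:

```python
from statistics import NormalDist
from math import sqrt

mu0, sigma, n, xbar = 100, 15, 36, 104

z = (xbar - mu0) / (sigma / sqrt(n))               # test statistic
p_one_tailed = 1 - NormalDist().cdf(z)             # H1: mean > 100
p_two_tailed = 2 * (1 - NormalDist().cdf(abs(z)))  # H1: mean != 100

print(round(z, 2), round(p_one_tailed, 4), round(p_two_tailed, 4))
```

When the statistic falls in the hypothesized direction, the two-tailed p-value is exactly double the one-tailed one, so a result can look more "significant" under a one-tailed test; this is why the direction must be chosen before seeing the data.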
What is null hypothesis in research?
A null hypothesis is a type of hypothesis used in statistics that proposes that there is no difference between certain characteristics of a population (or data-generating process).
What Is Called A Motion?
In physics, motion is the phenomenon in which an object changes its position over time. Motion is mathematically described in terms of displacement, distance, velocity, acceleration, speed, and time.
Change in position of an object over time
[Figure: a motorcyclist in motion, with the background blur representing motion]
The motion of a body is observed by attaching a frame of reference to an observer and measuring the change in position of the body relative to that frame with change in time. The branch of physics
describing the motion of objects without reference to its cause is kinematics; the branch studying forces and their effect on motion is dynamics.
If an object is not changing relatively to a given frame of reference, the object is said to be at rest, motionless, immobile, stationary, or to have a constant or time-invariant position with
reference to its surroundings. As there is no absolute frame of reference, absolute motion cannot be determined.[1] Thus, everything in the universe can be considered to be in motion.[2]:20–21
Motion applies to various physical systems: to objects, bodies, matter particles, matter fields, radiation, radiation fields, radiation particles, curvature, and space-time. One can also speak of
motion of images, shapes, and boundaries. So, the term motion, in general, signifies a continuous change in the positions or configuration of a physical system in space. For example, one can talk
about the motion of a wave or the motion of a quantum particle, where the configuration consists of probabilities of the wave or particle occupying specific positions.
Laws of motion[edit]
In physics, motion of massive bodies is described through two related sets of laws of mechanics. Classical mechanics for superatomic (larger than atomic) objects (such as cars, projectiles, planets,
cells, and humans) and quantum mechanics for atomic and sub-atomic objects (such as helium, protons and electrons). Historically, Newton and Euler formulated three laws of classical mechanics:
First law: In an inertial reference frame, an object either remains at rest or continues to move at a constant velocity, unless acted upon by a net force.
Second law: In an inertial reference frame, the vector sum of the forces F on an object is equal to the mass m of that object multiplied by the acceleration a of the object: F = ma. If the resultant force F acting on a body is not zero, the body will have an acceleration a in the same direction as the resultant.
Third law: When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction on the first body.
Classical mechanics[edit]
Classical mechanics is used for describing the motion of macroscopic objects, from projectiles to parts of machinery, as well as astronomical objects, such as spacecraft, planets, stars, and
galaxies. It produces very accurate results within these domains, and is one of the oldest and largest scientific descriptions in science, engineering, and technology.
Classical mechanics is fundamentally based on Newton's laws of motion. These laws describe the relationship between the forces acting on a body and the motion of that body. They were first compiled
by Sir Isaac Newton in his work Philosophiæ Naturalis Principia Mathematica, first published on July 5, 1687. Newton's three laws are:
1. A body at rest will remain at rest, and a body in motion will remain in motion unless it is acted upon by an external force. (This is known as the law of inertia.)
2. Force is equal to the change in momentum (mv) per change in time. For a constant mass, force equals mass times acceleration (F = ma).
3. For every action, there is an equal and opposite reaction; i.e., whenever one body exerts a force F onto a second body (in some cases, one standing still), the second body exerts the force −F back onto the first body. F and −F are equal in magnitude and opposite in direction, so the body which exerts F will be pushed backwards.[3]
Newton's three laws of motion were the first to accurately provide a mathematical model for understanding orbiting bodies in outer space. This explanation unified the motion of celestial bodies and
motion of objects on earth.
Equations of Motion[edit]
Translational motion
In translational motion, the driving force F is counterbalanced by a resisting force Fr set up by the driven machine and by an inertia force Ma arising from the change in speed, or
F − Fr = M a = M dv/dt    (1)

where the mass M is expressed in kg, the velocity v in m/sec, the acceleration a in m/sec², and the force F in newtons (N).[4]
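Equation (1) can be rearranged to give the acceleration, a = (F − Fr)/M. A small Python sketch with made-up values:

```python
def acceleration(F, Fr, M):
    """Acceleration (m/s^2) of a mass M (kg) driven by force F (N)
    against a resisting force Fr (N), from F - Fr = M*a."""
    return (F - Fr) / M

print(acceleration(F=500.0, Fr=200.0, M=60.0))  # 5.0
```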
Oscillatory motion
A motion repeating itself is referred to as periodic or oscillatory motion. An object in such motion oscillates about an equilibrium position due to a restoring force or torque. Such force or torque
tends to restore (return) the system toward its equilibrium position no matter in which direction the system is displaced.[5]
Rotational motion
In rotational motion, the driving torque TM (usually developed by the electric motor) is counterbalanced by a resisting torque TL (usually developed by the load and referred to as the motor shaft)
and by an inertia or dynamic torque J dω/dt,
TM − TL = J dω/dt    (2)

where the inertia J is expressed in kg·m². It is sometimes called flywheel torque or moment, and T is the torque in N·m. The signs to be associated with TM and TL in Eq. (2) depend on the regime of operation of the driving motor and the nature of the load torque.[4]
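Equation (2) similarly gives the angular acceleration, dω/dt = (TM − TL)/J. A minimal sketch with illustrative values:

```python
def angular_acceleration(T_M, T_L, J):
    """Angular acceleration (rad/s^2) from T_M - T_L = J * domega/dt,
    with torques in N*m and inertia J in kg*m^2."""
    return (T_M - T_L) / J

print(angular_acceleration(T_M=120.0, T_L=90.0, J=7.5))  # 4.0
```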
Uniform Motion:
When an object moves with a constant speed in a particular direction, covering equal distances in equal intervals of time, it is in uniform motion. For example: a bike moving in a straight line with constant speed. Here the acceleration is zero.
Equations of Uniform Motion:
If v is the (constant) velocity, t is the time, and s is the displacement, then:

s = vt    (3)

If v is the final velocity, u is the initial velocity, a is the constant acceleration over time t, and s is the displacement, then:

v = u + at

s = ut + (1/2)at²
v² − u² = 2as
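These equations for uniformly accelerated motion are mutually consistent, which is easy to check numerically. A Python sketch with arbitrary values:

```python
def kinematics(u, a, t):
    """Return (v, s) after time t for initial velocity u and constant
    acceleration a, using v = u + a*t and s = u*t + (1/2)*a*t**2."""
    v = u + a * t
    s = u * t + 0.5 * a * t ** 2
    return v, s

u, a, t = 3.0, 2.0, 4.0
v, s = kinematics(u, a, t)
print(v, s)                          # 11.0 28.0
print(abs(v**2 - u**2 - 2 * a * s))  # 0.0 (confirms v^2 - u^2 = 2as)
```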
Non-Uniform Motion:
When an object moves with a variable velocity, it is said to be in non-uniform motion: it covers different distances in equal time intervals. Here the acceleration has a non-zero value. Example: a running horse.
There are two types of Non-Uniform Motion with respect to acceleration:
Uniformly accelerated non-uniform motion: the object moves with different velocities over equal time intervals, but its velocity changes at a constant rate, i.e. the acceleration is constant. Example: free fall of an object under gravity (acceleration due to gravity of 9.8 m/s² throughout the time interval). Here the acceleration is non-zero but constant.
Non-uniformly accelerated non-uniform motion: the object's velocity does not change at a constant rate, i.e. the acceleration itself varies over the time interval. Example: driving a car at different velocities at different times. Here the acceleration is non-zero and variable.
Relativistic mechanics
Modern kinematics developed with the study of electromagnetism and refers all velocities v to their ratio to the speed of light c. Velocity is then interpreted as rapidity, the hyperbolic angle φ for which the hyperbolic tangent function tanh φ = v/c. Acceleration, the change of velocity, then changes rapidity according to Lorentz transformations. This part of mechanics is special relativity. Efforts to incorporate gravity into relativistic mechanics were made by W. K. Clifford and Albert Einstein. The development used differential geometry to describe a curved universe with gravity; the study is called general relativity.
Quantum mechanics
Quantum mechanics is a set of principles describing physical reality at the atomic level of matter (molecules and atoms) and the subatomic particles (electrons, protons, neutrons, and even smaller
elementary particles such as quarks). These descriptions include the simultaneous wave-like and particle-like behavior of both matter and radiation energy as described in the wave–particle duality.
In classical mechanics, accurate measurements and predictions of the state of objects can be calculated, such as location and velocity. In quantum mechanics, due to the Heisenberg uncertainty
principle, the complete state of a subatomic particle, such as its location and velocity, cannot be simultaneously determined.[citation needed]
In addition to describing the motion of atomic-level phenomena, quantum mechanics is useful in understanding some large-scale phenomena such as superfluidity, superconductivity, and biological systems, including the function of smell receptors and the structures of proteins.[7]
List of "imperceptible" human motions
Humans, like all known things in the universe, are in constant motion;[2]:8–9 however, aside from obvious movements of the various external body parts and locomotion, humans are in motion in a
variety of ways which are more difficult to perceive. Many of these "imperceptible motions" are only perceivable with the help of special tools and careful observation. The larger scales of
imperceptible motions are difficult for humans to perceive for two reasons: Newton's laws of motion (particularly the third) which prevents the feeling of motion on a mass to which the observer is
connected, and the lack of an obvious frame of reference which would allow individuals to easily see that they are moving.[8] The smaller scales of these motions are too small to be detected
conventionally with human senses.
Spacetime (the fabric of the universe) is expanding, meaning everything in the universe is stretching, like a rubber band. This motion is the most obscure as it is not physical motion, but rather a
change in the very nature of the universe. The primary source of verification of this expansion was provided by Edwin Hubble who demonstrated that all galaxies and distant astronomical objects were
moving away from Earth, known as Hubble's law, predicted by a universal expansion.[9]
The Milky Way Galaxy is moving through space and many astronomers believe the velocity of this motion to be approximately 600 kilometres per second (1,340,000 mph) relative to the observed locations
of other nearby galaxies. Another reference frame is provided by the Cosmic microwave background. This frame of reference indicates that the Milky Way is moving at around 582 kilometres per second
(1,300,000 mph).[10][failed verification]
Sun and solar system
The Milky Way is rotating around its dense galactic center, thus the sun is moving in a circle within the galaxy's gravity. Away from the central bulge, or outer rim, the typical stellar velocity is
between 210 and 240 kilometres per second (470,000 and 540,000 mph).[11] All planets and their moons move with the sun. Thus, the solar system is moving.
The Earth is rotating or spinning around its axis. This is evidenced by day and night; at the equator the Earth has an eastward velocity of 0.4651 kilometres per second (1,040 mph).[12] The Earth is also orbiting the Sun in an orbital revolution. A complete orbit around the Sun takes one year, or about 365 days, at an average speed of about 30 kilometres per second (67,000 mph).[13]
The Theory of Plate tectonics tells us that the continents are drifting on convection currents within the mantle causing them to move across the surface of the planet at the slow speed of
approximately 2.54 centimetres (1 in) per year.[14][15] However, the velocities of plates range widely. The fastest-moving plates are the oceanic plates, with the Cocos Plate advancing at a rate of
75 millimetres (3.0 in) per year[16] and the Pacific Plate moving 52–69 millimetres (2.0–2.7 in) per year. At the other extreme, the slowest-moving plate is the Eurasian Plate, progressing at a
typical rate of about 21 millimetres (0.83 in) per year.
Internal body
The human heart is constantly contracting to move blood throughout the body. Through larger veins and arteries, blood has been found to travel at approximately 0.33 m/s, though considerable variation exists; peak flows in the venae cavae have been measured between 0.1 and 0.45 metres per second (0.33 and 1.48 ft/s).[17] Additionally, the smooth muscles of hollow internal
organs are moving. The most familiar would be the occurrence of peristalsis which is where digested food is forced throughout the digestive tract. Though different foods travel through the body at
different rates, an average speed through the human small intestine is 3.48 kilometres per hour (2.16 mph).[18] The human lymphatic system is also constantly causing movements of excess fluids,
lipids, and immune system related products around the body. The lymph fluid has been found to move through a lymph capillary of the skin at approximately 0.0000097 m/s.[19]
The cells of the human body have many structures which move throughout them. Cytoplasmic streaming is a way in which cells move molecular substances throughout the cytoplasm;[20] various motor proteins work as molecular motors within a cell, moving along the surface of various cellular substrates such as microtubules. Motor proteins are typically powered by the hydrolysis of adenosine triphosphate (ATP), converting chemical energy into mechanical work.[21] Vesicles propelled by motor proteins have been found to have a velocity of approximately 0.00000152 m/s.[22]
According to the laws of thermodynamics, all particles of matter are in constant random motion as long as the temperature is above absolute zero. Thus the molecules and atoms which make up the human
body are vibrating, colliding, and moving. This motion can be detected as temperature; higher temperatures, which represent greater kinetic energy in the particles, feel warm to humans who sense the
thermal energy transferring from the object being touched to their nerves. Similarly, when lower temperature objects are touched, the senses perceive the transfer of heat away from the body as a
feeling of cold.[23]
Subatomic particles
Within each atom, electrons exist in a region around the nucleus. This region is called the electron cloud. According to Bohr's model of the atom, electrons have a high velocity, and the larger the
nucleus they are orbiting the faster they would need to move. If electrons 'move' about the electron cloud in strict paths the same way planets orbit the sun, then electrons would be required to do
so at speeds which far exceed the speed of light. However, there is no reason that one must confine oneself to this strict conceptualization (that electrons move in paths the same way macroscopic
objects do), rather one can conceptualize electrons to be 'particles' that capriciously exist within the bounds of the electron cloud.[24] Inside the atomic nucleus, the protons and neutrons are also
probably moving around due to the electrical repulsion of the protons and the presence of angular momentum of both particles.[25]
Light moves at a speed of 299,792,458 m/s, or 299,792.458 kilometres per second (186,282.397 mi/s), in a vacuum. The speed of light in vacuum (or c) is also the speed of all massless particles and
associated fields in a vacuum, and it is the upper limit on the speed at which energy, matter, information or causation can travel. The speed of light in vacuum is thus the upper limit for speed for
all physical systems.
In addition, the speed of light is an invariant quantity: it has the same value, irrespective of the position or speed of the observer. This property makes the speed of light c a natural measurement
unit for speed and a fundamental constant of nature.
References

1. Wahlin, Lars (1997). "9.1 Relative and absolute motion" (PDF). The Deadbeat Universe. Boulder, CO: Coultron Research. pp. 121–129. ISBN 978-0-933407-03-9. Retrieved 25 January 2013.
2. Tyson, Neil de Grasse; Liu, Charles Tsun-Chu; Irion, Robert (2000). One Universe: At Home in the Cosmos. Washington, DC: National Academy Press. ISBN 978-0-309-06488-0.
3. Newton's "Axioms or Laws of Motion" can be found in the "Principia" on p. 19 of volume 1 of the 1729 translation.
4. Encyclopedia of Physical Science and Technology. Elsevier Science Ltd. 2001. ISBN 978-0-12-227410-7.
5. Alrasheed, Salma (2019). "Oscillatory Motion". Principles of Mechanics. Advances in Science, Technology & Innovation. Cham: Springer. pp. 155–171. doi:10.1007/978-3-030-15195-9_10. ISBN 9783030151959.
6. Feynman, Richard P.; Leighton, Robert B.; Sands, Matthew L. (1989). The Feynman Lectures on Physics. Redwood City, Calif.: Addison-Wesley. ISBN 978-0-201-51003-4. OCLC 19455482.
7. Folger, Tim (October 23, 2018). "How Quantum Mechanics Lets Us See, Smell and Touch: How the science of the super small affects our everyday lives". Discovery Magazine. Archived from the original on January 26, 2021. Retrieved October 24, 2021.
8. Safkan, Yasar. "Question: If the term 'absolute motion' has no meaning, then why do we say that the earth moves around the sun and not vice versa?". Ask the Experts. PhysLink.com. Retrieved 25 January 2014.
9. Hubble, Edwin (1929-03-15). "A relation between distance and radial velocity among extra-galactic nebulae". Proceedings of the National Academy of Sciences. 15 (3): 168–173. Bibcode:1929PNAS...15..168H. doi:10.1073/pnas.15.3.168. PMC 522427. PMID 16577160.
10. Kogut, A.; Lineweaver, C.; Smoot, G.F.; et al. (1993). "Dipole Anisotropy in the COBE Differential Microwave Radiometers First-Year Sky Maps". Astrophysical Journal. 419: 1. arXiv:astro-ph/9312056. Bibcode:1993ApJ...419....1K. doi:10.1086/173453.
11. Imamura, Jim (August 10, 2006). "Mass of the Milky Way Galaxy". University of Oregon. Archived from the original on 2007-03-01. Retrieved 2007-05-10.
12. Ask an Astrophysicist. NASA Goddard Space Flight Center.
13. Williams, David R. (September 1, 2004). "Earth Fact Sheet". NASA. Retrieved 2007-03-17.
14. Staff. "GPS Time Series". NASA JPL. Retrieved 2007-04-02.
15. Huang, Zhen Shao (2001). Glenn Elert (ed.). "Speed of the Continental Plates". The Physics Factbook. Retrieved 2020-06-20.
16. Meschede, M.; Barckhausen, U. (November 20, 2000). "Plate Tectonic Evolution of the Cocos-Nazca Spreading Center". Proceedings of the Ocean Drilling Program. Texas A&M University. Retrieved 2007-04-02.
17. Wexler, L.; Bergel, D.H.; Gabe, I.T.; Makin, G.S.; Mills, C.J. (1 September 1968). "Velocity of Blood Flow in Normal Human Venae Cavae". Circulation Research. 23 (3): 349–359. doi:10.1161/01.RES.23.3.349. PMID 5676450.
18. Bowen, R (27 May 2006). "Gastrointestinal Transit: How Long Does It Take?". Pathophysiology of the Digestive System. Colorado State University. Retrieved 25 January 2014.
19. Fischer, M.; Franzeck, U.K.; Herrig, I.; et al. (1 January 1996). "Flow velocity of single lymphatic capillaries in human skin". Am J Physiol Heart Circ Physiol. 270 (1): H358–H363. doi:10.1152/ajpheart.1996.270.1.H358. PMID 8769772.
20. "Cytoplasmic streaming – biology". Encyclopædia Britannica.
21. "Microtubule Motors". rpi.edu. Archived from the original on 2007-11-30.
22. Hill, David; Holzwarth, George; Bonin, Keith (2002). "Velocity and Drag Forces on motor-protein-driven Vesicles in Cells". APS Southeastern Section Meeting Abstracts. 69: EA.002. Bibcode:2002APS..SES.EA002H.
23. Temperature and BEC. Archived 2007-11-10 at the Wayback Machine. Physics 2000: Colorado State University Physics Department.
24. "Classroom Resources". anl.gov. Argonne National Laboratory.
25. Chapter 2, Nuclear Science – A Guide to the Nuclear Science Wall Chart. Berkeley National Laboratory.
8 Ways to Infuse Movement into Math Class
If you think about it, movement is present in our earliest forays into math. We move our fingers to learn how to count and do so again later, when we’re first learning how to add and subtract.
Contrary to conventional wisdom, finger counting isn’t a crutch, and it doesn’t appear to hold kids back. One compelling piece of research shows that finger counting actually boosts math learning,
helping students better understand numbers and acting as a “bridge between other (verbal, symbolic, and non-symbolic) representations of numbers.”
As kids age into elementary school, according to Kendall Stallings, a first-grade teacher in Baltimore, there’s significant research demonstrating that whole-body movement can help students deepen
math engagement and retention, especially when the movement is integrated into instruction and physically embodies concepts being presented to students.
Writing for Ed Week, Stallings argues that movement works particularly well in early math lessons because “it offers students a chance to engage physically with abstract concepts and demonstrate
their understanding kinesthetically.”
There’s a novelty factor at work, too. Elementary school teacher Elizabeth Peterson notes that movement in math is also useful for imbuing learning experiences with “something fresh and new, which
the brain likes.”
Want to add to your math toolkit? Here are eight teacher-tested ways to integrate movement into your elementary school math lessons.
1. Switching movements: To help early math students develop number sense, Stallings recommends this kinesthetic option. When teaching students to count up to 20, for example, have them do 10 jumping
jacks and then switch to clapping when they get to numbers after 10. When asking them to clap to 15, they’ll do 10 jumping jacks and five claps, for example. “The switch to clapping after the first
set of 10 allows them to make the connection that teen numbers are 10 plus some more ones," Stallings writes. Other movements you can try incorporating are high knees, frog jumps, or hops on one foot.
2. Skip movements: To help promote skip-counting fluency—a vital, early math skill that helps kids learn number patterns and lays the foundation for more complicated multiplication and division—try
assigning specific movements to different values, Stallings writes. When teaching students to count by even numbers, for example, ask them to stand in a circle and count out loud together. But
instead of counting, “one, two, three…” have students substitute a clap for numbers that aren’t even. According to Stallings, this would sound something like: “Two, [clap], four, [clap], six, [clap],
etc.” Peterson writes that you can integrate more elaborate movements into this activity, such as squats. “Giving students this additional, whole-body experience to practice these facts using
movement and rhythm has truly helped the kids memorize them,” Peterson said.
3. Vocabulary movement stretches: Much of early math instruction involves helping students understand concepts—and the related vocabulary—for things like “parallel lines” or “isosceles triangles,”
writes Peterson. Stretching, she says, can help students ingrain this vocabulary into their minds and provide a nice mindfulness exercise in the process.
To illustrate an isosceles triangle, for example, students can spread their feet apart and "take a moment to trace the three sides of the triangle" with
their hands, Peterson writes. “We start at the top point (our belly button) and move down one side, across the base, and up the third side.” To illustrate symmetry, students move their arms up toward
the ceiling, creating a straight line with their body, and notice what about their body makes them symmetrical. “I look around and see who needs a little adjustment, telling the students that paying
attention to precision is an important part of math,” she writes.
After establishing individual movements—and, at the same time, reinforcing these important terms—Peterson writes that she’ll often revisit movements to help struggling students. “If the kids are
having a hard time with parallel versus perpendicular, we’ll run through a series of arm movements to show the difference.”
4. Work math into classroom transitions: There are opportunities to work math into normal, everyday movements students make in school—such as during classroom transitions. Kurt Stielow, an academic
dean in Milwaukee, writes that while students are waiting in line for a transition or walking to their next destination, teachers can give students a math equation to chew on that is rooted in
concepts they’re learning.
For example, ask students what $0.75 plus $1.75 equals, or ask them to count the number of steps it takes them to get to the next room (fractions included!) they’re headed to, which can help them
promote number fluency. Giving students some mental math to work on while moving, Stielow writes, helps keep students engaged and makes managing a line a lot easier. “Two birds, one stone.”
5. Basketball math: A 2021 study found that mixing math into hoop sessions led to a 16 percent increase in children’s motivation to learn math compared to typical classroom math activities, as well
as a 6 percent increase in mastery of specific math skills. Example exercises include asking students to count how many times they can sink a basket from
three meters away vs. a one-meter distance and subsequently adding up the numbers—or, alternatively, multiplying or dividing the numbers. According to one researcher who spoke to Science Daily, the
practice works because it “endows children with a sense of ownership of their calculations and helps them clarify and concretize abstract concepts, which in turn increases their motivation to learn.”
6. Fraction ball: Same ball, different game. Research shows fraction balls can help students complete fast arithmetic and understand why numbers like 0.75 and ¾ have the same value much faster and
more memorably than they would by staring at a worksheet.
7. Math dancing: Help students solve algebra problems via a choreography unit. Try assigning a dance movement, like a twirl, to the x variable of an
algebraic equation and another movement, like a stomp, to the y variable. When trying to solve an equation like 3(x + y), these movements can help students understand that the seemingly difficult
formula really means 3x + 3y, or three twirls + three stomps (and 3x + y = three twirls and one stomp).
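The identity the dance encodes, 3(x + y) = 3x + 3y, can be spot-checked for a few sample values (a trivial sketch; the sample values are arbitrary):

```python
# Distributive law behind the choreography: three repetitions of
# "twirl and stomp" equal three twirls plus three stomps, for any
# x (twirls) and y (stomps).
for x in (1, 2.5, -4):
    for y in (0, 3, 7.25):
        assert 3 * (x + y) == 3 * x + 3 * y
print("3(x + y) == 3x + 3y for all samples")
```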
8. Use movement to create—and analyze—student data: An easy way to get students more invested in crucial math concepts like mean, median, and mode is to give them the opportunity to create their own
physical data to analyze. Sarah Carter, a math teacher in Oklahoma, suggests using the Blind Stork test to create a large data pool while working movement into the process.
The instructions are simple: Students pair up, then one student times the other as they close their eyes and see how long they can stand on one leg. Then they trade places and repeat the process a
few times to create a more robust dataset. Using an online data storage tool, like Google Sheets, students repeat the exercise as directed and input their values. By the end, the students will have a
large enough pool of data to manipulate statistically, both as pairs and as a whole class.
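Once the class has its dataset, the summary statistics could be computed in a few lines of Python (the timing values below are invented for illustration):

```python
from statistics import mean, median, mode

# Hypothetical Blind Stork balance times, in seconds (one per trial)
times = [12.4, 8.1, 15.0, 8.1, 22.3, 9.7]

print("mean:", mean(times))
print("median:", median(times))
print("mode:", mode(times))
```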
How To Calculate a Percentage %
Why is it that Maths students, even those doing Further Maths, can never calculate a %? Every year I realise that very few students know how to calculate a %. Maybe it is not on the A Level maths syllabus.
Percentages are one of the few parts of maths of great practical use. When shopping, you won't have to differentiate a complex equation, but you may need to work out the 20% discount on a good costing £40.
Anyway rant aside.
How to calculate a Percentage %
• 20% means 20/100. Therefore we multiply £40 * 20/100 = £8
20% = 20/100, which is the same as 0.2
• If price increases from £40 to £44. What is the % increase?
• We divide the difference (44-40) by the initial amount. * 100
• Therefore it is 4 / 40 = 0.1
• 0.1 * 100 = 10%
• If price increases from 70 to 84. The % change is 14/70 *100 = 20%
If quantity was 3,000 and the Q.D increases by 20%. What is the new quantity?
we can times 3,000 * 120/100 = 3,600
Alternatively we can find 20% of 3,000 and then add it to 3,000
20% of 3,000 = 20/100 (0.2) * 3000 = 600
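The same recipes can be written as tiny helper functions, here in Python (the function names are mine, not from the post):

```python
def pct_of(pct, amount):
    """pct% of amount, e.g. 20% of £40."""
    return amount * pct / 100

def pct_change(old, new):
    """Percentage change from old to new."""
    return (new - old) * 100 / old

def apply_increase(amount, pct):
    """Increase amount by pct%."""
    return amount * (100 + pct) / 100

print(pct_of(20, 40))            # 8.0  (the £8 discount)
print(pct_change(40, 44))        # 10.0
print(pct_change(70, 84))        # 20.0
print(apply_increase(3000, 20))  # 3600.0
```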
8 comments:
great! thnx .. for help.. :)
At last - found a decent page. You do this stuff in school, never use it till you start work. And you've already forgotten it. :)
Nice and simply explained! Perhaps it will stick now. If not will keep this page handy as a quick reminder. Why don't teachers make sure that children learn basics maths, such as this,
....because they don't! Could it be that the teachers don't know them themselves?!!? :)
Thanks for the nice simple explanation.... let's hope it sticks now.....never did at school; but then I went to 13 different ones and maths was taught differently at each - or so it
seemed!.......It was all a bit of a mystery really. Perhaps that's why I only managed Maths 'O' level at age 37 before going on to study for two degrees!!
Not taught? This is GCSE stuff. You're pretty much expected to know this when doing your Year 11 GCSEs,so uhm,most A-Level students do know this. I'd know,considering I'm one of 'em.But there are
quite a few people in my business class struggling with this so, great help for those who do need it.
Thank you!! Very simple explanation out of all google web pages!!!
Thanks very much for your help. I'm 39 and still trying to work out how it works. This is the best tutorial by a mile. Now I'm free! !
Thankyou for the help!!
|
{"url":"https://econ.economicshelp.org/2007/09/how-to-calculate-percentage.html?showComment=1221580080000","timestamp":"2024-11-06T20:09:46Z","content_type":"text/html","content_length":"49241","record_id":"<urn:uuid:a0688727-246e-43ed-b19b-145174dedc3a>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00001.warc.gz"}
|
Commission Formula
Hope you're all well.
I’m hitting a brick wall inserting a formula into a tracker for my sales teams commission structure which is being redone.
It’s a tiered system whereby the salesperson will get commission in brackets until they reach target, and then a higher percentage on anything over target: 0.1% on the first 20% of their target, 0.2% on the next 20%, 0.3% on the next 20%, and so on.
Until they reach target, at which point everything over target they are rewarded at 2%.
If they are under target, they will still receive commission, but only within the brackets.
If anyone who’s done a similar structure would be able to take a look and let me know their thoughts, it would be really appreciated, as I seem to be going around in circles.
I've extracted a section and published it below.
Thanks in advance, any suggestions welcome :D
Best Answer
• Hi @Paul Newcome @Glen Urquhart
Hope you are fine, i do it using the following formula, please confirm if this ok for you or we must change the factors of calculation.
=IF([Month End Sales Actual]@row > [2021 Target]@row, (0.001 * [First 20%]@row + 0.002 * [Second 20%]@row + 0.003 * [Third 20%]@row + 0.004 * [Last 20%]@row + ([Over Target]@row * 0.02)), (IF(AND
([Month End Sales Actual]@row <= [Last 20%]@row, [Month End Sales Actual]@row > [Third 20%]@row), ((0.001 * [First 20%]@row + 0.002 * [Second 20%]@row + 0.003 * [Third 20%]@row + (([Month End
Sales Actual]@row - [Third 20%]@row) * 0.004))), (IF(AND([Month End Sales Actual]@row > [Second 20%]@row, [Month End Sales Actual]@row <= [Third 20%]@row), ((0.001 * [First 20%]@row + 0.002 *
[Second 20%]@row + ([Month End Sales Actual]@row - [Second 20%]@row) * 0.003)), (IF(AND([Month End Sales Actual]@row > [First 20%]@row, [Month End Sales Actual]@row <= [Second 20%]@row), (0.001 *
[First 20%]@row + ([Month End Sales Actual]@row - [First 20%]@row) * 0.002), (IF(AND([Month End Sales Actual]@row > Divider@row, [Month End Sales Actual]@row <= [First 20%]@row), 0.001 * ([Month
End Sales Actual]@row - Divider@row))))))))))
☑️ Are you satisfied with my answer to your question? Please help the Community by marking it as an ( Accepted Answer), and I will be grateful for your "Vote Up" or "Insightful"
• Hi @Glen Urquhart
Hope you are fine, i designed the following sheet using the information you mentioned in your question to calculate the Actual Commission please check.
☑️ Are you satisfied with my answer to your question? Please help the Community by marking it as an ( Accepted Answer), and I will be grateful for your "Vote Up" or "Insightful"
• I want to make sure I understand this correctly...
Lets assume a target of $10,000.00 for easier math.
If they reach their target, do they get 2% of 10,000, or is it
.1% of 2,000
.2% of 2,000
.3% of 2,000
.4% of 2,000
.5% of 2,000
all added together?
Then if they were to go to maybe $15,000 they would get 2% of 5,000?
• After taking another look at the sheet you provided... Where exactly are you wanting to populate the formula(s)?
• Hi both, thank you for your input.
Correct- as they increase through the tiers, the commission increases as you say.
0.1% of the first 2k
0.2% of the next 2k, so on.
And then any sales that surpass the target 2% on the 'over'
i.e target of 10k, and sales of 15k, would result in 2% of the 5k over the target.
If they achieve for example 5k sales however, they should receive:
0.1% of the first 2k
0.2% of the second 2k
0.3% of the remaining 1k.
= 2 + 4 + 3...
The numbers are actually different per salesperson / area / agreed percentage etc, but i can't figue the basis for it.
I'm looking for a formula to populate the (now) red filled cells.
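For comparison outside Smartsheet, the tiered scheme being described can be sketched as a plain Python function. The five equal 20% slices and the 0.1%–0.5% rates here follow the worked $10,000 example; real rates differ per salesperson, so treat the numbers as placeholders:

```python
def commission(sales, target,
               tier_rates=(0.001, 0.002, 0.003, 0.004, 0.005),
               over_rate=0.02):
    """Pay each 20% slice of target at its tier rate; pay anything
    above target at over_rate. Under-target sales earn only the
    slices (and partial slice) they reach."""
    slice_size = target / len(tier_rates)
    remaining = min(sales, target)
    total = 0.0
    for rate in tier_rates:
        portion = min(remaining, slice_size)
        total += portion * rate
        remaining -= portion
    if sales > target:
        total += (sales - target) * over_rate
    return total

print(commission(5000, 10000))   # first three slices: 2 + 4 + 3
print(commission(15000, 10000))  # all slices, plus 2% of the 5,000 over
```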
• The template was inspired by the furthest to the right "Tiered Commision" scheme on the attached, however there are a couple of errors in the original;
• It may be that the first approach was overcomplicated for what is actually needed, and in fact the below would suffice; however I’m still having trouble with these IF rules. Is the below formula description logical / possible?
• Hi Both,
Thank you for your help and input.
I have achieved what I was looking to have as an output now.
@Paul Newcome , it was actually a thread that you answered in 2018 that wa the final piece to my puzzle.
The below is an example of what I was going for:
The "Actual sales" field is fed live from another sheet, which is linked to our CRM via Automate.io, collecting sales reps monthly sales figures.
Thanks again for your help and comments.
• Glad you were able to get it working! 👍️
Geometric interval arithmetic
Main partner: INRIA. Cooperation partners: SINTEF, UO
Computer systems for algebraic geometry aim at exact solutions and consequently use predominantly exact arithmetic. CAD systems are based on double-precision floating-point representation and allow the user to provide geometric tolerances defining when two points should be considered the same, and when curves or surfaces should be considered completely or partially coincident. Consequently, when two surfaces are closer than the geometric tolerance and intersect in a complex curve topology, CAD users in some cases expect an area of coincidence to be reported rather than the correct intersection topology. Similarly, if no exact intersection exists but the surfaces are closer than the specified tolerances, the CAD user will expect the surfaces to be reported as partially coincident, with a description of the area of coincidence. In other circumstances the CAD user expects the exact solution to be found.
In recent years INRIA has exploited the use of floating-point calculations as part of algorithms for the calculation of a certified topology for the intersection and self-intersection of exactly represented surfaces. SINTEF has continuously extended the use of certification in its implementation of surface intersection and self-intersection algorithms for the CAD industry.
We will investigate the approaches for the certification of tolerance-dependent intersection and self-intersection algorithms within algebraic geometry and CAD-type geometry, in order to develop new approaches for result certification for improved quality and performance of intersection algorithms.
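As a toy illustration of the certified-computation idea behind such algorithms (not code from the project, and without the directed rounding a real implementation needs), interval arithmetic propagates guaranteed enclosures of results:

```python
# Minimal interval arithmetic sketch: each operation returns an
# interval intended to enclose the exact result. A production
# implementation would round lo down and hi up at every step.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # All endpoint products, since signs may flip the ordering
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def contains(self, x):
        return self.lo <= x <= self.hi

a = Interval(1.0, 2.0)
b = Interval(-3.0, 0.5)
r = a * b
print(r.lo, r.hi)  # -6.0 1.0
```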
|
{"url":"https://www.sintef.no/projectweb/computational-geometry/pro/saga/wp3-algebraic-geometry-for-cad-applications/geometric-interval-arithmetic/","timestamp":"2024-11-04T12:25:03Z","content_type":"text/html","content_length":"17288","record_id":"<urn:uuid:5cf347a6-9c17-4f16-b294-34f257be821c>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00140.warc.gz"}
|
My formula is broken, not sure house to fix | Microsoft Community Hub
Forum Discussion
My formula is broken, not sure house to fix
Here is my formula =IFS(D22-D18>D18,{" "},D22-D18<D18,{"0"},D22-D18=D17,{"5000000"}) It is broken at the first part, I need it to come to a number that is between $3M and $5M, how do I get a
I don't know what you are trying to do / why it is 'broken'. The formula itself 'works' but not sure why you did a few things:
a) Why is each answer formatted as an array of 1 (i.e. {" "})? You don't need the braces around those.
b) Why is the 3rd condition =D17 instead of =D18? You realize that if D22-D18 = D18 and D18 <> D17, then no condition matches and Excel will return #N/A.
c) Are you sure you want those numbers 0 and 5000000 formatted as TEXT?
so maybe you want:
=IFS(D22-D18>D18,"", D22-D18<D18,0, D22-D18=D18, 5000000)
What I am attempting to do is take D22 less D18: if larger, then the difference between D22, D18 and D17; if smaller, then 0; or if it is higher than D17, make it the amount in D17.
I get the 500,000, but I can't seem to find the formula for the difference when D22-D18 > D17.
can you help?
• The new information is useful, but I still do not really understand exactly what it is you hope to see. My formula returned 500,000, but I wouldn't know whether that is good or bad.
= IFS(
paidLoss-retention > limit, limit,
paidLoss-retention < retention, 0,
TRUE, 500000
)
□ I am attempting to figure out how to write a formula to determine: if the paid loss is below retention, enter a zero; if not, what is the difference between paid loss, retention
and limit; and if the loss is higher than the limit, have the limit entered.
Does that makes sense?
☆ This tries to follow your written criteria as I followed them
= IFS(
paidLoss-retention < 0, 0,
paidLoss-retention > limit, limit,
TRUE, paidLoss-retention
)
If that is correct, an alternative formula might be
= MEDIAN(0, limit, paidLoss- retention)
[p.s. I hope that the use of names makes the formula more meaningful written here, even if you don't use them in your workbooks]
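The MEDIAN formula works because the middle of the three values 0, limit, and paidLoss − retention is exactly paidLoss − retention clamped to the range [0, limit] (assuming limit ≥ 0). A Python sketch showing the two formulas agree (the names paidLoss, retention, limit are the thread's, not real cell references):

```python
# IFS-style version: explicit three-way branch, mirroring the Excel formula.
def ifs_version(paid_loss, retention, limit):
    d = paid_loss - retention
    if d < 0:
        return 0
    if d > limit:
        return limit
    return d

# MEDIAN-style version: the middle of three values is the clamp.
def median_version(paid_loss, retention, limit):
    return sorted([0, limit, paid_loss - retention])[1]

# The two agree below, at, and above the limit.
for paid_loss in (100_000, 600_000, 6_000_000):
    assert ifs_version(paid_loss, 500_000, 5_000_000) == \
           median_version(paid_loss, 500_000, 5_000_000)
```

The clamp reading is why MEDIAN is the shorter formula: one function call replaces the whole IFS ladder.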
|
{"url":"https://techcommunity.microsoft.com/discussions/excelgeneral/my-formula-is-broken-not-sure-house-to-fix/4093231/replies/4093289","timestamp":"2024-11-10T18:14:51Z","content_type":"text/html","content_length":"311161","record_id":"<urn:uuid:197d535a-275e-46ae-b0f2-864915118c54>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00788.warc.gz"}
|
Antoine Augustin Cournot
From New World Encyclopedia
(Redirected from Cournot)
Antoine Augustin Cournot (August 28, 1801 – March 31, 1877) was a French mathematician and economist. He applied mathematics to the field of economics, not necessarily to produce numerical
precision in a predictive fashion, but rather to provide a clearer formulation of economic relationships.
Cournot's work, which included describing the behavior of monopolies and "duopolies" (the simplest type of oligopoly) using mathematical functions and graphing supply and demand as a function of
price, is recognized as foundational in econometrics, a field that provides vital information for economic forecasting on the level of individual businesses as well as for national economies. Thus,
Cournot's pioneering efforts allowed economics to develop in ways that enabled human society to maintain and develop healthy economic growth, and thus contributed to the well-being of all people.
Antoine Augustin Cournot was born on August 28, 1801, in the small town of Gray (Haute-Saône) in France. He was educated in the schools of Gray until he was 15. At 19, he enrolled in a mathematical
preparatory course at a school in Besançon, and subsequently won entry into the École Normale Supérieure in Paris in 1821. In 1822, Cournot transferred to the Sorbonne, obtaining a licentiate in
mathematics in 1823.
In Paris, he attended seminars at the Academie des Sciences and the salon of the economist Joseph Droz. Among his main intellectual influences were Pierre-Simon Laplace, Joseph-Louis Lagrange, and
Hachette, a former disciple of Marie-Antoine Condorcet, who started him on the principles of mathematique sociale, i.e., the idea that the social sciences, like the natural sciences, could be dealt
with mathematically. Cournot counted the young mathematician Lejeune Dirichlet as a close friend.
From 1823, Cournot was employed as a literary advisor to Marshal Gouvion Saint-Cyr and as a tutor to his son. In 1829, Cournot acquired a doctorate in sciences, focusing on mechanics and astronomy.
In 1834, Cournot found a permanent appointment as professor of analysis and mechanics at Lyons. A year later, Siméon-Denis Poisson secured him a rectorship at the Academy of Grenoble. Although his
duties were mostly administrative, Cournot excelled at them. In 1838 (again, at the instigation of the loyal Poisson), Cournot was called to Paris as Inspecteur Général des Études. In that same
year, he was made a knight of the Légion d'honneur (he was elevated to an officer in 1845).
Cournot's economic masterpiece received hardly any response (or when there was a response, it was highly critical) when it came out in 1838. The denizens of the French Liberal School, who dominated
the economics profession in France at the time, took no notice of it, leaving Cournot crushed and bitter. By the time Cournot died in 1877, he was nearly blind.
Cournot began with some preliminary remarks on the role of mathematics applied to the social sciences. He believed that economists must utilize the tools of mathematics only to establish probable
limits and to express less stable facts in more absolute terms. He further held that the practical uses of mathematics in economics do not necessarily involve strict numerical precision, and that his
purpose in using mathematics is merely to guide his reasoning and illustrate his argument rather than lead to any numerical calculations.
It was in 1838 that Cournot published his economics masterpiece, the Recherches sur les principes mathématiques de la théorie des richesses, translated as Researches on the Mathematical Principles
of the Theory of Wealth (1838 [1938]). In this book he presented his concepts of monopoly, oligopoly (in Cournot's case "duopoly"), and perfect competition.
In demonstrating the equilibrium of his oligopoly game, Cournot introduced a form of "best-reply dynamics," in which each firm selects the quantity that maximizes its profit in response to the total
industry output of the previous period. Through this, he introduced the ideas of functions and probability into economic analysis.
The "Recherches"
In the beginning of Recherches, Cournot runs through the definition of wealth, absolute versus relative prices, and the law of one price.
Then, he unveiled his first formula for the rule of supply and demand as a function of price. He wrote it in general form as
D = f ( p)
where D stands for demand (also quantity) and p stands for price.
He assumes that the function (f), is continuous and takes it as an empirical proposition that the demand function is downward-sloping (the loi de debit, "law of demand") and proceeds to draw it in
price-quantity space. He also introduces the idea of "elasticity," but does not write it down in a mathematical formula. It is important to note that Cournot's "demand function" is not a demand
schedule in the modern sense.
His curve, D = f ( p ), merely summarizes the empirical relationship between price and quantity sold, rather than the conceptual relationship between price and the quantity sought by buyers. Cournot
refused to derive demand from any "utility"-based theories of individual behavior. As he noted:
Accessory ideas of utility, scarcity, and suitability to the needs and enjoyments of mankind… are variable and by nature indeterminate, and consequently ill suited for the foundation of a
scientific theory (1838:10).
Cournot was satisfied with merely acknowledging that the functional form of f ( p ) (with p representing price) depends on
The utility of the article, the nature of the services it can render or the enjoyments it can procure, on the habits and customs of the people, on the average wealth, and on the scale on which
wealth is distributed… (1838: 47).
Cournot's analysis of monopoly begins with his introduction of the concept of a profit-maximizing producer. Cournot introduces the "cost function," represented by f ( D ), where D is demand or
quantity, and discusses decreasing, constant, and increasing costs to scale. He shows mathematically how a producer will choose to produce at the quantity where marginal revenue is equal to
marginal cost y ( p ). Marginal cost is thus the extra or incremental total cost required to produce 1 extra unit of output, or the reduction in total cost from producing 1 unit less:
f ' [ D ( p ) ] = y ( p )
Cournot presented his famous model of a "duopoly" (a simpler form of oligopoly where only two producers dominate a market), with the following features:
• There is more than one firm and all firms produce a homogeneous product
• Firms do not cooperate
• Firms have market power
• There are barriers to entry
• Firms compete in quantities, and choose quantities simultaneously
• There is strategic behavior by the firms.
In his model, price is a commonly known decreasing function of total output. All firms know the total number of firms in the market, and take the output of the others as given. Each firm has a cost
function. Normally the cost functions are treated as common knowledge. The cost functions may be the same or different among firms. The market price is set at a level such that demand equals the
total quantity produced by both firms. Each firm takes the quantity set by its competitors as a given, evaluates its residual demand, and then behaves as a monopoly.
Cournot set up a mathematical model with two rival producers of a homogeneous product. Each producer is conscious that his rival's quantity decision will also impact the price he faces, and thus his
profits, but each firm decides independently how much to produce and put on the market. However, the market price of the commodity is determined by the inverse demand function applied to the sum of
what both firms put on the market. Consequently, each producer chooses a quantity that maximizes his profits subject to the quantity reactions of his rival.
Cournot mathematically derives a deterministic solution, as the quantities chosen by the rival producers are in accordance with each other's anticipated reactions. He shows how this equilibrium can
be drawn as the intersection of two "reaction curves."
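Cournot's best-reply dynamics can be illustrated numerically. The sketch below (illustrative parameters, not from the Recherches) uses linear inverse demand p = a − b(q1 + q2) and a common constant marginal cost c; each firm best-responds to its rival's last-period quantity, and the iteration converges to the intersection of the two reaction curves, q* = (a − c) / (3b) for each firm.

```python
# Illustrative Cournot duopoly with assumed parameters a, b, c.
a, b, c = 100.0, 1.0, 10.0

def best_reply(q_rival):
    # Maximizing (a - b*(q + q_rival))*q - c*q over q gives
    # q = (a - c - b*q_rival) / (2b), floored at zero.
    return max(0.0, (a - c - b * q_rival) / (2 * b))

q1 = q2 = 0.0
for _ in range(100):                       # best-reply dynamics
    q1, q2 = best_reply(q2), best_reply(q1)

q_star = (a - c) / (3 * b)                 # analytic Cournot equilibrium
print(round(q1, 6), round(q2, 6), q_star)  # both firms converge to 30.0
```

Total duopoly output 2q* = 60 exceeds the monopoly quantity (a − c)/(2b) = 45 here, matching Cournot's observation that duopoly produces more, at a lower price, than monopoly.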
Comparing solutions, Cournot notes that under duopoly, the price is lower and the total quantity produced greater than under monopoly. He runs with this insight, showing that as the number of
producers increases, the quantity becomes greater and the price lower.
Perfect Competition
Cournot introduced the case of unlimited competition, i.e., where the quantity of producers is so great that the entry or departure of an individual producer has a negligible effect on the total
quantity produced. He goes on to derive the prices and quantities in this "perfectly competitive" situation, in particular showing that, at the solution, price is equal to the marginal cost y ( p )
as defined above.
The outcome is found by applying Cournot's concept of game theory. The firms in the model do not collude to achieve monopoly, but still achieve greater profits than they would in a competitive
market. A nice feature of the model is that as more firms are added, the price goes to the competitive price, which is equal to marginal cost.
Communication of markets
Cournot described what he called the "communication of markets," or trade of a single good between regions. He analyzed two isolated countries and one homogeneous product, showing that the impact of
opening trade between the two countries leads to the equalization of prices, with the lower cost producer exporting to the higher cost country. Cournot tried to prove that there are conditions where
the opening of trade will lead to a decline in the quantity of the product and lower revenue.
Finally, Cournot also acknowledged that the solutions obtained via his "partial equilibrium" theory are incomplete. He recognized the need to take multiple markets into account and trying to solve
for the general equilibrium, but "this would surpass the powers of mathematical analysis" (Cournot 1838:127).
Cournot and probability
In 1843, Cournot made his first serious attempt at improving probability theory in his Exposition. He differentiated between three types of probabilities: objective, subjective, and philosophical.
The former two follow their standard ontological and epistemological definitions. They are basically what Keynes defined as "having enough rational constraints to make degree of belief or 'degree of
confirmation' unique" and, as such, are similar to the later Bayesian philosophy of statistics "with certain previously known information."
The third category refers to probability "which depends mainly on the idea that we have of the simplicity of the laws of nature" (Cournot 1843: 440). This is the original "frequentist" philosophy
based on the samples of large numbers with truly random outcome.
Cournot was primarily a mathematician, but he did have some influence over economics. In 1838, his book Researches on the Mathematical Principles of the Theory of Wealth was published, in which he
introduced the ideas of mathematical functions and probability into economic analysis. Many economists have come to believe this book to be the point of departure for modern econometrics.
Cournot derived the first formula for the rule of supply and demand as a function of price and was the first to draw supply and demand curves on a graph, anticipating the work of Alfred Marshall by
roughly thirty years. In fact, Marshall himself claimed to have read Cournot's work as far back as 1868, and extensively acknowledged Cournot's influence in his 1890 textbook, particularly in his
discussion of the theory of the firm. Cournot's theories on monopolies and "duopolies" are still famous.
Cournot was also a teacher of political economy and mathematics to Auguste Walras, the father of Léon Walras. Cournot and Auguste Walras persuaded Léon Walras to enter the field of political
economics. Léon Walras, who studied Cournot's work, claimed that his own equilibrium theory was but the multi-market generalization of Cournot's "partial equilibrium" theory.
ISBN links support NWE through referral fees
• Cournot, A. A. 1838. "Mémoire sur les applications du calcul des chances à la statistique judiciaire." Journal des mathématiques pures et appliquées 12. T. 3.
• Cournot, A. A. 1838 [1938]. Recherches sur les principes mathématiques de la théorie des richesses (Researches on the Mathematical Principles of the Theory of Wealth).
• Cournot, A. A. 1841. Traité élémentaire de la théorie des fonctions et du calcul infinitésimal.
• Cournot, A. A. 1843. Exposition de la théorie des chances et des probabilités.
• Cournot, A. A. 1847. De l'origine et des limites de la correspondance entre l'algèbre et la géométrie.
• Cournot, A. A. 1851. Essai sur les fondements de nos connaissances et sur les caractères de la critique philosophique. Vol. I, Vol. II.
• Cournot, A. A. 1861. Traité de l'enchaînement des idées fondamentales dans les sciences et dans l'histoire.
• Cournot, A. A. 1863. Principes de la théorie des richesses.
• Cournot, A. A. 1864. Les institutions d'instruction publique en France.
• Cournot, A. A. 1872. Considérations sur la marche des idées et des événements dans les temps modernes. 2 vols.
• Cournot, A. A. 1875. Matérialisme, vitalisme, rationalisme: études des données de la science en philosophie.
• Cournot, A. A. 1877. Revue sommaire des doctrines économiques.
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons
CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license, which can reference both the New World Encyclopedia
contributors and the selfless volunteer contributors of the Wikimedia Foundation.
Note: Some restrictions may apply to use of individual images which are separately licensed.
|
{"url":"https://www.newworldencyclopedia.org/entry/Cournot","timestamp":"2024-11-07T19:31:08Z","content_type":"text/html","content_length":"64780","record_id":"<urn:uuid:1cc34083-9920-4e9c-b2f0-54a27d8f0187>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00457.warc.gz"}
|
Beginning GPU coding
Trying to tackle the entirety of MOOSE directly wasn't working too well, so I am now working on parallelizing an example class. This can then be extended to HSolve, where the GPU capabilities are
actually required. This is a good exercise in understanding how to compile and run CUDA code alongside other CPU code, so I decided to document the entire process here.
The task I was given was simple: get a MOOSE class to calculate the sum of numbers from 1 up to some n, where n is defined by a member of that class. Then transfer the computation to the GPU while
still maintaining an interface to MOOSE.
Setting up the MOOSE class
The example class I created is basically the same as the class I wrote about a few posts back. Do refer back to that post if you need a refresher on MOOSE classes.
The only change I made is in the process function. The class now calculates the sum of numbers from 1 up to y_ (one of its data members) and stores the result in x_ (another of its data members).
Here's the process function:
void Example::process( const Eref& e, ProcPtr p )
{
    int count = 0;
    for (int i = 1; i <= y_; i++)
        count += i;
    x_ = count;
}
The rest of the class is the same as before.
Now the aim is to shift this computation to the GPU. Of course, being a GPU, we need to develop a parallel algorithm to calculate this sum instead of just iterating through the array and summing
the numbers. Let's create a new project where all the GPU coding can be done, and later work on integrating that code into this class.
The Parallel Reduce Algorithm
This section will focus on programming the GPU to calculate the sum of numbers from 1 to n.
NOTE: You need to have a working installation of CUDA for this part. If you don't have one, a number of tutorials exist to guide you through the installation process. Complete that before you continue.
Every CUDA program has two parts - the main section and the kernel. The main section runs on the host (most often the CPU). Code written here is almost identical to ordinary C++ code. The kernel is
where all the magic happens. This code is loaded into the device (the GPU) and will be run once by each computational unit within the GPU.
So the idea is that all the control is at the hands of the CPU, where you would write ordinary C++ code to control flow of instructions, while the actual data processing is done at the GPU, which
will typically have small snippets of C++ code and a small amount of local memory but will run the same code multiple times in parallel. Together, they make a formidable number crunching machine!
Let's first take a look at the CPU code
#include <cstdlib>
#include <iostream>
#include <cuda_runtime.h>

__global__ void findSumToN(int *n, int limit);  // the kernel, defined further down

int main()
{
    int n[20] = {0};
    int *n_d;
    int y = 20;
    dim3 dimBlock( y, 1 );
    dim3 dimGrid( 1, 1 );
    const int asize = y * sizeof(int);

    //1) Fill up the array with numbers from 1 to y
    for (int i = 0; i < y; i++)
        n[i] = i + 1;

    //2) Allocate memory for the array on the GPU
    cudaMalloc( (void**)&n_d, asize );

    //3) Copy over the array from CPU to GPU
    cudaMemcpy( n_d, n, asize, cudaMemcpyHostToDevice );

    //4) Call the kernel
    findSumToN<<<dimGrid, dimBlock>>>( n_d, y );

    //5) Copy back the array from GPU to CPU
    cudaMemcpy( n, n_d, asize, cudaMemcpyDeviceToHost );

    //6) Free memory on the GPU
    cudaFree( n_d );

    std::cout << "\nSum: " << n[0] << '\n';
    return EXIT_SUCCESS;
}
The code is about as straightforward as you can imagine. y is the limit up to which we need to sum. In the unlikely case that the code isn't self-explanatory, I have put down the steps here:
1. Generate the array having integers from 1 to y.
2. Allocate memory on the GPU to hold the array.
3. Copy the array from CPU to GPU.
4. Launch the kernel with y threads.
5. Copy the array from GPU back to CPU.
6. Free memory on the GPU.
Let's take a look at some of the interesting aspects of this code.
• The dimBlock and dimGrid definitions you see on top are a consequence of how computational units (CUs) are arranged in CUDA. The GPU contains a large number of computational units which are
grouped together (in sets of 32, 64 etc) to form a block. Blocks are further grouped into a grid. CUDA also provides the helpful feature of identifying CUs within a block and blocks within a grid
using 2-dimensional or even 3-dimensional coordinates. The actual arrangement in the GPU will be linear, of course, but this interface can be very helpful when dealing with applications in 2D space
(like image processing) or 3D space(like point cloud analysis).
• cudaMalloc is the GPU equivalent to malloc. It allocates the specified space on the GPU and points the specified pointer to the start address. The name of the pointer is n_d. The CUDA convention
is to append _d to all pointers that are pointing to memory on the device.
NOTE: GPUs have a number of different levels of memory, each of which has specific tradeoffs between storage space and access speed. I haven't gone into those details here, but they are worth a look.
• cudaMemcpy is again similar to memcpy, but does this between the CPU and GPU. Note that last parameter which determines which direction the memory is being copied.
• findSumToN is a function call to a function that we haven't yet defined. This is the kernel, and I will come to this in a moment. Before that, take a look at the triple less-than and greater-than
signs. Between them is the dimGrid and dimBlock that we defined earlier. This determines the number of kernels that are launched. In our case, dimBlock is (y, 1), so y CUs will be launched per
block. Since dimGrid is (1,1), only one block will be launched. So that is y CUs in total, all launched linearly, within the same block. This is of course not the best way to parallelize, since a
block might only be able to launch 32 threads (based on the GPU hardware) and there's a good chance y will be more than that. Nevertheless, this is enough for this project.
• cudaFree, as you may imagine, just releases the memory pointed to by the pointer in its argument.
Before we look at the kernel, a brief introduction to the Parallel Reduce Algorithm will be helpful
What you see above is the reduce algorithm used with addition in its entirety!
The naive way of adding the numbers in an array is of course to take one element (usually the first) and keep adding each of the other elements to it until all the elements have been added. The
resulting array will have the sum of the numbers in its first cell.
Here, the end result is the same, but the method is slightly different. We first take all even elements (every second element starting from the first) and add the element immediately after it. We
then take every fourth element and add the element two spaces away. Then eighth, and so on until all the elements have been added.
The great thing about this approach is that it is far more parallelizable than the naive approach. To understand why, take a look at the first set of additions. Each addition takes 2 elements from
the first array and puts the result into one element of the second array. None of those additions depend on values computed by any other operation. So all of them can be computed in parallel. The
summation will have to pause after that first step though, to make sure all parallel units have finished computing the first level of summation before the next level of summation can be performed.
This is called a synchronisation of threads.
So what will this look like in parallel code?
int tId = threadIdx.x;

if (tId % 2 == 0)
    n[tId] += n[tId+1];
__syncthreads();

if (tId % 4 == 0)
    n[tId] += n[tId+2];
__syncthreads();

if (tId % 8 == 0)
    n[tId] += n[tId+4];
__syncthreads();

if (tId % 16 == 0)
    n[tId] += n[tId+8];
Here, tId is the ID number of the thread being run. Remember how we started y computational units in that call to findSumToN? This is (a simplified version of) the code that they will run. Each of
the y CUs are given a unique thread number which is used to identify them in the code.
So what exactly is happening here? All threads with odd tId will actually do nothing! This is a waste of CUs and should be avoided in production code. All threads whose tId is a multiple of 2 will enter
the first if branch. Here, they will compute the sum of the array element n[tId] and the element immediately after it. The __syncthreads() command instructs the GPU to force all CUs to wait until all
other CUs have reached the same point. This will ensure that all of the first level of calculations have been done, as mentioned before.
Then, all threads whose tIds are multiples of 4 enter the second if branch, where they add their element to the element two spaces away. This continues onward.
I have converted all of this into a generic function shown below:
__global__ void findSumToN(int *n, int limit)
{
    int tId = threadIdx.x;
    for (int i = 0; i <= (int)log2((double)limit); i++) {
        if (tId % (int)pow(2.0, (double)(i+1)) == 0
                && tId + (int)pow(2.0, (double)i) < limit)
            n[tId] += n[tId + (int)pow(2.0, (double)i)];
        __syncthreads();
    }
}
Note that this is far from optimised code! There are way more calls to math functions than are required, and many threads will be severely underused. It is, however, a fairly simple example to follow.
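Before wiring this into MOOSE, the reduction logic is easy to sanity-check on the host. Here is a plain-Python stand-in (my own sketch, not MOOSE or CUDA code) that performs the same level-by-level tree reduction, with each pass of the outer loop corresponding to one barrier-delimited step on the GPU:

```python
import math

def parallel_reduce_sum(values):
    """Level-by-level tree reduction, mirroring the findSumToN kernel."""
    a = list(values)
    n = len(a)
    for i in range(int(math.log2(n)) + 1 if n > 1 else 0):
        stride = 2 ** i
        # These are the "threads" that do work at this level; on the GPU
        # they run concurrently, and a barrier would sit after the loop.
        for tid in range(0, n, 2 * stride):
            if tid + stride < n:
                a[tid] += a[tid + stride]
    return a[0]

print(parallel_reduce_sum(range(1, 21)))  # 210, same as sum(1..20)
```

This also makes it easy to check the awkward non-power-of-two cases (like n = 20 here) before debugging them on the device.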
I originally planned to put down the entire process of making the GpuSolver class and integrating it into MOOSE over here, but this is becoming a very long post. So I'll stop here and finish it in
the next post.
|
{"url":"https://vivekvidyasagaran.weebly.com/moose/beginning-gpu-coding","timestamp":"2024-11-12T05:01:14Z","content_type":"text/html","content_length":"42170","record_id":"<urn:uuid:cab63de2-37a5-4aee-b753-82266487b8ee>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00592.warc.gz"}
|
Force, Given Parametric Polar Coordinates
The path of motion in the horizontal plane of a 5lb particle is described in terms of polar coordinates as r=2*t+10 where r is the distance from the origin of the reference frame to the particle, and
q=5*t^2-6*t where q is the angle from the x axis of the reference frame to the distance line. Time (t) is in seconds. Determine the unbalanced force acting on the particle when t=2s.
The rate of change of the position vector with respect to time is the velocity vector, and its rate of change with respect to time is the acceleration vector. Position p=2*t+10 ft at an angle of 5*t^
2-6*t radians. Putting this in rectangular coordinates x and y we have
px=(2*t+10)*cos(5*t^2-6*t) and
py=(2*t+10)*sin(5*t^2-6*t).
The rate of change of p with respect to time, dp/dt is a vector whose components are the derivatives of px and py. To avoid a lot of calculus let's make up a table symmetrical about t=2.0 and
calculate the values of px and py for each time in the table. Then from the finite differences between px and py, calculate the rate of change of px and py, we call them px' and py'. Then from the
finite differences between px' and py', calculate the rate of change of px' and py', we call them px'' and py''.
┃t │1.999 │ │2 │ │2.001 ┃
┃5t^2-6t │7.986005 │ │8 │ │8.014005 ┃
┃cos(5t^2-6t) │-0.1316402│ │-0.1455 │ │-0.1593413┃
┃sin(5t^2-6t) │0.99129757│ │0.98935825│ │0.98722356┃
┃2t+10 │13.998 │ │14 │ │14.002 ┃
┃px │-1.8426991│ │-2.0370005│ │-2.2310965┃
┃py │13.8761833│ │13.8510155│ │13.8231043┃
┃px' │ │-194.30139│ │-194.09605│ ┃
┃py' │ │-25.167888│ │-27.911159│ ┃
┃px'' │ │ │205.346831│ │ ┃
┃py'' │ │ │-2743.2711│ │ ┃
Magnitude of the acceleration is sqrt(px''^2+py''^2) = 2750.94591 ft/s/s
Mass is 5 lb / 32.2 ft/s/s = 0.1552795 slugs
Force is mass times acceleration = 0.1552795 × 2750.94591 ≈ 427.17 lb
I did this on a spreadsheet so I could try different delta t values. Decreasing delta t by a factor of 1000 only changed force in the fifth significant digit.
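The spreadsheet table can be reproduced in a few lines of code. The sketch below (Python used for illustration) applies the central second difference (f(t+h) − 2f(t) + f(t−h)) / h², which is exactly the difference of the two one-sided first differences used in the table, with h = 0.001 s:

```python
import math

# Rectangular components of the position, from the problem statement.
def px(t): return (2*t + 10) * math.cos(5*t**2 - 6*t)
def py(t): return (2*t + 10) * math.sin(5*t**2 - 6*t)

def second_diff(f, t, h=1e-3):
    """Central second difference: finite-difference acceleration component."""
    return (f(t + h) - 2*f(t) + f(t - h)) / h**2

t = 2.0
ax, ay = second_diff(px, t), second_diff(py, t)
a_mag = math.hypot(ax, ay)        # magnitude of acceleration, ft/s/s
force = (5 / 32.2) * a_mag        # slugs * ft/s/s = lb
print(round(a_mag, 2), round(force, 2))
```

Shrinking h further (as the author did on the spreadsheet) only perturbs the result in the trailing digits, which is a quick check that the finite-difference step is small enough.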
This information is brought to you by M. Casco Associates, a company dedicated to helping humankind reach the stars through understanding how the universe works. My name is James D. Jones. If I can
be of more help, please let me know. JDJ
|
{"url":"https://mcanv.com/Answers/qa_fgppc.html","timestamp":"2024-11-03T04:17:48Z","content_type":"application/xhtml+xml","content_length":"4026","record_id":"<urn:uuid:c8f28243-6c93-49c9-97a7-426ab3a3d72c>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00763.warc.gz"}
|
To factor 4x² - 25, you can first rewrite the expression as:
A. (2x-5)
B. (2x)² - (3)
C. (x)² - (2)
D. None of the above
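Whatever the intended multiple-choice wording, the identity behind the question is the difference of squares: 4x² − 25 = (2x)² − 5², which factors as (2x − 5)(2x + 5). A quick numerical check (Python used for illustration):

```python
# Difference of squares: 4x^2 - 25 = (2x)^2 - 5^2 = (2x - 5)(2x + 5).
# Spot-check the identity at several values of x.
for x in (-3, 0, 1, 2.5, 7):
    assert 4*x**2 - 25 == (2*x - 5) * (2*x + 5)
print("4x^2 - 25 == (2x - 5)(2x + 5) for all sampled x")
```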
|
Positive trigonometric and inverse trigonometric function online calculator
App description
Trigonometric functions are transcendental functions. In essence, each one maps the set of arbitrary angles to a set of side ratios. They are generally defined on the plane rectangular coordinate system, with the whole real line as their domain; the right-triangle definition is simpler but incomplete. In modern mathematics they are also described as limits of infinite series and as solutions of differential equations.
Because trigonometric functions are periodic, they have no inverse in the sense of a single-valued function unless the domain is restricted.
Trigonometric functions have important applications with complex numbers, and in physics they are a common tool.
In right triangle ABC, once the acute angle A is fixed, the ratio of the side opposite A to the side adjacent to A is determined. This ratio is called the tangent of angle A.
That is, tan A = (side opposite angle A) / (side adjacent to angle A).
The inverse of the function y = tan x (x ≠ kπ + π/2, k ∈ Z) is called the arctangent function. Its range is (−π/2, π/2). The arctangent is one of the inverse trigonometric functions.
Since y = tan x is not one-to-one over its whole domain R, the inverse is defined by restricting tan x to a single monotone interval, (−π/2, π/2).
Usage example
Input value: 5
Trigonometric function: click "calculate tangent (tan)"
Degree: tangent 5 degree (°) = 0.087489
Radian: tangent 5 radian (RAD) =-3.3805
Number of digits: 5
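The degree/radian distinction in the usage example is easy to reproduce in Python; the standard `math` module, like most math libraries, expects radians, so degrees must be converted first.

```python
import math

deg = 5.0
tan_deg = math.tan(math.radians(deg))   # tangent of 5 degrees
tan_rad = math.tan(deg)                 # tangent of 5 radians

print(round(tan_deg, 6))   # 0.087489, matching the example above
print(round(tan_rad, 4))   # -3.3805, matching the example above
```

Note that 5 radians is past π/2, which is why the radian result is negative while the degree result is a small positive number.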
|
Lesson 16
Escribamos números para representar cantidades ("Let's write numbers to represent quantities")
Lesson Purpose
The purpose of this lesson is for students to write numbers to represent quantities.
Lesson Narrative
Students count groups of objects and images and write numbers to show how many in each group. Writing numbers backwards (“reversals”) and incorrectly forming numbers is expected in kindergarten. The
emphasis is on students writing a number that is recognizable to others with practice.
Learning Goals
Teacher Facing
• Write numbers 1-10 to represent a quantity.
Student Facing
• Escribamos números para mostrar cuántos hay. ("Let's write numbers to show how many there are.")
Required Materials
Materials to Gather
Materials to Copy
• Math Stories Stage 1 Recording Sheet
• Math Stories Stage 1 and 4 Pictures, Spanish
Required Preparation
Activity 2:
• Each student needs a brown paper (not see through) bag with 1 to 10 objects inside.
Activity 3:
• Gather materials from:
□ Math Stories, Stage 1
□ Math Libs, Stage 1
□ Bingo, Stages 1 and 2
□ Number Race, Stage 1
□ Geoblocks, Stages 1 and 2
□ Math Fingers, Stages 1 and 2
CCSS Standards
Building Towards
Lesson Timeline
Warm-up 10 min
Activity 1 10 min
Activity 2 15 min
Activity 3 20 min
Lesson Synthesis 5 min
Cool-down 0 min
Teacher Reflection Questions
The CCSS require students to compare two numbers between 1 and 10 presented as written numerals. How has the work of this section helped students prepare to meet this standard?
|
Straight line graphs Part 3 | B28 Maths Tutor
Straight line graphs Part 3
This is the final instalment of my series of tutorials on straight line graphs, and follows on from “Working with a coordinate grid“, “Straight line graphs Part 1” and “Straight line graphs Part 2“.
We’ve covered most of the Foundation Tier material – the only bits left to do are parallel lines and the mid-point of a line – and will also cover the Higher Tier content here.
There are some activities to work through in this tutorial, with the answer to each step given if you scroll down a little, so for maximum benefit, try to do each part before you scroll down and
reveal the answer.
Recap of key points covered so far
Part 1 recap
A graph is a picture showing the relationship between two algebraic variables – we usually use x going across and y going up, but any letters can be used.
A line with the equation x = k (where k is a constant) will always be vertical and will cross the x-axis at k.
A line with the equation y = k will always be horizontal and will cross the y-axis at k.
If the equation of a straight line graph is given in the form y = mx + c then
• The c (the number on its own) is the y-intercept, i.e. the number on the y-axis where the graph crosses it.
This is also the value of y when x is 0, since any graph crosses the y-axis when x = 0 (and vice versa).
• The m (the number multiplied by the x) is the gradient, i.e. how many units the graph goes up by, for each unit to the right. (The x is not part of the gradient!)
If you have these two pieces of information then you can plot, or sketch, a straight line graph without always having to make a table of values and plot all the points.
For example, line y = 2x + 3 will cross the y-axis at 3 and will have a gradient of 2, i.e. it goes up 2 units for every 1 unit to the right.
And the line y = 1 – 4x can be thought of as y = -4x + 1, so it crosses the y-axis at 1 and has a gradient of -4, i.e. it goes DOWN 4 units for every 1 to the right.
Part 2 recap
You can find the gradient of a straight line graph by taking any two points (x₁, y₁) and (x₂, y₂) on the graph and using the formula m = (y₂ – y₁) ÷ (x₂ – x₁).
You can find the equation of any straight line graph if you know its gradient m and a point (x, y) on the line, by substituting the values of y, x, and m into the equation y = mx + c and then solving
the equation to find the value of c.
To sketch a straight line graph, draw a set of axes with no numbers and show only the numbers that are important. For example the graph y = 2x + 3 crosses the y-axis at 3, so you mark that in, and
has a gradient of 2, so you draw a line that goes through 3 on the y-axis and has a fairly steep upward slope from left to right.
If you need the x-intercept then put y = 0 and solve to find x.
If the equation is given in the form ax + by = k, you can use the cover-up method: Put x = 0 so the x-term disappears and see what that leaves you with for the y-intercept, then put y = 0 to find the
x-intercept in the same way.
Sometimes you need to rearrange the equation to make it look like y = mx + c so that you can find the gradient and y-intercept.
Now on to the new content…
Parallel and perpendicular lines
Parallel lines (Foundation/Higher)
If two lines have the same gradient then they are parallel.
For example, y = 3x + 1 and y = 3x – 4 are parallel, because they both have a gradient of 3.
But these lines are NOT parallel to the previous two – can you say why?
1. y = 3 + 2x
2. y = 2 – 3x
3. 2y = 3x – 4
1. The gradient is 2 (the 3 here is the y-intercept)
2. The gradient is -3 so this graph goes down from left to right
3. This isn’t in the form y = mx + c; to get it into that form you need to divide through by 2 to get y = 1.5x – 2, which has a gradient of 1.5, not 3
Look at the diagram below to see what the five straight line graphs look like. The two parallel ones are shown in green and then the others are added. You can see that none of the later ones are parallel to those two.
Example problem:
Find the equation of the line parallel to y = 4x – 5 that passes through the point (2, 7).
If the line is parallel to y = 4x – 5 then it must have the same gradient, i.e. m = 4.
We already know how to find the equation of a line when we know the gradient and a point on the line:
At (2, 7), x = 2 and y = 7. Substitute these and m = 4 into the equation y = mx + c:
7 = 4×2 + c
then solve to find the value of c:
7 – 8 = c
c = -1
So the equation of the line is y = 4x – 1.
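The "substitute a point and solve for c" step can be captured in a couple of lines of Python; this is just a sketch using the example's numbers, with `intercept` as a made-up helper name.

```python
# Find c in y = m x + c, given the gradient m and one point on the line:
# since y = m x + c at that point, c = y - m x.
def intercept(m, point):
    x, y = point
    return y - m * x

c = intercept(4, (2, 7))   # the worked example above
print(c)                   # -1, so the line is y = 4x - 1
```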
Your turn 1
1. Find the equation of the line parallel to 2x + y = 4 that crosses the y-axis at -5.
2. Find the equation of the line parallel to x – 2y = 6 that passes through (-2, 4).
Perpendicular lines (Higher only)
Perpendicular lines meet each other at right angles.
On the first set of axes below, we have the straight line graphs of y = x + 2 and y = 4 – x (or y = -x + 4). You can see that, when they cross, the angle between them is a right angle. The 2 and the
4 in the equations are irrelevant here; it’s only the gradients that we are interested in.
So we can see that a line with a gradient of 1 is perpendicular to a line with a gradient of -1.
Now look at the red line, y = 2x, on the second set of axes. You can see that it has a gradient of 2. What would the gradient be of the line perpendicular to this one?
Hint: It’s not -2!
Since the red line has a positive gradient, i.e. goes up from left to right, any straight line graph that is perpendicular to it will have to have a negative gradient.
The red line goes up by 2 for every 1 across, so the perpendicular line needs to go down by 1 for every 2 across; in other words, it has a gradient of -½.
The image below shows you the original red line with gradient 2, together with lines of gradient -2 (blue, not perpendicular) and -½ (green, perpendicular).
If you think of the gradients as fractions then it makes it easier to see the relationship:
A gradient of 2 is the same as 2/1.
A gradient of -1/2 is what you get by turning that fraction upside down and changing the sign.
What gradient do you think would be perpendicular to a gradient of ⅔?
Try to work it out before you scroll any further!
Did you say -3/2?
In general, we say that one gradient is the negative reciprocal of the other, or that the product of two perpendicular gradients is -1.
Have a look: 2 × (-1/2) = -1, and ⅔ × (-3/2) = -1.
So if one gradient is a/b, then the perpendicular gradient is -b/a.
Example problem:
Find the equation of the line perpendicular to y = 4x – 5 that passes through the point (8, 7).
If the line is perpendicular to y = 4x – 5 (which has gradient m₁ = 4) then its gradient must be -¼, so m₂ = -¼.
At (8, 7), x = 8 and y = 7. Substitute these and m₂ = -¼ into the equation y = m₂x + c:
7 = -¼×8 + c = -2 + c
then solve to find the value of c:
7 + 2 = c
c = 9
So the equation of the line is y = -¼ x + 9.
Your turn 2
1. Find the equation of the line perpendicular to 2x + y = 4 that crosses the y-axis at -5.
2. Find the equation of the line perpendicular to 2x – 3y = 6 that passes through (6, -1).
Alternative form of straight line equation:
y – y₁ = m(x – x₁) (Higher only)
If you go on to do A-level then you’ll probably use this form of the straight line graph equation a lot. Depending on where you’re trying to get to, it can be easier to use than y = mx + c.
y – y₁ = m(x – x₁) is just another way of describing a straight line graph, where m is the gradient and (x₁, y₁) is any point that you know is on the line. If you know two points on the line then it
doesn’t matter which one you use.
Let’s look again at the earlier example with parallel lines: Find the equation of the line parallel to y = 4x – 5 that passes through the point (2, 7).
We know that the gradient is 4, and the known point on the line is (2, 7) so that’s our (x₁, y₁).
Simply substitute these values into the new formula, and – provided that the question doesn’t ask for the equation to be given in a particular form – you’re done; no need to find c!
So y – y₁ = m(x – x₁) simply becomes
y – 7 = 4(x – 2).
Of course, you can rearrange this into y = mx + c form if you want to know the y-intercept. Try it and check that you get the same answer as before, i.e. y = 4x – 1.
Similarly, with the perpendicular lines example – Find the equation of the line perpendicular to y = 4x – 5 that passes through the point (8, 7) – we have a perpendicular gradient of -¼ and a known
point on the line of (8, 7) so we get
y – 7 = -¼(x – 8)
which can be rearranged to y = -¼ x + 9
Now try the questions from “Your turn 1” and “Your turn 2” again, using this version of the equation.
Mid-point of a line (Foundation/Higher)
In the diagram below, you can see three line segments. (A line segment is just a section of a line that has a finite length; a line, in theory, goes on for ever.)
Look at the red one, which has end points (2, 1) and (6, 9). Can you think of a way to work out the coordinates of the mid-point of that line? (Don’t rely on counting squares on the grid – you won’t
always be able to do that!)
The x-coordinates of the end points are 2 and 6, so the x-coordinate of the mid-point is going to be mid-way between those two values. What’s the simplest way to find the value exactly half way in
between a pair of values?
Just find the mean: add them together and divide by 2!
(You can also start from the bottom one and add on half the difference between the two – that’s often the approach that my students suggest at first; it will get you there, but finding the mean is
So we get x = (2 + 6) ÷ 2 = 4.
And of course we do the same thing with the y-coordinates, 1 and 9: y = (1 + 9) ÷ 2 = 5.
So the mid-point of the red line is (4, 5).
In general, the mid-point of a line segment with end points (x₁, y₁) and (x₂, y₂) is
(mean x, mean y) or ((x₁ + x₂)/2, (y₁ + y₂)/2).
Your turn 3
Find the coordinates of the mid-points of the line segments with end points
1. (-3, -5) and (9, 3) (shown in purple on the grid above)
2. (1, 3) and (10, -1) (shown in green on the grid above)
3. (-2, 4) and (20, -7)
Length of a line segment (Higher only)
Here’s the same grid again that we used in the last section. This time we’re going to find the length of the red line segment, using the coordinates of its end points, (2, 1) and (6, 9). Can you
think how we might do that?
The answer is that we use Pythagoras’ theorem. The line segment itself is the hypotenuse of a right-angled triangle, with dotted lines showing the two shorter sides.
The horizontal side goes from 2 to 6 on the x-axis, so that’s a length of 6 – 2 = 4.
The vertical side goes from 1 to 9 on the y-axis, so that’s a length of 9 – 1 = 8.
So (length of line segment)² = 4² + 8² = 16 + 64 = 80
giving a length of √80 or approx. 8.94 units.
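Both the mid-point and the Pythagoras calculation are easy to check in a few lines of Python (using the red segment's end points from above):

```python
import math

def midpoint(p, q):
    """Mid-point of segment pq: the mean of the x's and the mean of the y's."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def length(p, q):
    """Length of segment pq via Pythagoras' theorem."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

A, B = (2, 1), (6, 9)      # end points of the red line segment
print(midpoint(A, B))      # (4.0, 5.0)
print(length(A, B))        # sqrt(80), approx. 8.944
```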
Your turn 4
Find the lengths of the line segments with end points
1. (-3, -5) and (9, 3) (shown in purple on the grid above)
2. (1, 3) and (10, -1) (shown in green on the grid above)
3. (-2, 4) and (10, -5)
Finding the equation of a perpendicular bisector
We’ve now covered all the skills you need to work with straight line graphs. Let’s try an application that puts some of them to use!
First, can you explain what a perpendicular bisector is?
It’s a line that cuts another line exactly in half by crossing it at right angles.
Your challenge is to find the equation of the perpendicular bisector of line segment AB, shown on the graph below, where A is the point (-1, -2) and B is (3, 8).
Give your answer in the form px + qy = r, where p, q and r are all integers.
First, try to predict what skills you are going to need, then scroll down to see if you were right.
If the new line crosses AB at right angles then we need the perpendicular gradient. But to work that out, we first need to find the gradient of AB.
If it cuts AB exactly in half then it will cross at its midpoint so we need to find those coordinates.
Finally, we need to plug the information we’ve got into the straight line equation and rearrange it into the required form.
Now see if you can do it, before you scroll down to see the solution.
Gradient of AB = (8 – (-2)) ÷ (3 – (-1)) = 10/4 = 5/2
So perpendicular gradient = -2/5
Mid-point = (mean x, mean y) = ((-1 + 3)/2, (-2 + 8)/2) = (1, 3)
So the perpendicular bisector has gradient -2/5 and passes through (1, 3).
Substitute into y – y₁ = m(x – x₁):
y – 3 = -2/5 (x – 1)
(Of course you could use y = mx + c but then you’d have to work out the c, and after that you’d have to rearrange it anyway to get into the required form.)
If the question hadn’t stipulated a particular form for the answer (the wording in that case is usually “Find an equation for…”) then what we’ve already done would be enough for full marks, but in
this case we need to rearrange it into the required form.
Multiply both sides by 5 to get rid of the fraction:
5(y – 3) = -2(x – 1)
Expand brackets:
5y – 15 = -2x + 2
Get the xs and ys on one side and gather the constants on the other:
2x + 5y = 17
… and we’re done!
Of course, it’s a good idea to check your answer. This format makes it easy to find the x- and y-intercepts of the new line (using the cover-up method); what are they?
Answer: 2x = 17 so x = 8.5; 5y = 17 so y = 3.4
If we draw that line in on the grid it looks like this. You can see that it crosses AB at its mid-point and at right angles.
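The worked example can also be checked numerically; this short Python sketch verifies that the answer's mid-point lies on 2x + 5y = 17 and that the two gradients multiply to -1.

```python
A, B = (-1, -2), (3, 8)

grad_AB = (B[1] - A[1]) / (B[0] - A[0])        # 10/4 = 2.5
perp_grad = -1 / grad_AB                       # -0.4, i.e. -2/5
mid = ((A[0] + B[0]) / 2, (A[1] + B[1]) / 2)   # (1.0, 3.0)

# The answer 2x + 5y = 17 should pass through the mid-point...
on_line = 2 * mid[0] + 5 * mid[1] == 17
# ...and the product of the two perpendicular gradients should be -1.
product = grad_AB * perp_grad

print(mid, on_line, product)
```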
That covers everything you need to know about straight line graphs, for both GCSE/IGCSE and A-level.
If you’ve found this article helpful then please share it with anyone else who you think would benefit (use the social sharing buttons if you like). If you have any suggestions for improvement or
other topics that you’d like to see covered, then please comment below or drop me a line using my contact form.
On my sister site at mathscourses.co.uk you can find – among other things – a great-value suite of courses covering the entire GCSE (and Edexcel IGCSE) Foundation content, and the “Flying Start to
A-level Maths” course for those who want to get top grades at GCSE and hit the ground running at A-level – please take a look!
If you’d like to be kept up to date with my new content then please sign up to my mailing list using the form at the bottom of this page, which will also give you access to my collection of free resources.
Your turn 1 – answers
1) Original equation rearranges to y = -2x + 4 so m = -2. y-intercept c = -5 so the new equation is y = -2x – 5.
2) Original equation rearranges to y = ½ x – 3 so m = ½. Sub that and (-2, 4) into y = mx + c to get 4 = ½×-2 + c = -1 + c so c = 4 + 1 = 5, so the new equation is y = ½ x + 5.
Click here to return to questions
Your turn 2 – answers
1. m₁=-2 so m₂=½; c = -5 so equation is y = ½ x – 5
2. m₁=⅔ so m₂=-3/2 or -1.5; Sub in with (6, -1) to get -1 = -1.5 × 6 + c => c = -1 + 9 = 8, so the new equation is y = -1.5x +8
Click here to return to questions
Your turn 3 – answers
1. (3, -1)
2. (5.5, 1)
3. (9, -1.5)
Click here to return to questions
Your turn 4 – answers
1. Short sides are 12 and 8 so length is √208 (or 14.4)
2. Short sides are 9 and 4 so length is √97 (or 9.85)
3. Short sides are 12 and 9 so length is √225 = 15 (this is a Pythagorean triple)
|
RectifID: Personalizing Rectified Flow with Anchored Classifier Guidance
Abstract
The paper proposes a training-free approach to flexibly personalize rectified flow models using anchored classifier guidance. It extends the applicability of the original classifier guidance by
transforming it into a new fixed-point formulation that can leverage off-the-shelf image discriminators, without relying on a special noise-aware classifier. To improve the stability of this
fixed-point solution, the paper introduces an anchored classifier guidance that constrains the target flow trajectory to be close to a reference trajectory, providing a theoretical convergence
guarantee. The derived method is implemented on a practical class of piecewise rectified flow and demonstrates advantageous results in various personalization tasks for human faces, live subjects,
certain objects, and multiple subjects.
Q&A
[01] Classifier Guidance for Rectified Flow
1. What is the key observation that allows bypassing the need for a noise-aware classifier? The key observation is that by approximating the rectified flow trajectory to be ideally straight, the
original classifier guidance can be reformulated as a simple fixed-point problem involving only the trajectory endpoints, without requiring the noise-aware classifier.
2. What is the limitation of the initial fixed-point solution derived based on this observation? The initial fixed-point solution may not always converge, as even a small perturbation at the starting
point could lead to the target flow trajectory diverging significantly after iterative updates, hindering the controllability of the rectified flow.
3. How does the paper address this limitation? To improve the stability, the paper proposes a new "anchored classifier guidance" that constrains the target flow trajectory to be close to a
predetermined reference trajectory. This provides a better convergence guarantee and a certain degree of interpretability.
[02] Anchored Classifier Guidance
1. What is the key idea behind the anchored classifier guidance? The key idea is to constrain the target flow trajectory to be straight and near a reference trajectory, by anchoring the target
velocity to the reference velocity. This helps stabilize the solving process of the target trajectory.
2. How does the anchored classifier guidance bypass the need for a noise-aware classifier? Similar to the initial fixed-point solution, the anchored classifier guidance substitutes the intermediate
classifier guidance terms with an expression involving only the trajectory endpoints, allowing the use of off-the-shelf image discriminators.
3. What theoretical property does the anchored classifier guidance exhibit? The paper shows that the fixed-point iteration to solve the anchored classifier guidance exhibits at least linear
convergence, provided that the image discriminator is Lipschitz continuous, by properly choosing the guidance scale.
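The "at least linear convergence" property is the standard contraction-mapping behavior: if the iterated map has Lipschitz constant L < 1, each step shrinks the error by roughly a factor of L. The toy one-dimensional Python sketch below illustrates this (it is not the paper's actual solver; g here is an arbitrary contraction chosen for the demonstration).

```python
import math

# g is Lipschitz with constant at most 0.5 (|g'(x)| = 0.5 |sin x| <= 0.5),
# so fixed-point iteration x <- g(x) converges at least linearly.
g = lambda x: 0.5 * math.cos(x)

x = 0.0
errors = []
for _ in range(20):
    x_new = g(x)
    errors.append(abs(x_new - x))   # residual at each step
    x = x_new

# Successive residuals shrink by roughly a constant factor below 1.
ratios = [errors[i + 1] / errors[i] for i in range(5, 15)]
print(x)             # approx. 0.450, the fixed point of g
print(max(ratios))   # well below 1: linear convergence
```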
[03] Practical Algorithm
1. How does the paper extend the analysis to handle practical rectified flow models? The paper relaxes the assumption of an ideally straight rectified flow trajectory, and instead adopts a piecewise
linear approximation, where the flow trajectory is assumed straight within each time window.
2. How does the paper address the issue of disconnected reference trajectory segments after updates? To handle the disconnected reference trajectory segments, the paper proposes to reinitialize the
reference trajectory every iteration with predictions for the updated target starting points.
3. What are the key steps in the iterative procedure to solve the target flow trajectory under the anchored classifier guidance? The key steps are: 1) Predict the updated target starting points by
extrapolating from history updates; 2) Solve the derived fixed-point problem to obtain the new target trajectory, anchored to the reinitialized reference trajectory.
[04] Applications
1. What types of personalization tasks does the proposed method cover? The proposed method is flexible for various personalized image generation tasks, including human faces, live subjects (e.g.
cats, dogs), certain objects (e.g. cans, vases), and even multiple subjects.
2. How does the method leverage off-the-shelf image discriminators for these tasks? For face-centric personalization, the method uses a face specialist discriminator (ArcFace). For subject-driven
generation, it employs an open-vocabulary object detector (OWL-ViT) and a self-supervised backbone (DINOv2) to extract visual features.
3. How does the method handle the multi-subject scenario? The method extends to the multi-subject case by incorporating a bipartite matching step to associate the generated subjects with the
reference subjects, before computing the classifier guidance signal.
|
June 2012
The following is a demonstration of how to use R to do quadratic programming in order to do mean-variance portfolio optimization under different constraints, e.g., no leverage, no shorting, max
concentration, etc.
Taking a step back, it’s probably helpful to realize the point of all of this. In the 1950s, Harry Markowitz introduced what we now call Modern Portfolio Theory (MPT), which is a mathematical
formulation for diversification. Intuitively, because some stocks zig when others zag, when we hold a portfolio of these stocks, our portfolio can have some notional return at a lower variance than
holding the stocks outright. More specifically, given a basket of stocks, there exists a notion of an efficient frontier. I.e., for any return you choose, there exists a portfolio with the lowest
variance and for any variance you fix, there exists a portfolio with the greatest return. Any portfolio you choose that is not on this efficient frontier is considered sub-optimal (for a given
return, why would you choose a a higher variance portfolio when a lower one exists).
The question becomes if given a selection of stocks to choose from, how much do we invest in each stock if at all?
In an investments course I took a while back, we worked the solution for the case where we had a basket of three stocks to choose from, in Excel. Obviously, this solution wasn’t really scalable
outside of the N=3 case. When asked about extending N to an arbitrary number, the behind-schedule-professor did some handwaving about matrix math. Looking into this later, there does exist a
closed-form equation for determining the holdings for an arbitrary basket of stocks. However, the math starts getting more complicated with each constraint you decide to tack on (e.g., no leverage).
The happy medium between “portfolio optimizer in Excel for three stocks” and “hardcore matrix math for an arbitrary number of stocks” is to use a quadratic programming solver. Some context is needed
to see why this is the case.
Quadratic Programming
According to wikipedia, quadratic programming attempts to minimize a function of the form \frac{1}{2}x^{T}Qx + c^{T}x subject to one or more constraints of the form Ax \le b (inequality) or Ex = d (equality).
Modern Portfolio Theory
The mathematical formulation of MPT is that for a given risk tolerance q \in [0,\infty), we can find the efficient frontier by minimizing w^{T} \Sigma w - q*R^{T}w.
• w is a vector of holding weights such that \sum w_i = 1
• \Sigma is the covariance matrix of the returns of the assets
• q \ge 0 is the “risk tolerance”: q = 0 works to minimize portfolio variance and q = \infty works to maximize portfolio return
• R is the vector of expected returns
• w^{T} \Sigma w is the variance of portfolio returns
• R^{T} w is the expected return on the portfolio
My introducing of quadratic programming before mean-variance optimization was clearly setup, but look at the equivalence between \frac{1}{2}x^{T}Qx + c^{T}x and w^{T} \Sigma w - q*R^{T}w.
Quadratic Programming in R
solve.QP, from quadprog, is a good choice for a quadratic programming solver. From the documentation, it minimizes quadratic programming problems of the form -d^{T}b + \frac{1}{2} b^{T}Db with the
constraints A^{T}b \ge b_0. Pedantically, note the variable mapping of D = 2\Sigma (this is to offset the \frac{1}{2} in the implied quadratic programming setup) and d = qR.
The fun begins when we have to modify A^{T}b \ge b_0 to impose the constraints we’re interested in.
Loading Up the Data
I went to google finance and downloaded historical data for all of the sector SPDRs, e.g., XLY, XLP, XLE, XLF. I’ve named the files in the format of dat.{SYMBOL}.csv. The R code loads it up, formats
it, and then ultimately creates a data frame where each column is the symbol and each row represents an observation (close to close log return).
The data is straight-forward enough, with approximately 13 years worth:
> dim(dat.ret)
[1] 3399 9
> head(dat.ret, 3)
XLB XLE XLF XLI XLK
[1,] 0.010506305 0.02041755 0.014903406 0.017458395 0.023436164
[2,] 0.022546751 -0.00548872 0.006319802 0.013000812 -0.003664126
[3,] -0.008864066 -0.00509339 -0.013105239 0.004987542 0.002749353
XLP XLU XLV XLY
[1,] 0.023863921 -0.004367553 0.022126545 0.004309507
[2,] -0.001843998 0.018349139 0.006232977 0.018206972
[3,] -0.005552485 -0.005303294 -0.014473165 -0.009255754
Mean-Variance Optimization with Sum of Weights Equal to One
If it wasn’t clear before, we typically fix the q in w^{T} \Sigma w - q*R^{T}w before optimization. By permuting the value of q, we then generate the efficient frontier. As such, for these examples,
we’ll set q = 0.5.
solve.QP’s arguments are:
solve.QP(Dmat, dvec, Amat, bvec, meq=0, factorized=FALSE)
Dmat (covariance) and dvec (penalized returns) are generated easily enough:
q <- 0.5
Dmat <- 2 * cov(dat.ret)
dvec <- q * apply(dat.ret, 2, mean)
Amat and bvec are part of the inequality (or equality) you can impose, i.e., A^{T}b \ge b_0. meq is an integer argument that specifies “how many of the first meq constraints are equality statements
instead of inequality statements.” The default for meq is zero.
By construction, you need to think of the constraints in terms of matrix math. E.g., to have all the weights sum up to one, Amat needs to contain a column of ones and bvec needs to contain a single
value of one. Additionally, since it’s an equality contraint, meq needs to be one.
In R code:
# Constraints: sum(x_i) = 1
Amat <- matrix(1, nrow=ncol(dat.ret))
bvec <- 1
Having instantiated all the arguments for solve.QP, it’s relatively straightforward to invoke it. Multiple things are outputted, e.g., constrained solution, unconstrained solution, number of
iterations to solve, etc. For our purpose, we’re primarily just interested in the solution.
> qp <- solve.QP(Dmat, dvec, Amat, bvec, meq=1)
> qp$solution
[1] -0.1489193 0.6463653 -1.0117976 0.4107733 -0.4897956 0.2612327 -0.1094819
[8] 0.5496478 0.8919753
Things to note in the solution are that we have negative values (shorting is allowed) and there exists at least one weight whose absolute value is greater than one (leverage is allowed).
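For this equality-only case, the same answer can be recovered without a QP solver at all: with only \sum w_i = 1 as a constraint, the KKT conditions are a linear system. The Python/NumPy sketch below shows the idea; the covariance matrix and returns are made up for illustration (in the post they come from the SPDR data), so the weights themselves differ from the R output above.

```python
import numpy as np

# Minimize w' Sigma w - q R' w  subject to  sum(w) = 1.
# The Lagrangian stationarity condition 2 Sigma w + lam * 1 = q R together
# with 1' w = 1 gives the linear KKT system:
#   [ 2 Sigma  1 ] [ w  ]   [ q R ]
#   [ 1'       0 ] [ lam] = [ 1   ]

q = 0.5                                       # risk tolerance, as in the post
Sigma = np.array([[0.040, 0.006, 0.010],      # made-up covariance matrix
                  [0.006, 0.025, 0.004],
                  [0.010, 0.004, 0.030]])
R = np.array([0.08, 0.05, 0.06])              # made-up expected returns

n = len(R)
ones = np.ones(n)
KKT = np.block([[2 * Sigma, ones[:, None]],
                [ones[None, :], np.zeros((1, 1))]])
rhs = np.concatenate([q * R, [1.0]])

sol = np.linalg.solve(KKT, rhs)
w = sol[:n]            # optimal weights; may be negative (shorting allowed)
print(w, w.sum())      # weights sum to 1
```

Once inequality constraints like no-shorting enter, the problem stops being a single linear system, which is exactly why an active-set solver like solve.QP is used.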
Mean-Variance Optimization with Sum of Weights Equal to One and No Shorting
We need to modify Amat and bvec to add the constraint of no shorting. In writing, we want to add a diagonal matrix of ones to Amat and a vector of zeros to bvec, which works out when doing the matrix
multiplication that for each weight, its value must be greater than zero.
# Constraints: sum(x_i) = 1 & x_i >= 0
Amat <- cbind(1, diag(ncol(dat.ret)))
bvec <- c(1, rep(0, ncol(dat.ret)))
qp <- solve.QP(Dmat, dvec, Amat, bvec, meq=1)
qp$solution
[1] 0.0000000 0.4100454 0.0000000 0.0000000 0.0000000 0.3075880 0.0000000
[8] 0.2823666 0.0000000
Note that with the constraints that all the weights sum up to one and that the weights are positive, we’ve implicitly also constrained the solution to have no leverage.
Mean-Variance Optimization with Sum of Weights Equal to One, No Shorting, and No Heavy Concentration
Looking at the previous solution, note that one of the weights suggests that we put 41% of our portfolio into a single asset. We may not be comfortable with such a heavy allocation, and we might want
to impose the additional constraint that no single asset in our portfolio takes up more than 15%. In math and with our existing constraints, that’s the same as saying -x \ge -0.15 which is equivalent
to saying x \le 0.15.
# Constraints: sum(x_i) = 1 & x_i >= 0 & x_i <= 0.15
Amat <- cbind(1, diag(ncol(dat.ret)), -diag(ncol(dat.ret)))
bvec <- c(1, rep(0, ncol(dat.ret)), rep(-0.15, ncol(dat.ret)))
qp <- solve.QP(Dmat, dvec, Amat, bvec, meq=1)
qp$solution
[1] 0.1092174 0.1500000 0.0000000 0.1407826 0.0000000 0.1500000 0.1500000
[8] 0.1500000 0.1500000
Turning the Weights into Expected Portfolio Return and Expected Portfolio Volatility
With our weights, we can now calculate the portfolio return as R^{T}w and portfolio volatility as \sqrt{w^{T} \Sigma w}. Doing this, we might note that the values look “small” and not what you expected.
Keep in mind that our observations are in daily-space and thus our expected return is expected daily return and expected volatility is expected daily volatility. You will need to annualize it, i.e.,
R^{T}w * 252 and \sqrt{w^{T} \Sigma w * 252}.
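The annualization step can be sketched quickly in Python (the post itself works in R; the two-asset numbers below are made up for illustration):

```python
import math

# Hypothetical daily statistics for a two-asset portfolio (numbers made up).
mu = [0.0004, 0.0003]                 # expected daily returns, R
sigma = [[0.0001, 0.00002],
         [0.00002, 0.00015]]          # daily covariance matrix, Sigma
w = [0.6, 0.4]                        # portfolio weights

# Daily portfolio return R^T w and volatility sqrt(w^T Sigma w)
daily_ret = sum(m * wi for m, wi in zip(mu, w))
var = sum(w[i] * sigma[i][j] * w[j]
          for i in range(len(w)) for j in range(len(w)))
daily_vol = math.sqrt(var)

# Annualize assuming 252 trading days
ann_ret = daily_ret * 252
ann_vol = math.sqrt(var * 252)
```

Note that annualizing the volatility by \sqrt{w^T \Sigma w \times 252} is the same as multiplying the daily volatility by \sqrt{252}.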
The following is an example of the values of the weights and portfolio statistics while permuting the risk parameter and solving the quadratic programming problem with the constraints that the
weights sum to one and there’s no shorting.
> head(ef.w)
XLB XLE XLF XLI XLK XLP XLU XLV XLY
1 0 0.7943524 0 0 0 0 0 0.1244543 0.08119329
1.005 0 0.7977194 0 0 0 0 0 0.1210635 0.08121713
1.01 0 0.8010863 0 0 0 0 0 0.1176727 0.08124097
1.015 0 0.8044533 0 0 0 0 0 0.1142819 0.08126480
1.02 0 0.8078203 0 0 0 0 0 0.1108911 0.08128864
1.025 0 0.8111873 0 0 0 0 0 0.1075003 0.08131248
> head(ef.stat)
ret sd
1 0.06663665 0.2617945
1.005 0.06679809 0.2624120
1.01 0.06695954 0.2630311
1.015 0.06712098 0.2636519
1.02 0.06728243 0.2642742
1.025 0.06744387 0.2648981
Note that as we increase the risk parameter, we're working to maximize return at the expense of risk. While obvious, it's worth stating that we're looking at the efficient frontier: if you plotted ef.stat in its entirety on a plot whose axes are return and risk, you would get the efficient frontier.
Wrap Up
I’ve demonstrated how to use R and the quadprog package to do quadratic programming. It also happens that the mean-variance portfolio optimization problem really lends itself to quadratic programming: it’s relatively straightforward to do the variable mapping between the two problems. The only potential gotcha is how to state your desired constraints in the form A^{T}b \ge b_{0}, but several examples of constraints were given, from which you can hopefully extrapolate.
Getting away from the mechanics and talking about the theory, I’ll also offer that there are some serious flaws with the approach demonstrated if you attempt to implement this for your own trading. Specifically, you will most likely want to create return forecasts and risk forecasts instead of using historical values only. You might also want to impose constraints to induce sparsity in what you actually hold, in order to minimize transaction costs. In saying that your portfolio is mean-variance optimal, there’s the assumption that the returns you’re working with are normally distributed, which is definitely not the case. These and additional considerations will need to be handled before you let this run in “production.”
All that being said, however, Markowitz’s mean-variance optimization is the building block for whatever more robust solution you might end up coming up with. An understanding of both the theory and implementation of mean-variance optimization is needed before you can progress.
Helpful Links
Lecture on Quadratic Programming and Markowitz Model (R. Vanderbei)
Lecture on Linear Programming and a Modified Markowitz Model (R. Vanderbei)
You are Horrible at Market Timing
You are horrible at market timing. Don’t even attempt it. I probably can’t convince you how horrible you are, but hopefully some empirical data analysis will show how you and the majority of people
are no good at market timing.
Recently a friend came to me lamenting a stock purchase he had made: the stock had gone down since he bought it, and he felt he should have waited to buy it even cheaper. This reminded me of an anecdote from a professor in my econometrics class. I was taking the class in late 2008, which, if you don’t remember, was right in the midst of the major financial collapse, with all the major indices taking a huge nosedive.
Students being students, somebody asked the professor what he thought about the collapse and what he was doing in his own personal account. Keep in mind the tone was about what a “normal” person does, not what a 1000-person hedge fund does. He referred to a past study showing that most recoveries in the equities space didn’t come from steady returns but instead were concentrated in a few, infrequently spaced days. That is, there was no way to catch the recoveries unless you were already invested the day before. If you were sitting on cash, saw the move happen, and then attempted to get into the markets, it would have been too late.
I decided to (very) roughly replicate this purported study for my friend.
I first went to Google to download daily prices for SPY. They provide a nice facility for exporting the data to a csv format.
The data is relatively straightforward.
I wrote some R code to read in this data and to trim out days that didn’t have an open, which left me with observations starting on 2000/01/03 and ~3100 data points. Additionally, I created log returns for each day’s open to close, i.e., log(p_{close}) - log(p_{open}).
# Get the data
xx <- read.table(file="~/tmp/spy.data.csv", header=T, sep=",", as.is=T)
names(xx) <- c("date", "open", "high", "low", "close", "vlm")
# Get date in ymd format
xx$ymd <- as.numeric(strftime(as.Date(xx$date, "%d-%b-%y"), "%Y%m%d"))
xx <- xx[, names(xx)[-1]]
xx <- xx[,c(ncol(xx), 1:(ncol(xx)-1))]
# We want to work with complete data
xx <- xx[xx$open != 0,]
# I prefer low dates first rather than high dates
xx <- xx[order(xx$ymd),]
rownames(xx) <- 1:nrow(xx)
# Getting open to close
xx$o2c <- log(xx$close) - log(xx$open)
xx <- xx[!is.infinite(xx$o2c),]
Getting the top 10 return days is relatively straightforward. Note that, finger in the wind, a lot of the top 10 return days came from the end of 2008, when presumably a lot of people had moved their money into cash out of fear.
> head(xx[order(-xx$o2c),], n=10)
ymd open high low close vlm o2c
635 20020724 78.14 85.12 77.68 84.72 671400 0.08084961
2202 20081013 93.87 101.35 89.95 101.35 2821800 0.07666903
2213 20081028 87.34 94.24 84.53 93.76 81089900 0.07092978
2225 20081113 86.13 91.73 82.09 91.17 753800996 0.05686811
2234 20081126 84.30 89.19 84.24 88.97 370320441 0.05391737
2019 20080123 127.09 134.19 126.84 133.86 53861000 0.05189898
248 20010103 128.31 136.00 127.66 135.00 17523900 0.05082557
2241 20081205 83.65 88.42 82.24 87.93 471947872 0.04989962
2239 20081203 83.40 87.83 83.14 87.32 520103726 0.04593122
2315 20090323 78.74 82.29 78.31 82.22 420247245 0.04324730
Emphasizing this point more: if you didn’t have your cash in equities at the beginning of the day, you would have missed out on the recovery. Additionally, we can look at what the returns were on the prior day. In other words, was there some in-your-face behavior the prior day that would lead you to believe that huge returns would come the next day?
> max.ndx <- head(order(-xx$o2c), n=10)
> max.ndx <- as.vector(t(cbind(max.ndx, max.ndx-1)))
> xx[max.ndx,]
ymd open high low close vlm o2c
635 20020724 78.14 85.12 77.68 84.72 671400 0.080849612
634 20020723 82.55 83.24 78.85 79.95 65806500 -0.032002731
2202 20081013 93.87 101.35 89.95 101.35 2821800 0.076669027
2201 20081010 86.76 93.94 83.58 88.50 90590400 0.019856866
2213 20081028 87.34 94.24 84.53 93.76 81089900 0.070929778
2212 20081027 85.97 89.51 83.70 83.95 62953200 -0.023777015
2225 20081113 86.13 91.73 82.09 91.17 753800996 0.056868113
2224 20081112 88.23 90.15 85.12 85.82 454330554 -0.027694962
2234 20081126 84.30 89.19 84.24 88.97 370320441 0.053917369
2233 20081125 87.30 87.51 83.82 85.66 454188290 -0.018964491
2019 20080123 127.09 134.19 126.84 133.86 53861000 0.051898981
2018 20080122 127.21 132.43 126.00 130.72 75350600 0.027218367
248 20010103 128.31 136.00 127.66 135.00 17523900 0.050825568
247 20010102 132.00 132.16 127.56 128.81 8732200 -0.024463472
2241 20081205 83.65 88.42 82.24 87.93 471947872 0.049899616
2240 20081204 86.06 88.05 83.74 85.30 444341542 -0.008870273
2239 20081203 83.40 87.83 83.14 87.32 520103726 0.045931222
2238 20081202 83.47 85.49 82.04 85.27 469785220 0.021335407
2315 20090323 78.74 82.29 78.31 82.22 420247245 0.043247296
2314 20090320 78.76 78.91 76.53 76.71 371165651 -0.026373176
Looking at the data, we can see that there were both positive and negative returns the day before. However, there weren’t any “large return today, I better get in” moments. My take is that, from the perspective of a normal investor saving for retirement, they should just leave their money in, hopefully already using some variant of dollar-cost averaging.
For what it’s worth, my professor said he hardly touched his own personal investments, presumably just putting his 401k money in a few indices and forgetting about it. His time was better spent on
writing academic papers.
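The professor’s claim — that recoveries concentrate in a few days you cannot afford to miss — can be sketched with synthetic data. This is purely illustrative Python with made-up returns, not the SPY data above:

```python
import math
import random

# Synthetic daily log returns: small noise plus 10 large "recovery" days.
# Illustrative only -- these are not real market returns.
random.seed(0)
rets = [random.gauss(0.0002, 0.01) for _ in range(2500)]
for i in random.sample(range(2500), 10):
    rets[i] += 0.05

# Growth if you stay invested every day vs. if you miss the 10 best days.
full = math.exp(sum(rets))
missed = math.exp(sum(sorted(rets)[:-10]))
```

Since the missed days are exactly the largest positive returns, `full` is always greater than `missed`: sitting out those few days permanently lowers the compounded result.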
3 Hrs 7 Mins - Total Flight Time
3 Hrs 7 Mins - Total Flight Time from Jerusalem to 00120
Plane takes off from Jerusalem, IL and lands in 00120, VA.
Current Time in Vatican City: Wednesday November 13th 7:06am.
Estimated Arrival Time: If you were to fly from Vatican City now, your arrival time would be Wednesday November 13th 10:13am (based on Vatican City time zone).
* Flight duration has been calculated using an average speed of 435 knots. 15 minutes have been added for takeoff and landing; note that this time varies based on runway traffic. Other factors, such as taxiing and not being able to reach or maintain a speed of 435 knots, have not been taken into account.
Flight Time Summary
Your in-air flight time starts at Jerusalem and ends at 00120.
Estimated arrival time: Wednesday November 13th 10:13am (based on destination time zone).
You can see why your trip from Jerusalem to 00120 takes 3 hrs 7 mins by taking a look at how far of a distance you would need to travel. You may do so by checking the flight distance between Jerusalem and 00120.
After seeing how far Jerusalem is from 00120 by plane, you may also want to get information on route elevation from Jerusalem to 00120.
Did you know that 00120 can be reached by car? If you'd like to drive there, you can check the travel time from Jerusalem to 00120.
To see how far your destination is by car, you can check the distance from Jerusalem to 00120.
If you need a road map so that you can get a better understanding of the route to 00120, you may want to check the road map from Jerusalem to 00120.
If you're now considering driving, you may want to take a look at the driving directions from Jerusalem to 00120.
Whether the trip is worth the drive can also be calculated by figuring out the fuel cost from Jerusalem to 00120.
Zippers for non-inductive types
Famous mathematician-philosopher Gian-Carlo Rota said that mathematical understanding is all about showing how things that seem different are essentially the same. If that is true, then functional programming is something that we do quite well.
One of the most celebrated aspects of functional programming is the correspondence that shows that logical propositions and programming language types are essentially the same thing. It is known as
the "Curry-Howard" correspondence. Less known but perhaps even more interesting is that each common type corresponds to an algebraic operation. Let's look at all three in a table (the types are written in OCaml syntax):
│ Type │ Constructor │ Logic │ Algebra │
│ void │ none │ $\bot$ │ 0 │
│ unit │ () │ $\top$ │ 1 │
│ 'a * 'b │ (a, b) │ $A\wedge B$ │ $A\times B$ │
│ ('a, 'b) sum │ Left a │ $A\vee B$ │ $A+B$ │
│ │ Right b │ │ │
│ 'a -> 'b │ fun x -> b │ $A\supset B$ │ $B^A$ │
type ('a, 'b) sum = Left of 'a | Right of 'b
There are many tutorial introductions explaining this fascinating correspondence.
What is very cool is that the correspondence runs deep, so that, for example, if an algebraic equation holds then the corresponding types are isomorphic. For example, since $A\times (B+C)=A\times B+A
\times C$ it follows that the types 'a * ('b, 'c) sum and ('a * 'b, 'a * 'c) sum are isomorphic. Indeed, the isomorphisms are:
let f = function (a, Left b) -> Left (a, b)
| (a, Right c) -> Right (a, c);;
val f : 'a * ('b, 'c) sum -> ('a * 'b, 'a * 'c) sum = <fun>
let g = function Left (a, b) -> (a, Left b)
| Right (a, c) -> (a, Right c);;
val g : ('a * 'b, 'a * 'c) sum -> 'a * ('b, 'c) sum = <fun>
The correspondence is really about types rather than about OCaml or any particular language. Limitations of the language may occasionally break the correspondences. For example, $A^0=1$, and indeed there is always a unique function void -> 'a, just as () : unit is unique. But in OCaml, because of the lack of support for empty patterns, we cannot implement it. A language which has empty patterns, and in which we can implement it, is Agda.
The correspondence is broken in both directions, since in OCaml we can define
let contr : (unit -> void) = let rec div x = div x in div ()
which has the type corresponding to $\top\supset\bot$, which is logically inconsistent. This again cannot happen in Agda. Note that it can also happen in Haskell, so I wouldn't call Haskell a "pure" language, as it is also not faithful to these correspondences.
So these correspondences may or may not hold in an actual language. In what follows I will be even less rigorous. But rigour here would move us too far away from the clarity required to tell a good
story. Apologies in advances, but this is merely a blog post. I hope a bit of licence will not brand the whole thing as "fake news". For rigour I refer the reader to the proper literature, for which
I give a few entry points at the end.
Type isomorphisms are cool enough, but the correspondence runs even deeper than that. Consider for example the type of lists over some type $A$. If we enumerate the number of distinct lists of size $n$, noting that a list of size $n$ is essentially an $n$-tuple of $A$s, the number of such lists is $A^n$. So the type of lists, in general, is either an empty list, or a unit-length list, or a length-two list, or ... . So we have that:
$$L(A)=1+A+A^2+\cdots+A^n+\cdots = \sum_n A^n.$$
If we think of $L(A)$ as a function, then the above is its Taylor-series expansion, from which it follows that
$$L(A)=1+A+A^2+\cdots+A^n+\cdots = \sum_n A^n = \frac 1{1-A}.$$
We can arrive at the same result in a more direct way from the recursive definition of a list
$$ L(A) = 1+A\times L(A)$$
which, if we solve for $L(A)$, gives the same result! However, speaking of informality, the equation-solving route does not always apply. Consider the case of natural numbers, defined as type nat = Zero | Suc of nat. The corresponding equation is $N=1+N$, which has no solution. But let us look the other way when faced with such complications.
For the types in the table at the beginning the intuitions can be quite direct, but for the algebraic representation of lists, which involves division and subtraction, the direct intuitions are not obvious at all. This is rather fascinating!
Going deeper down this rabbit hole, the algebraic manipulation opens new possibilities. If types are functions $T(A)$, is the derivative $T'(A)=\frac{dT(A)}{dA}$ meaningful? Yes! The derivative corresponds to the zipper of a type. An element of a zipper is like an element of the original type but with a "highlighted" point, or a "hole".
For example, given a list, its zipper is a list with a highlighted node, which is equivalent to a pair of lists: the prefix and the suffix relative to the hole. So unlike in a list, where we can move in just one direction ("right"), in a list zipper we can move both left and right. This is what imperative people usually use a doubly linked list for! But mathematics tells us that a pair of lists suffices: a prefix (which is to be represented in reversed order) and a suffix. Moving left/right is simply a matter of shifting the head of the prefix/suffix onto the suffix/prefix.
But how does this relate to algebra? As we said, the type of the zipper is computed by the derivative:
$$L'(A)=0 + 1\times L(A) + A\times L'(A)$$
If we solve the equation we get
$$L'(A) = \frac {L(A)}{1-A}= L(A)\times \frac 1 {1-A}$$
But $L(A)= \frac 1 {1-A}$ so
$$L'(A) = L(A)\times L(A)$$
The zipper of a list is a pair of lists! You should have a heart hardened by years of system-level programming or hollowed by years of corporate middleware drivel not to be enraptured by how
beautiful all this is.
And note that if you compute the Taylor expansion of $L'(A)$ you get the same result as computing the derivative of the Taylor expansion of $L(A)$, i.e.
$$L'(A)=\sum_n (1+n)\times A^n.$$
It all comes together superbly.
After this too-fast introduction let's get to the main course, which is answering this question:
Does the magic of algebra only apply to the inductive types we deal with in functional programming?
The answer might shock you, because it is NO!
Let us take the type of circular lists over some type $A$, which we represent graphically as
This is a typical non-recursive data type, so we cannot retrieve its algebraic representation using an equation. But we can via the Taylor series, by enumerating the number of distinct such structures of each size $n$. Some not-so-deep thinking leads us to the conclusion that since there are $A^n$ distinct lists of length $n$, there will be $\frac{A^n}n$ distinct circular lists: each time we move the head to the end of the list we get an equivalent list, and we must quotient by this equivalence. Thus
$$C(A)=\sum_n \frac{A^n}n = -\log(1-A),$$
by bringing the Taylor expansion to a closed form.
But, hey, how did I do that, go from that Taylor expansion to the original function? Via zippers! Because I know two things.
First, I know that the type of the zipper is the derivative of the original type. Second, I know that the zipper is like the original structure with a hole. And to me, that looks like a list:
So $C'(A)=\frac 1{1-A}$, which means that $C(A)=\int \frac 1{1-A}\,\mathrm{d}A=-\log(1-A)$. And I can easily verify that my intuition is right, because computing the Taylor expansion gives me $C(A)=\sum_n \frac{A^n}n$, which is consistent with the above.
Is this a fluke? No. Let's take another important non-inductive type, that of multisets. Again, looking at the enumeration of distinct multisets in order to compute the Taylor series, we get that
$$MSet(A)=\sum_n \frac{A^n}{n!}=e^A,$$
since there are $n!$ permutations to quotient by.
From this, since $\frac{\mathrm de^x}{\mathrm d x}=e^x$ it further follows that the zipper of a multiset is a multiset!
Also, we can get isomorphisms such as
$$MSet(A+B)=e^{A+B}=e^A\times e^B=MSet(A)\times MSet(B)$$
which says that a multiset of $A$s or $B$s is the same as pairing a multiset of $A$s with a multiset of $B$s. The isomorphism going one way is partitioning by the obvious equivalence, and going the other way is multiset union.
Reading list
Histogram: Definition, Types, Graph, and Examples - Custom Homework Help
A histogram is a graphical representation of data that displays the frequency distribution of a dataset. It consists of a series of adjacent bars, where the height of each bar corresponds to the
frequency or relative frequency of occurrences within each interval, also known as a bin.
A histogram visually represents the distribution of numerical data by dividing it into intervals or bins. It provides a clear picture of the central tendency, variability, and shape of the data.
Histograms are widely used in statistics, data analysis, and various fields to understand the distribution patterns of datasets.
A typical histogram graph consists of two axes:
• The X-axis (Horizontal Axis) represents the range of values or intervals of the data.
• The Y-axis (Vertical Axis) represents the frequency, relative frequency, or cumulative frequency.
Each bar in the histogram corresponds to an interval on the X-axis, with the height of the bar indicating the frequency or relative frequency of observations within that interval.
Types and Examples
Frequency Histogram:
This type of histogram displays the frequency of occurrences within each interval. The height of each bar represents the absolute frequency.
Consider the Income Distribution data in the table below:
Suppose you have a dataset concerning the income (in $1000) of 20 individuals. The histogram would represent the distribution of income across different income intervals, such as 0–10, 11–20, 21–30,
and so on. Therefore, the first step is to create a frequency table before constructing a histogram (in cases where technology is not in use), as shown in the table below
From this point, we can use bar graphs to construct the histogram, with the intervals (income) placed on the x-axis and the frequency on the y-axis. The resultant histogram is shown below.
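Since the original table is an image and not reproduced here, the binning step can be sketched in Python with hypothetical incomes (the values below are made up for illustration):

```python
# Hypothetical incomes (in $1000) for 20 individuals.
incomes = [5, 8, 12, 14, 15, 18, 22, 23, 25, 27,
           28, 31, 33, 35, 38, 41, 44, 47, 52, 58]

def frequency_table(data, width=10):
    """Count observations per interval (0-9, 10-19, ...) of the given width."""
    freq = {}
    for x in data:
        lo = (x // width) * width
        label = f"{lo}-{lo + width - 1}"
        freq[label] = freq.get(label, 0) + 1
    return freq

table = frequency_table(incomes)
```

The resulting counts per interval are exactly the bar heights of the frequency histogram.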
Relative Frequency Histogram
Instead of showing the absolute frequency, this type of histogram displays the proportion of occurrences within each interval relative to the total number of observations. The height of each bar
represents the relative frequency.
Consider the Height Distribution data in the table below:
Given such data, you can create a histogram to visualize the distribution of heights across various height intervals (e.g., 150–160 cm, 161–170 cm, etc.). Therefore, the first step is to create a
relative frequency table before constructing a histogram (in cases where technology is not in use), as shown in the table below
From this point, we can use bar graphs to construct the histogram, with the intervals placed on the x-axis and the height proportions on the y-axis. The resultant histogram is shown below.
Cumulative Frequency Histogram:
In this histogram, each bar represents the cumulative frequency (CF) up to the corresponding interval. It helps visualize the total cumulative frequency distribution as the data progresses.
Consider the exam score distribution data in the table below:
Given such data, you can create a histogram to visualize the distribution of scores across performance intervals (e.g., 10–20, 20–30, etc.). Therefore, the first step is to create a cumulative
frequency table before constructing a histogram (in cases where technology is not in use), as shown in the table below
From this point, we can use bar graphs to construct the histogram, with the intervals (exam score) placed on the x-axis and the cumulative frequency on the y-axis. The resultant histogram is shown below.
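The cumulative step itself is mechanical: each bar is the running total of the frequencies so far. A Python sketch with hypothetical frequencies (not the exam data referenced above):

```python
from itertools import accumulate

# Hypothetical frequencies per exam-score interval (10-20, 20-30, ...).
freqs = [2, 5, 9, 3, 1]

# Each cumulative bar is the sum of all frequencies up to that interval.
cum = list(accumulate(freqs))
```

The final cumulative value always equals the total number of observations.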
In summary, histograms offer a powerful way to visually represent and interpret the distribution of data, making it easier to draw insights and make informed decisions in various fields ranging from
finance to healthcare and beyond.
6 24 In Simplest Form
Work out 6/12 2/4 Give your answer in its simplest form.
6 24 In Simplest Form. 24/6 as a simplified fraction. The fraction calculator will reduce a fraction to its simplest form.
How do you simplify trigonometry expressions? To simplify a trigonometry expression, use trigonometry identities to rewrite the expression in a simpler form. The fraction calculator will reduce a fraction to its simplest form; the key to simplifying fractions is to find a number that goes into both the numerator and the denominator. What is the simplest form of the fraction 24/60? What is 6.24 as a fraction? 6 24/100 = 6 (6·4)/(25·4) = 6 6/25; you reduce a fraction by removing the common factors from the numerator and denominator. For example, for 24/144, factor out the 24 as 24·1/(24·6); since 24 appears in both the numerator and the denominator, it can be cancelled. In order to simplify 6/24, you follow these steps: find the GCD (or HCF) of the numerator and denominator (the GCD of 6 and 24 is 6), then divide both the numerator and the denominator by it. The simplest form of 6/24 is 1/4.
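The divide-by-the-GCD procedure can be checked directly in Python; the standard library's `Fraction` reduces to lowest terms automatically:

```python
from fractions import Fraction
from math import gcd

# Reduce 6/24 by dividing the numerator and denominator by their GCD.
g = gcd(6, 24)                 # GCD of 6 and 24 is 6
reduced = (6 // g, 24 // g)    # (1, 4), i.e., 1/4

# The standard library agrees:
assert Fraction(6, 24) == Fraction(1, 4)
```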
Existing generative modeling techniques can largely be grouped into two categories based on how they represent probability distributions.
1. likelihood-based models, which directly learn the distribution’s probability density (or mass) function via (approximate) maximum likelihood. Typical likelihood-based models include autoregressive models, normalizing flow models, energy-based models (EBMs), and variational auto-encoders (VAEs).
2. implicit generative models, where the probability distribution is implicitly represented by a model of its sampling process. The most prominent example is generative adversarial networks (GANs), where new samples from the data distribution are synthesized by transforming a random Gaussian vector with a neural network.
Bayesian networks, Markov random fields (MRF), autoregressive models, and normalizing flow models are all examples of likelihood-based models. All these models represent the probability density or
mass function of a distribution.
GAN is an example of implicit models. It implicitly represents a distribution over all objects that can be produced by the generator network.
Likelihood-based models and implicit generative models, however, both have significant limitations. Likelihood-based models either require strong restrictions on the model architecture to ensure a
tractable normalizing constant for likelihood computation, or must rely on surrogate objectives to approximate maximum likelihood training. Implicit generative models, on the other hand, often
require adversarial training, which is notoriously unstable and can lead to mode collapse.
In this blog post, I will introduce another way to represent probability distributions that may circumvent several of these limitations. The key idea is to model the gradient of the log probability
density function, a quantity often known as the (Stein) score function . Such score-based models are not required to have a tractable normalizing constant, and can be directly learned by score
matching .
Score function (the vector field) and density function (contours) of a mixture of two Gaussians.
Score-based models have achieved state-of-the-art performance on many downstream tasks and applications. These tasks include, among others, image generation (yes, better than GANs!), audio synthesis, shape generation, and music generation. Moreover, score-based models have connections to normalizing flow models, therefore allowing exact likelihood computation and representation learning. Additionally, modeling and estimating scores facilitates inverse problem solving, with applications such as image inpainting, image colorization, compressive sensing, and medical image reconstruction (e.g., CT, MRI).
1024 x 1024 samples generated from score-based models
This post aims to show you the motivation and intuition of score-based generative modeling, as well as its basic concepts, properties and applications.
The score function, score-based models, and score matching
Suppose we are given a dataset \(\{\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_N\}\), where each point is drawn independently from an underlying data distribution \(p(\mathbf{x})\). Given this
dataset, the goal of generative modeling is to fit a model to the data distribution such that we can synthesize new data points at will by sampling from the distribution.
In order to build such a generative model, we first need a way to represent a probability distribution. One such way, as in likelihood-based models, is to directly model the probability density
function (p.d.f.) or probability mass function (p.m.f.). Let \(f_\theta(\mathbf{x}) \in \mathbb{R}\) be a real-valued function parameterized by a learnable parameter \(\theta\). We can define a p.d.f. (hereafter we only consider probability density functions; probability mass functions are similar) via
$$p_\theta(\mathbf{x}) = \frac{e^{-f_\theta(\mathbf{x})}}{Z_\theta}, \label{ebm}$$
where \(Z_\theta > 0\) is a normalizing constant dependent on \(\theta\), such that \(\int p_\theta(\mathbf{x}) \textrm{d} \mathbf{x} = 1\). Here the function \(f_\theta(\mathbf{x})\) is often called an unnormalized probabilistic model, or energy-based model.
We can train \(p_\theta(\mathbf{x})\) by maximizing the log-likelihood of the data:
$$\max_\theta \sum_{i=1}^N \log p_\theta(\mathbf{x}_i). \label{mle}$$
However, equation \eqref{mle} requires \(p_\theta(\mathbf{x})\) to be a normalized probability density function. This is undesirable because in order to compute \(p_\theta(\mathbf{x})\), we must evaluate the normalizing constant \(Z_\theta\)—a
typically intractable quantity for any general \(f_\theta(\mathbf{x})\). Thus to make maximum likelihood training feasible, likelihood-based models must either restrict their model architectures
(e.g., causal convolutions in autoregressive models, invertible networks in normalizing flow models) to make \(Z_\theta\) tractable, or approximate the normalizing constant (e.g., variational
inference in VAEs, or MCMC sampling used in contrastive divergence) which may be computationally expensive.
By modeling the score function instead of the density function, we can sidestep the difficulty of intractable normalizing constants. The score function of a distribution \(p(\mathbf{x})\) is defined
as
$$\nabla_\mathbf{x} \log p(\mathbf{x}),$$
and a model for the score function is called a score-based model, which we denote as \(\mathbf{s}_\theta(\mathbf{x})\). The score-based model is
learned such that \(\mathbf{s}_\theta(\mathbf{x}) \approx \nabla_\mathbf{x} \log p(\mathbf{x})\), and can be parameterized without worrying about the normalizing constant. For example, we can easily
parameterize a score-based model with the energy-based model defined in equation \eqref{ebm}, via
$$\mathbf{s}_\theta(\mathbf{x}) = \nabla_\mathbf{x} \log p_\theta(\mathbf{x}) = -\nabla_\mathbf{x} f_\theta(\mathbf{x}) - \underbrace{\nabla_\mathbf{x} \log Z_\theta}_{=0} = -\nabla_\mathbf{x} f_\theta(\mathbf{x}).$$
since we don’t need any special architectures to make the normalizing constant tractable.
Parameterizing probability density functions. No matter how you change the model family and parameters, it has to be normalized (area under the curve must integrate to one).
Parameterizing score functions. No need to worry about normalization.
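To make this concrete, here is a minimal numpy sketch (a toy example of my own, not code from the accompanying repositories): for an energy-based model with a quadratic energy, differentiating \(\log p_\theta\) is the same as differentiating \(-f_\theta\), so the score never touches \(Z_\theta\).

```python
import numpy as np

# Toy energy-based model: f(x) = ||x||^2 / 2, so p(x) ∝ exp(-f(x)) is a
# standard Gaussian and the true score is ∇_x log p(x) = -∇_x f(x) = -x.
def f(x):
    return 0.5 * np.sum(x ** 2)

def score_from_energy(x, eps=1e-5):
    # Central-difference gradient of -f(x). The unknown log Z_theta would only
    # shift log p_theta by a constant, so it contributes nothing to the gradient.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = -(f(x + e) - f(x - e)) / (2 * eps)
    return g

x = np.array([0.3, -1.2])
print(np.allclose(score_from_energy(x), -x, atol=1e-6))  # matches -x without ever computing Z_theta
```

In a real model one would use automatic differentiation instead of finite differences, but the point is the same: the normalizing constant drops out of the gradient.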
Similar to likelihood-based models, we can train score-based models by minimizing the Fisher divergence (the Fisher divergence is typically defined between two distributions \(p\) and \(q\) as \(\mathbb{E}_{p(\mathbf{x})}[\| \nabla_\mathbf{x} \log p(\mathbf{x}) - \nabla_\mathbf{x}\log q(\mathbf{x}) \|_2^2]\); here we slightly abuse the term as the name of a closely related expression for score-based models) between the model and the data distributions, defined as \[\mathbb{E}_{p(\mathbf{x})}[\| \nabla_\mathbf{x} \log p(\mathbf{x}) - \mathbf{s}_\theta(\mathbf{x}) \|_2^2]. \label{fisher}\]
Intuitively, the Fisher divergence compares the squared \(\ell_2\) distance between the ground-truth data score and the score-based model. Directly computing this divergence, however, is infeasible
because it requires access to the unknown data score \(\nabla_\mathbf{x} \log p(\mathbf{x})\). Fortunately, there exists a family of methods called score matching Commonly used score matching methods
include denoising score matching and sliced score matching . Here is an introduction to score matching and sliced score matching. that minimize the Fisher divergence without knowledge of the
ground-truth data score. Score matching objectives can directly be estimated on a dataset and optimized with stochastic gradient descent, analogous to the log-likelihood objective for training
likelihood-based models (with known normalizing constants). We can train the score-based model by minimizing a score matching objective, without requiring adversarial optimization.
Additionally, using the score matching objective gives us a considerable amount of modeling flexibility. The Fisher divergence itself does not require \(\mathbf{s}_\theta(\mathbf{x})\) to be an
actual score function of any normalized distribution—it simply compares the \(\ell_2\) distance between the ground-truth data score and the score-based model, with no additional assumptions on the
form of \(\mathbf{s}_\theta(\mathbf{x})\). In fact, the only requirement on the score-based model is that it should be a vector-valued function with the same input and output dimensionality, which is
easy to satisfy in practice.
As a brief summary, we can represent a distribution by modeling its score function, which can be estimated by training a score-based model of free-form architectures with score matching.
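As an illustrative sketch (a toy setup of my own, not the implementation from the referenced score matching papers), we can verify denoising score matching numerically in one dimension: perturb Gaussian data, regress the denoising target \(-(\tilde{x}-x)/\sigma^2\) on \(\tilde{x}\) with the simplest possible linear score model, and recover the score of the perturbed distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 200_000, 0.5

x = rng.standard_normal(n)                    # data ~ N(0, 1)
x_tilde = x + sigma * rng.standard_normal(n)  # noise-perturbed samples

# Denoising score matching target: the score of the perturbation kernel,
# ∇ log N(x_tilde; x, sigma^2) = -(x_tilde - x) / sigma^2.
target = -(x_tilde - x) / sigma ** 2

# Fit the simplest possible "score model" s(x) = a * x by least squares.
a = np.sum(x_tilde * target) / np.sum(x_tilde ** 2)

# The perturbed distribution is N(0, 1 + sigma^2), whose true score is
# -x / (1 + sigma^2); the fitted slope should be close to that coefficient.
print(abs(a - (-1 / (1 + sigma ** 2))) < 0.02)
```

The fitted slope converges to the score of the noisy distribution, not the clean one, which is exactly the property exploited in the next sections.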
Langevin dynamics
Once we have trained a score-based model \(\mathbf{s}_\theta(\mathbf{x}) \approx \nabla_\mathbf{x} \log p(\mathbf{x})\), we can use an iterative procedure called Langevin dynamics to draw samples
from it.
Langevin dynamics provides an MCMC procedure to sample from a distribution \(p(\mathbf{x})\) using only its score function \(\nabla_\mathbf{x} \log p(\mathbf{x})\). Specifically, it initializes the
chain from an arbitrary prior distribution \(\mathbf{x}_0 \sim \pi(\mathbf{x})\), and then iterates the following
\[ \mathbf{x}_{i+1} \gets \mathbf{x}_i + \epsilon \nabla_\mathbf{x} \log p(\mathbf{x}_i) + \sqrt{2\epsilon}~ \mathbf{z}_i, \quad i=0,1,\cdots, K, \label{langevin} \]
where \(\mathbf{z}_i \sim \mathcal{N}(0, I)\). When \(\epsilon \to 0\) and \(K \to \infty\), \(\mathbf{x}_K\) obtained from the procedure in \eqref{langevin} converges to a sample from \(p(\mathbf
{x})\) under some regularity conditions. In practice, the error is negligible when \(\epsilon\) is sufficiently small and \(K\) is sufficiently large.
Using Langevin dynamics to sample from a mixture of two Gaussians.
Note that Langevin dynamics accesses \(p(\mathbf{x})\) only through \(\nabla_\mathbf{x} \log p(\mathbf{x})\). Since \(\mathbf{s}_\theta(\mathbf{x}) \approx \nabla_\mathbf{x} \log p(\mathbf{x})\), we
can produce samples from our score-based model \(\mathbf{s}_\theta(\mathbf{x})\) by plugging it into equation \eqref{langevin}.
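The update rule in \eqref{langevin} takes only a few lines of numpy. Here is a minimal sketch on a toy target whose score we know in closed form (a standard Gaussian, so \(\nabla_\mathbf{x} \log p(\mathbf{x}) = -\mathbf{x}\)); in practice the analytic score would be replaced by a trained \(\mathbf{s}_\theta(\mathbf{x})\).

```python
import numpy as np

rng = np.random.default_rng(0)

def score(x):
    # Score of the target distribution N(0, 1): ∇_x log p(x) = -x.
    return -x

# Run 10,000 independent Langevin chains, started from a wide uniform prior.
x = rng.uniform(-5, 5, size=10_000)
eps = 0.01
for _ in range(1000):
    x = x + eps * score(x) + np.sqrt(2 * eps) * rng.standard_normal(x.shape)

print(abs(x.mean()) < 0.1, abs(x.var() - 1.0) < 0.1)  # samples look like N(0, 1)
```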
Naive score-based generative modeling and its pitfalls
So far, we’ve discussed how to train a score-based model with score matching, and then produce samples via Langevin dynamics. However, this naive approach has had limited success in practice—we’ll
talk about some pitfalls of score matching that received little attention in prior works.
Score-based generative modeling with score matching + Langevin dynamics.
The key challenge is the fact that the estimated score functions are inaccurate in low density regions, where few data points are available for computing the score matching objective. This is
expected as score matching minimizes the Fisher divergence
\[\mathbb{E}_{p(\mathbf{x})}[\| \nabla_\mathbf{x} \log p(\mathbf{x}) - \mathbf{s}_\theta(\mathbf{x}) \|_2^2] = \int p(\mathbf{x}) \| \nabla_\mathbf{x} \log p(\mathbf{x}) - \mathbf{s}_\theta(\mathbf
{x}) \|_2^2 \mathrm{d}\mathbf{x}.\]
Since the \(\ell_2\) differences between the true data score function and score-based model are weighted by \(p(\mathbf{x})\), they are largely ignored in low density regions where \(p(\mathbf{x})\)
is small. This behavior can lead to subpar results, as illustrated by the figure below:
Estimated scores are only accurate in high density regions.
When sampling with Langevin dynamics, our initial sample is highly likely to land in low density regions when data reside in a high dimensional space. Therefore, having an inaccurate score-based model will
derail Langevin dynamics from the very beginning of the procedure, preventing it from generating high quality samples that are representative of the data.
Score-based generative modeling with multiple noise perturbations
How can we bypass the difficulty of accurate score estimation in regions of low data density? Our solution is to perturb data points with noise and train score-based models on the noisy data points
instead. When the noise magnitude is sufficiently large, it can populate low data density regions to improve the accuracy of estimated scores. For example, here is what happens when we perturb a
mixture of two Gaussians with additional Gaussian noise.
Estimated scores are accurate everywhere for the noise-perturbed data distribution due to reduced low data density regions.
Yet another question remains: how do we choose an appropriate noise scale for the perturbation process? Larger noise can obviously cover more low density regions for better score estimation, but it
over-corrupts the data and alters it significantly from the original distribution. Smaller noise, on the other hand, causes less corruption of the original data distribution, but does not cover the
low density regions as well as we would like.
To achieve the best of both worlds, we use multiple scales of noise perturbations simultaneously . Suppose we always perturb the data with isotropic Gaussian noise, and let there be a total of \(L\)
increasing standard deviations \(\sigma_1 < \sigma_2 < \cdots < \sigma_L\). We first perturb the data distribution \(p(\mathbf{x})\) with each of the Gaussian noise \(\mathcal{N}(0, \sigma_i^2 I), i=
1,2,\cdots,L\) to obtain a noise-perturbed distribution
\[p_{\sigma_i}(\mathbf{x}) = \int p(\mathbf{y}) \mathcal{N}(\mathbf{x}; \mathbf{y}, \sigma_i^2 I) \mathrm{d} \mathbf{y}.\]
Note that we can easily draw samples from \(p_{\sigma_i}(\mathbf{x})\) by sampling \(\mathbf{x} \sim p(\mathbf{x})\) and computing \(\mathbf{x} + \sigma_i \mathbf{z}\), with \(\mathbf{z} \sim \
mathcal{N}(0, I)\).
Next, we estimate the score function of each noise-perturbed distribution, \(\nabla_\mathbf{x} \log p_{\sigma_i}(\mathbf{x})\), by training a Noise Conditional Score-Based Model \(\mathbf{s}_\theta(\
mathbf{x}, i)\) (also called a Noise Conditional Score Network, or NCSN, when parameterized with a neural network) with score matching, such that \(\mathbf{s}_\theta(\mathbf{x}, i) \approx \nabla_\
mathbf{x} \log p_{\sigma_i}(\mathbf{x})\) for all \(i= 1, 2, \cdots, L\).
We apply multiple scales of Gaussian noise to perturb the data distribution (first row), and jointly estimate the score functions for all of them (second row).
Perturbing an image with multiple scales of Gaussian noise.
The training objective for \(\mathbf{s}_\theta(\mathbf{x}, i)\) is a weighted sum of Fisher divergences for all noise scales. In particular, we use the objective below:
\[\sum_{i=1}^L \lambda(i) \mathbb{E}_{p_{\sigma_i}(\mathbf{x})}[\| \nabla_\mathbf{x} \log p_{\sigma_i}(\mathbf{x}) - \mathbf{s}_\theta(\mathbf{x}, i) \|_2^2], \label{ncsn_obj}\]
where \(\lambda(i) \in \mathbb{R}_{>0}\) is a positive weighting function, often chosen to be \(\lambda(i) = \sigma_i^2\). The objective \eqref{ncsn_obj} can be optimized with score matching, exactly
as in optimizing the naive (unconditional) score-based model \(\mathbf{s}_\theta(\mathbf{x})\).
After training our noise-conditional score-based model \(\mathbf{s}_\theta(\mathbf{x}, i)\), we can produce samples from it by running Langevin dynamics for \(i = L, L-1, \cdots, 1\) in sequence.
This method is called annealed Langevin dynamics (defined by Algorithm 1 in the original NCSN paper, and improved by follow-up work), since the noise scale \(\sigma_i\) decreases (anneals) gradually over time.
Annealed Langevin dynamics combine a sequence of Langevin chains with gradually decreasing noise scales.
Annealed Langevin dynamics for the Noise Conditional Score Network (NCSN) model (from ref.) trained on CelebA (left) and CIFAR-10 (right). We can start from unstructured noise, modify images
according to the scores, and generate nice samples. The method achieved state-of-the-art Inception score on CIFAR-10 at its time.
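A minimal numpy sketch of annealed Langevin dynamics follows, under a strong simplifying assumption: the noise-conditional score model is replaced by the analytic score of a toy 1D setup (data \(\mathcal{N}(0,1)\), so \(p_{\sigma} = \mathcal{N}(0, 1+\sigma^2)\)). The structure of the sampler is the same as in the real algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noise scales from large to small (a geometric progression, as recommended).
sigmas = np.geomspace(10.0, 0.01, num=10)

def score(x, sigma):
    # With data ~ N(0, 1), the perturbed distribution is p_sigma = N(0, 1 + sigma^2),
    # so its score is known analytically: -x / (1 + sigma^2).
    return -x / (1 + sigma ** 2)

x = rng.normal(0, np.sqrt(1 + sigmas[0] ** 2), size=10_000)  # initialize at the widest scale
for sigma in sigmas:
    eps = 0.05 * (1 + sigma ** 2)  # step size anneals with the noise level
    for _ in range(100):
        x = x + eps * score(x, sigma) + np.sqrt(2 * eps) * rng.standard_normal(x.shape)

print(abs(x.var() - 1.0) < 0.1)  # final samples approximate the clean data N(0, 1)
```

Each Langevin chain is initialized from the samples of the previous (noisier) scale, which is what lets the sampler start in well-covered regions and gradually home in on the data distribution.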
Here are some practical recommendations for tuning score-based generative models with multiple noise scales:
• Choose \(\sigma_1 < \sigma_2 < \cdots < \sigma_L\) as a geometric progression, with \(\sigma_1\) being sufficiently small and \(\sigma_L\) comparable to the maximum pairwise distance between all
training data points . \(L\) is typically on the order of hundreds or thousands.
• Parameterize the score-based model \(\mathbf{s}_\theta(\mathbf{x}, i)\) with U-Net skip connections .
• Apply exponential moving average on the weights of the score-based model when used at test time .
With such best practices, we are able to generate high quality image samples with comparable quality to GANs on various datasets, such as below:
Samples from the NCSNv2 model. From left to right: FFHQ 256x256, LSUN bedroom 128x128, LSUN tower 128x128, LSUN church_outdoor 96x96, and CelebA 64x64.
Score-based generative modeling with stochastic differential equations (SDEs)
As we already discussed, adding multiple noise scales is critical to the success of score-based generative models. By generalizing the number of noise scales to infinity, we obtain not only higher quality samples, but also other benefits, including exact log-likelihood computation and controllable generation for inverse problem solving.
In addition to this introduction, we have tutorials written in Google Colab to provide a step-by-step guide for training a toy model on MNIST. We also have more advanced code repositories that
provide full-fledged implementations for large scale applications.
• Tutorial of score-based generative modeling with SDEs in JAX + FLAX
• Load our pretrained checkpoints and play with sampling, likelihood computation, and controllable synthesis (JAX + FLAX)
• Tutorial of score-based generative modeling with SDEs in PyTorch
• Load our pretrained checkpoints and play with sampling, likelihood computation, and controllable synthesis (PyTorch)
• Code in JAX: Score SDE codebase in JAX + FLAX
• Code in PyTorch: Score SDE codebase in PyTorch
Perturbing data with an SDE
When the number of noise scales approaches infinity, we essentially perturb the data distribution with continuously growing levels of noise. In this case, the noise perturbation procedure is a
continuous-time stochastic process, as demonstrated below
Perturbing data to noise with a continuous-time stochastic process.
How can we represent a stochastic process in a concise way? Many stochastic processes (diffusion processes in particular) are solutions of stochastic differential equations (SDEs). In general, an SDE
possesses the following form:
\[ \mathrm{d}\mathbf{x} = \mathbf{f}(\mathbf{x}, t) \mathrm{d}t + g(t) \mathrm{d} \mathbf{w},\label{sde} \]
where \(\mathbf{f}(\cdot, t): \mathbb{R}^d \to \mathbb{R}^d\) is a vector-valued function called the drift coefficient, \(g(t)\in \mathbb{R}\) is a real-valued function called the diffusion
coefficient, \(\mathbf{w}\) denotes a standard Brownian motion, and \(\mathrm{d} \mathbf{w}\) can be viewed as infinitesimal white noise. The solution of a stochastic differential equation is a
continuous collection of random variables \(\{ \mathbf{x}(t) \}_{t\in [0, T]}\). These random variables trace stochastic trajectories as the time index \(t\) grows from the start time \(0\) to the
end time \(T\). Let \(p_t(\mathbf{x})\) denote the (marginal) probability density function of \(\mathbf{x}(t)\). Here \(t \in [0, T]\) is analogous to \(i = 1, 2, \cdots, L\) when we had a finite
number of noise scales, and \(p_t(\mathbf{x})\) is analogous to \(p_{\sigma_i}(\mathbf{x})\). Clearly, \(p_0(\mathbf{x}) = p(\mathbf{x})\) is the data distribution since no perturbation is applied to
data at \(t=0\). After perturbing \(p(\mathbf{x})\) with the stochastic process for a sufficiently long time \(T\), \(p_T(\mathbf{x})\) becomes close to a tractable noise distribution \(\pi(\mathbf
{x})\), called a prior distribution. We note that \(p_T(\mathbf{x})\) is analogous to \(p_{\sigma_L}(\mathbf{x})\) in the case of finite noise scales, which corresponds to applying the largest noise
perturbation \(\sigma_L\) to the data.
The SDE in \eqref{sde} is hand designed, similarly to how we hand-designed \(\sigma_1 < \sigma_2 < \cdots < \sigma_L\) in the case of finite noise scales. There are numerous ways to add noise
perturbations, and the choice of SDEs is not unique. For example, the following SDE
\[ \mathrm{d}\mathbf{x} = e^{t} \mathrm{d} \mathbf{w} \]
perturbs data with a Gaussian noise of mean zero and exponentially growing variance, which is analogous to perturbing data with \(\mathcal{N}(0, \sigma_1^2 I), \mathcal{N}(0, \sigma_2^2 I), \cdots, \
mathcal{N}(0, \sigma_L^2 I)\) when \(\sigma_1 < \sigma_2 < \cdots < \sigma_L\) is a geometric progression. Therefore, the SDE should be viewed as part of the model, much like \(\{\sigma_1, \sigma_2,
\cdots, \sigma_L\}\). In , we provide three SDEs that generally work well for images: the Variance Exploding SDE (VE SDE), the Variance Preserving SDE (VP SDE), and the sub-VP SDE.
Reversing the SDE for sample generation
Recall that with a finite number of noise scales, we can generate samples by reversing the perturbation process with annealed Langevin dynamics, i.e., sequentially sampling from each noise-perturbed
distribution using Langevin dynamics. For infinite noise scales, we can analogously reverse the perturbation process for sample generation by using the reverse SDE.
Generate data from noise by reversing the perturbation procedure.
Importantly, any SDE has a corresponding reverse SDE , whose closed form is given by
\[\mathrm{d}\mathbf{x} = [\mathbf{f}(\mathbf{x}, t) - g^2(t) \nabla_\mathbf{x} \log p_t(\mathbf{x})]\mathrm{d}t + g(t) \mathrm{d} \mathbf{w}. \label{rsde}\]
Here \(\mathrm{d} t\) represents a negative infinitesimal time step, since the SDE \eqref{rsde} needs to be solved backwards in time (from \(t=T\) to \(t = 0\)). In order to compute the reverse SDE,
we need to estimate \(\nabla_\mathbf{x} \log p_t(\mathbf{x})\), which is exactly the score function of \(p_t(\mathbf{x})\).
Solving a reverse SDE yields a score-based generative model. Transforming data to a simple noise distribution can be accomplished with an SDE. It can be reversed to generate samples from noise if we
know the score of the distribution at each intermediate time step.
Estimating the reverse SDE with score-based models and score matching
Solving the reverse SDE requires us to know the terminal distribution \(p_T(\mathbf{x})\), and the score function \(\nabla_\mathbf{x} \log p_t(\mathbf{x})\). By design, the former is close to the
prior distribution \(\pi(\mathbf{x})\) which is fully tractable. In order to estimate \(\nabla_\mathbf{x} \log p_t(\mathbf{x})\), we train a Time-Dependent Score-Based Model \(\mathbf{s}_\theta(\
mathbf{x}, t)\), such that \(\mathbf{s}_\theta(\mathbf{x}, t) \approx \nabla_\mathbf{x} \log p_t(\mathbf{x})\). This is analogous to the noise-conditional score-based model \(\mathbf{s}_\theta(\
mathbf{x}, i)\) used for finite noise scales, trained such that \(\mathbf{s}_\theta(\mathbf{x}, i) \approx \nabla_\mathbf{x} \log p_{\sigma_i}(\mathbf{x})\).
Our training objective for \(\mathbf{s}_\theta(\mathbf{x}, t)\) is a continuous weighted combination of Fisher divergences, given by
\[\mathbb{E}_{t \in \mathcal{U}(0, T)}\mathbb{E}_{p_t(\mathbf{x})}[\lambda(t) \| \nabla_\mathbf{x} \log p_t(\mathbf{x}) - \mathbf{s}_\theta(\mathbf{x}, t) \|_2^2],\]
where \(\mathcal{U}(0, T)\) denotes a uniform distribution over the time interval \([0, T]\), and \(\lambda: \mathbb{R} \to \mathbb{R}_{>0}\) is a positive weighting function. Typically we use \(\
lambda(t) \propto 1/ \mathbb{E}[\| \nabla_{\mathbf{x}(t)} \log p(\mathbf{x}(t) \mid \mathbf{x}(0))\|_2^2]\) to balance the magnitude of different score matching losses across time.
As before, our weighted combination of Fisher divergences can be efficiently optimized with score matching methods, such as denoising score matching and sliced score matching . Once our score-based
model \(\mathbf{s}_\theta(\mathbf{x}, t)\) is trained to optimality, we can plug it into the expression of the reverse SDE in \eqref{rsde} to obtain an estimated reverse SDE.
\[\mathrm{d}\mathbf{x} = [\mathbf{f}(\mathbf{x}, t) - g^2(t) \mathbf{s}_\theta(\mathbf{x}, t)]\mathrm{d}t + g(t) \mathrm{d} \mathbf{w}.\]
We can start with \(\mathbf{x}(T) \sim \pi\), and solve the above reverse SDE to obtain a sample \(\mathbf{x}(0)\). Let us denote the distribution of \(\mathbf{x}(0)\) obtained in such way as \(p_\
theta\). When the score-based model \(\mathbf{s}_\theta(\mathbf{x}, t)\) is well-trained, we have \(p_\theta \approx p_0\), in which case \(\mathbf{x}(0)\) is an approximate sample from the data
distribution \(p_0\).
When \(\lambda(t) = g^2(t)\), we have an important connection between our weighted combination of Fisher divergences and the KL divergence from \(p_0\) to \(p_\theta\) under some regularity
conditions :
\[\begin{multline} \operatorname{KL}(p_0(\mathbf{x})\|p_\theta(\mathbf{x})) \leq \frac{T}{2}\mathbb{E}_{t \in \mathcal{U}(0, T)}\mathbb{E}_{p_t(\mathbf{x})}[\lambda(t) \| \nabla_\mathbf{x} \log p_t(\
mathbf{x}) - \mathbf{s}_\theta(\mathbf{x}, t) \|_2^2] \\+ \operatorname{KL}(p_T \mathrel\| \pi). \end{multline}\]
Due to this special connection to the KL divergence and the equivalence between minimizing KL divergences and maximizing likelihood for model training, we call \(\lambda(t) = g(t)^2\) the likelihood
weighting function. Using this likelihood weighting function, we can train score-based generative models to achieve very high likelihoods, comparable or even superior to state-of-the-art
autoregressive models.
How to solve the reverse SDE
By solving the estimated reverse SDE with numerical SDE solvers, we can simulate the reverse stochastic process for sample generation. Perhaps the simplest numerical SDE solver is the Euler-Maruyama
method. When applied to our estimated reverse SDE, it discretizes the SDE using finite time steps and small Gaussian noise. Specifically, it chooses a small negative time step \(\Delta t \approx 0\),
initializes \(t \gets T\), and iterates the following procedure until \(t \approx 0\):
\[\begin{aligned} \Delta \mathbf{x} &\gets [\mathbf{f}(\mathbf{x}, t) - g^2(t) \mathbf{s}_\theta(\mathbf{x}, t)]\Delta t + g(t) \sqrt{\vert \Delta t\vert }\mathbf{z}_t \\ \mathbf{x} &\gets \mathbf{x}
+ \Delta \mathbf{x}\\ t &\gets t + \Delta t, \end{aligned}\]
Here \(\mathbf{z}_t \sim \mathcal{N}(0, I)\). The Euler-Maruyama method is qualitatively similar to Langevin dynamics—both update \(\mathbf{x}\) by following score functions perturbed with Gaussian noise.
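Here is a minimal Euler-Maruyama sketch of the reverse SDE on a toy 1D problem where the score is available in closed form (forward SDE \(\mathrm{d}x = \mathrm{d}w\) applied to \(\mathcal{N}(0,1)\) data, so \(p_t = \mathcal{N}(0, 1+t)\)); in an actual model the analytic score would be the trained \(\mathbf{s}_\theta(\mathbf{x}, t)\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward SDE: dx = dw, i.e. f = 0 and g = 1. With data ~ N(0, 1), the
# perturbed marginals are p_t = N(0, 1 + t), so the true score is analytic.
def score(x, t):
    return -x / (1 + t)

T, n_steps = 9.0, 2000
dt = -T / n_steps                                # negative time step: T -> 0
x = rng.normal(0, np.sqrt(1 + T), size=20_000)   # start from the prior N(0, 1 + T)
t = T
for _ in range(n_steps):
    # One Euler-Maruyama step of the reverse SDE: dx = [f - g^2 * score] dt + g dW.
    x = x + (0.0 - 1.0 * score(x, t)) * dt + np.sqrt(abs(dt)) * rng.standard_normal(x.shape)
    t += dt

print(abs(x.var() - 1.0) < 0.1)  # samples approximate the data distribution N(0, 1)
```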
Aside from the Euler-Maruyama method, other numerical SDE solvers can be directly employed to solve the reverse SDE for sample generation, including, for example, the Milstein method and stochastic Runge-Kutta methods. In , we provided a reverse diffusion solver similar to Euler-Maruyama, but more tailored for solving reverse-time SDEs. More recently, researchers have introduced adaptive step-size SDE solvers that can generate samples faster with better quality.
In addition, there are two special properties of our reverse SDE that allow for even more flexible sampling methods:
• We have an estimate of \(\nabla_\mathbf{x} \log p_t(\mathbf{x})\) via our time-dependent score-based model \(\mathbf{s}_\theta(\mathbf{x}, t)\).
• We only care about sampling from each marginal distribution \(p_t(\mathbf{x})\). Samples obtained at different time steps can have arbitrary correlations and do not have to form a particular
trajectory sampled from the reverse SDE.
As a consequence of these two properties, we can apply MCMC approaches to fine-tune the trajectories obtained from numerical SDE solvers. Specifically, we propose Predictor-Corrector samplers. The
predictor can be any numerical SDE solver that predicts \(\mathbf{x}(t + \Delta t) \sim p_{t+\Delta t}(\mathbf{x})\) from an existing sample \(\mathbf{x}(t) \sim p_t(\mathbf{x})\). The corrector can
be any MCMC procedure that solely relies on the score function, such as Langevin dynamics and Hamiltonian Monte Carlo.
At each step of the Predictor-Corrector sampler, we first use the predictor to choose a proper step size \(\Delta t < 0\), and then predict \(\mathbf{x}(t + \Delta t)\) based on the current sample \
(\mathbf{x}(t)\). Next, we run several corrector steps to improve the sample \(\mathbf{x}(t + \Delta t)\) according to our score-based model \(\mathbf{s}_\theta(\mathbf{x}, t + \Delta t)\), so that \
(\mathbf{x}(t + \Delta t)\) becomes a higher-quality sample from \(p_{t+\Delta t}(\mathbf{x})\).
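A Predictor-Corrector step can be sketched on the same toy 1D setup as before (forward SDE \(\mathrm{d}x = \mathrm{d}w\) with \(\mathcal{N}(0,1)\) data and its analytic score standing in for a trained model): an Euler-Maruyama predictor followed by a couple of Langevin corrector steps at the new time.

```python
import numpy as np

rng = np.random.default_rng(0)

def score(x, t):
    # Analytic score for the toy setup dx = dw with data ~ N(0, 1): p_t = N(0, 1 + t).
    return -x / (1 + t)

T, n_steps = 9.0, 500
dt = -T / n_steps
x = rng.normal(0, np.sqrt(1 + T), size=20_000)
t = T
for _ in range(n_steps):
    # Predictor: one Euler-Maruyama step of the reverse SDE (f = 0, g = 1).
    x = x - score(x, t) * dt + np.sqrt(abs(dt)) * rng.standard_normal(x.shape)
    t += dt
    # Corrector: a few Langevin MCMC steps targeting p_t at the new time.
    eps = 0.02 * (1 + t)
    for _ in range(2):
        x = x + eps * score(x, t) + np.sqrt(2 * eps) * rng.standard_normal(x.shape)

print(abs(x.var() - 1.0) < 0.1)  # corrected samples approximate N(0, 1)
```

Note that here the predictor uses far fewer steps than the plain reverse-SDE sampler; the corrector compensates by pulling samples back toward each intermediate marginal.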
With Predictor-Corrector methods and better architectures of score-based models, we can achieve state-of-the-art sample quality on CIFAR-10 (measured in FID and Inception scores ), outperforming the
best GAN model to date (StyleGAN2 + ADA ).
Method FID \(\downarrow\) Inception score \(\uparrow\)
StyleGAN2 + ADA 2.92 9.83
Ours 2.20 9.89
The sampling methods also scale to extremely high dimensional data. For example, they can successfully generate high fidelity images of resolution \(1024\times 1024\).
1024 x 1024 samples from a score-based model trained on the FFHQ dataset.
Some additional (uncurated) samples for other datasets (taken from this GitHub repo):
256 x 256 samples on LSUN bedroom.
256 x 256 samples on CelebA-HQ.
Probability flow ODE
Despite being capable of generating high-quality samples, samplers based on Langevin MCMC and SDE solvers do not provide a way to compute the exact log-likelihood of score-based generative models. Below, we introduce a sampler based on ordinary differential equations (ODEs) that allows for exact likelihood computation.
In , we show it is possible to convert any SDE into an ordinary differential equation (ODE) without changing its marginal distributions \(\{ p_t(\mathbf{x}) \}_{t \in [0, T]}\). Thus by solving this ODE, we can sample from the same distributions as the reverse SDE. The corresponding ODE of an SDE is named the probability flow ODE , given by
\[\mathrm{d} \mathbf{x} = \bigg[\mathbf{f}(\mathbf{x}, t) - \frac{1}{2}g^2(t) \nabla_\mathbf{x} \log p_t(\mathbf{x})\bigg] \mathrm{d}t. \label{prob_ode}\]
The following figure depicts trajectories of both SDEs and probability flow ODEs. Although ODE trajectories are noticeably smoother than SDE trajectories, they convert the same data distribution to
the same prior distribution and vice versa, sharing the same set of marginal distributions \(\{ p_t(\mathbf{x}) \}_{t \in [0, T]}\). In other words, trajectories obtained by solving the probability
flow ODE have the same marginal distributions as the SDE trajectories.
We can map data to a noise distribution (the prior) with an SDE, and reverse this SDE for generative modeling. We can also reverse the associated probability flow ODE, which yields a deterministic
process that samples from the same distribution as the SDE. Both the reverse-time SDE and probability flow ODE can be obtained by estimating score functions.
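The deterministic trajectories can be sketched on the same toy 1D setup (forward SDE \(\mathrm{d}x = \mathrm{d}w\) with \(\mathcal{N}(0,1)\) data, analytic score in place of \(\mathbf{s}_\theta\)): integrating the probability flow ODE backwards with plain Euler steps maps prior samples to data samples with no injected noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Probability flow ODE for the toy setup dx = dw with data ~ N(0, 1):
# dx/dt = f - 0.5 * g^2 * score = x / (2 * (1 + t)).
T, n_steps = 9.0, 2000
dt = -T / n_steps
x = rng.normal(0, np.sqrt(1 + T), size=10_000)   # prior samples at t = T
t = T
for _ in range(n_steps):
    x = x + x / (2 * (1 + t)) * dt               # deterministic: no noise is injected
    t += dt

# The exact solution is x(0) = x(T) / sqrt(1 + T), so the marginal is again N(0, 1).
print(abs(x.var() - 1.0) < 0.1)
```

Because the map is deterministic and invertible, each final sample is a smooth function of its initial prior sample, which is what the figure's smoother ODE trajectories depict.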
This probability flow ODE formulation has several unique advantages.
When \(\nabla_\mathbf{x} \log p_t(\mathbf{x})\) is replaced by its approximation \(\mathbf{s}_\theta(\mathbf{x}, t)\), the probability flow ODE becomes a special case of a neural ODE. In particular,
it is an example of continuous normalizing flows, since the probability flow ODE converts a data distribution \(p_0(\mathbf{x})\) to a prior noise distribution \(p_T(\mathbf{x})\) (since it shares
the same marginal distributions as the SDE) and is fully invertible.
As such, the probability flow ODE inherits all properties of neural ODEs or continuous normalizing flows, including exact log-likelihood computation. Specifically, we can leverage the instantaneous
change-of-variable formula (Theorem 1 in , Equation (4) in ) to compute the unknown data density \(p_0\) from the known prior density \(p_T\) with numerical ODE solvers.
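On the toy 1D setup used above, the instantaneous change-of-variable computation can be carried out end to end (drift \(x/(2(1+t))\) with divergence \(1/(2(1+t))\); this is my illustrative example, not the paper's ODE-solver implementation), and the recovered log-density matches the analytic \(\log \mathcal{N}(x_0; 0, 1)\).

```python
import numpy as np

# Instantaneous change of variables for the probability flow ODE of the toy
# setup dx = dw with data ~ N(0, 1): drift x / (2 * (1 + t)), divergence 1 / (2 * (1 + t)).
def log_likelihood(x0, T=9.0, n_steps=5000):
    dt = T / n_steps
    x, t, logdet = x0, 0.0, 0.0
    for _ in range(n_steps):
        x = x + x / (2 * (1 + t)) * dt   # integrate the ODE from data to prior
        logdet += dt / (2 * (1 + t))     # accumulate the divergence of the drift
        t += dt
    log_prior = -0.5 * np.log(2 * np.pi * (1 + T)) - x ** 2 / (2 * (1 + T))
    return log_prior + logdet            # log p_0(x0) = log p_T(x(T)) + ∫ div dt

x0 = 0.7
exact = -0.5 * np.log(2 * np.pi) - x0 ** 2 / 2   # analytic log N(x0; 0, 1)
print(abs(log_likelihood(x0) - exact) < 1e-2)
```

In high dimensions the divergence trace is typically estimated with the Skilling-Hutchinson estimator rather than computed exactly, but the bookkeeping is the same.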
In fact, our model achieves the state-of-the-art log-likelihoods on uniformly dequantized CIFAR-10 images (it is typical for normalizing flow models to convert discrete images to continuous ones by adding small uniform noise to them), even without maximum likelihood training.
Method Negative log-likelihood (bits/dim) \(\downarrow\)
RealNVP 3.49
iResNet 3.45
Glow 3.35
FFJORD 3.40
Flow++ 3.29
Ours 2.99
When training score-based models with the likelihood weighting we discussed before, and using variational dequantization to obtain likelihoods on discrete images, we can achieve comparable or even
superior likelihood to the state-of-the-art autoregressive models (all without any data augmentation) .
Method Negative log-likelihood (bits/dim) \(\downarrow\) on CIFAR-10 Negative log-likelihood (bits/dim) \(\downarrow\) on ImageNet 32x32
Sparse Transformer 2.80 -
Image Transformer 2.90 3.77
Ours 2.83 3.76
Controllable generation for inverse problem solving
Score-based generative models are particularly suitable for solving inverse problems. At their core, inverse problems are the same as Bayesian inference problems. Let \(\mathbf{x}\) and \(\mathbf{y}\) be
two random variables, and suppose we know the forward process of generating \(\mathbf{y}\) from \(\mathbf{x}\), represented by the transition probability distribution \(p(\mathbf{y} \mid \mathbf{x})
\). The inverse problem is to compute \(p(\mathbf{x} \mid \mathbf{y})\). From Bayes’ rule, we have \(p(\mathbf{x} \mid \mathbf{y}) = p(\mathbf{x}) p(\mathbf{y} \mid \mathbf{x}) / \int p(\mathbf{x}) p
(\mathbf{y} \mid \mathbf{x}) \mathrm{d} \mathbf{x}\). This expression can be greatly simplified by taking gradients with respect to \(\mathbf{x}\) on both sides, leading to the following Bayes’ rule
for score functions:
\[\nabla_\mathbf{x} \log p(\mathbf{x} \mid \mathbf{y}) = \nabla_\mathbf{x} \log p(\mathbf{x}) + \nabla_\mathbf{x} \log p(\mathbf{y} \mid \mathbf{x}). \label{inverse_problem}\]
Through score matching, we can train a model to estimate the score function of the unconditional data distribution, i.e., \(\mathbf{s}_\theta(\mathbf{x}) \approx \nabla_\mathbf{x} \log p(\mathbf{x})
\). This will allow us to easily compute the posterior score function \(\nabla_\mathbf{x} \log p(\mathbf{x} \mid \mathbf{y})\) from the known forward process \(p(\mathbf{y} \mid \mathbf{x})\) via
equation \eqref{inverse_problem}, and sample from it with Langevin-type sampling .
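Equation \eqref{inverse_problem} can be exercised on a fully Gaussian toy inverse problem (my own illustrative setup) where both the prior score and the likelihood score are known, so Langevin sampling of the posterior can be checked against the closed-form answer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inverse problem: prior x ~ N(0, 1) and forward process y = x + n with
# n ~ N(0, 0.25), so both scores on the right-hand side are known analytically.
sigma2, y = 0.25, 1.5

def posterior_score(x):
    # Bayes' rule for scores: ∇ log p(x|y) = ∇ log p(x) + ∇ log p(y|x).
    return -x + (y - x) / sigma2

x = rng.standard_normal(10_000)
eps = 0.005
for _ in range(3000):
    x = x + eps * posterior_score(x) + np.sqrt(2 * eps) * rng.standard_normal(x.shape)

# For Gaussians the posterior is N(y / (1 + sigma2), sigma2 / (1 + sigma2)).
print(abs(x.mean() - y / (1 + sigma2)) < 0.05)
```

The same recipe carries over to image inverse problems: the unconditional score comes from a trained model, while the likelihood score comes from the known measurement process.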
A recent work from UT Austin has demonstrated that score-based generative models can be applied to solving inverse problems in medical imaging, such as accelerating magnetic resonance imaging (MRI).
Concurrently in , we demonstrated superior performance of score-based generative models not only on accelerated MRI, but also sparse-view computed tomography (CT). We were able to achieve comparable
or even better performance than supervised or unrolled deep learning approaches, while being more robust to different measurement processes at test time.
Below we show some examples on solving inverse problems for computer vision.
Class-conditional generation with an unconditional time-dependent score-based model, and a pre-trained noise-conditional image classifier on CIFAR-10.
Image inpainting with a time-dependent score-based model trained on LSUN bedroom. The leftmost column is ground-truth. The second column shows masked images (y in our framework). The remaining columns show different inpainted images, generated by solving the conditional reverse-time SDE.
Image colorization with a time-dependent score-based model trained on LSUN church_outdoor and bedroom. The leftmost column is ground-truth. The second column shows gray-scale images (y in our framework). The remaining columns show different colorized images, generated by solving the conditional reverse-time SDE.
We can even colorize gray-scale portraits of famous historical figures (Abraham Lincoln) with a time-dependent score-based model trained on FFHQ. The image resolution is 1024 x 1024.
Connection to diffusion models and others
I started working on score-based generative modeling in 2019, when I was trying hard to make score matching scalable for training deep energy-based models on high-dimensional datasets. My first attempt at this led to the method of sliced score matching. Despite the scalability of sliced score matching for training energy-based models, I found to my surprise that Langevin sampling from those models fails to produce reasonable samples even on the MNIST dataset. I started investigating this issue and discovered three crucial improvements that can lead to extremely good samples: (1) perturbing data with multiple scales of noise, and training score-based models for each noise scale; (2) using a U-Net architecture (we used RefineNet since it is a modern version of U-Nets) for the score-based model; (3) applying Langevin MCMC to each noise scale and chaining them together. With those methods, I was able to obtain the state-of-the-art Inception Score on CIFAR-10 in (even better than the best GANs!), and generate high-fidelity image samples of resolution up to \(256\times 256\) in .
The idea of perturbing data with multiple scales of noise is by no means unique to score-based generative models though. It has been previously used in, for example, simulated annealing, annealed
importance sampling, diffusion probabilistic models, infusion training, and variational walkback for generative stochastic networks. Out of all these works, diffusion probabilistic modeling is
perhaps the closest to score-based generative modeling. Diffusion probabilistic models are hierarchical latent variable models first proposed by Jascha and his colleagues in 2015, which generate
samples by learning a variational decoder to reverse a discrete diffusion process that perturbs data to noise. Without awareness of this work, score-based generative modeling was proposed and
motivated independently from a very different perspective. Despite both perturbing data with multiple scales of noise, the connection between score-based generative modeling and diffusion
probabilistic modeling seemed superficial at that time, since the former is trained by score matching and sampled by Langevin dynamics, while the latter is trained by the evidence lower bound (ELBO)
and sampled with a learned decoder.
In 2020, Jonathan Ho and colleagues significantly improved the empirical performance of diffusion probabilistic models and first unveiled a deeper connection to score-based generative modeling. They
showed that the ELBO used for training diffusion probabilistic models is essentially equivalent to the weighted combination of score matching objectives used in score-based generative modeling.
Moreover, by parameterizing the decoder as a sequence of score-based models with a U-Net architecture, they demonstrated for the first time that diffusion probabilistic models can also generate high
quality image samples comparable or superior to GANs.
Inspired by their work, we further investigated the relationship between diffusion models and score-based generative models in an ICLR 2021 paper. We found that the sampling method of diffusion
probabilistic models can be integrated with annealed Langevin dynamics of score-based models to create a unified and more powerful sampler (the Predictor-Corrector sampler). By generalizing the
number of noise scales to infinity, we further proved that score-based generative models and diffusion probabilistic models can both be viewed as discretizations to stochastic differential equations
determined by score functions. This work bridges both score-based generative modeling and diffusion probabilistic modeling into a unified framework.
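To illustrate the unified view, here is a hypothetical toy Predictor-Corrector sampler for a variance-exploding SDE, again with an analytic Gaussian score standing in for a trained model; the noise schedule and step-size heuristics are simplified stand-ins for those in the paper.

```python
import numpy as np

def pc_sampler(score_fn, sigma_min=0.01, sigma_max=10.0, n_steps=500,
               n_corrector=1, n_samples=1000, seed=0):
    """Toy Predictor-Corrector sampler for the variance-exploding SDE.
    Predictor: reverse-time Euler-Maruyama step. Corrector: Langevin MCMC."""
    rng = np.random.default_rng(seed)
    sigma = lambda t: sigma_min * (sigma_max / sigma_min) ** t
    ts = np.linspace(1.0, 1e-3, n_steps)
    dt = ts[0] - ts[1]                 # positive step, since we go backward
    x = rng.normal(scale=sigma(1.0), size=n_samples)
    for t in ts:
        g2 = 2.0 * sigma(t) ** 2 * np.log(sigma_max / sigma_min)  # d[sigma^2]/dt
        # Predictor: one Euler-Maruyama step of the reverse-time SDE
        z = rng.normal(size=n_samples)
        x = x + g2 * score_fn(x, sigma(t)) * dt + np.sqrt(g2 * dt) * z
        # Corrector: Langevin MCMC at the current noise level
        for _ in range(n_corrector):
            eps = 0.01 * sigma(t) ** 2  # crude but stable step size
            z = rng.normal(size=n_samples)
            x = x + 0.5 * eps * score_fn(x, sigma(t)) + np.sqrt(eps) * z
    return x

# Toy setup: data ~ N(0, 1); the perturbed marginal N(0, 1 + sigma^2) has an
# analytic score, standing in for a trained score network.
score = lambda x, s: -x / (1.0 + s ** 2)
samples = pc_sampler(score)
```

The predictor transports samples along the reverse SDE, while the corrector nudges them toward the correct marginal at each noise level, which is exactly how the two sampling traditions combine.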
Collectively, these latest developments seem to indicate that both score-based generative modeling with multiple noise perturbations and diffusion probabilistic models are different perspectives of
the same model family, much like how wave mechanics and matrix mechanics are equivalent formulations of quantum mechanics in the history of physics. (It goes without saying that the significance of
score-based generative models/diffusion probabilistic models is in no way comparable to that of quantum mechanics.) The perspective of score matching and score-based models allows one to calculate
log-likelihoods exactly, solve inverse problems naturally, and is directly connected to energy-based models, Schrödinger bridges and optimal transport. The perspective of diffusion models is
naturally connected to VAEs, lossy compression, and can be directly incorporated with variational probabilistic inference. This blog post focuses on the first perspective, but I highly recommend
interested readers to learn about the alternative perspective of diffusion models as well (see a great blog by Lilian Weng).
Many recent works on score-based generative models or diffusion probabilistic models have been deeply influenced by knowledge from both sides of research (see a website curated by researchers at the
University of Oxford). Despite this deep connection between score-based generative models and diffusion models, it is hard to come up with an umbrella term for the model family that they both belong
to. Some colleagues in DeepMind propose to call them “Generative Diffusion Processes”. It remains to be seen if this will be adopted by the community in the future.
This blog post gives a detailed introduction to score-based generative models. We demonstrate that this new paradigm of generative modeling is able to produce high quality samples, compute exact
log-likelihoods, and perform controllable generation for inverse problem solving. It is a compilation of several papers we published in the past few years. Please visit them if you are interested in
more details.
For a list of works that have been influenced by score-based generative modeling, researchers at the University of Oxford have built a very useful (but necessarily incomplete) website.
There are two major challenges of score-based generative models. First, the sampling speed is slow since it involves a large number of Langevin-type iterations. Second, it is inconvenient to work
with discrete data distributions since scores are only defined on continuous distributions.
The first challenge can be partially solved by using numerical ODE solvers for the probability flow ODE with lower precision (a similar method, denoising diffusion implicit modeling, has also been
proposed). It is also possible to learn a direct mapping from the latent space of probability flow ODEs to the image space. However, all such methods to date result in worse sample quality.
The second challenge can be addressed by learning an autoencoder on discrete data and performing score-based generative modeling on its continuous latent space. Jascha's original work on diffusion
models also provides a discrete diffusion process for discrete data distributions, but its potential for large-scale applications has yet to be proven.
It is my conviction that these challenges will soon be solved with the joint efforts of the research community, and score-based generative models/diffusion-based models will become one of the most
useful tools for data generation, density estimation, inverse problem solving, and many other downstream tasks in machine learning.
|
|
Derivation of Average Speed of Gaseous Molecules
We can start by analyzing equation #15 from the Derivation of Ideal Gas Law below:
Because we want to derive an equation for the speed of gaseous molecules, the most important quantity that comes to mind is the speed v[x]. Therefore, the equation will be
Canceling out
Since R = N[a]k.
Finally, to calculate the average speed we find v[rms] (the root mean square speed).
M = N[a]m in the equation above is the mass of one mole of molecules (the molecular mass).
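The equation images in the original post did not survive; assuming the standard kinetic-theory starting point (i.e. that "equation #15" is the pressure relation $pV = \tfrac{1}{3}Nm\overline{v^2}$), the derivation can be reconstructed as:

```latex
pV = \tfrac{1}{3} N m \overline{v^2}, \qquad pV = NkT
\quad\Rightarrow\quad \overline{v^2} = \frac{3kT}{m}
= \frac{3 N_A k T}{N_A m} = \frac{3RT}{M},
\qquad
v_{\mathrm{rms}} = \sqrt{\overline{v^2}} = \sqrt{\frac{3RT}{M}}.
```

The middle step uses R = N[a]k and M = N[a]m, matching the substitutions described in the text above.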
* The gas constant R must be expressed in correct units for the situation in which it is being used. In the ideal gas equation where pv=nRT, it is logical to use units (L)(atm)/(mol)(K).
*In regard to speed, however, energy units must be taken into account. Therefore, it is more appropriate to convert it to (J)/(mol)(K).
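As a quick numerical check (an added illustration, not from the original post): with R in J/(mol·K) and the molar mass in kilograms, v[rms] for nitrogen at room temperature comes out near 500 m/s.

```python
import math

R = 8.314    # gas constant, J/(mol K)
T = 300.0    # temperature, K
M = 0.028    # molar mass of N2, kg/mol (kilograms, to keep units in SI)

v_rms = math.sqrt(3 * R * T / M)   # root mean square speed, m/s
print(round(v_rms, 1))
```

Using (L)(atm) units for R here would give a meaningless result, which is exactly the point of the note above.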
2 responses to “Derivation of Average Speed of Gaseous Molecules”
1. Please be careful with this. The RMS speed IS NOT equivalent to the average speed. In fact, the RMS speed will always be greater than the average speed.
2. it is rms velocity not average speed
|
|
Stata | FAQ: First and last occurrences in panel data
How can I identify first and last occurrences systematically in panel data?
Title First and last occurrences in panel data
Author Nicholas J. Cox, Durham University, UK
The problem
I have panel data (or longitudinal data or cross-sectional time-series data). I wish to identify systematically the first (or last) occurrences of a particular condition in each panel with an
indicator variable that is 1 when an observation is the first (or last) occurrence in a panel and 0 otherwise. How do I do this?
Example and analysis of the problem
Let us be clear about what the problem is. With panel data, we have one or more panels with identifiers and a time variable. Thus two panels might look like this:

 id   time   state
  1      1       0
  1      2       0
  1      3       0
  1      4       1
  1      5       1
  1      6       1
  1      7       1
  1      8       1
  1      9       1
  1     10       1
  2      1       0
  2      2       0
  2      3       1
  2      4       1
  2      5       1
  2      6       0
  2      7       0
  2      8       0
  2      9       0
  2     10       0
Here the variable state takes on two values, 0 and 1. In panel 1, state 1 first occurs at time 4, and the individual remains in that state. In panel 2, the individual is in state 1 only from time 3
to time 5. We need to be able to deal with both patterns, recognizing that the state concerned may be absorbing or just temporary.
This example with a binary or indicator variable is about as simple as you can imagine. We can solve this example easily, and encouragingly, we can reduce other examples to the same form. Let us take
one step at a time.
A closely parallel FAQ is FAQ: How can I drop spells of missing values at the beginning and end of panel data?. You might like to read that first, particularly since it spells out some details taken
for granted here.
Identifying the kind of solution needed
For this problem, there is a simple Stata solution, which will be revealed in a moment. More important, however, is how you can work out the solution to these and similar problems yourself.
Two elements are immediate. First, the panel structure is crucial here. For each panel, we must identify the first (or perhaps last) occurrence of a state, say, state == 1. To experienced Stata
users, this should suggest that you use by varlist:, here by id:. For more on the syntax, see by, check out sections in the manual on by:, or read the tutorial by Cox (2002).
What can seem strange at first sight is that absolutely no looping is needed here. Many Stata users, especially if they have experience using loops in other languages, tend to think about problems
like this one in terms of looping over the panels and then over the times within each panel, but simpler and faster code avoids that. More precisely, code can be found that does the looping
implicitly, with the details managed for you.
Second, sort order within panels is also crucial. We must work through values, respecting the order of the time variable.
Although we talk about panel data, we nowhere assume that you have declared your dataset as panel data to Stata by using tsset. That is often a good idea and does no harm here, but it is irrelevant
to what follows.
Particular solutions: First occurrences in panels
Here the cumulative sum sum(state) will be 0 before the first occurrence, 1 at the first occurrence, and 1 or more thereafter.
In the first panel, sum(state) would be 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, and it is characteristic of absorbing states (those that once entered are never left) that are coded by 1 that sum(state) is 1
precisely once, on the first occurrence of the state in any panel. This leads us to the solution for absorbing states coded by 1:
. by id (time), sort: gen byte first = sum(state) == 1
However, this solution is not general enough to cope with nonabsorbing states. If, for example, state is 1 at time 4 but 0 thereafter, sum(state) will be 1 for all times from 4 onward. A more general
solution is thus
. by id (time), sort: gen noccur = sum(state)
. by id: gen byte first = noccur == 1 & noccur[_n - 1] != noccur
The condition noccur[_n − 1] != noccur catches those cases in which the previous value noccur[_n − 1] is 0, as well as the case when noccur equals 1 for the first observation, _n == 1, as noccur[0]
is always treated as missing.
We can also do that using one variable, not two, and one statement, not two:
. by id (time), sort: gen byte first = sum(state) == 1 & sum(state[_n - 1]) == 0
This also works even when the first occurrence of state is also the first observation in the panel. Then sum(state[_n − 1]) is 0 because state[0] is evaluated as missing and sum(.) is 0. Otherwise
put, the cumulative sum function sum() is hard-wired to ignore missings. More precisely, it is always initialized as 0, and it always adds 0 when fed a missing value.
Now let us rewrite this in a more long-winded way that is numerically equivalent but shows a more general solution in which, no matter what the state whose first occurrence we are seeking, we can
recognize it as the first occurrence of a condition numerically evaluated as 1.
In this case, if state is 1, then state == 1 is true, and that condition is evaluated numerically as 1. Similarly, if state is 0, then state == 1 is false and that condition is evaluated numerically
as 0. So, wherever we have an indicator variable, we would get the same results numerically by writing down an equivalent true-or-false condition for Stata to evaluate as 1 or 0. Conversely, to get
the benefits of an indicator variable, all we need to do is write down a true-or-false condition. For more on these principles, see FAQ: What is true and false in Stata?.
. by id (time), sort: gen byte first = sum(state == 1) == 1 & sum(state[_n - 1] == 1) == 0
sum(state[_n − 1] == 1) is 0 even in the awkward case of the first observation. If _n is 1, then state[0] is evaluated as missing and is not equal to 1; thus, state[_n - 1] == 1 is false or
numerically 0, so sum(0) is 0.
When was the first occasion on which the frog turned into a prince?
. by id (time), sort: gen byte first = sum(state == "prince") == 1 & sum(state[_n - 1] == "prince") == 0
When was the first occasion on which the value was at least 42?
. by id (time), sort: gen byte first = sum(inrange(value, 42,.)) == 1 & sum(inrange(value[_n - 1],42,.)) == 0
In this last case, the simpler condition value >= 42 would include value == ., but that should usually be avoided, which is why inrange() is used instead.
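For comparison outside Stata, the same cumulative-sum logic can be sketched in Python with pandas (a hypothetical translation using the example panels above, not part of the original FAQ):

```python
import pandas as pd

df = pd.DataFrame({
    "id":    [1] * 5 + [2] * 5,
    "time":  [1, 2, 3, 4, 5] * 2,
    "state": [0, 0, 0, 1, 1,
              0, 0, 1, 1, 0],
})
df = df.sort_values(["id", "time"])   # the analogue of by id (time), sort

# Running count of occurrences within each panel, like sum(state)
noccur = df.groupby("id")["state"].cumsum()
# First occurrence: the running count hits 1 exactly where state is 1
df["first"] = ((noccur == 1) & (df["state"] == 1)).astype(int)
```

As in the Stata solution, the indicator is 1 at time 4 in panel 1 and at time 3 in panel 2, and the trick extends to any true-or-false condition in place of state == 1.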
Particular solutions: Last occurrences in panels
To consider values at the end of each panel, we need to start at the end and work backward. By far, the easiest way to do this is just to reverse the sort order within each panel and then apply the
same logic as before.
You could change the sort order this way:
. gen ntime = -time
. by id (ntime), sort: whatever
. drop ntime
Another way is to do it with gsort:
. gsort id -time
. by id: whatever
Either way, you usually want to clean up the sort order again before other work (with a plain tsset if you did tsset earlier), by typing
. sort id time
or by typing
. tsset id time
Particular solutions: Using egen
A different approach is to use egen, which gives a pleasantly direct solution. The first and last times at which state == 1 are given by
. by id, sort: egen firsttime = min(cond(state == 1, time, .))
. by id: egen lasttime = max(cond(state == 1, time, .))
The key to this approach is to realize that egen, min() and egen, max() can take expressions, here using the cond() function that yields either time when state == 1 or missing otherwise. We are
exploiting the fact that Stata ignores missings in calculating extremes. Thus the first and last times reported for a panel will be missing only if the condition referred to, here state == 1, is
never observed for that panel. Naturally any other true-or-false condition may be used in place of state == 1.
Given first and last times, indicator variables are at hand:
. gen byte first = time == firsttime
. gen byte last = time == lasttime
A tacit assumption here is that time takes distinct values within each panel, which seems likely and is essential if tsset is to be applied.
If the first and last times themselves, rather than associated indicator variables, are of most interest, then this approach is doubly attractive. If somehow you had indicators and not times, then
. by id: egen firsttime = total(first * time)
. by id: egen lasttime = total(last * time)
yields the times. Each total is based on one instance in which an indicator variable is 1 and other instances in which it is 0, so the result is just the first time or last time multiplied by one,
plus various zeros, or simply the first or last time.
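The egen min/max-of-masked-values trick also translates directly; here is a hypothetical pandas rendering (again an illustration, not part of the original FAQ):

```python
import pandas as pd

df = pd.DataFrame({
    "id":    [1] * 5 + [2] * 5,
    "time":  [1, 2, 3, 4, 5] * 2,
    "state": [0, 0, 0, 1, 1,
              0, 0, 1, 1, 0],
})

# cond(state == 1, time, .) becomes masking time with NaN where the
# condition fails; groupwise min/max then ignore the NaNs, as in Stata
masked = df["time"].where(df["state"] == 1)
df["firsttime"] = masked.groupby(df["id"]).transform("min")
df["lasttime"]  = masked.groupby(df["id"]).transform("max")
df["first"] = (df["time"] == df["firsttime"]).astype(int)
df["last"]  = (df["time"] == df["lasttime"]).astype(int)
```

As in Stata, a panel where the condition never holds simply ends up with missing firsttime and lasttime, and both indicators stay 0 throughout.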
For further discussions of tricks in this territory, see Cox (2011).
Cox, N. J. 2002.
Speaking Stata: How to move step by: step. Stata Journal 2: 86–102.
Cox, N. J. 2011.
Speaking Stata: Compared with ... Stata Journal 11: 305–314.
|
|
We present a general theory of the proximity effect in junctions between diffusive normal metals (DN) and unconventional superconductors in the framework of the quasiclassical Green's function
formalism. Various possible symmetry classes in a superconductor are considered which are consistent with the Pauli principle: even-frequency spin-singlet even-parity (ESE) state, even-frequency
spin-triplet odd-parity (ETO) state, odd-frequency spin-triplet even-parity (OTE) state and odd-frequency spin-singlet odd-parity (OSO) state. For each of the above four cases symmetry and spectral
properties of the induced pair amplitude in the DN are determined. It is shown that the pair amplitude in a DN belongs respectively to an ESE, OTE, OTE and ESE pairing state.
Hakucho satellite operations and the problems that emerged from the neutron star observations are illustrated. X-ray pulsars and bursters are reviewed. Future satellite programs are outlined
Tunneling conductance in ferromagnet / unconventional superconductor junctions is studied theoretically as a function of temperature and spin polarization in ferromagnets. In d-wave superconductor
junctions, the existence of a zero-energy Andreev bound state drastically affects the temperature-dependence of the zero-bias conductance (ZBC). In p-wave triplet superconductor junctions, numerical
results show a wide variety in temperature-dependence of the ZBC depending on the direction of the magnetic moment in ferromagnets and the pairing symmetry in superconductors such as $p_{x}$, $p_{y}$
and $p_{x}+ip_{y}$-wave pair potential. The last one is a promising symmetry of Sr$_2$RuO$_4$. From these characteristic features in the conductance, we may obtain the information about the degree of
spin-polarization in ferromagnets and the direction of the $d$-vector in triplet superconductors
The similarity of the mathematical description of random-field spin systems to orthogonal frequency-division multiplexing (OFDM) scheme for wireless communication is exploited in an
intercarrier-interference (ICI) canceller used in the demodulation of OFDM. The translational symmetry in the Fourier domain generically concentrates the major contribution of ICI from each
subcarrier in the subcarrier's neighborhood. This observation in conjunction with mean field approach leads to a development of an ICI canceller whose necessary cost of computation scales linearly
with respect to the number of subcarriers. It is also shown that the dynamics of the mean-field canceller are well captured by a discrete map of a single macroscopic variable, without taking the
spatial and time correlations of estimated variables into account.
|
|
Creeping Zeros And Economic Armageddon
Regular readers are familiar with my characterization and observations concerning the Corporate media propaganda machine. This oligopoly disseminates its “news” as a single, monolithic herd. This, by
itself, is conclusive proof that we are dealing with propaganda (and brainwashing), as any legitimate “free press” always exhibits considerable diversity of opinion.
However, one important facet of this brainwashing/conditioning requires no deceptions or distortions of any kind in order to achieve the desired effect: apathy and confusion. As an inevitable
consequence of “inflation” (i.e. the relentless/excessive money-printing of the One Bank); the numbers we use in discussing the parameters of all our economies are increasing, and at an exponential rate.
This phenomenon of arithmetic is known as “creeping zeros”. But what is important for this discussion is not the arithmetic, but the inevitable psychological ramifications of creeping zeros.
Specifically, as the numbers increase in size at an exponential rate; our understanding of these numbers decreases – proportionately.
The implication of this is hopefully obvious to most readers: the bigger the crimes of the One Bank, and the faster it piles one mega-crime atop another, the faster we lose the capacity to understand
the magnitude of these crimes. When it commits crimes involving numbers which are literally beyond human comprehension, it becomes logically impossible to truly understand these mega-crimes,
It is necessary to inject some hard numbers here to facilitate understanding. The largest number which we puny humans can fully understand is (roughly) one million. It requires no academic
credentials to make such an assertion, because the basis for this conclusion is tautological in nature.
Generally speaking (and as simple, common sense), we can only “understand” what we are capable of perceiving with our senses. You cannot explain “colour” to someone who has been blind all of their
lives. You cannot explain “music” to someone who has been deaf all of their lives. They lack the sensory capacity to genuinely understand such concepts.
Similarly; while we can be told what an “atom” is, we cannot truly grasp the nature of these particles – save for the very few who can observe them (somewhat) via the aid of an electron microscope.
This lack of comprehension also applies to phenomena in the universe which are too large for our comprehension.
It is accepted universally that the universe itself is beyond our intellectual grasp. When we look into the night sky; our (unaided) vision extends only out into a small portion of a single galaxy.
And even boosted to the greatest extent which our technology allows; we still know our vision our encompasses a microscopic portion of infinity.
The question here then becomes: what is the demarcation point of our comprehension; at what point do numbers themselves become too large for us to understand? The answer to that question is “one million.”
For city-dwellers who have access to elevated viewpoints; we can see one million – the populations of our own cities. We can’t actually “see” the one million (puny) inhabitants themselves, but we can
see their dwellings, from which it is a very small extrapolation of logic to comprehend the number of inhabitants. Anything beyond one million is too large for us to truly understand, because it is
totally beyond our sensory capacity.
We can have a very crude understanding of one billion, as it is the concept of quantum immediately above the largest number we genuinely understand (and the numerical representation of the entire,
human population). But once we reach “trillions”; such numbers become totally outside of any/all mathematical comprehension of virtually our entire species.
The natural question which would occur in the minds of many readers is: what makes me competent to engage in an analysis of this nature – as an authority – when I am also a puny human, capable of
fully understanding no number larger than a million?
The answer is that (unlike my audience); I know what I do not know, and have been aware of the concept of creeping zeros for decades. Knowing what I do not know; I opt for the next-best logical
alternative – I use proxies.
One billion is a thousand times larger than the largest number of which I am capable of understanding (one million). Having above-average aptitude in mathematics; I am confident that I fully
understand one thousand and one million, and thus have a reasonable conceptual grasp of one billion.
One trillion is a million times larger than the largest number of which I am capable of understanding. More to the point; it is the largest number of which I am capable of understanding multiplied by
the largest number of which I am capable of understanding. It is a concept which only those with the most thorough grasp of arithmetic are capable of understanding (at all).
It is for this reason that I never use a calculator, for anything. It is an intellectual crutch which has sapped the mathematical competence of the entire, industrialized world – and to literally a
dangerous extent.
I’m old enough to remember when all sales clerks possessed above-average aptitude in arithmetic – because they were forced (by our mechanical “cash registers”) to perform calculations on virtually
every transaction: the “change” which customers received after payment. But go into any store today, and if (for some reason) the clerk is forced to perform even a simple calculation himself/herself;
you will most likely see someone looking as helpless as a child.
With few exceptions; we have lost the understanding of arithmetic which used to be as eternal a skill as reading and writing (the “three R’s”). But thanks to the calculator, that aptitude has been
destroyed. How can individuals who require a calculator to perform arithmetic functions involving tiny numbers ever understand “1,000,000 X 1,000,000”? The answer is that they can’t.
Naturally; this mathematical/conceptual illiteracy extends to the people running our governments. Indeed, it is one of the primary reasons why the metaphor of “lemmings” appears in my commentaries
again and again. In our societies today; we literally have “the Blind leading the Blind.”
The people putting together our national “budgets” each year, and running our finances on a day-to-day basis have utterly no understanding of the numbers with which they are dealing. As our economies
are run into the ground by these Lemming Governments; most of our representatives are not actually corrupt – they are simply completely ignorant (and thus totally incompetent).
Dealing with such children; it has literally been “child’s play” for the One Bank as it attached its Yoke of Debt around their throats (our throats) with nary a whimper of protest. Our “leaders” (for
the most part) have had absolutely no understanding of what was being done to them (and us), because they don’t understand the numbers.
Similarly; what has left me more aghast than reporting on “LIBOR fraud” (nearly two years ago) is the lack of reaction to the largest mega-crime in history. Possessing a crude grasp of the concept of
“trillions” myself; I simply assumed that reporting a crime with a magnitude of $500 trillion would instantly/utterly horrify all readers – and then spread like wild-fire throughout the general population.
It did not.
I have pointed out -- on several occasions -- that this one crime (by dollar value) was larger than all of the other crimes in human history, combined. I assumed that this, surely, would instantly
shock/horrify all readers.
It did not.
Clearly what I have been endeavouring to do here (with few exceptions) is like trying to explain music to the deaf. Why is LIBOR-fraud continuing, even after it has been more-or-less completely
exposed? Because now even the crimes of the One Bank have become “too big to fail”.
The same collections of fools and traitors in our governments who refuse to explicitly acknowledge that they have no understanding (at all) of “one trillion” have all implicitly acknowledged that
“$500 trillion” is a number which has them all utterly terrified – and thus they refuse to touch this mega-crime. It is likely for similar reasons that the $1+ quadrillion “derivatives market” (and
time-bomb) remains virtually unregulated, and nearly completely ignored by these same politicians.
Why do we not allow children to play with matches? Because we realize that children are not capable of understanding the dangerous consequences of what they are doing. We will all soon discover that
allowing our politicians to play with trillions (and allowing the One Bank to scam us by the trillions) is a folly of infinitely greater magnitude.
Readers have previously seen charts and analyses showing in absolutely unequivocal terms how/why our economies are heading toward an unprecedented “Armageddon event”, specifically, a
hyperinflationary depression. But for perhaps the first time they can now also see and comprehend that the “leaders” dragging us toward this fate are all wearing blindfolds.
Jeff Nielson
|
|