Negation and the Duncker Diagram
Sometimes it is easier NOT to solve the problem - and it is always useful to ask this question, as it will force your team to refrain from "overprocessing" and to eliminate "gold plating" the solution,
which drastically increases the probability of failure (or reduces the likelihood of success, whichever you prefer).
This is, of course, the mighty Duncker Diagram, which I first learned about in Strategies for Creative Problem Solving (Fogler, LeBlanc, Rizzo) - I highly recommend this book.
In short, it entails asking the question - "Why do we need to solve this at all?" In other words, try to negate the idea that we need to solve the problem at all. Here is the concept in 60 seconds from MIT
Professor and Robotics Pioneer Rodney Brooks (from the movie Fast, Cheap, and Out of Control):
At Deflategate Movie with Professor H. Scott Fogler (June 15, 2018) at the beautiful Michigan Theater (Ann Arbor, MI) | {"url":"https://jfcarrie.mit.edu/blog/negation-and-duncker-diagram","timestamp":"2024-11-09T07:11:24Z","content_type":"text/html","content_length":"69218","record_id":"<urn:uuid:8ec20846-2094-4006-8a86-a664130de6ff>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00428.warc.gz"} |
7.1: Drawing Crayons (5 minutes)
The mathematical purpose of this activity is to show students that some events can depend on one another. It is not important that students get a single, correct solution for the second problem.
Student Facing
A bag contains 1 crayon of each color: red, orange, yellow, green, blue, pink, maroon, and purple.
1. A person chooses a crayon at random out of the bag, uses it for a bit, then puts it back in the bag. A second person comes to get a crayon chosen at random out of the bag. What is the probability
the second person gets the yellow crayon?
2. A person chooses a crayon at random out of the bag and walks off to use it. A second person comes to get a crayon chosen at random out of the bag. What is the probability the second person gets
the yellow crayon?
Activity Synthesis
The goal of this discussion is to introduce the terms independent events and dependent events in the context of the crayon problem. Ask students “How are the questions different?” (In the first
question it did not matter what crayon was used first, but in the second question it did matter.) Tell students that independent events are two events from the same experiment for which the
probability of one event is not affected by whether the other event occurs or not. Dependent events are two events from the same experiment for which the probability of one event relies on the result
of the other event.
Here are some questions for discussion.
• “Which of the two events is independent?” (Choosing the first crayon, then replacing it and choosing the second crayon.)
• “Which of the two events is dependent?” (Choosing the first crayon, then without replacing it choosing the second crayon.)
• “When the crayon was not replaced, the two events are dependent. Describe what it means for the events to be dependent using the crayon context.” (It means that you can’t figure out the
probability of what color crayon will be second without knowing what color crayon was chosen first.)
• “When the crayon was replaced, the two events are independent. Describe what it means for the events to be independent using the crayon context.” (It means that you can find the probability of
either event separately. For example, in the first question, when you pick a crayon from the bag the second time it does not matter what was picked the first time because you know that all eight crayons are in the bag.)
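For reference (this computation is not part of the published student materials), the probabilities can be worked out directly. With replacement, the second draw is made from the full bag of 8 crayons, so \(P(\text{yellow second}) = \frac{1}{8}\) regardless of what happened first. Without replacement, the chance on the second draw depends on the first: if the first person took the yellow crayon (probability \(\frac{1}{8}\)), the second person's chance is 0; otherwise it is \(\frac{1}{7}\). Overall this still gives \(\frac{7}{8} \cdot \frac{1}{7} = \frac{1}{8}\), but the conditional probability changes with the first draw, which is exactly what makes the events dependent.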
7.2: Choosing Doors (30 minutes)
In this lesson, students explore variations of the classic "Monty Hall" problem to understand what it means for events to be dependent or independent. This exploration introduces a context that is
not intuitive to many people prior to a deeper study of conditional probability. In these examples, one event (winning the prize) depends on the outcome of another event (choosing to stay or switch
from the chosen door).
Arrange students in groups of 2. Remind students that it is important to play the games fairly. The host should not try to help their partner nor should they try to trick them. As with most data
collection, it is important to not try to influence the results.
Demonstrate the games by playing the role of host while a student is the contestant. Allow students to play the first game for 10 minutes before telling students to move on to the questions and
second game.
Engagement: Provide Access by Recruiting Interest. Begin with a small-group or whole-class demonstration of how to play the game. Check for understanding by inviting students to rephrase directions
in their own words.
Supports accessibility for: Memory; Conceptual processing
Student Facing
1. On a game show, a contestant is presented with 3 doors. One of the doors hides a prize and the other two doors have nothing behind them.
□ The contestant chooses one of the doors by number.
□ The host, knowing where the prize is, reveals one of the empty doors that the contestant did not choose.
□ The host then offers the contestant a chance to stay with the door they originally chose or to switch to the remaining door.
□ The final chosen door is opened to reveal whether the contestant has won the prize.
Choose one partner to play the role of the host and the other to be the contestant. The host should think of a number: 1, 2, or 3 to represent the prize door. Play the game keeping track of
whether the contestant stayed with their original door or switched and whether the contestant won or lost.
Switch roles so that the other person is the host and play again. Continue playing the game until the teacher tells you to stop. Use the table to record your results.
│ │stay│switch │total│
│ win │ │ │ │
│lose │ │ │ │
│total│ │ │ │
1. Based on your table, if a contestant decides they will choose to stay with their original choice, what is the probability they will win the game?
2. Based on your table, if a contestant decides they will choose to switch their choice, what is the probability they will win the game?
3. Are the two probabilities the same?
2. In another version of the game, the host forgets which door hides the prize. The game is played in a similar way, but sometimes the host reveals the prize and the game immediately ends with the
player losing, since it does not matter whether the contestant stays or switches.
Choose one partner to play the role of the host and the other to be the contestant. The contestant should choose a number: 1, 2, or 3. The host should choose one of the other two numbers. The
contestant can choose to stay with their original number or switch to the last number.
After following these steps, roll the number cube to see which door contains the prize:
□ Rolling 1 or 4 means the prize was behind door 1.
□ Rolling 2 or 5 means the prize was behind door 2.
□ Rolling 3 or 6 means the prize was behind door 3.
Play the game keeping track of whether the contestant stayed with their original door or switched and whether the contestant won or lost.
Switch roles so that the other person is the host and play again. Continue playing the game until the teacher tells you to stop. Use the table to record your results.
│ │stay│switch │total│
│ win │ │ │ │
│lose │ │ │ │
│total│ │ │ │
1. Based on your table, if a contestant decides they will choose to stay with their original choice, what is the probability they will win the game?
2. Based on your table, if a contestant decides they will choose to switch from their original choice, what is the probability they will win the game?
3. Are the two probabilities the same?
Student Facing
Are you ready for more?
In another version of the game, the contestant is presented with 5 doors. One of the doors hides a prize and the other four doors have nothing behind them.
• The contestant chooses 3 doors by number.
• The host, knowing where the prize is, reveals 3 of the doors that have nothing behind them: two of the empty doors that the contestant has chosen and one of the other doors that is empty.
• The host then offers the contestant a chance to stay with the door they originally chose or to switch to the remaining door.
• The final chosen door is opened to reveal whether the contestant has won the prize.
Choose one partner to play the role of the host and the other to be the contestant. The host should think of a number: 1, 2, 3, 4, or 5 to represent the prize door. Play the game keeping track of
whether the contestant stayed with their original door or switched and whether the contestant won or lost.
Switch roles so that the other person is the host and play again. Continue playing the game until the teacher tells you to stop. Use the table to record your results.
│ │stay│switch │total│
│ win │ │ │ │
│lose │ │ │ │
│total│ │ │ │
1. Based on your table, if a contestant decides they will choose to stay with their original choice, what is the probability they will win the game?
2. Based on your table, if a contestant decides they will choose to switch from their original choice, what is the probability they will win the game?
3. Are the two probabilities the same?
Anticipated Misconceptions
Some students may not understand why the probability of winning when choosing to switch doors is \(\frac{2}{3}\). Prompt students to look at the three possibilities of how the winning (W) and losing doors (L) can be arranged: WLL, LWL, and LLW.
Emphasize that choosing to stay means that the probability of winning is equal to the probability that your first choice is the winning door. The probability that your first choice is the winning
door is \(\frac{1}{3}\). This also means that the probability that your first choice is a losing door is \(\frac{2}{3}\). If you start with a losing door that means that the remaining two doors are
L and W. If you choose to switch, the host eliminates the losing door, and only the winning door remains. This means that if you start on a losing door and switch then you are guaranteed to win.
Therefore, the probability of winning when choosing to switch is \(\frac{2}{3}\) (the probability that your first choice is a losing door).
Activity Synthesis
The purpose of this discussion is for students to gain a deeper understanding of what it means for events to be dependent or independent.
Poll the class to collect results, and display a table that shows all of the results for each game. Tell students, “In the first game, the event of a win is dependent on the event of switching since
the probability of winning is different depending on whether the contestant stays or switches. In the second game, the event of a win is independent of the event of switching since the probability of
winning is the same whether the contestant stays or switches.“
Here are some questions for discussion.
• “In the first game, is choosing which door to open at the start of the game dependent on anything?” (It is not dependent on anything.)
• “In the first game, is choosing to switch dependent on anything? ” (It depends on the fact that you know that the host is going to eliminate a door that is not a winner rather than a door chosen
at random.)
• “In the first game, what are the different outcomes?” (There are four outcomes: that you win by staying (\(\frac{1}{3}\) of the time), lose by staying (\(\frac{2}{3}\) of the time), win by switching (\(\frac{2}{3}\) of the time), or lose by switching (\(\frac{1}{3}\) of the time).)
• “Does it help you to think about this activity by making the decision to stay or switch before you play the game?” (Yes, that really helps me because we are talking about probability and the
decision to switch kept getting in the way of me thinking about the probabilities. It really helps me to look at it as two different situations.)
• “In the second game, is choosing to switch dependent on anything?” (It is not dependent on anything because the host is eliminating a door at random.)
• “What are the possible outcomes for the second game?” (There are four outcomes: that you win by staying (\(\frac{1}{3}\) of the time), lose by staying (\(\frac{2}{3}\) of the time), win by switching (\(\frac{1}{3}\) of the time), or lose by switching (\(\frac{2}{3}\) of the time).)
One way to explain the dependency for the first game is to imagine the decisions being made in the other order. If a player decides they are going to stay regardless of what happens, then they must
choose the correct door right at the beginning. There is a \(\frac{1}{3}\) chance of choosing the right door.
If a player decides they are going to switch regardless of what happens, then they must choose one of the doors that have nothing behind them in order to win. If they choose either one, the host will
reveal the other one and the player can switch to the correct door. Since the goal is to initially choose a door with nothing behind it, there is a \(\frac{2}{3}\) chance of winning the game.
A second way to understand the reasoning is to imagine a larger game with 100 doors to choose from. After the contestant makes their choice, the host reveals 98 empty doors leaving only the one the
contestant chose and one other door closed, similar to the 2 left in the first game. Now consider whether the initial door that was chosen is more likely to contain the prize or the single other door
the host left untouched.
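If it helps to check this reasoning empirically, a short simulation of the first game can be run outside of class. The sketch below assumes the classic setup described above (the host knows the prize location and always opens an empty, unchosen door); it is an illustration only and is not part of the lesson materials.

import random

def play(switch, doors=3, trials=100_000):
    wins = 0
    for _ in range(trials):
        prize = random.randrange(doors)
        choice = random.randrange(doors)
        # The host opens an empty door that the contestant did not choose.
        opened = random.choice([d for d in range(doors) if d not in (choice, prize)])
        if switch:
            # Switch to the single remaining unopened door.
            choice = next(d for d in range(doors) if d not in (choice, opened))
        wins += (choice == prize)
    return wins / trials

print("stay:  ", play(switch=False))  # close to 1/3
print("switch:", play(switch=True))   # close to 2/3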
Reading, Writing, Speaking: MLR3 Clarify, Critique, Correct. After the class has been polled and their data has been displayed in a table, present an incorrect interpretation of the likelihood of
winning the game. For example, “The probability of winning is always \(\frac13\) because only one door out of three has a prize behind it.” Ask students to identify the error, critique the reasoning,
and write a correct explanation. As students discuss with a partner, listen for students who identify and clarify that there are three possibilities of how the winning and losing doors can be
arranged. For example, the author probably thought that there was only one way to win or did not realize that the doors could be arranged in three ways, WLL, LWL, or LLW, and switching doors changes
the probability of winning. This will help students better understand when events are independent or dependent.
Design Principle(s): Optimize output (for explanation); Maximize meta-awareness
Lesson Synthesis
Here are some questions for discussion.
• "What does it mean for two events to be independent?" (Two events are independent when the probability of one event occurring does not change whether the other event occurs or not.)
• "What does it mean for two events to be dependent?" (Two events are dependent when the probability of one event occurring is different if the other event occurs or not.)
• “There are 10 different names written on slips of paper that are placed in a hat. Your teacher picks one name out of a hat and places it on the desk. The teacher then picks another name out of
the hat. Are these events dependent or independent? Explain your reasoning.” (They are dependent because the probability of picking a name the second time is dependent on what name was picked
first. For example, if my name was picked first, I would have a \(\frac{0}{9}\) chance of it being picked second. If my name was not picked first then I would have a \(\frac{1}{9}\) chance of it
being picked second.)
• “How does the situation change if the first name is placed back in the hat?” (It changes because now the events are independent. The name picked first does not change the probability of the name
being picked second because all the names will be in the bag regardless of the first name picked.)
• “In science class you may have heard about independent and dependent variables. How is this related to independent and dependent events?” (In science class, the dependent variable is the one that changes in response to the independent variable. For example, the temperature at which water boils depends on the pressure, so pressure is the independent variable and the temperature is the dependent variable.)
• “Can you think of any other examples that use dependence and independence?” (One example would be your performance review at a job and how much you are paid. I would hope they are dependent, but
they do not have to be.)
7.3: Cool-down - Tall Basketball Players (5 minutes)
Student Facing
When considering probabilities for two events it is useful to know whether the events are independent or dependent. Independent events are two events from the same experiment for which the
probability of one event is not affected by whether the other event occurs or not. Dependent events are two events from the same experiment for which the probability of one event is affected by
whether the other event occurs or not.
For example, let's say a bag contains 3 green blocks and 2 blue blocks. You are going to take two blocks out of the bag.
Consider two experiments:
1. Take a block out, write down the color, return the block to the bag, and then choose a second block. The event, "the second block is green" is independent of the event, "the first block is blue."
Since the first block is replaced, it doesn’t matter what block you picked the first time when you pick a second block.
2. Take a block out, hold on to it, then take another block out. The same two events, "the second block is green" and "the first block is blue," are dependent.
If you get a blue block on the first draw, then the bag has 3 green blocks and 1 blue block in it, so \(P(\text{green})=\frac{3}{4}\).
If you get a green block on the first draw, then the bag has 2 green blocks and 2 blue blocks in it, so \(P(\text{green}) = \frac{1}{2}\).
Since the probability of getting a green block on the second draw changes depending on whether the event of drawing a blue block on the first draw occurs or not, the two events are dependent.
In some cases, it is difficult to know whether events are independent without collecting some data. For example, a basketball player shoots two free throws. Does the probability of making the second
shot depend on the outcome of the first shot? Some data would need to be collected about how often the player makes the second shot overall and how often the player makes the second shot after making
the first so that you could compare the estimated probabilities. | {"url":"https://curriculum.illustrativemathematics.org/HS/teachers/2/8/7/index.html","timestamp":"2024-11-03T06:30:28Z","content_type":"text/html","content_length":"111823","record_id":"<urn:uuid:5420973a-4cce-4d4d-8d7d-00a31af7bf1f>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00471.warc.gz"} |
Formula Use Cases
The Formula Column is one of the more popular ways to manipulate data on monday.com. From simple mathematical calculations to more complicated formulas, by utilizing our library of available
functions, the Formula Column can help you simplify complex problems.
Before we dive into this article, in the below board, you will find some of our most common use cases for the Formula column:
Feel free to simply copy and paste the formula provided in the 'Formula' column directly onto your own board for you to use!
Tip: To access our Formula Use Cases board, you can also click right here. 🙏
Now, let's further explore some of the use cases that we've collected. Below you'll find some of the most common ways to use the Formula Column ⬇️
In this example, we want to add 15 days to each date from the column "Start Date":
ADD_DAYS({Start Date},15)
If we wanted to subtract 15 days instead, we would use the function SUBTRACT_DAYS() in place of ADD_DAYS().
SUBTRACT_DAYS({Start Date},15)
Both of the above formulas will show you an unformatted result that may look a little clunky. This is why we recommend using the following formula. The result is formatted and will look cleaner on
your board:
FORMAT_DATE(ADD_DAYS({Start Date},15))
If you want to remove the year and simply see the month and day, you can further customize your formula by using the LEFT() function. In the following formula, the LEFT() function takes the output of
the FORMAT_DATE() function and only outputs 6 characters from the left:
LEFT(FORMAT_DATE(ADD_DAYS({Start Date},15)),6)
If we change the 6 in the formula, we change the number of characters that the formula outputs. The results of this formula can be seen in the "LEFT" column:
One really useful and dynamic function is TODAY(). Using this with the DAYS() function, you can calculate the number of days until the due date (or past the due date). When using TODAY(), you don't
need to include anything within the parenthesis.
We've also encapsulated the output of the DAYS() function within the ROUND() function. In this case, we're indicating that we want to round the output with 0 decimal places.
ROUND(DAYS({Due Date},TODAY()),0)
If you're using a board to keep track of employee vacation requests, a formula can be useful when calculating the number of working days the employee will need. The following function, WORKDAYS(),
will return the number of working days between two dates. Working days are defined, according to your account settings, as Monday to Friday or Sunday to Thursday. To learn more about this account
setting, check out this article.
When using the Time Tracking Column with the Formula Column, you can choose whether you want to pull in seconds, minutes, or hours. For the following formula, we're using the "Billable" column's
hours. We're also using the ROUND() function again to clean up our decimals. The number 2 in this formula signifies two decimal places.
ROUND(MULTIPLY({Billable#Hours},{Hourly Rate}),2)
In this example, I'd like to calculate how long my contractors have worked. This should not include their unpaid break time. Using four Hour Columns, I can create a formula to calculate this.
IF(HOURS_DIFF({Break End}, {Break Start}) > "0", HOURS_DIFF(HOURS_DIFF({End}, {Start}), HOURS_DIFF({Break End}, {Break Start})), HOURS_DIFF({End}, {Start}))
This formula says that if the break is greater than 0, calculate the total hours worked minus the break. If the break is not greater than 0, calculate the total hours worked.
In this board, we're looking at the total sales per month for four employees. To calculate the change between the results of January and February as a percentage, you would use the following formula:
MULTIPLY(DIVIDE(MINUS({February Sales},{January Sales}),{January Sales}),100)
This formula can also be written as:
((({February Sales}-{January Sales})/{January Sales})*100)
Now we want to calculate each employee's bonus. An employee will receive a bonus only if the "Total Sales" are higher than $350,000 and if the number of deals in the "Deals" column is higher than 12:
IF(AND({Total Sales}>350000,{Deals}>12),250,0)
The AND() function checks whether the two conditions are true. Based on the result, the IF() statement tells the formula column which value to return.
In this example, let's say I manage a sales team with varying commission rates per salesperson. You can use the labels from the status column to indicate a specific rate within your formula:
IF({Rate}="Rate 1",25,IF({Rate}="Rate 2",20,IF({Rate}="Rate 3",15,IF({Rate}="Rate 4",10,IF({Rate}="Rate 5",5)))))
The "Commission %" column is showing the relevant rate based on the status label selected.
You can take this formula a step further and calculate commission based on the rate and the "Total Sales" column by incorporating the MULTIPLY() function. Just keep in mind that if you use a decimal
in a formula, you must write 0.25 rather than .25 to avoid an illegal formula error.
IF({Rate}="Rate 1",MULTIPLY(0.25,{Total Sales}),IF({Rate}="Rate 2",MULTIPLY(0.20,{Total Sales}),IF({Rate}="Rate 3",MULTIPLY(0.15,{Total Sales}),IF({Rate}="Rate 4",MULTIPLY(0.10,{Total Sales}),IF(
{Rate}="Rate 5",MULTIPLY(0.05,{Total Sales}))))))
If you're using a board to keep track of your budget, this formula might come in handy. In this example, the travel budget for each employee is $6,500. We want to find out if the total amount spent
on each employee is within the budget or over it. To do this, we will compare the SUM() of the values in four columns to the budget with an IF() statement.
IF(SUM({Flight},{Hotel},{Insurance},{Expenses})>6500, "Over Budget","Good")
You can easily manage your inventory with monday.com. This example explains how to calculate your current available stock and how much inventory was sold (as a percentage).
For the "In Stock" column:
MINUS(MINUS({Starting Inventory},{Reserved}),{Sold})
For the "% Sold" column:
ROUND(MULTIPLY(DIVIDE({Sold},{Starting Inventory}),100),2)
By using the TEXT() function, you can format your results exactly the way you want. Let's take an easy example in the first line of the following board. We want to multiply 100 by 25, and we want the
output to show like this: $2,500.00.
TEXT(MULTIPLY({Starting Inventory},{Cost}),"$#,##.00")
In the last part of the formula, each # represents a number. We added .00 to the end because we want the output to end with two zeros, but you can replace this with .## if you want your output to end
with calculated numbers. The $ places the symbol in front of the number.
Keep in mind that if you format a number using the TEXT() function, the formula column will read the number as text rather than a number. This means that your column summary will show "N/A" (as
above) instead of the sum/average/etc. of the numbers shown in your formula column.
Tip: Looking to expand your use of the Formula Column? Check out the apps marketplace to explore several popular apps that extend the native capabilities of the Formula Column. 🙌
If you have any questions, please reach out to our team right here. We’re available 24/7 and happy to help. | {"url":"https://support.monday.com/hc/en-us/articles/360015316019-Formula-Use-Cases","timestamp":"2024-11-09T14:14:37Z","content_type":"text/html","content_length":"231578","record_id":"<urn:uuid:dbb227ce-c226-479c-941c-7c378a4852d9>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00178.warc.gz"} |
2014 Peanut PLC Computation - Texas Peanuts
2014 Peanut PLC Computation
Posted on September 11, 2015
From the National Center for Peanut Competitiveness
USDA-National Agricultural Statistics Service has reported the 2014 marketing year national seasonal average price for peanuts to be $0.22/lb which translates to $440/ton FSP. This can be found on
the USDA-NASS website using its Quick Stats at the bottom of the home page.
If the national seasonal average price is below $535/ton FSP, a 2014 peanut PLC payment will occur. The PLC payment will be $535-$440= $95/base ton FSP.
The farmer has two ways to calculate the total payment per farm serial number.
The farmer would take the $95/base ton times 85% of the base acres (includes generic base allocated to peanuts) on that farm serial number times the payment yield for that farm serial number. This
total payment would be reduced by the sequestration cut. Based on the USDA-FSA Handbook for ARCPLC, the 2014 sequestration cut would be 7.3%. This approach is what one would see and hear from
USDA-FSA folks and the press.
The alternative approach, which the NCPC prefers, is as follows. Both approaches will yield the same total PLC dollar amount per farm serial number. First, the farmer calculates their total peanut
base tonnage per farm serial number by multiplying the base acres (includes generic base allocated to peanuts) times the payment yield. This will allow the farmer to compare their total base tonnage
on that farm to their total production. The farmer would then multiply the PLC payment per ton (in this case it is $95/base ton) by 85% and then reduce that value by the sequestration cut (7.3%).
That is $95*.85*(1.0-0.073) which equals $74.85525. One can view the $74.85525 as the net PLC per base ton. The farm’s total PLC payment would then be $74.85525 times the total base tons on the farm.
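As a quick illustration of the arithmetic, the net rate and a farm-level total can be computed as below. The base acres and payment yield used here are made-up numbers for demonstration only, not figures from the article.

plc_rate = 535.0 - 440.0                      # $95 per base ton FSP
net_rate = plc_rate * 0.85 * (1.0 - 0.073)    # 85% of base, less the 7.3% sequestration cut
print(net_rate)                               # 74.85525

base_acres = 100.0      # hypothetical base acres (including generic base allocated to peanuts)
payment_yield = 1.5     # hypothetical payment yield, tons per acre
base_tons = base_acres * payment_yield
print(round(net_rate * base_tons, 2))         # total PLC payment for this farm serial number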
The farmer needs to be cautioned in that the total payments from all of their farms will be directly attributed to the individual with a cap of $125,000 per entity. If a farmer had any 2014 peanut
crop MLGs, that MLG will also be attributed back to the individual and count against the $125,000 payment limit. Based on when the farmer received their MLGs, those gains may count first against
their payment limit, which could lead to further reduction in actual PLC payments. Southern commodity organizations are working with Members of Congress in obtaining generic certificates applicable
for the 2015 and 2016 crops. If successful, the generic certificates would be substituted for potential LDP/MLG, which would not be counted against one’s payment limit.
Finally, not knowing how USDA-FSA will round and when they will round, their final numbers may differ slightly from the numbers presented above due to rounding. | {"url":"https://texaspeanuts.com/2014-peanut-plc-computation/","timestamp":"2024-11-13T16:14:16Z","content_type":"text/html","content_length":"47004","record_id":"<urn:uuid:aa2f9ff8-8377-4831-ac4f-cee4209c3e65>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00177.warc.gz"} |
Issue 110, 2024, pp. 42-67. DOI: https://doi.org/10.25728/ubs.2024.110.2
Keywords: hierarchical tree, local and global weights of criteria, local data aggregation, scale transformation
In the tasks of multi-criteria assessment and selection of objects with a multilevel structure, the initial data characterizing the objects are usually measured in different scales. In this regard,
the use of additive convolution for the end criteria of a hierarchical tree reflecting the multilevel structure of objects is correct only for estimates of objects represented or transformed to a
single homogeneous scale. The article introduces the concept of weight in the quantitative scale of the relations of the criterion (k-1)-th level of the hierarchical tree, determined by the sum of
the weights of the subcriteria of the k-th level. In this case, the application of the procedure for calculating global normalized weights, which are commonly called coefficients, at each level of
the hierarchy through a multiplicative convolution of local coefficients lying on the path from the root vertex is correct. The proposed method of local aggregation of estimates of objects with a
multilevel structure has an important property, namely: the adequacy of the ordering of objects at any vertex of the hierarchical structure of criteria for calculating aggregated estimates in a
quantitative, ordinal (rank) scale. It is shown under what conditions the integral method of aggregated estimates based on global coefficients of the end criteria coincides with the local one. The
advantages of the local method are visibility, the ability for analysts to understand and control intermediate results, and greater objectivity of calculated estimates at the root vertex of the
hierarchical tree. The essence of the methods and their comparison is shown by the example of a multi-criteria evaluation of information materials.
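To make the described scheme concrete, here is a small illustrative sketch (not the authors' code, with made-up criteria names and weights): global weights are obtained by multiplying local weights along the path from the root, and local (bottom-up) aggregation of object scores yields the same root value as the integral method based on the global weights of the end criteria.

tree = {
    "root": {"quality": 0.6, "cost": 0.4},          # local weights at level 1
    "quality": {"accuracy": 0.7, "coverage": 0.3},  # local weights at level 2
}

def global_weights(node, acc=1.0, out=None):
    # Multiplicative convolution of local weights along the path from the root.
    out = {} if out is None else out
    for child, w in tree.get(node, {}).items():
        out[child] = acc * w
        global_weights(child, acc * w, out)
    return out

def aggregate(node, leaf_scores):
    # Local aggregation: a node's score is the weighted sum of its children's scores.
    children = tree.get(node)
    if not children:
        return leaf_scores[node]
    return sum(w * aggregate(child, leaf_scores) for child, w in children.items())

scores = {"accuracy": 0.9, "coverage": 0.5, "cost": 0.8}
print(global_weights("root"))     # e.g. accuracy -> 0.6 * 0.7 = 0.42
print(aggregate("root", scores))  # equals the sum of global weights times leaf scores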
| {"url":"https://www.mtas.ru/search/search_results_new.php?publication_id=23224&IBLOCK_ID=20","timestamp":"2024-11-04T15:28:56Z","content_type":"text/html","content_length":"14781","record_id":"<urn:uuid:63b4c69c-af35-4a26-9bf6-12889c1a07a3>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00275.warc.gz"} |
February 2007
• 70 participants
• 157 discussions
Hello, I am running into overfull boxes when including a weaved cweb file into a context document. The original reference to the cwebmac.tex file (the plain TeX macro) is commented out and replaced
by \usemodule[cweb]. The cweb macros appear to expand something into \*11ptmmmt* which seems to have something to do with it. Is there anyone here with experience and deep (Con)TeX(t) knowledge who
understands what is happening? Regards, Edwin ----------------------------------part of my log file------------------------------------------------ Overfull \hbox (6.4885pt too wide) in paragraph at
lines 3--13 \*11ptmmmr* for the ap-pli-ca-tion de-vel-op-ment mod-ule. This is a small ap-p li-ca-tion which loads file \hbox(7.63889+2.13889)x398.33858 .\*11ptmmmr* f .\*11ptmmmr* o .\*11ptmmmr* r .
\glue(\spaceskip) 3.66628 .\*11ptmmmr* t .etc. Overfull \hbox (8.38132pt too wide) in paragraph at lines 3--13 \*11ptmmmr* with point and line da-ta and dis-plays it in a win-dow. It pro-vid es a
us-er in-ter-face al- \hbox(7.63889+2.13889)x398.33858 .\*11ptmmmr* w .\*11ptmmmr* i .\*11ptmmmr* t .\*11ptmmmr* h .\glue(\spaceskip) 3.66628 .etc.
Dear list, my publisher forces me to set different things as titles, in headers and table of contents. In some cases \nolist and \nomarking aren't enough. here are my actual definitions: ---- \def\
Untertitel#1{\blank[3pt]{\ss\bf #1}\blank} % ok \def\Titel#1#2{{\nohyphens\chapter{#1}\Untertitel{#2}}} % ok \def\KomplexTitel#1#2#3{{\writetolist[chapter]{#3}\nohyphens\title{#1}\U ntertitel{#2}}}
---- with ---- \start \setupheadertexts[][manual entry][part][] % normally defined as [][chapter][part][] \KomplexTitel{Main chapter title}{chapter subtitle}{toc entry} Much text \stop % of other
headertext ---- Real life example: \start \setupheadertexts [][Vorwort][][] \KomplexTitel{Die neutrale Schweiz und das globale Dorf}% {Vorwort von Prof.|~|Dr.|~|Heinrich Ott}% {Vorwort Prof.|~|Dr.|~|
Heinrich Ott:\\Die neutrale Schweiz und das globale Dorf} ... Now there stay two issues: 1. I'd like to get a four-parameter command that can set the header text also. 2. I must get a line feed into
the toc, but neither \\ nor \crlf works. Please help! Thanks. Grüßlis vom Hraban! -- www.fiee.net/texnique/ www.ramm.ch/context/
Hi, How can the bib module and \setuphead[chapter][prefix=+] be made to coexist? The following example works, but if you uncomment the \setuphead, the citations are broken and the reference list
vanishes: \setupoutput[pdftex] % uncomment this line and it breaks %\setuphead[chapter][prefix=+] \usemodule[bib] \usemodule[bibltx] \setupbibtex[database={xampl.bib}] \setuppublications[refcommand=
num] % makes no real difference %\setuppublicationlist[title=\chapter] \version[temporary] \starttext \chapter[chapt]{A Chapter} Some citations: \cite[article-full] \cite[inbook-full] \cite[manual-
full]. This is chapter~\in[chapt]. \completepublications[criterium=all] \stoptext Any help much appreciated! PS: The following old mailing list article appears to describe the same problem, but the
symptoms are different (citations disappear, but reference list does not) http://archive.contextgarden.net/message/ 20010516.173217.113e3787.en.html Robin
Hello, I suggested to the author of the very feature rich LaTeX-Listings package, Carsten Heinz, to port this package to ConTeXt, and it seems, that he is interested. But before doing it, there are
some questions: * What do Hans and other people thing about that? * listings.sty uses keyval.sty, how is the proper ConTeXt-way to do the same things as keyval, or should listings stay with keyval?
Greetings, Peter -- http://pmrb.free.fr/contact/ _____________________________________ FilmSearch engine: http://f-s.sf.net/
Hello, now I see, that my problem with HZ is only in DVI mode, PDFs are ok. Here my test file: \usetypescript[serif,sans,mono][handling][highquality] \setupalign[hanging,hz] \usetypescript
[modern-base][\defaultencoding] \setupbodyfont[modern] \starttext \input tufte \stoptext And here the errors: This is dvips(k) 5.96 Copyright 2005 Radical Eye Software (www.radicaleye.com) ' TeX
output 2007.01.08:1425' -> test.ps kpathsea: Running mktexpk --mfmode ljfzzz --bdpi 1200 --mag 1+240/1200 --dpi 1440 ec-lmr10+15 mktexpk: don't know how to create bitmap font for ec-lmr10+15.
kpathsea: Appending font creation commands to missfont.log. dvips: Font ec-lmr10+15 not found, characters will be left blank. dvips: Can't open font metric file ec-lmr10+15.tfm dvips: I will use
cmr10.tfm instead, so expect bad output. dvips: Checksum mismatch in ec-lmr10+15 kpathsea: Running mktexpk --mfmode ljfzzz --bdpi 1200 --mag 1+240/1200 --dpi 1440 ec-lmr10+20 mktexpk: don't know how
to create bitmap font for ec-lmr10+20. dvips: Font ec-lmr10+20 not found, characters will be left blank. dvips: Can't open font metric file ec-lmr10+20.tfm dvips: I will use cmr10.tfm instead, so
expect bad output. dvips: Checksum mismatch in ec-lmr10+20 kpathsea: Running mktexpk --mfmode ljfzzz --bdpi 1200 --mag 1+240/1200 --dpi 1440 ec-lmr10-10 mktexpk: don't know how to create bitmap font
for ec-lmr10-10. dvips: Font ec-lmr10-10 not found, characters will be left blank. dvips: Can't open font metric file ec-lmr10-10.tfm dvips: I will use cmr10.tfm instead, so expect bad output. dvips:
Checksum mismatch in ec-lmr10-10 kpathsea: Running mktexpk --mfmode ljfzzz --bdpi 1200 --mag 1+240/1200 --dpi 1440 ec-lmr10-15 mktexpk: don't know how to create bitmap font for ec-lmr10-15. dvips:
Font ec-lmr10-15 not found, characters will be left blank. dvips: Can't open font metric file ec-lmr10-15.tfm dvips: I will use cmr10.tfm instead, so expect bad output. dvips: Checksum mismatch in
ec-lmr10-15 kpathsea: Running mktexpk --mfmode ljfzzz --bdpi 1200 --mag 1+240/1200 --dpi 1440 ec-lmr10-20 mktexpk: don't know how to create bitmap font for ec-lmr10-20. dvips: Font ec-lmr10-20 not
found, characters will be left blank. dvips: Can't open font metric file ec-lmr10-20.tfm dvips: I will use cmr10.tfm instead, so expect bad output. dvips: Checksum mismatch in ec-lmr10-20 <tex.pro>
<lm-ec.enc><texps.pro>. <lmr10.pfb>[1] TeXExec | runtime: 11.250944 Can anybody help please? Cheers, Peter -- http://pmrb.free.fr/contact/
Hi, just by chance I experienced that there seems to be a problem with ligatures and hyphenation in ConTeXt. This is the example: \usetypescript[postscript][\defaultencoding] \setupencoding[default=
texnansi] \mainlanguage[de] \enableregime[mac] \setupbodyfont[postscript,10pt] \starttext Auflagen % the fl-ligature is wrong here Auf\-lagen % this is the right fl Auf\/lagen %this is the right fl
123456789012345678901234567890123456789012345678901234567890123456789012 34567890 Auflagen % right hyphenation, but wrong ligature
123456789012345678901234567890123456789012345678901234567890123456789012 34567890 Auf\-lagen % but this kills the other hyphenation Aufla-gen
123456789012345678901234567890123456789012345678901234567890123456789012 34567890 Auf\/lagen % but this kills the other hyphenation Aufla-gen
123456789012345678901234567890123456789012345678901234567890123456789012 34567890 Auf"-lagen % this is used in (La)TeX, but doesn'T work in ConTeXt?! % the next wrong causes an error (but would be
perfect: 123456789012345678901234567890123456789012345678901234567890123456789012 34567890 Auf"|lagen % this is used in (La)TeX, but doesn'T work in ConTeXt?! \stoptext What is needed is to ... 1)
avoid the fl-ligature 2) preserve all the default hyphenation points In TeX (or only LaTeX?) Auf"-lagen or Auf"|lagen would do this. But what is the equivalent in ConTeXt? Steffen
Hi all, \starttext 5,00\,\texteuro \stoptext does not work any more. What have I to do? Bernd here the message from contextgarden systems : begin file texweb at line 2 ! Font \thedefinedfont=fmvr8x
at 12.0pt not loadable: Metric (TFM) file not fou nd. <recently read> \scaledfont \symbolicsizedfont ...ntfontbodyscale \scaledfont \thedefinedfont \getglyph #1#2->{\symbolicfont {#1} \doifnumberelse
{#2}\char \donothing #2} \dodosymbol ...bol \csname \??ss :#1:#2\endcsname \relax }\relax \donormalsymbol ... {#1}{#2}{\dodosymbol {#1}{#2}} \else \edef \currentsymbol... l.5 5,00\,\texteuro ? !
Emergency stop. <recently read> \scaledfont
Hi, (This post actually belongs to dev-context, but I do not know if everyone interested in this reads dev-context, so I am posting it to this list). I have been looking at what is the best way to do
theorems etc with ConTeXt. There are ways to handle most of the requirements, but some of them are not possible out of the box with ConTeXt. I have searched the archives and this questions about
theorems and proofs keep on coming on and off. I want to see what is the requirements for theorems, what all is possible right now, and what needs to be done. Below I am writing my requirements of
theorem. Does anyone have anything more to add? 1. They should be numbered, it should be possible to control the numbering mechanism, for example way=bysection, bychapter, etc. > Can be done using
enumerations. 2. It should be possible to have two different types of theorems have the same number. For example, \definetheorem[theorem] \definetheorem[lemma][number=theorem] should follow the same
number as theorems. > Is possible using enumerations 3. It should be possible to get a list of theorems. It should be possible to say which types of theorems go to the list and which do not. > Is
possible using enumerations 4. The theorem should have a title. The title should be optional. > Is partially implemented 5. There should be a mechanism to do end-of-proof marks. The end-of-proof
marks should also work with itemizations, and formulas. > Not implemented at all. 6. Anything more...? Requirement 4 is not completely implemented. With \defineenumeration[theorem][title=yes] I
always have to give a title. Normally, while writings theorems, there are only a few theorems that have a title. The rest of them do not. Right now, I can work around this restriction, by either
having two different theorems (titledtheorem and nontitledtheorem) or always adding {} at the beginning of each theorem. I want the behaviour closer to title=maybe. I do not understand why \
@@startdescription contains \dowithwargument{\@@startsomedescription{#1}[#2]} I would prefer it to contain \dosinglegroupempty{\@@startsomedescription{#1}[#2]} (well this will of course not work, but
I hope the idea is clear) So that I can do \defineenumeration[theorem][title=yes] \starttheorem A silly theorem not worth a title \stoptheorem as well \starttheorem {My Fancy Theorem} A fancy theorem
that needs to be given a title \stoptheorem I do not think that changing this will break anything. I am not even sure why dowithwargument is there. I cannot imagine anyone writing \startthoerem Title
this is a theorem \stoptheorem instead of \starttheorem {Title} this is a theorem \stoptheorem The title also needs some more attributes. Right now, only titlestyle, titlecolor and titledistance are
there. To be more flexible, you would also need something like titleleft, and titleright, which should not be too difficult. The last things, that is the end of proof marker, is right now not
possible in ConTeXt. There are a lot of things that need to be taken care of while having a end-of-proof marker: basically, you need to ensure that there is no page break between the proof and the
marker. Also the marker needs to be moved up or down, depending on how the proof ends. At the very least, ConTeXt should have something that ensures that the end of proof marker does not go onto a
page of its own. Finally, my question is: Does it make sense to include all these functionality into enumerations, or have them in a separate module? Aditya
Hello, I am having a problem setting up a body font since updating my tex system to texlive 2007. The file below worked fine on my old tetex system, but alas the body font will no longer change. No
matter what I do, I get the default roman font instead of the one I specify. The Chapter font is the correct font (bickham), but the body font is not. Based on this I think I have the font installed
correctly. Can anyone see what I am missing. Why can I not change the body font? %output=pdftex \resetmapfiles \loadmapfile[texnansi-familylearn-bickham] \definefontsynonym[BodyFont][texnansi-pbir8a]
[encoding=texnansi] \definebodyfont[default][rm][tf=BodyFont sa 1] \setupbodyfont[default, rm] \definefont[ChapterFont][BodyFont at 48pt] \setuphead[chapter][style=\ChapterFont] \starttext \chapter
{Bickham Chapter} Hello in Bickham font. \stoptext Thanks, paul
While running the latest ConTeXt, I get the following warning: [6.6] [7.7] ./Figure1a.pdf Error: PDF version 1.6 -- xpdf supports version 1.5 (continuing anyway) Warning: pdftex (file ./
Figure1a.pdf): pdf inclusion: found pdf version <1.6>, but at most version <1.5> allowed <./Figure1a.pdf> It does not seem to have any observable impact on the typeset product however. Alan | {"url":"https://mailman.ntg.nl/archives/list/ntg-context@ntg.nl/2007/2/","timestamp":"2024-11-03T02:43:09Z","content_type":"text/html","content_length":"103342","record_id":"<urn:uuid:be481ab4-5910-4b4e-bd0d-7a3ca82914b2>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00172.warc.gz"} |
Project : Notes And Queries
Peircean Pragmata
Several recent blog postings have brought to mind a congeries of perennial themes out of Peirce. I am prompted to collect what old notes of mine I can glean off the Web, and — The Horror! The Horror!
— maybe even plumb the verdimmerung depths of that old box of papyrus under the desk …
Peirce's Law : Tertia Datur And Non
Peirce's Law and the Pragmatic Maxim
Jacob Longshore conjectures a link between Peirce's Law and the Pragmatic Maxim.
Jon Awbrey freely associates to Post N°3.
Pieces of the Puzzle
For the Time Being, a Sleightly Random Recap of Notes …
Pragmatic Maxim as Closure Principle
Consider what effects that might conceivably have practical bearings you conceive the objects of your conception to have. Then, your conception of those effects is the whole of your conception of
the object. (C.S. Peirce, CP 5.438).
Consider the following attempts at interpretation:
Your concept of \(x\!\) is your concept of the practical effects of \(x.\!\)
Not exactly. It seems a bit more like:
Your concept of \(x\!\) is your concept of your-conceived-practical-effects of \(x.\!\)
Converting to a third person point of view:
\(j\!\)'s concept of \(x\!\) is \(j\!\)'s concept of \(j\!\)'s-conceived-practical-effects of \(x.\!\)
An ordinary closure principle looks like this:
\[C(x) = C(C(x))\!\]
It is tempting to try and read the pragmatic maxim as if it had the following form, where \(C\!\) and \(E\!\) are supposed to be a 1-adic functions for "concept of" and "effects of", respectively.
1-adic functional case:
\[C(x) = C(E(x))\!\]
But it is really more like:
2-adic functional case:
\[C(y, x) = C(y, E(y, x))\!\]
\(y\!\) = you.
\(C(y, x)\!\) = the concept that you have of \(x.\!\)
\(E(y, x)\!\) = the effects that you know of \(x.\!\)
x C(y, x)
/|\ ^
/ | \ =
/ | \ =
/ | \ =
e_1 e_2 e_3 =
\ | / =
\ | / =
\ | / =
\|/ =
E(y, x) C(y, E(y, x))
The concept that you have of \(x\!\) is the concept that you have of the effects that you know of \(x.\!\)
It is also very likely that the functional interpretations will not do the trick, and that 3-adic relations will need to be used instead.
Source. Jon Awbrey (08 Aug 2002), "Inquiry Driven Systems : Note 23", Ontology List, Peirce List.
Logic As Semiotic
Inquiry Into Information | {"url":"https://mywikibiz.com/Directory:Jon_Awbrey/Projects/Notes_And_Queries","timestamp":"2024-11-13T18:08:52Z","content_type":"text/html","content_length":"28301","record_id":"<urn:uuid:07a08bc5-97ec-464e-9281-bea4c9bfeef9>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00080.warc.gz"} |
System, Surroundings, Boundary and Universe in thermodynamics - types of systems (open, closed, isolated)
These are some common definitions associated with basic thermodynamics. Every thermodynamic study and analysis is related to these terms. Let's take a quick review of them.
Thermodynamic System
A thermodynamic system is the region that contains a certain quantity of matter, in which thermodynamic processes happen and thermodynamic analysis can be carried out. In the system, the matter has certain properties which can be altered by different processes, such as the transfer of mass and energy.
Surroundings
Everything external to the system, i.e. the outside environment, is called the surroundings of the system.
Boundary of the System
The boundary is what separates the system from the surroundings. The boundary can be real or imaginary; sometimes a relative boundary is considered, so the boundary can be at rest or in motion.
Universe
The combination of the system and its surroundings is called the universe, i.e. when the system and the surroundings are taken together, they are referred to as the universe.
Types of System
There are three types of systems that are recognized:
1. Open System
2. Closed System
3. Isolated System
Open System
Open systems are those in which both mass and energy can be added to and taken out of the system. The system is open to any exchange of mass and energy.
Closed System
A closed system is one in which the mass remains constant, i.e. mass cannot be added or removed, but energy can be exchanged.
Isolated system
An isolated system is one in which neither mass nor energy can be added to or removed from the system.
| {"url":"http://www.mechanicalclasses.com/2018/02/system-surrounding-boundary-and.html","timestamp":"2024-11-12T13:36:19Z","content_type":"application/xhtml+xml","content_length":"162790","record_id":"<urn:uuid:ffd3ed21-5255-41ae-a3ec-96b4aee4b0f4>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00090.warc.gz"} |
Sparsity Made Easy – Introducing the Cerebras PyTorch Sparsity Library - Cerebras
The growing commercialization of large language models (LLMs) for myriad tasks, such as text generation, retrieval-augmented search, etc., leads to exponential growth in training new language models.
As models and dataset sizes scale, the ability to reduce the prohibitive costs of training is a fundamental enabler. At Cerebras, we believe unstructured sparsity is the answer for lowering the
compute for training foundation models.
Training with sparsity involves masking certain learnable weights in the layer’s weight matrix, as shown in Figure 1. As an earlier blog post explained, training with sparse weights allows us to skip
floating point operations (FLOPs), i.e., compute during the forward and backward pass, giving speedup on hardware that supports accelerating sparsity, such as the Cerebras CS-2 system.
Figure 1: Applying weight sparsity to a dense neural network by masking weights effectively prunes neuron connections within the network. The light blue connections in the sparse network indicate the
masked weights.
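As a rough, framework-agnostic illustration of the idea (this is not the Cerebras implementation), masking a weight matrix in PyTorch looks roughly like this:

import torch

torch.manual_seed(0)
weight = torch.randn(4, 4, requires_grad=True)
mask = (torch.rand(4, 4) > 0.8).float()     # keep roughly 20% of the weights (about 80% sparsity)

x = torch.randn(2, 4)
y = x @ (weight * mask).t()                 # masked weights contribute nothing to the forward pass
loss = y.pow(2).mean()
loss.backward()                             # gradients at masked positions are zero as well

print((weight.grad[mask == 0] == 0).all())  # pruned connections receive no update

Hardware that can exploit this pattern skips the corresponding multiply-accumulates instead of computing zeros, which is where the FLOP savings come from.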
The Cerebras CS-2, enabled by our on-chip memory bandwidth and fine-grained dataflow scheduling, is the only production hardware capable of accelerating training with unstructured weight sparsity.
This necessitates software with an easy interface to access the power of sparsity. Most deep learning libraries, such as PyTorch, are optimized for dense computations and provide minimal support for
sparsity. Also, the sparsity support it does have is not a first-class citizen and is optimized for GPU computing. A good user interface allows ML users to train dense models and takes advantage of
sparsity if the underlying hardware supports it without complex rewrites. With this in mind, we release our fully integrated PyTorch-based library for sparse training. The library is built on the
principle of modular APIs shared among different sparsity algorithm implementations, allowing for fast research ideation and extensibility to new algorithms. It is co-designed from the ground up with
our software solution, does not require changing complex code deep in the framework, and introduces minimal overhead for ML users.
In the rest of the blog, we introduce the core API that makes the foundations of our sparsity library. We release implementations of several popular weight sparsity algorithms (for static and dynamic
sparse training). We follow this up by showing how easy it is to enable sparse training with our ModelZoo and a few benchmark results. Finally, we demonstrate how extensible the API is by enabling a
new dynamic sparsity algorithm and training a model using that implementation.
Cerebras PyTorch Sparsity Library
Our library is designed to support unstructured sparsity. It is hardware agnostic; however, when enabled with the Cerebras CS-2, it can harness its unique ability to accelerate unstructured sparsity.
The library abstracts all low-level complexity by providing reusable APIs in PyTorch, enabling the ML user to focus on research ideation. It introduces minimal overhead and is easy to integrate into
most training workflows.
API Design
In this section, we will give an overview of the design of our sparsity API. We have four key abstractions that enable us to flexibly design most weight-sparsity algorithms for training.
Optimizer Integration: Most state-of-the-art sparsity algorithms require stateful variables such as masks, counters, initialization states, etc., to enable iterative adjustments during training for
better model quality. This is similar to PyTorch optimizers such as AdamW [3], which track momentum buffers for adaptive optimization. Relying on this framework of optimizers for handling states and
enabling per-parameter options, we design our sparsity algorithms as special optimizers, which operate on a sparse view of the model parameters.
We wrap our sparse optimizers over the existing PyTorch optimizer for training (e.g., SGD or AdamW). This enables efficient parallelization, saving the sparsity states in checkpoints and
conditionally handling dynamic gradient loss scaling. Figure 2 shows how our wrapper introduces minimal changes to facilitate training with sparsity.
# Construct model and optimizer
model = torch.nn.Module(...)
optimizer = cstorch.optim.SGD(...)
# Model forward and backward
loss = model(...)
# Update weights.
# Construct model and optimizer as usual
model = torch.nn.Module(...)
optimizer = cstorch.optim.SGD(...)
# Construct a sparsity optimizer, and use the returned wrapper as a drop-in replacement for the original optimizer
optimizer = cstorch.sparse.configure_sparsity_wrapper(
# Model forward and backward as usual. Sparsity is automatically applied.
loss = model(...)
# Update weights. If using dynamic sparsity, it is also updated according to its schedule.
Figure 2: We compare CS2 workflows for dense (left) and sparse (right) training. Our sparsity wrapper handles all the changes needed for sparse training internally. For the ML developer, it is as
simple as calling the wrapper configuration function with the arguments for sparsity.
Base Optimizer: Most state-of-the-art sparsity and pruning algorithms share similar routines for applying and updating masks for models. We consolidate these under a BaseSparsityOptimizer
implementation, which handles other standard functions such as defining the checkpoint states for sparsity, initializing masks with custom distributions, and handling the sparse views of the model
parameters and optimizer states. This allows users to define new optimizers relatively easily without worrying about the control flow of sparse training and checkpointing.
Update Schedules: Sparse training algorithms often change the mask patterns and sparsity level at different frequencies and rely on some scheduling functions to enable them. We provide a
BaseHyperParameter class, which can be used to define custom schedules easily and pass them to existing algorithms. We also implement standard schedules such as cosine, polynomial, and periodic.
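For intuition, a cosine-decay schedule for a sparsity hyperparameter (for example, a drop fraction) can be written as below. This is only a sketch and does not reflect the exact BaseHyperParameter interface.

import math

def cosine_decay(step, total_steps, start=0.3, end=0.0):
    # Decays smoothly from `start` at step 0 to `end` at `total_steps`.
    if step >= total_steps:
        return end
    return end + 0.5 * (start - end) * (1 + math.cos(math.pi * step / total_steps))

print([round(cosine_decay(s, 10_000), 3) for s in (0, 2_500, 5_000, 10_000)])  # [0.3, 0.256, 0.15, 0.0]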
Tensor utilities: We provide a few base utilities for handling mask updates for sparsity algorithms, enabled via an efficient TopK implementation for the CS2. To enable fine-grained tensor handling,
we also develop two utilities for developers:
1. ScoreShaper: enables parameter reshaping, allowing for grouped structures during training.
2. ScoreTieBreaker: enables breaking ties between individual elements within a tensor when writing custom sparsification logic to ensure determinism.
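In the spirit of the two utilities above, a magnitude top-k mask with a simple deterministic tie-breaker might look like the following sketch; the names and behavior here are illustrative only and do not mirror the library's API.

import torch

def topk_mask(score: torch.Tensor, density: float) -> torch.Tensor:
    # Keep the `density` fraction of entries with the largest scores.
    k = max(1, int(round(density * score.numel())))
    flat = score.flatten()
    # Break exact ties deterministically with a tiny index-based offset.
    flat = flat + torch.arange(flat.numel(), dtype=flat.dtype) * 1e-12
    idx = torch.topk(flat, k).indices
    mask = torch.zeros(flat.numel(), dtype=torch.bool)
    mask[idx] = True
    return mask.view_as(score)

w = torch.randn(8, 8)
print(topk_mask(w.abs(), density=0.2).float().mean())  # about 0.2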
We rely on the above-defined abstractions of our API to implement the following sparse training algorithms as a representative set of baselines:
1. Static Sparse Training
2. Gradual Magnitude Pruning (GMP) [4]
3. Sparse Evolutionary Training (SET) [1]
4. Rigging the Lottery (RigL) [2]
Our developer documentation contains more details for functional arguments and support for update schedules, initializations, etc. We also provide a detailed guide on how to set up a sparse training
workflow from scratch for training on the Cerebras CS2.
Why support dynamic sparsity?
The Lottery Ticket Hypothesis [5] demonstrated that we can find a sparse network with iterative pruning and successfully train it from scratch to achieve comparable accuracy by starting from the
original initial conditions. In practice, this work relies on finding the “winning ticket,” which is compute-intensive and often challenging to discover. Previous works, such as SNIP [6], GRASP [7],
etc., have tried finding this winning ticket at initialization to reduce compute costs but lose accuracy compared to training a dense model. As an orthogonal approach, some works, such as SET [1],
RigL [2], etc., have focused on employing dynamic updates to efficiently identify optimal sparse networks within a single training cycle, bypassing the need for finding the winning ticket. Figure 3
illustrates the general workflow of dynamic updates during training. The recent state-of-the-art research on sparsity for training neural networks relies on dynamic sparse methods by default. Also,
in our recent work, Sparse-IFT [8], we benchmark the advantages of dynamic sparse training over static sparse training and show consistent wins at all sparsity levels.
Figure 3: Dynamic sparsity algorithms improve the optimization of sparse neural networks by leveraging updates during training. For example, RigL [2] utilizes weight and gradient magnitudes to
jointly optimize model parameters and connectivity. Figure sourced from the RigL paper.
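For intuition, the sketch below shows the essence of one prune-and-regrow step as used by dynamic methods such as RigL: drop a fraction of the smallest-magnitude active weights, then regrow the same number of connections where the dense gradient magnitude is largest. This is a plain-PyTorch illustration with our own function and argument names, not the Cerebras implementation; in practice the update runs under torch.no_grad() and newly grown weights are typically initialized to zero so they do not perturb the model until gradients update them.

import torch

def rigl_style_update(weight: torch.Tensor, grad: torch.Tensor, mask: torch.Tensor, drop_fraction: float) -> torch.Tensor:
    """One illustrative prune-and-regrow mask update in the spirit of RigL."""
    flat_mask = mask.clone().flatten()
    n_active = int(flat_mask.sum().item())
    n_drop = int(drop_fraction * n_active)

    # Prune: among currently active weights, drop the smallest magnitudes.
    active_scores = torch.where(mask.bool(), weight.abs(), torch.full_like(weight, float("inf")))
    drop_idx = torch.topk(active_scores.flatten(), n_drop, largest=False).indices
    flat_mask[drop_idx] = 0

    # Regrow: among currently inactive weights, enable those with the largest gradient magnitude.
    inactive_scores = torch.where(flat_mask.bool(), torch.full_like(grad.flatten(), -float("inf")), grad.abs().flatten())
    grow_idx = torch.topk(inactive_scores, n_drop, largest=True).indices
    flat_mask[grow_idx] = 1

    return flat_mask.view_as(mask)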
Push Button Software for Sparse Training
The Cerebras Software Platform makes it extremely simple to pre-train models using unstructured sparsity. Any existing PyTorch model in the Cerebras Model Zoo can be made sparse with just a few lines
of change to the configuration file, as shown in Figure 4.
Figure 4: Example configuration changes to enable 80% sparsity for training a 1.3B Llama2 model using RigL. In this example, we start with a random mask on all linear layers in the network. For the
drop fraction, we follow a cosine decay schedule to 0.
We benchmark our sparse training algorithms for training LLMs to demonstrate the new API and the effectiveness of dynamic sparsity beyond static sparsity. We use the same architecture as the Llama2
[9] family of models but do not adopt their findings on Grouped-Query Attention (GQA) and long context lengths. We train a 1.3 billion (B) parameter model on 112 billion tokens of SlimPajama [10]
data. Table 1 shows the architectural details of the model. We use the AdamW [3] optimizer with betas of (0.9, 0.95) and epsilon of 10^-8. The global norm is clipped at 1.0, and a weight decay of 0.1
is used. There is a learning rate warm-up over the first 2000 steps to a peak value of 2 ∗ 10^-4, followed by a cosine decay to 4.5 ∗ 10^-6. We train on packed sequences of 2048 context length for
computational efficiency.
Table 1: Size and architecture of the trained Llama2 model.
To train the sparse models, we uniformly prune 80% of all the linear layer weights in the decoder blocks (5x compression). The normalization, embeddings, and output linear layers are not pruned to
promote training stability for the sparse models. Following the findings in the In-Time Overparameterization [11] paper, we reduce the batch size by half compared to the dense model and train for 2x
longer. We also increase the drop fraction to 0.5, follow a cosine decay pruning schedule, and decrease the frequency of updates to allow algorithms such as SET and RigL to find better masks. The
hyper-parameters for all models are shown in Table 2.
Table 2: Learning hyper-parameters (batch size and sparsity) of the models we trained. All models are trained on 112B tokens, and for both dynamic sparsity runs, the drop fraction is decayed to 0 for
75% of the training run, following the recommendations of the RigL paper.
While a single CS-2 system can seamlessly pre-train GPT models up to 175 billion parameters, we leverage a Wafer-Scale Cluster equipped with 4 x CS-2 systems to scale pre-training to speed up our
experiments. The remarkable ease of scaling is shown in Figure 5. A more detailed discussion of the CS-2’s scaling properties can be found in this blog post.
Figure 5: Distributing training across multiple CS-2 systems in a Wafer-Scale Cluster is as easy as specifying the number of systems in the run command. No programming for distributed training is required.
Table 3 shows the model’s results on the SlimPajama validation subset and downstream tasks from the eval harness following the Open LLM Leaderboard. Using dynamic sparsity algorithms during training
leads to better model quality over static sparse training on upstream validation perplexity and downstream few-shot evaluation tasks.
Table 3: Evaluation of the dense and sparse trained models. We report the validation perplexity (↓ - lower is better) and the average downstream few-shot accuracy (↑ - higher is better) on the
public Open LLM Leaderboard. We do not report the scores for the GSM8K task, as none of the models have strong scores (less than 0.5) on this task.
While we do not run baselines here for GMP, the recent paper on scaling laws for sparse neural networks [13] shows some examples of training transformer models using this algorithm.
Ease of Integrating New Algorithms
Our library’s modular and extensible design enables the building of new algorithms seamlessly. We showcase this flexibility by building support for new state-of-the-art dynamic sparse algorithms such
as GraNet [12]. GraNet builds on top of the pruning-and-regeneration design of RigL [2]. The critical difference is that RigL is a constant sparse-to-sparse algorithm (i.e., the sparsity level does
not change throughout training), whereas GraNet follows a gradual nondecreasing sparsity schedule. This unlocks both dense-to-sparse (i.e., start dense and end sparse) and sparse-to-sparse (i.e.,
start at lower sparsity level and end at higher sparsity level) training.
Figure 6 shows the changes to enable GraNet, given a RigL configuration file.
Figure 6: Example configuration changes to enable GraNet training. In this example, we start with the RigL configuration defined in Figure 4 and add changes to the sparsity schedule to allow dynamic
changes in both the mask update and the sparsity level through training. Note that beyond adding the sparsity schedule, all other hyper-parameters are the same between RigL and GraNet.
To enable the gradual sparsity schedule of GraNet, we implement a simple cubic schedule using our BaseHyperParameter abstraction for schedules (described in API design). We compare this with the
implementation of constant sparsity level for RigL in Figure 7.
# Excerpt; assumes torch, typing.Tuple, and BaseHyperParameter are imported/defined elsewhere.
class Constant(BaseHyperParameter):
    """Constant at every step."""

    TYPE = "constant"

    def __init__(self, value):
        self.value = torch.tensor(value)

    def __call__(self, step: torch.Tensor, is_update_step: torch.Tensor):
        return self.value


class Cubic(BaseHyperParameter):
    """Cubic sparsity function.

    :math:`s_t = s_f + (s_i - s_f) * (1 - (t - t0) / (n * t_delta))**3`
    """

    TYPE = "cubic"

    def __init__(
        self,
        init_sparsity,
        end_sparsity,
        sparsity_start_step,
        sparsity_end_step,
        prune_every_k_steps,
    ):
        self.s_init = init_sparsity
        self.s_end = end_sparsity
        self.update_iter = prune_every_k_steps
        self.init_iter = int(sparsity_start_step / prune_every_k_steps)
        self.final_iter = int(sparsity_end_step / prune_every_k_steps)
        self.total_iters = self.final_iter - self.init_iter

    def __call__(self, step: torch.Tensor, is_update_step: torch.Tensor):
        curr_iter = (step / self.update_iter).int()
        prune_decay = (1 - ((curr_iter - self.init_iter) / self.total_iters)) ** 3
        current_prune_rate = self.s_end + (self.s_init - self.s_end) * prune_decay
        return torch.clamp(current_prune_rate, min=self.s_init, max=self.s_end)

    def get_min_max_end(
        self, begin: int, end: int
    ) -> Tuple[float, float, float]:
        return (self.s_init, self.s_end, self.s_end)
Figure 7: Defining the schedulers used by RigL (left) and GraNet (right) for sparsity. RigL keeps the sparsity level constant throughout training, whereas GraNet uses a cubic, non-decreasing schedule
to enable dense-to-sparse or sparse-to-sparse training. No other changes are required to the base dynamic update logic for the masks.
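As a quick sanity check of the schedule's shape, the cubic schedule from the excerpt above can be evaluated at a few steps. The parameter values below are hypothetical and assume the classes above are importable from the library:

sched = Cubic(
    init_sparsity=0.5,
    end_sparsity=0.8,
    sparsity_start_step=0,
    sparsity_end_step=10_000,
    prune_every_k_steps=100,
)
for step in [0, 2_500, 5_000, 10_000]:
    s = sched(torch.tensor(step), is_update_step=torch.tensor(True))
    print(step, float(s))  # sparsity rises smoothly from 0.5 toward 0.8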
We train a Llama2 1.3B model (from our benchmarks) following the same training configurations for 2.6B tokens using our GraNet implementation and show the training curves below in Figure 8.
Figure 8: Loss curves for a Llama2 1.3B model trained with RigL (in blue) and GraNet (in orange). We compare a model trained at 80% sparsity with RigL to one trained with GraNet (start at 50% and end
at 80% sparsity). We observe that the gradual increase in sparsity leads to a lower loss (i.e., better) than RigL.
In this blog, we introduce our PyTorch-based library for training models with weight sparsity and show results for training some large models with it. We also show how easy integrating new algorithms
and enabling sparsity for training models with the Cerebras Model Zoo is. Parallel to our work, libraries like JaxPruner [16] and STen [17] have also been released to enable sparsity research.
The Cerebras CS-2’s specialized architecture enables unprecedented efficiency and performance for sparse neural network models. Our co-designed ML/software solution allows users to access this
performance through a research-friendly API. This library is already pivotal in supporting our in-house research on sparsity, demonstrated through the works in Sparse Pre-training and Dense
Fine-tuning [14, 15] and Sparse Iso-FLOP Transformations [8].
We are actively exploring new methods and directions to optimize performance and the quality of sparse models. Contact us to learn more about this study or how the Cerebras CS-2 and our software
platform can empower your sparsity research.
1. Mocanu, Decebal Constantin, et al. “Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science.” Nature Communications 9.1 (2018): 2383.
2. Evci, Utku, et al. “Rigging the lottery: Making all tickets winners.” International Conference on Machine Learning. PMLR, 2020.
3. Loshchilov, Ilya, and Frank Hutter. “Decoupled weight decay regularization.” arXiv preprint arXiv:1711.05101 (2017).
4. Zhu, Michael, and Suyog Gupta. “To prune, or not to prune: exploring the efficacy of pruning for model compression.” arXiv preprint arXiv:1710.01878 (2017).
5. Frankle, Jonathan, and Michael Carbin. “The lottery ticket hypothesis: Finding sparse, trainable neural networks.” arXiv preprint arXiv:1803.03635 (2018).
6. Lee, Namhoon, Thalaiyasingam Ajanthan, and Philip HS Torr. “Snip: Single-shot network pruning based on connection sensitivity.” arXiv preprint arXiv:1810.02340 (2018).
7. Wang, Chaoqi, Guodong Zhang, and Roger Grosse. “Picking winning tickets before training by preserving gradient flow.” arXiv preprint arXiv:2002.07376 (2020).
8. Saxena, Shreyas, et al. “Sparse Iso-FLOP Transformations for Maximizing Training Efficiency.” arXiv e-prints (2023): arXiv-2303.
9. Touvron, Hugo, et al. “Llama 2: Open foundation and fine-tuned chat models, 2023.” URL https://arxiv.org/abs/2307.09288 (2023).
10. Soboleva, Daria, et al. “SlimPajama: A 627B token cleaned and deduplicated version of RedPajama.” (2023).
11. Liu, Shiwei, et al. “Do we actually need dense over-parameterization? in-time over-parameterization in sparse training.” International Conference on Machine Learning. PMLR, 2021.
12. Liu, Shiwei, et al. “Sparse training via boosting pruning plasticity with neuroregeneration.” Advances in Neural Information Processing Systems 34 (2021): 9908-9922.
13. Frantar, Elias, et al. “Scaling laws for sparsely-connected foundation models.” arXiv preprint arXiv:2309.08520 (2023).
14. Thangarasa, Vithursan, et al. “SPDF: Sparse Pre-training and Dense Fine-tuning for Large Language Models.” arXiv preprint arXiv:2303.10464 (2023).
15. Gupta, Abhay, et al. “Accelerating Large Language Model Training with Variable Sparse Pre-training and Dense Fine-tuning.” Cerebras Blog (2023)
16. Lee, Joo Hyung, et al. “JaxPruner: A concise library for sparsity research.” Conference on Parsimony and Learning. PMLR, 2024.
17. Ivanov, Andrei, et al. “STen: Productive and Efficient Sparsity in PyTorch.” arXiv preprint arXiv:2304.07613 (2023).
Abhay Gupta led the design of the sparsity API along with Mark Browning, tested the algorithms during their internal bring-up, ran all benchmarks, and contributed to the writing of this blog. Mark
Browning is the primary developer of the sparsity library, enabling framework support for the CS-2. Claire Zhang helped implement the GraNet algorithm, ran the associated experiments, and contributed
to writing the blog. Sean Lie is the architect of sparsity on the Cerebras CS-2 and has guided the bring-up of the hardware, software, and training infrastructure used for the training runs in this
blog. We also acknowledge the software, machine learning, and performance teams, who have played an instrumental role in developing sparsity support on the CS-2 hardware. | {"url":"https://cerebras.ai/blog/sparsity-made-easy-introducing-the-cerebras-pytorch-sparsity-library","timestamp":"2024-11-08T04:13:34Z","content_type":"application/xhtml+xml","content_length":"249518","record_id":"<urn:uuid:1d6e9468-9a27-4aa5-a1e6-edad53621ec5>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00620.warc.gz"} |
The Best Sum Calculator - Find the Sum with 100% Accuracy!
Introduction to Sum Calculator
The sum calculator is an online tool that finds the sum of the given numbers in a fraction of a second. Our addition calculator can add two or more values, from one-digit numbers up to numbers with any number of digits.
The adding calculator is valuable for anyone who needs to perform addition calculations in a few seconds without extra effort. With its sum feature, you can obtain accurate results swiftly, saving time and effort.
That is why we built this tool: you just enter the input values, and the calculator performs the rest of the calculation automatically.
What Is an Addition?
Addition is the operation of combining numbers to obtain their sum. In mathematics, addition is written with the "+" sign, and it is used simply to find the total of the given numbers.
A number adder is often employed to quickly determine the total value resulting from addition operations.
How to Add Natural Numbers in an Addition Calculator?
The sum calculator is the simplest way to add numbers of any size, whether they have one digit, two digits, or more, and it shows the complete steps of the summation in its solution.
Enter your input numbers in the sums calculator; it first aligns the numbers in columns according to their place values.
After aligning, it adds the columns starting from the right. If a column sum is a two-digit number such as 12, it writes 2 below that column and carries the 1 to the next column.
If the next column sum is also two digits, it repeats the same process; if a column sum has only one digit, no carry is needed.
This process is repeated until every column has been added. A worked example of an addition question and its solution is given below to show how our add numbers calculator works.
Solved Examples of Summation Numbers
The sum calculator can solve the sum problems easily but it is important to understand the manual calculation. So, an example is given below,
In a class test, the marks scored by Julie out of 20 are 10, 8, 6, and 11 in English, Maths, Science, and Computer. Calculate the total marks scored by Julie.
Marks scored by Julie:
$$ In\; English \;=\; 10 $$
$$ In\; Maths \;=\; 8 $$
$$ In \; Science \;=\; 6 $$
$$ In\; Computer \;=\; 11 $$
$$ Total\; marks \;=\; 10 + 8 + 6 + 11 \;=\; 35 $$
Addition Example:
Michel earns 2000, 2230, 1789, 2341, 1890, 1678, and 2318 dollars on the seven days of the week, in order. Determine his total earnings.
Michel's earnings on each day of the week:
$$ On\; Monday \;=\; 2000\; dollars $$
$$ On\; Tuesday \;=\; 2230\; dollars $$
$$ On\; Wednesday \;=\; 1789\; dollars $$
$$ On\; Thursday \;=\; 2341 $$
$$ On\; Friday \;=\; 1890 $$
$$ On\; Saturday \;=\; 1678 $$
$$ On\; Sunday \;=\; 2318 $$
Total money earned,
$$ 2000 + 2230 + 1789 + 2341 + 1890 + 1678 + 2318 \;=\; 14246 $$
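If you prefer to verify such sums programmatically, the short Python sketch below adds the numbers from the two examples above with the built-in sum() function and also reproduces the column-by-column carry method described earlier (the helper function name is ours, for illustration only).

marks = [10, 8, 6, 11]
earnings = [2000, 2230, 1789, 2341, 1890, 1678, 2318]

print(sum(marks))     # 35
print(sum(earnings))  # 14246

def column_addition(numbers):
    """Add non-negative integers digit by digit with carries, working from right to left."""
    digits = [list(map(int, str(n)))[::-1] for n in numbers]  # reversed digit lists
    width = max(len(d) for d in digits)
    carry, result = 0, []
    for i in range(width):
        column_sum = carry + sum(d[i] for d in digits if i < len(d))
        result.append(column_sum % 10)  # digit written below the column
        carry = column_sum // 10        # carry passed to the next column
    while carry:
        result.append(carry % 10)
        carry //= 10
    return int("".join(map(str, result[::-1])))

print(column_addition(earnings))  # 14246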
How to Use the Sum Calculator?
The addition calculator has a user-friendly design that lets you calculate the sum of any set of numbers instantly. Just follow these simple steps:
• Enter your numbers in the input field.
• Check the numbers you entered before clicking the calculate button so that you get the correct solution to the summation question.
• Click the "Calculate" button to get the sum of whole or decimal numbers.
• Click the "Recalculate" button to work through more summation questions.
• If you want to see how our adding calculator works, use the load example option to get an idea of its accuracy.
Final Result from the Number Adder
The sum calculator provides the solution to your input problem when you click the calculate button. The output includes:
Result box: click the result option to see the value of the sum.
Steps box: click the steps option to see the solution worked out step by step.
Benefits of Using Our Online Add Calculator
The sum calculator has several features for solving addition problems, and it offers multiple benefits whenever you use it:
• The addition calculator is a trustworthy tool that always provides accurate addition results.
• The adding calculator is swift, evaluating addition problems and their solutions in a couple of seconds.
• The calculator is free to use; there is no fee for calculating sums.
• It has a simple design, so anyone, even a beginner, can easily use it to solve addition problems.
• It works on a desktop, laptop, or mobile device through the internet.
• The add calculator is an educational tool that makes it easy to teach children the concept of addition online.
• The sums calculator handles numbers in the units, tens, hundreds, thousands, lakhs, and millions quickly; you only need to enter your question.
• Feel free to provide the numbers you want to add, and our number adder calculate the sum of for you. | {"url":"https://pinecalculator.com/sum-calculator","timestamp":"2024-11-06T12:21:03Z","content_type":"text/html","content_length":"42824","record_id":"<urn:uuid:77364c97-00f7-4ac4-a2c7-e406bf64add4>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00506.warc.gz"} |
Torpor Ark Calculator - Temz Calculators
Torpor Ark Calculator
Use the Torpor ARK Calculator to determine how long your creature will remain unconscious in ARK: Survival Evolved. By inputting the creature’s weight, torpor rate, and increase rate, you can quickly
calculate the duration of its torpor.
Torpor Calculation Formula
The following formula is used to calculate the torpor duration for creatures in ARK: Survival Evolved.
Torpor Duration = (Creature Weight * Torpor Rate) / Torpor Increase Rate
• Torpor Duration is the total time your creature will stay unconscious (minutes)
• Creature Weight is the weight of your creature in kilograms (kg)
• Torpor Rate is the rate at which torpor accumulates (%)
• Torpor Increase Rate is the rate of torpor increase per minute
To calculate the torpor duration, multiply the creature’s weight by the torpor rate and divide the result by the increase rate.
What is Torpor Calculation?
Torpor calculation is the process of determining how long a creature in ARK: Survival Evolved will remain incapacitated after being knocked out. This calculation is crucial for planning taming
strategies and managing resources effectively in the game.
How to Calculate Torpor Duration?
The following steps outline how to calculate the torpor duration using the provided formula.
1. First, input the creature’s weight into the calculator.
2. Next, enter the torpor rate and the increase rate.
3. Use the formula: Torpor Duration = (Creature Weight * Torpor Rate) / Torpor Increase Rate.
4. Calculate the duration and check the result with the calculator above.
Example Problem:
Use the following variables as an example problem to test your knowledge.
Creature Weight = 500 kg
Torpor Rate = 5%
Torpor Increase Rate = 20 per minute
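The example above leaves the final answer for you to work out. Taking the stated formula literally and plugging the torpor rate in as the value 5 (a simplifying assumption, since the problem lists it as 5%), a short Python sketch gives the duration:

def torpor_duration(creature_weight, torpor_rate, torpor_increase_rate):
    """Torpor Duration = (Creature Weight * Torpor Rate) / Torpor Increase Rate."""
    return (creature_weight * torpor_rate) / torpor_increase_rate

# Example problem: 500 kg creature, torpor rate 5, increase rate 20 per minute.
print(torpor_duration(500, 5, 20))  # 125.0 minutes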
1. What is torpor?
Torpor is a state of unconsciousness in ARK: Survival Evolved that occurs when a creature's torpor reaches a certain level. It is essential for taming creatures and preventing them from waking up.
2. How does torpor affect taming?
Torpor affects how long you have to tame a creature before it wakes up. Higher torpor duration means more time to feed and tame the creature without it waking up.
3. Can environmental factors affect torpor duration?
Yes, environmental factors can impact how quickly a creature’s torpor increases or decreases. Factors such as temperature and other in-game elements can alter torpor dynamics.
4. How often should I use the torpor calculator?
It’s useful to use the torpor calculator whenever you are taming a creature to ensure you have accurate information about how long it will stay unconscious.
5. Is the calculator accurate for all creatures?
The calculator provides an estimate based on the inputs given. For precise data, refer to specific creature details in the game or consult community resources. | {"url":"https://temz.net/torpor-ark-calculator/","timestamp":"2024-11-07T09:21:28Z","content_type":"text/html","content_length":"74214","record_id":"<urn:uuid:45522009-8591-4eec-bdc6-bcf5975f424d>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00331.warc.gz"} |
The bias-variance trade-off: concluding comments
Hopefully you now have a good intuition over what it means for models to underfit and overfit. See if all of the terms in the beginning of this post now make sense. Before we throw tons of code at
you, let's finish up talking about the bias-variance trade-off.
To recap, when we train machine learning algorithms on a dataset, what we are really interested in is how our model will perform on an independent data set. It is not enough to do a good job
classifying instances on the training set. Essentially, we are only interested in building models that are generalizable - getting 100% accuracy on the training set is not impressive, and is simply
an indicator of overfitting. Overfitting is the situation in which we have fitted our model too closely to the data, and have tuned to the noise instead of just to the signal.
To be clear: strictly speaking, we are not trying to model the trends in the dataset. We try to model the underlying generative process that has created the data. The specific dataset we happen to be
working with is just a small set of instances (i.e. a sample) of the ground truth, which brings with it its own noise and peculiarities.
Here is a summary figure showing how under-fitting (high bias, low variance), properly fitting, and over-fitting (low bias, high variance) models fare on the training compared to the test sets:
This idea of building generalizable models is the motivation behind splitting your dataset into a training set (on which models can be trained) and a test set (which is held out until the very end of
your analysis, and provides an accurate measure of model performance).
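As a concrete illustration of that split, here is a minimal scikit-learn sketch; the dataset and the 25% test proportion are arbitrary choices for demonstration, not a recommendation.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)

# Hold out 25% of the instances as a test set; it is only touched at the very end of the analysis.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
print(X_train.shape, X_test.shape)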
But - big warning! It's also possible to overfit to the test set. If we were to try lots of different models out and keep changing them in order to chase accuracy points on the test set, then the
information from the test set can inadvertently leak into our model creation phase, which is a big no-no. We need a way around this. | {"url":"https://notebook.community/nslatysheva/data_science_blogging/messy_modelling/messy_modelling_simplified","timestamp":"2024-11-05T01:04:49Z","content_type":"text/html","content_length":"283322","record_id":"<urn:uuid:e00eeb4a-fd46-4153-b32b-bef09640de02>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00262.warc.gz"} |
Radians to Degrees Conversion (rad to °) - Inch Calculator
Radians to Degrees Converter
Enter the angle in radians below to get the value converted to degrees. The calculator supports values containing decimals, fractions, and π: (π/2, 1/2π, etc)
For example, entering 2 rad gives the result in degrees:
2 rad = 360 / π ≈ 114.591559°, which is 114° 35′ 29.61″ in degrees-minutes-seconds form.
Do you want to convert degrees to radians?
How to Convert Radians to Degrees
To convert a measurement in radians to a measurement in degrees, you need to use a conversion formula. Since pi radians are equal to 180°, the following conversion formula is preferred in mathematics
for its accuracy and convenience.
degrees = radians × 180 / π
In other words, the angle in degrees is equal to the radians times 180 divided by pi.
To use this formula, start by substituting the angle, in radians, into the formula. Then, move the radians to the top of the fraction, and finally simplify the fraction and evaluate.
For example,
let's convert 5 radians to degrees using the preferred formula.
degrees = 5 rad × 180 / π
degrees = 900 / π
degrees ≈ 286.478898°
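The same conversion is easy to check in code; a short Python snippet using the standard math module reproduces the value above:

import math

radians = 5
degrees = radians * 180 / math.pi
print(degrees)                # 286.4788975654116
print(math.degrees(radians))  # same result via the built-in helper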
Alternate Radian to Degree Formula
If you want to convert radians to degrees without using pi, multiply the angle by the following conversion ratio: 57.29578 degrees/radian.
Since one radian is equal to 57.29578 degrees, you can use this simple formula to convert:
degrees = radians × 57.29578
The angle in degrees is equal to the angle in radians multiplied by 57.29578.
For example,
here's how to convert 5 radians to degrees using this formula.
degrees = (5 rad × 57.29578) = 286.478898°
How Many Degrees Are in a Radian?
There are exactly 180/π degrees in a radian.
1 rad = 180° / π
Without using pi, there are approximately 57.29578 degrees in a radian.
1 rad ≈ 57.29578°
Radians and degrees are both units used to measure angle. Keep reading to learn more about each unit of measure.
What Is a Radian?
A radian is the measurement of angle equal to the length of an arc divided by the radius of the circle or arc.^[1] 1 radian is equal to 180/π degrees, or about 57.29578°. There are about 6.28318
radians in a circle.
The radian is the SI derived unit for angle in the metric system. Radians can be abbreviated as rad, and are also sometimes abbreviated as ^c, r, or ^R. For example, 1 radian can be written as 1 rad,
1 ^c, 1 r, or 1 ^R.
Radians are often expressed using their definition. The formula to find an angle in radians is θ = s/r, where the angle in radians θ is equal to the arc length s divided by the radius r. Thus,
radians may also be expressed as the formula of arc length over the radius.
Radians are also considered to be a "unitless" unit. That is, when multiplying or dividing by radians, the result does not include radians as part of the final units.
For example, when determining the length of an arc for a given angle, we use the formula above, rearranged to be s = θr. If θ is in radians and r is in meters, then the units of s will be meters, not
radian-meters. If θ were in degrees, however, then s would have units of degree-meters.
Learn more about radians.
What Is a Degree?
A degree is a measure of angle equal to 1/360th of a revolution, or circle.^[2] The number 360 has 24 divisors, making it a fairly easy number to work with. There are also 360 days in the Persian
calendar year, and many theorize that early astronomers used 1 degree per day.
The degree is an SI accepted unit for angle for use with the metric system. A degree is sometimes also referred to as a degree of arc, arc degree, or arcdegree. Degrees can be abbreviated as °, and
are also sometimes abbreviated as deg. For example, 1 degree can be written as 1° or 1 deg.
Degrees can also be expressed using arcminutes and arcseconds as an alternative to using the decimal form. Arcminutes and arcseconds are expressed using the prime (′) and double-prime (″) characters,
respectively, although a single-quote and double-quote are often used for convenience.
One arcminute is equal to 1/60th of a degree, and one arcsecond is equal to 1/60th of an arcminute.
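If you want to convert a decimal angle into degrees, arcminutes, and arcseconds yourself (as in the 2 rad example near the top of this page), peel off the whole degrees, then the whole minutes, and keep the remainder as seconds. Here is a small Python sketch (the function name is ours):

def to_dms(angle_degrees):
    """Split a decimal degree value into degrees, arcminutes, and arcseconds."""
    d = int(angle_degrees)
    minutes_full = (angle_degrees - d) * 60
    m = int(minutes_full)
    s = (minutes_full - m) * 60
    return d, m, s

print(to_dms(114.591559))  # approximately (114, 35, 29.61)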
Protractors are commonly used to measure angles in degrees. They are semi-circle or full-circle devices with degree markings allowing a user to measure an angle in degrees. Learn more about how to
use a protractor or download a printable protractor.
Learn more about degrees.
Radian to Degree Conversion Table
Radian values as a mathematical expression
converted to a decimal value and degree value.
Radians (expression) Radians (decimal) Degrees
0 rad 0 rad 0°
π/12 rad 0.261799 rad 15°
π/6 rad 0.523599 rad 30°
π/4 rad 0.785398 rad 45°
π/3 rad 1.047198 rad 60°
π/2 rad 1.570796 rad 90°
2π/3 rad 2.094395 rad 120°
5π/6 rad 2.617994 rad 150°
π rad 3.141593 rad 180°
3π/2 rad 4.712389 rad 270°
2π rad 6.283185 rad 360°
1. International Bureau of Weights and Measures, The International System of Units, 9th Edition, 2019, https://www.bipm.org/documents/20126/41483022/SI-Brochure-9-EN.pdf
2. Collins Dictionary, Definition of 'degree', https://www.collinsdictionary.com/us/dictionary/english/degree
More Radian & Degree Conversions | {"url":"https://www.inchcalculator.com/convert/radian-to-degree/","timestamp":"2024-11-13T18:29:17Z","content_type":"text/html","content_length":"76077","record_id":"<urn:uuid:9b5b7b9b-9486-422d-9aed-555fe70dc2b2>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00080.warc.gz"} |
Contents: Introduction; Assimilation of compact phase space retrievals; WRF-Chem/DART – a regional chemical transport/data assimilation system; Study period, domain, initial conditions, boundary conditions, emissions, and initial ensemble generation; Meteorology observations and satellite trace gas retrievals; Experimental design; Results; The control and chemical data assimilation experiments; Assimilation of compact phase space retrievals; Verification against MOPITT retrievals; Observation error covariance diagonalization through zeroing of the cross-correlations; Vertical localization and phase space retrievals; Summary and conclusions; Acknowledgements; References
Geoscientific Model Development (Geosci. Model Dev., ISSN 1991-9603), Copernicus Publications, Göttingen, Germany. doi:10.5194/gmd-9-965-2016
Assimilating compact phase space retrievals of atmospheric composition with WRF-Chem/DART: a regional chemical transport/ensemble Kalman filter data assimilation system
Arthur P. Mizzi, Avelino F. Arellano Jr., David P. Edwards, Jeffrey L. Anderson, and Gabriele G. Pfister (https://orcid.org/0000-0002-9177-1315)
Affiliations: National Center for Atmospheric Research, Atmospheric Chemistry Observation and Modeling Laboratory, Boulder, CO, USA; University of Arizona, Department of Hydrology and Atmospheric Science, Tucson, AZ, USA; National Center for Atmospheric Research, Institute for Applied Mathematics, Boulder, CO, USA
Correspondence: Arthur P. Mizzi (mizzi@ucar.edu)
Geosci. Model Dev., 9(3), 965–978. Received 3 June 2015; published in discussions 8 September 2015; revised 20 January 2016; accepted 10 February 2016; published 4 March 2016.
This work is licensed under a Creative Commons Attribution 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by/3.0/
This article is available from https://gmd.copernicus.org/articles/9/965/2016/gmd-9-965-2016.html; a PDF version of the full text is also available.
This paper introduces the Weather Research and Forecasting Model with chemistry/Data Assimilation Research Testbed (WRF-Chem/DART) chemical transport forecasting/data assimilation system together
with the assimilation of compact phase space retrievals of satellite-derived atmospheric composition products. WRF-Chem is a state-of-the-art chemical transport model. DART is a flexible software
environment for researching ensemble data assimilation with different assimilation and forecast model options. DART's primary assimilation tool is the ensemble adjustment Kalman filter. WRF-Chem/DART
is applied to the assimilation of Terra/Measurement of Pollution in the Troposphere (MOPITT) carbon monoxide (CO) trace gas retrieval profiles. Those CO observations are first assimilated as
quasi-optimal retrievals (QORs). Our results show that assimilation of the CO retrievals (i) reduced WRF-Chem's CO bias in retrieval and state space, and (ii) improved the CO forecast skill by
reducing the Root Mean Square Error (RMSE) and increasing the Coefficient of Determination (R2). Those CO forecast improvements were significant at the 95% level.
Trace gas retrieval data sets contain (i) large amounts of data with limited information content per observation, (ii) error covariance cross-correlations, and (iii) contributions from the retrieval
prior profile that should be removed before assimilation. Those characteristics present challenges to the assimilation of retrievals. This paper addresses those challenges by introducing the
assimilation of compact phase space retrievals (CPSRs). CPSRs are obtained by preprocessing retrieval data sets with an algorithm that (i) compresses the retrieval data, (ii) diagonalizes the error
covariance, and (iii) removes the retrieval prior profile contribution. Most modern ensemble assimilation algorithms can efficiently assimilate CPSRs. Our results show that assimilation of MOPITT CO
CPSRs reduced the number of observations (and assimilation computation costs) by ∼35%, while providing CO forecast improvements comparable to or better than with the assimilation of MOPITT CO QORs.
There is increased international interest in chemical weather forecasting (Kukkonen et al., 2012; MACC-II Final Report, 2014). Such forecasts rely on coupled forecast model–data assimilation systems
that ingest a combination of remotely sensed and in situ atmospheric composition observations together with conventional meteorological observations. Generally the remotely sensed observations come
in the form of trace gas retrievals. Examples include carbon monoxide (CO) total and partial column or profile retrievals from the Terra/Measurement of Pollution in the Troposphere (MOPITT) and Metop
/Infrared Atmospheric Sounding Interferometer (IASI) instruments. The associated data sets are characterized by large numbers of observations with limited information per observation. Such remotely
sensed data have been assimilated in various settings (e.g. Bei et al., 2008; Herron-Thorpe et al., 2012; Klonecki et al., 2012; Gaubert et al., 2014), but there have been only a few papers
addressing data compression strategies. Two such papers were Joiner and da Silva (1998) and Migliorini et al. (2008). This article is inspired by their research and introduces an efficient
assimilation strategy that reduces the number of MOPITT CO retrieval observations by ∼35%. Greater reductions are possible, for example, with IASI CO and ozone (O3) retrievals, and depend on the
number of (i) levels in the retrieval profile and (ii) linearly independent pieces of information in the retrieval profile.
Joiner and da Silva (1998) first proposed the idea of using information content to reduce the number of retrieval observations. They suggested projecting retrievals onto the eigenvectors of the
observation error covariance matrix and zeroing those coefficients with little or no information. Their approach evolved from one-dimensional retrieval algorithms (e.g. Twomey, 1974; Smith and Woolf,
1976; Thompson, 1992).
Retrievals are often obtained using the optimal estimation method of Rodgers (2000) to obtain solutions to the retrieval equation
y_r = A y_t + (I - A) y_a + ε,   (1)
where y_r is the retrieval profile, A is the averaging kernel, y_t is the true atmospheric profile (unknown), I is the identity matrix, y_a is the retrieval prior profile, and ε is the measurement error in retrieval space with error covariance E_m (the measurement error covariance in retrieval space). Joiner and da Silva (1998) proposed projecting Eq. (1) onto the trailing left singular vectors from the singular value decomposition (SVD) of (i) (I - A) (their method 1) or (ii) the smoothing error E_s = (A - I) P_p (A - I)^T, where P_p is the retrieval prior error covariance (their method 2). Those projections removed the components of the retrieval
that the instrument could not measure with sufficient sensitivity. They called that approach null-space filtering.
Joiner and da Silva (1998) recognized that when the retrieval was strongly constrained by the retrieval prior profile, the assumptions underlying null-space filtering were invalid. For such
retrievals, they proposed filtering based on an eigen-decomposition of E_m (their PED (partial eigen-decomposition) retrievals). Their analysis showed that PED retrievals were well conditioned and
independent of the retrieval prior profile.
Migliorini et al. (2008) noted that the Joiner and da Silva (1998) filtering depended on their truncation criteria and was therefore somewhat arbitrary. They also showed it was possible to achieve similar filtering results with an alternative approach that used a more well-defined truncation criterion. Migliorini et al. (2008) rearranged Eq. (1) to obtain
y_r - (I - A) y_a = A y_t + ε.   (2)
Following their terminology, we call the left side of Eq. (2) a quasi-optimal retrieval (QOR). Migliorini et al. (2008) noted that E_m was unlikely to be diagonal and likely to be poorly conditioned. To address those issues, they applied a SVD transform to Eq. (2) based on the leading left singular vectors of E_m (similar to that proposed by Anderson, 2003); this step provided diagonalization of E_m and used a well-determined truncation cutoff. They also applied a scaling based on the inverse square root of the associated singular values; this step improved numerical conditioning.
Migliorini et al. (2008) continued to reduce the dimension of A y_t (i.e., the number of observations) by neglecting those elements whose variability was smaller than the measurement error standard deviation (unity in their rotated and scaled system). They proposed identifying those elements with an eigen-decomposition of the covariance of A y_t. Since that covariance is generally unknown, they
replaced it with the forecast error covariance and showed that the resulting dimension was approximately equal to the number of independent linear functions that could be measured to better than
noise level.
A more recent paper (Migliorini 2012) shows that retrievals can be transformed to represent only the portion of the state that is well constrained by the original radiance measurements when two
requirements are satisfied: (i) the radiance observation operator is approximately linear in a region of state space centered on the retrieval and with a radius on the order of the retrieval error,
and (ii) the prior information used to constrain the retrieval does not underrepresent the variability of the state. Migliorini (2012) proves that when those conditions are met the assimilation of
radiances is equivalent to the assimilation of retrievals. The Migliorini (2012) analysis shows that it is possible to use information from the retrieval algorithm to compress information in the
transformed retrievals.
In this paper, we propose an approach that achieves results similar to (i) Migliorini et al. (2008) without needing to approximate the covariance of A y_t, and (ii) Migliorini (2012) without needing
information about the retrieval algorithm. Our goal is to compress the retrievals and remove those components that are not dependent on the measurements. In so doing we expect to make the
assimilation of retrievals more computationally efficient. The rest of this paper is organized as follows: Sect. 2 introduces compact phase space retrievals, and Sect. 3 introduces the WRF-Chem/DART
regional chemical weather forecast/data assimilation system. Section 4 discusses our experimental design including the study period, model domain, initial/boundary conditions, and relevant WRF-Chem/
DART parameter settings. Section 5 discusses the observations that were assimilated in the various experiments described in Sect. 6. The results from those experiments are presented in Sect. 7, and
we end with a summary of our thoughts and conclusions in Sect. 8.
We can rewrite Eq. (2) as
y_r - (I - A) y_a - ε = A y_t.   (3)
In Eq. (3) the averaging kernel A is singular and has low rank. Therefore the information content for each component of A y_t is relatively small, and the assimilation is inefficient because one must assimilate the entire profile to get the same information as can be compressed into a number of processed observations equal to the rank of A. To compress Eq. (3), we propose transforming it with the leading left singular vectors from a SVD of the averaging kernel; i.e., A = U S V^T. We denote the truncated system with the subscript zero so that A_0 = U_0 S_0 V_0^T; the truncated averaging kernel is obtained by setting the trailing singular values (i.e., singular values that were less than 1.0×10^-4) and vectors to zero. The transformed system has the form
U_0^T (y_r - (I - A) y_a - ε) = S_0 V_0^T y_t.   (4)
Migliorini et al. (2008) showed that subtracting (I - A) y_a from Eq. (3) removes all contribution from the retrieval prior profile. Equations (3) and (4) confirm their result because the leading left singular vectors of A span its range, so the left side of Eq. (3) should project completely onto U_0^T. Following that transform E_m becomes U_0^T E_m U_0, which may still be non-diagonal and poorly conditioned. Therefore, we apply an SVD transform and inverse scaling similar to that used by Migliorini et al. (2008). If the SVD of U_0^T E_m U_0 has the form U_0^T E_m U_0 = Φ Σ Ψ^T, the transformed and conditioned form of Eq. (4) is
Σ^(-1/2) Φ^T U_0^T (y_r - (I - A) y_a - ε) = Σ^(-1/2) Φ^T S_0 V_0^T y_t.   (5)
Our approach compresses Eq. (3) so that the dimension of the compact phase space retrieval (CPSR) profile on the left side of Eq. (5) is identical to the number of independent linear functions of the
atmospheric profile to which the instrument is sensitive. This method is different from that of Migliorini et al. (2008) because it compresses the quasi-optimal retrieval observations based on a
linear independence analysis and relies on the assimilation system to decide how much weight to give the observations. The approach of Migliorini et al. (2008) reduces the number of observations
based on an uncertainty analysis independent of the assimilation system. Our approach identifies all linearly independent information contained in the QOR profile (through projection of the QOR
profile onto the left non-zero singular vectors of the averaging kernel). The approach of Migliorini et al. (2008) may (i) discard some linearly independent information because the left non-zero
singular vectors of the observation error covariance are not necessarily a basis for the space of QORs, and (ii) discard some linearly independent information through their uncertainty analysis.
Finally, our approach relies on two transforms: (i) a compression transform (based on the left non-zero singular vectors of the averaging kernel, and (ii) a diagonalization transform (based on the
left non-zero singular vectors of the compressed observation error covariance). The approach of Migliorini et al. (2008) uses two diagonalization transforms – the first based on the observation error
covariance and the second based on the transformed forecast error covariance in observation space. Our diagonalization transform is analogous to their first diagonalization transform except we apply
it to the compressed observation error covariance, and they apply it to the untransformed observation error covariance. As in Migliorini et al. (2008), the final form of our observation error
covariance is the truncated identity matrix.
The assimilation of CPSRs should produce results similar to the assimilation of QORs except for (i) the effect of assimilation sub-processes like horizontal localization and inflation, and (ii)
differences in the observation error due to the CPSR compression transform. The QOR and CPSR observation errors are different because the compression transform projects the errors onto the leading
left singular vectors of the averaging kernel and retains only those components that lie in the range of the averaging kernel.
In summary the steps for obtaining CPSRs from trace-gas retrievals are as follows (assuming the retrieval equation has the same form as Eq. 2):
1. Obtain the retrieval and retrieval prior profiles, the averaging kernel, and the observation error covariance for a particular horizontal location.
2. Subtract the retrieval prior term (I - A) y_a from the retrieval profile y_r. This yields the QOR as defined by Eq. (3).
3. Perform a SVD of the averaging kernel. Form a transform matrix from the left singular vectors associated with the non-zero singular values. Left multiply the QOR by the transpose of the transform matrix. This yields the truncated QOR profile as defined by Eq. (4).
4. Left multiply the observation error covariance by the transpose of the transform matrix. Right multiply that matrix product by the transform matrix. This yields the truncated observation error covariance.
5. Perform a SVD of the truncated observation error covariance. Scale the left singular vectors with the inverse square root of their respective singular values. Left multiply the truncated QOR profile by the transpose of the scaled left singular vector matrix. This yields the CPSR profile as defined by Eq. (5).
6. As a check, left multiply the truncated observation error covariance by the transpose of the scaled left singular vector matrix. Right multiply that matrix product by the scaled left singular vector matrix. The result should be an identity matrix with rank equal to the number of non-zero CPSRs from the previous step.
7. Assimilate the non-zero CPSRs with unitary error variance.
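To make the recipe concrete, the following minimal NumPy sketch implements steps 2-7 for a single retrieval location. It is illustrative only: the function and variable names are ours, and the singular-value threshold of 1.0×10^-4 follows the truncation criterion described above.

import numpy as np

def compact_phase_space_retrieval(y_r, y_a, A, E_m, tol=1.0e-4):
    """Transform one retrieval profile into compact phase space retrievals (CPSRs)."""
    n = A.shape[0]

    # Step 2: quasi-optimal retrieval (QOR); removes the retrieval prior contribution.
    qor = y_r - (np.eye(n) - A) @ y_a

    # Step 3: SVD of the averaging kernel; keep left singular vectors with non-zero singular values.
    U, S, Vt = np.linalg.svd(A)
    k = int(np.sum(S > tol))
    U0 = U[:, :k]
    qor_trunc = U0.T @ qor

    # Step 4: truncated observation error covariance.
    E_trunc = U0.T @ E_m @ U0

    # Step 5: diagonalize and scale so the transformed error covariance becomes the identity.
    Phi, Sigma, _ = np.linalg.svd(E_trunc)
    scale = Phi @ np.diag(Sigma ** -0.5)
    cpsr = scale.T @ qor_trunc

    # Step 6: check that the transformed error covariance is (approximately) the identity.
    assert np.allclose(scale.T @ E_trunc @ scale, np.eye(k), atol=1.0e-6)

    # Step 7: the k values in cpsr are assimilated with unit error variance.
    return cpsr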
The Weather Research Forecasting Model with chemistry/Data Assimilation Research Testbed (WRF-Chem/DART) system is the WRF-Chem chemical transport model (www2.acd.ucar.edu/wrf-chem) coupled with the
DART (www.image.ucar.edu/DAReS/DART) ensemble adjustment Kalman filter (Anderson, 2001, 2003) data assimilation system. WRF-Chem/DART is an extension of WRF/DART (www.cawcr.gov.au and references
therein). WRF-Chem is the National Center for Atmospheric Research (NCAR) regional Weather Research and Forecasting (WRF) model (www.wrf-model.org) with chemistry. WRF-Chem is a regional model that
predicts conventional weather together with the emission, transport, mixing, and chemical transformation of atmospheric trace gasses and aerosols. WRF-Chem is collaboratively developed and maintained
by the National Oceanic and Atmospheric Administration/Earth System Research Laboratory (NOAA/ESRL), Pacific Northwest National Laboratory (PNNL), and NCAR/Atmospheric Chemistry Observation and
Modeling Laboratory (ACOM). WRF-Chem is documented in Grell et al. (2005), discussed in Kukkonen et al. (2012), and has been applied in various research settings (e.g., Pfister et al., 2011, 2013).
DART (Anderson et al., 2009) is a community resource for ensemble data assimilation (DA) research developed and maintained by the NCAR/Data Assimilation Research Section (DAReS). DART is a flexible
software environment for studying the interaction between different assimilation methods, observation platforms, and forecast models. WRF-Chem and DART are state-of-the-art tools for studying the
impact of assimilating trace gas retrievals on conventional and chemical weather analyses and forecasts.
We conducted continuous cycling experiments with WRF-Chem/DART for the period of 00:00UTC, 1 June 2008 to 00:00UTC, 1 July 2008 with 6h cycling (00:00, 06:00, 12:00, and 18:00UTC). To facilitate
a large number of experiments, we used a reduced ensemble of 20 members and a horizontal resolution of 100km (101×41 grid points). We used 34 vertical levels with a model top at 10hPa and ∼15
levels below 500hPa. WRF-Chem ran with the Model for Ozone and Related Chemical Tracers (MOZART-4) chemistry and Goddard Chemistry Aerosol Radiation and Transport (GOCART) model aerosol options
(Colarco et al., 2009; Emmons et al., 2010). Ideally for chemical transport forecast experiments we would like an ensemble size of at least 40 members, a horizontal resolution of no larger than
20km, and a vertical grid with at least 50 levels. We expect our small ensemble/coarse-resolution cycling results, as they pertain to the assimilation of QORs and CPSRs, will apply to larger
ensembles with higher resolutions. However, as the vertical resolution increases, the sensitivity to vertical localization may increase (because as the model's vertical resolution increases (i) the
vertical solution becomes less smooth and may exhibit greater vertical variability and (ii) the fidelity of vertical localization becomes greater) so that tuning of the vertical localization length
may be necessary. For our experiments we used a three-dimensional Gaspari–Cohn type localization with a localization radius half-width of 3000km in the horizontal and 8km in the vertical. We
conducted sensitivity experiments to determine the appropriate localization settings. Results from the horizontal tests are not discussed. Results from selected vertical localization tests are
discussed briefly in Sect. 7.5.
We used NCEP Global Forecast System (GFS) 0.5° six-hour forecasts for the WRF-Chem initial/boundary conditions. Our model domain extends from ∼176 to ∼50° W and from ∼7 to ∼54° N. We used the WRF
preprocessing system (WPS) to interpolate the GFS forecasts to our domain and generate the deterministic boundary conditions. We used the WRF data assimilation system (WRFDA) (http://
www2.mmm.ucar.edu/wrf/users/wrfda/Docs/user_guide_V3.7/WRFDA_Users_Guide.pdf) to generate the initial meteorology ensemble.
For the chemistry initial and lateral boundary conditions, we used global simulations from the NCAR MOZART-4 model. The fire emissions came from the Fire Inventory from NCAR (FINNv1; Wiedinmyer et
al., 2011), and the Model of Emissions of Gases and Aerosols from Nature (MEGAN; Guenther et al., 2012) calculated the biogenic emissions as part of the WRF-Chem forecast. The anthropogenic emissions
were based on the US Environmental Protection Agency's (EPA's) 2005 National Emissions Inventory (NEI-2005). We used or adapted existing ACOM/WRF-Chem utilities (https://www2.acom.ucar.edu/wrf-chem/
wrf-chem-tools-community) to generate the initial chemistry ensembles with a Gaussian distribution from a specified mean and standard deviation. That distribution was truncated at the tails to
include 95% of the distribution. Similar utilities were used to generate the emission ensembles. We excluded the distribution tails to avoid the potential for the extreme values to cause numerical
problems in the chemistry algorithms. Although we recognize that the assimilation cycling results may be sensitive to the emission perturbation horizontal correlation lengths (e.g. Pagowski and
Grell, 2012), this was not particularly relevant to our study so we set the horizontal and vertical correlation lengths to zero.
At each cycle time, depending on the experiment we assimilated meteorology and/or chemistry observations with the DART ensemble adjustment Kalman filter (EAKF) and then advanced the analysis ensemble
to the next cycle time with WRF-Chem. The 6h forecast ensemble was then used as the first guess for the next ensemble DA step.
We assimilated conventional meteorological observations and CO trace gas retrievals from MOPITT. The meteorological observations were NCEP automated data processing (ADP) upper air and surface
observations (PREPBUFR observations). They included air temperature, sea level pressure, surface winds, dew point temperature, sea surface temperature, and upper level winds from various observing
platforms. We refer to those observations as the MET OBS.
We also assimilated MOPITT partial column/profile CO retrievals. MOPITT is an instrument flying on NASA's Earth Observing System Terra spacecraft. MOPITT's spatial resolution is 22km at nadir, and
it sees the earth in 640km wide swaths. MOPITT uses gas correlation spectroscopy to measure CO in a thermal-infrared (TIR) band near 4.7µm and a near-infrared (NIR) band near 2.3µm. TIR radiances
are sensitive to CO in the middle and upper troposphere while NIR measures the CO total column. Worden et al. (2010), Deeter (2011), and Deeter et al. (2012, 2013) showed that the sensitivity to CO
in the lower troposphere is significantly greater for retrievals exploiting simultaneous TIR and NIR than for retrievals based on TIR alone. MOPITT started data collection in March 2000. We used the
MOPITT v5 TIR/NIR products described in Deeter et al. (2013). We refer to the MOPITT observations as the CHEM OBS.
Table 1. Summary of the WRF-Chem/DART forecast/data assimilation experiments.
Experiment | Assimilate meteorology observations | Assimilate MOPITT CO QORs | Assimilate MOPITT CO CPSRs | Use error covariance zeroing | Use vertical localization
MET DA   | Yes | No  | No  | No  | No
MOP QOR  | Yes | Yes | No  | No  | No
MOP CPSR | Yes | No  | Yes | No  | No
MOP NROT | Yes | Yes | No  | Yes | No
MOP LOC  | Yes | Yes | No  | No  | Yes
The retrieval error covariance E_r associated with each MOPITT CO retrieval profile is provided as part of the data product. That error covariance is derived by the retrieval process based on a specified a priori error covariance E_a. Under the optimal estimation theory of Rodgers (2000), E_r is related to E_a through the averaging kernel A by E_r = (I - A) E_a. The measurement error in retrieval space E_m is also related to E_a and A by E_m = (I - A) E_a A^T. Generally for retrieval data sets, E_a, E_r, and E_m are non-diagonal.
We conducted two basic experiments: (i) a control experiment where we assimilated only MET OBS (MET DA); and (ii) a chemical data assimilation experiment where we assimilated MET OBS and MOPITT CO
partial column retrievals in the form of QORs (MOP QOR). In addition we conducted an experiment where we converted the CHEM OBS to CPSRs and assimilated the CPSRs (MOP CPSR). We also conducted
sensitivity experiments where we (i) zeroed the observation error covariance cross-correlations (MOP NROT) – as opposed to using a SVD transformation for diagonalization, and (ii) applied vertical
localization (MOP LOC). The suite of experiments is summarized in Table 1.
For all experiments we used (i) DART horizontal and vertical localization – Gaspari–Cohn localization with a localization radius half-width of 3000km in the horizontal and 8km in the vertical, (ii)
DART prior adaptive inflation, (iii) no posterior inflation, (iv) full interaction between all observations and all state variables – i.e., MET OBS update chemistry state variables and CHEM OBS
update meteorology state variables (joint assimilation of MET and CHEM OBS), (v) DART clamping (i.e., the imposition of a minimum threshold) on chemistry state variables to constrain the posterior
ensemble members to be positive, and (vi) the reported MOPITT retrieval error covariance as the observation error covariance to account for unrepresented error sources such as representativeness
Figure 1. (a) Shaded contours of CO in ppb for the MOP QOR (upper panel) and MET DA (middle panel) experiments for the first model level above the surface (∼1000 hPa) from the 6 h forecast valid on 18:00 UTC,
28 June 2008. The lower panel shows the difference contours for those experiments (MOP QOR – MET DA). The shaded area represents the WRF-Chem domain. (b) The upper panel shows the assimilated MOPITT
CO retrievals between the surface and 900hPa for 18:00UTC, 28 June 2008. The lower panel shows the associated assimilation increment.
For the MOP QOR experiment, the MOPITT CO retrievals were converted to QORs using an algorithm similar to that described by Migliorini et al. (2008) except we did not perform their second forecast
error covariance-based filtering.
The MET DA and MOP QOR experiments are intended to identify the impact of assimilating chemistry observations. Figure 1a shows shaded contours of CO in parts per billion (ppb) at ∼1000hPa from the
6h forecast valid at 00:00UTC, 29 June 2008 and compares the MET DA and MOP QOR experiments. It shows that over the course of MOP QORs the assimilation of CO retrievals reduced the (i) positive CO
bias found in polluted areas of MET DA (i.e., metropolitan areas with high-CO emissions – San Francisco, Los Angeles, Chicago, and the northeast USA), and (ii) negative CO bias found in nonpolluted
areas in MET DA (Hawaii, east Pacific, southeast USA, and Baja). The MET DA biases could result from model errors such as (i) emission errors – CO emissions too high in polluted areas and too low in
nonpolluted areas, (ii) transport errors – insufficient CO transport away from polluted areas and insufficient transport toward nonpolluted areas, and/or (iii) chemistry errors – CO destruction too
weak in polluted areas and too strong in nonpolluted areas. The MET DA biases could also result from initial/boundary condition errors that were corrected by the assimilation of MOPITT CO in MOP QOR.
Figure 1b shows the assimilated CO retrievals for the 18:00UTC, 28 June 2008 update cycle in the upper panel and the corresponding increments in the lower panel. Comparison of those panels shows
that the assimilation step adjusted the CO concentrations primarily along the satellite observation paths, which is a consequence of assimilating sparse observations. The DA adjustments in Fig. 1b
are generally consistent with the differences between MOP QORs and MET DA in Fig. 1a (CO increases in nonpolluted areas – east of San Francisco, the southeast USA, and Baja). However, that is a
general statement because the MOP QOR – MET DA differences are partially related to the impact of assimilating CO observations during the preceding assimilation cycle and partially related to the
impact of assimilating all the CO observations since the beginning of the cycling experiment (∼100 cycles). Consequently, there are locations where the signs of the MOP QOR – MET DA differences are
different from the signs of the increments (e.g. southwest of lakes Michigan and Huron and over the Ohio River valley and San Francisco Bay). The sense of those sign differences is not an indication
of relative forecast accuracy but that the (i) impact from assimilating CO during the preceding cycle was similar to that from assimilating CO throughout the cycling experiment (same signs), and (ii)
impact from assimilating CO during the preceding cycle was different to that from assimilating CO throughout the cycling experiment (different signs).
Figure 2 shows time series of the domain average CO from the MET DA and MOP QOR experiments in retrieval and state space. The dots represent the retrieval space results where the cool colors (blue
and black) show the forecasts, and the warm colors (red and magenta) show the analyses. The green dots represent the MOPITT retrievals. The solid lines show state-space results. Figure 2 has several
interesting results. First, MET DA had a negative bias of ∼10ppb in retrieval space. Second, assimilation of MOPITT CO reduced that bias by ∼5ppb. Finally, in state space MOP QORs increased the
mean CO by ∼5ppb. As discussed below, those results are consistent with Fig. 1, which shows a large number of nonpolluted areas in MET DA with a negative bias and a small number of polluted areas
with a positive bias.
Time series of the domain average CO from the MOP QOR and MET DA experiments. The red and magenta dots show the domain average CO in retrieval space for the MOP QOR and MET DA analyses denoted in the
legend by “A”. The blue and black dots show the domain average CO in retrieval space for the MOP QOR and MET DA forecasts denoted in the legend by “F”. The green dots show the domain average MOPITT
CO retrievals. They are the same in both panels and are included for reference. The solid lines show the domain average CO in model space with the same color scheme as used for the analyses and
forecasts in retrieval space. The solid lines are the same in both panels and are also included for reference.
Figure 3 shows vertical profiles of the time (00:00UTC, 25 June 2008 to 00:00UTC, 29 June 2008) and horizontal domain average CO in retrieval space. It shows that the MOPITT profile had greater
vertical variability (moderate CO near the surface, low CO in the middle troposphere: 500–400hPa, high CO in the upper troposphere: 300–200hPa, and low CO near the tropopause: 200–100hPa) than the
MET DA and MOP QOR profiles. It also shows that the assimilation of MOPITT CO had positive impacts throughout the troposphere with the greatest improvement in the upper troposphere. Figure 3 shows
that there were differences in the MOP QOR/MET DA bias reduction between: (i) the upper and lower troposphere (greater magnitude negative bias reduction in the upper troposphere and lesser magnitude
positive bias reduction in the lower troposphere), and (ii) the forecast and the analysis (greater bias reduction in the analysis than in the forecast). Those results expand our understanding of the
bias in Figs. 1 and 2. In Fig. 3 the forecast and analysis show greater bias reduction in the upper troposphere. That suggests that the domain averages in Fig. 2 were dominated by bias reductions in
the upper troposphere. Figure 3 also suggests that bias reductions in the lower troposphere were dominated by the reduction of the positive bias in the polluted areas of Fig. 1. Those results suggest
that the following model errors (as opposed to initial/boundary condition errors) caused the biases: (i) the near-surface biases were likely caused by the CO emissions being too high in polluted
areas and too low in nonpolluted areas, (ii) the positive biases in the lower middle troposphere (∼600hPa) were likely caused by erroneously large vertical CO fluxes from the near surface to the
lower middle troposphere and/or too little CO destruction, and (iii) the negative biases in the upper troposphere were likely caused by erroneously small vertical CO fluxes and/or too much CO
destruction. We reach the conclusion regarding model error versus initial/boundary condition (IC/BC) error because Fig. 3 shows that the bias reduction in the lower troposphere is greater for the
analyses than for the forecasts. That suggests that following the assimilation of MOPITT CO in MOP QOR, the CO IC/BCs have improved relative to MET DA. Then during the course of model integration the
bias increases. Thus, we conclude that model error is a more likely cause of the bias.
Vertical profiles of time/horizontal domain average CO from the MOP QOR and MET DA experiments for 00:00UTC, 25–29 June 2008. The results are in MOPITT retrieval space. The red profiles represent the MOPITT retrievals. Otherwise the color of the lines corresponds to the legend. Forecast is the assimilation prior, and analysis is the assimilation posterior.
Lastly, we tested the null hypothesis that the difference between the MET DA and MOP QOR time series results was zero (H0: MOP QOR-MET DA=0) against an alternative hypothesis that the difference
was not zero (HA: MOP QOR-MET DA≠0). We used the retrieval-space time series from Fig. 2 and the large sample parametric test for the difference between two means from a normal distribution. The
test statistic was Z = (Ȳ₁ − Ȳ₂) / √(σ₁²/n₁ + σ₂²/n₂), where Ȳ₁, σ₁², and n₁ denote the sample mean, sample variance, and number of samples for the MOP QOR experiment, respectively; Ȳ₂, σ₂², and n₂ denote the
analogous sample statistics for the MET DA experiment; and n₁ = n₂ = 104. The rejection criterion was |Z| > zα/2, where α = 0.05 and zα/2 = 1.96 for a two-tailed
test at the 95% confidence level. We were able to reject the null hypothesis. Based on that result, we conclude that assimilation of MOPITT CO retrievals significantly changed the WRF-Chem/DART CO
forecasts and analyses. When measured against MOPITT, those changes were a significant improvement.
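The test just described is straightforward to reproduce; the sketch below assumes the two retrieval-space time series are available as arrays (the data themselves are not reproduced here, and the variable names are placeholders).

```python
import numpy as np
from scipy.stats import norm

def two_sample_z(y1, y2, alpha=0.05):
    """Large-sample two-tailed z-test for a difference in means, as described above."""
    y1, y2 = np.asarray(y1, float), np.asarray(y2, float)
    z = (y1.mean() - y2.mean()) / np.sqrt(y1.var(ddof=1)/y1.size + y2.var(ddof=1)/y2.size)
    z_crit = norm.ppf(1.0 - alpha/2.0)   # 1.96 for alpha = 0.05
    return z, abs(z) > z_crit            # (test statistic, reject H0?)

# Hypothetical usage with the two 104-sample domain-average CO time series (data not shown):
# z, reject = two_sample_z(mop_qor_series, met_da_series)
```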
(a) Horizontal domain average of the full and truncated terms in the retrieval equation for 18:00UTC, 28 June 2008. MOP-Ret, MOP-Trc, and MOP-Res are the MOPITT retrieval, truncated retrieval, and
residual profiles, respectively. QOR-Ret is the MOPITT QOR profile, QOR-Trc is the truncated MOPITT QOR profile, and QOR-Res is the MOPITT QOR residual profile. (b) Horizontal domain average of the
MOPITT averaging kernel profiles in the upper panel and leading left singular vectors of those averaging kernels in the lower panel for 18:00UTC, 28 June 2008.
Next we study the assimilation of CPSRs as described in Sect. 2 but first review some CPSR attributes. Figure 4a shows vertical profiles of CPSR characteristics averaged for the MOPITT retrieval
domain at 18:00UTC, 28 June 2008. The blue curves represent the MOPITT CO retrievals (MOP-Ret). Those curves have reduced vertical structure due to the units (log10(VMR) as opposed to VMR). After
conversion from log10(VMR) to VMR, the MOP-Ret profiles have greater vertical structure and resemble the MOPITT profiles in Fig. 3. The black curves represent the MOPITT CO QORs (QOR-Ret) as defined by Eq. (3).
The QOR-Ret profiles differ from MOP-Ret in that they have maxima near the surface and the upper troposphere and a minimum in the middle troposphere. The green curves represent the truncated profiles, which are
obtained by (i) projecting the full retrieval profile or the QOR profile onto the leading left singular vectors of the associated averaging kernel to get the projection coefficients (e.g., c_r = U₀ᵀ y_r,
where c_r is the projection coefficient vector for the full retrieval, and c_qor = U₀ᵀ(y_r − (I − A)y_a − ε) = U₀ᵀ y_qor, where c_qor is the coefficient vector for the QOR profile – see Eq. 4 in Sect. 2), and (ii)
performing the inverse projection by multiplying the leading singular vectors by their respective projection coefficients and summing those products (e.g., ŷ_r = U₀ c_r is the truncated retrieval
profile – denoted MOP-Trc, and ŷ_qor = U₀ c_qor is the truncated QOR profile – denoted QOR-Trc). The forward transform in (i) is analogous to the first part of the CPSR transform in Eq. (4). The inverse
transform in (ii) brings the result of the forward transform in (i) back to state space. The inverse transform is not part of the CPSR algorithm.
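A compact way to see the forward and inverse projections in (i) and (ii) is the following sketch. The averaging kernel, retrieval, and prior profiles are placeholders, and the rank-selection criterion (how many leading singular vectors to keep) is an illustrative assumption; the paper retains the leading vectors only (on average ∼2.3 of them).

```python
import numpy as np

def truncate_profile(A, y, frac=0.99):
    """Project a profile y onto the leading left singular vectors of the averaging
    kernel A (forward transform) and map it back (inverse transform), returning the
    truncated profile and the residual. `frac` is an illustrative criterion: keep
    enough singular vectors to explain that fraction of the total singular-value sum."""
    U, s, _ = np.linalg.svd(A)
    k = int(np.searchsorted(np.cumsum(s) / s.sum(), frac)) + 1
    U0 = U[:, :k]              # leading left singular vectors
    c = U0.T @ y               # projection coefficients (forward transform)
    y_trunc = U0 @ c           # inverse projection back to profile space
    return y_trunc, y - y_trunc

# Placeholder 10-level example; y_qor = y_r - (I - A) y_a (the noise term is omitted).
rng = np.random.default_rng(1)
A = 0.3 * rng.random((10, 10))
y_r, y_a = rng.random(10), rng.random(10)
y_qor = y_r - (np.eye(10) - A) @ y_a
qor_trc, qor_res = truncate_profile(A, y_qor)   # QOR-Trc and QOR-Res
mop_trc, mop_res = truncate_profile(A, y_r)     # MOP-Trc and MOP-Res
```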
(a) Same as Fig. 1a except for the MOP CPSR experiment and the middle panel from Fig. 1a, the MET DA experiment is not plotted. (b) Same as Fig. 1b except for the MOP CPSR experiment.
In Fig. 4a the residuals are defined as the difference between the full and truncated profiles (e.g., y_r − ŷ_r is the full retrieval residual – denoted MOP-Res, and y_qor − ŷ_qor is the QOR residual –
denoted QOR-Res). If the full profiles project completely onto the leading singular vectors, the residuals are zero. The upper panel of Fig. 4a shows that the transform in (i) has the greatest impact
near the surface and the upper troposphere and the least impact in the middle troposphere. When the truncation residuals are nonzero, the original profiles contain components that are not in the
range of the averaging kernel. That always indicates a contribution from the retrieval prior term (A-I)ya. However, a zero residual does not always indicate that the contribution from the retrieval
prior term has been removed. For QOR residuals, the retrieval prior term contribution is completely removed. For the retrieval residuals, the retrieval prior term contribution may not be completely
removed. When components of the retrieval prior term lie in the range of the averaging kernel, they cannot be removed by the transform in (i) and are therefore not included in the residual. For
example in the upper panel of Fig. 4a the similarity between MOP-Trc and QOR-Ret shows that the MOPITT retrieval was strongly influenced by the retrieval prior and that most of the prior contribution
was removed by the transform in (i). That also shows that most of the prior contribution was not in the range of the averaging kernel. However, not all was outside the range, and the difference
between MOP-Trc and QOR-Ret shows the part that was inside the range. This analysis shows that the influence of the retrieval prior term cannot be completely removed by projecting the retrieval onto the
range of the averaging kernel. The results show that it is necessary to use the Migliorini et al. (2008) quasi-optimal subtraction in Eq. (2) to remove the retrieval prior contribution. Comparison of
QOR-Ret and QOR-Trc in the lower panel of Fig. 4a shows that QOR-Ret lies completely within the range of the averaging kernel. That result was expected from the discussion of Eqs. (3) and (4).
In summary Fig. 4a shows the state space impacts from applying the Migliorini et al. (2008) quasi-optimal subtraction and the CPSR transform in (i). It also shows that the quasi-optimal subtraction
was necessary to remove the influence of the retrieval prior. Thus, in CPSRs the quasi-optimal subtraction removes the influence of the retrieval prior, and projection onto the leading singular
vectors of the averaging kernel provides the data compression.
In Fig. 4a the average number of leading singular vectors was ∼2.3. CPSRs therefore reduced the number of assimilated observations by ∼7.7 per 10-level MOPITT profile. After thinning there were ∼30000 MOPITT profiles per
assimilation cycle. That implies a CPSR reduction of ∼281000 retrievals or ∼80% per cycle. On application the actual reduction was less because the number of non-retrieval observations was not
reduced. As an example when assimilating MET OBS and CHEM OBS we found a reduction of ∼35% in the computation cost. That is a wall clock time reduction based on NCAR's 1.5-petaflop high-performance
IBM Yellowstone computer with 32 tasks, and 8 tasks per node. We expect similar reductions for other computing configurations.
In Fig. 4b we examine the vertical structure of the CPSRs. The upper panel shows the retrieval domain average of the MOPITT averaging kernel profiles from 18:00UTC, 28 June 2008. The lower panel
shows the domain average of the leading left singular vectors from SVDs of the averaging kernel profiles in the upper panel. Comparison of the upper and lower panels shows that while the singular
vector and averaging kernel profiles are similar it is not possible to associate a specific singular vector profile with a specific averaging kernel profile or with a group of profiles. However, the
singular vectors of the averaging kernel show the sensitivity of the associated CPSR to the true CO profile. The first singular vector shows positive sensitivity to the entire CO profile with greater
sensitivity in the lower and middle troposphere and greatest sensitivity in the upper middle troposphere. The second singular vector shows positive sensitivity in the lower troposphere and negative
sensitivity in the upper troposphere. Lastly, the third singular vector resembles the first singular vector with greatest positive sensitivity in the upper middle troposphere but with negative
sensitivity in the lower and upper troposphere. Those characteristics are consistent with the MOPITT TIR and NIR joint sensitivities documented by Worden et al. (2010), Deeter (2011), and Deeter et
al. (2012, 2013). It should be noted that the sign of the singular vectors in Fig. 4b is arbitrary because the left and right singular vectors can be jointly multiplied by negative one and still
qualify as singular vectors. However, when multiplied by one sign the singular vector may have physical meaning, and when multiplied by the other it may not. For our application, the sign that made
the vertical structure of the singular vectors most similar to that of the averaging kernel had physical meaning. Therefore, in Fig. 4b we chose the sign that made the singular vector profile most
consistent with the averaging kernel profiles.
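The sign convention just described can be applied mechanically: flip a singular vector whenever doing so makes it more similar to the averaging kernel profiles. In the sketch below the similarity measure (correlation with the mean averaging kernel row) is our own simple choice, not necessarily the one used for Fig. 4b.

```python
import numpy as np

def align_singular_vector_signs(U0, A):
    """Flip the sign of each leading left singular vector (column of U0) so that it
    correlates positively with the mean averaging kernel row."""
    a_mean = A.mean(axis=0)
    signs = np.sign(U0.T @ a_mean)
    signs[signs == 0] = 1.0          # leave zero-correlation vectors unchanged
    return U0 * signs                # one sign per column, applied by broadcasting
```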
To test the benefit of assimilating CPSRs we converted the MOPITT CO retrievals to CPSRs and repeated the MOP QOR experiment (called MOP CPSR). Those results are shown in Figs. 5–7. Conceptually the
MOP CPSR results should be similar to the MOP QOR results in Figs. 1–3. Practically, the results are different due to (i) the effect of DA sub-processes like horizontal localization and inflation,
and (ii) differences in the observation error caused by the CPSR compression transform. Comparison of the contour maps in Figs. 1a and 5a shows that MOP CPSR provided similar adjustments to MOP QOR
but they were of greater magnitude and larger area (the MET DA result was not plotted in Fig. 5a because it would be the same as in Fig. 1a). The general trend from Fig. 1a that the assimilation of
CO retrievals reduced the positive CO bias in polluted areas and the negative bias in nonpolluted areas appears in Fig. 5a. Comparison of Figs. 1b and 5b shows that MOP CPSR generally assimilated the
same CO retrievals as MOP QOR, but the CPSR increments were of greater magnitude and more widely dispersed. Comparison of the time series plots in Figs. 2 and 6 shows that there were slightly greater
bias reductions for MOP CPSR than MOP QOR. MOP CPSR reduced the CO negative bias in retrieval space by ∼8ppb and increased the mean CO in state space by ∼10ppb. Those improvements are also seen
from a comparison of the vertical profiles in Figs. 3 and 7, which shows that MOP CPSR produced greater bias reductions for the forecast and analysis throughout the troposphere. As in MOP QOR the MOP
CPSR improvements were greater in the upper troposphere than in the lower troposphere. The MOP CPSR results from Fig. 7 provide further support for our suggestion that the domain average bias
reductions in Figs. 2 and 6 were due to bias reductions in the upper troposphere because the greater bias reductions in the upper troposphere of Fig. 7 (compared to Fig. 3) provided greater bias
reductions in Fig. 6 (compared to Fig. 2). In summary Figs. 5–7 confirm our analysis of Figs. 1–3 and show that assimilation of CPSRs produced results that were similar to or better than those from
the assimilation of QORs at two-thirds the computational cost. We also conducted significance testing for MOP CPSR similar to that for MOP QOR and were able to reject the null hypothesis that there
was no difference between the MOP CPSR and MET DA time series in Fig. 6.
Same as Fig. 2 except for the MOP CPSR experiment.
We calculated verification statistics (bias, root mean square error (RMSE), and coefficient of determination (R2)) for the 6h forecasts from all experiments based on the time series results in
Figs. 2 and 6. Those statistics are plotted in Fig. 8. Generally, Fig. 8 shows that the assimilation of MOPITT CO improved model performance for all metrics when compared against the MOPITT
retrievals. Figure 8 also shows that RMSE was dominated by the bias and that the differences in the statistics for the different treatments (assimilating QORs, CPSRs, cross-covariance zeroing, and
vertical localization) were generally negligible except for cross-covariance zeroing. We now discuss the cross-covariance zeroing and vertical localization experiments.
Same as Fig. 3 except for the MOP CPSR experiment.
One method used to diagonalize the observation error covariance is zeroing of the cross-correlations (see the Introduction to Migliorini et al., 2008). The uncertainty of the error covariance and the
practice of adjusting the observation error variance to tune ensemble DA strategies are used to justify the zeroing. As noted by Anderson (2001) and applied by Migliorini et al. (2008), a more
aesthetic and mathematically correct approach is to apply a variance maximizing rotation based on a SVD of the error covariance. In this section we compare those two error covariance diagonalization
methods. Recall that MOP QOR used an SVD-based rotation to diagonalize the error covariance. We conducted a companion experiment MOP NROT where we used cross-correlation zeroing. The assimilation/
forecast plots are not shown because it is not a central theme of this paper. However, we include the verification statistics in Fig. 8. Significance testing and scores from assimilation of MOPITT CO
show that SVD-based diagonalization produced significantly greater forecast skill compared to cross-correlation zeroing. Based on that result we conclude that the second SVD-based rotation is a
necessary step in our definition of CPSRs.
Verification statistics for all experiments in MOPITT retrieval space. The blue curve is the bias (model – observation), the red curve is the root mean square error (RMSE), and the magenta curve is
the coefficient of determination. The experiments are described in the text and summarized in Table 1.
Ensemble data assimilation generally uses localization to remove spurious correlations that may occur from under-sampling. Localization limits the horizontal and vertical spatial scales on which the
observations impact the posterior. Vertical localization may be inappropriate when assimilating phase space retrievals because ∼80% of the vertical variation in the retrieval is described by the
first leading singular vector of the averaging kernel (the basis function for the phase space transform) and that vector is nearly independent of height (see Fig. 4b). Nevertheless, if vertical
localization is appropriate then the question becomes how to do it because phase space retrievals are not associated with a unique vertical location. One solution assumes that phase space retrievals
are associated with the level of maximum sensitivity in the transformed averaging kernel, i.e., the averaging kernel after applying the compression and diagonalization transforms discussed in
Sect. 2. We applied such localization to MOP QOR (in the MOP LOC experiment) and found that results from the two experiments were similar. Therefore, we do not present the assimilation/forecast plots
but include the verification statistics in Fig. 8. Comparison of the verification scores in Fig. 8 for MOP LOC with those from the other experiments (MOP QOR and MOP CPSR) shows that vertical
localization did not substantially alter the results. We experimented with different vertical localization lengths and found similar results. We are unsure whether this is a general result and are
continuing to investigate vertical localization.
In this paper we incorporated WRF-Chem into DART and assimilated MOPITT CO trace gas retrievals. We also introduced the assimilation of compact phase space retrievals (CPSRs). CPSRs are preprocessed
trace gas retrievals that have (i) the influence of the retrieval prior removed, (ii) data compression, (iii) SVD-based error covariance diagonalization, and (iv) unit error variance scaling. We
showed that assimilation of CPSRs is an efficient alternative to assimilation of quasi-optimal retrievals (QORs) that provided substantial reductions in computation time (∼35%) without degrading
the analysis fit or forecast skill.
We presented results from month-long (00:00UTC, 1 June 2008 to 18:00UTC, 30 June 2008) cycling experiments where we assimilated conventional meteorology and MOPITT CO retrievals. For MOP QOR the
time series plots in Fig. 2 showed that MET DA had a negative bias of ∼10ppb. The assimilation of MOPITT CO in MOP QOR reduced that bias by ∼5ppb. The vertical profile plots in Fig. 3 showed that
assimilation of MOPITT CO improved the CO analysis fit and forecast skill throughout the troposphere when compared to MET DA. We also used traditional skill metrics (bias, RMSE, and R2) to quantify
the impact of assimilating CO retrievals. Those results showed that bias dominated the RMSE and that assimilation of CO retrievals improved WRF-Chem performance. Specifically, MOP QOR significantly
improved the WRF-Chem CO forecast skill for all three metrics.
Next we focused on making the assimilation of retrievals computationally efficient and introduced compact phase space retrievals. CPSRs advance the work of Joiner and DaSilva (1998) and Migliorini et
al. (2008) by describing an easily applied methodology to achieve data compression for phase space retrievals. Conceptually, the assimilation of QORs and CPSRs should yield similar results except for
the effects of (i) assimilation sub-processes like localization and inflation and (ii) different observation errors due to the CPSR compression transform. Nevertheless, our CPSR approach is different
from that of Migliorini et al. (2008): (i) we perform two transforms – a compression transform and a diagonalization transform, while they perform two diagonalization transforms; (ii) we identify and
assimilate all linearly independent information observed by the instrument, they may discard linearly independent information – some because their transform vectors are not necessarily a basis for
the space of QORs and some because their uncertainty analysis discards some information that lies in the range of their transformed averaging kernel; (iii) our diagonalization transform is analogous
to their first diagonalization transform except we diagonalize the compressed observation error covariance and they diagonalize the untransformed observation error covariance; and (iv) we rely on the
assimilation system to decide how much weight to give the transformed observations and require no information from the forecast ensemble, and they use the forecast ensemble to decide which
observations to discard.
MOP CPSR maps in Fig. 5 showed that assimilation of CPSRs placed CO hot spots in the same locations as MOP QOR but they were of greater magnitude and larger area. The time series and vertical profile
plots showed that those differences generally represented analysis and forecast improvements. Skill metrics for MOP CPSR showed that when compared to MOP QOR, the assimilation of CPSRs slightly
improved the forecast skill for all metrics, and when compared to MET DA it significantly improved the forecast skill for all metrics. Based on those results we conclude that the assimilation of
CPSRs performed as well or better than the assimilation of QORs at a substantially reduced computational cost (∼35% reduction in computation time).
Collectively our analysis of the MOP QOR and CPSR results in Figs. 1–3 and 5–7 suggested that (i) in the lower troposphere MET DA had a negative CO bias in polluted areas and a positive bias in
nonpolluted areas (Figs. 1 and 5) and (ii) bias reductions in the domain average retrieval space CO were due to reductions in the negative CO bias in the upper troposphere (Figs. 2, 3, 6, and 7). We
proposed three causes for the CO biases: (i) emission errors – overestimation of CO emissions in polluted areas and underestimation in nonpolluted areas, (ii) transport errors – too much CO transport
from the near surface to the lower troposphere and too little transport from the lower to upper troposphere, and (iii) chemistry errors – too little CO destruction in the near surface and lower
troposphere and too much destruction in the upper troposphere.
We expect that CPSRs have the potential for broad operational application. CPSRs can be easily obtained from retrievals derived from any optimal estimation algorithm. They can be used to assimilate
retrievals with correlated or uncorrelated errors for any sequential assimilation methodology (both Kalman filter and variational-based algorithms). Due to their ease of derivation, flexibility, and
potential for large reductions in assimilation computation time, CPSRs should facilitate the efficient assimilation of dense geostationary observations.
NCAR is sponsored by the National Science Foundation (NSF). Any opinions, findings and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily
reflect the views of NSF. This research was also sponsored by NASA grants NNX11A110G and NNX10AH45G. We gratefully acknowledge the anonymous reviewers, Chris Snyder, and Louisa Emmons for their
thorough reviews of this manuscript and for providing many constructive comments. We also acknowledge Helen Worden for assistance with the MOPITT data, and Jerome Barre for assistance with coding the
WRF-Chem initial and boundary condition perturbations. Edited by: A. Lauer
Anderson, J. L.: An ensemble adjustment Kalman filter for data assimilation, Mon. Weather Rev., 129, 2884–2903, 2001. Anderson, J. L.: A local least squares framework for ensemble filtering, Mon.
Weather Rev., 131, 634–642, 2003. Anderson, J. L., Hoar, T., Raeder, K., Liu, H., Collins, N., Torn, R., and Arellano, A.: The Data Assimilation Research Testbed: A community facility, B. Am.
Meteorol. Soc., 90, 1283–1296, 2009. Bei, N., de Foy, B., Lei, W., Zavala, M., and Molina, L. T.: Using 3DVAR data assimilation system to improve ozone simulations in the Mexico City basin, Atmos.
Chem. Phys., 8, 7353–7366, 10.5194/acp-8-7353-2008, 2008. Colarco, P., da Silva, A., Chin, M., and Diehl, T.: Online simulation of global aerosol distributions in the NASA GEOS-4 model and
comparisons to satellite and ground-based aerosol optical depth, J. Geophys. Res., 115, D14207, 10.1029/2009JD012820, 2010. Deeter, M. N.: MOPITT (Measurement of Pollution in the Troposphere) version
5 Product User's Guide, available at: www.acom.ucar.edu/mopitt/v5_users_guide_beta.pdf (last access: 1 March 2016), 2011. Deeter, M. N., Worden, H. M., Edwards, D. P., Gille, J. C., and Andrews, A.
E.: Evaluation of MOPITT retrievals of lower-tropospheric carbon monoxide over the United States, J. Geophys. Res., 117, D13306, 10.1029/2012JD017553, 2012. Deeter, M. N., Martinez, A. S., Edwards, D.
P., Emmons, L. K., Gille, J. C., Worden, H. M., Pittman, J. V., Daube, B. C., and Wofsy, S. C.: Validation of MOPITT version 5 thermal-infrared, near-infrared, and multispectral carbon monoxide
profile retrievals for 2000–2011, J. Geophys. Res.-Atmos., 118, 6710–6725, 2013. Emmons, L. K., Walters, S., Hess, P. G., Lamarque, J.-F., Pfister, G. G., Fillmore, D., Granier, C., Guenther, A.,
Kinnison, D., Laepple, T., Orlando, J., Tie, X., Tyndall, G., Wiedinmyer, C., Baughcum, S. L., and Kloster, S.: Description and evaluation of the Model for Ozone and Related chemical Tracers,
version 4 (MOZART-4), Geosci. Model Dev., 3, 43–67, 10.5194/gmd-3-43-2010, 2010. Gaubert, B., Coman, A., Foret, G., Meleux, F., Ung, A., Rouil, L., Ionescu, A., Candau, Y., and Beekmann, M.: Regional
scale ozone data assimilation using an ensemble Kalman filter and the CHIMERE chemical transport model, Geosci. Model Dev., 7, 283–302, 10.5194/gmd-7-283-2014, 2014. Grell, G. A., Peckham, S. E.,
Schmitz, R., McKeen, S. A., Frost, G., Skamarock, W. C., and Eder, B.: Fully coupled “online” chemistry in the WRF model, Atmos. Environ., 39, 6957–6976, 2005. Guenther, A. B., Jiang, X., Heald, C.
L., Sakulyanontvittaya, T., Duhl, T., Emmons, L. K., and Wang, X.: The Model of Emissions of Gases and Aerosols from Nature version 2.1 (MEGAN2.1): an extended and updated framework for modeling
biogenic emissions, Geosci. Model Dev., 5, 1471–1492, 10.5194/gmd-5-1471-2012, 2012. Herron-Thorpe, F. L., Mount, G. H., Emmons, L. K., Lamb, B. K., Chung, S. H., and Vaughan, J. K.: Regional
air-quality forecasting for the Pacific Northwest using MOPITT/TERRA assimilated carbon monoxide MOZART-4 forecasts as a near real-time boundary condition, Atmos. Chem. Phys., 12, 5603–5615, 10.5194/
acp-12-5603-2012, 2012. Joiner J. and da Silva, A. B.: Efficient methods to assimilate remotely sensed data based on information content, Q. J. Roy. Meteor. Soc., 124, 1669–1694, 1998. Klonecki, A.,
Pommier, M., Clerbaux, C., Ancellet, G., Cammas, J.-P., Coheur, P.-F., Cozic, A., Diskin, G. S., Hadji-Lazaro, J., Hauglustaine, D. A., Hurtmans, D., Khattatov, B., Lamarque, J.-F., Law, K. S.,
Nedelec, P., Paris, J.-D., Podolske, J. R., Prunet, P., Schlager, H., Szopa, S., and Turquety, S.: Assimilation of IASI satellite CO fields into a global chemistry transport model for validation
against aircraft measurements, Atmos. Chem. Phys., 12, 4493–4512, 10.5194/acp-12-4493-2012, 2012. Kukkonen, J., Olsson, T., Schultz, D. M., Baklanov, A., Klein, T., Miranda, A. I., Monteiro, A.,
Hirtl, M., Tarvainen, V., Boy, M., Peuch, V.-H., Poupkou, A., Kioutsioukis, I., Finardi, S., Sofiev, M., Sokhi, R., Lehtinen, K. E. J., Karatzas, K., San José, R., Astitha, M., Kallos, G., Schaap,
M., Reimer, E., Jakobs, H., and Eben, K.: A review of operational, regional-scale, chemical weather forecasting models in Europe, Atmos. Chem. Phys., 12, 1–87, 10.5194/acp-12-1-2012, 2012. MACC-II
Final Report: Monitoring Atmospheric Composition and Climate – Interim Implementation, available at: https://www.wmo.int/pages/prog/arep/gaw/documents/GAW-2013-Peuch-MACCII.pdf (last access:
1 March 2016), 2014. Migliorini, S.: On the equivalence between radiance and retrieval assimilation, Mon. Weather Rev., 140, 258–265, 2012. Migliorini, S., Piccolo, C., and Rodgers, C. D.: Use of the
information content in satellite measurements for an efficient interface to data assimilation, Mon. Weather Rev., 136, 2633–2650, 2008. Pagowski, M. and Grell, G. A.: Experiments with the
assimilation of fine aerosols using an ensemble Kalman filter, J. Geophys. Res., 117, D21302, 10.1029/2012JD018333, 2012. Pfister, G. G., Avise, J., Wiedinmyer, C., Edwards, D. P., Emmons, L. K.,
Diskin, G. D., Podolske, J., and Wisthaler, A.: CO source contribution analysis for California during ARCTAS-CARB, Atmos. Chem. Phys., 11, 7515–7532, 10.5194/acp-11-7515-2011, 2011. Pfister, G. G.,
Walters, S., Emmons, L. K., Edwards, D. P., and Avise, J.: Quantifying the contribution of inflow on surface ozone over California during summer 2008, J. Geophys. Res.-Atmos., 118, 12282–12299,
10.1002/2013JD020336, 2013. Rodgers, C. D.: Inverse Methods for Atmospheric Sounding, Theory and Practice, World Sci., Singapore, 2000. Smith, W. L. and Woolf, H.: The use of eigenvectors of
statistical covariance matrices for interpreting satellite sounding radiometer observations, J. Atmos. Sci., 33, 1–7, 1976. Thompson, O.: Regularizing the satellite temperature-retrieval problem
through singular-value decomposition of the radiative transfer physics, Mon. Weather Rev., 120 2314–2328, 1992. Twomey, S.: Information content in remote sensing, Appl. Optics, 13, 942–945, 1974.
Wiedinmyer, C., Akagi, S. K., Yokelson, R. J., Emmons, L. K., Al-Saadi, J. A., Orlando, J. J., and Soja, A. J.: The Fire INventory from NCAR (FINN): a high resolution global model to estimate the
emissions from open burning, Geosci. Model Dev., 4, 625–641, 10.5194/gmd-4-625-2011, 2011. Worden, H. M., Deeter, M. N., Edwards, D. P., Gille, J. C., Drummond, J. R., and Nedelec, P.: Observations
of near-surface carbon monoxide from space using MOPITT multispectral retrievals, J. Geophys. Res., 115, D18314, 10.1029/2010JD014242, 2010. | {"url":"https://gmd.copernicus.org/articles/9/965/2016/gmd-9-965-2016.xml","timestamp":"2024-11-15T01:13:59Z","content_type":"application/xml","content_length":"116843","record_id":"<urn:uuid:e85cfb97-f739-4b47-ba4e-0e8367e2737b>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00433.warc.gz"} |
Tenure Track Faculty Position in Machine Learning for Mathematical & Quantum Physics
Department of Applied Mathematics (University of Waterloo) and Perimeter Institute for Theoretical Physics
The Department of Applied Mathematics at the University of Waterloo and the Perimeter Institute for Theoretical Physics invite applications for a tenure-track Assistant Professor position in the area
of Machine Learning for Mathematical & Quantum Physics. In special cases a position at the rank of Associate or Full Professor may be considered.
The incumbent will be a faculty member in Applied Mathematics at the University of Waterloo and will spend 50% of their time as an Associate at the Perimeter Institute.
We are particularly interested in outstanding researchers with interests related to one or more of the following areas:
• machine learning for quantum physics / quantum computing
• quantum machine learning
• intersection of machine learning and quantum matter or condensed matter theory
• machine learning for other areas of mathematical / theoretical / computational physics
• researchers with broader interests at the intersection of machine learning and physics will also be considered
We are looking for applicants with an enthusiasm for teaching at both the undergraduate and graduate level, and for the supervision of graduate research. All complete applications received by January
15, 2024 will receive full consideration.
The Department of Applied Mathematics is one of four departments that, together with the School of Computer Science, comprise the Faculty of Mathematics at the University of Waterloo. With 300
faculty members, 8,000 undergraduate students and more than 1,000 graduate students in mathematics and computer science, Waterloo’s Faculty of Mathematics is a global powerhouse in research,
education and innovation. The Applied Mathematics department has 30 regular faculty members, over 100 graduate students, and strong undergraduate programs in applied mathematics, scientific computing
and mathematical physics. Research in the department is enhanced by close links to interdisciplinary institutes including the Perimeter Institute, the Waterloo Artificial Intelligence Institute, the
Centre for Computational Mathematics, and the Institute for Quantum Computing. More information about the department can be found at https://uwaterloo.ca/applied-mathematics/.
The Perimeter Institute is a leading global centre for fundamental research in theoretical physics. Home to more than 150 resident researchers and 1,000 visiting scientists each year all working to
unlock nature’s most profound secrets hidden deep inside the atom and far across the universe, Perimeter’s research efforts include condensed matter theory, cosmology, mathematical physics, quantum
fields and strings, quantum foundations, quantum gravity, quantum information, and particle physics. Visit www.perimeterinstitute.ca for more information and to view the list of Perimeter Institute
Interested candidates must have a PhD or equivalent in Applied Mathematics, Theoretical Physics or a related field. The salary range for this position is $110,000-$160,000 CAD. Salary will be
commensurate with qualifications, experience, and research record. Negotiations beyond this salary range will be considered for exceptionally qualified candidates. The effective date of appointment
is July 1, 2024. Interested individuals should apply using MathJobs (https://www.mathjobs.org/jobs/list/24027 ). Complete applications should include a cover letter, a curriculum vitae, research and
teaching statements, a statement on Equity-Diversity-Inclusion, teaching evaluation summaries (if available) and up to three reprints/preprints. In addition, applicants should arrange to have at
least three reference letters submitted on their behalf by January 15, 2024.
The University of Waterloo understands the impact that career interruptions (e.g. parental leave, leave due to illness) can have on a candidate’s achievement and encourages potential candidates to
explain in their application the impact this may have on their record; this information will be taken into careful consideration during the assessment process.
If you have any questions regarding the position, please contact: Prof. Hans De Sterck, Chair, Department of Applied Mathematics, University of Waterloo, Canada (hdesterck@uwaterloo.ca).
The University values the diverse and intersectional identities of its students, faculty, and staff. The University regards equity and diversity as an integral part of academic excellence and is
committed to accessibility for all employees. The University of Waterloo seeks applicants who embrace our values of equity, anti-racism and inclusion. As such, we encourage applications from
candidates who have been historically disadvantaged and marginalized, including applicants who identify as Indigenous peoples (e.g., First Nations, Métis, Inuit/Inuk), Black, racialized, a person
with a disability, women and/or 2SLGBTQ+.
The University of Waterloo is committed to accessibility for persons with disabilities. If you have any questions regarding the application process, assessment process or eligibility requirements
please contact Alicia Hanbidge (ahanbidg@uwaterloo.ca). At any time candidates can submit requests for application, interview or workplace accommodations to Alicia Hanbidge (ahanbidge@uwaterloo.ca).
All qualified candidates are encouraged to apply; however, Canadians and permanent residents will be given priority.
Three Reasons to Apply: https://uwaterloo.ca/faculty-association/why-waterloo.
Why Choose Perimeter: https://perimeterinstitute.ca/workplace-culture-perimeter-institute.
The University of Waterloo acknowledges that much of our work takes place on the traditional territory of the Neutral, Anishinaabeg and Haudenosaunee peoples. Our main campus is situated on the
Haldimand Tract, the land granted to the Six Nations that includes six miles on each side of the Grand River. Our active work toward reconciliation takes place across our campuses through research,
learning, teaching, and community building, and is centralized within our Indigenous Initiatives Office (https://uwaterloo.ca/human-rights-equity-inclusion/indigenousinitiatives). | {"url":"https://cms.math.ca/job-ad/tenure-track-faculty-position-in-machine-learning-for-mathematical-quantum-physics/","timestamp":"2024-11-03T20:18:50Z","content_type":"text/html","content_length":"162075","record_id":"<urn:uuid:b5209253-c7db-4c29-942b-0865ff1721cd>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00535.warc.gz"} |
This is the Overture.Lifts module of the Agda Universal Algebra Library.
{-# OPTIONS --without-K --exact-split --safe #-}
module Overture.Lifts where
open import Overture.Inverses public
The hierarchy of universes in Agda is structured as follows:^1
𝓤 ̇ : 𝓤 ⁺ ̇, 𝓤 ⁺ ̇ : 𝓤 ⁺ ⁺ ̇, etc.
This means that the universe 𝓤 ̇ has type 𝓤 ⁺ ̇, and 𝓤 ⁺ ̇ has type 𝓤 ⁺ ⁺ ̇, and so on. It is important to note, however, that this does not imply that 𝓤 ̇ : 𝓤 ⁺ ⁺ ̇. In other words, Agda’s universe hierarchy is
noncumulative. This makes it possible to treat universe levels more generally and precisely, which is nice. On the other hand, a noncumulative hierarchy can sometimes make for a nonfun proof
assistant. Specifically, in certain situations, the noncumulativity makes it unduly difficult to convince Agda that a program or proof is correct.
Here we describe a general Lift type that help us overcome the technical issue described in the previous subsection. In the Lifts of algebras section of the Algebras.Algebras module we will define a
couple domain-specific lifting types which have certain properties that make them useful for resolving universe level problems when working with algebra types.
Let us be more concrete about what is at issue here by considering a typical example. Agda will often complain with errors like the following:
𝓤 != 𝓞 ⊔ 𝓥 ⊔ (𝓤 ⁺) when checking that the expression... has type...
This error message means that Agda encountered the universe level 𝓤 ⁺, on line 498 (columns 20–23) of the file Birkhoff.lagda, but was expecting a type at level 𝓞 ⁺ ⊔ 𝓥 ⁺ ⊔ 𝓤 ⁺ ⁺ instead.
The general Lift record type that we now describe makes these situations easier to deal with. It takes a type inhabiting some universe and embeds it into a higher universe and, apart from syntax and
notation, it is equivalent to the Lift type one finds in the Level module of the Agda Standard Library.
record Lift {𝓦 𝓤 : Universe} (A : 𝓤 ̇) : 𝓤 ⊔ 𝓦 ̇ where
constructor lift
field lower : A
open Lift
The point of having a ramified hierarchy of universes is to avoid Russell’s paradox, and this would be subverted if we were to lower the universe of a type that wasn’t previously lifted. However, we
can prove that if an application of lower is immediately followed by an application of lift, then the result is the identity transformation. Similarly, lift followed by lower is the identity.
lift∼lower : {𝓦 𝓤 : Universe}{A : 𝓤 ̇} → lift ∘ lower ≡ 𝑖𝑑 (Lift{𝓦} A)
lift∼lower = refl
lower∼lift : {𝓦 𝓤 : Universe}{A : 𝓤 ̇} → lower{𝓦}{𝓤} ∘ lift ≡ 𝑖𝑑 A
lower∼lift = refl
The proofs are trivial. Nonetheless, we’ll come across some holes these lemmas can fill. | {"url":"https://ualib.gitlab.io/Overture.Lifts.html","timestamp":"2024-11-09T04:19:08Z","content_type":"text/html","content_length":"15519","record_id":"<urn:uuid:04d6787d-d9ba-4a8b-b9e0-4587662c4dc4>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00056.warc.gz"} |
Aplusix Windows
Aplusix Windows is software to help learning algebra which has been developed in France by University of Grenoble, CNRS, INP and University of Nantes. It is regularly used by middle and high schools.
Go to the Aplusix Windows page
Aplusix Neo
Aplusix Neo comes from Aplusix Windows.
It uses its fundamental functions:
• A Training and a Test mode,
• The student can make as many calculation steps as she wants,
• Permanent syntax checking
• Commands to trigger software automatic calculations.
• In Training mode, calculation checking on demand.
• In Training mode, checking the completion of exercises,
• Commands to have some calculations made by the device.
• At the end of a Test, Aplusix switches to Correction mode and enables checking the calculations and the completion of exercises.
Practice as you wish
Aplusix Neo runs on computers, tablets and smartphones.
It allows practicing calculations and algebra in a pleasant way everywhere: at middle school, at high school, at home, and on public transport.
Its predecessor, Windows software, is very appreciated:
“I like this program because it helped me a lot in Mathematics and it gave me more confidence to do difficult exercises.”
Some calculations performed by the device
Numerical calculations are done by the device.
Some calculations (those that have been chosen by the creator of the resource) can be performed by the device. One way to do a calculation is to select a subexpression and use the calculation button
that appears in the selection palette, see example opposite. Another way is to do a drag-and-drop calculation by selecting a subexpression and dragging it to another location. The "Presentation"
section contains more details on these calculation possibilities.
Various exercises
Aplusix Neo groups various and fundamental exercises for algebra and arithmetic.
A wide range of exercise is proposed by Aplusix Neo: numerical calculations, factoring, expanding and simplifying, solving equations, inequalities and linear systems of equations.
The base of exercises is regularly updated with resources produced by the authors of the application and by middle and high school teachers.
Some exercises are thematically grouped within galleries. Access to the galleries
Teachers can create Aplusix resources
The EpsilonWriter Creator application allows teachers to create exercise files for Aplusix and upload them to the epsilon-publi website. Teachers can also create resources by modifying open source
resources of the epsilon-publi site. They can give resources to students through links provided by EpsilonWriter Creator. They can also be added to the list of teachers who publish resources: they can then create a gallery and ask for it to be added to the existing galleries.
Connected use
Aplusix Neo can be used in a connected way.
Registering is very easy and is done with Aplusix Neo. When you are connected, the calculation steps are recorded online. You can display in the web browser a report of the work done and give its
link to whomever you want. You can also register the email address of a tutor (a teacher, a parent, or anyone else who agrees to take on this role) to notify them of your work sessions or to ask them for help (an email is sent with a direct link to the work done on the given exercise). You can also ask the support service for help.
Scientific experiments, led by math education researchers and psychologists in different countries, showed that students who used Aplusix Windows actually improved their skills.
Teachers enjoy it and students too:
“It helped me a lot to improve my calculations. My speed has also increased, it is just fantastic!”
The MC² Project and Aplusix Neo
Aplusix Neo was partly funded by the MCSquared European project. The project's focus was on social creativity and the design of digital media intended to enhance creativity in mathematical thinking.
The Aristod company, which developed these tools, ceased its activities in April 2019, due to the very low interest that these tools have generated.
Jean-Francois Nicaud, the main author of these tools, keeps them available to users on this website for a few years.
Contact: jeanfrancois dot nicaud at laposte dot net | {"url":"https://www.aplusix.org/siteTemplate.php?lang=en&page=accueil.php","timestamp":"2024-11-09T20:15:55Z","content_type":"application/xhtml+xml","content_length":"18666","record_id":"<urn:uuid:ca7e8dec-5499-42b7-91d0-78104877516b>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00832.warc.gz"} |
Game coloring the Cartesian product of graphs
This article proves the following result: Let G₁ and G₂ be graphs of orders n₁ and n₂, respectively. Let G₁′ be obtained from G₁ by adding to each vertex a set of n₂ degree-1 neighbors. If G₁′ has game
coloring number m and G₂ has acyclic chromatic number k, then the Cartesian product G₁□G₂ has game chromatic number at most k(k+m−1). As a consequence, the Cartesian product of two forests has game
chromatic number at most 10, and the Cartesian product of two planar graphs has game chromatic number at most 105. | {"url":"https://www.sciweavers.org/publications/game-coloring-cartesian-product-graphs","timestamp":"2024-11-05T12:07:54Z","content_type":"application/xhtml+xml","content_length":"37336","record_id":"<urn:uuid:b300494f-fae1-4ca4-b485-c0f90af39ab9>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00519.warc.gz"} |
Hydrodynamic models of preference formation in multi-agent societies
Hydrodynamic models of preference formation
in multi-agent societies
Lorenzo Pareschi∗, Giuseppe Toscani†, Andrea Tosin‡, Mattia Zanella§
In this paper we discuss the passage to hydrodynamic equations for kinetic models of opinion formation. The considered kinetic models feature an opinion density depending on an additional microscopic variable, identified with the personal preference. This variable describes an opinion-driven polarisation process, leading finally to a choice among some possible options, as it happens e.g. in referendums or elections. Like in the kinetic theory of rarefied gases, the derivation of hydrodynamic equations is essentially based on the computation of the local equilibrium distribution of the opinions from the underlying kinetic model. Several numerical examples validate the resulting model, shedding light on the crucial role played by the distinction between opinion and preference formation on the choice processes in multi-agent societies.
Keywords: Opinion and preference formation, choice processes, kinetic modelling, hydrodynamic equations
Mathematics Subject Classification: 35L65, 35Q20, 35Q70, 35Q91, 82B21
1 Introduction
The mathematical modelling of opinion formation in multi-agent societies has enjoyed in recent
years a growing attention [4,5,6,29,36,46]. In particular, owing to their cooperative nature,
the dynamics of opinion formation have been often dealt with resorting to methods typical of
statistical mechanics [14,16]. Among other approaches, kinetic theory served as a powerful basis
to model fundamental interactions among the so-called agents [8,9,10,17,30,47] and to provide
a sound structure for related applications [1,49]. In kinetic models, analogously to the kinetic
theory of rarefied gases, the mechanism leading to the opinion variation is given by binary, i.e.
pairwise, interactions among the agents. Then, depending on the parameters of such microscopic
rules, the society develops a certain macroscopic equilibrium distribution [37,40], which describes
the formation of a relative consensus about certain opinions.
Two main aspects are usually taken into account in designing the elementary binary interac-
tions. The first is the compromise [20,52], namely the tendency of the individuals to reduce the
distance between their respective opinions after the interaction. The second is the self-thinking [47],
i.e. an erratic individual change of opinion inducing unpredictable deviations from the prescribed
deterministic dynamics of the interactions.
Recently, many efforts have been devoted to include further details in the opinion formation
models, so as to capture more and more realistic phenomena. The usual strategy consists in
∗Department of Mathematics and Computer Sciences, University of Ferrara, Via Machiavelli 35, 44121 Ferrara,
Italy (lorenzo.pareschi@unife.it)
†Department of Mathematics “F. Casorati”, University of Pavia, Via Ferrata 1, 27100 Pavia, Italy
‡Department of Mathematical Sciences “G. L. Lagrange”, Dipartimento di Eccellenza 2018-2022, Politecnico di
Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy (andrea.tosin@polito.it)
§Department of Mathematical Sciences “G. L. Lagrange”, Dipartimento di Eccellenza 2018-2022, Politecnico di
Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy (mattia.zanella@polito.it)
taking additional behavioural aspects into account, such as the stubbornness of the agents [34,45],
the emergence of opinion leaders [25,27], the influence of social networks [2,48], the expertise
in decision making tasks [41], the personal conviction [3,7,11,19]. Generally, the aim of such
additional parameters is to model on one hand the resistance of the agents to change opinion and,
on the other hand, the prominent role played by some individuals in attracting others towards their
opinions. In all these contributions, the additional variables act as modifiers of the microscopic
interactions. This means that they affect the process of opinion formation but are not affected in
turn by the evolving opinions.
In the present paper we aim instead at modelling a parallel process to opinion formation,
namely the formation of preferences. The preference, which is driven by, but need not coincide
with, the opinion, is here understood as representative of the choice that an agent makes among
some possible options, such as e.g. some candidates in an election or yes/no in a referendum [18].
As such, and unlike the opinion, the preference evolves towards necessarily polarised states, which
reflect the available options.
To pursue this goal, we consider a novel class of inhomogeneous kinetic models for the joint
distribution function f(t, ξ, w), where ξ is the variable describing the preference and w the one
describing the opinion. In particular, f(t, ξ, w)dξdw is the proportion of agents who, at time
t≥0, express a preference in the interval [ξ, ξ +dξ] and simultaneously an opinion in the interval
[w, w +dw]. As far as the modelling of the interactions leading to the evolution of the opinion
is concerned, we take advantage of the well consolidated background recalled before. Conversely,
concerning the evolution of the preference, we assume transport-type dynamics of the form
dξ/dt = (w − α) Φ(ξ).
Here, in analogy with classical kinematics, w morally plays the role of the velocity, i.e. it
drags the preference in time, however biased by a perceived social opinion α, which accounts for
the predominant social feeling. Moreover, the zeros of the function Φ define the options where the
preference may polarise.
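To fix ideas, the preference drift can be integrated directly once Φ and the perceived social opinion α are prescribed. In the sketch below the choice Φ(ξ) = 1 − ξ² (zeros at ξ = ±1, the two available options) and the constant values of w and α are illustrative assumptions, not the specifications used in the paper.

```python
def integrate_preference(xi0, w, alpha=0.0, dt=0.01, steps=2000):
    """Explicit Euler integration of d(xi)/dt = (w - alpha) * Phi(xi)
    with the illustrative choice Phi(xi) = 1 - xi**2 (options at xi = +/-1)."""
    xi = float(xi0)
    for _ in range(steps):
        xi += dt * (w - alpha) * (1.0 - xi**2)
    return xi

# An agent whose opinion stays above the perceived social opinion drifts towards +1,
# one whose opinion stays below it drifts towards -1:
print(integrate_preference(xi0=0.1, w=0.6, alpha=0.2))   # close to +1
print(integrate_preference(xi0=0.1, w=-0.5, alpha=0.2))  # close to -1
```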
These ingredients lead us to an inhomogeneous Boltzmann-type kinetic equation of the form
∂t f + (w − α) ∂ξ(Φ(ξ) f) = Q(f, f),
where Q is a Boltzmann-type collision operator encoding the opinion formation interaction dynamics. From here, analogously to the classical kinetic theory of rarefied gases, we derive macroscopic
equations for the density ρ = ρ(t, ξ) and the mean opinion m = m(t, ξ) of the agents with preference ξ at time t by means of a local equilibrium closure based on the identification of the local
equilibrium distribution of the opinions – the equivalent of a local “Maxwellian”. The precise type
of hydrodynamic equations that we obtain in this way depends on whether the mean opinion of
the agents is or is not conserved in time by the microscopic dynamics of opinion formation. For
instance, if it is conserved we get the following system of conservation laws:
∂t ρ + ∂ξ(Φ(ξ) ρ (m − α)) = 0,
∂t(ρ m) + ∂ξ(Φ(ξ)(M2,∞ − α ρ m)) = 0,
where M2,∞ denotes the energy of the local Maxwellian. For special classes of local equilibrium
distributions, that we compute from the collisional kinetic equation by an asymptotic procedure
reminiscent of the grazing collision limit of the classical kinetic theory [47,50], we can express
such an energy analytically in terms of the hydrodynamic parameters ρ,m. We recover therefore a
self-consistent macroscopic model, which we show to be hyperbolic for all the physically admissible
values of (ρ, m) and able to reproduce the preference polarisations, viz. the choices, discussed above.
In more detail, the paper is organised as follows. In Section 2 we give preliminary microscopic
insights into the joint process of opinion and preference formation, stressing in particular the role
of the perceived social opinion α. In Section 3 we move to an aggregate analysis of the opinion
formation by means of Boltzmann-type kinetic models, studying in particular their steady states
which, as set forth above, pave the way for the identification of the local equilibrium distributions
needed in the passage to the hydrodynamic equations. In Section 4 we derive various types of
macroscopic models of preference formation out of the aforementioned inhomogeneous Boltzmann-type
equation and we link them precisely to key features of the microscopic interactions among the
agents. In Section 5 we present several numerical tests, both at the kinetic and at the macroscopic
scales, which exemplify the distinction between the preference formation and the opinion formation
processes and show how much such a distinction enhances the interpretation of the social dynamics.
Finally, in Section 6 we discuss further developments and research prospects.
2 A microscopic look at the opinion-preference interplay
The first mathematical models of consensus formation in opinion dynamics were proposed in
the form of systems of ordinary differential equations (ODEs) describing the behaviour of a finite
number of agents. After the pioneering works [21,28], which introduced simple agent-based models
to understand the effects of the influence among connected individuals, many research efforts have
been devoted to the construction of sophisticated differential models of opinion formation. Besides
those dealing with simple consensus dynamics, in recent years new models of more realistic social
phenomena have been proposed to capture additional aspects. Without intending to review all
the literature, we give here some references on certain classes of models for finite systems: the
celebrated Hegselmann-Krause model [32], which considers bounded-confidence-type interactions
to stress the impact of homophily in learning processes, see also [15,35]; models incorporating
leader-follower effects [38]; models of social interactions on realistic networks [51]; models of opinion
control [12].
An aspect which, to our knowledge, has been so far basically disregarded in mathematical
models of opinion dynamics, in spite of its realism, is the distinction between the opinion in the
strict sense of the individuals about single issues and their overall preference, which, in some cases,
is actually mainly responsible for their choices. For instance, in case of referendums or political
elections, the preference of a voter can be identified with his/her voting intention, which may
not always coincide with his/her opinion on every topic debated during the election campaign.
Obviously, the preference evolves in time with the opinion, in such a way that if a certain opinion
persists for a sufficiently long time it can affect the preference considerably. For example, a
voter with a voting intention biased towards right (left, respectively) parties, who however finds
him/herself frequently in agreement with the positions taken by left (right, respectively) parties
on key topics of the election campaign, may end up with a final vote opposite to his/her original voting intention.
As a foreword to the subsequent contents of the paper, in this section we present some pre-
liminary considerations about the effect of the interplay between opinion and preference, taking
advantage of a deterministic microscopic model for a finite number of agents. Let us consider then
a system composed of N agents with microscopic state given by a pair (ξi, wi) ∈ [−1, 1]², where
wi is the opinion of the ith agent, ξi is his/her preference and i = 1, . . . , N. Sticking to a standard
custom in the literature of models of opinion dynamics, we describe mathematically the opinion of
an agent as a bounded scalar variable wi conventionally taken in the interval [−1, 1]. In particular,
we understand the values wi=±1 as the two extreme opinions, while wi= 0 as the maximum of
indecisiveness. Since, from the physical point of view, the preference is commensurable with an
opinion, we adopt the same mathematical conventions also for the variable ξi.
In order to highlight the different roles of the opinion and the preference variables, we rely on
the analogy with the classical kinetic theory of rarefied gases. There, a molecule moving on a line
is characterised by its position x ∈ R and its velocity v ∈ R. In the absence of external forces, the
velocity remains constant, whereas the position varies according to the kinematic law
\[
\frac{dx}{dt} = v. \tag{1}
\]
In practice, a particle with positive velocity will move rightwards, while a particle with negative
velocity will move leftwards. In first approximation, it seems natural to assume that the opinion
plays the role of the velocity and the preference that of the position. Indeed, at least in the case
in which an agent has to end up with one of the two preferences ±1, like e.g. in a referendum,
one can assume that a large part of the agents with positive opinion will move their preferences
rightwards, while agents with negative opinion will move their preferences leftwards. Clearly, one
cannot resort to a law like (1), which allows the position to increase or decrease indefinitely in
time: a correction is required in order to maintain the preference variable in the allowed interval [−1, 1].
A primary example of the analogy just discussed is provided below, where the time evolution
of the opinions and the preferences of the agents is modelled by the ODE system
\[
\frac{d\xi_i}{dt} = (w_i-\alpha)\,\Phi(\xi_i), \tag{2}
\]
\[
\frac{dw_i}{dt} = \frac{1}{N}\sum_{j=1}^{N} P(w_i,w_j)\,(w_j-w_i), \tag{3}
\]
for i= 1, . . . , N, supplemented with initial conditions (ξi(0), wi(0)) = (ξ0,i, w0,i )∈[−1,1]2. The
second equation describes standard alignment dynamics among the opinions of the agents, i.e.
consensus, driven by the interaction/compromise function 0 ≤P(·,·)≤1, see e.g., [40,47]. The
first equation describes instead the evolution of the preference of the ith agent based on the signed
distance between its true opinion wiand a reference opinion α∈[−1,1] perceived in the society,
which we will refer to as the perceived social opinion. The function Φ : [−1, 1] → [0, 1] has to be
primarily chosen so as to guarantee that ξi(t) ∈ [−1, 1] for all t > 0. However, as we will see in
a moment, this function will also be useful to take into account meaningful polarisations of the preference.
In model (2)-(3), the coupling between opinion and preference is actually one-directional,
indeed the evolution of ξidepends on that of the wi’s but not vice versa. In particular, the
system (3) for the wi’s can be solved a priori, before analysing the dynamics (2) of the ξi’s. Let
us consider, in particular, the case of interactions with bounded confidence, which are described
by taking
P(wi, wj) = χ(|wj−wi| ≤ ∆),(4)
where χdenotes the characteristic function and ∆ ∈[0,2] is a given confidence threshold, above
which agents do not interact because their opinions are too far away from each other. If ∆ = 0
then only agents with the very same opinion interact, whereas if ∆ = 2 then we speak of all-to-all
interactions, considering that |wj−wi| ≤ 2 for all wi, wj∈[−1,1]. The latter case is actually
equivalent to choosing P≡1.
Depending on the value of ∆, one can observe a loss of global consensus. Asymptotically,
the opinions may form several clusters, whose number is dictated by ∆ and by the initial condi-
tions w0,1, . . . , w0,N, see Figure 1. However, since the function P given in (4) is symmetric, i.e.
P(wi, wj) = P(wj, wi) for all i, j = 1, . . . , N, the mean opinion \(\frac{1}{N}\sum_{i=1}^{N} w_i\) is conserved in time
and for all t > 0 coincides, in particular, with the mean opinion at t = 0.
Now we give some insights into the preference dynamics modelled by (2), at least under special
forms of the function Φ. In order to avoid that the preference ξileaves the interval [−1,1], a very
natural condition is Φ(±1) = 0. This implies that the constant functions ξi(t) = −1 and ξi(t)=1
are indeed stationary solutions to (2) and may therefore represent attractive or repulsive equilibria
of the system, depending on the sign of wi−α.
For instance, we may choose
Φ(ξ) = 1 − |ξ|.
Taking for granted from (3) that wi(t) ≤ 1 for all t > 0, if we integrate (2) starting from an initial
condition ξ0,i ∈ [0, 1] then for all times t > 0 in which ξi remains non-negative we find
\[
\xi_i(t) \leq 1 - (1-\xi_{0,i})\,e^{-(1-\alpha)t} \leq 1.
\]
(a) ∆ = 1 (b) ∆ = 0.4 (c) ∆ = 0.2
Figure 1: Solution of (3) with N= 50 agents and Pgiven by (4) for decreasing values of the
confidence threshold ∆. The initial opinions w0,i have been sampled uniformly in [−1,1]. The
ODE system has been integrated numerically via a standard fourth order Runge-Kutta method.
Likewise, taking for granted from (3) that wi(t) ≥ −1 for all t > 0, if we start from an initial
condition ξ0,i ∈ [−1, 0] then for all times t > 0 in which ξi remains non-positive we deduce
\[
\xi_i(t) \geq -1 + (1+\xi_{0,i})\,e^{-(1+\alpha)t} \geq -1.
\]
This argument, applied to the various time intervals in which ξi has constant sign, shows indeed
that ξi(t) ∈ [−1, 1] for all t ≥ 0. Nevertheless, we cannot solve (2) exactly, because from (3) we
cannot calculate exactly the function t ↦ wi(t). On the other hand, we can get a useful idea at
least of the large time dynamics of (2) by fixing wi to its asymptotic value, say w∞,i ∈ [−1, 1],
and considering the equation
\[
\frac{d\xi_i}{dt} = (w_{\infty,i}-\alpha)\,(1-|\xi_i|),
\]
whose solution reads
\[
\xi_i(t) =
\begin{cases}
-1 + (1+\xi_{0,i})\,e^{(w_{\infty,i}-\alpha)t} & \text{if } \xi_i(t)\leq 0\\[1mm]
1 - (1-\xi_{0,i})\,e^{-(w_{\infty,i}-\alpha)t} & \text{if } \xi_i(t)\geq 0.
\end{cases}
\]
From here we easily deduce that:
•if ξ0,i <0 and w∞,i < α then ξi→ −1 for t→+∞;
•if ξ0,i >0 and w∞,i > α then ξi→1 for t→+∞.
In both cases, the final preference confirms and consolidates the initial one, because w∞,i −αhas
the same sign as ξ0,i. Conversely,
•if ξ0,i <0 but w∞,i > α then ξi→1 for t→+∞;
•if ξ0,i >0 but w∞,i < α then ξi→ −1 for t→+∞.
In these cases, the final preference reverses the initial one, because w∞,i −αhas opposite sign
with respect to ξ0,i.
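For illustration only (this sketch is ours, not part of the paper), the four cases above can be checked by integrating the frozen-opinion equation dξ/dt = (w∞,i − α)(1 − |ξ|) numerically; the step size, horizon and sample values below are arbitrary choices.

```python
import numpy as np

def integrate_preference(xi0, w_inf, alpha, dt=1e-3, T=30.0):
    """Forward-Euler integration of d(xi)/dt = (w_inf - alpha) * (1 - |xi|)."""
    xi = xi0
    for _ in range(int(T / dt)):
        xi += dt * (w_inf - alpha) * (1.0 - abs(xi))
    return xi

alpha = 0.0
for xi0, w_inf in [(-0.5, -0.4), (0.5, 0.4), (-0.5, 0.4), (0.5, -0.4)]:
    xi_T = integrate_preference(xi0, w_inf, alpha)
    print(f"xi0={xi0:+.1f}, w_inf-alpha={w_inf - alpha:+.1f} -> xi(T) ~ {xi_T:+.3f}")
```

The first two sign combinations consolidate the initial preference towards the corresponding pole, the last two reverse it, in agreement with the case analysis above.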
Another possible choice of the function Φ is:
\[
\Phi(\xi) = |\xi|\,(1-\xi^2), \tag{5}
\]
which vanishes also at ξ = 0. Thus we are led to consider the equation
\[
\frac{d\xi_i}{dt} = (w_{\infty,i}-\alpha)\,|\xi_i|\,(1-\xi_i^2),
\]
whose solution reads
\[
\xi_i(t) = \frac{\xi_{0,i}}{\sqrt{\xi_{0,i}^2 + \bigl(1-\xi_{0,i}^2\bigr)\,e^{-2\,\mathrm{sgn}(\xi_{0,i})(w_{\infty,i}-\alpha)t}}}.
\]
Now the asymptotic trend of the preference can be summarised as follows.
(a) ∆ = 1, α =−0.3 (b) ∆ = 1, α = 0.3
Figure 2: The curves t7→ ξi(t) generated by the coupled system (2)-(3) with N= 50 agents and
P, Φ like in (4), (5) with ∆ = 1 and α=±0.3. The initial values w0,i,ξ0,i have been sampled
uniformly in the interval [−1,1]. The ODE system has been integrated numerically via a standard
fourth order Runge-Kutta method.
•For w∞,i < α:
–if ξ0,i <0 then ξi→ −1 for t→+∞;
–if ξ0,i >0 then ξi→0 for t→+∞.
•For w∞,i > α:
–if ξ0,i <0 then ξi→0 for t→+∞;
–if ξ0,i >0 then ξi→1 for t→+∞.
We observe that the large time behaviour of the preference is again a polarisation in poles coin-
ciding with the zeroes of the function Φ. Unlike the previous case, however, the presence of an
intermediate pole at ξ= 0 prevents a complete reversal of the initial preference when the latter has
opposite sign with respect to w∞,i −α. In such a situation, the agents simply become indecisive,
their preference tending indeed to zero.
In order to illustrate the actual coupled dynamics of (2)-(3), we solve numerically the coupled
system of equations with N= 50 agents and with the functions P, Φ given in (4), (5), respectively.
In Figure 2we present the curves t7→ ξi(t) in the case ∆ = 1 and for α=±0.3. Since
the agents reach a global consensus around the centrist opinion w= 0, cf. Figure 1(a), with a
leftward-biased perceived social opinion α < 0 we observe the polarisation of the preferences either
towards the indecisiveness ξ= 0, if the initial preference was in turn leftward-biased, i.e. ξ0,i <0,
or in ξ= 1, if the initial preference was rightward-biased, i.e. ξ0,i >0, cf. Figure 2(a). Conversely,
with a rightward-biased perceived social opinion α > 0 we observe indecisiveness if ξ0,i >0 and
consolidation in ξ=−1 if ξ0,i <0, cf. Figure 2(b).
Such rather simple dynamics may become more complex under the formation of multiple
opinion clusters. To exemplify this case, we consider now ∆ = 0.4, like in Figure 1(b), and again
the two cases α=±0.3, cf. Figure 3(a, b) along with also α=±0.6, cf. Figure 3(c, d). In this
case, simultaneous polarisations in ξ=±1 can also be observed, depending on the distribution of
the pairs (ξ0,i, w0,i ) at the initial time.
(a) ∆ = 0.4, α =−0.3 (b) ∆ = 0.4, α = 0.3
(c) ∆ = 0.4, α =−0.6 (d) ∆ = 0.4, α = 0.6
Figure 3: The same as in Figure 2but with ∆ = 0.4, cf. Figure 1(b), and α=±0.3 (top row),
α=±0.6 (bottom row).
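As a purely illustrative companion to these experiments, here is a minimal Python sketch (ours, not the authors' code) of the coupled system (2)-(3) with the bounded-confidence function (4) and Φ from (5); it uses a plain forward Euler step instead of the fourth-order Runge-Kutta method of the figures, and the parameter values and random seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N, Delta, alpha = 50, 0.4, 0.3
dt, T = 0.01, 50.0

w = rng.uniform(-1.0, 1.0, N)    # opinions w_i
xi = rng.uniform(-1.0, 1.0, N)   # preferences xi_i

def Phi(x):
    return np.abs(x) * (1.0 - x**2)           # choice (5)

for _ in range(int(T / dt)):
    diff = w[None, :] - w[:, None]            # w_j - w_i
    P = (np.abs(diff) <= Delta).astype(float) # bounded confidence (4)
    dw = (P * diff).sum(axis=1) / N           # consensus dynamics (3)
    dxi = (w - alpha) * Phi(xi)               # preference transport (2)
    w = w + dt * dw
    xi = xi + dt * dxi

print("final opinion clusters   :", np.unique(np.round(w, 1)))
print("final preference clusters:", np.unique(np.round(xi, 1)))
```

Depending on the initial sample, the preferences accumulate near the zeros of Φ, i.e. ξ = −1, 0, 1, while the opinions form the clusters dictated by ∆.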
3 Aggregate analysis of opinion dynamics
The discussion set forth in the previous section shows that it is in general quite hard to analyse
exactly the interplay between opinions and preferences from a strictly microscopic point of view.
Due to the severe dependence of the microscopic system on the particular initial state and tra-
jectory of each agent, the main difficulty is, as usual, to grasp the essential facts able to explain
the big picture, namely to depict the collective behaviour. For this reason, from this section we
move to a more aggregate analysis, which, starting from a description of opinion dynamics by
methods of statistical physics and kinetic theory, will finally lead us to macroscopic equations for
the preference dynamics written in terms of hydrodynamic parameters such as the density of the
agents and their mean opinion.
3.1 Microscopic binary interactions
In order to approach the opinion dynamics (3) from the point of view of kinetic theory, we need to
set up a consistent scheme of binary, i.e. pairwise, interactions among the agents. To this purpose,
inspired by [13], we consider (3) for just two agents, say i,j, and we discretise the differential
equation with the forward Euler formula during a small time step 0 < γ < 1. Setting
\[
w := w_i(t), \quad w_* := w_j(t), \quad w' := w_i(t+\gamma), \quad w'_* := w_j(t+\gamma)
\]
we obtain the binary rules
\[
\begin{aligned}
w' &= w + \gamma P(w,w_*)(w_*-w) + D(w)\eta,\\
w'_* &= w_* + \gamma P(w_*,w)(w-w_*) + D(w_*)\eta,
\end{aligned} \tag{6}
\]
where we have also added a random contribution, given by a centred random variable η, modelling
stochastic fluctuations induced by the self-thinking of the agents. Here, D(·)≥0 is an opinion-
dependent diffusion coefficient modulating the amplitude of the stochastic fluctuations, that is the
variance of η.
In general, the binary interactions (6) are such that
\[
\langle w' + w'_* \rangle = w + w_* + \gamma\bigl(P(w,w_*) - P(w_*,w)\bigr)(w_*-w), \tag{7}
\]
where ⟨·⟩ denotes the expectation with respect to the distribution of η. Hence the mean opinion
is in general not conserved on average in a single binary interaction unless P is symmetric, i.e.
P(w, w∗) = P(w∗, w) for all w, w∗ ∈ [−1, 1]. Furthermore, at leading order for γ small enough
we have
\[
\langle (w')^2 + (w'_*)^2 \rangle = w^2 + w_*^2 + 2\gamma\bigl(wP(w,w_*) - w_*P(w_*,w)\bigr)(w_*-w)
+ \bigl(D^2(w) + D^2(w_*)\bigr)\sigma^2 + o(\gamma), \tag{8}
\]
where σ2>0 denotes the variance of η. Therefore, in general, also the energy is not conserved on
average in a single binary interaction, not even for a symmetric function P.
Equations (7), (8) show that a particularly interesting case is when P is constant, for then
from (7) we deduce that the mean opinion is conserved in each binary interaction, while from (8)
we see that, at least in the absence of stochastic fluctuations (i.e. formally for σ² = 0), the average
energy is dissipated:
\[
\langle (w')^2 + (w'_*)^2 \rangle \leq w^2 + w_*^2.
\]
In order to be physically admissible, the interaction rules (6) have to be such that |w'|, |w'_*| ≤ 1
for |w|, |w∗| ≤ 1. Observing that
\[
|w'| = \bigl|(1-\gamma P(w,w_*))w + \gamma P(w,w_*)w_* + D(w)\eta\bigr|
\leq (1-\gamma P(w,w_*))|w| + \gamma P(w,w_*) + D(w)|\eta|,
\]
where we have used the fact that |w∗| ≤ 1, we see that a sufficient condition for |w'| ≤ 1 is
\[
D(w)|\eta| \leq (1-\gamma P(w,w_*))(1-|w|),
\]
which is satisfied if there exists a constant c > 0 such that
\[
\begin{cases}
|\eta| \leq c\,(1-\gamma P(w,w_*))\\[1mm]
c\,D(w) \leq 1-|w|,
\end{cases}
\qquad \forall\, w, w_* \in [-1,1]. \tag{9}
\]
Considering that P(w, w∗) ≤ 1 by assumption, the first condition can be further enforced by
requiring |η| ≤ c(1 − γ), which implies that η has to be chosen as a compactly supported random
variable. The second condition forces instead D(±1) = 0. Taking inspiration from [47], possible
choices are: D(w) = 1 − |w| and c = 1, which produces |η| ≤ 1 − γ; or D(w) = 1 − w² and c = 1/2,
which yields |η| ≤ (1 − γ)/2. Another less obvious option is
\[
D(w) = \sqrt{\bigl(1-(1+\gamma^s)w^2\bigr)_+} \quad\text{and}\quad c = \frac{\gamma^{s/2}}{\sqrt{1+\gamma^s}}, \qquad s > 0, \tag{10}
\]
where (·)_+ := max{0, ·} denotes the positive part, which produces |η| ≤ γ^{s/2}(1 − γ)/√(1 + γ^s). This function
D converges uniformly to √(1 − w²) in [−1, 1] when γ → 0⁺. Notice, however, that such a uniform
limit does not comply with (10) regardless of the choice of c > 0, because of the infinite derivative at
w = ±1.
Exactly the same considerations hold true for the second interaction rule in (6).
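As a concrete illustration of the admissibility discussion (our own sketch, not taken from the paper), the following Python fragment samples binary interactions (6) with P ≡ 1, D(w) = 1 − |w|, c = 1 and η drawn uniformly in [−(1 − γ), 1 − γ], so that (9) holds; independent noise samples are used for the two agents. It then checks empirically that the post-interaction opinions stay in [−1, 1].

```python
import numpy as np

rng = np.random.default_rng(1)
gamma = 0.1

def D(w):
    return 1.0 - np.abs(w)                    # diffusion coefficient, D(+-1) = 0

def binary_interaction(w, w_star):
    """Interaction rule (6) with P = 1 and compactly supported noise |eta| <= 1 - gamma."""
    eta = rng.uniform(-(1 - gamma), 1 - gamma, size=np.shape(w))
    eta_star = rng.uniform(-(1 - gamma), 1 - gamma, size=np.shape(w_star))
    w_new = w + gamma * (w_star - w) + D(w) * eta
    w_star_new = w_star + gamma * (w - w_star) + D(w_star) * eta_star
    return w_new, w_star_new

w = rng.uniform(-1, 1, 10_000)
w_star = rng.uniform(-1, 1, 10_000)
w_new, w_star_new = binary_interaction(w, w_star)
print("post-interaction opinions stay in [-1, 1]:",
      bool((np.abs(w_new) <= 1).all() and (np.abs(w_star_new) <= 1).all()))
```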
3.2 Kinetic description and steady states
Introducing the distribution function f=f(t, w) : R+×[−1,1] →R+, such that f(t, w)dw is the
fraction of agents with opinion in [w, w +dw] at time t, the binary rules (6) can be encoded in a
Boltzmann-type kinetic equation, which, in weak form, reads:
\[
\frac{d}{dt}\int_{-1}^{1}\varphi(w)f(t,w)\,dw
= \frac{1}{2}\Bigl\langle \int_{-1}^{1}\int_{-1}^{1}\bigl[\varphi(w')+\varphi(w'_*)-\varphi(w)-\varphi(w_*)\bigr]f(t,w)f(t,w_*)\,dw\,dw_* \Bigr\rangle, \tag{11}
\]
where ϕ : [−1, 1] → R is an arbitrary test function, i.e. any observable quantity depending on the
microscopic state of the agents. Choosing ϕ(w) = 1, we obtain that the integral of f with respect
to w is constant in time, i.e. that the total number of agents is conserved. This also implies that,
up to normalisation at the initial time, f can be thought of as a probability density for every
t > 0. Choosing instead ϕ(w) = w we discover
\[
\frac{d}{dt}\int_{-1}^{1} w\,f(t,w)\,dw
= \frac{\gamma}{2}\int_{-1}^{1}\int_{-1}^{1}\bigl(P(w,w_*)-P(w_*,w)\bigr)(w_*-w)f(t,w)f(t,w_*)\,dw\,dw_*, \tag{12}
\]
therefore the mean opinion M₁ := ∫_{−1}^{1} w f(t, w) dw is either conserved in time, if P is symmetric so
that the right-hand side of the previous equation vanishes, or not conserved, if P is non-symmetric.
This difference has important consequences on the steady distributions of (11), which in turn will
impact considerably on the equations describing the formation of the preferences. Therefore, in
what follows we investigate it in some detail.
3.2.1 Symmetric P
The prototype of a symmetric P is the constant function P ≡ 1. In this case, from (11) we
can recover an explicit expression of the asymptotic distribution function at least in the so-called
quasi-invariant regime, i.e. the one in which the variation of the opinion in each binary interaction
is small. To describe such a regime, we scale the parameters γ, σ² in (6) as
\[
\gamma \to \epsilon\gamma, \qquad \sigma^2 \to \epsilon\sigma^2, \tag{13}
\]
where ε > 0 is an arbitrarily small scaling coefficient. In parallel, in order to study the large time
behaviour of the system, we introduce the new time scale τ := εt and we scale the distribution
function as g(τ, w) := f(τ/ε, w). In this way, it is clear that, at every fixed τ > 0 and in the limit
ε → 0⁺, g describes the large time trend of f. Since ∂τg = (1/ε)∂tf, substituting in (11) and using
the symmetry of the interactions (6) with P ≡ 1 we see that the equation satisfied by g is
\[
\frac{d}{d\tau}\int_{-1}^{1}\varphi(w)g(\tau,w)\,dw
= \frac{1}{\epsilon}\Bigl\langle \int_{-1}^{1}\int_{-1}^{1}\bigl[\varphi(w')-\varphi(w)\bigr]g(\tau,w)g(\tau,w_*)\,dw\,dw_* \Bigr\rangle. \tag{14}
\]
Now, because of the scaling (13), if ϕ is sufficiently smooth then the difference ⟨ϕ(w') − ϕ(w)⟩
is small and can be expanded about w to give:
\[
\langle \varphi(w')-\varphi(w) \rangle
= \varphi'(w)\langle w'-w \rangle + \frac{1}{2}\varphi''(w)\langle (w'-w)^2 \rangle + \frac{1}{6}\varphi'''(\bar w)\langle (w'-w)^3 \rangle,
\]
where min{w, w'} < w̄ < max{w, w'}. Plugging into (14) this produces
\[
\frac{d}{d\tau}\int_{-1}^{1}\varphi(w)g(\tau,w)\,dw
= \gamma\int_{-1}^{1}\varphi'(w)(m-w)g(\tau,w)\,dw
+ \frac{\sigma^2}{2}\int_{-1}^{1}\varphi''(w)D^2(w)g(\tau,w)\,dw + R_\varphi(g,g),
\]
Figure 4: Asymptotic opinion distribution (16) with mean m= 0.25 and four different values of
the parameter λ.
where we have denoted by m ∈ [−1, 1] the constant mean opinion and where Rϕ(g, g) is a remainder
such that |Rϕ(g, g)| = O(√ε) under the assumption that η has finite third order moment, i.e.
⟨|η|³⟩ < +∞, cf. [47] for details. Hence for ε → 0⁺ it results Rϕ(g, g) → 0 and we get
\[
\frac{d}{d\tau}\int_{-1}^{1}\varphi(w)g(\tau,w)\,dw
= \gamma\int_{-1}^{1}\varphi'(w)(m-w)g(\tau,w)\,dw
+ \frac{\sigma^2}{2}\int_{-1}^{1}\varphi''(w)D^2(w)g(\tau,w)\,dw.
\]
Integrating back by parts the terms on the right-hand side and assuming ϕ(±1) = ϕ'(±1) = 0,
due to the arbitrariness of ϕ this can be recognised as a weak form of the Fokker-Planck equation
\[
\partial_\tau g = \frac{\sigma^2}{2}\partial_w^2\bigl(D^2(w)g\bigr) + \gamma\,\partial_w\bigl((w-m)g\bigr). \tag{15}
\]
Fixing¹ D(w) = √(1 − w²), the unique asymptotic (τ → +∞) solution with unitary mass, say
g∞(w), to (15) reads
\[
g_\infty(w) = \frac{(1+w)^{\frac{1+m}{\lambda}-1}\,(1-w)^{\frac{1-m}{\lambda}-1}}{2^{\frac{2}{\lambda}-1}\,B\!\left(\frac{1+m}{\lambda},\,\frac{1-m}{\lambda}\right)}, \qquad \lambda := \frac{\sigma^2}{\gamma}, \tag{16}
\]
where B(·, ·) denotes the Beta function. Notice that such a g∞ is a Beta probability density
function on the interval [−1, 1]. Using the known formulas for the moments of Beta random
variables, we easily check that its mean is indeed m and we compute its energy as
\[
M_{2,\infty} := \int_{-1}^{1} w^2 g_\infty(w)\,dw = \frac{2m^2+\lambda}{2+\lambda}. \tag{17}
\]
In Figure 4we illustrate some typical trends of the distribution function (16) with positive
mean, m= 0.25 in this example. We observe that, depending on the value of λ, such a distribution
may depict a transition from a strong consensus around the mean (λ= 0.1) to a milder consensus
(λ= 0.4) and further to a radicalisation in the extreme opinion w= 1 (λ= 1) up to the appearance
of a double radicalisation in the two opposite extreme opinions w=±1 (λ= 2).
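The formulas (16)-(17) are easy to verify numerically. The short sketch below (an illustration of ours, relying on scipy) evaluates g∞ as a rescaled Beta density on [−1, 1] and checks by quadrature that its mass is 1, its mean is m and its energy equals (2m² + λ)/(2 + λ).

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import beta

m, lam = 0.25, 0.4
a, b = (1 + m) / lam, (1 - m) / lam        # Beta exponents appearing in (16)

def g_inf(w):
    # Beta(a, b) density on [0, 1], mapped to [-1, 1] through w = 2u - 1
    return beta.pdf((w + 1) / 2, a, b) / 2

mass, _ = quad(g_inf, -1, 1)
mean, _ = quad(lambda w: w * g_inf(w), -1, 1)
energy, _ = quad(lambda w: w**2 * g_inf(w), -1, 1)
print("mass  :", round(mass, 6))
print("mean  :", round(mean, 6), " (expected m =", m, ")")
print("energy:", round(energy, 6), " (expected", (2 * m**2 + lam) / (2 + lam), ")")
```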
3.2.2 Non-symmetric P
A natural prototype of a non-symmetric function Pis a linear perturbation of a constant P
depending on only one of the two variables w,w∗. More specifically, we consider
P(w, w∗) = P(w∗) = pw∗+q, (18)
¹In view of the scaling (13), as ε → 0⁺ the function (10) converges uniformly to √(1 − w²), which can therefore
be chosen as diffusion coefficient in the Fokker-Planck equation (15) after performing the quasi-invariant limit.
where p, q ∈Rhave to be chosen in such a way that pw∗+q∈[0,1] for all w∗∈[−1,1]. This is
obtained if
0≤q≤1,|p| ≤ min{q, 1−q}.
With respect to model (6), such a function Pdescribes a situation in which agents with opinion
w∗>0 are more persuasive than agents with opinion w∗<0 if p > 0 and vice versa if p < 0.
Using (18) in (12) we obtain that the evolution of the mean opinion M₁ = M₁(t) is ruled by
\[
\frac{dM_1}{dt} = \frac{p\gamma}{2}\int_{-1}^{1}\int_{-1}^{1}(w_*-w)^2 f(t,w)f(t,w_*)\,dw\,dw_*,
\]
whence we see that the sign of the time derivative dM₁/dt coincides with that of p. Thus, if p > 0
the mean opinion is non-decreasing, while if p < 0 the mean opinion is non-increasing. Continuing
the previous calculation, we further find:
\[
\frac{dM_1}{dt} = p\gamma\bigl(M_2 - M_1^2\bigr),
\]
which indicates that at the steady state (t → +∞) it results invariably M₂,∞ = M²₁,∞. This
implies that the asymptotic distribution has zero variance, thus that it is necessarily a Dirac delta
centred in the asymptotic mean opinion, i.e. f∞(w) = δ(w − M₁,∞). Plugging this into (11) we obtain
\[
\bigl\langle \varphi\bigl(M_{1,\infty} + D(M_{1,\infty})\eta\bigr) \bigr\rangle - \varphi(M_{1,\infty}) = 0,
\]
which has to hold for every test function ϕ. As a consequence, we deduce D(M₁,∞) = 0, whence
M₁,∞ = ±1 if the only zeroes of the diffusion coefficient are w = ±1 like in the examples considered
in Section 3.1.
In conclusion, with the non-symmetric function Pgiven by (18) we fully characterise the
asymptotic distribution function as:
•f∞(w) = δ(w+ 1) if p < 0, the mean opinion decreasing from its initial value to M1,∞=−1;
•f∞(w) = δ(w−1) if p > 0, the mean opinion increasing from its initial value to M1,∞= 1.
The considerations above can be generalised to the following function P:
\[
P(w, w_*) = rw + pw_* + q, \tag{19}
\]
where p ≠ r, so that P is non-symmetric, and where the coefficients p, q, r ∈ R have to be chosen
in such a way that rw + pw∗ + q ∈ [0, 1] for all (w, w∗) ∈ [−1, 1]². Repeating the previous
calculations, we conclude that:
•f∞(w) = δ(w+ 1) if p−r < 0; in this case, the mean opinion decreases from its initial value
to M1,∞=−1;
•f∞(w) = δ(w−1) if p−r > 0; in this case, the mean opinion increases from its initial value
to M1,∞= 1.
From the modelling point of view, we may interpret the difference p−ras a balance between
the persuasion ability of the agents, expressed by p, and their tendency to be persuaded, expressed
by r. Notice indeed that for p= 0 and r6= 0 we obtain the mirror case of (18), in which agents
with opinion w > 0 are more inclined to change their opinion than agents with opinion w < 0 if
r > 0 and vice versa if r < 0.
The discussion above clearly shows that an arbitrarily small perturbation of a constant P, by
destroying the conservation of the mean opinion, may drag the system towards asymptotic config-
urations much less variegated than (16) independently of the parameters γ,σ2of the interactions.
4 Macroscopic description of preference formation
According to model (2)-(3), the opinions of the agents evolve through mutual interactions inde-
pendent of the preferences; on the other hand, the preference of each agent is transported in time
by his/her opinion. This suggests that a proper way to account for the interplay between opinion
and preference in an aggregate manner is by means of an inhomogeneous Boltzmann-type kinetic
equation, whose transport term describes the evolution of the preference and whose “collisional”
term accounts simultaneously for the changes in the opinions.
4.1 Inhomogeneous Boltzmann-type description and hydrodynamics
A Boltzmann-type description of the opinion dynamics in the form of binary interactions (6)
coupled to the transport of the preference (2) is obtained by introducing the kinetic distribution
f=f(t, ξ, w) : R+×[−1,1] ×[−1,1] →R+,
such that f(t, ξ, w)dξ dw is the proportion of agents that at time thave a preference in [ξ, ξ +dξ]
and an opinion in [w, w + dw]. The distribution function f satisfies the following weak Boltzmann-type equation:
\[
\partial_t\int_{-1}^{1}\varphi(w)f(t,\xi,w)\,dw
+ \partial_\xi\Bigl(\Phi(\xi)\int_{-1}^{1}(w-\alpha)\varphi(w)f(t,\xi,w)\,dw\Bigr)
= \frac{1}{2}\Bigl\langle \int_{-1}^{1}\int_{-1}^{1}\bigl[\varphi(w')+\varphi(w'_*)-\varphi(w)-\varphi(w_*)\bigr]f(t,\xi,w)f(t,\xi,w_*)\,dw\,dw_* \Bigr\rangle, \tag{20}
\]
where the transport term (second term on the left-hand side) has been written taking into account
that, according to (2), the transport velocity of the preference ξ is (w − α)Φ(ξ) and where w', w'_*
on the right-hand side are given by (6).
From the distribution function f, by integration with respect to the opinion w, we can compute
macroscopic quantities in the space of the preferences, such as the density of the agents with
preference ξ at time t:
\[
\rho(t,\xi) := \int_{-1}^{1} f(t,\xi,w)\,dw
\]
and the mean opinion of the agents with preference ξ at time t:
\[
m(t,\xi) := \frac{1}{\rho(t,\xi)}\int_{-1}^{1} w\,f(t,\xi,w)\,dw.
\]
The interest in (20) is that it allows one to obtain evolution equations directly for the quantities
ρ,m, provided one is able to characterise the large time statistical trends of the opinions, like in
Section 3. The underlying key idea is to consider a so-called hydrodynamic regime, in which the
opinions reach a local equilibrium much more quickly than the preferences, pretty much in the
spirit of the microscopic investigations performed in Section 2.
Let 0 < δ ≪ 1 be a small parameter, which we use to define a macroscopic time scale τ := δt,
i.e. the time scale of the evolution of the preferences, which then turns out to be much larger, viz.
slower, than the characteristic one of the binary interactions among the agents. If we want that,
on this new scale, the preference dynamics remain the same, from (2) we see that we need to scale
simultaneously the transport speed of the preference by letting Φ(ξ)→δΦ(ξ).
Let g(τ, ξ, w) := f(τ/δ, ξ, w), whence ∂τg = (1/δ)∂tf. Plugging into (20) we find that g satisfies
the equation
\[
\partial_\tau\int_{-1}^{1}\varphi(w)g(\tau,\xi,w)\,dw
+ \partial_\xi\Bigl(\Phi(\xi)\int_{-1}^{1}(w-\alpha)\varphi(w)g(\tau,\xi,w)\,dw\Bigr)
= \frac{1}{2\delta}\Bigl\langle \int_{-1}^{1}\int_{-1}^{1}\bigl[\varphi(w')+\varphi(w'_*)-\varphi(w)-\varphi(w_*)\bigr]g(\tau,\xi,w)g(\tau,\xi,w_*)\,dw\,dw_* \Bigr\rangle. \tag{21}
\]
Basically, the aforesaid scaling produces the coefficient 1/δ in front of the interaction term,
hence δis analogous to the Knudsen number in classical fluid dynamics. Since we are assuming
that δis small, a hydrodynamic regime is justified and, in particular, it can be described by a
splitting of (21), cf. [26], totally analogous to the one often adopted in the numerical solution of
the inhomogeneous Boltzmann equation, see e.g. [22,23,39]. One first solves the fast interactions:
\[
\partial_\tau\int_{-1}^{1}\varphi(w)g(\tau,\xi,w)\,dw
= \frac{1}{2\delta}\Bigl\langle \int_{-1}^{1}\int_{-1}^{1}\bigl[\varphi(w')+\varphi(w'_*)-\varphi(w)-\varphi(w_*)\bigr]g(\tau,\xi,w)g(\tau,\xi,w_*)\,dw\,dw_* \Bigr\rangle, \tag{22}
\]
which, owing to the high frequency 1/δ, reach quickly an equilibrium described by a local (in ξ and
τ) asymptotic distribution function playing essentially the role of a local Maxwellian. Notice indeed
that (22) is actually an equation on the time scale of the microscopic interactions, because τ can
be scaled back to t using the factor 1/δ. Next, one transports such a local equilibrium distribution
according to the remaining terms of (21) on the slower hydrodynamic scale:
\[
\partial_\tau\int_{-1}^{1}\varphi(w)g(\tau,\xi,w)\,dw
+ \partial_\xi\Bigl(\Phi(\xi)\int_{-1}^{1}(w-\alpha)\varphi(w)g(\tau,\xi,w)\,dw\Bigr) = 0. \tag{23}
\]
Due to (22), and taking the definition of ρ into account, the local “Maxwellian” can be given the
form g(τ, ξ, w) = ρ(τ, ξ)g∞(w), where g∞ is one of the asymptotic opinion distribution functions
found in Section 3. This is the distribution transported by (23), hence we finally obtain
\[
\partial_\tau\Bigl(\rho(\tau,\xi)\int_{-1}^{1}\varphi(w)g_\infty(w)\,dw\Bigr)
+ \partial_\xi\Bigl(\Phi(\xi)\rho(\tau,\xi)\int_{-1}^{1}(w-\alpha)\varphi(w)g_\infty(w)\,dw\Bigr) = 0 \tag{24}
\]
and we can use the knowledge of g∞to compute explicitly the remaining integral terms.
4.2 First order hydrodynamic models
Let us consider at first the case of the non-symmetric functions P(18), (19) discussed in Sec-
tion 3.2.2. The asymptotic opinion distribution is either g∞(w) = δ(w+ 1) or g∞(w) = δ(w−1),
depending on the asymmetry of P. Plugging into (24) along with the choice ϕ(w) = 1 we find
therefore either
\[
\partial_\tau\rho - (1+\alpha)\,\partial_\xi(\Phi(\xi)\rho) = 0 \tag{25}
\]
or
\[
\partial_\tau\rho + (1-\alpha)\,\partial_\xi(\Phi(\xi)\rho) = 0. \tag{26}
\]
In both cases, we get a self-consistent equation for the sole density ρand we speak thus of first
order hydrodynamic model.
Unlike typical conservation laws, in (25) and (26) the flux does not only depend on the variable
ξthrough the conserved quantity ρbut also explicitly through the function Φ. An analogous
characteristic is found, for instance, in conservation-law-based macroscopic models of vehicular
traffic featuring different flux functions in different roads, see [31].
We observe that both (25) and (26) admit the family of stationary distributional solutions
\[
\rho_\infty(\xi) = \sum_{k} \rho_k\,\delta(\xi-\xi_k), \qquad \rho_k \geq 0, \tag{27}
\]
where the ξk’s are the zeroes of the function Φ. This indicates that models (25) and (26) reproduce
the asymptotic polarisation of the agents in the preference poles individuated by the points where
Φ vanishes. The coefficients ρkrepresent the masses concentrating in each pole. Furthermore, (25)
describes invariably a leftward transport of ρin the space of the preferences, because −(1 + α)<0
for all α∈(−1,1] (if α=−1 the density is simply not transported). Conversely, (26) describes
invariably a rightward transport of ρ, since 1 −α > 0 for all α∈[−1,1) (now the density is not
transported if α= 1).
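For illustration (this is our own quick experiment, not a scheme from the paper), the rightward-transport model (26) can be discretised with a first-order upwind finite-volume method; since Φ vanishes at its zeros, the numerical flux is zero there and the density piles up just to the left of the poles it reaches, here ξ = 0 and ξ = 1 for the choice (5). The parameter values below are arbitrary.

```python
import numpy as np

alpha = 0.3
Nx = 200
dxi = 2.0 / Nx
edges = -1.0 + dxi * np.arange(Nx + 1)        # cell edges in [-1, 1]
centres = 0.5 * (edges[:-1] + edges[1:])
rho = np.full(Nx, 0.5)                        # uniform initial density, total mass 1
dt, T = 0.02, 60.0                            # CFL: dt * max speed / dxi < 1

def Phi(x):
    return np.abs(x) * (1.0 - x**2)           # choice (5): zeros at xi = -1, 0, 1

c_edge = (1.0 - alpha) * Phi(edges)           # transport speed at the edges, >= 0

for _ in range(int(T / dt)):
    flux = np.zeros(Nx + 1)
    flux[1:] = c_edge[1:] * rho               # upwind flux (speed >= 0); boundary fluxes vanish
    rho = rho - dt / dxi * (flux[1:] - flux[:-1])

print("total mass       :", rho.sum() * dxi)
print("mass near xi = 0 :", rho[np.abs(centres) < 0.05].sum() * dxi)
print("mass near xi = 1 :", rho[centres > 0.95].sum() * dxi)
```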
4.3 Second order hydrodynamic model
We now consider the symmetric case P≡1 discussed in Section 3.2.1, which produces the asymp-
totic opinion distribution g∞given by (16). Notice that this distribution is parametrised by the
(local) mean opinion m=m(τ, ξ), because the latter is conserved by the opinion dynamics. This
implies that, if we plug such a g∞into (24) together with the choice ϕ(w) = 1, we do not get a
self-consistent equation for the density ρ. In fact, we find:
\[
\partial_\tau\rho + \partial_\xi\bigl(\Phi(\xi)\rho(m-\alpha)\bigr) = 0,
\]
with both hydrodynamic parameters ρ, m unknown. In order to close the macroscopic equations,
we need a further equation relating ρ and m, which we can obtain from (24) with ϕ(w) = w and
recalling also (17):
\[
\partial_\tau(\rho m) + \partial_\xi\Bigl(\Phi(\xi)\rho\Bigl(\frac{2m^2+\lambda}{2+\lambda}-\alpha m\Bigr)\Bigr) = 0.
\]
On the whole, we get the second order (i.e., composed of a self-consistent pair of equations)
hydrodynamic model
\[
\begin{cases}
\partial_\tau\rho + \partial_\xi\bigl(\Phi(\xi)\rho(m-\alpha)\bigr) = 0\\[2mm]
\partial_\tau(\rho m) + \partial_\xi\Bigl(\Phi(\xi)\rho\Bigl(\dfrac{2m^2+\lambda}{2+\lambda}-\alpha m\Bigr)\Bigr) = 0,
\end{cases} \tag{28}
\]
where the parameter λ=σ2/γ, which here enters the game through the energy of the stationary
opinion distribution (16), is reminiscent of the self-thinking (diffusion) of the agents.
Also in this case, (27) is a family of admissible stationary distributional solutions. Hence
model (28) can in turn reproduce the asymptotic polarisation of the preferences already observed
in the microscopic model.
It is useful to ascertain under which conditions system (28) is hyperbolic in the natural state
space {(ρ, m) ∈ R₊ × [−1, 1]}. To this purpose, we rewrite it in the quasilinear matrix form
\[
\partial_\tau U + \Phi(\xi)A(U)\,\partial_\xi U + \Phi'(\xi)F(U) = 0,
\]
where U := (ρ, m)ᵀ, A(U) is the matrix
\[
A(U) := \begin{pmatrix} m-\alpha & \rho \\[1mm] \dfrac{\lambda(1-m^2)}{(\lambda+2)\rho} & \dfrac{(2-\lambda)m}{\lambda+2}-\alpha \end{pmatrix}
\]
and F(U) denotes lower order terms, which it is not important to write explicitly. Since Φ is real-valued, system (28) is hyperbolic if both eigenvalues of A(U) are real. To check this, we compute
the discriminant ∆(U) of the characteristic polynomial of A(U):
\[
\Delta(U) := \operatorname{tr}^2 A(U) - 4\det A(U) = \frac{4\lambda}{\lambda+2}\Bigl(1 - \frac{2m^2}{\lambda+2}\Bigr).
\]
Since m∈[−1,1], thus m2∈[0,1], and λ≥0, we easily see that ∆(U) is always non-negative.
Therefore, we conclude:
Proposition 4.1. System (28) is hyperbolic in the whole state space {(ρ, m) ∈ R₊ × [−1, 1]} for
every choice of the parameters α ∈ [−1, 1], λ ≥ 0 and for every function Φ : [−1, 1] → [0, 1].
4.4 General first and second order hydrodynamic models
If the asymptotic opinion distribution g∞ is not known analytically, like e.g. in the significant
case (4), the hydrodynamic models can still be written from (24), although only in a semi-analytical form.
Assume that the microscopic dynamics (6) do not conserve the mean opinion. Then the sole
conserved quantity is the mass of the agents and from (24) with ϕ(w) = 1 we obtain the first order model
\[
\partial_\tau\rho + (M_{1,\infty}-\alpha)\,\partial_\xi(\Phi(\xi)\rho) = 0 \tag{29}
\]
in the unknown ρ = ρ(τ, ξ), where M₁,∞ := ∫_{−1}^{1} w g∞(w) dw is the asymptotic mean opinion. The
latter may be computed e.g. from (12), by means of an appropriate numerical approach.
Conversely, if the microscopic dynamics (6) conserve the mean opinion then g∞ is parametrised
by m and from (24) with ϕ(w) = 1, w we obtain the second order model
\[
\begin{cases}
\partial_\tau\rho + \partial_\xi\bigl(\Phi(\xi)\rho(m-\alpha)\bigr) = 0\\[1mm]
\partial_\tau(\rho m) + \partial_\xi\bigl(\Phi(\xi)\rho(M_{2,\infty}(m)-\alpha m)\bigr) = 0
\end{cases} \tag{30}
\]
in the unknowns ρ = ρ(τ, ξ), m = m(τ, ξ). Here, M₂,∞(m) := ∫_{−1}^{1} w² g∞(w) dw is the energy of
the asymptotic opinion distribution, expressed as a function of the conserved quantity m.
The precise calculation of M2,∞requires, in general, an accurate numerical reconstruction of
g∞. The latter is a stationary solution to the Fokker-Planck equation
\[
\partial_\tau g = \frac{\sigma^2}{2}\partial_w^2\bigl(D^2(w)g\bigr) + \gamma\,\partial_w\bigl(B[g]g\bigr), \tag{31}
\]
where
\[
B[g](\tau,w) := \int_{-1}^{1} P(w,w_*)(w-w_*)\,g(\tau,w_*)\,dw_*, \tag{32}
\]
which is obtained in the quasi-invariant regime starting from the binary interactions (6) with
a symmetric but not necessarily constant compromise function P. In particular, the following
implicit representation of g∞ can be given:
\[
g_\infty(w) = \frac{C}{D^2(w)}\exp\Bigl(-\frac{2}{\lambda}\int \frac{B[g_\infty](w)}{D^2(w)}\,dw\Bigr), \tag{33}
\]
where C > 0 is a normalisation constant and the integral on the right-hand side denotes any
antiderivative of the function w ↦ B[g∞](w)/D²(w). For instance, if P is the function (19) with
\[
r = p, \qquad 0 \leq q \leq 1, \qquad |p| \leq \tfrac{1}{2}\min\{q, 1-q\},
\]
so that P is symmetric and P(w, w∗) ∈ [0, 1] for all (w, w∗) ∈ [−1, 1]², then from (33) we find
the semi-explicit expression
\[
g_\infty(w) = C\,e^{\frac{2p}{\lambda}w}\,(1+w)^{\frac{q(1+m)-p(1-M_{2,\infty})}{\lambda}-1}\,(1-w)^{\frac{q(1-m)+p(1-M_{2,\infty})}{\lambda}-1},
\]
which brings the calculation of M2,∞ back to the numerical solution of the non-linear system of equations
\[
\int_{-1}^{1} g_\infty(w)\,dw = 1, \qquad \int_{-1}^{1} w^2 g_\infty(w)\,dw = M_{2,\infty},
\]
parametrised by m.
In general, however, we observe that the type of dependence of M2,∞ on m valid for P ≡ 1,
cf. (17), is somewhat paradigmatic. In fact, let us consider (14) in the quasi-invariant limit ε → 0⁺
for binary interactions (6) with a symmetric P. Fixing D(w) = √(1 − w²), we obtain the following
equation for M₂:
\[
\frac{dM_2}{d\tau} = 2\gamma\int_{-1}^{1}\int_{-1}^{1} w(w_*-w)P(w,w_*)g(\tau,w)g(\tau,w_*)\,dw\,dw_* + \sigma^2(1-M_2).
\]
Set a := inf_{w,w∗∈[−1,1]} P(w, w∗), 0 ≤ a ≤ 1. Then
\[
-(2+\lambda)M_2 + 2am^2 + \lambda \;\leq\; \frac{1}{\gamma}\frac{dM_2}{d\tau} \;\leq\; -(2a+\lambda)M_2 + 2m^2 + \lambda,
\]
which produces asymptotically
\[
\frac{2am^2+\lambda}{2+\lambda} \;\leq\; M_{2,\infty} \;\leq\; \frac{2m^2+\lambda}{2a+\lambda}.
\]
This suggests that a perhaps rough but possibly useful approximation of M2,∞, to be used in (30),
is the average of these lower and upper bounds, i.e.:
\[
M_{2,\infty} \approx \Bigl(\frac{a}{\lambda+2} + \frac{1}{2a+\lambda}\Bigr)m^2 + \frac{\lambda(1+a+\lambda)}{(\lambda+2)(2a+\lambda)},
\]
which for a = 0, like e.g. in case (4), yields M2,∞ ≈ m²/λ + (1 + λ)/(λ + 2).
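A tiny sketch of ours, based on the bounds reconstructed above (so the formulas inherit their approximate character), which evaluates the lower and upper estimates of M2,∞ and their average for a few values of a; for a = 1 the two bounds collapse onto (17), as expected.

```python
def M2_bounds(m, lam, a):
    """Asymptotic bounds on the energy and their average; a = inf P in [0, 1]."""
    lower = (2 * a * m**2 + lam) / (2 + lam)
    upper = (2 * m**2 + lam) / (2 * a + lam)
    return lower, upper, 0.5 * (lower + upper)

for a in (0.0, 0.5, 1.0):
    lo, up, avg = M2_bounds(m=0.25, lam=0.4, a=a)
    print(f"a = {a:.1f}:  {lo:.4f} <= M2_inf <= {up:.4f},  average ~ {avg:.4f}")
```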
5 Numerical tests
In this section we exemplify, by means of several numerical tests, the main features of the formation
of preferences at the kinetic and hydrodynamic scales as described by the models presented in the
previous sections.
The numerical approach is essential, in particular, to investigate the cases in which the com-
promise function Pdoes not allow for an explicit computation of the asymptotic opinion distri-
bution g∞. Therefore, first we will briefly review Structure Preserving (SP) numerical methods,
which are able to capture the large time solution to possibly non-local Fokker-Planck equations
with non-constant diffusion, such as those introduced in Sections 3.2.1 and 4.4, see [42,43]. Next,
we will compare the large time distributions so computed with those obtained from the numer-
ical solution of the original Boltzmann-type equation (14) in the quasi-invariant limit (ε ≪ 1) by
means of classical Monte Carlo (MC) methods for kinetic equations [23,40]. After validating in
this way the accuracy of the numerical solver for the sole opinion dynamics (homogeneous kinetic
model), we will investigate the inhomogeneous kinetic model (20) as well as the hydrodynamic
models derived therefrom.
5.1 MC and SP methods for the homogeneous kinetic equation (14)
We begin by rewriting the Boltzmann-type equation (14) in strong form:
\[
\partial_\tau g = \frac{1}{\epsilon}\bigl(Q^+(g,g) - g\bigr), \tag{34}
\]
where Q⁺ is the gain part of the kinetic collision operator:
\[
Q^+(g,g)(\tau,w) := \int_{-1}^{1} \frac{1}{{}'J}\,g(\tau,{}'w)\,g(\tau,{}'w_*)\,dw_*.
\]
Here, ('w, 'w∗) are the pre-interaction opinions generating the post-interaction opinions (w, w∗)
according to the binary interaction rule (6) and 'J is the Jacobian of the transformation from the
former to the latter.
To compute the solution of (34), we adopt a direct MC scheme based on the Nanbu algorithm
for Maxwellian molecules [40]. We introduce a uniform time grid τⁿ := nΔτ with fixed step
Δτ > 0 and we denote gⁿ(w) := g(τⁿ, w). A forward discretisation of (34) on such a mesh reads
\[
g^{n+1} = \Bigl(1 - \frac{\Delta\tau}{\epsilon}\Bigr)g^n + \frac{\Delta\tau}{\epsilon}\,Q^+(g^n,g^n). \tag{35}
\]
From (34), owing to mass conservation, we see that ∫_{−1}^{1} Q⁺(g, g)(τ, w) dw = 1 for all τ > 0 if
∫_{−1}^{1} g(0, w) dw = 1, therefore Q⁺(g, g)(τ, ·) can be regarded as a probability density function at
all times. From (35), under the restriction Δτ ≤ ε, we obtain therefore that gⁿ⁺¹ is a convex
combination of two probability density functions and is therefore in turn a probability density
function. The probabilistic interpretation of (35) is clear: with probability Δτ/ε any two particles
interact during the time step Δτ; with complementary probability 1 − Δτ/ε they do not. This is
the basis on which to ground an MC-type numerical method for the approximate solution of (34).
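The following Python fragment is a simplified Nanbu-Babovsky-style realisation of this idea (ours, not necessarily the exact algorithm of [40]): particles are paired at random and each pair interacts with probability Δτ/ε using the scaled binary rule (6) with the bounded-confidence function (4). Sample size, parameters and seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, eps, gam = 10_000, 1e-2, 1.0        # particles, quasi-invariant parameter, gamma
sigma2, Delta = 1e-3 * gam, 1.0        # so that lambda = sigma2 / gam = 1e-3
dtau, T = 0.5 * eps, 20.0              # time step satisfying dtau <= eps

w = rng.uniform(-1.0, 1.0, N)

def D(x):
    return 1.0 - np.abs(x)

for _ in range(int(T / dtau)):
    idx = rng.permutation(N)
    i, j = idx[: N // 2], idx[N // 2:]
    mask = rng.random(N // 2) < dtau / eps          # pairs that interact in this step
    ii, jj = i[mask], j[mask]
    P = (np.abs(w[jj] - w[ii]) <= Delta).astype(float)
    g, s = eps * gam, np.sqrt(eps * sigma2)         # scaled parameters, cf. (13)
    eta_i = rng.uniform(-1, 1, ii.size) * np.sqrt(3) * s   # centred, variance eps*sigma2
    eta_j = rng.uniform(-1, 1, jj.size) * np.sqrt(3) * s
    wi = w[ii] + g * P * (w[jj] - w[ii]) + D(w[ii]) * eta_i
    wj = w[jj] + g * P * (w[ii] - w[jj]) + D(w[jj]) * eta_j
    w[ii], w[jj] = np.clip(wi, -1, 1), np.clip(wj, -1, 1)  # clip is only a numerical safeguard

print("sample mean opinion          :", w.mean())
print("fraction within 0.1 of mean  :", np.mean(np.abs(w - w.mean()) < 0.1))
```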
However, it is in general numerically demanding to obtain from (35) an accurate reconstruction
of the asymptotic distribution g∞. To obviate this difficulty, one can take advantage of the fact
that, for sufficiently small, the large time trend of (34) is well approximated by the Fokker-
Planck equation (31). In [43], an SP numerical scheme has been specifically designed to capture
the large time behaviour of the solution to (31) with arbitrary accuracy and no restriction on the
w-mesh size. Moreover, in the transient regime that scheme is second order accurate, preserves the
non-negativity of the solution and is entropic for specific problems with gradient flow structure.
See also [24,33,42] for further applications.
To derive SP schemes for the Fokker-Planck equation (31), we rewrite the latter in flux form:
\[
\partial_\tau g(\tau,w) = \partial_w \mathcal{F}[g](\tau,w), \tag{36}
\]
where the flux is
\[
\mathcal{F}[g](\tau,w) := \mathcal{C}[g](\tau,w)\,g(\tau,w) + \frac{\sigma^2}{2}D^2(w)\,\partial_w g(\tau,w),
\]
with
\[
\mathcal{C}[g](\tau,w) := \gamma\int_{-1}^{1}P(w,w_*)(w-w_*)\,g(\tau,w_*)\,dw_* + \frac{\sigma^2}{2}\,\frac{d}{dw}D^2(w).
\]
Next, we introduce a uniform grid {wᵢ}ᵢ₌₁ᴺ ⊂ [−1, 1] such that wᵢ₊₁ − wᵢ = Δw > 0, we denote by
gᵢ(τ) an approximation of the grid value g(τ, wᵢ) and we consider the conservative discretisation
of (36)
\[
\frac{dg_i}{d\tau} = \frac{\mathcal{F}_{i+1/2} - \mathcal{F}_{i-1/2}}{\Delta w}, \tag{37}
\]
where F_{i±1/2} is an approximation of F at w_{i±1/2} := wᵢ ± Δw/2. In particular, we choose a numerical
flux of the form
\[
\mathcal{F}_{i+1/2} := \tilde{\mathcal{C}}_{i+1/2}\,\tilde g_{i+1/2} + \frac{\sigma^2}{2}D^2_{i+1/2}\,\frac{g_{i+1}-g_i}{\Delta w},
\]
with g̃_{i+1/2} defined as a convex combination of gᵢ, gᵢ₊₁:
\[
\tilde g_{i+1/2} := \bigl(1-\delta_{i+1/2}\bigr)g_{i+1} + \delta_{i+1/2}\,g_i.
\]
The coefficient δ_{i+1/2} ∈ [0, 1] has to be properly chosen. Setting in particular
\[
\tilde{\mathcal{C}}_{i+1/2} := \frac{\sigma^2 D^2_{i+1/2}}{2\Delta w}\left[\int_{w_i}^{w_{i+1}} \frac{2\gamma}{\sigma^2}\,\frac{B[g](\tau,w)}{D^2(w)}\,dw + \log\frac{D^2_{i+1}}{D^2_i}\right], \tag{38}
\]
where B[g] is given by (32), we obtain explicitly
\[
\delta_{i+1/2} := \frac{1}{\lambda_{i+1/2}} + \frac{1}{1-\exp(\lambda_{i+1/2})}
\qquad\text{with}\qquad
\lambda_{i+1/2} := \frac{2\Delta w\,\tilde{\mathcal{C}}_{i+1/2}}{\sigma^2 D^2_{i+1/2}}.
\]
The order of this scheme for large times coincides with that of the quadrature formula employed
for computing the integral contained in (38). In particular, if a standard Gaussian quadrature
rule is used then spectral accuracy is achieved in the wvariable. In the transient regime, instead,
the scheme is always second order accurate.
5.1.1 Comparison of the numerical solutions for large times
We now compare the large time numerical solution of the Boltzmann-type equation (34), obtained
by means of the MC scheme with the following specifications:
• quasi-invariant regime approximated by taking either ε = 10⁻¹ or ε = 10⁻²,
with the numerical solution of the Fokker-Planck equation (31), obtained by means of the SP
scheme with the following specifications:
• N = 81 grid points for the mesh {wᵢ}ᵢ₌₁ᴺ ⊂ [−1, 1], yielding a mesh step Δw = 2.5·10⁻²;
•fourth order Runge-Kutta method for the time integration of (37);
•Gaussian quadrature rule, with 10 quadrature points in each cell [wi, wi+1], for the approx-
imation of the integral in (38).
We use the symmetric bounded-confidence-type compromise function Pgiven by (4) with several
choices of the confidence threshold ∆ ∈[0,2]. At the initial time τ= 0, we prescribe the uniform
distribution in [−1,1], i.e.
g(0, w) = 1/2, w ∈ [−1, 1].
In Figure 5 we fix ∆ = 1 (left panels) and ∆ = 0.4 (right panels) and we take λ = σ²/γ =
5·10⁻³. We observe that, as expected, the smaller ε the more the MC solution coincides with
the SP solution of the Fokker-Planck equation for either value of the confidence threshold ∆.
Furthermore, the asymptotic profiles compare qualitatively well with those obtained with the
deterministic microscopic model (3), cf. Figure 1(a, b), in terms of number and location of the
opinion clusters.
In Figure 6we repeat the same comparisons between the MC and SP numerical solutions
but with ∆ = 0.2. For λ= 10−4(left panels) we recover both a transient behaviour and an
asymptotic trend of the solution fully consistent with those already observed with the deterministic
microscopic model (3). In particular, four opinion clusters emerge in the long run. Interestingly, for
a slightly larger parameter λ= 10−3, indicating a higher relevance of the self-thinking (stochastic
fluctuation) in the behaviour of the individuals, two opinion clusters merge, thereby giving rise
to just three clusters in the long run. This aggregate phenomenon can only be observed if some
microscopic randomness is duly taken into account in the model.
5.2 Inhomogeneous kinetic equation (21)
We now pass to the inhomogeneous kinetic model, in which the formation of the preferences is
driven by an interplay with the opinion dynamics studied before.
We start by outlining the procedure by which we solve the inhomogeneous Boltzmann-type
equation (21). Since the Knudsen-like number δis assumed to be small, at each time step we adopt
the very same splitting procedure already discussed in Section 4.1. Therefore, upon introducing a
time discretisation τn:= n∆τ, with ∆τ > 0 constant, we proceed as follows.
Interaction step. At time τ=τn, we solve the interactions towards the equilibrium during half
a time step:
\[
\begin{cases}
\partial_\tau G(\tau,\xi,w) = \dfrac{1}{\delta}\,Q(G,G)(\tau,\xi,w), & \tau \in (\tau^n, \tau^{n+1/2}]\\[1mm]
G(\tau^n,\xi,w) = g(\tau^n,\xi,w)
\end{cases} \tag{39}
\]
Figure 5: Top row: contours of the distribution function gcomputed numerically for τ∈(0, T ],
T= 50, from the Fokker-Planck equation (31) with the SP scheme. Bottom row: comparison
of the numerical approximations at τ=Tof the large time distribution g∞obtained with the
previous SP scheme and with the MC scheme for the Boltzmann-type equation (34) with two
decreasing values of the parameter ε simulating the quasi-invariant regime. In both rows, the
confidence thresholds are ∆ = 1 (left) and ∆ = 0.4 (right).
for all ξ=ξibelonging to a suitable mesh {ξi}i⊂[−1,1]. In this step, we take advantage
of the MC scheme introduced in Section 5.1, which has proved to give asymptotic solutions
comparable to those of the more accurate SP scheme, provided the parameter δis sufficiently
small. In particular, we use a sample of 106particles and we fix δ= 10−2.
In (39), Qdenotes the collision operator that appears on the right-hand side of (21) once
this equation has been written in strong form.
Transport step. Next, we take the asymptotic distribution obtained in the interaction step as
the input of a pure transport towards the next time step τn+1:
∂τg(τ, ξ, w)+(w−α)∂ξ(Φ(ξ)g(τ, ξ, w)) = 0, τ ∈(τn+1/2, τ n+1 ]
g(τn+1/2, ξ, w) = G(τn+1/2, ξ, w).
In the tests of this section, unless otherwise specified, we prescribe the uniform distribution in
the variables ξ,was initial datum:
\[
g(0, \xi, w) := \tfrac{1}{4}\,\chi\bigl((\xi,w)\in[-1,1]^2\bigr), \tag{40}
\]
we fix λ = 10⁻³ and we take the function Φ given in (5).
Figure 6: The same as Figure 5but with ∆ = 0.2. In the left panels we use λ= 10−4, in the
right panels λ= 10−3. In the latter case, the asymptotic distribution features only three opinion
clusters, well reproduced by both the SP Fokker-Planck solution and the MC Boltzmann solution
(especially with ε = 10⁻²), because the two central clusters merge during the transient due to
a higher relevance of the self-thinking (diffusion) with respect to the tendency to compromise
(a) τ= 1 (b) τ= 3 (c) τ= 5
Figure 7: Contours of the inhomogeneous kinetic distribution g(τ, ξ, w) at different times with
∆ = 1 and α=−0.3.
5.2.1 Symmetric P
First, we consider symmetric interactions described again by the bounded confidence compromise
function (4) with ∆ = 1. In Figures 7,8we show the evolution of the inhomogeneous kinetic model
(a) τ= 1 (b) τ= 3 (c) τ= 5
Figure 8: The same as Figure 7but with α= 0.3.
(a) τ= 1 (b) τ= 3 (c) τ= 5
Figure 9: The same as Figure 7but with ∆ = 0.4 and α= 0.3.
(a) w-marginal at T= 10
(b) ξ-marginal at T= 10
Figure 10: Marginal distributions of the opinions (a) and of the preferences (b) for the numerical
test of Figure 9.
for two different choices of the perceived social opinion, α=±0.3 respectively. We clearly observe
that while the opinions distribute around the conserved mean opinion m= 0, as expected, the
preferences polarise in two possible ways. For α=−0.3, cf. Figure 7, polarisations emerge in ξ= 0
and ξ= 1. Specifically, individuals with an initial preference in [−1,0] tend to polarise in ξ= 0,
whereas individuals with an initial preference in (0,1] tend to polarize in ξ= 1. For α= 0.3, cf.
Figure 8, the mirror trends emerge. These polarisation patterns of the preferences are very much
consistent with those observed in Section 2with the deterministic microscopic model (2)-(3), cf.
Figure 2.
Next, we consider the same symmetric compromise function Pas before but now we fix ∆ = 0.4.
In Figure 9we depict the evolution of the inhomogeneous kinetic model for α= 0.3. As far as
the opinion dynamics are concerned, we recognise that individuals tend to cluster in two well
distinct positions, see Figure 10(a), directly comparable with the emerging clusters shown in
Figure 5(right) and also, up to diffusion, in Figure 1(b). Nevertheless, the social detail is now
higher, because we clearly distinguish that individuals with the same asymptotic opinion may
actually polarise in different preferences. More specifically, the opinion cluster near w=−0.5 is
formed by individuals with preferences polarised in either ξ=−1 or ξ= 0, while the opinion
cluster near w= 0.5 is formed by individuals with preference polarised in either ξ= 0 or ξ= 1,
see Figure 9(c). Remarkably, three polarisations of the preference emerge on the whole in the long
run, see Figure 10(b), because the opinions do not reach a global consensus.
Also these polarisation patterns of the preferences are consistent with those discussed in Sec-
tion 2, indeed the deterministic microscopic model can account in principle for three preference
poles. The fact that Figure 3(b) shows asymptotically only two of them depends essentially on
the choice of the initial conditions, which in a particle model hardly allow one to observe the
representative average trend in a single realisation.
The case α=−0.3 is qualitatively analogous to the one just discussed, therefore we do not
report it in detail.
5.2.2 Non-symmetric P
Finally, we investigate the effect of a non-symmetric compromise function P. As already discussed
in Section 3.2.2, we recall that the asymmetry of Pcan be understood as a systematic bias of
the individuals, who for some reason are more prone to change opinion in a specific direction.
In this numerical example, we remain in the class of the bounded confidence models and, taking
inspiration from [32], we consider
P(w, w∗) = χ(−∆L≤w∗−w≤∆R),(41)
where ∆L,∆R∈[0,2] are two confidence thresholds.
In order to understand the effect of function (41), we observe that if w≤w∗then interactions
are allowed provided |w∗−w|=w∗−w≤∆R. Otherwise, if w≥w∗then interactions are allowed
provided |w∗−w| = w−w∗ ≤ ∆L. Thus, if e.g. ∆R > ∆L then an individual with opinion w is
more inclined to interact with other individuals with opinion w∗ ≥ w. The converse holds if instead ∆L > ∆R.
Remark 5.1.If ∆L= ∆Rthen (41) actually reduces to (4) with ∆ = ∆R.
We choose ∆L= 0.3 and ∆R= 0.7, meaning that individuals compromise preferentially with
other individuals with an opinion located on the right of their own. Moreover, we consider the
perceived social opinion α= 0.3. In Figure 11 we show the evolution of the inhomogeneous kinetic
model starting from the uniform distribution (40).
We observe that initially the mean opinion is neutral at any preference, indeed
\[
\int_{-1}^{1} w\,g(0,\xi,w)\,dw = 0, \qquad \forall\,\xi\in[-1,1].
\]
Nevertheless, due to the non-symmetric interactions, the mean opinion is not conserved in time,
cf. Figure 11(d). In particular, owing to the bias induced by ∆R>∆L, the opinions tend to
shift on the whole rightwards, cf. Figure 11(e), while the preferences polarise in the three poles
ξ=−1,0,1, cf. Figure 11(f). Again, we notice that the joint picture preference-opinion is a
lot more informative than the sole opinion dynamics, because it allows us to observe e.g. that
two clusters with nearly the same asymptotic opinion about w≈0.5 actually include individuals
expressing strongly different preferences (ξ= 0,1), cf. Figures 11(b, c).
(a) τ= 1 (b) τ= 5 (c) τ= 10
(d) Mean opinion trend
(e) w-marginal at T= 10
(f) ξ-marginal at T= 10
Figure 11: Top row: Contours of the inhomogeneous kinetic distribution g(τ, ξ, w) at different
times with the non-symmetric compromise function (41) featuring ∆L= 0.3, ∆R= 0.7. Bottom
row: Time trend of the mean opinion (the symmetric case is plotted for comparison) and
marginal distributions of opinions and preferences at time T= 10.
5.3 Hydrodynamic model
Now we test the hydrodynamic model of preference formation derived in Section 4. In particular,
since the dynamics predicted by the first order models of Section 4.2 are quite well understood
analytically, we focus on the second order model presented in Section 4.3, cf. (28).
To discretise the system of conservation laws (28), we introduce a uniform | {"url":"https://www.researchgate.net/publication/329894945_Hydrodynamic_models_of_preference_formation_in_multi-agent_societies?_sg=iGEcuNEmZMA_N-9GNICVAyV9cO3IFsUlxu-mKBRTwS2jNYs1THr4GyncdBgcJlpEpxyz_8Yslv1g52PwtofWVXEOgLb9P6u9EZgfVUnW.vrYNFl5e71hPGg6tmWgsaEUIKHtKNuu7KCBFWdRD3Q005N1aw05F2rleQ1GyT6olfobC0l7n6NIJE9wgHaNKCw","timestamp":"2024-11-07T07:23:39Z","content_type":"text/html","content_length":"1050306","record_id":"<urn:uuid:cb08c25e-066c-4fe4-8a8e-163be100ce18>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00529.warc.gz"} |
Root (polynomial)
A root of a function (often a polynomial) $f(x)$, whose range is the real numbers, the complex numbers, or any abstract field, is a value $a$ in the domain of the function such that $f(a) = 0$.
Finding roots
Let $\[P(x) = c_nx^n + c_{n-1}x^{n-1} + \dots + c_1x + c_0 = \sum_{j=0}^{n} c_jx^j\]$ with all $c_j \in \mathbb C$ and $c_n \neq 0$, a general degree-$n$ polynomial. The degree of $P(x)$ is $n$, so $P(x)$ has at most $n$ complex roots. In fact, the sum of the multiplicities of all distinct complex roots is exactly $n$; that is, counting any double roots twice, triple roots three times, and so on, there are in fact exactly $n$ complex roots of $P(x)$.
General techniques
Multiplying or dividing all of the coefficients of a polynomial by a nonzero constant does not change its roots. Thus, there will always be a monic polynomial $Q(x)$ with degree $n$ having the same
roots as $P(x)$, given by $\[Q(x) = x^n + \frac{c_{n-1}x^{n-1}}{c_n} + \dots + \frac{c_1x}{c_n} + \frac{c_0}{c_n} = \sum_{j=0}^{n} \frac{c_jx^j}{c_n}.\]$
Once a root $a$ of $P(x)$ is found, the Factor Theorem gives that $x - a$ is a factor of $P(x)$. Therefore, $x - a$ can be divided out of $P(x)$ using synthetic division, and the roots of the
resulting quotient will be the remaining roots of $P(x)$. Once another root is found, the process can be repeated. Dividing through by $x - a$ is most practical when the root $a$ and all the
coefficients $c_j$ of $P(x)$ are rational.
Rational roots
The three simplest values to test are $-1, 0,$ and $1$.
• $0$ is a root of $P(x)$ if and only if the constant term $c_0$ is $0$.
• $1$ is a root of $P(x)$ if and only if the sum of the coefficients is $0$.
• $-1$ is a root of $P(x)$ if and only if the alternating sum of the coefficients is $0$.
If all of the coefficients $c_j$ of $P(x)$ are integers, then the Rational Root Theorem applies; namely, if $\frac{p}{q}$ is a root of $P(x)$, with $p$ and $q$relatively prime, then $p$ is a
(positive or negative) divisor of $c_0$ and $q$ is a divisor of $c_n$.
If all of the $c_j$ are rational, but not necessarily integers, then multiplying through by the least common multiple of the denominators yields a polynomial with the same roots as $P(x)$ and integer
coefficients. The Rational Root Theorem can then be applied to the new polynomial to search for rational roots of $P(x)$.
In some cases the search may be simplified by substituting $P(x) = M(L(x))$, where $L(x)$ is a nonconstant linear polynomial with rational coefficients. If $a$ is a rational root of $P(x)$, then $L(a)$ is a rational root of $M(x)$. Conversely, if $a$ is a rational root of $M(x)$, then the inverse $L^{-1}(a)$ of $L(x)$ at $a$ must be a rational root of $P(x)$.
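A short Python sketch of a brute-force rational-root search based on the Rational Root Theorem (candidates are ± divisors of $c_0$ over divisors of $c_n$); the example polynomial is just an illustration.

```python
from fractions import Fraction
from itertools import product

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_roots(coeffs):
    """coeffs = [c_n, ..., c_1, c_0], integers with c_n != 0 and c_0 != 0."""
    c_n, c_0 = coeffs[0], coeffs[-1]
    candidates = {Fraction(s * p, q)
                  for p, q in product(divisors(c_0), divisors(c_n))
                  for s in (1, -1)}
    def P(x):
        value = Fraction(0)
        for c in coeffs:
            value = value * x + c          # Horner evaluation
        return value
    return sorted(r for r in candidates if P(r) == 0)

# 2x^3 - 3x^2 - 11x + 6 has rational roots -2, 1/2, 3
print(rational_roots([2, -3, -11, 6]))
```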
Real roots
Evaluating $P(x)$ at chosen values can point to the location of roots. If $P(a)$ and $P(b)$ have opposite signs, then the Intermediate Value Theorem states that $P(x)$ has at least one root in $(a,b)$.
For single roots and other roots of odd multiplicity (triple roots, quintuple roots, etc.), $P(x)$ will always change sign on opposite sides of the root, but for roots of even multiplicity (such as
double roots), the sign of $P(x)$ will be the same on either side of the root, so the Intermediate Value Theorem will not detect even-multiplicity roots.
Descartes Rule of Signs yields information on the number of positive real roots and negative real roots of $P(x)$. Writing the coefficients of $P(x)$ in descending order of degree and excluding any
that equal $0$, the number of positive real roots of $P(x)$ is equal to the number of sign changes between adjacent coefficients, minus some even nonnegative integer. The number of negative real
roots of $P(x)$ is equal to the number of such sign changes after reversing the sign of every odd-degree coefficient, again minus some even nonnegative integer. Here roots are counted according to
multiplicity, so double roots are counted twice, triple roots three times, and so on.
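Counting sign changes is easy to automate; here is a small sketch (ours) that returns the Descartes upper bounds on the number of positive and negative real roots.

```python
def sign_changes(coeffs):
    """Sign changes between consecutive nonzero coefficients."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(a != b for a, b in zip(signs, signs[1:]))

def descartes_bounds(coeffs):
    """coeffs = [c_n, ..., c_0]; returns (bound on positive roots, bound on negative roots)."""
    pos = sign_changes(coeffs)
    # P(-x): flip the sign of every odd-degree coefficient
    neg = sign_changes([c if (len(coeffs) - 1 - k) % 2 == 0 else -c
                        for k, c in enumerate(coeffs)])
    return pos, neg

# x^3 - x^2 - 2x has roots 0, 2, -1: at most 1 positive and 1 negative root
print(descartes_bounds([1, -1, -2, 0]))
```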
More broadly, counting roots according to their multiplicity as before the number of real roots always has the same parity as the degree $n$. Specifically, if $n$ is odd then $P(x)$ must have at
least one real root.
Rolle's Theorem guarantees that if $P(x)$ has two roots $r$ and $s$, then its derivative $P'(x)$ has at least one root in the interval $(r,s)$. In particular, if $P'(x)$ has no roots in an interval $(a,b)$, then $P(x)$ has at most one root in $[a,b]$.
Newton's method generates arbitrarily close approximations of the value of a real root of a polynomial. Use of Newton's method generally requires an educated guess for the location of the root based
on the above criteria. We let the guess equal $x_0$ and compute the approximations recursively: $\[x_{k+1} = x_k - \frac{P(x_k)}{P'(x_k)} = x_k - \frac{\sum_{j=0}^{n} c_jx_k^j}{\sum_{j=1}^{n} jc_jx_k^{j-1}}.\]$
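A compact Python sketch of the iteration (the stopping rule and the sample polynomial are our own choices; the starting guess should come from the bracketing criteria above).

```python
def newton_root(coeffs, x0, tol=1e-12, max_iter=100):
    """coeffs = [c_n, ..., c_1, c_0]; iterate x <- x - P(x)/P'(x)."""
    deriv = [c * (len(coeffs) - 1 - k) for k, c in enumerate(coeffs[:-1])]
    def horner(cs, x):
        value = 0.0
        for c in cs:
            value = value * x + c
        return value
    x = x0
    for _ in range(max_iter):
        step = horner(coeffs, x) / horner(deriv, x)
        x -= step
        if abs(step) < tol:
            break
    return x

# P(x) = x^3 - 2x - 5 has a real root near 2.0945515
print(newton_root([1.0, 0.0, -2.0, -5.0], x0=2.0))
```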
Complex roots
Complex roots of polynomials with real coefficients always exist in conjugate pairs. That is, if $a$, $b$, and all coefficients of $P(x)$ are real and $a + bi$ is a root of $P(x),$ then $a - bi$ is
also a root of $P(x)$. In particular, $P(x)$ must both have an even number of distinct nonreal roots and an even sum of the multiplicities of all distinct nonreal roots.
The product of the factors corresponding to $a + bi$ and $a - bi$ is $\[(x - (a + bi))(x - (a - bi)) = x^2 - 2ax + (a^2 + b^2),\]$ a quadratic polynomial with real coefficients. As such, dividing
through by the product leaves a polynomial which still has real coefficients and all roots of the original except $a + bi$ and $a - bi$.
In algebraic form
A root of a polynomial with integer coefficients will always be an algebraic number. General formulas give the algebraic form of the roots of polynomials with degree at most $4$.
For a quadratic equation $ax^2 + bx + c = 0$, the roots are given by the quadratic formula $\[\frac{ -b \pm \sqrt{b^2 - 4ac}}{2a}.\]$
The cubic formula and quartic formula also exist, but they are quite lengthy and not very practical for computation by hand.
No similar formula exists for quintics or polynomials of any higher degree. Although the roots of such polynomials are algebraic numbers, they cannot be expressed in terms of the coefficients using
only addition, subtraction, multiplication, division, powers, and radicals.
Calculating initial velocity using kinematic equations
• Thread starter cmonstre
• Start date
In summary, the conversation discusses a problem involving calculating the force of the wind on a launched object using kinematic equations and a table of values for displacement, time, and velocity.
The issue of using instantaneous initial velocity instead of average velocity is brought up and the suggestion to use a system of equations to solve for the exact initial velocity is given. The
poster is also advised to consider using modern tools like Excel to fit a parabola to the data points.
Hi! I'm working on a problem for class and am having trouble figuring out how to calculate the initial velocity, which I need in order to calculate acceleration and from there the value of an unknown
force. I thought I was on the right track, but my professor reminded me that I need to use the instantaneous initial velocity, not the average velocity. The problem is below:
1. Homework Statement
A 0.15kg object is launched from the ground and moves under the influence of gravity as well as a second force (wind). The wind force is constant, but magnitude and direction are unknown. We are
given a table of values (points) from the plot of the trajectory and are asked to calculate the force of the wind on the ball.
The table looks like this:
t(s), x(m), y(m)
t=0, x=0, y=0
t=.5, x=3.46, y=4.59
t=1, x=6.72, y=6.85
t=1.5, x=9.81, y=6.79
t=2, x=12.70, y=4.40
The graph is obviously a parabola.
Homework Equations
I'll put the kinematic equations down for reference:
deltax = 1/2 (V[f] + V[0])t
deltax = V[f]t - 1/2at^2
deltax= V[0]t + 1/2at^2
V[f] - V[0] = at
Vf^2= V0^2 + 2a(deltax)
The Attempt at a Solution
Alright. I know that the values in the table can give me information to calculate displacement, instantaneous velocities, and time, from which I can calculate acceleration using the kinematic
equations. Since I also know the mass of the object, I can then use the F=ma equation to calculate the force of the wind.
I know that if the force on an object is constant, the acceleration must be constant (because the mass is obviously constant)…so the acceleration of the object wouldn't change across the trajectory.
I think this means that the vertical acceleration (Ay) would be the acceleration due to gravity PLUS the acceleration of the wind, and the horizontal acceleration (Ax) would just be the acceleration
of the wind. Unless of course the wind acceleration only affects the horizontal acceleration, and then the vertical acceleration is just acceleration due to gravity.
My original thought process was to calculate initial horizontal velocity, and then calculate acceleration from there. The issue I ran into is that using the values from the table only gave me average
velocity across that interval. Then I thought that I could solve for the angle of launch for a better approximation of V0.
I used the x value of 6.72m and the y value of 6.85m, then used tan(theta)=6.85/6.72 to solve for the launch angle, which gave me approximately 45.5 degrees. The initial horizontal velocity would
then be V0x= V0*cos(45.5), and the initial vertical velocity would be V0y=V0*sin(45.5). My professor stopped me here and reminded me that this is still an approximation though and told me to
construct a system of equations that would allow me to solve for the exact initial velocity. I'm not sure where to start with the system of equations, because I only have the displacement and time
variables as I would run into the same issue of having an average velocity if I tried to calculate Vf.
Once I have the value for initial velocity, however, I would be able to solve for acceleration using
deltax= V[0]t + 1/2at^2
and then multiply the acceleration by the mass of the projectile to get Fx. From there it would just be solving for the unknown force.
Thanks for the help! I know this is ultimately pretty basic, but this is my first physics class!
Science Advisor
Homework Helper
Hello Cm, welcome to PF
Pretty clear posting, but you had me puzzled with "The graph is obviously a parabola". Was that part of the problem formulation, or did you add it ?
And which graph are you referring to (I am so primitive that I first think of 2-D graphs, so x(t), y(t) or y(x) ) ?
(and when I draw them, they all look like parabolas quite well -- also x(t) ! In fact the first two look like perfect parabolas...)
This may be your first physics class, but your prof is really asking some tough questions here ! Are you allowed to use some modern tools, such as fitting a parabola through the points using the
trendline gadget in Excel? If not, then you have to do some real work to get out reasonable answers. Fortunately, the time steps are all 0.5 sec, so that helps.
Your first idea
Unless of course the wind acceleration only affects the horizontal acceleration, and then the vertical acceleration is just acceleration due to gravity.
is not applicable: The vertical acceleration is clearly different from -4.9 m/s
Your second idea
Then I thought that I could solve for the angle of launch for a better approximation of V0.
sounds good, but unfortunately the motions in the x-direction and in the y-direction are completely independent in this exercise.
You also make a strange choice when you pick t=1 points to calculate the launch angle ! After all, that angle changes with time fairly rapidly (check with the angle at t=0.5 s!)
(That is probably what triggered prof to intervene. You have a good one there, so if I were you I would listen carefully !)
To get you going: my hint is that x(t) and y(t) look like nearly perfect parabolas. As you know, a parabola is completely determined by three parameters.
In SUVAT equations: y(t) = y[0] + v[0]t + 0.5at^2
So three points should be enough to construct them (just like two points yield a line -- two parameters).
You have five points, so you can do a little averaging -- easy with a calculator .
x(t) is a good candidate to begin with.
Let us know how you are getting on.
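For what it's worth, here is a minimal NumPy sketch (not part of the original thread; variable names are mine) of the parabola-fit idea described above: fit x(t) and y(t) to quadratics, read off the initial velocities and accelerations from the coefficients, and then obtain the wind force from F = ma after removing gravity.

```python
import numpy as np

# Tabulated trajectory points from the problem statement.
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
x = np.array([0.0, 3.46, 6.72, 9.81, 12.70])
y = np.array([0.0, 4.59, 6.85, 6.79, 4.40])
m = 0.15      # kg
g = 9.81      # m/s^2

# Fit x(t) ~ 0.5*ax*t^2 + v0x*t + x0 (np.polyfit returns the highest power first).
cx = np.polyfit(t, x, 2)
cy = np.polyfit(t, y, 2)
ax, v0x = 2 * cx[0], cx[1]
ay, v0y = 2 * cy[0], cy[1]

print("initial velocity (m/s):", v0x, v0y)
print("acceleration (m/s^2): ", ax, ay)
# Wind force = total force minus gravity (gravity acts only in -y).
Fx, Fy = m * ax, m * (ay + g)
print("wind force (N):       ", Fx, Fy)
```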
FAQ: Calculating initial velocity using kinematic equations
1. How do you calculate initial velocity using kinematic equations?
To calculate initial velocity using kinematic equations, you will need to know the final velocity, acceleration, and time. You can use the equation v = u + at, where v is the final velocity, u is the
initial velocity, a is the acceleration, and t is the time. Rearrange the equation to solve for u, which will give you the initial velocity.
2. What is the difference between initial velocity and final velocity?
Initial velocity is the velocity at the beginning of a motion or in other words, the velocity when t = 0. On the other hand, final velocity is the velocity at the end of a motion or when an object
comes to a stop. In kinematics, we often use the symbol u for initial velocity and v for final velocity.
3. Can you calculate initial velocity if you only have the distance and time?
No, you cannot calculate initial velocity with only the distance and time. You will also need to know the acceleration, which is the rate of change of velocity. Without the acceleration, you cannot
determine the initial velocity using kinematic equations.
4. How do you handle negative values when calculating initial velocity?
In kinematics, a negative velocity or acceleration indicates motion or acceleration in the direction opposite to the chosen positive direction. To handle negative values when calculating initial velocity, you will need to pay attention to the direction of motion. If the initial velocity is in the opposite direction of the final velocity, it will have a negative value. You can use the sign convention to determine the direction of the initial velocity.
5. Is it possible to have a negative initial velocity?
Yes, it is possible to have a negative initial velocity. As mentioned earlier, a negative velocity indicates motion opposite to the chosen positive direction. If an object is thrown downwards, its initial velocity will be negative because it moves opposite to the positive (upward) direction. However, if the object is thrown upwards, the initial velocity will be positive because it moves in the same direction as the positive direction.
You are supposed to implement a circuit that takes - Asksia.ai
Computer Science
Posted 6 months ago
You are supposed to implement a circuit that takes three inputs A, B, and C. When the values of the input A, B, and C, interpreted as an unsigned integer, are divisible by 3, then the circuit should output 1. For example, if A is 0, B is 1, and C is 1, then the unsigned integer value is 011, which is 3 and is divisible by 3. Write the minimal Boolean expression for the circuit.
Answer from Sia
Posted 6 months ago
To solve the problem of creating a circuit that outputs 1 when the unsigned integer value of inputs A, B, and C is divisible by 3, we need to consider all possible combinations of A, B, and C and their corresponding outputs. Since A, B, and C are binary digits, there are $2^3 = 8$ possible combinations. We can create a truth table to determine the output for each combination:

A B C | value | F
0 0 0 |   0   | 1
0 0 1 |   1   | 0
0 1 0 |   2   | 0
0 1 1 |   3   | 1
1 0 0 |   4   | 0
1 0 1 |   5   | 0
1 1 0 |   6   | 1
1 1 1 |   7   | 0
From the truth table, we can derive the minimal Boolean expression using Karnaugh maps or Boolean algebra simplification techniques. The minimal Boolean expression that represents the output is:
$F(A, B, C) = \overline{A} \cdot \overline{B} \cdot \overline{C} + \overline{A} \cdot B \cdot C + A \cdot B \cdot \overline{C}$
This Boolean expression represents the minimal form of the logic needed to output 1 when the binary number ABC is divisible by 3. The overline represents the NOT operation, the dot represents the AND
operation, and the plus represents the OR operation.
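A quick brute-force check of the expression (my own sketch, not part of the original answer): enumerate all eight input combinations and compare the expression against the divisibility rule.

```python
def divisible_by_3(a, b, c):
    return (4 * a + 2 * b + c) % 3 == 0

def f(a, b, c):
    # F = A'B'C' + A'BC + ABC'
    return ((1 - a) & (1 - b) & (1 - c)) | ((1 - a) & b & c) | (a & b & (1 - c))

assert all(
    f(a, b, c) == int(divisible_by_3(a, b, c))
    for a in (0, 1) for b in (0, 1) for c in (0, 1)
)
print("Expression matches the truth table for all 8 inputs.")
```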
Step-by-step Instruction
Determine the output for each combination based on whether the unsigned integer value is divisible by 3
Use Karnaugh maps or Boolean algebra simplification techniques to derive the minimal Boolean expression
Time Complexity
This concept does not apply to Boolean expression simplification as it is not an algorithm with multiple steps that depend on the size of the input.
Space Complexity
This concept does not apply to Boolean expression simplification as it does not require memory allocation that depends on the size of the input.
Corrigendum to “New constraints on kinetic isotope effects during CO₂(aq) hydration and hydroxylation: Revisiting theoretical and experimental data” [Geochim. Cosmochim. Acta 214 (2017) 246–265, doi: 10.1016/j.gca.2017.07.035]
The authors regret an error in the derivation of the link between KFFs and isotopic rate constants of CO₂ hydration (Section 2.2). Considering the subset of isotopologues C¹⁶O¹⁶O, C¹⁸O¹⁶O, H₂¹⁶O and H₂¹⁸O, there are four possible forward reactions: (1) C¹⁶O¹⁶O + H₂¹⁶O → H₂C¹⁶O¹⁶O¹⁶O, (2) C¹⁸O¹⁶O + H₂¹⁶O → H₂C¹⁸O¹⁶O¹⁶O, (3) C¹⁶O¹⁶O + H₂¹⁸O → H₂C¹⁸O¹⁶O¹⁶O, (4) C¹⁸O¹⁶O + H₂¹⁸O → H₂C¹⁸O¹⁸O¹⁶O. Reaction (4) involves rare isotopologues of both CO₂ and of H₂O, forms a doubly substituted isotopologue, and was not included in the original paper. Indeed, backward reaction (4) may be neglected in the derivation of the link between KFFs and isotopic rate constants of H₂CO₃ dehydration. However, forward reaction (4) still should have been considered in the derivation of the link between KFFs and isotopic rate constants of CO₂ hydration. Including reaction (4) in the derivations presented in Section 2.2 and Appendix A.1 results in the relations (5) [Formula presented] and (6) [Formula presented], which are distinct from those originally reported in Table 2, [Formula presented] and [Formula presented]. Thus, the dependence of the KFFs on the isotope ratio of the reactants, which was discussed in the paper, may be neglected. We have corrected Table 2 accordingly. Additionally, Fig. 2 of the original paper is obviated, as is its discussion in the text (last paragraph in Section 2.2).
In the rest of the paper, this mistake affects only the theoretical KFF values of CO₂ (de)hydration calculated using the isotopic rate constants of Zeebe (2014). The corrected KFF values differ by 2–4‰ from those reported in the paper, and are slightly farther from the experimental estimates after Clark and Lauriol (1992). We present below the corrected KFF values in revised Tables 4 and 6, along with values unaffected by the change (changes are underlined). Finally, we have revised Tables 8 and 9 and Figs. 3A and 4B, which make use of the corrected KFFs to describe calcite precipitation at the kinetic limit. We thank Chen Zhou, University of Science and Technology of China, Hefei, for pointing out this mistake. We apologise for any inconvenience caused.
Can the p-value be 0?
by admin
Can the p-value be 0?
It is incorrect to report a p-value as "0". Some statistical software, such as SPSS, sometimes gives p-values of .000, which is impossible; this must be treated as p < .001, i.e., the null hypothesis is rejected (the test is statistically significant).
Is a p-value of 0.000 significant?
If the p-value is less than the significance level, we reject the null hypothesis. So when you get a p-value of 0.000, you should compare it to the significance level. Since 0.000 is below all of the usual significance levels, we reject the null hypothesis in each case.
What does it mean when p is 0?
If P = 0, subtract it from 100% and you are 100% confident the data you tested are statistically significant: reject the null (no difference) and accept the alternative (difference). If P = 0.05, then you are 95% confident that the data are statistically significant.
Can a p-value in Anova be 0?
True zero p-values are possible, but not in any context you are likely to see. A zero p-value means that "under the null hypothesis, this evidence is practically impossible." Example: if your null is "no unicorn exists" and you observe one, then your p-value = 0.
What would you write when the p value is 0?
Most statisticians report a displayed value of 0.000 as P < 0.001. Muhammad's answer is wrong: P = 0.000 means P < 0.0005; if P ≥ 0.0005, the software would display it as P = 0.001.
P-value = .000??? What to do when the p-value is reported as 0.000
What is an example of a p-value?
Definition of P value
p-values are used in hypothesis testing to help you support or reject the null hypothesis. The p-value is evidence against the null hypothesis. For example, a p-value of 0.0254 is 2.54%. This means that there is a 2.54% chance that your result is random (i.e., happened by chance).
Are the p-values statistically significant?
A p-value less than 0.05 (typically ≤ 0.05) is statistically significant. A p-value above 0.05 (> 0.05) is not statistically significant and indicates that the evidence against the null hypothesis is weak. This means that we keep the null hypothesis and reject the alternative hypothesis.
What does a p-value of 0.01 mean?
The p-value measures how much evidence we have against the null hypothesis. Under normal circumstances, a p-value less than 0.01 means that there is substantial evidence against the null hypothesis.
What does a p-value of 0.5 mean?
Mathematical probabilities like p-values range from 0 (no chance) to 1 (absolute certainty). So 0.5 means a 50% chance and 0.05 means a 5% chance. Results were considered statistically significant if p-values were below .01, and highly statistically significant if they were below .005.
What does the p-value of ANOVA mean?
The F value in a one-way ANOVA is a tool that helps you answer the question "Are the variances between the means of two populations significantly different?" The F value in the ANOVA test also determines the P value; the P value is the probability of getting an outcome at least as extreme as the one actually observed, assuming the null hypothesis is true.
Can the p-value be 1?
As a probability, P can take any value between 0 and 1. A value close to 0 indicates that the observed difference is unlikely to be due to chance, while a P value close to 1 indicates that there is
no difference between groups other than chance.
Why is my p-value so low?
A very small p-value indicates that the null hypothesis is very incompatible with the collected data. A small P-value may simply be due to a very large sample size, regardless of effect size. A P value > 0.05 does not mean that no effect was observed, or that the effect size is small.
Can the p-value be greater than 1?
No, a p-value cannot be greater than one.
Why is my p-value so high?
A high p-value indicates that your evidence is insufficient to suggest an effect in the population. There may be an effect, but it may be that the effect size is too small, the sample size is too small, or the variability of the hypothesis test is too large to detect it.
What does a chi-square significance value of P = 0.05 mean?
What is the significant p-value for chi-square? The likelihood chi-square statistic is 11.816 with a p-value of 0.019. Therefore, at a significance level of 0.05, you can draw the following conclusion: the association between the variables is statistically significant.
How do you reject the null hypothesis with a p-value?
If the p-value is less than 0.05, we reject the null hypothesis that there is no difference between the means and conclude that there is a significant difference. If the p-value is greater than 0.05, we cannot conclude that there is a significant difference. It's simple, right? Below 0.05, it is significant.
What does a p-value of 0.2 mean?
If the p-value = 0.2, does that mean there is a 20% probability that the null hypothesis is correct? The P-value is a statistical measure with its own strengths and weaknesses that should be considered to avoid misuse and misinterpretation (12).
What does a p-value of 0.08 mean?
A p-value of 0.08, being greater than the benchmark of 0.05, indicates that the test is not significant. This means the null hypothesis cannot be rejected. So if your p-value is less than your alpha (error) level, you can reject the null hypothesis and accept the alternative.
What does a p-value of 0.04 mean?
In this case, P = 0.04 (or 4%) means that, if the null hypothesis is true and you conduct the study in exactly the same way many times, each time taking a random sample from the population, then in 4% of cases you would obtain a difference between groups at least as large as the one observed.
How to calculate p-value?
If your test statistic is positive, first find the probability that Z is greater than your test statistic (look up your test statistic in the Z-table, find its corresponding probability, and subtract it from 1). This result is then doubled to obtain the p-value.
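For a concrete version of this recipe, here is a small SciPy sketch (added for illustration; the statistic value is an arbitrary example) that computes the one-sided tail probability and the two-sided p-value from a z statistic.

```python
from scipy.stats import norm

z = 1.96  # example test statistic
# Probability that Z exceeds the observed statistic (upper tail), then doubled
# for a two-sided test.
p_one_sided = 1 - norm.cdf(z)      # equivalently norm.sf(z)
p_two_sided = 2 * p_one_sided
print(round(p_one_sided, 4), round(p_two_sided, 4))  # ~0.025, ~0.05
```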
What affects the p-value?
The p-value is affected by sample size: the larger the sample size, the smaller the p-value tends to be. However, increasing the sample size results in a smaller P-value only if the null hypothesis is false.
What is a p-value in plain English?
From Simple English Wikipedia, the free encyclopedia: in statistics, the p-value is the probability, computed assuming the null hypothesis is true (i.e., that the idea being tested has no real effect), of a particular experimental result happening. p-values are also known as probability values.
What is a p-value table?
The p-value or probability value is a number describing how likely your data would be to occur under the null hypothesis of a statistical test. P-values can also be estimated using a table of p-values for the relevant test statistic. P-values are calculated from the null distribution of the test statistic.
What is a p-value in a normal distribution?
Normal distribution: an approximate representation of the data in hypothesis testing. p-value: the probability that an outcome at least as extreme as the observed outcome occurs if the null hypothesis is true.
P > 0.05 is the probability that the null hypothesis is true. 1 minus the P value is the probability that the alternative hypothesis is true. A statistically significant test result (P ≤ 0.05) means that the test hypothesis is false or should be rejected. A P value greater than 0.05 means no effect was observed.
On the Discrete Normal Modes of Quasigeostrophic Theory
1. Introduction
a. Background
The vertical decomposition of quasigeostrophic motion into normal modes plays an important role in bounded stratified geophysical fluids (e.g., Charney 1971; Flierl 1978; Fu and Flierl 1980; Wunsch
1997; Chelton et al. 1998; Smith and Vallis 2001; Tulloch and Smith 2009; Lapeyre 2009; Ferrari et al. 2010; Ferrari and Wunsch 2010; de La Lama et al. 2016; LaCasce 2017; Brink and Pedlosky 2019).
Most prevalent are the traditional baroclinic modes (e.g., section 6.5.2 in Vallis 2017) that are the vertical structures of Rossby waves in a quiescent ocean with no topography or boundary buoyancy
gradients. In a landmark contribution, Wunsch (1997) partitions the ocean’s kinetic energy into the baroclinic modes and finds that the zeroth and first baroclinic modes dominate over most of the
extratropical ocean. Additionally, Wunsch (1997) concludes that the surface signal primarily reflects the first baroclinic mode and, therefore, the motion of the thermocline.
However, the use of baroclinic modes has come under increasing scrutiny in recent years (Lapeyre 2009; Roullet et al. 2012; Scott and Furnival 2012; Smith and Vanneste 2012). Lapeyre (2009) observes
that the vertical shear of the baroclinic modes vanishes at the boundaries, thus leading to the concomitant vanishing of the boundary buoyancy. Consequently, Lapeyre (2009) proposes that the
baroclinic modes cannot be complete^^1 due to their inability to represent boundary buoyancy. To supplement the baroclinic modes, Lapeyre (2009) includes a boundary-trapped exponential surface
quasigeostrophic solution (see Held et al. 1995) and suggests that the surface signal primarily reflects, not thermocline motion, but boundary-trapped surface quasigeostrophic dynamics (see also
Lapeyre 2017).
Appending additional functions to the collections of normal modes as in Lapeyre (2009) or Scott and Furnival (2012) does not result in a set of normal modes since the appended functions are not
orthogonal to the original modes. It is only with Smith and Vanneste (2012) that a set of normal modes capable of representing arbitrary surface buoyancy is derived.
Yet it is not clear how the normal modes of Smith and Vanneste (2012) differ from the baroclinic modes or what these modes correspond to in linear theory. Indeed, Rocha et al. (2015), noting that the
baroclinic series expansion of any sufficiently smooth function converges uniformly to the function itself, argues that the incompleteness of the baroclinic modes has been “overstated.” Moreover, de
La Lama et al. (2016) and LaCasce (2017), motivated by the observation that the leading empirical orthogonal function of Wunsch (1997) vanishes near the ocean bottom, propose an alternate set of
modes—the surface modes—that have a vanishing pressure at the bottom boundary.
We thus have a variety of proposed normal modes, and it is not clear how their properties differ. Are the baroclinic modes actually incomplete? What about the surface modes? What does completeness
mean in this context? The purpose of this paper is to answer these questions.
b. Normal modes and eigenfunctions
A normal mode is a linear motion in which all components of a system move coherently at a single frequency. Mathematically, a normal mode has the form
$\psi(x, y, z, t) = \Phi_a(x, y, z)\,e^{-i\omega_a t}, \qquad (1)$
where Φ[a] describes the spatial structure of the mode and ω[a] is its angular frequency. The function Φ[a] is obtained by solving a differential eigenvalue problem and hence is an eigenfunction. The collection of all eigenfunctions forms a basis of some function space relevant to the problem.
By an abuse of terminology, the spatial structure, Φ[a], is often called a normal mode (e.g., the term “Fourier mode” is often used for e^{ik·x}, where k is a wavenumber). In linear theory, this misnomer is often benign because each Φ[a] corresponds to a frequency ω[a]. For example, given some initial condition Ψ(x, y, z), we decompose Ψ as a sum of modes at t = 0,
$\Psi(x, y, z) = \sum_a c_a\,\Phi_a(x, y, z), \qquad (2)$
where the c[a] are the Fourier coefficients, and the time evolution is then given by
$\psi(x, y, z, t) = \sum_a c_a\,\Phi_a(x, y, z)\,e^{-i\omega_a t}. \qquad (3)$
However, with nonlinear dynamics, this abuse of terminology can be confusing. Given some spatial structure Ψ(x, y, z) in a fluid whose flow is nonlinear, we can still exploit the basis properties of
the eigenfunctions Φ[a] to decompose Ψ as in Eq. (2). Whereas in a linear fluid only wave motion of the form in Eq. (1) is possible, a nonlinear flow admits a larger collection of solutions (e.g.,
nonlinear waves and coherent vortices) and so the linear wave solution Eq. (3) no longer follows from the decomposition Eq. (2).
For this reason, we call the linear solution in Eq. (1) a physical normal mode to distinguish it from the spatial structure Φ[a], which is only an eigenfunction. Otherwise, we will use the terms
“normal mode” and “eigenfunction” interchangeably to refer to the spatial structure Φ[a], as is prevalent in the literature.
Our strategy here is then the following. We find the physical normal modes [of the form Eq. (1)] to various Rossby wave problems and examine the basis properties of their constituent eigenfunctions. By assuming a doubly periodic domain in the horizontal, the problem reduces to finding the vertical normal modes F(z). These vertical normal modes are obtained by solving a differential eigenvalue problem of the form Eq. (4) in the interval (z[1], z[2]) with boundary conditions Eq. (5) at z = z[j] for j = 1, 2, where the coefficient functions appearing in Eq. (4) are real valued and the constants appearing in the boundary conditions are real numbers. This eigenvalue problem differs from traditional Sturm–Liouville problems in that the eigenvalue λ appears in the boundary conditions Eq. (5). Our goal is to find a collection of eigenfunctions F[n](z) (i.e., so-called normal modes in the prevalent terminology) capable of representing every possible quasigeostrophic state.
c. Contents of this article
This article constitutes an examination of all collections of discrete (i.e., noncontinuum^^2) quasigeostrophic normal modes. We include the baroclinic modes, the surface modes of de La Lama et al.
(2016) and LaCasce (2017), the surface-aware mode of Smith and Vanneste (2012), as well as various generalizations. To study the completeness of a set of normal modes, we must first define the
underlying space in question. From general considerations, we introduce in section 2 the quasigeostrophic phase space, defined as the space of all possible quasigeostrophic states. Subsequently, in
section 3 we use the general theory of differential eigenvalue problems with eigenvalue-dependent boundary conditions, as developed in Yassin (2021), to study Rossby waves in an ocean with prescribed
boundary buoyancy gradients (e.g., topography; see section 2a). Intriguingly, in an ocean with no topography, we find that, in addition to the usual baroclinic modes, there are two additional
stationary step-mode solutions that have not been noted before. The stationary step modes are the limits of boundary-trapped surface quasigeostrophic waves as the boundary buoyancy gradient vanishes.
Our study of Rossby waves then leads us examine all possible discrete collections of normal modes in section 4. As shown in this section, the baroclinic modes are incomplete, as argued by Lapeyre
(2009), and we point out that the incompleteness leads to a loss of information after projecting a function onto the baroclinic modes. In contrast, modes such as those suggested by Smith and Vanneste
(2012) are complete in the quasigeostrophic phase space so that projecting a function onto such modes provides an equivalent representation of the function.
We offer discussion of our analysis in section 5 and conclusions in section 6. Appendix A summarizes the key mathematical results pertaining to eigenvalue problems where the eigenvalue appears in the
boundary conditions. Appendix B then summarizes the polarization relations as well as the vertical velocity eigenvalue problem.
2. Mathematics of the quasigeostrophic phase space
a. The potential vorticity
Consider a three-dimensional region D of the form $D = D_0 \times (z_1, z_2)$. The area of the lower and upper boundaries is denoted by D[0] and is a rectangle of area A, where z[1] (lower boundary) and z[2] (upper boundary) are constants. The horizontal boundaries are either rigid or periodic.
The state of a quasigeostrophic fluid in D is determined by a charge-like quantity known as the quasigeostrophic potential vorticity (Hoskins et al. 1985; Schneider et al. 2003). If the potential vorticity is distributed throughout the three-dimensional region D, we are concerned with the volume potential vorticity density Q, with Q related to the geostrophic streamfunction ψ by [e.g., section 5.4 of Vallis (2017)]
$Q = f + \nabla^2\psi + \frac{\partial}{\partial z}\left(\frac{f_0^2}{N^2}\frac{\partial\psi}{\partial z}\right).$
Here, the latitude-dependent Coriolis parameter is f = f[0] + βy, N(z) is the prescribed background buoyancy frequency, ∇^2 is the horizontal Laplacian operator, and u = ẑ × ∇ψ is the horizontal geostrophic velocity.
Additionally, the potential vorticity may be distributed over a two-dimensional region, say the lower and upper boundaries D[0], to obtain surface potential vorticity densities R[1] and R[2]. The surface potential vorticity densities are related to the streamfunction by
$R_1 = \frac{f_0^2}{N^2}\frac{\partial\psi}{\partial z} + g_1 \;\;\text{for } z = z_1, \qquad R_2 = -\frac{f_0^2}{N^2}\frac{\partial\psi}{\partial z} - g_2 \;\;\text{for } z = z_2,$
where g[j] is an imposed surface potential vorticity density at the lower or upper boundary and j = 1, 2. The density g[j] corresponds to a prescribed buoyancy N^2g[j]/f[0] at the jth boundary. Alternatively, g[j] may be thought of as an infinitesimal topography through g[j] = f[0]h[j], where h[j] represents infinitesimal topography at the jth boundary. Whereas Q has dimensions of inverse time, R[j] and g[j] have dimensions of length per time.
We define the quasigeostrophic phase space to be the space of all possible quasigeostrophic states, with a quasigeostrophic state determined by the potential vorticity densities Q, R[1], and R[2].
Note that the volume potential vorticity density Q is defined throughout the whole fluid region D so that Q = Q(x, y, z, t). In contrast, the surface potential vorticity densities R[1] and R[2] are
only defined on the two-dimensional lower and upper boundary surfaces D[0] so that R[j] = R[j](x, y, t).
It is useful to restate the previous paragraph with some added mathematical precision. For that purpose, let L^2[D] be the space of square-integrable functions^^3 in the fluid volume D and let L^2[D
[0]] be the space of square-integrable functions on the boundary area D[0]. Elements of L^2[D] are functions of three spatial coordinates, whereas elements of L^2[D[0]] are functions of two spatial
coordinates. Hence, Q ∈ L^2[D] and R[1], R[2] ∈ L^2[D[0]].
Define the space
$P = L^2[D]\,\oplus\, L^2[D_0]\,\oplus\, L^2[D_0],$
where ⊕ is the direct sum. This definition states that any element of P is a tuple (Q, R[1], R[2]) of three functions, where Q(x, y, z) is a function on the volume D and hence an element of L^2[D], while the functions R[j](x, y), for j = 1, 2, are functions on the area D[0] and hence are elements of L^2[D[0]]. We conclude that (Q, R[1], R[2]) ∈ P and that P is the space of all possible quasigeostrophic states. We thus call P the quasigeostrophic phase space.
c. The phase space in terms of the streamfunction
Given an element (Q, R[1], R[2]) ∈ P, we can reconstruct a continuous function ψ(x, y, z) that contains the same dynamical information as (Q, R[1], R[2]). By inverting the problem
$Q - f = \nabla^2\psi_{\mathrm{int}} + \frac{\partial}{\partial z}\left(\frac{f_0^2}{N^2}\frac{\partial\psi_{\mathrm{int}}}{\partial z}\right) \quad \text{for } z\in(z_1,z_2),$
$R_1 - g_1 = \frac{f_0^2}{N^2}\frac{\partial\psi_{\mathrm{low}}}{\partial z} \quad \text{for } z = z_1, \text{ and}$
$R_2 + g_2 = -\frac{f_0^2}{N^2}\frac{\partial\psi_{\mathrm{upp}}}{\partial z} \quad \text{for } z = z_2,$
we obtain a function ψ(x, y, z) that is unique up to a gauge transformation (see Schneider et al. 2003). Conversely, given a function ψ(x, y, z), we can differentiate ψ as in these equations to obtain (Q, R[1], R[2]) ∈ P. Thus, we can also consider the quasigeostrophic phase space P to be the space of all possible streamfunctions ψ.
These inversion relations motivate the definition of the relative potential vorticity densities, q = Q − f and r[j] = R[j] − (−1)^{j+1}g[j], which are the portions of the potential vorticity providing a source for the streamfunction. Explicitly, the relative potential vorticity densities are
$q = \nabla^2\psi + \frac{\partial}{\partial z}\left(\frac{f_0^2}{N^2}\frac{\partial\psi}{\partial z}\right) \quad \text{for } z\in(z_1,z_2),$
$r_1 = \frac{f_0^2}{N^2}\frac{\partial\psi}{\partial z} \quad \text{for } z = z_1, \text{ and}$
$r_2 = -\frac{f_0^2}{N^2}\frac{\partial\psi}{\partial z} \quad \text{for } z = z_2.$
d. The vertical structure phase space
We proceed by expanding the potential vorticity density distribution, (q, r[1], r[2]), and the streamfunction ψ in terms of the eigenfunctions e[k](x) of the horizontal Laplacian. For the rectangular horizontal domain D[0], the eigenfunction e[k](x) satisfies
$\nabla^2 e_{\mathbf{k}} = -k^2\, e_{\mathbf{k}},$
where x = (x, y) is the horizontal position vector, k = (k[x], k[y]) is the horizontal wavevector, and k = |k| is the horizontal wavenumber. For example, in a horizontally periodic domain the eigenfunctions e[k](x) are proportional to complex exponentials e^{ik·x}.
Projecting the relative potential vorticity density distribution, (q, r[1], r[2]), onto the horizontal eigenfunctions e[k] gives
$q(\mathbf{x}, z, t) = \sum_{\mathbf{k}} q_{\mathbf{k}}(z, t)\, e_{\mathbf{k}}(\mathbf{x}) \quad \text{for } z\in(z_1,z_2), \text{ and}$
$r_j(\mathbf{x}, t) = \sum_{\mathbf{k}} r_{j\mathbf{k}}(t)\, e_{\mathbf{k}}(\mathbf{x}) \quad \text{for } j = 1, 2.$
Thus the Fourier coefficients of (q, r[1], r[2]) are (q[k], r[1k], r[2k]), where q[k] is a function of z while r[1k] and r[2k] are independent of z. Hence, q[k] is an element of L^2[(z[1], z[2])] whereas r[1k] and r[2k] are elements of the space of complex numbers ℂ.
We conclude that the vertical structure of the potential vorticity, given by (q[k], r[1k], r[2k]), is an element of
$L^2[(z_1, z_2)]\,\oplus\,\mathbb{C}\,\oplus\,\mathbb{C},$
so that the vertical structures of the potential vorticity distribution are determined by a function q[k] ∈ L^2[(z[1], z[2])] and two z-independent elements r[1k] and r[2k] of ℂ. Similarly, the streamfunction can be represented as
$\psi(\mathbf{x}, z, t) = \sum_{\mathbf{k}} \psi_{\mathbf{k}}(z, t)\, e_{\mathbf{k}}(\mathbf{x}),$
and ψ[k] and (q[k], r[1k], r[2k]) are related by
$q_{\mathbf{k}} = -k^2\psi_{\mathbf{k}} + \frac{\partial}{\partial z}\left(\frac{f_0^2}{N^2}\frac{\partial\psi_{\mathbf{k}}}{\partial z}\right), \quad r_{1\mathbf{k}} = \frac{f_0^2}{N^2}\frac{\partial\psi_{\mathbf{k}}}{\partial z}\bigg|_{z_1}, \quad \text{and} \quad r_{2\mathbf{k}} = -\frac{f_0^2}{N^2}\frac{\partial\psi_{\mathbf{k}}}{\partial z}\bigg|_{z_2}.$
As before, knowledge of the vertical structure of the streamfunction ψ[k](z) is equivalent to knowing the vertical structure of the potential vorticity distribution (q[k], r[1k], r[2k]). In the
resulting differential eigenvalue problem for the vertical normal modes, the nonzero r[j][k] lead to an eigenvalue problem of the form given in Eqs. (4) and (5), with the eigenvalue appearing in the
boundary condition. Such an eigenvalue problem takes place in the space $\hat{P}$ given by Eq. (18) (Yassin 2021). Thus $\hat{P}$ is also the space of all possible streamfunction vertical structures.
That ψ[k] belongs to $\hat{P}$ and not L^2[(z[1], z[2])] underlies much of the confusion over baroclinic modes. Assertions of completeness, based on Sturm–Liouville theory, assume that ψ is an element of L^2[(z[1], z[2])]. However, as we have shown, that is an incorrect assumption. That ψ belongs to $\hat{P}$ will have consequences for the convergence and differentiability of normal-mode expansions, as discussed in section 4. In the context of quasigeostrophic theory, the space $\hat{P}$ first appeared in Smith and Vanneste (2012). More generally, $\hat{P}$ appears in the presence of nontrivial boundary dynamics (Yassin 2021).
We call $\hat{P}$ the vertical structure phase space, and for convenience we denote L^2[(z[1], z[2])] by L^2 for the remainder of the article. The vertical structure phase space $\hat{P}$ is then written as the direct sum
$\hat{P} = L^2 \oplus \mathbb{C}^2.$
e. Representing the energy and potential enstrophy
We find it convenient to represent several quadratic quantities in terms of the eigenfunctions of the horizontal Laplacian e[k](x). The energy per unit mass in the volume D is given by
$E = \frac{1}{V}\int_{D}\left[|\nabla\psi|^2 + \frac{f_0^2}{N^2}\left|\frac{\partial\psi}{\partial z}\right|^2\right] dA\, dz = \sum_{\mathbf{k}} E_{\mathbf{k}},$
where the horizontal energy mode is given by the vertical integral
$E_{\mathbf{k}} = \frac{1}{H}\int_{z_1}^{z_2}\left[k^2|\psi_{\mathbf{k}}|^2 + \frac{f_0^2}{N^2}\left|\frac{\partial\psi_{\mathbf{k}}}{\partial z}\right|^2\right] dz,$
with V = AH being the domain volume and H = z[2] − z[1] being the domain depth.
Similarly, for the relative volume potential enstrophy density q, we have
$Z = \frac{1}{V}\int_{D}|q|^2\, dA\, dz = \sum_{\mathbf{k}} Z_{\mathbf{k}}, \qquad Z_{\mathbf{k}} = \frac{1}{H}\int_{z_1}^{z_2}|q_{\mathbf{k}}|^2\, dz.$
Last, analogous to Z, we have the relative surface potential enstrophy densities Y[j] on the area D[0],
$Y_j = \frac{1}{A}\int_{D_0}|r_j|^2\, dA = \sum_{\mathbf{k}} Y_{j\mathbf{k}}.$
3. Rossby waves in a quiescent ocean
In this section, we study Rossby waves in an otherwise quiescent ocean; in other words, we examine the physical normal modes of a quiescent ocean. The linear equations of motion are
$\frac{\partial q}{\partial t} + \beta\upsilon = 0 \quad \text{for } z\in(z_1,z_2), \text{ and}$
$\frac{\partial r_j}{\partial t} + \mathbf{u}\cdot\nabla\left[(-1)^{j+1}g_j\right] = 0 \quad \text{for } z = z_j.$
We assume that the prescribed surface potential vorticity densities at the lower and upper boundaries, g[1] and g[2], are linear functions of x and y, which ensures that the resulting eigenvalue problem is separable. Moreover, because the ocean is quiescent, g[1] and g[2] must refer to topographic slopes, as in the relation g[j] = f[0]h[j] above.
The importance of this linear problem is that it provides all possible discrete Rossby wave normal modes in a quasigeostrophic flow. Substituting a wave ansatz of the form [cf. Eq. (1) for physical normal modes]
$\psi = \hat\psi(z)\, e^{i(\mathbf{k}\cdot\mathbf{x} - \omega t)}$
into the linear problem gives
$\omega\left[-k^2\hat\psi + \frac{d}{dz}\!\left(\frac{f_0^2}{N^2}\frac{d\hat\psi}{dz}\right)\right] - \beta k_x\,\hat\psi = 0 \quad \text{for } z\in(z_1,z_2), \text{ and}$
$\omega\,\frac{f_0^2}{N^2}\frac{d\hat\psi}{dz} - \left[\hat{\mathbf{z}}\cdot(\mathbf{k}\times\nabla g_j)\right]\hat\psi = 0 \quad \text{for } z = z_j.$
a. Traditional Rossby wave problem
We first examine the traditional case of linear fluctuations to a quiescent ocean with isentropic lower and upper boundaries, that is, with no topography. Setting ∇g[1] = ∇g[2] = 0 in the eigenvalue
problem of Eqs. (30)–(31) gives
$\omega\left[-k^2 F + \frac{d}{dz}\!\left(\frac{f_0^2}{N^2}\frac{dF}{dz}\right)\right] - \beta k_x F = 0 \quad \text{for } z\in(z_1,z_2), \text{ and}$
$\omega\,\frac{f_0^2}{N^2}\frac{dF}{dz} = 0 \quad \text{for } z = z_1, z_2,$
where we have written $\hat\psi(z) = \hat\psi_0\,F(z)$ and F(z) is a nondimensional function. There are two cases to consider depending on whether ω = 0 or ω ≠ 0.
1) Traditional baroclinic modes
Assuming ω ≠ 0 in the eigenvalue problem Eq. (32) renders a Sturm–Liouville eigenvalue problem in L^2,
$-\frac{d}{dz}\!\left(\frac{f_0^2}{N^2}\frac{dF}{dz}\right) = \lambda F \quad \text{for } z\in(z_1,z_2), \text{ and}$
$\frac{f_0^2}{N^2}\frac{dF}{dz} = 0 \quad \text{for } z = z_1, z_2,$
where the eigenvalue λ is related to the angular frequency through the dispersion relation
$\omega = -\frac{\beta k_x}{k^2 + \lambda}$
(see Fig. 1 for an illustration of the dependence of |ω[n]| on the wavevector k).
Fig. 1.
Polar plots of the absolute value of the nondimensional angular frequency |ω[n]|/(βL[d]) of the first five modes of the traditional eigenvalue problem (section 3a) as a function of the wave
propagation direction, k/|k|, for constant stratification. The outer most ellipse, with the largest absolute angular frequency, represents the angular frequency of the barotropic (n = 0) mode. The
higher modes have smaller absolute frequencies and are thus concentric and within the barotropic angular frequency curve. Since the absolute value of the angular frequency of the barotropic mode
becomes infinitely large at small horizontal wavenumbers k, we have chosen a large wavenumber k, given by kL[d] = 7, so that the angular frequency of the first five modes can be plotted in the same
figure. We have chosen f[0] = 10^−4 s^−1, β = 10^−11 m^−1 s^−1, N[0] = 10^−2 s^−1 and H = 1 km leading to a deformation radius L[d] = N[0]H/f[0] = 100 km. Numerical solutions to all eigenvalue
problems in this paper are obtained using Dedalus (Burns et al. 2020).
Fig. 2.
Polar plots of the absolute value of the nondimensional angular frequency |ω[n]|/(βL[d]) of the first five modes from section 3b as a function of the wave propagation direction k/|k| for a horizontal
wavenumber given by kL[d] = 7 in constant stratification. The dashed line corresponds to ω[0]; this mode becomes boundary trapped at large wavenumbers k = |k|. The remaining modes, ω[n] for n = 1, 2,
3, and 4, are shown with solid lines. White regions are angles where γ[1] > 0. All Rossby waves with a propagation direction lying in the white region have negative angular frequencies ω[n] and so
have a westward phase speed. Gray regions are angles where γ[1] < 0. Here, ω[0] is positive while the remaining angular frequencies ω[n] for n > 0 are negative. Consequently, in the gray regions, ω
[0] corresponds to a Rossby wave with an eastward phase speed whereas the remaining Rossby waves have westward phase speeds. The lower boundary buoyancy gradient, proportional to ∇g[1], points toward
55^° and corresponds to a bottom slope of |∇h[1]| = 1.5 × 10^−5, leading to γ[1]/H = 0.15. The remaining parameters are as in Fig. 1.
From Sturm–Liouville theory (e.g., Brown and Churchill 1993), the eigenvalue problem Eq. (33) has infinitely many eigenfunctions, F[0], F[1], F[2], …, with distinct and ordered eigenvalues λ[0] < λ[1] < λ[2] < ⋯. The nth mode, F[n], has n internal zeros in the interval (z[1], z[2]). The eigenfunctions are orthonormal with respect to the inner product [,], given by the vertical integral
$[F, G] = \frac{1}{H}\int_{z_1}^{z_2} F\,G\; dz,$
with orthonormality meaning that [F[m], F[n]] = δ[mn], where δ[mn] is the Kronecker delta. A powerful and commonly used result of Sturm–Liouville theory is that the set {F[n]} forms an orthonormal basis of L^2.
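The paper solves its eigenvalue problems with Dedalus; purely as a rough illustration (not the authors' setup), the Neumann problem above can also be discretized with second-order finite differences in a few lines of NumPy. Constant stratification is assumed here so that the numerical eigenvalues can be checked against the cosine-mode values; parameter values are taken from the figure captions.

```python
import numpy as np

# Constant stratification: S = f0^2/N0^2, with the parameter values quoted in Fig. 1.
f0, N0, H = 1e-4, 1e-2, 1000.0
S = (f0 / N0) ** 2
M = 400
dz = H / M

# Discretize -d/dz(S dF/dz) = lambda F with dF/dz = 0 at both boundaries
# (Neumann closure through ghost points).
A = np.zeros((M + 1, M + 1))
for i in range(1, M):
    A[i, i - 1] = A[i, i + 1] = -S / dz**2
    A[i, i] = 2 * S / dz**2
A[0, 0], A[0, 1] = 2 * S / dz**2, -2 * S / dz**2
A[M, M], A[M, M - 1] = 2 * S / dz**2, -2 * S / dz**2

lam = np.sort(np.linalg.eigvals(A).real)
exact = (np.arange(4) * np.pi * f0 / (N0 * H)) ** 2
print(lam[:4])   # numerical eigenvalues
print(exact)     # lambda_n = (n*pi*f0/(N0*H))^2 = (n*pi/L_d)^2 for constant N
```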
2) Stationary step modes
There are two additional solutions to the Rossby wave eigenvalue problem Eq. (32) that have not previously been noted in the literature. If ω = 0, then the eigenvalue problem in Eq. (32) becomes
$\beta k_x F = 0 \quad \text{for } z\in(z_1,z_2), \text{ and}$
$0 = 0 \quad \text{for } z = z_1, z_2.$
Consequently, if βk[x] ≠ 0, then F(z) = 0 for z ∈ (z[1], z[2]). That is, F must vanish in the interior of the interval. However, since ω = 0 in Eq. (32), we obtain the tautological boundary conditions above. As a result, F can take arbitrary values at the lower and upper boundaries. Thus, two solutions are
$F_j^{\mathrm{step}}(z) = \begin{cases} 1 & \text{for } z = z_j, \\ 0 & \text{otherwise.} \end{cases}$
The two step-mode solutions in Eq. (39) are independent of the traditional baroclinic modes F[n](z). An expansion of a step mode $F_j^{\mathrm{step}}$ in terms of the baroclinic modes will fail and produce a series that is identically zero.
The two stationary step modes, $F1step$ and $F2step$, correspond to the two inert degrees of freedom in the eigenvalue problem Eq. (32). These two solutions are neglected in the traditional
eigenvalue problem Eq. (33) through the assumption that ω ≠ 0. Although dynamically trivial, we will see that these two step waves are obtained as limits of boundary-trapped modes as the boundary
buoyancy gradients N^2∇g[j]/f[0] become small.
3) The general solution
For a wavevector k ≠ 0, the vertical structure of the streamfunction must be of the form
$\psi_{\mathbf{k}}(z) = \Psi(z) + \Psi_1\,F_1^{\mathrm{step}}(z) + \Psi_2\,F_2^{\mathrm{step}}(z),$
where Ψ(z) is a twice differentiable function satisfying dΨ/dz = 0 for z = z[1], z[2], and Ψ[1] and Ψ[2] are arbitrary constants. We can represent Ψ according to the expansion
$\Psi = \sum_{n=0}^{\infty}\,[\Psi, F_n]\,F_n,$
and so the time evolution is
$\psi_{\mathbf{k}}(z, t) = \sum_{n=0}^{\infty}\,[\Psi, F_n]\,F_n(z)\,e^{-i\omega_n t}.$
It is this time-evolution expression, which is valid only in linear theory for a quiescent ocean, that gives the baroclinic modes a clear physical meaning. More precisely, it states that the vertical structure Ψ(z) disperses into its constituent Rossby waves with vertical structures F[n](z). Outside the linear theory of this section, baroclinic modes do not have a physical interpretation, although they remain a mathematical basis for L^2.
b. The Rhines problem
We now examine the case with a sloping lower boundary, ∇g[1] ≠ 0, and an isentropic upper boundary, ∇g[1] = 0. The special case of a meridional bottom slope and constant stratification was first
investigated by Rhines (1970). Subsequently, Charney and Flierl (1981) extended the analysis to realistic stratification and Straub (1994) examined the dependence of the waves on the propagation
direction. Yassin (2021) applies the mathematical theory of eigenvalue problems with λ-dependent boundary conditions and obtains various completeness and expansion results as well as a qualitative
theory for the streamfunction modes. Below, we generalize these results, study the two limiting boundary conditions, and consider the corresponding vertical velocity modes.
1) The eigenvalue problem
We write $\hat\psi(z) = \hat\psi_0\,G(z)$, where G is a nondimensional function. We then manipulate the eigenvalue problem in Eqs. (30)–(31) to obtain (assuming ω ≠ 0)
$-\frac{d}{dz}\!\left(\frac{f_0^2}{N^2}\frac{dG}{dz}\right) = \lambda G \quad \text{for } z\in(z_1,z_2),$
$-k^2 G - \gamma_1^{-1}\left(\frac{f_0^2}{N^2}\frac{dG}{dz}\right) = \lambda G \quad \text{for } z = z_1, \text{ and}$
$\frac{dG}{dz} = 0 \quad \text{for } z = z_2,$
where the length scale γ[j] is given by
$\gamma_j = (-1)^{j+1}\,\frac{\hat{\mathbf{z}}\cdot(\mathbf{k}\times\nabla g_j)}{\hat{\mathbf{z}}\cdot(\mathbf{k}\times\nabla f)} = (-1)^{j+1}\left(\frac{\alpha_j\,k}{\beta\,k_x}\right)\sin(\Delta\theta_j),$
with α[j] = |∇g[j]| and Δθ[j] the angle between the wavevector k and ∇g[j], measured counterclockwise from k. The parameter γ[j] depends only on the direction of the wavevector k and not its magnitude k = |k|. If γ[j] = 0, then the jth boundary condition can be written as a λ-independent boundary condition [as in the upper boundary condition at z = z[2] of the eigenvalue problem Eq. (43)]. For now, we assume that γ[1] ≠ 0.
Since the eigenvalue λ appears in the differential equation and one boundary condition in the eigenvalue problem Eq. (43), the eigenvalue problem takes place in L^2 ⊕ ℂ.
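To make the λ-dependent boundary condition concrete, here is a rough finite-difference sketch of my own (not the paper's Dedalus setup) that casts Eq. (43) into generalized matrix form A G = λ B G. The bottom row carries both the −k²G term and the γ₁⁻¹ flux term, while the λ-independent Neumann row at the top gets a zero row in B; constant stratification and the slope value γ₁ < 0 are arbitrary test choices.

```python
import numpy as np
from scipy.linalg import eig

f0, N0, H = 1e-4, 1e-2, 1000.0   # constant-stratification values from the figure captions
S = (f0 / N0) ** 2
Ld = N0 * H / f0
k = 2.0 / Ld                     # wavenumber chosen so that k*L_d = 2
gamma1 = -0.15 * H               # a bottom slope with gamma_1 < 0 (test choice)
M = 400
dz = H / M

A = np.zeros((M + 1, M + 1))
B = np.eye(M + 1)

# Interior: -d/dz(S dG/dz) = lambda G.
for i in range(1, M):
    A[i, i - 1] = A[i, i + 1] = -S / dz**2
    A[i, i] = 2 * S / dz**2

# Bottom (z = z1): -k^2 G - (1/gamma1) * S * dG/dz = lambda G (one-sided derivative).
A[0, 0] = -k**2 + S / (gamma1 * dz)
A[0, 1] = -S / (gamma1 * dz)

# Top (z = z2): dG/dz = 0, a lambda-independent condition, so the B row is zero.
A[M, M - 1], A[M, M] = -1.0 / dz, 1.0 / dz
B[M, M] = 0.0

vals = eig(A, B, right=False)
vals = np.sort(vals[np.isfinite(vals)].real)
# Nondimensional eigenvalues; with gamma_1 < 0, one value should fall below -(k*L_d)^2 = -4.
print(vals[:4] * Ld**2)
```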
2) Characterizing the eigensolutions
The following is obtained by applying the theory summarized in appendix A to the eigenvalue problem Eq. (43).^^5
The eigenvalue problem Eq. (43) has a countable infinity of eigenfunctions G[0], G[1], G[2], …, with ordered and distinct nonzero eigenvalues λ[0] < λ[1] < λ[2] < ⋯.
The inner product 〈,〉 induced by the eigenvalue problem Eq. (43) is
$\langle F, G\rangle = \frac{1}{H}\left[\int_{z_1}^{z_2} F\,G\; dz + \gamma_1\,F(z_1)\,G(z_1)\right],$
which depends on the direction of the horizontal wavevector k. Moreover, γ[1] is not necessarily positive, with one consequence being that some functions G may have a negative square: 〈G, G〉 < 0. Orthonormality of the modes G[n] then takes the form
$\langle G_m, G_n\rangle = \pm\,\delta_{mn},$
where at most one mode, G[0], satisfies 〈G[0], G[0]〉 = −1. The eigenfunctions G[n] form an orthonormal basis of L^2 ⊕ ℂ under this inner product.
Appendix A provides an inequality that, using the dispersion relation ω[n] = −βk[x]/(k^2 + λ[n]), implies that modes G[n] with 〈G[n], G[n]〉 > 0 correspond to waves with a westward phase speed whereas modes G[n] with 〈G[n], G[n]〉 < 0 correspond to waves with an eastward phase speed (assuming k[x] > 0).
We distinguish the following cases depending on the sign of γ[1]. In the following, we assume k ≠ 0.
1. γ[1] > 0: All eigenvalues satisfy λ[n] > −k^2, all modes satisfy 〈G[n], G[n]〉 > 0, and all waves propagate westward. The nth mode G[n] has n internal zeros (Binding et al. 1994). See the
regions in white in Fig. 2.
2. γ[1] < 0: There is one mode, G[0], with a negative square, 〈G[0], G[0]〉 < 0, corresponding to an eastward-propagating wave. The eastward-propagating wave nevertheless travels pseudowestward (to the left of the upslope direction for f[0] > 0). The associated eigenvalue λ[0] satisfies λ[0] < −k^2. The remaining modes, G[n] for n ≥ 1, have positive squares, 〈G[n], G[n]〉 > 0, corresponding to westward-propagating waves, and have eigenvalues λ[n] satisfying λ[n] > −k^2. Both G[0] and G[1] have no internal zeros, whereas the remaining modes, G[n], have n − 1 internal zeros for n > 1 (Binding et al. 1994). See the stippled regions in Fig. 2.
To elucidate the meaning of λ[n] < −k^2, note that a pure surface quasigeostrophic mode^^7 has λ = −k^2. Thus, λ[0] < −k^2 means that the bottom-trapped mode decays away from the boundary more
rapidly than a pure surface quasigeostrophic wave. Indeed, the limit of λ[0] → −∞ yields the bottom step mode (39) of section 3a(2).
The step-mode limit is obtained as γ[1] → 0^−. This limit is found either as |∇g[1]| → 0 for propagation directions in which γ[1] < 0, or as k becomes parallel or antiparallel to ∇g[1] (whichever limit satisfies γ[1] → 0^−). In this limit, we obtain a step mode exactly confined at the boundary (i.e., |G[0]| = 0 for z ≠ z[1]) with zero phase speed (see Fig. 3a). The remaining modes then satisfy the isentropic boundary condition dG/dz = 0 at z = z[1].
Fig. 3.
The two limits of the boundary-trapped surface quasigeostrophic waves, as discussed in section 3c. (a) Convergence to the step mode given in Eq. (39) with j = 1 as γ[1] → 0^− for three values of γ[1]
at a wavenumber k = |k| given by kL[d] = 1. The phase speed approaches zero in the limit γ[1] → 0^−. (b) Here, γ[1]/H ≈ 10 for the three vertical structures G[n] shown. Consequently, the bottom
trapped wave has λ ≈ −k^2 and the phase speeds are large. The vertical structure G for three values of kL[d] is shown, illustrating the dependence on k of this mode, which behaves as a
boundary-trapped exponential mode with an e-folding scale of |λ|^−1/2 = k^−^1. In both (a) and (b), the wave propagation direction θ = 260^°. All other parameters are identical to Fig. 2.
The other limit is that of |γ[1]| → ∞, which is obtained as the buoyancy gradient becomes large, |∇g[1]| → ∞. In this limit, the eigenvalue λ[0] → −k^2 and the gravest mode becomes a boundary-trapped exponential with an e-folding scale of k^{−1} (see Fig. 3b). Moreover, the phase speed of the bottom-trapped wave becomes infinite, which is an indication that the quasigeostrophic approximation breaks down. Indeed, the large buoyancy gradient limit corresponds to steep topographic slopes and so we obtain the topographically trapped internal gravity wave of Rhines (1970), which has an infinite phase speed in quasigeostrophic theory. The remaining modes then satisfy the vanishing pressure boundary condition G = 0 at z = z[1], as in the surface modes of de La Lama et al. (2016) and LaCasce (2017).
3) The general time-dependent solution
At some wavevector k, the observed vertical structure now has the form ψ[k](z) = Ψ(z) + Ψ[2]F[2]^step(z), where Ψ is a twice continuously differentiable function satisfying dΨ/dz = 0 at z = z[2]. For such functions we can write (see appendix A)
$\Psi = \sum_{n=0}^{\infty} \frac{\langle\Psi, G_n\rangle}{\langle G_n, G_n\rangle}\, G_n,$
so that the time evolution is
$\psi_{\mathbf{k}}(z, t) = \sum_{n=0}^{\infty} \frac{\langle\Psi, G_n\rangle}{\langle G_n, G_n\rangle}\, G_n(z)\, e^{-i\omega_n t}.$
Again, it is the above expression, which is valid only in linear theory with a quiescent background state, that gives the generalized Rhines modes G[n] physical meaning. Outside the linear theory of this section, the generalized Rhines modes do not have any physical interpretation and instead merely serve as a mathematical basis for L^2 ⊕ ℂ.
Recall from section 3a that an expansion of a step mode Eq. (39) in terms of the baroclinic modes F[n] produces a series that is identically zero. It follows that the step modes are independent of the baroclinic modes—they constitute independent degrees of freedom. However, with the inclusion of bottom boundary dynamics, we may now expand the bottom step mode, $F_1^{\mathrm{step}}$, in terms of the L^2 ⊕ ℂ modes G[n], with the expansion given by
$F_1^{\mathrm{step}}(z) = \frac{\gamma_1}{H}\sum_{n=0}^{\infty}\frac{G_n(z_1)}{\langle G_n, G_n\rangle}\, G_n(z).$
c. The generalized Rhines problem
The general problem with topography at both the upper and lower boundaries is
$-\frac{d}{dz}\!\left(\frac{f_0^2}{N^2}\frac{dG}{dz}\right) = \lambda G \quad \text{for } z\in(z_1,z_2), \text{ and}$
$-k^2 G + (-1)^j\,\gamma_j^{-1}\left(\frac{f_0^2}{N^2}\frac{dG}{dz}\right) = \lambda G \quad \text{for } z = z_j,$
for j = 1, 2, where the length scale γ[j] is defined as in section 3b. As the eigenvalue λ appears in both boundary conditions, the eigenvalue problem takes place in L^2 ⊕ ℂ^2. The inner product now has the form
$\langle F, G\rangle = \frac{1}{H}\left[\int_{z_1}^{z_2} F\,G\; dz + \sum_{j=1}^{2}\gamma_j\,F(z_j)\,G(z_j)\right],$
which reduces to the inner product of section 3b when γ[2] = 0. Under this inner product, the eigenfunctions G[n] form a basis of L^2 ⊕ ℂ^2.
There are now three cases depending on the signs of γ[1] and γ[2] and as depicted in Figs. 4 and 5. In the following, we assume k ≠ 0.
Fig. 4.
As in Fig. 2, but now with an upper slope |∇h[2]| = 10^−5 in the direction 200^° in addition to the bottom slope in Fig. 2. The upper slope corresponds to γ[2]/H = 0.1. The dotted line corresponds to
ω[0] and the dashed line corresponds to ω[1], with these two modes becoming boundary trapped at large wavenumbers k. The remaining modes, ω[n] for n = 2, 3, and 4, are shown with solid lines. White
regions are angles where γ[1] > 0 and γ[2] > 0. All Rossby waves with a propagation direction lying in the white region have negative angular frequencies ω[n] and so have a westward phase speed. Gray
regions are angles where γ[1] < 0 and γ[2] < 0. The two gravest angular frequencies ω[0] and ω[1] are both positive while the remaining angular frequencies ω[n] for n > 1 are negative. Consequently,
in the gray regions, ω[0] and ω[1] each correspond to a Rossby waves with an eastward phase speed whereas the remaining Rossby waves have westward phase speeds. Stippled regions are angles where γ[1]
> 0 and γ[2] < 0. In the stippled region, ω[0] is positive and has an eastward phase speed. The remaining Rossby waves in the stippled region have negative angular frequencies and have westward phase
Fig. 5.
This figure illustrates the dependence of the vertical structure G[n] of the streamfunction on the horizontal wavevector k as discussed in section 3c, for three propagation directions θ = (a),(b)
180°, (c),(d) 225°, and (e),(f) 265° [e.g., the row containing (a) and (b) are the vertical structures of waves at θ = 180°] and two wavenumbers kL[d] = (left) 0.5 and (right) 7 (where k = |k|)
[e.g., (b), (d), and (f) are the vertical structure of waves with kL[d] = 7]. The parameters for the above figure are identical to Fig. 2. We emphasize two features in this figure. First, note how
the boundary modes (n = 0, 1) are typically only boundary-trapped at small horizontal scales (i.e., for kL[d] = 7). At larger horizontal scales, we typically obtain a depth-independent mode along
with another mode with large-scale features in the vertical direction. Second, note that for γ[1] and γ[2] > 0, as in (a) and (b), the nth mode has n internal zeros, as in Sturm–Liouville theory; for
γ[1] > 0 and γ[2] < 0, as in (c) and (d), the first two modes (n = 0, 1) have no internal zeros; and for γ[1] and γ[2] < 0, the zeroth-mode G[0] has one internal zero, the first and second modes (G
[1] and G[2]) have no internal zeros, and the third mode G[2] has one internal zero. The zero crossing for the n = 0 mode in (f) is difficult to observe because the amplitude of G[0] is small near
the zero crossing.
1. γ[1] > 0 and γ[2] > 0: This corresponds to case i in section 3b. See the regions in white in Fig. 4 and Figs. 5a and 5b.
2. γ[1]γ[2] < 0: This corresponds to case ii in section 3b. See the stippled regions in Fig. 4 and Figs. 5c and 5d.
3. γ[1] < 0 and γ[2] < 0: There are two modes G[0] and G[1] with negative squares, 〈G[n], G[n]〉 < 0, that propagate eastward and have eigenvalues λ[n] satisfying λ[n] < −k^2 for n = 0, 1. The remaining modes G[n] for n > 1 have positive squares, 〈G[n], G[n]〉 > 0, propagate westward, and have eigenvalues, λ[n], satisfying λ[n] > −k^2. The zeroth mode G[0] has one internal zero, the first and second modes, G[1] and G[2], have no internal zeros, and the remaining modes G[n] have n − 2 internal zeros for n > 2 (Binding and Browne 1999). See the shaded regions in Figs. 2 and 4 and Figs. 5e and 5f.
d. The vertical velocity eigenvalue problem
We write the vertical structure of the vertical velocity as $\hat{w}(z) = \hat{w}_0\,\chi(z)$, where χ(z) is a nondimensional function. For the Rossby waves with isentropic boundaries of section 3a (the traditional baroclinic modes), the corresponding vertical velocity modes satisfy
$-\frac{d^2\chi}{dz^2} = \lambda\,\frac{N^2}{f_0^2}\,\chi \quad \text{for } z\in(z_1,z_2),$
with vanishing vertical velocity boundary conditions χ = 0 for z = z[1], z[2] (see appendix B for details). The resulting modes χ[n] form an orthonormal basis of L^2, with orthonormality given by
$\delta_{mn} = \frac{1}{H}\int_{z_1}^{z_2}\chi_m\,\chi_n\left(\frac{N^2}{f_0^2}\right) dz.$
One can obtain the eigenfunctions χ[n] by solving this eigenvalue problem directly or by differentiating the streamfunction modes F[n] according to the polarization relations of appendix B.
Quasigeostrophic boundary dynamics
As seen earlier, boundary buoyancy gradients activate boundary dynamics in the quasigeostrophic problem. In this case, boundary conditions for the quasigeostrophic vertical velocity problem Eq.
appendix B
). The resulting modes
Fig. 6
) satisfy a peculiar orthogonality relation given by Eq.
Fig. 6.
The first six vertical velocity normal modes χ[n] (thin gray lines) and streamfunction normal modes G[n] (black lines) (see section 3d). The propagation direction is θ = 75^° with a wavenumber of kL
[d] = 2. The remaining parameters are as in Fig. 2. Note that χ[n] and G[n] are nearly indistinguishable from the boundary-trapped modes n = 0, 1 whereas they are related by a vertical derivative for
the internal modes n > 1. The eigenvalue in the figure is nondimensionalized by the deformation radius L[d].
4. Eigenfunction expansions
Motivated by the Rossby waves of the previous section, we now investigate various sets of normal modes for quasigeostrophic theory. Let {F[n](z)} be a collection of continuous functions that form a basis of L^2, and assume ψ[k](z) is twice continuously differentiable in [z[1], z[2]]. Define the eigenfunction expansion
$\psi_{\mathbf{k}}^{\mathrm{exp}}(z) = \sum_{n=0}^{\infty}\,[\psi_{\mathbf{k}}, F_n]\,F_n(z).$
Because {F[n]} is a basis of L^2, the eigenfunction expansion ψ[k]^exp satisfies (e.g., Brown and Churchill 1993)
$\int_{z_1}^{z_2}\left|\psi_{\mathbf{k}}(z) - \psi_{\mathbf{k}}^{\mathrm{exp}}(z)\right|^2 dz = 0.$
Significantly, the vanishing of this integral does not imply ψ[k](z) = ψ[k]^exp(z) at every z ∈ [z[1], z[2]], because the two functions can still differ at some points z ∈ [z[1], z[2]].
In the following, we will only consider eigenfunction expansions that diagonalize the energy and potential enstrophy integrals of section 2e.
a. The four possible L^2 modes
There are only four L^2 bases in quasigeostrophic theory that diagonalize the energy and potential enstrophy integrals. All four sets of corresponding normal modes satisfy the differential equation
$-\frac{d}{dz}\left(\frac{f_0^2}{N^2}\,\frac{dF}{dz}\right) = \lambda F \quad \text{for } z \in (z_1, z_2),$
but differ in boundary conditions according to the following (recall that z[1] is the bottom and z[2] is the surface):
• baroclinic modes: these have vanishing vertical velocity at both boundaries (Neumann),
$\frac{dF(z_1)}{dz} = 0 \quad \text{and} \quad \frac{dF(z_2)}{dz} = 0;$
• antibaroclinic modes: these have vanishing pressure^^8 at both boundaries (Dirichlet),
$F(z_1) = 0 \quad \text{and} \quad F(z_2) = 0;$
• surface modes (mixed Neumann/Dirichlet):
$F(z_1) = 0 \quad \text{and} \quad \frac{dF(z_2)}{dz} = 0; \quad \text{and}$
• antisurface modes (mixed Neumann/Dirichlet):
$\frac{dF(z_1)}{dz} = 0 \quad \text{and} \quad F(z_2) = 0.$
All four sets of modes are missing two modes: each boundary condition of one of these two forms implies a missing step mode, and a boundary condition of the other form implies a missing boundary-trapped exponential mode (see the limiting cases discussed earlier).
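To make the baroclinic case concrete, here is a minimal numerical sketch (ours, not from the article) that discretizes the Sturm–Liouville problem above with second-order finite differences, a constant N², and the Neumann (vanishing vertical velocity) boundary conditions; the depth, Coriolis parameter, stratification, and grid size are arbitrary illustrative values.

import numpy as np

# Illustrative parameters (assumptions, not values from the article)
H, f0, N2 = 4000.0, 1.0e-4, 1.0e-5      # depth (m), Coriolis (1/s), N^2 (1/s^2)
M = 200                                  # number of grid points
z = np.linspace(-H, 0.0, M)
dz = z[1] - z[0]
s = f0**2 / N2                           # the (here constant) coefficient f0^2/N^2

# Discretize -d/dz( s dF/dz ) = lam F with Neumann conditions dF/dz = 0 at both ends
A = np.zeros((M, M))
for i in range(1, M - 1):
    A[i, i - 1] = -s / dz**2
    A[i, i]     = 2.0 * s / dz**2
    A[i, i + 1] = -s / dz**2
# Ghost-point treatment of dF/dz = 0 at z = z1 and z = z2
A[0, 0],  A[0, 1]  = 2.0 * s / dz**2, -2.0 * s / dz**2
A[-1, -1], A[-1, -2] = 2.0 * s / dz**2, -2.0 * s / dz**2

lam, F = np.linalg.eig(A)                # baroclinic modes F[:, n], eigenvalues lam[n]
idx = np.argsort(lam.real)
lam, F = lam.real[idx], F[:, idx].real
# lam[0] ~ 0 is the barotropic mode; 1/sqrt(lam[1]) is the first deformation radius
print("first deformation radius (km):", 1.0 / np.sqrt(lam[1]) / 1.0e3)

With these illustrative values the first deformation radius comes out near NH/(πf0), roughly 40 km, and the modes approach cosines in z, as expected for constant stratification.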
b. Expansions with L^2 modes
We here examine the pointwise convergence and the term-by-term differentiability of eigenfunction expansions in terms of L^2 modes. These properties of L^2 Sturm–Liouville expansions may be found in
Brown and Churchill (1993) and Levitan and Sargsjan (1975).^^9
1) Pointwise equality on [z[1], z[2]]
For all four sets of L^2 modes, if ψ[k] is twice continuously differentiable in [z[1], z[2]], we obtain pointwise equality in the interior,
$\psi_k(z) = \psi_k^{\mathrm{exp}}(z) \quad \text{for } z \in (z_1, z_2).$
The behavior at the boundaries depends on the boundary conditions that the modes F[n] satisfy. If the F[n] satisfy the vanishing pressure boundary condition at the jth boundary, then the expansion vanishes there regardless of the value of ψ[k](z[j]). It follows that $ψkexp$ will be continuous over (z[1], z[2]) and will generally have a jump discontinuity at the boundaries [unless ψ[k](z[j]) = 0 for j = 1, 2]. In contrast, if the F[n] satisfy a zero vertical velocity boundary condition at the jth boundary, the pointwise equality extends to that boundary. Consequently, of the four sets of L^2 modes, only with the baroclinic modes do we obtain the pointwise equality ψ[k] = $ψkexp$ on the closed interval [z[1], z[2]].
However, even though $ψkexp$ converges pointwise to ψ[k] when the baroclinic modes are used, we are unable to represent the corresponding vertical velocity w[k] in terms of the vertical velocity baroclinic modes since those modes vanish at both boundaries. Analogous considerations show that only the antibaroclinic vertical velocity modes (see appendix B) can represent arbitrary vertical velocities.
2) Differentiability of the series expansion
Although we obtain pointwise equality on the whole interval [z[1], z[2]] with the streamfunction baroclinic modes, we have lost two degrees of freedom in the expansion process. Recall that the
degrees of freedom in the quasigeostrophic phase space are determined by the potential vorticity. The volume potential vorticity q[k] is associated with the L^2 degrees of freedom while the surface
potential vorticities, r[1][k] and r[2][k], are associated with the ℂ^2 degrees of freedom.
The series expansion $ψkexp$ in terms of the baroclinic modes is differentiable in the interior (z[1], z[2]). Consequently, we can differentiate the series for z ∈ (z[1], z[2]) to recover the volume potential vorticity q[k]. However, $ψkexp$ is not differentiable at the boundaries z = z[1], z[2], so we are unable to recover the surface potential vorticities r[1][k] and r[2][k]. Two degrees of freedom are lost by projecting onto the baroclinic modes.
The energy at wavevector k is indeed partitioned between the modes, and similarly for the potential enstrophy. However, because we have lost r[1][k] and r[2][k] in the projection process, the surface potential enstrophies, defined in Eq. (26), are not partitioned.
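As a concrete check of the energy partition just described, the following sketch (ours; illustrative values only) expands a test streamfunction in the discretized baroclinic modes from the sketch above and compares the directly computed energy with the modal sum Σ_n (k² + λ_n)|ψ_kn|²〈F_n, F_n〉. The two agree increasingly well as the truncation and vertical resolution are refined, while the surface potential vorticities are, as noted, simply absent from the expansion.

import numpy as np

# Reuses lam, F, z, dz, and s (= f0^2/N^2) from the baroclinic-mode sketch above.
k = 2.0e-5                                   # illustrative horizontal wavenumber (1/m)
psi = np.exp(z / 500.0)                      # illustrative test profile psi_k(z)

# Direct energy:  E_k = int ( s |dpsi/dz|^2 + k^2 |psi|^2 ) dz
dpsi = np.gradient(psi, dz)
E_direct = np.trapz(s * dpsi**2 + k**2 * psi**2, dx=dz)

# Modal sum:      E_k = sum_n (k^2 + lam_n) |psi_n|^2 <F_n, F_n>
E_modal = 0.0
for n in range(80):
    norm_n = np.trapz(F[:, n]**2, dx=dz)     # <F_n, F_n> (no boundary terms for L^2 baroclinic modes)
    psi_n = np.trapz(psi * F[:, n], dx=dz) / norm_n
    E_modal += (k**2 + lam[n]) * psi_n**2 * norm_n

print("direct energy:", E_direct, " modal sum (80 modes):", E_modal)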
c. Quasigeostrophic L^2 ⊕ ℂ^2 modes
Consider the eigenvalue problem
$-\frac{d}{dz}\left(\frac{f_0^2}{N^2}\,\frac{dG}{dz}\right) = \lambda G \quad \text{for } z \in (z_1, z_2), \quad \text{and}$
$-k^2 G + (-1)^j D_j^{-1}\left(\frac{f_0^2}{N^2}\,\frac{dG}{dz}\right) = \lambda G \quad \text{for } z = z_j,$
where D[1] and D[2] are nonzero real constants. This eigenvalue problem [Eq. (80)] differs from the generalized Rhines eigenvalue problem of section 3c in that the D[j] are generally not equal to the γ[j] defined in Eq. (44). The inner product 〈,〉 induced by the eigenvalue problem is given by the same expression as for the generalized Rhines modes, with the γ[j] replaced by the D[j].
Smith and Vanneste (2012) investigate an equivalent eigenvalue problem to (80) and conclude that, when D[1] and D[2] are positive, the resulting eigenfunctions form a basis of L^2 ⊕ ℂ^2. However,
such a completeness result is insufficient for the Rossby wave problem of section 3c, in which case D[j] = γ[j] and γ[j] can be negative.
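For readers who want to experiment with this family of bases, the following finite-difference sketch (ours, not from the article) discretizes the eigenvalue problem (80): the interior rows carry the usual second-difference operator, while the first and last rows encode the λ-dependent boundary conditions. All parameter values, including the magnitudes and signs of D[1] and D[2], are placeholders; making a D[j] negative mimics the topographic case just discussed.

import numpy as np

def l2c2_modes(M=200, H=4000.0, f0=1.0e-4, N2=1.0e-5, k=2.0e-5, D1=-500.0, D2=500.0):
    """Discretized eigenvalue problem with lambda-dependent boundary conditions
    (illustrative finite-difference sketch; D1, D2 are placeholder parameters)."""
    z = np.linspace(-H, 0.0, M)
    dz = z[1] - z[0]
    s = f0**2 / N2
    A = np.zeros((M, M))
    for i in range(1, M - 1):                 # interior: -d/dz( s dG/dz ) = lam G
        A[i, i - 1] = -s / dz**2
        A[i, i]     = 2.0 * s / dz**2
        A[i, i + 1] = -s / dz**2
    # bottom boundary (j = 1): -k^2 G - (1/D1) s dG/dz = lam G, one-sided derivative
    A[0, 0] = -k**2 + s / (D1 * dz)
    A[0, 1] = -s / (D1 * dz)
    # top boundary (j = 2): -k^2 G + (1/D2) s dG/dz = lam G
    A[-1, -1] = -k**2 + s / (D2 * dz)
    A[-1, -2] = -s / (D2 * dz)
    lam, G = np.linalg.eig(A)
    idx = np.argsort(lam.real)
    # keep real parts (the cases discussed in the text have real eigenvalues)
    return lam.real[idx], G[:, idx].real, z

lam_g, G_modes, z_g = l2c2_modes()
print("number of eigenvalues below -k^2:", int((lam_g < -(2.0e-5)**2).sum()))

The count printed at the end is the number of eigenvalues below −k²; it is typically zero when both D[j] are positive, while a negative boundary parameter can introduce such modes, consistent with the negative-square modes described in section 3.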
d. Expansion with L^2 ⊕ ℂ^2 modes
When the boundary parameters D[1] and D[2] in the eigenvalue problem Eq. (80) are finite and nonzero, the resulting eigenmodes G[n] form a basis for the vertical structure phase space L^2 ⊕ ℂ^2. Thus, the projection of ψ[k] onto the G[n] is an equivalent representation of ψ[k]. Not only do we have pointwise equality
$\psi_k(z) = \psi_k^{\mathrm{exp}}(z) \quad \text{for } z \in [z_1, z_2],$
but the series is also differentiable on the closed interval [z[1], z[2]] [the case of D[j] > 0 is due to Fulton (1977), whereas the case of D[j] < 0 is due to Yassin (2021)]. Thus, given $ψkexp$, we can differentiate to obtain both the volume and surface potential vorticities and thereby recover all quasigeostrophic degrees of freedom. Indeed, we have
$q_k(z,t) = \sum_{n=0}^{\infty} q_{kn}(t)\, G_n(z), \quad \text{where} \quad q_{kn} = -(k^2 + \lambda_n)\,\frac{\langle \Psi, G_n \rangle}{\langle G_n, G_n \rangle},$
with analogous expressions for the surface potential vorticities r[j][k] for j = 1, 2.
In addition, the energy E[k], the volume potential enstrophy, and the surface potential enstrophies are partitioned (diagonalized) between the modes:
$E_k = \sum_{n=0}^{\infty} \langle G_n, G_n \rangle\, (k^2 + \lambda_n)\, |\psi_{kn}|^2,$
and similarly for the enstrophies.
5. Discussion
The traditional baroclinic modes are useful because they are the vertical structures of linear Rossby waves in a resting ocean and they can be used for wave-turbulence studies such as in, e.g., Hua
and Haidvogel (1986) and Smith and Vallis (2001). Therefore, any basis we choose should not only be complete in L^2 ⊕ ℂ^2 but should also represent the vertical structure of Rossby waves in the
linear (quiescent ocean) limit. Such a basis would then be amenable to wave-turbulence arguments and can permit a dynamical interpretation of field observations. The basis suggested by Smith and
Vanneste (2012) does not correspond to Rossby waves in the linear limit. It is a mathematical basis with two independent parameters D[1], D[2] > 0 that diagonalizes the energy and potential enstrophy integrals.
This observation can be verified by rewriting the linear time-evolution equations as a time-evolution equation for modal amplitudes. Expanding the streamfunction in terms of the generalized Rhines modes (the vertical eigenfunctions of the eigenvalue problem of section 3c for each wavevector k), using the relationship between the potential vorticities and the streamfunction, the orthonormality condition, and assuming γ[j](k) ≠ 0, we obtain a set of modal amplitude equations. This expansion diagonalizes the linear terms in the time-evolution equations through the choice D[j] = γ[j](k) for each wavevector k. However, this choice cannot be made using the Smith and Vanneste (2012) theory—which assumes that the D[j] are positive—because γ[j](k) can be negative in certain propagation directions.
The Rhines modes of section 3b offer a basis of L^2 ⊕ ℂ that corresponds to Rossby waves over topography in the linear limit. These Rhines modes do not contain any free parameters. Indeed, if we set
D[2] = 0 in the eigenvalue problem (80) and let D[1] = γ[1], we then obtain the Rhines modes. Note that since D[1] = γ[1] = γ[1](k) may be negative, the Smith and Vanneste (2012) modes do not apply.
Instead, the case of negative D[j] is examined in this article and in Yassin (2021).
However, the Rhines modes, as a basis of L^2 ⊕ ℂ, are not a basis of the whole vertical structure phase space L^2 ⊕ ℂ^2 since they exclude surface buoyancy anomalies at the upper boundary. To solve this problem, we can use the modes of the eigenvalue problem Eq. (80) with D[1] = γ[1] but leaving D[2] arbitrary, as in Smith and Vanneste (2012). Although this basis now only has one free parameter, D[2], it still does not correspond to Rossby waves in the linear limit. We can even eliminate this free parameter by interpreting surface buoyancy gradients as topography—for example, by treating the surface buoyancy gradient associated with the background flow as an equivalent topography and using it in the generalized Rhines modes of section 3c. However, the waves resulting from topographic gradients generally differ from those resulting from vertically sheared mean flows (in particular, one must take into account advective continuum modes), and so this resolution is artificial.
Galerkin approximations with L^2 modes
Both the L^2 baroclinic modes and the L^2 ⊕ ℂ^2 modes have infinitely many degrees of freedom. In contrast, numerical simulations only contain a finite number of degrees of freedom. Consequently, it
should be possible to use baroclinic modes to produce a Galerkin approximation to quasigeostrophic theory with nontrivial boundary dynamics. Such an approach has been proposed by Rocha et al. (2015).
Projecting ψ[k] onto the baroclinic modes produces a series expansion, $ψkexp$, that is differentiable in the interior but not at the boundaries. By differentiating the series in the interior (z[1], z[2]), we obtain Eq. (77). If instead we integrate by parts twice and avoid differentiating $ψkexp$, we obtain Eq. (92).
The two expressions Eqs. (77) and (92) are only equivalent when r[1][k] = r[2][k] = 0. For nonzero r[1][k] and r[2][k], the singular nature of the expansion means we have a choice between Eqs. (77)
and (92).
By choosing Eq. (92) and avoiding the differentiation of $ψkexp$, Rocha et al. (2015) produced a least squares approximation to quasigeostrophic dynamics that conserves the surface potential
enstrophy integrals in Eq. (26). This is a conservation property underlying their approximation’s success.
6. Conclusions
In this article, we have studied all possible noncontinuum collections of quasigeostrophic streamfunction normal modes that diagonalize the energy and potential enstrophy. There are four possible L^2
modes: the baroclinic modes, the antibaroclinic modes, the surface modes, and the antisurface modes. Additionally, we explored the properties of the family of L^2 ⊕ ℂ^2 bases introduced by Smith and
Vanneste (2012) that contain two free parameters D[1] and D[2] and generalized the family to allow for D[1], D[2] < 0. This generalization is necessary for Rossby waves in the presence of bottom
topography. If D[j] = γ[j], where γ[j] is given by Eq. (44) for j = 1, 2, the resulting modes are the vertical structure of Rossby waves in a quiescent ocean with prescribed boundary buoyancy
gradients (i.e., topography). We have also examined the associated L^2 and L^2 ⊕ ℂ^2 vertical velocity modes.
For the streamfunction L^2 modes, only the baroclinic modes are capable of converging pointwise to any quasigeostrophic state on the interval [z[1], z[2]], whereas for the vertical velocity L^2
modes, only the antibaroclinic modes are capable. However, in both cases, the resulting eigenfunction expansion is not differentiable at the boundaries, z = z[1], z[2]. Consequently, while we can
recover the volume potential vorticity density q[k], we cannot recover the surface potential vorticity densities r[1][k] and r[2][k]. Thus, we lose two degrees of freedom when projecting onto the
baroclinic modes. In contrast, L^2 ⊕ ℂ^2 modes provide an equivalent representation of the function in question. Namely, the eigenfunction expansion is differentiable on the closed interval [z[1], z
[2]] so that we can recover q[k], r[1][k], and r[2][k] from the series expansion.
We have also introduced a new set of modes, the Rhines modes, that form a basis of L^2 ⊕ ℂ and correspond to the vertical structures of Rossby waves over topography. A natural application of these
normal modes is to the study of weakly nonlinear wave-interaction theories of geostrophic turbulence found in Fu and Flierl (1980) and Smith and Vallis (2001), extending their work to include bottom topography.
A collection of functions is said to be complete in some function space Φ if this collection spans the space Φ. Specifying the underlying function space Φ turns out to be crucial, as we see in
section 2d.
The definition of L^2[D] is more subtle than is presented here. Namely, elements of L^2[D] are not functions but rather are equivalence classes of functions leading to the unintuitive properties seen
in this section. See Yassin (2021) and citations within for more details.
Since all physical fields must be real, only a single degree of freedom is gained from ℂ. Furthermore, when complex notation is used (e.g., complex exponentials for the horizontal eigenfunctions e[k]
) it is only the real part of the fields that is physical.
To apply the theory of Yassin (2021), summarized in appendix A, let $λ˜=λ−k2$ be the eigenvalue in place of λ; the resulting eigenvalue problem for $λ˜$ will then satisfy the positiveness conditions,
Eqs. (A7) and (A8), of appendix A.
A pure surface quasigeostrophic mode is the mode found after setting β = 0 with an upper boundary at z[2] = ∞.
Recall that the geostrophic streamfunction ψ is proportional to pressure (e.g., Vallis 2017, his section 5.4).
In particular, chapters 1 and 8 in Levitan and Sargsjan (1975) show that eigenfunction expansions have the same pointwise convergence and differentiability properties as the Fourier series with the
analogous boundary conditions. The behavior of Fourier series is discussed in Brown and Churchill (1993).
To see that $ψkexp$ is nondifferentiable at z = z[1], z[2], suppose that the series $ψkexp$ is differentiable and that dψ[k](z[j])/dz ≠ 0 for j = 1, 2; but then $0≠dψk(zj)/dz=∑n=0∞ψkn[dFn(zj)/dz]=0$,
which is a contradiction.
We offer sincere thanks to Stephen Garner, Robert Hallberg, Isaac Held, Sonya Legg, and Shafer Smith for comments and suggestions that greatly helped our presentation. We also thank Guillaume
Lapeyre, William Young, one anonymous reviewer, and the editor (Joseph LaCasce) for their comments that helped us to further refine and focus the presentation, and to correct confusing statements.
This report was prepared by Houssam Yassin under Award NA18OAR4320123 from the National Oceanic and Atmospheric Administration of the U.S. Department of Commerce. The statements, findings,
conclusions, and recommendations are those of the authors and do not necessarily reflect the views of the National Oceanic and Atmospheric Administration or the U.S. Department of Commerce.
Data availability statement.
The data that support the findings of this study are available within the article.
Sturm–Liouville Eigenvalue Problems with λ-Dependent Boundary Conditions
Consider the differential eigenvalue problem given by Eqs. (4) and (5). We assume that 1/p(z), q(z), and r(z) are real-valued integrable functions and that a[j], b[j], c[j], and d[j] are real numbers
for j = 1, 2. Moreover, we assume that p > 0, r > 0, p and r are twice continuously differentiable, q is continuous, and (a[j], b[j]) ≠ (0, 0).
Define the two boundary parameters D[j], j = 1, 2, by
Then the natural inner product for the eigenvalue problem is given by
$\langle F, G \rangle = \int_{z_1}^{z_2} F G \, dz + \sum_{j=1}^{2} D_j^{-1} (C_j F)(C_j G),$
where the boundary operator C[j] is defined by
$C_j F = c_j F(z_j) - d_j \left( p\,\frac{dF}{dz} \right)(z_j).$
The eigenvalue problem takes place in the space L^2 ⊕ ℂ^s, where s is the number of nonzero boundary parameters D[j]. Assume for the following that s = 2; the case in which s = 1 is similar. If D[j] > 0 for j = 1, 2, then the inner product is positive definite—that is, all nonzero elements satisfy 〈F, F〉 > 0. Therefore L^2 ⊕ ℂ^2, equipped with the inner product above, is a Hilbert space. In this Hilbert-space setting, the eigenfunctions form an orthonormal basis of L^2 ⊕ ℂ^2, and the eigenvalues are distinct and bounded below (Evans 1970; Walter 1973; Fulton 1977). The appendix of Smith and Vanneste (2012) also proves this result in the case in which … = 0. The convergence properties of normal-mode expansions in this case are due to Fulton (1977).
However, as we observe in section 3, the D[j] > 0 case is not sufficient for the Rossby wave problem with topography. In general, the space L^2 ⊕ ℂ^2 with the indefinite inner product above is a Pontryagin space (see Iohvidov and Krein 1960; Bognár 1974). Pontryagin spaces are analogous to Hilbert spaces except that they have a finite-dimensional subspace of elements satisfying 〈F, F〉 < 0. If Π is a Pontryagin space with inner product 〈,〉, then Π admits a decomposition Π = Π₊ ⊕ Π₋, where Π₊ is a Hilbert space under the inner product 〈,〉 and Π₋ is a finite-dimensional Hilbert space under the inner product −〈,〉. If {G[n]} is an orthonormal basis for the Pontryagin space Π, then an element Ψ ∈ Π can be expressed as a series in the G[n] with coefficients 〈Ψ, G[n]〉/〈G[n], G[n]〉. Even though each G[n] is normalized, the presence of 〈G[n], G[n]〉 = ±1 in the denominator is essential since this term may be negative.
One can rewrite the eigenvalue problem in Eqs. (4) and (5) in the operator form LF = λF for some operator L (e.g., Langer and Schneider 1991). The operator L is a positive operator if for the λ-dependent boundary conditions we have
$aiciDi≤0, bidiDi≤0, and (−1)iaidiDi≥0$
or for the λ-independent boundary conditions we have
$bi=0 or (−1)i+1aibi≥0 if bi≠0.$
Yassin (2021) has shown that, when L is positive, the eigenfunctions of the eigenvalue problem in Eqs. (4) and (5) form an orthonormal basis of L^2 ⊕ ℂ^2; that the eigenvalues are real; and that the eigenvalues are ordered as before. Moreover, since L is positive, we have the relationship … . Yassin (2021) also shows that the normal-mode-expansion results of Fulton (1977) extend to this case as well.
Polarization Relations and the Vertical Velocity Eigenvalue Problem
a. Polarization relations
The linear quasigeostrophic vorticity and buoyancy equations, computed about a resting background state, are
$\frac{\partial \zeta}{\partial t} + \beta\,\frac{\partial \psi}{\partial x} = f_0\,\frac{\partial w}{\partial z} \quad \text{and} \quad \frac{\partial b}{\partial t} + N^2 w = 0$
in the interior z ∈ (z[1], z[2]). The vorticity ζ and buoyancy b are given in terms of the geostrophic streamfunction by
$\zeta = \nabla^2 \psi \quad \text{and} \quad b = f_0\,\frac{\partial \psi}{\partial z}.$
The no-normal flow condition at the lower and upper boundaries relates w to the boundary functions g[j] for j = 1, 2. Substituting this relation into the linear buoyancy equation yields the boundary conditions
$\frac{\partial b}{\partial t} + \boldsymbol{u}\cdot\nabla\!\left(\frac{N^2}{f_0}\, g_j\right) = 0 \quad \text{for } z = z_j.$
We now assume wavelike solutions of the form $\psi = \mathrm{Re}[\hat{\psi}(z)\, e^{i(\boldsymbol{k}\cdot\boldsymbol{x} - \omega t)}]$, and similarly for the remaining fields. Substituting such solutions into the equations above yields the polarization relations, including
$\frac{d\hat{\psi}}{dz} = -\,\frac{i N^2}{f_0\,\omega}\,\hat{w}$
for z ∈ (z[1], z[2]). At the boundaries z = z[j], we use the boundary conditions above to obtain
$\hat{b} = -\,\frac{N^2}{f_0\,\omega}\,\hat{\boldsymbol{u}}\cdot\nabla g_j.$
b. The vertical velocity eigenvalue problem
Taking the vertical derivative of the interior polarization relation and using the vorticity equation yields an eigenvalue problem for the nondimensional vertical structure χ(z) [Eqs. (B12) and (B13)]. The boundary conditions at z = z[j] are obtained by using the polarization relations in the boundary conditions above. The orthonormality condition is
$\pm\,\delta_{mn} = \frac{1}{H}\left[\int_{z_1}^{z_2} \chi_m \chi_n \left(\frac{N^2}{f_0^2}\right) dz \;-\; \frac{1}{k^2}\sum_{j=1}^{2} \frac{1}{\gamma_j}\,(C_j \chi_m)(C_j \chi_n)\right].$
When only one boundary condition is λ dependent (e.g., γ[2] = 0), the eigenvalue problem in Eqs. (B12) and (B13) satisfies Eq. (A4) when γ[1] > 0 and Eqs. (A7) and (A8) when γ[1] < 0; thus, the
reality of the eigenvalues and the completeness results follow. However, when both boundary conditions are λ dependent the problem no longer satisfies these conditions for all k. Instead, in this
case, one exploits the relationship between the vertical velocity eigenvalue problem in Eqs. (B12) and (B13) and the streamfunction problem in Eqs. (55a) and (55b) given by Eqs. (B8) and (B9) to
conclude that the two problems have identical eigenvalues (for ω ≠ 0), and then one uses the simplicity of the eigenvalues to conclude that no generalized eigenfunctions can arise.
c. The vertical velocity L^2 modes
Analogous to the streamfunction L^2 modes, we have the following sets of vertical velocity L^2 modes. For baroclinic modes, we have vanishing vertical velocity at both boundaries,
$\chi(z_1) = 0 \quad \text{and} \quad \chi(z_2) = 0.$
For antibaroclinic modes, we have vanishing pressure at both boundaries,
$\frac{d\chi(z_1)}{dz} = 0 \quad \text{and} \quad \frac{d\chi(z_2)}{dz} = 0.$
For surface modes, we have
$\frac{d\chi(z_1)}{dz} = 0 \quad \text{and} \quad \chi(z_2) = 0.$
For antisurface modes, we have
$\chi(z_1) = 0 \quad \text{and} \quad \frac{d\chi(z_2)}{dz} = 0.$
• Binding, P. A., and P. J. Browne, 1999: Left definite Sturm-Liouville problems with eigenparameter dependent boundary conditions. Differ. Integr. Equations, 12, 167–182.
• Binding, P. A., P. J. Browne, and K. Seddighi, 1994: Sturm–Liouville problems with eigenparameter dependent boundary conditions. Proc. Edinburgh Math. Soc., 37, 57–72, https://doi.org/10.1017/
• Bognár, J., 1974: Indefinite Inner Product Spaces. Ergebnisse der Mathematik und ihrer Grenzgebiete, Vol. 78, Springer-Verlag, 223 pp.
• Brink, K. H., and J. Pedlosky, 2019: The structure of baroclinic modes in the presence of baroclinic mean flow. J. Phys. Oceanogr., 50, 239–253, https://doi.org/10.1175/JPO-D-19-0123.1.
• Brown, J. W., and R. V. Churchill, 1993: Fourier Series and Boundary Value Problems. 5th ed. McGraw-Hill, 348 pp.
• Burns, K. J., G. M. Vasil, J. S. Oishi, D. Lecoanet, and B. P. Brown, 2020: Dedalus: A flexible framework for numerical simulations with spectral methods. Phys. Rev. Res., 2, 023068, https://
• Charney, J. G., and G. R. Flierl, 1981: Oceanic analogues of large-scale atmospheric motions. Evolution of Physical Oceanography, B. A. Warren and C. Wunsch, Eds., MIT Press, 448–504.
• Chelton, D. B., R. A. deSzoeke, M. G. Schlax, K. El Naggar, and N. Siwertz, 1998: Geographical variability of the first baroclinic Rossby radius of deformation. J. Phys. Oceanogr., 28, 433–460,
• de La Lama, M. S., J. H. LaCasce, and H. K. Fuhr, 2016: The vertical structure of ocean eddies. Dyn. Stat. Climate Syst., 1, dzw001, https://doi.org/10.1093/climsys/dzw001.
• Drazin, P. G., D. N. Beaumont, and S. A. Coaker, 1982: On Rossby waves modified by basic shear, and barotropic instability. J. Fluid Mech., 124, 439–456, https://doi.org/10.1017/S0022112082002572
• Ferrari, R., S. M. Griffies, A. J. Nurser, and G. K. Vallis, 2010: A boundary-value problem for the parameterized mesoscale eddy transport. Ocean Modell., 32, 143–156, https://doi.org/10.1016/
• Fulton, C. T., 1977: Two-point boundary value problems with eigenvalue parameter contained in the boundary conditions. Proc. Roy. Soc. Edinburgh Sec. A: Math., 77, 293–308, https://doi.org/
• Hoskins, B. J., M. E. McIntyre, and A. W. Robertson, 1985: On the use and significance of isentropic potential vorticity maps. Quart. J. Roy. Meteor. Soc., 111, 877–946, https://doi.org/10.1002/
• Iohvidov, I. S., and M. G. Krein, 1960: Spectral theory of operators in spaces with an indefinite metric. I. Eleven Papers on Analysis, American Mathematical Society Translations: Series 2, Vol.
13, Amer. Math. Soc., 105–175.
• Lapeyre, G., 2009: What vertical mode does the altimeter reflect? On the decomposition in baroclinic modes and on a surface-trapped mode. J. Phys. Oceanogr., 39, 2857–2874, https://doi.org/
• Levitan, B. M., and I. S. Sargsjan, 1975: Introduction to spectral theory: Selfadjoint ordinary differential operators. Trans. Math. Monogr., No. 39, Amer. Math. Soc., 525 pp., https://doi.org/
• Rocha, C. B., W. R. Young, and I. Grooms, 2015: On Galerkin approximations of the surface active quasigeostrophic equations. J. Phys. Oceanogr., 46, 125–139, https://doi.org/10.1175/
• Roullet, G., J. C. McWilliams, X. Capet, and M. J. Molemaker, 2012: Properties of steady geostrophic turbulence with isopycnal outcropping. J. Phys. Oceanogr., 42, 18–38, https://doi.org/10.1175/
• Scott, R. B., and D. G. Furnival, 2012: Assessment of traditional and new eigenfunction bases applied to extrapolation of surface geostrophic current time series to below the surface in an
idealized primitive equation simulation. J. Phys. Oceanogr., 42, 165–178, https://doi.org/10.1175/2011JPO4523.1.
• Tulloch, R., and K. S. Smith, 2009: Quasigeostrophic turbulence with explicit surface dynamics: Application to the atmospheric energy spectrum. J. Atmos. Sci., 66, 450–467, https://doi.org/
• Vallis, G. K., 2017: Atmospheric and Oceanic Fluid Dynamics: Fundamentals and Large-scale Circulation. 2nd ed. Cambridge University Press, 946 pp. | {"url":"https://journals.ametsoc.org/view/journals/phoc/52/2/JPO-D-21-0199.1.xml","timestamp":"2024-11-11T00:19:20Z","content_type":"text/html","content_length":"1049715","record_id":"<urn:uuid:3e659462-9c01-4b5c-bba5-31b295210426>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00666.warc.gz"} |
Peak Analyzer, Fit Peaks Page
19.1.10.11 Peak Analyzer, Fit Peaks Page
This page is for OriginPro only. It is available in the Peak Analyzer only when Fit Peaks is selected for Goal in the Start page. It performs fitting to the peaks that are found by the Find Peaks
□ Menu Command: Analysis: Peaks and Baseline: Peak Analyzer: Open Dialog
□ Window Types: Workbook, Graph
Dialog Theme
Fit Peaks Controls
Snap to Spectrum – Select this check box to specify whether you want the peak center anchor points to be snapped to the spectrum data. If this is selected, the peak center anchor points will be pulled onto the spectrum as you add or move them.
Peaks – Click the Add button to add peaks manually, or click the Modify/Del button to modify or delete peaks manually.
Show Residuals – Specifies whether to show the current residuals plot.
Show 2nd Derivative – Specifies whether to show the second derivative of the spectrum data.
Generate Report from Current Fitting Result – If this check box is selected, the results report will be generated after you click the Finish button even when the fitting does not converge. Otherwise, the results report will be generated only when the fitting converges.
Fit Control – Click this button to open the Peak Fit Parameters dialog, which allows you to control the fitting. See more details here.
Fit – Click this button to perform the fit.
The Weight Group
Method – Specifies the weighting method, which will be used in calculating the Chi-Square during the fitting. The available weighting methods are:
• No Weighting
• Instrumental
• Statistical
• Arbitrary Dataset
• Direct Weighting
See details in Fitting with Errors and Weighting
Data This is available when Method is Instrumental, Arbitrary Dataset or Direct Weighting. Use this to specify the dataset that will be used for weighting.
The Result Group
The Output Settings Branch
Dataset Identifier – Specify the identifier for the source datasets. Select a type to specify the source dataset information. The Identifier can be Range, Book Name, Sheet Name, Name (use the long name of the corresponding column if there is a long name, otherwise use the short name of the column), Short Name, Long Name, Units, Comments, or <Custom> (for its usage, please refer to Advanced legend text).
Show Identifier in Flat Sheet
Specify whether to use the dataset identifier in flat sheet.
Report Tables / Peaks Table / Fitted Curves – Specifies the destination of Report Tables, Peaks Table, and Fitted Curves. Report Tables store comprehensive results for peak fitting, the Peaks Table stores peak information, and Fitted Curves store the data for the baseline and fitted curves.
☆ <none>: Do not output Report Tables/ Peaks Table/ Fitted Curves
☆ <auto>: If source workbook is available, the source workbook will be used as output; Otherwise, a new workbook will be created and used as output.
☆ <source>: The workbook that has the source data.
☆ <new>: A new workbook.
☆ <existing>: A specified existing workbook.
Sheet: the destination worksheet. This is always <new>.
Results Log (Report Tables only): Output the report to the Results Log.
Script Window (Report Tables only): Output the report to the Script Window.
Notes Window (Report Tables only): Use this drop-down list to specify the destination Notes Window.
☆ <none> (Report Tables only): Do not output to any Notes window.
☆ <new> (Report Tables only): Output to a new Notes window.
Fit Residuals – Specifies the destination workbook and worksheet for residual values.
Book Name:
☆ <fittedvalue>: The workbook where the fitted values will be output.
☆ <new>: A new workbook.
☆ <existing>: A specified existing workbook.
Sheet:
☆ <fittedvalue>: The worksheet that has the fitted values.
☆ <new>: A new worksheet.
☆ <existing>: A specified existing worksheet.
Subtracted Data – This is available only when a baseline has been subtracted and the Add Back Baseline check box is disabled. Specifies the destination workbook and worksheet for subtracted data.
☆ <fittedvalue>: The workbook where the fitted values will be output.
☆ <new>: A new workbook.
☆ <existing>: A specified existing workbook.
☆ <new>: The worksheet that has the fitted values.
☆ <existing>: A specified existing workbook.
Baseline Data – This is available only when a baseline has been subtracted and the Add Back Baseline check box is enabled. Specifies the destination workbook and worksheet for baseline data.
☆ <fittedvalue>: The workbook where the fitted values will be output.
☆ <new>: A new workbook.
☆ <existing>: An existing workbook.
☆ <new>: The worksheet that has the fitted values.
☆ <existing>: A specified existing workbook.
The Configure Report Branch
Specifies the quantities to be computed and displayed in the fitting report.
Fit Parameters
Value: Parameter value.
Shared: If the parameter is not shared, this will be 0. Otherwise, this value indicates the index of the peak whose corresponding parameter is shared with the parameter.
Fixed: Specifies whether the parameter is fixed.
Standard Error: Standard error of parameters
LCL: The lower confidence limit
UCL: The upper confidence limit
Confidence level for Parameters (%): The confidence level for regression
t-Value: t-test value of parameters
prob>|t|: p-value of parameters
Dependency: The dependency values for parameters
CI Half-Width: Half-width of the confidence interval
Lower Bound: Lower bound
Upper Bound: Upper bound
Fit Statistics
Number of Points: Total number of fitting points
Degrees of Freedom: Model degrees of freedom
Reduced Chi-Sqr: The reduced Chi-Square value
R Value: The R value, equals to square root of $R^2$
Residual Sum of Squares: Residual sum of squares (RSS); or sum of square error.
R-Square (COD): Coefficient of determination
Adj. R-Square: Adjusted coefficient of determination
Root-MSE (SD): Residual standard deviation; or square root of mean square error.
Number of Iterations: Number of iterations that were performed.
Fit Status: The status of the fitting
ANOVA: Output the analysis of variance table.
Covariance matrix: Output the covariance matrix.
Correlation matrix: Output the correlation matrix.
Please see details on the computation here.
Specifies the peak characteristics to be computed and displayed in the peak worksheet report.
Peak Index: The indices of peaks.
Peak Function Type: The function that was used to fit the peak. (In the report worksheet, this column is named Peak Type.)
Fitted Peak Area: Integrate to find the area between the peak function and the baseline using the parameter values obtained from the fit. The integration is performed from $-\infty$
to $\infty$. (In the report worksheet, this column is named Area Fit.)
Fitted Peak Area Contained in Fitting Range: Integrate to find the area between the peak function and the baseline using the parameter values obtained from the fit. The integration is performed within the data range only. (In the report worksheet, this column is named Area FitT.)
Fitted Peak Area Contained in Fitting Range(%): Integrate to find the area between the peak function and the baseline using the parameter values obtained from the fit. The
integration is performed within the data range only. The result is expressed as a percent of the total area. (In the report worksheet, this column is named Area FitTP.)
Peak Area by Integrating Data: Integrate to find the area between the fitted peak data and the baseline. The fitted peak data corresponds to Fit Peak # (Long Name) in FitPeakCurve#
output sheet. The integration is performed within the data range only. (In the report worksheet, this column is named Area Intg.)
Peak Area by Integrating Data(%): Integrate to find the area between the fitted peak data and the baseline. The fitted peak data corresponds to Fit Peak # (Long Name) in FitPeakCurve
# output sheet. The integration is performed within the data range only. The result is expressed as a percent of the total area. (In the report worksheet, this column is named Area
Location for Peak Maximum Height: The X value for the peak maximum. (In the report worksheet, this column is named Center Max.)
Peak Gravity Center: The X value of the peak center of gravity ($m_1'$). See 3rd Order Moment below. (In the report worksheet, this column is named Center Grvty.)
Peak Maximum Height: The Y value for the peak maximum. (In the report worksheet, this column is named Max Height.)
Full Width @ Half Maximum: The peak width at half the peak's maximum value (In the report worksheet, this column is named FWHM).
Width At: Specifies whether to enable the Width At(% of peak maximum) text box below.
Width At(% of peak maximum): The peak width at n% of the peak's maximum value, where n is the value entered in this text box. (In the report worksheet, this column is named WidthAtP
Area above: Specifies whether to enable the Area above(% of peak maximum) text box below.
Area above(% of peak maximum): The peak area above the n% of the peak's maximum value, where n is the value entered in this text box (In the report worksheet, this column is named
Cumulative Area to: Specifies whether to enable the Cumulative Area to(Relative to center) text box below.
Cumulative Area to(Relative to center): The cumulative fitted area from $-\infty$ to X, where X is the value entered in this text box and will be viewed as a specified value relative
to the peak center (In the report worksheet, this column is named CumArea).
Peak Variance: variance of the function, which is the second moment ($m_2$). See 3rd Order Moment below (In the report worksheet, this column is named Variance).
Peak Skew: Fisher skewness, which is a measure of the degree of asymmetry of the peak.
where $m_3$ is the 3rd order moment and $m_2$ is the 2nd order moment. See their computations below.
(In the report worksheet, this column is named Skew.)
Peak Excess: Fisher kurtosis, which measures the long-tailedness or peakedness of the peak relative to the normal or Gaussian distribution with the same mean and variance.
where $m_4$ is the 4th order moment and $m_2$ is the 2nd order moment. See their computations below.
(In the report worksheet, this column is named Excess.)
Resolution with Next Adjacent Peak.
$R_s=\frac{X_{c2}-X_{c1}}{0.5\cdot (w_2+w_1)}$
where $X_{c1}$ and $X_{c2}$ are peak centers, and $w_1$ and $w_2$ are constructed base widths.
(In the report worksheet, this column is named Resolution.)
3rd Order Moment:
The moments are defined as follows:
$m_o=\int_{-\infty}^\infty F(x)dx$ (0th moment or the area of the peak)
$m_n'=\frac 1{m_0}\int_{-\infty}^\infty F(x)x^ndx$, where $n\ge1$(nth zero-point moment)
$m_n=\frac 1{m_0}\int_{-\infty}^\infty F(x)(x-m_1')^ndx$, where $n\ge2$((nth central moment)
(In the report worksheet, this column is named Moment3.)
4th Order Moment:
The fourth order moment. See 3rd Order Moment above. (In the report worksheet, this column is named Moment4.)
Moments Computation Methods:
This is available only when a least one of the following check boxes are selected: Peak Skew, Peak Excess, Resolution with Next Adjacent Peak, 3rd Order Moment and 4th Order
Moment. You can use this drop-down list to select the method to calculate the peak properties related to moments. Two options are available:
☆ Speed Mode (Numeric Integral): Use trapezoid rule to perform integration on datasets when computing the moments.
☆ Accurate Mode (Function Integral): Use the function definition to compute the moments. This is more accurate than the speed mode.
Note: for a peak which you fit with GaussAmp, Gaussian, Lorentz or BiGaussian, the selection in this drop-down list will not apply. Instead, Origin uses the fitted parameters to calculate the moments.
Y Maximum:
Output the Y maximum (with baseline) for each found peak.
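As an illustration of the Speed Mode (Numeric Integral) option described above, the following sketch (ours, not Origin code) computes the peak gravity center, variance, skew, and excess from baseline-subtracted peak data with the trapezoid rule, following the moment definitions given above. The Gaussian test data and the −3 offset in the excess (the usual Fisher convention) are assumptions on our part.

import numpy as np

def peak_moments(x, y):
    """Moments of a baseline-subtracted peak y(x) via the trapezoid rule."""
    m0 = np.trapz(y, x)                       # 0th moment: peak area
    m1 = np.trapz(y * x, x) / m0              # gravity center (Center Grvty)
    m2 = np.trapz(y * (x - m1)**2, x) / m0    # variance (Variance)
    m3 = np.trapz(y * (x - m1)**3, x) / m0    # 3rd central moment (Moment3)
    m4 = np.trapz(y * (x - m1)**4, x) / m0    # 4th central moment (Moment4)
    skew = m3 / m2**1.5                       # Fisher skewness (Skew)
    excess = m4 / m2**2 - 3.0                 # Fisher kurtosis (Excess)
    return m0, m1, m2, skew, excess

# Quick check on a synthetic Gaussian peak: skew ~ 0, excess ~ 0
x = np.linspace(-10, 10, 2001)
y = 5.0 * np.exp(-0.5 * ((x - 1.0) / 1.5)**2)
print(peak_moments(x, y))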
The Configure Graph Branch
Specifies whether to create report graph.
Create Summary None: Do not to create report graph.
Graph New Graph: Create a new report graph.
Source Graph: Only available when source data is from graph. Use source graph as report graph.
Result Table – It is available only when Create Summary Graph is set to New Graph or Source Graph. Use this drop-down list to specify whether to show the result table on the report/summary graph.
□ None
Do not add the fitting result table to the graph.
□ Summary Graph
Add fitting result table to the summary graph.
□ Report Graph
Add fitting result table to the embedded graphs in the report sheet.
□ Summary and Report Graphs
Add fitting result table to both the summary graph and report sheet graph.
Table Style Template: Specifies the table template (*.OTW) to be used to show the results on the graph. By default, the system template is used. The table template should be
placed in Origin installation folder for it to be used directly by name, e.g., MyTable.otw.
Quantities in Table: Specify the quantities to be shown on table by inputting it in format *Quantity Abbreviation. You can also click the more options button next to edit box
to bring up Quantities in Table dialog and add/remove/order the quantities.
Fitted Curves Plot – When this check box is selected, fitted curves will be output and the controls under this branch will become available. Otherwise, the fitted curves will not be output and the other controls will be unavailable.
Plot Curves: Use this drop-down list to select the curves to be plotted.
☆ Individual Peak curve: A curve for each individual peak
☆ Cumulative Curve: Cumulative curve for all peaks
☆ Both: Both individual peak curve and cumulative curve will be plotted.
Use Separate X Data for Individual Peak: Use separate X data for individual peak so every fitted peak has its individual X data column. If unchecked, all peaks will share a
same X data column.
X Data Points for Individual Peak: Only available when Use Separate X Data for Individual Peak checkbox is checked. Specify the number of data points for X column for each
Add back Baseline: Specifies whether to add the baseline to the fitted curves and spectrum curve, if the baseline has been subtracted from the source data.
Plot in Report Table: Specifies whether to include the plots in the report table.
Stack with Residual vs. Independents Plot: Stack the fitted curve with the Residual vs. Independents Plot.
Update Legend on Source Graph: Specifies whether to auto update legend on the source graph. Available only when Create Summary Graph is set to <Source Graph>.
X Data Type: Specifies how to generate the X values of the fitted curve:
☆ Uniform Linear: The X values of the fitted curve are plotted on an evenly-spaced linear scale.
☆ Log: The X values of the fitted curve are plotted on a logarithmic scale.
☆ Same as Input Data: The X values of the fitted curve are the same as the input X values.
☆ Use Source Graph Scale Type: The X values of the fitted curve uses the same scale type of the source curve.
☆ Follow Curve Shape: The X values of the fitted curve are smartly computed so that the fitted curve will follow the source curve shape. It is very useful when the shape of the source curve changes rapidly in some areas, for example, when multiple peaks center in a short X range.
This control is available only when X Data Type is either Uniform Linear or Log. It specifies the total number of data points in a fitted curve.
This control is available only when X Data Type is either Uniform Linear or Log. It specifies the range of the X values of the fitted curve. Select one of the
following options:
○ Use Input Data Range + Range Margin: Use the X range of the input data and range margin specified in the Range Margin (%) text box below.
○ Span to Full Axis Range: Span the X values to the full axis range.
○ Custom: Enter the minimum X value and maximum X value in the Min and Max edit boxes below.
Range Margin (%): This control is available only when X Data Type is either Uniform Linear or Log and Use Input Data Range + Range Margin is selected for Range. It
specifies the range margin into which the fitted curves extend.
Min: This text box is available only when X Data Type is either Uniform Linear or Log and Custom is selected for Range. It specifies the minimum X value for the fitted
Max: This text box is available only when X Data Type is either Uniform Linear or Log and Custom is selected for Range. It specifies the maximum X value for the fitted
Confidence Bands: If this is checked, confidence bands will be added to the fitted curve plot as two lines with filled area in between. You can turn the area fill off or
customize the fill pattern on the Line tab of Plot Details dialog.
Prediction Bands: If this is checked, prediction bands will be added to the fitted curve plot as two lines with filled area in between. You can turn the area fill off or
customize the fill pattern on the Line tab of Plot Details dialog.
Confidence Level for Curves(%): Specify the confidence level for confidence bands and prediction bands. This is editable when either the Confidence Bands check box or the
Prediction Bands check box is selected.
Residual vs. Specifies whether to output the Residual vs. Independent plot.
Independent Plot
Peak Report Fields – This group includes a display box and a toolbar with five buttons:
Quantities in Table Dialog Control
Only available when Show Result Table on Graph is checked under Result: Configure Graph node. This Quantities in Table dialog includes a display box and a toolbar with five buttons :
Display Box The selected quantities and added comments will display in this box.
Triangle Button for Select – Click this button, then choose a quantity or add comments from the fly-out menu:
Model: The function model used to fit the peaks.
Plot: The data plot to be fitted.
Weight: Weighting method.
Parameters: Fitting Parameters of peak function.
Derived Parameters: Derived parameters from fitting parameters.
Parameters & Derived Parameters: Both fitting parameters and derived parameters.
Reduced Chi-Sqr: The reduced Chi-Square value.
Residual Sum of Squares: Residual sum of squares (RSS); or sum of square error.
R-Square(COD): Coefficient of determination.
Adj. R-Square: Adjusted coefficient of determination.
Add All: Add all quantities in the list.
Reset: Clear the quantity list.
User Comments...: Click to open User Comments dialog. Specify comments to be inputted into the result table. Use comma to separate comments to arrange them into multiple
columns in result table.
Remove button Remove the selected quantities from the Display Box. This button is available when you select one or more selected quantities in the box.
Move Up button Move the selected quantities up in the Display Box. Use this button to order the quantities and the results table will follow this order.
Move Down button Move the selected quantities down in the Display Box. Use this button to order the quantities and the results table will follow this order.
Select All button – Select all quantities in the Display Box. When clicked, this button becomes the Unselect All button.
Department of Mathematical Sciences
The teachers and associates of the department were involved in research in mathematical analysis (the theory of summability), functional analysis (linear operators), algebra (group theory), the geometric theory of functions of a complex variable, mathematical programming (non-linear programming), mathematical modeling, mathematical cybernetics, technical cybernetics, discrete mathematics (theory of automata, theory of functional systems, theory of Boolean functions, combinatorial analysis), informatics, and the mathematical theory of intelligent systems.
The teachers of the department, in cooperation with the Mathematical Institute of the Serbian Academy of Sciences and Arts, have for several years been organizing a seminar on the theory of automata, and a seminar on the theory of automata and image recognition.
The department has developed institutional scientific and research cooperation with the Department of Discrete Mathematics, and later with the Department of Mathematical Theory of Intelligent Systems
at the Faculty of Mechanics and Mathematics, Moscow State University "M. V. Lomonosov".
Mathematics 3
Ordinary differential equations – first-order differential equations, differential equations of the second and higher orders, some applications of ordinary differential equations;
Systems of ordinary differential equations – definition, methods of solution, some applications of systems of ordinary differential equations;
Series – numerical series: definition, properties, convergence criteria; power series: definition, properties, the domain of convergence, convergence criteria, expansion of functions into power series, Taylor's and Maclaurin's series.
Probability – events, definition of probability, characteristics of probability, conditional probability, total probability theorem, Bayes' theorem.
Elements of Probability and Statistics
Series – numerical series: definition, properties, convergence criteria; power series: definition, properties, the domain of convergence, convergence criteria, expansion of functions into power series, Taylor's and Maclaurin's series.
Probability – definition, characteristics, total probability theorem, Bayes' theorem, random variables, the most important discrete and continuous probability distributions, numerical characteristics of distributions, the central limit theorem of the calculus of probabilities;
Statistics – random sample, examples of the most important statistics, tabular and graphical representation of statistical data, point estimation of distribution parameters, methods of obtaining point estimates, confidence intervals for parameters of the normal distribution, parametric hypothesis testing, chi-square tests, regression (linear, non-linear).
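For illustration (an added example, not part of the syllabus), here is a small numerical instance of the total probability theorem and Bayes' theorem from the probability portion of the course, with assumed prevalence, sensitivity, and specificity values:

# P(D | +) = P(+ | D) P(D) / [ P(+ | D) P(D) + P(+ | not D) P(not D) ]
p_d = 0.01            # prior: prevalence (assumed)
sens = 0.95           # P(+ | D), sensitivity (assumed)
spec = 0.90           # P(- | not D), specificity (assumed)
p_pos = sens * p_d + (1 - spec) * (1 - p_d)     # total probability theorem
p_d_given_pos = sens * p_d / p_pos              # Bayes' theorem
print(p_d_given_pos)  # ~ 0.088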
Differential Equations
Ordinary differential equations – first-order differential equations, differential equations of the second and higher orders, some applications of ordinary differential equations;
Systems of ordinary differential equations – definition, methods of solution, some applications of systems of ordinary differential equations;
Partial differential equations – first-order partial differential equations, second-order partial differential equations, numerical solution of partial differential equations, some applications of partial differential equations;
Laplace transforms – definition, properties, inverse Laplace transforms, application of Laplace transforms to solving differential equations and systems of differential equations.
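As an added illustration (not from the syllabus) of the kind of problem treated in this course, a second-order constant-coefficient equation with initial conditions can be solved symbolically, for example with SymPy; the particular equation below is an invented example.

import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# y'' + 3 y' + 2 y = 0,  with y(0) = 1 and y'(0) = 0
ode = sp.Eq(y(t).diff(t, 2) + 3 * y(t).diff(t) + 2 * y(t), 0)
sol = sp.dsolve(ode, y(t), ics={y(0): 1, y(t).diff(t).subs(t, 0): 0})
print(sol)   # y(t) = 2*exp(-t) - exp(-2*t)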
Mathematical Processing of Experimental Data
Probability – definition, characteristics, total probability theorem, Bayes’ theorem, random variable, the most important discrete and continuous probability distributions, multidimensional random
variables, the most important multidimensional distributions, numerical characteristics of distributions, numerical characteristics of multidimensional distributions, law of large numbers and central
limit theorem of the calculus of probabilities; Statistics – random sample, examples of the most important statistics, tabular and graphical representation of statistical data, point estimation of
distribution parameters, methods of obtaining point estimates, confidence intervals for parameters of the normal distribution, parametric hypothesis testing, non-parametric tests, regression (linear, non-linear, multidimensional).
Mathematics 1
Basic elements of modern mathematics-mathematical logic, set theory, real numbers, complex numbers.
Real functions of one real variable – binary relations, elementary functions, polynomial functions, sequences of real numbers, limits of sequences, limits of functions, continuity of functions.
Derivatives – differentiation, higher-order derivatives, fundamental theorems of differential calculus, Taylor's theorem.
Elements of linear algebra and analytic geometry – determinants, matrices, systems of linear equations, vectors, equations of straight lines and planes.
Mathematics 2
Integral calculus – primitive function, the definite integral, improper integrals, applications of integrals.
Real functions of several real variables – definition, limit and continuity, partial derivatives and differentiation.
Complex functions of complex variables – definition, differential calculus, elementary functions.
Line and multiple integrals – lines and surfaces, line integrals, double integrals, triple integrals, surface integrals, Green–Stokes and Gauss–Ostrogradski theorems.
Scalar and vector fields – definition, divergence and curl of a vector field, Hamilton's operator.
Selected topics of mathematical analysis
Complex functions of a complex variable – definition, complex sequences, limits and continuity, derivative and differentiability, Cauchy–Riemann equations, integration, Cauchy's integral formulas, Taylor and Laurent series, residues and the residue theorem.
Calculus of variations – unconstrained and constrained minima of functions of several variables, the basic problem of the calculus of variations, problems with higher-order derivatives.
Fourier series – orthogonality of trigonometric functions, the Dirichlet theorem, Fourier series of some functions.
Selected topics of numerical analysis
Approximate numbers and errors – sources of error, absolute and relative error, approximate calculation of function values.
Numerical solution of nonlinear equations – bisection method, Newton's method, iterative methods, iterative methods for systems of equations.
Interpolation – the interpolation formula of Lagrange, Newton's interpolation formula, Hermite interpolation, Stirling interpolation, Bessel interpolation.
Numerical differentiation and integration – trapezoidal rule, Simpson's rule.
Numerical solution of ordinary differential equations – Euler and Newton methods, Runge–Kutta methods, Adams–Moulton methods. Numerical solution of partial differential equations – the finite difference method.
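For illustration (not part of the syllabus text), minimal Python versions of two of the listed methods, Newton's method for a nonlinear equation and the composite trapezoidal rule:

import math

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method for f(x) = 0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def trapezoid(f, a, b, n=1000):
    """Composite trapezoidal rule for the integral of f on [a, b]."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * total

print(newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0))   # ~ sqrt(2)
print(trapezoid(math.sin, 0.0, math.pi))                   # ~ 2.0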
Department staff
Full Professor
Assistant, Doctor of Science
Facile Electron Transfer to CO2 During Adsorption at the Metal | Solution Interface
We estimate the rate of electron transfer to CO2 at the Au (211)|water interface during adsorption in an electrochemical environment under reducing potentials. Based on density functional theory
calculations at the generalized gradient approximation and hybrid levels of theory, we find electron transfer to adsorbed *CO2 to be very facile. This high rate of transfer is estimated by the energy
distribution of the adsorbate-induced density of states as well as from the interaction between diabatic states representing neutral and negatively charged CO2. Up to 0.62 electrons are transferred
to CO2, and this charge adiabatically increases with the bending angle to a lower limit of 137°. We conclude that this rate of electron transfer is extremely fast compared to the timescale of the
nuclear degrees of freedom, that is, the adsorption process. | {"url":"https://suncat.stanford.edu/publications/facile-electron-transfer-co-2-during-adsorption-metal-solution-interface","timestamp":"2024-11-04T17:40:17Z","content_type":"text/html","content_length":"23585","record_id":"<urn:uuid:f281f49a-82b9-4a99-a69e-ebb8b4f6d11a>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00867.warc.gz"} |
ULTIMATE SYMMETRY - I.4.1 The Three Hypothesis of the
Single Monad Model
I.4.1 The Three Hypothesis of the Single Monad Model
The Single Monad Model of the Cosmos can be summarized into the following three complementary hypotheses:
1.The Single Monad: At any real single instance of time, there is only one Single Monad that alone can be described by real continuous existence. By perpetually manifesting in different forms, this
Monad creates other individual monads, thus imaging itself to make a comprehensive image as one single frame of the entire cosmos. This still picture is created in one full Week of the original
Cosmic Days of events, which are the inner levels of time, but this creative process is equivalent only to one single moment, that is the outward atom of time.
2.The Re-creation Principle: The forms of manifestation of the Single Monad cease to exist intrinsically right after the instant of their becoming, to be re-created again by the Single Monad in every
original creative Week, from Sunday to Friday. Being one of these discrete instances, we don t witness this creation process, since we only observe the created World on Saturday, the last Day of
creation. So the seven Days of the divine Week are in all one abstract geometrical point of space-time, which then creates the space-time container which encompasses the whole World, both spatially
and temporally. Due to this re-creation, although the internal flow of time is real and continuous, the outward time is discrete and imaginary, as we shall describe further below.
3.The Actual Flow of Time: Since the World takes seven Days to be re-created by the Single Monad, which manifests the forms of the individual monads one by one in chronological order, observers have
to wait, out of existence, six Days, from Sunday to Friday, in order to witness the next moment of creation, that is the next frame of space, on the following Saturday. In each Day of these Days of
creation, a corresponding dimension of the World is created. Therefore, the real flow of the actual created time doesn't go linearly, but rather is intertwined with the observable, normal earthly days in the special and rather mystifying manner that has been summarized in Chapter IV of Volume II.
This complex view of space-time and creation will lead to the Duality of Time hypothesis on which we will be able to explain many persisting problems in terms of the genuinely-complex time-time
geometry, including Quantum Gravity, as we shall explain further in Chapter II. Einstein's theory of General Relativity was able to explain Gravity in terms of space-time geometry, but it could not
realize this hidden discrete symmetry which is the only way to reconcile it with Quantum Mechanics. The complex time-time structure of the Duality of Time is naturally discrete, and it explains
Gravity as well as all other fundamental forces in terms of its complex-time geometry. Furthermore, this encompassing view of time will include not only the physical world, but also the psychical and
spiritual worlds that will be described in the coming chapters.
Ibn al-Arabi stresses that this continuously renewed return to non-existence is an intrinsic condition of all created forms, and not due to any external force [II.385.4]. Typically he relates this
fundamental insight to the Quranic verse: (but they are unaware of the new creation) [50:15], which he frequently quotes, along with the famous verse concerning the Day of the divine Task/Event
[55:29] that he always cites in relation to his intimately related concept of discrete time. Therefore, the existence of things in the World is not continuous, as we may imagine and deceitfully
observe, because Allah is continuously and perpetually creating every single thing at every instance, or in every single Day of Event [II.454.21, II.384.30]. If any entity in existence would remain
for at least two instants of time, it would be independent from God, so with this re-creation everything is always in need for the Creator to bring it into existence.
Additionally, Ibn al-Arabi argues that there are no two truly identical forms, since otherwise Allah won't be described as the "Infinitely Vast". Because of this unique divine Vastness [I.266.8], the
Single Monad will never wear two identical forms: i.e. it never wears exactly the same form for more than one instance; so nothing is ever truly repeated [I.721.22]. The new forms, he admits, are
often similar to the previous ones, but they aren't the same [II.372.21, III.127.24]. Ibn al-Arabi summarized this by saying:
At every instance of time, the World is (perpetually) re-formed and disintegrated, so the individual entity of the substance of the World has no persistence (in existence) except through its
receiving of this formation within it. Therefore the World is always in a state of needfulness, perpetually: the forms are in need (of a creator) to bring them forth from non-existence into
existence; and the substance (being the substrate for the created forms or accidents) is also in need to preserve its existence, because unavoidably a condition for its existence is the existence of
the formation of those (newly re-created forms) for which it is a substrate. [II.454.19]
As we mentioned in the previous two volumes, this Single Monad is also called the Universal Intellect, and also the Pen, among many other different names or descriptions. However, some confusion may
occur with the Greatest Element that is creating the Single Monad itself; sometimes it is not very clear for some of these many variant names whether they are really for the Single Monad or the
Greatest Element. One of these interesting names is "the real through whom creation takes place", that is the most perfect image of the Real, as a divine name of Allah, the Creator of the Worlds.
Everything in the Creation is rooted in the Single Monad, just as the leaves and the fruits of a tree are rooted in the stalk that spring out of the seed. The Single Monad is like the seed for the
tree of the cosmos, while the Greatest Element, is what makes up the seed down to the cells, atoms and subatomic particles inside it, just as the leaves were also determined in the seed even before
it was planted.
Furthermore, one of the most interesting names of this Single Monad is "Everything"! This name is interesting because Ibn al-Arabi says that "in everything there is everything, even if we don't
recognize that". This is another way of saying that "the Single Monad is in everything", but it also means that the internal structure of the Single Monad is as complicated as the World itself,
because: in everything, even the Single Monad, there is everything, even the World! This is plausible since both the Single Monad, that is the microcosm, and the whole World, that is the macrocosm,
are created on the divine Image. This is reminiscent of fractals in mathematics, such as the Mandelbrot set, the Julia set and the Sierpinski triangle, where the structure keeps repeating itself on any larger
or smaller scale, as we shall describe further in section I.2 below. This also means that although each instance of the outward time is an indivisible moment, it is internally divided into
sub-moments, just as the visible day, where the Sun rises, moves gradually in the sky and then sets to rise again in the next day. So on the outward dimension, this whole day forms an indivisible
unit of time, but internally it seems to be continuously divisible into ever smaller time intervals, at least potentially. Therefore, just as the Single Monad might be identical with the World, the
moment might be identical with the day. It just depends on the scale we are using; if we were inside the Single Monad we might see creations such as the Sun, planets and the stars, but because we are
outside we see it as a point. Similarly, if we suppose we go outside the Universe, we shall see it as a point; that is as the Single Monad, indivisible but compound. This is also similar in modern
cosmology to the black hole, which occupies a single point in our space but itself is considered a complete world. | {"url":"https://www.smonad.com/symmetry/book.php?id=35","timestamp":"2024-11-10T14:15:21Z","content_type":"text/html","content_length":"38378","record_id":"<urn:uuid:4a2aea76-dabb-4a88-a157-a045b11efeb5>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00358.warc.gz"} |
Dharmendra S. Modha
Authors: Deepika Bablani, Jeffrey L. Mckinstry, Steven K. Esser, Rathinakumar Appuswamy, Dharmendra S. Modha
Abstract: For effective and efficient deep neural network inference, it is desirable to achieve state-of-the-art accuracy with the simplest networks requiring the least computation, memory, and
power. Quantizing networks to lower precision is a powerful technique for simplifying networks. It is generally desirable to quantize as aggressively as possible without incurring significant
accuracy degradation. As each layer of a network may have different sensitivity to quantization, mixed precision quantization methods selectively tune the precision of individual layers of a network
to achieve a minimum drop in task performance (e.g., accuracy). To estimate the impact of layer precision choice on task performance two methods are introduced: i) Entropy Approximation Guided Layer
selection (EAGL) is fast and uses the entropy of the weight distribution, and ii) Accuracy-aware Layer Precision Selection (ALPS) is straightforward and relies on single epoch fine-tuning after layer
precision reduction. Using EAGL and ALPS for layer precision selection, full-precision accuracy is recovered with a mix of 4-bit and 2-bit layers for ResNet-50 and ResNet-101 classification networks,
demonstrating improved performance across the entire accuracy-throughput frontier, and equivalent performance for the PSPNet segmentation network in our own commensurate comparison over leading mixed
precision layer selection techniques, while requiring orders of magnitude less compute time to reach a solution.
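To make the entropy idea concrete, here is a rough sketch of how a per-layer entropy score could be computed and used to rank layers. This is only one plausible reading of the abstract, not the authors' implementation: the histogram-based score and the ranking direction below are assumptions.

import numpy as np

def weight_entropy(weights, bits):
    # Entropy (in bits) of the histogram of a layer's weights bucketed into 2**bits uniform bins.
    hist, _ = np.histogram(np.ravel(weights), bins=2 ** bits)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def rank_layers_by_entropy(layer_weights, bits=2):
    # Heuristic: layers whose weights carry little histogram entropy at the reduced precision are
    # listed first, as candidates for the most aggressive quantization.
    scores = {name: weight_entropy(w, bits) for name, w in layer_weights.items()}
    return sorted(scores, key=scores.get)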
Link: https://arxiv.org/abs/2301.13330 | {"url":"https://modha.org/page/2/","timestamp":"2024-11-10T19:16:49Z","content_type":"text/html","content_length":"79802","record_id":"<urn:uuid:41b31f27-eeea-4106-8227-a47f7c3b85db>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00307.warc.gz"} |
Definition of the Derivative | Example
• Learn how to answer one of the most important questions in calculus by calculating the rate of change of a function at a point (aka taking the derivative)!
• Understand derivatives with help from this lesson on difference quotients. See how to plug in values and functions and then simplify confusing equations.
• In this lesson from IntegralCalc, brush up on your calculus skills and work through a problem to find the equation of a line tangent to a particular function!
• Don’t be intimidated by long implicit differentiation problems! Learn how to solve this type of equation with help from Krista, founder of IntegralCalc.
• Learn how to solve optimization problems and find the extremes, the local or global minima or maxima of a function, in this lesson from integralCALC.
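For anyone who wants to try the idea numerically before (or after) the videos, here is a minimal sketch of the difference quotient the lessons build on — not part of the course itself, just an illustration:

def derivative_at(f, x, h=1e-6):
    # difference quotient: slope of the secant line through (x, f(x)) and (x + h, f(x + h))
    return (f(x + h) - f(x)) / h

# d/dx of x^2 at x = 3 should be close to 6
print(derivative_at(lambda x: x ** 2, 3.0))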
8 Comments
Great presentation. I always seem to understand but when taking exams I get nervous especially when running out of time. I need short cuts and quick checks to make sure I get the correct answer if
you know any.
At least for derivatives, you can do the problems a lot faster when you learn all the derivative rules and can use them instead of the definition. When it comes to test taking, I know a lot of people
struggle, but the most important thing is to stay calm, make sure you answer the easy questions first to get points, and not worry about problems you don't get to, because the worry and anxiety don't
help you finish the current problem. But it takes practice! :)
I quite agree with you.
good clear concise explanations. I haven't done this stuff since 1974. very good job. well done. | {"url":"https://curious.com/integralcalc/definition-of-the-derivative-example/in/calculus-i-essentials","timestamp":"2024-11-09T22:42:34Z","content_type":"text/html","content_length":"193759","record_id":"<urn:uuid:77ceb467-9e51-4136-88bf-9af956802528>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00311.warc.gz"} |
The SURVEYMEANS Procedure
Replicate Weights Output Data Set
If you specify the OUTWEIGHTS= method-option for VARMETHOD=BRR or VARMETHOD=JACKKNIFE, PROC SURVEYMEANS stores the replicate weights in an output data set. The OUTWEIGHTS= output data set contains
all observations from the DATA= input data set that are valid (used in the analysis). (A valid observation is an observation that has a positive value of the WEIGHT variable. Valid observations must
also have nonmissing values of the STRATA and CLUSTER variables, unless you specify the MISSING option. See the section Data and Sample Design Summary for details about valid observations.)
The OUTWEIGHTS= data set contains the following variables:
• all variables in the DATA= input data set
• RepWt_1, RepWt_2, ..., RepWt_n, which are the replicate weight variables
where n is the total number of replicates in the analysis. Each replicate weight variable contains the replicate weights for the corresponding replicate. Replicate weights equal zero for those
observations not included in the replicate.
After the procedure creates replicate weights for a particular input data set and survey design, you can use the OUTWEIGHTS= method-option to store these replicate weights and then use them again in
subsequent analyses, either in PROC SURVEYMEANS or in the other survey procedures. You can use the REPWEIGHTS statement to provide replicate weights for the procedure. | {"url":"http://support.sas.com/documentation/cdl/en/statug/65328/HTML/default/statug_surveymeans_details50.htm","timestamp":"2024-11-09T00:52:30Z","content_type":"application/xhtml+xml","content_length":"14992","record_id":"<urn:uuid:1e75e1c5-d0b7-425d-8d99-7a835af4a5a0>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00074.warc.gz"} |
The Frederic Esser Nemmers Prize in Mathematics goes this year to Assaf Naor “for his profound work on the geometry of metric spaces, which has led to breakthroughs in the theory of algorithms.”
Snowflakes at infinity
The countless shapes of snowflakes have long raised the curiosity of many scientists, among others the famous Kepler. They have by now been classified by empirical observation into 80 different
shapes, but a mathematical explanation for this classification seems to be missing. A striking point about them is that, even though two snowflakes are almost … Continue reading "Snowflakes at
Ricci flow and diffeomorphism groups of 3-manifolds
A new paper proves the contractibility of the space of constant curvature metrics on all 3-manifolds except possibly real projective space. Bamler, Kleiner: Ricci flow and diffeomorphism groups of
3-manifolds, https://arxiv.org/pdf/1712.06197.pdf The Smale conjecture in its original form asserted that the diffeomorphism group of the 3-sphere deformation retracts onto O(4), the isometry group
of its … Continue reading "Ricci flow and diffeomorphism groups of 3-manifolds"
A locally hyperbolic 3-manifold that is not hyperbolic
A preprint with a new example shows that the understanding of infinitely generated Kleinian groups will be more complicated than for the finitely generated ones. Cremaschi: A locally hyperbolic
3-manifold that is not hyperbolic, https://arxiv.org/pdf/1711.11568 By the proofs of hyperbolization and tameness, one knows precisely which irreducible 3-manifolds with finitely generated
fundamental groups admit hyperbolic … Continue reading "A locally hyperbolic 3-manifold that is not hyperbolic"
What’s hot at MathOverflow 23/2017
The two ways Feynman diagrams appear in mathematics
Bavarian Geometry/Topology Meeting
These days, there was the 2nd Bavarian Geometry/Topology Meeting, organized by Fabian Hebestreit and Markus Land, and hopefully becoming a tradition like the NRW topology meeting, which by now has had its
28th recurrence. The main event of the meeting was the series of lectures by Oscar Randal-Williams from Oxford, who discussed work on the cohomology of the mapping … Continue reading "Bavarian Geometry/Topology
Breakthrough Prize for higher-dimensional geometry
The award ceremony is certainly not what mathematicians are used to, and there are certainly many things that one can say for and against such monstrous awards and the ambience around. In any case,
if you‘d like to see the ceremony, the math part starts at 1:22:30. The breakthrough prize for 2018 was given to … Continue reading "Breakthrough Prize for higher-dimensional geometry"
Quasi-isometric groups with no common model geometry
Do quasi-isometries between groups always arise from actions on a common model space? Previous counterexamples invoked central extensions of lattices, e.g., of surface groups. A new construction of
infinitely many classes is now using amalgams of surface groups. Stark, Woodhouse: Quasi-isometric groups with no common model geometry, https://arxiv.org/pdf/1711.05026.pdf If a group \(\Gamma\)
acts geometrically (i.e., … Continue reading "Quasi-isometric groups with no common model geometry" | {"url":"https://blog.spp2026.de/page/24/","timestamp":"2024-11-15T03:46:18Z","content_type":"text/html","content_length":"53747","record_id":"<urn:uuid:35da9d6e-b28c-49e7-8d88-85d1a2e410f0>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00610.warc.gz"} |
Exact Approaches for the Knapsack Problem with Setups
We consider a generalization of the knapsack problem in which items are partitioned into classes, each characterized by a fixed cost and capacity. We study three alternative Integer Linear
Programming formulations. For each formulation, we design an efficient algorithm to compute the linear programming relaxation (one of which is based on Column Generation techniques). We theoretically
compare the strength of the relaxations and derive specific results for a relevant case arising in benchmark instances from the literature. Finally, we embed the algorithms above into a unified
implicit enumeration scheme which is run in parallel with an improved Dynamic Programming algorithm to efficiently compute an optimal solution of the problem. An extensive computational analysis
shows that our new exact algorithm is capable of efficiently solving all the instances of the literature and turns out to be the best algorithm for instances with a low number of classes. | {"url":"https://optimization-online.org/2016/07/5537/","timestamp":"2024-11-14T08:33:19Z","content_type":"text/html","content_length":"84722","record_id":"<urn:uuid:3e1a2818-3e32-4103-a7fe-c8b6c1234658>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00753.warc.gz"} |
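To make the problem statement of the abstract above concrete, here is a small dynamic-programming sketch of the knapsack problem with setups. It is purely illustrative — this is not the paper's improved DP algorithm, and the data layout is invented. Each class must be "opened", paying a setup cost and a setup capacity, before any of its items can be packed.

def knapsack_with_setups(classes, capacity):
    # classes: list of (setup_cost, setup_cap, items), where items is a list of (value, weight).
    NEG = float("-inf")
    dp = [0] * (capacity + 1)               # best profit using at most c units of capacity
    for setup_cost, setup_cap, items in classes:
        # Best profit of this class alone (given it is opened) with at most c units of capacity.
        inner = [NEG] * (capacity + 1)
        for c in range(setup_cap, capacity + 1):
            inner[c] = -setup_cost
        for value, weight in items:
            for c in range(capacity, weight - 1, -1):
                if inner[c - weight] > NEG:
                    inner[c] = max(inner[c], inner[c - weight] + value)
        # Either skip the class entirely, or give it `used` units of capacity.
        new_dp = dp[:]
        for c in range(capacity + 1):
            for used in range(setup_cap, c + 1):
                if inner[used] > NEG:
                    new_dp[c] = max(new_dp[c], dp[c - used] + inner[used])
        dp = new_dp
    return dp[capacity]

# one class: setup cost 2, setup capacity 1, items (value, weight)
print(knapsack_with_setups([(2, 1, [(6, 2), (5, 3)])], 6))   # 9: open the class, pack both items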
Estimation of the confidence envelope of the D function under
DEnvelope {dbmss} R Documentation
Estimation of the confidence envelope of the D function under its null hypothesis
Simulates point patterns according to the null hypothesis and returns the envelope of D according to the confidence level.
DEnvelope(X, r = NULL, NumberOfSimulations = 100, Alpha = 0.05,
Cases, Controls, Intertype = FALSE, Global = FALSE,
verbose = interactive())
X A point pattern (wmppp.object).
r A vector of distances. If NULL, a sensible default value is chosen (512 intervals, from 0 to half the diameter of the window) following spatstat.
NumberOfSimulations The number of simulations to run, 100 by default.
Alpha The risk level, 5% by default.
Cases One of the point types.
Controls One of the point types.
Intertype Logical; if TRUE, D is computed as Di in Marcon and Puech (2012).
Global Logical; if TRUE, a global envelope sensu Duranton and Overman (2005) is calculated.
verbose Logical; if TRUE, print progress reports during the simulations.
The only null hypothesis is random labeling: marks are distributed randomly across points.
This envelope is local by default, that is to say it is computed separately at each distance. See Loosmore and Ford (2006) for a discussion.
The global envelope is calculated by iteration: the simulations reaching one of the upper or lower values at any distance are eliminated at each step. The process is repeated until a proportion Alpha of the
simulations has been dropped. The remaining upper and lower bounds at all distances constitute the global envelope. Interpolation is used if the exact ratio cannot be reached.
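For readers who want the gist of this elimination procedure outside R, here is a rough sketch. The interpolation step mentioned above is omitted, and the package's actual implementation differs in detail:

import numpy as np

def global_envelope(simulations, alpha):
    # simulations: (n_sim, n_distances) array, one simulated curve per row.
    # Repeatedly drop curves touching the current pointwise min or max at any distance,
    # until about alpha * n_sim curves are gone; the min/max of the survivors is the envelope.
    sims = np.asarray(simulations, dtype=float)
    n_to_drop = int(round(alpha * len(sims)))
    dropped = 0
    while dropped < n_to_drop and len(sims) > 2:
        lo, hi = sims.min(axis=0), sims.max(axis=0)
        idx = np.where(((sims == lo) | (sims == hi)).any(axis=1))[0]
        idx = idx[: n_to_drop - dropped]        # do not overshoot the target count
        sims = np.delete(sims, idx, axis=0)
        dropped += len(idx)
    return sims.min(axis=0), sims.max(axis=0)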
An envelope object (envelope). There are methods for print and plot for this class.
The fv contains the observed value of the function, its average simulated value and the confidence envelope.
Duranton, G. and Overman, H. G. (2005). Testing for Localisation Using Micro-Geographic Data. Review of Economic Studies 72(4): 1077-1106.
Kenkel, N. C. (1988). Pattern of Self-Thinning in Jack Pine: Testing the Random Mortality Hypothesis. Ecology 69(4): 1017-1024.
Loosmore, N. B. and Ford, E. D. (2006). Statistical inference using the G or K point pattern spatial statistics. Ecology 87(8): 1925-1931.
Marcon, E. and F. Puech (2017). A typology of distance-based measures of spatial concentration. Regional Science and Urban Economics. 62:56-67.
See Also
# Keep only 20% of points to run this example
X <- as.wmppp(rthin(paracou16, 0.2))
autoplot(X,
         labelSize = expression("Basal area (" ~cm^2~ ")"),
         labelColor = "Species")
# Calculate confidence envelope (should be 1000 simulations, reduced to 20 to save time)
r <- 0:30
NumberOfSimulations <- 20
Alpha <- .05
# Plot the envelope (after normalization by pi.r^2)
autoplot(DEnvelope(X, r, NumberOfSimulations, Alpha,
"V. Americana", "Q. Rosea", Intertype = TRUE), ./(pi*r^2) ~ r)
version 2.9-0 | {"url":"https://search.r-project.org/CRAN/refmans/dbmss/html/DEnvelope.html","timestamp":"2024-11-06T23:55:26Z","content_type":"text/html","content_length":"5623","record_id":"<urn:uuid:45e52de1-12dc-410b-90ea-4626a4016043>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00028.warc.gz"} |
Imbedding statistics for linear families via Markov chains
The genus polynomial for a finite graph G is the generating function g_G(z) = Σ a_i z^i, where a_i is the number of imbeddings of G in the surface of genus i. A linear family G_n of graphs is formed by taking n copies of the same graph G and forming a path of them by adding edges in the
same way between one copy of G and the next. For any such linear family there is a production or transfer matrix M(z) and initial vector v(z) (all entries are polynomials in z with non-negative
integer coefficients) such that the genus polynomials for the imbedding types of G_n are given by M^n(z) v(z). The columns of M(1) have constant column sum s, so (1/s)M(1) is a matrix for a Markov chain
whose states are the imbedding types of the linear family. We show how to use the Jordan normal form for M(1) to find the average genus of each imbedding type for each member of a linear family.
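As an illustration of the M^n(z) v(z) mechanism and of the average-genus computation, here is a short symbolic sketch; the 2×2 transfer matrix and initial vector below are made up, since the paper's actual matrices are not reproduced in this abstract:

import sympy as sp

z = sp.symbols('z')

def genus_polynomials(M, v, n):
    # Genus polynomials of the n-th member of the family: the vector M(z)^n * v(z).
    out = sp.Matrix(v)
    M = sp.Matrix(M)
    for _ in range(n):
        out = (M * out).applyfunc(sp.expand)
    return out

def average_genus(g):
    # For a genus polynomial g(z) = sum a_i z^i, the average genus is g'(1) / g(1).
    return sp.diff(g, z).subs(z, 1) / g.subs(z, 1)

M = [[1, z], [z, 1 + z]]   # toy transfer matrix with polynomial entries
v = [1, 0]
g = genus_polynomials(M, v, 5)
print(g.T, [average_genus(gi) for gi in g])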
Original language English
Title of host publication AMS National Meeting
State Published - 2018
Dive into the research topics of 'Imbedding statistics for linear families via Markov chains'. Together they form a unique fingerprint. | {"url":"https://cris.iucc.ac.il/en/publications/imbedding-statistics-for-linear-families-via-markov-chains","timestamp":"2024-11-03T22:03:17Z","content_type":"text/html","content_length":"44368","record_id":"<urn:uuid:808e2f61-e3f1-4ac7-b3ae-de4270ef9481>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00292.warc.gz"} |
Transactions Online
Asahi MIZUKOSHI, Ayano NAKAI-KASAI, Tadashi WADAYAMA, "PSOR-Jacobi Algorithm for Accelerated MMSE MIMO Detection" in IEICE TRANSACTIONS on Fundamentals, vol. E107-A, no. 3, pp. 486-492, March 2024,
doi: 10.1587/transfun.2023TAP0004.
Abstract: This paper proposes the periodical successive over-relaxation (PSOR)-Jacobi algorithm for minimum mean squared error (MMSE) detection of multiple-input multiple-output (MIMO) signals. The
proposed algorithm has the advantages of two conventional methods. One is the Jacobi method, which is an iterative method for solving linear equations and is suitable for parallel implementation. The
Jacobi method is thus a promising candidate for high-speed simultaneous linear equation solvers for the MMSE detector. The other is the Chebyshev PSOR method, which has recently been shown to
accelerate the convergence speed of linear fixed-point iterations. We compare the convergence performance of the PSOR-Jacobi algorithm with that of conventional algorithms via computer simulation.
The results show that the PSOR-Jacobi algorithm achieves faster convergence without increasing computational complexity, and higher detection performance for a fixed number of iterations. This paper
also proposes an efficient computation method of inverse matrices using the PSOR-Jacobi algorithm. The results of computer simulation show that the PSOR-Jacobi algorithm also accelerates the
computation of inverse matrix.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.2023TAP0004/_p
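As a rough illustration of the building blocks (not the paper's algorithm): MMSE detection amounts to solving a linear system of the form (H^H H + σ²I)x = H^H y, and a relaxed Jacobi iteration for a generic system Ax = b looks like the sketch below. The Chebyshev-derived schedule of relaxation factors that defines PSOR in the paper is not reproduced here.

import numpy as np

def relaxed_jacobi(A, b, omegas=(1.0,), iterations=50):
    # Jacobi update x <- x + omega_k * (D^{-1}(b - R x) - x), where A = D + R and D is the diagonal.
    # omegas=(1.0,) gives plain Jacobi; cycling through several factors is the flavour of a
    # periodical over-relaxation scheme.
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = np.zeros_like(b)
    for k in range(iterations):
        x = x + omegas[k % len(omegas)] * ((b - R @ x) / D - x)
    return x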
author={Asahi MIZUKOSHI, Ayano NAKAI-KASAI, Tadashi WADAYAMA, },
journal={IEICE TRANSACTIONS on Fundamentals},
title={PSOR-Jacobi Algorithm for Accelerated MMSE MIMO Detection},
TY - JOUR
TI - PSOR-Jacobi Algorithm for Accelerated MMSE MIMO Detection
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 486
EP - 492
AU - Asahi MIZUKOSHI
AU - Ayano NAKAI-KASAI
AU - Tadashi WADAYAMA
PY - 2024
DO - 10.1587/transfun.2023TAP0004
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E107-A
IS - 3
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - March 2024
ER - | {"url":"https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.2023TAP0004/_p","timestamp":"2024-11-03T12:19:07Z","content_type":"text/html","content_length":"63377","record_id":"<urn:uuid:ea1d0ebd-79e4-4e92-85ca-ad52ff0226bd>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00722.warc.gz"} |
12 February 2012 Archives
“We have to split up… in case somebody better comes along!”
Either from our own real life or from popular culture and the media, we’ve all come across a statement like that. It’s rarely quite so brazen: instead, it’s sometimes concealed behind another reason,
whether tactful or simply false. But it still reeks of a lack of commitment and an unwillingness to “give it a try.”
With thanks to Flickr user "i.am.rebecca".
However, it turns out that there’s actually a solid mathematical basis for it. Let’s assume for a moment that you:
1. Engage exclusively in monogamous relationships. To each their own, I suppose.
2. Are seeking for a relationship that will last indefinitely (e.g. traditional monogamous marriage, “’til death do us part,” and all that jazz).
3. Can’t or won’t date your exes.
4. Can rate all of your relationships relative to one another (i.e. rank them all, from best to worst).
5. Can reasonably estimate the number of partners that you will have the opportunity to assess over the course of your life. You can work this out by speculating on how long you’ll live (and be
dating!) for, and multiplying, though of course there are several factors that will introduce error. When making this assumption, you should assume that you break up from any monogamous
relationship that you’re currently in, and that no future monogamous relationship is allowed to last long enough that it may prevent you from exploring the next one, until you find “the one” –
the lucky winner you’re hoping to spend the rest of your life with.
Assuming that all of the above is true, what strategy should you employ in order to maximise your chance of getting yourself the best possible lover (for you)?
The derivation of the optimal policy for the secretary problem.
It turns out that clever (and probably single) mathematicians have already solved this puzzle for you. They call it the Secretary Problem, because they’d rather think about it as being a human
resources exercise, rather than a reminder of their own tragic loneliness.
A Mathematical Strategy for Monogamy
Here’s what you do:
1. Take the number of people you expect to be able to date over the course of your lifetime, assuming that you never “settle down” and stop dating others. For example’s sake, let’s pick 20.
2. Divide that number by e – about 2.71828. You won’t get a round number, so round down. In our example, we get 7.
3. Date that many people – maybe you already have. Leave them all. This is important: these first few (7, in our example) aren’t “keepers”: the only reason you date them is to give you a basis for
comparison against which you rate all of your future lovers.
4. Keep dating: only stop when you find somebody who is better than everybody you’ve dated so far.
And there you have it! Mathematically-speaking, this strategy gives you a 37% chance of ending up with the person who – of all the people you’d have had the chance to date – is the best. 37% doesn’t
sound like much, but from a mathematical standpoint, it’s the best you can do with monogamy unless you permit yourself to date exes, or to cheat.
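If you'd rather check the 37% claim empirically than trust the derivation, here is a quick Monte Carlo sketch (assuming candidates arrive in uniformly random order, which is what the classic analysis assumes):

import math, random

def simulate(n, trials=100_000):
    # Skip the first n/e candidates, then take the first one better than everything seen so far.
    # Returns how often that choice turns out to be the overall best.
    k = int(n / math.e)
    wins = 0
    for _ in range(trials):
        ranks = random.sample(range(n), n)          # 0 is the best possible partner
        best_seen = min(ranks[:k]) if k else n
        chosen = next((r for r in ranks[k:] if r < best_seen), None)
        wins += (chosen == 0)
    return wins / trials

print(simulate(20))   # roughly 0.38 for 20 candidates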
Or to conveniently see your current partner as being better than you would have objectively rated them otherwise. That’s what love will do for you, but that’s harder to model mathematically.
Of course, if everybody used this technique (or even if enough people used it that you might be reasonably expected to date somebody who did, at some point in your life), then the problem drifts into
the domain of game theory. And by that point, you’d do better to set up a dating agency, collect everybody’s details, and use a Stable Marriage problem solution to pair everybody up.
This has been a lesson in why mathematicians shouldn’t date. | {"url":"https://danq.me/2012/02/12/","timestamp":"2024-11-11T01:36:37Z","content_type":"text/html","content_length":"51405","record_id":"<urn:uuid:493c0f24-5949-4cb1-8c3d-0bf1735365ff>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00576.warc.gz"} |
How to Do Super Quick Mathematical Calculations to Score Better in Government Exams
Team OpenNaukri
Government jobs are always in demand as Indian people prefer a government job to a private one. The reasons are pretty obvious. Even in the recent economic crisis that gripped most of the countries
across the globe and caused many private Indian companies to reduce their staff, government jobs were safe and sound in India.
As the competition is tough, it is quite natural that most government jobs of this country are hard to get. Getting a government job in India isn’t at all a quick and easy process, it is rather a
long term goal which demands a lot of preparation. The jobs are generally offered only to the candidates who clear an eligibility test. The test also acts as a filter because most of the test-takers
are not able to clear it.
Tests are generally objective in nature except a few exceptions like the civil services test. An important and integral part of the tests is mathematics. Almost all the tests from the different
government organisations have a mathematical component. Also as the tests may have sectional cut-offs, a good knowledge of Mathematics becomes quite essential to land a government job. The public
sector companies either have their own tests or use the all India Graduate Aptitude Test to scout for potential employees. Similarly the banks of this country also conduct an objective multiple
choice questions test. These exams have a fixed time duration and a large number of questions.
As the time limit in most government exams is tight compared to the number of questions, candidates who can solve problems quickly are at an advantage. In
maths, too, it is necessary to complete the required calculations and derivations within the allotted time, and most maths questions are calculation-based. Learning speed mathematics, or using techniques that make
calculations more efficient, is sure to help candidates a great deal. Practice is one of the key parts of the exam preparation process: practising problems regularly boosts
calculating speed and has the additional advantage of fewer mistakes. Students can learn Vedic maths methods to increase their speed. When candidates can do
their calculations faster in the exam, they save time for the tougher sections of the paper, which helps increase their scores.
There are all kind of mathematical techniques to improve one’s calculation speed. For example to multiply a big number by 5 it is much easier to multiply the number by 10 and divide it by 2.
Candidates should have the squares of all numbers upto 25 on the tip of their fingers as they are used very frequently. To check a number for divisibility by 3 all we have to do is add the digits of
the number and see if the sum is a multiple of 3. Similarly there are a whole lot of other tricks for the different numbers. Checking for divisibility by 5 is easy as only a number with a zero or a 5
at its unit’s place can be divided by 5.
To multiply a two-digit number by 11 quickly, say 45 × 11, find the sum of the two digits, i.e. 4+5=9, then place that sum between the two digits; hence the
product of 11 and 45 is 495. If the sum of the digits is 10 or more, for example in 99 × 11, 9+9=18, we keep 8 as the middle digit and carry forward 1, hence the
final answer is 1089.
To multiply a three digit number with 11, for example 435 *11, you simply need to find the sum of first digit and second digit, i.e. (4+3=7), and sum of second digit and third digit (3+5=8), and
Insert them between the first digit and last digit of the original number, which are 4 and 5 respectively. Thus the product is 4785.
To multiply any two numbers between 11 and 19, say 12 × 15, first add the first number to the last digit of the second number and multiply the sum by 10,
i.e. (12+5) × 10 = 170; then to the number obtained (170) add the product of the last digits of the two original numbers, i.e. 2 × 5 = 10. The final answer is 170 + 10 = 180.
To multiply a number with 9, say you have to multiply 61 with 9. All you need to do is to add 0 at the end of the given number, hence 61 becomes 610, to get the product subtract original number from
610, and you get the result which is 610-61= 549. Isn’t it easy?
Let’s move to division, if you have to divide a number by 5, let’s say you need to divide 5678 by 5. The first step is to multiply the given number by 2, you get 11356, and then move decimal point
one step left to get final answer, i.e. 1135.6.
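If you want to convince yourself that these shortcuts always work, here is a small illustrative sketch that checks the multiply-by-11 and divide-by-5 tricks described above:

def times_11(n):
    # "insert the digit sums" trick: write the first digit, then each neighbouring-digit sum,
    # then the last digit; accumulating in base 10 handles the carries automatically.
    digits = [int(d) for d in str(n)]
    sums = [digits[0]] + [a + b for a, b in zip(digits, digits[1:])] + [digits[-1]]
    result = 0
    for s in sums:
        result = result * 10 + s
    return result

def divide_by_5(n):
    # double the number, then move the decimal point one place to the left
    return (n * 2) / 10

for n in (45, 99, 435):
    assert times_11(n) == 11 * n
print(times_11(99), divide_by_5(5678))   # 1089 1135.6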
These small quick-calculation methods are very useful and can save the examinee time. To learn more tricks, visit the Opennaukri blog regularly. Let us know what calculations are troubling you, and we
will give some amazing solutions for those.
This site uses Akismet to reduce spam. Learn how your comment data is processed. | {"url":"https://www.opennaukri.com/how-to-do-super-quick-mathematical-calculations-to-score-better-in-government-exams/","timestamp":"2024-11-08T14:16:26Z","content_type":"text/html","content_length":"184982","record_id":"<urn:uuid:a3cf5e9e-741d-42b2-a29a-379988fc7875>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00815.warc.gz"} |
Free Blank Multiplication Table 1 12 Printable Chart In PDF | Multiplication Chart Printable
Free Blank Multiplication Table 1 12 Printable Chart In PDF
Free Blank Multiplication Table 1 12 Printable Chart In PDF – A multiplication chart is a practical tool for kids learning how to multiply and divide. There
are many uses for a multiplication chart.
What is Multiplication Chart Printable?
A multiplication chart can be used to help kids learn their multiplication facts. Multiplication charts come in several forms, from full-page times tables to single-page
ones. While individual tables are useful for presenting chunks of information, a full-page chart makes it much easier to review facts that have already been mastered.
A multiplication chart generally has a left column and a top row, each listing the factors. To find the product of two numbers, pick the
first number from the left column and the second number from the top row, then move along that row and down that column until you reach the square where the two
meet. That square holds your product.
Multiplication charts are practical learning tools for both kids and adults. Children can use them at home or at school. Printable multiplication charts for 1-12 are available on the
web and can be printed out and laminated for durability. They are a great tool for math lessons or homeschooling, and provide a visual reminder for children
as they learn their multiplication facts.
Why Do We Use a Multiplication Chart?
A multiplication chart is a grid that shows how to multiply two numbers. It normally consists of a top row and a left column of factors, and each cell holds the product of the
two factors that head its row and column. You select the first number in the left column and the second number from the top row; the product sits in the square where that row and column meet.
Multiplication charts are handy for several reasons, including helping children learn how to divide and simplify fractions. Multiplication charts can also be useful as desk resources
because they serve as a constant reminder of the student's progress.
Multiplication charts are also valuable for helping students memorize their times tables, by reducing the number of steps needed to complete each
calculation. One strategy for memorizing the tables is to concentrate on a single row or column at a time, and then move on to the next one. Eventually, the whole chart will be
committed to memory. Like any skill, memorizing multiplication tables takes time and practice.
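If you would rather generate a chart than download one, the following tiny sketch (illustrative only) prints a 1-12 times table laid out exactly as described above, with the factors along the top row and left column:

def print_times_table(n=12):
    # top row and left column hold the factors; each cell holds row * column
    width = len(str(n * n)) + 1
    print(" " * width + "".join(f"{c:>{width}}" for c in range(1, n + 1)))
    for r in range(1, n + 1):
        print(f"{r:>{width}}" + "".join(f"{r * c:>{width}}" for c in range(1, n + 1)))

print_times_table(12)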
Printable Multiple Table Times Chart 1-12
1 12 X Times Table Chart Templates At Allbusinesstemplates
If you’re looking for a printable multiplication chart for 1-12, you’ve come to the right place. Multiplication charts are available in different styles, including full size, half size, and a
range of cute layouts. Some are vertical, while others use a horizontal layout. You can also find printable worksheets that include multiplication equations as well as math facts.
Multiplication charts and tables are crucial tools for kids’ education. You can download and print them to use as a teaching aid in your child’s homeschool or classroom, and you
can also laminate them for durability. These charts are great for use in homeschool math binders or as classroom posters. They’re especially helpful for kids in the second, third, and fourth grades.
A printable multiplication chart for 1-12 is a helpful tool to reinforce math facts and can help a child learn multiplication quickly. It’s also a terrific tool for skip counting and
learning the times tables.
Related For Printable Multiple Table Times Chart 1-12 | {"url":"https://multiplicationchart-printable.com/printable-multiple-table-times-chart-1-12/free-blank-multiplication-table-1-12-printable-chart-in-pdf-25/","timestamp":"2024-11-07T03:04:54Z","content_type":"text/html","content_length":"28631","record_id":"<urn:uuid:da54f29a-b462-4162-bd9a-b346e4a74265>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00358.warc.gz"} |
[Solved] Looking for the excel function in Yellow | SolutionInn
Looking for the excel function in Yellow on Q9
Answer cells shown in the worksheet — Question 9 (4 points): Annual 3.000%, Semi-Annual 3.022%, Quarterly 3.034%, Monthly 3.042%. Question 10 (4 points): 14.2 years.
9) After reviewing the compounding model, your Kindergartener has a follow up question about effective annual rates. She figures with more frequent compounding, the effective annual rate cannot be 3% for all the compounding options. Calculate the effective annual rate for each of the compounding options in Question 8, using a function. (round to three decimal places)
10) A whistleblower exposed an illegal payoff scheme at a prison. The whistleblower's attorney believes his client could be eligible for a $3 million whistleblower pay out. The whistleblower believes this is a good start, but would need $6 million to retire comfortably, with the lifestyle he desires. If able to collect on the $3 million, how many years would it take to double his money to $6 million, assuming a 5% discount rate? Calculate this for him using an Excel function, not with the rule of 72. (round to the nearest tenth of a year)
Worksheet inputs for Question 10: Rate = 0.05, PMT = 0, PV = $3,000,000, FV = $6,000,000; result = 14.20669908.
There are 3 Steps involved in it
Step: 1
SOLUTION 9 The formula for calculating the effective annual rate (EAR) is EAR = (1 + r/n)^n − 1, where r is the annual nominal interest rate and n is the number of ...
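The yellow cells presumably call for Excel's EFFECT and NPER functions — for example =EFFECT(0.03,12) for the monthly case and =NPER(0.05,0,-3000000,6000000) for Question 10 (sign conventions aside) — though the exact functions expected by the grader are an assumption here. A quick check of the numbers:

import math

# Question 9: effective annual rate of a 3% nominal rate under each compounding frequency
for label, n in [("Annual", 1), ("Semi-Annual", 2), ("Quarterly", 4), ("Monthly", 12)]:
    print(label, f"{(1 + 0.03 / n) ** n - 1:.3%}")

# Question 10: years to double $3M to $6M at 5% (what NPER returns when PMT = 0)
print(round(math.log(2) / math.log(1.05), 1))   # 14.2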
View Answer in SolutionInn App | {"url":"https://www.solutioninn.com/study-help/questions/looking-for-the-excel-function-in-yellow-on-q9-question-470194","timestamp":"2024-11-05T10:38:36Z","content_type":"text/html","content_length":"114951","record_id":"<urn:uuid:62bd2e3c-bb38-402d-8c46-9eb44eb6c2bb>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00210.warc.gz"} |
Test Prep Idea #3: Calculator Tricks Every Students Should Know
To my fellow Texas teachers, I apologize for not writing this earlier--but I think you can relate, as you too have just made it through "testing week". Throughout the week, every high school student
in Texas took at least one standardized test.
This article is about what every high school students needs to know about graphing calculators (focusing on Texas Instruments TI-83/84, the most widely used version). This idea is for anyone who
hasn't tested yet, is preparing for final exams or next year, and for Pre-Algebra classes being introduced to graphic calculators. I'm covering only the basics, the stuff you must know for 9th grade/
Algebra I standardized testing--in my opinion, everything else is extra.
Common Error Messages
This is just as important as anything else, since test administrators can't help students use the calculator just like they can't help them with the test itself.
1. ERR: SYNTAX: You typed in something wrong. Instead of "Quit", select "Goto" to go to the place where the problem is. For my students, this happens frequently when they try to graph an equation,
and use a subtraction sign instead of a negative sign in the front. However, you must also show them that sometimes when you confuse those signs, the calculator doesn't give you an error, it just
graphs something different. (For example, graph x^2 [minus] 5, then x^2 [negative sign] 5 to illustrate the difference).
2. ERR: INVALID DIM: One of the scatter plots (i.e. Plot1) on the top of the Y= screen has been turned on. Arrow up and hit ENTER to turn it off. This is a good opportunity to also show them how to
toggle Y1, Y2, etc on and off.
3. ERR: WINDOW RANGE: You messed up something in WINDOW. Fix it with ZOOM 6.
Besides viewing the graph and checking the table, students need to know how to manipulate both the window and table settings to see whatever they need to see. However, your students should remember one thing: they can change
any setting they want
as long as they know how to change it back
1. ZOOM 6: This is the single most important button combination since the Konami Code. Your students must memorize this, and you must ask "How do I put the graph back to normal?" on a daily basis.
2. Recenter with TRACE: Demonstrate moving around the graph with TRACE, and that ENTER will recenter the graph on that point, just like clicking a point on Google Maps or MapQuest. Sometimes this is
better (and easier) than zooming in and out.
3. ZoomIn/Out: Show students the most common pitfall of ZOOM IN/OUT, which is hitting ENTER more than once, which will continue to zoom in and out farther if they don't press any other buttons. If
they're lost in the graph, ZOOM 6 baby!
4. ZSquare: Explain that the calculator screen is streched out like a widescreen TV (point out how the spacing on the x-axis is wider than on the y-axis). To see perpendicular lines, for example,
you must use ZSquare.
5. ZoomFit: Good for fitting the graph in screen when other options don't work.
6. You can show all of the Zoom options, as long as you continually remind them about ZOOM 6.
Table Settings
Show them how to change the table settings if they're trying to match an equation to a table or find specific values. TblStart can be set to anything (set to zero to reset it) and [delta]Tbl changes
the increments, so when the independent variable increases by 0.5 or 50 you can quickly match the calculator's table to given data (set back to 1 when they want to go back to normal). I would
recommend you leave the other two settings alone to avoid confusion.
Main Screen Editing Shortcuts
I see my students typing and retyping long equations and getting frustrated when they press the wrong button, for example. Teach them about:
• 2nd, ENTER (ENTRY): Brings back the previous entry, so if they are plugging in values to an equation for example, they can just edit the part they need to.
• DEL: Students don't always know they can delete something without deleting everything with CLEAR. I made an analogy to typing something in Microsoft Word--you don't delete an entire paragraph
when you want to change one word, right?
• 2nd, DEL (INS): I used the same analogy to explain the usefulness of INS. It's helpful when you have to solve a problem through trial and error or when you need to graph several similar
Other Essentials
• MATH, 1 or MATH, ENTER (Math>Frac): This function takes either a decimal or a given fraction and converts it to a fraction in simplest form. As far as the TAKS goes, most answers that can be
expressed as fractions are fractions, and they're always simplified. So my students have to be able to convert answers by themselves or with the calculator.
• Exponents: Make sure they know how to use the x^2 key to square and the carat ^ for exponents other than 2.
• Resetting: If all else fails, TI-83 Plus/TI-84 calculators can be reset by typing 2nd, +, 7, 1, 2. This should be their last resort, since it drains the batteries quite a bit.
• Use parentheses: I tell my students to put parentheses around fractions and as often as necessary to avoid any problems with order of operations.
The "Equivalent Expression" Trick
I didn't actually show this to my students in the days leading up to the test, but I did mention it earlier this year. I know that many teachers think of this as cheating, or at the very least shows
a severe weakness of our tests. There are many questions where students have to do nothing more than simplify an algebraic equation by using the distributive property, combining like terms, laws of
exponents and so on.
Students can simply input the expression exactly as given as Y1 in the Y= screen, and input the answer choices as Y2, Y3, etc. The equivalent expression or equation will create a graph identical to the original expression's, because, of course, it's actually the same
equation in a different form. This means
students don't actually need to know any algebra to do these problems! Despite what your kids might say, this is a bad thing.
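For teachers who want to show why the trick works away from the calculator, here is a quick symbolic check (the expressions are invented for illustration): equivalent expressions differ by zero, which is exactly why their graphs coincide.

import sympy as sp

x = sp.symbols('x')
original = 3 * (x + 2) - (x - 4)
choice_a = 2 * x + 10
choice_b = 2 * x + 2

print(sp.simplify(original - choice_a))   # 0 -> equivalent, identical graphs
print(sp.simplify(original - choice_b))   # 8 -> not equivalent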
What's worse is that you can actually type equations in the main screen, with X and everything, and the calculator will plug in its own value for X and give you a numerical answer. The right answer
will give you the same numerical answer. The troubling thing is, many students don't understand why the numerical answer isn't one of the choices, nor do they have any idea where it came from.
I don't see an easy solution to this problem, but it's something the authors of these tests need to think about.
Another quick TI graphic calculator guide | {"url":"http://www.teachforever.com/2008/05/test-prep-idea-3-calculator-tricks.html","timestamp":"2024-11-07T17:09:00Z","content_type":"application/xhtml+xml","content_length":"98270","record_id":"<urn:uuid:81516645-7080-4d45-a2e0-146e9d2f24c6>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00738.warc.gz"} |
Coherent sheaf - Wikiwand
In mathematics, especially in algebraic geometry and the theory of complex manifolds, coherent sheaves are a class of sheaves closely linked to the geometric properties of the underlying space. The
definition of coherent sheaves is made with reference to a sheaf of rings that codifies this geometric information.
Coherent sheaves can be seen as a generalization of vector bundles. Unlike vector bundles, they form an abelian category, and so they are closed under operations such as taking kernels, images, and
cokernels. The quasi-coherent sheaves are a generalization of coherent sheaves and include the locally free sheaves of infinite rank.
Coherent sheaf cohomology is a powerful technique, in particular for studying the sections of a given coherent sheaf.
A quasi-coherent sheaf on a ringed space ${\displaystyle (X,{\mathcal {O}}_{X})}$ is a sheaf ${\displaystyle {\mathcal {F}}}$ of ${\displaystyle {\mathcal {O}}_{X}}$-modules that has a local
presentation, that is, every point in ${\displaystyle X}$ has an open neighborhood ${\displaystyle U}$ in which there is an exact sequence
${\displaystyle {\mathcal {O}}_{X}^{\oplus I}|_{U}\to {\mathcal {O}}_{X}^{\oplus J}|_{U}\to {\mathcal {F}}|_{U}\to 0}$
for some (possibly infinite) sets ${\displaystyle I}$ and ${\displaystyle J}$.
A coherent sheaf on a ringed space ${\displaystyle (X,{\mathcal {O}}_{X})}$ is a sheaf ${\displaystyle {\mathcal {F}}}$ of ${\displaystyle {\mathcal {O}}_{X}}$-modules satisfying the following two
1. ${\displaystyle {\mathcal {F}}}$ is of finite type over ${\displaystyle {\mathcal {O}}_{X}}$, that is, every point in ${\displaystyle X}$ has an open neighborhood ${\displaystyle U}$ in ${\
displaystyle X}$ such that there is a surjective morphism ${\displaystyle {\mathcal {O}}_{X}^{n}|_{U}\to {\mathcal {F}}|_{U}}$ for some natural number ${\displaystyle n}$;
2. for any open set ${\displaystyle U\subseteq X}$, any natural number ${\displaystyle n}$, and any morphism ${\displaystyle \varphi :{\mathcal {O}}_{X}^{n}|_{U}\to {\mathcal {F}}|_{U}}$ of ${\displaystyle {\mathcal {O}}_{X}}$-modules, the kernel of ${\displaystyle \varphi }$ is of finite type.
Morphisms between (quasi-)coherent sheaves are the same as morphisms of sheaves of ${\displaystyle {\mathcal {O}}_{X}}$-modules.
The case of schemes
When ${\displaystyle X}$ is a scheme, the general definitions above are equivalent to more explicit ones. A sheaf ${\displaystyle {\mathcal {F}}}$ of ${\displaystyle {\mathcal {O}}_{X}}$-modules is
quasi-coherent if and only if over each open affine subscheme ${\displaystyle U=\operatorname {Spec} A}$ the restriction ${\displaystyle {\mathcal {F}}|_{U}}$ is isomorphic to the sheaf ${\
displaystyle {\tilde {M}}}$ associated to the module ${\displaystyle M=\Gamma (U,{\mathcal {F}})}$ over ${\displaystyle A}$. When ${\displaystyle X}$ is a locally Noetherian scheme, ${\displaystyle
{\mathcal {F}}}$ is coherent if and only if it is quasi-coherent and the modules ${\displaystyle M}$ above can be taken to be finitely generated.
On an affine scheme ${\displaystyle U=\operatorname {Spec} A}$, there is an equivalence of categories from ${\displaystyle A}$-modules to quasi-coherent sheaves, taking a module ${\displaystyle M}$
to the associated sheaf ${\displaystyle {\tilde {M}}}$. The inverse equivalence takes a quasi-coherent sheaf ${\displaystyle {\mathcal {F}}}$ on ${\displaystyle U}$ to the ${\displaystyle A}$-module
${\displaystyle {\mathcal {F}}(U)}$ of global sections of ${\displaystyle {\mathcal {F}}}$.
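For example (a standard illustration, not taken from this article): let ${\displaystyle A=k[x]}$, so that ${\displaystyle U=\operatorname {Spec} A}$ is the affine line, and let ${\displaystyle M=A/(x)}$. The associated sheaf ${\displaystyle {\tilde {M}}}$ is the skyscraper sheaf at the origin: its sections over a distinguished open set ${\displaystyle D(f)}$ are ${\displaystyle M[f^{-1}]}$, which is ${\displaystyle k}$ when ${\displaystyle f(0)\neq 0}$ and ${\displaystyle 0}$ when ${\displaystyle f(0)=0}$. This sheaf is coherent but not locally free.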
Here are several further characterizations of quasi-coherent sheaves on a scheme.^[1]
Theorem — Let ${\displaystyle X}$ be a scheme and ${\displaystyle {\mathcal {F}}}$ an ${\displaystyle {\mathcal {O}}_{X}}$-module on it. Then the following are equivalent.
• ${\displaystyle {\mathcal {F}}}$ is quasi-coherent.
• For each open affine subscheme ${\displaystyle U}$ of ${\displaystyle X}$, ${\displaystyle {\mathcal {F}}|_{U}}$ is isomorphic as an ${\displaystyle {\mathcal {O}}_{U}}$-module to the sheaf ${\
displaystyle {\tilde {M}}}$ associated to some ${\displaystyle {\mathcal {O}}(U)}$-module ${\displaystyle M}$.
• There is an open affine cover ${\displaystyle \{U_{\alpha }\}}$ of ${\displaystyle X}$ such that for each ${\displaystyle U_{\alpha }}$ of the cover, ${\displaystyle {\mathcal {F}}|_{U_{\alpha
}}}$ is isomorphic to the sheaf associated to some ${\displaystyle {\mathcal {O}}(U_{\alpha })}$-module.
• For each pair of open affine subschemes ${\displaystyle V\subseteq U}$ of ${\displaystyle X}$, the natural homomorphism
${\displaystyle {\mathcal {O}}(V)\otimes _{{\mathcal {O}}(U)}{\mathcal {F}}(U)\to {\mathcal {F}}(V),\,f\otimes s\mapsto f\cdot s|_{V}}$
is an isomorphism.
• For each open affine subscheme ${\displaystyle U=\operatorname {Spec} A}$ of ${\displaystyle X}$ and each ${\displaystyle f\in A}$, writing ${\displaystyle U_{f}}$ for the open subscheme of ${\
displaystyle U}$ where ${\displaystyle f}$ is not zero, the natural homomorphism
${\displaystyle {\mathcal {F}}(U){\bigg [}{\frac {1}{f}}{\bigg ]}\to {\mathcal {F}}(U_{f})}$
is an isomorphism. The homomorphism comes from the universal property of localization.
On an arbitrary ringed space, quasi-coherent sheaves do not necessarily form an abelian category. On the other hand, the quasi-coherent sheaves on any scheme form an abelian category, and they are
extremely useful in that context.^[2]
On any ringed space ${\displaystyle X}$, the coherent sheaves form an abelian category, a full subcategory of the category of ${\displaystyle {\mathcal {O}}_{X}}$-modules.^[3] (Analogously, the
category of coherent modules over any ring ${\displaystyle A}$ is a full abelian subcategory of the category of all ${\displaystyle A}$-modules.) So the kernel, image, and cokernel of any map of
coherent sheaves are coherent. The direct sum of two coherent sheaves is coherent; more generally, an ${\displaystyle {\mathcal {O}}_{X}}$-module that is an extension of two coherent sheaves is
A submodule of a coherent sheaf is coherent if it is of finite type. A coherent sheaf is always an ${\displaystyle {\mathcal {O}}_{X}}$-module of finite presentation, meaning that each point ${\
displaystyle x}$ in ${\displaystyle X}$ has an open neighborhood ${\displaystyle U}$ such that the restriction ${\displaystyle {\mathcal {F}}|_{U}}$ of ${\displaystyle {\mathcal {F}}}$ to ${\
displaystyle U}$ is isomorphic to the cokernel of a morphism ${\displaystyle {\mathcal {O}}_{X}^{n}|_{U}\to {\mathcal {O}}_{X}^{m}|_{U}}$ for some natural numbers ${\displaystyle n}$ and ${\
displaystyle m}$. If ${\displaystyle {\mathcal {O}}_{X}}$ is coherent, then, conversely, every sheaf of finite presentation over ${\displaystyle {\mathcal {O}}_{X}}$ is coherent.
The sheaf of rings ${\displaystyle {\mathcal {O}}_{X}}$ is called coherent if it is coherent considered as a sheaf of modules over itself. In particular, the Oka coherence theorem states that the
sheaf of holomorphic functions on a complex analytic space ${\displaystyle X}$ is a coherent sheaf of rings. The main part of the proof is the case ${\displaystyle X=\mathbf {C} ^{n}}$. Likewise, on
a locally Noetherian scheme ${\displaystyle X}$, the structure sheaf ${\displaystyle {\mathcal {O}}_{X}}$ is a coherent sheaf of rings.^[5]
• An ${\displaystyle {\mathcal {O}}_{X}}$-module ${\displaystyle {\mathcal {F}}}$ on a ringed space ${\displaystyle X}$ is called locally free of finite rank, or a vector bundle, if every point in
${\displaystyle X}$ has an open neighborhood ${\displaystyle U}$ such that the restriction ${\displaystyle {\mathcal {F}}|_{U}}$ is isomorphic to a finite direct sum of copies of ${\displaystyle
{\mathcal {O}}_{X}|_{U}}$. If ${\displaystyle {\mathcal {F}}}$ is free of the same rank ${\displaystyle n}$ near every point of ${\displaystyle X}$, then the vector bundle ${\displaystyle {\
mathcal {F}}}$ is said to be of rank ${\displaystyle n}$.
Vector bundles in this sheaf-theoretic sense over a scheme ${\displaystyle X}$ are equivalent to vector bundles defined in a more geometric way, as a scheme ${\displaystyle E}$ with a morphism $
{\displaystyle \pi :E\to X}$ and with a covering of ${\displaystyle X}$ by open sets ${\displaystyle U_{\alpha }}$ with given isomorphisms ${\displaystyle \pi ^{-1}(U_{\alpha })\cong \mathbb {A}
^{n}\times U_{\alpha }}$ over ${\displaystyle U_{\alpha }}$ such that the two isomorphisms over an intersection ${\displaystyle U_{\alpha }\cap U_{\beta }}$ differ by a linear automorphism.^[6]
(The analogous equivalence also holds for complex analytic spaces.) For example, given a vector bundle ${\displaystyle E}$ in this geometric sense, the corresponding sheaf ${\displaystyle {\
mathcal {F}}}$ is defined by: over an open set ${\displaystyle U}$ of ${\displaystyle X}$, the ${\displaystyle {\mathcal {O}}(U)}$-module ${\displaystyle {\mathcal {F}}(U)}$ is the set of
sections of the morphism ${\displaystyle \pi ^{-1}(U)\to U}$. The sheaf-theoretic interpretation of vector bundles has the advantage that vector bundles (on a locally Noetherian scheme) are
included in the abelian category of coherent sheaves.
• Locally free sheaves come equipped with the standard ${\displaystyle {\mathcal {O}}_{X}}$-module operations, but these give back locally free sheaves.
• Let ${\displaystyle X=\operatorname {Spec} (R)}$, ${\displaystyle R}$ a Noetherian ring. Then vector bundles on ${\displaystyle X}$ are exactly the sheaves associated to finitely generated
projective modules over ${\displaystyle R}$, or (equivalently) to finitely generated flat modules over ${\displaystyle R}$.^[7]
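For instance (a standard fact added here for illustration, not taken from the cited reference): over a principal ideal domain such as ${\displaystyle R=k[x]}$, every finitely generated projective module is free, so every vector bundle on the affine line ${\displaystyle \mathbb {A} ^{1}=\operatorname {Spec} (k[x])}$ is a finite direct sum of copies of the structure sheaf.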
• Let ${\displaystyle X=\operatorname {Proj} (R)}$, ${\displaystyle R}$ a Noetherian ${\displaystyle \mathbb {N} }$-graded ring, be a projective scheme over a Noetherian ring ${\displaystyle R_{0}}
$. Then each ${\displaystyle \mathbb {Z} }$-graded ${\displaystyle R}$-module ${\displaystyle M}$ determines a quasi-coherent sheaf ${\displaystyle {\mathcal {F}}}$ on ${\displaystyle X}$ such
that ${\displaystyle {\mathcal {F}}|_{\{f\neq 0\}}}$ is the sheaf associated to the ${\displaystyle R[f^{-1}]_{0}}$-module ${\displaystyle M[f^{-1}]_{0}}$, where ${\displaystyle f}$ is a
homogeneous element of ${\displaystyle R}$ of positive degree and ${\displaystyle \{f\neq 0\}=\operatorname {Spec} R[f^{-1}]_{0}}$ is the locus where ${\displaystyle f}$ does not vanish.
• For example, for each integer ${\displaystyle n}$, let ${\displaystyle R(n)}$ denote the graded ${\displaystyle R}$-module given by ${\displaystyle R(n)_{l}=R_{n+l}}$. Then each ${\displaystyle R
(n)}$ determines the quasi-coherent sheaf ${\displaystyle {\mathcal {O}}_{X}(n)}$ on ${\displaystyle X}$. If ${\displaystyle R}$ is generated as ${\displaystyle R_{0}}$-algebra by ${\displaystyle
R_{1}}$, then ${\displaystyle {\mathcal {O}}_{X}(n)}$ is a line bundle (invertible sheaf) on ${\displaystyle X}$ and ${\displaystyle {\mathcal {O}}_{X}(n)}$ is the ${\displaystyle n}$-th tensor
power of ${\displaystyle {\mathcal {O}}_{X}(1)}$. In particular, ${\displaystyle {\mathcal {O}}_{\mathbb {P} ^{n}}(-1)}$ is called the tautological line bundle on the projective ${\displaystyle n}$-space ${\displaystyle \mathbb {P} ^{n}}$.
• A simple example of a coherent sheaf on ${\displaystyle \mathbb {P} ^{2}}$ that is not a vector bundle is given by the cokernel in the following sequence
${\displaystyle {\mathcal {O}}(1){\xrightarrow {\cdot (x^{2}-yz,y^{3}+xy^{2}-xyz)}}{\mathcal {O}}(3)\oplus {\mathcal {O}}(4)\to {\mathcal {E}}\to 0}$
this is because ${\displaystyle {\mathcal {E}}}$ restricted to the vanishing locus of the two polynomials has two-dimensional fibers, and has one-dimensional fibers elsewhere.
• Ideal sheaves: If ${\displaystyle Z}$ is a closed subscheme of a locally Noetherian scheme ${\displaystyle X}$, the sheaf ${\displaystyle {\mathcal {I}}_{Z/X}}$ of all regular functions vanishing
on ${\displaystyle Z}$ is coherent. Likewise, if ${\displaystyle Z}$ is a closed analytic subspace of a complex analytic space ${\displaystyle X}$, the ideal sheaf ${\displaystyle {\mathcal {I}}_
{Z/X}}$ is coherent.
• The structure sheaf ${\displaystyle {\mathcal {O}}_{Z}}$ of a closed subscheme ${\displaystyle Z}$ of a locally Noetherian scheme ${\displaystyle X}$ can be viewed as a coherent sheaf on ${\
displaystyle X}$. To be precise, this is the direct image sheaf ${\displaystyle i_{*}{\mathcal {O}}_{Z}}$, where ${\displaystyle i:Z\to X}$ is the inclusion. Likewise for a closed analytic
subspace of a complex analytic space. The sheaf ${\displaystyle i_{*}{\mathcal {O}}_{Z}}$ has fiber (defined below) of dimension zero at points in the open set ${\displaystyle X-Z}$, and fiber of
dimension 1 at points in ${\displaystyle Z}$. There is a short exact sequence of coherent sheaves on ${\displaystyle X}$:
${\displaystyle 0\to {\mathcal {I}}_{Z/X}\to {\mathcal {O}}_{X}\to i_{*}{\mathcal {O}}_{Z}\to 0.}$
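For a concrete illustration (a standard example added here for orientation, not taken from the cited references): let ${\displaystyle X=\operatorname {Spec} (k[x])}$ be the affine line over a field ${\displaystyle k}$ and let ${\displaystyle Z=\{0\}}$ be the closed point cut out by ${\displaystyle x}$. Then ${\displaystyle {\mathcal {I}}_{Z/X}}$ is the sheaf associated to the ideal ${\displaystyle (x)\subset k[x]}$, the pushforward ${\displaystyle i_{*}{\mathcal {O}}_{Z}}$ is the skyscraper sheaf at the origin with stalk ${\displaystyle k}$, and the short exact sequence above corresponds to the exact sequence of ${\displaystyle k[x]}$-modules ${\displaystyle 0\to (x)\to k[x]\to k[x]/(x)\to 0.}$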
• Most operations of linear algebra preserve coherent sheaves. In particular, for coherent sheaves ${\displaystyle {\mathcal {F}}}$ and ${\displaystyle {\mathcal {G}}}$ on a ringed space ${\
displaystyle X}$, the tensor product sheaf ${\displaystyle {\mathcal {F}}\otimes _{{\mathcal {O}}_{X}}{\mathcal {G}}}$ and the sheaf of homomorphisms ${\displaystyle {\mathcal {H}}om_{{\mathcal
{O}}_{X}}({\mathcal {F}},{\mathcal {G}})}$ are coherent.^[8]
• A simple non-example of a quasi-coherent sheaf is given by the extension by zero functor. For example, consider ${\displaystyle i_{!}{\mathcal {O}}_{X}}$ for
${\displaystyle X=\operatorname {Spec} (\mathbb {C} [x,x^{-1}]){\xrightarrow {i}}\operatorname {Spec} (\mathbb {C} [x])=Y}$^[9]
Since this sheaf has non-trivial stalks, but zero global sections, this cannot be a quasi-coherent sheaf. This is because quasi-coherent sheaves on an affine scheme are equivalent to the category
of modules over the underlying ring, and the adjunction comes from taking global sections.
Let ${\displaystyle f:X\to Y}$ be a morphism of ringed spaces (for example, a morphism of schemes). If ${\displaystyle {\mathcal {F}}}$ is a quasi-coherent sheaf on ${\displaystyle Y}$, then the
inverse image ${\displaystyle {\mathcal {O}}_{X}}$-module (or pullback) ${\displaystyle f^{*}{\mathcal {F}}}$ is quasi-coherent on ${\displaystyle X}$.^[10] For a morphism of schemes ${\displaystyle
f:X\to Y}$ and a coherent sheaf ${\displaystyle {\mathcal {F}}}$ on ${\displaystyle Y}$, the pullback ${\displaystyle f^{*}{\mathcal {F}}}$ is not coherent in full generality (for example, ${\
displaystyle f^{*}{\mathcal {O}}_{Y}={\mathcal {O}}_{X}}$, which might not be coherent), but pullbacks of coherent sheaves are coherent if ${\displaystyle X}$ is locally Noetherian. An important
special case is the pullback of a vector bundle, which is a vector bundle.
If ${\displaystyle f:X\to Y}$ is a quasi-compact quasi-separated morphism of schemes and ${\displaystyle {\mathcal {F}}}$ is a quasi-coherent sheaf on ${\displaystyle X}$, then the direct image sheaf
(or pushforward) ${\displaystyle f_{*}{\mathcal {F}}}$ is quasi-coherent on ${\displaystyle Y}$.^[2]
The direct image of a coherent sheaf is often not coherent. For example, for a field ${\displaystyle k}$, let ${\displaystyle X}$ be the affine line over ${\displaystyle k}$, and consider the
morphism ${\displaystyle f:X\to \operatorname {Spec} (k)}$; then the direct image ${\displaystyle f_{*}{\mathcal {O}}_{X}}$ | {"url":"https://www.wikiwand.com/en/articles/Coherent_sheaf","timestamp":"2024-11-03T03:39:34Z","content_type":"text/html","content_length":"1049787","record_id":"<urn:uuid:6dd6bbfd-d765-4130-9642-10997df06b59>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00709.warc.gz"} |
SM AC9C
Discrete-time or continuous-time synchronous machine AC9C excitation system including an automatic voltage regulator and an exciter
Since R2023a
Simscape / Electrical / Control / SM Control
The SM AC9C block implements a synchronous machine type AC9C excitation system model in conformance with Std IEEE 421.5-2016 [1].
Use this block to model the control and regulation of the field voltage of a synchronous machine that operates as a generator using an AC rotating exciter.
Switch between continuous and discrete implementations of the block by using the Sample time (-1 for inherited) parameter. To configure the integrator for continuous time, set the Sample time (-1 for
inherited) parameter to 0. To configure the integrator for discrete time, set the Sample time (-1 for inherited) parameter to a positive scalar. To inherit the sample time from an upstream block, set
the Sample time (-1 for inherited) parameter to -1.
The SM AC9C block comprises five major components:
• The Current Compensator component modifies the measured terminal voltage as a function of the terminal current.
• The Voltage Measurement Transducer component simulates the dynamics of a terminal voltage transducer using a low-pass filter.
• The Excitation Control Elements component compares the voltage transducer output with a terminal voltage reference to produce a voltage error value. The component then passes this value through a
voltage regulator to produce the exciter field voltage.
• The AC Rotating Exciter component models the AC rotating exciter, which produces a field voltage that is applied to the controlled synchronous machine. The block also feeds the exciter field
current (V[FE]) back to the excitation system.
• The Power Source component models the dependency of the power source for the controlled rectifier from the terminal voltage.
This diagram shows the structure of the AC9C excitation system model:
In the diagram:
• V[T] and I[T] are the measured terminal voltage and current of the synchronous machine, respectively.
• V[C1] is the current-compensated terminal voltage.
• V[C] is the filtered, current-compensated terminal voltage.
• V[REF] is the reference terminal voltage.
• V[S] is the power system stabilizer voltage.
• SW[1] is the power source switch that you specify for the controlled rectifier.
• V[B] is the exciter field voltage.
• E[FE] and V[FE] are the exciter field voltage and current, respectively.
• E[FD] and I[FD] are the field voltage and current, respectively.
Current Compensator and Voltage Measurement Transducer
The block models the current compensator by using this equation:
• R[C] is the load compensation resistance.
• X[C] is the load compensation reactance.
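The compensation equation itself is not reproduced above. In IEEE 421.5-style excitation models the current compensator conventionally takes the following form; this is the standard textbook form and is stated here as an assumption about this block, not as a verbatim reproduction of the MathWorks equation:
V[C1] = | V[T] + (R[C] + jX[C]) * I[T] |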
The block implements the voltage measurement transducer as a Low-Pass Filter block with the time constant T[R]. Refer to the documentation for the Low-Pass Filter block for information about the
exact discrete and continuous implementations.
Excitation Control Elements
This diagram shows the structure of the excitation control elements:
In the diagram:
• The Summation Point Logic subsystem models the summation point input location for the overexcitation limiter (OEL), underexcitation limiter (UEL), and stator current limiter (SCL). For more
information about using limiters with this block, see Field Current Limiters.
• The PID subsystem models a PID controller that functions as a control structure for the automatic voltage regulator. The minimum and maximum anti windup saturation limits for the block are V
[PIDmin] and V[PIDmax], respectively.
• The Take-over Logic subsystem models the take-over point input location for the OEL, UEL, SCL and PSS voltages. For more information about using limiters with this block, see Field Current Limiters.
• The PI_R subsystem models a PI controller that functions as a control structure for the field current regulator. The minimum and maximum anti windup saturation limits for the block are V[Amin]
and V[Amax], respectively.
• The top Low-Pass Filter block models the major dynamics of the controlled rectified bridge. K[A] is the controlled rectifier bridge equivalent gain and T[A] is the major time constant of the
controlled rectifier bridge. The minimum and maximum anti windup saturation limits for the block are V[Rmin] and V[Rmax], respectively.
• The bottom Low-Pass Filter block models the rate feedback path for the stabilization of the excitation system. K[F] and T[F] are the gain and time constants of this system, respectively. See the
documentation for the Low-Pass Filter block for information about the discrete and continuous implementations.
• The Power state logic subsystem supports the selection of the power stage type, which can be a thyristor or a chopper converter. If you set the Power stage type selector, S_CT parameter to
Thyristor bridge, it represents a thyristor converter. If you set the Power stage type selector, S_CT parameter to Chopper converter, it represents a chopper converter. The subsystem sums the
voltage regulator command signal V[R] to the exciter field voltage V[B]. For more information about the logical switch for the power source of the controlled rectifier and about the power state
logic subsystem, see Power Source and Power State Logic.
• The initialOffset Constant block ensures the simulation can start from a steady state. The SM AC9C block calculates this value by using saturation and exciter parameters, including the initial
field voltage and the exciter field current feedback gain.
Field Current Limiters
You can use different types of field current limiter to modify the output of the voltage regulator under unsafe operating conditions:
• Use an overexcitation limiter to prevent overheating of the field winding due to excessive field current demand.
• Use an underexcitation limiter to boost field excitation when it is too low, which risks desynchronization.
• Use a stator current limiter to prevent overheating of the stator windings due to excessive current.
Attach the output of any of these limiters at one of these points:
• Summation point — Use the limiter as part of the automatic voltage regulator (AVR) feedback loop.
• Take-over point — Override the usual behavior of the AVR.
If you are using the stator current limiter at the summation point, use the input V[SCLsum]. If you are using the stator current limiter at the take-over point, use the overexcitation input V[OELscl]
, and the underexcitation input V[UELscl].
AC Rotating Exciter
This diagram shows the structure of the AC rotating exciter:
In the diagram:
• The exciter field current V[FE] is the sum of three signals:
□ The nonlinear function V[x] models the saturation of the exciter output voltage.
□ The proportional term K[E] models the linear relationship between the exciter output voltage and the exciter field current.
□ The subsystem models the demagnetizing effect of the load current on the exciter output voltage using the demagnetization constant K[D] in the feedback loop.
• The Integrator with variable limits subsystem integrates the difference between E[FE] and V[FE] to generate the exciter alternator output voltage V[E]. T[E] is the time constant for this process.
• The nonlinear function F[EX] models the exciter output voltage drop from the rectifier regulation. This function depends on the constant K[C], which itself is a function of commutating reactance.
• The parameters V[Emin] and V[FEmax] model the lower and upper limits of the rotating exciter.
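For reference, in the standard IEEE 421.5 AC rotating exciter model the signals described above are typically related by the following equations. These are stated as background and as an assumption about this block; the exact form should be checked against the standard and the block implementation:
V[x] = V[E] * S[E](V[E])
V[FE] = V[x] + K[E] * V[E] + K[D] * I[FD]
E[FD] = V[E] * F[EX](I[N]), with I[N] = K[C] * I[FD] / V[E]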
Power Source and Power State Logic
You can use different power source representations for the controlled rectifier by setting the Power source selector SW1 parameter value. To derive the power source for the controlled rectifier from
the terminal voltage, set the Power source selector SW1 parameter to Position A: power source derived from terminal voltage. To specify that the power source is independent of the terminal voltage,
set the Power source selector SW1 parameter to Position B: power source independent from the terminal conditions.
The Power state logic subsystem supports the selection of the power stage type, which can be a thyristor or a chopper converter. If you set the Power stage type selector, S_CT parameter to Thyristor
bridge, the Power state logic subsystem represents a thyristor converter. If you set the Power stage type selector, S_CT parameter to Chopper converter, the Power state logic subsystem represents a
chopper converter. The value of the voltage regulator command signal V[R] depends on the Voltage limit, V_lim1 (pu) and Voltage limit, V_lim2 (pu) parameters and on the V[CT], V[FW], and V[AVR]
signals according to this logic:
if S_CT ~= 0 % Thyristor bridge
    V_R = V_CT
else % Chopper converter
    if V_AVR > V_lim1
        V_R = V_CT
    elseif V_AVR > V_lim2
        V_R = 0
    else
        V_R = -V_FW
    end
end
V_REF — Voltage reference
Voltage regulator reference set point, in per-unit representation, specified as a scalar.
Data Types: single | double
V_S — Input from stabilizer
Input from the power system stabilizer, in per-unit representation, specified as a scalar.
Data Types: single | double
V_T — Terminal voltage
Terminal voltage magnitude, in per-unit representation, specified as a scalar.
Data Types: single | double
I_T — Terminal current
Terminal current magnitude, in per-unit representation, specified as a scalar.
Data Types: single | double
V_OEL — Overexcitation limit signal
Input from the overexcitation limiter, in per-unit representation, specified as a scalar.
• To ignore the input from the overexcitation limiter, set Alternate OEL input locations (V_OEL) to Unused.
• To use the input from the overexcitation limiter at the summation point, set Alternate OEL input locations (V_OEL) to Summation point.
• To use the input from the overexcitation limiter at the take-over point, set Alternate OEL input locations (V_OEL) to Take-over.
Data Types: single | double
V_UEL — Underexcitation limit signal
Input from the underexcitation limiter, in per-unit representation, specified as a scalar.
• To ignore the input from the underexcitation limiter, set Alternate UEL input locations (V_UEL) to Unused.
• To use the input from the underexcitation limiter at the summation point, set Alternate UEL input locations (V_UEL) to Summation point.
• To use the input from the underexcitation limiter at the take-over point, set Alternate UEL input locations (V_UEL) to Take-over.
Data Types: single | double
V_SCLsum — Summation point stator current limit signal
Input from the stator current limiter when using the summation point, in per-unit representation, specified as a scalar.
• To ignore the input from the stator current limiter, set Alternate SCL input locations (V_SCL) to Unused.
• To use the input from the stator current limiter at the summation point, set Alternate SCL input locations (V_SCL) to Summation point.
Data Types: single | double
V_SCLoel — Take-over stator current limit for overexcitation limiter
Input from the stator current limiter to prevent field overexcitation when using the take-over point, in per-unit representation, specified as a scalar.
• To ignore the input from the stator current limiter, set Alternate SCL input locations (V_SCL) to Unused.
• To use the input from the stator current limiter at the take-over point, set Alternate SCL input locations (V_SCL) to Take-over.
Data Types: single | double
V_SCLuel — Take-over stator current limit for underexcitation limiter
Input from the stator current limiter to prevent field underexcitation when using the take-over point, in per-unit representation, specified as a scalar.
• To ignore the input from the stator current limiter, set Alternate SCL input locations (V_SCL) to Unused.
• To use the input from the stator current limiter at the take-over point, set Alternate SCL input locations (V_SCL) to Take-over.
Data Types: single | double
Ifd_pu — Measured field current
Measured per-unit field current of the synchronous machine, specified as a scalar.
Data Types: single | double
Efd_pu — Field voltage
Per-unit field voltage to apply to the field circuit of the synchronous machine, returned as a scalar.
Data Types: single | double
Initial field voltage, Efd0 (pu) — Initial output voltage
1 (default) | real scalar
Initial per-unit voltage to apply to the field circuit of the synchronous machine.
Initial terminal voltage, Vt0 (pu) — Initial terminal voltage
1 (default) | real scalar
Initial per-unit voltage to apply to the terminal.
To enable this parameter, in the Exciter section, set Power source selector SW1 to Position A: power source derived from terminal voltage.
Initial terminal current, It0 (pu) — Initial terminal current
1 (default) | real scalar
Initial per-unit current to apply to the terminal.
Sample time (-1 for inherited) — Block sample time
-1 (default) | 0 | positive scalar
Time between consecutive block executions. During execution, the block produces outputs and, if appropriate, updates its internal state. For more information, see What Is Sample Time? and Specify
Sample Time.
For inherited discrete-time operation, set this parameter to -1. For discrete-time operation, set this parameter to a positive scalar. For continuous-time operation, set this parameter to 0.
If this block is in a masked subsystem or a variant subsystem that supports switching between continuous operation and discrete operation, promote this parameter to ensure correct switching between
the continuous and discrete implementations of the block. For more information, see Promote Block Parameters on a Mask.
Resistive component of load compensation, R_C (pu) — Compensation resistance
0 (default) | positive scalar
Resistance used in the current compensation system. Set this parameter and Reactance component of load compensation, X_C (pu) to 0 to disable current compensation.
Reactance component of load compensation, X_C (pu) — Compensation reactance
0 (default) | positive scalar
Reactance used in the current compensation system. Set this parameter and Resistive component of load compensation, R_C (pu) to 0 to disable current compensation.
Regulator input filter time constant, T_R (s) — Regulator time constant
0.01 (default) | positive scalar
Equivalent time constant for the voltage transducer filtering.
Voltage regulator proportional gain, K_PR (pu) — Proportional gain of the voltage regulator
10 (default)
Per-unit proportional gain of the voltage regulator.
Voltage regulator integral gain, K_IR (pu/s) — Integral gain of the voltage regulator
10 (default)
Per-unit integral gain of the voltage regulator.
Voltage regulator derivative gain, K_DR (pu.s) — Derivative gain of the voltage regulator
0 (default)
Derivative gain of the voltage regulator.
Lag time constant for derivative channel of PID controller, T_DR (s) — Lag time constant for PID derivative channel
0.01 (default) | positive scalar
Equivalent lag time constant for the derivative channel of the PID controller.
Maximum voltage regulator output, V_PIDmax (pu) — Maximum output of PID regulator
1.6 (default) | positive scalar
Maximum admissible per-unit output of the PID regulator.
Minimum voltage regulator output, V_PIDmin (pu) — Minimum output of PID regulator
0 (default) | positive scalar
Minimum admissible per-unit output of the PID regulator.
Field current regulator proportional gain, K_PA (pu) — Proportional gain of the field current regulator
4 (default) | real scalar
Per-unit proportional gain of the field current regulator.
Field current regulator integral gain, K_IA (pu/s) — Integral gain of the field current regulator
0 (default) | real scalar
Per-unit integral gain of the field current regulator.
Maximum current regulator output, V_Amax (pu) — Maximum current regulator output
0.996 (default) | real scalar
Maximum per-unit current regulator output.
Minimum current regulator output, V_Amin (pu) — Minimum current regulator output
-0.866 (default) | real scalar
Minimum per-unit current regulator output.
Controlled rectifier bridge equivalent gain, K_A (pu) — Rectifier bridge gain
20 (default) | positive scalar
Gain of the controlled rectifier bridge.
Controlled rectifier bridge equivalent time constant, T_A (s) — Rectifier bridge time constant
0.0018 (default) | positive scalar
Time constant of the controlled rectifier bridge.
Maximum rectifier bridge output, V_Rmax (pu) — Upper limit of the rectifier bridge output
19.92 (default) | real scalar
Maximum per-unit output of the rectifier bridge.
Minimum rectifier bridge output, V_Rmin (pu) — Lower limit of the rectifier bridge output
-17.32 (default) | real scalar
Minimum per-unit output of the rectifier bridge.
Exciter field current feedback gain, K_F (pu) — Exciter field current feedback gain
0.2 (default) | real scalar
Per-unit field current feedback gain of the exciter.
Field current feedback time constant, T_F (s) — Feedback time constant
0.01 (default) | positive scalar
Feedback time constant for the stabilization of the excitation system.
Free wheel equivalent feedback gain, K_FW (pu) — Free wheel feedback gain
0 (default) | real scalar
Per-unit free wheel feedback gain of the exciter.
Maximum free wheel feedback, V_FWmax (pu) — Upper limit of the free wheel feedback
10 (default) | real scalar
Maximum per-unit free wheel feedback.
Minimum free wheel feedback, V_FWmin (pu) — Lower limit of the free wheel feedback
0 (default) | real scalar
Minimum per-unit free wheel feedback.
Power stage type selector, S_CT — Option to select power stage type
Thyristor bridge (default) | Chopper converter
Option to select the power stage type. If you set the Power stage type selector, S_CT parameter to Thyristor bridge, the power stage represents a thyristor converter. If you set the Power stage type
selector, S_CT parameter to Chopper converter, the power stage represents a chopper converter.
Voltage limit, V_lim1 (pu) — First automatic voltage regulator limit
0 (default)
First automatic voltage regulator limit for the calculation of the voltage regulator command signal.
Voltage limit, V_lim2 (pu) — Second automatic voltage regulator limit
-0.1 (default)
Second automatic voltage regulator limit for the calculation of the voltage regulator command signal.
Alternate OEL input locations (V_OEL) — OEL input location
Unused (default) | Summation point | Take-over
Location of the overexcitation limiter input, specified as one of these options:
• Summation point — V_OEL is an input of the Summation Point Logic subsystem.
• Take-over — V_OEL is an input of the Take-over Logic subsystem.
Alternate UEL input locations (V_UEL) — UEL input location
Unused (default) | Summation point | Take-over
Location of the underexcitation limiter input, specified as one of these options:
• Summation point — V_UEL is an input of the Summation Point Logic subsystem.
• Take-over — V_UEL is an input of the Take-over Logic subsystem.
Alternate SCL input locations (V_SCL) — SCL input location
Unused (default) | Summation point | Take-over
Location of the stator current limiter input, specified as one of these options:
• Summation point — Use the V_SCLsum input port.
• Take-over — Use the V_SCLoel and V_SCLuel input ports.
Exciter field proportional constant, K_E (pu) — Exciter field gain
1 (default) | positive scalar
Proportional constant for the exciter field.
Exciter field time constant, T_E (s) — Exciter field time constant
1 (default) | positive scalar
Time constant for the exciter field.
Diode bridge loading factor proportional to commutating reactance, K_C (pu) — Diode bridge loading factor
0 (default) | positive scalar
Diode bridge loading factor. This value is proportional to the commutating reactance.
Demagnetizing factor, function of exciter alternator reactances, K_D (pu) — Demagnetization factor
1 (default) | positive scalar
Demagnetization factor related to the exciter alternator reactances.
Exciter output voltage for saturation factor S_E(E_1), E_1 (pu) — First saturation output voltage
4.167 (default) | positive scalar
Exciter output voltage for the first saturation factor.
Exciter saturation factor at exciter output voltage E_1, S_E(E_1) (1) — First saturation lookup voltage
0.001 (default) | positive scalar
Saturation factor for the first exciter.
Exciter output voltage for saturation factor S_E(E_2), E_2 (pu) — Second saturation output voltage
3.125 (default) | positive scalar
Exciter output voltage for the second saturation factor.
Exciter saturation factor at exciter output voltage E_2, S_E(E_2) (1) — Second saturation lookup voltage
0.01 (default) | positive scalar
Saturation factor for the second exciter.
Exciter field current limit , V_FEmax (pu) — Exciter upper limit
16 (default) | real scalar
Maximum per-unit field current limit reference.
Minimum exciter output limit, V_Emin (pu) — Exciter lower limit
0 (default) | real scalar
Minimum per-unit exciter voltage output.
Potential circuit gain coefficient, K_P (pu) — Potential circuit gain coefficient
1 (default) | real scalar
Per-unit potential circuit gain coefficient.
Potential circuit phase angle (degrees) — Potential circuit phase angle
0 (default) | real scalar
Potential circuit phase angle, in degrees.
To enable this parameter, set Power source selector SW1 to Position A: power source derived from terminal voltage.
Compound circuit (current) gain coefficient, K_I (pu) — Compound circuit current gain coefficient
0 (default) | real scalar
Per-unit compound circuit current gain coefficient.
To enable this parameter, set Power source selector SW1 to Position A: power source derived from terminal voltage.
Reactance associated with potential source, X_L (pu) — Reactance associated with potential source
0 (default) | real scalar
Per-unit reactance associated with the potential source.
To enable this parameter, set Power source selector SW1 to Position A: power source derived from terminal voltage.
Rectifier loading factor proportional to commutating reactance, K_C1 (pu) — Rectifier loading factor proportional to commutating reactance
0 (default) | real scalar
Per-unit loading factor of the rectifier. This value is proportional to the commutating reactance.
Power source selector SW1 — Power source selector
Position A: power source derived from terminal voltage (default) | Position B: power source independent of the terminal conditions
Position of the power source selector SW1.
Maximum available exciter field voltage, V_B1max (pu) — Maximum available exciter field voltage
100 (default) | real scalar
Maximum per-unit available field voltage for the exciter.
Rectifier loading factor proportional to commutating reactance, K_C2 (pu) — Rectifier loading factor proportional to commutating reactance
0 (default) | real scalar
Per-unit loading factor of the rectifier. This value is proportional to the commutating reactance, derived from the generator terminal current through a separate series diode bridge.
Compound circuit (current) gain coefficient, K_I2 (pu) — Compound circuit current gain coefficient
0 (default) | real scalar
Per-unit compound circuit current gain coefficient derived from the generator terminal current through a separate series diode bridge. Set this parameter to 0 to disable the separate compound power
source completely.
Maximum available exciter field voltage, V_B2max (pu) — Maximum available exciter field voltage
0 (default) | real scalar
Maximum per-unit available field voltage for the exciter, derived from the generator terminal current through a separate series diode bridge.
[1] IEEE Std 421.5-2016 (Revision of IEEE Std 421.5-2005). "IEEE Recommended Practice for Excitation System Models for Power System Stability Studies." Piscataway, NJ: IEEE, 2016.
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using Simulink® Coder™.
Version History
Introduced in R2023a | {"url":"https://de.mathworks.com/help/sps/ref/smac9c.html","timestamp":"2024-11-04T21:16:14Z","content_type":"text/html","content_length":"191448","record_id":"<urn:uuid:c4e8ac5d-6510-4846-b21c-74e5595f5b90>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00251.warc.gz"} |
Semi | Definition & Meaning
Semi literally means “half.” It is used as a prefix to imply halves of various things. For example, semi-annually means every half-year (e.g., interest compounding), semicircle means a half circle,
and semi-major and minor axes mean half the major and minor axes of ellipses. It is also used in other contexts, as in semiprime numbers (a product of two prime numbers).
A prefix that means “half” is known as a semi. The most common example of the semi is a semi-circle and semiannually. Let’s first explain the semi-circle in detail. The following figure represents
the semi-circle.
Figure 1 – Representation of semi-circle
The figure below represents the semi-polygon.
Figure 2 – Semi-polygon representation
What Is a Semi-circle?
A semicircle is a half-circle created by splitting a circle into two equal parts. It is created when a line pierces the circle’s center and touches its two ends. The circle’s diameter is the name
given to this line.
Taking a whole circle and slicing it in half along its diameter yields a semicircle, which can also be referred to as a half-circle. Only one line of symmetry, known as the reflection symmetry, may
be found in the shape of a semicircle. A half-disk is another name for a shape that resembles a semicircle.
Since a semicircle is only half of a circle, its arc will always measure 180 degrees because 360 degrees is the total number of degrees in a circle.
Figure 3 – Circle representation
Finding a Semicircle’s Area
The region enclosed by a semicircle is referred to as its area; it is half the area of the corresponding full circle. Remember that a circle’s area is equal to πr², where the circle’s radius is
denoted by r and pi (π) is approximately equal to 22/7 = 3.14. Thus, the following equation can be used to determine a semicircle’s area:
Area of semi-circle is = ½ × πr²
Finding a Semicircle’s Circumference and Perimeter
The perimeter or circumference indicates the length of the semicircle’s entire boundary. Contrary to popular belief, a semicircle’s perimeter is not equal to half that of a circle, because half of a
circle’s perimeter only yields the curved portion’s perimeter. The diameter must be added to this half-circumference to get the whole perimeter.
Remember that a circle’s circumference is 2πr. Therefore, the circumference of the semicircle’s curved portion is equal to 1/2 of 2πr, which is πr. Let's now add the diameter’s length as well, so the
semicircle’s total perimeter is equal to πr + d, where d is its diameter.
However, we are also aware that a circle’s diameter is equal to twice its radius. As a result, we find that the semicircle perimeter is equal to πr + 2r. We can take r common, so the perimeter = r (π
+ 2).
Characteristics of a Half Circle
The following is a list of some essential characteristics of a semicircle, which together make it a distinctive form in geometry:
• A closed two-dimensional shape is known as a semicircle.
• Due to the fact that one of its edges is bent, it cannot be considered a polygon.
• One of the edges of a semicircle is curved, and this edge is known as the circumference. The other edge is straight, and this edge is known as the diameter.
• It corresponds to precisely one-half of a circle. Both semicircles that were generated out of the circle had the same diameter as the circle itself.
• A semicircle has an area equal to one-half that of a full circle.
What Is a Semi-Annual?
The word “semiannual” designates events that occur twice a year, often once every six months, and are paid for, reported, published, or take place in another manner.
For instance, the interest on a general obligation bond with a term of ten years that was issued in 2020 by Buckeye City, Ohio Consolidated School District will be paid on a semiannual basis each
year up to the maturity date of the bond in 2030.
When an investor purchases these bonds, he or she will be entitled to interest payments twice during each of those years, specifically, once in the month of June and once in the month of December.
The school system will also issue a semiannual financial report in February and November.
A Detailed Explanation of Semi-Annual
The term “semiannual” refers to something that takes place twice yearly and is merely a word. For instance, a business may hold workplace parties on a semiannual basis, a couple could celebrate their
marriage anniversary on a semiannual basis, and a family could take a vacation on a semiannual basis. The term “semiannual” refers to occurrences that take place twice yearly.
If a company decides to pay its shareholders a dividend on a semiannual basis, then those shareholders will be entitled to dividend payments on two separate occasions each year. A corporation has the
ability to decide whether or not to deliver dividends on an annual basis. Quarterly (or four times a year) publications of financial statements or perhaps even reports are common practice.
It is quite unusual for companies just to publish their financial results every other year. However, they do provide an annual report, which, according to the dictionary definition, must be done at
least once every year.
When purchasing bonds, it is essential to have a solid understanding of the semiannual payment schedule. The yield that a bond pays its holder is typically used as a way to characterize the bond. A
bond with a face value of $2,000 might have a yield of 5%, for instance.
To have a better understanding of the payment that you would get as the bondholder, it’s indeed essential to understand if this 5% has been paid annually or semiannually.
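As a rough illustration (my own sketch; it treats the 5% as a coupon rate applied to the face value, which is an assumption, since a market yield need not equal the coupon rate), the difference between the two payment schedules is easy to compute:

val faceValue = 2000.0
val couponRate = 0.05

val annualPayment = faceValue * couponRate          // 100.0, paid once a year
val semiannualPayment = faceValue * couponRate / 2  // 50.0, paid twice a year
// either schedule delivers 100.0 per year in total,
// but the semiannual one pays half of it six months earlier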
A Numerical Example of a Semi-circle
Example 1
A semicircle has a diameter of 7 cm. Determine the curved surface’s perimeter.
Given that:
The circle has a 7 cm diameter.
Radius = 7/2 cm.
Semicircle’s curved surface’s perimeter is equal to 1/2 * 2πr.
= ½ × 2 × 22/7 × 7/2
= 11 cm
Example 2
A semicircle has a diameter of 8 cm. Determine the curved surface’s perimeter.
Given that:
The circle has an 8 cm diameter.
Radius = 8/2 cm.
Semicircle’s curved surface’s perimeter is equal to 1/2 * 2πr.
= ½ × 2 × 22/7 × 8/2
≈ 12.57 cm
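The two examples can also be checked with a few lines of code. This snippet is an added illustration of the formulas above (it uses the math library's value of π instead of the 22/7 approximation, so the last decimal can differ slightly):

val pi = math.Pi

def semicircleArea(r: Double): Double = 0.5 * pi * r * r
def curvedPerimeter(r: Double): Double = pi * r       // half of 2*pi*r
def fullPerimeter(r: Double): Double = r * (pi + 2)   // curved part plus the diameter

val example1 = curvedPerimeter(7.0 / 2)   // ≈ 11.0 cm
val example2 = curvedPerimeter(8.0 / 2)   // ≈ 12.57 cm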
All mathematical drawings and images were created with GeoGebra. | {"url":"https://www.storyofmathematics.com/glossary/semi/","timestamp":"2024-11-03T00:19:05Z","content_type":"text/html","content_length":"168090","record_id":"<urn:uuid:19a7e6cf-d51a-4590-8344-9edd7b60a6ae>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00527.warc.gz"} |
Justin Barhite - Teaching
I received the University of Kentucky College of Arts & Sciences Certificate for Outstanding Teaching in 2021 and the University of Kentucky Provost’s Award for Outstanding Teaching in 2022.
Courses taught at the University of Colorado Boulder
MATH 1300 Calculus 1 (Fall 2023, Spring 2024)
MATH 2300 Calculus 2 (Spring 2024)
Courses taught at the University of Kentucky
MA 111 Introduction to Contemporary Mathematics (Fall 2021, Spring 2022)
MA 310 Mathematical Problem Solving for Teachers (Spring 2021)
MA 241 Geometry for Middle School Teachers (Fall 2020)
MA 202 Math for Elementary Teachers (Spring 2019)
MA 109 College Algebra (Fall 2018)
MA 213 Calculus III (Summer 2018)
MA 110 Algebra and Trigonometry for Calculus (Fall 2022)
MA 114 Calculus II (MathExcel*) (Spring 2020)
MA 113 Calculus I (MathExcel*) (Fall 2019)
MA 113 Calculus I (Spring 2018)
MA 110 Algebra and Trigonometry for Calculus (Fall 2017)
*Students in MathExcel sections spend extra time in recitation each week, working on problems in small groups with assistance from the TA and undergraduate assistants. | {"url":"https://www.jbarhite.com/teaching","timestamp":"2024-11-04T01:25:41Z","content_type":"text/html","content_length":"74327","record_id":"<urn:uuid:4c873da7-39e0-4bdb-8772-c273d24d3504>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00183.warc.gz"} |
Does category theory make you a better programmer ?
How much of category theory knowledge should a working programmer have ? I guess this depends on what kind of language the programmer uses in his daily life. Given the proliferation of functional
languages today, specifically typed functional languages (Haskell, Scala etc.) that embeds the typed lambda calculus in some form or the other, the question looks relevant to me. And apparently to
a few others
as well. In one of his courses on Category Theory, Graham Hutton mentioned the following points when talking about the usefulness of the theory:
• Building bridges—exploring relationships between various mathematical objects, e.g., Products and Function
• Unifying ideas - abstracting from unnecessary details to give general definitions and results, e.g., Functors
• High level language - focusing on how things behave rather than what their implementation details are e.g. specification vs implementation
• Type safety - using types to ensure that things are combined only in sensible ways e.g. (f: A -> B, g: B -> C) => (g o f: A -> C)
• Equational proofs—performing proofs in a purely equational style of reasoning
Many of the above points can be related to the experience that we encounter while programming in a functional language today. We use
types, we use Functors to abstract our computation, we marry types together to encode domain logic within the structures that we build and many of us use
equational reasoning
to optimize algorithms and data structures.
But how much do we need to care about how category theory models these structures and how that model maps to the ones that we use in our programming model ?
Let's start with the classical definition of a Category. [ ] defines a Category as comprising:
1. a collection of objects
2. a collection of arrows (often called morphisms)
3. operations assigning to each arrow f an object dom f, its domain, and an object cod f, its codomain (f: A → B, where dom f = A and cod f = B)
4. a composition operator assigning to each pair of arrows f and g with cod f = dom g, a composite arrow g o f: dom f → cod g, satisfying the following associative law: for any arrows f: A → B, g: B
→ C, and h: C → D, h o (g o f) = (h o g) o f
5. for each object A, an identity arrow id[A]: A → A satisfying the following identity law: for any arrow f: A → B, id[B] o f = f and f o id[A] = f
Translating to Scala
Ok let's see how this definition can be mapped to your daily programming chores. If we consider Haskell, there's a category of Haskell types called Hask, which makes the collection of objects of the
Category. For this post, I will use Scala, and for all practical purposes assume that we use Scala's pure functional capabilities. In our model we consider the Scala types forming the objects of our category.
You define any function in Scala from type A to type B (A => B) and you have an example of a morphism. For every function we have a domain and a co-domain. In our example, val foo: A => B = //.. we have the type A as the domain and the type B as the co-domain.
Of course we can define composition of arrows or functions in Scala, as can be demonstrated with the following REPL session ..
scala> val f: Int => String = _.toString
f: Int => String = <function1>
scala> val g: String => Int = _.length
g: String => Int = <function1>
scala> f compose g
res23: String => String = <function1>
and it's very easy to verify that the composition satisfies the associative law.
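For instance (this check is my addition, reusing f and g from the session above), grouping the composition either way gives the same result on a sample value:

// ((f compose g) compose f) and (f compose (g compose f)) are the same function
val lhs = ((f compose g) compose f)(42)   // "2"
val rhs = (f compose (g compose f))(42)   // "2"
val associative = lhs == rhs              // true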
And now the identity law, which is, of course, a specialized version of composition. Let's define some functions and play around with the identity in the REPL ..
scala> val foo: Int => String = _.toString
foo: Int => String = <function1>
scala> val idInt: Int => Int = identity(_: Int)
idInt: Int => Int = <function1>
scala> val idString: String => String = identity(_: String)
idString: String => String = <function1>
scala> idString compose foo
res24: Int => String = <function1>
scala> foo compose idInt
res25: Int => String = <function1>
Ok .. so we have the identity law of the Category verified above.
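Strictly speaking, the REPL session only shows that the composites have the right types; a value-level spot check (my addition, reusing foo, idInt and idString from above) confirms that composing with the identities leaves the behaviour unchanged:

val leftIdentity  = (idString compose foo)(10) == foo(10)   // true
val rightIdentity = (foo compose idInt)(10) == foo(10)      // true

Putting the two laws together, the whole definition above can also be transcribed into a generic Scala trait. This sketch is mine, not something from the original post; it is written in the spirit of libraries such as Scalaz or Cats, the names are chosen for illustration, and the laws live in comments since the type system cannot enforce them:

trait Category[~>[_, _]] {
  // an identity arrow for every object (point 5 of the definition)
  def id[A]: A ~> A

  // composition of arrows (point 4 of the definition)
  def compose[A, B, C](g: B ~> C, f: A ~> B): A ~> C

  // laws, stated as documentation:
  //   associativity: compose(h, compose(g, f)) == compose(compose(h, g), f)
  //   identity:      compose(id[B], f) == f == compose(f, id[A])
}

// the category of Scala types and ordinary functions
val function1Category = new Category[Function1] {
  def id[A]: A => A = (a: A) => a
  def compose[A, B, C](g: B => C, f: A => B): A => C = g compose f
}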
Category theory & programming languages
Now that we understand the most basic correspondence between category theory and programming language theory, it's time to dig a bit deeper into some of the implicit correspondences. We will
definitely come back to the more explicit ones very soon when we talk about products, co-products, functors and natural transformations.
Do you really think that understanding category theory helps you understand the programming language theory better ? It all depends how much of the *theory* do you really care about. If you are doing
enterprise software development and/or really don't care to learn a language outside your comfort zone, then possibly you come back with a resounding *no* as the answer. Category theory is a subject
that provides a uniform model of set theory, algebra, logic and computation. And many of the concepts of category theory map quite nicely to structures in programming (particularly in a language that
offers a decent type system and preferably has some underpinnings of the typed lambda calculus).
Categorical reasoning helps you reason about your programs, if they are written using a typed functional language like Haskell or Scala. Some of the basic structures that you encounter in your
everyday programming (like product types or sum types) have their correspondences in category theory. Analyzing them from the CT point of view often illustrates various properties that we tend to overlook (or take for granted) while programming. And this is not coincidental. It has been shown that there's indeed a strong correspondence between the typed lambda calculus and cartesian closed categories. And Haskell is essentially an encoding of the typed lambda calculus.
Here's an example of how we can explain the properties of a data type in terms of its categorical model. Consider the category of Products of elements and for simplicity let's take the example of
cartesian products from the category of Sets. A cartesian product of 2 sets A and B is defined by:
A X B = {(a, b) | a ∈ A and b ∈ B}
So we have the tuples as the objects in the category. What could be the relevant morphisms? In case of products, the applicable arrows (or morphisms) are the projection functions π[1]: A X B → A and π[2]: A X B → B. Now if we draw a category diagram where A X B is the product type, then for any object C we have 2 functions f: C → A and g: C → B, and the product function is represented by <f, g>: C → A X B and is defined as <f, g>(x) = (f(x), g(x)). Here's the diagram corresponding to the above category ..
and according to the category theory definition of a Product, the above diagram commutes. Note, by commuting we mean that for every pair of vertices X and Y, all paths in the diagram from X to Y are equal in the sense that each path forms an arrow and these arrows are equal in the category. So here commutativity of the diagram gives
π[1] o <f, g> = f
π[2] o <f, g> = g
Let's now define each of the functions above in Scala and see how the results of commutativity of the above diagram map to the programming domain. As a programmer we use the projection functions (_1 and _2 in Scala's Tuple2, or fst and snd in Haskell) on a regular basis. The above category diagram, as we will see, gives some additional insights into the abstraction and helps understand some of the mathematical properties of how a cartesian
product of Sets translates to the composition of functions in the programming model.
scala> val ip = (10, "debasish")
ip: (Int, java.lang.String) = (10,debasish)
scala> val pi1: ((Int, String)) => Int = (p => p._1)
pi1: ((Int, String)) => Int = <function1>
scala> val pi2: ((Int, String)) => String = (p => p._2)
pi2: ((Int, String)) => String = <function1>
scala> val f: Int => Int = (_ * 2)
f: Int => Int = <function1>
scala> val g: Int => String = _.toString
g: Int => String = <function1>
scala> val `<f, g>`: Int => (Int, String) = (x => (f(x), g(x)))
<f, g>: Int => (Int, String) = <function1>
scala> pi1 compose `<f, g>`
res26: Int => Int = <function1>
scala> pi2 compose `<f, g>`
res27: Int => String = <function1>
So, as we claim from the commutativity of the diagram, we see that pi1 compose `<f, g>` is typewise equal to f and pi2 compose `<f, g>` is typewise equal to g. Now the definition of a Product in Category Theory says that the morphism between C and A X B is unique and that A X B is defined upto isomorphism. And the uniqueness is indicated by the symbol ! in the diagram. I am going to skip the proof, since it's quite trivial and follows from the definition of what a Product of 2 objects means. This makes sense intuitively in the programming model as well: we can have one unique type consisting of the Pair of A and B.
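Concretely, the two commutativity equations can also be checked on values. This snippet is my addition, reusing pi1, pi2, f, g and `<f, g>` from the session above:

val eq1 = (pi1 compose `<f, g>`)(10) == f(10)   // true: both sides give 20
val eq2 = (pi2 compose `<f, g>`)(10) == g(10)   // true: both sides give "10"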
Now for some differences in semantics between the categorical model and the programming model. If you consider an eager (or eager-by-default) language like Scala, the Product type fails miserably in
presence of the Bottom data type (_|_) represented by Nothing. For Haskell, the non-strict language, it also fails when we consider the fact that a Product type needs to satisfy the equation
(fst(p), snd(p)) == p
and we apply the Bottom (_|_) for p. So, the programming model remains true only when we eliminate the Bottom type from the equation. Have a look at this comment from Dan Doel in James Iry's blog post on sum and product types.
This is an instance where a programmer can benefit from knowledge of category theory. It's actually a bidirectional win-win when knowledge of category theory helps more in understanding of data types
in real life programming.
Interface driven modeling
One other aspect where category theory maps very closely with the programming model is its focus on the arrows rather than the objects. This corresponds to the notion of an interface in programming. Category theory typically "abstracts away from elements, treating objects as black boxes with unexamined internal structure and focusing attention on the properties of arrows between objects" [ ]. In programming also we encourage interface driven modeling, where the implementation is typically abstracted away from the client. When we talk about objects upto isomorphism, we focus solely on
the arrows rather than what the objects are made of. Learning programming and category theory in an iterative manner serves to enrich your knowledge on both. If you know what a Functor means in
category theory, then when you are designing something that looks like a Functor, you can immediately make it generic enough so that it composes seamlessly with all other functors out there in the world.
Thinking generically
Category theory talks about objects and morphisms and how arrows compose. A special kind of morphism is the Identity morphism, which maps to the Identity function in programming. This is 0 when we talk about addition, 1 when we talk about multiplication, and so on. Category theory generalizes this concept by using the same vocabulary (morphism) to denote both stuff that does some operations and stuff that doesn't. And it sets this up nicely by saying that for every object X, there exists a morphism id[X]: X → X called the identity morphism on X, such that for every morphism f: A → B we have id[B] o f = f = f o id[A]. This (the concept of a generic zero) has been a great lesson at least for me when I identify structures like monoids in my programming today.
In the programming model, many dualities are not explicit. Category theory has an explicit way of teaching you the dualities in the form of category diagrams. Consider the example of Sum type (also
known as Coproduct) and Product type. We have abundance of these in languages like Scala and Haskell, but programmers, particularly people coming from the imperative programming world, are not often
aware of this duality. But have a look at the category diagram of the sum type A + B for objects A and B.
It's the same diagram as the Product only with the arrows reversed. Indeed a Sum type A + B is the categorical dual of Product type A X B. In Scala we model it as the union type Either[A, B], where the value of the sum type comes either from the left or the right. Studying the category diagram and deriving the properties that come out of its commutativity helps understand a lot of theory
behind the design of the data type.
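The dual construction can be written out in Scala just like the product one above. This sketch is my addition (the identifiers inl, inr, p, q and `[p, q]` are names I chose for illustration):

// injections into the sum type, dual to the projections pi1 and pi2
val inl: Int => Either[Int, String] = (a: Int) => Left(a)
val inr: String => Either[Int, String] = (b: String) => Right(b)

// two functions into a common codomain (here Int)
val p: Int => Int = _ * 2
val q: String => Int = _.length

// the copair [p, q]: A + B => C, dual to <f, g>: C => A X B
val `[p, q]`: Either[Int, String] => Int = {
  case Left(a)  => p(a)
  case Right(b) => q(b)
}

// the dual diagram commutes: composing with the injections recovers p and q
val viaLeft  = (`[p, q]` compose inl)(10) == p(10)                  // true
val viaRight = (`[p, q]` compose inr)("debasish") == q("debasish")  // true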
In the next part of this discussion I will explore some other structures like Functors and Natural Transformation and how they map to important concepts in programming which we use on a daily basis.
So far, my feeling has been that if you use a typed functional language, a basic knowledge of category theory helps a lot in designing generic abstractions and make them compose with related ones out
there in the world.
23 comments:
Hi there, You have done an excellent job. I'll certainly digg it and personally recommend to my friends. I'm confident they'll be benefited from this site.
My web blog - scripting vs programming language
Erik said...
Hi Debasish, great post, thanks!
Tim said...
Thank you for this Debashish, it's really enlightening. However, I'm having a little trouble working out what the category 'C' denotes in your sum and product examples - the types of 'f' and 'g'
in the scala example of a product seem to suggest that C is the type Integer (and is therefore the same as A?) - but I'm having a hard time seeing the significance of this (or indeed working out
what the types of f, g, and [f,g] should be for the 'sum' example. Might you be able to shed some light on this?
Unknown said...
Hi Tim -
Consider the definition of a Cartesian product between 2 Sets. In category theory, we define it as follows:
For all sets C, if there exists a morphism f: C -> A and g: C -> B, then there exists a *unique* h given by h: C -> A × B (typically written <f, g>) such that π1 ◦ h = f and π2 ◦ h = g.
The object C exists to show that the morphism h: C -> A x B is unique upto isomorphism. In other words, if we have 2 such objects C1 and C2, such that both C1 and C2 morph to A x B (and either of
them can be called the Product of A and B), then C1 is isomorphic to C2.
In terms of the programming model, if we had C1 as Int and C2 as XInt (some other type), but both map to the product of Int x String, then we can say that Int is isomorphic to XInt. This goes to
show that for every pair of Scala types, the Product or Tuple2 is uniquely defined.
Does this clear things a bit ?
j2kun said...
Strictly speaking, products are not defined as sets of tuples. The tuples are just a realization of a product in a specific category (the category of sets is the simplest example). In fact, in
pure category theory there is no such thing as a set or an element. In this way, you can define things like the category of types, which has absolutely nothing to do with sets.
It's an important fact that not all categories have products. A category with products is a strong assumption, and if you talk about "elements" of sets in your category, then you're probably
working under the assumption that your category is abelian. At least, this is the main content of the Freyd-Mitchell embedding theorem, which says that every abelian category can be thought of as
a category of R-modules (and hence, of sets).
Unknown said...
Some good discussions on proggit .. http://www.reddit.com/r/programming/comments/xdz76/does_category_theory_make_you_a_better_programmer/ and Google+ https://plus.google.com/101021359296728801638
Adam Warski said...
nice article!
One thing I don't understand, why: "... the Product type fails miserably in presence of the Bottom data type (_|_) represented by Nothing ..."?
Unknown said...
For a Product type we need to satisfy the following rules (easier to explain in Haskell):
fst (a, b) = a // _1 in Scala
snd (a, b) = b // _2 in Scala
In order to be a categorical product it also has to satisfy the following rule for a product type p:
(fst p, snd p) = p
Now if you substitute _|_ for p, then you get
(_|_, _|_) = _|_
which fails in Haskell, since the above is false.
A categorical product for Haskell would be unlifted, and would be considered non-bottom if either component were non-bottom, but bottom if both were. But we don't have those available.
The above explanation is from Dan Doel's comment in James Iry's blog post that I referred to in the article (http://james-iry.blogspot.in/2011/05/why-eager-languages-dont-have-products.html#
Hope this helps ..
Adam Warski said...
Hmm aren't some levels mixed here?
For a categorical product we need the diagram to commute and the product object to be unique up to isomorphism, that is <f, g>;pi_1 = f etc., there's no notion of "elements" of the objects, as the (fst
p, snd p) = p formula could suggest.
The data type _|_ (Nothing in Scala) is uninhabited, that is there are no instances of this type. So we can't take "a" p, as there is none.
Unknown said...
I am not sure I get your question ..
However in categorical domain, we don't have the bottom. Hence the equations hold good. While in the programming model, unless we assume a strong functional model, the bottom is the spoilsport.
But I think you are pointing to something else ..
Adam Warski said...
Ah, got it!
There are no *instances* of type Nothing, but there are *expressions* of type nothing (a diverging computation). And then, indeed, there are no products, if as your element you take such a
Unknown said...
Rob said...
The first sentence following the third snippet looks incorrect. Aren't the last two expressions typewise equal to f and g respectively (rather than to each other)?
Unknown said...
Rob - Thanks for pointing out .. fixed.
seanbell said...
It’s hard to find knowledgeable people on this topic, but you sound like you know what you’re talking about! Thank you and giving great information about how to make better programmer.
Jan said...
I think providing good and fast software solutions for normal people's problems makes you a good dev. This one: Scala programming pretty much describes some Scala things. I find it quite
important to ensure people are able to use your software. To be honest I never found a client who was really interested in what type of language is in the background. Most users don't care...
Best Training said...
Good article.
piyushmahesh said...
Can you recommend any good introductory books on category theory for programmers?
Unknown said...
I started with Conceptual Mathematics (http://www.amazon.com/Conceptual-Mathematics-First-Introduction-Categories/dp/052171916X), which does a great job teaching the basics.
sftwr2020 said...
Ashish, your book DSLs in Action (Manning) is one of my favorites! I recently reviewed on my blog the best Scala books in print (Programming Digressions). Given how your fine book has fantastic
portions dealing with Scala in the DSL world, I made sure to mention some details about your book in my comments that accompany that particular blog - Keep up your great work!
sftwr2020 said...
Hi Debasish - As I returned to re-read this fantastic post, I glanced at my prior comment and cringed upon realizing that I had inadvertently mixed up your name with the name of a good friend of
mine... My apologies!
BTW, I'm looking forward to buying your book Functional and Reactive Domain Modeling as soon as it's published - Another friend of mine here in Austin (Texas) is, in fact, currently reviewing
your upcoming book through Manning Publications' MEAP process, and has a lot of good things to say about it!
Unknown said...
Hello Akram -
Thanks for the kind words. Also it feels good to hear that you find my latest book interesting enough.
Unknown said...
In section "Thinking generically" I don't understand references to 1 (unit of multiplication) and 0 (unit of addition) and what it brings to the paragraph (multiplication with 1 form a monoid and
addition with 0 form a monoid when considering familiar numeric sets such as N, Z, Q, R). Is the parallel that composition of functions forms a monoid with idA and idB playing a role analog to 1
and 0 (when using corresponding arithmetic operators)? | {"url":"https://debasishg.blogspot.com/2012/07/does-category-theory-make-you-better.html?showComment=1344246638482","timestamp":"2024-11-15T01:19:28Z","content_type":"application/xhtml+xml","content_length":"141735","record_id":"<urn:uuid:d3c76b48-a91b-40e0-b363-20fae1007067>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00794.warc.gz"} |
In a recent paper the authors proposed a lower bound on $1-\lambda_i$, where $\lambda_i$, $\lambda_i \ne 1$, is an eigenvalue of a transition matrix $T$ of an ergodic Markov chain. The bound, which involved the group inverse of $I-T$, was derived from a more general bound, due to Bauer, Deutsch, and Stoer, on the eigenvalues of a stochastic matrix other than its constant row sum. Here we adapt the bound to give a lower bound on the algebraic connectivity of an undirected graph, but principally consider the case of equality in the bound when...
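As a side note on the object mentioned in the abstract, the group inverse of $Q=I-T$ for an ergodic chain can be obtained from the stationary distribution $\pi$ via Meyer's formula $Q^{\#}=(Q+\mathbf{1}\pi^{T})^{-1}-\mathbf{1}\pi^{T}$. The NumPy sketch below is our own illustration, not code from either paper; the function name and the small example chain are made up for demonstration. It builds $Q^{\#}$ and checks the group-inverse axioms.

import numpy as np

def group_inverse(T):
    # Group inverse of Q = I - T for an ergodic transition matrix T,
    # via Q# = (Q + 1 pi^T)^{-1} - 1 pi^T, with pi the stationary distribution.
    n = T.shape[0]
    w, V = np.linalg.eig(T.T)                      # left eigenvectors of T
    pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    pi = pi / pi.sum()
    W = np.outer(np.ones(n), pi)                   # limiting matrix 1 pi^T
    Q = np.eye(n) - T
    return np.linalg.inv(Q + W) - W

T = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])                    # illustrative ergodic chain
Qg = group_inverse(T)
Q = np.eye(3) - T
# group-inverse axioms: Q Qg Q = Q,  Qg Q Qg = Qg,  Q Qg = Qg Q
print(np.allclose(Q @ Qg @ Q, Q), np.allclose(Qg @ Q @ Qg, Qg), np.allclose(Q @ Qg, Qg @ Q))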
Let $A$ be an $n \times n$ symmetric, irreducible, and nonnegative matrix whose eigenvalues are $\lambda_1 > \lambda_2 \ge \dots \ge \lambda_n$. In this paper we derive several lower and upper bounds, in particular on $\lambda_2$ and $\lambda_n$, but also, indirectly, on $\mu = \max_{2 \le i \le n} |\lambda_i|$. The bounds are in terms of the diagonal entries of the group generalized inverse, $Q^{\#}$, of the singular and irreducible M-matrix $Q = \lambda_1 I - A$. Our starting point is a spectral resolution for $Q^{\#}$. We consider the case of equality in some of
these inequalities and we apply our results to the algebraic connectivity of undirected... | {"url":"https://eudml.org/search/page?q=sc.general*op.AND*l_0*c_0author_0eq%253A1.Shader%252C+Bryan+L.&qt=SEARCH","timestamp":"2024-11-10T17:13:26Z","content_type":"application/xhtml+xml","content_length":"98155","record_id":"<urn:uuid:8f1e3d1c-e099-455d-9d81-9bf305b744fd>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00761.warc.gz"} |
What is the Fractional of a Hundred? - Learn Definition, Facts & Examples
In Maths, a fraction is used to symbolise the element or part of the entirety. It represents the equal elements of the entire. A fraction has two parts, namely numerator and denominator. The number
on the top is called the numerator, and the number on the bottom is called the denominator. The numerator defines the number of identical parts taken, whereas the denominator defines the entire
quantity of the same components in a whole.
For instance, $\dfrac{5}{10}$ is a fraction. Here, 5 is a numerator, and 10 is a denominator. So what is the fractional of a hundred? In this article, you'll learn how to find the fractional of 100.
Types of Fractions
There are four different varieties of fractions. They are:
• Unit Fraction: A fraction whose numerator is 1 is known as a unit fraction. For instance, $\dfrac{1}{2}$, $\dfrac{1}{4}$
• Proper Fraction: If a numerator is less than the denominator, it is known as a proper fraction. $\dfrac{7}{9}$,$\dfrac{8}{10}$
• Improper Fraction: If a numerator is greater than the denominator, it is called an improper fraction. Example: $\dfrac{11}{2}$,$\dfrac{6}{4}$
• Mixed Fraction: If a fraction includes a whole number with a proper fraction, it is called a mixed fraction. Example: $5\dfrac{1}{2}$, $10\dfrac{1}{4}$
Other Types of Fractions
• Like Fractions: The fractions with the identical denominator are called like fractions.
Example: $\dfrac{4}{2}, \dfrac{7}{2}, \dfrac{9}{2}$
Here, the denominators of all of the fractions are 2. Hence, they're referred to as like fractions.
• Unlike Fractions: The fractions with distinct denominators are known as unlike fractions.
Example: $\dfrac{5}{2}, \dfrac{4}{6}, \dfrac{9}{4}$
Here, the denominator values are different in all of the fractions. Hence, they're referred to as unlike fractions.
• Equivalent Fractions: If two fractions result in identical values after simplification, they're equivalent to each other.
Example: $\dfrac{2}{3}$ and $\dfrac{4}{6}$ are equal fractions, given that $\dfrac{4}{6} = \dfrac{(2 \times 2)}{(2 \times 3)}=\dfrac{2}{3}$
Fraction on a Number Line
Representing fractions on a number line implies that we can plot fractions on a number line in the same way that we can plot whole numbers and integers. Parts of a whole are represented by fractions.
On the number line, fractions are represented by making equal portions of a whole, i.e. 0 to 1, and the number of those equal parts is the same as the number given in the fraction's denominator.
For example, if we need to represent $\dfrac{1}{8}$ on the number line, we need to mark 0 and 1 on the 2 ends and divide the number line into eight identical parts. And then mark the point which we
have to represent.
Fraction on Number Line
Solved Examples
Q1. Write five hundredths in decimal form.
Ans: We know that one-tenth is $\dfrac{1}{10}$, one-hundredth is $\dfrac{1}{100}$, and so on.
So, when we expand any number that has a decimal point, the place values of the digits after the decimal point, from left to right, are tenths, hundredths, thousandths, and so on.
In digits these values are $\dfrac{1}{10}, \dfrac{1}{100}, \dfrac{1}{1000} \ldots$
5 hundredths $=\dfrac{5}{100}$
In decimal form, it is $0.05$.
Therefore, five hundredths in the decimal form will be $0.05$.
Q2. What will be the fractional of 100?
Ans: A fraction is made up of two parts. The number on the top of the line is called the numerator. The number below the line is called the denominator.
In this case, we need to express 100 as a fraction.
Since 100 is a whole number, its denominator will simply be 1, and the numerator will remain $100$.
Since a fraction $=$ numerator $/$ denominator
Thus, the fractional of 100 will be $\dfrac{100}{1}.$
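As a quick programmatic check of the two worked examples, Python's built-in fractions module can be used. The snippet below is only an illustration, not part of the standard pencil-and-paper method.

from fractions import Fraction

five_hundredths = Fraction(5, 100)              # Q1: five hundredths
print(five_hundredths)                          # 1/20 (Fraction reduces automatically)
print(float(five_hundredths))                   # 0.05

hundred = Fraction(100, 1)                      # Q2: the whole number 100 as a fraction
print(hundred.numerator, hundred.denominator)   # 100 1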
Practice Problems
Q1. What will be the decimal form of $\dfrac{5}{10}$?
Ans: 0.5
Q2. What will be the fractional form of 0.44?
Ans: $\dfrac{44}{100}$
In this article, we learned about fractions and different types of fractions. A fraction is a numerical value that represents a part of a whole. A fraction can be a portion or section of any quantity out of a whole, where the whole can be any number, a specific value, or a thing. Then we learned how to write five hundredths in decimal form and the fractional of 100.
FAQs on What is the Fractional of a Hundred?
1. How are fractions and decimals related?
A fraction may be transformed into a decimal if we divide the given numerator by the denominator.
Similarly, to transform a decimal into a fraction, we write the digits of the given decimal as the numerator and place a fraction bar under it. Then, we place 1 in the denominator, followed by as many zeros as there are digits after the decimal point. Finally, this fraction may be simplified. For example, converting 0.5 to a fraction will give us $\dfrac{5\div 5}{10\div 5}$ = $\dfrac{1}{2}$.
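The same conversion procedure can be checked with Python's fractions module (an illustrative aside, not part of the method itself):

from fractions import Fraction

print(Fraction(5, 10))     # 1/2 : 0.5 written as 5 over 10, then simplified
print(Fraction("0.44"))    # 11/25 : 44/100 after simplification
print(5 / 10)              # 0.5 : numerator divided by denominator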
2. What are Comparing fractions?
Comparing fractions means locating the larger and the smaller fraction among two or more fractions. For instance, let us compare $\dfrac{3}{16}$ and $\dfrac{7}{16}$.
We first study the denominators of the given fractions: $\dfrac{3}{16}$ and $\dfrac{7}{16}$. Since the denominators are identical, we can examine the numerators. Since 3 < 7, the fraction with the
larger numerator is the bigger fraction. Therefore, $\dfrac{3}{16}$ < $\dfrac{7}{16}$. If the fractions have distinct denominators, we can convert them to like fractions by locating the LCM of the
denominators and writing the respective equivalent fractions. Once the denominators become identical, we can compare the numerators.
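A one-line check of this comparison in Python (again just an illustration; Fraction brings fractions to a common denominator internally, mirroring the LCM method described above):

from fractions import Fraction

print(Fraction(3, 16) < Fraction(7, 16))   # True: same denominator, 3 < 7
print(Fraction(2, 3) < Fraction(3, 4))     # True: 8/12 < 9/12 using the LCM 12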
3. What are some real-life examples of fractions?
In real life, we find many examples of fractions, including:
• If a pizza is split into two equal parts, each part equals half of the entire pizza.
• If we divide a slice of watermelon into 3 identical components, then every part is equal to $\dfrac{1}{3}$ rd of the entire watermelon. | {"url":"https://www.vedantu.com/maths/fractional-of-hundred","timestamp":"2024-11-04T12:31:17Z","content_type":"text/html","content_length":"213053","record_id":"<urn:uuid:54aa4418-aff3-47a4-b79c-ec747a14e4da>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00716.warc.gz"} |
Cascades and wall-normal fluxes in turbulent channel flows
1 Introduction
The multiscale feature of turbulent flows has always drawn the attention of scientists since Richardson’s work (Richardson Reference Richardson1922) on the turbulent energy cascade. But, it is only
after Kolmogorov’s seminal works on the inertial subrange of turbulence (Kolmogorov Reference Kolmogorov1941a ,Reference Kolmogorov b ) that most research efforts have been devoted to the study of
the turbulent multiscale interactions. Kolmogorov’s works contain one of the very few exact and non-trivial results in the field of turbulence. Kolmogorov’s groundbreaking intuition was to reduce the
complex problem of turbulence to its essential features, by assuming homogeneity and isotropy. In these conditions, the main process governing turbulence is the energy cascade among scales which is
described by a single scalar parameter, the averaged dissipation rate. Much of the current understanding of fully developed turbulence is based on this result and relies on a general equation which
is known as the Kolmogorov equation,
(1.1) $$\begin{eqnarray}\langle {\it\delta}u_{\Vert }^{3}\rangle -6{\it\nu}\frac{\text{d}}{\text{d}r}\langle {\it\delta}u_{\Vert }^{2}\rangle =-\frac{4}{5}\langle {\it\epsilon}\rangle r,\end{eqnarray}$$
where ${\it\delta}u_{\Vert }$ is the longitudinal velocity increment between two points with separation $r$ , ${\it\epsilon}$ is the rate of dissipation, ${\it\nu}$ is the kinematic viscosity and
angular brackets denote ensemble average. The basic result of (1.1) is that, at sufficiently large Reynolds numbers, an intermediate range of scales exists, away from energy injection and energy
dissipation, where the energy flux across scales is identified as $\langle {\it\delta}u_{\Vert }^{3}\rangle /r$ . This expression provides a direct evaluation of the energy cascade through the
inertial range (see Nie & Tanveer Reference Nie and Tanveer1999; Aoyama et al. Reference Aoyama, Ishihara, Kaneda, Yokokawa, Itakura and Uno2005; Gotoh & Watanabe Reference Gotoh and Watanabe2005;
Ishihara, Gotoh & Kaneda Reference Ishihara, Gotoh and Kaneda2009).
Actually, real turbulent flows have a much richer physics, involving, beside energy transfer, anisotropic production and inhomogeneous spatial fluxes. Such processes are strongly scale and position
dependent and lead to a geometrically complex redistribution of energy. Several attempts aiming at the ultimate understanding of the energy path from production to dissipation in wall-bounded flows
can be found in the recent literature. The nonlinear transfer of the turbulent kinetic energy has been investigated by Domaradzki et al. (Reference Domaradzki, Liu, Härtel and Kleiser1994) in two
different wall-bounded flows, with and without a mean flow, by means of a mixed physical–spectral decomposition of the nonlinear term of the Navier–Stokes equations. The analysis of the energy
redistribution among different distances from the wall and among lateral wavenumbers highlights that energy is transferred most effectively between scales of similar size and suggests the possible
presence of a reverse cascade from large to small wavenumbers in the near-wall region. Similar conclusions have been drawn by Dunn & Morrison (Reference Dunn and Morrison2003) where a wavelet
decomposition is used to provide a dual scale/physical space description of the production and flux of energy. The analysis reveals that the transfer is predominantly local. There are however
complications arising from the fact that both forward and backward energy transfer are present. A generalized form of the Kolmogorov’s equation is proposed by Danaila et al. (Reference Danaila,
Anselmet, Zhou and Antonia2001) showing how the inhomogeneity of the large scales quantitatively acts along the direction normal to the wall. The energy transfer has been studied also from a
phenomenological point of view in terms of size of turbulent structures. In fact, since the turbulent structures near the wall are small while those further away from the wall are large, the spatial
flux of energy from the near-wall region to the bulk flow is conjectured to be an example of inverse cascade (Jiménez Reference Jiménez1999). Following this line of reasoning, in Adrian, Meinhart &
Tomkins (Reference Adrian, Meinhart and Tomkins2000), forward and reverse cascade are proposed to coexist through a phenomenological model of hairpin packets within the logarithmic layer of wall
turbulence while, in Lozano-Durán & Jiménez (Reference Lozano-Durán and Jiménez2014), statistics of vortex cluster and Reynolds stress structures are used to identify cascades. A recent review of the
variety of approaches used for the study of the turbulent cascades can be found in Jiménez (Reference Jiménez2012).
All of these approaches, however, do not fully account for the multidimensional nature and the directionality of the process. To provide a more complete view, the four-fifths law in the form of a
balance equation for second-order structure function, originally proposed by Hill (Reference Hill2002), was used by Marati, Casciola & Piva (Reference Marati, Casciola and Piva2004) to address the
energy transfer in both spatial and scale spaces for a turbulent channel flow. The multidimensional and directional description provided by this equation was exploited by Cimarelli, De Angelis &
Casciola (Reference Cimarelli, De Angelis and Casciola2013) to understand the formation and sustainment of long and wide turbulent fluctuations. In that paper, a model for the energy cascade was also
developed to account for the dual nature of the energy transfer consisting of forward and reverse cascades ascending from the wall. From the model a large eddy simulation (LES) closure was developed,
Cimarelli & De Angelis (Reference Cimarelli and De Angelis2011, Reference Cimarelli and De Angelis2012, Reference Cimarelli and De Angelis2014), that was shown able to account for the small scale
behaviour responsible for the backward energy transfer.
Aim of the present work is to extend the analysis of the generalized Kolmogorov equation to directly address the flux of energy between different wall-normal scales at different distances from the
wall. The topic has long been central in turbulence research. Probably one of the first instances is the study of the spectral budget of turbulent kinetic energy reported by Lumley (Reference Lumley
1964). He argued that the spatial inhomogeneity of wall turbulence leads to a complex energy flux where an inverse energy transfer occurs. A phenomenological theory was proposed by Townsend (
Reference Townsend1976) based on his famous attached eddy hypothesis. The description consists of elongated turbulent structures attached to the wall which are generated by the lift up and by the
orienting effect of the mean flow on the spanwise vorticity (see, e.g., Perry, Henbest & Chong Reference Perry, Henbest and Chong1986; Marusic Reference Marusic2001; Nickels et al. Reference Nickels,
Marusic, Hafez, Hutchins and Chong2007). These turbulent structures while moving away from the wall, grow and remain attached to the wall thus leading to an increase of wall-normal scales
corresponding to a sort of reverse cascade (see Piomelli, Yu & Adrian Reference Piomelli, Yu and Adrian1996). At a certain point, these structures detach and break down into smaller structures giving
rise to a form of forward cascade.
At present, much is understood about the nature of turbulence in canonical wall-bounded flows. However, the comprehensive description simultaneously encompassing spatial and scale-space features
provided by the generalized Kolmogorov equation has been only incompletely exploited. The appeal of this approach is its ability to provide a clear picture of the basic cascade process, as described
in the work of Kolmogorov, combined with the possibility to rigorously tackle inhomogeneity and anisotropy. The central quantity in the theory is the second-order structure function. The conservative
equation for this observable precisely identifies the relevant fluxes in the spatial and scale space and allows for a rigorous definition of the corresponding production and dissipation. Structure
functions have long been exploited for basic studies in turbulence. In homogeneous conditions, the second-order structure function can be taken to represent the energy content of the small scales. In
this context the second-order structure function can be referred to as the scale energy. When extending its usage to strongly inhomogeneous flows this interpretation becomes somewhat arguable
especially for large separations. Nevertheless, the elegant formulation endows such structure function with its physical interpretation, see Davidson (Reference Davidson2004, pp. 88–94). Indeed, the
Kolmogorov equation is an exact balance between second- and third-order moments (Monin & Yaglom Reference Monin and Yaglom1975; Danaila et al. Reference Danaila, Anselmet, Zhou and Antonia2001;
Danaila, Anselmet & Zhou Reference Danaila, Anselmet and Zhou2004; Germano Reference Germano2007; Gomes-Fernandes, Ganapathisubramani & Vassilicos Reference Gomes-Fernandes, Ganapathisubramani and
Vassilicos2015) showing that the rate of energy dissipation $\langle {\it\epsilon}\rangle$ is associated, at any scale, with fluxes which in turn are fed by local sources. Following the different
ranges of scales and positions encountered by the fluxes, the second- and third-order moments eventually assume their physical interpretation of scale energy and scale-energy flux.
The work is organized as follows. In § 2 we introduce the generalized Kolmogorov equation and the multidimensional space of scales/positions analysed here. In § 3 we start describing the statistics
by showing the topology of scale energy and of scale-energy source while the structure of the scale-energy fluxes is depicted in § 4. Finally § 5 reports a discussion on the combined role of
scale-energy flux and scale-energy source while § 6 closes the paper with final remarks.
2 The generalized Kolmogorov equation and the $(r_{y},r_{z},Y_{c})$ space
The second-order structure function $\langle {\it\delta}u^{2}\rangle$ , where ${\it\delta}u^{2}={\it\delta}u_{i}{\it\delta}u_{i}$ , involves the fluctuation velocity difference ${\it\delta}u_{i}=u_
{i}(x_{s}^{\prime \prime })-u_{i}(x_{s}^{\prime })$ at two points, $x_{s}^{\prime \prime }$ and $x_{s}^{\prime }$ , where the mid-point and the separation are $X_{s}=\left(x_{s}^{\prime \prime }+x_
{s}^{\prime }\right)/2$ and $r_{s}=x_{s}^{\prime \prime }-x_{s}^{\prime }$ . The balance equation for second-order structure function allows us to study the global statistical properties of
turbulence as a function of the separation vector between the two points, $r_{s}$ , and of the spatial position of the mid-point $X_{s}$ , hence describing the scale-dependent mechanisms in the
presence of inhomogeneity. Hereafter, as anticipated in the introduction, we will often refer to the second-order structure function as the scale energy. The balance equation of $\langle {\it\delta}u
^{2}\rangle$ in wall flows is the generalized Kolmogorov equation (Hill Reference Hill2002) which for a turbulent channel flow with longitudinal mean velocity $U(y)$ (Marati et al. Reference Marati,
Casciola and Piva2004) reads
(2.1) $$\begin{eqnarray}\displaystyle & & \displaystyle \frac{\partial \langle {\it\delta}u^{2}{\it\delta}u_{i}\rangle }{\partial r_{i}}+\frac{\partial \langle {\it\delta}u^{2}{\it\delta}U\rangle }{\partial r_{x}}+2\langle {\it\delta}u{\it\delta}v\rangle \left(\frac{\text{d}U}{\text{d}y}\right)^{\ast }+2\langle {\it\delta}uv^{\ast }\rangle {\it\delta}\left(\frac{\text{d}U}{\text{d}y}\right)+\frac{\partial \langle v^{\ast }{\it\delta}u^{2}\rangle }{\partial Y_{c}}\nonumber\\
\displaystyle & & \displaystyle \quad =-4\langle {\it\epsilon}^{\ast }\rangle +2{\it\nu}\frac{\partial ^{2}\langle {\it\delta}u^{2}\rangle }{\partial r_{i}\partial r_{i}}-\frac{2}{{\it\rho}}\frac{\partial \langle {\it\delta}p{\it\delta}v\rangle }{\partial Y_{c}}+\frac{{\it\nu}}{2}\frac{\partial ^{2}\langle {\it\delta}u^{2}\rangle }{\partial {Y_{c}}^{2}},\end{eqnarray}$$
where the asterisk denotes the arithmetic average of a variable at the points $X_{s}\pm r_{s}/2$ , $Y_{c}=X_{2}$ is the wall-normal coordinate of the mid-point, $v=u_{2}$ is the wall-normal velocity,
${\it\nu}$ is kinematic viscosity, ${\it\rho}$ is the density and ${\it\epsilon}={\it\nu}(\partial u_{i}/\partial x_{j})(\partial u_{i}/\partial x_{j})$ is the pseudo-dissipation. For the symmetries
of the channel, the angular brackets operator $\langle \cdot \rangle$ denotes spatial average along the wall-parallel homogeneous directions and average over different uncorrelated fields. It is
useful to recast (2.1) in terms of a four-dimensional vector field, $\unicode[STIX]{x1D731}=({\it\Phi}_{r_{x}},{\it\Phi}_{r_{y}},{\it\Phi}_{r_{z}},{\it\Phi}_{c})$ , hereafter called the scale-energy
hyper-flux, defined in a four-dimensional space $(r_{x},r_{y},r_{z},Y_{c})$ ,
(2.2) $$\begin{eqnarray}\boldsymbol{{\rm\nabla}}_{4}\boldsymbol{\cdot }\unicode[STIX]{x1D731}(\boldsymbol{r},Y_{c})={\it\xi}(\boldsymbol{r},Y_{c}),\end{eqnarray}$$
where $\boldsymbol{{\rm\nabla}}_{4}$ is the four-dimensional gradient and ${\it\xi}=-2\langle {\it\delta}u{\it\delta}v\rangle (\text{d}U/\text{d}y)^{\ast }-2\langle {\it\delta}uv^{\ast }\rangle {\it\
delta}(\text{d}U/\text{d}y)-4\langle {\it\epsilon}^{\ast }\rangle$ is the scale-energy source/sink. The flux in the three-dimensional space of scales is $\unicode[STIX]{x1D731}_{r}=({\it\Phi}_{r_
{x}},{\it\Phi}_{r_{y}},{\it\Phi}_{r_{z}})=\langle {\it\delta}u^{2}{\it\delta}\boldsymbol{u}\rangle +\langle {\it\delta}u^{2}{\it\delta}U\rangle \hat{\boldsymbol{e}}_{x}-2{\it\nu}\boldsymbol{{\rm\
nabla}}_{r}\langle {\it\delta}u^{2}\rangle$ , where $\hat{\boldsymbol{e}}_{x}$ is the unit vector in the mean flow direction $x$ . In addition to the flux in the space of scales, the generalized
Kolmogorov equation features the spatial flux ${\it\Phi}_{c}=\langle v^{\ast }{\it\delta}u^{2}\rangle +2\langle {\it\delta}p{\it\delta}v\rangle /{\it\rho}-({\it\nu}/2)\partial \langle {\it\delta}u^
{2}\rangle /\partial Y_{c}$ .
Equation (2.2) has been analysed in the hyper-plane $r_{y}=0$ in Cimarelli et al. (Reference Cimarelli, De Angelis and Casciola2013). This approach allowed us to identify the reverse energy transfer
as a crucial mechanism characterizing wall turbulence, responsible for the formation of the commonly observed very long and wide velocity fluctuations in the outer region of the flow. Here, the
analysis will be performed in the hyper-plane $r_{x}=0$ , i.e. in the $(r_{y},r_{z},Y_{c})$ -space. The present results will allow us for the first time to distinguish in a well-defined mathematical
framework the flux in the space of wall-normal scales $r_{y}$ from the spatial flux among different wall distances $Y_{c}$ . Indeed, due to violation of spatial homogeneity, this distinction lacks of
a classical spectral description.
Given the definition of velocity increments, the $r_{y}$ direction is limited by the presence of the wall. In particular, for a given wall distance $Y_{c}$ , the space of wall-normal scales extends
from zero to twice the distance from the wall, $r_{y}\in [0,2Y_{c}]$ . Actually, negative increments, i.e. $r_{y}\in [-2Y_{c},2Y_{c}]$ , are not considered here since the symmetry of the flow is
such that the transformation $\boldsymbol{r}\rightarrow -\boldsymbol{r}$ , $\widetilde{Y}_{c}=\text{const.}$ leads to $\unicode[STIX]{x1D731}_{r}\rightarrow -\unicode[STIX]{x1D731}_{r}$ and ${\it\
Phi}_{c}\rightarrow {\it\Phi}_{c}$ . This transformation leaves quantities such as ${\it\delta}u^{2}$ and $v^{\ast }$ statistically invariant while reversing the sign of vectors such as ${\it\delta}
u_{i}$ and $\boldsymbol{{\rm\nabla}}_{r}$ . It is worth pointing out that the space of wall-normal scales, $r_{y}$ , involves velocity increments between two points separated in the inhomogeneous
direction. By definition, the second-order structure function can be written as $\langle {\it\delta}u^{2}\rangle =2\langle k\rangle (Y_{c}+r_{y}/2)+2\langle k\rangle (Y_{c}-r_{y}/2)-2\langle u_{i}(Y_
{c}+r_{y}/2)u_{i}(Y_{c}-r_{y}/2)\rangle$ where $k=u_{i}u_{i}/2$ and increments are considered only in the wall-normal direction for simplicity. This expression allows us to highlight how spatial
inhomogeneity enters the space of wall-normal scales $r_{y}$ . In particular, the inhomogeneous spatial distribution of energy $\langle k\rangle (y)$ contributes to the value of scale energy by means
of a scale-dependent quantity $4\langle k\rangle ^{\ast }(Y_{c},r_{y})=2\langle k\rangle (Y_{c}+r_{y}/2)+2\langle k\rangle (Y_{c}-r_{y}/2)$ . For small wall-normal scales compared with the length of
inhomogeneity of the flow, the dependence of $4\langle k\rangle ^{\ast }$ on $r_{y}$ is small and, hence, scale energy is roughly unaffected by inhomogeneity. In contrast, for large wall-normal
scales, the inhomogeneous spatial distribution of energy $k(y)$ significantly contribute to the value of scale energy.
The data used for the present analysis come from a channel-flow direct numerical simulation (DNS) at $Re_{{\it\tau}}=u_{{\it\tau}}h/{\it\nu}=2003$ , where $h$ is the channel half-height. Throughout
the paper, inner variables will be used and denoted with the superscript +, implying normalization of lengths with the friction length ${\it\nu}/u_{{\it\tau}}$ and velocities with the friction
velocity $u_{{\it\tau}}=\sqrt{{\it\tau}_{w}/{\it\rho}}$ where ${\it\tau}_{w}$ is the average shear stress at the wall. The computational domain is $8{\rm\pi}h\times 2h\times 3{\rm\pi}h$ and the
resolution in the homogeneous directions is ${\rm\Delta}x^{+}=8.2$ and ${\rm\Delta}z^{+}=4.1$ , see Hoyas & Jiménez (Reference Hoyas and Jiménez2006) for the details of the simulation. The velocity
and pressure increments, ${\it\delta}u_{i}$ and ${\it\delta}p$ respectively, appearing in the generalized Kolmogorov equation (2.1) are computed directly in physical space over the whole
computational box by considering the values of velocity and pressure at the two points of the increment. Then, the terms of the generalized Kolmogorov equation (2.1) are computed and averaged by
considering spatial homogeneity in the streamwise and spanwise directions and using 15 independent fields. The statistical convergence of the data is measured by considering the accuracy with which (
2.1) is satisfied. The mean unbalance of the terms of (2.1) is found to be less than 1.5 % of the local dissipation.
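As a rough indication of how such statistics can be formed, the fragment below sketches the evaluation of $\langle {\it\delta}u^{2}\rangle$ for one snapshot and one $(r_{y},r_{z},Y_{c})$ triplet using index shifts along the periodic directions. It is only an illustrative reconstruction of the procedure described above: the function and variable names are ours, the wall-normal separation is handled by grid index for simplicity, and the averaging over the 15 independent fields is left out.

import numpy as np

def scale_energy(u, v, w, jc, jry, krz):
    # Second-order structure function <du^2> at r_x = 0 for one snapshot.
    # u, v, w: velocity components with shape (nx, ny, nz); jc: wall-normal
    # index of the mid-point Y_c; jry, krz: separations in grid points
    # (both assumed even; the non-uniform wall-normal grid is ignored here).
    jtop, jbot = jc + jry // 2, jc - jry // 2
    du2 = np.zeros_like(u[:, 0, :])
    for comp in (u, v, w):
        top = np.roll(comp[:, jtop, :], -krz // 2, axis=1)  # point at z + r_z/2
        bot = np.roll(comp[:, jbot, :],  krz // 2, axis=1)  # point at z - r_z/2
        du2 += (top - bot) ** 2
    return du2.mean()  # average over the homogeneous x-z plane

The same shifts applied along $x$ would give the $r_{x}$ dependence, which is not needed here since the analysis is restricted to $r_{x}=0$ .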
3 Scale energy and scale-energy source
We start by analysing the second-order structure function in the multidimensional $(r_{y},r_{z},Y_{c})$ space, i.e. $r_{x}^{+}=0$ , see figure 1(a) for a global view. The plot shows isolines of
scale energy on two coordinate planes, namely $r_{z}^{+}=0$ and $r_{y}^{+}=0$ , and a third oblique plane slightly displaced (30 wall units) from $r_{y}^{+}=2Y_{c}^{+}$ (recall that the solid wall
limits the maximum allowed wall-normal separation to twice the wall-normal distance of the mid-point, $0\leqslant r_{y}^{+}\leqslant 2Y_{c}^{+}$ ). In figure 1(c) the isolines are plotted on the
planes $r_{z}^{+}=1000$ . The relative maxima of the second-order structure function apparent in this plot are roughly located at $r_{y}^{+}=2Y_{c}^{+}-30$ , i.e. the oblique plane in figure 1(a)
contains these local maxima. In figure 1(b) scale energy is shown in the plane $r_{y}^{+}=0$ , see also figure 2 where a detailed view of $\langle {\it\delta}u^{2}\rangle$ in the $r_{y}=0$ plane is
shown as a function of the spanwise separation for different $Y_{c}$ in figure 2(a) and as a function of the wall distance for different $r_{z}$ in figure 2(b). The behaviour of scale energy as a
function of $Y_{c}^{+}$ is not monotonous, with maxima occurring in the near-wall region, see figures 1(b) and 2(b). Observe that the $Y_{c}^{+}$ location of the maxima becomes independent of $r_{z}^
{+}$ for large values of the latter and takes place at $Y_{c}^{+}=18$ , see the inset plot of figure 2(b). The behaviour of scale energy in the spanwise scales is again not monotonous with maxima
occurring for relatively large spanwise scales, see figures 1(b) and 2(a). The scales where such maxima occur increase with the distance from the wall. These maxima correspond to negative minima in
the correlation, $\langle \boldsymbol{u}(X_{c}^{+},Y_{c}^{+},Z_{c}^{+}+r_{z}^{+}/2)\boldsymbol{\cdot }\boldsymbol{u}(X_{c}^{+},Y_{c}^{+},Z_{c}^{+}-r_{z}^{+}/2)\rangle =\langle k(Y_{c}^{+})\rangle -\
langle {\it\delta}u^{2}(Y_{c}^{+},r_{z}^{+})\rangle /2$ . At careful inspection, the absolute maximum of the structure function emerges at scales order $r_{z}^{+}\simeq 1000$ and for $Y_{c}^{+}=18$ .
This value could be related to the presence of large coherent structures in the channel flow, see e.g. Monty et al. (Reference Monty, Stewart, Williams and Chong2007). The behaviour at small
separations is consistent with that already described in a previous paper (Saikrishnan et al. Reference Saikrishnan, De Angelis, Longmire, Marusic, Casciola and Piva2012) at Reynolds number order 500
and 1000. In particular, in figure 2(a), a local maximum is observed at small scales $r_{z}^{+}\simeq 80$ within the buffer layer. It is worth recalling that, at large separations (increasing $r_{z}^
{+}$ ), when the turbulent signal becomes uncorrelated, the second-order structure function approaches twice the local value of the turbulent kinetic energy, $\lim _{r_{z}^{+}\rightarrow \infty }\
langle {\it\delta}u^{2}(Y_{c}^{+},r_{y}^{+}=0,r_{z}^{+})\rangle =2\langle k(Y_{c}^{+})\rangle$ . Overall, for sufficiently large $r_{z}^{+}$ , increasing the distance from the wall, the scale energy
tends to decrease, following the trend of the turbulent kinetic energy. Increasing $r_{z}^{+}$ a change in the concavity of the plots occurs that ultimately recovers the anomalous behaviour of the
turbulent kinetic energy profile which at larger Reynolds number should indicate the presence of a second peak in the overlap layer (Hutchins & Marusic Reference Hutchins and Marusic2007).
Let us go back to figure 1(b), corresponding to the plane $r_{z}^{+}=1000$ where the absolute maximum of the second-order structure function is achieved at $Y_{c}^{+}\simeq 18$ and $r_{y}^{+}=0$
(inset plot). For $Y_{c}^{+}>20$ , as already anticipated, the locus of relative maxima becomes the plane $r_{y}^{+}=2Y_{c}^{+}-30$ . Such maxima are somehow related to the maximum of the (single
point) turbulent kinetic energy profile. From the definition of mid-point and increment, in the plane $r_{y}^{+}=2Y_{c}^{+}-30$ the wall normal positions of the two points across which the velocity
difference is evaluated are ${y^{\prime }}^{+}=15$ and ${y^{\prime \prime }}^{+}=15+r_{y}^{+}$ . We stress that $y^{+}=18$ coincides with the location of the turbulent kinetic energy maximum. The
slight displacement from $y^{+}=18$ to ${y^{\prime }}^{+}=15$ is due to the velocity correlation still present at these scales. Actually, if the correlation were entirely negligible, the limit form
of the second-order structure function, $\langle k(Y_{c}^{+}-r_{y}^{+}/2)+k(Y_{c}^{+}+r_{y}^{+}/2)\rangle$ , would have implied ${y^{\prime }}^{+}=18$ for the locus of the maxima.
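Spelling out the arithmetic: for separations lying on the plane $r_{y}^{+}=2Y_{c}^{+}-30$ , the two points are located at
$$ {y^{\prime }}^{+}=Y_{c}^{+}-\frac{r_{y}^{+}}{2}=Y_{c}^{+}-\frac{2Y_{c}^{+}-30}{2}=15,\qquad {y^{\prime \prime }}^{+}=Y_{c}^{+}+\frac{r_{y}^{+}}{2}=2Y_{c}^{+}-15=15+r_{y}^{+}, $$
so the lower point stays in the buffer layer while the upper point moves away from the wall as the separation grows.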
The plane $r_{y}^{+}=2Y_{c}^{+}-30$ will be hereafter called the plane of attached scales to highlight that within this plane wall-normal scales are approximatively equal to twice the distance of the
mid-point from the wall, meaning that point ${y^{\prime }}^{+}$ belongs to the buffer layer. In contrast, smaller wall-normal scales, $r_{y}^{+}<2Y_{c}^{+}-30$ , will be called detached, meaning that
both points, ${y^{\prime }}^{+}$ and ${y^{\prime \prime }}^{+}$ , are distant from the wall.
After the analysis of the second-order structure function, let us now focus on the scale-energy source ${\it\xi}$ . An attempt to provide an overall picture of the distribution of scale-energy source
in the $(r_{y},r_{z},Y_{c})$ space is presented in figure 3(a) where the isosurface ${\it\xi}^{+}=0.005$ is depicted. Two main features are apparent in the figure, namely a blob at detached scales in
the overlap layer, $100<Y_{c}^{+}<0.2Re_{{\it\tau}}$ , and a thin layer over the $r_{y}^{+}=2Y_{c}^{+}-30$ plane of attached scales. The intersection of the detached scale-energy source with the $r_
{y}^{+}=0$ plane, see figure 3(b), was already described in Cimarelli et al. (Reference Cimarelli, De Angelis and Casciola2013) at a lower Reynolds number. Actually, the maximum of scale-energy
source, which cannot be seen in the global view of figure 3(a), is highlighted in the inset of figure 3(a). The maximum, ${\it\xi}_{max}^{+}=0.74$ , is located at $r_{y}^{+}=0,r_{z}^{+}=40,Y_{c}^{+}=
12$ and corresponds in position and scale to the self-sustaining cycle of near-wall turbulence. Figure 3(c) shows the plane $r_{z}^{+}=40$ that goes just through this maximum. The behaviour of the
scale-energy source close to this maximum is highlighted in the inset of figure 3(c) where it is apparent that isolines of ${\it\xi}$ are roughly located at the intersection between the inclined
plane of attached scales, $r_{y}^{+}=2Y_{c}^{+}-30$ , and the plane $r_{y}^{+}=0$ .
Let us consider the relative maximum for the scale-energy source found in the overlap layer for detached scales, see figure 3(b). From previous investigations Cimarelli et al. (Reference Cimarelli,
De Angelis, Schlatter, Brethouwer, Talamelli and Casciola2015), we know that the strength of this outer scale-energy source in inner units, ${\it\xi}_{max}^{+}=0.0095$ , stays unchanged while its
extension increases with increasing Reynolds number, at least in the range that was available to us. The consequence is that the amount of scale-energy injected in the flow by this region should
increase with Reynolds number and could become an essential feature in contributing to explain the behaviour of the scale-energy fluxes at high Reynolds number.
The third most relevant feature of the scale-energy source is the relative maxima occurring in the thin layer of net energy production shown by the inclined plane of figure 3(a) corresponding to the
attached scales. These attached scales are found to be responsible for the largest contribution to the scale-energy source in the outer region. The maximum intensity of the attached scale-energy
source, ${\it\xi}_{max}^{+}=0.2$ , still smaller than the inner source, exceeds that previously discussed in connection with the detached outer source. Let us point out that the inner region of
scale-energy source and the attached region are not disjoined, see the inset of figure 3(c). Beyond $Y_{c}^{+}=20$ , this combined region becomes fully aligned with the oblique plane, indicating that
at these distances from the wall the source is fully attached. As pointed out via a completely different approach by Del Álamo et al. (Reference Del Álamo, Jiménez, Zandonade and Moser2006) and
Lozano-Durán, Flores & Jiménez (Reference Lozano-Durán, Flores and Jiménez2012), this wall distance represents the cross-over between the two attached/detached-dominated regions of the flow. This
link is interesting, given the different approach used in introducing the notion of attached/detached scales, which in their case is based on the wall-normal length of turbulent structures defined by
means of different thresholding techniques.
It is finally important to note that the two outer scale-energy source regions, attached and detached, are responsible for the violation of the equilibrium assumption usually made for the study of
the overlap layer whereby production is locally balanced by dissipation. As will be shown in the next section, such violation leads to strong consequences for the topology of fluxes. This scenario
should be especially true by increasing the Reynolds number. Since the outer attached and detached regions should increase their extent with the Reynolds number (Cimarelli et al. Reference Cimarelli,
De Angelis, Schlatter, Brethouwer, Talamelli and Casciola2015), we speculate that, together, the two contributions should become increasingly important for the high-Reynolds-number regime both in
terms of their intensity (large region of intense ${\it\delta}u^{2}$ ) and of their ability to sustain the turbulent motion (large region of intense ${\it\xi}$ ). This conjecture is consistent with
the behaviour of the single-point turbulent kinetic energy source, $s=-\langle uv\rangle (\text{d}U/\text{d}y)-\langle {\it\epsilon}\rangle$ , which is displayed for three Reynolds numbers, $Re_{{\it
\tau}}=550$ , $Re_{{\it\tau}}=950$ and $Re_{{\it\tau}}=2000$ , in figure 4. From the plots the increasing share of the overlap layer to the total single-point energy source is apparent; see Smits,
McKeon & Marusic (Reference Smits, McKeon and Marusic2011) for a review of the topic.
4 Scale-energy paths
In this section we analyse the flux of scale energy, $({\it\Phi}_{r_{y}},{\it\Phi}_{r_{z}},{\it\Phi}_{c})$ , in the $(r_{y},r_{z},Y_{c})$ space. To visualize the scale-energy paths, the field lines
of the flux vector field $({\it\Phi}_{r_{y}},{\it\Phi}_{r_{z}},{\it\Phi}_{c})$ are plotted in figure 5 where the grey scale encodes the strength of the flux. The fluxes take origin from the peak of
energy source ${\it\xi}$ in the inner region discussed in the previous section and highlighted by a circle in figure 5(a). The peak of energy source corresponds to the singular point of the fluxes.
fluxes eventually reach the $Y_{c}$ -distributed scale-energy sink located at small dissipative scales. Despite the complexity of the overall picture of ascending spirals, the path of the fluxes from
the origin to dissipation follows well-defined patterns. In particular, four regions are recognized as sketched in figure 6 in terms of scales $(r_{y},r_{z})$ and wall distances $Y_{c}$ traversed by
the flux along its path.
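The field lines shown in figure 5 are integral curves of $({\it\Phi}_{r_{y}},{\it\Phi}_{r_{z}},{\it\Phi}_{c})$ . The sketch below indicates how such curves might be traced numerically from gridded flux data; this is our own illustrative code, not the authors' post-processing, and the function signature, grids and starting point are assumptions.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import RegularGridInterpolator

def trace_field_line(phi_ry, phi_rz, phi_c, ry, rz, yc, p0, length):
    # Integrate d(r_y, r_z, Y_c)/d gamma = Phi / |Phi|, so that the independent
    # variable gamma is the arc length along the scale-energy path.
    comps = [RegularGridInterpolator((ry, rz, yc), f,
                                     bounds_error=False, fill_value=0.0)
             for f in (phi_ry, phi_rz, phi_c)]

    def rhs(gamma, p):
        vec = np.array([c(p)[0] for c in comps])
        norm = np.linalg.norm(vec)
        return vec / norm if norm > 0.0 else vec   # unit tangent of the flux

    sol = solve_ivp(rhs, (0.0, length), p0, max_step=1.0)
    return sol.y   # shape (3, nsteps): the traversed (r_y, r_z, Y_c) points

Parametrizing the curve by the normalized flux direction makes the integration variable coincide with the arc length ${\it\gamma}$ used as the natural parameter along each path in § 5.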
4.1 Region 1
In the first region (region 1 of sketch 6 a,b), the fluxes, starting from the singularity, follow a single line increasing the spanwise scale $r_{z}$ at constant distance from the wall, $Y_{c}^{+}=
14.5$ and $r_{y}=0$ , see the lower side of the triangle in figure 5(a,b). Along this source line the production term is very active, ${\it\xi}>0$ , and the strength of the flux increases. This is a
line of divergence since $\boldsymbol{{\rm\nabla}}\boldsymbol{\cdot }\unicode[STIX]{x1D731}={\it\xi}>0$ and the fluxes progressively depart from it at increasing spanwise scales $r_{z}={\rm\Delta}z_
{d}$ to bifurcate toward the wall and the bulk flow. Here we are interested only in the fluxes toward higher wall distances.
4.2 Region 2
Following the fluxes departing from the buffer layer, the wall-normal scale $r_{y}$ increases together with the wall-normal distance $Y_{c}$ . After the transition region sketched in figure 6(a,b),
for $Y_{c}^{+}>20$ the fluxes become aligned with the plane of attached scales, $r_{y}^{+}=2Y_{c}^{+}-30$ , highlighted by the red line in figure 5(c). Now ${\it\Phi}_{r_{y}}$ and ${\it\Phi}_{c}$ are
positive while the fluxes keep on moving toward larger spanwise scales. We find
(4.1) $$\begin{eqnarray}{\it\Phi}_{r_{z}}\sim {\it\Phi}_{r_{y}}/2\sim {\it\Phi}_{c},\end{eqnarray}$$
see the dashed green line in the red triangle of figure 5(b). Hence, the fluxes move among spanwise and wall-normal scales which increase linearly with the wall distance,
(4.2a,b ) $$\begin{eqnarray}r_{y}\sim 2Y_{c},\quad r_{z}\sim {\rm\Delta}z_{d}+Y_{c}\quad \text{or}\quad r_{z}\sim {\rm\Delta}z_{d}+r_{y}/2,\end{eqnarray}$$
where ${\rm\Delta}z_{d}$ parametrizes the family of field lines by the spanwise scale at which the flux departs from the source line in the buffer layer. Summarizing, from a hierarchy of spanwise
scales close to the wall a spatial reverse cascade takes place. The scale-energy ascends toward the channel centre moving, through a straight line in the $(r_{y},r_{z},Y_{c})$ space, toward linearly
increasing spanwise and wall-normal scales (region 2).
In analogy with the line of divergence in the buffer layer (region 1), the plane of attached scales, $r_{y}^{+}=2Y_{c}^{+}-30$ , is a plane of divergence from which the fluxes eventually detach as
apparent in figure 5(c). Indeed, as shown in the previous § 3, the attached scales are responsible for a significant scale-energy source, ${\it\xi}>0$ , thus leading to strong positive values of the
divergence, $\boldsymbol{{\rm\nabla}}\boldsymbol{\cdot }\unicode[STIX]{x1D731}>0$ .
4.3 Region 3
In the third region the behaviour of the fluxes falls into two different families. Figure 5 provides an overall view of this complex topology using different projections while a synthetic sketch is
provided in figure 6. Concerning the first family, region 3a of figure 6(a), all the three components of the flux are positive, meaning that scale energy is moved by the flux toward increasing
spanwise and wall-normal scales while ascending toward the centre of the channel. However now ${\it\Phi}_{r_{y}}$ , still positive, is less than twice the spatial wall normal flux ${\it\Phi}_{c}$ .
The interpretation is that the scale energy leaves the attached plane to feed detached eddies of increasing spanwise and wall-normal scales (detached spatial reverse cascade). From the inspection of
the data, this family is characterized by the fact that the spanwise scale $r_{z}$ where the flux departs from the plane of attached scales is larger than the limiting value $\tilde{r}_{z}^{+}=5Y_{c}
^{+}-30$ , see figure 5(a,b) where $\tilde{r}_{z}$ is shown as the upper side of the red triangle lying in the plane of attached scales.
Concerning the second family, region 3b of figure 6(b), the flux remains on the plane of attached scales, ${\it\Phi}_{r_{y}}\sim 2{\it\Phi}_{c}$ , but now, at variance with region 2, the flux
component in the spanwise scale direction, ${\it\Phi}_{r_{z}}$ , becomes negative. In this case the bundle of field lines coming from region 2 bend towards smaller spanwise scale, figure 5(a,b) to
eventually detach from the plane at the end of region 3. As a consequence, detachment occurs at spanwise scales smaller than $\tilde{r}_{z}$ .
Let us point out that the behaviour of the fluxes in region 3a might be related with the presence of the outer scale-energy source at those detached scales as previously shown in figure 3. Indeed,
the family of fluxes of region 3a detach from the plane of attached scales at relatively large spanwise scales, $r_{z}>\tilde{r}_{z}$ . The detached scale-energy source is located at smaller spanwise
and wall-normal scales, hence these trajectories seem to avoid the detached source by moving toward still larger spanwise and wall-normal scales thus forming a detached reverse cascade. In contrast,
for the fluxes leaving the plane of attached scales at $r_{z}<\tilde{r}_{z}$ the detached source remains at larger spanwise scales. Hence, these fluxes form a detached forward cascade, as will be
shown in the next section. Despite the fact that the intensity of the source in this region is very small compared with the scale-energy source in the attached scales, its effect on the topology of
the fluxes appears to be non-negligible.
4.4 Region 4
After regions 3a and 3b, the scale-energy paths finally form a forward cascade, ascending towards the bulk, toward small $r_{z}$ and $r_{y}$ , region 4 of figure 6(a,b). At increasing Reynolds number, we expect in this region the progressive development of a classical inertial range à la Kolmogorov, where the smallest scales eventually assume the characteristics of locally homogeneous and isotropic
turbulence (Casciola et al. Reference Casciola, Gualtieri, Jacob and Piva2005; Jacob et al. Reference Jacob, Casciola, Talamelli and Alfredsson2008). In this region the spatial component of the flux,
${\it\Phi}_{c}$ , is still positive, indicating that scale energy is still moved at increasing wall-normal distances.
The detached forward cascade is the last part of the scale-energy path before dissipation at $r_{z},r_{y}\simeq {\it\eta}$ , with ${\it\eta}$ the putative Kolmogorov scale, thus closing the lifecycle
of turbulence from production to dissipation. It is worth pointing out that a correlation of the form
(4.3) $$\begin{eqnarray}{\it\Phi}_{r_{y}}\sim -2{\it\Phi}_{c},\end{eqnarray}$$
green dashed line in figure 5(c), is observed between spatial and wall-normal scale components of the flux in the intermediate range of scales of the overlap layer. This suggests that, following the
flux in region 4, the wall-normal scale linearly decreases with the wall distance as $r_{y}^{+}\sim B-2Y_{c}^{+}$ . Accordingly, given the wall-normal coordinates of the two points involved in
constructing the flux, $y_{top}=Y_{c}+r_{y}/2$ and $y_{bot}=Y_{c}-r_{y}/2$ , the top one remains at a constant distance from the wall along the field line, as sketched in region 4 of figure 6(a,b).
5 The combined role of fluxes and sources
We address here the strict relationship between fluxes and sources in the sustainment of turbulence in the different regions of the phase space $\left(r_{y},r_{z},Y_{c}\right)$ .
Fluxes and sources have been already discussed at length for the buffer layer which is characterized by the strongest values for the source. In particular, the field lines of the flux spring from a
singularity identified with the peak of ${\it\xi}$ . The reverse scale-energy cascade in region 1 (figure 6) also corresponds to strong scale-energy source. In contrast, the fluxes and sources
further away from the wall need further characterization.
In figures 7 and 8 the behaviour of two generic field lines of scale-energy flux is addressed showing in (a) the scales, $r_{z}$ , $r_{y}$ , $|\boldsymbol{r}|$ and the wall-normal position $Y_{c}$
while in (b) the strengths of flux $|{\it\Phi}|$ and source ${\it\xi}$ as a function of the arc length
(5.1) $$\begin{eqnarray}{\it\gamma}=\int \,\text{d}{\it\gamma}\quad \text{with }\text{d}{\it\gamma}=\sqrt{(\text{d}r_{z}^{2}+\text{d}r_{y}^{2}+\text{d}Y_{c}^{2})}.\end{eqnarray}$$
These field lines are selected as representative of the structure of the scale-energy paths schematized in figure 6(a,b), and are shown respectively in figures 7 and 8. The paths start from the
transition between regions 1 and 2. Different trends are consistently observed in the different regions, however the parametrization in terms of arc length might be specific to the particular streamline considered.
The first part of the field line of figure 7(a), see the sketch in figure 6(b), after a short transition, involves scales increasing linearly with wall distance, $r_{z}^{+}\sim {\rm\Delta}z_{d}^{+}
+Y_{c}^{+}$ (dashed grey line) and $r_{y}^{+}\sim 2Y_{c}^{+}-30$ (dashed black line). This is region 2 of figure 6 where ${\it\Phi}_{r_{z}}\sim {\it\Phi}_{r_{y}}/2\sim {\it\Phi}_{c}$ and a reverse
cascade spatially moving away from the wall in the plane of attached scales takes place. Successively, in region 3b, the flux, while remaining in the attached plane $r_{y}^{+}\sim 2Y_{c}^{+}-30$
(dashed black line), bends towards smaller spanwise scales (dashed grey line), as appreciated by the change of the sign of ${\it\Phi}_{r_{z}}$ . Finally, both the spanwise and wall-normal scales
(dashed grey and black line respectively) decrease forming an ascending detached forward cascade up to dissipation, region 4 of figure 6(b). In figure 8(a) the picture is the same with the exception
that, instead of region 3b, a reverse cascade at detached scale, $r_{y}^{+}<2Y_{c}^{+}-30$ , takes place, namely region 3a of figure 6(a).
After discussing the geometry of the flux field lines, let us address the strength of the flux and of the source (figure 7 b). Following the transition region, where the intensity of the reverse
cascade increases due to the action of the source term (grey line), in region 2 the flux strength decreases along the reverse cascade which takes place in the plane of attached scales (black line).
Our data show that the intensity of the flux follows a power law,
(5.2) $$\begin{eqnarray}|\unicode[STIX]{x1D731}|\sim {\it\gamma}^{-1/4},\end{eqnarray}$$
suggesting a self-similar process linked to the cascade. In the meanwhile the source is very active implying that the reverse cascade gains scale energy from the nearly constant source, ${\it\xi}\sim
\text{const.}>0$ , at attached scales (grey line). As shown in figure 7(b), by entering region 3b, still in the plane of attached scales, the field lines bend towards smaller $r_{z}$ ( ${\it\Phi}_{r_
{z}}<0$ ) and the intensity of the flux (black line) drastically increases. At the same time the source (grey line) decreases to eventually become negative at the end of this region. Meanwhile, the
flux tubes are squeezed. Finally, the flux detaches from the plane, region 4, and intercepts smaller spanwise and wall-normal scales while ascending to larger wall distances. As shown in figure 7(b),
in this region, the source term (grey line) is consistently negative, ${\it\xi}<0$ , with turbulence sustained only by the decreasing flux (black line). The intensity of the flux behaves linearly
with the arc length along the field line,
(5.3) $$\begin{eqnarray}|\unicode[STIX]{x1D731}|\sim {\it\gamma}_{max}-{\it\gamma},\end{eqnarray}$$
see the dash-dotted line in the logarithmic plot of figure 7(b). In the corresponding range of figure 7(a) the scale $|\boldsymbol{r}|$ is shown to decrease linearly with ${\it\gamma}$ , implying
that the flux strength decreases linearly with the scale. This is consistent with Kolmogorov description of inertial range of turbulence (direct cascade) that should occur at intermediate scales in
the overlap layer. This process is eventually terminated by the dissipation occurring at small scales.
In figure 8(b), the picture is qualitatively the same except for the third region, region 3a of figure 6(a), where the flux detaches from the plane of attached scales flowing toward larger spanwise
and wall-normal scales while increasing the distance from the wall to form a detached reverse cascade. In contrast to region 3b, the intensity of the flux (black line) reaches a minimum while
detaching. After bending toward small spanwise scales while still moving toward larger wall-normal scales the flux starts to increase. The maximum flux intensity is finally observed where the forward
cascade begins and both $r_{y}$ and $r_{z}$ decrease, region 4.
Summarizing, in region 1 and in the transitional layer of the scale-energy path, the strong scale-energy source in the inner layer plays a leading role defining the singularity point for the fluxes
and sustaining the hierarchy of spanwise scales emerging from the buffer layer. The scale-energy source in this region represents also the triggering mechanisms for the reverse cascades toward the
attached scales of motion at larger distances from the wall in the overlap layer. In these attached scales the source term is very large thus sustaining the reverse cascade toward larger attached
scales located further away from the wall and the continuous detachment of fluxes toward smaller detached scales up to dissipation (forward cascade). This last consideration could corroborate the
idea of an overlap layer independent from the near-wall region and where the attached scales sustain, rather than being fed by, the spatial reverse cascade triggered in the buffer layer.
In closing this section let us propose a possible description of the detachment of the scale-energy path from the plane of attached scales. By considering a coordinate system $({\it\tau}_{1},{\it\
tau}_{2},{\it\eta})$ with ${\bf\tau}=({\it\tau}_{1},{\it\tau}_{2})$ and ${\it\eta}$ aligned and normal to the plane of attached scales, respectively, we can rewrite (2.2) as
(5.4) $$\begin{eqnarray}\boldsymbol{{\rm\nabla}}_{{\it\tau}}\boldsymbol{\cdot }\unicode[STIX]{x1D731}_{{\it\tau}}({\bf\tau},{\it\eta})+\frac{\partial {\it\Phi}_{{\it\eta}}}{\partial {\it\eta}}({\bf\tau},{\it\eta})={\it\zeta}({\bf\tau},{\it\eta}),\end{eqnarray}$$
with ${\it\zeta}={\it\xi}-\partial {\it\Phi}_{x}/\partial r_{x}$ to be understood as an extended source term. By the inspection of the data, we found that the topology of ${\it\zeta}$ is essentially
the same of ${\it\xi}$ . As shown in § 3, in the overlap layer, the strong positive values of the source term are concentrated in a very thin layer aligned to the plane of attached scales. The
extended source ${\it\zeta}$ appears to be weakly dependent on ${\bf\tau}$ while in the ${\it\eta}$ direction it can be roughly modelled as a Dirac delta function,
(5.5) $$\begin{eqnarray}{\it\zeta}\simeq {\it\zeta}({\it\eta})\propto {\it\delta}({\it\eta}).\end{eqnarray}$$
Hence, (5.4) can be rewritten as
(5.6) $$\begin{eqnarray}\boldsymbol{{\rm\nabla}}_{{\it\tau}}\boldsymbol{\cdot }\unicode[STIX]{x1D731}_{{\it\tau}}({\it\eta})+\frac{\partial {\it\Phi}_{{\it\eta}}}{\partial {\it\eta}}({\it\eta})\
propto {\it\delta}({\it\eta}).\end{eqnarray}$$
In accordance with the scale-energy paths, we argue that the strong concentration of the scale-energy source mostly reflects on the normal divergence of the fluxes, hence
(5.7) $$\begin{eqnarray}\frac{\partial {\it\Phi}_{{\it\eta}}}{\partial {\it\eta}}\propto {\it\delta}({\it\eta}),\end{eqnarray}$$
thus leading to a normal component proportional to a Heaviside step function
(5.8) $$\begin{eqnarray}{\it\Phi}_{{\it\eta}}\propto H({\it\eta}).\end{eqnarray}$$
This jump of the normal component of the flux describes the detachment of the scale-energy paths and could be related to the increase of the intensity of the flux observed in regions 3a and 3b and
shown in figures 7 and 8, respectively.
6 Final remarks
The elusive nature of wall-bounded turbulence is related to the multidimensionality of energy transfer, production and dissipation which combines phenomena occurring in the space of scales with those
taking place in physical space. Different forms of energy cascade, toward both small and large scales, and the related spatial fluxes toward the bulk flow are intermingled and cannot be analysed separately.
A suitable instrument to study this energy transfer in the multidimensional space of scales and position is the generalized Kolmogorov equation for the second-order structure function which addresses
the velocity difference between two points in space. The equation could be considered the natural tool for the statistical analysis of general turbulent flows that lack a classical spectral
decomposition due to violation of spatial homogeneity. This approach allows the cascade mechanisms by which energy is transported among different flow regions and different scales to be addressed.
The equation has already been studied in Cimarelli et al. (Reference Cimarelli, De Angelis and Casciola2013) and Cimarelli et al. (Reference Cimarelli, De Angelis, Schlatter, Brethouwer, Talamelli
and Casciola2015) in the reduced space of spanwise and streamwise scales and wall-normal distances. In addition to complementing the available information, the purpose here was mainly to distinguish the
transport processes in wall-normal scales from those taking place in the physical wall-normal direction, a distinction that could not be addressed in the previous conditions, given the constraint $r_
{y}=0$ which here we relaxed. Apart from that, the two reduced views for $r_{y}=0$ and $r_{x}=0$ should be considered as entirely complementary and part of a general description in the
four-dimensional space of positions and scales provided by the generalized Kolmogorov equation.
Three driving mechanisms are found for the energy flux, two of which were already identified in the previous works. The previous two correspond to a strong source in the buffer layer related to the
near-wall cycle of turbulence and to an outer one here better characterized as belonging to the region of detached scales. The newly identified source lives at attached scales in the overlap layer.
These sourcing mechanisms lead to a complex redistribution of energy where spatially evolving forward and reverse cascades coexist involving respectively detached and attached scales of motion. The
picture is as follows. A hierarchy of spanwise scales is generated in the buffer layer through local sourcing mechanisms. Successively, through a reverse cascade, energy flows toward large spanwise
scales at constant distance from the wall. The strong source associated with this hierarchy of spanwise scales triggers a reverse cascade in the overlap layer that climbs the inclined plane of
attached scales toward increasing separations and further away from the wall. The switch between the two attached/detached dominated regions of the flow is given by $Y_{c}^{+}=20$ . The newly
identified source in the attached scales strongly contributes to sustain the reverse cascade toward larger attached scales further away from the wall and to initialize the forward cascade toward
small dissipative scales which also ascends toward the channel centre. All of these features could be consistent with the notion of an overlap layer independent of the near-wall energy source region.
In the region of the attached scales the flux, going from small to large scales and toward the bulk of the flow, follows a power law, $|\boldsymbol{\Phi}|\sim \gamma^{-1/4}$, consistently
with the idea of a self-similar process for the reverse cascade. In the region of detached scales the flux reverts to small scales still going toward the bulk. This is the process which brings scale
energy to the eventual dissipation thus closing the turbulent cycle from production to dissipation. Nevertheless, detached scales are not entirely characterized by a direct cascade toward small
scales. Actually a detached reverse cascade is found at large spanwise separations, $r_{z}>\tilde{r}_{z}$ , and is presumably related to the presence of the outer source at detached scales that was
already identified in a previous paper. Generally speaking, spatial redistribution of energy, moving away from the wall, is always present in the channel flow. As already commented on, this makes it necessary to distinguish between the wall-normal spatial component of the flux and the one occurring in wall-normal separation. The resulting picture consists of energy produced at a certain distance
from the wall and dissipated further away from it. The reverse cascade which energizes the larger scales is followed by a forward cascade combined with a spatial component of the flux toward the
bulk. A putative inertial range is expected for intermediate small detached scales within the overlap layer. Following the flux in this range, the topmost one of the two points among which the
velocity difference is evaluated remains at a constant distance from the wall.
As a final comment, we stress that the terminology we have adopted throughout the paper should be considered as suggestive of the physical meaning of the statistical objects we were considering. In
particular, as recommended in § 2, the terms scale energy and scale-energy flux should not be taken too literally. Actually, energy is by definition an additive quantity. The scale energy is not. Its
name is only intended to convey the idea that the scale energy or, more technically, the second-order structure function, obeys a conservation equation where the divergence of the relevant flux is
determined by suitable sources. At small scales, the scale energy could be approximately accepted as a measure of the intensity of the eddies living at those scales. At larger scales this
interpretation is misleading. Anyhow, the flux that is eventually dissipated by viscosity at small scales can be rightfully referred to as the flux of scale energy. If we trace back this flux we
identify the corresponding sources and in doing so we felt entitled to keep calling it the scale-energy flux. By extension, the transported quantity, the second-order structure function, was
nicknamed scale energy. To conclude, we emphasize that the paths the scale energy takes in the combined space of scales and positions are a clear indication of the processes occurring in the system.
Clearly, the overall picture is rather complex even in the relatively simple context of a canonical channel flow.
The authors would like to acknowledge the support of the European Research Council, which funded the Multiflow summer program within which this work was conceived and partially developed. We
acknowledge the technical support of Dr J. Hackl, who helped us to manage the computing resources of the School of Aeronautics, Universidad Politécnica de Madrid, which is also acknowledged. | {"url":"https://core-cms.prod.aop.cambridge.org/core/journals/journal-of-fluid-mechanics/article/cascades-and-wallnormal-fluxes-in-turbulent-channel-flows/71E16BA6610FEA979753844DB329B887","timestamp":"2024-11-02T05:32:11Z","content_type":"text/html","content_length":"1049980","record_id":"<urn:uuid:b68250c9-a898-42a8-b190-e95bc0fca640>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00047.warc.gz"} |
Indices and Standard Form MCQs [PDF] Quiz Questions Answers | Indices and Standard Form MCQ App Download & e-Book: Test 1
Class 8 Math MCQs - Chapter 2
Indices and Standard Form Multiple Choice Questions (MCQs) PDF Download - 1
Indices and Standard Form MCQs with Answers PDF Download: Quiz 1
MCQ 1:
The product of a²b^4 and a³b^5 is
1. a^10 b^20
2. a^5 b^9
3. a^4 b^6
4. a^6 b^7
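Worked solution: by the multiplication law of indices, exponents of the same base are added, so a²b⁴ × a³b⁵ = a^(2+3) b^(4+5) = a⁵b⁹, which is option 2.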
MCQ 2:
The product of y^7 and y³ is equal to
1. y^10
2. y^21
3. y^4
4. y²
MCQ 3:
The answer of 9.5 x 10^5 ⁄10^4 in ordinary notation is
1. 0.95
2. 0.095
3. 95
4. 0.0095
MCQ 4:
If 3^6 ⁄27 = 3^x then the value of 'x' is
1. 3
2. 4
3. 5
4. 2
MCQ 5:
If (9^4)^2 = 3^x then the value of 'x' is
1. 14
2. 16
3. 15
4. 17
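Worked solution: since 9 = 3², we have (9⁴)² = ((3²)⁴)² = 3^(2×4×2) = 3¹⁶, so x = 16, which is option 2.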
Android Apps includes complete analytics with interactive assessments. Download App Store & Play Store learning Apps & enjoy 100% functionality with subscriptions! | {"url":"https://mcqlearn.com/grade8/math/indices-and-standard-form-multiple-choice-questions-answers.php","timestamp":"2024-11-03T02:39:09Z","content_type":"text/html","content_length":"72474","record_id":"<urn:uuid:db1b0c98-9701-42f9-b7cd-4fcce122bf0c>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00563.warc.gz"} |
V5B2 - Selected Topics in Analysis and PDE - Dispersive PDEs: deterministic and probabilistic perspectives
Summer Semester 2021
Every Tuesday there will be a lecture, taking place on Zoom. The handwritten notes taken during the lectures will be made available on this website. Recordings of the lectures are available, please
contact Dr. Tolomeo via email if you are interested. For convenience, the recording of lecture 1 (and only lecture 1) is available on this page.
The information about the zoom session can be found on Basis.
This course aims at providing the basis for the study of dispersive equations, both in the deterministic setting, and in the probabilistic one. Our goal is to show how probabilistic effects affect
the behaviour of these equations, greatly improving the results available. We will cover
1. Strichartz estimates for Schrödinger and wave equations.
2. Local well posedness theory for Schrödinger and wave equations in subcritical Sobolev spaces H^s.
3. Global well posedness theory for Schrödinger and wave equations in H^1.
4. Ill posedness in supercritical Sobolev spaces.
5. Local well posedness in Sobolev spaces for random initial data.
6. Global well posedness for random initial data.
If time allows, we will also discuss some features of the associated stochastic PDEs.
Analysis: basic real and complex analysis, basic knowledge of Fourier analysis.
Probability: measure theoretical approach to probability, Gaussian random variables, independence.
• L. Grafakos, Classical and modern Fourier analysis.
• T. Tao, Nonlinear Dispersive Equations: Local and Global Analysis.
• M. Gubinelli, T. Souganidis, N. Tzvetkov, Singular Random Dynamics, Chapter 4. | {"url":"https://www.math.uni-bonn.de/ag/ana/SoSe2021/dispersive_PDEs_deterministic_and_probabilistic/","timestamp":"2024-11-06T02:53:16Z","content_type":"text/html","content_length":"7640","record_id":"<urn:uuid:4a6d633f-8a72-4d1a-bbb7-4cbc91de53ff>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00692.warc.gz"} |
MTU Mathematics
“Transitioning to e-assessment in Mathematics Education” is a project funded by the NATIONAL FORUM FOR THE ENHANCEMENT OF TEACHING AND LEARNING IN HIGHER EDUCATION through the Teaching and Learning
Enhancement Fund 2014.
It is a joint initiative between University College Cork and Cork Institute of Technology. The main project aim is to leverage the potential of NUMBAS (online assessment tool - University of
Newcastle) to construct localized formative e-assessment for first year service Mathematics and Statistics courses at UCC and CIT. The project focuses on the implementation and evaluation of this
Mathematics e-assessment tool.
Numbas has been developed at the University of Newcastle. It is actively maintained, freely available, and easy for students to work with straightaway. Numbas allows students to input mathematical
formulae easily and creates a similar but different question for each student. It gives students instant feedback and also interacts with Learning Management Systems like Blackboard and Moodle
automatically correcting and recording student results.
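The randomised-question idea can be pictured with a small stand-in. The sketch below is plain Python, not Numbas' own question format (Numbas questions are authored in its web editor), and the question template is invented purely for illustration.
import random

def make_question():
    # each student gets a similar but different question
    a, b = random.randint(2, 9), random.randint(2, 9)
    prompt = f"Differentiate f(x) = {a}x^{b} with respect to x."
    expected = f"{a * b}x^{b - 1}"
    return prompt, expected

def mark(submitted, expected):
    # instant feedback: naive string comparison stands in for real answer checking
    return submitted.replace(" ", "") == expected.replace(" ", "")

prompt, expected = make_question()
print(prompt)
print(mark(expected, expected))   # True: the model answer always marks as correct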
Sample Resources:
Goals of the project:
• Improve student engagement
• Improve student understanding and appreciation of mathematics
• Improve student core mathematics skills
• Manage lecturer workload
• Strengthen relationship between Maths @UCC and Maths @CIT.
University College Cork
• Tom Carroll (t.carroll@ucc.ie)
• Kieran Mulchrone (k.mulchrone@ucc.ie)
Cork Institute of Technology
• Áine Ní Shé (aine.nishe@mtu.ie)
• Julie Crowley (julie.crowley@mtu.ie) | {"url":"https://mathematics.mtu.ie/numbas1","timestamp":"2024-11-03T12:57:30Z","content_type":"application/xhtml+xml","content_length":"17977","record_id":"<urn:uuid:d9f44a42-af66-441e-aa6c-8ffc1f77dca8>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00096.warc.gz"} |
The Stacks project
Lemma 20.41.8. Let $(X, \mathcal{O}_X)$ be a ringed space. Let $\mathcal{I}^\bullet$ be a K-injective complex of $\mathcal{O}_X$-modules. Let $\mathcal{L}^\bullet$ be a K-flat complex of $\mathcal{O}_X$-modules. Then $\mathop{\mathcal{H}\!\mathit{om}}\nolimits^\bullet(\mathcal{L}^\bullet, \mathcal{I}^\bullet)$ is a K-injective complex of $\mathcal{O}_X$-modules.
The tag you filled in for the captcha is wrong. You need to write 0A8T, in case you are confused. | {"url":"https://stacks.math.columbia.edu/tag/0A8T","timestamp":"2024-11-08T12:19:27Z","content_type":"text/html","content_length":"15448","record_id":"<urn:uuid:51282e15-8604-4a63-afe7-e7cf768b5ec4>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00408.warc.gz"} |
Papers with Code - The most popular papers with code
An important building block for many quantum circuit optimization techniques is pattern matching, where given a large and a small quantum circuit, we are interested in finding all maximal matches of
the small circuit, called pattern, in the large circuit, considering pairwise commutation of quantum gates.
Quantum Physics Data Structures and Algorithms | {"url":"https://physics.paperswithcode.com/greatest","timestamp":"2024-11-10T21:02:32Z","content_type":"text/html","content_length":"117134","record_id":"<urn:uuid:1611a8fd-5e57-4690-834c-7ac57aa0532b>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00588.warc.gz"} |
Jean Constant - Julia set
The Julia set, named after the French mathematician Gaston Julia, consists of values such that an arbitrarily small perturbation can cause drastic changes in the sequence of iterated function values.
The image is created by mapping each pixel to a rectangular region of the complex plane. A significant property of the Julia sets is the so called “self-similarity” property, which can easily be
adjusted to designs of a more artistic nature when dealing with symmetry and mirroring in a representational 2D environment.
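As an illustration of that pixel-to-plane mapping, here is a generic escape-time sketch in Python (this is not the artist's 3D-XplorMath workflow, and the constant c is an arbitrary example value):
# render a rough ASCII Julia set by mapping each character cell to a point of the complex plane
width, height = 80, 40
c = complex(-0.8, 0.156)                     # example parameter; every c gives a different set
x_min, x_max, y_min, y_max = -1.6, 1.6, -1.0, 1.0

for j in range(height):
    row = ""
    for i in range(width):
        z = complex(x_min + (x_max - x_min) * i / (width - 1),
                    y_min + (y_max - y_min) * j / (height - 1))
        n = 0
        while abs(z) <= 2 and n < 50:        # escape-time iteration of z -> z^2 + c
            z = z * z + c
            n += 1
        row += "#" if n == 50 else " "
    print(row)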
A 36 plate variation on the Julia set is available at hermay.org
Fragments of a Julia set created in 3D-XplorMath were assembled and manipulated in a raster graphic program to create an original visualization using design techniques of mirroring and symmetry.
Julia set #2
Variation on the self similarity property of the Julia set.
The outlines were created in 3D-XplorMath and manipulated in a raster graphic program and imported into G. Bousquet SeamlessMaker to add Droste and mirroring effects. | {"url":"https://www.imaginary.org/tr/node/260","timestamp":"2024-11-01T22:14:12Z","content_type":"text/html","content_length":"55888","record_id":"<urn:uuid:625a7a7c-b143-45a1-bf22-56356cd7f088>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00217.warc.gz"} |
Tight local approximation results for max-min linear programs
In a bipartite max-min LP, we are given a bipartite graph G = (V ∪ I ∪ K, E), where each agent v ∈ V is adjacent to exactly one constraint i ∈ I and exactly one objective k ∈ K. Each agent v controls
a variable x[v]. For each i ∈ I we have a nonnegative linear constraint on the variables of adjacent agents. For each k ∈ K we have a nonnegative linear objective function of the variables of
adjacent agents. The task is to maximise the minimum of the objective functions. We study local algorithms where each agent v must choose x[v] based on input within its constant-radius neighbourhood
in G. We show that for every ε ≥ 0 there exists a local algorithm achieving the approximation ratio Δ[I] (1 − 1/Δ[K]) + ε. We also show that this result is the best possible – no local algorithm can
achieve the approximation ratio Δ[I] (1 − 1/Δ[K]). Here Δ[I] is the maximum degree of a vertex i ∈ I, and Δ[K] is the maximum degree of a vertex k ∈ K. As a methodological contribution, we introduce
the technique of graph unfolding for the design of local approximation algorithms.
Sándor P. Fekete (Ed.): Algorithmic Aspects of Wireless Sensor Networks, Fourth International Workshop, ALGOSENSORS 2008, Reykjavik, Iceland, July 2008, Revised Selected Papers, volume 5389 of
Lecture Notes in Computer Science, pages 2–17, Springer, Berlin, 2008
ISBN 978-3-540-92861-4 | {"url":"https://jukkasuomela.fi/max-min-lp-algosensors/","timestamp":"2024-11-14T01:54:57Z","content_type":"text/html","content_length":"6607","record_id":"<urn:uuid:b83bb15d-ad58-49da-9d63-26dc34b433f3>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00664.warc.gz"} |
Quadrilaterals - Revision Notes
CBSE Class 09 Mathematics
Revision Notes
CHAPTER 8
• Angle Sum Property of a Quadrilaterals
• Types of Quadrilaterals
• Properties of a Parallelogram
• The Mid-point Theorem
1. Sum of all the angles of a quadrilateral is 360°.
2. A diagonal of a parallelogram divides it into two congruent triangles.
3. In a parallelogram
• diagonals bisect each other.
• opposite angles are equal.
• opposite sides are equal.
(4) Diagonals of a square bisect each other at right angles and are equal, and vice-versa.
(5) A line through the mid-point of a side of a triangle parallel to another side bisects the third side. (Mid point theorem)
(6)The line segment joining the mid-points of two sides of a triangle is parallel to the third side and equal to half the third side.
(7) In a parallelogram, the bisectors of any two consecutive angles intersect at a right angle.
(8) If a diagonal of a parallelogram bisect one of the angles of a parallelogram it also bisects the second angle.
(9) The angle bisectors of a parallelogram form a rectangle.
(10) Each of the four angles of a rectangle is right angle.
(11) The diagonals of a rhombus are perpendicular to each other. | {"url":"https://mobile.surenapps.com/2020/10/quadrilaterals-revision-notes.html","timestamp":"2024-11-14T21:43:21Z","content_type":"application/xhtml+xml","content_length":"73691","record_id":"<urn:uuid:832e26fb-69d2-446f-aabb-24cd4871a780>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00653.warc.gz"} |
Statistics / superbessaywriters.com
1) The main purpose of descriptive statistics is to
1. summarize data in a useful and informative manner
1. make inferences about a population
1. determine if the data adequately represents the population
1. gather or collect data
2) The general process of gathering, organizing, summarizing, analyzing, and interpreting data is called
1. statistics
1. descriptive statistics
1. inferential statistic
1. levels of measurement
3) The performance of personal and business investments is measured as a percentage, return on investment. What type of variable is return on investment?
1. Qualitative
1. Continuous
1. Attribute
1. Discrete
4) What type of variable is the number of robberies reported in your city?
1. Attribute
1. Continuous
1. Discrete
1. Qualitative
5) What level of measurement is the number of auto accidents reported in a given month?
1. Nominal
1. Ordinal
1. Interval
1. Ratio
6) The names of the positions in a corporation, such as chief operating officer or controller, are examples of what level of measurement?
1. Nominal
1. Ordinal
1. Interval
1. Ratio
7) Shoe sizes, such as 7B, 10D, and 12EEE, are examples of what level of measurement?
1. Nominal
1. Ordinal I guess my this answer is incorrect. This should be A. Nominal. Take your call.
1. Interval
1. Ratio
8) Monthly commissions of first-year insurance brokers are $1,270, $1,310, $1,680, $1,380, $1,410, $1,570, $1,180, and $1,420. These figures are referred to as
1. a histogram
1. raw data
1. frequency distribution
1. frequency polygon
9) A small sample of computer operators shows monthly incomes of $1,950, $1,775, $2,060, $1,840, $1,795, $1,890, $1,925, and $1,810. What are these ungrouped numbers called?
1. Histogram
1. Class limits
1. Class frequencies
1. Raw data
10) The sum of the deviations of each data value from this measure of central location will always be 0
1. Mode
1. Mean
1. Median
1. Standard deviation
11) For any data set, which measures of central location have only one value?
1. Mode and median
1. Mode and mean
1. Mode and standard deviation
1. Mean and median
12) A sample of single persons receiving social security payments revealed these monthly benefits: $826, $699, $1,087, $880, $839, and $965. How many observations are below the median?
1. 0
1. 1
1. 2
1. 3
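Worked solution: sorted, the benefits are $699, $826, $839, $880, $965, $1,087; with six values the median is the average of the two middle ones, (839 + 880)/2 = 859.5, so exactly 3 observations fall below the median.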
13) A dot plot shows
1. the general shape of a distribution
1. the mean, median, and mode
1. the relationship between two variables
1. the interquartile range
14) The test scores for a class of 147 students are computed. What is the location of the test score associated with the third quartile?
1. 111
1. 37
1. 74
1. 75%
15) The National Center for Health Statistics reported that of every 883 deaths in recent years, 24 resulted from an automobile accident, 182 from cancer, and 333 from heart disease. Using the
relative frequency approach, what is the probability that a particular death is due to an automobile accident?
1. 24/883 or 0.027
1. 539/883 or 0.610
1. 24/333 or 0.072
1. 182/883 or 0.206
16) If two events A and B are mutually exclusive, what does the special rule of addition state?
1. P(A or B) = P(A) + P(B)
1. P(A and B) = P(A) + P(B)
1. P(A and/or B) = P(A) + P(B)
1. P(A or B) = P(A) – P(B)
17) A listing of all possible outcomes of an experiment and their corresponding probability of occurrence is called a
1. random variable
1. probability distribution
1. subjective probability
1. frequency distribution
18) The shape of any uniform probability distribution is
1. negatively skewed
1. positively skewed
1. rectangular
1. bell shaped
19) The mean of any uniform probability distribution is
1. (b – a)/2
1. (a + b)/2
3. Σx / n
1. nπ
20) For the normal distribution, the mean plus and minus 1.96 standard deviations will include about what percent of the observations?
1. 50%
1. 99.7%
1. 95%
1. 68%
21) For a standard normal distribution, what is the probability that z is greater than 1.75?
1. 0.0401
1. 0.0459
1. 0.4599
1. 0.9599
22) A null hypothesis makes a claim about a
1. A population parameter
1. sample statistic
1. sample mean
1. Type II error
23) What is the level of significance?
1. Probability of a Type II error
1. Probability of a Type I error
1. z-value of 1.96
1. Beta error
24) Suppose we test the difference between two proportions at the 0.05 level of significance. If the computed z is -1.07, what is our decision?
1. Reject the null hypothesis
1. Do not reject the null hypothesis
1. Take a larger sample
1. Reserve judgment
25) Which of the following conditions must be met to conduct a test for the difference in two sample means?
1. Data must be at least of interval scale
1. Populations must be normal
1. Variances in the two populations must be equal
1. Data must be at least of interval scale and populations must be normal
26) For a hypothesis test comparing two population means, the combined degrees of freedom are 24. Which of the following statements about the two sample sizes is NOT true? Assume the population
standard deviations are equal.
1. Sample A = 11; sample B = 13
1. Sample A = 12; sample B = 14
1. Sample A = 13; sample B = 13
1. Sample A = 10; sample B = 16
27) What is the chart called when the paired data (the dependent and independent variables) are plotted?
1. Scatter diagram
1. Bar chart
1. Pie chart
1. Histogram
28) What is the variable used to predict the value of another called?
1. Independent variable
1. Dependent variable
1. Correlation variable
1. Variable of determination
29) Twenty randomly selected statistics students were given 15 multiple-choice questions and 15 open-ended questions, all on the same material. The professor was interested in determining on which
type of questions the students scored higher. This experiment is an example of
1. a one sample test of means
1. a two sample test of means
1. a paired t-test
1. a test of proportions
30) The measurements of weight of 100 units of a product manufactured by two parallel processes have same mean but the standard of process A is 15 while that of B is 7. What can you conclude?
1. The weight of units in process A are grouped closer than in process B
1. The weight of units in process B are grouped closer than in process A
1. Both processes are out of control
1. More data is needed to draw a conclusion
report/review, movie review, annotated bibliography, or another assignment without having to worry about its originality – we offer 100% original content written completely from scratch | {"url":"https://contentfence.com/statistics/","timestamp":"2024-11-05T06:30:29Z","content_type":"text/html","content_length":"133984","record_id":"<urn:uuid:354751fa-f623-4506-adc0-3948e8e59324>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00360.warc.gz"} |
Team A7: Scalable Machine Learning Using FPGAs
After meeting with a professor, I have determined that the SPI clock cannot run as fast as I expected. However, SPI is still the fastest method to transfer data.
What has changed is that the SPI clock will run at 15.6 MHz, or 250MHz / 16. This means that the 50MHz FPGA clock can successfully process the SPI clock without relying on a separate clock. I still
have to code this, but it will be done by Monday.
By Friday, I plan to finish all of the SPI bus and spend the weekend completing additional tasks for TJ.
Mark’s Status Report for April 11th
This week, I wrote a helper function that takes a ML model and returns a list of all the layers of that model in the order that they are called in. This took longer than expected as some layers were
not showing up properly or in the right order. Additionally, I had to make some small modifications to the serialization of the models since we made some changes to the Transport Layer Protocol. The
two changes were changing the size of each packet of information (from any size -> 4 bytes), and swapping the order of the preambles before sending each layer.
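A minimal sketch of one way such a helper could work with PyTorch forward hooks is shown below; the function name, the use of hooks and the dummy forward pass are assumptions made for illustration, not necessarily how it was actually implemented.
import torch
import torch.nn as nn

def layers_in_call_order(model: nn.Module, example_input: torch.Tensor):
    """Return the leaf layers of `model` in the order they are called
    during a forward pass (illustrative sketch, not the project's code)."""
    called, hooks = [], []

    for module in model.modules():
        # only record leaf layers (Linear, Conv2d, ReLU, ...), not containers
        if len(list(module.children())) == 0:
            hooks.append(module.register_forward_hook(
                lambda mod, inp, out, rec=called: rec.append(mod)))

    with torch.no_grad():
        model(example_input)   # one dummy forward pass triggers the hooks in call order

    for h in hooks:
        h.remove()
    return called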
This coming week, I plan on finishing up the serialization of a specific model, and then checking that every model that was defined in the benchmark.py file serializes correctly.
Team Status Report for April 11
TJ has completed the following FPU operations:
│Operation │Described?│Implemented?│Testbench?│
│Linear Forward │Yes │Yes │Yes │
│Linear Backward │Yes │Yes │ │
│Linear Weight Gradient │Yes │Yes │ │
│Linear Bias Gradient │Yes │Yes │ │
│Convolution Forward │Yes │ │ │
│Convolution Backward │Yes │Yes │ │
│Convolution Weight Gradient │Yes │ │ │
│Convolution Bias Gradient │Yes │ │ │
│MaxPool Forward │Yes │ │ │
│MaxPool Backward │ │ │ │
│ReLU Forward │Yes │ │ │
│ReLU Backward │Yes │ │ │
│Softmax Forward │ │ │ │
│Softmax Backward │ │ │ │
│Cross-Entropy Backward │ │ │ │
│Flatten Forward │Yes │ │ │
│Flatten Backward │Yes │Yes │ │
│Parameter Update │Yes │ │ │
And will be finishing the rest to get an end-to-end test working.
Mark has finished the helper function to sort through a model and list out every layer that is called in the specific order. This will be used in order to serialize each model. Mark also made small
changes to the Transport Layer Protocol.
Jared has fixed bugs in the SPI protocol and guaranteed its ability to function on the RPi.
TJ will spend the next week finishing up the FPU Job Manager and implementing the rest of the Model Manager in preparation for the Demo on Monday.
Mark will spend the next week making sure that models are being serialized over correctly.
Jared will complete the SPI bus implementation, along with additional processing for data receiving.
Theodor’s Status Report for April 11
This week I’ve been working on the FPU Job Manager Operations. I’ve been following my previous process of describing the FSM control signals state-by-state, then simply copying them over into
SystemVerilog. Here’s what I have so far:
│Operation │Described?│Implemented?│Testbench?│
│Linear Forward │Yes │Yes │Yes │
│Linear Backward │Yes │Yes │ │
│Linear Weight Gradient │Yes │Yes │ │
│Linear Bias Gradient │Yes │Yes │ │
│Convolution Forward │Yes │ │ │
│Convolution Backward │Yes │Yes │ │
│Convolution Weight Gradient │Yes │ │ │
│Convolution Bias Gradient │Yes │ │ │
│MaxPool Forward │Yes │ │ │
│MaxPool Backward │ │ │ │
│ReLU Forward │Yes │ │ │
│ReLU Backward │Yes │ │ │
│Softmax Forward │ │ │ │
│Softmax Backward │ │ │ │
│Cross-Entropy Backward │ │ │ │
│Flatten Forward │Yes │ │ │
│Flatten Backward │Yes │Yes │ │
│Parameter Update │Yes │ │ │
Last week, I had the Convolutional Forward and Linear Forward operations described, and only the Linear Forward operation implemented.
I’ve consolidated all of the weight and bias operations into a single “Parameter Update” operation, since they’re all the exact same and the shape of each tensor can be read from memory.
Another work-around I’m implementing is for the Softmax Backward operation. I haven’t been able to find a working floating-point exponent calculator in Verilog, so in the case that I’m unable to find
one, I will simply subtract the output from the label, which in terms of optimization will have the same effect as taking the backwards gradient of the softmax direction.
Schedule & Accomplishments for Next Weeks
I’ll be finishing up the FPU Job Manager operations over the next couple days, then preparing the Model Manager for the Demo.
Jared’s Status Report for April 4
Placing the program on the FPGA isn’t as easy as I thought it would be.
Things that were easy:
• Following the user manual and learning pin assignments
• writing a compliant SPI module
Things that are hard:
• Deciphering the signals from the RPi
• Receiving the signals correctly
Here is my current configuration for the physical board:
Followed by a screenshot in Quartus of the interface (while debugging):
The SPI module likes to act strange: Once in a while it will count too many clock cycles and receive or return a bad buffer. This doesn’t appear when connecting MISO and MOSI together (loopback), so
it must be with the Pi.
I have contacted sources outside the group for help, as without an oscilloscope I don’t think I can properly assess my issue.
Next week will be dedicated to getting SPI working.
Mark’s Status Report for April 5
This week, I finished the tensor serializer helper function for serializing tensors between 1-4 dimensions. This meant converting any of the various dimensioned tensors into a single dimension list
based off of the Transport Protocol that we had described previously. Additionally, I wrote a basic serialization for each of the six possible layers that a model could have (Linear, 2D Convolution,
ReLU, Maxpool2D, Flatten, Softmax). As of this point, all tensors use integer values to represent values as it is easier to validate a tensor is being serialized correctly.
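A rough sketch of the flattening step is shown below; the exact wire format of the Transport Protocol is not described in this report, so the [ndim, dims..., values...] layout and the function name are only illustrative assumptions.
def serialize_tensor(tensor):
    """Flatten a 1-4 dimensional nested-list tensor into a single list:
    [number of dimensions, each dimension size, then the values in row-major order]."""
    dims, probe = [], tensor
    while isinstance(probe, list):
        dims.append(len(probe))
        probe = probe[0]
    assert 1 <= len(dims) <= 4, "only 1-4 dimensional tensors are supported"

    flat = []
    def walk(value):
        if isinstance(value, list):
            for item in value:
                walk(item)
        else:
            flat.append(value)
    walk(tensor)
    return [len(dims)] + dims + flat

# example: a 2x3 integer tensor
print(serialize_tensor([[1, 2, 3], [4, 5, 6]]))   # [2, 2, 3, 1, 2, 3, 4, 5, 6]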
This coming week, I plan on fully implementing the serialization for each of the six possible layers, as well as cleaning up the interaction between the Data Source machine and the Worker.
Theodor’s Status Report for April 4
This week, I started building the FPU Job Manager and implemented the Linear Forward operation. For the interim demo, I constructed a testbench that computes a matrix-vector multiplication using the
FPU Job Manager.
No schedule changes are needed for next week. I will continue implementing FPU Job Manager operations.
Accomplishments for Next Week
Next week I will have Convolutional forward implemented and the FSM Control signals defined for the rest of the operations we plan to implement.
Team Status Report for April 4
Theodor implemented the skeleton of the FPU and implemented the Linear Forward Operation. He also implemented the assignment phase of the Model Manager.
Mark implemented the Tensor serializer (Dimensions 1-4) as well as a rough skeleton for serializing each of the various layers of a model.
Jared implemented a UDP client for the Raspberry Pi and is debugging the SPI protocol on the FPGA.
Accomplishments for next week
Theodor will spend next week implementing the rest of the FPU Job Manager operations. After that, he will finish work on the Model Manager and (if necessary) implement the DPR for end-to-end
Mark will spend next week implementing the serialization of each of the potential layers, as well as cleaning up the communication between the Data Source machine and the Rasp Pis.
Jared will produce a working SPI implementation and a receiving module for the FPGA. This includes correct interpretation of the transport layer protocol.
Mark’s Status Report for March 28
This week, I set up a basic framework for the Worker Finder using the UDP communications protocol. I tested this feature by setting up a server and sent messages back and forth between the Worker
finder and the server. The server in this case acts as a Rasp Pi. There is still a little bit of work left to do on the Worker Finder as I am unclear about some of the very specific details of the
implementation. This coming week, I plan on working with Jared to hash out these details as he in charge of the Bus Protocol. I also plan on working on the Workload Manager, specifically using the
third party tool that I found a couple weeks back to measure the size of a model given the input parameters.
Jared’s Status Report for Mar. 28
This week I wrote an initial draft of the SPI slave SystemVerilog code. Still to do is the code to facilitate data transfer between the Ethernet and SPI. This should meet the mark where we can put our pieces together, however I would still need to write some tests.
I received the parts for the setup this week. I will be connecting them to the host desktop for us to use and test on.
In relation to TJ I have been slacking a bit on code. I hope to meet with him once this section is done and work on the meat and bones of the project a bit more. | {"url":"https://course.ece.cmu.edu/~ece500/projects/s20-teama7/page/2/","timestamp":"2024-11-02T12:11:38Z","content_type":"text/html","content_length":"80665","record_id":"<urn:uuid:5b123a13-774c-4c9e-8221-874681f3835b>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00793.warc.gz"} |
Enumerating identities, part 2
Part 2 of Enumerating all mathematical identities (in fixed-size bitvector arithmetic with a restricted set of operations) of a certain size.
To recap, the approach in part 1 was broadly:
• Use a CEGIS-style loop to find a pair of expressions that are equal for all possible inputs.
• After finding such a pair, block it from being found again.
• Repeat until no more pairs can be found.
One weakness (in addition to the other weaknesses) of that approach is that the "block list" keeps growing as more identities are found. Here is an alternative approach that does not use an
ever-growing block list:
• Interpreting the raw bit-string that represents a pair of expressions as an integer, find the lowest pair of equivalent expressions.
• After finding the lowest pair, interpreted as the number X, set the lower bound for the next pair to X + 1.
• Repeat until no more pairs can be found.
Finding the lowest pair of expressions
SAT by itself does not try to find the lowest solution, nor does the CEGIS-loop built on top of it. But we can use the CEGIS-loop as an oracle to answer the question: is there any solution in (a
restricted part of) the search space, and that lets us do a bitwise binary search - the "find one bit at the time" variant of binary search, typically discussed in a completely different context.
Bitwise binary search maps well to SAT, not directly of course, but in the following way:
1. Initialize the prefix to an empty list.
2. If the prefix has the same size as a pair of expressions, directly return the result of the CEGIS-oracle.
3. Extend the prefix with false.
4. Ask the CEGIS-oracle if there is a solution that starts with the prefix, if there is, go back to step 2.
5. Otherwise, turn the false at the end of the prefix into true, then go back to step 2.
Asking a SAT solver for a solution that starts with a given prefix is easy and tends to make the SAT instance easier to solve (especially when the prefix is long), this only involves forcing some
variables (the ones covered by the prefix) to be true or false, using single-literal clauses.
A very useful optimization can be done in step 4: when the CEGIS-oracle says there is a solution with the current prefix, the solution may have some extra zeroes after the prefix which we can use
directly to extend the prefix for free. That saves a lot of SAT solves when the solutions tend to have a lot of zeroes in them, which in my case they do, due to extensive use of one-hot encoding.
Encoding the lower bound
There is a neat way to encode that an integer must be greater-than-or-equal-to some given constant, which I haven't seen people talk about (perhaps it's part of the folklore?), using only popcnt
(lower_bound) clauses. The idea here is that for every bit that's set in the bound, at least one of the following bits must be set in the solution: that bit itself, or any more-significant bit that
is zero in the lower bound. That only takes one clause to encode, a clause containing the variable that corresponds to the set bit, and the variables corresponding to the zeroes to the left of that
set bit.
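As a small sanity check of this encoding, here is a tiny Python sketch (not part of the solver code, just an illustration) that emits one clause per set bit of the bound. For the 4-bit bound 0101 it produces the clauses (x0 ∨ x1 ∨ x3) and (x2 ∨ x3), which are satisfied exactly by the assignments encoding values ≥ 5.
def lower_bound_clauses(bound_bits):
    """bound_bits[i] is bit i of the lower bound (i = 0 is the least significant bit).
    Returns one clause per set bit of the bound; each clause is a list of variable
    indices of which at least one must be true."""
    clauses = []
    for i, bit in enumerate(bound_bits):
        if bit:
            clause = [i] + [j for j in range(i + 1, len(bound_bits)) if not bound_bits[j]]
            clauses.append(clause)
    return clauses

print(lower_bound_clauses([1, 0, 1, 0]))   # bound 0b0101 = 5  ->  [[0, 1, 3], [2, 3]]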
For concreteness, here's how I implemented constraining solutions to conform to the prefix, it's really simple:
// set the prefix (used by binary search)
for (size_t i = 0; i < prefix.size(); i++)
if (prefix[i])
Here's how I set the lower bound:
// if there is a lower bound (used by binary search), enforce it
if (!lower_bound.empty())
vector<Lit> cl;
for (size_t i = 0; i < lower_bound.size(); i++) {
if (lower_bound[i]) {
And binary search (with the optimization to keep the extra zeroes that the solver gives for free) looks like this:
optional<pair<vector<InstrB>, vector<InstrB>>> find_lowest(vector<bool>& prefix,
vector<vector<int>>& inputs)
do {
if (prefix.size() == progbits) {
return format_progbits(synthesize(prefix, inputs),
inputcount, lhs_size, rhs_size);
else {
auto f = synthesize(prefix, inputs);
if (f.has_value()) {
// if the next bits are already zero in the solution, keep them zero
auto bits = *f;
while (prefix.size() < progbits && !bits[prefix.size()])
} while (true);
The full code of my implementation of this idea is available on gitlab. It's a bit crap but at least it should show any detail that you may still be wondering about. In the code I also constrain
expressions to not be "a funny way to write zero" and to not be "a complicated way to do nothing", otherwise a lot of less-interesting identities would be generated. | {"url":"https://bitmath.blogspot.com/2024/08/enumerating-identities-part-2.html","timestamp":"2024-11-03T16:10:24Z","content_type":"application/xhtml+xml","content_length":"60553","record_id":"<urn:uuid:8b09952d-8f02-4955-9c78-ca27e00c71cd>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00587.warc.gz"} |
Fixed Point - (Graph Theory) - Vocab, Definition, Explanations | Fiveable
Fixed Point
A fixed point in graph theory is a vertex that remains unchanged under a given graph automorphism. In simpler terms, when you apply an automorphism to a graph, a fixed point is a vertex that maps to
itself. This concept is crucial for understanding the structure and symmetries of graphs, as it helps identify invariant properties when the graph undergoes transformations.
5 Must Know Facts For Your Next Test
1. The number of fixed points in a graph can provide insights into the graph's symmetry and structural characteristics.
2. Fixed points are often important in the study of bipartite graphs, where certain vertices may remain unchanged under specific automorphisms.
3. In permutation groups, a fixed point can indicate stable configurations that do not change under the group's operations.
4. Graphs with many fixed points can exhibit more robust symmetries, which can simplify analysis and computations related to their properties.
5. Understanding fixed points helps in the classification of graphs based on their automorphisms and can reveal whether two graphs are isomorphic.
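As a tiny concrete example (not from the original page): in the path graph 1–2–3, the map that swaps the two endpoints is an automorphism, and the middle vertex is its only fixed point. A quick check in Python:
# path graph 1 - 2 - 3 and the candidate automorphism swapping the endpoints
edges = {frozenset(e) for e in [(1, 2), (2, 3)]}
sigma = {1: 3, 2: 2, 3: 1}

def is_automorphism(perm, edge_set):
    # an automorphism must map the edge set onto itself
    return {frozenset((perm[u], perm[v])) for u, v in edge_set} == edge_set

print(is_automorphism(sigma, edges))            # True
print([v for v in sigma if sigma[v] == v])      # [2] -> vertex 2 is the fixed point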
Review Questions
• How does the concept of a fixed point relate to the study of graph automorphisms?
□ A fixed point directly illustrates how certain vertices behave under graph automorphisms. When applying an automorphism to a graph, the presence of fixed points indicates which vertices
retain their original positions. This relationship helps in analyzing the symmetries of the graph and understanding its invariant structures during transformations.
• Discuss the significance of fixed points in relation to isomorphic graphs and their automorphisms.
□ Fixed points play an important role when analyzing isomorphic graphs through their automorphisms. When two graphs are isomorphic, they will have corresponding vertices that map to each other.
The identification of fixed points can help establish whether an automorphism preserves these mappings. Consequently, it contributes to determining the structural similarities and differences
between graphs by revealing invariant characteristics.
• Evaluate how the presence of fixed points might influence the classification of a given graph's symmetry group.
□ The presence of fixed points can significantly influence how we classify a graph's symmetry group. A high number of fixed points usually indicates that the graph has more robust symmetries,
leading to richer structural properties. When assessing a symmetry group, fixed points allow for the simplification of its analysis because they highlight stable configurations within the
group actions. This evaluation aids in understanding how different classes of graphs relate to one another through their symmetrical behaviors.
APยฎ and SATยฎ are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website. | {"url":"https://library.fiveable.me/key-terms/graph-theory/fixed-point","timestamp":"2024-11-02T09:17:29Z","content_type":"text/html","content_length":"156890","record_id":"<urn:uuid:f19f495a-07fb-4e0f-90dc-c7b1a639df3f>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00038.warc.gz"} |
On Verifiable Delay Functions - How to Slow Burning the Planet Down (Verifiably)
You can find Part II of this series here.
In this blog post I am going to talk about some really cool cryptographic research done by Luca De Feo, Simon Masson and Christophe Petit around a relatively new cryptographic construction called Verifiable Delay Functions (VDF from now on). I know at this point you are thinking that the title of this blog post was yet another clickbait link, but I promise that if you bear with me until the end you are not going to be disappointed. If you have never heard about VDF, fret not: I will try to gently introduce this concept. So fasten your seat belt.
The history of VDF is actually pretty neat: it seems that the concept was growing slowly through the years before finally being formalized. This is somehow evident looking at the links in this post. VDF were formally introduced by (the legendary) Dan Boneh and his co-authors in a seminal paper less than a year ago (June 2018). The paper contained only some weak form of VDF construction (based on univariate permutation polynomials) and motivated researchers to continue to look for a theoretically optimal VDF:
We still lack a theoretically optimal VDF, consisting of a simple inherently sequential function requiring low parallelism to compute but yet being very fast (e.g. logarithmic) to invert.
Well, it looks like this incentive worked even better than predicted: indeed, within 10 days two papers responded to the challenge. Firstly Benjamin Wesolowski and shortly after Krzysztof Pietrzak published their respective papers (more on this later). But let's shift down the gear and keep things in order.
Time lock puzzle
The first construction that might resemble a VDF goes back to the '90s, precisely to Rivest, Shamir and Wagner's paper (RSW). This paper is heavily based on the famous RSA construction and introduced the concept of encrypting into the future. If you can't remember how RSA works, here is a quick informal refresher: you pick two large secret primes p and q, publish N = pq, and work with powers modulo N; knowing the factorization (hence φ(N) = (p−1)(q−1)) lets you reduce exponents, while without it you cannot.
But what about this RSW paper though? Well, now that we master RSA the concept is pretty simple. In order to encrypt something to the future it would be enough to have the exponent e being very very (very very) big, or as big as needed (it depends how long into the future we want to encrypt). Now it is clear that unless the message sender knows the (secret) factorization of N, he needs to go through all the sequential powering steps in order to encrypt the message. From the other end, if the factorization of N were known, a shortcut exists: the exponent can be reduced to e mod φ(N).
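To make the shortcut concrete, here is a toy Python sketch with parameters far too small to be secure (the primes, base and exponent are made up for illustration): without the factorization the only way is T sequential squarings, while knowing φ(N) reduces the work to a couple of modular exponentiations.
# Toy numbers, far too small to be secure: p, q secret primes, N public.
p, q = 999983, 1000003
N, phi = p * q, (p - 1) * (q - 1)
base, T = 5, 10_000                      # T plays the role of the delay parameter

x = base % N
for _ in range(T):                       # without the trapdoor: T sequential squarings
    x = x * x % N

shortcut = pow(base, pow(2, T, phi), N)  # with phi(N): reduce the huge exponent 2^T first
assert x == shortcut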
Verifiable Delay Functions (VDF)
Fast forwarding 20 years, Joseph Bonneau gave a really inspiring talk about verifiable lotteries. This concept was inspired by Rabin's 1981 paper about random beacons, conceived to avoid rigged lotteries and similar situations.
Bonneau in his talk introduced the concept of Verifiable Lotteries, where there is a service that regularly publishes random values which no party can predict and everyone can verify. Back then he did not have an effective construction in his hands, but in order for his solution to work he needed the random extraction to be slow and not parallelizable, while the verification should have been immediate. He then displayed a really nice math trick from yet another paper, by Dwork and Naor. The trick is as simple as beautiful: extracting a modular square root modulo a prime p is slow, while checking the result is a single modular squaring.
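A tiny numeric illustration of that asymmetry (arbitrary small numbers, only for the idea): for a prime p ≡ 3 (mod 4) a square-root candidate is x^((p+1)/4) mod p, which costs on the order of log p squarings, while checking it is a single squaring.
p = 999983                       # a prime with p % 4 == 3
x = 123456
y = pow(x, (p + 1) // 4, p)      # "slow" direction: about log2(p) sequential squarings
assert pow(y, 2, p) in (x % p, (p - x) % p)   # "fast" direction: one squaring verifies it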
Clear, no? Computing the modular square root is pretty simple but sequential, and the running time grows logarithmically as p grows. From the other end, the verification is immediate. All this comes with a caveat: it turns out that the computation phase is actually parallelizable. So back to square one. At this point everything was ready for the VDF idea to be finally formalized (and this happened in the cited seminal paper). But what is this VDF? Well, I guess it is time to finally introduce it.
VDF stands for Verifiable Delay Function and is (as the name says) a function that:
1. Takes T steps to evaluate even with unbounded parallelism
2. The output can be verified efficiently
And so what? Why all this fuss? Bear with me another bit and (I hope) all will be clear. It turns out that building a VDF minus any of the properties listed in its name is kind of easy (credit to Ben Fisch for this analogy, which I have seen in his VDF presentation):
1. NOT a Function: proofs of sequential work achieve this.
2. NO Delay: many examples in cryptography, e.g. Discrete Log.
3. NOT Verifiable: obtained by chaining a one-way function.
On the other hand, it seemed that building a function with all the required properties was not a trivial exercise. With a bit of surprise though, within 10 days of the publication of the VDF paper, first Benjamin Wesolowski published his Efficient Verifiable Delay Functions paper, immediately followed by Krzysztof Pietrzak's Simple Verifiable Delay Functions. Even more surprisingly, both papers were based on a similar idea (the time lock puzzle seen above) but solved the problem in two totally different ways! What made the time lock puzzle NOT a VDF was the lack of efficient verification, and this is what was solved independently and differently by Wesolowski and Pietrzak. I am not going to describe these papers here because I can never do a better job than this survey paper or the Trail of Bits blog post (so I recommend reading at least one of those two resources in case you want to know more). The interesting fact though is that neither of these two VDFs is clearly better than the other; instead each one has its own strengths and weaknesses. Recently a hybrid approach has been proposed by Wesolowski.
Both constructions need a group of unknown order, and with an RSA group this means that nobody must know the factorization of the modulus. How have Wesolowski/Pietrzak solved this? Well, actually there are two possible solutions. The first one is to perform a trusted setup (ideally via a multiparty computation) and the second is to use class groups of imaginary quadratic number fields. Wait, what? While RSA groups are somehow common in cryptography, class groups of imaginary quadratic number fields are way less so. They are pretty well studied in mathematics though, and they were discovered by Gauss (in case you want to dig into it more there is a great blog post by Michael Straka). The tl;dr of why these class groups work in this VDF setting is that taking as discriminant the negative of a prime equal to 3 mod 4 gives a group of unknown order, so computing the VDF output faster would not be possible. Incidentally enough, the same trick is reused for building efficient accumulators (to be used in order to save space in a blockchain setup).
Isogenies VDF
Right. We are finally at the part I am most interested in: the isogenies VDF :). (Briefly) on isogenies first. An isogeny is nothing else than a non-constant algebraic map between elliptic curves, preserving the point at infinity (informally speaking, it is a way to "travel" from an elliptic curve to another). In the "common" elliptic curve setting we are used to multiplying a point by a scalar and landing on yet another point of the same curve. In isogeny based cryptography things are a bit different: again highly informally, you instead start from a curve and end your journey on another curve (after a series of hops). Isogeny based cryptography started to gain popularity in the last years thanks to a celebrated paper by Jao and De Feo, where they built a key exchange protocol (that goes under the name of SIDH) based on isogenies. The key fact of SIDH is that it appears to be resistant to quantum computers, and its CCA version (SIKE) is a serious contender in the Post-Quantum Cryptography standardization. But what about VDF? Well, it turns out that we can use isogenies to build a really efficient and elegant VDF. This is what we have shown in our paper Verifiable Delay Functions from Supersingular Isogenies and Pairings. In a nutshell we force the prover to perform a long walk between curves (the length of the walk is directly proportional to the time parameter T) and we employ pairings to solve the fast verification (the pairing operation is not tied to the time parameter T). This leads to a BLS-style equality. Curiously enough, the use of pairings invalidates the quantum resistance brought by isogenies to the VDF. I will cover the isogenies VDF extensively in my next blog post, but let's spend another couple of words about it. What advantage would it bring over the existing VDFs? The first thing, which is not a real advantage but is evident, is that it employs a totally different primitive than the other 2 VDFs. Isogenies are an emergent standard tool in cryptography, becoming more popular every day, that lies on top of elliptic curves and algebraic geometry. This is already a good point because it doesn't require people to embark on a new study journey and makes code/tooling reusable. The other aspect to take into consideration here is the need of a trusted setup. As with Wesolowski's/Pietrzak's VDF over an RSA group, the isogenies VDF currently needs a trusted setup, but this is not a game over story. Indeed, if someone were able to find an algorithm to generate random supersingular curves in a way that does not reveal their endomorphism ring (and this is not totally unlikely), the requirement of the trusted setup would be lost. The last thing I will mention for now (again, more details in the next blog post) is that the isogenies VDF is also a natural VRF (this is inherited by being a generalization of the BLS signature).
If you want to play with isogenies VDF you can find some Sage code in https://github.com/isogenies-vdf/isogenies-vdf-sage (kudos to Simon Masson).
I will end up this section with a table that compares the existing VDFs that comes directly from the paper:
VDF's applications
So now that we know that a VDF construction exists, what can we do with it? Good question! Well, my hope is that the answer will finally make you believe the title of this blog post was not so cheesy after all. But let's step back one last time and come back to our verifiable lotteries and distributed random generation. A typical solution to this problem is to have something called commit and reveal.
In this scenario any (honest) participant in the distributed randomness will generate a random value r and will commit it to a public bulletin board; the final random value is obtained by XORing all the values. It is not so hard to spot a fallacy here. Indeed the last participant, let's call her Zoe, will have a clear advantage over the others and can cast a value to her own advantage, rigging the output. But at this point VDFs come to the rescue.
Indeed it is enough to pipe the outcome of the public bulletin board through a VDF. Assuming the VDF time value is long enough, Zoe will no longer have the time to cheat. E.g. if the random beacon outputs one random value every hour, it is enough to set the VDF time T to 1 hour. In this way Zoe has no control over the final output at the time she casts her contribution. Well, you know what? I just described part of the Ethereum 2.0 architecture!! Indeed you can reuse this really simple concept to try to replace one of the biggest plagues associated with blockchains: Proof of Work.
It is well known that in order to keep all these blockchains up and running (Bitcoin & co) an incredible amount of electricity is needed. Can we do any better? WE HAVE TO!
This is what Ethereum 2.0 and other blockchain-related projects are experimenting with. In the case of Ethereum 2.0 they plan to go with some form of Proof of Stake + VDF, and you can see Justin Ðrake explaining his full VDF plan in his talks. In a nutshell, while it is true that you can't speed up the VDF computation with parallelization, it has to be clear how fast you can go with a single operation (modular squaring in the Ethereum 2.0 case). For this reason they plan to build some ASICs, as clearly explained in a dedicated blog post, investing as much as 15M $ in research. Ethereum 2.0 seems to have chosen to go with Wesolowski's VDF plus RSA groups (they also plan to solve the trusted setup with a multiparty ceremony). A totally different choice has been made by Chia Network. In their case they still plan to use a VDF, but they plan to combine it with Proof of Space, and they are using Wesolowski's VDF with class groups! There is an ongoing competition with a 100k prize (currently in phase 2) where there is an attempt to speed up as much as possible the single class group operation. But this is just the start: currently there are several blockchain projects evaluating VDFs.
— Justin Ðrake (@drakefjustin) January 19, 2019
Well this post became kind of long. I will turn the focus specifically on the isogenies VDF in the next post of the series.
That's all folks! For more Crypto stuff follow me on Twitter.
Antonio Sanso (reporting on joint work with Luca De Feo, Simon Masson, Christophe Petit ).
Usual Mandatory Disclaimer: IANAC (I am not a cryptographer) so I might likely end up writing a bunch of mistakes in this blog post... tl;dr The OpenSSL 1.0.2 releases suffer from a Key Recovery
Attack on DH small subgroups . This issue got assigned CVE-2016-0701 with a severity of High and OpenSSL 1.0.2 users should upgrade to 1.0.2f. If an application is using DH configured with parameters
based on primes that are not "safe" or not Lim-Lee (as the one in RFC 5114 ) and either Static DH ciphersuites are used or DHE ciphersuites with the default OpenSSL configuration (in particular
SSL_OP_SINGLE_DH_USE is not set), then it is vulnerable to this attack. It is believed that many popular applications (e.g. Apache mod_ssl) do set the SSL_OP_SINGLE_DH_USE option and would therefore not
be at risk (for DHE ciphersuites), they still might be for Static DH ciphersuites. Introduction So if you are still here it means you wanna know more. And here is the thing. In my last bl
tl;dr if you are using go-jose , node-jose , jose2go , Nimbus JOSE+JWT or jose4j with ECDH-ES please update to the latest version. RFC 7516 aka JSON Web Encryption (JWE) hence many software libraries
implementing this specification used to suffer from a classic Invalid Curve Attack . This would allow an attacker to completely recover the secret key of a party using JWE with Key Agreement with
Elliptic Curve Diffie-Hellman Ephemeral Static (ECDH-ES) , where the sender could extract receiver’s private key. Premise In this blog post I assume you are already knowledgeable about elliptic
curves and their use in cryptography. If not Nick Sullivan 's A (Relatively Easy To Understand) Primer on Elliptic Curve Cryptography or Andrea Corbellini's series Elliptic Curve Cryptography: finite
fields and discrete logarithms are great starting points. Then if you further want to climb the elliptic learning curve including the related attacks you might also want to visit https://s
tl;dr Mozilla Firefox prior to version 72 suffers from a Small Subgroups Key Recovery Attack on DH in the WebCrypto API. The Firefox team fixed the issue by completely removing support for DH over finite fields (which is not in the WebCrypto standard). If you find this interesting, read further below. Premise In this blog post I assume you are already knowledgeable about Diffie-Hellman over
finite fields and related attacks. If not I recommend to read any cryptography book that covers public key cryptography. Here is a really cool simple explanation by David Wong : I found a cooler way
to explain Diffie-Hellman :D pic.twitter.com/DlPvGwZbto — David Wong (@cryptodavidw) January 4, 2020 If you want more details about Small Subgroups Key Recovery Attack on DH I covered some background
in one of my previous post ( OpenSSL Key Recovery Attack on DH small subgroups (CVE-2016-0701) ). There is also an academic pape r where we examine the issue with some more rigors. | {"url":"http://blog.intothesymmetry.com/2019/05/on-verifiable-delay-functions-how-to.html","timestamp":"2024-11-04T05:25:44Z","content_type":"application/xhtml+xml","content_length":"158742","record_id":"<urn:uuid:b9d0839e-8b3d-4869-a7af-031e52151d6b>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00734.warc.gz"} |
Multiplying decimals by whole numbers
To multiply a decimal by a whole number, use long multiplication as follows: 1. Set out the calculation. 2. Multiply each digit of the decimal number by each digit of the whole number from right to left. 3. Carrying where required, write out the answer to each one below the other in the answer space, using the correct place value columns. 4. Add back in the decimal, positioned directly under
the decimal column. | {"url":"https://evulpo.com/en/uk/dashboard/lesson/uk-m-ks2-04fractions-25multiply-decimals-by-whole-numbers","timestamp":"2024-11-02T05:21:21Z","content_type":"text/html","content_length":"1050149","record_id":"<urn:uuid:fe4db7fc-d850-4922-9f69-03ee3d77cec8>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00777.warc.gz"} |
What is: Bravais-Pearson Correlation
What is the Bravais-Pearson Correlation?
The Bravais-Pearson correlation, commonly referred to simply as the Pearson correlation coefficient, is a statistical measure that evaluates the strength and direction of the linear relationship
between two continuous variables. This coefficient is denoted by the letter ‘r’ and ranges from -1 to +1. A value of +1 indicates a perfect positive linear correlation, while -1 indicates a perfect
negative linear correlation. A value of 0 suggests no linear correlation between the variables. Understanding this correlation is crucial in fields such as data analysis, statistics, and data
science, as it helps in identifying relationships and making predictions based on data.
Mathematical Formula of the Bravais-Pearson Correlation
The formula for calculating the Bravais-Pearson correlation coefficient is given by:
r = (Σ(xi – x̄)(yi – ȳ)) / (√(Σ(xi – x̄)²) * √(Σ(yi – ȳ)²))
In this formula, ‘xi’ and ‘yi’ represent the individual sample points, while ‘x̄’ and ‘ȳ’ are the means of the x and y variables, respectively. The numerator calculates the covariance of the two
variables, while the denominator normalizes this value by the standard deviations of both variables. This normalization is what allows the correlation coefficient to remain bounded between -1 and +1,
providing a standardized measure of correlation.
Assumptions of the Bravais-Pearson Correlation
To accurately interpret the Bravais-Pearson correlation coefficient, certain assumptions must be met. Firstly, both variables should be continuous and normally distributed. Secondly, there should be
a linear relationship between the variables, which can be visually assessed using scatter plots. Additionally, the data should not contain significant outliers, as these can disproportionately affect
the correlation coefficient. Meeting these assumptions ensures that the Pearson correlation provides a reliable measure of the relationship between the variables.
Applications of the Bravais-Pearson Correlation
The Bravais-Pearson correlation is widely used in various fields, including social sciences, natural sciences, and business analytics. In social sciences, researchers often use it to assess
relationships between variables such as income and education level. In natural sciences, it can help in understanding the relationship between temperature and the rate of chemical reactions. In
business analytics, companies utilize the Pearson correlation to analyze customer behavior and sales trends, enabling data-driven decision-making.
Limitations of the Bravais-Pearson Correlation
Despite its widespread use, the Bravais-Pearson correlation has limitations. One significant limitation is its sensitivity to outliers, which can skew the results and lead to misleading
interpretations. Additionally, the Pearson correlation only measures linear relationships; it may not accurately reflect the relationship between variables that exhibit a non-linear pattern.
Therefore, it is essential to complement the Pearson correlation with other statistical methods to gain a comprehensive understanding of the data.
Interpreting the Bravais-Pearson Correlation Coefficient
Interpreting the Bravais-Pearson correlation coefficient requires understanding the context of the data being analyzed. A coefficient close to +1 indicates a strong positive correlation, suggesting
that as one variable increases, the other also tends to increase. Conversely, a coefficient close to -1 indicates a strong negative correlation, implying that as one variable increases, the other
tends to decrease. Values near 0 indicate little to no linear relationship. However, it is crucial to remember that correlation does not imply causation; further analysis is often needed to establish
causal relationships.
Calculating the Bravais-Pearson Correlation in Software
Many statistical software packages and programming languages, such as R, Python, and SPSS, provide built-in functions to calculate the Bravais-Pearson correlation coefficient. In Python, for example,
the ‘numpy’ library offers a function called ‘corrcoef’ that can be used to compute the Pearson correlation matrix. Similarly, R provides the ‘cor’ function for this purpose. Utilizing these tools
allows researchers and analysts to efficiently compute correlation coefficients and focus on interpreting the results rather than performing manual calculations.
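As a rough illustration (with made-up sample values), the coefficient can be computed either directly from the formula above or with numpy's built-in function, and the two results should agree:

import numpy as np

# Hypothetical paired observations of two continuous variables.
x = np.array([1.2, 2.4, 3.1, 4.8, 5.0, 6.3])
y = np.array([2.0, 4.1, 6.2, 8.1, 9.9, 12.5])

# Direct implementation of r = cov(x, y) / (sd(x) * sd(y)).
num = np.sum((x - x.mean()) * (y - y.mean()))
den = np.sqrt(np.sum((x - x.mean()) ** 2)) * np.sqrt(np.sum((y - y.mean()) ** 2))
r_manual = num / den

# Same coefficient via the built-in correlation matrix.
r_numpy = np.corrcoef(x, y)[0, 1]

print(r_manual, r_numpy)  # both close to +1 for this nearly linear data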
Visualizing the Bravais-Pearson Correlation
Visualizing the Bravais-Pearson correlation can enhance understanding and interpretation of the relationship between variables. Scatter plots are commonly used for this purpose, where one variable is
plotted on the x-axis and the other on the y-axis. The resulting plot can reveal the nature of the relationship, whether it is linear, non-linear, or if there are any outliers present. Additionally,
correlation matrices can be used to visualize multiple variables simultaneously, providing a comprehensive overview of relationships within a dataset.
Conclusion on the Bravais-Pearson Correlation
In summary, the Bravais-Pearson correlation is a fundamental statistical tool that quantifies the linear relationship between two continuous variables. Its applications span various fields, making it
an essential concept in statistics, data analysis, and data science. Understanding its assumptions, limitations, and interpretation is crucial for effectively utilizing this correlation coefficient
in research and analysis. | {"url":"https://statisticseasily.com/glossario/what-is-bravais-pearson-correlation-explained/","timestamp":"2024-11-06T11:21:45Z","content_type":"text/html","content_length":"139525","record_id":"<urn:uuid:81140c1f-294a-431a-aa09-c290ea0e72ff>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00618.warc.gz"} |
Extracting numerator of a Ricci tensor component
Extracting numerator of a Ricci tensor component
I'm solving the Einstein equation in vacuum. I already computed the Ricci tensor. For example, the first Ricci component is saved in the following variable eq1:
Since the other side of the equation is zero, only the numerator of the component needs to vanish. How can I extract the numerator of the expression? I have tried:
But it does not work. Thanks | {"url":"https://ask.sagemath.org/question/55479/extracting-numerator-of-a-ricci-tensor-component/","timestamp":"2024-11-08T21:47:48Z","content_type":"application/xhtml+xml","content_length":"48450","record_id":"<urn:uuid:eab7fd27-52fb-48ed-94af-e962e049518e>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00859.warc.gz"} |
Set Theory
Set theory is the branch of mathematics that studies sets, which are collections of objects. Although any type of object can be collected into a set, set theory is applied most often to objects that
are relevant to mathematics. The language of set theory can be used in the definitions of nearly all mathematical objects.
The modern study of set theory was initiated by Georg Cantor and Richard Dedekind in the 1870s. After the discovery of paradoxes in naive set theory, numerous axiom systems were proposed in the early
twentieth century, of which the Zermelo–Fraenkel axioms, with the axiom of choice, are the best-known.
Set theory is commonly employed as a foundational system for mathematics, particularly in the form of Zermelo–Fraenkel set theory with the axiom of choice. Beyond its foundational role, set theory
is a branch of mathematics in its own right, with an active research community. Contemporary research into set theory includes a diverse collection of topics, ranging from the structure of the real
number line to the study of the consistency of large cardinals.
Read more about Set Theory: History, Basic Concepts, Some Ontology, Axiomatic Set Theory, Applications, Objections To Set Theory As A Foundation For Mathematics
Related Words | {"url":"https://www.primidi.com/set_theory","timestamp":"2024-11-06T11:34:57Z","content_type":"text/html","content_length":"7323","record_id":"<urn:uuid:9cf0ec73-466b-4e5f-97cf-2d795071fdb9>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00329.warc.gz"} |
Elementary Math Activities With Balance Scales | Synonym
Elementary Math Activities With Balance Scales
Balance scales are a tool used to concretely teach children about weights and equivalency measures. Exploring and seeing the concept in action will impress the principles on their minds better than
more passive activities or worksheets.
1 Weight Versus Size
The very first concept to teach with a balance scale is the simple idea that bigger often means heavier. A hands-on method for doing this is to use lumps of clay or play-dough. Have the children put
various sized pieces of the material on either side of the scale and see which one pulls down its side of the scale, showing it is heavier. As a forerunner to a later activity, have them attempt to
put two pieces on the scale that are the exact same size, balancing the scale. This lays the groundwork for the idea of estimation.
2 Variations in Mass
The next concept for which you can use a balance scale is that of varying weights despite relative size. The easiest way to do this is to gather 5 to 10 small objects that are known to be
particularly heavy or light for their size. Some examples are a large handful of cotton balls, a lead ball, a pencil, an eraser, several feathers, a couple of index cards, a block magnet or even a
snack baggie with a zipper closure with water in it. (Double bag the water as a precaution.) Have the students put an item on each side of the scale to see which is heavier. Students can put the
items on a chart showing comparisons. Younger students may need a worksheet prepared with pictures to compare their findings.
3 Estimating
The next idea to explore is estimation. Use dried beans or other "counters". Start by putting an unknown number of beans on one side of the scale. (Use enough that students cannot easily count them
just by looking.) Have students take turns trying to guess (estimate) the number of beans. Check their guess by counting out that many beans into the other side to see if the scale balances.
4 Measuring
Introduce the concept of weight measurement by again using the beans and the items from the first activity. Put an item on one side of the scale and have the students count how many beans it takes to
balance the scales. They can write their answers on a simple chart. Repeat this exercise with several different items.
5 Addition
Once the students are familiar with the scales and concepts of varying weights, begin using the scales to teach mathematical functions like addition and subtraction. Show them how to put a number of
beans on one side of the balance and a lesser number on the other. See how many beans have to be added to the smaller side to balance (equal) the first side. In the same way, they can subtract beans
from the first amount to get it to balance the lesser amount. Have them write each transaction as an equation. | {"url":"https://classroom.synonym.com/elementary-math-activities-balance-scales-8046480.html","timestamp":"2024-11-05T20:37:56Z","content_type":"text/html","content_length":"242428","record_id":"<urn:uuid:291a6c43-3813-4cf3-b374-d894857ea959>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00162.warc.gz"} |
An ARL-unbiased modified chart for monitoring autoregressive counts with geometric marginal distributions
Morais, M. C. ; Wittenberg, P.; Knoth, S.
Sequential Analysis, 42 (2023), 323-347
Geometrically distributed counts arise in the industry. Ideally, they should be monitored using a control chart whose average run length (ARL) function achieves a maximum when the process is
in-control, i.e., the chart is ARL-unbiased. Moreover, its in-control ARL should coincide with a reasonably large and pre-specified value. Since dependence among successive geometric counts is
occasionally a more sensible assumption than independence, we assess the impact of using an ARL-unbiased chart specifically designed for monitoring independent geometric counts when, in fact, these
counts are autocorrelated. We derive an ARL-unbiased modified chart for monitoring geometric first-order integer-valued autoregressive or GINAR(1) counts. We provide compelling illustrations of this
chart and discuss its use to monitor other autoregressive counts with a geometric marginal distribution. | {"url":"https://cemat.tecnico.ulisboa.pt/document.php?project_id=5&member_id=90&doc_id=3594","timestamp":"2024-11-05T10:21:11Z","content_type":"text/html","content_length":"8996","record_id":"<urn:uuid:35fa3de3-64f9-48b5-8102-1d0067bbb2ab>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00808.warc.gz"} |
Annual employee turnover rate calculation formula
20 Dec 2019 For example, a high employee turnover rate in a single large department basis, or you could choose to use a quarterly or annual calculation. Calculate turnover rate with the following
formula: Turnover = # of Exited Employees / Average # of Employees × 100. Calculating # of With these two numbers, you can now calculate your overall annual turnover rate. Monitor this number
2 Nov 2019 Calculating Annual Turnover. To calculate the portfolio turnover ratio for a given fund, first determine the total amount of assets purchased or sold 29 Jan 2020 Features data on employee
turnover rates by industry, reasons for voluntary turnover & more. types of employee turnover is an essential part of your annual strategy. areas to determine separation trends or patterns at a
regional level. Fifty-four percent of employees voluntarily leaving organizations 12 Sep 2018 In this article we look at how to measure and calculate attrition rates in the It is also known as
'employee turnover', or 'employee churn', Don't calculate attrition on a yearly basis – Only calculating attrition rates once a year 7 Aug 2015 The formula used to calculate turnover is the number
of terminated staff divided by the average number of staff for a given period. The employee The cost of employee turnover in the United. States is more than $11 billion annually, ac- cording to used
to calculate turnover depends on the company's
- Divide the average employment number for the year into the number of the employees who left and you have your annual turnover. If you want to know a monthly
30 Jul 2019 Let's call the resulting figure Y. So to calculate your annual turnover rate: Number of employees who left. x 100. Y. For example: Let's say you start 28 Aug 2019 Calculate and track
employee turnover numbers. The first step to improving your turnover rate is knowing your turnover rate. Calculating it 27 Oct 2017 This percentage is usually calculated for yearly periods, but
quarterly and bi- annual turnover rates can also be calculated. Types of employee 24 Feb 2017 What we're going to do, is calculate your annual turnover rate for this Calculating your average
employee turnover rate is pretty easy.
28 Aug 2019 Calculate and track employee turnover numbers. The first step to improving your turnover rate is knowing your turnover rate. Calculating it
This leaves you with 190 employees. Here's how to work out your average annual staff: Work out your yearly turnover. You have an average of 195 staff working for It proposes that the turnover rate
equals the # Terminations divided by the average # of employees for each of the 12 months in the designated annual period. 28 Feb 2016 Typically, though, companies both lose and gain employees. To
account for this, turnover rate is normally calculated by dividing the total
Companies are generally of the opinion that monthly or annual turnover rates bring more meaning than just using data from a single month to calculate such an
the formula for calculating employee turnover rate. Employee turnover is usually expressed as a turnover rate. In other words, how to calculate turnover rate is basically just percentage math. How to
Calculate Annual Turnover Rate. Calculate your annual turnover rate by dividing the number of employees who left your company this year by the total number of employees you had at the beginning of
the year. Then show the number as a percentage. Here is an example of how to calculate total turnover rate: Number of employees on January 1 st – 250; Number of employees who left the company from
January 1 st to December 31 st – 40. (40 departures divided by 250 employees) x 100 = 16% How to calculate turnover rate? To calculate turnover rate, we divide the number of terminates during a
specific period by the number of employees at the beginning of that period. If we start the year with 200 employees, and during the year, 10 people terminate their contract, turnover is 10/200 =
0.05, or 5%. Employee turnover rate is calculated by dividing the number of employees who left the company by the average number of employees in a certain period in time. This number is then
multiplied by 100 to get a percentage. How to Calculate Annual Turnover Rate: Method 3 Calculate the monthly turnover rate for each of the 12 months of the year. Add them up.
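As a small sketch of the arithmetic described above (function and variable names are illustrative):

def turnover_rate(employees_left, headcount):
    """Turnover rate as a percentage of a reference headcount."""
    return employees_left / headcount * 100

# 40 departures against 250 employees at the start of the year -> 16%.
print(turnover_rate(40, 250))   # 16.0

# 10 departures against 200 employees at the start of the year -> 5%.
print(turnover_rate(10, 200))   # 5.0

# Some definitions use the average headcount over the period instead:
average_headcount = (200 + 190) / 2
print(turnover_rate(10, average_headcount))  # about 5.1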
Turnover rate definition: The term ‘employee turnover rate’ refers to the percentage of employees who leave an organization during a certain period of time. People usually include voluntary
resignations, dismissals, non certifications and retirements in their turnover calculations.
ANNUAL TURNOVER: First work out the average total people employed for the year. For instance, if you started the year with 26 employees and finished it 12 Apr 2019 Calculate employee turnover rate
to understand how much it's costing (L/ Average # of Employees)x100= Monthly/Annual Turnover Rate (%). 20 Dec 2017 Here's a simple method for calculating your employee turnover rate. some companies
also calculate turnover on a quarterly or annual basis. Employee turnover is usually calculated on a yearly basis. It doesn't matter Here is the formula to calculate the annual turnover rate: For
example: If you have 14 Sep 2019 For example, as of 2010 the 10-year average annual employee turnover in the retail industry was 34.7%, while in education it was only 13.2%. To Quickly calculate
the cost of employee turnover in your organization and learn how to reduce it significantly.
Calculate turnover rate with the following formula: Turnover = # of Exited Employees / Average # of Employees × 100. Calculating # of With these two numbers, you can now calculate your overall annual
turnover rate. Monitor this number ANNUAL TURNOVER: First work out the average total people employed for the year. For instance, if you started the year with 26 employees and finished it 12 Apr
2019 Calculate employee turnover rate to understand how much it's costing (L/ Average # of Employees)x100= Monthly/Annual Turnover Rate (%). | {"url":"https://bestbtcxirmxl.netlify.app/eastes41877bito/annual-employee-turnover-rate-calculation-formula-jute","timestamp":"2024-11-15T03:41:09Z","content_type":"text/html","content_length":"34076","record_id":"<urn:uuid:0478c874-6fe7-4919-ab99-599c86569fff>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00006.warc.gz"} |
If a group has order 2k where k is odd, then it has a subgroup of index 2 - Solutions to Linear Algebra Done Right
If a group has order 2k where k is odd, then it has a subgroup of index 2
Solution to Abstract Algebra by Dummit & Foote 3rd edition Chapter 4.2 Exercise 4.2.13
Solution: $G$ contains an element $x$ of order 2 by Cauchy’s Theorem. Let $\pi : G \rightarrow S_G$ be the left regular representation of $G$. By Exercise 4.2.11, $\pi(x)$ is a product of $k$
disjoint 2-cycles. Since $k$ is odd, $\pi(x)$ is an odd permutation. By Exercise 4.2.12, $\pi[G]$ has a subgroup of index 2. | {"url":"https://linearalgebras.com/solution-abstract-algebra-exercise-4-2-13.html","timestamp":"2024-11-11T17:13:09Z","content_type":"text/html","content_length":"54047","record_id":"<urn:uuid:50aa9bc9-803e-4ef3-84e4-4c12b4ad9540>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00366.warc.gz"} |
Cross-validation analysis of the Tecator data set
Easy train/test split
The iprior() function includes an argument to conveniently instruct which data samples should be used for training, and any remaining data used for testing. The out-of-sample test error rates would
then be reported together. The examples in the vignette can then be conducted as follows:
data(tecator, package = "caret")
fat <- endpoints[, 2]
absorp <- -t(diff(t(absorp))) # take first differences
mod1 <- iprior(fat, absorp, train.samp = 1:172, method = "mixed")
## Running 5 initial EM iterations
## ================================================================================
## Now switching to direct optimisation
## iter 10 value 223.048879
## final value 222.642108
## converged
The prediction error (training and test) can then be obtained easily:
## Training RMSE Test RMSE
## 2.890732 2.890353
LOOCV experiment
With the above conveniences, it is easy to wrap this in loop to perform \(k\)-fold cross-validation; this is done in the iprior_cv() function. We now analyse the predictive performance of I-prior
models using a LOOCV scheme. For all n=215 samples, one observation pair is left out and the model trained; the prediction error is obtained for the observation that was left out. This is repeated
for all n=215 samples, and the average of the prediction errors calculated.
For the linear RKHS, the code to peform the LOOCV in the iprior package is as follows:
## Results from Leave-one-out Cross Validation
## Training RMSE: 2.869906
## Test RMSE : 2.331397
Notice the argument folds = Inf—since the iprior_cv() function basically performs a \(k\)-fold cross validation experiment, setting folds to be equal to sample size or higher tells the function to
perform LOOCV. Also note that the EM algorithm was used to fit the model, and the stopping criterion relaxed to 1e-2—this offered faster convergence without affecting predictive abilities. The
resulting fit gives training and test mean squared error (MSE) for the cross-validation experiment.
The rest of the code for the remaining models is given below. As this takes quite a long time to run, it has been run locally and the results saved into the data tecator.cv within the iprior package.
mod2.cv <- iprior_cv(fat, absorp, method = "em", folds = Inf, kernel = "poly2",
est.offset = TRUE, control = list(stop.crit = 1e-2))
mod3.cv <- iprior_cv(fat, absorp, method = "em", folds = Inf, kernel = "poly3",
est.offset = TRUE, control = list(stop.crit = 1e-2))
mod4.cv <- iprior_cv(fat, absorp, method = "em", folds = Inf, kernel = "fbm",
control = list(stop.crit = 1e-2))
mod5.cv <- iprior_cv(fat, absorp, method = "em", folds = Inf, kernel = "fbm",
est.hurst = TRUE, control = list(stop.crit = 1e-2))
mod6.cv <- iprior_cv(fat, absorp, method = "em", folds = Inf, kernel = "se",
est.lengthscale = TRUE, control = list(stop.crit = 1e-2))
# Function to tabulate the results
tecator_tab_cv <- function() {
  tab <- t(sapply(list(mod1.cv, mod2.cv, mod3.cv, mod4.cv, mod5.cv, mod6.cv),
                  function(mod) {
                    res <- as.numeric(apply(sqrt(mod$mse[, -1]), 2, mean))
                    c("Training MSE" = res[1], "Test MSE" = res[2])
                  }))
  rownames(tab) <- c("Linear", "Quadratic", "Cubic", "fBm-0.5", "fBm-MLE",
                     "SE-MLE")
  tab
}
The results are tabulated below.
Results for the LOOCV experiment for various I-prior models:

Model       Training RMSE   Test RMSE
Linear           2.87          2.33
Quadratic        2.98          2.66
Cubic            2.97          2.64
fBm-0.5          0.09          0.50
fBm-MLE          0.01          0.46
SE-MLE           0.36          2.07 | {"url":"https://cran.case.edu/web/packages/iprior/vignettes/tecator.html","timestamp":"2024-11-12T10:33:00Z","content_type":"text/html","content_length":"19792","record_id":"<urn:uuid:99926cd2-3a71-423f-aa27-e083b1dc8b3e>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00771.warc.gz"}
Top 5 capital management strategies when trading options in IQ Option
Although IQtradingpro has shared many articles about how to trade in IQ Option, there’s no article specifically guiding how to manage capital in trading options. This is considered the most important
factor if you want to make sustainable money in IQ Option.
Therefore, we will share top 5 capital management strategies when trading options on IQ Option in this article.
How to manage capital when trading options
Strategy 1: Classic (Use the same amount for each option)
This is a strategy that you divide an equal amount of money on every trade.
For example, with a capital of $100 from the start, your specific goal is 10 options/day => each option will be $10.
Classic capital management strategy
• No need to change the amount every time you trade.
• Your account is hard to burn (losing all money). You can only lose at a certain level.
• If you want to make money with this method, you need to have an IQ Option trading strategy with a winning rate guarantee of 60% or more.
=> Classic capital management is for traders who prefer to be safe and have a high winning rate. This is the guideline for most successful traders in this market.
Strategy 2: Martingale
This is a strategy of increasing the stake on a new option after the previous one has lost, so that the next option compensates for the lost money plus a profit. After a sequence of consecutive losses, a single winning option compensates for the losses of all the previous options, because the stakes increase exponentially.
Martingale also has another name: increasing the stake when losing. If the first option loses, the second option will double or triple the stake. Keep increasing until you win, then stop and return to the starting amount.
Martingale capital management strategy
• 1 win option can regain all previous losing options plus profits.
• Rapid profit growth.
• The chance of burning your account is very high if you lose a lot of consecutive options.
• The next new options make it harder to control your emotions.
=> Martingale capital management is for adventurous investors. This is the IQ Option trading method with a winning rate of over 80%.
* Note: This strategy should stop when you have 3 losing cycles. If there is a winning option, it will end the cycle to start again. If you lose 3 consecutive options, then stop trading to avoid
being trampled by the market.
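Purely as an illustration of the arithmetic (not trading advice), the stake progression and the capital needed to survive a losing streak can be sketched as follows; the base stake, factor and cut-off are just example values:

def martingale_stakes(base=1.0, factor=2.0, max_losses=3):
    """Stakes used while losing: base, base*factor, base*factor**2, ..."""
    return [base * factor ** i for i in range(max_losses)]

stakes = martingale_stakes(base=1.0, factor=2.0, max_losses=3)
print(stakes)        # [1.0, 2.0, 4.0]
print(sum(stakes))   # 7.0 -> capital at risk before the strategy says to stop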
Strategy 3: Snowball (Compound interest)
In contrast to Martingale, every time you win an option, you use all the capital and interest of the first option on the next one.
For example, you deposit $10 into IQ Option. In the first trade, you WIN => Capital + Interest = 18 $. So in option 2, you will trade all $18. Just like that, the more you win the more you increase
your profit.
Snowball Capital management strategy
• Accounts will grow very fast, even extremely fast
• Low starting capital. If you win 3 consecutive options you will get several times the amount of your account. But if you lose, you only lose the original capital. As in the example above, win – you
get more than 5 times the starting capital. Lose – you lose $10. Suppose that you lose 5 times in a row. But with one win only, you still earn a lot of money.
• You need to create an IQ Option trading strategy with a winning rate up to 90%. And of course, this method requires a lot of patience.
Strategy 4: Fibonacci
This is a way of managing capital in favor of defending and protecting your account. Fibonacci is an increment capital management.
For example, with Fibonacci, when you lose, increase $1. If you win, decrease $1. Details are as follows: You bet $1 and lose -> Increase to $2. If you lose again -> increase to $3. Do the same to $5
and $8 if losing.
When you bet $8 and win, reduce money to $5. Winning again => decrease to $3. Then $2 and $1.
Fibonacci capital management strategy
You may be a little confused, but take it simple like this. The amount for option 3 will be equal to the total capital of options 1 and 2. Keep doing that if you lose.
But if you win, reduce the amount on the new option equal to the previous option. The more you win, the less amount you use. Therefore, your account is always in a safe state.
• The longer the cycle of losses, the shorter the cycle of winnings.
• Good emotion control because it completely eliminates the excitement when winning. You have to reduce the amount instead of increase it.
• Stable growth.
• You have to change the amount every time you enter a trade. It may be very confusing.
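A small sketch of the staking ladder described above (one rung up after a loss, one rung down after a win); the ladder values and the run of results are simply the example from the text:

FIB_LADDER = [1, 2, 3, 5, 8]  # each stake is the sum of the two before it

def next_rung(rung, won):
    """Move one rung down after a win, one rung up after a loss."""
    return max(rung - 1, 0) if won else min(rung + 1, len(FIB_LADDER) - 1)

rung = 0
for won in [False, False, False, False, True, True]:
    print("stake:", FIB_LADDER[rung], "win" if won else "loss")
    rung = next_rung(rung, won)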
Strategy 5: Using probability
This is the way to manage the capital of full-time traders. They enter a trade according to the winning probability of each option. For example, the higher the winning probability, the more amount
you invest. The lower the winning probability, the less amount you use.
As a full-time trader, they have plenty of time to observe the market with different currency pairs. And they see more opportunities to open options than ordinary traders. But there aren’t always
good options with a high winning rate. Therefore, they will trade with a small amount to wait for a big opportunity.
Probability capital management strategy
• Flexible capital management.
• Winning small-amount options is easy to cause subjective psychology. It may affect trading emotions.
• You need to experience many IQ Option trading methods to make sure which method has a high winning probability and which method has a high probability of losing.
=> This is the capital management strategy for professional traders who has many years of trading in the market.
Above are some of the ways to manage capital in options trading on IQ Option. Depending on your development, you can choose an appropriate capital management strategy.
For beginners, you only need to use $1 each option to learn and get used to the platform.
For more experienced traders, you can manage flexibly, but you have to know how to control your own emotions. | {"url":"https://iqtradingpro.com/top-5-capital-management-strategies-when-trading-options-in-iq-option/","timestamp":"2024-11-09T03:02:21Z","content_type":"text/html","content_length":"205397","record_id":"<urn:uuid:f38b13f3-2725-4a38-8456-e90fb99e79b4>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00658.warc.gz"}
Partial sums addition to 1,000 with regrouping
Students will be able to add partial sums to 1,000 with regrouping of the tens and hundreds number.
You have the students add to 1,000 in groups. To do this they add either 50 or 100. The first student starts with a hundreds number or a fifties number and states whether 50 or 100 must be added to
it. The next states the answer and then says how much should be added to that number. They keep calculating until they reach 1,000. Then you show two answers on the interactive whiteboard. The
students must determine which of the two numbers form the answer together. Drag the numbers to the right place.
Explain that to add using partial sums, you put the numbers one under another. Then you can add them together by solving from left to right. State that you put the largest number of the sum on the
top and the other number underneath it. For partial sums, you work from left to right. So you start with adding the hundreds numbers together. Then you calculate how much the tens numbers are
together and then you add the ones values together. In adding the ones numbers up, it can happen that you go into the tens values. In that case you write the number in the tens number space and the
ones value space. You put all these numbers in the diagram and you state what the intermediate sums were. If you add the numbers from the diagram together, then you find the answer. You put this
answer at the bottom. Next there are two sums in diagrams, in which different rows are in yellow. Ask the students if they know which intermediate sum belongs in each colored row. This is how you can
check whether students know which steps they have to take. Then you have the students solve a few problems with partial sums. Next you explain that you can also add above the hundreds numbers. In
that case you write the number in the spaces for the hundreds numbers, the tens numbers and the ones values. Then practice partial sums in two problems for which for one, part of the diagram is
already filled in and for the other, the diagram still must be filled in. Check whether the students can add using partial sums using the following questions/exercise:
- Why is it useful to be able to add using partial sums?
- What do the letters H T O mean?
- In solving partial sums, where do you always start?
- Calculate these using partial sums: 436 + 66 and 287 + 162 + 131
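For reference, the two example sums work out as follows when split into hundreds, tens and ones; this is a small sketch of the H T O expansion, not part of the lesson material itself:

def partial_sums(*numbers):
    """Add numbers by expanding them into hundreds, tens and ones (H T O)."""
    hundreds = sum(n // 100 * 100 for n in numbers)
    tens = sum(n % 100 // 10 * 10 for n in numbers)
    ones = sum(n % 10 for n in numbers)
    return hundreds, tens, ones, hundreds + tens + ones

print(partial_sums(436, 66))        # (400, 90, 12, 502)
print(partial_sums(287, 162, 131))  # (400, 170, 10, 580)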
The students are given ten questions in which they first practice partial sums by stating what the intermediate steps are. Then they solve a few problems using partial sums. These sums consist of two
or three numbers that the students must add together.
You discuss again with the students that it is important to be able to solve partial sums to 1,000, because that is how you can solve problems in steps. Check whether the students know that for
partial sums they must solve from left to right and when they go over a tens number or a hundreds number, they also must add this number to the diagram. Have the students practice partial sums
addition in pairs. They both write a number and must figure out how they solve this together using partial sums. First they practice this with two numbers. Also have the students practice using a sum
with three numbers.
For students that have difficulty with partial sums addition, you can remind them of the meaning of H T O and how you put the numbers into an HTO-diagram. Write out the intermediate sums. You can
label the hundreds numbers, tens numbers and ones values, so that it is more obvious what you are adding together.
Gynzy is an online teaching platform for interactive whiteboards and displays in schools.
With a focus on elementary education, Gynzy’s Whiteboard, digital tools, and activities make it easy for teachers to save time building lessons, increase student engagement, and make classroom
management more efficient. | {"url":"https://www.gynzy.com/en-us/library/items/partial-sums-addition-to-1000-with-regrouping","timestamp":"2024-11-06T15:51:40Z","content_type":"text/html","content_length":"553158","record_id":"<urn:uuid:4752bc60-5529-4b5b-9ba8-1755f0f17cf8>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00065.warc.gz"} |
Electron sausages? - Curvature of the Mind
I just paused writing an article where I tried to give a quick overview of quantum mechanics.
Ha Ha
It ended up being a few paragraphs, a few images, and a crap load of links to Wikipedia. I’m not going to write anything like that anytime soon, and definitely not in a single blog post. I’m going to
stick with summaries of the projects I develop and slowly work my way up to wordier subjects. Writing is the hardest part for me, so if you don’t have a bit of background in quantum mechanics, this
is going to go over your head a little bit.
Like I’ve said many times in the past, I’ve looked at and derived these equations many times. Writing these apps has helped me understand them better. Here is what I’ve learned from the orbital
The wavefunctions with m=0 have a phase which is constant in space. This is pretty common in one dimensional bound states like the infinite square well or the harmonic oscillator. Because of that, I
didn’t realize how weird that is. These states correspond to electrons that are frozen in space. It’s like the quantum uncertainty of the electron is completely balanced out by the compressive
electrostatic force almost like little electron sausages. There is no classical correspondence to these states. There are no stationary planetary orbits.
The states with positive or negative m values are closer to classical circular orbits. The complex exponential factor adds a constant velocity around the axis. For a given energy slower electrons lie
closer to the axis and are more spread out along it. As the rotation increases, the electron moves further out and becomes more concentrated on the plane orthogonal to the axis. This creates a series
of stacked doughnuts.
As the energy increases for the same angular momentum, inner currents are added with alternating phases. There are a number of different ways to see this. In the full view these are added as nested
doughnuts. In the slice view with the intensity cranked up, these show up as inner circles, and the nested doughnuts show up as pie shaped wedges.
This only shows up in rotational mode. The standing mode has nodes around the axis of rotation, which makes the situation visually complicated. Unfortunately many images of these orbitals show the
standing waves. Using the complex exponential factor for the rotation instead of the individual elements has been a boon.
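A quick numerical check of this point (an illustrative numpy snippet, with m and the sample angles chosen arbitrarily): the rotating factor e^{imφ} has constant modulus, so its probability density has no angular nodes, while a real standing-wave combination does.

import numpy as np

m = 2
phi = np.linspace(0.0, 2.0 * np.pi, 9)

rotating = np.exp(1j * m * phi)   # "rotational" mode: e^{i m phi}
standing = np.cos(m * phi)        # real combination: standing mode

print(np.abs(rotating) ** 2)      # all 1.0 -> axially symmetric density
print(standing ** 2)              # oscillates between 0 and 1 -> angular nodes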
I’d never really considered what went into that factor. The complex exponential represents a completely spread out constant velocity motion. I’d thought about it a little as that’s the basic
description of a quantum plane wave solution, but I think that’s a post for another day.
Related Images: | {"url":"https://curvatureofthemind.com/2011/05/18/struggling-with-a-summary-of-atomic-orbitals/","timestamp":"2024-11-11T15:24:40Z","content_type":"text/html","content_length":"35348","record_id":"<urn:uuid:997d59c9-43ee-4e02-9c02-943c733f4463>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00746.warc.gz"} |
Transcendental Brauer—Manin obstructions on singular K3 surfaces
Algebraic Geometry and Number Theory Seminar
Date: Thursday, November 14, 2024 13:00 - 15:00
Speaker: Rachel Newton (King`s College London)
Location: Office Bldg West / Ground floor / Heinzel Seminar Room (I21.EG.101)
Series: Mathematics and CS Seminar
Host: Tim Browning
Let E and E′ be elliptic curves over Q with complex multiplication by the ring of integers of an imaginary quadratic field K and let Y = Kum(E×E′) be the minimal desingularisation of the quotient of
E×E′ by the action of −1. We study the Brauer groups of such surfaces Y and use them to furnish new examples of transcendental Brauer–Manin obstructions to weak approximation. This is joint work with
Mohamed Alaa Tawfik. | {"url":"https://talks-calendar.ista.ac.at/events/5166","timestamp":"2024-11-06T21:33:07Z","content_type":"text/html","content_length":"6761","record_id":"<urn:uuid:675c96d0-a36b-4c51-84a2-89e4c9018ae0>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00557.warc.gz"} |
New distributional representations of biosensors data in different statistical modeling tasks. Please use reference [1] to cite this package.
1. Matabuena, M., Petersen, A., Vidal, J. C., & Gude, F. (2021). Glucodensities: A new representation of glucose profiles using distributional data analysis. Statistical methods in medical research,
2. Matabuena, M., & Petersen, A. (2021). Distributional data analysis with accelerometer data in a NHANES database with nonparametric survey regression models. arXiv preprint arXiv:2104.01165.
The biosensor.usc aims to provide a unified and user-friendly framework for using new distributional representations of biosensors data in different statistical modeling tasks: regression models,
hypothesis testing, cluster analysis, visualization, and descriptive analysis. Distributional representations are a functional extension of compositional time-range metrics and we have used them
successfully so far in modeling glucose profiles and accelerometer data. However, these functional representations can be used to represent any biosensor data such as ECG or medical imaging such as
Installation Instructions
Required software and packages
1. R (https://www.r-project.org/)
2. R packages: Rcpp, RcppArmadillo,
energy, fda.usc, osqp, truncnorm, parallelDist, graphics, stats, methods, utils (required in R >= 2.14).
Please install the required R packages before you install the biosensor.usc package. After the installation of the dependencies, please install the biosensor.usc as following steps.
Install biosensor.usc from source code
Install from source code using devtools library:
Usage Instructions
biosensor.usc is an R package which provides:
1. Loading biosensors data from a csv files.
2. Generating a quantile regression model V + V2 * v + tau * V3 * Q0 where Q0 is a truncated random variable, v = 2 * X, tau = 2 * X, V ~ Unif(-1, 1), V2 ~ Unif(-1, -1), V3 ~ Unif(0.8, 1.2), and E(V
|X) = tau * Q0.
3. Performing a Wasserstein regression using a quantile density function.
4. Performing a prediction from a Wasserstein regression.
5. Performing a Ridge regression using a quantile density function.
6. Performing a functional non-parametric Nadaraya-Watson regression with 2-Wasserstein distance, using as predictor the distributional representation and as response a scalar outcome.
7. Performing a prediction from a functional non-parametric Nadaraya-Watson regression with 2-Wasserstein distance.
8. Performing a hypothesis testing between two random samples of distributional representations to detect differences in scale and localization (ANOVA test) or distributional differences (Energy
9. Performing a energy clustering with Wasserstein distance using quantile distributional representations as covariates.
10. Obtaining the clusters to which a set of object belong using a previously trained energy clustering with Wasserstein distance using quantile distributional representations as covariates.
The following codes show how to call above steps in R.
We also attach a data set example through csv files in the package, extracted from the paper: Hall, H., Perelman, D., Breschi, A., Limcaoco, P., Kellogg, R., McLaughlin, T., Snyder, M., “Glucotypes
reveal new patterns of glucose dysregulation”, PLoS biology 16(7), 2018.
This data set has two different types of files. The first one contains the functional data, which csv files must have long format with, at least, the following three columns: id, time, and value,
where the id identifies the individual, the time indicates the moment in which the data was captured, and the value is a monitor measure:
file1 = system.file("extdata", "data_1.csv", package = "biosensors.usc")
The second type contains the clinical variables. This csv file must contain a row per individual and must have a column id identifying this individual.
file2 = system.file("extdata", "variables_1.csv", package = "biosensors.usc")
From these files, biosensor data can be loaded as follow:
data1 = load_data(file1, file2)
We also provide a way to generate biosensor data from the aforementioned quantile regression model:
data1 = generate_data(n=100, Qp=100, Xp=5)
Call the Wasserstein regression :
wass = wasserstein_regression(data1, "BMI")
Use the previously computed Wasserstein regression to obtain the regression prediction given a kxp matrix of input values, where k is the number of points we do the prediction and p is the dimension
of the input variables:
xpred = as.matrix(25)
pred = wasserstein_prediction(wass, xpred)
Alternatively we can also compute the Wasserstein regression using the following function:
wass = regmod_regression(data1, "BMI")
Call the Ridge regression:
regm = ridge_regression(data1, "BMI")
Call the Nadaraya-Watson regression with 2-Wasserstein distance:
nada = nadayara_regression(data1, "BMI")
Use the previously computed Nadaraya-Watson regression to obtain the regression prediction given the quantile curves:
npre = nadayara_prediction(nada, t(colMeans(data1$quantiles$data)))
Call the Hypothesis testing between two random samples of distributional representations:
file3 = system.file("extdata", "data_2.csv", package = "biosensors.usc")
file4 = system.file("extdata", "variables_2.csv", package = "biosensors.usc")
data2 = load_data(file3, file4)
htest = hypothesis_testing(data1, data2)
Call the energy clustering with Wasserstein distance using quantile distributional representations as covariates:
clus = clustering(data1, clusters=3)
Use the previously computed clustering to obtain the clusters of the given objects:
assignments = clustering_prediction(clus, data1$quantiles$data) | {"url":"http://cran.stat.auckland.ac.nz/web/packages/biosensors.usc/readme/README.html","timestamp":"2024-11-10T17:17:24Z","content_type":"application/xhtml+xml","content_length":"8811","record_id":"<urn:uuid:28a17b8f-f1a3-4835-b813-eb6e83f6e2f9>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00067.warc.gz"} |
Car Crash Webquest
Physics Jacob, Liz, Alana
Period 7/8 12/20/11
Our team will provide the mayor with a detailed accident report that includes mass, velocity, and momentum of both vehicles both prior to and after the collision. Further, the mayor has requested we
find out what happened to assist with the insurance company’s investigation.
First we found velocities of both cars with kinematic equations. We set up a table with our given values which were acceleration, distance, and final velocity. Using the kinematic equation Vf^2=Vi^
2+2ad we found the initial velocity for the SUV to be 2.83 m/s and the initial velocity for the Subaru was 12m/s.
To find the momentum we set up a table that shows the objects momentum before and after the collision. The Cadillac’s momentum before the collision was 3000kg multiplied by the unknown velocity. Its
momentum after was 8490 N*s. since the Subaru was stopped it had a momentum before the collision of zero because the initial velocity was zero. Its momentum after was 24,000 N*s. then we added the
total momenta before the collision to get 3000kg times the unknown velocity. The total momentum after was 32,490 N*s. due to the conservation of momentum law the momenta have to be equal so we
divided 32490 N*s by 3000kg to find the initial speed of the SUV which was 10.83 m/s. the speed of the wagon before the collision was still 0m/s
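A short sketch of the same arithmetic (the masses used here are the ones implied by the momenta and velocities quoted above):

m_suv = 3000.0     # kg, implied by 8490 N*s at 2.83 m/s
m_subaru = 2000.0  # kg, implied by 24000 N*s at 12 m/s

p_after = m_suv * 2.83 + m_subaru * 12.0   # total momentum just after impact
v_suv_before = p_after / m_suv             # Subaru was at rest before the crash
print(p_after)                             # 32490.0 N*s
print(round(v_suv_before, 2))              # 10.83 m/s
print(round(v_suv_before * 3.6, 1))        # about 39.0 km/h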
We had to find out whether the SUV was speeding, so we converted 10.83 m/s to km/hr and found that the SUV was traveling at about 39 km/hr. This means the SUV was speeding, and therefore it was the SUV's fault that there
was an accident. | {"url":"https://aplusphysics.com/community/index.php?/topic/512-car-crash-webquest/","timestamp":"2024-11-10T15:03:11Z","content_type":"text/html","content_length":"97627","record_id":"<urn:uuid:d6547410-c4bc-440e-b703-21af3fbd9a7e>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00433.warc.gz"} |
The Pi polynomial and the Pi index of a family hydrocarbons molecules
Original Articles: 2015 Vol: 7 Issue: 11
The Pi polynomial and the Pi index of a family hydrocarbons molecules
Let G be a simple molecular graph without directed and multiple edges and without loops, the vertex and edge-sets of which are represented by V(G) and E(G), respectively. A topological index of a
graph G is a numeric quantity related to G which is invariant under automorphisms of G. A new counting polynomial, called the Omega polynomial, was recently proposed by Diudea on the ground of
quasi-orthogonal cut "qoc" edge strips in a polycyclic graph. Another new counting polynomial is called the Pi polynomial. The Omega and Pi polynomials are given by Ω(G,x) = Σ_c m(G,c)·x^c and Π(G,x) = Σ_c c·m(G,c)·x^(|E(G)|−c), respectively. In this paper, the Pi polynomial and the Pi index of the polycyclic aromatic hydrocarbons PAHk are computed. | {"url":"https://www.jocpr.com/articles/the-pi-polynomial-and-the-pi-index-of-a-family-hydrocarbons-molecules-7264.html","timestamp":"2024-11-05T13:40:46Z","content_type":"text/html","content_length":"18392","record_id":"<urn:uuid:ecc1d811-cc52-4a59-b460-740719057bf1>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00129.warc.gz"}
The Elements of Euclid; viz. the first six books, together with the eleventh and twelfth. Also the book of Euclid's Data. By R. Simson. To which is added, A treatise on the construction of the trigonometrical canon [by J. Christison] and A concise account of logarithms [by A. Robertson].
The Elements of Euclid; viz. the first six books, together with the eleventh and twelfth. Also the book of Euclid's Data. By R. Simson. To which is added, A treatise on the construction of the
trigonometrical canon [by J. Christison] and A concise account of logarithms [by A. Robertson].
Popular passages
Page 3-7
IF a straight line be divided into any two parts, the square of the whole line is equal to the squares of the two parts, together with twice the rectangle contained by the parts.
Any two sides of a triangle are together greater than the third side.
Therefore all the angles of the figure, together with four right angles, are equal to twice as many right angles as the figure has sides.
If, from the ends of the side of a triangle, there be drawn two straight lines to a point within the triangle, these shall be less than, the other two sides of the triangle, but shall contain a
greater angle. Let...
Again ; the mathematical postulate, that " things which are equal to the same are equal to one another," is similar to the form of the syllogism in logic, which unites things agreeing in the middle
DL is equal to DG, and DA, DB, parts of them, are equal ; therefore the remainder AL is equal to the remainder (3. Ax.) BG : But it has been shewn that BC is equal to BG ; wherefore AL and BC are
each of them equal to BG ; and things that are equal to the same are equal to one another ; therefore the straight line AL is equal to BC.
If two triangles have one angle of the one equal to one angle of the other and the sides about these equal angles proportional, the triangles are similar.
Page 3-16
To divide a given straight line into two parts, so that the rectangle contained by the whole, and one of the parts, may be equal to the square of the other part.
SIMILAR triangles are to one another in the duplicate ratio of their homologous sides.
Bibliographic information | {"url":"https://books.google.com.jm/books?id=KAJ8g5zuFdoC&dq=editions:HARVARD32044097001838&output=html_text&lr=","timestamp":"2024-11-01T18:57:33Z","content_type":"text/html","content_length":"50209","record_id":"<urn:uuid:ee5cfff4-84ed-49e6-a776-f64b74176da5>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00594.warc.gz"} |
b. Determine A and r. Give details about the implementation of the Neumann boundary condition.
c. Show that the eigenvalues λ_i of A are real and that λ_i < 0 holds.
For the integration in the x-direction of (4) we take the θ-method, given by
u_{n+1} = u_n + Δx [ (1 − θ)(A u_n + r_n) + θ(A u_{n+1} + r_{n+1}) ],   θ ∈ [0, 1].     (5)
Here Δx is the step size in the x-direction.
d. Determine the order of the local truncation error of (5).
We choose θ = ¾.
e. Show that (5) is unconditionally stable for θ = ¾. Is the method super-stable?
f. What is the order of the global discretization error of (5) for θ = ¾?
The absorption of the gas is determined by a at y = 1.
g. Give a first-order and a second-order accurate (one-sided) finite-difference formula for the computation of a. The corresponding numerical approximations of a are denoted by a₁ and a₂.
h. Choose Δy = 0.05, Δx = 0.02 and v = 1. Plot in a single graph the numerical solution (as a function of y) for x = nΔx with n = 5, 10, 15, 20, 25. Make tables of a₁ and a₂. Discuss the results.
i. Choose Δx = 0.02, v = 1 and define L = 20Δx. We investigate the accuracy in a at x = L. Choose Δy = 0.1, 0.05, 0.025. Make tables of a₁ and a₂ for x = L. Discuss the accuracy of a₁ and a₂.
For this, assume that the global discretization error can be written as C·Δy^p. Estimate C and p.
Discuss the results.
j. Add the software you wrote as appendix/appendices to your report.
Fig: 1 | {"url":"https://tutorbin.com/questions-and-answers/b-determine-a-and-r-give-details-about-the-implementation-of-the-neumann-boundary-condition-c-show-that-the-eigenvalues","timestamp":"2024-11-11T08:06:54Z","content_type":"text/html","content_length":"66297","record_id":"<urn:uuid:66084577-991b-4687-83b9-2f243d78c74c>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00481.warc.gz"} |
What does model-free mean in reinforcement learning?
In reinforcement learning (RL), a model-free algorithm (as opposed to a model-based one) is an algorithm which does not use the transition probability distribution (and the reward function)
associated with the Markov decision process (MDP), which, in RL, represents the problem to be solved.
Is sarsa model-free?
Algorithms that purely sample from experience such as Monte Carlo Control, SARSA, Q-learning, Actor-Critic are “model free” RL algorithms.
What’s the difference between model-free and model-based reinforcement learning?
“Model-based methods rely on planning as their primary component, while model-free methods primarily rely on learning.” In the context of reinforcement learning (RL), the model allows inferences to
be made about the environment.
Which is an example of model-free approach?
Examples of model-free RL algorithms are Monte Carlo and Temporal Difference methods, while SARSA and Q-Learning fall under the category of TD methods. Dynamic Programming (DP), by contrast, is a
mathematical technique to solve complex problems by dividing them into a set of simple subproblems.
What is model-free analysis?
Model-free analysis allows for determination of the activation energy of a reaction process without assuming a kinetic model for the process.
What is the difference between Q-learning and SARSA?
QL directly learns the optimal policy while SARSA learns a “near” optimal policy. QL is a more aggressive agent, while SARSA is more conservative. An example is walking near the cliff.
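To make the distinction concrete, here is a minimal sketch of the two update rules (a rough illustration only, assuming a tabular Q stored as a NumPy array and externally supplied alpha and gamma):

import numpy as np

# SARSA (on-policy): the action actually taken next, a_next, appears in the target.
def sarsa_update(Q, s, a, r, s_next, a_next, alpha, gamma):
    target = r + gamma * Q[s_next, a_next]
    Q[s, a] += alpha * (target - Q[s, a])

# Q-learning (off-policy): the greedy (max) value of the next state appears in the target,
# regardless of which action the behavior policy actually takes next.
def q_learning_update(Q, s, a, r, s_next, alpha, gamma):
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])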
Is PPO model-free?
Abstract: Proximal policy optimization (PPO) is the state-of the-art most effective model-free reinforcement learning algorithm.
What is the best reinforcement learning library?
Tensorforce. Tensorforce is an open-source Deep RL library built on Google’s Tensorflow framework. It’s straightforward in its usage and has the potential to be one of the best Reinforcement Learning libraries.
Which of the following is model-free reinforcement learning?
a) Algorithm’s principle Q-learning is a form of model-free reinforcement learning. It can also be viewed as an Off-Policy algorithm for Temporal Difference learning which can learn different
policies for behavior and estimation [298] [299].
Is TD learning model-free?
Temporal difference (TD) learning refers to a class of model-free reinforcement learning methods which learn by bootstrapping from the current estimate of the value function.
Is expected SARSA better than SARSA?
Expected SARSA is more complex computationally than Sarsa but, in return, it eliminates the variance due to the random selection of At+1. Given the same amount of experience we might expect it to
perform slightly better than Sarsa, and indeed it generally does.
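As a rough sketch of that idea (assuming a tabular Q and an epsilon-greedy behavior policy; the names are illustrative), the Expected SARSA target averages the next state's action values under the policy instead of sampling a single next action:

import numpy as np

def expected_sarsa_update(Q, s, a, r, s_next, alpha, gamma, epsilon):
    n_actions = Q.shape[1]
    # Action probabilities of an epsilon-greedy policy in the next state.
    probs = np.full(n_actions, epsilon / n_actions)
    probs[np.argmax(Q[s_next])] += 1.0 - epsilon
    expected_value = float(np.dot(probs, Q[s_next]))
    Q[s, a] += alpha * (r + gamma * expected_value - Q[s, a])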
Is AlphaGo model-free?
AlphaGo involves both model-free methods (Convolutional Neural Network (CNN)), and also model-based methods (Monte Carlo Tree Search (MCTS)).
What is PyTorch and TensorFlow?
TensorFlow is developed by Google Brain and actively used at Google both for research and production needs. Its closed-source predecessor is called DistBelief. PyTorch is a cousin of lua-based Torch
framework which was developed and used at Facebook.
Does keras support reinforcement learning?
What is it? keras-rl implements some state-of-the art deep reinforcement learning algorithms in Python and seamlessly integrates with the deep learning library Keras.
How do you train a reinforcement learning model?
Training our model with a single experience:
1. Let the model estimate Q values of the old state.
2. Let the model estimate Q values of the new state.
3. Calculate the new target Q value for the action, using the known reward.
4. Train the model with input = (old state), output = (target Q values)
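A minimal sketch of those four steps for a Q-value model exposing a Keras-style predict/fit interface; the model, discount factor, and experience tuple here are placeholders rather than any specific library's API:

import numpy as np

def train_on_experience(model, old_state, action, reward, new_state, done, gamma=0.99):
    old_q = model.predict(old_state[np.newaxis])[0]     # 1. Q values of the old state
    next_q = model.predict(new_state[np.newaxis])[0]    # 2. Q values of the new state
    target = reward if done else reward + gamma * np.max(next_q)  # 3. target Q value for the action
    old_q[action] = target
    model.fit(old_state[np.newaxis], old_q[np.newaxis], verbose=0)  # 4. train on (old state, target Q values)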
How is Qlearning implemented in Python?
Implementing Q-learning in python
from IPython.display import clear_output
import random
import numpy as np

for i in range(1, 100001):
    state = env.reset()                       # env, q_table and epsilon are assumed to be defined earlier
    epochs, penalties, reward = 0, 0, 0
    if random.uniform(0, 1) < epsilon:
        action = env.action_space.sample()    # explore the action space
    else:
        action = np.argmax(q_table[state])    # exploit learned values
    next_state, reward, done, info = env.step(action)
How to evaluate reinforcement learning model?
Agent is controlling a car by picking discrete actions (left,right,up,down)
The goal is to drive at a desired speed without crashing into other cars
The state contains the velocities and positions of the agent’s car and the surrounding cars
What are the best resources to learn reinforcement learning?
Rich Sutton, Introduction to Reinforcement Learning with Function Approximation
Rich Sutton, Temporal Difference Learning
Andrew Barto, A history of reinforcement learning
Deep Reinforcement Learning, David Silver, Pieter Abbeel, Sergey Levine and Chelsea Finn
David Silver, Principles of Deep RL
What are the types of reinforcement learning?
Input: The input should be an initial state from which the model will start
Output: There are many possible outputs, as there are a variety of solutions to a particular problem
Training: The training is based upon the input. The model will return a state and the user will decide to reward or punish the model based on its output.
How to apply reinforcement learning?
Understanding your problem: You do not necessarily need to use RL in your problem and sometimes you just cannot use RL.
A simulated environment: Lots of iterations are needed before an RL algorithm starts to work.
MDP: You would need to formulate your problem as an MDP. | {"url":"https://vidque.com/what-does-model-free-mean-in-reinforcement-learning/","timestamp":"2024-11-06T15:10:36Z","content_type":"text/html","content_length":"57310","record_id":"<urn:uuid:425012b2-ff11-4e6d-bac6-be988814bf68>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00095.warc.gz"}
Seismic migration
Development Geology Reference Manual
Series: Methods in Exploration
Part: Geophysical methods
Chapter: Seismic migration
Author: Ken Lamer, David Hale
Link: Web page
Store: AAPG Store
Virtually all seismic data processing is aimed at imaging the earth's subsurface, that is, obtaining a picture of subsurface structure from the seismic waves recorded at the earth's surface.
Deconvolution, for example, aims to sharpen reflections, and common midpoint (CMP) stacking exploits data redundancy to enhance signal-to-noise ratio while producing a seismic time section that
simulates what would have been recorded in a zero-offset seismic survey, that is, one in which a single receiver, located at each seismic source position, records data generated by the source at that location.
Of the many processes applied to seismic data, seismic migration is the one most directly associated with the notion of imaging. Until the migration step, seismic data are merely recorded traces of
echoes, waves that have been reflected from anomalies in the subsurface. In its simplest form, then, seismic migration is the process that converts information as a function of recording time to
features in subsurface depth. Rather than simply stretching the vertical axes of seismic sections from a time scale to a depth scale, migration aims to put features in their proper positions in
space, laterally as well as vertically.
All the issues in seismic migration reviewed here are treated in the collection of reprints found in Gardner.^[1]
What migration accomplishes
The migration problem is illustrated in Figure 1. The upper part of the figure depicts a zero-offset survey conducted over a subsurface medium that is homogeneous (constant P-wave velocity) with the
exception of an isolated boulder at some depth. Also shown are the straight ray paths traveled by seismic waves from each of five different source positions down to the boulder and back up to
receivers located at the sources. Clearly, reflections from the boulder will be observed at all the surface locations, not just the one directly above it. Also, the reflection time clearly increases
as the source-receiver pair is moved farther from the point directly above the boulder. The bottom part of Figure 1 shows schematically the seismic section that would be obtained for this survey.
Reflections occur along a hyperbolic diffraction pattern with the apex at the same CMP location as that of the boulder.
The task of migration here is to convert or map reflections along the diffraction into a single point at the position of the boulder. The reverse process, by which the boulder gives rise to the
observed diffraction pattern, is called modeling.
While the earth's subsurface is more complicated than that shown in Figure 1, the seismic data that would be obtained over the real earth can for all purposes be represented as a superposition of
many diffraction curves generated by each of many boulder-like anomalies in the subsurface.
Figure 2 shows another depth section and associated seismic section for a subsurface consisting of a single dipping reflector. For a constant-velocity subsurface, the many weak diffractions from very
closely spaced points along the reflector (of which five are shown in the figure) give rise, through constructive and destructive interference, to a net reflection along the straight-line envelope of
the diffraction curves. Note that the reflection is displaced laterally from the true reflector position (the line connecting apexes of the diffraction curves). It is this lateral mispositioning of
reflections from dipping reflectors that gave rise to the term migration for the process that corrects the positioning.
Figure 3 shows another perspective on this mispositioning. Reflections recorded at zero source-receiver offset follow ray paths that are perpendicular to the reflector. As a result, the reflection
from the point on the reflector beneath point P, for example, would be recorded by the geophone at location G, to the right.
Figure 4 shows the application of migration to CMP-stacked field data. The superposition of diffraction curves evident in the unmigrated data of Figure 4a gives rise to crossing reflections that can
not plausibly be interpreted as structure. By correcting for lateral mispositioning of dipping reflectors and collapsing diffraction curves to zones defined by the diffraction apex, migration
converts the recorded waves to a subsurface picture (Figure 4b) depicting both broadly and tightly folded anticlines and synclines.
How migration is accomplished
One senses the massive scale of the computer intensive two-dimensional mapping involved in the transformation from unmigrated to migrated data in Figure 4. While these schematic sections depict what
migration aims to accomplish, they say little about how it is done. All of the many methods of doing migration are founded on solutions to the scalar wave equation, a partial differential equation
that models how waves propagate in the earth. A simple form of the wave equation is as follows:
$$\frac{\partial^{2}P}{\partial z^{2}}+\frac{\partial^{2}P}{\partial x^{2}}=\frac{1}{V^{2}}\frac{\partial^{2}P}{\partial t^{2}}$$
where P(x, z, t) is the seismic amplitude as a function of reflection time t at any position (x, z) in the subsurface, and V is the seismic wave velocity in the subsurface, a function of both x and z
. Disturbances initiated by a seismic energy source are assumed to propagate in accordance with solutions to the wave equation. Migration, then, involves a running of the wave equation backward in
time, starting with the measured waves at the earth's surface P(x, z = 0, t), in effect pushing the waves backward and downward to their reflecting locations.
All current computer-based approaches to migration involve this backward solution to the wave equation. The earliest form, Kirchhoff summation migration, intuitively follows the situation depicted in
Figure 1. In essence, recorded amplitudes on CMP traces are summed along the diffraction trajectories dictated by the assumed subsurface velocity distribution, and the sums are placed at the apexes
of the curves, one curve for each sample point in the output migrated section.
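As an illustration of the summation idea only (constant velocity, zero-offset, no amplitude or anti-aliasing corrections, and array shapes assumed rather than taken from any real survey), a Kirchhoff-style migration loop can be sketched as:

import numpy as np

def kirchhoff_migrate(traces, dt, dx, velocity):
    # traces: 2-D array indexed as [trace position, time sample] of zero-offset data
    n_traces, n_samples = traces.shape
    image = np.zeros_like(traces)
    for ix in range(n_traces):              # output (apex) position
        for it in range(n_samples):         # output two-way time t0
            t0 = it * dt
            total = 0.0
            for jx in range(n_traces):      # sum along the diffraction hyperbola
                offset = (jx - ix) * dx
                t = np.sqrt(t0 ** 2 + (2.0 * offset / velocity) ** 2)
                jt = int(round(t / dt))
                if jt < n_samples:
                    total += traces[jx, jt]
            image[ix, it] = total           # place the sum at the apex
    return image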
Finite difference migration numerically integrates the wave equation by the method of finite differences to push seismic waves backward into the subsurface. A third category of migration approaches
is f-k migration, which operates via Fourier transforms in the frequency wavenumber (f-k) domain. In general, Fourier transform methods provide elegant means of solving partial differential
equations. When applied to migration, this elegance is often complemented by high computational efficiency.
Each of the different approaches has specific advantages such as computational efficiency, accuracy for imaging steep reflectors, and accuracy in the presence of spatial variation of velocity.
Likewise, each can produce undesirable processing artifacts related to some limitation in data quality such as poor signal to noise ratio, too coarse a spatial sampling interval, and missing data
(e.g., due to seismic source misfires).
Velocity: the key parameter
Regardless of the migration approach implemented, the key parameter of the process is velocity. Since migration involves pushing waves back to their reflecting points, it is essential that the waves
be pushed backward through the same medium through which they have propagated. Clearly, waves will not get back to the correct position at a given time if the velocity used in the migration process
differs from the actual subsurface velocity. Unfortunately, subsurface velocity is seldom well known, particularly in geologically complex areas. Today's migration algorithms are highly accurate when
supplied with the correct subsurface velocity. Because subsurface velocity can only be estimated, however, migration yields only an estimate of the true subsurface.
Where lateral variation of velocity is modest (as in many places in the Gulf of Mexico), migration methods in the class called time migration have performed adequately. Where lateral velocity
variation is severe (as in many overthrust areas), more computationally intensive depth migration is required. Note that the terms depth and time migration do not relate to whether the migrated
results are presented as a function of time or depth. Results of both migration categories are most often displayed in time (as in the examples shown here) because of added uncertainties in results
converted to depth. While depth migration is capable of accurate subsurface imaging where velocity is complex, the required accurate estimation of velocity is difficult and time consuming.
Poststack versus prestack migration
While migration algorithms are capable of accurately imaging reflections from steep interfaces, shortcomings in CMP stacking lead to destruction of such reflections before conventional poststack
migration is applied. Two alternatives to poststack migration of CMP stacked data preserve reflections from steep interfaces. Migration can be applied to the unstacked data (so-called prestack
migration) so that the data need not be reduced to an approximation to zero offset before migration. The improvement in imaging of steep reflectors by this approach, however, is bought at the price
of a great increase in the amount of computation required for the migration.
A cost-effective and accurate alternative to full prestack migration is to apply poststack migration to data that have had the added step of dip moveout (DMO) applied after normal moveout (NMO)
correction, but before the data are stacked. DMO, a form of partial prestack migration, completes the process that NMO only imperfectly accomplishes—it converts data recorded with separated sources
and receivers to a close approximation to zero-offset data, preserving reflections from both gently dipping and steep reflectors. Figure 5 shows the improvement in imaging of the steep flank of a
salt dome achieved by poststack migration when applied to DMO-processed data.
The additional accuracy of either DMO or prestack migration over that of conventional poststack migration demands special care in the field acquisition of seismic data. Too coarse a spatial sampling,
that is, too large a geophone group interval, may preclude high resolution imaging of steep reflectors by any migration method.
The example in Figure 6 shows imaging of reflections from steep faults. While migration of CMP-stacked data (not shown here) shows the faulting, reflections from the faults themselves are absent.
Details of the fault reflections seen on the DMO-processed result can be diagnostic of sealing along the faults.
The schematic diagrams shown here have been two-dimensional (2-D) representations, and the illustrations have all involved 2-D migration of 2-D seismic data. Invariably, the earth's subsurface has
three-dimensional (3-D) complexity. As a result, the mispositioning of recorded reflections extends in two lateral directions, and migration must be done as a 3-D process (see Three-dimensional
seismic method for Reservoir Development). It suffices here to state that migration is fundamentally incomplete unless it is applied as a 3-D process to 3-D data.
1. Gardner, G. H. F., ed., 1985, Migration of Seismic Data: Tulsa, OK, Society of Exploration Geophysicists Monograph Series, 462 p.
| {"url":"https://wiki.aapg.org/index.php?title=Seismic_migration&oldid=27173","timestamp":"2024-11-08T17:33:35Z","content_type":"text/html","content_length":"47105","record_id":"<urn:uuid:33dadb0c-cf11-4c70-ab66-b304c635f4b2>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00021.warc.gz"}
Dilution Factor Calculator
Last updated:
Dilution Factor Calculator
You will find this dilution factor calculator useful if you have ever performed a dilution: from chemical experiments and the preparation of medicine to making the perfect cup of coffee! As you've
almost certainly used the dilution factor formula before, we have written this article to help you learn what a dilution factor is and how to calculate the dilution factor of any dilution you
So, read on so you no longer need to wonder how to find the dilution factor!
What is dilution factor?
The dilution factor (or dilution ratio) is the notation used to express how much of the original stock solution is present in the total solution after dilution. It is often given as a ratio but can
also be given as an exponent; however, this calculator will only show it as a ratio. Regardless if the dilution factor is a ratio or exponent, it has two forms, either describing the parts of the
stock solution to the parts of the dilutant added ($\small S:D$) or the parts of the stock solution to the parts of the total solution ($\small S:T$).
❓ What exactly is a dilution? The solution dilution calculator at Omni Calculator has the answer! ⚗
As the difference between these two representations is very slight, an example would help ensure you don't get the wrong answers and mess up your experiment!
Let's say you have a $\small 10\ \text{cm}^3$ aqueous solution of acyl chloride. However, this is too concentrated for your experiment, so you add $\small90\ \text{cm}^3$ of water to further dilute
the solution. You end up with $\small100\ \text{cm}^3$ of acyl chloride. As you have $\small 10$ parts of the stock solution, and $\small90$ parts of the dilutant, the $\small S:D$ ratio is $\small
1:9$ (canceling down from $\small10:90$). In the $S:T$ notation the dilution factor is $\small1:10$, you have $\small 10\ \text{cm}^3$ of the stock solution that now makes up a $\small100\ \text{cm}^
3$ solution.
It is also worth noting that dilution factors only represent a loss of concentration – no molecules themselves are lost, just the number of them per mL decreases. This can be useful in several
experimental situations. Although the dilution factor is just a handy way of thinking about dilutions, dilutions are very common, both in science and your day-to-day life. If you've ever made gravy,
you've done a dilution. Ever washed your hands with soap? You've done a dilution.
They are also useful in the lab. If you wanted to replicate experiments over a range of decreasing concentrations, you would prepare what is known as a serial dilution: visit our serial dilution
calculator to learn the math of this technique. They're also used in practically every chemical and most biological experiments, as the stock solution of your chemical is often far more concentrated
than you desire.
Dilution is often used in the administration of medicine – for example, in order to administer the proper paracetamol dose for a child per kg of their body weight, it's sometimes required to dilute
the initial solution.
Dilution factor formula
Now that we've discussed what the dilution factor is, let's get down to brass tacks and talk about the dilution factor formula. But first, a brief section on how to represent the dilution factor. As
we mentioned above, the dilution factor is often expressed as a ratio of volumes. The simplest formula for both types of dilution factor is as follows:
• $S:D = V_\text{stock }:V_\text{dilutant}$; and
• $S:T = V_\text{stock}:V_\text{total}$.
If these volumes are expressed in the same units, you can cancel each side down using their greatest common factor, and you will end up with the simplest integer expression of the dilution factor.
Some of you, however, may wish to express this ratio in the form 1:X, where X is how many parts of the dilutant/total solution there are for one part of the stock solution. This may leave you with
some funny (not haha funny, but oh no funny) ratios, but their formulas are:
• $S:D = 1:(V_\text{dilutant}/V_\text{stock})$; and
• $S:T = 1:(V_\text{total}/V_\text{stock})$
Due to the limitations in current technology, this is also how our calculator expresses your results. We hope you can forgive us for making you do extra work. You may also see the dilution factor
expressed as an exponent, such as $3^{-1}$, $5^{-3}$, or $10^{-4}$. Now, do not be frightened by this new form! The exponent merely represents the ratio of the parts of the dilutant/total to the
parts of the stock. Use the order of the ratio above:
• $S:D = \text{exponent}:1$; and
• $S:T = \text{exponent}:1$
Now, you may or may not know that a number with a negative exponent is the same as putting that number as the denominator when the numerator is 1 and removing the negative sign. Our exponent
calculator can help you understand this further, but for now, let's go through the examples we set out above:
$\scriptsize \begin{split} &3^{-1}\rightarrow \frac{1}{3^1}:1\rightarrow\frac{1}{3}:1\rightarrow1:3 \\[1.5em] &5^{-3}\rightarrow \frac{1}{5^3}:1\rightarrow\frac{1}{125}:1\rightarrow1:125 \\[1.5em] &
10^{-4}\rightarrow \frac{1}{10^4}:1\rightarrow\frac{1}{10,\!000}:1\rightarrow1:10,\!000 \\ \end{split}$
How to calculate dilution factor
If you're still asking yourself, "how to find the dilution factor?", then we hope this section will answer all of your questions. So, just follow the steps below if you want to calculate the dilution
factor by hand:
1. Find any two of the following three values: volume of the stock solution (stock), volume of the dilutant (dilutant), and total volume of the solution (total). This can either be done
theoretically (before your experiment) or experimentally (after your experiment).
2. Use the two volumes to find the third. Use this equation: $\small \text{stock} + \text{dilutant} = \text{total}$. If you know which notation you would prefer to use ($\small S:D$ or $\small S:T$
), then you may not need this step, but we shall include it for completeness.
3. Be sure that all the volumes in the ratios use the same unit. If you need help, visit our volume converter!
4. Decide which notation you require:
□ $\small S:D$ = set the values of the stock and dilutant amount as a ratio — $\text{stock}:\text{dilutant}$; or
□ $\small S:T$ = set the values of the stock and total amount as a ratio — $\text{stock}:\text{total}$.
5. If required, cancel down the fractions by finding the greatest common factor. You can use the equivalent fractions calculator to speed up this task.
We have already provided an example in the What is dilution factor? section above, so please check that again if you are still wondering how to find the dilution factor. We will, however, tell you
how to calculate the volumes you need from the dilution factor:
1. Choose your desired dilution factor, its notation ($\small S:D$ or $\small S:T$), and one of the variables on either side of the colon.
2. Divide the number after the colon ($\small D$ or $\small T$) by the number before the colon ($S$). This value will be known as the factor.
3. Use the following equations depending on your choice of notation:
□ $\small S:D = \text{stock}\cdot\text{factor}= \text{dilutant}$ or $\small \text{dilutant}/\text{factor} = \text{stock}$; or
□ $\small S:T = \text{stock}\cdot\text{factor} = \text{total}$ or $\small \text{total}/\text{factor} = \text{stock}$.
There you have it – we hope this solves any of your issues regarding dilution factors. You can always check your results with our dilution factor calculator or just use it in the first place. It
works either to find the dilution factor or the volume required to achieve a specific dilution factor. Just input the fields you know into our tool!
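If you would rather script the arithmetic, a small sketch in plain Python (the function and variable names are ours, not part of any calculator) covers both directions of the calculation:

def dilution_ratios(stock_volume, dilutant_volume):
    # Volumes must be in the same unit; returns the 1:X forms of S:D and S:T.
    total_volume = stock_volume + dilutant_volume
    return (f"S:D = 1:{dilutant_volume / stock_volume:g}",
            f"S:T = 1:{total_volume / stock_volume:g}")

def volumes_from_factor(stock_volume, factor):
    # Given the stock volume and an S:T dilution factor of 1:factor,
    # return the dilutant volume and the total volume.
    total_volume = stock_volume * factor
    return total_volume - stock_volume, total_volume

print(dilution_ratios(10, 90))       # 10 cm3 stock + 90 cm3 water -> ('S:D = 1:9', 'S:T = 1:10')
print(volumes_from_factor(10, 10))   # -> (90, 100)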
How do I calculate the dilution factor?
To calculate the dilution factor, you can follow these simple steps:
1. Find two out of these three values:
a. stock: volume of the stock solution;
b. dilutant: volume of the dilutant; and
c. total: volume of the solution.
2. Use the formula to find the missing value:
total = stock + dilutant
Or you can always simplify the process using Omni Calculator’s dilution factor calculator.
What’s the difference between the dilution factor and the dilution ratio?
While these two values share similarities as they are used in the context of solutions, they show different relationships between the amount of the solute, solvent, and the total solution.
The dilution ratio represents the parts of the stock solution S to parts of the dilutant added, D (written as S:D). While the dilution factor represents the parts of the stock solution to parts of
the total parts of the solution (S:T).
What does a 1:20 dilution factor mean?
A 1:20 dilution factor means that for each unit of the stock solution, there are 19 units of dilutant, resulting in a total volume of 20 units.
Note that a 1:20 quotient can represent either an S:D (stock to dilutant) or S:T (stock to total solution) ratio. In the context of a dilution factor, it specifically refers to an S:T ratio.
How do I dilute a solution by a factor of 10?
Combine 9 parts of diluent with 1 part of the stock solution, making a total of 10 parts. For example, if you have 100 ml of the stock solution and aim to dilute it by a factor of 10, you would need
to add 900 ml of diluent. This yields a total solution of 1000 ml with a 1:10 dilution factor. | {"url":"https://www.omnicalculator.com/chemistry/dilution-factor","timestamp":"2024-11-07T00:05:10Z","content_type":"text/html","content_length":"650099","record_id":"<urn:uuid:e3b2caa0-fc16-4bed-bc7c-a8792075d55c>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00277.warc.gz"} |
Re: [ub] [c++std-ext-14598] Re: Sized integer types and char bits
From: Richard Smith <richardsmith_at_[hidden]>
Date: Fri, 25 Oct 2013 12:23:56 -0700
On Fri, Oct 25, 2013 at 11:41 AM, Gabriel Dos Reis <gdr_at_[hidden]>wrote:
> Matt Austern <austern_at_[hidden]> writes:
> | On Fri, Oct 25, 2013 at 10:31 AM, Jeffrey Yasskin <jyasskin_at_[hidden]>
> wrote:
> |
> |
> |
> | Richard explicitly asked whether any such C++ users exist or are
> | likely to exist in the future, and nobody's come up with any
> examples.
> | So we appear to have a choice between helping some theoretical people
> | or helping some actual people. (We help the actual people by telling
> | them what to expect in the standard, while now they have to test and
> | hope they find the right subset of undefined or
> implementation-defined
> | behavior that's actually guaranteed to work.)
> |
> |
> | It's actually a little worse than that. Testing can reveal what your
> | implementation does today, with your particular input, with one set
> | of compiler flags. No amount of testing can reveal what guarantees
> | your implementation makes.
> There are two separate issues here:
> (1) whether we want C++ to continue to support non-two's complement
> binary representation
> (2) what we want to say about overflowing shift arithmetic
> Requiring two's complement does not necessarily solve (2). And solving
> (2) does not necessarily imply "no" to (1).
Agreed. It would definitely be interesting to complete Jeffrey's list of
the things we could define if we standardized 2s complement, and then
investigate how many of these we are comfortable defining without
specifying 2s complement. So far, we have:
1) overflowing unsigned->signed conversions
2) right-shifts on negative operands
3) bitwise operators
Are there others?
(1) and (2) are currently implementation-defined; (3) seems underspecified
in the current standard.
[I think for consistency we should at least make (3) say that bitwise
operators on positive operands act as "expected" (that is, they give the
result that a 2s complement, 1s complement or sign-magnitude machine
would), and we should make these operations on other machines
More generally, we should at least say that each integral type must be one
of 1s complement, 2s complement or sign-magnitude. C currently requires
this (C99/C11 6.2.6.2/2), but C++ does not (3.9.1/7's list of
representations is not normative and not restrictive). 7.2/8 implies that
we don't support other representations, but there's no normative
justification for this assumption.]
If we require that either (1) or (3) acts as-if 2s complement, that
actually rules out 1s complement and sign-magnitude representations,
because these expressions compute a value that does not exist in the other
representations (-2147483648 for a 32-bit integer):
int(unsigned(INT_MAX) + 1) // for (1)
int(-1 ^ INT_MAX) // for (3)
We could define that (2) acts as-if 2s complement (divide by 2^N and round
down). I think that's the least valuable operation to define of the three,
Received on 2013-10-25 21:23:58 | {"url":"https://lists.isocpp.org/sg12/2013/10/0299.php","timestamp":"2024-11-01T23:32:28Z","content_type":"text/html","content_length":"9798","record_id":"<urn:uuid:08be4caa-acc6-45ec-b8d9-2da409be0a07>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00345.warc.gz"} |
F. Y. B. Sc. IT Sem - II Internal Examination 2019 - NSM
Questions and Answers
• 1.
A random variable which takes finite or countably infinite values called as ____________ random variable.
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. Discrete
A random variable which takes finite or countably infinite values is called a discrete random variable. This means that the possible outcomes of the variable can be listed and counted, such as
the number of heads when flipping a coin or the number of cars passing by in a given hour. This is in contrast to a continuous random variable, which can take any value within a certain range,
like the height of a person or the time it takes for a computer to process a task. A hybrid random variable does not exist, and "stop" is not a valid option for describing the type of random
• 2.
A random variable is said be ___________ if it takes any value in a given interval.
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. Continuous
A random variable is said to be continuous if it can take any value within a given interval. This means that there are no gaps or jumps between the possible values of the variable, and it can
take on any real number within the specified range. In contrast, a discrete random variable can only take on specific, separate values, while a hybrid random variable is a combination of both
continuous and discrete variables. Probability is a concept related to the likelihood of events occurring, but it is not directly related to whether a random variable is continuous or discrete.
• 3.
Verify whether the following functions can be p.m.f. or not. P(x) = 0.1, 0.2, 0.3, 0.2, 0.2
□ A.
□ B.
Correct Answer
A. It is a p.m.f.
The given function P(x) = 0.1, 0.2, 0.3, 0.2, 0.2 can be considered a probability mass function (p.m.f) because it satisfies the properties of a p.m.f. A p.m.f should have non-negative
probabilities for all values of x, and the sum of all probabilities should equal 1. In this case, all the probabilities are non-negative and the sum of probabilities (0.1 + 0.2 + 0.3 + 0.2 + 0.2)
equals 1. Therefore, it can be classified as a p.m.f.
• 4.
Verify whether the following functions can be p.m.f. or not. P(x) = 0.5, -0.1, 0.6, 0
□ A.
□ B.
Correct Answer
B. It is not a p.m.f.
A probability mass function must assign a non-negative probability to every value of the random variable. Here one of the assigned values is -0.1, which is negative, so the given function cannot
be a p.m.f., even though the values sum to 1 (0.5 + (-0.1) + 0.6 + 0 = 1).
• 5.
Find k, if following function is p.m.f.p(x) = k, 2k, 3k, 4k, 5k.
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. 1/15
The given function represents a probability mass function (p.m.f) where the values of p(x) are proportional to k, 2k, 3k, 4k, and 5k. In a p.m.f, the sum of all probabilities must equal 1. Since
there are 5 possible outcomes in this function, the sum of the probabilities is 1/15 + 2/15 + 3/15 + 4/15 + 5/15 = 15/15 = 1. Therefore, k must be such that the sum of the probabilities equals 1,
which is satisfied when k = 1/15.
• 6.
Find p(x
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. 0.1
• 7.
Find p(x>=3) if p(0)=0.1, p(1)=0.2,p(2)=0.3,p(3)=0.15,p(4)=0.25
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. 0.4
To find p(x>=3), we need to add the probabilities of all values of x that are greater than or equal to 3. From the given information, we know that p(3) = 0.15 and p(4) = 0.25. Adding these
probabilities gives us 0.15 + 0.25 = 0.4. Therefore, the probability of x being greater than or equal to 3 is 0.4.
• 8.
Find p(1<x<4) if p(0) = 0.1, p(1) = 0.2, p(2) = 0.3, p(3) = 0.15, p(4) = 0.25.
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. 0.45
To find p(1<x<4), add the probabilities of the values strictly between 1 and 4: p(2) + p(3) = 0.3 + 0.15 = 0.45.
• 9.
Find p(2<=x<=3) if p(0) = 0.1, p(1) = 0.2, p(2) = 0.3, p(3) = 0.15, p(4) = 0.25.
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. 0.45
To find p(2<=x<=3), add the probabilities of x = 2 and x = 3: p(2) + p(3) = 0.3 + 0.15 = 0.45.
• 10.
Find c.d.f. of 1 where p(1) = 3/5, p(3) = 3/10, p(5) = 1/10.
Correct Answer
A. 3/5
The cumulative distribution function (c.d.f.) gives the probability that a random variable takes on a value less than or equal to a particular value. In this case, we are finding the c.d.f. of 1.
The given probabilities are p(1) = 3/5, p(3) = 3/10, and p(5) = 1/10. Since we are looking for the probability of a value less than or equal to 1, we only need to consider p(1) which is 3/5.
Therefore, the c.d.f. of 1 is 3/5.
• 11.
Find c.d.f. of 3 where p(1) = 3/5, p(3) = 3/10, p(5) = 1/10.
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. 9/10
The c.d.f. (cumulative distribution function) represents the cumulative probability of a random variable taking on a value less than or equal to a given value. In this case, we are finding the
c.d.f. of 3. Since the given probabilities are for specific values (p(1), p(3), p(5)), we need to find the cumulative probabilities up to 3. The probability of getting a value less than or equal
to 3 is the sum of the probabilities of getting 1 and 3. Therefore, the c.d.f. of 3 is 3/5 + 3/10 = 9/10.
• 12.
Find c.d.f. of 5 where p(1) = 3/5, p(3) = 3/10, p(5) = 1/10.
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. 1
• 13.
Find k, if the function f defined by f(x) = k x, 0 < x < 2 is the p.d.f. of a random variable x.
Correct Answer
A. 1/2
The given function f(x) = kx represents a probability density function (p.d.f.) for a random variable x. For it to be a valid p.d.f., the integral of f(x) over its range must equal 1.
Integrating f(x) = kx from 0 to 2 gives (k/2)x^2 evaluated from 0 to 2, which equals 2k. Setting 2k = 1 gives k = 1/2.
• 14.
Expected value is also called as _________.
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. Mean
Expected value is also called as mean. The mean represents the average value of a set of numbers. In statistics, it is calculated by summing up all the values in a dataset and dividing it by the
total number of values. The expected value is used to estimate the long-term average outcome of a random variable or a probability distribution. It is a central concept in probability theory and
is often used to make predictions or analyze data.
• 15.
The positive square root of variance is called _____________.
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. Standard deviation
The positive square root of variance is called the standard deviation. It is a measure of how spread out the data points in a dataset are. By taking the square root of variance, we can obtain a
value that is in the same unit as the original data, making it easier to interpret. Standard deviation is widely used in statistics and is often used to describe the variability or dispersion of
a dataset.
• 16.
Find c if, p(0) = c, p(1) = 2c, p(2) = 4c, p(3) = 2c, p(4) = c.
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. 1/10
The probabilities of all possible values must sum to 1. Adding the given values, c + 2c + 4c + 2c + c = 10c, so 10c = 1 and therefore c = 1/10.
• 17.
Find p(x>=2) if, p(0) = c, p(1) = 2c, p(2) = 4c, p(3) = 2c, p(4) = c.
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. 7/10
The probability distribution is given by p(0) = c, p(1) = 2c, p(2) = 4c, p(3) = 2c, p(4) = c. To find p(x>=2), we need to sum up the probabilities of all values greater than or equal to 2. In
this case, p(2) + p(3) + p(4) = 4c + 2c + c = 7c. Since the total probability must sum to 1, c + 2c + 4c + 2c + c = 10c = 1, so c = 1/10. Therefore, p(x>=2) = 7c = 7/10.
• 18.
Find p(x<3) if, p(0) = c, p(1) = 2c, p(2) = 4c, p(3) = 2c, p(4) = c.
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. 7/10
The probability function p(x) is given for x = 0, 1, 2, 3, 4, with p(0) = c, p(1) = 2c, p(2) = 4c, p(3) = 2c, p(4) = c, and c = 1/10. To find p(x<3), add p(0) + p(1) + p(2) = c + 2c + 4c = 7c = 7/10.
• 19.
Find p(x<=1) if, p(0) = c, p(1) = 2c, p(2) = 4c, p(3) = 2c, p(4) = c.
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. 3/10
The given probabilities form a discrete probability distribution with c = 1/10. To find p(x<=1), add p(0) + p(1) = c + 2c = 3c = 3/10.
• 20.
A random variable x is said to follow discrete uniform distribution if its p.m.f. if _______.
Correct Answer
A. 1/n
A random variable x is said to follow a discrete uniform distribution if its probability mass function (p.m.f.) is equal to 1 divided by the number of possible outcomes (n). This means that each
outcome has an equal probability of occurring.
• 21.
In uniform distribution E(x) = _________
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. (n+1)/2
In a uniform distribution, all values have equal probability of occurring. The expected value (E(x)) represents the average value of the distribution. In this case, the expected value can be
calculated by taking the sum of all possible values and dividing it by the total number of values. Since the values in a uniform distribution range from 1 to n, the sum of all values can be
calculated using the formula (n * (n+1))/2. Dividing this sum by the total number of values (n) gives us (n+1)/2, which is the correct answer.
• 22.
In uniform distribution var(x) = _________
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. (n^2 - 1)/12
In a uniform distribution, the variance of a random variable x is equal to (n^2 - 1)/12, where n is the number of possible outcomes. This formula calculates the spread or dispersion of the data
points around the mean. The (n^2 - 1)/12 formula is derived from the properties of a uniform distribution and is commonly used to determine the variability in a dataset that follows a uniform
• 23.
In Binomial distribution, p is called probability of ______________
Correct Answer
A. Success
In binomial distribution, p represents the probability of success. This means that p is the likelihood of achieving a desired outcome or event in a given number of trials or experiments. It is
used to calculate the probability of obtaining a specific number of successes in a fixed number of independent trials, where each trial has only two possible outcomes - success or failure.
Therefore, p is the probability of achieving the desired outcome or success in the binomial distribution.
• 24.
In Binomial distribution, q is called probability of ______________
Correct Answer
A. Failure
In Binomial distribution, q is called the probability of failure. This means that q represents the likelihood of an event not occurring or being unsuccessful. In the context of the Binomial
distribution, q is used to calculate the probability of a certain number of failures in a fixed number of independent trials, where the probability of success is represented by p. Therefore, q
complements the probability of success and helps in determining the probability distribution of failures in a binomial experiment.
• 25.
In Binomial distribution, if p is known then q can be calculated from following formula.
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. 1-p
The given formula calculates the value of q, which represents the probability of the complement of the event occurring in a binomial distribution. By subtracting p from 1, we can find the
probability of the event not happening (complement) and therefore calculate q.
• 26.
In Binomial distribution, E(x) = __________
Correct Answer
A. Np
The expected value of a binomial distribution, denoted as E(x), is equal to the product of the number of trials (n) and the probability of success in each trial (p). Therefore, the correct answer
is np.
• 27.
In Binomial distribution, Var(x) = _______
Correct Answer
A. Npq
The correct answer for this question is npq. In binomial distribution, Var(x) represents the variance of the random variable x. The formula to calculate the variance in a binomial distribution is
npq, where n is the number of trials, p is the probability of success in each trial, and q is the probability of failure in each trial (q = 1 - p). This formula allows us to measure the spread or
dispersion of the binomial distribution.
• 28.
A fair coin is tossed, then what is the probability of getting head.
Correct Answer
A. 1/2
The probability of getting a head when tossing a fair coin is 1/2. This is because there are two equally likely outcomes when tossing a coin - either it lands on heads or tails. Since the coin is
fair, each outcome has an equal chance of occurring, so the probability of getting a head is 1 out of 2, or 1/2.
• 29.
In Poisson distribution, E(x) = _______
Correct Answer
A. M
The correct answer for the Poisson distribution is m. This is because E(x) represents the expected value or the average number of events in a given interval, and in the Poisson distribution, this
average is equal to the parameter m.
• 30.
In Poisson distribution, Var(x) = _______
Correct Answer
A. M
In Poisson distribution, the variance (Var(x)) is equal to the mean (m). This means that the spread or variability of the data is equal to the average value. In other words, the variance is not
affected by the values of p and q, which represent the probability of success and failure respectively. Therefore, the correct answer is m.
• 31.
In Exponential distribution, E(x) = _______
Correct Answer
A. 1/λ
The correct answer is 1/λ because in exponential distribution, the expected value (E(x)) is equal to the reciprocal of the rate parameter (λ). This means that on average, the time between events
occurring in an exponential distribution is equal to 1/λ.
• 32.
If random variable x follows exponential distribution with parameter 0.5 then mean = ______
Correct Answer
A. 2
The mean of an exponential distribution with parameter λ is equal to 1/λ. In this case, the parameter is 0.5, so the mean is 1/0.5 = 2.
• 33.
If random variable x follows exponential distribution with parameter 0.5 then variance = ______
Correct Answer
A. 4
The variance of a random variable following an exponential distribution with parameter λ is equal to 1/λ^2. In this case, the parameter is 0.5, so the variance would be 1/(0.5)^2 = 4.
• 34.
A continuous random variable x is said to follow ____________ distribution over an interval (a, b) if it has the p.d.f. = 1/(b-a)
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. Rectangular
A continuous random variable x is said to follow a rectangular distribution over an interval (a, b) if it has a probability density function (p.d.f.) equal to 1 divided by the difference between
b and a. This means that the probability of any value within the interval (a, b) is constant, resulting in a rectangular shape for the probability distribution.
• 35.
In Rectangular distribution, E(x) = ______
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. (b+a)/2
The expected value, denoted as E(x), in a rectangular distribution is calculated by taking the average of the lower limit (a) and the upper limit (b) of the distribution. Therefore, the correct
answer is (b+a)/2.
• 36.
In Exponential distribution, Var(x) = __________
Correct Answer
A. 1/λ^2
The correct answer is 1/λ². In the exponential distribution, the variance of a random variable x is equal to the reciprocal of the square of the rate parameter λ. Therefore, the variance is
1/λ².
If n= 19 in Uniform distribution, then E(x) = _________
Correct Answer
A. 10
For a discrete uniform distribution on the values 1, 2, ..., n, the expected value is E(x) = (n+1)/2, the average of the minimum and maximum values. With n = 19, E(x) = (1 + 19)/2 = 10.
• 38.
If Var(x) = 1/6 then standard deviation = ____________
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. 0.4082
The standard deviation is a measure of the dispersion or spread of a set of data. It is calculated as the square root of the variance. In this question, the variance of x is given as 1/6. To find
the standard deviation, we take the square root of the variance. Therefore, the standard deviation is √(1/6) which is approximately 0.4082.
• 39.
If X has binomial distribution with n = 20 and p = 1/10, then E(x) = _______
Correct Answer
A. 2
The expected value, E(x), of a binomial distribution is calculated by multiplying the number of trials, n, by the probability of success, p. In this case, n is given as 20 and p is given as 1/10.
Multiplying these values together gives us 20 * 1/10 = 2. Therefore, the correct answer is 2.
• 40.
If X has binomial distribution with n = 20 and p = 1/10, then Var(x) = _______
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. 9/5
The variance of a binomial distribution is given by the formula Var(x) = n * p * (1 - p). In this case, n = 20 and p = 1/10. Plugging in these values into the formula, we get Var(x) = 20 * (1/10)
* (1 - 1/10) = 20 * (1/10) * (9/10) = 9/5. | {"url":"https://www.proprofs.com/quiz-school/story.php?title=mjm5njq0ngwm43","timestamp":"2024-11-06T09:08:20Z","content_type":"text/html","content_length":"571439","record_id":"<urn:uuid:1d3fd3d2-f57c-4caf-97a6-0b7c0171d05a>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00534.warc.gz"} |
Great Math Mystery Summary
\documentclass[11pt,reqno]{amsart} \setlength{\hoffset}{-.5in} \setlength{\voffset}{-.25in} \usepackage{amssymb,latexsym} \usepackage{graphicx} \usepackage{fancyhdr} \usepackage{cancel} \usepackage
{url} \usepackage{amssymb,amsmath,psfrag} \usepackage{enumerate} \newcommand\BD{\mathrm{B}} \newcommand\SD{\mathrm{S}} \textwidth=6.175in \textheight=8.5in \theoremstyle{plain} \numberwithin
{equation}{section} \newtheorem{thm}{Theorem}[section] \newtheorem{theorem}[thm]{Theorem} \theoremstyle{plain} \numberwithin{equation}{section} \title{Great Math Mystery Summary } \author{Jibri L.
Kea } \begin{document} \maketitle \section{Summary} Mathematics is the underlying rule behind everything that we interact with in our environment. It's seen that the physics and various calculations
that are used in video games also relate to physics and our interaction with objects in the real world. Mathematics can even be seen all throughout nature, with the prime example being the number of
petals on flowers. The relationship between the petals of flowers and math is the Fibonacci Sequence. The number of petals on a flower all throughout nature only spawn in numbers found in the
Fibonacci Sequence. Geometry can also be seen in flowers with swirl patterns in the pollen of a daisy with the number of points on the swirl clockwise or counter-clockwise being found on the
Fibonacci Sequence. Mathematics is also seen in the length of rivers, as they are directly related to the value of pi. Math is also mysterious with the properties of gravity as it was once believed
that heavier objects fall faster than lighter objects, but it was uncovered that all objects on Earth fall at a constant rate of acceleration- only going slower due to wind resistance which would be
affected by surface area. Our minds, emotions, and various other sensations have also been linked to mathematics with biochemistry to some degree as well. Mathematicians also don't believe that math
can be invented, because all of the things that they come up with have existed since the beginning of time; They believe that in their field properties of the universe are simply discovered and
understood such as the same with physics. \end{document} | {"url":"https://cs.overleaf.com/articles/great-math-mystery-summary/dxyrdkqzbgyg","timestamp":"2024-11-02T03:16:30Z","content_type":"text/html","content_length":"38266","record_id":"<urn:uuid:2d22cd39-52f7-44bb-9ac1-661adda0f238>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00642.warc.gz"} |
What does an algebraic expression not contain?
Generally, an expression does not contain an equality symbol (=), except when comparing or evaluating. numbers and/or variables (letters) and operation symbol(s), for example, “x + 10” is the
algebraic expression of the verbal and written expressions given above. mathematical symbols.
What is algebraic expression examples?
For example, 10x + 63 and 5x – 3 are examples of algebraic expressions. For example, x is our variable in the expression: 10x + 63. The coefficient is a numerical value used together with a variable.
For example, 10 is the coefficient in the expression 10x + 63.
What is an algebraic expression in words?
Algebraic expressions are useful because they represent the value of an expression for all of the values a variable can take on. Similarly, when we describe an expression in words that includes a
variable, we’re describing an algebraic expression, an expression with a variable.
How do you explain algebraic expressions?
An algebraic expression is an expression involving variables and constants, along with algebraic operations: addition, subtraction, multiplication, and division. Examples of algebraic expressions
are 3x + 1 and 5(x² + 3x).
What is the example of expression?
The definition of an example of expression is a frequently used word or phrase or it is a way to convey your thoughts, feelings or emotions. An example of an expression is the phrase “a penny saved
is a penny earned.” An example of an expression is a smile.
What are the steps in simplifying algebraic expression?
To simplify any algebraic expression, the following are the basic rules and steps: Remove any grouping symbol such as brackets and parentheses by multiplying factors. Use the exponent rule to remove
grouping if the terms are containing exponents. Combine the like terms by addition or subtraction.
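If you want to check a simplification by machine, a quick sketch using the SymPy library (the library choice is ours; the article itself does not prescribe any software) looks like this:

import sympy as sp

x = sp.symbols('x')
expr = 5 * (x**2 + 3 * x) + 2 * (4 * x - 1)   # remove grouping symbols, then combine like terms
print(sp.expand(expr))                         # prints 5*x**2 + 23*x - 2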
What are the types of algebraic expression?
There are 3 main types of algebraic expressions which include:
• Monomial Expression.
• Binomial Expression.
• Polynomial Expression.
What kind of expression is 2x 3x?
On the basis of number of terms, there are different types of expressions. Monomial: An expression containing one term is called a Monomial. For example: 2xy, 5x, -2x, 3x², 10 are all monomials.
Binomial : An expression consisting of 2 terms is a Binomial.
How do you write an expression?
To write an expression, we often have to interpret a written phrase. For example, the phrase “6 added to some number” can be written as the expression x + 6, where the variable x represents the
unknown number. | {"url":"https://yourquickadvice.com/what-does-an-algebraic-expression-not-contain/","timestamp":"2024-11-06T11:35:10Z","content_type":"text/html","content_length":"68948","record_id":"<urn:uuid:9a41f013-94c0-4e45-89c8-6a22f09e30f2>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00404.warc.gz"} |
Online calculator. Integer factorization.
This online calculator will help you to understand how to factorize integer numbers. Factorization calculator is very quickly calculate the task and give a detailed solution.
Guide how to enter data into factorization calculator
You can input only integer numbers in this online calculator.
Additional features of factorization calculator
• Use the keyboard arrow keys to move to the previous or next field.
Rules. Integer factorization.
A prime number (or a prime) is a natural number that has exactly two distinct natural number divisors: 1 and itself.
In number theory, integer factorization or prime factorization is the decomposition of a composite number into smaller non-trivial divisors, which when multiplied together equal the original integer.
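A minimal trial-division sketch in Python illustrates the idea; it is fine for small integers, though real factorization tools use far more sophisticated algorithms:

def prime_factors(n):
    # Return the prime factorization of a positive integer as a list of primes.
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:       # divide out each prime factor as often as it appears
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                   # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(360))       # [2, 2, 2, 3, 3, 5]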
| {"url":"https://onlinemschool.com/math/assistance/number_theory/square-root-calculator/","timestamp":"2024-11-05T04:19:37Z","content_type":"text/html","content_length":"28429","record_id":"<urn:uuid:366b3f34-d562-4858-a0f9-92a3b87251d2>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00724.warc.gz"}
Factorial in C++ - The Code Data
Factorial in C++
Factorial is a fundamental mathematical concept that finds its significance in various algorithms and problem-solving techniques. In this article, we will write a program to calculate factorial in C++.
What is Factorial ?
Factorial, denoted by the exclamation mark ‘!’, is a mathematical operation that calculates the product of all positive integers from 1 to a given number.
For example, the factorial of 6 is denoted as 6! and calculated as 6*5*4*3*2*1 = 720.
Factorial in C++
We can calculate the factorial in C++ using various approaches. Here we are going to use the iterative approach. In the iterative approach we use a loop to multiply each number from 1
to n.
#include <iostream>

// Iterative factorial: multiply every integer from 1 up to n.
unsigned long long factorialIterative(int n) {
    unsigned long long factorial = 1;
    for (int i = 1; i <= n; i++) {
        factorial *= i;
    }
    return factorial;
}

int main() {
    int num;
    std::cout << "Enter a positive integer: ";
    std::cin >> num;
    std::cout << "Factorial of " << num << " is: " << factorialIterative(num) << std::endl;
    return 0;
}
You can check this tutorial for calculating factorial using recursion in C++.
Leave a Comment | {"url":"https://thecodedata.com/factorial-in-c/","timestamp":"2024-11-07T17:03:48Z","content_type":"text/html","content_length":"67780","record_id":"<urn:uuid:f8fc1314-8502-474c-81eb-b81ff61011a4>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00636.warc.gz"} |
duration functions
The DURATION function in Excel is a financial tool that calculates the Macaulay duration of a security that pays periodic interest, i.e. the present-value-weighted average time, in years, until the investment's cash flows are received.
This function is often used to compare the interest-rate sensitivity of bonds with different coupons and maturities and to estimate the impact of interest rate changes on an investment's value.
To use the DURATION function in Excel, you need to input the settlement date, the maturity date, the annual coupon rate, the annual yield, and the number of coupon payments per year (an optional day-count basis can also be supplied). The settlement date is the date on which the
investment is purchased, and the maturity date is the date on which the security is redeemed at its face value. The coupon rate and the yield are annual rates, expressed as percentages, and the payment
frequency is typically 1, 2 or 4 payments per year.
To calculate the duration of an investment, Excel evaluates the present-value-weighted average time of the cash flows:
Duration = [ Σ t × CF(t) / (1 + Yield/N)^(N×t) ] / Price
Where CF(t) is the cash flow received at time t (measured in years), N is the number of coupon periods per year, the sum runs over all payment dates between the settlement date and the maturity date, and Price is the sum of all the discounted cash flows.
For example, suppose you want to calculate the duration of a 10-year bond with a 5% annual coupon, priced at a 5% yield and paying quarterly. To do this, you would input the following values into the
duration function:
Settlement date: 1/1/2021 Maturity date: 1/1/2031 Coupon: 5% Yield: 5% Frequency: 4
With these inputs, the duration function returns a value of roughly 7.9 years. This means that, based on the inputted coupon, yield and payment frequency, the present-value-weighted average wait for
the bond's cash flows is about 7.9 years, so for small yield changes the bond's price responds roughly like a zero-coupon investment of that maturity.
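The figure above can be cross-checked with a short script. The sketch below simply discounts the bond's cash flows and ignores Excel's day-count conventions, so it is an approximation of DURATION rather than a re-implementation of it, and the numbers in the comments are rounded:

def macaulay_duration(coupon_rate, yield_rate, years, freq=4, face=100.0):
    """Present-value-weighted average time (in years) of a bond's cash flows."""
    n = int(years * freq)                # number of coupon periods
    coupon = face * coupon_rate / freq   # cash flow per period
    y = yield_rate / freq                # per-period yield
    price = 0.0
    weighted_time = 0.0
    for k in range(1, n + 1):
        cf = coupon + (face if k == n else 0.0)
        pv = cf / (1.0 + y) ** k
        price += pv
        weighted_time += (k / freq) * pv
    return weighted_time / price

mac = macaulay_duration(0.05, 0.05, 10, freq=4)
mod = mac / (1 + 0.05 / 4)               # modified duration (discussed below)
print(round(mac, 2), round(mod, 2))      # roughly 7.93 and 7.83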
One important thing to note is that the duration function evaluates the bond at a single settlement date and a single yield. In
reality, interest rates fluctuate over time, and a bond's duration changes both as yields move and as the bond approaches maturity. For this reason, it is important to recompute duration whenever market conditions change rather than relying on
a figure calculated once.
To illustrate this point, let's consider the same 10-year bond with a 5% coupon, but this time we will assume that interest rates rise to 7% halfway through the bond's life. Using
the duration function with the original inputs, we calculated a duration of roughly 7.9 years at the initial 5% yield. However, if we recalculate the duration five years later, using the new settlement date and the 7% yield for the
second half of the investment period, we get a noticeably different result.
To translate duration into a direct measure of how changing interest rates affect an investment, we can use the modified duration, which rescales the Macaulay duration by the periodic yield. The modified duration is calculated as follows:
Modified duration = Macaulay duration / (1 + Yield/N)
Excel provides this calculation through the MDURATION function, which takes the same arguments as DURATION. Modified duration approximates the percentage change in the bond's price for a one-percentage-point change in yield; for the 10-year bond above it works out to roughly 7.8, meaning the price
would fall by roughly 7.8% if yields rose instantly by one percentage point. Recalculating both figures five years later at the higher 7% yield gives a much shorter duration, both because fewer cash flows remain and because the higher yield discounts the later payments more heavily. This illustrates the impact of changing interest rates on an investment's duration. | {"url":"https://excelguru.pk/duration-function/","timestamp":"2024-11-04T20:17:04Z","content_type":"text/html","content_length":"64218","record_id":"<urn:uuid:379d2193-20d6-41d3-bbe0-bbd012849ab4>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00736.warc.gz"}
Re: The Axiom Of Choice - an Astronomy Net God & Science Forum Message
Hi Dick,
I think I read the same article you mentioned. It was the cover story in the August 30, 2003 edition of Science News, titled "Infinite Wisdom". I'm not sure it was the same article though because the
one I read doesn't mention the Axiom Of Choice by name at all. It only refers to "the standard axioms" of mathematics.
You said that the article you read concerned "a proof that their [sic] exists no system of mathematics which is consistent with the Axiom of Choice." That may be an implication of the new
development, but it was not stated that way by Science News. My personal opinion is that Goedel proved a version of that which could be stated, 'There exists no complete system of mathematics which
is consistent with the Axiom of Choice.'
As for finding the Science News article, I found it at: http://216.167.111.80/20030830/bob10.asp . However, I had to log in as a subscriber. You'll probably have to have Diantha save one of your SN
copies so you can enter your subscriber number which is printed on your mailing label. Without that, you can't access all the complete articles on-line. You can also get to the website through the
front door at www.sciencenews.org ,
Thank you for the link to the Axiom of Choice web site. It was very interesting for me to read and it provided some additional interesting links which I explored. Great fun.
After reading your post, I re-read the SN article and made some notes. I'll share them here.
The article begins by describing Cantor's pairing method of determining the cardinal equivalence of two sets. That is, if the elements of the two sets can be set into one to one correspondence with
none left over from either set, the two sets are said to be of equal cardinality. That is, they have the same number of elements. If one set has some elements left over, it is "bigger" than the other.
Ever since I first learned about this, I have had reservations about using it for infinite sets. My objection goes like this: In this method, for two sets to have equal cardinality, it is important
that no elements be left over. But if you imagine setting the elements of two infinite sets, say the natural numbers and the even natural numbers, into one-one correspondence, as you reach any
particular point you see that one set is getting "used up" faster than the other. For example, in those two sets, after you have formed the pairs (1,2), (2,4), and (3,6) you can see that you are
"using up" the evens twice as fast as the naturals. Or, another way of looking at it, at this point we have used up all the evens up to the number 6, but we have used the naturals up only to the
number 3. To me that means that so far, the naturals have three elements "left over". Having numbers left over is something we want to avoid so I say we should start worrying about this "error" right away.
If we imagine continuing the process, hoping that this "error" will correct itself somehow, we can make periodic inspections to try to predict the eventual outcome. If we stop when the evens reach
100 we find there are 50 "left over" and if we stop when the evens reach 1000 we find there are 500 naturals left over. Instead of the problem correcting itself, it gets steadily worse as we go. What
could possibly convince us that when the pairing is finally done, there are no naturals left over? To me it is completely counter-intuitive, so I have never accepted the notion of defining
cardinality of infinite sets in this way. To me, anything that depends on this definition is nonsense. But, since most mathematicians disagree with me, let me continue with my discussion of the SN
article with some quotes and comments.
On page 140 the article says, "In 1938, logician Kurt Goedel proved that the continuum hypothesis is consistent with the standard axioms of set theory. Then in 1963, Paul Cohen, now at Stanford
University, proved that the opposite of the continuum hypothesis -- the assertion that there is actually an infinite set that is bigger than the set of counting numbers but smaller than the set of
real numbers -- is also consistent with the axioms."
The standard set of axioms includes the AC, and the definition of "bigger" is that there are some elements of the "bigger" set left over after pairing them with elements of the "smaller" set as I
described above.
I interpret the continuum hypothesis to be the question, 'Does there exist a set bigger than the set of integers and smaller than the continuum?' It is simply an existence question.
Here is where we may brush up against reality and/or metaphysics. What does it mean for a mathematical object to exist?
I think there are three different possible meanings:
1) To a formalist, existence of an object means that it is possible to produce an unambiguous definition of the object which is consistent with everything else that has been developed in your system
so far.
2) To a classical Platonist, existence means that such a definition is out there somewhere and we only need to find it.
3) To a Platonist like me, existence means that someone has actually produced such a definition.
Since we don't want to get into metaphysics, I'll stop here. I just want to point out my views on how mathematics brushes up against reality.
SN (page 140) quotes Woodin (the guy who came up with the proof concerning "elegant" axioms): "Cohen's demonstration "caused a foundational crisis...Here we had a question which should have an
answer, but it had been proven that there were no means of answering it."
Me: Why the surprise? Why the crisis? In my opinion, Goedel proved that there inevitably were such questions in systems containing the AC and Cohen simply scared one up.
SN: "Does it even make sense to say the continuum hypothesis is true or false?"
Me: Does it even make sense to say the Pythagorean Theorem is true or false? If, by 'true', you simply mean that you can present a formal proof within the axioms, then it makes sense to say that each
of these propositions are either true or false. But if, by 'true', you mean it is consistent with reality, then in both cases you have a difficult, if not impossible, job to do before you can even
entertain the question. That job is to unambiguously define each and every term in the proposition in the context of reality. For example, the Pythagorean Theorem deals with the concept of lines.
What, exactly, is a "real" line? Is there any such thing as a line in reality? And if so, does it have all the same properties as the lines of geometry? I don't think those questions are at all easy to answer.
SN: "To formalists, it makes no sense...The hypothesis must be inherently vague."
Me: I think Wittgenstein showed that ALL language statements are "inherently vague".
SN: "To Platonists...the axioms are insufficient."
Me: To me, a Platonist of sorts, the axioms are deficient. The bad apple in the barrel is the AC. I doubt that any axiom can allow for the definition of infinite sets without introducing
inconsistencies into the system.
SN (p. 141): "Mathematicians have long known that there is no all-powerful axiom that can answer every question about Cantor's hierarchy [of sizes of infinite sets]."
Me: I'm not surprised. When you talk about "every question" you are talking about a LARGE playground. To me, mathematical objects are like objects of our knowledge about reality. And both of these are
like the pairing of set elements I talked about earlier. They all have the feature that as you incrementally increase knowledge, you open up more questions than you answered. It is like the familiar
image of the inside of a circle representing what we know, the outside of the circle representing what we don't know, and all the unanswered questions which we can sensibly articulate are around the
perimeter. As knowledge increases by including previously unanswered questions into the circle, the perimeter increases bringing on more unanswered questions than we had before.
"Elegant" axioms -- defined as those sufficient to settle the continuum hypothesis -- according to Woodin, all make the hypothesis false. To me, using the above imagery, this simply means that the
Elegant axioms are on the perimeter of that circle and there are a ton of yet-to-be comprehended questions way beyond the circle that are generated by the AC and the notion of infinity.
On page 141, SN says that one Joel Hamkins says "Woodin's novel approach of sidestepping the search for the right axiom doesn't conform to the way mathematicians thought the continuum hypothesis
would be settled."
I'm not surprised at this either. I think most breakthroughs in science, mathematics, and probably every other field as well, are made by people who don't conform to the way everyone else expects
things to be done. It reminds me of the current search for a TOE. There are several different approaches, e.g. superstring theory, supergravity, etc. But the best bet at the moment is M-Theory which
is sort of an agglomeration of all of the others. That's not the customary way of approaching the problem. Incidentally, as other readers of this post may not know, your "The Foundations of Reality"
approach to understanding reality also breaks the conventional mold.
So much for the Science News article. If you find that you read some other article than this one, please let me know. It would be interesting to read that one, especially if the Axiom of Choice were
specifically dealt with in the article.
Turning to what you wrote in your post, you wrote:
"The whole issue seems to me...one of applying the consequences to reality or dealing with the implied consequences in reality."
I think that's exactly right. It's fine to build esoteric theoretical structures and see where they lead. But we will learn nothing about reality from that effort if the underlying axioms don't make
sense in isomorphism to something real. I don't think the AC makes sense in any real sense.
This is the same as saying that observations of the world are, and must be, finite. You are right: you and I have never "had a serious disagreement on this issue at all."
You said, "[T]here is no such usable number as infinity; however, there is always one a bit bigger than the last one you thought of."
I would amend this to say that there always CAN BE one a bit bigger. All it takes is for someone to define it. But, and here is where my brand of Platonism kicks in, there is no such bigger number
unless and until someone actually does expressly define it.
I liked the joke you quoted from Eric's web site. There was a lot of good fun in both that site and the ones he linked to. Thanks again for letting me in on it.
Warm regards, | {"url":"http://www.astronomy.net/forums/god/messages/29219.shtml","timestamp":"2024-11-04T18:51:31Z","content_type":"text/html","content_length":"24710","record_id":"<urn:uuid:aa8d15d0-4307-46d7-8a16-772fee5a65f1>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00132.warc.gz"} |
Elastic Potential Energy: Definition, Formula, Derivation, Examples
Elastic Potential Energy: A fundamental idea in physics called elastic potential energy can explain how energy is stored and released in elastic materials like springs and rubber bands. The energy of
any form is basically the potential to do work. Before delving into the world of this special energy, let us first recall the concept of potential energy. Potential energy is the energy possessed by
a particle due to its special position, orientation, or shape. It is one of the most important concepts related to energy. Due to its usefulness, it finds its applications in many branches of science
and engineering. Let us understand this concept in a detailed manner.
Elastic Potential Energy
After getting a basic understanding of energy and potential energy, we can now understand this concept clearly. Elastic potential energy is a special form of energy associated with elastic
objects. As we know, elastic bodies have the ability to regain their original shape. Whenever we apply a force to an elastic object, its shape changes because of the work done by the applied force. This work is
stored as an equivalent amount of energy that helps the elastic object regain its shape. This stored energy is known as elastic potential energy. For example, the
potential energy of an elastic object is increased when an elastic object, like a spring, is stretched or compressed. It is possible to convert this potential energy into kinetic energy or use it in
a variety of ways.
Elastic Potential Energy Definition
The potential energy that is stored when an elastic item is stretched or compressed by an external force, such as the stretching of a spring, is known as elastic potential energy. It is equivalent to
the work required to stretch the spring, which is dependent on both the length of the stretch and the spring constant k. In other words, it is energy that is stored when a force is used to deform an
elastic object. Until the force is released and the object springs back to its original shape, doing work in the process, the energy is retained. The object may be compressed, stretched, or twisted
during the deformation. It is also known as the spring potential energy in case of a spring.
Elastic Potential Energy Examples
There are many real-life examples of this potential energy. Some of the elastic potential energy examples are listed below.
• Elongation or compression of spring
• A stretched bow
• Twisted rubber band
• Stretched slingshot
• A bouncy ball when squeezed as it bounces off a brick wall
• A bent diving board before the diver jumps
As you can observe from the above examples that in each scenario, the energy is stored due to the compression, stretching, or torsion.
Elastic Potential Energy Formula
The value of this energy stored in an elastic object can be derived using mathematical concepts. The formula for the same is given below.
The potential energy stored in an elastic object is given by:
U = (1/2) k.x²
where, U = Elastic potential energy
k = spring constant
x = distance by which the object is displaced from its original shape/position
Elastic Potential Energy Units
The potential energy stored in an elastic object is denoted by U. Its SI unit is the same as that of all other forms of energy, i.e., the joule (J), which can also be expressed as kg·m²/s². We can also find
out the dimensional formula for this special energy. The dimensional formula of the potential energy stored in an elastic object is [M L² T⁻²].
Elastic Potential Energy Derivation
As we observed above, the potential energy stored in an object is given by the formula (1/2) k.x². We can derive this equation by using mathematical concepts. The derivation of this formula is based
on the energy-work duality. Let us derive this expression through the diagram given below.
As you can observe in the above diagram that a spring is displaced by a distance of x by a force F. The energy is stored in the spring in the form of potential energy in the opposite direction of the
displacement. Suppose initially the spring is at rest (x =0).
Let a force F be applied on the spring that displaces it by a small distance dx.
Then the small amount of work done (dw) by the force on the spring is given by:
dw = F · dx     ...(1)
By Hooke's law, the spring exerts a restoring force F(spring) = −k · x,
where k is the spring constant and x is the displacement. The negative sign shows that the restoring force acts opposite to the displacement.
To stretch the spring quasi-statically, the applied force must balance this restoring force, so F = k · x. Putting this value of F in equation (1):
dw = k · x · dx
To find the total work done (W) in stretching the spring from x = 0 to x = x, we integrate the above equation:
∫ dw = k ∫ x dx
W – 0 = k[(x²/2) – 0²/2]
W = (1/2)k.x²
as work done is stored in the form of elastic potential energy
Hence, U = (1/2)k.x²
This derivation is also known as the derivation of the potential energy stored in a stretched spring. It is one of the most important derivations in class 11 physics.
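As an aside (not part of the original article), the formula and the derivation above can be checked numerically with a short Python sketch: the work integral is approximated by summing F·dx over many small steps, and the same function can be used to verify the worked examples that follow.

def elastic_pe(k, x):
    """Potential energy (in joules) stored in a spring of stiffness k stretched by x."""
    return 0.5 * k * x**2

def work_done(k, x, steps=100000):
    """Approximate the work integral of F = k*x by summing k*x*dx over small steps."""
    dx = x / steps
    return sum(k * (i * dx) * dx for i in range(steps))

print(elastic_pe(100, 0.4))             # 8.0, matching Example 1 below
print(round(work_done(100, 0.4), 3))    # approaches 8.0 as the number of steps grows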
Elastic Potential Energy Example Problems
Some of the solved example problems on this topic are given below. These solved questions will help students prepare this topic in a better way.
Example 1: A spring that has a 100 N/m spring constant is elongated by 0.4 meters. Determine how much elastic potential energy is contained in the spring.
Solution: given, spring constant (k) = 100 N/m
displacement (x) = 0.4 m
using the formula U = (1/2)k.x², where U = potential energy associated with elastic objects
U = (1/2) x 100 x (0.4)²
U = 8 joules
Example 2: A compressed spring has a 400 N/m spring constant and a potential energy of 50 joules. Determine the spring’s displacement as a result of the potential energy.
Solution: given, potential energy of a spring (U) = 50 joules
spring constant (k) = 400 N/m
As U = (1/2) k. x²
50 = (1/2) 400. x²
x² = 50/200
x² = 1/4
x = 1/2
Hence, the spring will be displaced by 0.5 meter
Example 3: When a load of 5 kg is connected to the vertical spring, it is elongated by 10 m. Find out the potential energy stored in the spring. Take g = 10 m/s²
Solution: given, mass (m) = 5 kg
g = 10 m/s²
x = 10 m
we have been asked the potential energy (U) of the spring
as U = (1/2) k . x²
so, we will have to first obtain the value of k
as the magnitude of the spring force is F = k · x
and here F = mg
so, mg = k · x
5 x 10 = k. 10
k = 5 N/m
putting the value of k and x in the equation of potential energy, we get
U = (1/2) x 5 x 10²
U = 250 joules | {"url":"https://www.adda247.com/school/elastic-potential-energy/","timestamp":"2024-11-04T17:52:35Z","content_type":"text/html","content_length":"652927","record_id":"<urn:uuid:9103ae6e-8957-481f-aa76-b4ac19ffaa60>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00191.warc.gz"} |
Resources on Control Systems Design Routes
In terms of improving the performance of control systems, there are four Stages that can be followed (generally in sequence) which address all of these Levels:
• Stage 0 - System Risk Reduction (safety). For Level 0 and 1 control systems.
• Stage 1 - Monitoring and Reporting (including Statistical Process Control). This primarily addresses Level 1 and 2 control systems.
• Stage 2 - Benchmarking against Internal or Competitor Systems. This primarily addresses Level 1 and 2, but can also be applied to Level 3 control systems.
• Stage 3 - Overall System Optimisation, through data mining techniques. This can look at Level 1-3 control systems, but Level 1 systems should have been optimised through earlier Stages.
Studies to look at control system improvements can involve a number of tools and routes, all of which offer benefits to particular applications.
• Simulation studies using computer aided control system design tools (CACSD). These can be used to look at a very wide range of control issues such as new techniques and what-if scenarios.
• Commissioning and improvement tools - primarily for re-tuning existing loops
• Hardware-in-the-loop - development systems where actual control hardware systems are prototyped with a plant simulation
• Rapid prototyping systems such as DSPACE for Matlab, AC100 for Matrixx, UNAC etc.
Links on Design procedures
• Ken Carter (President, Control Technology Corp) is writing an article on control system development practices for embedded controllers. This will be linked in here when ready.
Modelling and Simulation
Most control system designs rely on modelling of the system to be controlled. This allows simulation studies to be carried out to determine the best control strategies to implement and also the best
system parameters for good control (if these can be changed). Simulation studies also allow complex what-if scenarios to be looked at which may be difficult to do on the real system. However, it must
always be borne in mind that any simulation results obtained are only as good as the model of the process. This does not mean that every effort should be made to get the model as realistic as
possible, just that it should be sufficiently representative.
Here are a few links on modelling in general:
The most common way to construct a model is using a combination of transfer functions and nonlinear elements to represent the mathematical equations that decsribe the system. Here are a few links on
block diagram modelling:
• Here's a simple diagram that shows the fundamental block diagram manipulations for combining blocks, moving pick-off points and summing nodes and eliminating feedback loops.
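To make the series and feedback manipulations mentioned in the bullet above concrete, here is a minimal numeric sketch (an illustrative addition, not taken from the linked diagram and not tied to any particular CACSD package); transfer functions are represented simply as (numerator, denominator) coefficient lists:

import numpy as np

def series(g1, g2):
    """Series connection of two transfer functions: multiply numerators and denominators."""
    (n1, d1), (n2, d2) = g1, g2
    return np.polymul(n1, n2), np.polymul(d1, d2)

def feedback(g, h):
    """Negative feedback loop elimination: closed loop = G / (1 + G*H)."""
    (ng, dg), (nh, dh) = g, h
    num = np.polymul(ng, dh)
    den = np.polyadd(np.polymul(dg, dh), np.polymul(ng, nh))
    return num, den

G = ([1.0], [1.0, 1.0])       # G(s) = 1/(s + 1)
H = ([2.0], [1.0])            # H(s) = 2 (a static gain)
print(series(G, G))           # numerator [1.], denominator [1., 2., 1.]  ->  1/(s + 1)^2
print(feedback(G, H))         # numerator [1.], denominator [1., 3.]      ->  1/(s + 3)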
Bond graphs are a method of modelling physical dynamic systems that does not rely on the complex mathematical descriptions often used for developing models from physical principles and equations. The
following links relate to bond graphs:
Use our comprehensive Control Engineering Glossary here. | {"url":"https://www.actc-control.com/resource_centre/designroutes.asp","timestamp":"2024-11-12T04:05:20Z","content_type":"text/html","content_length":"6769","record_id":"<urn:uuid:016e8a9d-ea79-474f-a020-b67dd575e1c0>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00151.warc.gz"} |
combinations of a vector matlab
January 1, 2021 In Uncategorized
combinations of a vector matlab
Description combos = combntns(set,subset) returns a matrix whose rows are the various combinations that can be taken of the elements of the vector … I have a vector s =[0,0,0,0,0,0,0,0,0] for which I
wish to find out all possible combinations and also generate all possible vectors for. There might be 3 as in this case, or there may be 10, and I need a generalization. k can be any numeric type,
but must be real. For example, if there are two -1's in the first half, they can be placed in 4 choose 2 = 6 ways, and for each of them there will be 6 ways to place the two 1's in the second half.
Matrix C I'm trying to generate rapidly a matrix with all combinations of 2 vectors. I want to fill a vector with specifice numbers of 1's and -1's, and the rest are zeros. There are no restrictions
on combining inputs of different types for combnk(v,k).Alternative Functionality MATLAB ® contains the function nchoosek, which can also return all combinations of an element vector and has extended
functionality using MATLAB Coder . Number of elements to select, specified as a nonnegative integer scalar. I'm working on the classification of bridge damages. So the-1's in Is there a way to select
all possible combinations of column vectors from a matrix in MATLAB ? Number of elements to select, specified as a nonnegative integer scalar. There are no restrictions on combining inputs of
different types for combnk(v,k).Alternative Functionality MATLAB ® contains the function nchoosek, which can also return all combinations of an element vector and has extended functionality using
MATLAB Coder . k can be any numeric type, but must be real. This MATLAB function returns a matrix whose rows are the various combinations that can be taken of the elements of the vector set of length
subset. This MATLAB function returns a matrix containing all possible combinations of the elements of vector v taken k at a time. All possible combinations of set of values, This MATLAB function
returns a matrix whose rows are the various combinations that can be taken of the elements of the vector set of length subset. MATLAB Mathematics Elementary Math Discrete Math nchoosek On this page
Syntax Description Examples Binomial Coefficient, "5 Choose 4" All Combinations of Five Numbers Taken Four at a Time All Combinations of Three n If my math is correct there should be 64 combinations.
MATLAB: Combinations of values of array of vectors (of different lengths) but ONLY in order the vectors appear in the array combination recursion Hi, I'm trying to transcribe protein letters to DNA
codons. This MATLAB function returns a matrix whose rows are the various combinations that can be taken of the elements of the vector set of length subset. I want to generate every possible
combination of elements in a vector. My colleague walked into my office with a MATLAB question, a regular pasttime for us here at the MathWorks. at k = 4 : The problem is that I don't know the number
of vectors for which I need to calculate the combinations. for a linear system equation of Ax = B with A dimensions 5x5 and x, a column vector. Thank you. This MATLAB function takes any number of
inputs, Matrix of N1 (column) vectors Matrix of N2 (column) vectors You clicked a link that corresponds to this MATLAB command: Run the command by entering it in the MATLAB Question 4 answers Asked
9th Jan, 2019 Khushboo Verma I … I did k can be any numeric type, but must be real. Let's say A is a binary matrix of 1's and 0's and i had the cases … MATLAB: How to create a matrix out of all the
possible combinations of a vector combinations MATLAB matrix manipulation vector vectors Hi ! e.g. The vector elements should always be split up in 2 groups. This MATLAB function returns a matrix
containing all possible combinations of the elements of vector v taken k at a time. Number of elements to select, specified as a nonnegative integer scalar. He wanted to take every combination of one
value from each of three distinct vectors. Can you please help me to this in MATLAB MATLAB: How to find all the combinations of a vector elements whose sum is equal to a given number vectors Hi all,
I' ve got this vector made of 24 elements: P = … C = nchoosek(v,k), where v is a row vector of length n, creates a matrix whose rows consist of all possible combinations of the elements of v taken at
a time. A limit on each element to not be bigger than, lets say 2. k can be any numeric type, but must be real. It should be done in linear combinations. The damage is expressed with a qualitative
number ranging from 1 to 5 (the first vector) CR=1:5, the other vector is the position of damage on the bridge ranging also it from 1 to 5 (a fifth of the length per time). Description combos =
combntns(set,subset) returns a matrix whose rows are the various combinations that can be taken of the elements of the vector … I want to create all combinations of a 1x6 vector, composed only of 1's
and 0's. The groups can vary in size (number of elements), but all elements have to been included in the groups. For Matlab 7.8, this is about 30% This is the number of combinations of things taken
at a time. MATLAB Mathematics Elementary Math Discrete Math nchoosek On this page Syntax Description Examples Binomial Coefficient, "5 Choose 4" All Combinations of Five Numbers Taken Four at a Time
All Combinations of Three n Starting with all zeros and ending with all ones. I want a way to store all 31 of these combinations in an array, for example a cell array, with n cells, within each is an
array in which each row is a vector combination of the elements. Now, this could be done easily with some nested for loops, but that really does violate the spirit in which such challenges are
issued.
Post a Comment | {"url":"https://thebutlerdiditcleaning.com/planes-disney-khtmym/a96e54-combinations-of-a-vector-matlab","timestamp":"2024-11-02T11:14:11Z","content_type":"text/html","content_length":"40976","record_id":"<urn:uuid:846862f2-112e-4847-80bb-a66b44d9bee9>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00371.warc.gz"} |
How to Make Kaplan-Meier Survival Plots in Excel | Techwalla
The Kaplan-Meier curve was designed in 1958 by Edward Kaplan and Paul Meier to deal with incomplete observations and differing survival times. Used in medicine and other fields, the K-M curve
analyzes the probability of a subject surviving an important event. The event can be anything that marks a significant point in time or accomplishment. Subjects in K-M analysis have two variables:
study period (from the beginning point to an end point) and status at the end of the study period (the event occurred, or the event didn't occur or is uncertain).
Set Up the Excel Spreadsheet
Step 1
Name column A as "Study Period," column B as "Number at Risk," column C as "Number Censored," column D as "Number Died," column E as "Number of Survivors" and column F as "K-M Survival."
Step 2
Fill in the column values. Type in the study's periods in the Study Period column. In the Number Censored column, type how many people were excluded from the study at this point. A person can be
censored because he dropped out of the study, his data is incomplete or the study ended before the event happened for him. In the Number Died column, type in the number of people who died in this
period of the study.
Step 3
Fill in the Number at Risk and Number of Survivors columns. For the first row, starting at cell B2, the number at risk is the total number of participants in the study. The number of survivors is the
number at risk minus the number who died, or =B2-D2. The second and subsequent rows are calculated differently. The Number at Risk column is the number of survivors from the previous period minus how
many people were censored, or =E2-C3. The number of survivors for this period is still the number at risk minus the number who died, or =B3-D3. Click cell B3 and drag to autofill the rest of the
Number at Risk column. Click cell E3 and drag to autofill the rest of the Number of Survivors column.
Step 4
Fill in the K-M Survival column to calculate the survival probability for each period of the study. For the first study period, the survival probability is the number of survivors divided by the
number at risk, or =E2/B2. For the second and subsequent study periods, the survival probability is the previous period's survival probability multiplied by the number of survivors divided by the
number at risk, or =F2*(E3/B3). Click cell F3 and drag to fill in the rest of the K-M Survival column.
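Before moving on to the chart, the same bookkeeping can be expressed in a few lines of code. The sketch below mirrors the spreadsheet logic described above (it is not a full statistical Kaplan-Meier implementation, and the study numbers are made up for illustration):

censored = [0, 1, 0, 2]     # people censored at the start of each period (made-up data)
died     = [2, 1, 3, 1]     # people who died in each period (made-up data)
total    = 20               # participants at the start of the study

survival = 1.0
prev_survivors = total
for period, (c, d) in enumerate(zip(censored, died), start=1):
    # First period: everyone is at risk; later periods: previous survivors minus newly censored.
    at_risk = prev_survivors if period == 1 else prev_survivors - c
    survivors = at_risk - d
    survival *= survivors / at_risk     # K-M survival = previous survival * (survivors / at risk)
    print(period, at_risk, survivors, round(survival, 3))
    prev_survivors = survivors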
Create the Kaplan-Meier Survival Plot
Step 1
Select the values in the K-M Survival column, from cell F2 to the end of your data.
Step 2
Click the "Insert" tab. In the Charts section, click the arrow next to the Insert Line Chart icon. Click the "Line with Markers" option. A chart appears on the worksheet.
Step 3
Click "Select Data" on the Design tab to change the X-axis to reflect the correct study periods. The Select Data Source box opens. In the Horizontal (Category) Axis Labels section, click the "Edit"
button. Click cell A2 and drag to the end of the data. Click "OK," and then click "OK" again. You now have a Kaplan-Meier survival plot. | {"url":"https://www.techwalla.com/articles/how-to-make-kaplan-meier-survival-plots-in-excel","timestamp":"2024-11-14T10:38:09Z","content_type":"text/html","content_length":"333640","record_id":"<urn:uuid:2fecc920-500b-4f69-b4b9-3616bbdb651f>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00586.warc.gz"} |
Stochastic Thermodynamics: Experiment and Theory
09:15 Jan-Michael Rost MPIPKS & Scientific Coordinators
09:30 Workshop Opening
09:30 - 10:00
Felix Ritort (Universitat de Barcelona)
Experimental measurement of information-content in nonequilibrium systems
Biology is intrinsically noisy at all levels, from molecules to cells, tissues, organs, communities and ecosystems. While thermodynamic processes in ordinary matter are driven by free-energy
minimization, living matter and biology delineate a fascinating nonequilibrium state predominantly governed by information flows through all organizational levels. Whereas we know how to
measure energy and entropy in physical systems we have poor knowledge about measuring information-content in general. Recent developments in the fields of stochastic thermodynamics and
thermodynamic-information feedback combined with single molecule experiments show the way to define information-content in nonequilibrium systems. In this talk I will describe how to measure
information-content in two classes of nonequilibrium systems. First, I will introduce the Continuous Maxwell Demon, a new paradigm of information-to-energy conversion, and demonstrate how work
extraction beats the Landauer limit without violating the second law. Next, I will demonstrate the validity of a fluctuation theorem in nonequilibrium systems under continuous-time feedback and
show how to measure information-content in such conditions. Second, I will introduce a mutational ensemble of DNA hairpin folders and show how to measure information-content in this context. A
definition of information-content applicable to generic disordered populations is proposed. All results are tested in single molecule pulling experiments.
10:00 - 10:30
Raphael Chetrite (CNRS)
On Gibbs-Shannon Entropy
This talk will focus on the question of the physical contents of the Gibbs-Shannon entropy outside equilibrium. Article: Gavrilov, Chetrite and Bechhoefer, Direct measurement of weakly
nonequilibrium system entropy is consistent with Gibbs-Shannon form. PNAS 2017.
10:30 coffee break
11:00 - 11:30
Ludovic Bellon (University of Lyon)
The quest for the missing noise in a micro-mechanical system out of equilibrium
Equipartition principle plays a central role in the understanding of the physics of systems in equilibrium: the mean potential and kinetic energy of each degree of freedom equilibrates to $k_BT
/2$, with $k_B$ the Boltzmann constant and $T$ the temperature. This equality is linked to the fluctuation-dissipation theorem (FDT): fluctuations of one observable are proportional to the
temperature and dissipation in the response function associated to that observable. In non equilibrium situations however, such relations between fluctuations and response are not granted, and
excess noise is usually expected to be observed with respect to an equilibrium state [1]. In this presentation, we show that the opposite phenomenon can also be experimentally observed: a
system that fluctuates less than what would be expected from equilibrium! Indeed, when we measure the thermal noise of the deflexion of a micro-cantilever subject to a strong stationary
temperature gradient (and thus heat flow), fluctuations are much smaller than those expected from the system mean temperature. We will first present the experimental system, an atomic force
microscope (AFM) micro-cantilever in vacuum heated at its free extremity with a laser. We will show that this system is small enough to have discrete degrees of freedom but large enough to be
in a non-equilibrium steady state (NESS). We will then estimate its temperature profile with the mechanical response of the system [2], and observe that equipartition theorem can not be applied
for this NESS: the thermal noise of the system is roughly unchanged while its temperature rises by several hundred degrees ! We will explain how a generalized FDT taking into account the
temperature field can account for these observations, if dissipation is not uniform. Further experimental evidences of the validity of this framework will conclude the presentation [3]. We
acknowledge the support of ERC project OutEFLUCOP and ANR project HiResAFM. [1] L. Conti, P. D. Gregorio, G. Karapetyan, C. Lazzaro, M. Pegoraro, M. Bonaldi, and L. Rondoni, Effects of breaking
vibrational energy equipartition on measurements of temperature in macroscopic oscillators subject to heat flux, J. Stat. Mech. P12003 (2013) [2] F. Aguilar, M. Geitner, E. Bertin and L.
Bellon, Resonance frequency shift of strongly heated micro-cantilevers, Journal of Applied Physics 117, 234503 (2015) [3] M. Geitner, F. Aguilar, E. Bertin and L. Bellon, Low thermal
fluctuations in a system heated out of equilibrium, Physical Review E 95, 032138 (2017)
11:30 - 12:00
Livia Conti (Istituto Nazionale di Fisica Nucleare)
Nonequilibrium fluctuations in gravitational wave interferometers
Gravitational wave interferometers have recently made the first detections, opening the era of gravitational-wave and multi-messenger astronomy. In the coming years both LIGO and Virgo will
undergo a planned series of experimental upgrades to further increase the sensitivity; moreover a completely new generation of instruments is being studied with the aim of increasing the
astrophysical reach by at least a factor 10, while also extending the bandwidth towards lower frequencies. In both cases, the design choices depend critically on the full control of the noise
budget: this is a rather complex task due to the level of sophistication of these macroscopic instruments which are designed to be limited by a combination of few fundamental noise sources.
Together with quantum noise and local gravity gradients, thermal noise is expected to be a dominating contribution to the instrument ultimate noise. Its contribution is traditionally estimated
under the hypothesis of thermal equilibrium, in spite of the thermal gradients and heat fluxes that are present in the instruments' key components. While the deviation may be modest in current
detectors, future designs relying on cryogenic operation foresee much more extreme non-equilibrium conditions that can severely compromise the applicability of predictions made assuming
equilibrium. The reason for the widespread and often improper assumption of equilibrium is that, for solids, a viable theory that describes thermal noise away from thermodynamic equilibrium
does not exist yet. Experimental data is scarce, but does suggest the possibility, in some cases, of a substantial enhancement of spontaneous fluctuations compared to the equilibrium condition,
similarly to what is observed in fluids. I will discuss the design of gravitational wave interferometers focusing on nonequilibrium driving cases and will present experimental data on the
spontaneous fluctuations of solids in nonequilibrium steady states.
12:00 - 12:20
Suriyanarayanan Vaikuntanathan (University of Chicago)
Dissipation induced transitions in elastic membranes and materials
Stochastic thermodynamics provides a useful set of tools to analyze and constrain the behavior of far from equilibrium systems. In this talk, we will report an application of ideas from
stochastic thermodynamics to the problem of membrane growth. Non-equilibrium forcing of the membrane can cause it to buckle and undergo a morphological transformation. We show how ideas from
stochastic thermodynamics, in particular, a recent application to self-assembly, can be used to phenomenologically describe and constrain morphological changes excited during a non-equilibrium
growth process.
12:20 lunch
Udo Seifert (University of Stuttgart)
From stochastic thermodynamics to thermodynamic inference
Stochastic thermodynamics provides a framework to describe small driven systems using thermodynamic notions. Since the conceptual basis is now firmly established, the challenge is to explore
whether and how these concepts can be used to infer otherwise hidden properties of systems. After recalling the foundations, I will report on our recent progress following this strategy. In
particular, I will elucidate the form of entropy production in active systems and derive model-independent bounds on the efficiency of molecular motors and small heat engines. For the latter, I
will resolve the recent debate whether or not Carnot efficiency can be reached at finite power.
14:30 - 14:50
Stefano Bo (Nordic Institute for Theoretical Physics)
Driven anisotropic diffusion at boundaries: noise rectification and particle sorting
We study the diffusive dynamics of a Brownian particle in the proximity of a flat surface under nonequilibrium conditions, which are created by an anisotropic thermal environment with different
temperatures being active along distinct spatial directions. By presenting the exact time-dependent solution of the Fokker-Planck equation for this problem, we demonstrate that the interplay
between anisotropic diffusion and hard-core interaction with the plain wall rectifies the thermal fluctuations and induces directed particle transport parallel to the surface, without any
deterministic forces being applied in that direction. Based on current micromanipulation technologies, we suggest a concrete experimental setup to observe this novel noise-induced transport
mechanism. We furthermore show that it is sensitive to particle characteristics, such that this setup can be used for sorting particles of different sizes.
14:50 discussion
15:40 - 16:00
Matteo Polettini (University of Luxembourg)
Effective fluctuation and response theory
The response of thermodynamic systems perturbed out of an equilibrium steady-state is described by the reciprocal and the fluctuation-dissipation relations. The so-called fluctuation theorems
extended the study of fluctuations far beyond equilibrium. All these results rely on the crucial assumption that the observer has complete information about the system. Such a precise control
is difficult to attain, hence the following questions are compelling: Will an observer who has marginal information be able to perform an effective thermodynamic analysis? Given that such
observer will only establish local equilibrium amidst the whirling of hidden degrees of freedom, by perturbing the stalling currents will he/she observe equilibrium-like fluctuations? We model
the dynamics of open systems as Markov jump processes on finite networks. We establish that: 1) While marginal currents do not obey a full-fledged fluctuation relation, there exist effective
affinities for which an integral fluctuation relation holds; 2) Under reasonable assumptions on the parametrization of the rates, effective and "real" affinities only differ by a constant; 3)
At stalling, i.e. where the marginal currents vanish, a symmetrized fluctuation-dissipation relation holds while reciprocity does not; 4) There exists a notion of marginal time-reversal that
plays a role akin to that played by time-reversal for complete systems, which restores the fluctuation relation and reciprocity. The above results hold for configuration-space currents, and for
phenomenological currents provided that certain symmetries of the effective affinities are respected - a condition whose range of validity we deem the most interesting question left open to
future inquiry. Our results are constructive and operational: we provide an explicit expression for the effective affinities and propose a procedure to measure them in laboratory.
16:00 coffee break
stet18 colloquium
16:30 - 17:30
Jukka Pekola (Aalto University)
Chair: Ivan Khaymovich (MPI for the Physics of Complex Systems)
Thermodynamics of superconducting quantum circuits
Superconducting circuits provide a platform for stochastic thermodynamics experiments in both classical and quantum regimes ("circuit Quantum Thermodynamics"). I first review the ideas,
principles and examples of classical experiments utilizing single electron charge as the stochastic variable. I present experiments over the past several years on classical fluctuation
relations and Maxwell Demons (MD), the latter in form of both non-autonomous Szilard Engines and autonomous MDs. Recent highlights on entropy reduction and rare events will be reviewed. In the
second part of the talk I focus on open quantum systems formed of superconducting qubits and resonators, coupled to heat baths. In this context microwave photons carry the heat between the
system and bath. I present experiments on quantum heat transport mediated by a transmon qubit, progress on superconducting quantum heat engines and refrigerators, and on detecting single
microwave photon quanta. Success in the last item would allow us to perform true stochastic thermodynamics experiments in the quantum regime, and to realize quantum MDs.
17:30 discussion
18:30 dinner
19:30 poster session I (focus on even poster numbers) | {"url":"https://www.pks.mpg.de/stochastic-thermodynamics-experiment-and-theory/scientific-program","timestamp":"2024-11-04T15:31:59Z","content_type":"text/html","content_length":"210939","record_id":"<urn:uuid:989c61bf-5705-4366-97d0-48af7fa8cc59>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00149.warc.gz"} |
What is the significance of shear and bending moments in FEA? | SolidWorks Assignment Help
What is the significance of shear and bending moments in FEA? Understanding shear and bending moments is an important but not widely appreciated topic: these quantities underlie many of the mechanical concepts on which mechanical technology, and much of our economy, depend.
Theoretical mechanisms of shear and bending: the stance of shear in modern science
The theory of shear and bending moment functions as two qualities of two identical entities, given the two shear properties of the two (slightly different) opposing sides of a discretized mechanical disc model. Since the shear property is the difference in shear force over shear time (the time difference between the two opposite sides), the shear strength of the disc and the strength of the bending moment are invariant under shear. The two shear properties are related by kinematic shear: shear strength increases with shear arc time measured from the two opposite sides of the disc and decreases outwards from them; this means that shear strength also increases when the shear arc time is short. Furthermore, this invariance can also hold in shear-disc models, because any increasing shear force must remain smaller than the shear arc time in order to increase the shear strength of the model.
Kinematic shear in FEA
A kinematic shear function is an extension of shear force. A shear force is expressed as an amount of angular motion which changes when the shear arc time is short. When a force is relatively large (more than 15 times the bending force), such as a radial compressive force from the radius but smaller than a shear compressive force from the shear disc on the two opposite sides of a disc, the shear force is still a shear force, but at different shear arc times, and it differs on the opposite sides of the disc. The shear force, referred to the shear arc time, can be written in terms of pressure: this force is proportional to four times the hardness of the disc (how soft one disc is, how hard it is, and how hard it is to deal with). This type of force is commonly known as the shear bending moment (K#). It can be understood as the mean of a deformation force in Euler-Lagrange form, or as one of the biaxial forces commonly known as the bent-jacketed force in Euler-Lagrange form. One important point is that this force is not specific to shear-disc models. Under an equilibrium condition, small shear forces do not cause errors, because in a disc that slides down onto thin surfaces, shear forces build up in this way. When a disc slides onto relatively soft surfaces it becomes harder to cause large deviations.
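To ground the discussion, here is a standard, self-contained illustration, not taken from the text above, of what shear force and bending moment actually look like in practice: a simply supported beam with a single point load, the textbook case whose diagrams V(x) and M(x) a beam or FEA post-processor reports. The length, load position, and load magnitude are made-up example values.

# Standard illustration (assumed geometry and load): shear force and bending
# moment diagrams for a simply supported beam with one point load.
import numpy as np

L = 4.0        # beam length [m]             (assumed)
a = 1.5        # load position from left [m] (assumed)
P = 10.0e3     # point load [N]              (assumed)

R_A = P * (L - a) / L                          # left support reaction
x = np.linspace(0.0, L, 401)

V = np.where(x < a, R_A, R_A - P)                        # shear force diagram
M = np.where(x < a, R_A * x, R_A * x - P * (x - a))      # bending moment diagram

print(f"max |V| = {np.max(np.abs(V)):.0f} N")
print(f"max  M  = {np.max(M):.0f} N*m at x = {x[np.argmax(M)]:.2f} m "
      f"(theory: {P*a*(L-a)/L:.0f} N*m at x = {a} m)")

The usual sanity checks when comparing against FEA output are the jump in V at the load point and the maximum moment occurring directly under the load.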
What is the significance of shear and bending moments in FEA? Is shear to be avoided at all in heavy loads? What causes it?
J. Denny Wasserstein
Wasserstein is an Associate Professor of Electrical Engineering at the University of Kiel and an author (2017). She has written widely published books on these subjects, is an internal reviewer of the FEA, and is an editor and a Fellow of the Kiepenholz Foundation for the European Research Area, which has an interest in applications in biomedical engineering, aeronautics and robotics. The author(s) are employed by or affiliated with the company ICA-Science, where she is Director, LNP. There is good reason to believe that this has happened. Certainly the current trend is that FEA has shown many potential applications in scientific research, whether for diagnostics or biological studies: the major role of FEA (especially in biosignettes) has been demonstrated over the years.
From a scientific perspective, the major impact of FEA research is not unique. It has helped in many different ways and has influenced research communities in various countries. Most importantly, it has also raised several important challenges over the course of FEA's development. This article emphasizes the connections between shear and bending moments, which arise when the material is compressible. To appreciate this, the reader needs to be aware of the limits of the material and of the dynamic stresses involved in the individual deformation processes. It is especially important that the shear occurs in a dense manner, with the rest of the process controlled via the compressive and tensile stresses. This leads to the fact that FEA consists of several processes: compressing the material, applying shear forces to it, loading the shear into it, and bending it. FEA has different physical meanings, we will argue: physical and mechanical ones.
Defensive force
A compressive shear is caused by compression of the material. In this context, shear stresses occur when the compressive stress is a single tensile stretch of the material (Dorstel Griesel, p. 49) up to the maximum level of shear stress. Compressive stresses arise roughly as a result of several compressional bonds between the material and a viscous layer. A specific type of shear stress was shown in an FEA shear experiment in 1994 to result in a strong shock wave; when this happens a rupture can form, as a result of which a shear shock or rupture induces a tensile compression beginning around the origin. Another complication of FEA observed here is the large shear stress it presents at a given position (see Fig. 5), which usually occurs in shear experiments. In modern research, shear stresses (Dorstel Griesel, p. 35, p. 3) have posed a big challenge to the existing theoretical and experimental understanding of shearing stresses, so that, in our opinion, a theoretical framework is needed to arrive at the correct theory.
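Continuing the beam illustration started earlier, and again as a generic sketch rather than anything from the article, once the internal shear force V and bending moment M at a section are known they are typically converted into stresses; for a rectangular cross-section the classical formulas are sigma = M*c/I for bending and tau_max = 1.5*V/A for transverse shear. The section dimensions and the V and M values below are assumed example numbers.

# Generic follow-up sketch (assumed section and internal forces): stresses from V and M.
b, h = 0.05, 0.10                 # section width and height [m] (assumed)
A = b * h                         # cross-section area
I = b * h**3 / 12.0               # second moment of area
c = h / 2.0                       # distance to the outer fibre

V = 6.25e3                        # shear force at the section [N]      (example)
M = 9.375e3                       # bending moment at the section [N*m] (example)

sigma_bending = M * c / I         # maximum bending (normal) stress [Pa]
tau_shear_max = 1.5 * V / A       # maximum transverse shear stress [Pa]

print(f"bending stress = {sigma_bending / 1e6:.1f} MPa")
print(f"shear stress   = {tau_shear_max / 1e6:.2f} MPa")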
In this context, an empirical example is available: when compression occurred during loading, the compressive stresses brought to the surface decreased, while the shear stress did not (see Figs. 7-10 in the main text). The above-mentioned phenomenon (compression with shear) can already be understood by comparing the experimental study by Todorov (1993) with a simulation (compression with shear) on a large disk produced by Todorov, obtained by operating a shear generator to apply compressive forces to the material at a certain level, although at a higher level a shear also takes place. Note that I mean shear experiments with the same setup, whose parameters are measured, although the data can
What is the significance of shear and bending moments in FEA?
The important questions can be formulated as follows. At the first or second level, bending moments in some sense determine mechanical properties in terms of the applied force (in the sense of pressure, temperature, etc., or "heat" or "temperature"), properties which could not be determined without full knowledge of the internal structure (or, lacking that, a description of the internal deformations) of the body resting on the elastomer shaft. Such moments could be caused by mechanical vibrations, or by friction between different materials (for example microfiber, thread, or powder and fiber), during which the bending/holding process may be perceived as deformation of the material. At the third or fourth level of bending moments, as they are sometimes called, more mechanical properties need not be determined simultaneously; they have a magnitude, such as the shear (with half-sphere) at the end of a very small extension. This type of bending or shear moment is directly indicative of the overall strength of the elastomer shaft, owing to the mechanical contact of the internal forces through the internal elastic web. But mechanical power, shear, and other moments might not be so directly measurable. With this type of bending moment, shear is reduced; the shear strain between the elastomer and the shaft may therefore continue to increase only a little even when the bend becomes too great, yet apparently the shaft can be held well back, and the inertia-free moment lasts a little longer than its width. Because of these differences, the shear stress on one part of the elastomer shaft is not known precisely; the other parts simply act as springs at a given point.
After mechanical tensioning of the elastomer, bending moments can also be examined as a way of understanding the actual nature of shear forces on elastomers. These can be called "shear stresses", because this is indicated by the specific terms commonly applied to shear moments in various branches of mechanical engineering. Apart from the shear stresses on one part of the elastomer shaft, different shear stresses arise in different ways during bending; as discussed in this article, they are of a different sort, owing to the large difference in the meaning of the term "shear stress", which does not by itself specify the meaning of "shear". Although shear stresses are measurable variables in particular applications of different kinds of elastomers, they are not part of any single model to which they are directly related. Shear stresses in elastomers
(including glass fibers, resin films, and so on), for example, are highly | {"url":"https://solidworksaid.com/what-is-the-significance-of-shear-and-bending-moments-in-fea-22464","timestamp":"2024-11-02T06:30:44Z","content_type":"text/html","content_length":"158485","record_id":"<urn:uuid:97d45aba-b5fd-4722-a5ed-e3acbc66d006>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00596.warc.gz"} |
Optimal Control of the Two-Dimensional Vlasov-Maxwell-System
Title data
Weber, Jörg:
Optimal Control of the Two-Dimensional Vlasov-Maxwell-System.
Bayreuth, 2018. - 89 pp.
(Master's, 2016 , University of Bayreuth, Faculty of Mathematics, Physics and Computer Sciences)
Format: PDF
Name: MA.pdf
Version: Published Version
Available under License Creative Commons BY 4.0: Attribution
Download (619kB)
The time evolution of a collisionless plasma is modeled by the Vlasov-Maxwell system, which couples the Vlasov equation (a transport equation) with the Maxwell equations of electrodynamics. We only
consider a 'two-dimensional' version of the problem, since the existence of global classical solutions of the full three-dimensional problem is not known. We add external currents to the system, generated in
applications by inductors, in order to control the plasma in a suitable way. After establishing global existence of solutions to this system, differentiability of the control-to-state operator is
proved. In applications, on the one hand, we want the shape of the plasma to be close to some desired shape. On the other hand, a cost term penalizing the external currents shall be as small as
possible. These two aims lead to minimizing some objective function. We prove existence of a minimizer and deduce first order optimality conditions and the adjoint equation.
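As a rough illustration of the kind of objective described above, and only as an assumed generic form rather than the exact functional used in the thesis, the two aims (a plasma close to a desired shape and small external currents) can be combined into a tracking-plus-control-cost functional such as

% Sketch of a tracking-plus-control-cost objective (illustrative form only)
\[
  \min_{u \in U_{\mathrm{ad}}}\; J(u)
  = \frac{1}{2}\int_0^T\!\!\int_{\Omega} w(x)\,\rho_f(t,x)\,\mathrm{d}x\,\mathrm{d}t
  + \frac{\gamma}{2}\,\|u\|_{U}^{2},
\]

where f solves the two-dimensional Vlasov-Maxwell system driven by the external current u, \rho_f is the induced particle density, w \ge 0 is a weight chosen large where plasma is undesired, \gamma > 0 penalizes the control effort, and U_{\mathrm{ad}} is the set of admissible currents. For such a functional, existence of a minimizer and first-order optimality conditions are then derived via the adjoint equation.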
Further data
Item Type: Master's, Magister, Diploma, or Admission thesis
Keywords: relativistic Vlasov-Maxwell system; optimal control with PDE constraints; nonlinear partial differential equations
Subject classification: Mathematics Subject Classification Code: 49J20, 35Q61, 35Q83, 82D10
DDC Subjects: 500 Science > 510 Mathematics
Institutions of the University: Faculties > Faculty of Mathematics, Physics und Computer Science
Faculties > Faculty of Mathematics, Physics und Computer Science > Department of Mathematics
Faculties > Faculty of Mathematics, Physics und Computer Science > Department of Mathematics > Professorship Applied Mathematics
Faculties > Faculty of Mathematics, Physics und Computer Science > Department of Mathematics > Professorship Applied Mathematics > Professor Applied Mathematics - Univ.-Prof. Dr. Gerhard Rein
Graduate Schools
Graduate Schools > University of Bayreuth Graduate School
Language: English
Originates at UBT: Yes
URN: urn:nbn:de:bvb:703-epub-3860-2
Date Deposited: 26 Sep 2018 11:43
Last Modified: 26 Sep 2018 11:43
URI: https://epub.uni-bayreuth.de/id/eprint/3860 | {"url":"https://epub.uni-bayreuth.de/id/eprint/3860/","timestamp":"2024-11-14T05:02:41Z","content_type":"application/xhtml+xml","content_length":"27971","record_id":"<urn:uuid:70db18d5-be3d-487d-8e0d-382dfe856a6e>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00594.warc.gz"} |
Load Frequency Control with Generation Rate Constraint (GRC)
Load Frequency Control with Generation Rate Constraint (GRC) – The load frequency control problem discussed so far does not consider the effect of restrictions on the rate of change of power generation. In power systems with steam plants, power generation can change only at a specified maximum rate. The generation rate (from safety considerations of the equipment) for reheat units is quite low.
Most reheat units have a generation rate of around 3%/min; some have a generation rate between 5 and 10%/min. If these constraints are not considered, the system is likely to chase large momentary disturbances. This results in undue wear and tear of the controller.
Several methods have been proposed to account for the effect of GRCs in the design of automatic generation controllers. When the generation rate constraint is considered, the system dynamic model becomes non-linear, and linear control techniques cannot be applied to optimize the controller settings.
If the generation rates, denoted by dP_Gi/dt, are included in the state vector, the system order will be altered. Instead of augmenting them, it may be verified at each step, while solving the state equations, whether the GRCs are violated.
Another way of considering the generation rate constraint for both areas is to add limiters to the governors, as shown in Fig. 8.22; that is, the maximum rate of valve opening or closing is restricted by the limiters. Here T_sg g_max is the power rate limit imposed by valve or gate control. In this model, the banded values imposed by the limiters are selected to restrict the generation rate to 10% per minute.
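As an illustrative sketch of how such a limiter acts in a simulation (this is not the book's block diagram; the variable names, step size, and limit value are assumptions taken from the discussion above), the GRC can be enforced by clamping the change in generated power per time step:

# Minimal sketch: enforcing a generation rate constraint by clamping the
# per-step change in generated power (all names and values are illustrative).

def apply_grc(p_prev, p_requested, rate_limit_per_min, dt_min):
    """Limit the change in generation to +/- rate_limit_per_min * dt_min (p.u.)."""
    max_step = rate_limit_per_min * dt_min
    delta = p_requested - p_prev
    delta = max(-max_step, min(max_step, delta))
    return p_prev + delta

# Demo: the controller asks for a 0.05 p.u. step increase, but with a GRC of
# 0.1 p.u./min and a 1-second step the unit ramps up gradually instead.
dt = 1.0 / 60.0            # time step in minutes (1 s)
grc = 0.10                 # 10% (0.1 p.u.) per minute
p, p_cmd = 0.0, 0.05
for k in range(20):
    p = apply_grc(p, p_cmd, grc, dt)
print(f"generation after 20 s: {p:.4f} p.u. (command was {p_cmd} p.u.)")

In a full two-area simulation this clamp would sit on the governor/turbine output inside the integration loop, which is precisely what makes the model non-linear.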
The generation rate constraints result in larger deviations in the ACEs, since the rate at which generation can change in the area is bounded by the limits imposed. Therefore, the duration for which power needs to be imported increases considerably compared with the case where the generation rate is not constrained. With GRCs, the speed regulation R should be selected with care so as to give the best dynamic response.
In hydro-thermal system, the generation rate in the hydro area normally remains below the safe limit and therefore GRCs for all the hydro plants can be ignored. | {"url":"https://www.eeeguide.com/load-frequency-control-with-generation-rate-constraint-grc/","timestamp":"2024-11-08T01:02:59Z","content_type":"text/html","content_length":"218273","record_id":"<urn:uuid:c35c7087-5ba4-491e-9308-2cc1ff27b5e1>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00770.warc.gz"} |